Okay, so your null hypothesis is the manufacturer's claim of a 99.9% kill rate. For that you really need each dish tested to determine the actual percentage of bacteria killed. Then you take the average of the results (say 95% of bacteria killed on average), compute the standard deviation, and from those you can build a confidence interval. (Also, if you want to be rigorous, you'd first have to assume the kill percentages are roughly normally distributed, or have enough dishes for the central limit theorem to kick in.)
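As a sketch of that calculation (the per-dish kill percentages below are made-up numbers, and I'm using the plain normal approximation for the interval):

```python
import math

# Hypothetical percent-kill measurements from 8 dishes (made-up numbers)
kills = [95.2, 94.1, 96.0, 95.5, 93.8, 95.9, 94.7, 95.3]

n = len(kills)
mean = sum(kills) / n
# Sample standard deviation (n - 1 denominator)
sd = math.sqrt(sum((x - mean) ** 2 for x in kills) / (n - 1))

# Rough 95% confidence interval for the mean kill rate, using the
# normal critical value 1.96 (a t critical value would be a bit
# wider for only 8 dishes)
half_width = 1.96 * sd / math.sqrt(n)
ci = (mean - half_width, mean + half_width)
print(f"mean = {mean:.2f}%, 95% CI = ({ci[0]:.2f}%, {ci[1]:.2f}%)")
```

If 99.9% falls outside that interval, you'd have grounds to doubt the claim at roughly the 5% level.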
The problem is that you don't really have a way to accurately measure how much of the bacteria was killed. If you break the outcome down into 2 outcomes instead of a continuous range, it becomes a different problem. With 2 outcomes per dish (either bacteria grew or they didn't), you can't put a confidence interval on the per-dish kill percentage, because you never observe it; each dish only gives you a yes/no result. You'd have to get, say, 1000 petri dishes: if bacteria grew on 1 of them, the estimated kill rate would indeed be 99.9%; if bacteria grew on half of them, you'd say the kill rate is 50%. This is probably not feasible for you, since buying and preparing that many petri dishes is tedious.
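If you did run an experiment that big, the across-dish estimate is just a binomial proportion, and you could attach an interval to the proportion of dishes showing growth (not to the per-dish kill percentage). A sketch using the Wilson score interval, assuming the hypothetical 1-in-1000 result above:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical experiment: bacteria grew on 1 dish out of 1000
grew, dishes = 1, 1000
survival_rate = grew / dishes          # 0.001, i.e. a 99.9% kill rate
low, high = wilson_interval(grew, dishes)
print(f"estimated kill rate: {1 - survival_rate:.1%}")
print(f"95% interval for the per-dish survival proportion: ({low:.4f}, {high:.4f})")
```

The Wilson form is used here rather than the naive p ± 1.96·√(p(1−p)/n) because the naive interval behaves badly when the proportion is this close to 0.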
Also, the claim that 99.9% of bacteria are killed refers to how much the product kills on contact. What it doesn't guarantee is that the surviving bacteria won't reproduce over time if they aren't totally eradicated. So the real test would be to take a sample, confirm there is indeed bacteria in it, apply the product, and then immediately test whether it killed the advertised 99.9% of the bacteria in the sample.
Sorry about being so annoying with my answer but I don't think you can feasibly test the null hypothesis by letting the bacteria grow on a petri dish unless you constrain your outcomes to a binary set.