Small Sample Size

montoyaei

New member
Joined
Sep 27, 2007
Messages
3
I am sorry if this has been posted, I am hoping someone can lead me to the right link, but I am trying to understand why sample sizes do not have to be the same. I would think that the only way to get accurate results when testing two small populations would be to test the same number of samples in each.

I am not very good at math and maybe that is why I am having a hard time understanding this.

Thanks for any direction, Elizabeth
 
I was under the impression sample size does matter, mostly due to the "law of large numbers": the bigger a random sample is, the closer its statistics tend to be to the population's. So if you take a sample of n > 1 and a sample of n², the results could be very different, with the latter more accurate.
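Here's a quick sketch of that idea in Python. The fair-coin setup (p = 0.5) and the particular sample sizes are just illustrative choices, not anything from the thread:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def sample_mean_error(n, p=0.5):
    """Flip a coin with heads-probability p a total of n times;
    return how far the sample proportion lands from the true p."""
    heads = sum(1 for _ in range(n) if random.random() < p)
    return abs(heads / n - p)

for n in (10, 1_000, 100_000):
    print(n, sample_mean_error(n))
```

The error for the large sample will almost always be far smaller than for n = 10, though any single small sample can still get lucky; the law of large numbers is a statement about the trend, not a guarantee for one draw.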

I'm not too up on statistics, though...
 
What really might pin it down for you is a destructive test. Try testing missiles. It is very, very expensive to test them. The mandate is to test the minimum necessary!

Populations with very little variability require much smaller sample sizes for reasonable results. Try a survey of the number of legs on a dog, for example.

Smaller populations with great variability require larger samples for reasonable results. As the population shrinks and the sample size grows, you must throw in another idea, the finite population correction factor, or the required sample size may exceed the population. That's no good.
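The finite population correction mentioned above can be sketched like this. The starting figure of 385 is the textbook infinite-population sample size for a 95% confidence, ±5% proportion estimate; both numbers here are illustrative, not from the thread:

```python
import math

def fpc_adjusted_n(n0, N):
    """Shrink an infinite-population sample size n0 to account for
    a finite population of size N (finite population correction)."""
    return math.ceil(n0 / (1 + (n0 - 1) / N))

# Without the correction, n0 = 385 would exceed a population of N = 300 entirely.
print(fpc_adjusted_n(385, 500))
print(fpc_adjusted_n(385, 300))
```

The corrected size is always at most N, which is exactly the problem the correction exists to fix.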

There are "paired" statistics that require identical sample sizes.
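The paired case is exactly why the counts must match there: the statistic is built from per-pair differences, so every observation in one sample needs a partner in the other. A minimal hand-rolled paired t statistic (the before/after numbers are made up for illustration):

```python
import math
import statistics

def paired_t(before, after):
    """Paired t statistic: only defined when the samples line up one-to-one."""
    if len(before) != len(after):
        raise ValueError("paired data requires equal sample sizes")
    diffs = [b - a for a, b in zip(before, after)]  # per-pair differences
    n = len(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)     # standard error of mean difference
    return statistics.mean(diffs) / se

print(paired_t([10, 12, 11], [11, 13, 13]))
```

With unequal lengths there is simply no way to form the differences, which is why these tests demand identical sample sizes while two-independent-sample tests do not.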

I wouldn't try to get one idea in your head concerning sample size. One must extract what is needed. Situations vary. Sometimes, you have what you have and there is no opportunity for more sampling. One must learn to deal with what is available.
 
tkhunny said:
What really might pin it down for you is a destructive test. Try testing missiles. It is very, very expensive to test them. The mandate is to test the minimum necessary!

Thank you for your help. What you wrote puts it into better perspective. :)
 
OK, just so I know I am understanding, I wanted confirmation that I have this right:

A t statistic approximates the z statistic as n approaches 30, right? But what does it mean when it says "as n increases in size, the t distribution more closely resembles the normal distribution and the t statistic becomes smaller"? How does an increase make it smaller?
 
For any particular experiment, you may have only one sample size. In this context, it makes no sense to talk about a growing sample size.

If you have a larger sample size available, your t-distribution will be closer to the Normal Distribution than if you had a smaller sample size. Its tails get thinner as the degrees of freedom grow, so the critical t-value you compare against shrinks toward the z-value.
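You can see that shrinkage directly in the standard two-sided 95% critical values of t. These are ordinary t-table values, with z = 1.96 as the limiting case:

```python
# Two-sided 95% critical t values from a standard t table, keyed by degrees of freedom.
T_CRIT_95 = {1: 12.706, 2: 4.303, 5: 2.571, 10: 2.228, 30: 2.042, 100: 1.984}
Z_CRIT_95 = 1.960

for df, t in sorted(T_CRIT_95.items()):
    print(f"df={df:>3}: t={t:.3f}  (z={Z_CRIT_95})")
```

So "the t statistic becomes smaller" refers to the critical cutoff falling toward 1.96 as n grows, not to the statistic you compute from a fixed set of data.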

That's all it is.
 