I have a couple of questions relating to the Law of Large Numbers and standard deviation.
FYI: I will stipulate in advance that when I took my college entrance exams (lo these many years ago), my verbal-related scores were maxed out and my math-related scores were in the low 50s. Historically, that means that if you communicate your responses very clearly and very simply, there is a chance my math-challenged brain might understand them. I will probably need to ask a few follow-up questions to understand any responses. Thanks in advance for any help you fine folks might offer in overcoming my probabilistic obtuseness.
The Law of Large Numbers states that the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
Well and good. That makes sense to me. If I flip a fair coin 10 times, it would be no surprise if, even though the expected odds are 50% for each possible outcome, the results came out 6-4 or even 7-3 biased to one outcome or the other. That doesn't mean the coin is biased; it is just expected deviation. If I understand correctly, though, if I flip that fair coin 100 times, the odds of that kind of percentage deviation from the expected 50-50 outcome drop. And if I flip the coin 1000 times, the odds of deviation drop even more, correct?
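To check my own understanding, here is a quick simulation sketch I put together (assuming Python with numpy; the 5% cutoff is just a number I picked for illustration, and the probabilities it prints are estimates, not exact values):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
trials = 100_000  # simulated experiments per sample size

for n in (10, 100, 1000):
    # Number of heads in each simulated experiment of n fair coin flips.
    heads = rng.binomial(n=n, p=0.5, size=trials)
    # How often the observed heads fraction misses 0.5 by more than 5 percentage points.
    off = np.mean(np.abs(heads / n - 0.5) > 0.05)
    print(f"n={n:5d}: P(off by more than 5%) ~ {off:.3f}")
```

If the law works the way I think it does, the printed probability should shrink as n grows.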
From googling around on the internet, I have determined that out of a sample size of 100 events (100 fair coin flips), the standard deviation from the expected result would be 5, and that there is a 68.2% chance of the observed results falling within that deviation range. I have also learned that two standard deviations would be 10, and that there is a 95.45% chance of the observed results falling within that deviation range.
So here is my first question: what would be the standard deviation on 1000 flips of that fair coin, and what would be the chances of the observed results falling within that deviation range (within one standard deviation of the expected result)? What would be the chances of the observed results falling within TWO standard deviations of the expected result? If I understand the Law of Large Numbers correctly, then I assume that the standard deviation should be a smaller percentage of the total number of events, but I do not know how to calculate that standard deviation; my rough attempt is below.
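Based on reverse-engineering where the 5 for 100 flips comes from, my guess is that the formula is sqrt(n * p * (1 - p)), where n is the number of flips and p is the probability of heads; please correct me if that is wrong. A minimal sketch under that assumption:

```python
import math

def binomial_sd(n, p=0.5):
    # Standard deviation of the heads count in n flips with P(heads) = p.
    return math.sqrt(n * p * (1 - p))

print(binomial_sd(100))   # 5.0 -- matches the figure I found for 100 flips
print(binomial_sd(1000))  # about 15.81, if the same formula applies
```

If that is right, one standard deviation on 1000 flips would be about 15.8 heads, which is roughly 1.6% of the flips instead of 5%, and that would fit my intuition that the deviation shrinks as a percentage of the total.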
Next question: given that the observed results are supposed to get closer to the expected average as the sample size of events grows, how large of a sample size would you need before you were able to look at the results and conclude that the coin itself was biased or that the events were non-random? Say, for instance, I had a coin, I flipped it 1000 times, and the results came out to 550 heads and 450 tails. What would be the odds against that happening if the coin were truly non-biased and the events were truly random? That would be a deviation of 5%, which for 100 flips would be exactly one standard deviation, and results outside one standard deviation should be expected about 31.8% of the time (100% minus the 68.2% figure above). How often should one expect that 5% deviation with 1000 flips? What are the odds of a 5% deviation on 2000 flips?
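Here is how I would try to estimate the 550-450 case by brute force, if my simulation logic is sound (again Python with numpy; the printed fraction is an estimate):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
trials = 1_000_000  # simulated 1000-flip sessions with a truly fair coin

heads = rng.binomial(n=1000, p=0.5, size=trials)
# Fraction of fair-coin sessions at least as lopsided as 550-450, in either direction.
p_extreme = np.mean(np.abs(heads - 500) >= 50)
print(f"P(split at least as extreme as 550-450) ~ {p_extreme:.5f}")
```

If I have set that up correctly, the printed fraction is the chance that a genuinely fair coin would look at least that biased through luck alone.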
I have some observed results out of a large sample size of real-life events that would seem, intuitively, to indicate that they are non-random (a 5% bias towards one result out of 1000 events that should theoretically have an equally likely binary outcome one way or the other). I am trying to decide whether there is a likely bias, or whether I am just seeing standard deviation and imputing bias where there may be none. I know that intuition is often disproved by mathematical reality. Perhaps being patient and increasing the sample size of events to 2000 would clear this all up.
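One thing I can at least compute, assuming the sqrt(n * p * (1 - p)) formula above is right: how many standard deviations a persistent 5% bias would sit from the expected value at each sample size.

```python
import math

for n in (100, 1000, 2000):
    sd = math.sqrt(n * 0.5 * 0.5)  # standard deviation of the heads count for a fair coin
    bias = 0.05 * n                # a persistent 5% bias, measured in raw heads
    print(f"n={n}: 5% bias = {bias:.0f} heads = {bias / sd:.1f} standard deviations")
```

If I am reading that right, the same 5% gap gets harder and harder to blame on luck as the sample grows, which I suppose is the whole point of collecting more data.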
Thank you in advance!