probability question dealing with sets

helpplease35 · New member · Joined Jul 30, 2020 · Messages: 5
If I have a large set of A's and B's (made-up variables) with a random distribution, and I view a smaller portion of the set and see more A than B, are the odds greater than 50% that a member of the remaining set, selected at random, is B?

Basically, the larger question is: does my knowledge of the smaller portion affect my knowledge of the remaining set, assuming the set as a whole is randomly generated with an equal chance of both variables?
 
I studied this in a class at one point, and I understand that if I flip a coin 5 times in a row and get heads every time, the odds of heads or tails on the 6th flip are still 50/50. However, I can't remember whether the same principle applies to sets, when you have knowledge of one portion of the set and you're trying to say something about the remaining set.
 
What have you tried? Please show us your work so that we have something to talk about. This is a math help site where we help students solve their own problem rather than just give them the answer.
 
I think this is common sense, not math. If I remove more A's than B's from a population with equal numbers of A's and B's, then there will be more B's left than A's. But if the sample you remove is small enough, the effect would be minimal. So the answer, in practice, depends on things you haven't told us.

If you didn't know that the population is exactly half A and had to base your conclusion solely on the sample, or if the sample were not removed but only "viewed", as you said, the answer would be very different.
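The remove-versus-view distinction can be checked with a quick simulation (a sketch; the population of 500 A's and 500 B's and the sample size of 35 are made-up numbers): when the population is known to be exactly half A and a sample showing more A's is physically removed, the remainder is guaranteed to lean toward B.

```python
import random

random.seed(1)

def remaining_b_fraction(pop_each=500, sample_size=35, trials=10_000):
    """Remove a random sample from a population with exactly pop_each A's
    and pop_each B's.  Whenever the sample happens to contain more A's
    than B's, record the fraction of B's left in the remainder."""
    fractions = []
    for _ in range(trials):
        pop = ['A'] * pop_each + ['B'] * pop_each
        random.shuffle(pop)
        sample, rest = pop[:sample_size], pop[sample_size:]
        if sample.count('A') > sample.count('B'):
            fractions.append(rest.count('B') / len(rest))
    return sum(fractions) / len(fractions)

print(remaining_b_fraction())  # above 0.5: the removed A's leave extra B's behind
```

Because the population counts are exactly equal, every qualifying trial must leave more B's than A's; the simulation just shows how small the shift is for a 35-member sample.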
 
Hi, I'm not doing this for homework; it's a practical application. I think I have figured it out, though — I don't know how to write it out in math language, because for me it's just a theoretical question.

I know that the more instances of a random 50/50 event like a coin flip, the closer to an even 50% distribution you'll get on average. So the temptation is for me to think that if I have variance in a sample, the remaining members of the set would be more likely to vary in the opposite direction to balance it out. But then I realized that the tendency toward 50% is caused by the repeated instances of the 50/50 event taking place. For example, if I have 1 apple and 6 pears and someone keeps giving me fruit with a 50% chance of an apple or a pear each time, eventually I'm going to have much closer to 50% apples and 50% pears.

So I'm assuming that if I have a set with a random distribution of A and B, then even if my sample of 35 has 25 A and 10 B, that says nothing about the rest of the set, and it's still a 50/50 chance for each randomly selected member in the rest of the set.
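This conclusion can be tested directly (a sketch, assuming the set is generated by independent fair 50/50 draws, as in the coin-flip analogy): condition on a 35-member sample containing exactly 25 A's, then look at the very next member.

```python
import random

random.seed(0)

def next_after_lopsided_sample(blocks=200_000):
    """Generate many independent 35-member samples of fair 50/50 draws.
    Whenever a sample contains exactly 25 A's (and 10 B's), draw one more
    member and record whether it is an A."""
    a_next = qualifying = 0
    for _ in range(blocks):
        a_count = sum(random.random() < 0.5 for _ in range(35))
        if a_count == 25:
            qualifying += 1
            a_next += random.random() < 0.5  # the next randomly selected member
    return a_next / qualifying

print(next_after_lopsided_sample())  # ≈ 0.5: the lopsided sample says nothing about the next draw
```

Under the independence assumption, the 25-vs-10 sample carries no information about unseen members — which is exactly the conclusion above.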
 
For the purposes of the problem, A and B are equally likely (like a coin flip), but the counts aren't necessarily even. I'm actually just trying to understand the general probabilistic principle, so the number of instances isn't as important.
 
I would like confirmation that I'm correct, though, because this isn't homework and no one is going to tell me if I'm wrong; I'll just keep on believing the wrong thing.
 
If this is anything like a coin flip, then it's entirely different from the scenario you described! There are no "remaining" flips that are affected by the sample you "removed". This is why probability questions require careful description of the actual circumstances, with no attempt to make it look like something else.

What you're describing here sounds like the Gambler's Fallacy, a FALSE idea that past events affect the probability of future events. The last half of what you say here is correct: The average will revert to 50:50 not because future events are more or less likely, but just because things average out in the long run.

Search for Gambler's Fallacy; here is one source: https://en.wikipedia.org/wiki/Gambler's_fallacy
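The "averaging out" can be illustrated with a short simulation (a sketch; the starting surplus of 15 matches the 25-vs-10 sample discussed above): the proportion drifts toward 50% because the surplus gets diluted by new draws, not because later draws compensate for it.

```python
import random

random.seed(2)

def surplus_and_proportion(runs=2000, extra=1000):
    """Start each run with 25 A's and 10 B's (a surplus of 15), add `extra`
    fair 50/50 draws, then average the final A-minus-B surplus and the
    final A-proportion over all runs."""
    total_surplus = total_prop = 0.0
    for _ in range(runs):
        surplus, a_count = 15, 25
        for _ in range(extra):
            if random.random() < 0.5:
                surplus += 1
                a_count += 1
            else:
                surplus -= 1
        total_surplus += surplus
        total_prop += a_count / (35 + extra)
    return total_surplus / runs, total_prop / runs

avg_surplus, avg_prop = surplus_and_proportion()
print(avg_surplus)  # stays near 15: no tendency for the B's to "catch up"
print(avg_prop)     # yet the proportion has fallen from 25/35 ≈ 0.71 toward 0.5
```

The expected surplus never shrinks; it simply becomes a smaller and smaller fraction of the growing total, which is all that "reverting to 50:50" means here.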

In reality, if I tossed a coin 100 times and got 60 heads, rather than thinking the next toss is more likely to be tails, I'd suspect the coin of being unfair, and bet on more heads (if I ever bet). This is the "very different" conclusion I mentioned before.
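That "very different" conclusion can be made quantitative (a sketch; the uniform-prior Beta-Binomial model, i.e. Laplace's rule of succession, is my choice of illustration, not something from the thread): after 60 heads in 100 tosses, the predictive probability that the next toss is heads is above 50%.

```python
# Laplace's rule of succession: with a uniform Beta(1,1) prior on the coin's
# bias, observing h heads and t tails gives a Beta(h+1, t+1) posterior, and
# the probability that the NEXT toss is heads is the posterior mean.
def predictive_heads(h, t):
    return (h + 1) / (h + t + 2)

print(round(predictive_heads(60, 40), 3))  # 0.598 — bet on MORE heads, not tails
```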
 