Rock, Paper, Scissors

rmart100

New member
Joined: Jan 22, 2018
Messages: 21
Hey,

This is probably a super simple problem, but I'm not sure if my logic is correct here:

In rock paper scissors, assuming my opponent throws purely random hands, would it be statistically optimal to throw the same hand each time and leave the burden of variability on my opponent's side, or would it be equally likely to result in a win if I also threw random hands?

Framing the question as "what is the probability of rock winning," there's a .33 chance that it will result in a win. Framing the question as "what is the probability of my throwing a winning hand randomly, against another random hand," I'm not sure I'm making the right consideration. Is it correct to consider it a .33 chance, since there are 3 out of 9 outcomes that would result in success, or am I to assume that the probability of me throwing paper (33%) and them throwing rock (33%) would be written as 0.33 x 0.33 = ~11%?
 
I can't edit my post, but I figured out where my logic was failing.

It would amount to about 11%, but that's for only one of the 3 win outcomes. It's a combination of independent events (the two throws) producing mutually exclusive outcomes, so I just add the 11% for each win outcome, and it's 33% all the same.
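The 11%-per-outcome reasoning above can be checked by brute force. A minimal Python sketch (the `beats` dictionary and variable names are my own, not from the thread) that enumerates all nine equally likely pairs of throws:

```python
from itertools import product

# Map each throw to the throw it beats.
beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
throws = list(beats)

# Enumerate all 9 equally likely (mine, theirs) pairs.
outcomes = list(product(throws, throws))
wins = sum(1 for mine, theirs in outcomes if beats[mine] == theirs)

print(wins, len(outcomes))   # → 3 9
print(wins / len(outcomes))  # → 0.3333..., i.e. three 1/9 chances summed
```

Each winning pair has probability 1/3 x 1/3 ≈ 11%, and summing the three disjoint winning pairs gives 3/9 = 1/3.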

That said, is there a case for sticking with the smaller sample space of 1/3 as opposed to 3/9? I feel like there's an argument to be made there, but I just can't drum it up.
 
Just saying "throws randomly" does not actually specify a probability distribution, but I suspect you mean that "rock", "paper", and "scissors" are equally likely. If your opponent throws each of "rock", "paper", and "scissors" with equal probability and you always throw the same thing, then:
1) If you always throw "rock", then 1/3 of the time you have "rock, rock", 1/3 of the time "rock, paper", and 1/3 of the time "rock, scissors", so you draw 1/3 of the time, win 1/3 of the time, and lose 1/3 of the time.
2) If you always throw "paper", the same thing happens.
3) If you always throw "scissors", again the same thing happens.
Overall, with your opponent throwing at random and you always throwing the same thing, you win 1/3 of the time, lose 1/3 of the time, and tie 1/3 of the time.
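A quick Monte Carlo sketch of this fixed-throw strategy (my own illustration, assuming a uniform random opponent) shows all three outcome frequencies hovering near 1/3:

```python
import random
from collections import Counter

# Map each throw to the throw it beats.
beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

random.seed(0)
trials = 100_000
results = Counter()
for _ in range(trials):
    mine = "rock"                         # always the same throw
    theirs = random.choice(list(beats))   # uniform random opponent
    if mine == theirs:
        results["tie"] += 1
    elif beats[mine] == theirs:
        results["win"] += 1
    else:
        results["loss"] += 1

print({k: v / trials for k, v in results.items()})  # each near 1/3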

Of course, if you keep always throwing the same thing, your opponent may eventually figure that out, stop throwing randomly, and you will lose all of the time!
 
There are three states. A beats B. B beats C. C beats A.

If you always play X and your opponent does not notice and so plays randomly, you will win one third of the time on the next round and lose one third of the time. If, however, you always play X and your opponent notices, you will lose all the time.

If both of you play randomly, there are 3 ways you win, 3 ways you lose, and 3 ways you tie. So the probability of winning on a given round if both play randomly is 1/3. If you continue round after round on that basis until someone wins, the probability that you win will continue to equal the probability that you lose, and those probabilities each approach 1/2.
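That "approach 1/2" claim can be checked by simulation. A minimal sketch (my own, not from the thread): both players throw uniformly at random, rounds repeat until the tie is broken, and the fraction of games won lands near 1/2.

```python
import random

# Map each throw to the throw it beats.
beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
throws = list(beats)

random.seed(1)
games = 100_000
my_wins = 0
for _ in range(games):
    # Both players random; repeat rounds until someone wins.
    while True:
        mine, theirs = random.choice(throws), random.choice(throws)
        if mine != theirs:
            break
    if beats[mine] == theirs:
        my_wins += 1

print(my_wins / games)  # near 1/2
```

Conditioned on a round not being a tie, the 3 winning and 3 losing outcomes are equally likely, which is why the ratio is 1/2.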
 
Thanks HallsofIvy and JeffM. I thought that because both throws were being randomized, I was treating it like non-mutually exclusive events and that would lower my chances, but I wasn't accounting for the other win scenarios after solving it for one instance.

That said:

How would my variance be affected at smaller sample sizes? I know that over a large sample of games, say 1000, both strategies should converge to a 1/3 success rate, 1/3 failure, and 1/3 tie. But would limiting the variability on my side by playing a fixed hand affect this in any way at a sample size of, say, 10 games? I may not be using the appropriate terms because I'm just cobbling together the vestiges of high school statistics, but in my mind I imagine this would look like a line graph with 2 very different curves but the same limit.
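One way to probe this question empirically (a sketch of my own, not a reply from the thread): as long as the opponent is uniformly random and independent of my throw, each round is a win with probability 1/3 no matter what I throw, so the number of wins in 10 games should follow the same Binomial(10, 1/3) distribution under either strategy, with mean 10/3 and variance 10·(1/3)·(2/3) ≈ 2.22. The simulation below compares many 10-game samples for a fixed-hand player and a random-hand player:

```python
import random
import statistics

# Map each throw to the throw it beats.
beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
throws = list(beats)

def wins_in_sample(n_games, my_strategy):
    """Count my wins in one sample of n_games vs a uniform random opponent."""
    wins = 0
    for _ in range(n_games):
        mine = my_strategy()
        theirs = random.choice(throws)
        if beats[mine] == theirs:
            wins += 1
    return wins

random.seed(2)
n_samples, n_games = 20_000, 10
fixed = [wins_in_sample(n_games, lambda: "rock") for _ in range(n_samples)]
mixed = [wins_in_sample(n_games, lambda: random.choice(throws)) for _ in range(n_samples)]

# Both should show mean near 10/3 ≈ 3.33 and variance near 20/9 ≈ 2.22.
print(statistics.mean(fixed), statistics.pvariance(fixed))
print(statistics.mean(mixed), statistics.pvariance(mixed))
```

Under the stated assumption of a truly random, non-adapting opponent, the two win-count distributions coincide even at 10 games, so the two "curves" would overlap rather than merely share a limit. (Against a real opponent who can notice patterns, the fixed strategy is of course worse, as noted above.)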
 