There is a share calculated for each of 4 individuals.

Individual 1: 4712.50

Share 1: 333.35

Individual 2: 525.00

Share 2: 35.98

Individual 3: 4137.50

Share 3: 291.54

Individual 4: 1825.00

Share 4: 123.32

Just wondering what formula was used to find the share for each individual.
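Without knowing the intended formula, one quick diagnostic is to check whether each share is a flat percentage of the corresponding amount. This is only a sketch of that check, not the confirmed method:

```python
# Check whether each share is a constant percentage of the amount.
amounts = [4712.50, 525.00, 4137.50, 1825.00]
shares = [333.35, 35.98, 291.54, 123.32]

rates = [s / a for s, a in zip(shares, amounts)]
for a, s, r in zip(amounts, shares, rates):
    print(f"{a:>8.2f} -> {s:>7.2f}  rate = {r:.4%}")

# The rates range from roughly 6.76% to 7.07%, so the shares are
# not a single flat percentage of each amount.
print(f"min rate {min(rates):.4%}, max rate {max(rates):.4%}")
```

Since the rates are not constant, the formula presumably involves something beyond a simple proportional split.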

Thank you.

I need a response ASAP as my coursework is due on Monday. I'm comparing the effects of 2 different eye-dilation drugs on pupil diameter over a period of 42 minutes. To get the higher mark I need to do a statistical test, but I'm unsure which one to use. Also, it was 2 different patients, not the same one.

Please help!

For example, number 101 can assume values from 0 to 100.

My goal is to choose 24 of these 78,000 prime numbers and predict whether these 24 are odd or even.

Does anyone have a solution?

I have a history of past extractions, but I don't know if it could help.

Thanks

In one test, a beetle is on its own in an arena and we measure the percentage of time it spends in each of three types of habitat (open, bush and underground).

I repeat this experiment with several beetles.

In the second test, beetles are placed together with a predator in the arena and they are again observed for their % time in each habitat.

I want to test if the amount of time spent in each habitat is affected by the presence of the predator.

I know I can use a chi-squared test for habitat preference in test 1 on its own, since the expected % time in each habitat would be 33%.

But how can I compare the % time in each of the three habitats when the predator is absent (test 1) and present (test 2)?

Can I use a nested or a two-way ANOVA? Or a different test altogether?
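For test 1 on its own, the goodness-of-fit calculation can be sketched as below. The observed counts are made up for illustration, and note that a chi-squared test formally needs counts (e.g. numbers of observation intervals), not raw percentages; 5.991 is the chi-squared critical value for 2 degrees of freedom at the 5% level:

```python
# Chi-squared goodness-of-fit for habitat preference (test 1 alone).
# Hypothetical counts of observation intervals per habitat; under
# "no preference" each habitat is expected one third of the time.
observed = [50, 30, 20]            # open, bush, underground (made-up)
total = sum(observed)
expected = [total / 3] * 3

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(f"chi-squared statistic = {chi2:.3f}")

# Critical value for df = 2 at the 5% level is 5.991.
print("reject 'no preference'" if chi2 > 5.991 else "fail to reject")
```

Comparing predator-absent vs predator-present is a different question; the sketch above only covers the single-condition preference test.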

I have been provided with the answer, which is 1.24×10^-68, but I do not know how they got this answer.

I am looking for some help, please.

I have a total target value for a year.

Each week an amount is produced towards that target.

The graph of production is sinusoidal.

I wish to produce a weekly target for production.

See Image Attached.

If this were linear, then it would obviously be the total divided by the number of weeks, but I need the formula for when it is sinusoidal.

Total Target = 32000

Number of Periods (Weeks) = 52
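One common way to get weekly targets that follow a sine curve yet still sum to the annual total is to modulate the flat weekly average by a sine term: the sine averages to zero over a full period, so the total is preserved. A sketch, where the relative amplitude A (and any phase shift) are assumptions to be tuned to the actual production curve:

```python
import math

TOTAL = 32000          # annual target
WEEKS = 52             # number of periods
A = 0.5                # relative amplitude of the seasonal swing (assumed)

# Weekly target: flat average modulated by one full sine cycle.
# sum(sin(2*pi*k/52)) over k = 1..52 is zero, so the total stays 32000.
targets = [
    (TOTAL / WEEKS) * (1 + A * math.sin(2 * math.pi * k / WEEKS))
    for k in range(1, WEEKS + 1)
]

print(f"week 1 target = {targets[0]:.2f}")
print(f"annual total  = {sum(targets):.2f}")   # 32000.00
```

Keep A below 1 so no weekly target goes negative; a phase term inside the sine shifts where the peak week falls.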

Thank You in Advance

C J B

From a box containing n white marbles, m black marbles and p red marbles, we draw one marble at a time, check its colour and place it back in the box, until white marbles have been drawn x times.

Let s = the number of marbles retrieved. We need to calculate the probability L = P(the number of black marbles retrieved is bigger than the number of red ones).

Until now, I replaced n, m and p with the values 2, 2 and 10 (from a similar problem) so I could try to get a rough formula (but I couldn't). I found that 1/7 is the chance to get a white ball, and 1/49 is the chance to get 2 white balls one after the other (unnecessary, I think), and this is where I'm stuck.
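While looking for a closed form, L can at least be estimated by simulation. A sketch with the values above (n = 2, m = 2, p = 10, so P(white) = 2/14 = 1/7, matching the 1/7 mentioned), assuming "x iterations" means the experiment stops at the x-th white draw, not necessarily consecutive:

```python
import random

def estimate_L(n=2, m=2, p=10, x=2, trials=20_000, seed=1):
    """Estimate P(more black than red drawn before the x-th white)."""
    rng = random.Random(seed)
    total = n + m + p
    hits = 0
    for _ in range(trials):
        whites = blacks = reds = 0
        while whites < x:                 # draw with replacement
            u = rng.randrange(total)
            if u < n:
                whites += 1
            elif u < n + m:
                blacks += 1
            else:
                reds += 1
        if blacks > reds:
            hits += 1
    return hits / trials

prob = estimate_L()
print(f"estimated L = {prob:.4f}")
```

With p much larger than m, red draws dominate, so L comes out small; the estimate gives a target to check any derived formula against.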

Thank you!

[tex] H_1 : \mu > \mu_{0}[/tex]

[tex] W(X_1,X_2, \cdots,X_n)=\frac{\overline{X}-\mu_0}{\sigma / \sqrt{n}} [/tex]

If the null hypothesis is true we expect the sample mean to be relatively small.

I don't understand why it would expect it to be small. There's no restriction on [tex]\mu_{0}[/tex] that I can see.

Using a tabular method, how would I complete the cumulative frequencies, mean and standard deviation for the following data?

Expenditures (£) | Frequency |

Under 400 | 33 |

400 and under 600 | 27 |

600 and under 800 | 18 |

800 and under 1000 | 11 |

1000 and under 1200 | 8 |

1200 and under 1400 | 2 |

1400 and under 1600 | 3 |

1600 and under 1800 | 2 |

1800 and over | 1 |

Total | 105 |
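Using class midpoints, the tabular calculation can be sketched as follows. The open-ended classes need assumed midpoints: here "Under 400" is treated as 0–400 (midpoint 200) and "1800 and over" as 1800–2000 (midpoint 1900); those bounds are assumptions, not given in the question:

```python
import math

# (midpoint, frequency); midpoints of the open-ended classes are assumed
classes = [(200, 33), (500, 27), (700, 18), (900, 11), (1100, 8),
           (1300, 2), (1500, 3), (1700, 2), (1900, 1)]

n = sum(f for _, f in classes)                    # 105
cum, running = [], 0
for mid, f in classes:
    running += f
    cum.append(running)                           # cumulative frequency

mean = sum(mid * f for mid, f in classes) / n
var = sum(f * (mid - mean) ** 2 for mid, f in classes) / n  # population form
sd = math.sqrt(var)

print("cumulative:", cum)
print(f"mean = {mean:.2f}, sd = {sd:.2f}")
```

Different assumed midpoints for the two open-ended classes will shift the mean and standard deviation, so state whichever assumption you use.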

There are 22 females, 20 males and 8 with no response for gender.

What's the percentage of male students and female students? Do we take into account that the frequency is 50 or 42?

One goes like this

One goes like this:

σ = √( ΣfXm² / N − (ΣfXm / N)² )

And the other like this:

S.D. = √( (N·ΣfXm² − (ΣfXm)²) / (N(N−1)) )

There is a table we have to solve for the standard deviation:

Interval | f | Xm | fXm | Xm^2 | fXm^2 |

41-50 | 14 | 45.5 | 773.3 | 2070.25 | 28,983.5 |

31-40 | 25 | 35.5 | 887.5 | 1260.25 | 31,506.25 |

21-30 | 24 | 25.5 | 612 | 650.25 | 15,606 |

11-20 | 16 | 15.5 | 248 | 240.25 | 3,844 |

1-10 | 6 | 5.5 | 33 | 30.25 | 181.5 |

N=88 ΣfXm= 2553.8 ΣfXm^2= 80,121.25

First formula solution

80,121.25 / 88 = 910.46875

910.46875 - 842.3186983471074 = 68.15

Second formula solution

80,121.25 * 88 = 7,050,670

2553.8 ^2= 6,521,894.44

7,050,670 - 6,521,894.44 = 528,775.56

528,775.56 / 7656 = 69.06

Why are the two solutions different? Is there something wrong?
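The two results differ for a structural reason that can be checked directly: the first formula divides by N (population variance) while the second divides by N(N−1) (sample variance), so they can never agree exactly. (Strictly, both results here are variances; the standard deviations are their square roots. There is also an arithmetic slip in the table: 14 × 45.5 = 637, not 773.3, although the quoted totals use 773.3.) Using the quoted totals:

```python
# Totals as quoted in the table: N, sum of f*Xm, sum of f*Xm^2
N = 88
sum_fx = 2553.8
sum_fx2 = 80121.25

# Formula 1: population variance = sum(fX^2)/N - mean^2
mean = sum_fx / N
var_pop = sum_fx2 / N - mean ** 2

# Formula 2: sample variance = (N*sum(fX^2) - (sum fX)^2) / (N*(N-1))
var_samp = (N * sum_fx2 - sum_fx ** 2) / (N * (N - 1))

print(f"population variance = {var_pop:.2f}")   # ~68.28
print(f"sample variance     = {var_samp:.2f}")  # ~69.07
# The two differ exactly by the factor N/(N-1):
print(f"ratio = {var_samp / var_pop:.6f}, N/(N-1) = {N / (N - 1):.6f}")
```

So a gap of roughly 1 between 68-ish and 69-ish is expected, not a mistake in itself; only the N vs N−1 convention differs.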

This has been bothering me for a while because both instinct and logic tell me that the answer is 25% for each of the four percentages. However I find myself caught in a loop that I can't quite untangle when I start thinking like so;

If on attempt 1, the odds of picking the correct item first time are 25%, then it is 3 times more likely that you will pick incorrectly first (25% / 75%). If you do pick incorrectly first, then the chances of each of the other items being the right choice become 33% (of 75%, three of four remaining choices). 33% x 75% is still 25%, so we're still fine on the maths and logic front. It is now twice as likely within this sub-choice of three, that the correct item is either the third or fourth choice (33% / 66%) but once again, 66% x 75% = 50% and since we grouped two possibilities together, we must divide by two to get 25% for each.

I do understand that this logic is completely and utterly circular and therefore, seems pointless to contemplate; each option will be chosen 25% of the time by the end no matter what way you twist the maths.

In spite of myself though, I just want to confirm that there is no statistical significance to grouping, for example, a third and fourth choice together other than having to do back multiplication and division to get to the inevitable figure of 25%, and that grouping does not somehow make them more likely outcomes than earlier ones.

Apologies if this question has been irritating, but I have no background in statistics or probability and I just want to be absolutely sure that there isn't some strange rule or calculation that might change the maths, because I know probability can be weird at times. It just feels to me like the answers would somehow be closer to 20%, 25%, 35% and 20%, though I cannot possibly tell you why. Thanks in advance.
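The 25%-per-position intuition can be checked by a quick simulation: place the correct item uniformly among the four options and record how often each pick order would find it. Grouping later choices together never changes the per-position probability. A sketch:

```python
import random

rng = random.Random(42)
trials = 40_000
counts = [0, 0, 0, 0]

for _ in range(trials):
    # The correct item is equally likely to sit in any of the 4 positions.
    counts[rng.randrange(4)] += 1

for i, c in enumerate(counts, start=1):
    print(f"choice {i}: {c / trials:.3f}")   # each close to 0.250
```

All four frequencies hover around 0.25, confirming that the grouped view (e.g. "third or fourth" at 50% of the 75% branch) is just bookkeeping, not extra likelihood.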

I wanted to check I haven’t made a mistake with something:

I have a range of percentage errors and I need to test whether the percentage error rate is >20%. Then I need to find the 95% confidence interval for the population mean.

I basically took the mean of the figures below, which came to 26, and then used this mean to calculate the confidence interval (using the standard deviation, which was 3.31).

I got 24.9-27.2

Can someone please advise if I have done something wrong?

Thank you and much appreciated.

Here are the numbers

31.7 | 21.7 | 20.7 | 21.7 | 29.7 | 25.7 | 24.7 | 27.7 | 28.7 |

25.7 | 28.7 | 27.7 | 23.7 | 21.7 | 31.7 | 27.7 | 27.7 | 25.7 |

19.7 | 25.7 | 27.7 | 23.7 | 24.7 | 30.7 | 26.7 | 22.7 | 28.7 |
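Running the numbers above as a check: the mean is about 26.03 and the sample standard deviation is √11 ≈ 3.32, so a 95% interval with the normal critical value 1.96 comes out near 24.78–27.28, close to the 24.9–27.2 quoted. (With only 27 observations, a t critical value of about 2.056 would widen it slightly.) A sketch:

```python
import math
import statistics

data = [31.7, 21.7, 20.7, 21.7, 29.7, 25.7, 24.7, 27.7, 28.7,
        25.7, 28.7, 27.7, 23.7, 21.7, 31.7, 27.7, 27.7, 25.7,
        19.7, 25.7, 27.7, 23.7, 24.7, 30.7, 26.7, 22.7, 28.7]

n = len(data)                      # 27
mean = statistics.fmean(data)      # ~26.03
sd = statistics.stdev(data)        # sample sd, ~3.32

# 95% confidence interval for the population mean (z = 1.96 assumed)
half = 1.96 * sd / math.sqrt(n)
print(f"mean = {mean:.2f}, sd = {sd:.2f}")
print(f"95% CI: ({mean - half:.2f}, {mean + half:.2f})")
```

Any remaining gap from the quoted 24.9–27.2 is down to rounding the mean to 26 and the choice of critical value.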

I am trying to figure out how to statistically determine the best location for a second sensor. I have a primary sensor (center of attached image) that periodically gets blocked. I plan to place a second sensor at pre-planned locations and take measurements. I expect to get time correlated data that looks like this:

Time Pos1 Pos2 Pos3 Pos4...etc (probably 80 different positions)

14:31 35 34 36 35

14:32 34 35 35 35

14:33 36 38 34 35

*Higher numbers indicate a better result

I have only one sensor. So I will move that sensor every day and take a new set of data. At the end I’d like to know which position provides the least blockage. I thought about just averaging the numbers, or counting how many measurements fall below a threshold. But I’m hoping to get a more accurate answer with statistics.

A quick explanation of the attachment:

- The green rings indicate a fixed distance from the center

- The lines are direction in degrees

- I plan to name the positions by ring then direction (i.e. c270, or g135)

As a bonus, I’d like to also plot the data. Like a heat map, where the better locations are dark-blue and the worse locations light-blue. To plot it, I need a result for each position. Maybe something like percentage of time a position exceeds/lags the primary measurement. Not really sure.
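The two simple summaries mentioned above (per-position average, and the fraction of readings falling below a threshold) can be computed directly from the time-correlated table, and either one gives a single number per position for the heat map. A sketch with made-up readings and an assumed threshold:

```python
# Made-up sensor readings: one list per candidate position
# (ring + direction naming), one entry per time sample;
# higher numbers indicate a better result, as in the table above.
readings = {
    "c270": [35, 34, 36, 35, 12, 33],
    "c315": [34, 35, 35, 35, 34, 35],
    "g135": [36, 38, 34, 35, 36, 10],
}
THRESHOLD = 30   # readings below this count as "blocked" (assumed)

for pos, vals in readings.items():
    avg = sum(vals) / len(vals)
    blocked = sum(v < THRESHOLD for v in vals) / len(vals)
    print(f"{pos}: mean = {avg:.1f}, blocked fraction = {blocked:.2f}")
```

The blocked fraction per position maps naturally onto the dark-blue/light-blue heat map idea; a formal test only becomes necessary if two candidate positions end up nearly tied.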

Found something called "Anova: Two-Factor Without Replication" in the Excel Data Analysis tools. Looks like it might be right, but I really have no idea.

Any help is greatly appreciated! (Attachment: stats_problem.jpg)

Brandon

A population has a mean of 51 and a standard deviation of 3.4. If repeated samples of 20 units are taken from this population, what is the likely standard deviation of the sample means?

0.76

3.4

0.17

2.55

I would like to understand the working to get to the answer, please.
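The quantity asked for is the standard error of the mean, σ/√n; with σ = 3.4 and n = 20 this gives about 0.76, the first option. A quick check:

```python
import math

sigma = 3.4     # population standard deviation
n = 20          # sample size

# Standard deviation of the sample mean (the standard error)
se = sigma / math.sqrt(n)
print(f"standard error = {se:.2f}")   # 0.76
```

The key fact is that averaging n independent readings shrinks the spread by a factor of √n, so the sample-mean spread is smaller than the population σ of 3.4.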