Homemade metric - has anyone heard of it?

mariathatsme
New member · Joined: May 27, 2020 · Messages: 2
I have two groups with five numbers in each group. I want to compare how equal the groups are to each other. I have figured out the metric that I would like to use:

[MATH]\text{formula\_without\_name} = \frac{(\mu_A - \mu_B)^2}{\mu \cdot \sigma}[/MATH],

where [MATH]\mu_A[/MATH] is the mean of the values in group A, [MATH]\mu_B[/MATH] is the mean of the values in group B, [MATH]\mu[/MATH] is the mean of all ten values, and [MATH]\sigma[/MATH] is the mean of the standard deviations of group A and group B.
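
In case it helps, this is roughly how I compute it (a quick Python sketch; the function name is just a placeholder, and I use the population standard deviation here, though for groups of this size the sample version would only rescale the result):

[CODE]
import statistics

def homemade_metric(group_a, group_b):
    # (mean_A - mean_B)^2 divided by (overall mean * average of the two SDs)
    mu_a = statistics.mean(group_a)
    mu_b = statistics.mean(group_b)
    mu_all = statistics.mean(list(group_a) + list(group_b))
    sigma = (statistics.pstdev(group_a) + statistics.pstdev(group_b)) / 2
    return (mu_a - mu_b) ** 2 / (mu_all * sigma)

print(homemade_metric([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]))
[/CODE]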

What I wonder is if this metric has a name. Has anyone else in history used it before me?
 
To the best of my knowledge, the metric above has no name, because it has no theoretical justification. I suggest you look into the standard metrics associated with ANOVA.
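For example, something along these lines (a rough sketch using scipy.stats.f_oneway; with exactly two groups, the one-way ANOVA F statistic is just the square of the equal-variance two-sample t statistic):

[CODE]
from scipy import stats

group_a = [1, 2, 3, 4, 5]
group_b = [6, 7, 8, 9, 10]

# One-way ANOVA across the two groups; returns the F statistic and the p-value.
f_stat, p_value = stats.f_oneway(group_a, group_b)
print(f_stat, p_value)
[/CODE]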
 
Does that mean that the metric is no good? Perhaps you could help me find another metric that fits my needs?

I'm looking for one that gives a smaller error when the groups are of higher magnitude but the same absolute distance apart. For instance, I would like the groups A1 = [101, 102, 103, 104, 105] and B1 = [106, 107, 108, 109, 110] to have a smaller error than A2 = [1, 2, 3, 4, 5] and B2 = [6, 7, 8, 9, 10]. I have tried the t-test, but it fails on this point.

If all the numbers are multiplied by the same factor, I'd like the error to stay the same. For instance, I would like the error between the groups A2 and B2 to be equal to the error between A3 = [100, 200, 300, 400, 500] and B3 = [600, 700, 800, 900, 1000]. I have tried the t-test, but it fails on this point as well.

On top of that, a larger variation within one group should create a smaller error. For instance, the error between A4 = [1, 2, 3, 4, 5] and B4 = [4, 6, 8, 10, 12] should be smaller than the error between A2 and B2, even though the means are the same. I've tried percentage difference, which gives equal errors here.
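
To make those three requirements concrete, this is the kind of check I run on a candidate metric (a rough sketch; the helper names are just placeholders, my own metric is plugged in as the candidate, and any other metric can be tried by swapping the function):

[CODE]
import statistics

def homemade_metric(a, b):
    # The metric from my first post, used here only as a placeholder candidate.
    mu_a, mu_b = statistics.mean(a), statistics.mean(b)
    mu_all = statistics.mean(list(a) + list(b))
    sigma = (statistics.pstdev(a) + statistics.pstdev(b)) / 2
    return (mu_a - mu_b) ** 2 / (mu_all * sigma)

# The example pairs from the three requirements above.
pairs = {
    "A1/B1": ([101, 102, 103, 104, 105], [106, 107, 108, 109, 110]),
    "A2/B2": ([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]),
    "A3/B3": ([100, 200, 300, 400, 500], [600, 700, 800, 900, 1000]),
    "A4/B4": ([1, 2, 3, 4, 5], [4, 6, 8, 10, 12]),
}

def check(metric):
    # Print the error a candidate metric assigns to every example pair.
    for name, (a, b) in pairs.items():
        print(name, metric(a, b))

check(homemade_metric)  # swap in any other candidate metric here
[/CODE]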
 
I have two groups with five numbers in each group. I want to compare how equal the groups are to each other. I have figured out the metric that I would like to use:

[MATH]\text{formula\_without\_name} = \frac{(\mu_A - \mu_B)^2}{\mu \cdot \sigma}[/MATH],

where [MATH]\mu_A[/MATH] is the mean of the values in group A, [MATH]\mu_B[/MATH] is the mean of the values in group B, [MATH]\mu[/MATH] is the mean of all ten values, and [MATH]\sigma[/MATH] is the mean of the standard deviations of group A and group B.
What you need to do is to define clearly what "how equal the groups are to each other" means to you. How can one thing be "more equal" than another? (Sounds like Animal Farm.)

Your examples don't really help me understand your thinking:
I'm looking for one that gives a smaller error when the groups are of higher magnitude but the same absolute distance apart. For instance, I would like the groups A1 = [101, 102, 103, 104, 105] and B1 = [106, 107, 108, 109, 110] to have a smaller error than A2 = [1, 2, 3, 4, 5] and B2 = [6, 7, 8, 9, 10]. I have tried the t-test, but it fails on this point.

If all the numbers are multiplied by the same factor, I'd like the error to stay the same. For instance, I would like the error between the groups A2 and B2 to be equal to the error between A3 = [100, 200, 300, 400, 500] and B3 = [600, 700, 800, 900, 1000]. I have tried the t-test, but it fails on this point as well.

On top of that, a larger variation within one group should create a smaller error. For instance, the error between A4 = [1, 2, 3, 4, 5] and B4 = [4, 6, 8, 10, 12] should be smaller than the error between A2 and B2, even though the means are the same. I've tried percentage difference, which gives equal errors here.
The second paragraph suggests you want your statistic to be invariant under scaling. The first suggests it should involve relative rather than absolute differences. I don't understand the third at all.
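
For what it's worth, the statistic you wrote down is already invariant under scaling by a positive factor [MATH]k[/MATH]: both group means, the overall mean, and both standard deviations are all multiplied by [MATH]k[/MATH], so

[MATH]\frac{(k\mu_A - k\mu_B)^2}{(k\mu)(k\sigma)} = \frac{k^2(\mu_A - \mu_B)^2}{k^2\,\mu\sigma} = \frac{(\mu_A - \mu_B)^2}{\mu\sigma}.[/MATH]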

One thing that sometimes helps in clarifying what statistic is appropriate is to explain how you are going to use the result. The actual application may be important.
 