Percentage referenced to mean? How much bigger does my new TV look?

CherylJosie
New member · Joined May 18, 2016 · Messages: 2
Although I am an engineer, my math is no longer strong. Besides that, my probability instructor and text were awful, and I never did much with statistics afterward.

I frequent an internet forum dedicated to audio-visual topics, and one question that keeps coming up is: how big should my TV be, measured on the diagonal?

The question always boils down to a comparison between television A and television B of different sizes (but usually of similar aspect ratio), and the implied query is: how much improvement do I get with the larger screen, or how far can I sit from my screen before it looks too small?

There are a couple of web sites with simple forms that calculate the percentage change between displays, e.g. for diagonal a = 0.5 vs. diagonal b = 1, B is 100% larger than A, but A is 50% smaller than B, so the answer is ambiguous. Also, sometimes display area rather than diagonal is used for comparison.

Another possible metric is the horizontal viewing angle (the angle subtended by the horizontal dimension at a given viewing distance), but none of these metrics accurately predicts how large a TV appears to be, or gives a single number that identifies how much improvement a larger TV brings.

So in coming up with a simple metric to answer the question, I faced several challenges, one of them still incomplete.

First, I assumed that the apparent size of an object is a direct function of the size of its image on the retina. I also assumed that the relationship between the size of a display panel and the size of the image on the retina was best described by the arctangent, since the display is flat but the retina is curved and the eyeball approximately converts linear dimensions into radians.

Here is an example for comparing two differently sized displays at the same viewing distance. A more generalized form would include a viewing distance for each display also, but I omitted it for simplicity.

Assume:
both displays and the viewing position are centered
the perceived size is exactly a function of the size of the image on the curved retina
the display diagonal inherently takes the RMS of the height and width, and is an appropriate dimension to use for comparison
the linear dimension of the diagonal on the display translates into an angular dimension in the eye via the arctangent function and creates an arc on the curved retina
the rods and cones are distributed evenly over the retina and linearly translate radians into a perception of size

v = viewing distance
a = diagonal measurement of the smaller display
b = diagonal measurement of the larger display
A = angle subtended by diagonal a
B = angle subtended by diagonal b

For on-axis viewing,
A = 2*arctan(0.5*a/v)
B = 2*arctan(0.5*b/v)
averaged subtended angle C = (A+B)/2
percent change in subtended angle = 100*(B-A)/C
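Here is a quick Python sketch of that calculation (my own throwaway function names, assuming one shared viewing distance and arbitrary example sizes):

```python
import math

def subtended_angle(diagonal, distance):
    """Full angle (radians) subtended at the eye by a centered display diagonal."""
    return 2 * math.atan(0.5 * diagonal / distance)

def percent_change(a, b, v):
    """Percent change in subtended angle, referenced to the arithmetic mean angle."""
    A = subtended_angle(a, v)
    B = subtended_angle(b, v)
    C = (A + B) / 2          # averaged subtended angle
    return 100 * (B - A) / C

# Example: 50" vs. 65" diagonal viewed from 100"
print(percent_change(50, 65, 100))
```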


Instead of just reporting the change in radians, I chose to report a percentage, since people are not going to understand how much difference a change in radians amounts to without it being referenced to some baseline radians.

The diagonal contains an inherent root-mean-square and represents the longest distance between two points on the screen, and hence on the retina. I chose it as the input to my function on the assumption that apparent size is a function of the longest line between any two points on the retina, and that the RMS inherent to the diagonal adequately addresses questions of aspect ratio, at least when comparing two displays with the same aspect ratio.

I am confident in everything up to the averaging part, where I have used the arithmetic average as the denominator in the percentage calculation without understanding whether that is the best way to handle this calculation. The intention is to center the percentage calculation between the two display sizes, to approximate a differential divided by the mean value, but I am not sure this is a mathematically valid approach since, as I said, my background in statistics is basically nil given that my single required course was a dud. I do not even remember whether that calculation makes any sense conceptually, since I am either re-inventing the wheel or lost in the brambles.

It might be that the percentage calculation above is a valid approach under some conditions and I just expressed it in an altered form for a special case. Dunno.

The point is, I want to develop a single metric that answers the question 'how much bigger does it look?' that is as mathematically faithful to the geometry as possible, and unambiguous.

The problem is, there are several strong nonlinearities in this calculation and I am trying to come up with a single quality factor that can be expressed as a percentage improvement in perceived size. I have gotten stuck at the limits of my knowledge in math.

The first nonlinearity is from the RMS inherent to the diagonal, but I see no way around that, having no idea what the psychovisual impact of aspect ratio is on perceived size. The diagonal should be good enough based on the assumption that the largest visual dimension is going to dominate perceived size. In the absence of scholarly information it will have to do; I can always modify the metric later if RMS turns out to be inappropriate.

The second nonlinearity is the arctangent, but since that only translates the diagonal into radians, it should be transparent to the percentage calculation? Or no? I chose the diagonal not only because it does the RMS, but also because, as the largest dimension, it is subject to the strongest compression in the arctangent, reducing the benefit of a size increase as the average size of both displays increases. By choosing the largest dimension of the display (the diagonal), I hope to make the calculation of the arctangent compression more accurate to the actual perceived change in size.

I am assuming that the nonlinearity in the arctan itself is immaterial to the calculation of percentage since it only translates between display and retina and has no other impact.

The third nonlinearity is inherent to the percentage calculation. If the larger angle is used in the denominator (how much smaller is it?), the ratio ranges linearly between 1 and 0 (0% smaller to 100% smaller). If the smaller angle is used in the denominator, the ratio ranges between 1 and infinity (0% larger to infinitely larger). One of these is a linear function and the other is a 1/x-type hyperbolic function.

Should I just use the larger angle in the denominator and report the percentage improvement that way? Problem is, it may underestimate the improvement going from a very small display to a much larger one.

Should I just use the smaller angle in the denominator? Problem is, it may overestimate the improvement going from a very small display to a much larger one.
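To put numbers on that asymmetry, here is a minimal sketch with made-up sizes (25 in. vs. 100 in. diagonal at a 100 in. viewing distance):

```python
import math

def subtended_angle(diagonal, distance):
    # Full angle (radians) subtended by a centered display diagonal
    return 2 * math.atan(0.5 * diagonal / distance)

A = subtended_angle(25, 100)   # smaller display
B = subtended_angle(100, 100)  # larger display

pct_vs_smaller = 100 * (B - A) / A              # "how much larger?"  - unbounded above
pct_vs_larger  = 100 * (B - A) / B              # "how much smaller?" - can never reach 100%
pct_vs_mean    = 100 * (B - A) / ((A + B) / 2)  # centered compromise - bounded by 200%

print(pct_vs_smaller, pct_vs_larger, pct_vs_mean)
```

For this pair the three denominators give wildly different percentages, which is exactly the ambiguity described above; note that the mean-based version can never exceed 200% no matter how extreme the pair is.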

So once I have the angles, I am left with how to calculate the 'quality factor', my single (e.g. percentage) metric of the improvement from going to a larger display.

For the sake of creating a single percentage change that is accurate and unambiguous, I chose to use the difference of the angles in the numerator and the arithmetic mean of both angles in the denominator, rather than a straight ratio between angles. I invented that metric on the spur of the moment without really understanding exactly what it accomplishes, but intuitively it seems to have some merit.

If anyone has any insight into what my percentage calculation actually accomplishes, what its formal name is (if I have re-invented the wheel), and what might be a better approach, please share? I am not even certain that I am using percentage in a valid fashion.

I am reading up on the Pythagorean means etc. on Wikipedia, and I sort of remember RMS of sine waves, but my conceptual visualization of such things is poor given the long time between education and application along with my total lack of functional background in statistics.

I was hoping I could choose a mean that makes sense, such as maybe the geometric mean, to use as the denominator, or maybe scrap and redo the metric entirely. I am assuming that perceived size is a linear function of the angle at the iris, which translates linearly into an arc across the retina, so it should be as simple as measuring the change in output of a linear function, but it needs a valid baseline.

Geometric mean seems like it maybe equalizes the impact of the larger and smaller display? Would that help normalize the percentage better? What about the harmonic mean? What is its advantage, and would it apply to calculating a centered percentage change between two values? This is tough; I am not even sure my question makes sense.
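For concreteness, here is how the three Pythagorean means compare as denominators for one hypothetical pair of angles (just a numerical sketch, not a recommendation):

```python
import math

A = 2 * math.atan(0.125)  # subtended angle of the smaller display (radians)
B = 2 * math.atan(0.5)    # subtended angle of the larger display (radians)

arithmetic = (A + B) / 2
geometric  = math.sqrt(A * B)
harmonic   = 2 / (1 / A + 1 / B)

# AM >= GM >= HM always, so the harmonic mean yields the largest percentage
for name, mean in [("arithmetic", arithmetic), ("geometric", geometric), ("harmonic", harmonic)]:
    print(name, round(100 * (B - A) / mean, 1))
```

One property worth noting: all three versions are symmetric in A and B in the sense that swapping them only flips the sign of the percentage, so any of them is at least 'centered' in that respect.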

Someone must have solved this problem before, like maybe several hundred years ago?

thx for being patient. ps I just registered and have not explored the math editor at all yet.
 
Part of the issue may be in how one defines "the right size" or "when it starts looking small". For instance, I have poor hearing, so I have closed-captioning on all the time. I "read" most of my shows. So how small is "small" for me? When the closed-captioning starts getting hard to read. When is that? It will probably vary with the settings and the manufacturer; some captioning's text looks bigger than others' does.

I guess what I'm saying is that any metric will likely generate complaints from somebody. It's unlikely that you'll be able to find one formula that suits everybody. Either way, though, good luck! ;)
 
Well, I did some more thinking and decided that the best way to handle this is to calculate the percentage of retina that each screen occupies. So I calculated the arc formed at the iris by the display diagonal and divided by pi, since the angle of view ranges from +pi/2 to -pi/2 per the arctangent function and the pinhole-camera-with-spherical-retina approximation I developed.

Then, knowing the percentage of retina occupied by each display, it is simple to subtract the two percentages from each other and get one simple, unambiguous metric that exactly defines the actual improvement of a larger or closer display. Referencing both displays to the retina itself was the trick I was looking for that merges both percentages into one unambiguous measurement. It just took a while for it to percolate.

Whether it will catch on or not remains to be seen, but it seems like the logical choice. Anyway, I am going to finish documenting my method on the forum and toss it out there for criticism. Maybe it will earn me a place in AV fame some day (yeah, like that is going to happen!).

I tried plotting some numbers too, and it appears that the actual percentage change is largest when the displays cover approximately half the retina. When they are covering almost no retina, doubling the display makes little difference. When they are covering almost all of the retina, doubling the display makes little difference. For any given display sizes, adjusting the distance so they bracket 50% coverage of the retina gives the most improvement in perceived size.
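A quick numerical check of that last claim, assuming the retina fraction is just the subtended angle divided by pi (hypothetical helper names):

```python
import math

def retina_fraction(diagonal, distance):
    # Fraction of the +/- pi/2 field of view covered by the display diagonal
    return 2 * math.atan(0.5 * diagonal / distance) / math.pi

def doubling_gain(diagonal, distance):
    # Gain in retina fraction from doubling the diagonal at a fixed distance
    return retina_fraction(2 * diagonal, distance) - retina_fraction(diagonal, distance)

# For a 100 in. diagonal, scan viewing distances to find where doubling pays off most
distances = [v / 10 for v in range(10, 3000)]
best = max(distances, key=lambda v: doubling_gain(100, v))

small = retina_fraction(100, best)
large = retina_fraction(200, best)
print(best, round(small, 3), round(large, 3))  # the two coverages straddle 50%
```

At the optimum distance the two coverages sum to almost exactly 100%, i.e. they bracket 50% symmetrically, which matches the plot-based observation above.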
 