Square Root of Zero

Mr. Bland

This thread got me thinking about something I've had in the back of my mind for a while...

It is generally accepted that [MATH]\sqrt{0} = 0[/MATH]. In my experience, [MATH]0^2 = 0[/MATH] is universally cited as evidence of this claim. The logic is that if the square root of some number [MATH]x[/MATH] is defined as a number [MATH]r[/MATH] such that [MATH]r^2 = x[/MATH], then zero is its own square root.

But... I'm not completely sold on that idea. It's a narrow, incomplete view of square root, and the claim doesn't hold up under broader scrutiny...

Square root has another relationship: [MATH]\frac{x}{\sqrt{x}} = \sqrt{x}[/MATH]. If [MATH]\sqrt{0} = 0[/MATH], then it should also be true that [MATH]\frac{0}{\sqrt{0}} = \frac{0}{0} = 0[/MATH], but then there's the fact that [MATH]\frac{0}{0}[/MATH] is known to be undefined. This is an apparent proof by contradiction that [MATH]\sqrt{0} \ne 0[/MATH].

Far be it from me to be the guy who says that "literally all mathematicians everywhere" are wrong about the square root of zero, so I figure there's something else at play that isn't obvious. Is my [MATH]\frac{0}{0}[/MATH] example committing some sort of fallacy? Are there some special circumstances surrounding the square root of zero that don't apply to non-zero numbers? Is it something that's context-dependent?

... or am I a revolutionary genius on the cusp of a mathematical renaissance, whose likeness will be carved into mountains and whose legacy will live on forever?
 
Your proposition falls apart at your first statement.

\(\displaystyle \dfrac{x}{\sqrt{x}} = \sqrt{x},~ \forall x > 0\) is the correct way of stating that property of square roots.

It never makes mathematical sense to talk about dividing by zero because that operation is undefined.
Further, the square root of negative numbers is undefined on the reals.
 
First, your definition is not quite complete: the square root of x (that is, [MATH]\sqrt{x}[/MATH]) is defined as the non-negative number y such that [MATH]y^2 = x[/MATH].

Since [MATH]0^2 = 0[/MATH], and [MATH]0 \ge 0[/MATH], we can say by definition that [MATH]\sqrt{0} = 0[/MATH].

That's just a matter of definition.

Now, the rest of what you say is based not on a definition, but on a fact that happens to be true for non-zero numbers, and which you have not proved to be necessarily true for all non-negative numbers x.

What you are doing is ignoring conditions in your statements, and therefore deriving contradictions.
 
\(\displaystyle \dfrac{x}{\sqrt{x}} = \sqrt{x},~ \forall x > 0\) is the correct way of stating that property of square roots.

To obtain this property, I'm guessing it would be valid to start with:-

\(\displaystyle x=\sqrt{x}\cdot\sqrt{x}, ~ \forall x \geq 0\)

since this follows from the definition in post #3.

And at the point of dividing both sides by \(\displaystyle \sqrt{x}\), you have to exclude the case \(\displaystyle x = 0\), leaving:

\(\displaystyle \dfrac{x}{\sqrt{x}} = \sqrt{x},~ \forall x > 0\)
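
As a quick sanity check (just a rough Python sketch of my own, not anything authoritative), the identity holds numerically for x > 0, and the division step genuinely breaks at x = 0:

[CODE]
import math

# For x > 0, dividing x by its square root gives the square root back.
for x in [4.0, 2.0, 0.5, 1e-9]:
    assert math.isclose(x / math.sqrt(x), math.sqrt(x))

# At x = 0, the "divide both sides by sqrt(x)" step is illegal:
try:
    print(0.0 / math.sqrt(0.0))
except ZeroDivisionError:
    print("0 / sqrt(0) is undefined, even though sqrt(0) = 0")
[/CODE]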

Thanks everyone - I found that interesting.

And I got a laugh from "carved into mountains" in post #1 :)
 
\(\displaystyle \dfrac{x}{\sqrt{x}} = \sqrt{x},~ \forall x > 0\) is the correct way of stating that property of square roots.
Although the radical notation has its nuances, my intent is to discuss the concept of square root in a general sense. In that general sense, the operand is not necessarily restricted: it continues to function in the complex realm as well, making it valid for negative real inputs.

Now, the rest of what you say is based not on a definition, but about a fact that happens to be true for non-zero numbers [...]
This may be where "literally all mathematicians everywhere" and I don't see eye-to-eye. The rest of my reply is the ramblings of a madman and not specifically in response to yours...



From my perspective, square root works the way it does regardless of how humans choose to articulate it, a fact that was instrumental in the discovery of complex arithmetic. There was a time (for those who don't know the story) when it was considered a non-operation to take the square root of a negative number. Under that definition, it was impossible to solve certain cubics because square roots of negative numbers turned up during the algebra. So the "nonsensical" operation was investigated, and since that time, we've learned more about the nature of arithmetic and are now able to solve all forms of cubics as a result.

We learned about square root because we studied it, not because we defined it.

If we say that square root is, exactly, "the value that when squared gives the input" (plus or minus a plus-or-minus), then sure, zero is its own square root. However, from a practical sense, where we work with discrete operations and study their relationships, we encounter situations where square root is meaningful yet not fitting within that definition.

When accounting for complex and higher-order arithmetic, unless I'm terribly mistaken (which might be the case), dividing a number by (one of) its square root(s) gives (one of) its square root(s) for all values except zero, because division by zero is undefined. This notion is analogous to how you can't use division to "undo" a multiplication by zero, which to me is sensible considering square root's relation to division.

And from my perspective, this makes it look as though the square root of zero is undefined.
 
From my perspective, square root works the way it does regardless of how humans choose to articulate it, a fact that was instrumental in the discovery of complex arithmetic. There was a time (for those who don't know the story) when it was considered a non-operation to take the square root of a negative number. Under that definition, it was impossible to solve certain cubics because square roots of negative numbers turned up during the algebra. So the "nonsensical" operation was investigated, and since that time, we've learned more about the nature of arithmetic and are now able to solve all forms of cubics as a result.

We learned about square root because we studied it, not because we defined it.
You need to keep in mind that, even if you think of mathematical entities as having their own separate existence, when we talk about them we have to start with definitions for our terms and notations. If two people talk about the same thing, but use different definitions, then they are not really talking about the same thing, and will be unable to convince one another of anything. So we have to start with an agreed definition of what we mean by "the square root".

This is, in fact, why there are actually two definitions of "square root": "A square root" of a number is ANY number whose square is that number; while "THE [principal] square root of a number ([MATH]\sqrt{x}[/MATH])" is the non-negative square root. Each is a valid concept; and each is valuable in certain contexts (e.g. solving an equation vs. evaluating an expression). But we have to make it clear which concept we are talking about at any given time.
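
For example, the equation [MATH]x^2 = 4[/MATH] has two square roots, [MATH]2[/MATH] and [MATH]-2[/MATH], while the expression [MATH]\sqrt{4}[/MATH] denotes only the principal one, [MATH]2[/MATH].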

Yes, I know the story of the "invention" or "discovery" of "imaginary" numbers. In effect, what happened there is a broadening of the definition of "number", more than of "square root". It allowed us to talk about square roots of numbers we couldn't before. But it didn't actually change the definition, which remained just what you say next (apart from what the principal root means).

If we say that square root is, exactly, "the value that when squared gives the input" (plus or minus a plus-or-minus), then sure, zero is its own square root. However, from a practical sense, where we work with discrete operations and study their relationships, [b]we encounter situations where square root is meaningful yet not fitting within that definition[/b].

When accounting for complex and higher-order arithmetic, unless I'm terribly mistaken (which might be the case), dividing a number by (one of) its square root(s) gives (one of) its square root(s) for all values except zero, because division by zero is undefined. This notion is analogous to how you can't use division to "undo" a multiplication by zero, which to me is sensible considering square root's relation to division.

[b]And from my perspective, this makes it look as though the square root of zero is undefined.[/b]

I don't understand either statement in bold. Are you claiming that the concept of square root is broader than the definition, or narrower? What situation do you have in mind that doesn't fit the accepted definition? What new definition would you offer? Why would the fact that you can't divide by zero invalidate the square root?
 
Thanks for bearing with me. I understand that I may not be adequately expressing what's in my head, and I'm grateful for your patience.

Are you claiming that the concept of square root is broader than the definition, or narrower? What situation do you have in mind that doesn't fit the accepted definition? What new definition would you offer? Why would the fact that you can't divide by zero invalidate the square root?
Fundamentally, my belief is that the meaning of square root is broader than the isolated case where [MATH]\left(\sqrt{x}\right)^2 = x[/MATH] (omitting [MATH]\pm[/MATH] in this post for simplicity), which is the spirit of the accepted definition of square root.

The counterpoint is [MATH]\frac{x}{\sqrt{x}} = \sqrt{x}[/MATH]. The first case works for zero ([MATH]0^2 = 0[/MATH]), but this second case does not (it would have zero in the denominator). I believe that both of these cases are intrinsic characteristics of square root, and that the second one is not simply something that happens to be true for some subset of square root-able numbers. If it is a characteristic of square root, the fact that it doesn't work for zero (and as far as I know, only zero) makes me think that zero is a special case for square root like it is for division.
 
So you're narrowing the definition, by excluding one value, namely zero. Why do you say you are broadening it???

So what, in your opinion, is the "correct" definition? And how does "correcting" it improve mathematics?

Just so you know, I used the phrase "happens to be" somewhat ironically. It's easy to prove that it is true for all numbers except zero, starting from the definition. Why do you take this (narrow) corollary to be more important than the (broader) definition?

But the important thing is this: We can change definitions, but only because doing so produces interesting or useful new mathematics. All you appear to be doing is to remove some mathematics; no one is going to go along with you. Saying "I believe" does not make it true.
 
It could be observed that if [MATH]ab = c[/MATH], then [MATH]\frac{c}{b} = a[/MATH]. This can lead to [MATH]0(0) = 0 \therefore \frac{0}{0} = 0[/MATH], which is of course incorrect. It is not sufficient to define how many pieces zero splits itself into, as evidenced by contradictions in other operations.

The definition of the square root of zero where [MATH]0^2 = 0 \therefore \sqrt{0} = 0[/MATH] does not allow for related operations to be true, such as [MATH]\frac{0}{\sqrt{0}} = \sqrt{0}[/MATH] or [MATH]\log_{0}{0} = 2[/MATH]. It is not sufficient to define the square root of zero, as evidenced by contradictions in other operations.

Whether these arguments contribute to (or otherwise) the utility of square root in the greater landscape of mathematics ultimately won't affect what square root is or means. We can study it to learn about it, but it's not up to us to decide what we get out of it.

EDIT:
It occurs to me that, if [MATH]\sqrt{0} = 0[/MATH] is false (I'm not sold one way or the other), the assumption may historically never have mattered. The only contradictory equations I can come up with (such as the aforementioned division and logarithm examples) would still fail if given zero without a square root in the mix. Finding examples where zero itself isn't a troublemaker (or where zero is a requirement) could shed some light on this.
 
What you're saying is that if any operation results in zero, you're going to disallow it, because there are some things you can't do with zero. So you should be saying that 1-1 = 0 is wrong, because you can't divide by the result. Frankly, this is nonsense.

As they say, "hard cases make bad law"; likewise, "special cases make bad definitions". Let the definition be what it is; if you get a number you can't do something with, just don't do that!

It sounds like you need to see evidence that we need to be able to take the square root of zero! Is it sufficient to point out that the distance formula tells us that the distance from (a,b) to (a,b) is [MATH]\sqrt{(a-a)^2 + (b-b)^2} = \sqrt{0}[/MATH]? If the answer isn't zero, then we can't find the distance from a point to itself! But clearly it's zero ...
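
If a concrete check helps, here is a minimal Python sketch (the distance helper is my own illustration, not anything standard):

[CODE]
import math

def distance(p, q):
    """Euclidean distance between two points in the plane."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

# The distance from a point to itself is sqrt(0); it had better be 0.
print(distance((3.0, 4.0), (3.0, 4.0)))  # 0.0
print(distance((0.0, 0.0), (3.0, 4.0)))  # 5.0
[/CODE]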
 
Can we state then:

\(\displaystyle 0^{1/2} = 0^{1} \to \dfrac{1}{2} = 1 \to 1 = 2\)

Again I needed that emoji showing "tongue planted in cheek".

I remember from high school that the laws of exponentiation do not apply to 0 or 1.

It is like (-1) * (-1) = 1.

I cannot prove it, but I have learned to take it as truth.
 
Is it sufficient to point out that the distance formula tells us that the distance from (a,b) to (a,b) is [MATH]\sqrt{(a-a)^2 + (b-b)^2} = \sqrt{0}[/MATH]?
It's a far stronger example than the [MATH]0^2 = 0[/MATH] thing, thank you for posting it. I've also considered the [MATH]x \gt 0, 0^x = 0[/MATH] case where [MATH]x = \frac{1}{2}[/MATH], which is equivalent to [MATH]\sqrt{0}[/MATH]. However, I haven't had much luck finding out exactly how that exponentiation is computed.
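
For what it's worth, floating-point arithmetic seems to have picked a side: as far as I can tell, IEEE 754's pow is specified to return +0 for pow(+0, y) whenever y > 0, so [MATH]0^{1/2}[/MATH] evaluates to 0 rather than raising an error. A quick Python illustration:

[CODE]
import math

print(0.0 ** 0.5)          # 0.0
print(math.pow(0.0, 0.5))  # 0.0 (IEEE 754: pow(+0, y) = +0 for y > 0)
print(math.sqrt(0.0))      # 0.0 (IEEE 754: sqrt(+0) = +0)
[/CODE]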

Can we state then:

\(\displaystyle 0^2 = 0 = 0^3 \to 2 = 3\)
Or equivalently: [MATH]2(0) = 3(0)\therefore 2 = 3[/MATH]. There should be a quiz form with a bunch of questions like these.
 
Again I needed that emoji showing "tongue planted in cheek".

I use 😛 or 🤪 depending on the level of zany. But my kids sometimes laugh at my naive emoji choices!


I think younger students would not have the patience (or capacity) to take on board the "full definitions" of many functions and methods at first introduction. This might perhaps be in the context of working with natural numbers only. But when reaching a higher level of mathematics these things should be revisited, giving full definitions and constraints, for the reasons outlined above: communication, and maximising the circumstances in which the student can apply a technique/function.
 
Another example: the other week I could not find a complete set of constraints for when it's OK to use this "power of power" reduction:-

\(\displaystyle \left(a^b\right)^c \to a^{b\cdot c} \)

...I gave up because all the Google results I clicked on were aimed at beginners. So I ended up plugging in numbers to find it holds for both of the following cases:-
  1. a≥0 AND b,c any real
  2. a,b any real AND c integer
You couldn't really confront a beginner with these, especially since case 2 can give complex results. But it is useful to know (I hope I got this right, and complete). BTW I did not consider a, b, or c as complex numbers.
 
...and actually there's an extra constraint: if a = 0, then b ≠ 0 and c ≠ 0.
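
In case anyone wants to repeat the number-plugging, here's roughly what I did as a Python sketch (the check helper name is just mine), using complex arithmetic so that case 2 can produce complex values:

[CODE]
import cmath

def check(a, b, c):
    """Numerically compare (a**b)**c with a**(b*c) using principal-value complex powers."""
    lhs = (complex(a) ** b) ** c
    rhs = complex(a) ** (b * c)
    return cmath.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12)

print(check(2.0, 3.5, -1.2))  # True  -- case 1: a > 0, b and c any real
print(check(-2.0, 1.7, 3))    # True  -- case 2: a, b any real, c an integer
print(check(-2.0, 2.0, 0.5))  # False -- ((-2)^2)^(1/2) = 2, but (-2)^(2*0.5) = -2
[/CODE]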
 
The one thing I would argue with is the statement that \(\displaystyle \frac{0}{0}\) is "undefined". I would say that \(\displaystyle \frac{a}{0}\), for nonzero a, is undefined because writing \(\displaystyle \frac{a}{0}= x\) is equivalent to \(\displaystyle a = 0(x)\), which is not true for any x. But writing \(\displaystyle \frac{0}{0}= x\) is equivalent to \(\displaystyle 0 = 0(x)\), which is true for any x. We cannot assign a specific value, but for a completely different reason! I would, as would many texts, say that \(\displaystyle \frac{0}{0}\) is "undetermined" rather than "undefined".

Another point: \(\displaystyle \lim_{x\to a}\frac{f(x)}{g(x)}\) does not exist if f(x) goes to some non-zero limit while g(x) goes to 0, whereas the limit may exist if f(x) and g(x) both go to 0.
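
For example, \(\displaystyle \lim_{x\to 0}\frac{\sin x}{x} = 1\), even though both \(\displaystyle \sin x\) and \(\displaystyle x\) go to 0.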
 
I would, as would many texts, say that \(\displaystyle \frac{0}{0}\) is "undetermined" rather than "undefined".

...or perhaps even better might be to say "undeterminable"? Using "undetermined" might imply that it has not yet been determined, but possibly could be.
 
The formal term (in my experience) is "indeterminate", though that most properly applies to limits.

And I say that 0/0 is "undefined because it is indeterminate" - that is, there are two different reasons why division by zero can't be defined (no possible answers, or too many possible answers), and this is the latter. In either case, it is not defined.

I have observed too many students hearing "undefined" and thinking that's a special number: 1/0 = undefined, so 1 = 0 * undefined. So I often say "not defined" rather than "undefined".
 