Final 0.999... question I promise

But how can you assume 1/3 = 0.333... without first assuming that 0.999... is equal to 1? This argument is invalid because, in order to prove that 0.999... is equal to 1, you must prove 0.333... is equal to 1/3, but in order to do that you must prove that 0.999... is equal to 1. In other words, 1/3 of 0.999... is 0.333..., but do we know for sure that 1/3 of 1 is equal to 0.333...? No, we don't, not unless we know for sure that 0.999... is equal to 1.
That would be a perfect counter-argument if in fact I derived that 1/3 = 0.333... by dividing 0.999... by 3. But I did not make that circular argument. As Subhotosh Khan has said, 1/3 = 0.333... comes by dividing 3 into 1 and is not derived by assuming 1 = 0.999... Try the division yourself, just making sure to continue it forever (infinity again).
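If it helps, here is a small sketch (my own illustration in Python, not anything posted above, and the helper name is mine) of that long division carried out step by step: the remainder is 1 at every step, so the quotient digit is 3 at every step, forever.

```python
# A sketch of dividing 3 into 1 by ordinary long division.
# Each step multiplies the remainder by 10 and takes the next quotient digit;
# with 1/3 the remainder is always 1, so the digit is always 3.
def decimal_digits(numerator, denominator, n_digits):
    digits = []
    remainder = numerator % denominator
    for _ in range(n_digits):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(decimal_digits(1, 3, 12))   # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```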
 
Ha, ha ... good one, mackdaddy. If I understand, this kind of example is what I would use to attack my repeating-integers notation. Dealing with infinity is tricky; remember what happened to Georg Cantor.

Setting that aside: if you are thinking of limits inspired by these examples, then both limits equal 1.
I must make assumptions about your notation, so let's use the sequence S again to communicate.

For 1.0...1, do you mean the limit of Sn as n -> +infinity?
S0 = 1.1
S1 = 1.01
S2 = 1.001
S3 = 1.0001
S4 = 1.00001
The Limit here is 1

For 1.0...2, do you mean the limit of Tn as n -> +infinity?
T0 = 1.2
T1 = 1.02
T2 = 1.002
T3 = 1.0002
T4 = 1.00002
The Limit here is 1

Sorry if I have misunderstood.
However, if by the notation 1.0...1 you mean to be representing the LIMIT of the implied SEQUENCE, then
1 = 1.0...1 and
1 = 1.0...2
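In symbols (this is just my reading of the notation, written out explicitly), the two sequences are \(\displaystyle S_n = 1 + 10^{-(n+1)}\) and \(\displaystyle T_n = 1 + 2 \cdot 10^{-(n+1)}\), and since \(\displaystyle 10^{-(n+1)} \to 0\) as \(\displaystyle n \to +\infty\), both limits are
\(\displaystyle \lim_{n \to \infty} S_n = \lim_{n \to \infty} T_n = 1.\)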

No, no, I understand and appreciate this, but I was leaning toward the actual and physical number 1.0...1 being unequal to 1, not the limit of the sequence. But I understand that the limit of T subscript n is 1, and I totally agree with that.
 
That would be a perfect counter-argument if in fact I derived that 1/3 = 0.333... by dividing 0.999... by 3. But I did not make that circular argument. As Subhotosh Khan has said, 1/3 = 0.333... comes by dividing 3 into 1 and is not derived by assuming 1 = 0.999... Try the division yourself, just making sure to continue it forever (infinity again).

Ok, I see. What about 0.333...34?
0.333...34 * 3 = 1.000...02, but try finding the difference between 1 and 1.000...02.
 
Ok, I see. What about 0.333...34?
0.333...34 * 3 = 1.000...02, but try finding the difference between 1 and 1.000...02.
0.333...34 makes no sense. You are assuming an end to the infinite, but the infinite is defined to have no end. So long as you keep thinking in finite terms about infinitely repeating decimal notation, you will keep making logical errors. You can define 0.334..., but that and 0.333... definitely have a finite, non-zero difference: one has an infinite number of 4s at the end and the other has an infinite number of 3s, so the difference equals 0.00111..., which has an infinite number of 1s and is clearly > 0.001 > 0.
 
The way I think about it is there are infinite threes and a 4 after the infinitieth three. The 4 does not end the number but simply is the last digit in an infinitely large number. Also, 0.00111... is not
a finite, non-zero difference
 
The way I think about it is there are infinite threes and a 4 after the infinitieth three. The 4 does not end the number but simply is the last digit in an infinitely large number. Also, 0.00111... is not
There is no last to infinity, and what does the "last" mean if it is not the end? There is your error: you keep thinking about a finite number of digits. And, by the way \(\displaystyle 0.00111.... = \dfrac{1}{900} > 0.\) Do the division.
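For the record, that value comes straight from the geometric series (a standard computation, nothing beyond what is already claimed above):
\(\displaystyle 0.00111... = \sum_{k=3}^{\infty} \dfrac{1}{10^k} = \dfrac{10^{-3}}{1 - \dfrac{1}{10}} = \dfrac{1}{900} > 0.\)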
 
No I bolded what your error was. 0.00111... isn't a FINITE number. And Bob Brown MSEE understands what I mean when I use such decimals as 0.000...01. Look at previous posts. Also please regard my post on why 10^-infinity isn't equal to zero.
 
No I bolded what your error was. 0.00111... isn't a FINITE number. And Bob Brown MSEE understands what I mean when I use such decimals as 0.000...01. Look at previous posts. Also please regard my post on why 10^-infinity isn't equal to zero.
Mackdaddy. Have it your own way. You are not paying attention. You have not paid attention, for example, to Halls' post that an infinite numeral can represent a finite number. You have not demonstrated that 10^-infinity is greater than zero; you have asserted it as a given without proof, whereas I gave you a reasoned argument why it is equal to zero. I have been exceptionally patient, but enough is enough.
 
Mackdaddy. Have it your own way. You are not paying attention. You have not paid attention, for example, to Halls' post that an infinite numeral can represent a finite number. You have not demonstrated that 10^-infinity is greater than zero; you have asserted it as a given without proof, whereas I gave you a reasoned argument why it is equal to zero. I have been exceptionally patient, but enough is enough.

Ok, alright, sorry to waste your time, and sorry for wasting my own time; it's like a missionary that doesn't convert anyone. But I still feel I have a strong point, at least from a simple arithmetic 9th grade point of view. And I gave a proof as to why 10^-infinity is not equal to 0. But you have not acknowledged that.
 
No I bolded what your error was. 0.00111... isn't a FINITE number. And Bob Brown MSEE understands what I mean when I use such decimals as 0.000...01. Look at previous posts. Also please regard my post on why 10^-infinity isn't equal to zero.

I do not fault you for thinking this way. I, once upon a time, made a similar bad judgement call about the number pi (I was a beginning computer science major then, without the experience and appreciation for the rigor required for doing mathematics). I both assumed that this number was not a fixed, genuine quantity and misused the term "finite" as you are doing.

3.14159265358979...

How can one plot this point on a number line? "We can't," I argued, thus the "number" couldn't be used for jack. I thought this way because I grasped onto my "logic" (i.e. intuition) as if it were truth, but if mathematics is anything, it certainly isn't always intuitive. Hence some have issue with "why" 0.999... = 1. The answer is... because it just is! No fancy arithmetic needed here.

Now of course I know this number pi is finite. Every real number is finite; it is just a number. Secondly, every real number is actually an infinite sequence of integers (yes, the sequence itself, not the limit of that sequence; I will let you google the information on that). Why is infinite the wrong word here? If I gave you a circle of radius 1, you would agree the circumference of this circle is finite, yes? You would agree that the radius is also finite, yes? Well, pi is just the circumference over two times the radius, for any circle. Its decimal digits sure do go on forever if we were to count them, and even "randomly" so. But in the real world none of this makes sense. Circles can't actually be drawn, and neither can a line of length 1 inch (did that just blow your mind?).

Also, when trying to be rigorous, we never "plug in" infinity. It is not a real number and cannot be treated as one (just like we can't divide by zero). Calculus is very careful about using that little symbol, and in the case of a function, say 10^(-x), there can only be one obvious meaning, and that is the limit. Yes, I have been guilty of saying oddball things like 1/0^2 = +infinity, but it really is just shorthand for a complicated analysis of the given situation... in this case, the limit as x approaches 0 of the function 1/x^2. Perhaps in 9th grade you have not had the pleasure of seeing horizontal (or oblique, parabolic, etc.) asymptotes, but that is the meaning of the limit as x tends to infinity for a function f(x). The function 10^(-x) approaches 0 as x gets very large (y = 0 is a horizontal asymptote for 10^(-x)). Depending on the context, 10^(-infinity) either "does not make sense" or it equals 0, and I can see no other arguments.
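A quick numeric illustration of that horizontal asymptote (my own sketch in Python, not part of the argument above): 10^(-x) just keeps shrinking toward 0 as x grows.

```python
# 10**(-x) shrinks toward 0 as x grows; "10^(-infinity) = 0" is shorthand
# for this limit, not the result of plugging infinity in as a number.
for x in [1, 5, 10, 50, 100]:
    print(x, 10.0 ** (-x))
# In double precision 10.0**(-400) even underflows to exactly 0.0, but the
# real content is the limit: for any eps > 0, 10**(-x) < eps once x is large.
```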

If you want to treat infinity as a number, you have to be extra careful and even change some very basic notions. One such system is the Extended Real Numbers, which makes additional assumptions (in the version I worked with, for example, infinity times 0 is 0).
 
Ok, alright, sorry to waste your time, and sorry for wasting my own time; it's like a missionary that doesn't convert anyone. But I still feel I have a strong point, at least from a simple arithmetic 9th grade point of view. And I gave a proof as to why 10^-infinity is not equal to 0. But you have not acknowledged that.
  1. There is very little in mathematics that is about conversion. Prove what you are saying or don't.
  2. You do not have a strong point. What you have is a poor understanding of the concept of a mathematical proof. Proof is proof. It does not leave an open question. It does not invite additional views. Once you have proof to the contrary, abandon your point - whether you think it strong or not.
  3. You did stumble upon an ancient paradox. This is very good. However, this has nothing to do with 1 = 0.999… – not a thing. There is no value between 1 and 0.999… This makes them equal. It’s not a limit. It’s not a sequence. It’s equality.
  4. I do not believe that there is yet much feeling of wasting time. However, as you continue to resist proof, you will start to look like a troll.
My Views. I welcome others'.
 
(Edit: Halls of Ivy expressed this thought much better than I did.)

HallsofIvy doesn't need to say much -- he cuts to the core of an issue quickly.
I keep saying that the root of the confusion is that Repeated Decimal Expressions are expressions of numbers, not numbers. When these expressions are simply numerals, then the distinctions are cosmetic: Roman numerals, digits expressed with a radix other than 10, etc. If we assume that repeating decimals are just an alternative expression of the rationals, then I agree.

However, the repeating decimals are a theoretical partitioning of the much more controversial expression of real numbers. These kinds of expressions may contain distinctions worth preserving. We lose these distinctions if we just pair them up with the Rationals. For repeating decimals to stand alone as a field, we need to demonstrate the properties of a field, like closure under + and *, etc. We need to make them well-defined, using an equality rule or making some forms illegal. That's NOT too easy for repeating decimals (although every grade school student is given applied math tools that suggest these definitions).

It is actually easier to do this task for the repeating integer numbers that I suggested. The task is easier still if the digits are expressed in a base that is prime (rather than base 10). Repeating integers have the nice distinction that a minus sign is unnecessary. They lend themselves to + and * operations that are more like those learned in school. I don't know if they have distinctions that would be interesting to computer science, but perhaps (base 2 is prime).
I found the distinction that 2...000001 = 1 has real meaning and informs some of the issues expressed by the OP.

I would still like someone to comment if they agree (or disagree) that ...66667 = 1/3 :wink:
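For whoever wants to check that claim mechanically, here is a small sketch (Python, my own illustration, not part of any post above) of the sense in which ...66667 behaves like 1/3: every finite truncation 7, 67, 667, 6667, ... times 3 agrees with 1 in all of its trailing digits.

```python
# The k-digit truncations of ...66667 satisfy 3*x = 200...001,
# i.e. 3*x agrees with 1 in its last k digits (3*x == 1 mod 10^k).
for k in range(1, 8):
    x = (2 * 10 ** k + 1) // 3      # 7, 67, 667, 6667, ...
    print(k, x, 3 * x)
    assert (3 * x) % 10 ** k == 1   # trailing k digits match 1
```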
 
HallsofIvy doesn't need to say much -- he cuts to the core of an issue quickly.
I keep saying that the root of the confusion is that Repeated Decimal Expressions are expressions of numbers, not numbers. When these expressions are simply numerals, then the distinctions are cosmetic: Roman numerals, digits expressed with a radix other than 10, etc. If we assume that repeating decimals are just an alternative expression of the rationals, then I agree.

However, the repeating decimals are a theoretical partitioning of the much more controversial expression of real numbers. These kinds of expressions may contain distinctions worth preserving. We lose these distinctions if we just pair them up with the Rationals. For repeating decimals to stand alone as a field, we need to demonstrate the properties of a field, like closure under + and *, etc. We need to make them well-defined, using an equality rule or making some forms illegal. That's NOT too easy for repeating decimals (although every grade school student is given applied math tools that suggest these definitions).

It is actually easier to do this task for the repeating integer numbers that I suggested. The task is easier still if the digits are expressed in a base that is prime (rather than base 10). Repeating integers have the nice distinction that a minus sign is unnecessary. They lend themselves to + and * operations that are more like those learned in school. I don't know if they have distinctions that would be interesting to computer science, but perhaps (base 2 is prime).
I found the distinction that 2...000001 = 1 has real meaning and informs some of the issues expressed by the OP.

I would still like someone to comment if they agree (or disagree) that ...66667 = 1/3 :wink:
I respectfully disagree, and our disagreement is confusing a kid. Infinitely repeating decimals imply infinity, which is a strange beast of thought with no analogue in the physical universe. One approach would be to eliminate infinity from our toolbox, in which case the conclusion is that some rational and all irrational numbers cannot be expressed exactly in decimal form. Infinitely repeating decimals would not exist. Another approach is to follow Cantor's development of the transfinite numbers and the 19th century's development of the real number system. Probably there are other approaches that are logically rigorous. But so long as we are going to talk about infinitely repeating decimals, the expression 0.666...667 is meaningless because there is no end to the infinite by definition. Of course, you can develop an alternative definition of infinity and work out the consequences of that alternative, but the question that was originally posed was a question within the confines of standard analysis, and in those terms, we have a perfect proof by contradiction:
\(\displaystyle 1 \ne 0.999...\implies \dfrac{1}{3} \ne \dfrac{0.999...}{3} = 0.333...\)

But \(\displaystyle \dfrac{1}{3} = 0.333...\) according to the ordinary rules of turning a fraction into a decimal.

There is a contradiction, so the premise is false.

Thus \(\displaystyle 1 = 0.999...\)
Some system of non-standard analysis might eliminate infinitely repeating decimals or alter their meaning so that in that system 0.999... is not equal to 1 (perhaps because they are separated by an infinite number of super-duper-hyper reals). The question, however, related to a statement that arises in standard analysis, and according to the rules of that system, 0.999... does equal 1. We got a question about football and talked about double plays.
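For completeness, the same conclusion also falls out of the standard geometric-series evaluation (again, plain standard analysis, nothing new):
\(\displaystyle 0.999... = \sum_{k=1}^{\infty} \dfrac{9}{10^k} = 9 \cdot \dfrac{1/10}{1 - \dfrac{1}{10}} = 1.\)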
 
I respectfully disagree, and our disagreement is confusing a kid.
(NOTE1: Calling Mack "a kid" is not as respectful as we can be.) Infinitely repeating decimals imply infinity, which is a strange beast of thought with no analogue in the physical universe. One approach would be to eliminate infinity from our toolbox, in which case the conclusion is that some rational and all irrational numbers cannot be expressed exactly in decimal form. Infinitely repeating decimals would not exist. Another approach is to follow Cantor's development of the transfinite numbers and the 19th century's development of the real number system. Probably there are other approaches that are logically rigorous. But so long as we are going to talk about infinitely repeating decimals, the expression 0.666...667 (NOTE 2: This is not in the quote to which you are responding.) is meaningless because there is no end to the infinite by definition. Of course, you can develop an alternative definition of infinity and work out the consequences of that alternative, but the question that was originally posed was a question within the confines of standard analysis, and in those terms, we have a perfect proof by contradiction:
\(\displaystyle 1 \ne 0.999...\implies \dfrac{1}{3} \ne \dfrac{0.999...}{3} = 0.333...\)

But \(\displaystyle \dfrac{1}{3} = 0.333...\) according to the ordinary rules of turning a fraction into a decimal.

There is a contradiction, so the premise is false.

Thus \(\displaystyle 1 = 0.999...\)
(I agree, but none of this is in the Quote to which you are responding.)
Some system of non-standard analysis might eliminate infinitely repeating decimals or alter their meaning so that in that system 0.999... is not equal to 1 (perhaps because they are separated by an infinite number of super-duper-hyper reals). (NOTE 3: I agree but none of this is in the Quote to which you are responding.) The question, however, related to a statement that arises in standard analysis, and according to the rules of that system, 0.999... does equal 1. We got a question about football and talked about double plays. (Note 4: I agree.)
Note 1: Mack may be less confused than you realize.
Note 2: The quote is 2...0001 = 1, not 0.666...667.
Note 3: The expressions ARE different; IF they both DO represent a rational number, then that number is 1/1 in both cases (I agree).
Note 4: If we misquote or refuse to tie comments back to OP issues, then I agree.

I am not sure why you quoted me.
I am not sure why you believe that your points are directed at the post you quoted.
The only thing that you say that might relate is Note 2: I will address that one. If you had taken the suggestion in my post, you would have seen in what way 2...0001 = 1 has meaning and why I do not assume that expressions like 0.666...667 are obviously silly.
We cannot ignore or dismiss an OP observation just because it is novel.

Here is the connection:
I said, "I would still like someone to comment if they agree (or disagree) that ...66667 = 1/3"
...66667 * 3 = 2...0001
 
Consider \(\displaystyle \overline{6}7\) <---- This isn't a finite number.
Before you dismiss this too quickly, let's try one of the non-limit-type arguments.

...6667 = n <---- \(\displaystyle \infty = n.\) It's not logical to set ...6667 = n, because n isn't a finite number.
...6670 = 10n <---- \(\displaystyle 10 \cdot \infty = \infty,\) so the subtraction would be of the form \(\displaystyle \infty - \infty.\)
------------------
...0003 = 9n --> n = 3/9 = 1/3 <---- Referring to the left: \(\displaystyle 10n - n = 9n \ge 9,\) not 3, for integers \(\displaystyle n \ge 1.\) Therefore, n = 3/9 cannot be implied from "...0003 = 9n."
If you were looking at some different situation, such as a finite number, specifically \(\displaystyle \ \ 0.\overline{6}7, \ \, \) then one might state: \(\displaystyle \ \ Let \ \ 0.666...6667 = n. \ \ \ Then \ \ 6.666...66670 = 10n, \ \ and \ \ so \ \ 6.000...0003 = 9n \ \implies \ \dfrac{6.000...0003}{9} = \dfrac{9n}{9} \ \implies \ 0.666...6667 = n\)
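That finite version can be checked exactly; here is a small sketch (Python with exact fractions, my own illustration rather than the notation above) confirming that for any finite number of 6s the manipulation comes back to n with no appeal to infinity:

```python
from fractions import Fraction

# n = 0.66...67 with k digits after the point; then 9n = 6.00...03,
# and dividing by 9 returns n exactly.
for k in range(2, 7):
    n = Fraction(2 * 10 ** k + 1, 3 * 10 ** k)           # 0.67, 0.667, ...
    nine_n = 9 * n
    assert nine_n == Fraction(6 * 10 ** k + 3, 10 ** k)  # 6.03, 6.003, ...
    assert nine_n / 9 == n
    print(k, float(n), float(nine_n))
```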
 
Note 1: Mack may be less confused than you realize.
Note 2: The quote is 2...0001 = 1, not 0.666...667.
Note 3: The expressions ARE different; IF they both DO represent a rational number, then that number is 1/1 in both cases (I agree).
Note 4: If we misquote or refuse to tie comments back to OP issues, then I agree.

I am not sure why you quoted me.
I am not sure why you believe that your points are directed at the post you quoted.
The only thing that you say that might relate is Note 2: I will address that one. If you had taken the suggestion in my post, you would have seen in what way 2...0001 = 1 has meaning and why I do not assume that expressions like 0.666...667 are obviously silly.
We cannot ignore or dismiss an OP observation just because it is novel.

Here is the connection:
I said, "I would still like someone to comment if they agree (or disagree) that ...66667 = 1/3"
...66667 * 3 = 2...0001
I am not being disrespectful to anyone. Mack himself has pointed out that he is in 9th grade, which means he is about 14, an age that is certainly not adult. Presumably, he has not yet studied geometry, where students are introduced to formal proofs. He has not studied calculus, where students are introduced to a little bit of standard analysis. He presumably has not studied abstract algebra, where students are introduced to a formal presentation of the real number system. Given where these threads have wandered, it would be surprising if he were not confused. He certainly seems to be confused because he claims not to understand the relatively simple reasoning that leads within standard analysis to 0.999... = 1.000... If he is not confused and is simply “messing with me,” then he has succeeded.
I am saying that ...66667 is meaningless within the standard definitions of infinity and infinitely repeating decimals because there is no last digit in terms of those definitions. I have no idea what an infinitely repeating decimal would be under an alternative definition of infinity. For all I know, it may follow from such a definition that ...66667 is well defined and exactly equals 1/3. But under the definitions that are generally accepted ...66667 is an approximation to 2/3.
I certainly did not intend to misquote anyone, nor do I think I have. You asked someone to comment on ...66667 = 1/3:
I would still like someone to comment if they agree (or disagree) that ...66667 = 1/3 :wink:
I disagree with that statement (at least in terms of the standard conventions of mathematics). Nor do I think it helps Mack understand the logic that leads to 1.000... = 0.999...
 
JeffM, my post just above pointed out a difference between \(\displaystyle \overline{6}7\) and \(\displaystyle .\overline{6}7\) (or \(\displaystyle 0.\overline{6}7,\) if you will). The expression ...6667 (given by Bob Brown) tells me it is an infinite number, not an infinite decimal (an infinite number of 6 digits, followed by the single digit 7). Otherwise, if someone were meaning to indicate a decimal number, that person could not do it with the expression "...6667," because that would leave one decimal point and only two of the three dots needed for an ellipsis. Anyway, to have clarity and meaning for a decimal number, anyone should have typed something along the lines of ".6...6667".
 
If you were looking at some different situation, such as a finite number, specifically \(\displaystyle \ \ 0.\overline{6}7, \ \, \) then one might state: \(\displaystyle \ \ Let \ \ 0.666...6667 = n. \ \ \ Then \ \ 6.666...66670 = 10n, \ \ and \ \ so \ \ 6.000...0003 = 9n \ \implies \ \dfrac{6.000...0003}{9} = \dfrac{9n}{9} \ \implies \ 0.666...6667 = n\)


This is excellent and I agree.

However, the notes in bold (in the rest of your post #35 above) are a tautology.
Your notes in the quote do not address my assertion at all.
I claim that a consistent definition of finite repeating integer digit numbers in this form may be possible.
All you did was claim that \(\displaystyle \overline{6}7\) is not 1/3 and is in fact infinite.
Then you use the assumption that it is not 1/3 to prove that it is not equal to 1/3.

No proof was offered that these symbols cannot be used to define a field.
No attempt was offered to define + and * with a resulting demonstration of contradiction.
I claim that grade school digit manipulation is sufficient, and easier than with repeating decimals.
If you want to show an inconsistency in the arithmetic, you cannot start with an assumption that I don't make.

A confusion here is that this is NOT an analysis issue (my challenge that \(\displaystyle \overline{6}7\) = 1/3).
This is a number theory question.
Base 10 repeating integers may not form a field. I don't know. It would be helpful to me to see an argument that addresses it.
10 is not prime. Using a base that is prime gives the p-adic numbers, which are accepted as an internally consistent field and are well studied.
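To make the base-10 obstruction concrete, here is a small sketch (Python, my own illustration, not part of the post above) of the classical reason base-10 "repeating integers" in the 10-adic style cannot form a field: because 10 = 2 * 5, they contain zero divisors, built from the idempotent that the powers 5, 5^2, 5^4, 5^8, ... settle down to.

```python
# Build e = ...2890625, the 10-adic idempotent that repeated squaring of 5
# approaches, one block of trailing digits at a time.
def idempotent_mod(k):
    """Return e mod 10^k, where e is the limit of repeatedly squaring 5."""
    m = 10 ** k
    e = 5
    for _ in range(k):           # k squarings are more than enough to stabilize
        e = (e * e) % m
    return e

for k in range(1, 8):
    e = idempotent_mod(k)
    f = (1 - e) % 10 ** k        # the complementary number ...7109376
    assert (e * e - e) % 10 ** k == 0   # e^2 ends in the same k digits as e
    assert (e * f) % 10 ** k == 0       # e * (1 - e) ends in at least k zeros
    print(k, e, f)
# e and f are both nonzero, yet their product ends in arbitrarily many zeros,
# i.e. e * f = 0 ten-adically: zero divisors, so no field. With a prime base
# this cannot happen, which is one reason the p-adics do form a field.
```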
 
JeffM, my post just above pointed out a difference between \(\displaystyle \overline{6}7\) and \(\displaystyle .\overline{6}7\) (or \(\displaystyle 0.\overline{6}7,\) if you will). The expression ...6667 (given by Bob Brown) tells me it is an infinite number, not an infinite decimal (an infinite number of 6 digits, followed by the single digit 7). Otherwise, if someone were meaning to indicate a decimal number, that person could not do it with the expression "...6667," because that would leave one decimal point and only two of the three dots needed for an ellipsis. Anyway, to have clarity and meaning for a decimal number, anyone should have typed something along the lines of ".6...6667".
With all due respect to both you and Bob Brown, I do not agree that 67 or 667 or 6667 = 1/3. In fact, 6...7 > 6667 > 667 > 67 > 1 > 2/3 > 1/3. Moreover, if 0.6...66667 is meaningful, then 0.6...66667 - 0.6... = 0.0...00001 > 0 so 0.6...66667 > 0.6... = 2/3 > 1/3. Furthermore, if you look at post #33 you will see that I took your suggestion about clarifying what I thought to be Bob's intent by posting 0.666...66667 in lieu of ...66667. This may be what Bob meant when he accused me of misquoting him. All of this, however, is minor compared to my primary point. I do not think under the generally accepted definition of infinity that 0.6...66667 or ...66667 is meaningful if the 7 is supposed to be the digit following the last of an infinite number of 6's because there is no last 6.
 
(I wish I were able to post in separate lines.)
\(\displaystyle n = 67, \ \ 10n = 670, \ \ 10n - n = 603, \ \ 9n = 603, \ \ 9n/9 = 603/9, \ \ n = 67.\)
\(\displaystyle n = 667, \ \ 10n = 6670, \ \ 10n - n = 6003, \ \ 9n = 6003, \ \ 9n/9 = 6003/9, \ \ n = 667.\) And so on.
I am meaning your "...0003" expression can't equal 3, because as the number of 6-digits increases without bound, the difference, 10n - n, is never equal to 3. Then it is false to equate ...0003 to 3, and so \(\displaystyle n \ne 3/9.\)
> > > All of this, however, is minor compared to my primary point. I do not think under the generally accepted definition of infinity that 0.6...66667 or ...66667 is meaningful if the 7 is supposed to be the digit following the last of an infinite number of 6's because there is no last 6. < < <
Yes to this part of your quote, JeffM.
Bob Brown MSEE said:
All you did was claim that \(\displaystyle \overline{6}7\) is not 1/3 and is in fact infinite.
Then you use the assumption that it is not 1/3 to prove that it is not equal to 1/3.
No, I didn't "claim" it. I showed/explained that it is meaningless to refer to it by "n," because n must be a finite number. At this stage, I will leave this thread (until further notice), because you deny the reality of what I typed and have decided to call what I posted something else. I am attributing "troll status" to your involvement in this thread.
 