Proving a/c + b/c = a+b/c. At WHAT point does one learn to PROVE this VS have faith

Math is meant to be based on reason, not faith.

Every textbook I ever read makes the statement a/c + b/c = a+b/c for how one adds fractions with a common denominator.

How is this math??!

The books make a statement and, sure, I can use it to solve problems and the answers are right, but at what point in a study of math (i.e. high school, undergrad, post-grad) does one ACTUALLY get a proof for this? Or actually get taught how to prove this and other statements in textbooks which I'm supposed to believe on faith, with no proof? I want to understand math. How is it understanding when you just get told "this is true, use it" by every textbook?

I want to understand math and truly believe it is true, i.e. PROOF--the whole point of math. Is it some kind of joke where students are told "Math is about reason and makes sense and you don't need faith," yet one is never shown how to arrive at the conclusion a/c + b/c = a+b/c, for example, and is just told it is true?

If you ask most high school teachers why you invert and multiply when dividing by a fraction, they cannot tell you (with proof).

At what point in being a math student will a textbook actually hand over the goods? Can someone point me to a place where I can actually learn this? (I'm basically at a pre-calculus level of math.)

In other words, what subject in math gives me the answers I am seeking, i.e. the proof for a/c + b/c = a+b/c? And everything else I've been taught as being "true" in math textbooks from the ages of 13-17. Sure, it gives true answers. But where the heck are the proof and the understanding going to enter my math education, so that I can say "THIS is why (a/b) / (c/d) = (a*d) / (b*c)"?
 
I think I understand what you're getting at, and it was something that frustrated me for a very long time. For one thing, my recollection is that many of my textbooks up until the college level were surprisingly light on proofs. And when they did feature a proof, it usually had a few steps which seemed to make no sense, or the authors would simply admit that "such a proof is beyond the scope of this book." Unfortunately, the fact about math is, there's really no such thing as a full-and-complete proof of anything. At some point, we make basic assumptions that we simply take to be true and use them as building blocks for higher-level math.

In your example a/c + b/c = a+b/c, the equation you've presented is not, in general, true. Only certain values of a, b, and c will satisfy it - namely, those for which a + b = ac + b, i.e. a = 0 or c = 1. However, if I insert some very, very important grouping symbols, a/c + b/c = (a+b)/c, then it becomes an identity: an equation that is true for every value of the variables.

Now, a proof of such a statement would need only rely on the fact that the two fractions being added have the same denominator, so we can just add straight across. However, the core issue you seem to be having is: how do we know that is a true thing and a valid step? Well, this is merely one of the basic rules of adding fractions. It could potentially be abstracted and broken down even further, but there's generally no need. This could easily be one of the aforementioned basic rules (sometimes called axioms) that we just assume to be true, without proof. 1 + 1 = 2 is often treated the same way.
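
To make "add straight across" concrete, here is a small numerical sketch; the only rule it leans on is the distributive law, treating 1/7 as the unit being counted:

\(\displaystyle \dfrac{3}{7} + \dfrac{2}{7} = 3 * \dfrac{1}{7} + 2 * \dfrac{1}{7} = (3 + 2) * \dfrac{1}{7} = \dfrac{5}{7}.\)

Three sevenths plus two sevenths is five sevenths for the same reason that three apples plus two apples is five apples.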

At my university, there's a 200-level math class called "Foundations of Mathematics" in which we revisited basic elementary mathematics and defined addition, subtraction, etc. such that there is a "proof" of 1 + 1 = 2. But even there, it merely shifts the goalposts: instead of completely solving the system, it just gives you new axioms to work with. You can look up set theory and the von Neumann ordinals for more information on these topics. A different set of axioms used to accomplish the same purpose is the Peano Postulates. But it, too, suffers from the same key "flaw" of requiring axioms, although they're different axioms than the ones the set theory approach uses.
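
To give a flavor of what that course does, here is a minimal sketch of 1 + 1 = 2 in that setting, assuming addition is defined recursively by a + 0 = a and a + s(b) = s(a + b), where s is the successor function, and where 1 is defined as s(0) and 2 as s(1):

\(\displaystyle 1 + 1 = 1 + s(0) = s(1 + 0) = s(1) = 2.\)

Each equals sign is justified by one of those definitions; that is the sense in which 1 + 1 = 2 becomes a theorem there, with the definitions and the induction axiom as the new starting points.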

In the end, there are some things in math that just, for all intents and purposes, cannot be proven.
 
First of all, I think you are looking for a proof of the statement:

a/c + b/c = (a + b)/c. Those parentheses are important - otherwise the meaning of your statement changes.

Now let me ask you:

Have you seen a proof of 1/2 + 1/2 = 2/2 = 1,

or of

7/8 + 1/8 = 8/8 = 1?

These are very good questions, of the kind discussed in a course on the Foundations of Mathematics.
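
For the first one, a proof ends up looking something like the following sketch, assuming 1/2 is just notation for the multiplicative inverse 2^-1 and that the distributive law and 1 * x = x are available:

\(\displaystyle \dfrac{1}{2} + \dfrac{1}{2} = 2^{-1} + 2^{-1} = (1 + 1) * 2^{-1} = 2 * 2^{-1} = 1.\)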
 
This is a rather complicated answer.

First, as was said in post # 2, math ultimately relies on a very small number of unproven assertions (axioms, postulates, common notions). An example is the Peano Axioms. From that point on, things are proved.

Second, neither historically nor pedagogically is math developed along that logical basis. It is a virtual certainty that arithmetic developed as a useful empirical science. Enough examples like
3 + 7 = 7 + 3 accumulated that people decided that a + b = b + a was a general rule. Moreover, for the purposes of practical life, it is important that everybody understand arithmetic even if they cannot or will not read Bourbaki. An analogy is a building where you enter on the ground floor even though there is a foundation below that floor.

Third, understanding how and why the foundation is built the way that it is requires a fair amount of sophistication that comes only with mathematical experience gained on the upper floors.

Here is a "proof."

\(\displaystyle \text {Definition 11: } \dfrac{u}{v} \equiv u * v^{-1}.\)

Of course that definition depends on earlier definitions of multiplication and the multiplicative inverse.

\(\displaystyle \text {Theorem 20: } (r * s) + (r * t) = r * (s + t).\)

And of course that theorem must have been proved from the definitions of multiplication and addition and some axioms and earlier theorems. But let's assume that has been done.

\(\displaystyle \text {Theorem 8: } m * n = n * m.\)

And of course that theorem must also have been proved, but let's assume that has been done.

\(\displaystyle d = \dfrac{a}{c} + \dfrac{b}{c} \implies\)

\(\displaystyle d = (a * c^{-1}) + (b * c^{-1}) \implies\)

\(\displaystyle d = (c^{-1} * a) + (c^{-1} * b) \implies\)

\(\displaystyle d = c^{-1} * (a + b) \implies\)

\(\displaystyle d = (a + b) * c^{-1} \implies\)

\(\displaystyle d = \dfrac{a + b}{c}.\)

I suppose that the set of texts that you want is Bourbaki or some more recent equivalent.
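
Incidentally, the same machinery gives the "invert and multiply" rule you asked about. Here is a sketch of it; I am assuming two further facts that would themselves need earlier proofs, namely (x * y)^-1 = y^-1 * x^-1 and (x^-1)^-1 = x, plus the commutativity and associativity of multiplication, with b, c, d all nonzero:

\(\displaystyle \dfrac{a}{b} \div \dfrac{c}{d} = (a * b^{-1}) * (c * d^{-1})^{-1} = (a * b^{-1}) * (d * c^{-1}) = (a * d) * (b * c)^{-1} = \dfrac{a * d}{b * c}.\)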
 
Unfortunately, the fact about math is, there's really no such thing as a full-and-complete proof of anything. At some point, we make basic assumptions that we simply take to be true and use them as building blocks for higher-level math.

This was a very helpful comment, thanks.

Now, a proof of such a statement would need only rely on the fact that the two fractions being added have the same denominator, so we can just add straight across. However, the core issue you seem to be having is: how do we know that is a true thing and a valid step? Well, this is merely one of the basic rules of adding fractions. It could potentially be abstracted and broken down even further, but there's generally no need. This could easily be one of the aforementioned basic rules (sometimes called axioms) that we just assume to be true, without proof. 1 + 1 = 2 is often treated the same way.

I can see in my mind how, if the denominator is common, you just add the numerators and it will be correct. It just makes sense. So from what you're saying, and if you are indeed correct, this is just an axiom to be accepted.

At my university, there's a 200-level math class called "Foundations of Mathematics" in which we revisited basic elementary mathematics and defined addition, subtraction, etc. such that there is a "proof" of 1 + 1 = 2. But even there, it merely shifts the goalposts: instead of completely solving the system, it just gives you new axioms to work with. You can look up set theory and the von Neumann ordinals for more information on these topics. A different set of axioms used to accomplish the same purpose is the Peano Postulates. But it, too, suffers from the same key "flaw" of requiring axioms, although they're different axioms than the ones the set theory approach uses.

In the end, there are some things in math that just, for all intents and purposes, cannot be proven.

Very insightful. So consider negative exponents or negative fractional exponents. Where do they come from? Are they just defined to be consistent with the system of arithmetic? And at what point did they come to be defined as such? Did someone just say they must be like THIS because it's consistent with everything else, and then all mathematicians started using them?

For example, did someone just say: "a^-1=1/a" because it was consistent with other exponents and then THAT just becomes a definition that is accepted?
 
For example, did someone just say: "a^-1=1/a" because it was consistent with other exponents and then THAT just becomes a definition that is accepted?
No. In terms of the kind of math you are talking about, the definition is

\(\displaystyle \text {For any number } p \ne 0,\ \exists \text { a number } p^{-1} \text { such that } p * p^{-1} \equiv 1.\)

The symbol p^{-1} is just the notation for the multiplicative inverse.
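
Where the exponent notation comes in: writing the multiplicative inverse as a^-1 is exactly what keeps the rule a^m * a^n = a^(m+n) working once negative exponents are allowed. A rough sketch, assuming a is nonzero and the convention a^0 = 1:

\(\displaystyle a^{1} * a^{-1} = a^{1 + (-1)} = a^{0} = 1 \implies a^{-1} = \dfrac{1}{a}.\)

So it was not an arbitrary decree; any other choice would break the exponent rules that already hold for positive exponents.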
 
Less complicated answer:

1 dog + 2 dog = 3 dog

\(\displaystyle 1\;dog\;+\;2\;dog\;\ne\;3\;cat\)
 
Almost everybody can explain what 3 * 2 "means".

I like to see their faces when I ask, "So in that light, what does π * √2 mean?"
 

Very, very helpful answer. It's like you read my mind and validated a whole lot of stuff I was thinking. Thank you very much.

I've struggled with the paradox that math is all based on proof and certainty, but as you say it almost certainly originated in inductive or empirical reasoning: people noticed that 2 apples + 3 apples is the same as 3 apples + 2 apples, concluded that a + b = b + a, and math seemingly developed on that "truth." Yet that's not deductive proof, it's inductive reasoning. It's no different from observational science. So THEN... how come mathematicians from 2000 BC to the 1800s were so confident in their math, if it rests on inductive reasoning? To me it didn't (and doesn't?) make sense. Why is the confidence in math justified if this is the case?

However, I think the answer may be that a + b = b + a is just an axiom we accept. And as post #2 said, even if we go back and prove it on a solid foundation, it's still just shifting the goalposts to other axioms we accept. So we accept it, as an axiom, and build math on it. And math seems to be pretty reliable (i.e. man on the moon), so we trust it... we trust axioms that cannot be proved but "appear" to be true... and deduce from there. And for whatever reason (known to God) it just works. Am I right?

If I am right, then that brings me back to my original problem/post. How can I identify which things are axioms and which things can therefore be proved? And that was my original question: is a/b + c/b = (a+c)/b an axiom, or is there a proof for it? And where do I learn these proofs? I can totally see WHY it is true... it's super obvious that the denominator stays the same and you add the numerators. But should I be satisfied with that? If there is a proof then obviously that would be better... and I should demand algebraic proof... then I would really understand why a/b + c/b = (a+c)/b, and have proof (the point of math that I was sold on), rather than just "feeling" with my mind's eye that it is so.
 
Thank you for your kind words. I am happy to have given you something to ponder. If you are interested in such questions, a course on foundations would be right up your alley.

I think you are perhaps going a bit too far. The basic axioms are very few and very plausible, and they are massively confirmed empirically. Talking about faith seems like hyperbole to me: no mathematician says "prorsus credibile est, quia ineptum est" ("it is entirely credible, because it is absurd").

You are correct, I believe, about the 1800s. The Greeks, culminating in Euclid, created the first rigorous mathematics, but since the nineteenth century mathematicians have developed a much higher standard of rigor.

I did not say, nor do I believe, that modern mathematicians view arithmetic as an empirical science. I do believe that its historical origins were empirical, but it is now DEDUCED from the foundations of mathematics. History and epistemology are two different disciplines. Although experimentation may suggest propositions, proof is now required for virtually everything.
 
The whole system of scientific argument is based on axioms and postulates. The basis of plane geometry is Euclid's postulates. Theorems are proven assuming the validity and consistency of those postulates.

Then the whole of Riemannian geometry was developed by renouncing Euclid's fifth postulate.

Then there are experimental facts - and theories arising from them. The special theory of relativity is based on an astounding experimental fact: the measured speed of electromagnetic waves is independent of the speed of the source. There is no why or how for this observation - it is simply what we observe.

Then there are conjectures - statements that have so far been neither proved nor disproved. Fermat's famous equation fell into this category for a long time - till it was proven to be a theorem after a couple of further conjectures were proved. I believe there is a whole host of famous conjectures waiting to be proven true - Goldbach's conjecture comes to mind.

Mathematical operations are assigned certain properties - associative, commutative, distributive, etc. (by the way, I am stepping out of my knowledge boundary here). We have assigned the commutative property to the operation of addition in the number field. Hence a + b = b + a. If we do not want to assign this property to some operation (like matrix multiplication), we cannot call that operation addition in the number field.

If we question that operation, we can make up a new operation, or a new field, or new elements.
 
Subhotosh


It has been a very long time, but my recollection is that the basic rules of arithmetic such as a + b = b + a have been proved using, for example, the Peano Postulates. My memory may be wrong, but if it is correct, then a + b = b + a is not something assigned (e.g. an axiom or postulate) but a theorem. This is all memory that is now almost 50 years old and so is shaky, but my recollection of the definition of addition was something like

\(\displaystyle m + 1 = s(m), \text{ where } s(m) \text{ is the successor function.}\)

Of course we could develop arithmetic on an axiomatic basis, where things like a + b = b + a were axioms, but must we do so?
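
For what it is worth, here is how that definition is usually completed (a sketch from the same shaky memory, so treat it as such). Addition on the natural numbers is defined by two recursion equations:

\(\displaystyle m + 0 = m, \qquad m + s(n) = s(m + n).\)

From those, lemmas such as 0 + n = n and s(m) + n = s(m + n) are proved by induction on n, and then m + n = n + m follows by one more induction. So in the Peano setting commutativity is a theorem, not an axiom; it only reappears as an axiom when one takes the more abstract, axiomatic route instead.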
 
I wanted to say that there are some assigned properties of operations - like commutative properties.

The commutative property was assigned to "addition" but NOT to "subtraction" (i.e. a - b ≠ b - a).

Like you, I took these classes ~50 years ago - and that too as an engineering student. So my recollection of these "fundamentals" may be fundamentally wrong.
 
There are Internet articles of varying degrees of sophistication on Peano Arithmetic. I looked at three.

This seems to be the least sophisticated:

http://planetmath.org/peanoarithmetic

The one above definitely says that the commutativity of addition of natural numbers can be proved from the Peano Postulates.

Another article that I looked at sets finding the proof of the commutativity of addition as a problem for the student.

I am sort of proud of myself. I actually remembered half the definition of addition after 50-odd years.

Getting back to the OP, the rules of arithmetic can indeed be proven, but behind those proofs there are unproved axioms.
 
But a + b = b + a is not restricted to natural numbers.
As I read the article, Peano arithmetic proves the commutative property of addition for natural numbers only. It does not cover fractions, vectors, matrices ....
So - again, I am not a bona fide mathematician - the Peano postulates support the commutative property for the natural numbers. But do they prove it in general?
 