How to prove this? If f(x) = c·x^n, then f'(x) = c·n·x^(n-1)

miregal

And also, what is this equation called?
 

If \(\large f(x)=x^n\text{ then }f'(x)=n\cdot x^{n-1}\) is often called the basic power rule.
To stabatpatriae: The fact that you asked makes one wonder whether you are taking a class in calculus.
Do you know the limit definition of the derivative? \(\mathop {\lim }\limits_{h \to 0} \dfrac{{f(x + h) - f(x)}}{h} = f'(x)\), if it exists?
Please tell us about your situation.
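As a numerical illustration (not a proof), the limit definition can be explored by computing the difference quotient for shrinking values of \(h\). The sketch below, in Python, uses \(f(x)=x^3\) as a sample function; the function name `difference_quotient` is just an illustrative choice.

```python
# Numerical sketch of the limit definition of the derivative:
# for f(x) = x^3, the quotient (f(x+h) - f(x)) / h should approach
# f'(x) = 3x^2 as h shrinks toward 0.

def difference_quotient(f, x, h):
    """Forward difference quotient of f at x with step h."""
    return (f(x + h) - f(x)) / h

f = lambda t: t ** 3
x = 2.0
for h in (1e-1, 1e-3, 1e-5):
    # each value printed gets closer to 3 * 2^2 = 12
    print(h, difference_quotient(f, x, h))
```

This only suggests the limit numerically; the proofs later in the thread make it rigorous.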
 
I know the limit definition. I'm a 9th grader, but I'll try to understand.
 
First, it is not an equation. It is what logicians would call a conditional proposition. Translated part way out of mathematical notation into English, it means

[MATH]\text { If } f(x) = x^n \text {, then the first derivative of } f(x) \text { equals } nx^{(n - 1)}.[/MATH]
That proposition is an important theorem in differential calculus, used so often as a formula that it is simply called the power rule.

Derivatives of a function are themselves functions that provide useful information about the original function. They are the topic studied in differential calculus.
 
The meaning of the first derivative is this. Certain functions change smoothly as their argument changes. Such functions are called differentiable. A very simple such function is f(x) = x.

[MATH]f(x) = x = x^1 \implies f’(x) = 1 \cdot x^{(1-1)} = x^0 = 1.[/MATH]
Now if you go back to first year algebra, the slope of y = x is 1. The first derivative is, loosely speaking, the slope of a function.

Sketch a graph of f(x) = x^2. It’s a parabola. Let’s calculate the first derivative.

[MATH]f(x) = x^2 \implies f’(x) = 2 \cdot x^{(2-1)} = 2x^1 = 2x.[/MATH]
If x is negative, then 2x is negative. Also, if x is negative, the parabola is trending down; its slope is negative.

If x is positive, then 2x is positive. Also if x is positive, the parabola is trending up; its slope is positive.

Where is the derivative zero? At x = 0, which happens to be where the parabola has its least value.

This exemplifies this general theorem:

[MATH]a < b < c \text { and } f(x) \text { is differentiable in } (a, \ c) \text { and has a local minimum or maximum at } b \implies f’(b) = 0.[/MATH]
That gives us a tool to find where functions have maximum and minimum value.
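The sign argument above can be spot-checked with a few lines of Python. This is only an illustration of the slope behavior of f(x) = x^2; the helper name `fprime` is an arbitrary choice.

```python
# Sign check for f(x) = x^2: by the power rule, f'(x) = 2x is
# negative left of the vertex, zero at x = 0 (the minimum), and
# positive to the right -- matching how the parabola trends.

def fprime(x):
    return 2 * x  # power rule applied to x^2

assert fprime(-3) < 0   # parabola trending down for negative x
assert fprime(0) == 0   # vertex: the least value of x^2
assert fprime(5) > 0    # parabola trending up for positive x
```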
 
To understand this, we need to know that for a positive integer \(n\)
\((x+h)^n=\sum\limits_{k = 0}^n {\dbinom{n}{k}{x^{n - k}}{h^k}} \)
thus \((x+h)^n-x^n=\sum\limits_{k = 0}^n {\dbinom{n}{k}{x^{n - k}}{h^k}}-x^n=\sum\limits_{k = 1}^n {\dbinom{n}{k}{x^{n - k}}{h^k}} \)
Note that each term of the last sum contains a factor of \(h\), so after dividing by \(h\) and taking the limit as \(h\to 0\), all that is left is the \(k=1\) term, \(n\cdot x^{n-1}\)
I hope you see that calculus is rather easy, but you need the algebra to get there.
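The binomial-expansion step can be checked with concrete numbers. The sketch below (assuming Python 3.8+ for `math.comb`) compares \(((x+h)^n - x^n)/h\) against the sum \(\sum_{k=1}^{n}\binom{n}{k}x^{n-k}h^{k-1}\), whose \(k=1\) term is the \(n\,x^{n-1}\) that survives the limit.

```python
# Concrete check of the binomial-expansion argument for n = 5:
# ((x+h)^5 - x^5) / h equals sum_{k=1}^{5} C(5,k) x^(5-k) h^(k-1),
# and as h -> 0 only the k = 1 term, 5 * x^4, remains.
from math import comb

n, x, h = 5, 2.0, 1e-4

lhs = ((x + h) ** n - x ** n) / h
rhs = sum(comb(n, k) * x ** (n - k) * h ** (k - 1) for k in range(1, n + 1))

print(lhs, rhs)           # the two sides agree
print(n * x ** (n - 1))   # the surviving k = 1 term: 5 * 2^4 = 80
```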
 
Thanks a lot. Yes, calculus is easy when you know algebra.
 
You can also prove it, perhaps more easily, by induction on n.

When n= 1, \(\displaystyle x^n= x^1= x\), so f(x)= x and f(x+h)= x+h. The difference quotient is \(\displaystyle \frac{(x+ h)- x}{h}= \frac{h}{h}= 1\). Of course the limit of that, as h goes to 0, is 1. The derivative of \(\displaystyle x^n= x^1\) is \(\displaystyle 1= 1x^0= nx^{n-1}\).

Assume that, for some n= k, \(\displaystyle (x^k)'= kx^{k-1}\). Then, for n= k+1, \(\displaystyle x^{k+1}= x(x^k)\) and we can use the "product rule", (fg)'= f'g+ fg'. Here that is \(\displaystyle (x^{k+1})'= (x(x^k))'= x'(x^k)+ x(x^k)'=1(x^k)+ x(kx^{k-1})= x^k+kx^k= (k+1)x^k\).
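As a numerical sanity check of the induction result (an illustration, not part of the proof), the difference quotient of \(x^n\) at a sample point should be close to \(n\,x^{n-1}\) for each small n. The helper name `approx_derivative` is an arbitrary choice.

```python
# Spot-check of the power rule for n = 1..5: at a sample point,
# the difference quotient of x^n is close to n * x^(n-1).

def approx_derivative(fn, x, h=1e-6):
    return (fn(x + h) - fn(x)) / h

x = 1.5
for n in range(1, 6):
    numeric = approx_derivative(lambda t, n=n: t ** n, x)
    exact = n * x ** (n - 1)
    print(n, numeric, exact)   # the two columns agree closely
```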
 
Of course, that requires that you first have proved the "product rule": (fg)'= f'g+ fg'.

If F(x)= f(x)g(x) then F(x+ h)= f(x+h)g(x+ h), so F(x+h)- F(x)= f(x+h)g(x+h)- f(x)g(x)= f(x+h)g(x+h)- f(x)g(x+h)+ f(x)g(x+h)- f(x)g(x)
= (f(x+h)- f(x))g(x+ h)+ f(x)(g(x+h)- g(x)).

\(\displaystyle \frac{F(x+h)- F(x)}{h}= \frac{f(x+h)- f(x)}{h}g(x+h)+ f(x)\frac{g(x+h)- g(x)}{h}\).

Now, as h goes to 0 the two "difference quotients" go to the derivative and g(x+h) goes to g(x).
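The product rule can likewise be checked numerically. The sketch below (an illustration only) takes f(x) = x^2 and g(x) = x^3, so F(x) = x^5 and both sides should come out near \(5x^4\).

```python
# Numeric check of the product rule: for F(x) = f(x) * g(x) with
# f(x) = x^2 and g(x) = x^3, F'(x) should equal f'(x)g(x) + f(x)g'(x).

def approx_derivative(fn, x, h=1e-6):
    return (fn(x + h) - fn(x)) / h

f = lambda t: t ** 2
g = lambda t: t ** 3
F = lambda t: f(t) * g(t)

x = 2.0
lhs = approx_derivative(F, x)              # difference quotient of F
rhs = 2 * x * g(x) + f(x) * 3 * x ** 2     # f'g + f g' by the power rule
print(lhs, rhs)                            # both near 5 * 2^4 = 80
```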
 
Sorry, but I do not entirely buy your proof. You must rigorously show that g(x+h) goes to g(x)!
 
This is one of the reasons that this site is not (and cannot be) well adapted for proofs: what theorems may be assumed as already proven and what axioms apply will vary with each inquiry.

[MATH]F(x) = f(x) \cdot g(x) \implies F(x + h) - F(x) = f(x + h) \cdot g(x + h) - f(x) \cdot g(x) =[/MATH]
[MATH]f(x + h) \cdot g(x + h) + \{ f(x) \cdot g(x + h) - f(x) \cdot g(x + h) \} - f(x) \cdot g(x) = [/MATH]
[MATH]\{f(x + h) \cdot g(x + h) - f(x) \cdot g(x + h)\} + \{f(x) \cdot g(x + h) - f(x) \cdot g(x)\} =[/MATH]
[MATH]g(x + h)\{f(x + h) - f(x)\} + f(x)\{g(x + h) - g(x)\} \implies[/MATH]
[MATH]\dfrac{F(x + h) - F(x)}{h} = g(x + h) \cdot \dfrac{f(x + h) - f(x)}{h} + f(x) \cdot \dfrac {g(x + h) - g(x)}{h}.[/MATH]
Now we are assuming that f(x) and g(x) are differentiable in some relevant interval (a, b), which means that, by definition, the limits of the two difference quotients on the RHS, the derivatives, exist in that interval. There is a theorem that says if f(x) and g(x) are differentiable in (a, b), then f(x) and g(x) are also continuous in that interval. Therefore, by definition, the limit of g(x + h) as h approaches zero exists and equals g(x). Moreover there is a theorem that says that if two summands each have a limit, then their sum has a limit equal to the sum of the limits. And finally there is a theorem that if two multiplicands each have a limit, then their product has a limit equal to the product of the limits.

Applying all that we get

[MATH]F’(x) = g(x)f’(x) + f(x)g’(x).[/MATH]
But who says those theorems are valid? Their proofs require other theorems. It is simply impossible to know where to start on proofs, because this site is not a textbook where axioms are specified and theorems worked out in a logical sequence.
 
Yes, I did not explicitly state that f and g must be differentiable which would imply that they are continuous.
 
In my opinion, just before proving the product rule you prove the lemma showing that g(x+h) goes to g(x). Then in the proof of the product rule you state that you are using that lemma. This is how I was taught the product rule and how I taught it for decades. If I recall correctly, it is that way in all the calculus books I looked at. If it wasn't like that in the textbook, I would never have said anything to Halls about it.
 
Jomo

I truly respect you, and I admit that I have not one atom of mathematician in me. But surely the very definition of continuity includes

[MATH]f(x) \text { is continuous at } a \iff f(a) \text { exists and } \lim_{h \rightarrow 0}f(a + h)= f(a).[/MATH]
If that is the case, what lemma can be necessary? Obviously, we need the general theorem that differentiability entails continuity. But, given that theorem, I am not seeing your point that a specific lemma is needed for the product rule.

I am quite possibly being dense. I find analysis unbelievably ugly and stopped taking courses in math at Columbia after being subjected to analysis.
 