I'm studying for my test on integrals and I've stumbled across something I find really confusing!
\(\displaystyle \mbox{40e. }\, f(x)\, =\, {}^2\log\left(\dfrac{1}{x}\right)\, =\, {}^2\log\left(x^{-1}\right)\, =\, -\,{}^2\log(x)\)

. . . . .\(\displaystyle \Rightarrow\; F(x)\, =\, -\frac{1}{\ln(2)}\cdot\bigl(x\ln(x)\, -\, x\bigr)\, +\, c\, =\, \frac{-x\ln(x)\, +\, x}{\ln(2)}\, +\, c\)
I understand that log2(1/x) = -log2(x), but I don't get why -log2(x) becomes -ln(x)/ln(2) instead of log(x)/log(2), or why you only have to integrate ln(x) and not 1/ln(2). I can't find anything about this in my math book, so I was wondering if someone could explain this to me?
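For reference, I think this is the chain of steps the answer is using (just written out more explicitly), though it's exactly this part that confuses me: the change of base to the natural log, and then pulling the constant 1/ln(2) out in front of the integral.

\(\displaystyle {}^2\log(x)\, =\, \frac{\ln(x)}{\ln(2)} \;\;\Rightarrow\;\; \int -\,{}^2\log(x)\, dx\, =\, -\frac{1}{\ln(2)}\int \ln(x)\, dx\, =\, -\frac{1}{\ln(2)}\bigl(x\ln(x)\, -\, x\bigr)\, +\, c\)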