The First Obstacle in Solving PDEs: partial-y/partial-t = (k/r) partial/partial-r (r partial-y/partial-r) + Q(r)

mario99

Junior Member
Joined
Aug 19, 2020
Messages
248
One millimeter between me and mastering the Bessel equations.

One trillion millimeters between me and mastering PDEs (Partial Differential Equations).

And here is the first obstacle.

[math]\frac{\partial y}{\partial t} = \frac{k}{r}\frac{\partial}{\partial r}\left(r\frac{\partial y}{\partial r}\right) + Q(r)[/math]
[math]0 < r < a[/math]
[math]t > 0[/math]
[math]y(r,0) = f(r)[/math]
[math]y(a,t) = T[/math]
The final solution is

[math]y(r,t) = T + \sum_{n=1}^{\infty}\left[\frac{q_n}{\lambda_n k} + \left(f_n - \frac{q_n}{\lambda_n k}\right)e^{-\lambda_n kt}\right]J_0(\sqrt{\lambda_n} r)[/math]
where

[imath]\lambda_n = \frac{z^2_n}{a^2}[/imath]

[imath]z_n[/imath] is the [imath]n[/imath]th positive zero of the Bessel function [imath]J_0[/imath]

[imath]f_n[/imath] and [imath]q_n[/imath] are Fourier-Bessel series coefficients

How do I solve the above PDE to get that final solution? I know how to solve it when [imath]Q(r)[/imath] = 0.

The inclusion of this [imath]Q(r)[/imath] function reminds me of ordinary differential equations, where we seek a particular solution. But here the function is unknown, so how would we look for a particular solution? If we look at the final solution, the function [imath]Q(r)[/imath] does not appear, which suggests it is equal to zero; but when I solve the PDE in that manner, I get a different solution.
 
You are going to need to expand Q(r) as a series in Bessel functions. Do you know how to do that?

-Dan
 
Expanding [imath]Q(r)[/imath] as a Fourier-Bessel series is not a problem.

[imath]Q(r) = \sum_{n=1}^{\infty}q_n J_0(\sqrt{\lambda_n} r)[/imath]

where [imath]q_n = \frac{\int_{0}^{a} rJ_0(\sqrt{\lambda_n} r)Q(r) \ dr}{||J_0(\sqrt{\lambda_n} r)||^2}[/imath]

But how will this help me in solving the PDE?
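As a sanity check of this expansion, here is a quick numerical sketch (assuming a hypothetical [imath]Q(r) = 1 - (r/a)^2[/imath], and using the norm [imath]||J_0||^2 = \frac{a^2}{2}J_1^2(z_n)[/imath], the standard value on [imath]0 < r < a[/imath]):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

a = 1.0
Q = lambda r: 1.0 - (r / a) ** 2      # hypothetical source term, for illustration only
z = jn_zeros(0, 20)                   # z_n: first 20 positive zeros of J0
lam = (z / a) ** 2                    # lambda_n = z_n^2 / a^2

# q_n = (integral of r J0(sqrt(lam_n) r) Q(r) dr) / ||J0||^2,
# with ||J0||^2 = (a^2 / 2) J1(z_n)^2 on 0 < r < a
q = np.array([
    quad(lambda r, L=L: r * j0(np.sqrt(L) * r) * Q(r), 0, a)[0]
    / (a**2 / 2 * j1(zn) ** 2)
    for L, zn in zip(lam, z)
])

# The partial sum should reconstruct Q(r) in the interior of the interval
r0 = 0.5
partial = sum(qn * j0(np.sqrt(L) * r0) for qn, L in zip(q, lam))
print(abs(partial - Q(r0)))  # small truncation error
```

Here Q was chosen to vanish at r = a, so the truncated series converges quickly; a Q with Q(a) not zero would converge more slowly near the endpoint.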
 
I'm on my phone, so LaTeX is a bit tough to write. I'll do the example talking about a power series... The method is identical.

When you expand a solution into an orthogonal series, the goal is to be able to equate both sides of the equation by matching coefficients of like terms. So if we have
[imath]a_0 + a_1 x + a_2 x^2 + \dots = 0[/imath]

for all values of x in a domain, we know that
[imath]a_0 = 0 \\ a_1 = 0 \\ a_2 = 0[/imath]

and so on.
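The matching step can be sketched symbolically (a minimal SymPy illustration; the coefficients here are hypothetical placeholders):

```python
import sympy as sp

x, a0, a1, a2 = sp.symbols('x a0 a1 a2')

# If a0 + a1*x + a2*x**2 = 0 for every x, each coefficient must vanish
expr = a0 + a1 * x + a2 * x**2
coeffs = [expr.coeff(x, n) for n in range(3)]
print(coeffs)  # [a0, a1, a2] -- each of these gets set to 0
```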

In your example, then, we need Q(r) to be expanded into a Bessel function series so we can match coefficients of [imath]J_n(x)[/imath] and set them to 0.

-Dan
 
Do you mean that I should do this?

[imath]Q(r) = \sum_{n=1}^{\infty}q_n J_0(\sqrt{\lambda_n} r) = q_1 J_0(\sqrt{\lambda_1} r) + q_2 J_0(\sqrt{\lambda_2} r)+ q_3 J_0(\sqrt{\lambda_3} r) + \cdots[/imath]

[imath]q_1 + q_2 + q_3 + \cdots = 0[/imath]

But I already know the value of [imath]q_n[/imath]. Thanks to the definition.
 
No, that's not what I'm saying. Presumably you are working from some set of notes. I would imagine that you have a worked example or two to show you the general method.

You have a function that can be expanded as:
[imath]\displaystyle y(r,t) = \sum_{n = 1}^{\infty} a_n J_0( \sqrt{\lambda_n} r) T_n(t)[/imath]

and a function
[imath]\displaystyle Q(r) = \sum_{n = 1}^{\infty} q_n J_0( \sqrt{\lambda_n} r)[/imath]

So plug this into your differential equation:
[imath]\dfrac{\partial y(r,t)}{\partial t} = \dfrac{k}{r} \dfrac{\partial}{\partial r} \left ( r \dfrac{\partial y(r,t)}{\partial r} \right ) + Q(r)[/imath]

The RHS can be expressed as an (orthogonal) series of Bessel functions. Now apply the boundary conditions and use the orthogonality condition to write expressions for the coefficients. (By the way, looking at the given answer, you are going to need to expand f(r) into a Bessel series as well.)

I presume you've covered Fourier series and learned how to find the coefficients in a given expansion. (The standard development covers this before Bessel functions.) It's the same process: Write your unknown function in terms of a Fourier series, apply the boundary conditions, then use the orthogonality condition to write integrals for the coefficients. See here for an example of the Fourier method. The Bessel function method is the same thing, just with a different orthogonality condition. This generalizes to any orthogonal series expansion: the only difference between them is the orthogonality condition, so once you've learned how to do one method, you've essentially learned how to do them all. (Though, admittedly, the Bessel integrals tend to be a bit harder than most.)

I have a feeling you don't have much of a handle on the method. As usual, please post your work. I can't help you out any better until I can see what you know how to do.

-Dan
 
[imath]T_n = T_n(t)[/imath]
[imath]J_0 = J_0(\sqrt{\lambda_n} r)[/imath]

I will omit the variables [imath]t[/imath] and [imath]r[/imath] to save some ink.

[imath]\displaystyle \sum_{n = 1}^{\infty} a_n J_0 \frac{dT_n}{dt} = \sum_{n = 1}^{\infty} a_n T_n\frac{k}{r}\frac{d}{dr}\left(r\frac{dJ_0}{dr}\right) + \sum_{n = 1}^{\infty} q_n J_0[/imath]

Theorem.

[imath]\displaystyle \frac{d}{dx}(x^n J_n(x)) = x^n J_{n-1}(x)[/imath]


[imath]\displaystyle \sum_{n = 1}^{\infty} a_n J_0 \frac{dT_n}{dt} = \sum_{n = 1}^{\infty} a_n T_n\frac{k}{r}\frac{d}{dr}\left(\sqrt{\lambda_n}rJ_{-1}\right) + \sum_{n = 1}^{\infty} q_n J_0[/imath]

Property.

[imath]J_{-n}(x) = (-1)^nJ_n(x)[/imath]


[imath]\displaystyle \sum_{n = 1}^{\infty} a_n J_0 \frac{dT_n}{dt} = \sum_{n = 1}^{\infty} a_n T_n\frac{k}{r}\frac{d}{dr}\left(-\sqrt{\lambda_n}rJ_{1}\right) + \sum_{n = 1}^{\infty} q_n J_0[/imath]


[imath]\displaystyle \sum_{n = 1}^{\infty} a_n J_0 \frac{dT_n}{dt} = -\sum_{n = 1}^{\infty} a_n T_nk\lambda_nJ_0 + \sum_{n = 1}^{\infty} q_n J_0[/imath]


[imath]\displaystyle \sum_{n = 1}^{\infty} a_n J_0 \frac{dT_n}{dt} + \sum_{n = 1}^{\infty} a_n T_nk\lambda_nJ_0 - \sum_{n = 1}^{\infty} q_n J_0 = 0[/imath]


[imath]\displaystyle a_n\frac{dT_n}{dt} + a_n T_nk\lambda_n - q_n = 0[/imath]
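The last line is just a first-order linear ODE for each [imath]T_n[/imath]. As a check, here is a SymPy sketch (treating [imath]a_n, k, \lambda_n, q_n[/imath] as positive symbols):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a_n, k, lam, q_n = sp.symbols('a_n k lambda_n q_n', positive=True)
T = sp.Function('T_n')

# The ODE derived above: a_n T' + a_n k lambda_n T - q_n = 0
ode = sp.Eq(a_n * T(t).diff(t) + a_n * k * lam * T(t) - q_n, 0)
sol = sp.dsolve(ode, T(t))
print(sol.rhs)  # equilibrium q_n/(a_n k lambda_n) plus a decaying exponential
```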
 
Not bad. Two points:

1. You can't use this identity right away: you've got the wrong indexes and a derivative of [imath]J_0[/imath] inside of your derivative. You'll have to do it the long way:
[imath]\dfrac{d}{dr} \left ( r \dfrac{dJ_0}{dr} \right )[/imath]

[imath]= \dfrac{d}{dr} \left ( r \dfrac{1}{2} \sqrt{\lambda_n} (J_{-1} - J_1) \right )[/imath]

(Note: The index of [imath]\lambda_n[/imath] does not change when we take the derivative, so it's [imath]J_{-1}(\sqrt{\lambda_n} r)[/imath].)

Now, [imath]J_{-1} = (-1)^1 J_1 = -J_1[/imath], so this is

[imath]= \dfrac{d}{dr} \left ( -\sqrt{\lambda_n}\, r J_1 \right )[/imath]

and now you can use the identity for the second derivative.[imath]^*[/imath]

2. I guess there's nothing wrong about taking the series expansion of LHS at the start, but it isn't really going to help any. Derive the overall equation leaving the LHS the same, and then expand it when you put in the boundary conditions.

[imath]^*[/imath] As it turns out, due to the symmetry of the differential operator, you do have the correct expression for the RHS. Make sure you rederive this correctly.

So at this point, you have
[imath]\displaystyle \dfrac{\partial y}{\partial t} = \sum_{n=1}^{\infty} \left ( - a_n k \lambda _n T_n(t) + q_n \right ) J_0(\sqrt{\lambda_n} r)[/imath]

Now apply the boundary conditions. (Hint: Do the second condition first, so you can find [imath]T_n(t)[/imath].) Recall that we put everything into a single summation on one side of the equation:
[imath]\displaystyle \sum_{n=1}^{\infty} b_n J_0(\sqrt{\lambda_n} r) = 0[/imath]

which, if true for all r, means that [imath]b_n = 0[/imath] for each n.

Can you finish?

-Dan
 
[imath]\displaystyle \frac{dJ_n(x)}{dx} = \frac{1}{2}[J_{n-1}(x) - J_{n + 1}(x)][/imath]

I like the above identity that you used for the derivative of [imath]J_0[/imath].

I am just wondering why my identity isn't correct, although it gives the correct result?

[imath]\displaystyle \frac{d[x^nJ_n(x)]}{dx} = x^nJ_{n-1}(x)[/imath]

When [imath]n = 0, \displaystyle \frac{d[x^0J_0(x)]}{dx} = \frac{d[J_0(x)]}{dx} = x^0J_{0-1}(x) = J_{-1}(x)[/imath]
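For what it's worth, both routes give the same derivative, [imath]J_0'(x) = -J_1(x)[/imath], which is easy to check numerically (a quick SciPy sketch):

```python
import numpy as np
from scipy.special import j0, j1

# Compare a central-difference derivative of J0 against -J1
x = np.linspace(0.1, 10.0, 50)
h = 1e-6
num = (j0(x + h) - j0(x - h)) / (2 * h)
print(np.max(np.abs(num + j1(x))))  # tiny: J0'(x) and -J1(x) agree
```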

Now apply the boundary conditions. (Hint: Do the second condition first, so you can find [imath]T_n(t)[/imath].)
I prefer first to apply the initial condition to find [imath]T_n(0)[/imath], so that I can solve the differential equation to find [imath]T_n(t)[/imath].

[imath]\displaystyle y(r,0) = \sum_{n = 1}^{\infty} a_n J_0( \sqrt{\lambda_n} r) T_n(0) = f(r)[/imath]


[imath]\displaystyle a_nT_n(0) = \frac{\int_{0}^{a} rJ_0(\sqrt{\lambda_n} r)f(r) \ dr}{||J_0(\sqrt{\lambda_n} r)||^2}[/imath]


[imath]\displaystyle T_n(0) = \frac{\int_{0}^{a} rJ_0(\sqrt{\lambda_n} r)f(r) \ dr}{a_n||J_0(\sqrt{\lambda_n} r)||^2}[/imath]


[imath]\displaystyle a_n\frac{dT_n}{dt} + a_n T_nk\lambda_n - q_n = 0[/imath]


[imath]\displaystyle T_n(t) = \frac{q_n}{a_nk\lambda_n} +c_1e^{-k\lambda_n t}[/imath]


where [imath]\displaystyle c_1 = T_n(0) - \frac{q_n}{a_nk\lambda_n}[/imath]


[imath]\displaystyle y(r,t) = \sum_{n = 1}^{\infty} a_nT_n(t)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = \sum_{n = 1}^{\infty} a_n\left(\frac{q_n}{a_nk\lambda_n} +c_1e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +c_1a_ne^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[T_n(0) - \frac{q_n}{a_nk\lambda_n}\right]a_ne^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[a_nT_n(0) - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


Let [imath]f_n = a_nT_n(0)[/imath]


[imath]\displaystyle y(r,t) = \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


We are almost at the given solution. I will apply the boundary condition.


[imath]\displaystyle y(a,t) = \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} a) = 0 \ \ [/imath] as [imath] \ \ J_0(\sqrt{\lambda_n} a) = 0[/imath]

but the boundary condition tells us that [imath]y(a,t) = T[/imath], so the only assumption that I can make is that I have to add a constant to our solution.

[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


Applying the boundary condition again.


[imath]\displaystyle y(a,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} a) = C = T[/imath]


[imath]\displaystyle y(r,t) = T + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


Finally, I got the exact final solution. Tell me if my approach was correct, or if I did any steps incorrectly.
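To convince myself, I also checked the final formula numerically. This is only a sketch with hypothetical data (f and Q chosen so that f(a) = T), computing [imath]f_n[/imath] from [imath]f(r) - T[/imath] (the role of the constant is clarified in the replies below):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

# Hypothetical problem data, chosen so that f(a) = T
a, k, T = 1.0, 0.5, 2.0
f = lambda r: T + 1.0 - (r / a) ** 2
Q = lambda r: np.exp(-r)

z = jn_zeros(0, 30)
lam = (z / a) ** 2
norm = a**2 / 2 * j1(z) ** 2          # ||J0(sqrt(lam_n) r)||^2 on (0, a)

def coeffs(g):
    # Fourier-Bessel coefficients of g on (0, a)
    return np.array([quad(lambda r, L=L: r * j0(np.sqrt(L) * r) * g(r), 0, a)[0] / nn
                     for L, nn in zip(lam, norm)])

fn = coeffs(lambda r: f(r) - T)       # f_n expands f(r) - T
qn = coeffs(Q)

def y(r, t):
    # The final solution: T + sum of [q_n/(lam_n k) + (f_n - q_n/(lam_n k)) e^(-lam_n k t)] J0
    terms = qn / (lam * k) + (fn - qn / (lam * k)) * np.exp(-lam * k * t)
    return T + np.sum(terms * j0(np.sqrt(lam) * r))

print(abs(y(a, 1.0) - T))             # boundary condition y(a, t) = T
print(abs(y(0.3, 0.0) - f(0.3)))      # initial condition y(r, 0) = f(r)
```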
 
Okay. An apology on two points:
1. We aren't doing the expansion into the Bessel series like I had thought. What we are really doing is
[imath]\displaystyle y(r,t) = C + \sum_{n=1}^{\infty} T_n(t) J_0( \sqrt{ \lambda_n } r)[/imath]
My only excuse is that I've never had to do that before and it didn't occur to me for this case. As it turns out, there should be a way to get it to work without the C term: the differential operator is the radial part of the Laplacian in cylindrical coordinates, which means we may think of y as a potential function. We can always add a constant to a potential and change nothing, so its presence or lack changes nothing in the end. But it does change how we define the expansion for f(r). It appears that your source did add it in and, frankly, it does make the solution look a tad nicer. The cost is that the [imath]f_n[/imath] aren't quite defined the way you might think.

2. Note that I dropped the [imath]a_n[/imath] in the above expansion. We really don't need it... the [imath]T_n(t)[/imath] will automatically take care of any extra constants. The [imath]a_n[/imath] just turns into an annoyance. Sorry about that.

[imath]\displaystyle \frac{dJ_n(x)}{dx} = \frac{1}{2}[J_{n-1}(x) - J_{n + 1}(x)][/imath]

I like the above identity that you used for the derivative of [imath]J_0[/imath].

I am just wondering why my identity isn't correct, although it gives the correct result?
The identity is
[imath]\dfrac{d}{dx} (x^n J_n(x)) = x^n J_{n-1}(x)[/imath]

I really don't know why you think you should be able to apply this to
[imath]\dfrac{d}{dx} \left ( x^n \dfrac{dJ_n}{dx} \right )[/imath]

As I said before, you need to apply the derivative rule first, simplify the [imath]J_{-1} = -J_1[/imath], then use the above identity. The reason you got the correct answer? Luck. You applied the identity incorrectly and it somehow came out right.

I'm going to add that extra C into your quote and make comments as needed.
I prefer first to apply the initial condition to find [imath]T(0),[/imath] so that I can solve the differential equation to find [imath]T(t)[/imath].

[imath]\displaystyle y(r,0) = C + \sum_{n = 1}^{\infty} a_n J_0( \sqrt{\lambda_n} r) T_n(0) = f(r)[/imath]

[imath]\displaystyle a_nT_n(0) = \frac{\int_{0}^{a} rJ_0(\sqrt{\lambda_n} r)f(r) \ dr}{||J_0(\sqrt{\lambda_n} r)||^2}[/imath]
Okay, let's stop a moment. The orthogonality condition is
[imath]\displaystyle \int_0^1 J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr = \dfrac{1}{2} J_{p+1}^2(\sqrt{\lambda_{pn}}) \delta _{mn}[/imath]

So here we have
[imath]\displaystyle a_n T_n(0) = \dfrac{2}{J_1^2(\sqrt{\lambda_n})} \int_0^1 (f(r) - C) J_0( \sqrt{\lambda_n} r ) r \, dr[/imath]

Now all we need to do is equate this to our expansion:
[imath]\displaystyle C + \sum_{n=1}^{\infty} a_n T_n(0) J_0( \sqrt{\lambda_n} r ) = f(r)[/imath]

[imath]\displaystyle \sum_{n=1}^{\infty} a_n T_n(0) J_0( \sqrt{\lambda_n} r ) = f(r) - C[/imath]

and we may simply call the RHS
[imath]\displaystyle \sum_{n = 1}^{\infty} f_n J_0( \sqrt{\lambda_n} r )[/imath]

which is what your source did. It's not technically wrong to do this, but your source should have made clear that they were doing so.

So equating the two series, we may immediately say that
[imath]a_n T_n(0) = f_n[/imath]

and we don't have to worry about the appearance of the C in the [imath]f_n[/imath] definition, and your error in the orthogonality condition gets erased.

Now you go on to apply the differential equation.
[imath]\displaystyle a_n\frac{dT_n}{dt} + a_n T_nk\lambda_n - q_n = 0[/imath]


[imath]\displaystyle T_n(t) = \frac{q_n}{a_nk\lambda_n} +c_1e^{-k\lambda_n t}[/imath]


where [imath]\displaystyle c_1 = T_n(0) - \frac{q_n}{a_nk\lambda_n}[/imath]


[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} a_nT_n(t)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} a_n\left(\frac{q_n}{a_nk\lambda_n} +c_1e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +c_1a_ne^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[T_n(0) - \frac{q_n}{a_nk\lambda_n}\right]a_ne^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[a_nT_n(0) - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]

Let [imath]f_n = a_nT_n(0)[/imath] <- Technically you are applying this, not "letting" it.

[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]


We are almost arriving to the given solution. I will apply the boundary condition.


[imath]\displaystyle y(a,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} a) = C \ \ [/imath] as [imath] \ \ J_0(\sqrt{\lambda_n} a) = 0[/imath]

Now this is taken care of.
but the boundary condition tells us that [imath]y(a,t) = T[/imath], so the only assumption that I can make is that I have to add a constant to our solution.

[imath]\displaystyle y(r,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]

[imath]\displaystyle y(a,t) = C + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} a) = C = T[/imath]



[imath]\displaystyle y(r,t) = T + \sum_{n = 1}^{\infty} \left(\frac{q_n}{k\lambda_n} +\left[f_n - \frac{q_n}{k\lambda_n}\right]e^{-k\lambda_n t}\right)J_0( \sqrt{\lambda_n} r) [/imath]
You did pretty good. The detail about the constant C wasn't your fault (your source apparently never mentioned it, and I gave you the wrong series expansion), and you did the rest reasonably well.

My suggestion is practice, practice, practice. Solving a differential equation by an orthogonal series expansion is actually a pretty basic technique, but some of the expansion functions themselves can get a bit... interesting. Bessel functions are one of the more interesting ones. I presume that you are still self-studying. I would suggest, after working some more problems with Bessel functions, you move on to Fourier series (if you haven't done them already), Legendre polynomials, Associated Legendre polynomials, Laguerre polynomials, and if you like you might at least glance at the hypergeometric functions. Those last are so general that practically every function you've seen up to now is actually a hypergeometric function, or directly related to one. I've never had to use them, myself (I rarely even use Bessel functions), so I wouldn't say that the hypergeometrics are critical, but I'm a Physicist and you are studying Mathematics: you might want to generalize a bit more than my advice.

-Dan

Addendum: The whole orthogonal expansion thing has a name: if you haven't seen it already, you might want to take a look at Sturm-Liouville theory. It's too advanced for your level, but it might be useful to take a look at it for future reference.
 
Okay. An apology on two points:
No problem, and thanks for mentioning these two points; the solution depends greatly on the first one.


The identity is
[imath]\dfrac{d}{dx} (x^n J_n(x)) = x^n J_{n-1}(x)[/imath]

I really don't know why you think you should be able to apply this to
[imath]\dfrac{d}{dx} \left ( x^n \dfrac{dJ_n}{dx} \right )[/imath]
I didn't apply the identity to [imath]\dfrac{d}{dx} \left ( x^n \dfrac{dJ_n}{dx} \right )[/imath] but rather to only [imath]\dfrac{dJ_n}{dx}[/imath]

Isn't [imath]\dfrac{dJ_0}{dx} = \dfrac{d(x^0J_0)}{dx} [/imath]?

Then from the identity [imath]\dfrac{d(x^0J_0)}{dx} = x^0J_{0-1}(x) = J_{-1}(x)[/imath]


Okay, let's stop a moment. The orthogonality condition is
[imath]\displaystyle \int_0^1 J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr = \dfrac{1}{2} J_{p+1}^2(\sqrt{\lambda_{pn}}) \delta _{mn}[/imath]
The orthogonality condition is a tricky bastard. While working on the domain [imath]0 < r < 1[/imath], it is true that

[imath]\displaystyle \int_0^1 J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr = \dfrac{1}{2} J_{p+1}^2(\sqrt{\lambda_{pn}}) \delta _{mn}[/imath]

A noob like Mario will think that it works exactly the same on other domains. This condition is tricky on domains such as [imath]0 < r < a[/imath]:

[imath]\displaystyle \int_0^a J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr \neq \dfrac{1}{2} J_{p+1}^2(\sqrt{\lambda_{pn}}) \delta _{mn}[/imath]

But rather

[imath]\displaystyle \int_0^a J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr = \dfrac{a^2}{2} J_{p+1}^2(\sqrt{\lambda_{pn}} a) \delta _{mn}[/imath]
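This is easy to verify numerically. A sketch on a deliberately non-unit domain, checking the [imath](0, a)[/imath] condition with norm [imath]\frac{a^2}{2}J_1^2(z_n)[/imath] (which matches the full form of the condition):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

# Check orthogonality on 0 < r < a with a != 1:
# the integral of J0(sqrt(lam_m) r) J0(sqrt(lam_n) r) r dr over (0, a)
# should be (a^2/2) J1(z_n)^2 when m = n, and 0 otherwise
a = 2.5
z = jn_zeros(0, 5)
s = z / a                              # sqrt(lambda_n)

gram = np.array([[quad(lambda r: j0(s[m] * r) * j0(s[n] * r) * r, 0, a)[0]
                  for n in range(5)] for m in range(5)])
expected = np.diag(a**2 / 2 * j1(z) ** 2)
print(np.max(np.abs(gram - expected)))  # tiny: off-diagonals vanish, diagonal matches
```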


So here we have
[imath]\displaystyle a_n T_n(0) = \dfrac{2}{J_1^2(\sqrt{\lambda_n})} \int_0^1 (f(r) - C) J_0( \sqrt{\lambda_n} r ) r \, dr[/imath]
You mean:

[imath]\displaystyle a_n T_n(0) = \dfrac{2}{a^2 J_1^2(\sqrt{\lambda_n} a)} \int_0^a (f(r) - C) J_0( \sqrt{\lambda_n} r ) r \, dr[/imath]


Now all we need to do is equate this to our expansion:
[imath]\displaystyle C + \sum_{n=1}^{\infty} a_n T_n(t) J_0( \lambda_n r ) = f(r)[/imath]

[imath]\displaystyle \sum_{n=1}^{\infty} a_n T_n(t) J_0( \lambda_n r ) = f(r) - C[/imath]
You are so right about this new edited expansion. It works very well. Combining [imath]a_nT_n(t)[/imath] into a single [imath]T_n(t)[/imath] makes the work neat and saves a lot of ink.


You did pretty good. The detail about the constant C wasn't your fault (your source apparently never mentioned it, and I gave you the wrong series expansion), and you did the rest reasonably well.
I am so happy that we started the solution without [imath]C[/imath] (technically [imath]T[/imath]). It was a mistake, but you only learn by making a lot of mistakes. It also showed us that we can start the solution with a guess and then edit and improve it as necessary until it satisfies the initial and boundary conditions.


My suggestion is practice, practice, practice.
This is what I have been doing for years, but noobs remain noobs.


Bessel functions are one of the more interesting ones. I presume that you are still self-studying. I would suggest, after working some more problems with Bessel functions, you move on to Fourier series (if you haven't done them already), Legendre polynomials, Associated Legendre polynomials, Laguerre polynomials, and if you like you might at least glance at the hypergeometric functions. Those last are so general that practically every function you've seen up to now is actually a hypergeometric function, or directly related to one. I've never had to use them, myself (I rarely even use Bessel functions), so I wouldn't say that the hypergeometrics are critical, but I'm a Physicist and you are studying Mathematics: you might want to generalize a bit more than my advice.
I have solved the Airy equation from scratch, and I tried to do the same with the Bessel equation. I am currently working on the Legendre equation and trying to solve some Sturm-Liouville problems. I have already covered Fourier series, but this is the first time I have heard of Laguerre polynomials.

I am not sure what a hypergeometric function is! A wave function? Or all the classic functions?

I understand that most PDEs fall into three categories: elliptic, parabolic, and hyperbolic, exemplified by the Laplace equation, the heat equation, and the wave equation, respectively.

Regarding Physics.
Physics is fun, and I am sure you agree with me, as you are a Physicist. Every day I open my physics book and solve a problem at random from any topic, even Quantum Physics, which I am guessing is your main focus. Once I got up early and opened my physics book, and it happened to be nuclear physics. I was given a bone from a dead animal and had to find how long ago the animal died. Thanks to differential calculus, the radioactive decay law, and carbon-14, the age of the bone came out to 9500 years.

Thanks topsquark for the help, but......

In fact, we have cheated to solve the problem. We looked at the final solution and guessed that the solution will look like

[imath]\displaystyle y(r,t) = C + \sum_{n=1}^{\infty} T_n(t) J_0( \sqrt{ \lambda_n } r)[/imath]

And, luckily, we solved the problem successfully based on that. The real question remains: how would we attack the problem if we were not given the final solution? In other words, say you have the final solution, but you will never look at it until you have tried your own solution from scratch, without any cheating. How would you do that?
 
I didn't apply the identity to [imath]\dfrac{d}{dx} \left ( x^n \dfrac{dJ_n}{dx} \right )[/imath] but rather to only [imath]\dfrac{dJ_n}{dx}[/imath]

Isn't [imath]\dfrac{dJ_0}{dx} = \dfrac{d(x^0J_0)}{dx} [/imath]?

Then from the identity [imath]\dfrac{d(x^0J_0)}{dx} = x^0J_{0-1}(x) = J_{-1}(x)[/imath]
Ah! Now I see what you were trying to do. Yes, that works fine.

The orthogonality condition is a tricky bastard. While working on the domain [imath]0 < r < 1[/imath], it is true that

[imath]\displaystyle \int_0^1 J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr = \dfrac{1}{2} J_{p+1}^2(\sqrt{\lambda_{pn}}) \delta _{mn}[/imath]

A noob like Mario will think that it works exactly the same on other domains. This condition is tricky on domains such as [imath]0 < r < a[/imath]:

[imath]\displaystyle \int_0^a J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr \neq \dfrac{1}{2} J_{p+1}^2(\sqrt{\lambda_{pn}}) \delta _{mn}[/imath]

But rather

[imath]\displaystyle \int_0^a J_p ( \sqrt{\lambda_{pn}} r ) J_p ( \sqrt{\lambda_{pm}} r ) r \, dr = \dfrac{a^2}{2} J_{p+1}^2(\sqrt{\lambda_{pn}} a) \delta _{mn}[/imath]
Written in its full form, the orthogonality condition is
[imath]\displaystyle \int_0^a J_p \left ( z_{pn} \dfrac{r}{a} \right ) J_p \left ( z_{pm} \dfrac{r}{a} \right ) r \, dr = \dfrac{a^2}{2} J_{p+1}^2(z_{pn}) \delta _{mn}[/imath]

where the [imath]z_{pn}[/imath] is the nth zero of the pth Bessel function. I guess I dropped the a's. Sorry about that. I usually work on a normalized domain. (And I don't typically work with Bessel functions, either. You might have noticed that.)

I am not sure about what is a hypergeometric function! Wave function? Or all classic functions?
The special functions are defined to be used for specific operators or differential equations. Fourier series are used to analyze periodic functions; the 6 Bessel functions are for cylindrical problems; Legendre, associated Legendre, Spherical Bessel, and Spherical Harmonics for spherical symmetry; Laguerre for radial symmetry; and hypergeometric functions... well, are complicated. I don't know of a particular symmetry or operator that they apply to (aside from the defining differential equation), but any 2nd order linear (ordinary) differential equation that can be specified by three regular singular points can be solved by a hypergeometric function, which makes them applicable in just about any situation you are likely to run into. And there are a ton of identities that you can use with them. I've never really needed them, so I've only scratched the surface, myself.

Wavefunctions are just a type of solution to a differential equation. There are specific types of wave functions, but they are not considered to be "special" functions, like Bessel functions. The Schrodinger equation is a Poisson type, but the Dirac equation is merely first order (but actually represents four coupled first order differential equations.) They don't fall into any one type of category.

In fact, we have cheated to solve the problem. We looked at the final solution and guessed that the solution will look like

[imath]\displaystyle y(r,t) = C + \sum_{n=1}^{\infty} T_n(t) J_0( \sqrt{ \lambda_n } r)[/imath]

And we solved the problem luckily successfully based on that. The real question remains on how would we attack the problem if we were not given the final solution? In other words, say, you have the final solution, but you will never look at it until you try your own solution from scratch without any cheating. How would you do that?
Actually, you are skipping a major step that isn't mentioned in your coursework. You aren't far from this step, but until you finish the coursework for basic special functions you aren't quite there yet.

Here's the key: look at the differential equation you started with. It contains the operator:
[imath]\dfrac{1}{r} \dfrac{\partial}{\partial r} \left ( r \dfrac{\partial}{\partial r} \right )[/imath]

With some experience you will recognize this as the radial part of the Laplacian operator in cylindrical coordinates. Bessel functions were developed to solve equations with this operator in them. Thus we start with a Bessel function expansion. You can actually solve this problem using power series, Fourier series, Legendre series,... pretty much anything you want. But because the operator is cylindrically symmetric, it's likely that Bessel functions will give you the simplest solution method. And, as you can see from the solution, the only Bessel functions that we end up with are the same [imath]J_0(\sqrt{\lambda_n} r)[/imath] that we started with. No other n contributions show up. If the operator were something like just
[imath]\dfrac{\partial ^2}{\partial r^2}[/imath]

we would have other contributions. (You'd likely want to use a Fourier series for this one.)
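The key fact here, that [imath]J_0(\sqrt{\lambda_n}\, r)[/imath] is an eigenfunction of the radial operator with eigenvalue [imath]-\lambda_n[/imath], can be checked numerically. This is a minimal sketch (the test point and step size are arbitrary choices): it applies the operator to [imath]J_0(\sqrt{\lambda_1}\, r)[/imath] by finite differences and compares against [imath]-\lambda_1 J_0(\sqrt{\lambda_1}\, r)[/imath].

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Check that u(r) = J0(sqrt(lam) r) satisfies the eigenvalue relation
#   (1/r) d/dr ( r du/dr ) = -lam * u
a = 1.0
z1 = jn_zeros(0, 1)[0]              # first zero of J0
lam = (z1 / a)**2                   # lambda_1 = z_1^2 / a^2
u = lambda r: j0(np.sqrt(lam) * r)

r, h = 0.4, 1e-4                    # arbitrary interior point, small step
du = lambda s: (u(s + h) - u(s - h)) / (2 * h)       # central difference
lhs = ((r + h) * du(r + h) - (r - h) * du(r - h)) / (2 * h) / r
print(lhs, -lam * u(r))             # the two sides agree closely
```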

If you don't recognize the operator, pick a series. When in doubt, try a power series. (Or, more specifically, a Frobenius series.) They are the simplest series and can always be applied. If an existing series would have been a better choice, you might be able to see that from the solution; otherwise, try a couple of series and see where they get you. Solving differential equations is something of an art form: there is a direct method for solving [imath]3x^3 + 2x^2 - 10x + 4 = 0[/imath], but solving a differential equation may take several attempts to find an efficient method, or the one most applicable to the situation. As always, the more experience you have, the better your guesses will be.
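To make the power-series suggestion concrete, here is a sketch of exactly that attack on Bessel's equation of order zero, [imath]x^2 y'' + x y' + x^2 y = 0[/imath]: substituting [imath]y = \sum c_k x^k[/imath] gives the recurrence [imath]c_k = -c_{k-2}/k^2[/imath] with [imath]c_0 = 1,\ c_1 = 0[/imath], which is precisely the series for [imath]J_0[/imath].

```python
from scipy.special import j0

# Power-series solution of Bessel's equation of order 0:
# the recurrence c_k = -c_{k-2} / k^2 (c_0 = 1, c_1 = 0) kills all
# odd powers and reproduces J0(x) = sum (-1)^m (x/2)^(2m) / (m!)^2.
def j0_series(x, terms=30):
    c, total, x2m = 1.0, 0.0, 1.0
    for m in range(terms):
        total += c * x2m
        c /= -(2 * m + 2)**2        # step the recurrence: c_{2m} -> c_{2m+2}
        x2m *= x * x
    return total

print(j0_series(1.7), j0(1.7))      # truncated series matches J0
```

Without knowing Bessel functions in advance, you would simply end up *defining* [imath]J_0[/imath] this way, which is historically how it happened.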

-Dan
 
Special functions are defined for specific operators or differential equations. Fourier series are used to analyze periodic functions; the six Bessel functions are for cylindrical problems; Legendre, associated Legendre, spherical Bessel, and spherical harmonics are for spherical symmetry; Laguerre polynomials are for radial problems; and hypergeometric functions... well, they are complicated. I don't know of a particular symmetry or operator they apply to (aside from the defining differential equation), but any second-order linear ordinary differential equation with three regular singular points can be solved by a hypergeometric function, which makes them applicable in just about any situation you are likely to run into. And there are a ton of identities that you can use with them. I've never really needed them, so I've only scratched the surface myself.
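One classical identity illustrating how much the hypergeometric function swallows: the Legendre polynomials are terminating hypergeometric series, [imath]P_n(x) = {}_2F_1(-n, n+1; 1; (1-x)/2)[/imath]. A quick numerical check with SciPy (the values of [imath]n[/imath] and [imath]x[/imath] are arbitrary):

```python
from scipy.special import hyp2f1, eval_legendre

# Legendre polynomials as terminating hypergeometric series:
#   P_n(x) = 2F1(-n, n+1; 1; (1-x)/2)
n, x = 4, 0.3
print(hyp2f1(-n, n + 1, 1.0, (1 - x) / 2))
print(eval_legendre(n, x))          # same value
```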
This means that a hypergeometric function is a special function, like the Bessel and Legendre functions, used to solve a specific type of second-order linear differential equation. It would be fun if I could solve a differential equation with a hypergeometric function. If the method is that powerful, it is likely we can solve many linear partial differential equations by combining it with other methods and techniques.

I have taken your advice and read about Sturm-Liouville theory. It is somewhat complicated, but at least for now I can distinguish regular from singular Sturm-Liouville problems.

I tried to solve the Legendre equation from scratch, but stopped because of the complexity of its recurrence. For now, I think it is enough to be familiar with the structure of the Legendre equation and the associated Legendre equation, so that I can try to solve some partial differential equations involving them. I have not tried yet, but I think I can handle one, since I have also familiarized myself with their properties. I was also surprised that the Bessel equation in three dimensions has a variant called the spherical Bessel equation. In my noob experience, I have found that handling such equations is not the difficult part; the difficult part is when you get a differential equation that almost looks like one of them and you have to transform it into one of them before you can apply the technique you know.
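On the spherical Bessel equation: its solutions are ordinary Bessel functions of half-integer order in disguise, [imath]j_n(x) = \sqrt{\pi/(2x)}\, J_{n+1/2}(x)[/imath], and the lowest one even reduces to elementary functions, [imath]j_0(x) = \sin(x)/x[/imath]. A quick check (the evaluation point is arbitrary):

```python
import numpy as np
from scipy.special import spherical_jn, jv

# Spherical Bessel functions are half-integer-order Bessel functions:
#   j_n(x) = sqrt(pi / (2x)) * J_{n+1/2}(x),  and  j_0(x) = sin(x)/x
x = 2.3
print(spherical_jn(0, x), np.sin(x) / x)                          # equal
print(spherical_jn(1, x), np.sqrt(np.pi / (2 * x)) * jv(1.5, x))  # equal
```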

Regarding solving the equation without cheating (without looking at the final solution):
You suggest that I solve a lot of equations involving the basic special functions, so that later, with experience, I can tell from the equation's differential operator which function the solution will involve. While this is true, the first few times you will have to solve the problem from scratch, without experience. Here you have to pick a method, say a Fourier series, and find by trial and error the specific type of series that fits the solution. Although it takes time, it will eventually work, but it is more difficult than cheating. Newbies are lucky because most of the time the problem tells them which method to use, and I think that is the ideal arrangement.

Believe it or not, while I was able to solve (with your help) a partial differential equation that looked somewhat complicated to some people, I don't know how to solve this simple-looking equation, [imath]3x^3 + 2x^2 - 10x + 4 = 0[/imath].
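For what it's worth, that cubic has no rational roots (every rational-root-theorem candidate fails), so the "direct method" in practice means Cardano's formula or a numerical root finder. A quick numerical sketch:

```python
import numpy as np

# Roots of 3x^3 + 2x^2 - 10x + 4 = 0 via the companion matrix
roots = np.roots([3, 2, -10, 4])
print(np.sort(roots.real))          # three real roots
residuals = [abs(3 * r**3 + 2 * r**2 - 10 * r + 4) for r in roots]
print(max(residuals))               # essentially zero
```

Sign changes of the polynomial on [imath](-3,-2)[/imath], [imath](0,1)[/imath], and [imath](1,2)[/imath] confirm that all three roots are real.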

I will end this post here. Thank you topsquark for the help.
 