Euler Equations
In this section we want to look for solutions to
a x^2 y′′ + b x y′ + c y = 0 (1)
around x0=0. These types of differential equations are called Euler Equations.
Recall from the previous section that a point is an ordinary point if the quotients,

p(x) = b x / (a x^2) = b / (a x) and q(x) = c / (a x^2)

have Taylor series around x0=0. However, because of the x in the denominator neither of these will have a Taylor series around x0=0 and so x0=0 is a singular point. So, the method from the previous section won’t work since it required an ordinary point.
However, it is possible to get solutions to this differential equation that aren’t series solutions. Let’s start off by assuming that x>0 (the reason for this will be apparent after we work the first example) and that all solutions are of the form,
y(x) = x^r (2)
Now plug this into the differential equation to get,

a x^2 (r)(r−1) x^(r−2) + b x (r) x^(r−1) + c x^r = 0
a r(r−1) x^r + b r x^r + c x^r = 0
( a r(r−1) + b r + c ) x^r = 0

Now, we assumed that x>0 and so this will only be zero if,

a r(r−1) + b r + c = 0 (3)
So solutions will be of the form (2) provided r is a solution to (3). This equation is a quadratic in r and so we will have three cases to look at: real distinct roots, double roots, and complex roots.
Real, Distinct Roots
There really isn’t a whole lot to do in this case. We’ll get two solutions that will form a fundamental set of solutions (we’ll leave it to you to check this) and so our general solution will be,
y(x) = c1 x^(r1) + c2 x^(r2)
Example 1 Solve the following IVP
2 x^2 y′′ + 3 x y′ − 15 y = 0, y(1) = 0, y′(1) = 1
Solution:
We first need to find the roots to (3).

2r(r−1) + 3r − 15 = 0
2r^2 + r − 15 = 0
(2r − 5)(r + 3) = 0 ⇒ r1 = 5/2, r2 = −3

The general solution is then,

y(x) = c1 x^(5/2) + c2 x^(−3)

To find the constants we differentiate and plug in the initial conditions as we did back in the second order differential equations chapter.

y′(x) = (5/2) c1 x^(3/2) − 3 c2 x^(−4)
0 = y(1) = c1 + c2
1 = y′(1) = (5/2) c1 − 3 c2 ⇒ c1 = 2/11, c2 = −2/11

The actual solution is then,

y(x) = (2/11) x^(5/2) − (2/11) x^(−3)
With the solution to this example we can now see why we required x>0. The second term would have division by zero if we allowed x=0 and the first term would give us square roots of negative numbers if we allowed x<0.
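As a sanity check we can verify the example above with SymPy (the variable names below are our own, not from the text): the characteristic quadratic has the roots found above, and the stated solution satisfies both the differential equation and the initial conditions.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# The differential equation from Example 1
ode = 2*x**2*y(x).diff(x, 2) + 3*x*y(x).diff(x) - 15*y(x)

# Roots of the characteristic quadratic 2r(r-1) + 3r - 15 = 0
r = sp.symbols('r')
roots = sp.solve(2*r*(r - 1) + 3*r - 15, r)

# The claimed actual solution
sol = sp.Rational(2, 11)*x**sp.Rational(5, 2) - sp.Rational(2, 11)*x**(-3)

# Substitute into the ODE; this should simplify to zero
residual = sp.simplify(ode.subs(y(x), sol).doit())
```

Running this, `residual` comes out to 0 and the roots are 5/2 and −3, matching the work above.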
Double Roots
This case will lead to the same problem that we’ve had every other time we’ve run into double roots (or double eigenvalues). We only get a single solution and will need a second solution. In this case it can be shown that the second solution will be,
y2(x) = x^r ln(x)
and so the general solution in this case is,
y(x) = c1 x^r + c2 x^r ln(x) = x^r ( c1 + c2 ln(x) )
We can again see a reason for requiring x>0. If we didn’t we’d have all sorts of problems with that logarithm.
Example 2 Find the general solution to the following differential equation.
x^2 y′′ − 7 x y′ + 16 y = 0
Solution:
First the roots of (3).
r(r−1) − 7r + 16 = 0
r^2 − 8r + 16 = 0
(r − 4)^2 = 0 ⇒ r = 4
So, the general solution is then,
y(x) = c1 x^4 + c2 x^4 ln(x)
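We can again verify this with SymPy (a sketch with our own symbol names): substituting the general solution back into the differential equation should give zero for any constants c1 and c2.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', positive=True)

# General solution for the double root r = 4
y = c1*x**4 + c2*x**4*sp.log(x)

# Substitute into x^2 y'' - 7x y' + 16y and simplify
residual = sp.simplify(x**2*sp.diff(y, x, 2) - 7*x*sp.diff(y, x) + 16*y)
```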
Complex Roots
In this case we’ll be assuming that our roots are of the form,
r1,2=λ±μi
If we take the first root we’ll get the following solution.
x^(λ + μi)
This is a problem since we don’t want complex solutions, we only want real solutions. We can eliminate this by recalling that,
x^r = e^(ln x^r) = e^(r ln x)
Plugging the root into this gives,

x^(λ + μi) = e^((λ + μi) ln x) = e^(λ ln x) e^(i μ ln x) = x^λ ( cos(μ ln x) + i sin(μ ln x) )

Note that we had to use Euler’s formula as well to get to the final step. Now, as we’ve done every other time we’ve seen solutions like this we can take the real part and the imaginary part and use those for our two solutions.
So, in the case of complex roots the general solution will be,
y(x) = c1 x^λ cos(μ ln x) + c2 x^λ sin(μ ln x) = x^λ ( c1 cos(μ ln x) + c2 sin(μ ln x) )
Once again, we can see why we needed to require x>0.
Example 3 Find the solution to the following differential equation.
x^2 y′′ + 3 x y′ + 4 y = 0
Solution:
Get the roots to (3) first as always.

r(r−1) + 3r + 4 = 0
r^2 + 2r + 4 = 0 ⇒ r1,2 = −1 ± √3 i

The general solution is then,

y(x) = x^(−1) ( c1 cos(√3 ln x) + c2 sin(√3 ln x) )
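A quick SymPy check of this example (names below are our own): the characteristic roots are complex, and the real-form solution satisfies the differential equation.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', positive=True)
r = sp.symbols('r')

# Roots of r(r-1) + 3r + 4 = 0, i.e. r^2 + 2r + 4 = 0
roots = sp.solve(r*(r - 1) + 3*r + 4, r)

# Real-valued general solution built from the complex roots
y = x**(-1)*(c1*sp.cos(sp.sqrt(3)*sp.log(x))
             + c2*sp.sin(sp.sqrt(3)*sp.log(x)))

residual = sp.simplify(x**2*sp.diff(y, x, 2) + 3*x*sp.diff(y, x) + 4*y)
```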
We should now talk about how to deal with x<0 since that is a possibility on occasion. To deal with this we need to use the variable transformation,
η=−x
In this case since x<0 we will get η>0. Now, define,
u(η)=y(x)=y(−η)
Then using the chain rule we can see that,
u′(η) = −y′(x) and u′′(η) = y′′(x)
With this transformation the differential equation becomes,

a η^2 u′′ + b η u′ + c u = 0
In other words, since η>0 we can use the work above to get solutions to this differential equation. We’ll also go back to x’s by using the variable transformation in reverse.
η=−x
Let’s just take the real, distinct case first to see what happens.

u(η) = c1 η^(r1) + c2 η^(r2)
y(x) = c1 (−x)^(r1) + c2 (−x)^(r2)
Now, we could do this for the rest of the cases if we wanted to, but before doing that let’s notice that if we recall the definition of absolute value,

|x| = x if x ≥ 0 and |x| = −x if x < 0

we can combine both of our solutions to this case into one and write the solution as,

y(x) = c1 |x|^(r1) + c2 |x|^(r2)
Note that we still need to avoid x=0 since we could still get division by zero. However, this is now a solution for any interval that doesn’t contain x=0.
We can do likewise for the other two cases and get the following solutions for any interval not containing x=0.

y(x) = c1 |x|^r + c2 |x|^r ln|x|
y(x) = c1 |x|^λ cos(μ ln|x|) + c2 |x|^λ sin(μ ln|x|)
We can make one more generalization before working one more example. A more general form of an Euler Equation is,
a (x − x0)^2 y′′ + b (x − x0) y′ + c y = 0
and we can ask for solutions in any interval not containing x=x0. The work for generating the solutions in this case is identical to all the above work and so isn’t shown here.
The solutions in this general case for any interval not containing x = x0 are,

y(x) = c1 |x − x0|^(r1) + c2 |x − x0|^(r2)
y(x) = c1 |x − x0|^r + c2 |x − x0|^r ln|x − x0|
y(x) = c1 |x − x0|^λ cos(μ ln|x − x0|) + c2 |x − x0|^λ sin(μ ln|x − x0|)
Where the roots are solutions to
a r(r−1) + b r + c = 0
Example 4 Find the solution to the following differential equation on any interval not containing x=−6.
3 (x + 6)^2 y′′ + 25 (x + 6) y′ − 16 y = 0
Solution:
So, we get the roots from the identical quadratic in this case.
3r(r−1) + 25r − 16 = 0
3r^2 + 22r − 16 = 0
(3r − 2)(r + 8) = 0 ⇒ r1 = 2/3, r2 = −8

The general solution is then,

y(x) = c1 |x + 6|^(2/3) + c2 |x + 6|^(−8)
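A hedged SymPy check of Example 4 (our own variable names): the quadratic has the roots claimed above, and each power of x + 6 solves the differential equation on an interval with x > −6, where |x + 6| = x + 6.

```python
import sympy as sp

r, x = sp.symbols('r x')

# Roots of 3r(r-1) + 25r - 16 = 0
roots = sp.solve(3*r*(r - 1) + 25*r - 16, r)

t = x + 6  # on any interval with x > -6 we have |x + 6| = x + 6
residuals = [
    sp.simplify(3*t**2*sp.diff(t**p, x, 2) + 25*t*sp.diff(t**p, x) - 16*t**p)
    for p in roots
]
```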
Review : Power Series
Before looking at series solutions to a differential equation we will first need to do a cursory review of power series. A power series is a series in the form,

f(x) = ∑_{n=0}^{∞} an (x − x0)^n (1)
where, x0 and an are numbers. We can see from this that a power series is a function of x. The function notation is not always included, but sometimes it is so we put it into the definition above.
Before proceeding with our review we should probably first recall just what series really are. Recall that series are really just summations. One way to write our power series is then,
f(x) = a0 + a1 (x − x0) + a2 (x − x0)^2 + a3 (x − x0)^3 + ⋯ (2)
Notice as well that if we needed to for some reason we could always write the power series as,

f(x) = a0 + ∑_{n=1}^{∞} an (x − x0)^n

All that we’re doing here is noticing that if we ignore the first term (corresponding to n=0) the remainder is just a series that starts at n=1. When we do this we say that we’ve stripped out the n=0, or first, term. We don’t need to stop at the first term either. If we strip out the first three terms we’ll get,

f(x) = a0 + a1 (x − x0) + a2 (x − x0)^2 + ∑_{n=3}^{∞} an (x − x0)^n
There are times when we’ll want to do this so make sure that you can do it.
Now, since power series are functions of x and we know that not every series will in fact exist, it then makes sense to ask if a power series will exist for all x. This question is answered by looking at the convergence of the power series. We say that a power series converges for x=c if the series,

∑_{n=0}^{∞} an (c − x0)^n

converges. Recall that this series will converge if the limit of partial sums,

lim_{N→∞} ∑_{n=0}^{N} an (c − x0)^n

exists and is finite. In other words, a power series will converge for x=c if

∑_{n=0}^{∞} an (c − x0)^n

is a finite number.
Note that a power series will always converge if x=x0. In this case the power series will become

∑_{n=0}^{∞} an (x0 − x0)^n = a0 + 0 + 0 + ⋯ = a0
With this we now know that power series are guaranteed to exist for at least one value of x. We have the following fact about the convergence of a power series.
Fact
Given a power series, (1), there will exist a number 0≤ρ≤∞ so that the power series will converge for |x−x0|<ρ and diverge for |x−x0|>ρ. This number is called the radius of convergence.
Determining the radius of convergence for most power series is usually quite simple if we use the ratio test.
Ratio Test
Given a power series compute,

L = |x − x0| lim_{n→∞} |a_{n+1} / a_n|

then the series converges if L < 1, diverges if L > 1, and the test gives no information if L = 1.
Let’s take a quick look at how this can be used to determine the radius of convergence.
Example 1 Determine the radius of convergence for the following power series.
Solution:
So, in this case we have,
Remember that to compute an+1 all we do is replace all the n’s in an with n+1. Using the ratio test then gives,
Now we know that the series will converge if,
and the series will diverge if,
In other words, the radius of convergence for this series is,
As this last example has shown, the radius of convergence is found almost immediately upon using the ratio test.
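Here is a hedged SymPy sketch of the ratio-test computation for a series of our own choosing, ∑_{n=1}^{∞} n (x − 3)^n / 5^n (this series and all names are assumptions for illustration, not the series from the example above).

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# Coefficient of (x - 3)^n in the assumed series
a_n = n / 5**n

# Ratio of successive coefficients, then its limit as n -> infinity
ratio = sp.simplify(a_n.subs(n, n + 1) / a_n)
L = sp.limit(ratio, n, sp.oo)

# The series converges when |x - 3| * L < 1, so the radius is 1/L
radius = 1 / L
```

For this series the limit is 1/5, so the radius of convergence is 5.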
So, why are we worried about the convergence of power series? Well in order for a series solution to a differential equation to exist at a particular x it will need to be convergent at that x. If it’s not convergent at a given x then the series solution won’t exist at that x. So, the convergence of power series is fairly important.
Next, we need to do a quick review of some of the basics of manipulating series. We’ll start with addition and subtraction.
There really isn’t a whole lot to addition and subtraction. All that we need to worry about is that the two series start at the same place and both have the same exponent of the x−x0. If they do then we can perform addition and/or subtraction as follows,

∑_{n=k}^{∞} an (x − x0)^n ± ∑_{n=k}^{∞} bn (x − x0)^n = ∑_{n=k}^{∞} (an ± bn) (x − x0)^n

In other words, all we do is add or subtract the coefficients and we get the new series.
One of the rules that we’re going to have when we get around to finding series solutions to differential equations is that the only x that we want in a series is the x that sits in (x−x0)^n. This means that we will need to be able to deal with series of the form,

(x − x0)^c ∑_{n=0}^{∞} an (x − x0)^n

where c is some constant. These are actually quite easy to deal with.

(x − x0)^c ∑_{n=0}^{∞} an (x − x0)^n = ∑_{n=0}^{∞} an (x − x0)^(n+c)

So, all we need to do is to multiply the term in front into the series and add exponents. Also note that in order to do this both the coefficient in front of the series and the term inside the series must be in the form x−x0. If they are not the same we can’t do this; we will eventually see how to deal with terms that aren’t in this form.
Next, we need to talk about differentiation of a power series. By looking at (2) it should be fairly easy to see how we will differentiate a power series. Since a series is just a giant summation all we need to do is differentiate the individual terms. The derivative of a power series will be,

f′(x) = ∑_{n=1}^{∞} n an (x − x0)^(n−1) = ∑_{n=0}^{∞} n an (x − x0)^(n−1)
So, all we need to do is just differentiate the term inside the series and we’re done. Notice as well that there are in fact two forms of the derivative. Since the n=0 term of the derivative is zero it won’t change the value of the series and so we can include it or not as we need to. In our work we will usually want the derivative to start at n=1, however there will be the occasional problem were it would be more convenient to start it at n=0.
Following how we found the first derivative it should make sense that the second derivative is,

f′′(x) = ∑_{n=2}^{∞} n(n−1) an (x − x0)^(n−2) = ∑_{n=1}^{∞} n(n−1) an (x − x0)^(n−2) = ∑_{n=0}^{∞} n(n−1) an (x − x0)^(n−2)
In this case since the n=0 and n=1 terms are both zero we can start at any of three possible starting points as determined by the problem that we’re working.
Next, we need to talk about index shifts. As we will see eventually we are going to want our power series written in terms of (x−x0)n and they often won’t, initially at least, be in that form. To get them into the form we need we will need to perform an index shift.
Index shifts themselves really aren’t concerned with the exponent on the x term, they instead are concerned with where the series starts as the following example shows.
Example 2 Write the following as a series that starts at n=0 instead of n=3.
Solution:
An index shift is a fairly simple manipulation to perform. First, we will notice that if we define i=n−3 then when n=3 we will have i=0. So, what we’ll do is rewrite the series in terms of i instead of n. We can do this by noting that n=i+3. So, everywhere we see an n in the actual series term we will replace it with an i+3. Doing this gives,
The upper limit won’t change in this process since infinity minus three is still infinity.
The final step is to realize that the letter we use for the index doesn’t matter and so we can just switch back to n’s.
Now, we usually don’t go through this process to do an index shift. All we do is notice that we dropped the starting point in the series by 3 and everywhere else we saw an n in the series we increased it by 3. In other words, all the n’s in the series move in the opposite direction that we moved the starting point.
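An index shift never changes the value of a series, only how it is written. As a quick numeric illustration with a series of our own choosing, here is a check that ∑_{n=3}^{N} n^2 x^n equals ∑_{n=0}^{N−3} (n+3)^2 x^(n+3).

```python
# Compare a truncated series against its index-shifted form
x = 0.5
N = 40

# Original form: starts at n = 3
original = sum(n**2 * x**n for n in range(3, N + 1))

# Shifted form: starts at n = 0, with n replaced by n + 3 inside
shifted = sum((n + 3)**2 * x**(n + 3) for n in range(0, N - 2))
```

Both sums add exactly the same terms, just labeled differently.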
Example 3 Write the following as a series that starts at n=5 instead of n=3.
Solution:
To get the series to start at n=5 all we need to do is notice that this means we will increase the starting point by 2 and so all the other n’s will need to decrease by 2. Doing this for the series in the previous example would give,
Now, as we noted when we started this discussion about index shifts the whole point is to get our series into terms of (x−x0)^n. We can see in the previous example that we did exactly that with an index shift. The original exponent on the (x+4) was n+2. To get this down to an n we needed to decrease the exponent by 2. This can be done with an index shift that increases the starting point by 2.
Let’s take a look at a couple of more examples of this.
Example 4 Write each of the following as a single series in terms of (x−x0)n.
Solution:
First, notice that there are two series here and the instructions clearly ask for only a single series. So, we will need to subtract the two series at some point in time. The vast majority of our work will be to get the two series prepared for the subtraction. This means that the two series can’t have any coefficients in front of them (other than one of course…), they will need to start at the same value of n and they will need the same exponent on the x−x0.
We’ll almost always want to take care of any coefficients first. So, we have one in front of the first series so let’s multiply that into the first series. Doing this gives,
Now, the instructions specify that the new series must be in terms of (x−x0)^n, so that’s the next thing that we’ve got to take care of. We will do this by an index shift on each of the series. The exponent on the first series needs to go up by two so we’ll shift the first series down by 2. The second series will need to shift up by 1 to get the exponent to move down by 1. Performing the index shifts gives us the following,
Finally, in order to subtract the two series we’ll need to get them to start at the same value of n. Depending on the series in the problem we can do this in a variety of ways. In this case let’s notice that since there is an n-1 in the second series we can in fact start the second series at n=1 without changing its value. Also note that in doing so we will get both of the series to start at n=1 and so we can do the subtraction. Our final answer is then,
In this part the main issue is the fact that we can’t just multiply the coefficient into the series this time since the coefficient doesn’t have the same form as the term inside the series. Therefore, the first thing that we’ll need to do is correct the coefficient so that we can bring it into the series. We do this as follows,
We can now move the coefficient into the series, but in the process we managed to pick up a second series. This will happen so get used to it. Moving the coefficients of both series in gives,
We now need to get the exponent in both series to be an n. This will mean shifting the first series up by 4 and the second series up by 3. Doing this gives,
In this case we can’t just start the first series at n=3 because there is not an n−3 sitting in that series to make the n=3 term zero. So, we won’t be able to do this part as we did in the first part of this example.
What we’ll need to do in this part is strip out the n=3 from the second series so they will both start at n=4. We will then be able to add the two series together. Stripping out the n=3 term from the second series gives,
We can now add the two series together.
This is what we’re looking for. We won’t worry about the extra term sitting in front of the series. When we finally get around to finding series solutions to differential equations we will see how to deal with that term there.
There is one final fact that we need to take care of before moving on. Before giving this fact for power series let’s notice that the only way for

a + bx + cx^2 = 0

to hold for all x is to have a = b = c = 0.
We’ve got a similar fact for power series.
Fact
If,

∑_{n=0}^{∞} an (x − x0)^n = 0

for all x then,

an = 0, n = 0, 1, 2, …
This fact will be key to our work with differential equations so don’t forget it.
Taylor Series
We are not going to be doing a whole lot with Taylor series once we get out of the review, but they are a nice way to get us back into the swing of dealing with power series. By the time most students reach this stage in their mathematical career they’ve not had to deal with power series for at least a semester or two. Remembering how Taylor series work will be a very convenient way to get comfortable with power series before we start looking at differential equations.
Taylor Series
If f(x) is an infinitely differentiable function then the Taylor Series of f(x) about x=x0 is,

f(x) = ∑_{n=0}^{∞} ( f^(n)(x0) / n! ) (x − x0)^n
Recall that
f^(0)(x) = f(x), f^(n)(x) = nth derivative of f(x)
Let’s take a look at an example.
Example 1 Determine the Taylor series for f(x)=ex about x=0.
Solution:
This is probably one of the easiest functions to find the Taylor series for. We just need to recall that,

f^(n)(x) = e^x, n = 0, 1, 2, …

and so we get,

f^(n)(0) = 1, n = 0, 1, 2, …

The Taylor series for this example is then,

e^x = ∑_{n=0}^{∞} x^n / n!
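As a quick sanity check (with SymPy, our own tooling choice) the partial sums of ∑ x^n / n! match the expansion SymPy computes for e^x directly.

```python
import sympy as sp

x = sp.symbols('x')
terms = 8

# Partial sum of the claimed Taylor series for e^x about x = 0
partial = sum(x**k / sp.factorial(k) for k in range(terms))

# SymPy's own series expansion of e^x, truncated to the same order
expansion = sp.series(sp.exp(x), x, 0, terms).removeO()

difference = sp.simplify(partial - expansion)
```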
Of course, it’s often easier to find the Taylor series about x=0 but we don’t always do that.
Example 2 Determine the Taylor series for f(x)=ex about x=−4.
Solution:
This problem is virtually identical to the previous problem. In this case we just need to notice that,

f^(n)(−4) = e^(−4), n = 0, 1, 2, …

The Taylor series for this example is then,

e^x = ∑_{n=0}^{∞} ( e^(−4) / n! ) (x + 4)^n
Let’s now do a Taylor series that requires a little more work.
Example 3 Determine the Taylor series for f(x)=cos(x) about x=0.
Solution:
This time there is no formula that will give us the derivative for each n so let’s start taking derivatives and plugging in x=0.

f^(0)(x) = cos(x), f^(0)(0) = 1
f^(1)(x) = −sin(x), f^(1)(0) = 0
f^(2)(x) = −cos(x), f^(2)(0) = −1
f^(3)(x) = sin(x), f^(3)(0) = 0
f^(4)(x) = cos(x), f^(4)(0) = 1

Once we reach this point it’s fairly clear that there is a pattern emerging here. Just what this pattern is has yet to be determined, but it does seem fairly clear that a pattern does exist.

Let’s plug what we’ve got into the formula for the Taylor series and see what we get.

cos(x) = 1 + 0·x − x^2/2! + 0·x^3 + x^4/4! + ⋯

So, every other term is zero.

We would like to write this in terms of a series, however finding a formula that is zero every other term and gives the correct answer for those that aren’t zero would be unnecessarily complicated. So, let’s rewrite what we’ve got above and while we’re at it renumber the terms as follows,

cos(x) = 1 − x^2/2! + x^4/4! − x^6/6! + ⋯ (n = 0, 1, 2, 3, …)

With this “renumbering” we can fairly easily get a formula for the Taylor series of the cosine function about x=0.

cos(x) = ∑_{n=0}^{∞} (−1)^n x^(2n) / (2n)!

For practice you might want to see if you can verify that the Taylor series for the sine function about x=0 is,

sin(x) = ∑_{n=0}^{∞} (−1)^n x^(2n+1) / (2n+1)!
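Both formulas can be checked with SymPy (a hedged sketch with our own names): the partial sums of the cosine and sine series match SymPy’s own expansions to the same order.

```python
import sympy as sp

x = sp.symbols('x')
N = 5  # number of nonzero terms to compare

# Partial sums of the claimed series for cos(x) and sin(x)
cos_partial = sum((-1)**n * x**(2*n) / sp.factorial(2*n) for n in range(N))
sin_partial = sum((-1)**n * x**(2*n + 1) / sp.factorial(2*n + 1)
                  for n in range(N))

# SymPy's own expansions, truncated to the same order
cos_diff = sp.simplify(cos_partial
                       - sp.series(sp.cos(x), x, 0, 2*N).removeO())
sin_diff = sp.simplify(sin_partial
                       - sp.series(sp.sin(x), x, 0, 2*N + 1).removeO())
```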
We need to look at one more example of a Taylor series. This example is both tricky and very easy.
Example 4 Determine the Taylor series for f(x)=3x2−8x+2 about x=2.
Solution:
There’s not much to do here except to take some derivatives and evaluate at the point.

f(x) = 3x^2 − 8x + 2, f(2) = −2
f′(x) = 6x − 8, f′(2) = 4
f′′(x) = 6, f′′(2) = 6
f^(n)(x) = 0, f^(n)(2) = 0, n ≥ 3

So, in this case the derivatives will all be zero after a certain order. That happens occasionally and will make our work easier. Setting up the Taylor series then gives,

3x^2 − 8x + 2 = −2 + 4(x − 2) + 3(x − 2)^2

In this case the Taylor series terminates and only has three terms. Note that since we are after the Taylor series we do not multiply the 4 through on the second term or square out the third term. All the terms with the exception of the constant should contain an x−2.
Note in this last example that if we were to multiply the Taylor series we would get our original polynomial. This should not be too surprising as both are polynomials and they should be equal.
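A hedged SymPy sketch of that last observation (variable names are our own): building the Taylor series about x = 2 from the derivatives and expanding it recovers the original polynomial.

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**2 - 8*x + 2

# Taylor series about x = 2: sum of f^(n)(2)/n! * (x-2)^n for n = 0, 1, 2
taylor = sum(sp.diff(f, x, n).subs(x, 2) / sp.factorial(n) * (x - 2)**n
             for n in range(3))
```

Expanding `taylor` gives back 3x^2 − 8x + 2 exactly.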
We now need a quick definition that will make more sense to give here rather than in the next section where we actually need it since it deals with Taylor series.
Definition
A function, f(x), is called analytic at x=a if the Taylor series for f(x) about x=a has a positive radius of convergence and converges to f(x).
We need to give one final note before proceeding into the next section. We started this section out by saying that we weren’t going to be doing much with Taylor series after this section. While that is correct it is only correct because we are going to be keeping the problems fairly simple. For more complicated problems we would also be using quite a few Taylor series.
Laplace's Equation
The next partial differential equation that we’re going to solve is the 2-D Laplace’s equation,

∇²u = ∂²u/∂x² + ∂²u/∂y² = 0
A natural question to ask before we start learning how to solve this is: does this equation come up naturally anywhere? The answer is a very resounding yes! If we consider the 2-D heat equation,

∂u/∂t = k ( ∂²u/∂x² + ∂²u/∂y² ) + Q

we can see that Laplace’s equation would correspond to finding the equilibrium solution (i.e. time independent solution) if there were no sources. So, this is an equation that can arise from physical situations.
How we solve Laplace’s equation will depend upon the geometry of the 2-D object we’re solving it on. Let’s start out by solving it on the rectangle given by 0≤x≤L,0≤y≤H. For this geometry Laplace’s equation along with the four boundary conditions will be,
∂²u/∂x² + ∂²u/∂y² = 0
u(x,0) = f1(x), u(x,H) = f2(x), u(0,y) = g1(y), u(L,y) = g2(y) (1)
One of the important things to note here is that unlike the heat equation we will not have any initial conditions here. Both variables are spatial variables and each variable occurs in a 2nd order derivative and so we’ll need two boundary conditions for each variable.
Next, let’s notice that while the partial differential equation is both linear and homogeneous the boundary conditions are only linear and are not homogeneous. This creates a problem because separation of variables requires homogeneous boundary conditions.
To completely solve Laplace’s equation we’re in fact going to have to solve it four times. Each time we solve it only one of the four boundary conditions can be nonhomogeneous while the remaining three will be homogeneous.
The four problems are probably best shown with a quick sketch so let’s consider the following sketch.
Now, once we solve all four of these problems the solution to our original system, (1), will be,
u(x,y)=u1(x,y)+u2(x,y)+u3(x,y)+u4(x,y)
Because we know that Laplace’s equation is linear and homogeneous and each of the pieces is a solution to Laplace’s equation then the sum will also be a solution. Also, this will satisfy each of the four original boundary conditions. We’ll verify the first one and leave the rest to you to verify.
u(x,0)=u1(x,0)+u2(x,0)+u3(x,0)+u4(x,0)=f1(x)+0+0+0=f1(x)
In each of these cases the lone nonhomogeneous boundary condition will take the place of the initial condition in the heat equation problems that we solved a couple of sections ago. We will apply separation of variables to each problem and find a product solution that will satisfy the differential equation and the three homogeneous boundary conditions. Using the Principle of Superposition we’ll find a solution to the problem and then apply the final boundary condition to determine the value of the constant(s) that are left in the problem. The process is nearly identical in many ways to what we did when we were solving the heat equation.
We’re going to do two of the cases here and we’ll leave the remaining two for you to do.
Example 1 Find a solution to the following partial differential equation.
Solution:
We’ll start by assuming that our solution will be in the form,
u4(x,y)=h(x)φ(y)
and then recall that we performed separation of variables on this problem (with a small change in notation) back in Example 5 of the Separation of Variables section. So from that problem we know that separation of variables yields the following two ordinary differential equations that we’ll need to solve.

d²h/dx² − λ h = 0, h(L) = 0
d²φ/dy² + λ φ = 0, φ(0) = 0, φ(H) = 0
Note that in this case, unlike the heat equation, we must solve the boundary value problem first. Without knowing what λ is there is no way that we can solve the first differential equation here with only one boundary condition, since the sign of λ will affect the solution.
Let’s also notice that we solved the boundary value problem in Example 1 of Solving the Heat Equation and so there is no reason to resolve it here. Taking a change of letters into account the eigenvalues and eigenfunctions for the boundary value problem here are,

λn = (nπ/H)², φn(y) = sin(nπy/H), n = 1, 2, 3, …
Now that we know what the eigenvalues are let’s write down the first differential equation with λ plugged in.

d²h/dx² − (nπ/H)² h = 0, h(L) = 0
Because the coefficient of the h is positive we know that a solution to this is,

h(x) = c1 cosh(nπx/H) + c2 sinh(nπx/H)

However, this is not really suited for dealing with the h(L)=0 boundary condition. So, let’s also notice that the following is also a solution.

h(x) = c1 cosh( nπ(x−L)/H ) + c2 sinh( nπ(x−L)/H )
You should verify this by plugging this into the differential equation and checking that it is in fact a solution. Applying the lone boundary condition to this “shifted” solution gives,
0=h(L)=c1
The solution to the first differential equation is now,

h(x) = c2 sinh( nπ(x−L)/H )
and this is all the farther we can go with this because we only had a single boundary condition. That is not really a problem however because we now have enough information to form the product solution for this partial differential equation.
A product solution for this partial differential equation is,

un(x,y) = Bn sinh( nπ(x−L)/H ) sin( nπy/H ), n = 1, 2, 3, …

The Principle of Superposition then tells us that a solution to the partial differential equation is,

u4(x,y) = ∑_{n=1}^{∞} Bn sinh( nπ(x−L)/H ) sin( nπy/H )
and this solution will satisfy the three homogeneous boundary conditions.
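A hedged SymPy check (the symbols k and L below are our own stand-ins, with k playing the role of nπ/H): a product of the form sinh(k(x − L))·sin(ky) satisfies Laplace’s equation and vanishes at x = L.

```python
import sympy as sp

x, y, k, L = sp.symbols('x y k L')

# Assumed product-solution shape
u = sp.sinh(k*(x - L)) * sp.sin(k*y)

# Laplacian u_xx + u_yy should simplify to zero
laplacian = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2))
```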
To determine the constants all we need to do is apply the final boundary condition,

u4(0, y) = ∑_{n=1}^{∞} Bn sinh( −nπL/H ) sin( nπy/H )

Now, in the previous problems we’ve done this has clearly been a Fourier series of some kind and in fact it still is. The difference here is that the coefficients of the Fourier sine series are now,

Bn sinh( −nπL/H )

instead of just Bn. We might be a little more tempted to use the orthogonality of the sines to derive formulas for the Bn, however we can still reuse the work that we’ve done previously to get formulas for the coefficients here.
Remember that a Fourier sine series is just a series of coefficients (depending on n) times a sine. We still have that here, except the “coefficients” are a little messier this time than what we saw when we first dealt with Fourier series. So, the coefficients can be found using exactly the same formula from the Fourier sine series section of a function on 0≤y≤H; we just need to be careful with the coefficients.
The formulas for the Bn are a little messy this time in comparison to the other problems we’ve done but they aren’t really all that messy.
Okay, let’s do one of the other problems here so we can make a couple of points.
Example 2 Find a solution to the following partial differential equation.
Solution:
Okay, for the first time we’ve hit a problem where we haven’t previously done the separation of variables so let’s go through that. We’ll assume the solution is in the form,
u3(x,y)=h(x)φ(y)
We’ll apply this to the homogeneous boundary conditions first since we’ll need those once we reach the point of choosing the separation constant. We’ll let you verify that the boundary conditions become,
h(0) = 0, h(L) = 0, φ(0) = 0
Next, we’ll plug the product solution into the differential equation.

φ(y) d²h/dx² + h(x) d²φ/dy² = 0
Now, at this point we need to choose a separation constant. We’ve got two homogeneous boundary conditions on h so let’s choose the constant so that the differential equation for h yields a familiar boundary value problem so we don’t need to redo any of that work. In this case, unlike the u4 case, we’ll need −λ.
This is a good problem in that it clearly illustrates that sometimes you need λ as a separation constant and at other times you need −λ. Not only that, but sometimes all it takes is a small change in the boundary conditions to force the change.
So, after adding in the separation constant we get,

(1/h) d²h/dx² = − (1/φ) d²φ/dy² = −λ

and the two ordinary differential equations that we get from this case (along with their boundary conditions) are,

d²h/dx² + λ h = 0, h(0) = 0, h(L) = 0
d²φ/dy² − λ φ = 0, φ(0) = 0
Now, as we noted above when we were deciding which separation constant to work with we’ve already solved the first boundary value problem. So, the eigenvalues and eigenfunctions for the first boundary value problem are,

λn = (nπ/L)², hn(x) = sin(nπx/L), n = 1, 2, 3, …

The second differential equation is then,

d²φ/dy² − (nπ/L)² φ = 0, φ(0) = 0

Because the coefficient of the φ is positive we know that a solution to this is,

φ(y) = c1 cosh(nπy/L) + c2 sinh(nπy/L)
In this case, unlike the previous example, we won’t need to use a shifted version of the solution because this will work just fine with the boundary condition we’ve got for this. So, applying the boundary condition to this gives,
0=φ(0)=c1
and this solution becomes,

φ(y) = c2 sinh(nπy/L)

The product solution for this case is then,

un(x,y) = Bn sinh(nπy/L) sin(nπx/L), n = 1, 2, 3, …

The solution to this partial differential equation is then,

u3(x,y) = ∑_{n=1}^{∞} Bn sinh(nπy/L) sin(nπx/L)
Finally, let’s apply the nonhomogeneous boundary condition to get the coefficients for this solution.
As we’ve come to expect this is again a Fourier sine series (although it won’t always be a sine) and so using previously done work, instead of using the orthogonality of the sines, we see that,
Okay, we’ve worked two of the four cases that would need to be solved in order to completely solve (1). As we’ve seen each case was very similar and yet also had some differences. We saw the use of both separation constants and that sometimes we need to use a “shifted” solution in order to deal with one of the boundary conditions.
Before moving on let’s note that we used prescribed temperature boundary conditions here, but we could just have easily used prescribed flux boundary conditions or a mix of the two. No matter what kind of boundary conditions we have they will work the same.
As a final example in this section let’s take a look at solving Laplace’s equation on a disk of radius a with a prescribed temperature on the boundary. Because we are now on a disk it makes sense that we should probably do this problem in polar coordinates and so the first thing we need to do is write down Laplace’s equation in terms of polar coordinates.
Laplace’s equation in terms of polar coordinates is,

(1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂θ² = 0
Okay, this is a lot more complicated than the Cartesian form of Laplace’s equation and it will add in a few complexities to the solution process, but it isn’t as bad as it looks. The main problem that we’ve got here really is the fact that we’ve got a single boundary condition. Namely,
u(a,θ)=f(θ)
This specifies the temperature on the boundary of the disk. We are clearly going to need three more conditions however since we’ve got a 2nd derivative in both r and θ.
When we solved Laplace’s equation on a rectangle we used conditions at the end points of the range of each variable and so it makes some sense here that we should probably need the same kind of conditions here as well. The ranges on our variables here are,

0 ≤ r ≤ a, −π ≤ θ ≤ π
Note that the limits on θ are somewhat arbitrary and are chosen for convenience here. Any set of limits that covers the complete disk will work, however as we’ll see with these limits we will get another familiar boundary value problem arising. The best choice here is often not known until the separation of variables is done. At that point you can go back and make your choices.
Okay, we now need conditions for r=0 and θ=±π. First, note that Laplace’s equation in terms of polar coordinates is singular at r=0 (i.e. we get division by zero). However, we know from physical considerations that the temperature must remain finite everywhere in the disk and so let’s impose the condition that,
|u(0,θ)|<∞
This may seem like an odd condition and it definitely doesn’t conform to the other boundary conditions that we’ve seen to this point, but it will work out for us as we’ll see.
Now, for boundary conditions for θ we’ll do something similar to what we did for the 1-D heat equation on a thin ring. The two limits on θ are really just different sides of a line in the disk and so let’s use the periodic conditions there. In other words,

u(r, −π) = u(r, π), ∂u/∂θ (r, −π) = ∂u/∂θ (r, π)
With all of this out of the way let’s solve Laplace’s equation on a disk of radius a.
Example 3 Find a solution to the following partial differential equation.
Solution:
In this case we’ll assume that the solution will be in the form,
u(r,θ)=φ(θ)G(r)
Plugging this into the periodic boundary conditions gives,

φ(−π) = φ(π), dφ/dθ (−π) = dφ/dθ (π)

Now let’s plug the product solution into the partial differential equation.

( φ(θ)/r ) d/dr ( r dG/dr ) + ( G(r)/r² ) d²φ/dθ² = 0
This is definitely more of a mess than we’ve seen to this point when it comes to separating variables. In this case simply dividing by the product solution, while still necessary, will not be sufficient to separate the variables. We are also going to have to multiply by r² to completely separate the variables. So, doing all that, moving each term to one side of the equal sign and introducing a separation constant gives,

(r/G) d/dr ( r dG/dr ) = − (1/φ) d²φ/dθ² = λ
We used λ as the separation constant this time to get the differential equation for φ to match up with one we’ve already done.
The ordinary differential equations we get are then,

r d/dr ( r dG/dr ) − λ G = 0
d²φ/dθ² + λ φ = 0, φ(−π) = φ(π), dφ/dθ (−π) = dφ/dθ (π)
Now, we solved the boundary value problem above in Example 3 of the Eigenvalues and Eigenfunctions section of the previous chapter and so there is no reason to redo it here. The eigenvalues and eigenfunctions for this problem are,

λn = n², with eigenfunctions cos(nθ), n = 0, 1, 2, 3, … and sin(nθ), n = 1, 2, 3, …
Plugging this into the first ordinary differential equation and using the product rule on the derivative we get,

r² G′′ + r G′ − n² G = 0

This is an Euler differential equation and so we know that solutions will be in the form G(r) = r^p provided p is a root of,

p(p−1) + p − n² = 0 ⇒ p² = n² ⇒ p = ±n

So, because the n=0 case will yield a double root, versus two real distinct roots if n≠0, we have two cases here. They are,

G(r) = c1 + c2 ln(r), n = 0
G(r) = c1 r^n + c2 r^(−n), n = 1, 2, 3, …
Now we need to recall the condition that |G(0)| < ∞. Each of the solutions above will have G(r) → ∞ as r → 0 unless c2 = 0. Therefore, in order to meet this boundary condition we must have,

c2 = 0
Therefore, the solution reduces to,
G(r) = c1 r^n, n = 0, 1, 2, 3, …
and notice that with the second term gone we can combine the two solutions into a single solution.
So, we have two product solutions for this problem. They are,

un(r,θ) = An r^n cos(nθ), n = 0, 1, 2, 3, …
un(r,θ) = Bn r^n sin(nθ), n = 1, 2, 3, …

Our solution is then the sum of all these solutions or,

u(r,θ) = ∑_{n=0}^{∞} An r^n cos(nθ) + ∑_{n=1}^{∞} Bn r^n sin(nθ)
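A quick SymPy check (symbols are our own) that functions of the form r^n cos(nθ) satisfy the polar form of Laplace’s equation, u_rr + (1/r)u_r + (1/r²)u_θθ = 0; the sine case works the same way.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
n = sp.symbols('n', positive=True)

# Assumed product-solution shape on the disk
u = r**n * sp.cos(n*theta)

# Polar Laplacian should simplify to zero
polar_laplacian = sp.simplify(
    sp.diff(u, r, 2) + sp.diff(u, r)/r + sp.diff(u, theta, 2)/r**2
)
```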
Applying our final boundary condition to this gives,

f(θ) = u(a,θ) = ∑_{n=0}^{∞} An a^n cos(nθ) + ∑_{n=1}^{∞} Bn a^n sin(nθ)

This is a full Fourier series for f(θ) on the interval −π ≤ θ ≤ π, i.e. L = π. Also note that once again the “coefficients” of the Fourier series are a little messier than normal, but not quite as messy as when we were working on a rectangle above. We could once again use the orthogonality of the sines and cosines to derive formulas for the An and Bn or we could just use the formulas from the Fourier series section with L = π to get,
Upon solving for the coefficients we get,

A0 = (1/(2π)) ∫_{−π}^{π} f(θ) dθ
An = (1/(π a^n)) ∫_{−π}^{π} f(θ) cos(nθ) dθ, n = 1, 2, 3, …
Bn = (1/(π a^n)) ∫_{−π}^{π} f(θ) sin(nθ) dθ, n = 1, 2, 3, …
Prior to this example most of the separation of variable problems tended to look very similar and it is easy to fall in to the trap of expecting everything to look like what we’d seen earlier. With this example we can see that the problems can definitely be different on occasion so don’t get too locked into expecting them to always work in exactly the same way.
Before we leave this section let’s briefly talk about what you’d need to do on a partial disk. The periodic boundary conditions above were only there because we had a whole disk. What if we only had a disk between say α≤θ≤β.
When we’ve got a partial disk we now have two new boundaries that were not present in the whole disk and the periodic boundary conditions will no longer make sense. The periodic boundary conditions are only used when we have the two “boundaries” in contact with each other and that clearly won’t be the case with a partial disk.
So, if we stick with prescribed temperature boundary conditions we would then have the following conditions,

u(a,θ) = f(θ), |u(0,θ)| < ∞, u(r,α) = g1(r), u(r,β) = g2(r)
Also note that in order to use separation of variables on these conditions we’d need to have g1(r)=g2(r)=0 to make sure they are homogeneous.
As a final note we could just have easily used flux boundary conditions for the last two if we’d wanted to. The boundary value problem would be different, but outside of that the problem would work in the same manner.
We could also use a flux condition on the r=a boundary but we haven’t really talked yet about how to apply that kind of condition to our solution. Recall that this is the condition that we apply to our solution to determine the coefficients. It’s not difficult to use we just haven’t talked about this kind of condition yet. We’ll be doing that in the next section.