Fourier series

Fourier series basics

A Fourier series looks like this:

$$f(x) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty}\big(a_n\cos(nx) + b_n\sin(nx)\big) = \tfrac{1}{2}a_0 + a_1\cos(x) + b_1\sin(x) + a_2\cos(2x) + b_2\sin(2x) + \cdots$$

The constants $a_n$ and $b_n$ are called Fourier coefficients. The summand terms $a_n\cos(nx)$ and $b_n\sin(nx)$ are called Fourier components or frequency components. They represent sinusoidal contributions with frequency $n/2\pi$.

A Fourier series can be viewed as having two parts, the even part and the odd part:

$$f(x) = \tfrac{1}{2}a_0 + \underbrace{\sum_{n=1}^{\infty} a_n\cos(nx)}_{\text{even}} + \underbrace{\sum_{n=1}^{\infty} b_n\sin(nx)}_{\text{odd}} = \tfrac{1}{2}a_0 + \underbrace{a_1\cos(x) + a_2\cos(2x) + a_3\cos(3x) + \cdots}_{\text{even}} + \underbrace{b_1\sin(x) + b_2\sin(2x) + b_3\sin(3x) + \cdots}_{\text{odd}}$$

Even and odd functions

An “even” function $f(x)$ satisfies $f(-x) = f(x)$, while an “odd” function $f(x)$ satisfies $f(-x) = -f(x)$.

  • So $\cos(nx)$ is even, while $\sin(nx)$ is odd
  • Sum rule: (even) + (even) = (even), while (odd) + (odd) = (odd)
  • Product rule: (even)(even) = (even) and (odd)(odd) = (even), while (even)(odd) = (odd)

Given any function $f(x)$, we can write it as a sum of even and odd parts. Let $f_e(x) = \tfrac{1}{2}(f(x) + f(-x))$ and $f_o(x) = \tfrac{1}{2}(f(x) - f(-x))$. Then $f_e(x)$ is even, $f_o(x)$ is odd, and $f(x) = f_e(x) + f_o(x)$.
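The decomposition into even and odd parts is easy to check numerically. Here is a quick Python sketch (the choice of $e^x$ as the sample function is just an illustration; for it, $f_e = \cosh$ and $f_o = \sinh$):

```python
import math

def even_part(f, x):
    # f_e(x) = (f(x) + f(-x)) / 2
    return 0.5 * (f(x) + f(-x))

def odd_part(f, x):
    # f_o(x) = (f(x) - f(-x)) / 2
    return 0.5 * (f(x) - f(-x))

f = math.exp   # sample function; any f works
x = 0.7
fe, fo = even_part(f, x), odd_part(f, x)
assert abs(fe + fo - f(x)) < 1e-12           # f = f_e + f_o
assert abs(even_part(f, -x) - fe) < 1e-12    # f_e is even
assert abs(odd_part(f, -x) + fo) < 1e-12     # f_o is odd
```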

Even and odd parts are uniquely determined

If $f_e + f_o = g_e + g_o$, then $f_e - g_e = h(x) = g_o - f_o$ is both even and odd. But if $h(-x) = h(x)$ and $h(-x) = -h(x)$, then $h(x) = -h(x)$, and necessarily $h(x) = 0$.

Question 14-01

Even and odd parts of a Fourier series

Let $f(x) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty}\big(a_n\cos(nx) + b_n\sin(nx)\big)$.

Prove that the even part $f_e(x)$ is the cosine series, while the odd part $f_o(x)$ is the sine series.

Periodic functions

  • A periodic function $g(x)$ satisfies $g(x+T) = g(x)$ for some constant period $T > 0$.
  • Trig functions are periodic with period $T = 2\pi$. Fourier series are also periodic, by default with period $T = 2\pi$.
  • Periods can be stretched or contracted by the change of variables $x \mapsto x/L$ for some $L > 0$. For example, if $f(x)$ has period $2\pi$, then $f(x/L)$ has period $2\pi L$.
  • Periodic functions are completely described by their restrictions to a single period cycle. For example, $\cos(x)$ and $\sin(x)$ are determined by their values on the interval $[-\pi, \pi]$.

Even and odd periodic extensions. It is sometimes useful to take a function $f(x)$ defined on a half interval like $[0, \pi]$ and extend it to a function defined on the whole interval $[-\pi, \pi]$, and from there to all of $\mathbb{R}$ as a periodic function with period $2\pi$.

Even periodic extension:
$$g(x) = \begin{cases} f(x) & x \in [0, \pi] \\ f(-x) & x \in [-\pi, 0] \\ g(x + 2\pi) & \text{all } x \end{cases}$$

Odd periodic extension:
$$g(x) = \begin{cases} f(x) & x \in [0, \pi] \\ -f(-x) & x \in [-\pi, 0] \\ g(x + 2\pi) & \text{all } x \end{cases}$$

Fourier series can be used to approximate many other functions. These functions should be either periodic functions on $\mathbb{R}$, or else defined or considered only on a finite interval $[a, b]$. (Fourier series can be shifted and scaled to accommodate intervals $[a, b]$ other than $[-\pi, \pi]$.)

Fourier series convergence

If $f(x)$ is a function for which $f(x)$ and $\frac{df}{dx}$ are both piecewise continuous, then a Fourier series is determined which approximates $f(x)$.

This series converges to the exact value $f(x_0)$ at points $x_0$ where $f$ is continuous, and converges to the average of $\lim_{x \to x_0^-} f(x)$ and $\lim_{x \to x_0^+} f(x)$ at points $x_0$ where $f$ is discontinuous.

Fourier series also converge in many other cases, for example for all functions of bounded variation, such as any difference of monotone functions.

Fourier series are frequently used to reconstruct “signals” in electronics, for example, the square wave or the sawtooth wave:

Try approximating a variety of signals on this Mathlet: https://mathlets.org/mathlets/fourier-coefficients/ You can turn on the “Distance” number which quantifies the approximation error. For Signal A, using only Odd terms of the Sine series, I was able to get the error distance to 0.14420. Can you beat it?

Gibbs phenomenon

When f is not continuous but only piecewise continuous, it can happen that Fourier convergence is not uniform across the interval.

The Gibbs phenomenon is illustrated at the corners of the square wave. In this case, the overshoot remains at about 9% of the jump regardless of the number of Fourier components used.
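The persistent overshoot can be observed numerically with partial sums of the square-wave sine series derived later in this section. The following Python sketch (the sampling grid and cutoffs are arbitrary choices) records the peak value just to the right of the jump at $x = 0$:

```python
import math

def square_partial(x, N):
    # partial sum (4/pi) * sum over odd n <= N of sin(nx)/n for the square wave
    return (4 / math.pi) * sum(math.sin(n * x) / n for n in range(1, N + 1, 2))

# record the highest value on a fine grid just right of the jump at x = 0
peaks = {}
for N in (21, 101, 501):
    peaks[N] = max(square_partial(k * math.pi / 2000, N) for k in range(1, 400))

# the overshoot does not shrink as N grows: each peak stays near 1.18,
# i.e. about 9% of the jump (size 2) above the limiting value 1
for p in peaks.values():
    assert 1.15 < p < 1.21
```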

Fourier series: orthogonality relations

The terms of a Fourier series have an orthogonality property that is essential for their study. This property allows one to compute the Fourier coefficients quickly, and to show that Fourier partial sums give good approximations.

Define an inner product on pairs of functions $[-\pi, \pi] \to \mathbb{R}$ as follows:

$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)\,g(x)\,dx.$$

This product behaves like a dot product, having linearity in each input as well as symmetry:

  • Symmetry: $\langle f, g \rangle = \langle g, f \rangle$
  • Sums: $\langle f_1 + f_2, g \rangle = \langle f_1, g \rangle + \langle f_2, g \rangle$
  • Scalars: $\langle \lambda f, g \rangle = \lambda \langle f, g \rangle$

Two functions $f$ and $g$ are said to be orthogonal when $\langle f, g \rangle = 0$. The norm of $f$ is defined as $\|f\| = \sqrt{\langle f, f \rangle}$.

An important feature of Fourier components is that they constitute an orthogonal set of functions, as summarized in the following facts:

$$\langle \cos(nx), \cos(mx) \rangle = \int_{-\pi}^{\pi} \cos(nx)\cos(mx)\,dx = \begin{cases} \pi & n = m \\ 0 & n \neq m \end{cases}$$
$$\langle \sin(nx), \sin(mx) \rangle = \int_{-\pi}^{\pi} \sin(nx)\sin(mx)\,dx = \begin{cases} \pi & n = m \\ 0 & n \neq m \end{cases}$$
$$\langle \cos(nx), \sin(mx) \rangle = \int_{-\pi}^{\pi} \cos(nx)\sin(mx)\,dx = 0.$$

Another way to put this orthogonality: any pair of Fourier components ϕ(x) and ψ(x) satisfies

$$\langle \phi, \psi \rangle = \begin{cases} \pi & \phi = \psi \\ 0 & \phi \neq \psi. \end{cases}$$

Here $\phi$ and $\psi$ could be any of $\cos(nx)$ or $\sin(nx)$ for any $n \geq 1$. (The constant function $1 = \cos(0x)$ is the one exception: $\langle 1, 1 \rangle = 2\pi$, which is why the $a_0$ term carries the factor $\tfrac{1}{2}$.)
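These relations can be spot-checked by numerical integration. A Python sketch (the midpoint rule with 20000 panels is an arbitrary but accurate quadrature choice for these integrands):

```python
import math

def inner(f, g, n=20000):
    # <f, g> = integral of f(x) g(x) over [-pi, pi], midpoint rule
    h = 2 * math.pi / n
    return sum(f(-math.pi + (k + 0.5) * h) * g(-math.pi + (k + 0.5) * h)
               for k in range(n)) * h

c = lambda n: (lambda x: math.cos(n * x))
s = lambda n: (lambda x: math.sin(n * x))

assert abs(inner(c(2), c(3))) < 1e-6             # distinct cosines: orthogonal
assert abs(inner(c(2), c(2)) - math.pi) < 1e-6   # <cos(nx), cos(nx)> = pi
assert abs(inner(s(1), s(4))) < 1e-6             # distinct sines: orthogonal
assert abs(inner(c(5), s(5))) < 1e-6             # cosine vs sine: always orthogonal
```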

Question 14-02

Establishing Fourier orthogonality

Show that $\int_{-\pi}^{\pi} \cos(nx)\cos(mx)\,dx = \begin{cases} \pi & n = m \\ 0 & n \neq m \end{cases}$ for $n, m \geq 1$. Hint: average the trig identities for $\cos(A+B)$ and $\cos(A-B)$.

Using the inner product, we can easily find the Fourier coefficients of a given function that has a Fourier series.

If $f(x) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty}\big(a_n\cos(nx) + b_n\sin(nx)\big)$, then:

$$a_n = \frac{1}{\pi}\langle f, \cos(nx) \rangle = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx, \qquad n = 0, 1, 2, 3, \ldots$$
$$b_n = \frac{1}{\pi}\langle f, \sin(nx) \rangle = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx, \qquad n = 1, 2, 3, \ldots$$
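The coefficient formulas can be implemented directly by quadrature. The sketch below (the function name and the test function are illustrative, not from the text) recovers the coefficients of a known trigonometric polynomial:

```python
import math

def fourier_coeffs(f, N, m=20000):
    # a_n = (1/pi) <f, cos(nx)>, b_n = (1/pi) <f, sin(nx)>, midpoint rule
    h = 2 * math.pi / m
    xs = [-math.pi + (k + 0.5) * h for k in range(m)]
    a = [sum(f(x) * math.cos(n * x) for x in xs) * h / math.pi for n in range(N + 1)]
    b = [0.0] + [sum(f(x) * math.sin(n * x) for x in xs) * h / math.pi
                 for n in range(1, N + 1)]
    return a, b

# sanity check on a known series: f(x) = 3 + 2 cos(2x) - sin(x)
f = lambda x: 3 + 2 * math.cos(2 * x) - math.sin(x)
a, b = fourier_coeffs(f, 3)
assert abs(a[0] - 6) < 1e-6   # a_0 = 6, since the constant term is a_0 / 2
assert abs(a[2] - 2) < 1e-6
assert abs(b[1] + 1) < 1e-6
```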

For comparison: orthogonal bases and coefficients

These formulas should be compared to the basis representation of a vector using orthogonal projections.

Suppose $\mathbf{v}_1, \ldots, \mathbf{v}_n$ are orthogonal vectors, and we have a linear combination representation of $\mathbf{u}$:

$$\mathbf{u} = a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n.$$

Then by taking the dot product of both sides with $\frac{1}{\mathbf{v}_i \cdot \mathbf{v}_i}\mathbf{v}_i$, we have:

$$a_i = \frac{\mathbf{u} \cdot \mathbf{v}_i}{\mathbf{v}_i \cdot \mathbf{v}_i}.$$

Comparing to the notation of the Fourier series, $\cos(nx)$ and $\sin(nx)$ are like the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$, the fraction $\frac{1}{\pi}$ is like the fraction $\frac{1}{\mathbf{v}_i \cdot \mathbf{v}_i}$, and the dot product $\mathbf{u} \cdot \mathbf{v}_i$ is like the inner product $\langle f, \cos(nx) \rangle$ or $\langle f, \sin(nx) \rangle$.

The main novelties are: (i) the even and odd components of Fourier series, cos(nx) and sin(nx), are written separately, and (ii) Fourier components constitute an infinite list of items.
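The finite-dimensional analogue is short enough to write out in full. A Python sketch (the particular vectors are arbitrary examples):

```python
# coefficients of u in an orthogonal (not orthonormal) basis via projection
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

v1, v2 = [1, 1, 0], [1, -1, 0]     # orthogonal vectors: dot(v1, v2) == 0
u = [5, 1, 0]                      # a vector in their span
coeffs = [dot(u, v) / dot(v, v) for v in (v1, v2)]   # a_i = (u . v_i)/(v_i . v_i)
recon = [coeffs[0] * x + coeffs[1] * y for x, y in zip(v1, v2)]
assert recon == u                  # u = a_1 v_1 + a_2 v_2
```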

Example

Fourier coefficients for a square wave

Problem: Find the Fourier series for the square wave given by $f(x) = 1$ for $x \in [0, \pi]$ and extended to an odd periodic function.

Solution: As an odd function, this signal will be represented by the sine series alone. So $a_n = 0$ for all $n$, while

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx = \frac{1}{\pi}\int_{-\pi}^{0} (-1)\sin(nx)\,dx + \frac{1}{\pi}\int_{0}^{\pi} (+1)\sin(nx)\,dx = \frac{2}{\pi}\int_{0}^{\pi} \sin(nx)\,dx$$
$$= \frac{2}{\pi}\left[-\frac{1}{n}\cos(nx)\right]_0^{\pi} = -\frac{2}{n\pi}\big(\cos(n\pi) - 1\big) = \begin{cases} \frac{4}{n\pi} & n \text{ odd} \\ 0 & n \text{ even}. \end{cases}$$

The final Fourier series for the square wave is then:

$$f(x) = \frac{4}{\pi}\sin(x) + \frac{4}{3\pi}\sin(3x) + \frac{4}{5\pi}\sin(5x) + \frac{4}{7\pi}\sin(7x) + \cdots$$
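The computed coefficients can be cross-checked against the formula $b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx$ by numerical integration. A Python sketch (the quadrature resolution is an arbitrary choice):

```python
import math

# odd square wave: +1 on (0, pi), -1 on (-pi, 0), 2*pi-periodic
f = lambda x: 1.0 if (x % (2 * math.pi)) < math.pi else -1.0

def b(n, m=20000):
    # b_n = (1/pi) * integral of f(x) sin(nx) over [-pi, pi], midpoint rule
    h = 2 * math.pi / m
    return sum(f(-math.pi + (k + 0.5) * h) * math.sin(n * (-math.pi + (k + 0.5) * h))
               for k in range(m)) * h / math.pi

for n in range(1, 8):
    expected = 4 / (n * math.pi) if n % 2 == 1 else 0.0
    assert abs(b(n) - expected) < 1e-3
```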

Boundary value problems

In the next Packet we will study the wave equation and heat equation. The solution procedures will involve Fourier series, but also some of the basic ideas about boundary value problems.

The idea of a boundary value problem (BVP) is a counterpoint to the idea of an initial value problem (IVP). The BVP is to find solutions to the differential equation that pass through specified points $(x_0, y_0), (x_1, y_1), \ldots$ (if their graphs are considered), or which take specified values at specified points (if only their inputs are considered in space). The ‘$x$’ variable is used again here because it is often a spatial variable. These points are called ‘boundary values’ because in real applications these values are fixed known quantities taken at the boundaries of a system.

Both BVPs and IVPs are problems of determining which (if any) members of a complete family of solutions satisfy the given conditions. The BVP or IVP becomes a problem of algebra as soon as the complete family of solutions is known.

Boundary value problems are generally much more complicated than initial value problems. Given some ‘complete family of solutions,’ it is often a tricky question whether any of these solutions take on the boundary values. On the other hand, our standard theorems of existence and uniqueness often apply to yield (at least local) solutions of initial value problems. It may be hard to see why the IVP is easier than the BVP just by looking at a complete family of solutions, however.

BVP = IVP for first-order ODE

Notice! BVP = IVP in the case of first-order ODEs. Both conditions simply choose a single input-output pair $(x_0, y_0)$ on the solution curve.

Example

BVP of sinusoids

Consider the ODE given by $y'' + \omega^2 y = 0$. We study the BVP given by $y(0) = 0$ and $y(x_1) = y_1$. The ‘boundaries’ in question are $x = 0$ and $x = x_1$.

We have already found the complete family of solutions to the ODE:

$$y(x) = C_1\cos(\omega x) + C_2\sin(\omega x) = A\sin(\omega x + \phi).$$

The first boundary condition determines $C_1 = 0$, or (in the second formulation) $\phi = \pi k$. If we assume $A > 0$, then it will be enough to consider just the two options $\phi = 0$ and $\phi = \pi$. (Why?)

The solutions compatible with $y(0) = 0$ are therefore sinusoids with angular frequency $\omega$ that pass through the origin. At $x_1$ these solutions take the value $C_2\sin(\omega x_1)$. With fixed $\omega$, by varying $C_2$ we can control the values at $x_1$ to a certain extent, but we are not completely free. https://www.desmos.com/calculator/sdj2z3kumw

Here is a summary of what we can do:

  • If $y_1 = 0$, then we must have $x_1 = k\pi/\omega$, or else $C_2 = 0$ (hence $y = 0$ everywhere)
  • If $y_1 \neq 0$, then we can solve for $C_2$ provided $x_1 \neq k\pi/\omega$; but if $x_1 = k\pi/\omega$ there is simply no solution.

That is a complete solution to the BVP in question.

Example

Fixing boundary, varying parameter

We can instead consider the problem from another angle. Suppose we fix the boundary data, and ask instead for the possible parameters ω for which nonzero solutions exist.

Let us fix the second boundary point at $(x_1, 0)$. Solutions (other than $y \equiv 0$) are only possible when $x_1 = k\pi/\omega$. Treating $x_1$ as a constant, we see that nontrivial solutions exist if and only if $\omega = \frac{k\pi}{x_1}$.

In other words, the oscillation frequency must be such that the sinusoid crosses through $y = 0$ at the boundary point $x_1$. This will occur precisely when $\omega = \frac{\pi}{x_1}, \frac{2\pi}{x_1}, \frac{3\pi}{x_1}, \frac{4\pi}{x_1}, \ldots$

Eigenfunctions

We have seen previously that linear differential operators determine linear differential equations of the form $L[f] = g$ for any driving function $g$.

Given a linear operator $L$, an eigenfunction of $L$ is a function $f$ (not identically zero) satisfying $L[f] = \lambda f$ for some scalar $\lambda$.

For example, the operator $L = \frac{d}{dx}$ has eigenfunction $f = e^{\lambda x}$, which satisfies $L[f] = \lambda f$ with eigenvalue $\lambda$. For another example, the operator $L = \frac{d^2}{dx^2}$ has eigenfunctions $f = A\sin(\omega x + \phi)$, which satisfy $L[f] = -\omega^2 f$ with eigenvalue $-\omega^2$.
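Both eigenfunction claims can be spot-checked with finite differences. A Python sketch (the particular constants are arbitrary):

```python
import math

def first_deriv(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def second_deriv(f, x, h=1e-4):
    # central-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 0.9
lam = 0.7
g = lambda t: math.exp(lam * t)
assert abs(first_deriv(g, x) - lam * g(x)) < 1e-6        # (e^(lam x))' = lam e^(lam x)

omega, A, phi = 2.5, 1.3, 0.4
f = lambda t: A * math.sin(omega * t + phi)
assert abs(second_deriv(f, x) + omega**2 * f(x)) < 1e-4  # f'' = -omega^2 f
```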

Linearity and the concept of eigenfunction are applicable to boundary value problems when the specified boundary values are all zero. (Otherwise the sum of solutions would not be a solution.) So, for example, the BVP given by $L[f] = \lambda f$ and $y(0) = 0 = y(x_1)$ determines a vector space of solutions. All solutions can be written as linear combinations of eigenfunctions. Infinite series of eigenfunctions also give solutions, provided they converge.

Eigenfunction solutions have a special importance in the basic theory of PDEs because many important physical PDEs specify an eigenvalue problem in each variable. This means the solutions must be eigenfunctions, but the eigenvalues themselves are free – they are not determined by the problem setup. The complete family of solutions therefore comprises the linear combinations of eigenfunctions of the BVP.

Exercise 14-01

Boundary value problem

  • (a) Solve the BVP: $y'' + 3y = \cos x$, $\;y'(0) = 0 = y'(\pi)$.
  • (b) Find the eigenvalues and eigenfunctions for the following BVP: $y'' - \lambda y = 0$, $\;y'(0) = 0 = y'(L)$.

Note carefully the primes for boundary conditions.

PDE: Wave equation

The 1D scalar wave equation is:

$$\frac{\partial^2 y}{\partial t^2} = v^2\frac{\partial^2 y}{\partial x^2}$$

for some constant v and function y(x,t) that depends on position x and time t.

If y(x,t) describes the vertical displacement of a rubber band, then y(x,t) will (to good approximation) satisfy the wave equation.

Many physical systems satisfy this equation to some degree of approximation. There are higher dimensional analogues as well. For example in 3D the wave equation looks like this:

$$\frac{\partial^2 u}{\partial t^2} = v^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right).$$

Some fields u(t,x,y,z) that satisfy the 3D wave equation include:

  • Density of air (sound waves).
  • Each vector component of the electric or magnetic field.

An extremely important property of the wave equation is linearity. Given any two solutions y1(x,t) and y2(x,t) to the wave equation, any linear combination of them:

$$y(x,t) = c_1 y_1(x,t) + c_2 y_2(x,t)$$

is also a solution. In physical settings, linearity implies the superposition principle that we have discussed before. In addition to its physical interest, superposition has a practical consequence: solutions to the wave equation can be broken into sums (or infinite series) of well-behaved solutions, meaning waves that are easy to describe, analyze, and visualize.

In this Packet we focus on the 1D scalar wave equation. We will study it from two points of view: traveling waves and standing waves. The former are sometimes just called waves, and the latter are sometimes called normal modes or simply modes for short.

These points of view correspond to wave decompositions. Traveling waves preserve a shape function as they move through space. Thinking in terms of traveling waves enables the study of energy transmission, reflections, and media interfaces. Standing waves lead to simplification of a different sort. They satisfy separation of variables which in turn leads to sinusoidal terms with nodes and peaks that are fixed in space. These sinusoids are independent of each other. They interact nicely with Fourier series.

For the rest of the section we return to the scalar wave equation, $\frac{\partial^2 y}{\partial t^2} = v^2\frac{\partial^2 y}{\partial x^2}$.

Traveling waves

A traveling wave y(x,t) is given by a function of a linear combination of space and time:

$$y(x,t) = F(x + at).$$

At any given fixed t, the ‘shape’ of the solution y across the x variable is described by the same shape function F, but with a horizontal shift given by at.

Traveling waves y=F(x+at) satisfy the wave equation, for any shape function F, when a=±v:

Exercise 14-02

Traveling waves

Verify that the functions $y(x,t) = \ell(x + vt)$ (left-traveling wave) and $y(x,t) = r(x - vt)$ (right-traveling wave) solve the wave equation. Here $\ell(u)$ and $r(u)$ are arbitrary ‘shape functions’ for these waves.

Verify that the left-traveling wave travels to the left, and the right-traveling wave travels to the right. How fast do they go? Explain.

Conversely, d’Alembert discovered an argument that essentially all solutions to $\frac{\partial^2 y}{\partial t^2} = v^2\frac{\partial^2 y}{\partial x^2}$ are given by a sum of traveling waves, with one term traveling leftwards and another traveling rightwards:

Wave equation solutions are traveling waves

Suppose $y(x,t)$ has all requisite partial derivatives, and it satisfies the wave equation $\frac{\partial^2 y}{\partial t^2} = v^2\frac{\partial^2 y}{\partial x^2}$.

Then $y(x,t) = \ell(x + vt) + r(x - vt)$ for some shape functions $\ell(u)$ and $r(u)$.

Derivation of d’Alembert’s solution

Let us change variables to $p = x + vt$ and $q = x - vt$. Then:

$$\frac{\partial y}{\partial x} = \frac{\partial y}{\partial p}\frac{\partial p}{\partial x} + \frac{\partial y}{\partial q}\frac{\partial q}{\partial x} = \frac{\partial y}{\partial p} + \frac{\partial y}{\partial q}, \qquad \frac{\partial^2 y}{\partial x^2} = \frac{\partial^2 y}{\partial p^2} + 2\frac{\partial^2 y}{\partial p\,\partial q} + \frac{\partial^2 y}{\partial q^2},$$

and:

$$\frac{\partial y}{\partial t} = \frac{\partial y}{\partial p}\frac{\partial p}{\partial t} + \frac{\partial y}{\partial q}\frac{\partial q}{\partial t} = v\frac{\partial y}{\partial p} - v\frac{\partial y}{\partial q}, \qquad \frac{\partial^2 y}{\partial t^2} = v^2\left(\frac{\partial^2 y}{\partial p^2} + \frac{\partial^2 y}{\partial q^2}\right) - 2v^2\frac{\partial^2 y}{\partial p\,\partial q}.$$

It follows that the wave equation is equivalent to the equation:

$$\frac{\partial^2 y}{\partial p\,\partial q} = 0.$$

Now we can integrate this equation in p and then in q:

$$\int \frac{\partial^2 y}{\partial p\,\partial q}\,dp = 0 \;\Rightarrow\; \frac{\partial y}{\partial q} = f(q) \;\Rightarrow\; \int \frac{\partial y}{\partial q}\,dq = \int f(q)\,dq \;\Rightarrow\; y = F(q) + G(p).$$

Here $f(q)$ is introduced as a constant of the $p$ integration, $F$ is an antiderivative of $f$ (so $F' = f$), and $G(p)$ is a constant of the $q$ integration.

Then by plugging in $p = x + vt$ and $q = x - vt$ we obtain:

$$y(x,t) = F(x - vt) + G(x + vt).$$
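The d’Alembert form can be verified numerically for arbitrary smooth shapes. A Python sketch using finite differences (the Gaussian and sine shapes are arbitrary choices):

```python
import math

v = 2.0
F = lambda u: math.exp(-u * u)     # arbitrary smooth shape functions
G = lambda u: math.sin(u)
y = lambda x, t: F(x - v * t) + G(x + v * t)

def d2(g, s, h=1e-4):
    # central-difference second derivative
    return (g(s + h) - 2 * g(s) + g(s - h)) / h**2

x0, t0 = 0.3, 0.7
ytt = d2(lambda t: y(x0, t), t0)
yxx = d2(lambda x: y(x, t0), x0)
assert abs(ytt - v**2 * yxx) < 1e-4   # the wave equation y_tt = v^2 y_xx holds
```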

Example

Initial shape, no velocity: the wave divides into left-moving and right-moving copies of the same shape at half size

Problem: Suppose the initial condition is given by $y(x, 0) = f(x)$ and $\frac{\partial y}{\partial t}(x, 0) = 0$. How does the wave evolve?

Solution: Write the solution in the form $y(x,t) = \ell(x + vt) + r(x - vt)$. The initial conditions determine two equations:

$$y(x, 0) = \ell(x) + r(x) = f(x), \qquad \frac{\partial y}{\partial t}(x, 0) = v\ell'(x) - vr'(x) = 0.$$

The first equation implies $\ell'(x) + r'(x) = f'(x)$, which we solve for $r'$ and plug into the second:

$$v\ell' - v(f' - \ell') = 0 \;\Rightarrow\; 2v\ell' = vf' \;\Rightarrow\; \ell' = \tfrac{1}{2}f' \;\Rightarrow\; \ell = \tfrac{1}{2}f + c.$$

Now plug this back in for $\ell$ in the first equation to obtain $r = \tfrac{1}{2}f - c$, and therefore:

$$y(x,t) = \tfrac{1}{2}\big(f(x + vt) + f(x - vt)\big).$$

This is the equation of a half-size wave with shape $f$ moving to the left, superposed on a half-size wave with shape $f$ moving to the right.
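The splitting behavior is easy to confirm numerically. A Python sketch (the Gaussian initial shape is an arbitrary choice):

```python
import math

v = 1.0
f = lambda x: math.exp(-x * x)     # initial shape, zero initial velocity
y = lambda x, t: 0.5 * (f(x + v * t) + f(x - v * t))

# the initial conditions hold at t = 0
assert abs(y(0.4, 0.0) - f(0.4)) < 1e-12
h = 1e-6
assert abs((y(0.4, h) - y(0.4, -h)) / (2 * h)) < 1e-5   # y_t(x, 0) = 0

# for large t: two half-height copies of f, centered at x = -vt and x = +vt
t = 10.0
assert abs(y(-v * t, t) - 0.5 * f(0)) < 1e-6
assert abs(y(v * t, t) - 0.5 * f(0)) < 1e-6
```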

Exercise 14-03 (in class)

Divided wave

Using the solution form provided in the example, characterize the future evolution of the wave with initial condition $y(x, 0) = f(x)$ and $\frac{\partial y}{\partial t}(x, 0) = 0$, where $f(x) = \begin{cases} 2 & x \in [-1, 1] \\ 0 & \text{else}. \end{cases}$ Draw a picture of your answer.

Standing waves

Waves that do not travel laterally are called standing waves. This lack of lateral motion may be described mathematically by specifying a fixed shape function F(x); the standing wave is then described by y(x,t)=a(t)F(x) where the amplitude a(t) depends on time, but at any given time, the shape of the wave across space (ignoring the amplitude) is F(x). In many applied contexts, standing waves are called normal modes or simply modes.

Let us find the normal modes of the 1D wave equation. These are solutions of the form $y(x,t) = a(t)F(x)$. From this it follows that $y_{xx} = a(t)F''(x)$ and $y_{tt} = a''(t)F(x)$. The wave equation then dictates:

$$a''(t)F(x) = v^2 a(t)F''(x)$$

which implies

$$\frac{1}{v^2}\frac{a''(t)}{a(t)} = \frac{F''(x)}{F(x)}.$$

The left side does not depend on $x$, and the right side does not depend on $t$. Equality of the sides implies that both sides equal some constant, which we write as $-\lambda$ (adding the minus sign for future convenience).

So we end up with two second-order equations that are completely independent:

$$a'' + v^2\lambda a = 0, \qquad F'' + \lambda F = 0.$$

The solutions when $\lambda > 0$ are sinusoids whose frequencies are in ratio $v$:

$$a(t) = A\cos(v\sqrt{\lambda}\,t + \phi), \qquad F(x) = B\cos(\sqrt{\lambda}\,x + \psi).$$

In the absence of boundaries, these solutions can occur with any frequency.

Free standing waves can be related to traveling waves by rewriting the product with the trig identity $2\cos(A)\cos(B) = \cos(A + B) + \cos(A - B)$:

$$y(x,t) = AB\cos(v\sqrt{\lambda}\,t + \phi)\cos(\sqrt{\lambda}\,x + \psi) = \frac{AB}{2}\cos\big(\sqrt{\lambda}(x + vt) + \phi + \psi\big) + \frac{AB}{2}\cos\big(\sqrt{\lambda}(x - vt) + \psi - \phi\big).$$

These are waves traveling with velocity v, to the left and the right respectively, with the same amplitude. “Interference” of the traveling waves is equivalent to the presentation as standing waves. https://en.wikipedia.org/wiki/Standing_wave#/media/File:Waventerference.gif

Standing waves have a natural significance in the context of boundary value problems where the normal modes frequently receive a large contribution of the energy of natural vibrations. The boundary conditions have the effect of allowing only certain discrete values of λ, which is a kind of quantization phenomenon.

Suppose a vibrating rubber band satisfies the wave equation $v^2\frac{\partial^2 y}{\partial x^2} = \frac{\partial^2 y}{\partial t^2}$ and boundary conditions $y(0, t) = 0 = y(L, t)$. The normal mode solutions to this BVP are given by $u_n(x,t) = \cos\!\left(\frac{n\pi}{L}vt\right)\sin\!\left(\frac{n\pi}{L}x\right)$.

https://en.wikipedia.org/wiki/Standing_wave#/media/File:Standing_waves_on_a_string.gif
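A normal mode $u_n$ can be checked against both the wave equation and the boundary conditions numerically. A Python sketch (the values of $v$, $L$, and $n$ are arbitrary):

```python
import math

v, L = 1.5, 2.0

def u(n, x, t):
    # normal mode: cos(n pi v t / L) * sin(n pi x / L)
    return math.cos(n * math.pi * v * t / L) * math.sin(n * math.pi * x / L)

def d2(g, s, h=1e-4):
    # central-difference second derivative
    return (g(s + h) - 2 * g(s) + g(s - h)) / h**2

n, x0, t0 = 3, 0.7, 0.4
utt = d2(lambda t: u(n, x0, t), t0)
uxx = d2(lambda x: u(n, x, t0), x0)
assert abs(utt - v**2 * uxx) < 1e-3   # wave equation holds
assert abs(u(n, 0.0, t0)) < 1e-9      # boundary condition at x = 0
assert abs(u(n, L, t0)) < 1e-9        # boundary condition at x = L
```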

Problems due Friday 26 Apr 2024 by 11:59pm

Easier Problems

Problem 14-01

Fourier series of triangle wave

Find the Fourier series of the triangle wave in the figure. Be sure to account for the period of 4 rather than $2\pi$. (You should change variables so that the wave covers one cycle on the interval $[-\pi, \pi]$ in the new variable, and then change back at the very end to express your final answer.)

Problem 14-02

Full Fourier series with rescaling, even and odd parts

Compute the Fourier series for the function $e^x$ defined on the interval $[-1, 1]$.

You should first change to a new independent variable for which that interval corresponds to the interval $[-\pi, \pi]$, and change back at the end.

Problem 14-03 = Exercise 14-02

Traveling waves

Verify that the functions $y(x,t) = \ell(x + vt)$ (left-traveling wave) and $y(x,t) = r(x - vt)$ (right-traveling wave) solve the wave equation. Here $\ell(u)$ and $r(u)$ are arbitrary ‘shape functions’ for these waves. Verify that the left-traveling wave travels to the left, and the right-traveling wave travels to the right. How fast do they go?

Harder Problems

Problem 14-04

Fourier series can teach us about π

  • (a) Compute the Fourier series of the following sawtooth wave with $L = 1$:
  • (b) Evaluate your Fourier series at the point $(1/2, 1/2)$ to establish Leibniz’s famous formula:
$$1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \frac{1}{11} + \cdots = \frac{\pi}{4}.$$
  • (c) Find a similar series expression for $\frac{\pi^2}{8}$ using your solution to Problem 14-01.

Problem 14-05

Fourier series can solve initial value problems

For this problem, you should solve the given ODE by plugging in a generic Fourier series and solving for the coefficients. Fourier coefficients are uniquely determined because Fourier terms are orthogonal.

  • (a) $y'' + \omega^2 y = \sin(nt)$, $\;y(0) = 0 = y'(0)$
  • (b) $y'' + \omega^2 y = f(t)$, $\;y(0) = 0 = y'(0)$, where $f(t)$ is the triangle wave in Problem 14-01. (Substitute the Fourier series you found for $f(t)$.)

Problem 14-06

Plucked band

A rubber band fixed at x=0 and x=π is plucked by stretching the center point to a height of 1 and releasing it. At the point of release, the band shape is an isosceles triangle with base width π and height 1.

Find the function y(x,t) that describes the future evolution of the band, written as a sum of normal modes. Is y(x,t) a standing wave? (If yes, prove it; if no, demonstrate that it cannot be one.)

You should assume the band satisfies the BVP given by $v^2\frac{\partial^2 y}{\partial x^2} = \frac{\partial^2 y}{\partial t^2}$ and $y(0, t) = 0 = y(\pi, t)$ for all $t$. To find your solution $y(x,t)$, first write down the initial shape function $y(x, 0) = F(x)$ at the moment of release, and then compute its Fourier series in $x$. (Review and modify your work in Problem 14-01.) Any solution is a sum of normal modes. Using the theory of orthogonality of Fourier terms, solve for the coefficients of the normal modes in your solution.