# aakashl

## Calculus of Variations Part 1: Establishing the Basis

Calculus of variations is an extremely useful tool in physics, math, computer science, and a variety of other fields. Just as regular calculus centers on functions and differentials, this field centers on functionals and variations. A functional $F[y]$ takes in a function and spits out a number. The following are examples of functionals.

$S[y]=\int_a^b\sqrt{1+{y'}^2}\textup{d}x$ (arc length of $y$ from $a$ to $b$)

$N[\phi]=\frac{1}{\int_{-\infty}^\infty\phi\textup{d}x}$ (normalization factor for $\phi$, i.e. $\int_{-\infty}^\infty N[\phi]\phi\textup{d}x=1$, which is useful in quantum mechanics)

$P[f]=f(\pi)$ (value of $f$ at $x=\pi$)

In some sense, this is just notational and should not be thought about too intensely.
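Since a functional is just "function in, number out," it is easy to play with numerically. Here is a minimal Python sketch of two of the functionals above; the trapezoid-rule helper, the choice to pass in $y'$ directly, and the sample inputs are my own illustrative choices.

```python
import math

def arc_length(y_prime, a, b, n=10_000):
    """S[y] = integral of sqrt(1 + y'(x)^2) dx over [a, b], via the trapezoid rule.

    Takes the derivative y' directly, since that is all the integrand needs."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.sqrt(1.0 + y_prime(x) ** 2)
    return total * h

def point_eval(f):
    """P[f] = f(pi): evaluates its input function at x = pi."""
    return f(math.pi)

# The diagonal y = x on [0, 1] (so y' = 1) has length sqrt(2).
print(arc_length(lambda x: 1.0, 0.0, 1.0))  # ≈ 1.41421...
print(point_eval(math.sin))                 # ≈ 0, since sin(pi) = 0
```

Note that each "functional" here is just an ordinary function whose argument is itself a function.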

A very important result of calculus is the ability to find minimums and maximums, or more generally extrema. Consider the increment of a function in regular calculus, $\Delta f=f(x+h)-f(x)$; for small $h$ this is approximated by the differential $\textup{d}f=f'(x)h$, and extrema occur where $\textup{d}f=0$ for every $h$. The corresponding idea in calculus of variations is the variation of a functional, denoted and defined $\delta F[h]=F[y+h]-F[y]$, where $h$ and $y$ are functions. A function has an extremum where $\textup{d}f=0$, and analogously a functional has an extremum where the first-order part of $\delta F[h]$ vanishes for all admissible $h$. However, finding a function that minimizes a functional is a lot less straightforward than finding a number in a range. One cannot just systematically test all possibilities. In this case, it helps to talk about specific forms of functionals. The following is a VERY common functional.

$F[y]=\int_a^b f(x,y,y')\textup{d}x$

$y(a)=A,\quad y(b)=B$

The most common example is arc length, where $f=\sqrt{1+y'^2}$.

Let’s look at the variation of this functional.

$\delta F[h]=\int_a^b f(x,y+h,y'+h')-f(x,y,y')\textup{d}x$

Here, $h(a)=h(b)=0$ in order to preserve the boundary conditions on $y(x)$. We also know that when $h$ is small, the following is true to first order.

$\int_a^b f(x,y+h,y'+h')-f(x,y,y')\textup{d}x\approx\int_a^b f_y(x,y,y')h+f_{y'}(x,y,y')h'\textup{d}x$
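This first-order expansion is easy to check numerically. The sketch below compares the exact increment against the linearized integral for the arc-length integrand; the particular curve $y=x^2$, the perturbation $h=\sin(\pi x)$, and the quadrature helper are my own choices, not anything from the derivation.

```python
import math

def trapz(g, a, b, n=20_000):
    """Trapezoid-rule quadrature of g on [a, b]."""
    w = (b - a) / n
    return w * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * w) for i in range(1, n)))

# Arc-length integrand f = sqrt(1 + y'^2): f_y = 0 and f_{y'} = y'/sqrt(1 + y'^2).
yp = lambda x: 2 * x                              # y = x^2, so y' = 2x
hp = lambda x: math.pi * math.cos(math.pi * x)    # h = sin(pi*x): h(0) = h(1) = 0

def increment(eps):
    """Exact increment F[y + eps*h] - F[y]."""
    return trapz(lambda x: math.sqrt(1 + (yp(x) + eps * hp(x)) ** 2)
                         - math.sqrt(1 + yp(x) ** 2), 0.0, 1.0)

def linear_part(eps):
    """First-order term: integral of f_y*(eps*h) + f_{y'}*(eps*h') dx (f_y = 0 here)."""
    return eps * trapz(lambda x: yp(x) / math.sqrt(1 + yp(x) ** 2) * hp(x), 0.0, 1.0)

# The mismatch shrinks like eps^2, which is what "true to first order" means.
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, abs(increment(eps) - linear_part(eps)))
```

Shrinking the perturbation by a factor of 10 shrinks the mismatch by roughly 100, confirming that only second-order terms were dropped.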

Here, we will use the following theorem (the proof is at the bottom of the post).

Theorem: Let $h(x)$ be any continuously differentiable function such that $h(a)=h(b)=0$. Assume then that, for all such $h(x)$, the following is true.

$\int_a^b \alpha(x)h(x)+\beta(x)h'(x)\textup{d}x=0$

This implies $\beta'(x)=\alpha(x)$. (I highly recommend looking at the proof of this at the bottom before continuing.)

Going back to our original integral, we see that $\alpha(x)=f_y$ and $\beta(x)=f_{y'}$. This means $\frac{\mathrm{d}}{\mathrm{d}x}f_{y'}=f_y$. This is more formally written below and is called Euler's equation.

$\frac{\partial f}{\partial y}-\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial f}{\partial y'}=0$
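As a quick sanity check of Euler's equation, here is a sketch using the (arbitrarily chosen, not from this post) integrand $f=\frac{1}{2}(y'^2+y^2)$, for which the equation reads $y-y''=0$; the candidate extremal $y=e^x$ should make the residual $f_y-\frac{\mathrm{d}}{\mathrm{d}x}f_{y'}$ vanish.

```python
import math

# Hypothetical integrand f(x, y, y') = (y'^2 + y^2) / 2, chosen for illustration.
# Euler's equation f_y - d/dx f_{y'} = 0 then reads y - y'' = 0,
# and y = e^x is one solution (since (e^x)'' = e^x).
y  = math.exp   # candidate extremal
yp = math.exp   # its derivative

def euler_residual(x, dx=1e-5):
    """f_y - d/dx f_{y'} along y = e^x, with a central difference for d/dx."""
    f_y = y(x)                                      # partial f / partial y  = y
    d_f_yp = (yp(x + dx) - yp(x - dx)) / (2 * dx)   # d/dx of (partial f / partial y') = y'
    return f_y - d_f_yp

print(euler_residual(0.0), euler_residual(0.7))  # both ≈ 0
```

Any curve that is not a solution of $y''=y$ would leave a visibly nonzero residual here.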

However, sometimes not all variables appear in $f$, so there exist three other forms of this equation.

If $f$ does not depend on $y$, then $\frac{\partial f}{\partial y}=0$, so Euler's equation reduces to the following.

$\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial f}{\partial y'}=0 \textup{ (}y\textup{-independent)}$

If $f$ does not depend on $y'$, then $\frac{\partial f}{\partial y'}=0$, so Euler's equation becomes

$\frac{\partial f}{\partial y}=0\textup{ (}y'\textup{-independent)}$

If $f$ does not depend on $x$ explicitly, then we have to use some clever tricks to arrive at a simpler equation, because $y$ still depends on $x$. Let's multiply the left side of the equation by $y'$, which doesn't change its value because it equals 0 anyway (I alternate between prime and differential notation and put terms in odd places, but it will become clear why later).

$y'\left[\frac{\partial f}{\partial y}-\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial f}{\partial y'}\right]=\frac{\partial f}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}x}-y'\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial f}{\partial y'}$

From here, we make a simple change to the equation.

$\frac{\partial f}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}x}+\frac{\partial f}{\partial y'}\frac{\mathrm{d}y'}{\mathrm{d}x}-\frac{\partial f}{\partial y'}\frac{\mathrm{d}y'}{\mathrm{d}x}-y'\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial f}{\partial y'}$

Because $f$ has no explicit dependence on $x$, the first two terms are exactly the chain-rule expansion of the total derivative $\frac{\mathrm{d}f}{\mathrm{d}x}$.

$\frac{\mathrm{d}f}{\mathrm{d}x}-\frac{\partial f}{\partial y'}\frac{\mathrm{d}y'}{\mathrm{d}x}-y'\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial f}{\partial y'}$

Looking a little harder, it becomes clear that the last two terms are the result of a product rule.

$\frac{\mathrm{d}f}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}x}\left[y'\frac{\partial f}{\partial y'}\right]=\frac{\mathrm{d}}{\mathrm{d}x}\left[f-y'\frac{\partial f}{\partial y'}\right]$

Because this derivative equals 0, integrating once gives the final equation, where $c$ is a constant.

$f-y'\frac{\partial f}{\partial y'}=c \textup{ (}x\textup{-independent)}$
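A classic place where the $x$-independent form shines is the integrand $f=y\sqrt{1+y'^2}$ (surface-of-revolution area, not derived in this post), whose reduced equation simplifies to $y/\sqrt{1+y'^2}=c$, solved by catenaries. This sketch checks numerically that $y=\cosh x$ makes $f-y'\frac{\partial f}{\partial y'}$ the same constant at every $x$.

```python
import math

# f = y * sqrt(1 + y'^2); then f - y' * f_{y'} = y / sqrt(1 + y'^2),
# and along y = cosh(x) this equals cosh(x)/cosh(x) = 1 for every x.
def beltrami(x):
    y, yp = math.cosh(x), math.sinh(x)
    f    = y * math.sqrt(1 + yp ** 2)
    f_yp = y * yp / math.sqrt(1 + yp ** 2)   # partial f / partial y'
    return f - yp * f_yp

print([beltrami(x) for x in (-1.0, 0.0, 0.5, 2.0)])  # all ≈ 1
```

The constant being independent of $x$ is exactly what the boxed equation asserts for any extremal of an $x$-independent integrand.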

Altogether, this is a VERY powerful equation used across a huge range of physics and mathematics, but it is understandable if there doesn't seem to be any obvious use right now. This post was simply to establish a basis and derive the above equation. In the next post, I will give examples and uses of the equation, which should make the purpose of the above steps clearer. In fact, I will be able to summarize two different past posts using this equation. If you want to get a head start, try proving that a line minimizes the distance between two points with this equation (Hint: use the arc length integrand mentioned earlier and the $x$-independent equation).

If you want to know more or see where I learned it from, read the book “Calculus of Variations” by Gelfand and Fomin (amazing book). You can find it in the Books section of this site.

Theorem: Let $h(x)$ be any continuously differentiable function such that $h(a)=h(b)=0$. Assume then that, for all such $h(x)$, the following is true.

$\int_a^b \alpha(x)h(x)+\beta(x)h'(x)\textup{d}x=0$

This implies $\beta'(x)=\alpha(x)$.

Proof:

Lemma: Let $h(x)$ be any continuously differentiable function such that $h(a)=h(b)=0$. Assume then that, for all such $h(x)$, the following is true.

$\int_a^b\alpha(x)h'(x)\textup{d}x=0$

Then this implies $\alpha(x)=c$ for some constant $c$.

Proof of Lemma: Define $c$ as the constant such that the following is true.

$\int_a^b (\alpha(x)-c)\textup{d}x=0$

Then let $h(x)$ be defined as

$h(x)=\int_a^x (\alpha(z)-c)\textup{d}z$

A simple calculation shows that this function satisfies the boundary conditions. Now we see that the following integral reduces nicely.

$\int_a^b (\alpha(x)-c)h'(x)\textup{d}x=\int_a^b\alpha(x)h'(x)-ch'(x)\textup{d}x=0-c(h(b)-h(a))=0$

We can also reduce it a different way.

$\int_a^b (\alpha(x)-c)h'(x)\textup{d}x=\int_a^b (\alpha(x)-c)^2\textup{d}x$

We see that both of these statements can only hold if $\alpha(x)-c=0$, because the square restricts the integrand to nonnegative values. This means $\alpha(x)=c$.
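To make the lemma's construction concrete, here is a small numeric sketch with the (arbitrary) choice $\alpha(x)=x$ on $[0,1]$: then $c=\frac{1}{2}$, the constructed $h$ vanishes at both endpoints, and $\int_a^b(\alpha-c)h'\textup{d}x=\int_a^b(\alpha-c)^2\textup{d}x>0$, so this non-constant $\alpha$ could never have satisfied the lemma's hypothesis.

```python
def trapz(g, a, b, n=10_000):
    """Trapezoid-rule quadrature of g on [a, b]."""
    w = (b - a) / n
    return w * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * w) for i in range(1, n)))

a, b = 0.0, 1.0
alpha = lambda x: x          # arbitrary non-constant alpha for illustration
c = 0.5                      # makes the integral of (alpha - c) over [0, 1] zero

h  = lambda x: x ** 2 / 2 - x / 2   # h(x) = integral of (alpha(z) - c) from a to x
hp = lambda x: alpha(x) - c         # h'(x) = alpha(x) - c

print(h(a), h(b))  # 0.0 0.0: the boundary conditions hold
# integral of (alpha - c) * h' = integral of (alpha - c)^2 = 1/12 > 0,
# which contradicts the hypothesis unless alpha is constant.
print(trapz(lambda x: (alpha(x) - c) * hp(x), a, b))  # ≈ 0.0833...
```

The strictly positive value is the whole point: only $\alpha\equiv c$ can make it zero.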

Now back to the original proof.

Define $A(x)=\int_a^x\alpha(z)\textup{d}z$. Now let's integrate the following by parts.

$\int_a^b\alpha(x)h(x)\textup{d}x=A(x)h(x)\Big|_a^b -\int_a^bA(x)h'(x)\textup{d}x=-\int_a^bA(x)h'(x)\textup{d}x$
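This integration by parts can be spot-checked numerically. The choices $\alpha=\cos$, $[a,b]=[0,\pi]$, and $h(x)=x^2(\pi-x)$ (which vanishes at both endpoints) are arbitrary test inputs of mine, not anything from the proof.

```python
import math

def trapz(g, a, b, n=20_000):
    """Trapezoid-rule quadrature of g on [a, b]."""
    w = (b - a) / n
    return w * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * w) for i in range(1, n)))

a, b = 0.0, math.pi
alpha = math.cos
A     = math.sin                     # A(x) = integral of cos from 0 to x = sin(x)
h     = lambda x: x ** 2 * (b - x)   # vanishes at both endpoints
hp    = lambda x: 2 * b * x - 3 * x ** 2

lhs = trapz(lambda x: alpha(x) * h(x), a, b)
rhs = -trapz(lambda x: A(x) * hp(x), a, b)
print(lhs, rhs)  # equal up to quadrature error
```

The boundary term $A(x)h(x)\Big|_a^b$ drops out precisely because $h$ vanishes at both endpoints.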

We can plug this back into our original integral.

$\int_a^b\alpha(x)h(x)+\beta(x)h'(x)\textup{d}x=\int_a^b-A(x)h'(x)+\beta(x)h'(x)\textup{d}x$

From here, we can rearrange and apply our earlier lemma.

$=\int_a^b\left(-A(x)+\beta(x)\right)h'(x)\textup{d}x$

$-A(x)+\beta(x)=c$

Differentiating both sides yields our desired result.

$-\alpha(x)+\beta'(x)=0$

$\beta'(x)=\alpha(x)$
