
存在唯一性定理 | Existence and Uniqueness Theorem

In this chapter we focus on the ODE with an initial value problem

\[ \begin{equation} \frac{dy}{dx} = f(x, y), \quad y(x_0) = y_0. \label{eq-cauchy} \end{equation} \]

The above problem is also called the Cauchy Problem, since Cauchy first proved that the solution to problem \(\ref{eq-cauchy}\) exists and is unique when \(f(x, y)\) has a continuous partial derivative with respect to \(y\), that is, \(\frac{\partial f}{\partial y} \in C(G)\).

预备知识 | Preliminary knowledge

The following inequality is very useful for estimating solutions of ODEs, since it offers an upper bound.

Gronwall 不等式 | Gronwall Inequality

Assume \(\alpha(x), u(x) \in C[a, b]\) are non-negative functions and \(C, K\) are non-negative constants. If

\[ u(x) \leq C + \int_{a}^{x} \left[ \alpha(s) u(s)+K\right] ds, \quad \forall x \in [a,b] \]

then

\[ \begin{align*} u(x) &\leq \left[ C + \int_{a}^{x}Ke^{-\int_{a}^{s}\alpha(t)dt}ds\right]e^{\int_{a}^{x}\alpha(s)ds}\\ &\leq \left[ C + K(x-a)\right]e^{\int_{a}^{x}\alpha(s)ds} \end{align*} \]

Prove it.

Convert it into an ordinary differential inequality, then solve it as you would an ODE.
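One way to carry out the hint, as a sketch: let \(v(x) = C + \int_{a}^{x}\left[\alpha(s)u(s)+K\right]ds\), so that \(u(x) \leq v(x)\), \(v(a) = C\), and

\[ v'(x) = \alpha(x)u(x) + K \leq \alpha(x)v(x) + K \]

Multiplying by the integrating factor \(e^{-\int_{a}^{x}\alpha(t)dt}\) turns this into

\[ \left( v(x)e^{-\int_{a}^{x}\alpha(t)dt} \right)' \leq K e^{-\int_{a}^{x}\alpha(t)dt} \]

Integrating from \(a\) to \(x\) gives the first bound for \(v\) (hence for \(u\)); the second bound follows from \(e^{-\int_{a}^{s}\alpha(t)dt} \leq 1\).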

一致有界的定义 | Definition of Uniform Bound

Assume \(\Lambda\) is an infinite index set and \(I \subset \mathbb{R}\) is a set (e.g. an interval \([a, b]\)). A family of functions \(\{f_\lambda\}_{\lambda\in\Lambda}\) defined on \(I\) is uniformly bounded if \(\exists M>0\), s.t.

\[ |f_\lambda(x)|\leq M, \quad \forall \lambda\in \Lambda, \forall x \in I \]

The above definition means that all functions in \(\{f_\lambda\}_{\lambda\in\Lambda}\) are bounded by a common constant \(M\).

等度连续的定义 | Definition of Equicontinuous

Assume \(\Lambda\) is an infinite index set and \(I \subset \mathbb{R}\) is a set (e.g. an interval \([a, b]\)). A family of functions \(\{f_\lambda\}_{\lambda\in\Lambda}\) defined on \(I\) is equicontinuous if \(\forall \varepsilon>0, \exists \delta>0\), s.t. \(\forall x_1, x_2\in I\) with \(|x_1-x_2|<\delta\) and \(\forall \lambda\in\Lambda\),

\[ |f_\lambda(x_1)-f_\lambda(x_2)|< \varepsilon \]

The above definition means that the functions in \(\{f_\lambda\}_{\lambda\in\Lambda}\) are continuous "at the same rate". Usually \(\delta>0\) would depend on \(f_\lambda\), but here a single \(\delta\) works for the whole family, so the degree of continuity of these functions is comparable.

Example. Function sequence \(\{f_n(x)\}\) satisfying

\[ f_n(x) = (-1)^{n} + x^n \]

is uniformly bounded and equicontinuous on the region \(\{x: |x|\leq 1/2\}\), is uniformly bounded but not equicontinuous on the region \(\{x: |x|\leq 1\}\), and is neither uniformly bounded nor equicontinuous on the region \(\{x: |x|\leq 2\}\).
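A quick numerical sketch of these three claims (the grid sizes, the gap \(h\), and the cap \(N\) on \(n\) are arbitrary choices for illustration):

```python
import numpy as np

def f(n, x):
    # f_n(x) = (-1)^n + x^n
    return (-1.0) ** n + x ** n

h = 1e-3           # fixed gap for a crude oscillation check
N = 60             # cap on n (illustrative)
for radius in (0.5, 1.0, 2.0):
    xs = np.linspace(-radius, radius, 2001)
    # sup over n and x of |f_n(x)|: finite iff the family is uniformly bounded
    sup_norm = max(np.abs(f(n, xs)).max() for n in range(1, N))
    # oscillation over the fixed gap h: stays ~h iff the family is equicontinuous
    xl = np.linspace(-radius, radius - h, 2001)
    osc = max(np.abs(f(n, xl + h) - f(n, xl)).max() for n in range(1, N))
    print(f"|x| <= {radius}: sup|f_n| = {sup_norm:.3g}, osc(h={h}) = {osc:.3g}")
```

On \(|x|\leq 1/2\) both quantities stay small; on \(|x|\leq 1\) the oscillation grows with the cap on \(n\), hinting that equicontinuity fails in the limit; on \(|x|\leq 2\) even \(\sup|f_n|\) blows up.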

The condition of the following lemma can be narrowed down to a denumerable index set \(\Lambda\) and a denumerable subset of the set \(I \subset \mathbb{R}\).

引理1: 一致有界的函数列有收敛函数子列(点态) | Lemma 1: Uniformly Bounded Function Sequence has convergent Function Subsequence

Assume the index set \(\Lambda\) is denumerable. If a family of functions \(\{f_\lambda\}_{\lambda\in\Lambda}\) defined on \(I \subset \mathbb{R}\) is uniformly bounded, then \(\forall A = \{x_m\}_{m=1}^{\infty} \subset I\), \(\exists \{f_{\lambda_k}\}_{k=1}^\infty\), a function subsequence of \(\{f_\lambda\}_{\lambda\in\Lambda}\), such that

\[ \{f_{\lambda_k}(x)\}, \forall x \in A \]

is convergent.

Prove it.

Use the diagonal method.

Note that \(\{f_\lambda(x_1)\}_{\lambda \in \Lambda}\) is obviously bounded. So by the Bolzano–Weierstrass Theorem, we get a convergent subsequence

\[ f_{11}(x_1), f_{12}(x_1), \cdots, f_{1n}(x_1) \cdots \]

which converges to \(y_1\).

Now evaluate the same subsequence at \(x_2\) instead of \(x_1\), which gives

\[ f_{11}(x_2), f_{12}(x_2), \cdots, f_{1n}(x_2) \cdots \]

By uniform boundedness, the above sequence is also bounded, so we can extract another convergent subsequence from it

\[ f_{21}(x_2), f_{22}(x_2), \cdots, f_{2n}(x_2) \cdots \]

which converges to \(y_2\).

So the function subsequence of \(\{f_\lambda\}_{\lambda \in \Lambda}\)

\[ f_{21}(x), f_{22}(x), \cdots, f_{2n}(x) \cdots \]

is convergent on \(x\in \{x_1,x_2\}\). That is, \(\lim\limits_{n\rightarrow \infty}f_{2n}(x_1) = y_1\), \(\lim\limits_{n\rightarrow \infty}f_{2n}(x_2) = y_2\).

Continuing the above procedure, we get another function subsequence

\[ f_{31}(x), f_{32}(x), \cdots, f_{3n}(x) \cdots \]

which is convergent on \(x \in \{x_1,x_2,x_3\}\). That is, \(\lim\limits_{n\rightarrow \infty}f_{3n}(x_1) = y_1\), \(\lim\limits_{n\rightarrow \infty}f_{3n}(x_2) = y_2\), \(\lim\limits_{n\rightarrow \infty}f_{3n}(x_3) = y_3\).

Continuing in this way and taking the diagonal, we get a function subsequence \(\{f_{nn}(x)\}\) which converges at every \(x \in A = \{x_1,x_2,\cdots\}\) to \(y(x)\), with \(y(x)\) defined as

\[ y(x_m) = y_m,\quad m=1,2,\cdots \]

So to express the subsequence more specifically, we can define \(\tilde{f}_n(x) = f_{nn}(x)\) and get a subsequence \(\{\tilde{f}_n(x)\}_{n=1}^{\infty}\) such that

\[ \lim_{n\rightarrow \infty}\tilde{f}_n(x) = y(x), \quad \forall x\in A. \]

And we are done.

The above lemma shows that we can find a function subsequence convergent on the denumerable set \(A\); this is pointwise convergence.

Now if the above function subsequence \(\{f_{\lambda_k}(x)\}\) is additionally equicontinuous, then it is uniformly convergent on the whole interval \(I\). This is exactly the following lemma (in which \(I\) is a general compact set, not just an interval), where the above denumerable set is taken to be dense in \(I\).

引理2: 点态收敛、等度连续的函数列在紧集上一致收敛 | Lemma 2: An Equicontinuous Function Sequence Converging Pointwise on a Dense Subset Converges Uniformly on a Compact Set

Assume \(\{f_n\}_{n=1}^\infty\) defined on a compact set \(I \subset \mathbb{R}\) is equicontinuous, and there exists a dense subset \(R \subset I\) such that \(\{f_n\}_{n=1}^\infty\) is convergent on \(R\). Then \(\{f_n\}_{n=1}^\infty\) is uniformly convergent on \(I\), denoted

\[ f_n(x) \rightrightarrows f(x), \quad n\rightarrow \infty \]

where the limit function \(f(x)\) is continuous on \(I\).

Prove it.

Use equicontinuity and the compactness of the set \(I\).

We hope to prove that \(\forall \varepsilon>0\), \(\exists N>0\), s.t. \(\forall n>m>N\), \(\forall x \in I\), we have

\[ |f_n(x)-f_m(x)| < \varepsilon \]
  • use the Equicontinuity.

That is, \(\forall \varepsilon>0\), \(\exists \delta>0\), s.t. \(\forall x_1,x_2\in I\) with \(|x_1-x_2|<\delta\) and \(\forall n\), we have

\[ |f_n(x_1)-f_n(x_2)| < \frac{\varepsilon}{9} \]
  • use the characteristic of compact set.

Note that the compact set \(I\) admits a finite open cover. That is, we have \(x_j\in I\), \(j=1,2,\cdots, k_0\), and neighborhoods \(\{O_{\delta'}(x_j)\}_{j=1}^{k_0}\) s.t.

\[ I \subset \bigcup_{j=1}^{k_0}O_{\delta'}(x_j) \]

To make this useful in the proof, we choose \(\delta'<\delta\); the corresponding \(k_0\) is still finite.

Because \(R\) is dense in \(I\), \(\forall O_{\delta'}(x_j)\), \(\exists y_j\in R\) such that \(y_j \in O_{\delta'}(x_j)\).

  • use the hypothesis of pointwise convergence.

Because \(\{f_n\}_{n=1}^{\infty}\) is convergent on the dense set \(R\), for the same \(\varepsilon>0\) as above, \(\exists N>0\) s.t. \(\forall n>m>N\) and for each of the finitely many \(y_j\), we have

\[ |f_n(y_j)-f_m(y_j)|<\frac{\varepsilon}{9} \]

That is, as long as \(n>m>N\), we have

\[ \begin{align} |f_n(x_j)-f_m(x_j)| &\leq |f_n(x_j)-f_n(y_j)| + |f_n(y_j)-f_m(y_j)| + |f_m(y_j)-f_m(x_j)| \nonumber\\ &< |f_n(x_j)-f_n(y_j)| + \frac{\varepsilon}{9} + |f_m(y_j)-f_m(x_j)|\quad \text{by pointwise-convergence}\nonumber\\ &<\frac{\varepsilon}{3} \quad \text{the 1st and 3rd hold for equicontinuity} \label{bound-open-covering} \end{align} \]

Note that \(\forall x \in I\), \(\exists j\in \{1,2,\cdots, k_0\}\), such that \(x\in O_{\delta'}(x_j)\), so

\[ \begin{align*} |f_n(x)-f_m(x)| &\leq |f_n(x)-f_n(x_j)| + |f_n(x_j)-f_m(x_j)| + |f_m(x_j)-f_m(x)|\\ &< |f_n(x)-f_n(x_j)| + \frac{\varepsilon}{3} + |f_m(x_j)-f_m(x)|\quad \text{by the above condition }\ref{bound-open-covering} \\ &<\varepsilon \quad \text{the 1st and 3rd hold for equicontinuity} \end{align*} \]

Alternatively, when \(I\) is a closed interval \([a,b]\), we do not have to find an open cover: just partition \([a,b]\) into \(k_0\) closed subintervals whose lengths are less than \(\delta\). Denote these intervals as \(I_j\), \(j=1,2,\cdots, k_0\).

Then \(\forall x \in [a, b]\), \(\exists j\in \{1,2,\cdots, k_0\}\) such that \(x\in I_j\). Because \(R\) is dense in \([a,b]\), we are able to find an element \(x_j \in R \cap I_j\), which means

\[ |x-x_j|\leq \text{length of }I_j <\delta \]

So

\[ \begin{align*} |f_n(x)-f_m(x)| &\leq |f_n(x)-f_n(x_j)| + |f_n(x_j)-f_m(x_j)| + |f_m(x_j)-f_m(x)|\\ &< |f_n(x)-f_n(x_j)| + \frac{\varepsilon}{9} + |f_m(x_j)-f_m(x)|\quad \text{by pointwise-convergence} \\ &<\frac{\varepsilon}{3}<\varepsilon \quad \text{the 1st and 3rd hold for equicontinuity} \end{align*} \]

By combining the above two lemmas, we can prove the following important theorem.

Ascoli-Arzelà 定理 | Ascoli-Arzelà Theorem

Assume \(\Lambda\) is denumerable. If a family of functions \(\{f_\lambda\}_{\lambda\in\Lambda}\) is uniformly bounded and equicontinuous on a closed interval \([a, b]\), then there exists a function subsequence \(\{f_{\lambda_k}\}_{k=1}^{\infty}\) of \(\{f_\lambda\}_{\lambda\in\Lambda}\) which is uniformly convergent on \([a, b]\).

Prove it.

Make use of the above two lemmas.

First, we have to find a dense set \(R\). Naturally, we can choose the rational numbers in \([a,b]\). That is,

\[ R \overset{\Delta}{=}\mathbb{Q}\cap [a,b]. \]

Then by Lemma 1, using uniform boundedness, we can find a function subsequence of \(\{f_\lambda(x)\}_{\lambda\in\Lambda}\), denoted \(\{f_{n}\}_{n=1}^{\infty}\), which is convergent on \(R\). Finally, by Lemma 2, using equicontinuity, this subsequence is uniformly convergent on \([a,b]\).

And by the properties of uniformly convergent function sequences, the limit function is continuous on \([a,b]\).

Picard存在唯一性定理 | Picard Theorem of Existence and Uniqueness

Picard uses the following condition to prove his theorem.

Lipschitz条件的定义 | Definition of Lipschitz Condition

A function \(f(x, y)\) defined on a region \(G\) satisfies the Lipschitz condition with respect to \(y\) if \(\exists L>0\) s.t. \(\forall (x, y_1), (x, y_2) \in G\),

\[ |f(x, y_1)-f(x,y_2)|\leq L|y_1-y_2| \]
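The condition can be explored numerically by sampling difference quotients in \(y\). This is only a lower-bound sketch; the function, region, and sample count below are illustrative assumptions:

```python
import numpy as np

def lipschitz_in_y(f, x_range, y_range, samples=200):
    """Sample sup |f(x, y1) - f(x, y2)| / |y1 - y2| over a grid.

    This only gives a lower bound for the true Lipschitz constant L.
    """
    xs = np.linspace(*x_range, samples)
    ys = np.linspace(*y_range, samples)
    best = 0.0
    for x in xs:
        vals = f(x, ys)
        # difference quotients between consecutive y-samples
        best = max(best, np.max(np.abs(np.diff(vals)) / np.diff(ys)))
    return best

# f(x, y) = x sin(y): |df/dy| = |x cos(y)| <= 2 on |x| <= 2, so L = 2 works
L = lipschitz_in_y(lambda x, y: x * np.sin(y), (-2.0, 2.0), (-1.0, 1.0))
print(L)
```

Since \(\sin\) is 1-Lipschitz, every sampled quotient is at most \(2\), and the sampled maximum approaches \(2\) near \(x = \pm 2\), \(y = 0\).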

Also, Picard focuses on a typical rectangular region

\[ \begin{equation} R = \{(x,y): |x-x_0| \leq a, |y-y_0|\leq b\} \label{region} \end{equation} \]

to give his iterative method.

Picard定理 | Picard Theorem

Assume \(f(x, y) \in C(R)\) satisfies the Lipschitz condition with respect to \(y\) on \(R\). Then the Cauchy Problem has a unique solution on the interval \([x_0-\alpha, x_0+\alpha]\), where

\[ \alpha = \min \left\{a, \frac{b}{M} \right\}, \quad M = \max_{(x, y) \in R}\left\{\left|f(x, y)\right|\right\} \]

Prove it.

There are typically 4 steps.

First, convert the differential problem into an equivalent integral problem. Then formulate the so-called Picard sequence and prove that it converges. Furthermore, prove that the Picard sequence converges to the solution of the integral equation. Finally, prove uniqueness by the Gronwall inequality.

  • Convert the differential problem into an integral problem. That is, solving equation \(\ref{eq-cauchy}\) is equivalent to solving the integral equation
\[ \begin{align} y(x) &= y(x_0) + \int_{x_0}^{x} f(s, y(s))ds \nonumber\\ &=y_0+ \int_{x_0}^{x} f(s, y(s))ds \label{eq-integral} \end{align} \]

Readers can prove the equivalence (by showing that a solution of one side is also a solution of the other).

  • Formulate Picard sequence.

Define:

\[ \begin{align*} y_0(x) &= y_0\\ y_1(x) &= y_0 + \int_{x_0}^{x} f(s, y_0(s))ds\\ y_2(x) &= y_0 + \int_{x_0}^{x} f(s, y_1(s))ds\\ &\vdots\\ y_n(x) &= y_0 + \int_{x_0}^{x} f(s, y_{n-1}(s))ds\\ \end{align*} \]

We can say the above function sequence \(\{y_n(x)\}_{n=1}^\infty\) is well-defined because of the following conditions it satisfies:

\[ |y_n(x)-y_0| \leq b \quad \& \quad y_n(x)\in C[x_0-\alpha, x_0+\alpha] \]

The above conditions ensure that \((x, y_{n-1}(x))\) still falls in \(R\), so \(f(x, y_{n-1}(x))\) can be integrated (readers can prove the above two conditions by induction).

  • Prove Picard Sequence convergent.

This is the most ingenious part. The following deduction may be the inspiration:

\[ \begin{align*} |y_1(x)-y_0(x)| &= \left| \int_{x_0}^{x} f(s, y_0)ds \right| \leq M\left| x - x_0\right| \\ |y_2(x)-y_1(x)| &= \left| \int_{x_0}^{x}\left[ f(s, y_1(s)) - f(s, y_0(s))\right]ds \right| \\ &\leq \int_{x_0}^{x} \left| f(s, y_1(s)) - f(s, y_0(s)) \right| ds \\ &\leq L \int_{x_0}^{x} \left|y_1(s) - y_0(s)\right| ds\quad \text{(Using Lipschitz Condition)}\\ &\leq LM \int_{x_0}^{x}\left|s - x_0\right|ds =\frac{LM}{2}|x-x_0|^2 \quad \text{(Using the first item)} \end{align*} \]

So we can claim that

\[ |y_n(x)-y_{n-1}(x)| \leq \frac{ML^{n-1}}{n!}|x-x_0|^n \]

and prove it by induction(to be proved by readers).

With the above estimate, we can use the Weierstrass M-test to prove that the Picard sequence converges. To be specific, the Picard sequence is dominated by a convergent series of constant terms, so the Cauchy criterion holds. That is, \(\forall \varepsilon>0\), \(\exists N>0\), s.t. \(\forall n>m>N\),

\[ \begin{align*} |y_n(x)-y_{m}(x)| &\leq \sum_{k=m+1}^{n}\frac{ML^{k-1}}{k!}|x-x_0|^k \\ &= \frac{M}{L}\sum_{k=m+1}^{n}\frac{L^{k}}{k!}|x-x_0|^k \\ &\leq \frac{M}{L}\sum_{k=m+1}^{n}\frac{(L\alpha)^{k}}{k!} \end{align*} \]

And we notice that the series of constant terms

\[ \sum_{n=1}^\infty \frac{(L\alpha)^{n}}{n!} \]

converges (to be specific, to \(e^{L\alpha}\)), so the Picard sequence also converges uniformly.
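The iteration can also be carried out numerically. A minimal sketch for the illustrative test problem \(y' = y\), \(y(0)=1\) (exact solution \(e^x\)), approximating each integral with the trapezoidal rule:

```python
import numpy as np

# Picard iteration for y' = y, y(0) = 1 on [0, 0.5]; exact solution is e^x.
xs = np.linspace(0.0, 0.5, 501)
dx = xs[1] - xs[0]
y = np.ones_like(xs)                      # y_0(x) = y_0 = 1
for n in range(10):
    integrand = y                          # f(s, y_n(s)) = y_n(s)
    # cumulative trapezoidal rule for the integral from x_0 to x
    cumint = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0 * dx)))
    y = 1.0 + cumint                       # y_{n+1}(x) = y_0 + integral
err = np.max(np.abs(y - np.exp(xs)))
print("max error after 10 iterations:", err)
```

Each pass adds roughly one more term of the Taylor series of \(e^x\), matching the factorial bound above.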

  • Prove Picard Sequence converges to solution of equation \(\ref{eq-cauchy}\).

This part follows by letting \(n\to\infty\) in the definition of \(y_n(x)\): uniform convergence and the continuity of \(f\) allow passing the limit under the integral (details left to readers).

  • Prove uniqueness.

Follow the traditional logic: proof by contradiction.

Assume there are two distinct solutions \(\varphi_1(x), \varphi_2(x)\) to equation \(\ref{eq-cauchy}\); then subtract one from the other:

\[ \begin{align} |\varphi_1(x) - \varphi_2(x)| &=\left| \int_{x_0}^{x} \left[ f(s, \varphi_1(s)) - f(s, \varphi_2(s))\right]ds \right| \nonumber\\ &\leq L \int_{x_0}^{x} \left|\varphi_1(s) - \varphi_2(s)\right|ds \quad \text{(Using Lipschitz Condition)} \label{eq-same} \end{align} \]

And by applying the Gronwall inequality (with \(C = K = 0\)), we get

\[ |\varphi_1(x) - \varphi_2(x)| \leq 0 \]

So \(\varphi_1(x) = \varphi_2(x)\), that is, there exists only one solution for equation \(\ref{eq-cauchy}\).

Alternatively, if we do not want to use the Gronwall inequality, we can still show that the two solutions coincide. Assume \(\varphi_1(x), \varphi_2(x)\) have a common interval of definition \(J = [x_0-d,x_0+d]\), where \(0<d\leq \alpha\). Then \(|\varphi_1(x)-\varphi_2(x)|\) is continuous and bounded on \(J\), and denote

\[ K = \max_{x\in J}\{|\varphi_1(x)-\varphi_2(x)|\} \]

So the right side of inequality \(\ref{eq-same}\) can be bounded:

\[ \begin{align} |\varphi_1(x) - \varphi_2(x)| &\leq L \int_{x_0}^{x} \left|\varphi_1(s) - \varphi_2(s)\right|ds \nonumber\\ &\leq LK |x-x_0| \label{eq-same1} \end{align} \]

Substitute inequality \(\ref{eq-same1}\) again into the right side of inequality \(\ref{eq-same}\) to get

\[ \begin{align*} |\varphi_1(x) - \varphi_2(x)| \leq KL^2 \frac{|x-x_0|^2}{2} \end{align*} \]

Repeat the above method iteratively, and we can prove the following by induction:

\[ |\varphi_1(x) - \varphi_2(x)| \leq K \frac{(L|x-x_0|)^n}{n!} \]

Letting \(n \rightarrow \infty\), the right side tends to \(0\), so \(|\varphi_1(x) - \varphi_2(x)| = 0\).

Osgood 条件 | Osgood Condition

We consider a condition which is slightly weaker than the Lipschitz condition but still guarantees the uniqueness of the solution.

Osgood 条件 | Osgood Condition

Assume \(D\) is a region in \(\mathbb{R}^2\), and the function \(f(x,y)\in C(D)\). Suppose \(\forall (x,y_1), (x,y_2)\in D\),

\[ |f(x,y_1)-f(x,y_2)|\leq F(|y_1-y_2|) \]

where \(F(r)\) is continuous and positive for \(r>0\) and satisfies

\[ \int_{0}^{\varepsilon}\frac{1}{F(r)}dr = +\infty, \quad \forall \varepsilon >0 \]

then we say that \(f(x,y)\) satisfies Osgood condition with respect to \(y\).

Obviously, if \(f(x,y)\) satisfies the Lipschitz condition, it satisfies the Osgood condition: we can choose \(F(r) = Lr\), and \(\int_{0}^{\varepsilon}\frac{dr}{Lr} = +\infty\).
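A standard example showing the Osgood condition is strictly weaker: near \(y = 0\) take

\[ f(x,y) = \begin{cases} y\ln\dfrac{1}{|y|}, & 0<|y|\leq e^{-1} \\ 0, & y=0 \end{cases} \]

One can check that \(|f(x,y_1)-f(x,y_2)|\leq F(|y_1-y_2|)\) near \(0\) with \(F(r) = Cr\ln\frac{1}{r}\) for some constant \(C\), and (substituting \(u = \ln\frac{1}{r}\))

\[ \int_{0}^{\varepsilon}\frac{dr}{r\ln\frac{1}{r}} = \int_{\ln\frac{1}{\varepsilon}}^{+\infty}\frac{du}{u} = +\infty \]

so the Osgood condition holds, while \(\frac{|f(x,y)-f(x,0)|}{|y|} = \ln\frac{1}{|y|} \rightarrow +\infty\) as \(y\rightarrow 0\), so no Lipschitz constant exists near \(y=0\).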

Osgood 定理 | Osgood Theorem

If \(f(x,y) \in C(D)\) satisfies the Osgood condition with respect to \(y\), then \(\forall (x_0,y_0)\in D\), the solution to Cauchy problem \(\ref{eq-cauchy}\) exists and is unique.

Prove it.

Existence is guaranteed by the Peano Theorem in the following part, so we only need to prove uniqueness. This is done by contradiction, similar to proving uniqueness when \(f(x, y)\) is monotonically decreasing with respect to \(y\).

Assume we have two distinct solutions \(y_1(x)\), \(y_2(x)\); then there exists \(x_2\) such that \(y_1(x_2)\neq y_2(x_2)\). Without loss of generality, let \(y_1(x_2) > y_2(x_2)\) and \(x_2 > x_0\).

Then by the sign-preserving property of continuous functions (保号性), there must exist a neighborhood of \(x_2\) on which \(y_1(x) > y_2(x)\), so let \(x_1 = \max\{x\in [x_0,x_2] : y_1(x) = y_2(x)\}\).

So we have

\[ y_1(x) > y_2(x), \quad \forall x\in (x_1,x_2] \]

Define \(r(x) = y_1(x) - y_2(x) \in (0, m]\) on \((x_1,x_2]\), where \(m = \max\limits_{x\in (x_1,x_2]}\{y_1(x) - y_2(x)\}\).

So by condition of the proposition, we have

\[ |r'(x)| = |y_1'(x) - y_2'(x)| = |f(x,y_1(x))- f(x,y_2(x))| \leq F(|y_1(x) - y_2(x)|) \]

Because \(y_1(x) > y_2(x)\), we have \(|y_1(x)-y_2(x)| = y_1(x)-y_2(x)\), and since \(r'(x)\leq |r'(x)|\), we get

\[ r'(x) = y_1'(x) - y_2'(x) = f(x,y_1(x))- f(x,y_2(x)) \leq F(y_1(x) - y_2(x)) \]

Divide both sides by \(F(y_1(x) - y_2(x))\) and integrate over \((x_1, x_2]\), where the left side is an improper integral:

\[ \int_{x_1}^{x_2}\frac{r'(x)}{F(r(x))}dx\leq \int_{x_1}^{x_2}dx = x_2-x_1 \]

So the left side is \(+\infty\) by the Osgood condition (note that \(r(x_1) = 0\)), while the right side is finite, i.e.

\[ +\infty=\int_{0}^{r(x_2)}\frac{1}{F(r)}dr=\int_{r(x_1)}^{r(x_2)}\frac{1}{F(r)}dr\leq x_2-x_1 <+\infty \]

which is a contradiction!

Peano定理 | Peano Theorem

When \(f(x,y)\) does not satisfy the Lipschitz condition with respect to \(y\), we cannot guarantee the uniqueness of the solution to Cauchy Problem \(\ref{eq-cauchy}\). However, when \(f(x,y)\) is merely continuous, Peano proved that Cauchy Problem \(\ref{eq-cauchy}\) still has a solution.
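The failure of uniqueness can be made concrete. The Cauchy problem

\[ \frac{dy}{dx} = 3y^{2/3}, \quad y(0) = 0 \]

has \(f(x,y) = 3y^{2/3}\) continuous but not Lipschitz (nor Osgood: with \(F(r)=3r^{2/3}\), \(\int_0^\varepsilon \frac{dr}{3r^{2/3}} = \varepsilon^{1/3} < +\infty\)) near \(y=0\), and it admits at least two solutions:

\[ y(x) \equiv 0 \quad \text{and} \quad y(x) = x^3. \]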

Peano定理 | Peano Theorem

Assume \(f(x, y)\) is continuous on the rectangle \(R\) defined in \(\ref{region}\); then Cauchy Problem \(\ref{eq-cauchy}\) has at least one solution on the interval \([x_0-\alpha, x_0+\alpha]\), where

\[ \alpha = \min \left\{a, \frac{b}{M} \right\}, \quad M = \max_{(x, y) \in R}\left\{\left|f(x, y)\right|\right\} \]

Prove it.

There are several methods to prove the Peano Theorem, such as Euler's polygonal arc method, the Tonelli (托内利) sequence method, and fixed-point methods in functional analysis. Euler's arc method is considered the dawn of numerical methods for ODEs, so we will prove it this way.

The idea is to construct approximate solutions which converge to a true solution. In fact, the Picard sequence is also a sequence of approximate solutions.

  • Cauchy Problem \(\ref{eq-cauchy}\) can be converted equivalently to integral equation \(\ref{eq-integral}\).

We only discuss the existence of the solution on the right side \([x_0, x_0+\alpha]\); the left side is treated similarly.

  • Formulate Euler Polygons/Polygonal Arc(欧拉折线).

The idea is simple: advance step by step, with steps as small as possible, using the slope at the current sample point.

Firstly, we partition the interval \([x_0,x_0+\alpha]\) into \(n\) parts (usually of equal length), that is

\[ x_0 < x_1 < x_2 < \cdots < x_n = x_0 + \alpha \]

and then define curves

\[ \begin{align*} l_1: y &= y_0 + f(x_0, y_0)(x-x_0), \quad x_0\leq x\leq x_1 \\ l_2: y &= y_1 + f(x_1, y_1)(x- x_1), \quad x_1 \leq x\leq x_2\\ &\vdots \\ l_n: y &= y_{n-1} + f(x_{n-1}, y_{n-1})(x - x_{n-1}), \quad x_{n-1} \leq x \leq x_n \end{align*} \]

we can formulate a polygonal arc on \([x_0, x_0+\alpha]\):

\[ E_n = \bigcup_{s=1}^{n}l_s \]

and its function expression is

\[ \begin{equation} y_n(x) = y_0 + \sum_{i=0}^{j-1}f(x_i, y_i)(x_{i+1} - x_{i}) + f(x_j, y_j)(x - x_j) \label{eq-euler} \end{equation} \]

where \(\forall x \in (x_0, x_0 +\alpha], \exists j\) s.t.

\[ x_j < x\leq x_{j+1} \]

The closed interval \([x_0, x_0+\alpha]\) is of great importance here, for it guarantees the conditions of the Ascoli-Arzelà Theorem.
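The construction above is exactly the classical Euler method. A minimal numerical sketch (the test equation \(y'=y\), \(y(0)=1\), with exact solution \(e^x\), is an illustrative assumption):

```python
import numpy as np

def euler_polygon(f, x0, y0, alpha, n):
    """Euler's polygonal arc y_n on [x0, x0 + alpha] with n equal steps."""
    xs = np.linspace(x0, x0 + alpha, n + 1)
    ys = np.empty(n + 1)
    ys[0] = y0
    for i in range(n):
        # advance along the segment l_{i+1}, whose slope is f(x_i, y_i)
        ys[i + 1] = ys[i] + f(xs[i], ys[i]) * (xs[i + 1] - xs[i])
    return xs, ys

# y' = y, y(0) = 1: the polygons approach e^x as n grows
for n in (10, 100, 1000):
    xs, ys = euler_polygon(lambda x, y: y, 0.0, 1.0, 0.5, n)
    print(n, np.max(np.abs(ys - np.exp(xs))))
```

The printed error shrinks roughly like \(1/n\), matching the convergence of the arcs proved below.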

  • 一致有界 | Uniform Bound

By definition, it is easy to prove that

\[ |y_n(x)-y_0| \leq b, \quad \forall n, \forall x \in [x_0,x_0+\alpha] \]
  • 等度连续 | Equicontinuous

By definition, it is easy to see that \(\forall \varepsilon>0\), \(\exists \delta = \varepsilon/M\), s.t. \(\forall x_1, x_2 \in [x_0,x_0+\alpha]\) and \(|x_1-x_2|<\delta\), we have

\[ |y_n(x_1)-y_n(x_2)|\leq \max_{(x,y)\in R}|f(x,y)| |x_1-x_2| < M \cdot \frac{\varepsilon}{M} = \varepsilon \]
  • Using Ascoli-Arzelà Theorem

So the function sequence \(\{y_n(x)\}\) has a uniformly convergent subsequence \(\{y_{n_j}(x)\}\).

  • Prove Euler's Arc converges to the solution of ODE.

This means that we have to estimate the error between \(y_{n_j}(x)\) and \(y(x)\). Firstly, we rewrite the formula of Euler's arc \(\ref{eq-euler}\).

Note that

\[ \begin{align*} f(x_i,y_i)(x_{i+1}-x_i)&=\int_{x_i}^{x_{i+1}}f(x_i,y_i)dx\\ &=\int_{x_i}^{x_{i+1}}\left[f(x,y_n(x)) + f(x_i,y_i) - f(x,y_n(x))\right]dx\\ \end{align*} \]

Define

\[ d_n(i) \overset{\Delta}{=} \int_{x_i}^{x_{i+1}}[f(x_i,y_i) - f(x,y_n(x))]dx \]

then

\[ \begin{equation} f(x_i,y_i)(x_{i+1}-x_i) = \int_{x_i}^{x_{i+1}}f(x,y_n(x))dx + d_n(i) \label{eq-front} \end{equation} \]

Similarly, for \(x_j<x\leq x_{j+1}\) we have

\[ \begin{align*} f(x_j,y_j)(x-x_j)&=\int_{x_j}^{x}f(x_j,y_j)dx\\ &=\int_{x_j}^{x}\left[f(x,y_n(x)) + f(x_j,y_j) - f(x,y_n(x))\right]dx\\ \end{align*} \]

Define

\[ d^*_n(x) \overset{\Delta}{=} \int_{x_j}^{x}[f(x_j,y_j) - f(x,y_n(x))]dx \]

then

\[ \begin{equation} f(x_j,y_j)(x-x_j) = \int_{x_j}^{x}f(x,y_n(x))dx +d_n^*(x) \label{eq-back} \end{equation} \]

So with the above two transformations \(\ref{eq-front}\), \(\ref{eq-back}\), the formula of Euler's arc \(\ref{eq-euler}\) (a sum of linear expressions) becomes an integral-like expression:

\[ \begin{align*} y_n(x) &= y_0 + \sum_{i=0}^{j-1}f(x_i, y_i)(x_{i+1} - x_{i}) + f(x_j, y_j)(x - x_j)\\ & = y_0 + \int_{x_0}^{x}f(s,y_n(s))ds + \delta_n(x) \end{align*} \]

where

\[ \begin{equation} \delta_n(x) = \sum_{i=0}^{j-1}d_n(i) + d^*_n(x) \label{eq-residue} \end{equation} \]

We know that within each subinterval of \(y_n\), the variables \(x, y\) are bounded by

\[ |x-x_j|\leq \frac{\alpha}{n}, \quad |y-y_j|\leq M \frac{\alpha}{n} \]

So \(\forall \varepsilon >0\), \(\exists N(\varepsilon) = \frac{\alpha}{\varepsilon}\), s.t. \(\forall n>N\),

\[ |x-x_j|\leq \frac{\alpha}{n} < \frac{\alpha}{N} = \varepsilon, \quad |y-y_j|\leq M \frac{\alpha}{n} < M\varepsilon \]

which means the length of each subinterval tends to \(0\) as \(n\rightarrow \infty\).

So for each item in the residue \(\ref{eq-residue}\), we can make it small (of order \(\varepsilon\)) by using the continuity of \(f(x,y)\), which gives: \(\forall\varepsilon>0\), \(\exists \delta>0\), s.t. \(\forall (x,y) \in O[(x_j,y_j),\delta]\),

\[ |f(x,y)-f(x_j,y_j)| <\varepsilon \]

This is easy to arrange because we can let \(n\) be large enough that every \(x \in [x_j,x_{j+1}]\), together with \(y_n(x)\), falls in \(O[(x_j,y_j),\delta]\).

So the whole residue \(\ref{eq-residue}\) can be bounded:

\[ |\delta_n(x)| \leq \varepsilon \frac{\alpha}{n} + j \varepsilon \frac{\alpha}{n} \leq \alpha \varepsilon \]

So the terms of the uniformly convergent subsequence \(\{y_{n_j}(x)\}\) satisfy

\[ y_{n_j}(x) = y_0 + \int_{x_0}^{x}f(s, y_{n_j}(s))ds + \delta_{n_j}(x), \quad x \in [x_0,x_0+\alpha] \]

where \(\delta_{n_j}(x)\) satisfies

\[ \delta_{n_j}(x)\rightrightarrows 0 \]

That is, if we denote

\[ \varphi(x) \overset{\Delta}{=} \lim_{n_j\rightarrow \infty}y_{n_j}(x) \]

and let \(n_j \rightarrow \infty\), passing the limit under the integral by uniform convergence and the continuity of \(f\), we get

\[ \varphi(x) = y_0 + \int_{x_0}^{x}f(s, \varphi(s))ds \]

which means function subsequence \(\{y_{n_j}(x)\}\) converges to the solution of integral form \(\ref{eq-integral}\) of ODE \(\ref{eq-cauchy}\).

Comments on Previous

Here we list the differences between the Picard Theorem and the Peano Theorem.

| Theorem | Picard Theorem | Peano Theorem |
| --- | --- | --- |
| Condition | \(f\in C(R)\), Lipschitz condition with respect to \(y\) | only \(f\in C(R)\) |
| Sequence | the whole Picard sequence \(\{y_n(x)\}\) converges | a subsequence of Euler's arcs \(\{\varphi_n(x)\}\) converges |
| Solution | exists and is unique | exists, but may not be unique |
  • Note 1: Uniqueness guarantees the convergence of the whole sequence of Euler's arcs. See the following theorem.

有唯一解的柯西问题,其欧拉折线全序列收敛 | Euler's Arc converges given uniqueness

If Cauchy problem \(\ref{eq-cauchy}\) has a unique solution, then the whole sequence of Euler's arcs \(\{\varphi_n(x)\}\) converges.

Prove it.

Use contradiction.

Assume the conclusion does not hold; then, negating the convergence statement, we have

\[ \exists \varepsilon_0>0, \exists \overline{x} \in I, \forall N_0>0, \exists n_0,m_0>N_0, |\varphi_{n_0}(\overline{x})-\varphi_{m_0}(\overline{x})|\geq\varepsilon_0 \]

which is the negation of the Cauchy criterion for uniform convergence

\[ \forall \varepsilon>0, \exists N>0, \forall n>m>N, \forall x\in I, \quad |\varphi_{n}(x)-\varphi_{m}(x)|<\varepsilon. \]

To help with the proof, we extract two subsequences from the above negative proposition. That is, letting \(N_0=j\), \(j=1,2,\cdots\), we take the corresponding \(n_j\), \(m_j\) to compose subsequences of the Euler arcs \(\{\varphi_n(x)\}\), i.e.

\[ \begin{equation} \exists \varepsilon_0>0, \exists \overline{x} \in I, \forall j=1,2,\cdots, \exists n_j,m_j>j, |\varphi_{n_j}(\overline{x})-\varphi_{m_j}(\overline{x})|\geq\varepsilon_0 \label{con-non-convergent} \end{equation} \]

Note that \(\{\varphi_{n_j}(x)\}\) is still uniformly bounded and equicontinuous on \(I\), so there exists a subsequence \(\{\tilde{\varphi}_{n_j}(x)\}\) such that

\[ \{\tilde{\varphi}_{n_j}(x)\} \rightrightarrows \varphi_1(x), \quad \forall x\in I \]

and \(\varphi_1(x)\) is also a solution of Cauchy problem \(\ref{eq-cauchy}\), by the argument in the proof of the Peano Theorem. Similarly, we can have

\[ \{\tilde{\varphi}_{m_j}(x)\} \rightrightarrows \varphi_2(x), \quad \forall x\in I \]

where \(\{\tilde{\varphi}_{m_j}(x)\}\) is a subsequence of \(\{\varphi_{m_j}(x)\}\) taken along the same indices. By the uniqueness hypothesis of the theorem, we have

\[ \varphi_1(x) = \varphi_2(x), \quad \forall x \in I \]

while passing to these subsequences in \(\ref{con-non-convergent}\), we still have

\[ |\tilde{\varphi}_{n_j}(\overline{x})-\tilde{\varphi}_{m_j}(\overline{x})|\geq\varepsilon_0 \]

which, letting \(j\rightarrow\infty\), means \(\varphi_1\) and \(\varphi_2\) differ by at least \(\varepsilon_0\) at \(\overline{x}\), contradicting \(\varphi_1(x) = \varphi_2(x)\) for all \(x\in I\)!