Numerical Integration

Generally speaking, we encounter an integration problem

\[ I(f) =\int_a^b\rho(x)f(x)dx \]

which cannot be evaluated in closed form via an antiderivative, so we have to define a numerical formula to approximate the integral. A typical quadrature formula (求积公式) can be written as

\[ \begin{equation} I_n(f)=\sum_{k=1}^nA_kf(x_k)\label{I-genaral} \end{equation} \]

where the subscript \(n\) denotes sampling at \(n\) points, \(x_k\ (k=1,2,\cdots,n)\) are nodes in \([a,b]\), and \(A_k\) are coefficients whose values depend only on \([a,b]\), \(\rho(x)\) and the sampling points, not on \(f\) itself.

Naively, we consider the error

\[ E_n(f)= I(f)-I_n(f) \]

which is obviously hard to evaluate, because we do not know the exact value of the integral. We therefore turn to a concept called Algebraic Precision.

代数精度 | Algebraic Precision

The degree of accuracy, or precision, of a quadrature formula is the largest positive integer \(m\) such that the formula is exact for \(x^k\), for each \(k = 0, 1,\cdots, m\). That is, we say a formula has algebraic precision \(m\) if there exists \(m\in \mathbb{N}^+\), s.t.

\[ E_n(x^k)=0, k=0,1,\cdots,m,\quad E_n(x^{m+1})\neq 0. \]

It is easy to see that \(E_n(p)=0\) for every polynomial \(p(x)\) of degree at most \(m\).
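To make the definition concrete, here is a small sketch (the function name and the exact-arithmetic approach are my own) that computes the degree of precision of a quadrature rule by testing monomials with rational arithmetic. It is checked against the trapezoidal and Simpson rules, which appear later in these notes.

```python
from fractions import Fraction as F

def degree_of_precision(nodes, weights, a, b, max_k=10):
    """Largest m with sum_k A_k x_k^j == int_a^b x^j dx for all j <= m
    (weight rho(x) = 1, exact rational arithmetic)."""
    for j in range(max_k + 1):
        exact = (F(b)**(j + 1) - F(a)**(j + 1)) / (j + 1)  # integral of x^j
        approx = sum(w * x**j for x, w in zip(nodes, weights))
        if approx != exact:
            return j - 1
    return max_k

# Trapezoidal rule on [0,1]: exact for linear functions only.
print(degree_of_precision([F(0), F(1)], [F(1, 2), F(1, 2)], 0, 1))   # 1
# Simpson's rule on [0,1]: exact up to cubics.
print(degree_of_precision([F(0), F(1, 2), F(1)],
                          [F(1, 6), F(4, 6), F(1, 6)], 0, 1))        # 3
```

Exact rational arithmetic avoids any floating-point tolerance in the equality test.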

If we can find a simpler function \(p(x)\) that approximates \(f(x)\) on \([a,b]\) and whose integral \(I(p)\) is easy to compute, then we can use \(I(p)\) to approximate \(I(f)\). That is, if

\[ \|f-p\|=\max_{x\in [a,b]}|f(x)-p(x)| <\varepsilon \]

then

\[ |I(f)-I(p)|=\left|\int_a^b\rho(x)[f(x)-p(x)]dx\right| <\varepsilon \int_a^b\rho(x)dx \]

which is also small enough (here we assume \(\rho(x)\geq 0\); in particular, for \(\rho\equiv 1\) the bound is \(\varepsilon(b-a)\)).

Newton-Cotes 公式 | Newton-Cotes Formulas

The simplest functions are polynomials. So we try to find a polynomial \(p(x)\) to approximate \(f(x)\) and then use \(I(p)\) to approximate \(I(f)\). The following formulas are quite natural for readers familiar with Interpolating Polynomials.

Given \(n+1\) points \(x_0<x_1<\cdots<x_n\), we have a Lagrange Polynomial

\[ p_n(x)=\sum_{i=0}^n\prod_{j=0\atop j\neq i}^n\frac{(x-x_j)}{x_i-x_j}f(x_i) \]

Using the above polynomial as the integrand yields an approximation of the integral. That is,

\[ \begin{equation} \begin{cases} \displaystyle I_{n+1}(f)=I(p_n)=\sum\limits_{i=0}^nA^n_if(x_i)\\ \displaystyle A_i^n=\int_a^b\rho(x)\prod\limits_{j=0\atop j\neq i}^n \frac{(x-x_j)}{(x_i-x_j)}dx,\quad i=0,1,\cdots, n. \end{cases}\label{quadrature} \end{equation} \]

推导 | Deduction

In equation \(\ref{quadrature}\), if we let \(\rho(x)\equiv1\) and choose equidistant points:

\[ x_k=a+kh, \quad h=\frac{b-a}{n},\quad k=0,1,\cdots,n \]

then the corresponding quadrature formula is called the Newton-Cotes Formula. In fact, substituting \(x=a+th\) with \(t\in [0,n]\), we have

\[ \begin{align*} A_i^n&=\int_0^n\prod_{j=0\atop j\neq i}^n \frac{(th-jh)}{(ih-jh)}d(a+th)\\ &=h\int_0^n\prod_{j=0\atop j\neq i}^n\frac{(t-j)}{(i-j)}dt \end{align*} \]

Common Newton-Cotes quadrature rules

(i) Trapezoidal Rule

Letting \(n=1\), we get the Trapezoidal Rule of quadrature:

\[ I_2(f)=I(p_1)=\frac{b-a}{2}[f(a)+f(b)] \]

\(f\) is approximated by a linear function.

(ii) Simpson's Rule (commonly used)

Letting \(n=2\), we get Simpson's Rule of quadrature:

\[ I_3(f)=I(p_2)=\frac{b-a}{6}\left[f(a)+4f\left(\frac{a+b}{2}\right)+f(b)\right] \]

\(f\) is approximated by a parabola.

(iii) Cotes' Rule

Letting \(n=4\), we get Cotes' rule of quadrature:

\[ I_5(f)=I(p_4)=\frac{b-a}{90}\left[7f(x_0)+32f(x_1)+12f(x_2)+32f(x_3)+7f(x_4)\right] \]

where \(x_i=a+i(b-a)/4\), \(i=0,1,2,3,4\).
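The three rules above can be sketched directly (a minimal illustration with \(\rho\equiv 1\); the function names are mine), compared here on \(\int_0^1 e^x dx = e-1\):

```python
import math

def trapezoid(f, a, b):
    """n = 1: I_2(f) = (b-a)/2 * [f(a) + f(b)]"""
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    """n = 2: I_3(f) = (b-a)/6 * [f(a) + 4 f((a+b)/2) + f(b)]"""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def cotes(f, a, b):
    """n = 4 with weights 7, 32, 12, 32, 7 over 90."""
    h = (b - a) / 4
    x = [a + i * h for i in range(5)]
    return (b - a) / 90 * (7*f(x[0]) + 32*f(x[1]) + 12*f(x[2])
                           + 32*f(x[3]) + 7*f(x[4]))

exact = math.e - 1  # ∫_0^1 e^x dx
for rule in (trapezoid, simpson, cotes):
    print(rule.__name__, abs(rule(math.exp, 0.0, 1.0) - exact))
```

Each higher-order rule gives a markedly smaller error on this smooth integrand.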

误差分析 | Error analysis

Assume the nodes \(x_i\ (i=0,1,\cdots,n)\) are equidistant (spaced \(h\) apart), and introduce

\[ \omega_n(x)=\prod_{i=0}^n(x-x_i) \]

which has the following properties.

Properties of \(\omega_n(x)\)

(i) \(\omega_n((a+b)/2+\xi)=(-1)^{n+1}\omega_n((a+b)/2-\xi)\)

(ii) For \(\xi\neq x_i(i=0,1,\cdots,n)\), the following statements hold.

\[ \begin{align*} &|\omega_n(\xi+h)|<|\omega_n(\xi)|,\quad \forall a<\xi+h\leq(a+b)/2\\ &|\omega_n(\xi)|<|\omega_n(\xi+h)|,\quad \forall (a+b)/2\leq\xi<b \end{align*} \]

(Graph omitted: plots of \(\omega_n(x)\) for \(n=5\) and \(n=6\) illustrate these properties.)

Then we define an integral of \(\omega_n(x)\)

\[ \Omega_n(x)=\int_a^x \omega_n(s)ds, n=1,2,\cdots \]

which has a great property.

Properties of \(\Omega_n(x)\)

When \(n\) is an even number, then

(i) \(\Omega_n(a)=\Omega_n(b)=0\)

(ii) \(\Omega_n(x)>0, \quad\forall x\in(a,b)\)

Then we can prove the error formulas for the Newton-Cotes Formula.

Theorem of Error Analysis for Newton-Cotes Formula

(i) If \(n\) (number of intervals) is even, and \(f\in C^{n+2}[a,b]\), then we have

\[ E_{n+1}(f)=\frac{k_n}{(n+2)!}f^{(n+2)}(\eta),\quad a<\eta<b \]

where

\[ k_n=\int_a^bx\omega_n(x)dx<0 \]

(ii) If \(n\) is odd, and \(f\in C^{n+1}[a,b]\), then the error expression is

\[ E_{n+1}(f)=\frac{k_n}{(n+1)!}f^{(n+1)}(\eta),\quad a<\eta<b \]

where

\[ k_n=\int_a^b\omega_n(x)dx<0 \]

The proof uses the remainder of the interpolating polynomial in Newton's divided-difference form.

Corollary of Error Analysis

(i) For the Trapezoidal Formula, the truncation error is

\[ E_2(f)=-\frac{(b-a)^3}{12}f''(\eta),\quad a<\eta<b \]

(ii) For the Simpson Formula, the truncation error is

\[ E_3(f)=-\frac{(b-a)^5}{2880}f^{(4)}(\eta),\quad a<\eta<b \]
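The Simpson error formula can be checked exactly on \(f(x)=x^4\): there \(f^{(4)}\equiv 24\) is constant, so the choice of \(\eta\) does not matter. A small sketch in exact arithmetic:

```python
from fractions import Fraction as F

# f(x) = x^4 on [0, 1]: f''''(x) = 24 everywhere, so the error formula
# E_3(f) = -(b-a)^5 / 2880 * f''''(eta) holds with equality for any eta.
I_exact = F(1, 5)                               # ∫_0^1 x^4 dx
I_simpson = F(1, 6) * (0 + 4 * F(1, 2)**4 + 1)  # (b-a)/6 [f(0)+4f(1/2)+f(1)]
E = I_exact - I_simpson
print(E)                    # -1/120
print(F(-1, 2880) * 24)     # -(b-a)^5/2880 * f'''' = -1/120 as well
```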

数值稳定性 | Numerical Stability

In practice it is usually hard to obtain an exact value of \(f(x_k)\), so an error is introduced here, which may affect subsequent computation. We now quantify this effect.

Assume we use \(\tilde{f}(x_k)\) in place of \(f(x_k)\) and denote \(\varepsilon_k=f(x_k)-\tilde{f}(x_k)\); then the quadrature error satisfies

\[ \begin{align*} |I_n(f)-I_n(\tilde{f})|&=\left|\sum_{k=0}^n A_k f(x_k)-\sum_{k=0}^n A_k \tilde{f}(x_k)\right|\\ &\leq \sum_{k=0}^n |A_k||\varepsilon_k|\\ &\leq \sum_{k=0}^n |A_k|\max_{0\leq k\leq n}{|\varepsilon_k|} \end{align*}\]

So if \(\sum_{k=0}^n |A_k|\) is bounded, then the error can be bounded. However, one can prove that for Newton-Cotes formulas this sum tends to infinity as \(n\rightarrow \infty\) (the coefficients begin to change sign at \(n=8\)), so high-order Newton-Cotes formulas are impractical.
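This behavior can be observed by computing the Cotes coefficients exactly from the formula \(A_i^n = h\int_0^n \prod_{j\neq i}(t-j)/(i-j)\,dt\) derived above (a sketch with my own helper names; the weights below are normalized by \(b-a\), i.e. \(w_i = A_i^n/(b-a)\)):

```python
from fractions import Fraction as F

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_int(p, a, b):
    """Exact integral of a polynomial over [a, b]."""
    return sum(c * (F(b)**(k + 1) - F(a)**(k + 1)) / (k + 1)
               for k, c in enumerate(p))

def cotes_weights(n):
    """Normalized closed Newton-Cotes weights w_i = A_i^n / (b - a)."""
    ws = []
    for i in range(n + 1):
        p = [F(1)]
        for j in range(n + 1):
            if j != i:
                p = poly_mul(p, [F(-j), F(1)])   # factor (t - j)
                p = [c / (i - j) for c in p]     # divide by (i - j)
        ws.append(poly_int(p, 0, n) / n)
    return ws

print(cotes_weights(2))       # Simpson: [1/6, 2/3, 1/6]
print(min(cotes_weights(8)))  # negative: the sign change appears at n = 8
print([float(sum(abs(w) for w in cotes_weights(n))) for n in (2, 8, 20)])
```

The last line shows \(\sum_k |A_k|/(b-a)\) growing with \(n\), which is the source of the instability.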

复化积分 | Composite Numerical Integration

We can instead use a low-order Newton-Cotes formula while shrinking \(h\): that is, apply Simpson's formula (say) on many small subintervals of length \(h\) and sum the results.
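A sketch of this composite Simpson approach (assuming equal subintervals; the function name is mine), showing the \(O(h^4)\) error decay on \(\int_0^1 e^x dx\):

```python
import math

def composite_simpson(f, a, b, m):
    """Apply Simpson's rule on m subintervals of length h = (b-a)/m."""
    h = (b - a) / m
    total = 0.0
    for k in range(m):
        left = a + k * h
        total += h / 6 * (f(left) + 4 * f(left + h / 2) + f(left + h))
    return total

exact = math.e - 1
for m in (1, 2, 4, 8):
    err = abs(composite_simpson(math.exp, 0.0, 1.0, m) - exact)
    print(m, err)   # error shrinks roughly by a factor of 16 when h is halved
```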

Romberg 积分 | Romberg Integration

Here is another way to improve the precision of the integration: Richardson's Extrapolation, which is a general method.

  • Richardson's Extrapolation

Suppose an approximation \(T_0(h)\) of \(I\) has the error expansion
\[ \begin{equation} T_0(h) - I=\alpha_1 h + \alpha_2 h^2 +\cdots \label{richardson} \end{equation} \]

Let

\[ \begin{equation} T_0(h/2)-I=\alpha_1 h/2 + \alpha_2 (h/2)^2 + \cdots \label{richardson-2} \end{equation} \]

Multiplying both sides of equation \(\ref{richardson-2}\) by \(2\) and subtracting equation \(\ref{richardson}\), we get

\[ \frac{2T_0(h/2)-T_0(h)}{2-1}-I=-\frac{1}{2}\alpha_2 h^2 +\cdots \]

So

\[ \begin{align*} T_1(h) &= \frac{2T_0(h/2)-T_0(h)}{2-1} = I + \beta_1 h^2+\beta_2 h^3+\cdots \\ T_2(h) &= \frac{2^2T_1(h/2)-T_1(h)}{2^2-1} = I + \gamma_1 h^3 +\cdots \\ &\vdots\\ T_n(h) &= \frac{2^nT_{n-1}(h/2)-T_{n-1}(h)}{2^n-1} = I + \xi_1 h^{n+1} +\cdots \end{align*} \]
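When \(T_0\) is the composite trapezoidal rule, the Euler-Maclaurin expansion contains only even powers of \(h\), so the extrapolation factors become \(4^k\) rather than \(2^k\); this gives the classic Romberg scheme, sketched below (helper names are mine):

```python
import math

def trapezoid_sum(f, a, b, m):
    """Composite trapezoidal rule T_0 with m subintervals."""
    h = (b - a) / m
    s = (f(a) + f(b)) / 2 + sum(f(a + k * h) for k in range(1, m))
    return h * s

def romberg(f, a, b, levels):
    """Richardson extrapolation of the trapezoidal rule.
    Level k eliminates the h^2, h^4, ..., h^(2k) error terms via
    T_k(h) = (4^k T_{k-1}(h/2) - T_{k-1}(h)) / (4^k - 1)."""
    R = [[trapezoid_sum(f, a, b, 2**i) for i in range(levels)]]
    for k in range(1, levels):
        prev = R[-1]
        R.append([(4**k * prev[i + 1] - prev[i]) / (4**k - 1)
                  for i in range(len(prev) - 1)])
    return R[-1][0]   # most extrapolated value

print(abs(romberg(math.exp, 0.0, 1.0, 5) - (math.e - 1)))
```

A handful of trapezoid evaluations already yields near machine-precision accuracy on smooth integrands.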

自适应求积方法 | Adaptive Quadrature Methods
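As one common realization of this idea (a hedged sketch, not spelled out in these notes), adaptive Simpson bisects only where the local error estimate, the difference between one Simpson step and two half-steps, is still too large:

```python
import math

def simpson_step(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def adaptive_simpson(f, a, b, tol):
    """Bisect until |S(a,m) + S(m,b) - S(a,b)| <= 15*tol; the factor 15
    comes from the h^4 error order: 2^4 - 1 = 15."""
    m = (a + b) / 2
    whole = simpson_step(f, a, b)
    left, right = simpson_step(f, a, m), simpson_step(f, m, b)
    if abs(left + right - whole) <= 15 * tol:
        return left + right
    return adaptive_simpson(f, a, m, tol / 2) + adaptive_simpson(f, m, b, tol / 2)

# Concentrates work near x = 0, where sqrt(x) is hard to integrate.
print(abs(adaptive_simpson(math.sqrt, 0.0, 1.0, 1e-8) - 2/3))
```

The recursion refines the mesh only near difficult points, unlike the uniform composite rules above.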

非等距求积公式 | Non-equidistant Quadrature formula

Could we relax some of the restrictions of the above quadrature and obtain a more accurate formula? To be more specific, could we let the coefficients of the quadrature vary and place the nodes freely, and thereby obtain a better result?

Let us analyze this: if a quadrature formula has \(m\) algebraic precision, then the coefficients and nodes of formula \(\ref{I-genaral}\) must satisfy the system

\[ \begin{cases} A_1+A_2+\cdots+A_n&=\int_{a}^b\rho(x)dx\\ A_1x_1+A_2x_2+\cdots+A_nx_n&=\int_{a}^b\rho(x)xdx\\ \cdots&\\ A_1x_1^m+A_2x_2^m+\cdots+A_nx_n^m&=\int_{a}^b\rho(x)x^mdx\\ \end{cases} \]

一致系数公式 | Uniform Coefficient Formula

Given coefficients \(A_1,A_2,\cdots,A_n\), the above nonlinear system in the variables \(x_1,x_2,\cdots,x_n\) may have a solution, yielding at least \(m\) algebraic precision.

Assume \(\rho(x)\equiv 1\) and \(A_1=A_2=\cdots=A_n\) (exactness for constants then forces the common value \(A_n=(b-a)/n\)); the formula must take the form

\[ I_n(f)=A_n\sum_{i=1}^nf(x_i) \]

and

\[ A_n\sum_{i=1}^nx_i^k=\frac{1}{k+1}(b^{k+1}-a^{k+1}),\quad k=1,2,\cdots,n \]

Common cases of the uniform coefficient formula

(i) \(n=1\), then \(A_1=b-a\), \(x_1=(a+b)/2\), so

\[ I_1(f)=(b-a)f\left(\frac{a+b}{2}\right) \]

(ii) \(n=2\), then \(A_2=(b-a)/2\), so

\[ I_2=\frac{b-a}{2}\left[f\left(\frac{b+a}{2}-\frac{b-a}{2\sqrt{3}}\right) +f\left(\frac{b+a}{2}+\frac{b-a}{2\sqrt{3}}\right) \right] \]

Notice that the nodes here are symmetric about \(\frac{a+b}{2}\).

Gaussian 求积公式 | Gaussian Quadrature

Given \(n\) unknown nodes \(x_1<x_2<\cdots<x_n\), and denote

\[ \omega_n(x)=\prod_{i=1}^n(x-x_i) \]

If we let

\[ A_i=\int_a^b\rho(x)\frac{\omega_n(x)}{(x-x_i)\omega_n'(x_i)}dx, \quad i=1,2,\cdots,n \]

then the resulting quadrature formula has at least \(n-1\) algebraic precision (readers can check this: it is just the Lagrange interpolation polynomial with non-equidistant nodes). That is, interpolating at \(n\) nodes and then integrating always gives a quadrature formula with at least \(n-1\) algebraic precision. We now analyze how high the precision can go if we choose the positions of these \(n\) nodes properly.

Interpolating a function gives an error formula similar to the one for the Newton-Cotes Formula:

\[ E_n(f)=\int_a^b\rho(x)f[x_1,x_2,\cdots,x_n,x]\omega_n(x)dx \]

Notice that taking the divided difference with respect to the \(n\) nodes lowers the degree by \(n\): if \(f(x)\) is a polynomial of degree \(m>n-1\), then \(f[x_1,x_2,\cdots,x_n,x]\) is a polynomial in \(x\) of degree \(m-n\). Hence the quadrature formula has \(m\ (m>n)\) algebraic precision if and only if for every polynomial \(q(x)\) of degree no more than \(m-n\), we have

\[ \int_{a}^b\rho(x)q(x)\omega_n(x)dx=0 \]

Since \((\omega_n,\omega_n)=\int_a^b\rho(x)\omega_n^2(x)dx>0\) (an inner product), the degree \(m-n\) must be less than \(n\), which means \(m\) can be at most \(2n-1\). If \(\omega_n(x)\) is the \(n\)th orthogonal polynomial with respect to \(\rho\), it satisfies the above condition for every \(q\) of degree at most \(n-1\). So if we choose the roots of this polynomial as the quadrature nodes, the corresponding quadrature formula has \(2n-1\) algebraic precision.

Theoretically speaking, the Gaussian quadrature formula always exists, but in practice it is not easy to find directly. Here we use the method of undetermined coefficients.
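For example, on \([-1,1]\) with \(\rho\equiv 1\) and \(n=2\), symmetry suggests nodes \(\pm x_1\) with equal weights \(A\); exactness for \(1\) and \(x^2\) gives \(A=1\) and \(x_1=1/\sqrt{3}\) (exactness for odd powers is automatic), so the precision is \(2n-1=3\). A quick check (names mine):

```python
import math

# Undetermined coefficients, n = 2 on [-1, 1], rho = 1.
# Symmetric ansatz: nodes -x1, +x1 with equal weights A.
#   degree 0:  2A = ∫ 1 dx = 2           =>  A = 1
#   degree 2:  2A x1^2 = ∫ x^2 dx = 2/3  =>  x1 = 1/sqrt(3)
A = 1.0
x1 = 1 / math.sqrt(3)

def gauss2(f):
    return A * (f(-x1) + f(x1))

# Exact for monomials up to degree 3 (= 2n - 1), fails at degree 4.
for k in range(5):
    exact = 2 / (k + 1) if k % 2 == 0 else 0.0
    print(k, gauss2(lambda x: x**k), exact)
```

These are the same nodes as the two-point uniform coefficient formula above, mapped to \([-1,1]\).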

Using different orthogonal polynomials, we can directly deduce some Gaussian quadrature.

Examples of Gaussian Quadrature

(i) \(\rho(x)\equiv 1\), integrated on \([-1,1]\): the corresponding orthogonal polynomial is the Legendre Polynomial

\[ P_n(x)=\frac{1}{2^n n!}\frac{d^n{(x^2-1)}^n}{dx^n} \]

So the coefficients are

\[ A_i=\int_{-1}^1 \frac{P_n(x)dx}{(x-x_i)P'_n(x_i)}=\frac{2}{(1-x_i^2)P'_n(x_i)^2} \]

with error

\[ E_n(f)=\frac{2^{2n+1}(n!)^4}{[(2n)!]^3(2n+1)}f^{(2n)}(\eta),\quad \eta\in(-1,1) \]
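As a concrete check (a numerical sketch): for \(n=3\) the nodes are the roots of \(P_3(x)=(5x^3-3x)/2\), namely \(0\) and \(\pm\sqrt{3/5}\), and the coefficient formula yields the familiar weights \(5/9, 8/9, 5/9\):

```python
import math

# 3-point Gauss-Legendre: nodes are the roots of P_3(x) = (5x^3 - 3x)/2,
# i.e. 0 and ±sqrt(3/5); weights from A_i = 2 / ((1 - x_i^2) P_3'(x_i)^2).
def dP3(x):
    return (15 * x**2 - 3) / 2

nodes = [-math.sqrt(3/5), 0.0, math.sqrt(3/5)]
weights = [2 / ((1 - x**2) * dP3(x)**2) for x in nodes]
print(weights)   # approx 5/9, 8/9, 5/9

def gauss3(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Degree of precision 2n - 1 = 5: exact for x^k, k <= 5.
for k in range(6):
    exact = 2 / (k + 1) if k % 2 == 0 else 0.0
    print(k, abs(gauss3(lambda x: x**k) - exact))   # all near machine epsilon
```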

(ii) \(\rho(x)=\sqrt{1-x^2}\), integrated on \([-1,1]\): the corresponding polynomial is the Chebyshev polynomial of the second kind

\[ U_n(x)=\frac{\sin[(n+1)\arccos x]}{\sqrt{1-x^2}} \]

Using its roots as integration nodes, we get

\[ \int_{-1}^1\sqrt{1-x^2}f(x)dx\approx \sum_{k=1}^n\frac{\pi}{n+1}\sin^2 \frac{k\pi}{n+1}f\left(\cos \frac{k\pi}{n+1}\right) \]

with error

\[ E_n(f)=\frac{\pi}{2^{2n+1}(2n)!}f^{(2n)}(\eta),\quad \eta\in(-1,1) \]