May, 2022 - François HU
Master of Science - EPITA
This lecture is available here: https://curiousml.github.io/
- bisection algorithm
- fixed_point algorithm
- newton algorithm

In linear systems: $$ \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,n-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,n-1} \\ & \vdots & \\ a_{n-1,0} & a_{n-1,1} & \cdots & a_{n-1,n-1} \end{bmatrix} \times \begin{bmatrix} x_0\\ x_1\\ \vdots\\ x_{n-1} \end{bmatrix} = \begin{bmatrix} b_0\\ b_1\\ \vdots\\ b_{n-1} \end{bmatrix}\iff Ax - b = 0 $$
In nonlinear systems: $$ f(x) = 0 $$
We wish to find a solution of the nonlinear equation $f(x) = 0$, with $x = \begin{bmatrix} x_0\\ x_1\\ \vdots\\ x_{n-1} \end{bmatrix}$ the unknown variable, in the form of a "floating point" number. An equation of this type may have no solution, one solution, or several. We will assume that it has at least one, which we will try to determine.
We quantify this sensitivity with the condition number, which measures how sensitive the output of a function is to a change in its input:
$$ \text{change in output} = \text{condition number}\times \text{change in input} $$

A solution $x^*$ is called simple if $f(x^*) = 0$ and $f'(x^*) \neq 0$.
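As an illustration, the relative condition number of a differentiable function at a point $x$ is the standard quantity $\left|x\, f'(x)/f(x)\right|$; the function and the evaluation point below are arbitrary choices, not prescribed by the lecture:

```python
def condition_number(f, x, h=1e-8):
    """Estimate the relative condition number |x * f'(x) / f(x)|,
    approximating f'(x) with a forward finite difference."""
    df = (f(x + h) - f(x)) / h
    return abs(x * df / f(x))

f = lambda x: x**3 - 7*x + 2
k = condition_number(f, 2.0)   # f(2) = -4, f'(2) = 5, so k is close to 2.5
```

A large value of `k` means a small relative perturbation of the input is amplified into a large relative change of the output.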
A method has order of convergence $q$ if the error $e_k$ at iteration $k$ satisfies $|e_{k+1}| \leq C\,|e_k|^q$: roughly speaking, each iteration multiplies the number of correct digits by $q$.
If $q=1$ the convergence is called linear; if $q=2$, quadratic.
bisection(f, a, b, eps)
    Input  : function f, interval extremities a and b, precision eps
    Output : approximation m of a root of f, with |b - a| < eps
    while b - a >= eps do:
        m = (a + b) / 2
        if f(a) * f(m) > 0 do:
            a = m
        else:
            b = m
    return m
Remarks:
Let us denote by $g$ a function verifying $g(x^*) = x^*$: such a point $x^*$ is called a fixed point of $g$, and the fixed-point method iterates $x_{k+1} = g(x_k)$.
Remarks:
In dimension 1.
Define a function bisection(f, a, b, eps)
that iteratively solves the system $f(x) = 0$ in the interval $[a,b]$ (for a given tolerance $eps$) using a bisection method. Implement this method on $f(x) = x^{3} - 7x + 2$ in $[0, 1]$.
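One possible Python sketch, following the pseudo-code above (returning the final midpoint is one convention among several):

```python
def bisection(f, a, b, eps):
    """Approximate a root of f in [a, b] by halving the interval.
    Assumes f(a) and f(b) have opposite signs."""
    while b - a >= eps:
        m = (a + b) / 2
        if f(a) * f(m) > 0:   # same sign: the root lies in [m, b]
            a = m
        else:                 # sign change: the root lies in [a, m]
            b = m
    return (a + b) / 2

f = lambda x: x**3 - 7*x + 2
root = bisection(f, 0, 1, 1e-8)   # f(0) = 2 > 0, f(1) = -4 < 0
```

On this example the iteration converges to the root near $x \approx 0.2892$.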
Define a function fixed_point
that iteratively solves, with a fixed-point iteration method, a nonlinear system $f(x) = 0$ with $f(x) = x^{3} - 7x + 2$.
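A possible sketch: the rewriting $x = g(x)$ with $g(x) = (x^3 + 2)/7$ is one choice among several, and it is a contraction near the root in $[0, 1]$, so the iteration converges there:

```python
def fixed_point(g, x0, eps, max_iter=100):
    """Iterate x <- g(x) until two successive iterates are eps-close."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# rewrite x^3 - 7x + 2 = 0 as x = (x^3 + 2) / 7
g = lambda x: (x**3 + 2) / 7
root = fixed_point(g, 0.0, 1e-10)
```

Note that other rewritings of the same equation may diverge if $|g'(x^*)| > 1$.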
Define a function newton
that iteratively solves, with Newton's method, a nonlinear system $f(x) = 0$ with $f(x) = x^{3} - 7x + 2$.
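A possible sketch of Newton's iteration $x_{k+1} = x_k - f(x_k)/f'(x_k)$, with the derivative supplied by hand:

```python
def newton(f, df, x0, eps, max_iter=50):
    """Newton's method: follow the tangent line at x to the next iterate."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step
        if abs(step) < eps:   # stop when the update is negligible
            return x
    return x

f  = lambda x: x**3 - 7*x + 2
df = lambda x: 3*x**2 - 7
root = newton(f, df, 0.0, 1e-12)
```

Starting from $x_0 = 0$, a handful of iterations suffice, illustrating the quadratic convergence of the method near a simple root.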
Let us solve the system:
$$ \begin{bmatrix} a_{0, 0} & a_{0, 1} & a_{0, 2}\\ a_{1, 0} & a_{1, 1} & a_{1, 2}\\ a_{2, 0} & a_{2, 1} & a_{2, 2}\\ \end{bmatrix} \times \begin{bmatrix} x_{0, 0} & x_{0, 1} & x_{0, 2}\\ x_{1, 0} & x_{1, 1} & x_{1, 2}\\ x_{2, 0} & x_{2, 1} & x_{2, 2}\\ \end{bmatrix} = \begin{bmatrix} b_{0, 0} & b_{0, 1} & b_{0, 2}\\ b_{1, 0} & b_{1, 1} & b_{1, 2}\\ b_{2, 0} & b_{2, 1} & b_{2, 2} \end{bmatrix} \iff A\times X = B $$

This is equivalent to solving, with the Gauss method, one system per column $j$:
$$ \begin{bmatrix} a_{0, 0} & a_{0, 1} & a_{0, 2}\\ a_{1, 0} & a_{1, 1} & a_{1, 2}\\ a_{2, 0} & a_{2, 1} & a_{2, 2} \end{bmatrix} \times \begin{bmatrix} x_{0, j}\\ x_{1, j}\\ x_{2, j} \end{bmatrix} = \begin{bmatrix} b_{0, j}\\ b_{1, j}\\ b_{2, j} \end{bmatrix} \iff A\times X_j = B_j $$

For an $n\times n$ matrix, the Gauss reduction costs about $\dfrac{n^3}{3}$ operations per system, so solving the $n$ column systems this way requires:

$\implies \dfrac{n^4}{3}$ operations
The problem with this approach is that, when triangularising the matrix $A$, the vector $B_j$ must be updated as well, so the elimination work is repeated for every system.
Suppose that a matrix $A$ can be decomposed as a product: $A = L\times U$ where $L$ is a lower triangular matrix and $U$ an upper triangular matrix. Therefore:
$$ A\times X = B \iff L\times U\times X = B $$

An improved approach is then to use the LU decomposition as follows: first solve $L\times Y = B$ by forward substitution, then solve $U\times X = Y$ by back substitution.
Example of LU decomposition:
$$ \begin{bmatrix} 10 & 7 & 8\\ 7 & 5 & 6\\ 8 & 6 & 10 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0\\ 7/10 & 1 & 0\\ 4/5 & 4 & 1 \end{bmatrix} \times \begin{bmatrix} 10 & 7 & 8\\ 0 & 1/10 & 2/5\\ 0 & 0 & 2 \end{bmatrix} \iff A = L \times U $$

Pseudo-code of the LU decomposition:
LU_decomposition(A)
    Input  : square matrix A of dimension n
    Output : L lower triangular and U upper triangular such that A = L x U
    L = Identity matrix
    U = A
    for i = 0 to n-1 do:
        for j = i+1 to n-1 do:
            L[j, i] = U[j, i] / U[i, i]
            U[j, i..] = U[j, i..] - L[j, i] * U[i, i..]
    return L, U
Remarks: this naive approach may fail when a pivot $U[i, i]$ is zero, even though the system is solvable; the Gauss pivot method handles this by exchanging rows whenever a zero pivot occurs.
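The whole pipeline can be sketched in NumPy as follows: the factorization mirrors the pseudo-code above, the helper names `forward_substitution` and `back_substitution` are my own, and the choice $B = I$ (which yields $X = A^{-1}$) is purely for illustration:

```python
import numpy as np

def LU_decomposition(A):
    """Doolittle LU factorization without pivoting: A = L @ U.
    Assumes all pivots U[i, i] are nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for i in range(n):
        for j in range(i + 1, n):
            L[j, i] = U[j, i] / U[i, i]
            U[j, i:] -= L[j, i] * U[i, i:]
    return L, U

def forward_substitution(L, b):
    """Solve L y = b with L lower triangular, from the top row down."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve U x = y with U upper triangular, from the bottom row up."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Solve A X = B column by column, factorizing A only once
A = np.array([[10., 7., 8.],
              [7., 5., 6.],
              [8., 6., 10.]])
B = np.eye(3)
L, U = LU_decomposition(A)
X = np.column_stack([back_substitution(U, forward_substitution(L, B[:, j]))
                     for j in range(B.shape[1])])
```

The factorization is paid once ($\sim n^3/3$ operations) and each additional right-hand side only costs the two triangular solves ($\sim n^2$ each), which is exactly the saving over re-running Gauss reduction per column.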