Solution of Nonlinear Equations
Bisection Method
Approximation of the root of a continuous function f on an interval [a, b] with f(a) f(b) < 0. At each step, compute the midpoint c = (a + b)/2:
- If f(a) f(c) < 0, then the root lies in [a, c]; set b = c.
- If f(c) f(b) < 0, then the root lies in [c, b]; set a = c.
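A minimal Python sketch of the halving loop above (function and parameter names, tolerance, and iteration cap are illustrative choices, not from the original):

```python
def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Halve [a, b] while keeping a sign change of f."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c  # root lies in [a, c]
        else:
            a = c  # root lies in [c, b]
    return (a + b) / 2

# root of x^2 - 2 on [1, 2], i.e. sqrt(2)
root = bisection(lambda x: x * x - 2, 1.0, 2.0)
```

The interval width halves each step, so the error after k steps is at most (b - a)/2^(k+1).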
Newton-Raphson Method
To find a root of f(x) = 0, starting from an initial guess x_0, iterate:
x_{n+1} = x_n - f(x_n) / f'(x_n)
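A small sketch of the iteration (tolerance and iteration cap are illustrative defaults):

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# sqrt(2) as the positive root of x^2 - 2
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Convergence is quadratic near a simple root, but the method needs f' and a sufficiently good starting point.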
Secant Method
Iterative approximation without the need to compute derivatives, using the two most recent iterates:
x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
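The same update as a Python sketch (names and defaults are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: replaces f' with a finite-difference slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # flat secant line; cannot divide
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```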
False Position Method
Similar to the bisection method, but the interval is split where the secant line crosses zero instead of at the midpoint:
c = b - f(b) (b - a) / (f(b) - f(a))
The subinterval ([a, c] or [c, b]) on which f changes sign is kept.
Interpolation and Function Approximation
Linear Interpolation
Given two points (x_0, y_0) and (x_1, y_1):
P(x) = y_0 + ((y_1 - y_0) / (x_1 - x_0)) (x - x_0)
Lagrange Polynomial Interpolation
Given a set of points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n), the interpolating polynomial is
P(x) = Σ_{i=0}^{n} y_i L_i(x),
where the basis polynomials are
L_i(x) = Π_{j≠i} (x - x_j) / (x_i - x_j)
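A direct evaluation of the formula above (function name is an illustrative choice):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # L_i(x)
        total += yi * basis
    return total

# three samples of y = x^2; interpolation is exact for quadratics
val = lagrange_interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)  # -> 2.25
```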
Newton Interpolation
P(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + ... + a_n (x - x_0) ... (x - x_{n-1}),
where the coefficients a_i = f[x_0, x_1, ..., x_i] are Newton's divided differences.
Function Approximation
Least Squares Method
To fit a line y = a + b x to data (x_i, y_i), i = 1, ..., n.
Linear approximation: minimize the sum of squared residuals S(a, b) = Σ_{i=1}^{n} (y_i - a - b x_i)².
Coefficients of the line:
b = (n Σ x_i y_i - Σ x_i Σ y_i) / (n Σ x_i² - (Σ x_i)²)
a = (Σ y_i - b Σ x_i) / n
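These closed-form coefficients translate directly to code (a short sketch; the helper name is illustrative):

```python
def least_squares_line(xs, ys):
    """Return (a, b) for the best-fit line y = a + b x."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# data lying exactly on y = 1 + 2x
a, b = least_squares_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```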
Polynomial Approximation
Taylor’s Formula
f(x) = Σ_{k=0}^{n} (f^{(k)}(a) / k!) (x - a)^k + R_n(x),
where the remainder is R_n(x) = (f^{(n+1)}(ξ) / (n+1)!) (x - a)^{n+1} for some ξ between a and x.
Numerical Differentiation
First Derivative
Forward Differences
f'(x) ≈ (f(x + h) - f(x)) / h
Backward Differences
f'(x) ≈ (f(x) - f(x - h)) / h
Central Differences
f'(x) ≈ (f(x + h) - f(x - h)) / (2h)
Finite Differences (Second Derivative)
Central Differences
f''(x) ≈ (f(x + h) - 2 f(x) + f(x - h)) / h²
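The central-difference formulas can be sketched as one-liners (step sizes here are illustrative defaults balancing truncation and round-off error):

```python
import math

def central_first(f, x, h=1e-5):
    # f'(x) ≈ (f(x+h) - f(x-h)) / (2h), error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def central_second(f, x, h=1e-4):
    # f''(x) ≈ (f(x+h) - 2 f(x) + f(x-h)) / h^2, error O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

d1 = central_first(math.sin, 0.5)   # close to cos(0.5)
d2 = central_second(math.sin, 0.5)  # close to -sin(0.5)
```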
Numerical Integration
Trapezoidal Rule
Approximation of an integral over the interval [a, b] using n subintervals of width h = (b - a)/n:
∫_a^b f(x) dx ≈ (h/2) [f(x_0) + 2 Σ_{i=1}^{n-1} f(x_i) + f(x_n)]
Simpson’s Rule (1/3)
Approximation requiring an even number n of subintervals:
∫_a^b f(x) dx ≈ (h/3) [f(x_0) + 4 (f(x_1) + f(x_3) + ...) + 2 (f(x_2) + f(x_4) + ...) + f(x_n)]
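Both composite rules as short sketches (function names are illustrative):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd-index nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even-index interior nodes
    return h * s / 3

I_trap = trapezoid(lambda x: x * x, 0.0, 1.0, 100)  # close to 1/3
I_simp = simpson(lambda x: x * x, 0.0, 1.0, 10)     # exact for cubics and below
```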
Gaussian Quadrature
Approximation of an integral using a weighted linear combination of function values at specific points:
∫_{-1}^{1} f(x) dx ≈ Σ_{i=1}^{n} w_i f(x_i),
where the nodes x_i are the roots of the Legendre polynomial P_n(x) and the w_i are the corresponding weights; the rule is exact for polynomials of degree ≤ 2n - 1.
Solution of Systems of Linear Equations
Gaussian Elimination Method
For a system A x = b, reduce the augmented matrix [A | b] to upper triangular form with elementary row operations, then obtain x by back substitution.
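A compact sketch of elimination with partial pivoting (a standard refinement for numerical robustness; the original notes do not specify a pivoting strategy):

```python
def gauss_solve(A, b):
    """Solve A x = b by elimination with partial pivoting + back substitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # largest pivot in column k
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # solution [0.8, 1.4]
```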
LU Factorization
Factorization of a matrix A into A = L U, with L lower triangular and U upper triangular. The system A x = b is then solved in two triangular steps: L y = b, followed by U x = y.
Jacobi Method
Iteration to solve A x = b componentwise:
x_i^{(k+1)} = (b_i - Σ_{j≠i} a_{ij} x_j^{(k)}) / a_{ii}
Gauss-Seidel Method
Similar to the Jacobi method, but uses the new approximations as soon as they are computed:
x_i^{(k+1)} = (b_i - Σ_{j<i} a_{ij} x_j^{(k+1)} - Σ_{j>i} a_{ij} x_j^{(k)}) / a_{ii}
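The two iterations differ only in when updated components are used, which a sketch makes concrete (iteration counts are illustrative; both assume a diagonally dominant matrix):

```python
def jacobi(A, b, iters=100):
    """Jacobi: every component is updated from the previous iterate."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel: components updated in place, reusing fresh values."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# diagonally dominant system with exact solution x = [1, 2]
A, b = [[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]
xj, xg = jacobi(A, b), gauss_seidel(A, b)
```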
Eigenvalue and Eigenvector Approximation
Power Method
To approximate the dominant eigenvalue of a matrix A, iterate
x^{(k+1)} = A x^{(k)} / ‖A x^{(k)}‖,
where the approximate eigenvalue is the Rayleigh quotient:
λ ≈ (x^T A x) / (x^T x)
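A sketch of the iteration with infinity-norm normalization (an illustrative choice; any norm works):

```python
def power_method(A, iters=100):
    """Power iteration; eigenvalue estimated via the Rayleigh quotient."""
    n = len(A)
    x = [1.0] * n  # arbitrary nonzero start vector
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]  # renormalize each step
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(xi * yi for xi, yi in zip(x, Ax)) / sum(xi * xi for xi in x)
    return lam, x

# symmetric matrix with eigenvalues 3 and 1; dominant eigenvector along (1, 1)
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]])
```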
QR Method
QR decomposition of a matrix A into A = Q R, with Q orthogonal and R upper triangular. Iterating A_{k+1} = R_k Q_k drives A_k toward an (almost) triangular matrix whose diagonal entries approximate the eigenvalues of A.
Methods for Solving Ordinary Differential Equations
Euler Method
To solve the initial value problem y' = f(t, y), y(t_0) = y_0:
y_{n+1} = y_n + h f(t_n, y_n),
where h is the step size and t_n = t_0 + n h.
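A minimal sketch of the update loop (step size and step count are illustrative):

```python
def euler(f, t0, y0, h, steps):
    """Explicit Euler: y_{n+1} = y_n + h f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1, so y(1) = e; explicit Euler slightly undershoots
y_end = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

The global error of explicit Euler is O(h), visible in the gap between y_end and e.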
Fourth-Order Runge-Kutta Method
To solve y' = f(t, y), combine four slope evaluations per step:
k_1 = f(t_n, y_n)
k_2 = f(t_n + h/2, y_n + h k_1 / 2)
k_3 = f(t_n + h/2, y_n + h k_2 / 2)
k_4 = f(t_n + h, y_n + h k_3)
y_{n+1} = y_n + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4)
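The four stages map directly to code (step size and count below are illustrative):

```python
def rk4(f, t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# same test problem as Euler: y' = y, y(1) = e; RK4 is far more accurate
y_end = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```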
Adams-Bashforth Method
Multi-step method for solving y' = f(t, y) that reuses previously computed slopes; e.g. the two-step variant is
y_{n+1} = y_n + (h/2) (3 f_n - f_{n-1}),
where f_n = f(t_n, y_n).
Methods for Solving Partial Differential Equations
Finite Difference Method
Approximation of partial differential equations using finite differences on the derivatives.
For a parabolic type equation, like the heat equation:
∂u/∂t = α ∂²u/∂x²
The discretization can be (explicit in time, central in space):
(u_i^{n+1} - u_i^n) / Δt = α (u_{i+1}^n - 2 u_i^n + u_{i-1}^n) / Δx²
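Solving the explicit scheme for u_i^{n+1} gives a simple time-stepping loop; a sketch with fixed (Dirichlet) boundary values and illustrative grid parameters:

```python
def heat_explicit(u0, alpha, dx, dt, steps):
    """Explicit (FTCS) scheme; stable when r = alpha*dt/dx^2 <= 1/2."""
    r = alpha * dt / (dx * dx)
    u = list(u0)
    for _ in range(steps):
        new = u[:]  # endpoints stay fixed (Dirichlet boundary values)
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = new
    return u

# an initial spike diffuses symmetrically (here r = 0.4, within stability)
u = heat_explicit([0.0, 0.0, 1.0, 0.0, 0.0], alpha=1.0, dx=0.1, dt=0.004, steps=10)
```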
Finite Element Method
Used to solve partial differential equations through a variational formulation: the domain is divided into elements, and shape functions are used to approximate the solution:
u(x) ≈ Σ_i c_i φ_i(x),
where the φ_i are the shape (basis) functions and the c_i are unknown coefficients determined by the discrete equations.
Galerkin Method
Based on requiring that the residual of the differential equation be orthogonal to the chosen function space:
∫_Ω R(u_h) φ_i dΩ = 0
for all basis functions φ_i of the approximation space.
Error Analysis
Absolute Error
Defined as the absolute difference between the exact value x and the approximate value x̃:
E_a = |x - x̃|
Relative Error
Ratio of the absolute error to the magnitude of the exact value (for x ≠ 0):
E_r = |x - x̃| / |x|
Convergence
A method is said to converge if, as the step size decreases (or the number of iterations grows), the approximate solution approaches the exact solution:
lim_{h→0} x̃(h) = x
Error Estimation
In numerical methods, the error can be estimated from Taylor expansions: the terms a scheme discards constitute its truncation error, typically of the form O(h^p) for a method of order p.
Stability Analysis
Von Neumann Stability Criterion
A numerical method is considered stable if perturbations in the initial data do not grow exponentially over time. For a finite difference scheme, substitute a Fourier mode u_j^n = ξ^n e^{i k j Δx} and require
|ξ(k)| ≤ 1,
where ξ is the amplification factor of the mode with wavenumber k.
Well-Posedness (Hadamard Criterion)
A problem is considered well-posed if:
- The solution exists.
- The solution is unique.
- The solution depends continuously on the initial data.
Numerical Optimization
Gradient Method
To minimize a function f(x), step in the direction of steepest descent:
x_{k+1} = x_k - α ∇f(x_k),
where α > 0 is the step size (learning rate).
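A fixed-step sketch of the descent loop (step size and iteration count are illustrative; practical implementations usually adapt α):

```python
def gradient_descent(grad, x0, alpha=0.1, iters=200):
    """Fixed-step gradient descent: x <- x - alpha * grad(x)."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# minimize f(x, y) = (x - 1)^2 + (y + 2)^2, whose minimum is at (1, -2)
xmin = gradient_descent(lambda p: [2 * (p[0] - 1), 2 * (p[1] + 2)], [0.0, 0.0])
```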
Newton’s Method
Uses second-derivative information to find a minimum:
x_{k+1} = x_k - f'(x_k) / f''(x_k)
(in several variables, x_{k+1} = x_k - H_f(x_k)^{-1} ∇f(x_k), where H_f is the Hessian).
Linear Programming (Simplex Method)
To maximize (or minimize) a linear objective z = c^T x subject to linear constraints A x ≤ b, x ≥ 0: the simplex method moves from vertex to vertex of the feasible region, improving the objective at each step.
Numerical Errors
Types of Errors
Absolute Error
E_a = |x - x̃|,
where x is the exact value and x̃ the approximation.
Relative Error
E_r = |x - x̃| / |x|
Rate of Convergence
If the errors e_k = |x_k - x| of successive iterates satisfy
lim_{k→∞} e_{k+1} / e_k^p = C,
where C > 0 is the asymptotic error constant, the method converges with order p (p = 1: linear convergence; p = 2: quadratic).