
Numerical Methods Cheatsheet

Solution of Nonlinear Equations

Bisection Method

Approximation of a root of a continuous function $f$ on an interval $[a, b]$ where $f(a)\,f(b) < 0$.

Take the midpoint $c = \frac{a + b}{2}$. If $f(c) = 0$, then $c$ is the root; if not:

  • If $f(a)\,f(c) < 0$, the root lies in $[a, c]$: set $b = c$.
  • If $f(c)\,f(b) < 0$, the root lies in $[c, b]$: set $a = c$.
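
A minimal Python sketch of the bisection loop above; the function name, tolerance, and example are illustrative choices, not from the original:

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2                # midpoint of the current interval
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:            # root lies in [a, c]
            b = c
        else:                          # root lies in [c, b]
            a = c
    return (a + b) / 2

# Example: root of x^2 - 2 on [1, 2], i.e. sqrt(2)
print(bisection(lambda x: x**2 - 2, 1.0, 2.0))
```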

Newton-Raphson Method

To find a root of $f(x) = 0$, starting from an initial approximation $x_0$:

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$
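
A hedged Python sketch of this iteration; the stopping rule and the example are assumptions:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:       # successive iterates agree
            return x_new
        x = x_new
    return x

# Example: sqrt(2) as the root of x^2 - 2, starting from x0 = 1
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))
```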

Secant Method

Iterative approximation without the need to compute derivatives:

$$x_{k+1} = x_k - f(x_k)\,\frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}$$

False Position Method

Similar to the bisection method, but the midpoint is replaced by the intersection of the secant line with the $x$-axis:

$$c = b - f(b)\,\frac{b - a}{f(b) - f(a)}$$

The interval is then updated with the same sign test as in bisection.

Interpolation and Function Approximation

Linear Interpolation

Given two points $(x_0, y_0)$ and $(x_1, y_1)$:

$$P(x) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}\,(x - x_0)$$

Lagrange Polynomial Interpolation

Given a set of points $(x_0, y_0), \ldots, (x_n, y_n)$ with distinct $x_i$, the interpolating polynomial is:

$$P(x) = \sum_{i=0}^{n} y_i\, L_i(x)$$

where

$$L_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}$$
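
A direct Python evaluation of the formula above (naive $O(n^2)$ per point; the names and data are illustrative):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Basis polynomial L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * L
    return total

# Example: polynomial through (0, 1), (1, 3), (2, 2), evaluated at x = 1.5
print(lagrange_eval([0, 1, 2], [1, 3, 2], 1.5))
```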

Newton Interpolation

The same interpolating polynomial in Newton form is

$$P(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_n \prod_{j=0}^{n-1} (x - x_j)$$

where the coefficients $a_k = f[x_0, \ldots, x_k]$ are calculated using divided differences:

$$f[x_i, \ldots, x_{i+k}] = \frac{f[x_{i+1}, \ldots, x_{i+k}] - f[x_i, \ldots, x_{i+k-1}]}{x_{i+k} - x_i}$$

Function Approximation

Least Squares Method

To fit a line to data points $(x_i, y_i)$, minimize the squared error:

$$E(a, b) = \sum_{i=1}^{n} \left( y_i - (a x_i + b) \right)^2$$

Linear approximation:

$$y = a x + b$$

Coefficients of the line:

$$a = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}, \qquad b = \frac{\sum y_i - a \sum x_i}{n}$$
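
A small Python sketch computing these coefficients directly from the sums (the example data is made up):

```python
def fit_line(xs, ys):
    """Least-squares line y = a*x + b through the points (xs, ys)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Example: noisy points roughly on y = 2x + 1
print(fit_line([0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8]))
```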

Polynomial Approximation

Taylor’s Formula

$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}\,(x - x_0)^k + R_n(x), \qquad R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(x - x_0)^{n+1}$$

Numerical Differentiation

First Derivative

Forward Differences

$$f'(x) \approx \frac{f(x + h) - f(x)}{h}$$

Backward Differences

$$f'(x) \approx \frac{f(x) - f(x - h)}{h}$$

Central Differences

$$f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}$$

Finite Differences (Second Derivative)

Central Differences

$$f''(x) \approx \frac{f(x + h) - 2 f(x) + f(x - h)}{h^2}$$
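
The difference quotients above translate directly to Python; the step sizes here are illustrative:

```python
import math

def d1_central(f, x, h=1e-5):
    """First derivative: (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2_central(f, x, h=1e-4):
    """Second derivative: (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Example: derivatives of sin at x = 1 (exact values: cos(1), -sin(1))
print(d1_central(math.sin, 1.0), d2_central(math.sin, 1.0))
```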

Numerical Integration

Trapezoidal Rule

Approximation of $\int_a^b f(x)\,dx$ over the interval $[a, b]$, with $n$ subintervals of width $h = (b - a)/n$ and nodes $x_i = a + i h$:

$$\int_a^b f(x)\,dx \approx \frac{h}{2} \left[ f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]$$

Simpson’s Rule (1/3)

Approximation requiring an even number of subintervals $n$:

$$\int_a^b f(x)\,dx \approx \frac{h}{3} \left[ f(x_0) + 4 \sum_{i \text{ odd}} f(x_i) + 2 \sum_{\substack{i \text{ even} \\ 0 < i < n}} f(x_i) + f(x_n) \right]$$
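
A Python sketch of both composite rules (the function names and the test integral are assumptions):

```python
def trapezoid(f, a, b, n=100):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n=100):
    """Composite Simpson 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # interior even nodes
    return h * s / 3

# Example: integral of x^2 on [0, 1] (exact value 1/3)
print(trapezoid(lambda x: x**2, 0, 1), simpson(lambda x: x**2, 0, 1))
```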

Gaussian Quadrature

Approximation of an integral using a weighted linear combination of function values at specific points:

$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i\, f(x_i)$$

where $x_i$ are the Gauss points and $w_i$ are the associated weights.
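
A sketch using NumPy's Gauss-Legendre nodes and weights (numpy.polynomial.legendre.leggauss), with the usual change of variable to a general interval; the point count and example integral are illustrative:

```python
import numpy as np

def gauss_legendre(f, a, b, n=5):
    """n-point Gaussian quadrature of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Example: integral of e^x on [0, 1] (exact value e - 1)
print(gauss_legendre(np.exp, 0.0, 1.0))
```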

Solution of Systems of Linear Equations

Gaussian Elimination Method

For a system $A\mathbf{x} = \mathbf{b}$, transform the matrix $A$ into an upper triangular matrix and then solve by back substitution.
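
A NumPy sketch of elimination with partial pivoting plus back substitution (the example system is made up):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))        # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                  # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b))   # [0.8, 1.4]
```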

LU Factorization

Factorization of a matrix $A$ as $A = LU$, where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix. Once $L$ and $U$ are obtained, the system $A\mathbf{x} = \mathbf{b}$ is solved in two steps:

$$L\mathbf{y} = \mathbf{b} \ \text{(forward substitution)}, \qquad U\mathbf{x} = \mathbf{y} \ \text{(back substitution)}$$

Jacobi Method

Iteration to solve $A\mathbf{x} = \mathbf{b}$:

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)} \right)$$

Gauss-Seidel Method

Similar to the Jacobi method, but uses the new approximations as soon as they are computed:

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij}\, x_j^{(k+1)} - \sum_{j > i} a_{ij}\, x_j^{(k)} \right)$$
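
A side-by-side Python sketch of both iterations on a diagonally dominant example, a standard sufficient condition for convergence; the matrix and iteration counts are illustrative:

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Jacobi: every component is updated from the previous iterate."""
    x = x0.astype(float).copy()
    D = np.diag(A)                       # diagonal entries a_ii
    R = A - np.diagflat(D)               # off-diagonal part
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel: new components are used as soon as they are available."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 12.0])
print(jacobi(A, b, np.zeros(2)), gauss_seidel(A, b, np.zeros(2)))
```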

Eigenvalue and Eigenvector Approximation

Power Method

To approximate the dominant eigenvalue of a matrix $A$, iterate:

$$\mathbf{x}^{(k+1)} = \frac{A \mathbf{x}^{(k)}}{\lVert A \mathbf{x}^{(k)} \rVert}$$

where the approximate eigenvalue is the Rayleigh quotient:

$$\lambda^{(k)} = \frac{(\mathbf{x}^{(k)})^{T} A\, \mathbf{x}^{(k)}}{(\mathbf{x}^{(k)})^{T} \mathbf{x}^{(k)}}$$
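A minimal NumPy sketch of the normalized iteration and the Rayleigh quotient; the matrix and iteration count are illustrative:

```python
import numpy as np

def power_method(A, iters=100):
    """Approximate the dominant eigenvalue and eigenvector of A."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)        # normalize at every step
    lam = x @ A @ x / (x @ x)            # Rayleigh quotient
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(lam)   # approx (5 + sqrt(5)) / 2 ≈ 3.618
```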

QR Method

QR decomposition of a matrix, $A = QR$, where $Q$ is orthogonal and $R$ is upper triangular. To approximate eigenvalues, iterate:

$$A_k = Q_k R_k, \qquad A_{k+1} = R_k Q_k$$

The diagonal of $A_k$ converges to the eigenvalues of $A$.

Methods for Solving Ordinary Differential Equations

Euler Method

To solve $y' = f(t, y)$, with initial condition $y(t_0) = y_0$:

$$y_{n+1} = y_n + h\, f(t_n, y_n)$$

where $h$ is the step size.

Fourth-Order Runge-Kutta Method

To solve $y' = f(t, y)$:

$$k_1 = f(t_n, y_n)$$
$$k_2 = f(t_n + \tfrac{h}{2},\ y_n + \tfrac{h}{2} k_1)$$
$$k_3 = f(t_n + \tfrac{h}{2},\ y_n + \tfrac{h}{2} k_2)$$
$$k_4 = f(t_n + h,\ y_n + h k_3)$$
$$y_{n+1} = y_n + \frac{h}{6} (k_1 + 2 k_2 + 2 k_3 + k_4)$$
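
A Python sketch of one step of each scheme, compared on $y' = y$; the test problem and step size are assumptions:

```python
def euler_step(f, t, y, h):
    """One explicit Euler step."""
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: y' = y, y(0) = 1, integrated to t = 1 (exact value e ≈ 2.71828)
f = lambda t, y: y
y_e = y_rk = 1.0
t, h = 0.0, 0.1
for _ in range(10):
    y_e, y_rk = euler_step(f, t, y_e, h), rk4_step(f, t, y_rk, h)
    t += h
print(y_e, y_rk)   # Euler ≈ 2.5937, RK4 ≈ 2.71828
```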

Adams-Bashforth Method

Multi-step method for solving $y' = f(t, y)$ using previous estimates:

$$y_{n+1} = y_n + h \sum_{j=0}^{s-1} b_j\, f(t_{n-j}, y_{n-j})$$

where $b_j$ are coefficients that depend on the number of steps $s$; for example, the two-step scheme is $y_{n+1} = y_n + \frac{h}{2}\left( 3 f_n - f_{n-1} \right)$.

Methods for Solving Partial Differential Equations

Finite Difference Method

Approximation of partial differential equations by replacing the derivatives with finite differences.

For a parabolic-type equation, such as the heat equation:

$$\frac{\partial u}{\partial t} = \alpha\, \frac{\partial^2 u}{\partial x^2}$$

The discretization can be explicit in time and central in space:

$$\frac{u_i^{n+1} - u_i^{n}}{\Delta t} = \alpha\, \frac{u_{i+1}^{n} - 2 u_i^{n} + u_{i-1}^{n}}{(\Delta x)^2}$$
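
A NumPy sketch of this explicit scheme on $[0, 1]$ with zero boundary values; the grid sizes and initial condition are illustrative, and the time step is chosen to respect the stability bound $\Delta t \leq (\Delta x)^2 / (2\alpha)$:

```python
import numpy as np

alpha, nx, nt = 1.0, 21, 200
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # below the stability limit dx^2 / (2*alpha)
r = alpha * dt / dx**2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)             # initial condition, u = 0 at both ends
for _ in range(nt):
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])   # one explicit time step

# The exact solution decays like exp(-pi^2 * alpha * t)
print(u.max(), np.exp(-np.pi**2 * alpha * nt * dt))
```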

Finite Element Method

Used to solve partial differential equations through a variational formulation: the domain is divided into elements, and shape functions are used to approximate the solution:

$$u(x) \approx \sum_{i=1}^{n} u_i\, \phi_i(x)$$

where $\phi_i$ are the shape functions (each a known, simple function, typically a piecewise polynomial) and $u_i$ are the unknown nodal coefficients.

Galerkin Method

Based on the principle that the residual of the differential equation is made orthogonal to a chosen function space:

$$\int_{\Omega} R(u_h)\, v\, d\Omega = 0$$

for every function $v$ in the test space, where $R(u_h)$ is the residual of the approximate solution $u_h$.

Error Analysis

Absolute Error

Defined as the difference between the exact value $x$ and the approximate value $\bar{x}$:

$$E_a = \lvert x - \bar{x} \rvert$$

Relative Error

Ratio of the absolute error to the exact value:

$$E_r = \frac{\lvert x - \bar{x} \rvert}{\lvert x \rvert}$$

Convergence

A method is said to converge if, as the step size $h$ decreases (or the number of iterations grows), the approximate solution approaches the exact solution:

$$\lim_{h \to 0} x_h = x$$

Error Estimation

In numerical methods, the error can be estimated from Taylor expansions, whose truncated terms give the error. For example, for the central difference:

$$f'(x) = \frac{f(x + h) - f(x - h)}{2h} - \frac{h^2}{6}\, f'''(\xi)$$

Stability Analysis

Von Neumann Stability Criterion

A numerical method is considered stable if perturbations in the initial data do not grow exponentially over time. For a finite difference method, evaluate the growth of each Fourier mode:

$$\left| \frac{\hat{u}^{n+1}(\xi)}{\hat{u}^{n}(\xi)} \right| \leq 1$$

where $\hat{u}^{n}$ is the Fourier transform of the solution vector at time step $n$.

Well-Posedness (Hadamard Criterion)

A problem (such as a Cauchy initial value problem) is considered well-posed if:

  1. The solution exists.
  2. The solution is unique.
  3. The solution depends continuously on the initial data.

Numerical Optimization

Gradient Method

To minimize a function $f(\mathbf{x})$:

$$\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha\, \nabla f(\mathbf{x}_k)$$

where $\alpha$ is the learning rate (step size) and $\nabla f$ is the gradient of $f$.
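
A minimal NumPy sketch of this update rule; the objective, learning rate, and iteration count are assumptions:

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.1, iters=200):
    """Minimize via x_{k+1} = x_k - alpha * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - alpha * grad(x)
    return x

# Example: f(x, y) = (x - 1)^2 + 2 * (y + 2)^2, gradient computed by hand
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
print(gradient_descent(grad, [0.0, 0.0]))   # converges to [1, -2]
```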

Newton’s Method

Uses second-derivative information to find a minimum:

$$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}$$

In several variables, $\mathbf{x}_{k+1} = \mathbf{x}_k - H_f(\mathbf{x}_k)^{-1}\, \nabla f(\mathbf{x}_k)$, where $H_f$ is the Hessian.

Linear Programming (Simplex Method)

To maximize (or minimize) a linear function $\mathbf{c}^{T}\mathbf{x}$ subject to linear constraints $A\mathbf{x} \leq \mathbf{b}$, $\mathbf{x} \geq \mathbf{0}$. The algorithm iterates over the vertices of the polyhedron defined by the constraints until it finds the optimum.

Rate of Convergence

If $\{x_n\}$ is a sequence that converges to $x^{*}$, the rate of convergence $\mu$ can be defined as:

$$\lim_{n \to \infty} \frac{\lvert x_{n+1} - x^{*} \rvert}{\lvert x_n - x^{*} \rvert^{p}} = \mu$$

where $p$ is the order of convergence.