This code fits a curve to a given set of points using second-order polynomial interpolation; you only need to supply the X and Y points to be fitted.
Outputs:
- A chart of the given points and the resulting fitted polynomial curve
- The fitted Y values for any supplied X points, vector or scalar, e.g.:
  Interpolated Value for Xi = 20.0062
Inputs:
- X points
- Y points
- Xi, the value(s) at which to interpolate
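The workflow described above (fit a second-order polynomial, then evaluate it at Xi) can be sketched in Python with NumPy; the data values here are assumptions for illustration, and `np.polyfit`/`np.polyval` stand in for the submission's own solver:

```python
import numpy as np

# Sample data points to fit (illustrative values, not from the submission).
X = np.array([1.0, 2.0, 3.0])
Y = np.array([2.0, 5.0, 10.0])  # samples of y = x^2 + 1

# Fit a second-order polynomial p(x) = a2*x^2 + a1*x + a0 through the points.
coeffs = np.polyfit(X, Y, 2)

# Evaluate the fitted polynomial at any scalar or vector of query points Xi.
Xi = 2.5
Yi = np.polyval(coeffs, Xi)
print("Interpolated Value for Xi =", Yi)
```

With three points and degree 2 the system is square, so the "fit" passes exactly through the data.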
About The Method:
Polynomial interpolation is the interpolation of a given data set by the polynomial of lowest possible degree that passes through the points of the data set.
Polynomial interpolation is also essential to perform sub-quadratic multiplication and squaring such as Karatsuba multiplication and Toom–Cook multiplication, where an interpolation through points on the polynomial that defines the product yields the product itself. For example, given a = f(x) = a0 + a1x + … and b = g(x) = b0 + b1x + …, the product ab is equivalent to W(x) = f(x)g(x). Substituting small values of x into f(x) and g(x) yields points on the curve W(x). Interpolation based on those points will yield the coefficients of W(x) and subsequently the product ab. In the case of Karatsuba multiplication this technique is substantially faster than quadratic multiplication, even for modest-sized inputs. This is especially true when implemented in parallel hardware.
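A minimal sketch of this evaluation–interpolation multiplication in plain Python (the coefficients and sample points are illustrative choices, not part of any particular algorithm's fixed point set):

```python
def poly_eval(coeffs, x):
    # Evaluate a polynomial with coefficients [c0, c1, ...] (lowest degree first).
    return sum(c * x**k for k, c in enumerate(coeffs))

# f(x) = 3 + 2x and g(x) = 1 + 4x (illustrative coefficients).
f = [3, 2]
g = [1, 4]

# The product W(x) = f(x)g(x) has degree 2, so three sample points determine it.
xs = [0, 1, 2]
ws = [poly_eval(f, x) * poly_eval(g, x) for x in xs]  # points on W(x)

# Interpolate through (0, W(0)), (1, W(1)), (2, W(2)) to recover W's
# coefficients; for these fixed nodes the linear system solves in closed form.
w0, w1, w2 = ws
a2 = (w2 - 2 * w1 + w0) / 2
a1 = w1 - w0 - a2
a0 = w0
W = [a0, a1, a2]  # coefficients of (3 + 2x)(1 + 4x) = 3 + 14x + 8x^2
```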
Given a set of n + 1 data points (xi, yi) where no two xi are the same, one is looking for a polynomial p of degree at most n with the property

p(xi) = yi for all i in {0, 1, …, n}.
The unisolvence theorem states that such a polynomial p exists and is unique; this can be proved using the Vandermonde matrix, as described below.
The theorem states that for n + 1 interpolation nodes (xi), polynomial interpolation defines a linear bijection

Ln : K^(n+1) → Πn, (y0, …, yn) ↦ p,
where Πn is the vector space of polynomials (defined on any interval containing the nodes) of degree at most n.
Constructing the interpolation polynomial:
Suppose that the interpolation polynomial is in the form

p(x) = an x^n + an−1 x^(n−1) + ⋯ + a2 x^2 + a1 x + a0.    (1)
The statement that p interpolates the data points means that

p(xi) = yi for all i in {0, 1, …, n}.
If we substitute equation (1) in here, we get a system of linear equations in the coefficients ak. The system in matrix-vector form reads:

| x0^n  x0^(n−1)  ⋯  x0  1 |  | an   |     | y0 |
| x1^n  x1^(n−1)  ⋯  x1  1 |  | an−1 |     | y1 |
|  ⋮       ⋮          ⋮    |  |  ⋮   |  =  |  ⋮ |
| xn^n  xn^(n−1)  ⋯  xn  1 |  | a0   |     | yn |
We have to solve this system for ak to construct the interpolant p(x). The matrix on the left is commonly referred to as a Vandermonde matrix.
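As a sketch of this construction in Python/NumPy (the data values are assumptions for illustration; `np.vander` builds the matrix with columns in increasing powers, so the coefficient vector comes out as (a0, a1, a2)):

```python
import numpy as np

# Interpolation nodes and values (illustrative data: samples of y = 1 + x + x^2).
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 7.0])

# Vandermonde matrix with columns 1, x, x^2.
V = np.vander(x, increasing=True)

# Solve V a = y for the monomial coefficients a = (a0, a1, a2).
a = np.linalg.solve(V, y)
```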
The condition number of the Vandermonde matrix may be large, causing large errors when computing the coefficients ai if the system of equations is solved using Gaussian elimination.
Several authors have therefore proposed algorithms which exploit the structure of the Vandermonde matrix to compute numerically stable solutions in O(n^2) operations instead of the O(n^3) required by Gaussian elimination. These methods rely on constructing first a Newton interpolation of the polynomial and then converting it to the monomial form above.
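The Newton-interpolation first step can be sketched in plain Python: divided differences give the Newton-form coefficients in O(n^2) operations, and the resulting form evaluates by Horner-like nesting (the data here are illustrative; the monomial-conversion step is omitted):

```python
def divided_differences(xs, ys):
    # Newton divided-difference coefficients, computed in place in O(n^2).
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, t):
    # Evaluate c0 + c1(t - x0) + c2(t - x0)(t - x1) + ... by nested multiplication.
    result = coeffs[-1]
    for c, x in zip(reversed(coeffs[:-1]), reversed(xs[:-1])):
        result = result * (t - x) + c
    return result

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]  # samples of 1 + x + x^2 (illustrative)
c = divided_differences(xs, ys)
```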
Alternatively, we may write down the polynomial immediately in terms of Lagrange polynomials:

p(x) = Σ (j = 0 to n) yj ℓj(x), where ℓj(x) = Π (k ≠ j) (x − xk) / (xj − xk).
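As a sketch of the Lagrange approach in plain Python (illustrative data; each basis polynomial is 1 at its own node and 0 at the others):

```python
def lagrange_interpolate(xs, ys, t):
    # p(t) = sum_j ys[j] * l_j(t), with l_j the j-th Lagrange basis polynomial.
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                basis *= (t - xk) / (xj - xk)
        total += yj * basis
    return total

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]  # samples of 1 + x + x^2 (illustrative)
```

No linear system is solved here; the price is O(n^2) work per evaluation point.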
For matrix arguments, this formula is called Sylvester’s formula and the matrix-valued Lagrange polynomials are the Frobenius covariants.