Taylor Polynomials
Suppose we have a function $f$ with $n$ derivatives. How could we approximately match it?
Well, one potential way of doing this might be by matching derivatives at a point $a$! If we can capture $f$'s entire state at a point, we may be able to approximate the entire function, or at least approximate nearby points.
We say two functions $f$ and $g$ have contact of order $n$ at $a$ if
$$f^{(k)}(a) = g^{(k)}(a) \quad \text{for all } 0 \le k \le n$$
Example: Contact of Order 2
Let $f(x) = \sin x$, $g(x) = x$. These two functions have a contact of order $2$ at $a = 0$, as
$$f(0) = g(0) = 0, \quad f'(0) = g'(0) = 1, \quad f''(0) = g''(0) = 0,$$
but $f'''(0) = -1 \ne 0 = g'''(0)$.
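We can sanity-check contact order numerically. Below is a sketch using $\sin x$ and $x$ as an illustrative pair, with central finite differences standing in for exact derivatives: the two functions agree through order 2 at $0$, but split at order 3.

```python
import math

def nth_derivative(f, x, n, h=1e-2):
    """Central finite-difference estimate of the n-th derivative of f at x."""
    return sum((-1) ** k * math.comb(n, k) * f(x + (n / 2 - k) * h)
               for k in range(n + 1)) / h ** n

f, g = math.sin, (lambda x: x)
for n in range(4):
    # Orders 0, 1, 2 match; order 3 gives -1 for sin but 0 for x.
    print(n, round(nth_derivative(f, 0, n), 4), round(nth_derivative(g, 0, n), 4))
```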
With Taylor Polynomials, we are trying to give $f$ and a polynomial $P_n$ the highest order of contact we can! This has a ton of practical applications in the real world.
Notation: Kronecker Delta
For convenience, we will define the notation
$$\delta_{jk} = \begin{cases} 1 & j = k \\ 0 & j \ne k \end{cases}$$
Theorem: Taylor Polynomial
Let $I$ be an open interval containing $a$, and fix $n \in \mathbb{N}$.
Then, if $f : I \to \mathbb{R}$ is $n$-times differentiable, there exists a unique Taylor Polynomial $P_n$ of degree at most $n$ such that $f$ and $P_n$ have contact of order $n$ at $a$.
Proof
Start with the guess that the polynomial exists, with the form
$$P_n(x) = \sum_{k=0}^{n} c_k (x - a)^k$$
We show that we can choose our $c_k$'s to form such a polynomial. Let's first force
$$P_n^{(j)}(a) = f^{(j)}(a) \quad \text{for all } 0 \le j \le n$$
So, using the fact that $\frac{d^j}{dx^j} (x-a)^k \big|_{x=a} = k! \, \delta_{jk}$ (verified below),
$$P_n^{(j)}(a) = \sum_{k=0}^{n} c_k \cdot k! \, \delta_{jk} = j! \, c_j$$
Giving us $j! \, c_j = f^{(j)}(a)$, so
$$c_j = \frac{f^{(j)}(a)}{j!} \implies P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k$$
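This construction is easy to sketch in code: feed in the derivative values at $a$, divide by factorials, and sum. As an illustration, take $f(x) = e^x$ at $a = 0$, whose derivatives are all $1$.

```python
import math

def taylor_poly(derivs_at_a, a):
    """Build P_n(x) = sum of f^(k)(a)/k! * (x-a)^k from derivative values at a."""
    def P(x):
        return sum(d / math.factorial(k) * (x - a) ** k
                   for k, d in enumerate(derivs_at_a))
    return P

# Every derivative of e^x at 0 equals e^0 = 1.
P5 = taylor_poly([1.0] * 6, a=0.0)   # degree-5 Taylor polynomial of exp
print(P5(0.5), math.exp(0.5))        # already very close near a = 0
```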
We can show uniqueness by assuming two such polynomials and showing their coefficients are the same, by evaluating their derivatives at $a$ (which drops every other term to 0).
We verify the intermediate step below. Suppose we have
$$g_k(x) = (x - a)^k$$
Then, for the various values of $j$, we have
$$g_k^{(j)}(a) = \begin{cases} 0 & j < k \quad \text{(a factor of } (x - a) \text{ survives and vanishes at } a\text{)} \\ k! & j = k \\ 0 & j > k \quad \text{(the polynomial has been differentiated to 0)} \end{cases}$$
So,
$$\frac{d^j}{dx^j} (x - a)^k \Big|_{x = a} = k! \, \delta_{jk}$$
We can generalize this for all $j, k$!
What if we extended this for $n \to \infty$? That is, given $f \in C^\infty$, is it true that
$$f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} (x - a)^k?$$
This is possible, but only in some cases. This is where the remainder theorem comes in!
For any ($n$-times differentiable) function $f$, we can write it as
$$f(x) = P_n(x) + R_n(x)$$
Where $R_n(x) = f(x) - P_n(x)$ is its remainder term. The following theorem characterizes this remainder term for us.
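Before stating the theorem, we can watch the remainder shrink numerically. A sketch, using $\sin$ at $a = 0$ as the example function (its derivatives at 0 cycle through $0, 1, 0, -1, \dots$):

```python
import math

def remainder(f, derivs_at_a, a, x):
    """R_n(x) = f(x) - P_n(x), where P_n is built from derivative values at a."""
    Pn = sum(d / math.factorial(k) * (x - a) ** k
             for k, d in enumerate(derivs_at_a))
    return f(x) - Pn

# Derivatives of sin at 0: 0, 1, 0, -1, 0, 1, 0, -1, ...
sin_derivs = [0, 1, 0, -1, 0, 1, 0, -1]
for n in (1, 3, 5, 7):
    # |R_n(1)| drops rapidly as n grows.
    print(n, remainder(math.sin, sin_derivs[:n + 1], 0, 1.0))
```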
Theorem: Lagrange Remainder Theorem
Fix $a \in I$, and let $f : I \to \mathbb{R}$. Let $f$ be $(n+1)$-times differentiable. Then, $\forall x \in I$, there exists $c$ between $a$ and $x$ such that
$$R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!} (x - a)^{n+1}$$
Note that $c$ depends on $x$ and $n$. Thus, changing $x$ or $n$ may give us a different $c$.
Proof
Since $f$ and $P_n$ have contact of order $n$ at $a$, the remainder $R_n = f - P_n$ satisfies
$$R_n(a) = R_n'(a) = \dots = R_n^{(n)}(a) = 0$$
By repeated application of the (Cauchy) Mean Value Theorem, we have that for any $x \in I$, there exists $c$ between $a$ and $x$ such that
$$\frac{R_n(x)}{(x - a)^{n+1}} = \frac{R_n^{(n+1)}(c)}{(n+1)!} = \frac{f^{(n+1)}(c)}{(n+1)!}$$
Because $P_n^{(n+1)} \equiv 0$ ($P_n$ has degree at most $n$).
Then, for $f$ to equal its Taylor series, we need this remainder to drop to 0 as $n \to \infty$!
When does this remainder term converge to 0?
Notice that in the denominator, we have a factorial $(n+1)!$, whereas in the numerator we have a power $(x - a)^{n+1}$.
Theorem: Factorials vs Powers
For any fixed $x \in \mathbb{R}$,
$$\lim_{n \to \infty} \frac{x^n}{n!} = 0$$
Proof
Using the Ratio Test on the series $\sum_{n=0}^{\infty} \frac{x^n}{n!}$, our limit of consecutive terms is
$$\lim_{n \to \infty} \left| \frac{x^{n+1} / (n+1)!}{x^n / n!} \right| = \lim_{n \to \infty} \frac{|x|}{n + 1} = 0 < 1$$
In which case our series converges, forcing its terms $\frac{x^n}{n!}$ to converge to 0.
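A quick numerical illustration of this theorem: even for a large base like $x = 10$, the terms $x^n / n!$ grow at first but then collapse once $n$ overtakes $x$.

```python
import math

# x^n / n! may grow at first, but the factorial eventually dominates any power.
x = 10.0
terms = [x ** n / math.factorial(n) for n in range(0, 60, 10)]
print(terms)  # rises, then collapses toward 0
```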
As seen above, factorials grow faster than powers - so for convergence, we need our numerator term $f^{(n+1)}(c)$ to grow like a power or slower! So, if we assume our function's derivatives grow no faster than a power,
$$|f^{(n)}(x)| \le C M^n \quad \forall x \in I, \; \forall n$$
For some constants $C, M > 0$, then we obtain convergence of our remainder term.
This is given in the below theorem.
Theorem: Convergence of Taylor Polynomial
Let $f \in C^\infty(I)$, and let $a \in I$. Suppose there exist constants $C, M > 0$ such that
$$|f^{(n)}(x)| \le C M^n \quad \forall x \in I, \; \forall n \ge 0$$
Then, $f$ is equal to its Taylor series on $I$:
$$f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} (x - a)^k$$
Note that this condition is sufficient, but not necessary, for convergence.
Example: Convergence of Taylor Polynomial
Consider $f(x) = \sin x$. Convergence occurs because every derivative of $\sin$ is bounded between $-1$ and $1$. In other words,
$$|f^{(n)}(x)| \le 1 = C M^n$$
Where $C = 1$, $M = 1$.
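With $C = M = 1$, the Lagrange remainder is bounded by $\frac{|x|^{n+1}}{(n+1)!}$. Here is a sketch checking that bound numerically, assuming the example function is $\sin$ expanded about $a = 0$:

```python
import math

def sin_P(n, x):
    """Degree-n Taylor polynomial of sin about a = 0 (only odd powers appear)."""
    return sum((-1) ** j * x ** (2 * j + 1) / math.factorial(2 * j + 1)
               for j in range(n + 1) if 2 * j + 1 <= n)

# With C = M = 1, the remainder satisfies |R_n(x)| <= |x|^(n+1) / (n+1)!.
for x in (0.5, 2.0):
    err = abs(math.sin(x) - sin_P(7, x))
    bound = abs(x) ** 8 / math.factorial(8)
    print(x, err, bound, err <= bound)
```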
Example: Convergence of Taylor Polynomial (2)
Let's show convergence of $f(x) = e^x$ (about $a = 0$) directly through the remainder theorem. Our Lagrange Remainder is given as
$$R_n(x) = \frac{e^c}{(n+1)!} x^{n+1}$$
$c$ depends on both $x$ and $n$! So, we need to restrict our $x$ to an interval $[-R, R]$ to be able to restrict $c$, and claim convergence:
$$|R_n(x)| \le \frac{e^R R^{n+1}}{(n+1)!} \to 0 \quad \text{as } n \to \infty$$
As this works for any $R > 0$, we can expand $R$ to infinity to obtain convergence along all of $\mathbb{R}$!
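The uniform bound above can be tabulated directly. A sketch, assuming the expansion of $e^x$ about $0$ restricted to $[-R, R]$ with $R = 5$:

```python
import math

# Uniform remainder bound for f = exp on [-R, R] about a = 0:
# |R_n(x)| <= e^R * R^(n+1) / (n+1)!
R = 5.0
bounds = [math.exp(R) * R ** (n + 1) / math.factorial(n + 1)
          for n in (5, 10, 20, 40)]
print(bounds)  # tends to 0 as n grows, for any fixed R
```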
Example: Convergence of Taylor Polynomial (3)
Does the Taylor Polynomial for $f(x) = \frac{1}{1 - x}$ (about $a = 0$) converge?
Yes, but only for $|x| < 1$. Otherwise, our term $|x|^{n+1}$ would be 1 or greater, making our remainder term fail to converge!
We attempt to show convergence using the Lagrange Remainder Theorem. Since $f^{(n+1)}(c) = \frac{(n+1)!}{(1 - c)^{n+2}}$, by the Lagrange Remainder Theorem we obtain remainder
$$R_n(x) = \frac{x^{n+1}}{(1 - c)^{n+2}}$$
- Case 1: Suppose $x \in (-1, 0]$. Then, we have $c \in (x, 0]$, so $1 - c \ge 1$, and we furthermore know that $|x| < 1$, so our remainder converges to 0 as $n \to \infty$! In other words, our Taylor series converges to $f$ on $(-1, 0]$.
- Case 2: Suppose $x \in (0, 1)$. Then, we have $c \in [0, x)$, so in the worst case $1 - c$ shrinks to $1 - x$, and we have remainder bound
$$|R_n(x)| \le \frac{x^{n+1}}{(1 - x)^{n+2}}$$
Which fails to converge to 0 for $x \in (1/2, 1)$, since there $\frac{x}{1 - x} > 1$!
Notice how Lagrange fails in case 2, but we still have convergence on $(0, 1)$ (the series is geometric: $\sum_{k=0}^{\infty} x^k = \frac{1}{1 - x}$)! Thus, Lagrange is a sufficient condition to prove convergence, but is not necessary for convergence.
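We can see this gap numerically. A sketch, assuming the function is $f(x) = \frac{1}{1-x}$ about $0$: the true geometric-series remainder is $\frac{x^{n+1}}{1-x}$, while the worst-case Lagrange estimate places $c$ near $x$.

```python
import math

# True remainder of the geometric series vs the worst-case Lagrange bound.
x, n = 0.9, 50
actual = x ** (n + 1) / (1 - x)
lagrange_worst = x ** (n + 1) / (1 - x) ** (n + 2)
print(actual)          # small: the series really does converge at x = 0.9
print(lagrange_worst)  # astronomically large: the Lagrange estimate is useless here
```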
Analytic Functions
We say $f : I \to \mathbb{R}$ is (real-)analytic if $\forall a \in I$, there exist $\delta > 0$ and coefficients $(c_k)$ such that
$$f(x) = \sum_{k=0}^{\infty} c_k (x - a)^k$$
For $|x - a| < \delta$. In other words, at every point the function has a localized power series expansion!
Note that if $f$ is analytic, then it is smooth ($C^\infty$). However, the converse of this is false! See the below example.
Example: Analytic Function Counterexample
Consider the bump function (mollifier)
$$\eta(x) = \begin{cases} C e^{-\frac{1}{1 - x^2}} & |x| < 1 \\ 0 & |x| \ge 1 \end{cases}$$
Where $C$ is chosen to make the integral $\int_{\mathbb{R}} \eta \, dx$ equal to 1.
This is a smooth function, but at $x = \pm 1$ every derivative vanishes, so its Taylor series there is identically 0 and cannot match $\eta$ on any neighborhood - it is not analytic!
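The same phenomenon shows up in the classic one-sided cousin of the bump function, $g(x) = e^{-1/x^2}$ (extended by $0$ at the origin). A numerical sketch, using finite differences to estimate derivatives at $0$:

```python
import math

def g(x):
    """exp(-1/x^2) extended by 0: the classic smooth-but-not-analytic function."""
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

def fd_derivative(f, x, n, h=0.05):
    """Central finite-difference estimate of the n-th derivative of f at x."""
    return sum((-1) ** k * math.comb(n, k) * f(x + (n / 2 - k) * h)
               for k in range(n + 1)) / h ** n

# Every derivative at 0 is (numerically) 0, so the Taylor series at 0 is the
# zero function -- yet g(1) = 1/e > 0. Smooth everywhere, not analytic at 0.
print([fd_derivative(g, 0, n) for n in range(5)], g(1.0))
```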
Theorem: Weierstrass Approximation Theorem
Let $f \in C([a, b])$, with $[a, b]$ compact. Then, $\forall \varepsilon > 0$, there exists a polynomial $p$ of degree $N$ such that $\forall x \in [a, b]$,
$$|f(x) - p(x)| < \varepsilon$$
Note that $N$ can be a very high degree!
This theorem is very strong, but needs all of the clauses! See the below counterexample.
Example
Suppose the domain is not compact. Then, we can form a counterexample to this theorem with $f(x) = e^{-x^2}$ on $\mathbb{R}$!
Suppose $|f(x) - p(x)| < \varepsilon$ for all $x \in \mathbb{R}$. Then, we get a contradiction, as any non-constant polynomial will eventually go to $+\infty$ or $-\infty$, so it cannot stay within $\varepsilon$ of a bounded function!
In other words,
$$|p(x)| < e^{-x^2} + \varepsilon \to \varepsilon \quad \text{as } x \to \pm\infty$$
So as $x \to \pm \infty$, this forces $p$ to approach 0 (within $\varepsilon$), but this is not possible for a non-zero polynomial: a non-constant one is unbounded, and a constant near 0 misses $f(0) = 1$ for small $\varepsilon$!
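A numerical sketch of why polynomials fail on all of $\mathbb{R}$, using $e^{-x^2}$ as the (assumed) bounded target: even a polynomial that approximates it well near the origin blows up far away.

```python
import math

# Degree-6 Taylor polynomial of e^(-x^2) about 0: decent near the origin ...
def p(x):
    return 1 - x ** 2 + x ** 4 / 2 - x ** 6 / 6

f = lambda x: math.exp(-x ** 2)
print(abs(f(1) - p(1)))    # small error near the origin
print(abs(f(10) - p(10)))  # enormous error far away: p is unbounded, f is not
```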