2025-10-13 Global and Piecewise Interpolation

  • Runge phenomenon as ill-conditioning

  • Stability and Chebyshev polynomials

  • Piecewise methods

See the FNC (Fundamentals of Numerical Computation) book for further reading.

[1]:
using LinearAlgebra
using Plots
using Polynomials
default(lw=4, ms=5, legendfontsize=12, xtickfontsize=12, ytickfontsize=12)

# Here's our Vandermonde matrix again
function vander(x, k=nothing)
    if isnothing(k)
        k = length(x)
    end
    m = length(x)
    V = ones(m, k)
    for j in 2:k
        V[:, j] = V[:, j-1] .* x
    end
    V
end

# And Vandermonde with Newton polynomials
function vander_newton(x, abscissa=nothing)
    if isnothing(abscissa)
        abscissa = x
    end
    n = length(abscissa)
    A = ones(length(x), n)
    for i in 2:n
        A[:,i] = A[:,i-1] .* (x .- abscissa[i-1])
    end
    A
end

# And a utility for cosine-spaced points (clustered near the interval endpoints)
CosRange(a, b, n) = (a + b)/2 .+ (b - a)/2 * cos.(LinRange(-pi, 0, n))
[1]:
CosRange (generic function with 1 method)
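As a quick sanity check (an illustration added here, with the definition repeated so the snippet runs standalone), cosine-spaced points include both endpoints and cluster toward them:

```julia
# Cosine-spaced ("Chebyshev-like") points on [a, b]; definition
# repeated from above so this snippet is self-contained.
CosRange(a, b, n) = (a + b)/2 .+ (b - a)/2 * cos.(LinRange(-pi, 0, n))

pts = CosRange(-1, 1, 5)
# Endpoints are hit exactly (up to roundoff) and the spacing is
# tighter near the ends than in the middle.
println(pts)
```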

A well-conditioned polynomial basis

Recall the Legendre basis from the assignment.

[2]:
function vander_legendre(x, k=nothing)
    if isnothing(k)
        k = length(x) # Square by default
    end
    m = length(x)
    Q = ones(m, k)
    Q[:, 2] = x
    for n in 1:k-2
        Q[:, n+2] = ((2*n + 1) * x .* Q[:, n+1] - n * Q[:, n]) / (n + 1)
    end
    Q
end
[2]:
vander_legendre (generic function with 2 methods)
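A quick check (illustrative, with the definition repeated so the snippet runs standalone): column \(n+1\) of `vander_legendre` should be the Legendre polynomial \(P_n\), e.g. \(P_2(x) = (3x^2 - 1)/2\), and every \(P_n\) satisfies \(P_n(1) = 1\).

```julia
# Definition repeated from above so this check is self-contained.
function vander_legendre(x, k=nothing)
    if isnothing(k)
        k = length(x)
    end
    m = length(x)
    Q = ones(m, k)
    Q[:, 2] = x
    for n in 1:k-2
        Q[:, n+2] = ((2*n + 1) * x .* Q[:, n+1] - n * Q[:, n]) / (n + 1)
    end
    Q
end

x = [-1.0, -0.5, 0.0, 0.5, 1.0]
Q = vander_legendre(x, 4)
# Third column should match P_2(x) = (3x^2 - 1)/2 to roundoff.
P2 = (3 .* x .^ 2 .- 1) ./ 2
maximum(abs.(Q[:, 3] .- P2))
```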
[3]:
# Let's look at the condition number as we add columns
vcond(mat, points, nmax) = [cond(mat(points(-1, 1, n))) for n in 2:nmax]

# Plot condition number growth as we add columns
plot(range(2, 20), [vcond(vander_legendre, CosRange, 20)], yscale=:log10, xlabel="# of columns", xticks=range(2, 20, step=2), ylabel="cond(A)", label="\$cond(A)\$")
[3]:

Different interpolation points

The interpolation points we pick affect the conditioning of the interpolation.

[4]:
a = 20
plot(range(2, a), [vcond(vander, LinRange, a)], yscale=:log10, legend=:topleft, xticks=range(2, a, step=2), xlabel="# of columns", ylabel="cond(A)", label="Vander, LinRange")
plot!(range(2, a), [vcond(vander, CosRange, a)], yscale=:log10, label="Vander, CosRange")
plot!(range(2, a), [vcond(vander_legendre, LinRange, a)], yscale=:log10, label="Legendre, LinRange")
plot!(range(2, a), [vcond(vander_legendre, CosRange, a)], yscale=:log10, label="Legendre, CosRange")
[4]:

Ill-conditioning impact

We can clearly see that the choice of basis polynomials and point locations changes the condition number, but why does this matter? Look at the following interpolating polynomials.

[5]:
runge(x) = 1 / (1 + 10*x^2) # What's this "Runge"?

# This is a good outcome
x = CosRange(-1, 1, 20)
y = runge.(x)
plot(runge, xlims=(-1, 1), label="true")
scatter!(x, y, label="data")
[5]:
[6]:
# Let's solve for a fit
ourvander = vander_legendre # This is a good pick
p = ourvander(x) \ y;

# And see how good it is
scatter(x, y, label="data")
s = LinRange(-1, 1, 100)
plot!(s, runge.(s), label="true")
plot!(s, ourvander(s, length(x)) * p, label="fit")
[6]:

The singular values of the matrix mapping data values to the evaluated interpolant are all of order 1, so this interpolation is well-conditioned.

[7]:
svdvals(ourvander(s, length(x)) / ourvander(x))
[7]:
20-element Vector{Float64}:
 2.8563807370813343
 2.8414668608815554
 2.8202363866684848
 2.782971972925489
 2.7474280450222004
 2.684651346430287
 2.636781366412177
 2.5450839711629127
 2.486134057068077
 2.3619485533713127
 2.2916184164905107
 2.1320708818800687
 2.0460305389351454
 1.8547347714535933
 1.7347507488398972
 1.5501994693431675
 1.3481998183116588
 1.2519073527674074
 0.946254637146537
 0.9419175178348704

Let’s compare that to the ill-conditioned set of points and Vandermonde matrix.

[8]:
# This is a bad outcome
x = LinRange(-1, 1, 20)
y = runge.(x)
plot(runge, xlims=(-1, 1), label="true")
scatter!(x, y, label="data")
[8]:
[9]:
# Let's solve for a fit
ourvander = vander
p = ourvander(x) \ y;

# And see how good it is
scatter(x, y, label="data")
s = LinRange(-1, 1, 100)
plot!(s, runge.(s), label="true")
plot!(s, ourvander(s, length(x)) * p, label="fit")
[9]:

The singular values here show the ill-conditioning: the largest is over 4000, so small perturbations of the data can be amplified by that factor in the interpolant.

[10]:
svdvals(ourvander(s, length(x)) / ourvander(x))
[10]:
20-element Vector{Float64}:
 4141.821777889664
  756.1886115696186
   11.201394442103451
    5.200366811378892
    2.3263899157608905
    2.288653157584787
    2.2826626548099376
    2.2826582312605956
    2.282657730773874
    2.2826577307583387
    2.2826577297322554
    2.2826577141928723
    2.282593863235459
    2.282352200318817
    2.2484549740292827
    2.2449230243660097
    1.9014541632683717
    1.7794033223452121
    1.0093167855538236
    1.0064482710909068

Chebyshev interpolating polynomials

Define

\[T_n \left( x \right) = \cos \left( n \arccos \left( x \right) \right)\]

This is a polynomial, but it’s not obvious why or how this can be true.

Recall that

\[\cos \left( a + b \right) = \cos \left( a \right) \cos \left( b \right) - \sin \left( a \right) \sin \left( b \right)\]

Let \(y = \arccos \left( x \right)\) and check

\[\begin{split} T_{n+1} \left( x \right) &= \cos \left( \left( n+1 \right) y \right) = \cos \left( ny \right) \cos \left( y \right) - \sin \left( ny \right) \sin \left( y \right) \\ T_{n-1} \left( x \right) &= \cos \left( \left( n-1 \right) y \right) = \cos \left( ny \right) \cos \left( y \right) + \sin \left( ny \right) \sin \left( y \right) \end{split}\]

Adding these together produces

\[T_{n+1} \left( x \right) + T_{n-1} \left( x \right) = 2 \cos \left( ny \right) \cos \left( y \right) = 2 x \cos \left( ny \right) = 2 x T_n \left( x \right)\]

which yields a recurrence relationship.

Chebyshev interpolating polynomials

\[\begin{split} T_0 \left( x \right) &= 1 \\ T_1 \left( x \right) &= x \\ T_{n+1} \left( x \right) &= 2 x T_n \left( x \right) - T_{n-1} \left( x \right) \end{split}\]

Naturally, we need a Vandermonde matrix

[11]:
function vander_chebyshev(x, n=nothing)
    if isnothing(n)
        n = length(x) # Square by default
    end
    m = length(x)
    T = ones(m, n)
    if n > 1
        T[:, 2] = x
    end
    for k in 3:n
        T[:, k] = 2 * x .* T[:,k-1] - T[:, k-2]
    end
    T
end
[11]:
vander_chebyshev (generic function with 2 methods)
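An illustrative check (definition repeated so the snippet runs standalone): the recurrence-built columns should match the trigonometric definition \(T_n(x) = \cos(n \arccos(x))\) on \([-1, 1]\).

```julia
# Definition repeated from above so this check is self-contained.
function vander_chebyshev(x, n=nothing)
    if isnothing(n)
        n = length(x)
    end
    m = length(x)
    T = ones(m, n)
    if n > 1
        T[:, 2] = x
    end
    for k in 3:n
        T[:, k] = 2 * x .* T[:, k-1] - T[:, k-2]
    end
    T
end

x = LinRange(-1, 1, 9)
T = vander_chebyshev(x, 6)
# Column k holds T_{k-1}, so compare against cos((k-1) * acos(x)).
Ttrig = [cos((k - 1) * acos(t)) for t in x, k in 1:6]
err = maximum(abs.(T .- Ttrig))
```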
[12]:
x = LinRange(-1, 1, 50)
plot(x, vander_chebyshev(x, 5), title="Chebyshev polynomials", label=["\$T_0\$" "\$T_1\$" "\$T_2\$" "\$T_3\$" "\$T_4\$"])
[12]:

Well-conditioned

The Chebyshev basis is also well-conditioned, particularly when evaluated at cosine-spaced points.

[13]:
a = 20
plot(range(2, a), [vcond(vander, LinRange, a)], yscale=:log10, legend=:topleft, xticks=range(2, a, step=2), xlabel="# of columns", ylabel="cond(A)", label="Vander, LinRange")
plot!(range(2, a), [vcond(vander, CosRange, a)], yscale=:log10, label="Vander, CosRange")
plot!(range(2, a), [vcond(vander_legendre, LinRange, a)], yscale=:log10, label="Legendre, LinRange")
plot!(range(2, a), [vcond(vander_legendre, CosRange, a)], yscale=:log10, label="Legendre, CosRange")
plot!(range(2, a), [vcond(vander_chebyshev, LinRange, a)], yscale=:log10, label="Chebyshev, LinRange")
plot!(range(2, a), [vcond(vander_chebyshev, CosRange, a)], yscale=:log10, label="Chebyshev, CosRange")
[13]:

Piecewise polynomials

Let’s take a different approach. We’ll re-examine the Lagrange polynomials as we vary the points.

[14]:
x = LinRange(-1, 1, 11) # What about CosRange?
s = LinRange(-1, 1, 100)
A = vander_chebyshev(s, length(x)) /
    vander_chebyshev(x)
plot(s, A[:,1:5], size=(1000, 600), label=["\$l_0\$" "\$l_1\$" "\$l_2\$" "\$l_3\$" "\$l_4\$"])
scatter!(x[1:5], ones(5), color=:grey, label="values")
scatter!(x, zero(x), color=:black, label="roots")
[14]:

Let’s look at a specific example that seems like it should be easy to fit but shows strange behavior.

[15]:
# Fit the function |x|
y = abs.(x)
plot(s, [abs.(s) A * y], label=["true" "fit"])
[15]:

Switching to a better set of points or basis doesn’t fully fix the issue.

[16]:
x = CosRange(-1, 1, 11)
s = LinRange(-1, 1, 100)
A = vander_chebyshev(s, length(x)) /
    vander_chebyshev(x)

# Fit the function |x|
y = abs.(x)
plot(s, [abs.(s) A * y], label=["true" "fit"])
[16]:

The high-order basis functions oscillate between the data points, so the fit behaves differently between the points than we would like. The absolute value function looks like two linear functions glued together. Let's try a piecewise approach.

New interpolation matrix

[17]:
function interp_nearest(x, s)
    A = zeros(length(s), length(x))
    for (i, t) in enumerate(s)
        loc = nothing
        dist = Inf
        for (j, u) in enumerate(x)
            if abs(t - u) < dist
                loc = j
                dist = abs(t - u)
            end
        end
        A[i, loc] = 1
    end
    A
end
interp_nearest(LinRange(-1, 1, 3), LinRange(0, 1, 4))
[17]:
4×3 Matrix{Float64}:
 0.0  1.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0
 0.0  0.0  1.0
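An illustrative property check (definition repeated so this runs standalone): each row of the nearest-neighbor interpolation matrix has exactly one nonzero entry, a 1 selecting the closest data point, so every row sums to 1.

```julia
# Definition repeated from above so this check is self-contained.
function interp_nearest(x, s)
    A = zeros(length(s), length(x))
    for (i, t) in enumerate(s)
        loc, dist = 0, Inf
        for (j, u) in enumerate(x)
            if abs(t - u) < dist
                loc, dist = j, abs(t - u)
            end
        end
        A[i, loc] = 1
    end
    A
end

A = interp_nearest(LinRange(-1, 1, 3), LinRange(0, 1, 4))
rowsums = sum(A, dims=2)
```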
[18]:
x = LinRange(-1, 1, 11)
A = interp_nearest(x, s)
plot(s, A[:, 1:5], label=["\$c_0\$" "\$c_1\$" "\$c_2\$" "\$c_3\$" "\$c_4\$"])
scatter!(x, ones(length(x)), label="values")
[18]:

The condition number seems reasonable.

[19]:
x = LinRange(-1, 1, 40)
A = interp_nearest(x, s)
@show cond(A)
println("Singular values:")
display(svdvals(A)[1:5])
display(svdvals(A)[end-5:end]);
cond(A) = 1.224744871391589
Singular values:
5-element Vector{Float64}:
 1.7320508075688772
 1.7320508075688772
 1.7320508075688772
 1.7320508075688772
 1.7320508075688772
6-element Vector{Float64}:
 1.4142135623730951
 1.4142135623730951
 1.4142135623730951
 1.4142135623730951
 1.4142135623730951
 1.4142135623730951
[20]:
plot(s, A * runge.(x), label="fit")
plot!(s, runge.(s), label="true")
scatter!(x, runge.(x), label="data")
[20]:

This looks pretty good, but how accurate is it really?

[21]:
function interp_chebyshev(x, xx)
    vander_chebyshev(xx, length(x)) / vander_chebyshev(x)
end

function interp_monomial(x, xx)
    vander(xx, length(x)) / vander(x)
end

"""Compute the max-norm interpolation error for function `test`
using interpolation scheme `ieval` from points `x` to points `xx`."""
function interp_error(ieval, x, xx, test)
    A = ieval(x, xx)
    y = test.(x)
    yy = test.(xx)
    norm(A * y - yy, Inf)
end

"""Plot convergence rates for each interpolation scheme in `ievals`
applied to a set of test functions, for each point distribution in `ptspaces`."""
function plot_convergence(ievals, ptspaces; xscale=:log10, yscale=:log10, maxpts=40)
    xx = LinRange(-1, 1, 100)
    ns = 2:maxpts
    fig = plot(title="Convergence",
        xlabel="Number of points",
        ylabel="Interpolation error",
        xscale=xscale,
        yscale=yscale,
        legend=:bottomleft,
        size=(1000, 700))
    for ieval in ievals
        for ptspace in ptspaces
            for test in [runge]
                errors = [interp_error(ieval, ptspace(-1, 1, n), xx, test)
                         for n in ns]
                plot!(ns, errors, marker=:circle, label="$ieval, $ptspace")
            end
        end
    end
    for k in [1, 2, 3]
        plot!(ns, ns .^ (-1.0*k), color=:black, label="\$n^{-$k}\$")
    end
    fig
end
[21]:
plot_convergence (generic function with 1 method)
[22]:
plot_convergence([interp_monomial, interp_chebyshev, interp_nearest], [LinRange, CosRange])
[22]:

Piecewise constant interpolation is not especially accurate, converging only at first order, but it avoids the runaway Runge phenomenon.

Exploration

Can you write a function that does piecewise interpolation with higher-order polynomials than constants, perhaps piecewise linear or quadratic?
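One possible direction (a sketch added here, not from the notes; `interp_linear` is a hypothetical helper name): build the interpolation matrix from piecewise linear "hat" functions. Each evaluation point falls in one interval \([x_j, x_{j+1}]\), and its row holds the two linear weights.

```julia
# Sketch: piecewise *linear* interpolation matrix (one possible answer
# to the exploration). Assumes x is sorted ascending.
function interp_linear(x, s)
    A = zeros(length(s), length(x))
    for (i, t) in enumerate(s)
        # Index j of the interval [x[j], x[j+1]] containing t.
        j = clamp(searchsortedlast(x, t), 1, length(x) - 1)
        h = x[j+1] - x[j]
        A[i, j]   = (x[j+1] - t) / h   # weight on the left node
        A[i, j+1] = (t - x[j]) / h     # weight on the right node
    end
    A
end

x = LinRange(-1, 1, 11)
s = LinRange(-1, 1, 100)
A = interp_linear(x, s)
# Since the nodes include the kink at 0, piecewise linear interpolation
# reproduces |x| exactly (up to roundoff).
maximum(abs.(A * abs.(x) .- abs.(s)))
```

Running the same convergence study on this scheme should show roughly second-order convergence for smooth functions, compared to first order for the nearest-neighbor version.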