Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel → Restart) and then run all cells (in the menubar, select Cell → Run All).

Make sure you fill in any place that says BEGIN SOLUTION and END SOLUTION, as well as your name and collaborators below:

[ ]:
NAME = ""
COLLABORATORS = ""

2025-11-14 Numerical Differentiation

This activity will explore numerical approximation of derivatives using methods of the form

\[f' \left( x \right) \approx \frac{f \left( x + h \right) - f \left( x \right)}{h}\]

for some small \(h > 0\). This is known as the first-order forward difference.
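The "first-order" label can be justified with a one-line Taylor expansion (a standard argument, included here for reference):

\[f \left( x + h \right) = f \left( x \right) + h f' \left( x \right) + \frac{h^2}{2} f'' \left( \xi \right), \qquad \xi \in \left( x, x + h \right),\]

so

\[\frac{f \left( x + h \right) - f \left( x \right)}{h} = f' \left( x \right) + \frac{h}{2} f'' \left( \xi \right) = f' \left( x \right) + \mathcal{O} \left( h \right).\]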

[ ]:
using LinearAlgebra
using Plots
using Test
default(linewidth=3)

# Approximate the derivative with a forward finite difference
function diff_fwd(f, x, h)
    (f.(x .+ h) - f.(x)) / h
end

# And plot
plot([cos,
      x -> diff_fwd(sin, x, .1),
      x -> diff_fwd(sin, x, .5)],
    label=["\$cos ( x )\$" "\$h = 0.1\$" "\$h = 0.5\$"],
    xlims=(-4, 4))

In this figure, every point in the \(h = 0.1\) and \(h = 0.5\) lines is calculated by a finite difference estimate of the derivative. The analytic derivative is shown in blue. The \(h = 0.1\) line appears to be more accurate than \(h = 0.5\), which is a good sign.

To be quantitative, let’s examine how the error (measured here as the 2-norm of the error over a set of sample points) depends upon the parameter \(h\).

[ ]:
# Compute the error from the true derivative
function compute_errors(diff, f, fprime; x=LinRange(-4, 4, 50))
    hs = 10. .^ (-15:.5:0)
    errors = [norm(diff(f, x, h) - fprime.(x)) for h in hs]
    hs, errors
end

# And let's plot
hs, errors = compute_errors(diff_fwd, sin, cos)
scatter(hs, errors, label="forward diff", xlabel="\$h\$", ylabel="error", xscale=:log10, yscale=:log10)
plot!(hs, hs, label="\$O(h)\$")
plot!(hs, 1e-15 ./ hs, label="\$O(1/h)\$", legend=:bottomleft)

Evidently the error behaves like \(\mathcal{O} \left( h \right)\) for large values of \(h\) and like \(\mathcal{O} \left( 1 / h \right)\) for small values of \(h\). To minimize error, we should choose a value of \(h \approx 10^{-8} \approx \sqrt{\epsilon_\text{machine}}\) in this case, but the specific choice depends on the function.
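To see the \(h \approx \sqrt{\epsilon_\text{machine}}\) heuristic in action, we can compare the forward-difference error of \(\sin\) at a single point for a near-optimal \(h\) against much smaller and much larger values (a quick illustrative check, not part of the graded exercise):

```julia
# Forward-difference error of sin at x = 1 for several step sizes.
# For Float64, sqrt(eps()) ≈ 1.5e-8 is near the optimal h.
fwd_err(h) = abs((sin(1.0 + h) - sin(1.0)) / h - cos(1.0))

h_opt = sqrt(eps())
println(fwd_err(1e-15))  # rounding error dominates
println(fwd_err(h_opt))  # near the minimum total error
println(fwd_err(1e-2))   # discretization error dominates
```

Both extremes lose several digits relative to the balanced choice.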

Backward difference

There is nothing special about adding \(h\). We can also consider the first-order backward difference

\[f' \left( x \right) \approx \frac{f \left( x \right) - f \left( x - h \right)}{h}\]

for some small \(h > 0\).

Complete the function below and compare with our forward difference.

[ ]:
# Approximate the derivative with a backward finite difference.
function diff_back(f, x, h)
    ### BEGIN SOLUTION

    ### END SOLUTION
end

# And let's plot
hs, errors = compute_errors(diff_back, sin, cos)
scatter(hs, errors, label="backward diff", xlabel="\$h\$", ylabel="error", xscale=:log10, yscale=:log10)
plot!(hs, hs, label="\$O(h)\$")
plot!(hs, 1e-15 ./ hs, label="\$O(1/h)\$", legend=:bottomleft)

It appears that we don’t see particularly different behavior between the forward and backward difference formulas. Perhaps if we combined them?

Centered difference formula

We can do similar experiments for the “centered” difference formula

\[f' \left( x \right) \approx \frac{f \left( x + h \right) - f \left( x - h \right)}{2 h}\]
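Subtracting the Taylor expansions of \(f \left( x + h \right)\) and \(f \left( x - h \right)\) cancels the even-order terms, which is why we expect second-order accuracy (a sketch of the standard argument):

\[f \left( x \pm h \right) = f \left( x \right) \pm h f' \left( x \right) + \frac{h^2}{2} f'' \left( x \right) \pm \frac{h^3}{6} f''' \left( x \right) + \dots\]

so

\[\frac{f \left( x + h \right) - f \left( x - h \right)}{2 h} = f' \left( x \right) + \frac{h^2}{6} f''' \left( x \right) + \dots = f' \left( x \right) + \mathcal{O} \left( h^2 \right).\]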
[ ]:
# Finite difference with centered difference formula
function diff_cent(f, x, h)
    ### BEGIN SOLUTION

    ### END SOLUTION
end

hs, errors = compute_errors(diff_cent, sin, cos)
scatter(hs, errors, label="centered diff",
    xlabel="\$h\$", ylabel="error",
    xscale=:log10, yscale=:log10, ylims=(1e-12, 10))
plot!(hs, hs.^2, label="\$O(h^2)\$")
plot!(hs, 1e-15 ./ hs, label="\$O(1/h)\$", legend=:bottomleft)

Caption

Write a caption for this figure, explaining the effect of rounding error and discretization error. Explain how to use the estimates \(1/h\), \(h\), and/or \(h^2\) to produce an expression involving \(\epsilon_\text{machine}\) to justify a good choice of \(h\).

BEGIN SOLUTION

END SOLUTION

Shifting the domain

Now, we consider the effect of shifting the test interval from \(\left( -4, 4 \right)\) to \(\left( 10^5 \pi - 4, 10^5 \pi + 4 \right)\). Note that our test function \(f \left( x \right) = \sin \left( x \right)\) is \(2 \pi\)-periodic.
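One relevant fact before running the experiment (a quick check using Julia's `eps(x)`, the floating-point spacing at \(x\)): the absolute gap between adjacent `Float64` values grows with the magnitude of \(x\), so the rounding error in \(f \left( x + h \right) - f \left( x \right)\) is much larger near \(10^5 \pi\) than near the origin.

```julia
# Spacing between adjacent Float64 values at different magnitudes.
println(eps(1.0))       # spacing near 1, about 2.2e-16
println(eps(1e5 * π))   # spacing near 10^5 π, roughly 10^5 times larger
# x + h rounds back to x whenever h is below this spacing, so very
# small h values are useless on the shifted interval.
```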

[ ]:
# Gather our data
hs, errors1 = compute_errors(diff_fwd,  sin, cos, x=1e5*π.+LinRange(-4, 4, 50))
_,  errors2 = compute_errors(diff_cent, sin, cos, x=1e5*π.+LinRange(-4, 4, 50))

# And plot
scatter(hs, errors1, label="forward diff",
    xlabel="\$h\$", ylabel="error",
    xscale=:log10, yscale=:log10, ylims=(1e-12, 10))
scatter!(hs, errors2, label="centered diff")
plot!(hs, hs, label="\$O(h)\$")
plot!(hs, hs.^2, label="\$O(h^2)\$")
plot!(hs, 1e-15 ./ hs, label="\$O(1/h)\$", legend=:bottomleft)

This is clearly higher error than we saw before. Create a caption discussing the difference between the plots above and this most recent plot.

BEGIN SOLUTION

END SOLUTION

Estimate constants

Suppose you are given two positive values \(h_0\) and \(h_1\) as well as two errors \(e_0 = e \left( h_0 \right)\) and \(e_1 = e \left( h_1 \right)\). Use the ratios \(h_1 / h_0\) and \(e_1 / e_0\) to estimate \(p\) assuming our error is of the form \(e \left( h \right) = c h^p\).

Hint: take ratios of the equations and use a logarithm (log).
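Following the hint, the algebra works out as follows (a sketch of the derivation, under the assumed error model):

\[\frac{e_1}{e_0} = \frac{c h_1^p}{c h_0^p} = \left( \frac{h_1}{h_0} \right)^p \quad \Longrightarrow \quad p = \frac{\log \left( e_1 / e_0 \right)}{\log \left( h_1 / h_0 \right)}.\]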

[ ]:
function convergence_order(h0, e0, h1, e1)
    hratio = h1 / h0
    eratio = e1 / e0
    ### BEGIN SOLUTION

    ### END SOLUTION
end

convergence_order(hs[end-6], errors1[end-6], hs[end-7], errors1[end-7])
[ ]:
@test abs(convergence_order(hs[end-5], errors1[end-5], hs[end-7], errors1[end-7]) - 1) < 1e-4
@test abs(convergence_order(hs[end-4], errors2[end-4], hs[end-3], errors2[end-3]) - 2) < 1e-4

Note: While the last assignment gave insight into some of the ideas behind the Finite Element Method, this assignment gives insight into Finite Difference Methods.

Extension

We can iteratively apply these ideas to approximate higher-order derivatives. Read the Wikipedia article about higher-order differences and pick one of the formulations for estimating the second derivative. That is to say, pick one of the formulations

\[f'' \left( x \right) \approx \dots\]
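For reference, one common option listed on that Wikipedia page is the centered second difference (other choices are equally valid here):

\[f'' \left( x \right) \approx \frac{f \left( x + h \right) - 2 f \left( x \right) + f \left( x - h \right)}{h^2},\]

which is second-order accurate for smooth \(f\).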

Implement the function below.

[ ]:
# Second derivative via finite differencing
function diff2(f, x, h)
    ### BEGIN SOLUTION

    ### END SOLUTION
end

Let’s be bold here. Here’s a function for computing the error in a second-derivative approximation. Make a plot like the ones above showing the error in this second-order finite difference. Add appropriate reference lines (\(h\), \(h^2\), \(1/h\), etc.) that match the error we see.

[ ]:
# Compute the error from the true second derivative
function compute_errors_diff2(diff2, f, fprimeprime; x=LinRange(-4, 4, 50))
    hs = 10. .^ (-15:.5:0)
    errors = [norm(diff2(f, x, h) - fprimeprime.(x)) for h in hs]
    hs, errors
end
[ ]:
### BEGIN SOLUTION

### END SOLUTION

Caption this plot as you did above.

BEGIN SOLUTION

END SOLUTION