The first derivative of an analytic function can be estimated by:
$$\displaystyle h\frac{df}{dx}\Bigg|_{x=0} \approx -\frac{1}{2}f(-h) + \frac{1}{2}f(h) $$
where $h$ is a small distance. This is the common central difference formula; a more accurate but less frequently seen one is:
$$\displaystyle h\frac{df}{dx}\Bigg|_{x=0} \approx \frac{1}{12}f(-2h) - \frac{2}{3}f(-h) + \frac{2}{3}f(h) - \frac{1}{12}f(2h) $$
This formula uses two additional data points, and the coefficients of the $f(\pm h)$ terms have changed. The trick is finding the coefficients that give maximum accuracy for a given collection of data points. For this post we’re going to stay with central differences, where the data points are evenly spaced with an equal number to either side of the center. So with $N$ pairs of data points and coefficients $a_{k,N}$ to be found, the formulas will have the form:
$$\displaystyle h\frac{df}{dx}\Bigg|_{x=0} = \sum_{k=1}^N \frac{a_{k,N}}{2} (f(kh) - f(-kh)) $$
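Before deriving anything, here’s a quick numerical sanity check of the two formulas above; a minimal sketch (assuming NumPy is available) using $f(x) = e^x$, so that $f'(0) = 1$:

```python
import numpy as np

# Check the 2-point and 4-point central differences on f(x) = exp(x)
# at x = 0, where f'(0) = 1. Test function and step sizes are arbitrary.
f = np.exp

for h in (0.1, 0.01):
    d2 = (-0.5 * f(-h) + 0.5 * f(h)) / h                      # 2-point
    d4 = (f(-2*h)/12 - 2*f(-h)/3 + 2*f(h)/3 - f(2*h)/12) / h  # 4-point
    print(f"h = {h}: 2-point error {abs(d2 - 1):.1e}, "
          f"4-point error {abs(d4 - 1):.1e}")
```

Shrinking $h$ by a factor of 10 shrinks the 2-point error by about $10^2$ and the 4-point error by about $10^4$, which is the extra accuracy the larger stencil buys.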
The coefficients are found by expanding the left and right sides as Maclaurin series and equating the terms with $h^s$ for each $s$:
$$\displaystyle h\,\delta_{1,s}f^{(1)}(0) = \sum_{k=1}^N \frac{a_{k,N}}{2}\frac{f^{(s)}(0)}{s!}\left((kh)^s - (-kh)^s\right) $$
where $s = 1, 2, 3, \ldots$ and $\delta_{1,s}$ is the Kronecker delta. When $s$ is even the summand vanishes, since $(kh)^s - (-kh)^s = 0$, so only odd $s$ supply nontrivial constraints on the $a_{k,N}$; the first $N$ odd values $s = 1, 3, \ldots, 2N-1$ give exactly the $N$ constraints needed to identify the $a_{k,N}$. Canceling the common factor $f^{(s)}(0)h^s/s!$ and keeping only the odd values of $s$ simplifies the constraints to:
$$\displaystyle \delta_{1,2s+1} = \sum_{k=1}^N \frac{a_{k,N}}{2}2k^{2s+1} = \sum_{k=1}^N (ka_{k,N})(k^2)^s $$
after reindexing the odd values as $s \to 2s+1$, with the new $s$ running from $0$ to $N-1$. This set of constraints can be written as a matrix equation, with summation over the repeated index $k$ implied:
$$\displaystyle \delta_{1,2s+1} = V_{sk} A_{k} $$
where $A_{k} = ka_{k,N}$ and $V_{sk} = (k^2)^s$. Since we want to find the coefficients $a_{k,N}$, we ultimately want the inverse matrix of $V_{sk}$, which we will call $U$:
$$\displaystyle U_{ps} V_{sk} = \delta_{pk}$$
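For example, with $N = 2$ the system is small enough to solve by hand:
$$\displaystyle \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} \quad\implies\quad A_1 = \frac{4}{3}, \; A_2 = -\frac{1}{3} $$
so $a_{1,2} = 4/3$ and $a_{2,2} = -1/6$, whose halves are exactly the weights $2/3$ and $-1/12$ in the four-point formula above. For larger $N$, though, hand-solving stops being fun.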
$V$ is a Vandermonde matrix, since it consists of monomials $x^s$ evaluated at the points $x_k = k^2$. Such matrices are poorly suited to numerical work because they tend to be badly conditioned, especially when the number of points is large, so we would run into trouble if we asked a computer to solve for the coefficients numerically. Instead we are looking for a formula that sidesteps those problems.
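To see how bad the conditioning gets, here’s a minimal sketch (assuming NumPy is available) that builds $V_{sk} = (k^2)^s$ as defined above and prints its condition number as $N$ grows:

```python
import numpy as np

def vandermonde(N):
    """V[s, k-1] = (k^2)^s for s = 0..N-1, k = 1..N, as defined in the post."""
    k = np.arange(1, N + 1, dtype=float)
    return (k ** 2) ** np.arange(N)[:, None]

for N in (2, 5, 10, 15):
    print(f"N = {N:2d}: cond(V) = {np.linalg.cond(vandermonde(N)):.1e}")
```

Once the condition number passes the reciprocal of double-precision machine epsilon (about $10^{16}$), a direct solve can lose essentially all of its accuracy.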
Unfortunately it’s not easy to find the Vandermonde inverse by hand either. But looking at the problem conceptually lets us skip a lot of work. Viewed as a linear transformation, $V^T$ maps polynomial coefficients to the polynomial’s values at the points $x_k = k^2$. Its inverse $V^{-T} = U^T$ must therefore map values at the $x_k$ back to polynomial coefficients. Put another way, $[U^T]_{sp}$ is the coefficient of $x^s$ in the Lagrange basis polynomial associated with $x_p = p^2$. Since we want
$$\displaystyle A_{k} = [V^{-1}]_{ks}\, \delta_{1,2s+1} = U_{ks}\, \delta_{1,2s+1} = U_{k,0}$$
we only need the first column of $U_{ps}$ (the $s=0$ column), which is the first row of $[U^T]_{sp}$. These are the constant terms of the Lagrange basis polynomials, which are easy enough to write down directly:
$$\displaystyle [U^T]_{0,k} = U_{k,0} = A_{k} = \text{constant term of } \frac{\prod_{j\neq k} (x-j^2)}{\prod_{j\neq k} (k^2 - j^2)} $$
$$\displaystyle A_{k} = \frac{\prod_{j\neq k} (-j^2)}{\prod_{j\neq k} (k^2 - j^2)} = \frac{ (-1)^{N-1} \left( \frac{N!}{k} \right)^2 }{ \prod_{j\neq k} (k-j)(k+j) }$$
The two products in the denominator evaluate to $\prod_{j\neq k} (k-j) = (-1)^{N-k}(k-1)!\,(N-k)!$ and $\prod_{j\neq k} (k+j) = \frac{(N+k)!}{k!\,(2k)}$, so
$$\displaystyle A_{k} = \frac{(-1)^{N-1} N!^2}{k^2\, (k-1)!\, (N-k)!\, (-1)^{N-k}\, \frac{(N+k)!}{k!\,(2k)}} $$
$$\displaystyle A_k = \frac{2(-1)^{k-1} N!^2}{(N-k)!(N+k)!} $$
In terms of binomial coefficients:
$$\displaystyle A_k = \frac{2(-1)^{k-1} \binom{N}{k}}{\binom{N+k}{k}} $$
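It’s cheap to verify this closed form against the Lagrange constant-term expression with exact rational arithmetic; a minimal sketch using only Python’s standard library:

```python
from fractions import Fraction
from math import comb

def A_lagrange(k, N):
    """Constant term of the Lagrange basis polynomial at x_k = k^2:
    product over j != k of (0 - j^2) / (k^2 - j^2)."""
    val = Fraction(1)
    for j in range(1, N + 1):
        if j != k:
            val *= Fraction(-j * j, k * k - j * j)
    return val

def A_closed(k, N):
    """The closed form 2 * (-1)^(k-1) * C(N, k) / C(N+k, k)."""
    return Fraction(2 * (-1) ** (k - 1) * comb(N, k), comb(N + k, k))

for N in range(1, 9):
    assert all(A_lagrange(k, N) == A_closed(k, N) for k in range(1, N + 1))
print("closed form matches the Lagrange constant terms for N = 1..8")
```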
Finally, accounting for the difference between $A_k$ and $a_{k,N}$:
$$\displaystyle a_{k,N} = \frac{1}{k}A_k = \frac{2(-1)^{k-1}}{k} \frac{\binom{N}{k}}{\binom{N+k}{k}}$$
This gives back the same two formulas from the beginning, plus infinitely many more beyond.
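As a final check, here’s a short sketch that turns the closed form into the stencil weights $a_{k,N}/2$ from the general formula above, as exact fractions:

```python
from fractions import Fraction
from math import comb

def stencil_weights(N):
    """Weights a_{k,N}/2 on (f(kh) - f(-kh)) for k = 1..N, from
    a_{k,N} = (2 * (-1)^(k-1) / k) * C(N, k) / C(N+k, k)."""
    return [Fraction((-1) ** (k - 1) * comb(N, k), k * comb(N + k, k))
            for k in range(1, N + 1)]

for N in (1, 2, 3):
    print(N, [str(w) for w in stencil_weights(N)])
# 1 ['1/2']                    the 2-point formula
# 2 ['2/3', '-1/12']           the 4-point formula
# 3 ['3/4', '-3/20', '1/60']   the next one, for free
```

The $N = 3$ row is one of the infinitely many more: a six-point stencil accurate to $O(h^6)$.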