Automatic differentiation may be one of the best scientific computing techniques you’ve never heard of. If you work with computers and real numbers at the same time, I think you stand to benefit from at least a basic understanding of AD, which I hope this article will provide; and even if you are a veteran automatic differentiator, you might still enjoy my take on it.
What it is
Wikipedia says it very well:
Automatic differentiation (AD), also called algorithmic differentiation or computational differentiation, is a set of techniques to numerically evaluate the derivative of a function specified by a computer program … derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
This bears repeating: any derivative or gradient, of any function you can program, or of any program that computes a function, [*] with machine accuracy and ideal asymptotic efficiency. This is good for
- real-parameter optimization (many good methods are gradient-based)
- sensitivity analysis (local sensitivity = derivative)
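To make the “working precision” claim above concrete, here is a minimal sketch of forward mode AD using dual numbers. The `Dual` class and `derivative` helper are my own illustration, not any particular library’s API; real AD systems handle many more operations and both forward and reverse modes.

```python
import math

class Dual:
    """A dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, value, deriv=0.0):
        self.value = value   # primal value
        self.deriv = deriv   # derivative with respect to the seeded input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def sin(x):
    if isinstance(x, Dual):
        # chain rule: (sin u)' = cos(u) * u'
        return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)
    return math.sin(x)

def derivative(f, x):
    """Evaluate f'(x) by seeding the input with derivative 1."""
    return f(Dual(x, 1.0)).deriv

# f(x) = x * sin(x^2), so f'(x) = sin(x^2) + 2*x^2*cos(x^2)
f = lambda x: x * sin(x * x)
print(derivative(f, 1.3))                          # AD result
print(math.sin(1.69) + 3.38 * math.cos(1.69))      # analytic derivative
```

The two printed numbers agree to the last bit: no step sizes, no truncation error, and the differentiated program does only a small constant factor more arithmetic than the original.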