Welcome to Backprop
Automatic heterogeneous back-propagation.
Write your functions normally to compute your result, and the library will automatically compute your gradient!
```haskell
gradBP (\x -> x^2 + 3) (9 :: Double)
-- => 18.0
```
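For instance, a minimal standalone program built around `gradBP` might look like the sketch below (assuming only an import of the library's main module, `Numeric.Backprop`):

```haskell
import Numeric.Backprop

main :: IO ()
main =
    -- d/dx (x^2 + 3) = 2x, so the gradient at x = 9 is 18
    print $ gradBP (\x -> x^2 + 3) (9 :: Double)
```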
Differs from ad by offering full heterogeneity: each intermediate step and the resulting value can have different types (matrices, vectors, scalars, lists, etc.).
```haskell
gradBP2 (\x xs -> sum (map (**2) (sequenceVar xs)) / x)
    (9 :: Double)
    ([1,6,2] :: [Double])
-- => (-0.5061728395061729,[0.2222222222222222,1.3333333333333333,0.4444444444444444])
```
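The value of the function itself can be recovered as well. Here is a small sketch (illustrative only) using `evalBP2` to run the same function on plain values, and `backprop2` to get the result together with both gradients in one pass:

```haskell
import Numeric.Backprop

main :: IO ()
main = do
    -- just the result: (1^2 + 6^2 + 2^2) / 9 = 41/9
    print $ evalBP2 (\x xs -> sum (map (**2) (sequenceVar xs)) / x)
        (9 :: Double)
        ([1,6,2] :: [Double])
    -- the result and the gradients for both inputs, in a single pass
    print $ backprop2 (\x xs -> sum (map (**2) (sequenceVar xs)) / x)
        (9 :: Double)
        ([1,6,2] :: [Double])
```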
Useful for applications in differentiable programming and deep learning for creating and training numerical models, especially as described in this blog post on a purely functional typed approach to trainable models. Overall, intended for the implementation of gradient descent and other numeric optimization techniques (see the sketch below). Comparable to the Python library autograd.
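For example, a plain gradient descent loop can be built directly on `gradBP`. The sketch below is illustrative only: the `objective` function, the `descend` helper, and the step size and iteration count are made-up example values, not part of the library.

```haskell
import Numeric.Backprop

-- a toy objective to minimize: (x - 5)^2, whose minimum is at x = 5
objective :: Num a => a -> a
objective x = (x - 5) ^ 2

-- plain gradient descent: repeatedly step against the gradient from gradBP
descend :: Double -> Int -> Double -> Double
descend rate n = (!! n) . iterate step
  where
    step x = x - rate * gradBP objective x

main :: IO ()
main = print (descend 0.1 100 0)   -- approaches the minimum at x = 5
```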
Get started with the introduction and walkthrough! Full technical documentation is also available on Hackage if you want to skip the introduction and get right into using the library. Support is available on the Gitter channel!