Automatic differentiation and control (2 positions)

Postdoc position, 24 months, and

Research engineer position, 18 months

Inria Centre at Université Côte d’Azur (Sophia)

More and more innovative algorithms in AI and its offshoots in optimal control or optimal transport require automatic differentiation (AD): schematically, minimizing a loss function in machine learning requires computing a descent direction, generally associated with a more or less weak notion of derivative (see, for instance, the notion of path differentiability and its applications in Bolte, Pauwels and Vaiter, 2023). Estimating this derivative classically requires differentiating the computation performed by an algorithm, and automatically generating the algorithm that computes this derivative often has many virtues (efficiency, numerical robustness, etc.). Although well established, the technical ecosystem of AD has undergone a major revival over the last decade with the explosion of problems and techniques associated with ML. A quick review includes historical state-of-the-art codes such as Tapenade, developed at Inria Sophia (Hascoët and Pascual, 2013), as well as numerous more recent tools such as TensorFlow, PyTorch, JAX, ForwardDiff, ReverseDiff and Zygote, which stem directly from the work of the ML community. We also note the recent importance for AD of language-agnostic approaches that operate directly at the level of an intermediate representation (LLVM code in the case of the Enzyme project, for example). In our academic context, it would also be relevant to connect this theme with another discipline well represented in Nice-Sophia: proof of code in programming language theory. Since numerical aspects are crucial to the quality of the results an algorithm provides, proving that these differentiation methods are correct would add significant value to this work.
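To fix ideas, the forward mode of AD can be sketched with dual numbers: each intermediate value carries its tangent, and every elementary operation propagates both via the chain rule. The minimal Python sketch below is purely illustrative (all names are ours, not taken from any of the tools cited above); real systems such as ForwardDiff or Tapenade handle far more operations, reverse mode, and code generation.

```python
# Illustrative forward-mode AD via dual numbers (sketch only).
import math
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # primal value
    dot: float  # tangent (derivative) carried alongside

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x: Dual) -> Dual:
    # chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x: float) -> float:
    """Seed the tangent with 1.0 and read off f'(x)."""
    return f(Dual(x, 1.0)).dot

# Example: f(x) = x sin(x) + 3x, whose derivative is sin(x) + x cos(x) + 3.
f = lambda x: x * sin(x) + 3 * x
print(derivative(f, 2.0))  # matches sin(2) + 2 cos(2) + 3
```

The point of the sketch is that the derivative is computed exactly (up to rounding), operation by operation, by transforming the original computation rather than by finite differences; this is the mechanism that tools like those above industrialize.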
These postdoc + engineer recruitments will make it possible to review and evaluate AD tools in close connection with the fundamental algorithms of optimisation / optimal control / optimal transport, with a strong focus on the Julia language. The performance of these tools will be benchmarked on use cases stemming from ongoing developments of the control-toolbox project (Caillau, Cots and Martinon, 2022) and from applications in learning.


Jean-Baptiste Caillau, Thibaud Kloczko and Samuel Vaiter


Bolte, J.; Pauwels, E.; Vaiter, S. One-step differentiation of iterative algorithms. NeurIPS 2023.

Caillau, J.-B.; Cots, O.; Martinon, P. ct: control toolbox - Numerical tools and examples in optimal control. IFAC PapersOnLine 55 (2022), no. 16, 13-18. Proceedings of 18th IFAC Workshop on Control Applications of Optimization, Paris, July 2022.

Hascoët, L.; Pascual, V. The Tapenade Automatic Differentiation tool: Principles, Model, and Specification. ACM Transactions On Mathematical Software 39 (2013), no. 3, 1-43.