Workshop Abstract

Differential equations form the bedrock of scientific computing, while neural networks have emerged as the preferred tool of modern machine learning. The two are not only closely related but also offer complementary strengths: the modelling power and interpretability of differential equations, and the approximation and generalization power of deep neural networks.

While progress has been made on combining differential equations and deep neural networks, most existing work has been disjointed, and a coherent picture has yet to emerge. A theoretical foundation for integrating deep neural networks and differential equations thus remains underdeveloped, with many more questions than answers. For example: How can we incorporate a given ordinary/partial differential equation (ODE/PDE) into the architecture of a deep neural network? Under what assumptions can we approximate a system of ODEs/PDEs by deep neural networks? How good are these approximations? How can we interpret deep neural networks from the perspective of ODEs/PDEs? How can the well-developed mathematical tools for ODEs/PDEs be leveraged to better understand deep neural networks and improve their performance? Substantive progress will require a principled approach that integrates ideas from disparate fields, including differential equations, machine learning, numerical analysis, optimization, optimal transport, computer graphics, and physics.
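One of the questions above, interpreting deep networks through the lens of ODEs, has a well-known concrete instance: a residual block x_{k+1} = x_k + h·f(x_k) is a forward-Euler step for the ODE dx/dt = f(x). The following is a minimal numerical sketch of that reading; the toy vector field f(x) = -x and the step sizes are illustrative assumptions, not part of this abstract.

```python
import numpy as np

def f(x):
    # Toy "layer" vector field; the exact ODE flow for dx/dt = -x is x0 * exp(-t).
    return -x

def resnet_forward(x0, depth, h):
    """Apply `depth` residual blocks with step size h.

    Each block x <- x + h * f(x) is one forward-Euler step of dx/dt = f(x),
    so a deep stack of blocks approximates the ODE flow up to time depth * h.
    """
    x = x0
    for _ in range(depth):
        x = x + h * f(x)
    return x

x0 = np.array([1.0])
T = 1.0
depth = 1000
x_deep = resnet_forward(x0, depth, T / depth)  # deep network output
exact = x0 * np.exp(-T)                        # continuous-time ODE solution
# As depth grows (h -> 0), the residual network's output approaches the ODE flow.
```

In this reading, increasing depth corresponds to refining the time discretization, which is one route by which numerical-analysis tools (stability, convergence order) transfer to architecture design.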

The goal of this workshop is to provide a forum where theoretical and experimental researchers of all stripes can come together not only to share reports on their progress but also to find new ways to join forces towards the goal of coherent integration of deep neural networks and differential equations. Topics to be discussed include, but are not limited to:

  • Deep learning for high dimensional PDE problems
  • PDE and stochastic analysis for deep learning
  • PDE and analysis for new architectures; stable architecture design using numerical stability approaches
  • Inverse problems approaches to learning theory; regularization of the loss in deep learning, convergence in the data sampling limit
  • PDEs on graphs
  • Physics-inspired neural networks
  • Numerical tools and libraries for interfacing deep learning models with ODE/PDE solvers
  • Deep learning for computer graphics
  • Optimal transport for deep generative models
  • Applications of deep learning + differential equations in scientific problems

Confirmed Speakers

Maarten V. de Hoop (Rice University)
Ron Fedkiw (Stanford University)
Wilfrid Gangbo (University of California, Los Angeles)
Xavier Bresson (Nanyang Technological University)
Tom Goldstein (University of Maryland)
Ricky T. Q. Chen (University of Toronto)
Gavin Portwood (Los Alamos National Laboratory)

Call for Papers and Submission Instructions

We invite researchers to submit anonymous extended abstracts of up to 4 pages (including the abstract but excluding references). No specific formatting is required; authors may use the ICLR style file or any other style, as long as it uses a standard font size (11pt) and margins (1in).

Submissions should be anonymous and are handled through the OpenReview system. Please note that at least one coauthor of each accepted paper will be expected to attend the workshop in person to present a poster or give a contributed talk.

Papers can be submitted at the address:

Important Dates

  • Submission deadline (EXTENDED): 23:59 PST, Tuesday, February 18th
  • Acceptance notification: Tuesday, February 25th
  • Camera ready submission: Sunday, April 19th
  • Workshop: Sunday, April 26th


Organizers

Richard G. Baraniuk
Stanley Osher
Anima Anandkumar
Animesh Garg
Bao Wang
Tan M. Nguyen

Please email with any questions.