Learning operators

Abstract: As neural network frameworks (e.g., Physics-Informed Neural Networks) are introduced to solve PDEs and dynamical systems, solutions of differential equations can be learned from data, which could replace or improve conventional numerical solvers. Still, this "learning functions" approach has limitations. A new framework of "learning operators" emerged to address them; in particular, DeepONet provides a neural network architecture for learning operators that map functions to functions. This talk will present the mechanism and performance of DeepONet (2020) and mention its follow-up studies.
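To make the "functions to functions" idea concrete, here is a minimal sketch of an unstacked DeepONet in PyTorch: a branch net encodes the input function sampled at fixed sensor points, a trunk net encodes the query coordinate, and their dot product gives the operator output. The layer sizes, names, and the toy usage example below are illustrative assumptions, not the setup used in the talk.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal unstacked DeepONet: G(u)(y) ≈ branch(u at sensors) · trunk(y) + bias."""

    def __init__(self, n_sensors: int, p: int = 64, width: int = 64):
        super().__init__()
        # Branch net: takes the input function u sampled at m fixed sensor points.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        # Trunk net: takes a query coordinate y (here assumed 1-D).
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # u_sensors: (batch, n_sensors), y: (batch, 1)
        b = self.branch(u_sensors)   # (batch, p) coefficients
        t = self.trunk(y)            # (batch, p) basis values at y
        return (b * t).sum(dim=-1, keepdim=True) + self.bias  # (batch, 1)

if __name__ == "__main__":
    # Illustrative shapes only: 8 input functions, each sampled at 100 sensors,
    # evaluated at one query point per function.
    model = DeepONet(n_sensors=100)
    u = torch.randn(8, 100)
    y = torch.rand(8, 1)
    print(model(u, y).shape)  # torch.Size([8, 1])
```

Training such a model on pairs (u sampled at sensors, y, G(u)(y)) with a mean-squared-error loss is the standard DeepONet recipe described in the 2020 paper.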

View slides



