Differentiable Implicit Layers

Andreas Look, Simona Doneva, Melih Kandemir, Rainer Gemulla, Jan Peters

Research output: Contribution to conference without publisher/journal › Paper › Research › peer-review


Abstract

In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them well suited as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
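The scheme summarized above differentiates through the solution of an implicit equation rather than through the solver's iterations: if z* solves F(theta, z_prev, z*) = 0, the implicit function theorem gives dz*/dtheta = -(dF/dz)^{-1} dF/dtheta. The sketch below illustrates this idea on one implicit Euler step of a toy neural ODE. It is a minimal sketch under stated assumptions, not the paper's implementation; the JAX framing, the Newton solver, the tanh vector field, the step size H, and all function names are illustrative.

```python
# Minimal sketch (not the paper's implementation) of backpropagation
# through an implicit layer, here one implicit Euler step of a toy
# neural ODE. All names (f, residual, solve, implicit_euler_step) and
# the choice of solver are illustrative assumptions.
import jax
import jax.numpy as jnp

H = 0.1  # implicit Euler step size (illustrative choice)

def f(params, z):
    # Toy vector field of a neural ODE: dz/dt = tanh(W z + b).
    W, b = params
    return jnp.tanh(W @ z + b)

def residual(params, z_prev, z):
    # Implicit Euler step z_next = z_prev + H * f(z_next), written as a
    # root-finding problem F(params, z_prev, z_next) = 0.
    return z - z_prev - H * f(params, z)

def solve(params, z_prev):
    # Newton iterations to find the root; any solver works here because
    # the backward pass never differentiates through these iterations.
    z = z_prev
    for _ in range(20):
        F = residual(params, z_prev, z)
        J = jax.jacobian(residual, argnums=2)(params, z_prev, z)
        z = z - jnp.linalg.solve(J, F)
    return z

@jax.custom_vjp
def implicit_euler_step(params, z_prev):
    return solve(params, z_prev)

def _fwd(params, z_prev):
    z = solve(params, z_prev)
    return z, (params, z_prev, z)

def _bwd(res, z_bar):
    # Implicit function theorem: F(params, z_prev, z*) = 0 implies
    # dz*/dtheta = -(dF/dz)^{-1} dF/dtheta. Pull the cotangent z_bar
    # through the transposed Jacobian solve, then through dF/d(inputs).
    params, z_prev, z = res
    J = jax.jacobian(residual, argnums=2)(params, z_prev, z)
    w = jnp.linalg.solve(J.T, z_bar)
    _, vjp_fn = jax.vjp(lambda p, zp: residual(p, zp, z), params, z_prev)
    return vjp_fn(-w)  # cotangents w.r.t. (params, z_prev)

implicit_euler_step.defvjp(_fwd, _bwd)

# Usage: gradients of a scalar loss w.r.t. the layer's weights.
W = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (4, 4))
params = (W, jnp.zeros(4))
loss = lambda p: jnp.sum(implicit_euler_step(p, jnp.ones(4)) ** 2)
grads = jax.grad(loss)(params)
```

Under these assumptions, jax.grad returns exact gradients of the step output with respect to the weights without ever backpropagating through the Newton iterations, which is the memory and compute advantage such a scheme targets.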
Original language: English
Publication date: 14 Oct 2020
Publication status: Published - 14 Oct 2020
Event: Workshop on machine learning for engineering modeling, simulation and design @ NeurIPS 2020
Duration: 12 Dec 2020 - 12 Dec 2020

