Abstract
In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them well suited as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
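The abstract does not spell out the backpropagation scheme, but the standard way to differentiate through an implicit layer is the implicit function theorem: if the layer output z solves a residual equation F(z, θ) = 0, then dz/dθ = −(∂F/∂z)⁻¹ ∂F/∂θ, with no need to unroll the solver. The sketch below (our illustration, not the paper's exact algorithm) applies this to a single implicit Euler step on the hypothetical linear ODE dz/dt = −θz and checks the gradient against the closed form:

```python
# Sketch: gradient through one implicit Euler step via the implicit
# function theorem. Test ODE (hypothetical): dz/dt = f(z, theta) = -theta*z.

def f(z, theta):
    return -theta * z

def implicit_euler_step(z0, theta, h, iters=50):
    """Solve z1 = z0 + h * f(z1, theta) for z1 by fixed-point iteration."""
    z1 = z0
    for _ in range(iters):
        z1 = z0 + h * f(z1, theta)
    return z1

def grad_theta(z0, theta, h):
    """dz1/dtheta via the implicit function theorem, without unrolling.

    Residual: F(z1, theta) = z1 - z0 - h * f(z1, theta) = 0, hence
    dz1/dtheta = -(dF/dz1)^(-1) * dF/dtheta.
    """
    z1 = implicit_euler_step(z0, theta, h)
    dF_dz1 = 1.0 + h * theta   # d/dz1 of [z1 - z0 + h*theta*z1]
    dF_dtheta = h * z1         # d/dtheta of [z1 - z0 + h*theta*z1]
    return -dF_dtheta / dF_dz1

z0, theta, h = 1.0, 2.0, 0.1
g = grad_theta(z0, theta, h)
analytic = -h * z0 / (1.0 + h * theta) ** 2  # closed form for this linear ODE
print(g, analytic)
```

Because the gradient is computed from the residual at the converged solution, memory cost is independent of the number of solver iterations; this is the usual motivation for implicit-function-theorem backpropagation in implicit layers.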
Original language | English |
---|---|
Publication date | 14 Oct 2020 |
Publication status | Published - 14 Oct 2020 |
Event | Workshop on machine learning for engineering modeling, simulation and design @ NeurIPS 2020 - Duration: 12 Dec 2020 → 12 Dec 2020 |