Constrained Neural Nets Workbook
As a student researcher at Lawrence Berkeley National Laboratory during the summer of 2019, I investigated two methods for constraining neural networks with partial differential equations (PDEs). The first method reformulates neural network training as a constrained optimization problem, so that the PDE constraints are enforced during training. The second method operates on a fully trained neural network and nonlinearly “projects” its trainable parameters onto a configuration that satisfies the constraints.

In our experiments, neither method appeared to be practically useful, despite promising theoretical guarantees. In light of these negative results, we did not publish our findings. However, to aid future researchers investigating this or similar topics, this repository provides the code for our project, together with a writeup of our theory, experimental procedure, and results, and the slides from a seminar I gave on the topic. We hope these materials will inform future experimentation on constrained neural networks.
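To make the first method concrete, here is a minimal, hypothetical sketch of PDE-constrained training via an augmented-Lagrangian-style penalty, assuming a PyTorch setup. The network architecture, the 1-D advection constraint u_t + u_x = 0, and names such as `pde_residual` are illustrative assumptions, not the formulation or API actually used in this repository.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small fully connected network mapping (x, t) -> u(x, t).
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def pde_residual(xt):
    """Residual of the 1-D advection equation u_t + u_x = 0 at the points xt."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    return u_t + u_x

# Synthetic supervised data; u(x, t) = sin(x - t) satisfies the constraint exactly.
xt_data = torch.rand(256, 2)
u_data = torch.sin(xt_data[:, 0:1] - xt_data[:, 1:2])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam, mu = 0.0, 1.0  # Lagrange-multiplier estimate and penalty weight

for step in range(2000):
    opt.zero_grad()
    data_loss = ((net(xt_data) - u_data) ** 2).mean()
    xt_col = torch.rand(512, 2)                  # collocation points for the constraint
    c = (pde_residual(xt_col) ** 2).mean()       # scalar measure of constraint violation
    loss = data_loss + lam * c + 0.5 * mu * c ** 2
    loss.backward()
    opt.step()
    if step % 100 == 99:
        lam += mu * c.item()                     # dual-ascent update on the multiplier
```

The appeal of a constrained formulation over a fixed penalty is that the multiplier update drives the constraint violation toward zero without hand-tuning an ever-growing penalty weight.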
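The second method can be sketched in the same spirit: starting from an already-trained network, solve a small optimization problem that stays as close as possible to the trained weights while driving the constraint violation toward zero. This is again only an assumed penalty-based illustration, reusing the hypothetical `net` and `pde_residual` from the sketch above, not the projection actually implemented in this repository.

```python
# Keep a copy of the trained parameters; the projection should stay near them.
theta0 = [p.detach().clone() for p in net.parameters()]

# Fixed collocation points so the objective is deterministic for L-BFGS.
xt_col = torch.rand(1024, 2)

proj_opt = torch.optim.LBFGS(net.parameters(), lr=0.5, max_iter=200)

def closure():
    proj_opt.zero_grad()
    violation = (pde_residual(xt_col) ** 2).mean()
    drift = sum(((p - p0) ** 2).sum() for p, p0 in zip(net.parameters(), theta0))
    loss = drift + 1e3 * violation   # proximity to trained weights + penalized constraint
    loss.backward()
    return loss

proj_opt.step(closure)
```

In effect this approximates the nonlinear projection that minimizes the distance to the trained parameters subject to the PDE constraint, with the constraint handled by a penalty rather than enforced exactly.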