Shoot from the HIP: Hessian Interatomic Potentials without derivatives
We show how Hessians can be predicted directly by a deep learning model, without relying on automatic differentiation or finite differences. We observe that SE(3)-equivariant, symmetric Hessians can be constructed from irreducible-representation (irrep) features up to degree l=2, computed during message passing in graph neural networks. As a result, HIP Hessians are one to two orders of magnitude faster to compute, more accurate, and more memory efficient; the model is also easier to train and scales better with system size. HIP delivers consistently better performance on downstream tasks such as transition state search, accelerated geometry optimization, zero-point energy corrections, and vibrational analysis.
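The claim that irrep features up to degree l=2 suffice can be illustrated with a small numerical sketch (this is not the paper's implementation; the function names are hypothetical and only numpy is assumed). Each 3x3 Cartesian block of a Hessian transforms as the tensor product of two l=1 vectors, which decomposes into irreps of degree l=0 (trace), l=1 (antisymmetric part), and l=2 (symmetric traceless part):

```python
import numpy as np

def decompose_block(H):
    """Split a 3x3 Cartesian block into its l=0, l=1, l=2 irrep parts.

    l=0: isotropic (trace) part, 1 component
    l=1: antisymmetric part, 3 components
    l=2: symmetric traceless part, 5 components
    """
    l0 = np.trace(H) / 3.0 * np.eye(3)
    l1 = 0.5 * (H - H.T)
    l2 = 0.5 * (H + H.T) - np.trace(H) / 3.0 * np.eye(3)
    return l0, l1, l2

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))
l0, l1, l2 = decompose_block(H)

# The three irrep parts (1 + 3 + 5 = 9 components) reconstruct the
# original 3x3 block exactly, so no degree above l=2 is needed.
assert np.allclose(l0 + l1 + l2, H)

# Diagonal blocks of a Hessian are symmetric, so their l=1
# (antisymmetric) part vanishes identically; predicting only the
# appropriate irrep components yields symmetry by construction.
H_sym = 0.5 * (H + H.T)
_, l1_sym, _ = decompose_block(H_sym)
assert np.allclose(l1_sym, 0.0)
```

Symmetry of the full Hessian additionally requires the off-diagonal blocks to satisfy H_ji = H_ij^T, which an equivariant architecture can enforce by sharing features across each edge pair.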