The Case for Learning Mechanics

Fourteen researchers across Berkeley, MIT, Harvard, and EPFL have published a 41-page manifesto arguing that a scientific theory of deep learning is not just desirable but already forming. They call it "learning mechanics" and point to five converging research threads — solvable models, tractable limits, empirical laws, hyperparameter theories, and universal behaviors — that together resemble the state of statistical mechanics before it coalesced into a unified theory.

Read more →