The previous posts on optimization scaling laws [1, 2] focused on problems that do not become significantly harder as the problem size increases. We showed that for several such problems, as the dimension \(d\) goes to infinity, the optimality gap after \(k\) iterations converges to zero at a sublinear rate \(\Theta(k^{-p})\), for some power \(p\) that depends on the problem but is independent of \(d\). Not all problems have this nice limiting behavior, however: some do become harder as the problem size increases.
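As a concrete illustration of such a dimension-independent rate, here is a minimal numerical sketch (not taken from the earlier posts, and assuming the simplest case \(p = 1\)): gradient descent on a random convex quadratic with unit spectral norm reaches an optimality gap of order \(1/k\) after \(k\) iterations, with a constant that does not grow with the dimension \(d\), so the rescaled gap \(k \, (f(x_k) - f_*)\) stays bounded as \(d\) increases.

```python
import numpy as np

rng = np.random.default_rng(0)

def gd_optimality_gaps(d, n_iters=1000):
    """Gradient descent on f(x) = 0.5 x^T A x (minimum value 0) for a random
    d x d PSD matrix A with spectral norm normalized to 1."""
    M = rng.standard_normal((d, d))
    A = M @ M.T
    A /= np.linalg.eigvalsh(A).max()      # largest eigenvalue L = 1
    x = np.ones(d) / np.sqrt(d)           # fixed initial distance ||x_0|| = 1
    gaps = []
    for _ in range(n_iters):
        gaps.append(0.5 * x @ A @ x)      # optimality gap f(x_k) - f_*, since f_* = 0
        x = x - A @ x                     # gradient step with step size 1/L = 1
    return np.array(gaps)

for d in (10, 100, 1000):
    gaps = gd_optimality_gaps(d)
    k = np.arange(1, len(gaps) + 1)
    # k * gap stays bounded across dimensions, consistent with a Theta(1/k) rate
    # whose constant does not depend on d.
    print(f"d = {d:5d}, max_k k * (f(x_k) - f_*) = {np.max(k * gaps):.4f}")
```

Under these assumptions the printed quantity remains of order one for all three dimensions, which is the kind of \(d\)-independent limiting behavior referred to above.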