People in ML recently started using “inference” to mean running a model: getting an output given a concrete input¹. (The framework people, constrained perhaps by taste or backwards-compatibility, still call the relevant function .predict() or .eval() or .forward() rather than .infer().) This clashes pretty hard with the conventional usage (from statistics), which is much more like training. We used to infer the values of unobserved parameters \(\theta\) from observed \(X\); now we “inf...