We should lower our expectations for class balancing. Resampling might not improve class separation, and even when it does, it typically comes at a cost to calibration and log loss.
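One reason calibration suffers is that resampling shifts the class prior the model sees during training. A minimal sketch of the standard post-hoc correction, assuming the positive class was oversampled by a known factor `r` (function and variable names here are illustrative, not from the post):

```python
def adjust_probability(p_resampled: float, r: float) -> float:
    """Map a probability estimated on data where the positive class was
    oversampled by factor r back to the original class distribution.

    By Bayes' rule, oversampling positives by r multiplies the prior
    (and hence posterior) odds by r, so we divide the odds by r to undo it.
    """
    odds = p_resampled / (1.0 - p_resampled)  # posterior odds under resampled prior
    odds /= r                                 # undo the prior shift
    return odds / (1.0 + odds)
```

For example, a model that outputs 0.5 on data where positives were oversampled 9x corresponds to a probability of 0.1 under the original distribution; with `r = 1` (no resampling) the probability is unchanged.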
Here comes Part 3 on learning with not enough data (previous: Part 1 and Part 2). Let's consider two approaches for generating synthetic data for training.

Augmented data. Given a set of existing training samples, we can apply a variety of augmentations, distortions, and transformations to derive new data points without losing the key attributes. We covered a number of augmentation methods for text and images in a previous post on contrastive learning.
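As a minimal sketch of what such augmentation can look like for text, here are two common token-level operations, random deletion and random swap (the function names and parameters are illustrative, not from the post; the idea is that small perturbations preserve the key attributes, here the overall meaning):

```python
import random

def random_deletion(tokens, p=0.1, rng=None):
    """Drop each token independently with probability p; keep at least one token."""
    rng = rng or random.Random()
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]

def random_swap(tokens, n=1, rng=None):
    """Swap n randomly chosen pairs of token positions."""
    rng = rng or random.Random()
    out = list(tokens)
    for _ in range(n):
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out
```

Each call on the same sentence yields a slightly different variant, so a small labeled set can be expanded many-fold while the label is assumed to stay valid.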