Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
When learning from images, it is desirable to augment the dataset with plausible transformations of its images. Unfortunately, it is not always intuitive for the user how much shear or translation to apply, so finding the best augmentation policy requires training multiple models through hyperparameter search, which is computationally expensive. Furthermore, since such searches generate static policies, they cannot take advantage of smoothly introducing more aggressive augmentation transformations as training progresses. In this work, we propose repeating each epoch twice with a small difference in data augmentation intensity and stepping toward the intensity that performs better, thus walking toward the best policy. This process doubles the number of epochs but avoids having to train multiple models. The method is compared against random and Bayesian search on classification and segmentation tasks. The proposal achieved twice the improvement of random search and was on par with Bayesian search while using only 4% of the training epochs. © 2019, Springer Nature Switzerland AG.
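The epoch-doubling idea in the abstract can be sketched as a simple hill climb over a single augmentation-intensity parameter. The function names, the surrogate `evaluate` score, and the step rule below are illustrative assumptions, not the paper's actual training loop:

```python
def evaluate(intensity):
    """Hypothetical stand-in for training one epoch at a given
    augmentation intensity and returning a validation score; here it
    is a toy function that peaks at intensity 0.6."""
    return 1.0 - (intensity - 0.6) ** 2

def walk_augmentation_intensity(epochs=20, intensity=0.0, delta=0.05):
    """Sketch of the paper's idea: run each epoch twice with a small
    difference in augmentation intensity, then keep whichever
    intensity scored better (assumed greedy update rule)."""
    for _ in range(epochs):
        more = min(intensity + delta, 1.0)
        # Two passes over the same epoch, differing only in intensity.
        if evaluate(more) >= evaluate(intensity):
            intensity = more  # the more aggressive augmentation helped
        # otherwise keep the current intensity for the next epoch
    return intensity

final = walk_augmentation_intensity()
```

Under this toy objective the walk ramps the intensity up from 0 and settles near the peak at 0.6, mirroring the "smoothly introducing more aggressive transformations" behavior the abstract describes.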
Year of publication: 2019