Google Brain & CMU Semi-Supervised ‘Noisy Student’ Achieves 88.4% Top-1 Accuracy on ImageNet
Researchers from Google Brain and Carnegie Mellon University have released models trained with a semi-supervised learning method called “Noisy Student” that achieve 88.4 percent top-1 accuracy on ImageNet. Just three months ago, the same team, which includes research scientist and Google Brain founding member Quoc Le, introduced a simple self-training method for unlabelled data that achieved 87.4 percent top-1 accuracy on ImageNet.
ImageNet is a large-scale hierarchical image database created by researchers at Princeton University and Stanford University, comprising some 14 million labelled images across more than 20,000 categories. The annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC) started in 2010 and has become a benchmark for large-scale object detection and image classification.
The research team says their proposed method’s 88.4 percent accuracy on ImageNet is 2.0 percent better than the SOTA model that requires 3.5B weakly labelled Instagram images. And that’s not all: “On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2.”
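For context, ImageNet-P measures prediction stability: a model’s top-1 prediction is tracked across a sequence of gradually perturbed frames, and the flip rate is the fraction of consecutive frames on which the prediction changes, normalized by AlexNet’s flip probability on the same perturbation. Below is a minimal sketch of this metric, assuming per-frame top-1 predictions have already been computed; the helper names are illustrative, not from the paper.

```python
# A minimal sketch of ImageNet-P's flip rate (Hendrycks & Dietterich, 2019),
# assuming `sequences` holds per-frame top-1 predictions for one perturbation
# type and `alexnet_fp` is AlexNet's flip probability on that perturbation.
def flip_probability(sequences):
    flips, pairs = 0, 0
    for preds in sequences:                       # one perturbation sequence
        for prev, curr in zip(preds, preds[1:]):  # consecutive frames
            flips += int(prev != curr)
            pairs += 1
    return flips / pairs

def flip_rate(sequences, alexnet_fp):
    # Normalizing by AlexNet makes scores comparable across perturbation types.
    return flip_probability(sequences) / alexnet_fp

# The "mean flip rate" averages flip_rate over all perturbation types;
# Noisy Student reduces it from 27.8 to 12.2 (reported in percent).
print(flip_rate([[3, 3, 3, 7], [1, 1, 1, 1]], alexnet_fp=0.25))  # toy example
```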
The researchers describe the steps that led to their models’ impressive results:
- Train an EfficientNet as the teacher model on labelled images (EfficientNets were used as the baseline models due to their capacity to absorb large amounts of data).
- Use the teacher to generate pseudo labels on 300M unlabelled images.
- Train a larger EfficientNet as a student model on the combination of labelled and pseudo-labelled images.
This algorithm was iterated several times: each trained student was used as a new teacher to relabel the unlabelled data and train the next student. An important element of the approach was to noise the student during training with dropout, stochastic depth and data augmentation via RandAugment, which pushed it to learn harder from the pseudo labels. The teacher, meanwhile, was not noised when generating pseudo labels, so those labels stayed as accurate as possible.
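The loop below is a minimal PyTorch sketch of this procedure, assuming hypothetical `labeled_loader`, `unlabeled_loader` and `make_efficientnet` helpers; it illustrates the clean-teacher / noised-student split rather than the paper’s full training recipe.

```python
# A minimal sketch of Noisy Student self-training. The loaders and the
# `make_efficientnet` constructor are hypothetical stand-ins; the paper's
# actual recipe uses EfficientNet-B7/L2, RandAugment, dropout, stochastic
# depth and far larger batches and schedules.
import torch
import torch.nn.functional as F

def generate_pseudo_labels(teacher, unlabeled_loader, device="cpu"):
    """Step 2: the teacher is NOT noised here -- eval mode disables dropout
    and stochastic depth, keeping its pseudo labels as accurate as possible."""
    teacher.eval()
    batches = []
    with torch.no_grad():
        for images in unlabeled_loader:
            probs = F.softmax(teacher(images.to(device)), dim=1)
            batches.append((images, probs))  # soft pseudo labels
    return batches

def train_student(student, labeled_loader, pseudo_batches, device="cpu"):
    """Step 3: the student IS noised -- train mode enables dropout/stochastic
    depth; RandAugment would noise the inputs in the data pipeline."""
    student.train()
    opt = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)
    for images, labels in labeled_loader:            # real labels
        loss = F.cross_entropy(student(images.to(device)), labels.to(device))
        opt.zero_grad(); loss.backward(); opt.step()
    for images, soft_targets in pseudo_batches:      # soft pseudo labels
        log_probs = F.log_softmax(student(images.to(device)), dim=1)
        loss = -(soft_targets.to(device) * log_probs).sum(dim=1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return student

# Iterate: each trained student is promoted to teacher for the next round.
teacher = train_student(make_efficientnet("B7"), labeled_loader, [])
for _ in range(3):
    pseudo = generate_pseudo_labels(teacher, unlabeled_loader)
    teacher = train_student(make_efficientnet("L2"), labeled_loader, pseudo)
```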
On Twitter, Quoc Le summarized the team’s research and shared a YouTube video explaining the first version of the paper. Oriol Vinyals, who co-leads the AlphaStar project, replied: “Amazing! Crossing fingers to see 90% in 2020 : )”.
The paper Self-training with Noisy Student improves ImageNet classification is on arXiv.
Author: Yuqing Li | Editor: Michael Sarazen