LeCun vs Rahimi: Has Machine Learning Become Alchemy?

Synced
Dec 12, 2017


The medieval art of alchemy was once believed capable of creating gold and even conferring human immortality. However, its trial-and-error methods were gradually abandoned after pioneers such as Isaac Newton introduced the sciences of physics and chemistry in the 1700s. But now, some machine learning researchers are wondering aloud whether today’s artificial intelligence research has become a new sort of alchemy.

The debate started with Google’s Ali Rahimi, winner of the Test-of-Time Award at the recent Conference on Neural Information Processing Systems (NIPS). Rahimi put it bluntly in his NIPS presentation: “Machine learning has become alchemy.”

Ali Rahimi speaks at NIPS

According to Rahimi, machine learning research and alchemy both work to a certain degree. Alchemists discovered metallurgy, glass-making, and various medications, while machine learning researchers have built machines that can beat human Go players, identify objects in pictures, and recognize human voices.

However, alchemists also believed they could cure diseases or transmute base metals into gold, which was impossible. The Scientific Revolution had to dismantle 2,000 years’ worth of alchemical theories.

Rahimi believes contemporary machine learning models’ successes, which are mostly based on empirical methods, are plagued with the same issues as alchemy. The inner mechanisms of machine learning models are so complex and opaque that researchers often don’t understand why a model produces a particular output from a given set of inputs, the so-called “black box” problem. Rahimi believes this lack of theoretical understanding and technical interpretability of machine learning models is cause for concern, especially if AI takes responsibility for critical decision-making.
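To make the “black box” point concrete, here is a minimal sketch, assuming Python with scikit-learn (the toy dataset and model are our own illustrative choices, not anything from Rahimi’s talk): a small neural network that fits its training data and predicts confidently, yet offers no human-readable account of why.

```python
# A toy illustration of the "black box" problem, assuming scikit-learn is
# installed. The data, model, and settings are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data: 200 samples, 20 numeric features, 2 classes.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# A small multilayer perceptron, trained purely empirically.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# The model returns a confident class probability for an input...
print(model.predict_proba(X[:1]))

# ...but its "reasoning" is thousands of learned weights with no
# human-readable justification for any individual prediction.
print(sum(w.size for w in model.coefs_))  # total learned weight count
```

Inspecting the fitted weights yields numbers, not reasons; that gap between empirical success and explanation is exactly what Rahimi flags.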

“We are building systems that govern healthcare and mediate our civic dialogue. We would influence elections. I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on alchemy,” said Rahimi.

That triggered Yann LeCun, Facebook’s Director of AI Research, who responded to Rahimi’s talk the next day, calling the alchemy analogy “insulting” and “wrong”. “Criticizing an entire community (and an incredibly successful one at that) for practicing ‘alchemy’, simply because our current theoretical tools haven’t caught up with our practice is dangerous.”

LeCun said skeptical attitudes like Rahimi’s were the main reason the machine learning community ignored the effectiveness of artificial neural networks in the 1980s. As a principal contributor to the development of convolutional neural networks, LeCun is concerned that history could repeat itself.

“The engineering artifacts have almost always preceded the theoretical understanding,” said LeCun. “Understanding (theoretical or otherwise) is a good thing. It’s the very purpose of many of us in the NIPS community. But another important goal is inventing new methods, new techniques, and yes, new tricks.”

Six hours after LeCun posted his comment on Facebook, Rahimi replied with a softened tone, saying he appreciated LeCun’s thoughtful reaction: “The ‘rigor’ I’m asking for are the pedagogical nuggets: simple experiments, simple theorems.”

LeCun agreed with Rahimi’s views on pedagogy, saying “Simple and general theorems are good… but it could very well be that we won’t have ‘simple’ theorems that are more specific to neural networks, for the same reasons we don’t have analytical solutions of Navier-Stokes or the 3-body problem.”

The Rahimi-LeCun debate grew into a wide-ranging discussion at NIPS and across the internet. Dr. Yiran Chen, Director of Duke University’s Center for Evolutionary Intelligence, attempted to make peace, suggesting LeCun had overreacted and that the opposing positions were actually not so contradictory.

“What Rahimi means is that we now lack a clear theoretical explanation of deep learning models and this part needs to be strengthened. LeCun means that lacking a clear explanation does not affect the ability of deep learning to solve problems. Theoretical development can lag behind,” said Dr. Chen.

Other researchers, such as Facebook Research Scientist and Manager Yuandong Tian, characterized the exchange as a familiar tension between first principles and empiricism: “Such debates are always happening in academia.”

Sparked by the Rahimi-LeCun exchange, discussion regarding our understanding, or lack of understanding, of machine learning models will likely continue for some time. Do we want more effective machine learning models that lack clear theoretical explanations, or simpler, more transparent models that are less effective at specific tasks? As AI gradually penetrates critical fields such as law and healthcare, the debate is bound to reignite.

Journalist: Tony Peng | Editor: Michael Sarazen


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global
