
Soul of an Engineer #9

This Issue: AI Winter, Machine Shortcuts, Centaurs, Moravec's Paradox.

Amarinder Sidhu
3 min read

Today's update has an AI flavor. For some inexplicable reason, I found myself on the AI bunny trail this week.


AI Winter

Melanie Mitchell is one of my favorite AI researchers. She just published Why AI is Harder Than We Think, in which she describes how the field of AI has seasons - "AI Springs" and "AI Winters":

"Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”).

Needless to say, we are in an "AI Spring" right now. But despite "apparent breakthroughs", we aren't any closer to the goal of machine general intelligence. Melanie writes:

"Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected."  

A brief summary of her arguments is as follows:

Narrow intelligence in fields like Go and chess isn't equivalent to progress toward general intelligence.

What is surprising is that things we find easy, like riding a bike or having a free-flowing conversation, are very hard for machines. Computers may have mastered Go (via AlphaGo), but they are far from learning highly contextual games like charades.

Perhaps the biggest point: we can't understand the human brain separately from the human body. We have an integrated brain-body cognition, coupled with emotions and social lives, which is extremely hard to replicate in a disembodied machine.

But current over-optimism can lead to disappointment (and loss of confidence and funding). For example, despite Elon Musk's public pronouncements, Tesla acknowledged for the first time in a recent SEC filing that it may never achieve full self-driving at all.

To avoid the next "AI Winter", Melanie calls for better ways to assess where we stand on AI and a better understanding of human intelligence.

"It’s clear that to make and assess progress in AI more effectively, we will need to develop a better vocabulary for talking about what machines can do. And more generally, we will need a better scientific understanding of intelligence as it manifests in different systems in nature."

Surprise! Machines take shortcuts. Not really...

Deep Neural Networks (DNNs) are the workhorses of state-of-the-art machine learning. A group of researchers studying DNN failures note that in these failure cases the algorithms take (problematic!) shortcuts rather than actually learning. Given my area of work, I found the example below quite interesting.

"A machine classifier successfully detected pneumonia from X-ray scans of a number of hospitals, but its performance was surprisingly low for scans from novel hospitals: The model had unexpectedly learned to identify particular hospital systems with near-perfect accuracy (e.g. by detecting a hospital-specific metal token on the scan). Together with the hospital’s pneumonia prevalence rate it was able to achieve a reasonably good prediction—without learning much about pneumonia at all."

The algorithm was simply correlating the hospital-specific token on the scan with that hospital's pneumonia prevalence rate to make its guess. It wasn't actually "reading" the image.
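To make the shortcut concrete, here is a minimal toy sketch of the same failure mode (my own construction with made-up features, not the paper's setup): a hypothetical "hospital ID" feature is spuriously correlated with the pneumonia label in training, so a classifier scores near-perfectly without relying on the real, weaker signal. When that correlation breaks at a novel hospital, accuracy collapses.

```python
# Toy illustration of shortcut learning (assumed setup, not from the paper):
# the "hospital" feature perfectly tracks the label in training, so the model
# leans on it instead of the genuine (but weak) diagnostic signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_correlated):
    y = rng.integers(0, 2, size=n)               # pneumonia label (0/1)
    signal = y + rng.normal(0, 2.0, size=n)      # weak, genuinely predictive feature
    if shortcut_correlated:
        hospital = y.copy()                      # shortcut: hospital ID mirrors the label
    else:
        hospital = rng.integers(0, 2, size=n)    # novel hospitals: correlation broken
    return np.column_stack([signal, hospital]), y

X_train, y_train = make_data(5000, shortcut_correlated=True)
X_novel, y_novel = make_data(5000, shortcut_correlated=False)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on training distribution:", clf.score(X_train, y_train))  # ~1.0
print("accuracy on novel hospitals:      ", clf.score(X_novel, y_novel))  # near chance
```

The model looks excellent on the data it was trained on and falls apart the moment the shortcut stops working, which is exactly the pattern the researchers describe.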

Now, humans take shortcuts in learning too - it's the very reason the field of behavioral psychology exists.

But this machine example is just cheating :) :)

Source: Shortcut Learning in Deep Neural Networks  


Centaurs

Pedro Domingos is another person to follow, and learn from, if you are interested in understanding AI themes at a deeper level. According to him, it is human-computer teams ("centaurs") that will dominate the future, not computers alone.


Afterthought: Moravec's Paradox

Moravec's Paradox was articulated by Hans Moravec, Rodney Brooks, and Marvin Minsky in the 1980s.

"It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility"

It is considered the single biggest insight from decades of cumulative AI research.