IBM & MIT: Looking Ahead to The Next Generation Of AI


IBM provided an update this week on its joint $240M project and lab with MIT focused on building a better AI, and apparently, they share my view that AI is a stupid name.  What does “Artificial Intelligence” even mean? Either something is smart, or it is not, and if you were to call someone artificially intelligent, you’d imply they look smart but are not.  It would certainly be an insult.

That aside, current-generation AIs aren’t very smart.  They are called Narrow AIs and can do simple things like identifying shapes and faces, but they are unable to make more complex determinations, like telling you whether there are more trees or houses in a picture of trees and houses.  (Granted, you can likely figure this out yourself, but the point is that current-generation AIs are dumber than a typical 5-year-old.)

Now fixing this is where it gets interesting.  

The Birth of Symbolic AI

State-of-the-art AI right now is Deep Learning, the new name for what we once called Neural Networks.  It requires massive amounts of data, but unlike Machine Learning, the most common form of AI today, it doesn’t require as much human oversight.  The machine trains itself, but through a massively data-intensive process.  Its limitation is that it sees objects as complete things, making it easy to fool and significantly lowering its accuracy.  It is also tightly tied to its training data, so if someone wanted to compromise the AI, they’d only have to mis-categorize some items, and the AI would become confused and increasingly inaccurate because, if say an apple were categorized as a dog, it would begin to think all apples were dogs.  As I say, not very smart.
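The poisoning failure mode described above can be sketched with a toy nearest-neighbor classifier.  This is a minimal illustration, not anything from the IBM/MIT work, and the feature vectors and labels are invented:

```python
# A toy sketch of why label poisoning hurts a purely data-driven model.
# A 1-nearest-neighbor "classifier" over made-up (roundness, furriness)
# feature vectors: poison one training label and apples become dogs.

training_data = [
    ((1.0, 0.2), "apple"),   # invented feature values
    ((0.9, 0.1), "apple"),
    ((0.3, 0.9), "dog"),
]

def classify(point, data):
    """Return the label of the closest training example."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(data, key=lambda item: dist(point, item[0]))[1]

new_apple = (0.98, 0.18)
print(classify(new_apple, training_data))   # "apple"

# Poison a single label: one apple tagged as "dog".
poisoned = [((1.0, 0.2), "dog")] + training_data[1:]
print(classify(new_apple, poisoned))        # now "dog"
```

Because the model has no notion of what makes an apple an apple, one bad label near the query is enough to flip the answer.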

What Symbolic AI does is look at things as components.  If it walks, quacks, has feathers, and has webbed feet, it is a duck, and if someone were to mis-tag an apple as a duck, the Symbolic AI is far less likely to be fooled because the apple doesn’t walk, quack, have feathers, or have webbed feet.  So, not only is it more resistant to tampering, it can better infer that, say, a duck that was run over by a car is still a duck and not some creative form of pancake.
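The duck-components idea can be sketched in a few lines.  The feature names and the matching threshold below are my own illustrative choices, not details of the IBM/MIT approach:

```python
# A toy sketch of component-based (symbolic-style) classification:
# instead of matching a whole object, check for its parts.

DUCK_FEATURES = {"walks", "quacks", "feathers", "webbed_feet"}

def looks_like_a_duck(observed_features, threshold=0.75):
    """Classify by the fraction of duck components observed."""
    match = len(DUCK_FEATURES & observed_features) / len(DUCK_FEATURES)
    return match >= threshold

# A mislabeled apple shares no components with a duck, so it can't fool us.
apple = {"round", "red", "stem"}
run_over_duck = {"quacks", "feathers", "webbed_feet"}  # no longer walks

print(looks_like_a_duck(apple))          # False
print(looks_like_a_duck(run_over_duck))  # True: 3 of 4 components match
```

Because classification rests on parts rather than a single learned blob, one bad tag can’t redefine what a duck is, and a partial match (the run-over duck) still resolves correctly.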

It is also interesting to note that to reach nearly 100% accuracy, a Symbolic AI needs about 1% of the data a Deep Learning AI needs for 92% accuracy, so it is potentially far cheaper and faster to train.  IBM and MIT are working on blending these concepts to create a far more capable AI solution.

Causality And Predictability  

One of the big shortcomings of current AI systems is that they can neither predict outcomes nor determine causality.  This is an important thing to fix because we would like to know both the cause of a problem, so we can fix it, and the likely outcome of a sequence of events.  Now, the industry does use simulation heavily to train some classes of AI (like for autonomous driving), which somewhat mitigates the forecasting problem.  But an AI that could tell you that what you are doing will end badly, or point you to the one thing you need to address to rapidly fix a major problem, would be invaluable in the market.  One of the obstacles is that there are a lot of spurious correlations, and these tend to mess up AI systems trying to make predictions.  Spurious correlations are correlations that look like causality, like margarine use and divorce rates, but have little if any actual connection, let alone a cause-and-effect relationship.  (If you check out the above link, note that there appears to be a correlation between Nicolas Cage movies and the number of people who drown in pools, which could lead to the false prediction that swimming after watching one of his movies would be unwise.  Oh, and you might also be motivated to avoid eating cheese if you don’t want to be killed by your sheets.)
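To see how easily a purely statistical system is fooled, here is a quick sketch of a spurious correlation.  The two series below are invented numbers in the spirit of the margarine-and-divorce chart, not real data:

```python
# A toy illustration of a spurious correlation: two unrelated series
# that both happen to trend the same way correlate almost perfectly.
# The numbers are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

margarine_use = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]
divorce_rate  = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]

r = pearson(margarine_use, divorce_rate)
print(f"r = {r:.2f}")  # near-perfect correlation, zero causation
```

A prediction engine that leans on correlations alone would happily conclude that banning margarine saves marriages, which is exactly the kind of mistake a causality-aware AI needs to avoid.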

However, if you can break an image into components, you can far more easily predict the outcome of certain changes and more effectively model and report on the likely outcomes.  And the equivalent of a technology-driven crystal ball would be amazing.   

This concept is also part of the joint IBM/MIT effort, and it will be a major game-changer once it matures.

Wrapping Up: 

IBM and MIT are making significant advancements in AI and laying the framework to move the industry from Narrow AI to Broad AI, much as IBM created the enterprise AI market with Watson.  Their work should lead to far more powerful future AIs, and it is good they are focused on ethics, human enhancement rather than replacement, and security, because if these things ever went rogue, we’d be in a ton of trouble.

The other good news from the presentation is that while Broad AI capability is coming fast, General AI (the ability to replace humans broadly) is still outside the planning horizon (the current estimate is 50 years), so we have time to get ready for our coming AI overlords.  Hopefully, that too will help you sleep at night.
