How IBM Sees The Future Of Artificial Intelligence

Greg Satell
5 min read · Mar 10, 2019

Ever since IBM’s Watson system defeated the best human champions at the game show Jeopardy!, artificial intelligence (AI) has been the buzzword of choice. More than just hype, intelligent systems are revolutionizing fields from medicine to manufacturing and changing fundamental assumptions about how science is done.

Yet for all the progress, it appears that we are closer to the beginning of the AI revolution than the end. Intelligent systems are still limited in many ways. They depend on massive amounts of data to learn accurately, have trouble understanding context, and their susceptibility to bias makes them ripe targets for sabotage.

IBM, which has been working on AI since the 1950s, is not only keenly aware of these shortcomings but is working hard to improve the basic technology. As Dario Gil, Chief Operating Officer of IBM Research, recently wrote in a blog post, the company published over 100 papers in just the past year. Here are the highlights of what is being developed now.

Working To Improve Learning

What makes AI different from earlier technologies is its ability to learn. Before AI, a group of engineers would embed logic into a system based on previously held assumptions. When conditions changed, the system would need to be reprogrammed to be effective. AI systems, however, are designed to adapt as events in the real world evolve.

This means that AI systems aren’t born intelligent. We must train them to do certain tasks, much like we would a new employee. Systems often need to be fed thousands or even millions of examples before they can perform at anything near an acceptable level. So far, that’s been an important limiting factor for how effective AI systems can be.

“A big challenge now is being able to learn more from less,” Dr. John Smith, Manager of AI Tech at IBM Research, told me. “For example, in manufacturing there is often a great need for systems to do visual inspections of defects, some of which may have only one or two instances, but you still want the system to be able to learn from them and spot future instances.”

“We recently published our research on a new technique called few-shot or one-shot learning, which learns to generalize information from outliers,” he continued. “It’s still a new technique, but in our testing so far, the results have been quite encouraging.” Improving a system’s ability to learn is…
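The article doesn’t spell out IBM’s method, but to make the idea concrete, here is a minimal, illustrative sketch of one common few-shot approach: represent each defect class by the average (“prototype”) of the embeddings of its handful of labeled examples, then assign new images to the nearest prototype. The embed() function, class names and data below are placeholder assumptions for illustration, not IBM’s implementation.

```python
import numpy as np

def embed(images: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: flatten each image.
    In practice this would be the embedding from a pretrained neural network."""
    return images.reshape(len(images), -1).astype(float)

def build_prototypes(support_images, support_labels):
    """Average the embeddings of the few labeled examples for each class."""
    features = embed(support_images)
    classes = np.unique(support_labels)
    prototypes = np.stack(
        [features[support_labels == c].mean(axis=0) for c in classes]
    )
    return classes, prototypes

def classify(query_images, classes, prototypes):
    """Assign each query image to the class with the nearest prototype."""
    features = embed(query_images)
    # Euclidean distance from every query embedding to every class prototype
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: two defect classes ("scratch", "dent") with only
    # two 8x8 example images each (the "few shots").
    support_images = rng.normal(size=(4, 8, 8)) + np.array([0.0, 0.0, 3.0, 3.0])[:, None, None]
    support_labels = np.array(["scratch", "scratch", "dent", "dent"])
    classes, prototypes = build_prototypes(support_images, support_labels)

    # Three unseen images; the first and third are offset like "scratch" examples.
    query_images = rng.normal(size=(3, 8, 8)) + np.array([0.0, 3.0, 0.0])[:, None, None]
    print(classify(query_images, classes, prototypes))  # e.g. ['scratch' 'dent' 'scratch']
```

The point of the technique is that the classifier needs only a couple of labeled examples per class rather than thousands, which is exactly the property the manufacturing inspection scenario above calls for.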
