We Expect Humans To Be Accountable. We Should Expect No Less Of AI

Greg Satell
5 min read · Jun 17
Photo by Pavel Danilyuk

About ten years ago, IBM invited me to talk with some key members of the Watson team, when the triumph of creating a machine that could beat the best human players at the game show Jeopardy! was still fresh. I wrote in Forbes at the time that we were entering a new era of cognitive collaboration between humans, computers and other humans.

One thing that struck me was how similar the moment seemed to how aviation legend Chuck Yeager described the advent of fly-by-wire, four decades earlier, in which pilots would no longer operate the aircraft directly, but would instead interface with a computer that flew the plane. Many of the macho “flyboys” weren’t able to trust the machines and couldn’t adapt.

Now, with the launch of ChatGPT, Bill Gates has announced that the age of AI has begun and, much like those old flyboys, we’re all going to struggle to adapt. Our success will rely not only on our ability to learn new skills and work in new ways, but also on the extent to which we are able to trust our machine collaborators. To reach its potential, AI will need to become accountable.

Recognizing Data Bias

With humans, we work diligently to construct safe and constructive learning environments. We design curriculums, carefully selecting materials, instructors and students to try to get the right mix of information and social dynamics. We go to all this trouble because we understand that the environment we create greatly influences the learning experience.

Machines also have a learning environment, called a “corpus”: the body of data they are trained on. If, for example, you want to teach an algorithm to recognize cats, you expose it to thousands of labeled pictures of cats. In time, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.
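To make that concrete, here is a minimal sketch of what “learning from a corpus” looks like in code. It assumes Python with scikit-learn, and the synthetic feature vectors and cat/dog labels are hypothetical stand-ins for a real corpus of labeled photos, not the method behind Watson or ChatGPT.

```python
# A minimal sketch of supervised learning from a labeled "corpus",
# assuming scikit-learn is installed. Synthetic feature vectors stand in
# for cat/dog photos; a real system would learn from thousands of images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical corpus: 2,000 examples, 64 features each (think image embeddings).
# Label 0 = "cat", label 1 = "dog"; the two classes differ slightly on average.
X_cats = rng.normal(loc=0.0, scale=1.0, size=(1000, 64))
X_dogs = rng.normal(loc=0.5, scale=1.0, size=(1000, 64))
X = np.vstack([X_cats, X_dogs])
y = np.array([0] * 1000 + [1] * 1000)

# Hold out part of the corpus to check what the model actually learned.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "exposure" to the corpus is just fitting on it
print("held-out accuracy:", model.score(X_test, y_test))
```

The point of the sketch is that the model only ever reflects the corpus it is shown: change what goes into `X` and `y`, and you change what it learns.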

However, the process can go horribly awry. A famous case is Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform in 2016. In under a day, Tay went from being friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.
