4 Things You Need To Know About Big Data And Artificial Intelligence

Greg Satell
5 min read · Oct 28, 2018

In the seven years since IBM’s Watson beat two human champions in the game show Jeopardy, cognitive technologies have gone from a science fiction pipe-dream to a platform for essential business initiatives. Clearly, if you don’t have a plan for cognitive transformation, your chances for survival will be somewhat dim.

Yet progress to this point has been uneven. While there have clearly been some successes, we’ve all had tortured moments such as trying to reach a human on a customer service call. In some cases, things have gone seriously awry, such as when Amazon’s Echo ordered unwanted merchandise.

Progress is never smooth. The early industrial revolution certainly had more than its share of problems, as did the dotcom era (remember Webvan?). The key is to go in with your eyes open, understanding that every transformation has its growing pains. With that in mind, here are four things you should know about big data and artificial intelligence.

1. Most Cognitive And Data Projects Fail

While the potential for cognitive technologies is undeniable, most initiatives fail. All too often, managers become enamored with all the hype and don’t pursue a businesslike approach. So you want to be aspirational, but stay focused on results rather than “gee whiz” use cases or “shiny objects.”

The best way to do that is to start with concrete business outcomes, such as “increasing operating efficiency by 30%.” From there you can move on to specific processes that support your objectives and only then can you begin to discuss technological approaches. That’s how you stay grounded.

One thing most people fail to recognize is that cognitive technologies aren’t a discrete set of protocols the way the Internet was, but a variety of algorithmic approaches, each with its own advantages and disadvantages. So it is essential to go in with a clear idea of what you want to achieve or you’re likely to end up spinning your wheels.

Another best practice is to start with a small initiative and build up, rather than trying to do a wholesale transformation from day one. More often than not, those types of massive projects devolve into a five-year death march to oblivion.

2. Ethics Is Emerging As A Top Issue

Artificial intelligence has had a long history that dates back as far as 1956, when a group of luminaries met at Dartmouth College to discuss the possibility of creating machines that could “learn, make use of language and form concepts.” The brainstorming lasted about two months and they thought it would take about 20 years to develop the technology.

Yet it was not to be. After an initial period of excitement, the problem turned out to be a lot harder than anyone had supposed. By the 1970s, funding dried up and we entered a long AI winter that lasted until early this century. When interest picked up again a little over a decade ago, researchers competed vigorously to bring the technology to market.

It wasn’t until a few years ago that people began to realize that we didn’t really understand the consequences of this new technology. How, for instance, do we hold algorithms accountable for their decisions? Now that we have machines that can learn, how are they being taught? What moral values and cultural norms should we embed in our technology?

Today, AI ethics is becoming an important field and industry giants have set up the Partnership on AI as a forum to work through the issues. Data bias, in particular, has emerged as a massive problem. Increasingly, those who do not follow established best practices will find that they may be legally liable for damages that arise.

3. User Experience Will Be Key To Success

When a new technology first emerges, the focus is always on capabilities, such as how fast a web page can load or how accurate a voice recognition algorithm is. Yet over time the emphasis will always shift to user interaction, because that’s what truly creates value. No matter how powerful a technology is, its utility is always limited to its use in the real world.

That’s the point that artificial intelligence is at now. The basic technology has already become incredibly powerful, but we often struggle to use it effectively. We find ourselves repeating variations of the same phrase into our devices to get them to do what we want. Usually, we can work around it, but it can be incredibly frustrating.

The truth is that artificial intelligence needs conversational intelligence. We need algorithms to not only understand words and phrases, but context, such as what came before in a conversation, earlier junctures on a logical chain or what is displayed on a screen as we speak. That will be key to unlocking value in the exabytes of data we have stored in our systems.
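As a loose illustration of the kind of context-carrying this implies, here is a minimal Python sketch. The ConversationContext class, the resolve function, and the sample utterances are hypothetical, meant only to show how remembering earlier turns lets a system make sense of an ambiguous follow-up; they are not any particular assistant’s API.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    """Tracks what came before in a conversation so follow-ups can be resolved."""
    turns: list = field(default_factory=list)  # prior (user, assistant) exchanges

    def remember(self, user_utterance: str, assistant_reply: str) -> None:
        self.turns.append((user_utterance, assistant_reply))

def resolve(utterance: str, ctx: ConversationContext) -> str:
    """Interpret an ambiguous follow-up by consulting earlier turns."""
    if utterance.lower().startswith("what about") and ctx.turns:
        last_question, _ = ctx.turns[-1]
        # Reuse the intent of the previous question, swapping in the new qualifier.
        return f"{last_question} ({utterance})"
    return utterance

# "What about last month?" only makes sense in light of the prior turn.
ctx = ConversationContext()
ctx.remember("Show me sales for this quarter", "Here are this quarter's sales.")
print(resolve("What about last month?", ctx))
# -> "Show me sales for this quarter (What about last month?)"
```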

We solved this problem in web technologies and devices by developing clear user experience principles, and a similar effort is underway for artificial intelligence. We’re still in the early stages of that effort, so progress has been limited, but over the next few years improving interfaces will increasingly be a focus.

4. The Winners Won’t Be Those That Reduce Costs, But Those That Extend Capabilities

When a new technology appears, we always seem to assume that it will replace human workers and reduce costs, but that rarely ever happens. For example, when automatic teller machines first appeared in the early 1970s, most people thought they would lead to fewer branches and tellers, but just the opposite happened.

What really happens is that as a task is automated, it becomes commoditized and value shifts somewhere else. That’s why today, as artificial intelligence is ramping up, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive.

That’s why the objective of any viable cognitive strategy is not to cut costs, but to extend capabilities. For example, when simple customer service tasks are automated, that can free up time for human agents to help with thornier issues. In much the same way, when algorithms can do much of the analytical grunt work, human executives can focus on long-term strategy, which computers tend not to do so well.

The winners in the cognitive era will not be those who can reduce costs the fastest, but those who can unlock the most value over the long haul. That will take more than simply implementing projects. It will require serious thinking about what your organization’s mission is and how best to achieve it.

– Greg

An earlier version of this article first appeared on Inc.com

Originally published at www.digitaltonto.com.


Greg Satell

Co-Founder: ChangeOS | Bestselling Author, Keynote Speaker, Wharton Lecturer, HBR Contributor | Learn more at www.GregSatell.com