Take a moment to think about what the world looked like exactly a century ago. By 1920, the disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. For most people, life went on much as it always had, with little to suggest that a transformation was imminent.
Over the next decade, however, that would change. As ecosystems formed around the new technologies, productivity soared and living standards dramatically improved. However, the news wasn’t all good. While technology did much to improve people’s lives, it also facilitated war and genocide on an unprecedented scale.
Today, we are likely at a similar point. Nascent technologies have the potential to create a new era of productivity, but also horrific destruction. Too often, we forget that technology should serve humans and not the other way around. Make no mistake, this is not a problem we can innovate our way out of. Technology will not save us. We need to make better choices.
The End Of Moore’s Law And A Shift To A New Era Of Innovation
Over the past several decades, innovation has become almost synonymous with digital technology. As we learned to cram more and more transistors onto a silicon wafer, value shifted to things like design and user experience. The speed of business increased and agility became a primary competitive attribute. Strategy and planning gave way to experimentation and iteration.
The success of venture-backed entrepreneurs led to arrogance and eventually the myth that Silicon Valley had somehow hit on a model that could be applied to any problem in any industry or context. With valuations of tech companies exploding, a new sense of technological libertarianism began to emerge in which many began to value algorithms over human judgment.
Yet today, that narrative is beginning to unravel for two reasons. First, our ability to cram more transistors onto a silicon wafer, commonly known as Moore’s Law, is ending. Second, we’re beginning to realize that technology has a dark side. For example, artificial intelligence is vulnerable to bias and social media can have negative psychological effects.
At the same time, we’re beginning to enter a new era of innovation, which will be powered by new computing architectures, such as quantum and neuromorphic computing as well as revolutions in synthetic biology, materials science and machine learning. These will require a much more collaborative, multidisciplinary approach. No one will be able to go it alone.
A New Ethical Universe
On July 16th, 1945, the world’s first nuclear explosion shook the plains of New Mexico. J. Robert Oppenheimer, who led the scientific team that developed the atomic bomb, chose the occasion to quote from the Bhagavad Gita. “Now I am become Death, the destroyer of worlds,” he said. It was clear that we had crossed a moral Rubicon.
Many of the scientists of Oppenheimer’s day became activists, preparing a manifesto that highlighted the dangers of nuclear weapons, which helped lead to the Partial Test Ban Treaty. The digital era, on the other hand, has seen little of the same reverence for the power and dangers of technology. In fact, for the most part, Silicon Valley’s engineering culture has eschewed moral judgments about its inventions.
Today, however, as our technology becomes almost unimaginably powerful, we increasingly need to confront significant ethical dilemmas. For example, artificial intelligence raises a number of questions, ranging from dilemmas about who is accountable for the decisions a machine makes to how we should decide what and how a machine learns.
Or consider CRISPR, the gene editing technology that is revolutionizing the life sciences and has the potential to cure terrible diseases such as cancer and multiple sclerosis. We have already seen the problems hackers can create with computer viruses; how would we deal with hackers creating new biological viruses?
There have been some encouraging developments. Most major tech companies have joined with the ACLU, UNICEF, and other stakeholders to form the Partnership on AI, a forum for developing sensible standards for artificial intelligence. Salesforce has hired a Chief Ethical and Humane Use Officer. CRISPR pioneer Jennifer Doudna has begun a similar process at the Innovative Genomics Institute. But these are little more than first steps.
The Challenge Of Populist Authoritarianism
It seems fitting that the fall of the Berlin Wall happened during the same month, November 1989, that Tim Berners-Lee created the World Wide Web. What followed was a time of great optimism in which both information and people enjoyed unprecedented freedom. The twin powers of technology and globalization seemed unstoppable.
Across the world, free-market technocrats pushed a brand of market fundamentalism known as the Washington Consensus. To receive loans, developing nations were made to accept harsh economic measures that would never have been accepted in western industrialized nations. Within developed countries, the interests of labor lost ground to those of corporations.
These policies led to genuine achievements. Hundreds of millions were lifted out of poverty. Free trade and free travel increased. Technology enabled even a relatively poor kid in a poor country, armed with an Internet connection, to be able to access the same information as a wealthy scion studying at an Ivy League university.
However, in many ways, technology and globalization have failed us. Income inequality is at its highest level in over 50 years. Across most industries, power is increasingly concentrated in just a handful of firms. In America, social mobility and life expectancy in the white working class are declining, while anxiety and depression are rising to epidemic levels. Clearly, too many people have been left behind.
Perhaps not surprisingly, we’ve seen a global rise in populist authoritarian movements that have shifted governance dramatically against the type of open policies that fueled globalization and technological advancement in the first place. The pendulum has swung too far. We need to refocus our energy from technology and markets back to the humans they are supposed to serve.
We Are The Choices We Make
While the problems we have today can seem unprecedented and overwhelming, we’ve been here before. After World War II, the world teetered between liberal democracy and authoritarianism. New technologies, such as nuclear power, antibiotics, and computers represented unprecedented possibilities and challenges.
Yet in the wake of destruction an entirely new international system was created. The United Nations provided a forum to resolve problems peacefully. Bretton Woods stabilized the global financial system. The creation of the welfare state helped mitigate the harsher effects of the market economy and stronger protections for labor helped build a vibrant middle class. Arms agreements reduced the risk of armageddon.
Today, we are at a similar crossroads. We are present at the creation of a new technological era in the midst of a pivotal political moment. The choices we make over the next decade will have repercussions that will reverberate throughout the new century. Will we serve our technologies or will they serve us? Will we create a new global middle class or pledge fealty to a global elite?
One thing is clear: These choices are ours to make. Technology will not save us. Markets will not save us. We can, as we did in the 1920s and 30s, choose to ignore the challenges before us or, as we did in the 1940s and 50s, choose to build institutions that can help us overcome them and build a new era of peace and prosperity. The ball is in our court.
Greg Satell is an international keynote speaker, adviser and bestselling author of Cascades: How to Create a Movement that Drives Transformational Change. His previous effort, Mapping Innovation, was selected as one of the best business books of 2017. You can learn more about Greg on his website, GregSatell.com, and follow him on Twitter @DigitalTonto.