If you cast your mind back over the past two and a half decades, a bizarre fact emerges: everyone from business investors to teachers has been planning for a future ruled by communications technology. If the 20th century was the age of atomics, then the 21st is the age of the internet.
Combining the power of radio, video and telephones, the internet is like a super-communication machine that completely upended our notion of what tomorrow would bring. Now, it seems that all our futures depend on how much we can say to each other, in zillions of different formats.
You might argue that artificial intelligence is the next new thing. But what do AI firms suggest we will do with their neoteric products? Write emails, make graphics for slide-show presentations and generate podcasts and movies. All of these uses – even shady deepfakes – are about communication. At this point, only spaceships can compete when it comes to signifying an advanced new world.
In my previous two columns about futurism, I talked about 19th- and 20th-century ideas of the future. Now, we are coming up to the present day, and it’s time to talk about… well, talking. What happens to the present when we assume the future will be shaped by conversation machines?
AT&T’s “Picturephone”, which added video to telephone calls, in 1964 AT&T Photo Service/United States Information Agency/PhotoQuest/Getty Images
To get to the answer, we need to travel briefly back to the year 1965, when Intel co-founder Gordon Moore formulated his now-famous “Moore’s law”, which held that the number of components on a microchip would double every year. He later revised this to a doubling every two years, as technology changed. In 2025, the law is mostly considered dead. Even so, the idea of exponential growth behind Moore’s law was infectious, influencing predictions about the rate of innovation in fields as diverse as biology and space exploration.
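The arithmetic behind Moore’s observation is simple compound doubling: a quantity that doubles every period grows as the starting value times 2 raised to the number of periods. Here is a minimal sketch of that calculation; the 1965 baseline of roughly 64 components is an illustrative figure, not a precise historical count.

```python
def components_after(base: int, doublings: int) -> int:
    """Return the component count after the given number of doublings."""
    return base * 2 ** doublings

# Starting from an illustrative 64 components in 1965, annual doubling
# predicts 64 * 2**10 = 65,536 components by 1975.
print(components_after(64, 10))  # -> 65536
```

Ten doublings multiply the starting figure by more than a thousand, which is why the narrative built on this curve felt so irresistible: a decade of steady doubling looks like a revolution.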
Most futurism contains two ingredients: a plausible, evidence-based observation and a mythical narrative. Moore’s law has both. Moore’s original observation was factual: in 1965, the number of components on a microchip really was growing at an exponential rate. But his accurate prediction morphed into a kind of industrial fairy tale. The rapid growth of the computer industry – and, by extension, the internet – became an aspirational story about civilisation itself. Thanks to computers, humans would become more productive, our cultures would transform and amazing new inventions would arrive faster than ever.
Gordon Moore’s accurate prediction about microchip growth morphed into an industrial fairy tale
It was a very seductive way to imagine the future – as long as Moore’s law held true. By the 21st century, forecasters were predicting a world where everyone lived and worked via the internet, which would bring together international teams of geniuses to solve all our problems (aided by AI, of course).
Meanwhile, Twitter-fuelled protests during the Arab Spring in 2011 and Donald Trump’s Facebook-enhanced 2016 election campaign made it seem like social media could accelerate political change, too. Chatting with each other was going to change everything! Investors responded by dumping billions into internet companies, especially ones with social or AI components.
You can see the results in nearly every product. I own a coffee table that has an associated social network. Apple offers iPhone owners the exciting prospect of getting AI summaries of their text messages. Communications technology is being smeared everywhere, even when that causes things to break.
The myth of Moore’s law suggested that one nifty form of technology would set the pace for everything else in our civilisations. When you anticipate the future using that narrative, it leads to overinvestments in very niche tech. This isn’t to say there is no place for social media and AI in our future – of course there is. But we also need to invest in better sewage systems, malaria treatments, aeroplane safety and science education itself. AI won’t solve the looming climate crisis. We need to cultivate a diverse ecosystem of technologies and political institutions to do that.
Back in the 1960s, when Star Trek wowed audiences with the crew’s communicators and sci-fi legend Ursula K. Le Guin dreamed up the ansible (instant messages across thousands of light years), it seemed better communication would be the answer to all our problems. Especially because the microchip would put electronic communication in the hands of ordinary people. But now that dream has become a nightmare, with AI chatbots generating lies and authoritarian leaders using social media to control nations.
This isn’t the fault of our technology. It’s ours, for believing a single form of rapidly improving tech could make humanity improve rapidly, too. Sometimes, futurism prevents us from seeing what is actually coming next.
Annalee’s week
What I’m reading
One Day, Everyone Will Have Always Been Against This by Omar El Akkad, a gorgeous manifesto.
What I’m watching
I recently saw heavy metal cello band Apocalyptica play live, and my ears are still ringing happily.
What I’m working on
Making a website from raw HTML.
Annalee Newitz is a science journalist and author. Their latest book is Stories Are Weapons: Psychological warfare and the American mind. They are the co-host of the Hugo-winning podcast Our Opinions Are Correct. You can follow them @annaleen and their website is techsploitation.com