A few days before Christmas, I conducted a bit of an experiment. Every day, for ten days straight, I instructed the artificial intelligence ChatGPT to write a new column for me, to be published in the Swedish daily that I often write for.
We published a column every day, along with an analysis of how well or badly it had turned out.
We received a large number of reactions, and they can be divided into three types:
- The optimists. Some people start to think about utility, innovation, and possibilities. Totally reasonable.
- The pessimists. Some people start to wonder when an AI decides to kill us all, or at least spread malicious computer viruses because someone has managed to manipulate it. Also totally reasonable.
- The seen-it-alls. Some people do everything they can to point out weaknesses in the robot’s first results. They appeal to almost magical notions of what is human, spiritual, intuitive, artistically valuable, or scientifically sound.
It is this last group that has clearly had the greatest impact on a fourth group of people in society: the columnists. Because if there is one thing I have been showered with since ChatGPT was unleashed, to the sound of chins dropping to the floor across the world, it is analysts taking a comfortably laid-back or complacent position in the face of this new thing.
I think we would do well to take this very seriously instead. The artificial intelligence breakthrough in late autumn 2022 is a historic event. Here are three things to keep in mind:
- First, it is not finished. Right now, it’s learning more at breakneck speed as millions of users feed it new information and new questions. It should therefore not be judged by what it is, but by what it has the potential to become.
- Second, the age-old question of power and responsibility remains unresolved. Right now, it seems like ChatGPT is pretty good at not behaving unethically. But who defines that? According to what principles? Have you heard a single person in a responsible position at the conglomerate behind OpenAI speak clearly about this, or speak at all?
- Third, we need to take responsibility for our own knowledge. You can read a thousand texts like this one – or test, learn and think for yourself.
We are only at the beginning. Please do have a look at this new keynote I just made on the matter. It is not only wonderfully well produced by the team I worked with – I think it might be useful for anyone who is curious about what lies around the corner during the decade to come.
Andreas Ekstrom is a conference speaker and prominent figure in the field of digital futurism in Europe. He is the author of the bestselling book “The Google Code”, which delves into the impact of one of the most culturally, technologically, and commercially significant companies in history.
His insights and perspectives on the intersection between technology and society are commendable, and his emphasis on promoting a sustainable approach to technology is critical for creating a more equitable and responsible future.
To book Andreas for your event, please contact your JLA agent.