Los Angeles, 2029 A.D.
Futuristic ships soar overhead as a massive tank crushes bones and skulls beneath its treads.
Remember the movie The Terminator, where Skynet becomes self-aware and turns on humanity? It’s the classic sci-fi nightmare: an AI decides we’re the enemy and tries to wipe us out.
But is that a valid fear? Let’s find out!
For the record, my biggest fears about AI don’t involve a Skynet-like scenario from the Terminator movies; they center on human behavior related to AI.
#1 – Nefarious people use AI for nefarious purposes (spoiler: it’s already happening).
#2 – AI bots replace or significantly diminish real human connections. Imagine a bot that listens deeply and only tells you what you want to hear. Why bother with humans who might disagree, annoy, or challenge us?
Four Future Scenarios
In his book Co-Intelligence, Professor Ethan Mollick of the Wharton School shares four possible AI scenarios. Here is a summary:
- As Good As It Gets: AI advancement stalls out. Unlikely, but possible if heavy regulation kicks in. Even then, AI has already changed the game with its ability to create hyper-realistic images, videos, and voice clones.
- Slow Growth: AI capabilities improve by only 10-20% per year. Bad actors use AI for scams and weapons, but effective regulation emerges. AI transforms workplaces, replacing human jobs in various sectors. Society-wide benefits appear, with AI driving innovation in research and autonomous scientific experiments.
- Exponential Growth: Everything in scenario 2 happens faster and more intensely. AI invents deadly weapons rapidly. An “AI-tocracy” emerges to keep bad actors in check. Work changes drastically, with AI-powered robots and autonomous agents monitored by humans.
- The Machine God: AI reaches human-level intelligence (AGI) and self-improves. Human supremacy ends. AIs might watch over us benevolently or view us as a threat. While possible, there’s no concrete reason to expect this scenario.
Instead of worrying about an AI apocalypse, Mollick argues we should focus on the many potential “small catastrophes” AI could bring. Better yet, he encourages readers to plan for a “eucatastrophe”—the opposite of a catastrophe. A eucatastrophe, a term J.R.R. Tolkien coined when describing fairy tales, is a sudden, joyous turn of events.
What’s Your Next Step?
What are your biggest AI fears? Are they based on fiction or knowledge? How will you educate yourself about AI to join the conversation?
To shape our AI future, we need engaged citizens and serious discussions. That starts with education. In the next part, we’ll explore how to use AI effectively, and I’ll share some of my favorite resources.
Remember, the future of AI isn’t set in stone – our collective choices and actions shape it, and your engagement matters. Start your AI journey today, and be part of the conversation to shape our technological future.
Here are five possible steps you can take to educate yourself and join the conversation:
- Take concrete steps to learn more about AI
- Engage with the topic critically
- Consider personal and societal impacts
- Participate in shaping AI’s future
- Look forward to the next part of the series
Stay tuned for part 5!
Until next time,
PS – If you are benefiting from my weekly newsletter, leave me a tip so I can keep creating.
Sources
Mollick, Ethan. Co-Intelligence, pp. 193–210.
Photo by Julien Tromeur on Unsplash