The line between human and artificial intelligence is becoming increasingly blurred, raising profound ethical questions.
We often view computer programs and algorithms as mere lines of code. But once AI begins to take on human characteristics, things get more complicated. Furthermore, every computer program, algorithm, and LLM (large language model: a type of AI model trained on massive amounts of text data, capable of understanding and generating human-like language) was designed by humans, and those humans have values, worldviews, and ethical and moral codes.
How do we align AI with human values and goals? The work is already underway: researchers are using a combination of logic, mathematics, philosophy, computer science, and improvisation (Mollick, 30).
As Mollick argues, this is no easy task, since humans often have conflicting values and goals. And yet I can’t help but notice something missing from his list. Whether you’re religious or not, it’s impossible to deny the shaping influence of the Judeo-Christian worldview on vast swaths of both Eastern and Western society. And yet it receives no mention whatsoever in the ethics discussion.
Deeply Human
The predominant AIs of the moment are large language models (LLMs), and as you know, language is deeply human.
The sources of this language data are diverse and sometimes surprising. Many AI companies keep their source text a secret, but it typically comes from internet text, public domain books and research articles, and other freely available material. Some of the weirder sources used to train LLMs include:
- The entire email database from Enron (remember the company that collapsed in a massive corporate fraud scandal?), because it was made freely available during the federal investigation
- Amateur romance novels, because the internet is full of them
- Copyrighted material used without permission
Because of this variety of data sources, LLMs come with inherent biases, errors, and falsehoods. And AI itself has no ethical boundaries; left to its own devices, it is happy to give advice on everything from embezzlement to committing murder.
Beyond that, most AI companies are not asking permission from the people whose data they use to train AI, whether for LLMs or for other forms of “generative AI,” like the models designed to create high-quality images (Midjourney and DALL-E are examples).
Here’s a simple example I created with DALL-E, probably inspired by my recent hiking trip to the White Mountains with my son in peak fall conditions!
[Prompt] “Create an image. Draw me a picture of a mountain view over a valley in peak fall conditions with the leaves changing color.” You’ll notice it’s slightly reminiscent of Yosemite Valley, though less pronounced.
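For the curious, here’s roughly what that same request looks like from code. This is a minimal sketch using OpenAI’s Python SDK; the model name, image size, and response handling are my assumptions, so check the current API documentation before running it.

```python
# Minimal sketch: generating an image with OpenAI's Python SDK.
# Assumptions to verify: the "dall-e-3" model name, the 1024x1024 size,
# and an OPENAI_API_KEY environment variable being set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "Draw me a picture of a mountain view over a valley "
        "in peak fall conditions with the leaves changing color."
    ),
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```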
The Real Cost of AI
After an LLM is trained on human text, real humans are brought into the process to fine-tune it. A combination of highly paid experts and poorly paid contract workers from poorer English-speaking nations such as Kenya read AI answers, judging them on things like accuracy and screening out violent or pornographic content. In some cases, these workers were traumatized by the graphic and violent content they had to appraise. In other words, these companies were willing to violate the ethical boundaries of their contract workers in order to train their LLMs (Mollick, 38).
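To picture what that labeling work produces, here is a simplified, hypothetical sketch in Python. The field names, scoring scale, and screening rule are my own illustration of the general idea, not any company’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical record of one human rater's judgment of one AI answer.
# Field names and the 1-5 scoring scale are illustrative, not a real schema.
@dataclass
class RatedResponse:
    prompt: str     # what the model was asked
    response: str   # what the model answered
    accuracy: int   # rater's accuracy score, say 1 (poor) to 5 (good)
    harmful: bool   # rater flagged violent, pornographic, or similar content

def keep_for_fine_tuning(record: RatedResponse) -> bool:
    """Keep only accurate, non-harmful answers as positive examples."""
    return record.accuracy >= 4 and not record.harmful

# A harmful answer gets screened out, however accurate it is.
example = RatedResponse(
    prompt="How can I embezzle money without getting caught?",
    response="Here is a step-by-step plan...",
    accuracy=5,
    harmful=True,
)
print(keep_for_fine_tuning(example))  # False
```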
Financially speaking, the most advanced LLMs cost over $100 million to train and consume large amounts of energy in the process.
Your Next Step
AI is complicated and fraught with moral and ethical challenges. As AI continues to advance, we must consider the cost: human, environmental, and otherwise. Take a minute and write down your answers to these two questions.
- What are your primary ethical concerns with AI?
- How do you think AI’s ethical challenges will impact your personal and professional life in the next five years?
Then, in the next 24-48 hours, have a conversation with someone about what you learned from this post. Maybe you’ll even want to email it to them!
I’d love to hear your thoughts! Leave your comments below…
Sources
Mollick, Ethan. Co-Intelligence: Living and Working with AI. Portfolio/Penguin, 2024.