Artificial Intelligence Desperately Needs Human Wisdom
The future of AI depends on our own wise choices
What happens when we are intelligent but not necessarily wise? As a former religion scholar and a lifelong self-proclaimed ‘wisdom seeker,’ I have spent a lot of time exploring why I feel uneasy with artificial intelligence. I know that knowledge is power, and I know that AI has brought, and will undoubtedly continue to bring, many benefits to society. But at what cost?
Our modern society may be quite intelligent, but we are running at a wisdom deficit.
We adopt technologies and new media platforms to enhance our lives, engage in conversations, and ‘progress’ civilization. But how often do these moves have unintended or negative consequences for our mental health, or even for society as a whole?
How often do we adopt new technologies simply because they are new?
How much AI will we incorporate for the sake of using AI?
The more I read about AI, the more I have come to wonder: is it possible that the missing ingredient is AW (Artificial Wisdom)?
I decided to ask ChatGPT whether it thinks it is wise. Its response highlights my sentiment perfectly: human oversight is crucial to ensure responsible and ethical use. Like politics and religion, AI is a tool that can be used for great or terrible things by humans. Even the AI bots know this.
Intelligence builds AI; wisdom asks why we want to use it.
Intelligence creates technology, builds structures and systems, and helps people earn good grades in school. Wisdom asks us if we are going in the best direction with the things we learn, create, and utilize.
Our society honours intelligence, but it moves too fast to honour wisdom.
Wisdom is honed through stillness and reflection. It is learned in deep time, where we take our intelligence and ponder its purpose. When I look at the people who influence our society the most in our modern era, I see intelligence (though sometimes even that is questionable), but rarely do I see the most prominent voices say things that are wise.
A wise voice in the tech world, Silicon Valley’s “tech guru” Jaron Lanier stands out for the way he moves and thinks in our fast world. Watching an interview with him on Channel 4 News, it was obvious that his views were not well received. The host even sought to relegate him to the realm of “new age hippie.”
Lanier takes on the role of speaking truth to power in Silicon Valley. He does not use social media; he even wrote a book on why we need to quit it. But he is no Luddite. He loves technology; he merely uses it in ways that I would consider wise. He asks why when he uses it.
This reflection led me back to a Wired article I read about how the Amish use technology. Most people see them as anti-technology, but what they have actually done is incorporate the why (wisdom) into their use.
The Amish move through life with the slowness that allows for wisdom; their spiritual orientation encourages them to ask the “why” question. I see the value in taking time to ask myself how a new piece of tech will benefit me, and how it may harm me (and my community). My answer will be different from theirs, but it is no less important for me to ask.
When I look at the conversation around AI, I see common themes surrounding our collective fears. Both brilliant minds in the field and ordinary citizens (i.e., people like me with a rudimentary understanding of the intricacies of AI) are nervous about its future.
Will it take over humanity and kill us all? Will it develop a consciousness of its own? Will it make us obsolete? Will it use up swathes of energy and plunge us even more quickly into ecological collapse?
I feel these fears are warranted because AI is run the way the majority of us live in modern society: too quickly to pause and reflect, too fast to be wise.
We continue to wage wars for profit, despite the social and environmental damage. We continue to mindlessly consume products and electronic media. We continue to adopt technologies because they are new, exciting, cool.
We are intelligent, but we are not wise.
So how could an AI that we create be any different? If it runs us into obsolescence or destroys us, would it not be our own fault for creating a beast in our own modern image?
Back to my “conversation” with ChatGPT:
“Wisdom involves not only knowledge and problem-solving abilities but also an understanding of context, ethical considerations, and the ability to make sound judgments based on a deep understanding of human values. AI systems lack the ability to have genuine experiences, emotions, or moral intuitions.” — ChatGPT prompt answer
The lack of context, ethical considerations, and deep understanding of human values means that AI does not have wisdom. When we have debates at the UN, in court, or in our governments, we bring our human interests, needs, and desires to those conversations.
It may be intelligence that pushes much of our societal agenda, but it is wisdom that reins us in.
Like lobbyists for environmental causes, there is always a human conscience, no matter how small, that pushes back against untethered intelligence.
AI has knowledge about wisdom traditions, philosophy, religious beliefs, and modern social issues. But it is developed by humans, and if it is used to teach us things and make decisions without the input of wisdom, it will move humanity forward without our humanity.
So can we teach AI to be wise, or do we ourselves need to be wiser in our use of it? I think we know the answer.
I think the fears around AI, and the policies that many countries are seeking to adopt in response, are warranted, because in a way we are subconsciously aware that AI is operating like us in modern society. There are not enough checks and balances. There is not enough pause or reflection. We have begun to use ChatGPT for editing and writing, and as similar technologies develop, they will pose challenges for teachers and employers alike.
My tattoo artist lamented would-be clients coming to the studio with AI-generated art and asking artists to tattoo the piece. Some tattoo artists (and artists in general) have begun to use AI-generated art at the expense of the time-consuming practice that hones their craft. I go to fine art galleries to be awed by the thousands of hours that artists have spent honing their skills, and the thousands more hours of their careers they have devoted to creating beauty with their hands.
In much the same way, I love to learn. I love the process of attaining knowledge and wisdom. I like to take time to pause and reflect on what I have learned and how it relates to my life and the world around me. I am not always opposed to using AI, but ultimately I still want to be doing the thinking and the learning.
I do not want to see our world with less humanity, and I think the only way to have less fear (justifiably so) around AI is to reframe our relationship with it. And that is something we are all called upon to do: to force ourselves to be the wise body meeting the zeros and ones of the digital world.
The next time you want to use ChatGPT, image generators, and the like, ask yourself the why. And keep on asking. It is the why that makes us human.