The Ethics of AI: Deepfake Porn & ChatGPT
Is technology evolving too quickly for us to steward it ethically?
Set Your Pulse: Take a breath. Turn your attention to your body and release any tension. Breathe slowly into the area of your heart for 60 seconds, focusing on feeling a sense of ease. Stay connected to your body as you read. Click here to learn why we suggest this.
AI is both a fascinating topic to me and a scary one. On one hand, AI has helped our lives in many ways, and can continue to. That's a statement that makes me slightly tense just to write, because we've come to think of AI mostly as a 'dangerous' thing these days. Which brings me to my other point.
There are some huge concerns about the way AI is being used and the type of future it could create if we as people are not careful, or perhaps if we don't raise the quality of our consciousness and being.
I want to lay out some ethical concerns with AI that are already playing out in the real world, ranging from minor to major.
Let’s start with a quick look at the various types of AI out there.
As stated by the website Built In:
Reactive Machines: Technology capable of responding to external stimuli in real time, but unable to build a memory base and store information for future use. (Examples: email filtering, movie or song recommendations based on what you like.)
Limited Memory: Machines that can store knowledge and use it to learn and train for future tasks. (Examples: chatbots, robot vacuums, self-driving cars.)
Theory of Mind: The concept of AI that can sense and respond to human emotions as well as perform the tasks of limited memory machines.
Self-aware: The final stage of AI where machines can not only recognize the emotions of others, but also have a sense of self and a human-level intelligence.
We can see that some AI has been very useful to us, and has been around for a long time. We can also see that the fourth type of AI sounds rather scary. If someone eventually succeeds in creating it, we have no idea what could happen.
A Minor Inconvenience
But even with the AI we have now, there are many problems and ethical questions we have to examine. Let’s start with something simple.
Below is a story from an author who has written for CE and The Pulse. He shares some thoughts on the problem with artificial intelligence.
My interest in the potential consequences of machines evaluating the information in other machines came to light a few weeks ago when I travelled out of state. While I was getting gas, the pump malfunctioned and I moved to another pump to finish fuelling.
Then my credit card was declined and I had to call the company. In addition, a possible “fraud alert” came into my email. Since the pump malfunction had been odd, I thought maybe I had been hacked, but when I called the company it turned out there was no fraud at all – the “computers” had simply decided my anomaly was irregular enough to warrant inconveniencing me by shutting off my credit.
When I complained about this, the fraud person said simply, “It’s our policy. If the software triggers an alert we have to act.” It reminded me of Treasury Secretary Hank Paulson’s famous comment when asked why the banks needed a $700 billion bailout in 2008.
He said, “The computers told us.”
The problem is that much of this “artificial intelligence” is unfounded, unproven, and just plain wrong. There had been no fraud on my credit card, just a glitch at a gas pump. But how do you hold a computer program accountable?
This same thing has happened to me multiple times, in different ways: “The computers determined there was a mistake.” Clearly there are big differences between how a computer and a human can respond in certain scenarios. How often does this happen? How do we correct for it?
Trouble Builds: Academia & Sensemaking
ChatGPT is an AI tool making waves today. It’s a chatbot that answers questions posed by users, and its responses mimic natural-sounding, human-like conversation. Its answers are generated from the vast body of text it was trained on from around the internet. It can be used to write articles, essays, social media posts, books, and more.
Some of the ethical questions being raised are around university essays and projects. If students can now type a few questions to a chatbot and get full essays back, should they still get the same credit? And if access to information is this simple now, do we risk losing the critical thinking and creativity that doing the work ourselves develops?
ChatGPT can also be used in the journalism space. And if the COVID era taught us anything, it is that this is a scary thought.
A journalist or reporter for any outlet could simply feed a few questions about a current event into ChatGPT and churn out an article that would be hard to distinguish from one written by a human.
But beyond that is the question of “what is truth?” When information can be censored online, and “facts” are decided by ‘fact checkers,’ how does AI know how to navigate this problem? Will AI even be able to pull from facts that have been scrubbed from the internet like in the case of Jordan Walker’s employment with Pfizer? Obviously not.
As revenue is stripped from media outlets, primarily independent outlets like us, companies will be incentivized to replace writers and reporters with AI bots. Even fewer minds will be involved in good sensemaking, and stories can be shaped by powerful people seeding the ideas they want the public to know about while scrubbing the others.
It’s concerning to think our information landscape may get even worse. But in case you’re wondering: no, this isn’t something we would do.
But that brings us full circle to the problem Tom explained above with his credit card. Aside from the fact that things like social credit systems are problematic to begin with, if machines begin to help determine what is important and whose social credit is valid based on algorithms and parameters, how will we know they aren’t making basic mistakes that lack important human nuance?
Who would be responsible for these mistakes?
Even Bigger Trouble: What Is Real?
Take, for example, a recent development called deepfakes: a process in which AI composites the face of anyone you want, including celebrities or presidents, over the face of another person in a video. This makes it look like someone is saying or doing something they aren’t. Have a look at the video below.
Recently, a 28-year-old woman who creates content on Twitch under the handle QTCinderella was sadly caught up in a deepfake porn incident.
Each month, hundreds of thousands of people watch her play video games, bake cakes, and interact with fans online, and she gets paid for it. But her stream on January 30th wasn’t a fun one. Instead, she logged on to address the incident and show the pain of discovering that someone had created deepfake porn using her image and likeness.
“This is what it looks like to feel violated. This is what it looks like to feel taken advantage of, this is what it looks like to see yourself naked against your will being spread all over the internet. This is what it looks like.”
Not only did the creator of the deepfake porn take harmful action, but one Twitter user responded to her stream by taking a screenshot of her reaction and posting it with a statement of their own.
This is where I can’t help but feel that we as a society are too deeply disconnected and emotionally stagnant to be able to steward technologies like this.
It’s the age-old question: is it the technology itself, or the humans who use it? For me, in most cases with AI, it’s our quality as humans at the moment, as well as the way our world is designed, that worries me more than the technologies themselves.
We seem so deeply disconnected from our emotions and how others feel that it’s common for people to want to tear down others who are going through a tough experience just to provide a ‘take’ that could go viral.
We don’t seek to understand or truly listen to the pain someone is going through because, in many cases, we’d rather judge them for their looks, skin color, popularity, or privilege so we don’t have to truly connect with them. It’s as if we’ve built ways to defend ourselves from connecting with each other.
Why do so many of us have a hard time empathizing with people we’ve never met? Would we see this as a healthy path forward even if AI use cases like this didn’t exist?
Either way, I believe that the need for us to truly focus on the quality of our emotions, connection to others, and being-ness is huge, and will be the foundation of how we navigate the upcoming years in order to take action towards a more thriving future.
With AI entering an already difficult-to-navigate information landscape, critical thinking and embodiment are becoming more important than ever. To sharpen your critical thinking and bias-detection skills, check out our course below.
→ Overcoming Bias & Improving Critical Thinking: A course that combines coherent embodiment, mastering self awareness, and critical thinking to help you notice bias in seconds, and think more critically in every area of your life. Join 1,050+ students.