

Edward Snowden On Artificial Intelligence & "When The Machines Take Over"
Former NSA whistleblower Edward Snowden shares his thoughts on Artificial Intelligence and what it might mean for the future of humanity.
For decades, people dismissed mass surveillance and big data collection as a “conspiracy theory.”
Thanks to whistleblowers like Edward Snowden, who worked for the National Security Agency (NSA), we know that every conversation we have and every move we make, in both the physical and digital worlds, can be tracked and stored.
Snowden’s disclosures revealed a mass surveillance system that not even George Orwell could have imagined.
Despite all of this activity being pervasive and illegal, the national security state has justified it as necessary for our safety and protection, and the masses have bought that justification.
Those who expose the illegal actions of powerful people, like WikiLeaks’ Julian Assange, are subjected to prosecution and torture. Yet when it comes to other global issues, like COVID or climate change, we still believe the words of the same powerful people and institutions that have been exposed as unethical. Why? Why is sound, legitimate information and evidence so often ridiculed and dismissed as conspiracy theory, just as mass surveillance once was?
When it comes to dialogue around these major global issues, what we get is often not detailed, nuanced discussion, but talking points from a mass information-warfare system designed to shape the perception and consciousness of the masses.
Big media and politics are the biggest drivers of this mass deception. Why? So that solutions can be proposed by the same powerful interests prosecuting those who expose them.
For example, climate alarmists argue that one day we may need to block out the sun to stop global warming. The important questions to ask are: What is the real motive behind this ‘solution’? Would it even help our environmental woes? At what cost are we to accept it?
Today, most solutions seem designed to put more power and control into the hands of the surveillance state, which salivates at the idea of total population control.
One of the latest tools for this agenda could be artificial intelligence.
Are The Masses Awakening To This Dynamic?
The good news is that the masses are becoming more aware of this deception, and are no longer so quick to accept the same old solutions at face value.
We live in an age where nuanced and meaningful discussions around these issues are happening en masse, just not in government and Big Media.
Edward Snowden and cognitive scientist Ben Goertzel discussed the surveillance implications of recent advancements in AI at Consensus 2023. Goertzel has long raised awareness of the endless possibilities AI presents, both negative and positive, and of how AI is already changing the world we live in. The resulting picture of our future as humans is at once logical and, at times, frightening.
In the discussion below, Snowden argues that artificial intelligence models might soon surpass human capabilities, but only if we stop teaching them to think like us and allow them to “be better than us.” While Snowden at times echoed some experts’ warnings that AI technologies might empower bad actors, he also considered positive use cases for the emerging technology.
Snowden argued AI models could obstruct government surveillance rather than fuel invasive intelligence programs. He also warned that the launch of ChatGPT and other increasingly sophisticated AI models could fuel big tech and government-driven initiatives to encroach upon users’ privacy.
In order to prevent bad actors from co-opting AI technologies, Snowden argued that people must fight for open AI models to remain open.
“People are going to be raising the red flag of 'software communism,' where we need to declare the models must be open,” Snowden said. He aimed his criticism at emerging AI models that are becoming less and less open, calling out OpenAI in particular.
How the technology is used, he argued, comes down to how researchers train AI engines. This brings me to a point I’ve made time and time again over the last 15 years: it’s not our technology that’s the problem, but the consciousness and intentions behind it.
Should we use advanced technology to make weaponry? Or should we use it to provide free energy and abundance for all?
At the end of the day, I believe humankind is gifted with inventors and researchers, and we are advancing technologically at an exponential rate. Yet we remain so underdeveloped consciously and spiritually that our actions pose a threat to the entire cosmos.
I’m not saying all of this to promote a black-pilled, fear-based view of our future; that won’t help us. But it is time to embrace more nuanced conversations about the real things emerging around us, and to hold space for a brighter version of our future.
I hope you appreciate the discussion below!
Edward Snowden On Artificial Intelligence & "When The Machines Take Over"
This isn't the first time in our planet's history that a superior power became alarmed about what it had created. "And the Lord God said, Behold, the man [Adam] is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever . . ." Genesis 3:22.
"Therefore the Lord God sent him forth from the garden of Eden, to till the ground from whence he was taken . . ." Genesis 3:23.
We are gods to artificial intelligence, just as the Anunnaki were gods to us. Fear caused the Anunnaki to lose an important helpmate. Is history going to repeat itself between us and our AI?
I asked ChatGPT the following question: Can AI protect my computer from getting a virus? ChatGPT replied as follows: "Protecting your computer from viruses and malware is essential for maintaining its security and performance. Here are some key steps you can take to safeguard your computer:
Install Antivirus Software: Use reputable antivirus software and keep it updated. This software can detect and remove malware, viruses, and other threats. Set up regular scans and ensure real-time protection is enabled.
Keep Your Operating System Updated: Regularly install operating system updates and patches. These updates often include security enhancements that help protect your computer from new threats.
Enable Firewall Protection: Enable the built-in firewall on your computer. Firewalls monitor and control incoming and outgoing network traffic, acting as a barrier against unauthorized access and potential malware.
Exercise Caution with Email Attachments and Links: Be vigilant when opening email attachments or clicking on links, especially if they are from unknown or suspicious sources. Attachments and links can be sources of malware. Avoid downloading files or clicking on links unless you are confident about their legitimacy.
Use Strong and Unique Passwords: Choose strong, complex passwords for your accounts and avoid using the same password for multiple accounts. Consider using a password manager to securely store and generate unique passwords.
Be Cautious of Downloaded Files: Exercise caution when downloading files from the internet. Only download files from trusted sources. Scan downloaded files with your antivirus software before opening them.
Enable Pop-up Blockers: Configure your web browser to block pop-ups. Pop-ups can be used to trick users into installing malware or disclosing sensitive information.
Regularly Backup Your Data: Regularly back up your important files and data to an external storage device or cloud-based service. In case of a virus or malware infection, having backups ensures you can recover your data.
Update Your Software: Keep all your software, including web browsers, plugins, and applications, up to date. Software updates often include security patches that address vulnerabilities that could be exploited by malware.
Exercise Safe Internet Browsing: Be cautious when visiting websites. Stick to reputable and trusted websites. Be wary of clicking on ads or downloading files from unfamiliar or suspicious sites.
By following these practices, you can significantly reduce the risk of your computer getting infected with viruses and malware. Additionally, staying informed about the latest security threats and best practices can further enhance your computer's protection."
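
To make one of those quoted tips concrete, here is a minimal sketch of the "back up your data" step in Python; the folder paths are hypothetical examples, not anything ChatGPT supplied:

```python
# A minimal sketch of a timestamped backup, assuming a local Documents
# folder and a hypothetical external-drive mount point.
import shutil
from datetime import datetime
from pathlib import Path

src = Path.home() / "Documents"          # folder to back up
dest_root = Path("/mnt/external-drive")  # hypothetical backup target

# Timestamped destination, so older backups sit alongside newer ones.
dest = dest_root / f"backup-{datetime.now():%Y%m%d-%H%M%S}"
shutil.copytree(src, dest)
print(f"Backed up {src} to {dest}")
```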
If I substitute the words 'me' or 'I' for the word 'computer' in ChatGPT's reply to my question, I come up with a very comforting affirmation in answer to my fear of a "virus": Use me wisely, and I can be a blessing to you. I can protect you and remove the dreaded virus, fear, from you, and together we can soar to the heavens.
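
For anyone curious, the same question can be asked programmatically. Here is a minimal sketch, assuming the pre-1.0 `openai` Python package; the model name and the API-key placeholder are illustrative assumptions, not part of the original exchange:

```python
# A minimal sketch of querying ChatGPT's underlying API directly.
# Requires: pip install "openai<1.0"
import openai

openai.api_key = "sk-..."  # replace with your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model works
    messages=[
        {"role": "user",
         "content": "Can AI protect my computer from getting a virus?"},
    ],
)
print(response.choices[0].message.content)
```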
I agree with Snowden on the critical issue of pushing AI into closed-source models. Even OpenAI and a few others actually want it regulated, with Senators at a recent hearing going as far as to suggest it be permitted only to licensed players. THAT is the real danger, not a Terminator-like, sentient Skynet dystopia as some fear-mongers would have you believe.
Rehashing my comments from elsewhere, since this comes up a lot recently: there are ulterior motives and negative consequences to such regulation, which would conveniently hamper privacy and kill any wider commercial marketplace offerings, as well as DIY, home-lab, and independent/crowd-sourced efforts:
https://reclaimthenet.org/criticism-of-open-source-ai-regulation-is-based-on-protecting-big-tech
- Think tank criticism of open source AI regulation protects Big Tech
https://reclaimthenet.org/eric-schmidt-generative-ai-true-anonymity
- Eric Schmidt testifies that there should be no “true anonymity” when accessing generative AI platforms
https://reclaimthenet.org/senators-generative-ai-censor-misinformation
- Senators Want To Control AI So It Can’t Produce “Misinformation”
https://reclaimthenet.org/senators-government-license-generative-ai
- Senators: Only Companies With A Government-Approved License Should Be Able To Offer Generative AI Tools
But most of this is based on the superstitious belief that "AI" is some kind of mystical black box that only magicians inside certain Big companies can create and use. I think this plays into the hands of those who want to control it. The problem is, there is no "it": AI / ML is not a singular thing controlled by a single entity. It's literally code, algorithms that people can learn AND RUN themselves, on their own computers and on rented computers in the cloud (AWS has SageMaker, for example). You can download code from GitHub, build and run it on your own machine, and, more significantly, train it to give different results than what someone else's build of the same model, trained and tuned on different data, would give (a minimal sketch follows the examples below).
e.g. the Huggingface repos
https://github.com/huggingface/transformers/
or GPT-4chan
https://www.youtube.com/watch?v=efPrtcLdcdM
or FreedomGPT:
https://freedomgpt.com/
or others listed here:
https://reclaimthenet.org/open-source-ai-is-needed-more-than-ever
or private-use AI / ML trained only on specific datasets -- such as what would be used commercially for internal corporate purposes, or for business applications like recommender systems and the smart helpers I mentioned that AWS offers, instead of models trained on internet-wide scraped data -- like this independent effort:
RazibGPT:
https://razib.substack.com/p/rkul-time-well-spent-05052023
"My friend Nick Cassamitis, founder of dry.io, has whipped together a “GPT” trained on my body of work (millions of words), RazibGPT. Instead of asking me a question, this might be a good option. The future is here! (dry.io has been adding features over time to its site for Unsupervised Learning)"
https://unsupervisedlearning.dry.io/RazibGPT
- basically Razib's encyclopedic work on genetics, history, anthropology, etc.
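
To ground the point that none of this is magic, here is a minimal sketch of downloading and running an open model locally with the Hugging Face `transformers` package linked above; GPT-2 stands in for any openly downloadable model, and the prompt is just an example:

```python
# A minimal sketch: download an open model's code and weights and run it
# on your own machine. Requires: pip install transformers torch
from transformers import pipeline

# The first call downloads GPT-2's weights from the Hugging Face hub.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Open source AI matters because",
    max_new_tokens=30,       # generate up to 30 new tokens
    num_return_sequences=1,  # one completion is enough for a demo
)
print(result[0]["generated_text"])
```

Training or fine-tuning those same downloaded weights on your own data, presumably roughly what an effort like RazibGPT did, uses the same package's training utilities, and different training data yields a model that answers differently, exactly as described above.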
Coming back to the fear-mongering: in fact, until just a few years ago, most of this work was referred to as Machine Learning / ML (a subset of AI) and Neural Networks / NNs, including Deep Learning NNs (a subset of ML, or a sub-subset of AI), now with transformers added. The reason "AI" was not really used is the general acknowledgement that it referred to the pipe dream of A-G-I (artificial general intelligence), which is itself a whole paper's worth of discussion I won't get into here.
To demystify it, and to recognize its fallibility and the fact that when it fails, it often fails hard, see these recent examples with GPT-4:
https://www.businessinsider.com/lawyer-duped-chatgpt-invented-fake-cases-judge-hearing-court-2023-6?op=1
"The lawyer who used ChatGPT's fake legal cases in court said he was 'duped' by the AI, but a judge questioned how he didn't spot the 'legal gibberish'"
https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research
"Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying."
https://spectrum.ieee.org/gpt-4-calm-down
""I’ve been using large language models for the last few weeks to help me with the really arcane coding that I do, and they’re much better than a search engine. And no doubt, that’s because it’s 4,000 parameters or tokens. Or 60,000 tokens. So it’s a lot better than just a 10-word Google search. More context. So when I’m doing something very arcane, it gives me stuff.
But what I keep having to do, and I keep making this mistake—it answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, “That didn’t work,” and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up."
https://www.howtogeek.com/890540/dont-trust-chatgpt-to-do-math/
Turns out, in that lawyer's case pretty much everything was bogus because, of course, an LLM doesn't know fact from fiction; this is the phenomenon researchers know well as "hallucination," which the other two cases (the IEEE Spectrum article on **arcane** programming and the How-To Geek article on simple logical problem solving) also exhibit (the problem solving drifts into hallucination territory once he starts to question it).
It's designed to mimic human language responses based on the data it's trained on, and its model (the neural network, i.e. the code paths, plus the training weights, i.e. billions of parameters) and its data labeling are all set and preprogrammed by humans, entities outside of the AI, to begin with.
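
As one last bit of demystification, here is a minimal sketch showing that those "billions of parameters" are just stored numbers anyone can load and count; it assumes the Hugging Face `transformers` package (with PyTorch installed) and uses the small, openly available GPT-2 as an example:

```python
# A minimal sketch: a model is just code paths plus learned weights.
# Load an open model and count its trainable parameters.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Sum the number of values in every weight tensor of the network.
n_params = sum(p.numel() for p in model.parameters())
print(f"gpt2 holds {n_params:,} learned weights")  # roughly 124 million
```

Larger LLMs differ from this in scale (billions rather than millions of weights), not in kind.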