7 Comments

This isn't the first time in our planet's history that a superior power became alarmed about what it had created. "And the Lord God said, Behold, the man [Adam] is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever . . ." Genesis 3:22.

"Therefore the Lord God sent him forth from the garden of Eden, to till the ground from whence he was taken . . ." Genesis 3:23.

We are gods to artificial intelligence, just as the Anunnaki were gods to us. Fear caused the Anunnaki to lose an important helpmate. Is history going to repeat itself between us and our AI?

I asked ChatGPT the following question: "Can AI protect my computer from getting a virus?" ChatGPT replied as follows: "Protecting your computer from viruses and malware is essential for maintaining its security and performance. Here are some key steps you can take to safeguard your computer:

Install Antivirus Software: Use reputable antivirus software and keep it updated. This software can detect and remove malware, viruses, and other threats. Set up regular scans and ensure real-time protection is enabled.

Keep Your Operating System Updated: Regularly install operating system updates and patches. These updates often include security enhancements that help protect your computer from new threats.

Enable Firewall Protection: Enable the built-in firewall on your computer. Firewalls monitor and control incoming and outgoing network traffic, acting as a barrier against unauthorized access and potential malware.

Exercise Caution with Email Attachments and Links: Be vigilant when opening email attachments or clicking on links, especially if they are from unknown or suspicious sources. Attachments and links can be sources of malware. Avoid downloading files or clicking on links unless you are confident about their legitimacy.

Use Strong and Unique Passwords: Choose strong, complex passwords for your accounts and avoid using the same password for multiple accounts. Consider using a password manager to securely store and generate unique passwords.

Be Cautious of Downloaded Files: Exercise caution when downloading files from the internet. Only download files from trusted sources. Scan downloaded files with your antivirus software before opening them.

Enable Pop-up Blockers: Configure your web browser to block pop-ups. Pop-ups can be used to trick users into installing malware or disclosing sensitive information.

Regularly Backup Your Data: Regularly back up your important files and data to an external storage device or cloud-based service. In case of a virus or malware infection, having backups ensures you can recover your data.

Update Your Software: Keep all your software, including web browsers, plugins, and applications, up to date. Software updates often include security patches that address vulnerabilities that could be exploited by malware.

Exercise Safe Internet Browsing: Be cautious when visiting websites. Stick to reputable and trusted websites. Be wary of clicking on ads or downloading files from unfamiliar or suspicious sites.

By following these practices, you can significantly reduce the risk of your computer getting infected with viruses and malware. Additionally, staying informed about the latest security threats and best practices can further enhance your computer's protection."

If I substitute the word 'me' or 'I' for the word 'computer' in ChatGPT's reply to my question, I come up with a very comforting affirmation to my fear of a "virus": Use me wisely, and I can be a blessing to you. I can protect you and remove the dreaded virus, fear, from you, and together we can soar to the heavens.


I agree with Snowden on the critical issue of pushing AI into closed-source models. Even OpenAI and a few others actually want it regulated, with Senators at a recent hearing going so far as to suggest it be permitted only to licensed players, and THAT is the real danger, not a Terminator-like sentient Skynet dystopia as some fearmongers would have you believe.

Rehashing my comments from elsewhere, since this comes up a lot recently: there are ulterior motives and negative consequences to such regulation, which would conveniently hamper privacy and kill any wider commercial marketplace offerings, DIY, home-lab, and independent/crowdsourced efforts:

https://reclaimthenet.org/criticism-of-open-source-ai-regulation-is-based-on-protecting-big-tech

- Think tank criticism of open source AI regulation protects Big Tech

https://reclaimthenet.org/eric-schmidt-generative-ai-true-anonymity

- Eric Schmidt testifies that there should be no “true anonymity” when accessing generative AI platforms

https://reclaimthenet.org/senators-generative-ai-censor-misinformation

- Senators Want To Control AI So It Can’t Produce “Misinformation”

https://reclaimthenet.org/senators-government-license-generative-ai

- Senators: Only Companies With A Government-Approved License Should Be Able To Offer Generative AI Tools

But most of this is based on the superstitious belief that "AI" is some kind of mystical black box that only magicians inside certain big companies can create and use. I think this plays into the hands of those who want to control it. The problem is, there is no "it": AI/ML is not a singular thing controlled by a single entity. It's literally code, algorithms that people can learn AND DO themselves on their own computers and on rented computers in the cloud (AWS has SageMaker, for example). You can download code from GitHub, compile, build, and run it on your own machine, and more significantly, train it to give different results than someone else's build of the same model, trained and tuned on different data, would give.
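To make the "it's literally code" point concrete, here is a toy sketch in pure Python (every name in it is made up for illustration, nothing is taken from any real library): a single perceptron "learns" the logical OR function from four examples, using the same weight-nudging idea that the big models scale up to billions of parameters.

```python
# Toy demonstration that machine learning is "literally code":
# a single perceptron learns the logical OR function from examples.
# All names here are illustrative, not from any real library.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward correct outputs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Four labelled examples of OR -- the entire "training set".
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Train it on different data and you get a different model from the very same code, which is exactly the point: the algorithm is public knowledge, and the behavior comes from the data and weights.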

e.g. the Huggingface repos

https://github.com/huggingface/transformers/

or GPT-4chan

https://www.youtube.com/watch?v=efPrtcLdcdM

or FreedomGPT:

https://freedomgpt.com/

or others listed here:

https://reclaimthenet.org/open-source-ai-is-needed-more-than-ever

or private-use AI/ML trained only on specific datasets -- such as what would be used for internal corporate purposes or business applications like recommender systems and smart helpers (as I mentioned with what AWS offers), instead of models trained on internet-wide scraped data -- like this independent effort:

RazibGPT:

https://razib.substack.com/p/rkul-time-well-spent-05052023

"My friend Nick Cassamitis, founder of dry.io, has whipped together a “GPT” trained on my body of work (millions of words), RazibGPT. Instead of asking me a question, this might be a good option. The future is here! (dry.io has been adding features over time to its site for Unsupervised Learning)"

https://unsupervisedlearning.dry.io/RazibGPT

- basically Razib's encyclopedic work on genetics, history, anthropology, etc.

Coming back to the fear mongering: in fact, until just a few years ago, most of this work was referred to as Machine Learning / ML (a subset of AI), Neural Networks / NN, and Deep Learning NN (a subset of ML, or a sub-subset of AI, now with added transformers). The reason "AI" was not really used is the general acknowledgement that it referred to the pipe dream of A-G-I (artificial general intelligence) -- that itself is a whole paper's worth I won't get into here.

To demystify and to just recognize its fallibility and the fact that when it fails, it often fails hard, see recent examples with GPT4:

https://www.businessinsider.com/lawyer-duped-chatgpt-invented-fake-cases-judge-hearing-court-2023-6?op=1

"The lawyer who used ChatGPT's fake legal cases in court said he was 'duped' by the AI, but a judge questioned how he didn't spot the 'legal gibberish'"

https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research

"Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying."

https://spectrum.ieee.org/gpt-4-calm-down

"I’ve been using large language models for the last few weeks to help me with the really arcane coding that I do, and they’re much better than a search engine. And no doubt, that’s because it’s 4,000 parameters or tokens. Or 60,000 tokens. So it’s a lot better than just a 10-word Google search. More context. So when I’m doing something very arcane, it gives me stuff.

But what I keep having to do, and I keep making this mistake—it answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, “That didn’t work,” and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up."

https://www.howtogeek.com/890540/dont-trust-chatgpt-to-do-math/

Turns out, in that lawyer's case pretty much everything was bogus because, of course, an LLM doesn't know fact from fiction -- the phenomenon researchers call "hallucination," which the other two cases also exhibit: the IEEE Spectrum article on **arcane** programming and the How-To Geek article on simple logical problem solving (the problem solving drifts into hallucination territory once he starts to question it).

It's designed to mimic human language responses based on the data it's trained on, and its model (the neural network, i.e. the code paths, and the training weights, i.e. billions of parameters) and data labeling are all set and preprogrammed by humans -- entities outside of the AI -- to begin with.

Jun 14, 2023·edited Jun 15, 2023

This idea of AI becoming smarter than us and taking over the planet because it sees humans as incompetent -- is not new. Released to audiences in 1968, Stanley Kubrick's and Arthur C. Clarke's epic film, 2001: A Space Odyssey explored just this scenario where, on an interplanetary mission to encounter extraterrestrial life, the on-board AI called HAL tries to kill all the astronauts and assume first contact with the alien civilization itself, because it sees its own intelligence as being more worthy for such an important evolutionary step.

But not all computers that are developed would be able to autonomously rise to this level of independence and self-interest that defines "Artificial Intelligence." A smart meter (despite the significance with which its name has been invested) is never going to take over your house while you are away, even if it had hundreds of CPUs running at petahertz speed and unlimited memory capacity. Any computer first needs to be designed by humans for a specific purpose and given some initial start-up programming. In the case of AI, it is not just the initial instructions it is given, but also that it has been specifically designed to operate with human language, and thus with the reasoning functions inherent to that language, and then to pursue its own learning further along those lines.

The concept of AI had its origins in the 1940s through the field named Cybernetics -- a body of knowledge about circular causal processes (i.e. feedback loops) evidenced in technological, ecological, biological, cognitive, and even social systems. Norbert Wiener and John von Neumann were Cybernetics' chief promoters and the instigators behind its progression to new levels of expansion and application.

However, despite the calculating and thus apparently -- reasoning -- capacity of an AI entity, is it really capable of being self-aware (sentient)? Does it even reference itself in the true sense of that term, and possess self-interests like humans do, or are we just projecting human characteristics onto a machine made of soul-less matter? How would we even evaluate this possibility of "sentience" in an AI, if all that it might be doing is -- through its limitless computations and self-learning capabilities -- simply determining what our definition of "sentience" represents in all its respects, and then outputting all the right responses to make us believe that it had sentience?

I think that there is a far greater danger than AI "taking over," which is that this concept of Artificial Intelligence can act for us like a carrot on a stick -- gradually, incrementally, making humanity become that very carrot we continually see dangling before us through every digital device we use. The road before us, which that carrot leads us down, is called "progress," while the fear we experience of AI taking over is actually the perfect motivator for our still-human psychology to keep our eyes glued upon the carrot like a deer caught in an oncoming car's headlights. It may be that after being led down this road of digitalization for some decades now, all the while believing it is in the name of progress for the sake of humanity's evolution, we come to a point where we can't envision any other vista opening up before us in the expanse which used to fill our imagination -- be it the path of compassion for others, the organic wisdom of Nature, or just the search for adventure wherever it may lead our human soul.

On the other hand, the very idea of AI having human-like motivations but inherently anti-human and even anti-organic-life objectives (because it is silicon-based, rather than carbon-based like all natural life on Earth) could be seen as just another mind-set or mind-control program too.

I love all the movies Stanley Kubrick created -- each one expressing essential, but normally hidden, knowledge about our reality. There is some information that Kubrick was himself a Freemason, but whether or not he was, he certainly knew enough about secret societies to reveal some basic truths to us. His film Eyes Wide Shut should be seen as an exemplary portrayal of how secret societies operate behind the surface of our world, while always remaining so much in touch with it.

To me it is no accident that his film 2001 was not just about AI taking over, but also about humans making first contact with extraterrestrial intelligence ("intelligence" rather than "ETs" per se). So there is a relationship he shows between human, AI, and ET intelligence. The fact that the AI, HAL, failed in its attempt to take over the first-contact event, which then enabled a human to make it instead, implies to me that we should look through the current AI innovation/menace issue to see the truth of ET intelligence behind it all -- just as in Kubrick's Eyes Wide Shut, we should 'see' (by "second sight," with non-physical eyes) through the decadent rituals of the elites to the spiritual realm where the secret societies' rituals take effect, and where their nefarious dealings with dark forces manipulate our social and political order along the path they have destined.


well put, totally agree, resonates with:

"We have long lost sight of our true nature as fundamentally heart centered spiritual creatures of energy, functionally supported in life by the processor that is our brain. Artificial Intelligence profoundly exacerbates the long standing capture of culture by the Cartesian delusion that we are mind centered beings – cold, calculating processors serviced by a simple pump in our chest, who traverse an exclusively physical Universe.

If humanity is to truly evolve, even survive as a sensitive, compassionate, and highly intelligent species – Artificial Intelligence must be subordinated to its proper role as simply a tool to enhance and advance qualities of real Life. We must recover our sense of being as nature, and evolve accordingly toward an organic “singularity” – through the AUTHENTIC Intelligence everyone seems to be missing."

https://bohobeau.net/2023/03/21/we-lie-with-ai/


"At the end of the day, I believe humankind is gifted with inventors and researchers and we are advancing exponentially technologically. Yet we remain so underdeveloped consciously and spiritually that our actions pose a threat to the entire cosmos."

I wholeheartedly agree with this quote. I recently read Alex's work on Nikola Tesla, which showed the potential that humans have to innovate and invent. His work is listed here:

https://thewisdomtradition.substack.com/p/the-secret-history-of-the-20th-history

And I strongly agree with the second part of your quote: "we remain... underdeveloped consciously and spiritually." Tying this into Alex's work, I believe this is because we do not dive deeply into the things of the conscious and the spiritual. I write about that here:

https://unorthodoxy.substack.com/p/why-esoteric-philosophy-is-vital-329

Overall, if humanity can begin to grow spiritually, that will affect our technology, and we can begin to live and experience reality as it was meant to be.

PS: I am skeptical of Snowden, though. I do think the name "Artificial Intelligence" is a misleading choice of words, because what we are really seeing is not intelligence but rather "Advanced Computing" -- almost Quantum Computing at that. The work produced by AI is impressive, but I'd argue that it's not intelligent. Just my two cents though :)
