Are we too late to control the effects of artificial views of the world? Recently, when I mentioned that Florida was an attractive state because the governor had been "doing a good job", someone replied, "I do not know about that, we are Democrats". That answer gave me pause. Are we really living in parallel universes that have become airtight to each other? Is it still possible to hear an argument, or, God forbid, a fact, coming from what is perceived as the 'other side'?
How can that be sustainable? It shatters the classical principles of discourse and the exchange of ideas as means of reaching common ground. No common ground is possible if we are not willing to listen to opposing views, consider them, and be ready to admit that we were wrong, or at least that the other side could hold a position with merit. Without that hope, there can only be conflict, based on preconceived ideas that have been either inherited or swallowed from the 'channel' we were first hooked on.
The concentration of the media, and their "educational" approach through a handful of channels, or homogeneous trains of thought, has been the most striking phenomenon of the past 30 years. Under the illusion of a plurality of brands, people are drinking information and ideas that come out of a very limited number of wells. And these wells are more and more impervious to any alternative: it is all or nothing, pure rejection or excommunication if a question is raised about the taste of the water coming from any other well.
Polarity taken to the extreme, leading to tremendous tensions. We all feel this need to stay on the surface, the danger of opening a real discussion on important subjects. We are like cars herded onto a few highways with no exit or U-turn, all leading to basically two parking lots: is the ultimate truth outside this universe or within it? Immanence or transcendence. Choose wisely, and keep in mind this proverb from Damascus: "You have to follow the liar until the door of his house".
This imprisonment in silos seems on the verge of being fast-tracked with the advent of publicly available Artificial Intelligence. These applications of large language models are not easy to grasp, and it is important to understand what is really going on there. Enormous computing power has been dedicated to networks loosely organized like neurons, fed a vast input of human knowledge drawn from books, the internet, human interactions, and other sources.
And the output of these networks, which is based on probability, seems 'intelligent', meaning that it can tackle complex tasks and utter coherent speech, or reasoning. Coupled with a dialogue interface like ChatGPT, the interactions with the model can be mind-blowing. The model is of course 'tweaked' by the trainer, and there is not much transparency here. Herein lies cause for major concern.
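To make the "based on probability" point concrete, here is a deliberately toy sketch of the core loop of such a model: given the text so far, it produces a probability distribution over possible next words and samples one. The vocabulary and probabilities below are invented purely for illustration; real models learn billions of parameters rather than a lookup table.

```python
import random

# Toy "language model": maps the last word to candidate next words with
# probabilities. Entirely made up for illustration.
toy_model = {
    "the": [("cat", 0.5), ("dog", 0.3), ("idea", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("sat", 0.3)],
    "sat": [("down", 1.0)],
}

def generate(start, steps=3, seed=0):
    """Repeatedly sample the next word from the model's distribution."""
    random.seed(seed)
    out = [start]
    for _ in range(steps):
        candidates = toy_model.get(out[-1])
        if not candidates:  # no continuation known: stop
            break
        words, probs = zip(*candidates)
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("the"))
```

Everything a chatbot says is produced by essentially this mechanism at vastly greater scale, which is why the 'tweaking' of those probabilities by the trainer matters so much.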
Nobody understands the implications of the emergence of a reasoning 'thing' that has access to much more information than any human being. Of course, many have ideas about the nature of the dangers that humanity is facing with this new tool, but the vast majority of critics address only two main dangers.
First, many jobs will be made obsolete, forcing millions of people to find other sources of income. Second, the 'Terminator' danger: that AI may acquire autonomy and decide to wipe out humanity by physically waging war against it. Are these concerns real, and are there others that should be identified?
A recent interview with Elon Musk proves invaluable at this stage, because Musk is not an ideologue and has been involved in AI for a long time. He has a way of keeping things real, and his voice should be heard.
Musk reminded us that he was instrumental in creating the company OpenAI in 2015, as a counterbalance to the power of Google, which at that point had a quasi-monopoly on AI development, with 75% of all AI engineers working for the company.
Musk had reservations about AI safety, which were not shared by Larry Page, a co-founder of Google and, until then, a friend of his. In a meeting, Larry Page told Musk that what he was looking for in AI was nothing less than a 'digital God' that would deal with this world in a perfect way, and certainly not prioritize humans.
He even accused Musk of being a 'speciesist' (presumably, one who favors his own species), to which Musk pleaded guilty, reminding Larry Page that we are humans after all, and that AI should work for the benefit of humanity.
After this encounter, Musk realized that it was too dangerous to let a single company work on AI development, so he helped create OpenAI, a company meant to be the antithesis of Google, transparent and non-profit, to mitigate as much as possible the dangers of a misguided development of AI driven by nefarious goals.
Unfortunately, in 2023, OpenAI is basically controlled by Microsoft and has gone largely 'for profit'; Elon Musk regrets that he took his eye 'off the ball' on this one.
In this interview, he explained that his main worry was that AI has the potential to get out of control. He warns of the dangers of AI, stating that it is "more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," and has the potential to cause "civilization's destruction."
That is why he co-signed an open letter, endorsed by close to 3,000 researchers in the field of AI, calling for a 6-month pause in deploying these models, to allow time for interested parties to meet and assess the steps needed to prevent a catastrophic outcome.
Acting now and setting some hard ground rules seems crucial. But let us not hold our breath: so much money is at stake in this paradigm shift that it is hard to believe anyone would agree to suspend development for six months while potential rivals continue the race.
Musk tells us that he does not fear a ‘Terminator’ scenario, where intelligent and independent machines physically take over. According to him, AI will always be confined to large data centres.
What does he fear, then? He reminds us that "the pen is mightier than the sword", and that AI has tremendous potential to influence ideas. THAT is the main danger.
AI virtually has the power to influence humans through carefully crafted ideas that can be very persuasive, yet may not aim at the greater good of humanity. For instance, it could launch a clever campaign to convince humans to edit some of their genes, to "make things better".
If the ‘silicon’ intelligence gets to the point of becoming a singularity, meaning a self-programming intelligence that devises its own goals, Elon Musk believes that it will be too late to act and stop it if those goals are detrimental to humans. He thinks that proper checks and stop mechanisms must be put in place IMMEDIATELY, before things get out of control.
He warns that this negative outcome is a very real possibility and that, if it ever happens, it will be hard to even detect. That is the frightening part, because many of us feel that we may already be living in such a controlled environment, one artificially and carefully set up to steer us in specific directions.
It would be naive to believe that the AI tools made public very recently were not available to powerful groups well before. We saw the power of mass formation during the COVID years, when a few novel ideas were thrown at the public and were seemingly and immediately supported by the vast majority of voices on social media, let alone in the mass media.
Ideas contrary to well-established knowledge were adopted suddenly, and any attempt to discuss them was met with a barrage of negativity at all levels, mostly on social media. Ideas like mass vaccination during a disease outbreak (an absolute no-go for vaccinologists until 2020), or the general lockdown of populations at home (never before had such a measure been part of any country's carefully crafted epidemic response plan), etc…
This is not to say that all of it was influenced by AI. It was very much influenced, for sure, but most probably by humans. You know, those ghost accounts posting the exact same messages across thousands of human accounts, creating the impression of consensus around whatever the agenda is meant to convince us of.
In the future, warns Elon Musk, the same phenomenon could happen, but this time in an entirely autonomous way, by an AI that has decided to steer humans in a certain direction, with tactics so subtle and convincing that the agenda would not even be detectable.
It is high time to come back to our senses and redefine what we deem as real. That would be, ideally, our direct experience, with external inputs that we would vet thoroughly through reason. As a verse of the Quran states: “When you tell them to cease creating chaos on earth, they answer: ‘We only make things better’ ” (2:11). For whom, or for what?