Anthony Furey: The OpenAI Drama and the Very Real Concerns About Artificial Intelligence

OpenAI CEO Sam Altman during a meeting at the Station F in Paris on May 26, 2023. (Joel Saget/AFP via Getty Images)
Anthony Furey
11/22/2023
Updated: 11/23/2023
Commentary

We are regularly warned about the future perils of artificial intelligence (AI). Those warnings were, in fact, the subplot to the recent drama surrounding OpenAI and its CEO, Sam Altman. It’s unclear how society plans to respond to them. But one thing is certain: we must take them seriously.

Our next major black swan event could very well be related to AI. A black swan event is typically described as a rare, unexpected event with major consequences. The 2008 subprime mortgage collapse, the 9/11 attacks, and the COVID-19 pandemic are all examples.

When you’re in the middle of one, it feels like there is no end to how bad things can get. Society always bounces back, though, aside from those who tragically lose everything. But there is always the worry that the next one will be worse: bigger, deadlier, longer, or even one that substantially alters humanity or ends us as a species entirely.

Most definitions of a black swan event note that, while unanticipated, it looks predictable in hindsight. That in turn means we should never laugh off serious concerns about potentially harmful events, no matter how far-fetched they seem.

My 2017 book “Pulse Attack: The Real Story Behind the Secret Weapon that Can Destroy North America” detailed one such possible event. I described how our increasingly electronic civilization hangs by a thread because of vulnerabilities in our electricity grid that, if left unchecked, could lead to blackouts of weeks, months, or even years that fundamentally change our lives.

This could be caused by a naturally occurring solar flare that fries our electricity grid, as actually happened in the 19th century; because we had so few electronics then, it didn’t disrupt our daily lives. It could also be caused by an attack from another country or a terror group, either by bombing our transformer stations en masse or by orchestrating a widespread electromagnetic pulse attack.

It all sounds like science fiction, or something out of a video game. It’s also unlikely. But it’s possible. For that reason, I concluded that we needed to invest in grid resilience measures to protect ourselves. (Since then, some regulatory bodies have begun to take these issues more seriously through studies, policies, and retrofits.)

The worst-case scenario fears around AI also sound very far-fetched, yet some very high-profile and credible individuals are telling us to take them seriously. We’d be well-advised to listen.

Earlier this year, in March, more than 1,000 leaders in the tech community signed an open letter warning about AI and calling for a pause on developing its most advanced systems. They cited “profound risks to society and humanity” as the cause for concern.

The letter was released by the Future of Life Institute and signatories included Elon Musk and Apple co-founder Steve Wozniak.

The letter adds that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict or reliably control.”

These concerns played a role in the high-profile ouster of Sam Altman from OpenAI on Nov. 17. (Altman rejoined the company a few days later under a different board of directors than the one that axed him.)

The Washington Post reported that the firing involved a “power struggle” between Altman’s push to further commercialize AI and those board members concerned with its safety. This, The Post reports, “mirrors a larger rift in the world of advanced AI, where a race to dominate the market has been accompanied by a near-religious movement to prevent AI from advancing beyond human control.”

The “real safety concern” some board members reportedly had with Altman was that he had not been forthcoming about how he was raising money from regimes in the Middle East that could potentially use OpenAI technology for mass surveillance and human rights abuses.

In response to the unfolding drama, Musk posted to social media that “given the risk and power of advanced AI, the public should be informed of why the [OpenAI] board felt they had to take such drastic action.”

In other words, Musk wants us all to reflect much more on the perils of this technology.

He’s right. We should. The supposed dangers of AI can all seem very over-the-top. But we certainly won’t be saying that if and when any of them come to pass.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.