Geneva UN Summit on AI Explores Globalist Interventions

A smartphone displaying the logo of the artificial intelligence OpenAI research laboratory, in Manta, Italy, on March 31, 2023. (Marco Bertorello/AFP via Getty Images)
7/8/2023

The Swiss government and the United Nations convened a summit on artificial intelligence (AI) in Geneva on July 6, with the goal of garnering advice on how to control the spread of AI in a globalist framework.

The U.N. summit, titled the “AI for Good Global Summit,” drew 3,000 participants, including experts from companies such as Microsoft and Amazon, academics, and representatives of 40 U.N. sister agencies.

“This technology is moving fast,” said Doreen Bogdan-Martin, head of the International Telecommunication Union (ITU), the U.N.’s information and communications technology agency.

“It’s a real opportunity for the world’s leading voices on AI to come together on the global stage and to address governance issues,” she told reporters. “Doing nothing is not an option. Humanity is dependent upon it. So we have to engage and try and ensure a responsible future with AI.”

She said the summit will explore frameworks and guardrails for safe AI use and will examine proposals for global U.N. intervention to control how AI is used.

In the United States, President Joe Biden is seeking to regulate the technology’s spread through an initiative headed by Vice President Kamala Harris.

White House chief of staff Jeff Zients’ office is preparing a set of actions for the federal government regarding the use of AI, according to the White House.

“We’re kind of in a perfect storm of suddenly having this powerful new technology—I don’t think it’s super-intelligent—being spread very widely and empowered in our lives, and we’re really not prepared,” AI entrepreneur Gary Marcus told AFP ahead of the Geneva meeting.

“We’re at a critical moment in history when we can either get this right and build the global governance we need, or get it wrong and not succeed and wind up in a bad place where a few companies control the fates of many, many people without sufficient forethought,” he said.

A panel of humanoid robots appeared before journalists on Friday during the summit, in the world’s first human-robot press conference.

The AI-powered robots even claimed that robots could run the world better than humans.

These robots were developed by Hanson Robotics and Japan’s Osaka University, among other developers.

The "Palais des Nations", which houses the United Nations Offices, is seen at the end of the flag-lined front lawn in Geneva on Sept. 4, 2018. (Fabrice Coffrini/AFP via Getty Images)

U.N. agencies are already using AI. The World Food Program’s HungerMap project, for example, pulls together data to identify areas sliding toward hunger, and the agency is also developing remote-controlled trucks to deliver emergency aid in danger zones.

The World Health Organization is working on a benchmarking system to ensure the accuracy of AI disease diagnoses.

The ITU brings together 193 countries and over 900 organizations including universities and companies like Google and Chinese state-owned Huawei Technologies. It allocates global radio spectrum and satellite orbits and is involved with setting standards for artificial intelligence.

US Legislation on AI and Nuclear Weapons

In the United States, lawmakers proposed legislation in June to prevent AI from taking control of the U.S. nuclear arsenal should the technology independently conclude that an attack is warranted.

Rep. Ted Lieu (D-Calif.) has proposed an amendment to the defense policy measure for 2024 that would require the Pentagon to implement a system that ensures that “meaningful human control is required to launch any nuclear weapon.”

The amendment would require that a human make the final decision on launching a nuclear attack and selecting a target whenever AI is involved in nuclear weapons deployment.

Bipartisan support for Mr. Lieu’s amendment indicates that legislators are increasingly concerned that AI could make such decisions as rapidly as it evaluates a situation.

Rep. Stephen Lynch (D-Mass.) also offered an amendment in February that “requires the Secretary of Defense, in carrying out any program, project, or other activity involving the use of artificial intelligence or autonomous technology, to adhere to the best practices set forth in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy issued by the Biden Administration in February 2023.”

The non-binding guidance cited in Mr. Lynch’s amendment states, among other things, that nations should “maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”

“States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior,” it reads. “States should also implement other appropriate safeguards to mitigate risks of serious failures.”

According to the Government Accountability Office (GAO), the Department of Defense (DOD) is pursuing advanced AI capabilities. The GAO recommends that the DOD establish AI acquisition guidance before further expanding its use of the technology.

The GAO issued these non-binding recommendations to the three military branches.

Savannah Hulsey Pointer and Reuters contributed to this report.