Can AI weapons distinguish civilians from combatants?

newsmeki Team

Can AI weapons minimize harm to civilians? Will they hold their fire if humans do not intervene? The central concern is that AI weapons let machines make lethal decisions by reducing humans to data.

General Atomics' Gambit drone is designed with AI and autonomous systems in mind.

According to Asia Times, such questions arise from the near impossibility of ensuring that AI weapons comply with the principles of international humanitarian law.

Much has been said and written about how artificial intelligence (AI) will revolutionize the world. But concern is growing that the use of AI weapons in war could, like climate change, lead to devastating consequences for humanity.

AI already has combat experience, but what about ethics?

In March 2020, a Turkish-made drone equipped with facial recognition took part in the conflict in Libya. Three years on, the operation of such "autonomous" AI weapons remains largely unregulated.

Can we trust AI weapons to distinguish civilians from combatants? Can they minimize harm to civilians? Without human intervention, would they be able to stand down when a decision calls for human judgment and emotion?

Without human control, it would be difficult to attribute responsibility for war crimes.

Ethically, allowing machines to make lethal decisions by reducing humans to data can be seen as digital dehumanization.

Strategically, the proliferation of AI weapons technology will make it easier for more countries to arm themselves with AI weapons.

And if accessibility improves and costs fall significantly, non-state actors could add AI weapons to their arsenals as well, with dire consequences.

AI arms race

Ukrainian service members learn to drop explosives from drones, May 12.

Russian President Putin said: "AI is the future, not only for Russia but also for all humanity. Whoever becomes a leader in this field will become a world hegemon."

An AI arms race is already quietly under way in many countries.

In 2023, the Pentagon asked Congress for $145 billion for the coming fiscal year to increase spending on critical technologies and strengthen partnerships with the private sector.

In December 2022, the Pentagon established the Office of Strategic Capital (OSC) to encourage private-sector investment in technologies for military use.

The OSC also solicits ideas from the private sector for next-generation military technologies, one of which has been described as "reliable autonomous AI."

In March, a US government official spoke about using large language models (LLMs) in information warfare.

ChatGPT has been criticized (and sued) for producing "hallucinations," or fabricated information; a more sophisticated LLM could let states sow such illusions among their adversaries.

A military-grade LLM could amplify fake news to the point of undermining a country's entire information ecosystem. By that standard, US Department of Defense officials have described ChatGPT as mere "village-level communication."

Meanwhile, China's military-civil fusion (MCF) policy serves a similar purpose to the US approach.

MCF first appeared in the late 1990s, but since Xi Jinping came to power it has increasingly aimed at forging a military-industrial complex.

Like the OSC, MCF pursues leadership in AI weapons.

In Israel, retired military officers have been quick to spin up technology companies specializing in dual-use civil-military technology.

The Israeli military has also admitted to using AI tools to select targets for airstrikes.

Unlike with nuclear proliferation, the world has made little progress in controlling the AI arms race, perhaps because AI is developing so rapidly.

The opportunity to control AI is rapidly shrinking.
