The Economist: Worries over the Weaponisation of AI

September 7th, 2019 • The Economist

The last article in this issue's Leaders section, titled "Artificial intelligence and war", concerns worries over the future weaponisation of AI, focusing mainly on the two great powers, America and China. American companies have made great strides and breakthroughs in sophisticated strategy games, while Chinese companies hold an advantage in access to big data. AI-enabled weapons are more capable than humans, but they could also bring worrying consequences, such as an AI system launching missile strikes in error and leaving humans no time to deliberate, or AI systems being hacked and manipulated.

The Economist, September 7th-13th 2019.

Taking the nuclear bomb of the 20th century as an example, the article argues that the absence of a nuclear catastrophe in recent decades rests on three things: deterrence among the great powers, arms control, and sound safety measures. Owing to the particular nature of AI technology, however, all three are difficult to replicate for AI. So far, humanity has not taken the weaponisation of AI seriously enough; leaving warfare to computers could well bring disaster upon the world.

Artificial intelligence and war

Mind control

As computers play a bigger role in warfare, the dangers to humans rise

The contest between China and America, the world’s two superpowers, has many dimensions, from skirmishes over steel quotas to squabbles over student visas. One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle. China frets that America has an edge thanks to the breakthroughs of Western companies, such as their successes in sophisticated strategy games. America fears that China’s autocrats have free access to copious data and can enlist local tech firms in national service. Neither side wants to fall behind. As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”

AI-enabled weapons may offer superhuman speed and precision (see article). But they also have the potential to upset the balance of power. In order to gain a military advantage, the temptation for armies will be to allow them not only to recommend decisions but also to give orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy and in ways that are not fully understood by its operators. On top of that, AI systems can be hacked, and tricked with manipulated data.

During the 20th century the world eventually found a way to manage a paradigm shift in military technology, the emergence of the nuclear bomb. A global disaster was avoided through a combination of three approaches: deterrence, arms control and safety measures. Many are looking to this template for AI. Unfortunately it is only of limited use—and not just because the technology is new.

Deterrence rested on the consensus that if nuclear bombs were used, they would pose catastrophic risks to both sides. But the threat posed by AI is less lurid and less clear. It might aid surprise attacks or confound them, and the death toll could range from none to millions. Likewise, cold-war arms control rested on transparency, the ability to know with some confidence what the other side was up to. Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for both sides to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union.

That leaves the last control—safety. Nuclear arsenals involve complex systems in which the risk of accidents is high. Protocols have been developed to ensure weapons cannot be used without authorisation, such as fail-safe mechanisms that mean bombs do not detonate if they are dropped prematurely. More thinking is required on how analogous measures might apply to AI systems, particularly those entrusted with orchestrating military forces across a chaotic and foggy battlefield.

The principles that these rules must embody are straightforward. AI will have to reflect human values, such as fairness, and be resilient to attempts to fool it. Crucially, to be safe, AI weapons will have to be as explainable as possible so that humans can understand how they take decisions. Many Western companies developing AI for commercial purposes, including self-driving cars and facial-recognition software, are already testing their AI systems to ensure that they exhibit some of these characteristics. The stakes are higher in the military sphere, where deception is routine and the pace is frenzied. Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage. So far there is little sign that the dangers have been taken seriously enough—although the Pentagon’s AI centre is hiring an ethicist. Leaving warfare to computers will make the world a more dangerous place.■

This article appeared in the Leaders section of the print edition under the headline “Mind control”
