Humans may not always grasp why AIs act. Don’t panic
THERE is an old joke among pilots that says the ideal flight crew is a computer, a pilot and a dog. The computer’s job is to fly the plane. The pilot is there to feed the dog. And the dog’s job is to bite the pilot if he tries to touch the computer.
Handing complicated tasks to computers is not new. But a recent spurt of progress in machine learning, a subfield of artificial intelligence (AI), has enabled computers to tackle many problems which were previously beyond them. The result has been an AI boom, with computers moving into everything from medical diagnosis and insurance to self-driving cars.
There is a snag, though. Machine learning works by giving computers the ability to train themselves, which adapts their programming to the task at hand. People struggle to understand exactly how those self-written programs do what they do. When algorithms are handling trivial tasks, such as playing chess or recommending a film to watch, this “black box” problem can be safely ignored. When they are deciding who gets a loan, whether to grant parole or how to steer a car through a crowded city, it is potentially harmful. And when things go wrong—as, even with the best system, they inevitably will—then customers, regulators and the courts will want to know why.
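The "black box" point above can be made concrete. Below is a minimal sketch, using an invented toy loan-approval dataset, of a single artificial neuron trained by gradient descent. After training, the computer's "self-written program" amounts to nothing more than a few learned numbers, which is why inspecting it does not explain any individual decision.

```python
import math

# Hypothetical toy data. Each row: (income in $10k, years of credit
# history), label 1 = loan approved, 0 = denied.
data = [((3.0, 1.0), 0), ((8.0, 4.0), 1), ((5.0, 6.0), 1), ((2.0, 0.5), 0)]

w = [0.0, 0.0]  # weights the machine will learn for itself
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x):
    """Probability of approval via the logistic function."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# The computer "trains itself": repeatedly nudge the weights to
# reduce the error on each example.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# The entire decision procedure is now just these three numbers.
print(w, b)
```

Even for this two-input toy, the learned weights say little on their own about *why* a given applicant is approved; real systems have millions of such numbers, which is the heart of the interpretability problem.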
For some people this is a reason to hold back AI. France’s digital-economy minister, Mounir Mahjoubi, has said that the government should not use any algorithm whose decisions cannot be explained. But that is an overreaction. Despite their futuristic sheen, the difficulties posed by clever computers are not unprecedented. Society already has plenty of experience dealing with problematic black boxes; the most common are called human beings. Adding new ones will pose a challenge, but not an insuperable one. In response to the flaws in humans, society has evolved a series of workable coping mechanisms, called laws, rules and regulations. With a little tinkering, many of these can be applied to machines as well.
To be continued...