Paper Title
Next Wave Artificial Intelligence: Robust, Explainable, Adaptable, Ethical, and Accountable
Paper Authors
Paper Abstract
The history of AI has included several "waves" of ideas. The first wave, from the mid-1950s to the 1980s, focused on logic and symbolic hand-encoded representations of knowledge, the foundations of so-called "expert systems". The second wave, starting in the 1990s, focused on statistics and machine learning, in which, instead of hand-programming rules for behavior, programmers constructed "statistical learning algorithms" that could be trained on large datasets. In the most recent wave, research in AI has largely focused on deep (i.e., many-layered) neural networks, which are loosely inspired by the brain and trained by "deep learning" methods. However, while deep neural networks have led to many successes and new capabilities in computer vision, speech recognition, language processing, game-playing, and robotics, their potential for broad application remains limited by several factors. One concerning limitation is that even the most successful of today's AI systems suffer from brittleness: they can fail in unexpected ways when faced with situations that differ sufficiently from the ones they were trained on. This lack of robustness also appears in the vulnerability of AI systems to adversarial attacks, in which an adversary can subtly manipulate data in a way that guarantees a specific wrong answer or action from an AI system. AI systems can also absorb biases, based on gender, race, or other factors, from their training data and further magnify these biases in their subsequent decision-making. Taken together, these various limitations have prevented AI systems such as automatic medical diagnosis or autonomous vehicles from being sufficiently trustworthy for wide deployment. The massive proliferation of AI across society will require radically new ideas to yield technology that does not sacrifice our productivity, our quality of life, or our values.
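To make the abstract's notion of an adversarial attack concrete, below is a minimal, illustrative sketch (not from the paper) of a targeted FGSM-style perturbation in PyTorch: the attacker nudges the input against the gradient of the loss toward a chosen target class, so that a nearly imperceptible change pushes the model toward a specific wrong answer. The toy model, input size, target class, and epsilon budget are all assumptions for illustration, and on this untrained model a single step may not actually flip the prediction.

```python
# Illustrative sketch only: targeted FGSM-style adversarial perturbation
# against a toy, untrained classifier. All sizes and values are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy image classifier: 28x28 grayscale input, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28)   # a "clean" input with values in [0, 1]
target = torch.tensor([3])     # class the attacker wants the model to output
epsilon = 0.1                  # perturbation budget (a barely visible change)

# Compute the gradient of the loss w.r.t. the *input*, not the weights.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), target)
loss.backward()

# Step the input *against* the gradient sign to lower the loss for `target`,
# then clamp back to the valid input range.
x_adv = (x_adv - epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```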