Britain introduces the first ethical design standard for robots

Britain has introduced what appears to be the first ethical design standard for robots. When it comes to standards for robot behaviour in the age of AI, you will probably think of Isaac Asimov's famous Three Laws of Robotics: do not harm humans, obey human orders, and protect your own existence. Now the British Standards Institution (BSI) has officially released a guide designed to help designers build ethically sound robots, in effect placing a restraint on robot behaviour. The guide, titled "Guide to the ethical design and application of robots and robotic systems", reads like a health and safety manual, but the "bad behaviour" it warns against reads like excerpts from science fiction.

The guide points out that robot deception, robot addiction, and the possibility of self-learning systems exceeding their remit are all issues that robot manufacturers should consider.

At the "Social Robotics and AI" conference held in Oxfordshire, Alan Winfield, professor of robotics at the University of the West of England, said the guidelines represent "the first step towards embedding ethical values into robotics and AI". "As far as I know, this is the first published standard for the ethical design of robots," Winfield said. "It is a little more sophisticated than Asimov's Three Laws; it sets out how to carry out an ethical risk assessment of a robot."

The guide begins by putting forward some broad ethical principles: robots should not be designed solely or primarily to kill or injure humans; humans, not robots, are the responsible agents; and it should be possible to find out who is responsible for any robot and its behaviour.

The guidelines then highlight a range of more contentious issues, such as the emotional bonds that form between humans and robots, especially when a robot is designed to interact with children or the elderly.

Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield, said such robots might deceive us emotionally without intending to. "A recent study looked at the use of small robots in a kindergarten," he said. "The children loved the robots and became attached to them. But when interviewed afterwards, the children clearly believed the robots were more cognitively capable than the family pet."

The guide suggests that robot designers should aim for transparency, but scientists say that is easier said than done. "The problem with AI systems, and deep learning systems in particular, is that you cannot know why they make the decisions they do," Winfield said. Deep learning systems are not programmed to perform a task in a fixed way; instead, they try millions of times during training and gradually develop their own strategy for carrying out the task. Sometimes those strategies go beyond anything their creators expected, to the point that the creators themselves cannot understand them. The sketch at the end of this article illustrates the point.

The guide also addresses the risk of robots discriminating against humans.
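To make the opacity point concrete, here is a minimal sketch, not taken from the article or the BSI guide, of trial-and-error learning. A tiny network is fitted to the XOR function by random hill-climbing, standing in for the "millions of attempts" of real deep learning, and the learned "strategy" ends up as an array of numbers that explains nothing about why any individual decision is made. All names and parameters here are illustrative assumptions.

```python
# Assumed toy example (not from the BSI guide): why learned behaviour is
# opaque. A tiny network is fitted to XOR by random hill-climbing --
# thousands of trial-and-error attempts -- and the final "strategy" is
# just a matrix of numbers, not a human-readable rule.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)   # hidden layer
    return h @ W2 + b2         # scalar output

def loss(params):
    preds = np.array([forward(params, x) for x in X])
    return float(np.mean((preds - y) ** 2))

# Start from random parameters: 2 inputs -> 4 hidden units -> 1 output.
params = [rng.normal(size=(2, 4)), rng.normal(size=4),
          rng.normal(size=4), rng.normal()]
best = loss(params)

# "Try many times": keep any random perturbation that reduces the error.
for _ in range(20000):
    trial = [p + 0.1 * rng.normal(size=np.shape(p)) for p in params]
    err = loss(trial)
    if err < best:
        params, best = trial, err

print("final mean-squared error:", best)
print("learned hidden-layer weights (the opaque 'strategy'):")
print(params[0])  # numbers, not reasons
```

The last print is the point of the exercise: even in this toy system, the behaviour lives entirely in the weight values, and nothing in them says "this is how XOR is computed". Scaled up to millions of parameters, that is the transparency problem the guide gestures at.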