Do no harm, don’t discriminate: official guidance issued on robot ethics

A professor of robotics in England says, “The problem with AI systems right now, especially these deep learning systems, is that it’s impossible to know why they make the decisions they do.” If this is true, then robots are already out of control. - Technocracy News Editor

The Guardian has the story:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

View article →