Safety Intelligence and Legal Machine Language: Do we need the Three Laws of Robotics?
The aim of this chapter is to offer a fundamental framework for a legal system focused on safety issues involving New Generation Robots (NGRs). This framework is offered in response to the lack of clarity regarding robot safety guidelines despite the impending development and release of tens of thousands of robots into workplaces and homes around the world. The authors propose a Safety Intelligence (SI) concept for NGRs that addresses issues tied to open-texture risk for robots that will have a relatively high level of autonomy in interactions with humans. We express doubt that Asimov’s Three Laws of Robotics model will be a suitable foundation for creating an artificial moral agency capable of ensuring robot safety. Instead, we offer an alternative Legal Machine Language (LML) model that utilizes non-verbal information from robot sensors and actuators to protect both humans and robots. To implement an LML model, roboticists must design a biomorphic nerve-reflex system, and legal scholars must define safety content for robots having a certain degree of “self-awareness.”
Yueh-Hsuan Weng, Chien-Hsun Chen and Cheun-Tsai Sun, “Safety Intelligence and Legal Machine Language: Do We Need the Three Laws of Robotics?”, in Yoshihiko Takahashi (Ed.), Service Robot Applications, Vienna: InTech Education & Publishing, August 2008. ISBN 978-953-7619-00-8. Available at: http://works.bepress.com/weng_yueh_hsuan/3