The creation of autonomously acting, learning artifacts has reached a point where humans can no longer justly be held responsible for the actions of certain types of machines. Such machines learn during operation, thus continuously changing their original behaviour in ways the initial manufacturer cannot control. They act without effective supervision and have an epistemic advantage over humans: their extended sensory apparatus, superior processing speed, and perfect memory render it impossible for humans to supervise the machine's decisions in real time. We survey the techniques of artificial intelligence engineering, showing that the role of the programmer of such machines has shifted from that of a coder (who has complete control over the program in the machine) to that of a mere creator of software organisms which evolve and develop by themselves. We then discuss the problem of ascribing responsibility to such machines, trying to avoid the metaphysical pitfalls of the mind-body problem. We propose five criteria for purely legal responsibility, which accord both with the findings of contemporary analytic philosophy and with legal practice. We suggest that Stahl's (2006) concept of "quasi-responsibility" might also be a way to handle the responsibility gap.
Contribution to Book
From coder to creator: Responsibility issues in intelligent artifact design
Source Publication: Handbook of Research on Technoethics
Document Type: Book chapter
Publisher Statement: Access to external full text or publisher's version may require subscription.
Additional Information: ISBN of the source publication: 9781605660226
Full-text Version: Accepted Author Manuscript
Citation Information: Matthias, A. (2009). From coder to creator: Responsibility issues in intelligent artifact design. In Rocci Luppicini & Rebecca Adell (Eds.), Handbook of research on technoethics (pp. 635-650). New York: IGI Global. doi: 10.4018/978-1-60566-022-6.ch041