Technology Convergence: Governance and Gaps in the Era of Enhancement (or “ZombAIs ante Portas!”)
Cite as: S H E Harmon and W Abel, "Technology Convergence: Governance and Gaps in
the Era of Enhancement (or “ZombAIs ante Portas!”)", (2010) 7:2 SCRIPTed
© Shawn H.E. Harmon and Wiebke Abel 2010.
This work is licensed under a Creative Commons Licence.
1. Introduction
The term “zombie” has been defined variously as “a mechanically driven human corpse,”1 “a supernatural power or spell that … can enter into and reanimate a corpse, and a corpse revived in this way”,2 and “a person who is or appears to be lifeless, apathetic, or totally lacking in independent judgment; automaton”.3 Drawing on these definitions, a “zombAI” might be described as “an otherwise inert artificial intelligence-controlled human being.” Just as human zombies are very real,4 so zombAIs may (in future) become a reality. But what issues are thrown up by zombAIs? Is the law equipped to deal with them? What sort of reforms might be needed to govern them appropriately? This paper undertakes a very preliminary – indeed speculative – exploration of some of the issues implicated by the convergence of several high technologies and their potential to create zombAIs. After framing the assessment via a fictional scenario, we consider the state and trajectory of medical biotechnologies, nanotechnologies and artificial intelligence (AI), arguing that their convergence will lead to zombAIs and the scenario offered. Next, we identify some key social, ethical and legal issues raised by the scenario before querying the capacity of existing regulatory instruments to address these issues (i.e. to address the development of moral machines and intelligent implants). We conclude that there is a need for more “joined-up” regulation to govern such convergences, which will see the science fiction of today (both utopian and dystopian) become a reality.
2. ZombAIs? Convergence, Dynamism and Possibilities
It takes no specially gifted imagination to come up with a credible depiction of our zombAIs within a perfectly mundane scenario. Consider the following:
A soaring, cacophonous metropolis. A tangle of congested streets. A chaotic intersection, innumerable vehicles negotiating a complex dance of ill-tempered commutership. A serene, almost somnambulant, jet-haired woman behind the wheel of an expensive hybrid coupe. Time is precious: she must interview applicants, revise projects, contact clients, order groceries, and collect the “little prince” from school. She has to compartmentalise her reality, or be swept away by it. So she gives herself away to her implanted AI.
Frenetic sidewalks, choked with pedestrians eager to get somewhere. Feeble urban trees bow limply in the dead heat. Traffic light turns green. Vehicles launch into motion. A man steps off the curb against his signal, distracted by the internet data streaming across the monocle patched over one eye. An expensive coupe brushes him aside as it accelerates away to the next smog-blurred intersection. Incandescent pain. Pedestrians shout after the vehicle and transmit the registration number to the authorities.
Police. Kevlar. High-yield, AI-driven, nano-production firearms enzyme-sealed to palms sweaty with anticipation. A surly thud. A boy opens the door, squeals for his mother. A handsome woman with jet hair.
“You own specified vehicle?”
“Driving today on specified street at specified time?”
“You’re under arrest.” (Threatening poses adopted.)
“Hit and run. Terrible injuries.”
“It wasn’t me!”
“Witnesses saw you.”
“It wasn’t me!”
“Tell it to the judge, ma’am.”
“I turned my vehicle over to my AI.”
“You’re aware that your full consciousness must be directed at operating any and all motorised vehicles; it’s a condition of licensing that you cannot drive as a ZombAI.”
“The techno-physician said the AI was more than capable.”
“Apparently not, or you set your parameters inappropriately; either way, you’re coming with us, and we’ll need your implant.”
“Fine, extract the implant until the matter is resolved, but the department must guarantee that it will not be recorded, duplicated or wiped.”
While some may question whether this scenario will ever be possible, we argue that current capabilities and general trajectories in a range of high technologies make it not only possible, but quite probable – not so much “around the corner”, but “down the road”. In particular, developments in medical biotechnologies, nanotechnologies, and AI research are all moving us toward this or a similar potentiality.
Medical biotechnology comprises that collection of technologies, techniques and practices which implicate animal and/or human physiology and therefore are (primarily) aimed at medical innovation (as compared to green biotechnologies which are aimed at the environment and include agro-biotech and GM crops). Molecular biology and synthetic biology are particularly relevant.5 The objectives of the former are to understand biological functions, whereas the objectives of the latter are to design simple, bespoke organisms with carefully chosen components that can then be mass produced and which can serve as drug-delivery systems and bio-based medicines (smart drugs), and bio- or hybrid biological computational devices.6 Work has been ongoing to create a bacterium that produces amino acids, and to create an artificial organism capable of producing proteins,7 and, just a few months ago, a living artificial organism was indeed created using synthetic biology.8
Nanoscience is an interdisciplinary undertaking drawing on biotechnology, chemistry, biochemistry, engineering, physics, and physical/material sciences which is performed on an atomic, molecular or macromolecular scale (“nanoscale”) so as to produce nanotechnologies, including materials, devices, and systems, with fundamentally new properties or functions resulting from their scale. Nanomaterials are not simply smaller versions of their macroscale counterparts, but are materials with new and unique properties. Such downsized structures exhibit altered electronic, magnetic, mobility, mechanical, optical, (chemically) reactive, solubility, or strength properties, leading to novel effects.9 It is believed that nanotechnologies will increasingly drive advancements in fields such as materials science, medicine, pharmaceuticals, cosmetics, biotechnology, chemical engineering, electronics, information technologies, optics, energy production, environmental sciences, food and processed food production, and more, with particular implications for pollution reduction and resource conservation.10
In the health setting, it is already possible to combine biological units with manufactured bioinorganic nanostructures to create products (e.g. biosensors) or interfaces (e.g. implanted health monitors, dose regulators, etc), and work is underway to create processes such as increased catalytic reactions and biochemical and pharmaceutical separations, and products such as smart bandages, artificial skin, next-generation pacemakers, some of which will soon be at the clinical testing stage.11 In other fields, nano-based stay-fresh packaging and nano-calcium enriched milk are already on the market,12 and a single carbon nanotube radio has already been created.13 Additionally, work is being undertaken to develop:14
improved (that is stain or bacteria resistant) sports goods and clothing;
improved cosmetics and dietary supplements;
microchips with dramatically increased processing capacity;
new computational pathways such as quantum dots and quantum wires;
ultra-fast switches and single-electron-event-controlled devices;
artificial photosynthesis in regularly used materials such as paints;
tiny, living viral batteries based on mollusc physiology;
solar cells and molecular motors; and
strong composites for construction, automotive and aeronautical applications.
There has also been speculation about the ability to create nano-assemblers (either autonomous or non-autonomous) which construct objects atom-by-atom from the bottom up using carbon feed, air, or waste.15
AI is described as a collection of activities implicating the science and engineering of making intelligent machines. Involving a variety of technical pursuits (e.g. robotics, natural language processing, machine learning, etc), the general objectives of the field are to understand the mechanisms of human thought, and to create systems that think and act like humans or (alternatively) think and act rationally (i.e. create machines that match or exceed human intelligence for specific tasks). Specific ongoing undertakings include the creation of digital personal servants to perform repetitive tasks such as bookings,16 the creation of machines that understand and respond to complex questions from a human voice and thus might compete with humans,17 the creation of functioning brains on supercomputers,18 and the creation of brain-computer interfaces which either assist, augment or repair human cognitive or sensory-motor functions,19 or externalise brain information so as to control external devices.20
Several phenomena are apparent from the above. First, the pace of innovation in these and other fields is accelerating. In the computational sector, Moore’s Law – that the number of transistors that can be placed inexpensively on a circuit will double every two years21 – has proved both accurate and applicable to a range of operations, including processing speed, memory capacity, and the number and size of pixels supported by an instrument, and a similar pace is being experienced in the medical biotechnology setting.22 Second, these fields are increasingly converging, with knowledge from one field being applied in another and breakthroughs influencing practices and possibilities across fields. For example, AI technologies are closely allied to ICTs (e.g. personal computers, mobile phones, laptops, etc), which increasingly pervade our lives. New developments are causing these and related devices to become part of our bodies, either because we wear them or because they are implanted,23 and there exists a long history of implantable medical devices.24 Convergence is also exemplified by bioinformatics, which pulls together biology, computer science and information technology into a single discipline so as to gain insights from which unifying principles can be discerned, largely for the benefit of advancing the understanding of human functioning.25 Importantly, this technical convergence is being mirrored by institutional, industry, and economic convergence.26 Third, great leaps are being made with respect to the interface between the human person and the machine. For example, it has been argued that the physical interface of the future will likely be a virtual and augmented reality approach that cocoons the individual in information:
For the software designer, the task is to bring the critical system variables to the forefront and allow their emergent properties to become constraints and contexts for the “sea-lane” that the controller “pilots” their craft (system) through. This conversion or transformation of complex alpha-numeric and analogue information into unified graphic distillations represents the software design challenge of the twenty-first century.27
Also relevant to interfacing is ongoing work aimed at mixing human neural processing, through implants, with machine intelligence and the internet, and the growth of biological brain tissue on a cultured network within a robot body.28
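The compounding effect of the doubling described by Moore’s Law above is easy to make concrete. The following sketch (ours, not drawn from any source cited here; the 1971 baseline of 2,300 transistors for an early microprocessor is an assumption added purely for illustration) projects a transistor count forward under a fixed doubling period:

```python
def projected_transistors(base_count: int, base_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Project a component count forward from a base year, assuming a
    doubling every `doubling_period` years (the Moore's Law heuristic)."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# Illustrative only: 40 years at a two-year doubling period is ~20
# doublings, i.e. roughly a million-fold increase over the baseline.
count_2011 = projected_transistors(2_300, 1971, 2011)
```

Twenty doublings multiply the baseline by about one million, which is the scale of growth the article invokes when it notes that the same pace applies to processing speed and memory capacity.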
Ultimately, then, the cooperative evolution of medical biotechnology, nanotechnologies, and AI research will make implantable information processors, AIs and other smart enhancements a reality, and stand-alone synthetic intelligences may well be ubiquitous thereafter. In short, our zombAI – our professional woman who turns the operation of her vehicle over to her implanted AI while she performs other tasks through an implanted informational interface – is not so fanciful. And science fiction writers – who foresaw the Internet and coined such iconic terms as “cyberspace” – would certainly be cautious about dismissing it. Arthur C Clarke once said that, “if an elderly but distinguished scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.”29
3. Problems? Risk, Governance and Gaps
We live in a risk society, that is, a society which sees (and fears) risks everywhere and which expends a great deal of social and legal energy on finding, discussing, and trying to manage those risks. The risk society has been defined as “a society increasingly preoccupied with the future (and also with safety), which generates the notion of risk.”30 While societies have always been exposed to risks and have therefore had to develop mechanisms to deal with them, the recent modernisation and technologisation of societies has produced new types of risk: man-made or manufactured risks, created at least in part by human activity, as opposed to external or natural catastrophes.31 A risk society is mainly concerned with these manufactured risks, and the convergence of the above technologies generates precisely such risks. The most obvious of the myriad risks arising from the advancement and convergence of the above technologies include (from the most likely and immediate to the less so):
the loss of human identity and value through artificial enhancement of humans;
the potential threat to human dignity through artificial enhancement of humans;
the alteration of human behaviour and nature through constant and both explicit and subtle monitoring of people by commercial institutions and/or governments;
the spread of miniaturised weapons and surveillance devices;
the development of a silicon brain which exhibits consciousness, demands rights, or, being far more computationally efficient than us, begins to control systems and subjugate humans;
the evolution of artificial synthetic viruses that can pass from software to biological physiologies, creating harmful symptoms in both;
the realisation of manmade human-like entities with human-like capabilities (and foibles), including a lower class of “synthetic humans” for war or labour;
the release of free-range self-replicating nanites that destroy habitat and impinge on human survivability.
While these are (primarily dystopian) future risks, the unfolding convergence raises a number of issues that are perhaps of more immediate concern, issues which encompass the socio-ethical and the legal/regulatory. With respect to the former, one might ask the following questions:
Who is paying for this research, who will benefit, and how are findings reported/distributed?
How might we respond (individually and socially) to new organic or semi-organic (implantable) intelligences?
How might such organisms challenge the basis of the ethics and human rights paradigms on which we base science research governance (i.e. can core principles be extended to them)?
Can self-aware artificial intelligences be owned/commodified?
With respect to governance, the following questions are obvious, and some of them are rather pressing:
Under what circumstances should prior ethics review be necessary (in, for example, the computation, nanotechnology, or AI research setting)?32
Where should ethical review be undertaken and are reviewers appropriately trained to deal with the complexity of these joined-up sciences?33
How do you create an integrated governance regime for disparate disciplines working in collaboration, and who oversees or monitors this work?
What legal/ethical responsibilities do these new entities bear, and how might we impose and enforce them?
The relevance of ethical rules guiding technological advancement was highlighted by science fiction writer Isaac Asimov more than 50 years ago through his articulation of the Three Laws of Robotics.34 While the true scientific value of Asimov’s work has been the subject of academic debate,35 it highlights the necessity for an ethical framework guiding research into advanced technologies, particularly in light of the fact that scientific imagination and technical capabilities are running (far) ahead of society’s ability to collectively understand, debate, and determine the propriety of certain techniques and directions of inquiry; they are also outpacing legislative and regulatory attempts to restrain or direct them. Nissenbaum points out that:
In such cases, we cannot simply align the world with the values and principles we adhered to prior to the advent of technological challenges. Rather, we must grapple with the new demands that changes wrought by the presence and use of (information) technology have placed on values and moral principles.36
However, regulation is too often hurried and reactionary (rather than truly reflexive), flowing from intuitive responses rather than considered socio-ethical dialogue, with the result that governance instruments (almost always late, if drafted at all) are incomplete and ill-suited to the demands made on them. On the whole, the law is not converging and transforming in a manner that permits it to lead these technologies, which may dramatically reduce our ability to govern science trajectories appropriately, and, more importantly, to ensure that everyone benefits equitably.37
Returning to the scenario, a number of important broad social, legal and technical questions come to mind. For example:
How do zombAIs advance/challenge human dignity?
How do zombAIs transform/challenge human identity (conceptions of the human self)?
Should every person be allowed to be implanted/enhanced, and what relevance, if any, should mental capacity and past criminal activity have?
What measures are appropriate for ensuring equality (and avoiding a lower class of Analogs or Non-Mods)? Can we avoid the creation of new divisions of man and society (i.e. new forms of discrimination)?
How can we protect against tampering with internally-held data, and how might we characterise the act of doing so? Is it a data protection infraction, a privacy invasion, an assault and battery, or something more intimate?
Now let us think more narrowly, and consider the position of the parties in the civil and criminal proceedings which might follow. Obviously, our (imagined) zombAI implants are quite qualitatively different from existing implants, and will therefore raise unique questions, including the following:
How should we legally separate the Carrier (the woman) from the Actor (the AI inside her and operating the vehicle) for purposes of responsibility? This is an important conceptual issue which, depending on how it is solved, will impose variable evidentiary obligations.38
Who are the most appropriate parties to any (civil) action? Who else might bear legal responsibility to compensate the injured claimant or the defendant Carrier in case of malfunction? Other relevant parties might be the AI manufacturer,39 the AI programmer,40 or the techno-physician who installed and wet-wired the implant.
How might the routine handling of implants, and demands for their extraction for a variety of reasons, normalise affronts to bodily integrity? Under what circumstances should the defendant Carrier be forced to consent to the extraction of the implant,41 which may be integral to a number of domestic and professional undertakings and may contain all manner of data which could become subject to scrutiny?42
How are human rights of privacy and of not giving evidence against oneself implicated and protected?
How these questions might be answered is wide open to debate. As the number of questions posed here suggests, our (notional) zombAIs are (or will be) confronted largely by a policy vacuum. Having said that, important general principles can be drawn from existing legal instruments, particularly those in the realm of soft international law. Some key principles include:
human dignity and justice;43
personal integrity and autonomy;44
privacy and personal data protection;45
precaution and proportionality.46
While the specific nature of these concepts may vary across cultures, they have resulted in shared practical legal mechanisms such as (1) risk assessment exercises and accepted risk management strategies, (2) prior informed consent to any interference with physical integrity, and (3) confidentiality of personal information and a range of data protection practices and procedures. Each of these has been applied in a variety of settings that impact on human wellbeing, ranging from the environmental, to the medical, to the informational.
For example, consider the landmark ruling of the German Federal Constitutional Court (BVerfG) dealing with the constitutionality of online searches of computers by police and secret services.47 In that case, the Court held that citizens have a constitutional right to the confidentiality and integrity of information technology systems. Hence, sensitive digital data stored on ICT devices is protected against access by the state. In arriving at this conclusion, the BVerfG acknowledged that information technology has gained an importance and significance for the personality and development of individuals that could not have been foreseen, and it found that ICTs are now omnipresent and their use is central to the lives of many citizens. It established that these developments entail new dangers for the personality, privacy, and data protection rights of citizens, and that existing legal frameworks are insufficient to protect citizens adequately from these new dangers.
In short, the German Court developed a new constitutional right of privacy which applies to all information technology systems that can be used to store sensitive and private data (e.g. personal computers, laptops, mobile phones, mp3 players, digital calendars) and which would provide an insight into the personality of the user.48 In creating this right – unique in its scope and technical awareness – the Court showed a true engagement with the technologies implicated. For our scenario, the question would be whether this data protection and privacy principle would also apply to the implant, i.e. would it be considered an information technology system? If so, the police would not be able to seize the implant and read the data. While a consideration of this question is beyond the scope of this article, the judgment highlights the fact that the increased reliance on new technologies and their integration into everyday life gives rise to new legal challenges that require new legal frameworks and principles.
However, the development and application of practical rights and principles across technological settings remains very uneven, which means that existing regulatory mechanisms will be insufficient. Returning to the German example, it has been argued that the legal system is not adequately prepared to tackle the new and complex issues thrown up by medical biotechnology because practitioner and judicial training has not transformed in keeping with the social context:
If the public’s expectations of law and the legal system are sometimes reminiscent of secularised expectations of salvation, then we must ask whether lawyers should not be able to make use of specialised training that puts them in a position to assess the contents of biotechnology. In Germany, lawyers are not trained with this in mind. This also used to be the case in the field of environmental law. On the contrary, lawyers who researched and taught this “far-out” area were viewed with suspicion … Even when the renowned American EINSHAC Institute in Washington, which has advised presidents and has given itself the task of bringing judges and scientists together in order to solve cases … invites the German Constitutional Court to participate, these invitations typically go unanswered. This is not because of ignorance, but rather because this kind of “training” is unbefitting and still not well known in Germany. However, knowledge is an essential factor in creating legal security and monitoring unknown dangers.49
Given the above, we reiterate that early consideration of the myriad legal issues raised by technology convergence scenarios is warranted if we hope to fashion legal regimes that are flexible and absorptive, and suggest that legal training must recognise the pressures created by convergent technologies and respond accordingly.
Science is not neutral, not linear, and not predictable. The scientific and technological outputs we generate are shaped by our personal values, political institutions, financial arrangements, social trends, and by serendipity. However, if people (particularly those with resources) want (to be) zombAIs in order to cope with the demands of modern life, then zombAIs, or a similar technological solution, will be sought and will, most likely, be achieved. Importantly, this is not intended to be a call-to-arms. We make no value judgment on zombAIs, and our aim is not to highlight pitfalls so that zombAIs and the like might be circumvented. Rather, our intention is only to highlight issues so that we, as a society, might give serious and sober forethought to the governance questions they raise, and think creatively about how they might reasonably be addressed, so that zombAIs, if and when they appear, can be integrated into our legal frameworks (and society) as rationally, as comprehensively, and as justly as possible. Thus, while much about this article is speculative, it is not a purely fanciful exercise. It is intended to raise serious issues and highlight areas where the law will have to adapt, sometimes dramatically.
* Research Fellow, Innogen, ESRC Centre for Social and Economic Research on Innovation in Genomics, Research Fellow in Law and Medical Technologies, SCRIPT, AHRC Centre for Research on Intellectual Property and Technology Law, both at the University of Edinburgh; Editor-in-Chief, SCRIPTed – A Journal of Law, Technology & Society.
† PhD Candidate, SCRIPT, AHRC Centre for Research on Intellectual Property and Technology Law, at the University of Edinburgh; Visiting Scholar, Leibniz Centre for Law, University of Amsterdam.
1 Duhaime Legal Dictionary, available at http://www.duhaime.org/LegalDictionary/Z/Zombi.aspx (accessed 22 Apr 2010).
4 See the contribution of D Inglis, “The Zombie from Myth to Reality: Wade Davis, Academic Scandal and the Limits of the Real” in this volume.
5 The former is the study of biology at a molecular level, drawing particularly from the fields of biochemistry and genomics, to understand cell and virus function. The latter is the design and construction of artificial biological systems by having reference to engineering and computational disciplines that deal with complex systems.
6 In the green setting, activities are aimed at creating environmental clean-up agents, and corporations see the potential for billion-dollar organisms.
7 See J Randerson, “Scientists a Step Nearer to Creating Artificial Life”, The Guardian, 6 Sept 2007 (available at http://www.guardian.co.uk/science/2007/sep/06/2/print (accessed 30 Jul 2009)), and E Pilkington, “I am Creating Artificial Life, Declares US Gene Pioneer” The Guardian, 6 Oct 2007 (available at http://www.guardian.co.uk/science/2007/oct/06/genetics.climatechange/print (accessed 30 July 2009)).
8 See D Gibson et al, “Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome”, ScienceExpress, 20 May 2010, available at http://www.sciencemag.org/cgi/rapidpdf/science.1190719v1.pdf (accessed 25 May 2010). This prompted E Pennisi, “Synthetic Genome Brings New Life to Bacterium” (2010) 328 Science 958-959; Economist, “Genesis Redux”, The Economist, 22-28 May 2010, at 88-90; and many other responses.
9 See B Bhushan (ed), Handbook of Nanotechnology, 2nd ed (Berlin: Springer, 2005); O Renn and M Roco, “Nanotechnology and the Need for Risk Governance” (2006) 8 Journal of Nanoparticle Research 153-191.
10 See, for instance, O Renn and M Roco, “Nanotechnology and the Need for Risk Governance” (2006) 8 Journal of Nanoparticle Research 153-191.
12 See Nanowerk, “‘Nanoscience in Food’ event highlights benefits of using micro and nanotechnology in food and drink”, Nanowerk, 6 Aug 2009, available at http://www.nanowerk.com/news/newsid=12040.php (accessed 25 May 2010).
13 See E Regis, “The World’s Smallest Radio”, Scientific American, March 2009, available at http://www.scientificamerican.com/article.cfm?id=the-worlds-smallest-radio (accessed 24 May 2010).
14 See: S Luntz, “Photosynthetic Factories and Nanosponges” (2001) 22 Australasian Science 30-32; S Anton et al, The Global Technology Revolution: Bio/Nano/Materials Trends and their Synergies with Information Technology by 2015 (Santa Monica: Rand, 2001); T Harper and P Hollister, “Nanotechnology: The Emerging Cutting-Edge Technology” (2002) 19 Asia Pacific Tech Monitor 41-46; B Gordijn, “Nanoethics: From Utopian Dreams and Apocalyptic Nightmares Towards a More Balanced View” (2005) 11 Sci Engineering Ethics 521-533.
15 See: E Drexler, Nanosystems: Molecular Machinery, Manufacturing and Computation (NY: John Wiley & Sons, 1992); E Drexler, Engines of Creation: The Coming Era of Nanotechnology (London: Fourth Estate, 1996); C Phoenix and E Drexler, “Safe Exponential Manufacturing” (2004) 15 Nanotechnology 869-872.
16 S Mitra, “A Time for AI”, Forbes, 29 May 2009, available at http://www.forbes.com/2009/05/28/ai-reardon-commerce-intelligent-technology-mitra.html (accessed 30 Jul 2009).
17 A Trembly, “Artificial Intelligence vs Humans: Is AI Up to the Challenge?”, Insurance Network, 14 May 2009, available at http://www.insurancenetworking.com/news/insurance_technology_artificial_intelligence_jeopardy_contest-12341-1.html (accessed 30 Jul 2009).
18 In the Blue Brain Project, Swiss scientists are producing a functional silicon rat neo-cortex on a supercomputer. See: C Witchalls, “Lab Comes One Step Closer to building Artificial Human Brain”, The Guardian, 20 Dec 2007, available at http://www.guardian.co.uk/technology/2007/dec/20/research.it/print (accessed 30 Jul 2009); J Palmer, “Simulated Brain Closer to Thought”, BBC News, 22 Apr 2009, available at http://news.bbc.co.uk/1/hi/sci/tech/8012496.stm (accessed 30 Jul 2009).
19 Matthew Nagle, a 25-year-old man paralyzed by a knife wound in 2001, had a BCI successfully implanted which allowed him to control a computer cursor, thereby permitting him to move through e-mail programmes and use a computer to operate a television: BBC News, “Brain Chip Reads Man’s Thoughts”, BBC News, 31 Mar 2005, available at http://news.bbc.co.uk/1/hi/4396387.stm (accessed 30 Jul 2009).
20 The US military’s Defence Advanced Research Projects Agency budget for fiscal year 2009-2010 includes $4 million for a program named Silent Talk, which aims to ‘allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals’; it is expected to be rolled out en masse by 2035: K Drummond and N Shachtman, “Pentagon Preps Soldier Telepathy Push”, Wired, 14 May 2009, available at http://www.wired.com/dangerroom/2009/05/pentagon-preps-soldier-telepathy-push/ (accessed 24 May 2010).
21 G Moore, “Cramming more Components onto Integrated Circuits”, Electronics, 19 Apr 1965.
22 C Casabona, “Human Biotechnology, Transculturality, Globalisation and Symbolic (Criminal) Law” in N Knoepffler et al (eds), Humanbiotechnology as Social Challenge (Aldershot: Ashgate, 2007) 57-72.
23 For example, RFIDs and sub-dermal GPS devices are injectable tags for animals or humans that can permit worldwide tracking.
24 Examples include cardiovascular pacers (which prompt heart-beating), programmable drug pump implants (which regulate drug delivery), cochlear and auditory brainstem implants (which send sounds to auditory nerves or stimulate cochlear nuclei in the brainstem), cortical and ocular implants (which deploy digital cameras to bypass damaged retinas or optic nerves), neuro-stimulation implants (which modify electrical nerve activity for chronic pain, incontinence, seizure control, tremor control), biosensors (implanted internal monitoring transmitters), and artificial hippocampi (implanted memory prostheses).
25 Bioinformatics is also playing a key role in the Blue Brain Project, which aims to better understand human brain disorders and to develop better treatments. It could also prove important for creating artificial silicon-biological brains as repositories of vast amounts of neuroscientific information on single subjects, thereby allowing human-generated knowledge to be deployed more effectively.
26 S Lee and D Olson, Convergenomics: Strategic Innovation in the Convergence Era (Surrey: Gower, 2010) at ch. 3.
27 P Hancock, Mind, Machine and Morality (Surrey: Ashgate, 2009), at 83.
28 For more on these, see K Warwick, “Implications and Consequences of Robots with Biological Brains” (2010) Ethics & Information Technology, published online at http://www.springerlink.com/content/g452p350q22hk071/fulltext.pdf (accessed 24 May 2010).
29 Quote attributed to esteemed sci-fi author Arthur C Clarke: see http://www.saidwhat.co.uk/quotes/favourite/arthur_c_clarke/if_an_elderly_but_distinguished_scientist_5316 (accessed 20 Oct 2009).
30 A Giddens, Runaway World: How Globalisation is Reshaping Our Lives (London: Profile, 1999).
32 In short, at what point should the research governance framework that we have erected for medical and human subject research be implicated?
33 In short, whose Research Ethics Committees should conduct the reviews, and should they be institutional bodies or broader public ones?
34 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law: I Asimov, Runaround (NY: Fawcett Crest, 1942).
35 For example, see M Coeckelbergh, “Moral Appearances: Emotions, Robots, and Human Morality” (2010) Ethics & Information Technology, available at http://springerlink.com/content/f6866544337v5822/?p=aae5dabbfc124d209892279845683758&pi=1 (accessed 19 May 2010).
36 H Nissenbaum, “How Computer Systems Embody Values” (2001) 34 Computer 118-120, at 120.
37 Currently, many of the fields most relevant to this scenario are subject to very little by way of direct research control. Generalised research controls relating to the medical field, where human wellbeing is directly implicated, may become involved at later stages. Examples include the Helsinki Declaration (1964+), the Biomedicine Convention (1997), the MRC Medical Research Guidelines (1998), EU Directive 2001/20 on Clinical Trials on Medicinal Products, etc.
38 Would the matter be best and most efficiently resolved by adopting a strict liability approach (i.e. if the injured claimant demonstrates that the AI was operating the vehicle, which is forbidden by law, liability attaches for all injuries)? Or would a more nuanced approach be preferable, whereby, once the injured claimant demonstrates that the AI was operating the vehicle, the burden shifts to the defendant Carrier to rebut a presumption of causative negligence?
39 What if it is an automated nanofactory?
40 What if the AI is a learning computer? Can the original programmer or software team be held responsible? Recall that at the heart of these implanted AIs is a complex and powerful computer, and computers, whether super-computers, laptops, or nano-computers, are alterable, both before and after implantation. Their programmes are restricted only by our imagination, the abilities of the programmer, and the power of the machine (which we might expect will give humans both improved functionality and novel capabilities).
41 Assuming its implantation and removal are invasive and not the same as providing a urine or hair sample.
42 In short, how is the incident isolated such that the Carrier’s privacy is protected?
43 For example, see Article 1 of the Council of Europe’s Biomedicine Convention (1997), Article 1 of the EU Charter of Fundamental Rights (2000), Article 3 of UNESCO’s Universal Declaration on Bioethics & Human Rights (2005), and others.
44 For example, see Article 1 and Chapter 2 of the Council of Europe’s Biomedicine Convention (1997), Articles 2 and 3 of the EU Charter of Fundamental Rights (2000), Articles 5-8 of UNESCO’s Universal Declaration on Bioethics & Human Rights (2005), and others.
45 For example, see Article 10 of the Council of Europe’s Biomedicine Convention (1997), Articles 7 and 8 of the EU Charter of Fundamental Rights (2000), Article 9 of UNESCO’s Universal Declaration on Bioethics & Human Rights (2005), and see EU Directive 95/46 on Personal Data, and EU Directive 02/58 on Data & Electronic Communications.
46 For example, see European Commission Communication on the Precautionary Principle of February 2000, and Articles 16 and 17 of UNESCO’s Universal Declaration on Bioethics & Human Rights (2005).
47 BVerfG, NJW 2008, 822.
48 For a more detailed discussion of the judgment, see W Abel and B Schafer, “The German Constitutional Court on the Right in Confidentiality and Integrity of Information Technology Systems – A Case Report on BVerfG, NJW 2008, 822” (2009) 6:1 SCRIPTed 106-123.
49 J Simon, “Human Biotechnology as a Legal Challenge”, in N Knoepffler et al (eds), Humanbiotechnology as Social Challenge (Aldershot: Ashgate, 2007) 75-84, at 82.