Chapter 6 More Recent Developments: Signposts to Work from the 1960s to the Present
The blossoming of theoretical and practical work from the late 1940s to the early 1960s, described in Chap. 5, continued to gather pace as the 1960s progressed. The period from the 1960s to the present day has witnessed significant developments in the field, and the work has branched into a variety of novel application areas. Most of these developments are well described in existing publications, so our detailed review of the early history of the field ends here. In this chapter we describe the general nature and focus of this more recent work, and provide references to other sources that review these developments in detail.
One of the most comprehensive reviews of work in this period, with an emphasis on hardware implementations, is provided by Robert Freitas and Ralph Merkle in their book Kinematic Self-Replicating Machines (Freitas Jr & Merkle, 2004). This covers developments with all three kinds of replicator (standard-, evo- and maker-) but particularly focuses on maker-replicators. A more concise overview, with emphasis on work in software covering all three kinds of replicator, is provided by Moshe Sipper in (Sipper, 1998). Both of these publications include useful diagrammatic lineages of work in this area from the 1950s onward ((Sipper, 1998, p. 238), (Freitas Jr & Merkle, 2004, p. xviii)). Another excellent general review of the area, covering both hardware- and software-focused work involving all three kinds of replicator, is provided by Michele Ciofalo in (Ciofalo, 2006). Finally, Matthew Moses and Gregory Chirikjian provide a very recent review, mainly focused on physical replicators, which covers developments that have occurred after the publication of Freitas and Merkle’s earlier review right up to the year 2019 (Moses & Chirikjian, 2020).
In the following subsections we signpost some of the broad trends that have developed in theoretical explorations and in software and physical implementations of self-reproducing systems. These developments in scientific and engineering theory and practice have been accompanied by continued interest in the idea of self-reproducing systems in science fiction. Some of the most notable examples from 1960s sci-fi include works by Poul Anderson (Anderson, 1962), Stanislaw Lem (Lem, 1973), Fred Saberhagen (Saberhagen, 1967a) and John Sladek (Sladek, 1968); examples from more recent decades are too numerous to list.109
6.1 Theoretical and Philosophical Work
Von Neumann’s foundational studies, described in Sect. 5.1.1, laid the groundwork for many further theoretical developments. Much of the relevant work from the 1960s and 70s took place in the field of automata theory. Many of these studies continued to use cellular automata, or closely related models, as a simplified platform for implementation. An early review of these developments, written by computer scientist and neuroscientist Michael A. Arbib, appeared in the proceedings of Towards a Theoretical Biology—an influential conference series in the late 1960s (Arbib, 1969). Arbib’s review highlighted topics such as what he referred to as the fixed point problem of components; that is, ensuring that a self-reproducing system is able to manufacture a copy of each of its constituent parts. We will return to this topic, and related issues concerning closure in self-reproducing systems, in Sect. 7.3.1. The first part of Arbib’s discussion concentrated on the design principles of standard-replicators. This was followed by an exploration of issues relating to evo-replicators, the origin of life and real-world complexities such as dealing with noise and interaction with a rich environment. More recent reviews of work from this period can be found in (Freitas Jr & Merkle, 2004, ch. 2) and (Sipper, 1998).
Among Arbib’s many other works of interest from around this time was a paper entitled “The Likelihood of the Evolution of Communicating Intelligences on Other Planets”, published in 1974 (Arbib, 1974). Speculating on the technologies that intelligent species might utilise for interstellar communication, Arbib suggested that, while most discussion up to that point had assumed radio communication, another possibility would be the use of self-reproducing machines (Arbib, 1974, pp. 65–66). He envisaged that the devices could be directed to “reproduce every time they travel a constant distance … to yield a sphere moving out from the home planet with a constant density of these … machines” (Arbib, 1974, p. 66). Of course, the idea of a self-reproducing spacecraft had first been proposed forty-five years earlier by Bernal (Sect. 4.2.1), but Arbib’s suggestion placed more emphasis on the potential of self-reproducing technology for exponential growth in numbers. This potential—which is a property of each kind of replicator including the basic standard-replicator—was utilised in Arbib’s vision to achieve (at least in theory) omnidirectional communication without loss of signal strength.110
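Arbib's geometric intuition can be checked with a little arithmetic. The sketch below is our own illustration, not drawn from Arbib's paper: to hold the density of probes constant on an expanding spherical shell, the probe count must grow with the shell's surface area, which is exactly the growth that replication at fixed distance intervals can supply.

```python
import math

# Probes needed to maintain a constant areal density on a shell of radius r:
# the count must track the surface area 4*pi*r**2. A fixed, non-replicating
# fleet would instead thin out as 1/r**2, just like ordinary signal strength.
def probes_needed(radius, density=1.0):
    return density * 4 * math.pi * radius**2

# Doubling the distance from the home planet quadruples the number required.
print(probes_needed(10.0) / probes_needed(5.0))
```

A fleet that doubles in number over each fixed interval of travel therefore keeps pace with (indeed, outpaces) the quadratic growth in the required count, which is how Arbib's scheme avoids the usual inverse-square loss of signal strength.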
A few years later, mathematical physicist Frank J. Tipler used the idea of self-reproducing machines to argue that extraterrestrial intelligent species do not exist (Tipler, 1980), (Tipler, 1981a), (Tipler, 1981b), (Tipler, 1981c).111 Inspired by von Neumann’s theoretical concept of a self-reproducing universal constructor, Tipler suggested that any intelligent species engaging in interstellar communication would “eventually develop a self-replicating universal constructor” (Tipler, 1980, p. 268). This technology would be employed, he argued, not just for interstellar communication (as suggested by Arbib) but also for interstellar travel to explore and colonise the galaxy. He referred to such spacecraft as von Neumann probes (Tipler, 1980, p. 276).
Tipler’s line of reasoning utilised not just the self-reproductive capabilities of von Neumann’s architecture (which allowed a cost- and time-efficient means of exploring the galaxy) but also its capacity for universal construction—that is, its abilities as a maker-replicator. The key point, he explained, was that “once a von Neumann machine has been sent to another solar system, the entire resources of that solar system become available to the intelligent species that controls the … machine; all sorts of otherwise-too-expensive projects become possible” (Tipler, 1980, p. 270). Furthermore, “in a fundamental sense a von Neumann machine cannot become obsolete … [because it] can be instructed by radio to make the latest devices after it arrives at the destination star” (Tipler, 1980, p. 271). Having set out the case for the use of self-reproducing spacecraft for interstellar travel by intelligent species, he went on to utilise the “where are they?” argument to conclude that such species did not exist.112
More recently, in discussing ways in which self-reproducing probes could be used to allow humans to colonise other planets in the age of superintelligent AI, Max Tegmark invoked a different use for a probe with universal construction capabilities. Because it assumes the existence of superintelligent AI with access to far more advanced technology than anything that looks even remotely possible today, the idea pushes credibility to its very limits. In Tegmark’s scenario, the humans would not join the probes on their interstellar journeys. Instead, once the probes had arrived on a new planet and prepared it for our coming, they would establish a superintelligent AI (perhaps with the aid of information transmitted from the mother civilisation) which would then construct a human colony in situ by constructing embryos, or even adult humans, using nanoassembly techniques (Tegmark, 2017, p. 225).
Returning to the more general and down-to-earth landscape of work on the theory of self-reproducing systems in recent decades, a significant development was the establishment of the field of Artificial Life (ALife) in the late 1980s.113 This is a discipline that brings together computer scientists, biologists, ecologists, complex systems scientists, philosophers and others united in an interest in synthesizing and simulating living systems in non-biological media, including software, hardware and “wetware” (molecular systems).
To highlight just one of the interesting early works from the ALife field, J. Doyne Farmer and Alletta d’A. Belin published a paper in 1991 entitled Artificial Life: The Coming Evolution (Farmer & Belin, 1991), which we quoted from at the start of Chap. 1. The paper argued that reproducing and evolving artificial lifeforms could be expected to emerge within fifty to a hundred years. In addition to providing another good review of work on self-reproducing systems in the late 1980s (with a particular focus on software systems), the paper also discussed the possibility that artificial life might evolve through non-Darwinian processes. The authors considered the potential of artificial lifeforms to accelerate the rate of evolution of their physical design by modifying their own genetic material. Farmer and Belin regarded this as a kind of Lamarckian evolution (Gissis & Jablonka, 2011), i.e. a process by which, in contrast to Darwinian evolution, beneficial characteristics acquired during an individual’s lifetime are passed on to the individual’s offspring. Indeed, the process discussed in their paper goes beyond what is normally considered Lamarckian evolution because it involves not just the inheritance of acquired characteristics but also the intentional self-modification of the species by the species itself.
Farmer and Belin were by no means the first authors to explore these possibilities. The idea of self-designing machines was a common theme in the early sci-fi stories discussed in Sect. 4.1.3. And within the scientific community, Richard Laing had already demonstrated in the 1970s that Lamarckian evolution could be achieved in a simple automaton model by a process of reproduction by self-inspection (R. Laing, 1975), (R. Laing, 1977), (R. A. Laing, 1977). We will return to this topic in Sect. 7.1.4.
Looking at current ALife research, there is an emerging focus in the field on the topic of open-ended evolution—the capacity apparent in the biological world to continually evolve, to discover new tricks and to increase its maximum complexity over time in a seemingly never-ending way (T. Taylor, Bedau, et al., 2016), (Packard et al., 2019). No artificial evolutionary system to date exhibits anything like this capacity; instead, after an initial burst of activity, such systems tend to reach a more or less stable state beyond which no further innovations are observed. In contrast to work on maker-replicators, researchers studying ALife evo-replicator systems are keen to understand and unleash the creative power observed in biological evolution.
As mentioned in Sect. 1.4, open-ended evolution has recently been described as a “grand challenge” for the field (Stanley et al., 2017). Some view it as a promising route for producing agents with highly sophisticated artificial intelligence,114 even including superhuman-level artificial general intelligence (AGI)115; this is, of course, merely the latest manifestation of the core idea behind much of the work we described in Chaps. 3–5 which dates back as far as the 1860s. Related to this, open-ended evolution could be a route whereby evo-replicators develop the ability to act according to their own ends and desires, beyond any original goals set for them by their human designers. We return to this topic in Sect. 7.3.4.
6.2 Software Implementations
From the 1960s onwards, when computers became more widely available as a tool for scientists and engineers, many more researchers started implementing self-replicators in software.
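Before surveying this work, it is worth making the core idea concrete. A classic minimal example of a software self-replicator is a quine: a program whose only output is its own source code. The following two-line Python quine is our own illustrative example, not taken from the historical work discussed here; the printed text (this comment aside) is an exact copy of the two code lines, making it a trivial software standard-replicator.

```python
# The two lines below print an exact copy of themselves; the printed text
# (excluding this comment) is the self-replicating "organism".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Saving the two printed lines to a file and running that file reproduces the same output again, indefinitely; there is, however, no variation in the copies, so a quine is a standard-replicator rather than an evo-replicator.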
Following von Neumann’s original cellular model and the developments in automata theory referred to in the previous section, there has been much further work on cellular automata models and implementations of self-reproduction. Most of this work, particularly in the earlier years, investigated design issues in the process of self-reproduction itself, and ways to make the systems perform other tasks in addition to self-reproduction—that is, the focus of this work has generally been on software standard- and maker-replicators rather than evo-replicators. Good overviews of this area can be found in (Reggia, Chou, et al., 1998) and (Reggia, Lohn, et al., 1998).
In contrast, evo-replicators have been the main focus of another branch of software-based work which investigates the evolution of self-reproducing computer programs. Much of this work has occurred within the field of Artificial Life, where the approach was made popular in the early 1990s by the Tierra system developed by ecologist Tom Ray (T. S. Ray, 1991).
ⓘ Afterword: See also Fred Stahl and others (Sect. 8.3)
In Tierra, populations of computer programs compete for space and CPU time to build copies of themselves within the computer’s memory. The copying process is subject to some noise so that the copies are not always perfect and small variations start to appear in the offspring programs. Because memory space and CPU time are limited, programs best adapted for survival and reproduction in this environment persist by natural selection, and less well-adapted programs die out. Ray observed not only the evolution of increasingly faster, more efficient self-reproducing programs but also the emergence of various ecological interactions. For example, small parasitic programs were seen to evolve which were unable to reproduce unaided but, instead, hijacked the code of neighbouring programs to copy themselves.
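The selection dynamics Ray observed can be caricatured in a few lines of code. The sketch below is our own toy construction, not Tierra's actual virtual machine: each "program" is represented only by its replication cost (standing in for genome length), tournament selection stands in for competition over CPU time, and a fixed-size memory is enforced by a reaper queue that removes the oldest individual.

```python
import random

random.seed(0)

MEM_SLOTS = 50        # limited "memory": maximum population size
MUTATION_RATE = 0.05  # chance of a copy error per offspring

# Each toy "program" is just its replication cost (lower = copies faster).
population = [80] * 10  # start with identical ancestors of cost 80

for generation in range(200):
    # Programs with lower cost win more reproduction opportunities.
    parent = min(random.sample(population, 3))  # tournament selection
    child = parent
    if random.random() < MUTATION_RATE:
        child = max(1, parent + random.choice([-5, 5]))  # imperfect copy
    population.append(child)
    if len(population) > MEM_SLOTS:
        population.pop(0)  # oldest program dies (the "reaper queue")

print(f"mean cost after evolution: {sum(population) / len(population):.1f}")
```

Under these assumptions the mean replication cost tends to drift downward over time, echoing (in an extremely simplified form) Ray's observation of increasingly fast, efficient self-reproducing programs; the toy model has no analogue of the code-sharing that produced Tierra's parasites.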
This line of research is still thriving today, especially in work using the Avida software platform (Ofria & Wilke, 2004) as a test bed for studies in experimental digital evolution (see Fig. 6.1). These systems are described at length in many sources (e.g. (Banzhaf & Yamamoto, 2015a, pp. 195–223), (Johnston, 2008, pp. 215–274), (T. J. Taylor, 1999, pp. 51–57)). As mentioned in the previous section, a current focus of research in this area is in developing an understanding of how to build software evo-replicator systems with the capacity for open-ended evolution.
Partially overlapping with these approaches, the research area of Artificial Chemistries encompasses a variety of approaches to modelling life processes such as self-reproduction and evolution; see (Banzhaf & Yamamoto, 2015a) for a comprehensive recent review. A branch of this field that focuses on interactions at the ecological level is Artificial Life Ecosystems, described in (Banzhaf & Yamamoto, 2015a, pp. 163–165) and (Dorin et al., 2008).
Looking forward, it has been suggested that the process of standardisation of web technologies now presents the prospect of using the web as a globally distributed environment in which evolving software agents might find a persistent home where they could thrive “in the wild” (T. Taylor, Auerbach, et al., 2016).
The work outlined above has a particular focus on modelling processes of evolution, self-reproduction and related aspects of biological systems. In addition, a vast body of work has developed that uses software-based evolution primarily as an optimisation technique. The history of this work, which comes under the general name of evolutionary computation, has been described in various sources (e.g. (Angeline, 1998), (Mitchell, 1996)).116
In a less salubrious line of development, the 1970s witnessed the emergence of computer viruses (Szor, 2005). One of the first examples of a worm that spread via the Internet, causing widespread damage and attracting the attention of the mainstream media, was Robert Morris’ Internet Worm of November 1988 (Denning, 1989). A good review of the history of computer viruses can be found in (Spafford, 1994).
Much has been written about the developments described above, in the works we have mentioned and elsewhere. We will therefore leave our review of software self-replicators here and turn our attention to recent progress in the implementation of physical self-reproducing systems.
6.3 Physical Implementations
Over the last sixty years there have been many advances in physical self-replicating systems, both at the macro-scale and at the molecular scale. A full discussion of many of the developments described in this section, and references to a wide variety of other related projects, can be found in (Freitas Jr & Merkle, 2004), which covers work up to 2004. A good review of work over the period 2004–2019 can be found in (Moses & Chirikjian, 2020).
Penrose’s early work on self-reproducing blocks (Sect. 5.3.1) has inspired a lineage of further studies, ranging from systems based upon magnetic (Breivik, 2001), (Virgo et al., 2012) or electromechanical (Griffiths et al., 2005) units to those employing more complex programmable robotic units (Suthakorn et al., 2003), (Zykov et al., 2005). These works have generally focused upon systems that can produce exact copies of themselves (i.e. standard-replicators), although some could in theory transmit heritable mutations and thereby act as evo-replicators given sufficient time and raw materials. However, the time and rather specialised environments required for these systems to produce their offspring mean that a great deal of further research and development is required to produce a physical self-reproducing machine that exhibits any significant evolutionary behaviour in practice.
At the same time, other researchers are exploring how additive manufacturing technology (3D printing) might be employed for the fabrication of complete robotic systems. While the technology is not yet at the stage of allowing the unassisted printing of a full robot in the general case (including all the different materials required for its electronics, actuators, power source, etc.), work is rapidly progressing in that direction (e.g. (Bartlett et al., 2015), (Khoo et al., 2015), (MacCurdy et al., 2016), (Lee et al., 2017)). In the meantime, a growing number of projects are investigating the use of “human-in-the-loop” 3D printing systems to partially automate the process of evolving new robot designs (e.g. (Lipson & Pollack, 2000), (Hiller & Lipson, 2012), (Rieffel et al., 2014), (Brodbeck & Iida, 2015), (Brodbeck et al., 2015), (Hale et al., 2019)). These lines of development might ultimately lead to the creation of fully autonomous self-reproducing and evolving systems (e.g. (Bowyer, 2011), (Howard et al., 2019)).
These developments are closely associated with the more general field of evolutionary robotics, which emerged in the early 1990s alongside Artificial Life. While many interesting advances have come out of this field, the majority of work tends to focus not on self-reproduction but on the evolution of controllers which are then implanted into robots of fixed physical form.117 A good review of the field can be found in (Vargas et al., 2014).
In the 1950s and early 1960s, Homer Jacobson (Sect. 5.3.2) and Norbert Wiener (Sect. 5.5) had both suggested that a self-replicating system could be developed using electronic circuits. Forty years later, in the late 1990s, this idea was realised in the Embryonics project, which aimed to develop an architecture for highly robust integrated circuits with the capacity for self-repair and self-replication (Mange et al., 2000).
As mentioned in Sect. 5.4.2, Konrad Zuse had started thinking about the potential of self-reproducing machines in the 1950s. His main interests lay in the possible uses of maker-replicators, although he also discussed the evolutionary potential of evo-maker-replicators. After a decade of working on other projects, he returned to the topic in the second half of the 1960s.
In 1967, Zuse published an article setting out some more detailed ideas for the implementation of the technical germ-cell that he had first discussed a decade earlier (Zuse, 1967). We devote some time to it here because it is an extension of the work we described in Sect. 5.4.2, and because it has not been widely discussed elsewhere.
In the paper he discussed the biological cell as the inspiration for his idea of a technical germ-cell, providing an incentive for “a project that at first seems absurd to continue given the state of the art” (Zuse, 1967, p. 58).118 Zuse introduced the concept of the Rahmen (frame) of a self-reproducing system, being “the environment in which the systems are viable” (Zuse, 1967, p. 59),119 including all the external facilities required to provide the system’s inputs and to accommodate its outputs. The inputs to the Rahmen might include raw materials, energy, information, prefabricated parts, tools, etc. (Eibisch, 2016, pp. 91–105). He saw the degree of autonomy of a self-replicator as depending upon the complexity of the Rahmen required for its operation (Zuse, 1967, p. 60). The concept of a Rahmen is therefore a formalisation of the question of how much a “self”-replicator relies upon properties of its environment to achieve reproduction. As we have seen previously, this issue was discussed by von Neumann, Penrose and Jacobson before him, and we will return to the issue in Sect. 7.3.
Zuse suggested that progress could be made in the challenge of creating more autonomous self-replicators by making radical simplifications in the standardisation of individual parts, thereby reducing the inventory of parts required from the Rahmen (Zuse, 1967, p. 61). Regarding the question of information and control of the process, he suggested that these systems could be kept external to the self-reproducing system itself and supplied as part of the Rahmen (Zuse, 1967, p. 63). Zuse acknowledged that this would raise the question of the extent to which the resulting system could be called self-replicating, but nevertheless he suspected that this would be the most practically useful design approach. This exemplifies Zuse’s focus on the manufacturing and construction aspects of the problem over the information and control aspects, which was in many ways the opposite of von Neumann’s approach. In the paper Zuse also discussed some of the potentially transformational uses of the technology, not only on Earth but also, in the distant future, for space travel and exploration. The paper ends with an appeal that, although these ideas seem “a bit fantastic … we must have the courage to include such possibilities in our considerations” (Zuse, 1967, p. 64).120
Over the following years Zuse began building an automatic assembly machine, the SRS72, as a starting point for a self-reproducing system (Eibisch, 2012). His plan was to simplify the practical difficulties of the system as far as possible by employing a modular design built from standardised parts. However, it appears that the machine was not completed to a working state, and Zuse abandoned the project in 1974 for unknown reasons (Eibisch, 2012). The art conservator Nora Eibisch has recently written a book (in German) describing Zuse’s work on the SRS72 (Eibisch, 2016); a more limited source of further information in English can be found in Zuse’s autobiography (Zuse, 1993).121
Zuse’s work brings to mind J. D. Bernal’s conception of self-reproducing spacecraft for interstellar exploration (Sect. 4.2.1). From the 1970s onward there has been a wide variety of further developments in this area (e.g. (F. Dyson, 1979, pp. 194–204), (O’Neill, 1977), (Barrow & Tipler, 1986, pp. 578–586)). In 1979, Freeman Dyson set out a series of thought experiments describing how various kinds of maker-replicators could be used to transform desert regions on Earth and to terraform other planets (F. Dyson, 1979, pp. 197–203). Dyson noted that the exponentially increasing scale of operation, which was a common feature of his examples and comes about without human intervention once the first self-replicator has been set in motion, elicited a sense of getting “something for nothing”:
“The paradox forces us to consider the question, whether the development of self-reproducing automata can enable us to override the conventional wisdom of economists and sociologists. I do not know the answer to this question. But I think it is safe to predict that this will be one of the central concerns of human society in the twenty-first century. It is not too soon to begin thinking about it now.”
— Freeman Dyson, Disturbing the Universe, 1979 (F. Dyson, 1979, p. 200)
It is fair to say that there is still no conclusive answer to Dyson’s question, although it remains as relevant today as it was when he raised it forty years ago. Echoing Zuse’s idea of a technical germ-cell (Sect. 5.4.2),122 Dyson went on to discuss extending von Neumann’s work by going beyond what he called the “unicellular level” (i.e. a single monolithic machine) to build a “germ cell of a higher organism” out of which could arise “descendants of many different kinds [that] function in a coordinated fashion” (F. Dyson, 1979, p. 202). He suggested that an analysis is required of the minimum number of conceptual components required to build a system that can act as such a germ cell; this is related to Arbib’s fixed point problem of components (Sect. 6.1)—we will further discuss this topic in Sect. 7.3.1.
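The "something for nothing" aspect of Dyson's examples rests on simple doubling arithmetic, sketched below with illustrative numbers of our own (not Dyson's): a single machine that builds one copy of itself per year, every copy doing likewise, reaches industrially significant numbers within a couple of decades.

```python
# Illustrative doubling arithmetic (our own numbers, not Dyson's):
# one machine builds a copy of itself each year, and every copy does likewise.
machines = 1
years = 0
while machines < 1_000_000:
    machines *= 2  # each existing machine constructs one new machine
    years += 1
print(years, machines)  # 20 years to exceed a million machines
```

The human effort is constant (building the first machine) while the output grows exponentially, which is precisely why Dyson felt the economics of such systems would demand serious attention.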
The most substantial exploration to date of self-reproducing technology for the exploration and exploitation of other planets was an extended study by NASA in 1980 (Freitas Jr & Merkle, 2004, pp. 42–51) (see Fig. 6.2). The team that conducted the study was led by Richard Laing, whose earlier theoretical work on reproduction by self-inspection we mentioned in Sect. 6.1. Another participant was Robert Freitas, co-author of the book Kinematic Self-Replicating Machines highlighted at the start of this chapter. In their end-of-project report, the team considered potential long-term outcomes of such research, together with philosophical, ethical and religious questions that arose from it ((Freitas Jr & Gilbreath, 1982, pp. 240–249), (R. Laing, 1989)).
The fear that such technology might run out of control and ultimately pose a threat to the future of humanity was as real a concern for these authors as it had been for Samuel Butler over a hundred years earlier (Sect. 3.1). Although the primary focus of the study was on maker-replicators, the report suggested that “any machine sufficiently sophisticated to engage in reproduction in largely unstructured environments and having, in general, the capacity for survival probably must also be capable of a certain amount of automatic or self-reprogramming” (Freitas Jr & Gilbreath, 1982, p. 240). And yet, granting these machines any capacity for change and evolution opens the door to unforeseen and potentially catastrophic outcomes.
Taking a somewhat different view, Freitas and Merkle later discussed the possibility of designing safe maker-replicator machines that are inherently incapable of undergoing evolution; they offered suggestions for how this might be achieved by “human-in-the-loop” approaches where we retain the ability to regulate the control architecture or supply of raw materials to the machines (Freitas Jr & Merkle, 2004, p. 199). They concluded by recommending that “[a]rtificial kinematic self-replicating systems which are not inherently safe should not be designed or constructed, and indeed should be legally prohibited.”
Although NASA did not take their 1980 project forward, work on physical self-replicating systems for space exploration and exploitation has continued in various forms. A good review of developments in this area up to the early 2000s can be found in (Freitas Jr & Merkle, 2004), and a review of more recent work is given in (Moses & Chirikjian, 2020). We highlight a few of these projects here just to give a flavour of recent developments.
A number of researchers have proposed the use of 3D printing as a practical means by which maker-replicator mining and manufacturing machines might be developed on the Moon. For example, Philip Metzger and colleagues’ proposal, published in 2013, features an evo-maker-replicator approach that begins with a subreplicating system, remotely operated from Earth, and “evolves toward full self-sustainability … via an in situ technology spiral” (Metzger et al., 2013, p. 18). The envisaged system would employ 3D printer-based manufacturing along with a range of other technologies. Metzger et al. argue that the development of such systems is now economically feasible because of the discovery of lunar polar ice, meaning that the Moon “has every element needed for healthy industry” (Metzger et al., 2013, p. 18). Echoing the dreams of Bernal and others before them, Metzger and colleagues suggest that their proposal would allow the production of material and energy resources that can be transported back to Earth, the terraforming of Mars, the establishment of space colonies in the solar system and, eventually, the colonisation of other solar systems (Metzger et al., 2013, p. 28).
With similar goals to those of Metzger and colleagues, work by Alex Ellery addresses the challenge of designing self-replicators built only from materials available on the Moon (Ellery, 2016), (Ellery, 2017). Ellery’s approach is also based upon 3D printers, but with a particular focus upon what he regards as a key hurdle: the 3D printing of motors. In addition, he outlines approaches to solving other key aspects of a self-replicating machine, including printable electronics and sensors, and the chemical processing of raw materials. Ellery concludes that “[a]lthough there are many problems with which to contend, there appear to be no fundamental hurdles” (Ellery, 2016, p. 325).
Elsewhere, Will Langford and colleagues have recently proposed an approach to reduce the complexity of physical self-replicators by using a small set of just thirteen basic part-types (Langford et al., 2017). The part-types are categorised into four functional groups: structural, flexural, electronics and actuation. This work calls to mind Zuse’s earlier proposal of simplifying the realisation of self-replication by using a modular design built from standardised parts (Eibisch, 2012).
A number of researchers have suggested biologically-based techniques for industrial activities in space. These include Lynn Rothschild and colleagues’ proposal for what they call myco-architecture, which uses bioengineered fungi to generate surface structures that could be grown in situ on other planets (Rothschild et al., 2019). Another example is the recent BioRock experiment on the International Space Station, which studied the feasibility of using biomining (the use of microorganisms to extract valuable materials from ores) in microgravity environments (Loudon et al., 2018). If these kinds of technologies prove viable, it is easy to envisage how they could be incorporated as part of a bio-technological hybrid self-replicating system for space applications.
At a smaller scale, progress is being made towards the goal of molecular-level self-assembly and self-replication in the form of wetware and nanobot systems. Reviews covering various flavours of this work can be found in (Freitas Jr & Merkle, 2004, pp. 89–144, 201–217), (Ciofalo, 2006, pp. 66–71), (Rasmussen et al., 2008), (Bissette & Fletcher, 2013), (Duim & Otto, 2017) and (Zhang et al., 2014). In addition to technical progress in these areas, there has also been much debate of the potential dangers of this work (e.g. (Drexler, 1986), (Baum, 2003)). In 2000, the US-based think tank the Foresight Institute published a set of guidelines for the development of nanotechnology, which particularly focused on replicator technology (Foresight Institute, n.d.). The guidelines recommended against the development of designs that could withstand mutation or undergo evolution. We look at more broad-ranging efforts to develop guidelines for the responsible development of advanced AI systems next.
6.4 Addressing the Risks Associated with Self-Replicators
In recent years there have been increasingly well-organised and multinational efforts to consider risks associated with the development of advanced AI technology.
Several governments (including the US (National Science and Technology Council Committee on Technology, 2016), the European Parliament (European Parliament, 2017) and the UK (Science and Technology Committee, 2016)) have commissioned reports on the future of AI in order to develop appropriate policies in this area. The UK report noted that “the verification and validation of autonomous systems was ‘extremely challenging’ since they were increasingly designed to learn, adapt and self-improve during their deployment” (Science and Technology Committee, 2016, p. 16). In developing the report for the European Parliament, a study for the Committee on Legal Affairs noted that “the self-replication of robots, and especially nanorobots, might prove difficult to control and potentially dangerous for humanity and the environment, thus requiring strict external control of research activities” (Nevejans, 2016, p. 11). In a subsequent report for the Commission on Civil Law Rules on Robotics, the Committee on Civil Liberties, Justice and Home Affairs stated that “robotics and artificial intelligence, especially those with built-in autonomy, including the … possibility of self-learning or even evolving to self-modify, should be subject to robust conceptual laws or principles” (Delvaux, 2017, p. 36).
At the same time, several new institutes have been established to address these kinds of issues. A forerunner in this area is the Foresight Institute, mentioned in the previous section. Other more recent examples include the Future of Life Institute (Cambridge MA, USA), the Future of Humanity Institute (Oxford, UK), the Centre for the Study of Existential Risk (Cambridge, UK) and the Machine Intelligence Research Institute (Berkeley CA, USA). One example of their activities is the Future of Life Institute’s development of what has become known as the Asilomar AI Principles123 to govern the safe, ethical development of powerful AI systems. At the time of writing, over 3,800 AI researchers and other endorsers have signed up to support these principles.124 Principle number 22 states: “AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.” We will return to the discussion of risk management in self-replicator research in Chap. 7.
* * *
We have now covered the full history of the idea of self-replicator technology: from the initial inklings of the notion of self-reproducing machines, first conceived of as standard-replicators in the seventeenth century (Chap. 2); followed by the additional idea born in the nineteenth century that machines might not only be able to reproduce but also to evolve—evo-replicators (Chaps. 3–4); up to the first serious theoretical treatments of the subject, the crystallisation of the new idea of maker-replicators and the first implementations of self-reproducing machines in the mid-twentieth century (Chap. 5); and ending with an overview of more recent developments up to the present day (this chapter). Turning to the final chapter, we will now offer some of our own thoughts on what has been achieved, the various goals that have driven this research, technical issues that remain unresolved and prospects for future developments.
For additional references, see the partial—yet extensive—list of self-reproducing machines in fiction on Wikipedia (https://en.wikipedia.org/wiki/Self-replicating_machines_in_fiction).↩︎
More recently, physicist S. Jay Olson has employed the same property in a proposed mechanism that might be used by advanced civilisations to aid their rapid expansion across intergalactic distances. The scenario involves the release of a wave of “expander” probes that “are designed to reproduce themselves and adjust their velocity slightly at pre-determined intervals, so that the expanding sphere of probes maintains a roughly constant density” (Olson, 2015, p. 5).↩︎
The first three of these papers were published in the Quarterly Journal of the Royal Astronomical Society, and the fourth, a shorter summary of the first three, was published in Physics Today.↩︎
As noted by Tipler, the “where are they?” argument had been employed by others before him (but without the focus on self-reproducing spacecraft); its origin is generally attributed to the physicist Enrico Fermi (see (Sagan, 1963, p. 495)). However, as Tipler states in (Tipler, 1981a, pp. 136–137), the same argument is apparent in a seventeenth century work by none other than Bernard de Fontenelle, whom we met in Sect. 2.1.↩︎
The field was born out of a 1987 workshop organised by Christopher G. Langton of the Los Alamos National Laboratory, NM (Langton, 1989b). That and subsequent workshops have now developed into an annual conference series, overseen by the International Society for Artificial Life.↩︎
A curated series of video interviews with leading current AI researchers on the potential of evolutionary techniques is available at https://evolution.ml/experts/.↩︎
For an interesting discussion of the possibility of evolving AGI, see (Shulman & Bostrom, 2012). For a longer and more general discussion of the prospects for AGI and its implications for humankind, see (Tegmark, 2017).↩︎
Many of the most notable early papers in evolutionary computation, including some early work on artificial life ecosystems and papers by Nils Barricelli (Sect. 5.2.1), have recently been republished in a single volume (Fogel, 1998).↩︎
The related fields of swarm robotics and self-reconfigurable modular robotics involve systems whose physical form can change, although self-reproduction is not a common concern in these fields either.↩︎
“… ein Projekt, welches zunächst dem Stand der Technik nach absurd erscheint, weiterzuverfolgen” (“… to continue pursuing a project which at first appears absurd given the current state of technology”) (Zuse, 1967, p. 58).↩︎
“die Umwelt dar, innerhalb deren die Systeme lebensfähig sind” (“… the environment within which the systems are viable”) (Zuse, 1967, p. 59).↩︎
“… noch etwas phantastisch erscheinen, jedoch müssen wir den Mut haben, auch solche Möglichkeiten in unsere Betrachtungen einzubeziehen” (“… may still seem somewhat fantastical, but we must have the courage to include such possibilities in our considerations as well”) (Zuse, 1967, p. 64).↩︎
Additional sources of information include the Konrad Zuse Internet Archive and the website of the Deutsches Museum.↩︎
However, Dyson does not cite Zuse in his discussion.↩︎