Centauri Dreams has a series of posts, "Artificial Intelligence and the Starship", reviewing "an absorbing new paper" called "Artificial Intelligence for Interstellar Travel", now submitted to the Journal of the British Interplanetary Society, by Andreas Hein and Stephen Baxter. Baxter is the well-known science-fiction writer.
I checked out their paper on the arXiv. It's disappointing.
Here's the abstract.
"The large distances involved in interstellar travel require a high degree of spacecraft autonomy, realized by artificial intelligence. The breadth of tasks artificial intelligence could perform on such spacecraft involves maintenance, data collection, designing and constructing an infrastructure using in-situ resources.They use some formalisation and present a taxonomy of four different kinds of AI:
Despite its importance, existing publications on artificial intelligence and interstellar travel are limited to cursory descriptions where little detail is given about the nature of the artificial intelligence. This article explores the role of artificial intelligence for interstellar travel by compiling use cases, exploring capabilities, and proposing typologies, system and mission architectures.
Estimations for the required intelligence level for specific types of interstellar probes are given, along with potential system and mission architectures, covering those proposed in the literature but also presenting novel ones.
Finally, a generic design for interstellar probes with an AI payload is proposed. Given current levels of increase in computational power, a spacecraft with a similar computational power as the human brain would have a mass from dozens to hundreds of tons in a 2050-2060 time-frame.
Given that the advent of the first interstellar missions and artificial general intelligence are estimated to be by the mid-21st century, a more in-depth exploration of the relationship between the two should be attempted, focusing on neglected areas such as protecting the artificial intelligence payload from radiation in interstellar space and the role of artificial intelligence in self-replication."
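The mass figure in that abstract comes, as they say, from extrapolating current trends in computational power. As a toy illustration of the shape of such a calculation (every constant below is an assumption of mine, chosen for illustration; the paper's own inputs, and hence its dozens-to-hundreds-of-tons result, differ):

```python
# Toy extrapolation of the kind behind such mass estimates. All numbers
# here are illustrative assumptions, not figures from Hein & Baxter.

BRAIN_FLOPS = 1e18          # assumed brain-equivalent compute, FLOP/s
FLOPS_PER_KG_2020 = 1e13    # assumed 2020 compute density, FLOP/s per kg
DOUBLING_YEARS = 3.0        # assumed doubling time for compute density

def brain_equivalent_mass_kg(year: int) -> float:
    """Mass of hardware matching BRAIN_FLOPS in a given year, assuming
    compute density keeps doubling every DOUBLING_YEARS."""
    doublings = (year - 2020) / DOUBLING_YEARS
    density = FLOPS_PER_KG_2020 * 2 ** doublings
    return BRAIN_FLOPS / density

for year in (2050, 2060):
    print(year, round(brain_equivalent_mass_kg(year), 1), "kg")
```

The point is only that the answer is a straight division of required compute by extrapolated compute density, so it is exquisitely sensitive to the assumed doubling time and to what one counts as "brain-equivalent".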
"We distinguish between four types of AI probes:These are engineering classifications and don't correspond to any sensible theoretical taxonomy of agent types. Perhaps that wasn't the intention but in terms of defining a research program which can dovetail with an interstellar vehicle programme, we do actually need a sensible roadmap for AI in the appropriate terms. Referencing AGI doesn't cut it, because today that term labels the problem only.
Explorer
Philosopher
- capable of implementing a previously defined science mission in a system with known properties (for instance after remote observation);
- capable of manufacturing predefined spare parts and components; Examples: the Icarus and Daedalus studies.
Founder
- capable of devising and implementing a science program in unexplored circumstances; capable of original science: observing unexpected phenomena, drawing up hypotheses and testing them;
- capable of doing this within philosophical parameters such as planetary protection;
- capable of using local resources to a limited extent, e.g. manufacturing sub-probes, or replicas for further exploration at other stars.
Ambassador
- capable of using local resources on a significant scale, such as for establishing a human-ready habitat;
- capable of setting up a human-ready habitat on a target object such as part of an embryo space colonization programme;
- perhaps modifying conditions on a global scale (terraforming).
- equipped to handle the first contact with extraterrestrial intelligence on behalf of mankind, within philosophical and other parameters: e.g. obeying a Prime Directive and ensuring the safety of humanity."
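For concreteness, here's a minimal sketch of what their classification amounts to if you try to encode it directly; the capability flags are my own paraphrase of the bullets above, not anything from the paper.

```python
from dataclasses import dataclass

# A minimal encoding of Hein & Baxter's four probe types. The capability
# flags paraphrase their bullet points; nothing here is from the paper itself.

@dataclass(frozen=True)
class ProbeType:
    name: str
    predefined_mission: bool   # executes a science mission fixed before launch
    original_science: bool     # devises and tests its own hypotheses
    in_situ_resources: str     # "spare parts", "limited", or "significant"
    first_contact: bool        # mandated to handle contact with ETI

EXPLORER = ProbeType("Explorer", True, False, "spare parts", False)
PHILOSOPHER = ProbeType("Philosopher", False, True, "limited", False)
FOUNDER = ProbeType("Founder", False, True, "significant", False)
AMBASSADOR = ProbeType("Ambassador", False, True, "unspecified", True)
```

Written down this way, it's visibly a checklist of engineering capabilities rather than a principled ordering of agent types, which is exactly my complaint.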
I didn't find their mathematical transliteration of their verbal points useful. How can I convey my problem?
∃x.question(me, unspecified-audience, conversation-procedure(x, describes(problem(me, non-utility-of-their-maths), unspecified-audience))).
I trust you are now enlightened in all senses.
I plan to write some more here about a more ecological way of thinking about agent taxonomies. Here's a brief preview.
Agents are discrete entities which exhibit behaviour in their environments. All agents are bound by the laws of physics, and those laws in themselves don't differentiate between agents we find boring (lumps of rock) and agents we find interesting (animals, people).
Non-trivial agents are entities whose behaviour deviates from that of a similarly sized and placed lump of rock, i.e. an entity whose behaviour could be predicted from the laws of physics and easily-obtained boundary conditions without too much difficulty. Non-trivial agents have complex and inaccessible internal states which produce enhanced behaviour through the use of free energy.
Agents are characterised by four intentional parameters: beliefs and goals, perceptions and actions. These also work for rocks, but only as a trivial case. An interesting problem is to link the intentional level of description to the input-output behaviourist level, and then back to the laws of physics. This can always be done in principle.
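As a sketch of what that intentional-level description looks like as an interface, here's a minimal rendering in Python; the class names and the rock example are mine, purely for illustration.

```python
from abc import ABC, abstractmethod
from typing import Any

# The intentional stance as an interface: beliefs and goals inside,
# perceptions coming in, actions going out. Names are illustrative only.

class Agent(ABC):
    beliefs: dict[str, Any]   # the agent's model of its situation
    goals: list[str]          # states of affairs it tries to bring about

    @abstractmethod
    def perceive(self, observation: Any) -> None:
        """Update beliefs from a perception."""

    @abstractmethod
    def act(self) -> Any:
        """Choose an action given current beliefs and goals."""

class Rock(Agent):
    """The trivial case: empty beliefs and goals, perceptions ignored."""
    beliefs: dict[str, Any] = {}
    goals: list[str] = []

    def perceive(self, observation: Any) -> None:
        pass  # internal state is unchanged by anything that happens around it

    def act(self) -> Any:
        return None  # behaviour is fully predicted by physics alone
```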
Non-trivial agents are always mechanisms, whether biologically-living or fabricated. They do not in general need to be constructed so as to use explicit symbolic manipulation (theorem-provers or planners) as part of their mechanisms, although research scientists may use such concepts to describe and analyse their behaviour.
Agents get a lot more interesting when they're social, and when social objectives and individual goals are contingent, possibly contradictory, and need to be dynamically negotiated. It's believed that mutual modelling, language and conversation, and consciousness all emerge from that scenario.
It's possible to devise a scale of intellectual competence for agents, linked to the capacity to deploy abstractions effectively to cope with complex and novel situations. It's not too clear how to model this scale in architecture space, apart from such obvious points as "more processors and memory, and cranking up the clock rate" and their neuronal equivalents. Those remedies are not wrong, of course, but they're not enough.
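To show the shape such a scale might take (the level names and ordering below are my own placeholders, not a worked-out taxonomy):

```python
from enum import IntEnum

# One possible shape for a competence scale. Level names and ordering
# are placeholders of mine, purely to illustrate the idea.

class Competence(IntEnum):
    REACTIVE = 1      # fixed responses to perceptions
    MODEL_BASED = 2   # maintains beliefs, plans against goals
    ABSTRACTING = 3   # forms new abstractions for novel situations
    SOCIAL = 4        # models other agents, negotiates objectives

def can_cope(agent_level: Competence, situation_demand: Competence) -> bool:
    """An agent copes with situations no more demanding than its level."""
    return agent_level >= situation_demand
```

On a scale like this, the paper's Explorer sits near the bottom and its Philosopher much higher, which hints at how far apart their four types really are.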
Now apply this to the design of autonomous interstellar probes.
In almost every respect, the engineering domain of interstellar missions is an application area for AI rather than something which raises fundamentally new theoretical questions.