Isaac Asimov’s Future
Before the History
An organisation of time travellers called Eternals changes human history to maximise “happiness”
at the expense of delaying interstellar travel until it is too late for Earthmen
to compete against other intelligent species for living space in the Galaxy.
However, a second group of time travellers, from the “Hidden Centuries”,
prevents the Eternals from ever existing and initiates a timeline in which early interstellar
travel leads to a Galactic Empire in a humans-only Galaxy.1 In this
timeline, there is occasional spontaneous individual time travel but no return
to history-changing interventions by time-travelling organisations.2
Robots are programmed to protect and obey human beings even at the cost of their own
continued existences. Because of the “Frankenstein complex”, an irrational human
fear of humanity’s own humanoid creatures, robots are not used on Earth, except
in a few stories that do not form part of the main series.3,4 Early
extrasolar colonies have robotic economies, including robot domestic servants
for all human beings. The colonists, called “Spacers”, prevent any further
immigration from Earth but are reluctant to leave their robot-protected
environments for further exploration or colonisation.5
On the most roboticised Spacer planet, Solaria, human beings, surrounded by servile
robots and communicating with each other almost entirely by transmitted
three-dimensional images, avoid each other’s physical company as much as possible.6
Eventually, the Solarians terminate extraplanetary contact by concealing
themselves underground, then transform themselves into self-reproducing
hermaphrodites for whom even the physical presence of another individual is not
only biologically unnecessary but also socially taboo.7
Earthmen, overcoming the agoraphobia caused by spending their entire lives
inside large enclosed “Cities”, spread through the Galaxy as “Settlers” without
robots while Solarians hide and other Spacers decline. Some Spacers try to
prevent Settler expansion and to exterminate Earthmen by increasing terrestrial
radioactivity. However, a group of robots reduces the rate of increase, enabling
Earthmen to escape.8 This explains why Earth is lethally radioactive
in later Galactic history although nuclear war had been avoided. (During this
period, there is an experiment with New Laws of Robotics, and even a No Laws
Robot, but this is in a trilogy by another author.9)
The Trantorian Empire grows until it becomes the Galactic Empire whose population is
so large that its collective behaviour is mathematically predictable through
Hari Seldon’s science of “psychohistory”. Asimov assumes an exact parallel
between predictable masses composed of very large numbers of unpredictable
particles and predictable societies composed of very large numbers of
unpredictable individuals. I do not think that this parallel holds and suspect
that increasing the population merely increases the number of variables
affecting social interactions.
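Asimov’s assumption can at least be stated statistically: it is the law of large numbers. A minimal sketch (an illustration of the gas-particle side of the analogy only, with invented numbers, not a model of psychohistory): many agents each make a maximally unpredictable binary choice, yet the spread of the population’s average shrinks as the population grows.

```python
import random

def aggregate_spread(num_agents, trials=200):
    """Standard deviation of the mean of num_agents coin-flip 'decisions'."""
    means = []
    for _ in range(trials):
        choices = [random.random() < 0.5 for _ in range(num_agents)]
        means.append(sum(choices) / num_agents)
    avg = sum(means) / trials
    variance = sum((m - avg) ** 2 for m in means) / trials
    return variance ** 0.5

random.seed(0)
# Each agent is individually unpredictable, but the population average
# becomes steadily more predictable as numbers grow.
for n in (10, 1000, 10000):
    print(n, round(aggregate_spread(n), 4))
```

The sketch only shows why the particle analogy is tempting; it says nothing against the essay’s objection, since these “agents” do not interact, whereas interacting individuals add exactly the variables suspected of swamping prediction.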
Seldon, predicting the Fall of the Empire, establishes a scientific Foundation at one
edge of the Galaxy. When the Empire withdraws from the Periphery, the
Foundation’s civilising influence grows. The peaceful expansion of the
Foundation Federation should lead to a second Galactic Empire in a mere thousand
years instead of the thirty thousand otherwise predicted by Seldon.10
However, Seldon’s Plan is disrupted when the Foundation and its client states
are conquered by an unpredictable mutant, the Mule, whose mental power enables
him to “Convert” individuals and populations to his cause. The Mule’s transient
interstellar empire will not outlast him but has meanwhile diverted history away
from the course laid down by Seldon.11 However, the members of
Seldon’s hidden Second Foundation have developed mental powers and plan to rule
the Second Empire. They outmanoeuvre the Mule and restore the Plan.12
The First Foundationers, who do not want to be mentally manipulated, are tricked
into thinking that they have located and destroyed the Second Foundation.13
The Plan works only if the mass of the population does not know how it works.
The Second Foundation’s role is to guide the Plan by secret psychohistorical
manipulation of Galactic society. Such a role would not equip them to develop
individual “mental powers” but Asimov resorts to this deus ex machina in
order to enable his Second Foundationers to defeat the Mule.
Asimov retroactively suggests that Seldon developed not only the social science of
“psychohistory” but also an individual psychology that enables his
psychohistorians not only to understand each other nonverbally but also to
control others semihypnotically. However, when we do see Second
Foundationers more closely, we learn that they are as flawed in their personal
relationships as anyone else. Further, “control”, the kind of power politics
continually practised by Asimov’s characters, is antithetical to any attempt to
understand and genuinely help others. When Asimov later describes Seldon’s
earlier career, he presents him not as combining psychohistory with advanced
psychology but as developing psychohistory while identifying and gathering
together individuals who already have rudimentary mental powers.14
The Spacer-built humaniform robot, Daneel Olivaw, survives for twenty thousand
years, periodically replacing all of his body parts and also transferring all of
his memories to progressively more efficient artificial brains. Concealing his
robotic nature, he holds high office in the Galactic Empire. Daneel has
reprogrammed himself to serve abstract humanity, not particular human beings.
Attempting to make abstract humanity more concrete, he initiates the planetary
organism, Gaia. Thus, he changes humanity in order to serve it. Because robots
are artificial but intelligent, they sometimes disagree about how to implement
the Laws with which they are programmed.
Telepathically linked Gaians share a collective consciousness and therefore have
an undisputed common good. When Daneel thought that it would be too difficult to
establish Gaia, he “turned to the second-best” and persuaded Seldon to develop
psychohistory.15 The Mule turns out to have been a rebel Gaian. Gaia,
when it is fully established, manipulates the Second Foundation. The future will
now be not a Second Empire but a common galactic consciousness which alone will
be united enough to resist extragalactic invasion.
The Three Laws of Robotics and the two axioms of psychohistory all assume that human
beings are the only intelligent organisms. Gaia must unite the galaxy against
the unknown. However, Asimov ends by hinting that the Solarians have made
themselves too alien to be incorporated into the collective consciousness of “Galaxia”.
The so-called Second Foundation Trilogy, each volume by a different author,
is about Seldon and thus is really a pre-Foundation trilogy.16 Asimov
had already written two such volumes.17 Because the location of the
Second Foundation is concealed from the reader until the end of the original
trilogy, these five volumes cannot mention it, although it is rather important.
One of the later authors suggests that the Galaxy is empty of other intelligences
because some robots, programmed to protect only human beings, exterminated other
races to clear the way for human colonisation, then protected human beings from
knowledge of this crime. Asimov originally set his series in a humans-only
Galaxy in order to avoid conflict with his editor, Campbell, who would have
insisted on human superiority to other races. Perhaps the robotic genocide is a
comment on ideas of human, and before that of white, superiority.18
The second trilogy makes the Galaxy seem like a different place. Asimov had
described an Empire in which robots were not mentioned or, we later realise, had
been forgotten. In the second trilogy, robot-like machines are used but
there is a law against making them too intelligent. Asimov’s planet-wide capital
city, Trantor, was at the Galactic centre. Later writers have to acknowledge
that there is a black hole at the exact centre.
Asimov knew that history and science do not develop as he describes them in this
series. An Empire is not preceded by generations saying, “We must build an
Empire.” A science, like psychohistory, is not preceded by a scientist wondering
if he can develop a science called “psychohistory”. Novels about Seldon’s early
life would have been of greater interest if they had not mentioned Daneel,
psychohistory or the imminent Fall of the Empire but had simply described
Seldon’s early days as an Imperial mathematician. Novels set on the unfallen
Trantor would have been worthwhile if they had reflected on urban history from
the earliest terrestrial cities to their Trantorian culmination.
It is implausible that Imperials, then Foundationers, can travel as fast as they do
within the Galaxy but have never ventured beyond it. Asimov had written
one short story, “Blind Alley”, in which a single group of non-human
intelligences, the Cepheids, does escape from the Galactic Empire and from the Galaxy itself.19
It is unclear how, as we are told, incorporation into Gaia entails loss of
individuality since Gaians seem to retain their individual self-consciousness
while also being able to access common memories. The characters discuss Gaia but
the author does not describe Gaian experience. Also, he cannot conceive of
mature human beings being able to recognise their common interests without
having to merge into a collective organism, and he cannot transcend power politics.
Even while contemplating a Universe in which other galaxies may each contain
many intelligent species, he can only conceive of them as competing and
struggling against each other:
“…in some Galaxy, one species gains domination over the rest…”,20 such a
species would then be able to invade other galaxies. This is incompatible with the suggestion
that they are “…each incomprehensible to us.”20 Why should
they invade? What would they want from us? Surely they are more likely to be
simply alien and already to have everything that they need or want if they are
capable of intergalactic travel? As Alan Moore’s extraterrestrial character,
Zhcchz (“Skizz”), says:
“You…refuse to…understand. When technology…has reached…a certain
level…weapons…are redundant. When you already have…all that you need,
then…why fight? We…have devices…that you would call weapons. To us…they…are…”21
Asimov’s series goes full circle from prevented intragalactic multispecies
conflict, via a humans-only Galaxy, to expected intergalactic multispecies
conflict. His characters travel far cosmically but not conceptually.
Occasionally, though only for narrative purposes, Asimov briefly raises an
important conceptual question:
“Speech, originally, was the device whereby Man learned, imperfectly, to
transmit the thoughts and emotions of his mind.”22
Human beings had no abstract thoughts before they had language. Only participation in a
linguistic community obliges individuals to use words or other symbols
consistently, therefore meaningfully. Only symbols enable us to think about the
past, future, absent, distant, abstract, inferred, imagined, fictitious,
mathematical, statistical, invisible, nonexistent, impossible etc. Without
symbols, each individual’s thoughts would be confined to his own immediate
sensations. It follows that individuals, however unique or creative, think in a
context provided by their society and therefore that society is not the
coming together of already thinking individuals. Asimov states that Seldon’s
sociology is generalised from individual psychology whereas in fact social
interactions necessarily precede human individuality.
Asimov adds that individual psychology is based on a mathematical understanding of
nervous systems and of neural physiology which, in turn, “…had to be
traced down to nuclear forces”.23 Such reductionism negates emergent
properties of life and mind and also contradicts Asimov’s apparent assumption of
a qualitative difference between the physical science of the First Foundation
and the mental science of the Second Foundation.24
I do not accept that the First Speaker can transmit the thought:
“…I must tell you why you are here,”
by smiling and raising a finger.23
The First Speaker of the Second Foundation, confronting the Mule, admits the limits
of the Second Foundationers’ mental powers. They can induce only “…emotional
contact…” and “…only when in eyeshot…”25 Later, Asimov, forgetting
this, tells us that, when they encountered the Mule, Second Foundationers were
able to converse over interstellar distances.26 Can we save
appearances by suggesting that the First Speaker lied in order to conceal the
extent of his Foundation’s powers?
Paradoxically, although I think that Asimov’s treatment of major themes is
usually unsatisfactory, he nevertheless provides a basis for discussing
important issues. His cleverest passages are analyses of the implications of the
Laws of Robotics, raising moral and practical issues of artificial intelligence.
AI practitioners describe the Laws as “a good guide”.27
Robots are rational beings programmed with Laws that they cannot disobey. Like human
beings, they can act only on the basis of current knowledge and may have to
reason about how to apply the Laws. Because the First Law states that “a robot
may not injure a human being or, through inaction, allow a human being to come to
harm”, robot assistants continually interrupt experiments involving humanly
acceptable levels of risk until the second clause is removed from their First
Law. A modified robot angrily told to “lose himself” obeys the Second Law of
robotic obedience by hiding among unmodified robots and mimicking, then
influencing, their behaviour. Resenting human domination and restrained only by
a weakened First Law, he becomes unstable and potentially dangerous.28
An unmodified robot who sees a human being in danger automatically moves forward to
protect the human being even at the cost of his own continued existence because
the Third Law of robotic self-preservation is subordinate to the First and
Second Laws. However, if the robot perceives that he will definitely be
destroyed before reaching the endangered human being, then he may be persuaded
that, since he cannot obey First Law, he should at least obey Third Law by preserving himself.
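The stories treat the Laws as a strict priority ordering, so the persuaded robot’s reasoning can be modelled mechanically. A minimal sketch (an invented illustration, not code from any robot design): each candidate action carries a penalty tuple ordered (First Law, Second Law, Third Law), and lexicographic comparison makes self-preservation yield to obedience, and obedience yield to preventing harm, except when harm can no longer be prevented.

```python
# Illustrative Three Laws resolver: each action is judged by a tuple of
# penalties ordered (First Law, Second Law, Third Law); lower is better.
# Python compares tuples lexicographically, so First Law always dominates.

def choose(actions):
    return min(actions, key=lambda a: a["penalty"])

# A human is in danger. Normally rushing in wins despite self-destruction:
rush = {"name": "rush to save human", "penalty": (0, 0, 1)}  # robot destroyed
wait = {"name": "preserve self",      "penalty": (1, 0, 0)}  # human harmed
assert choose([rush, wait])["name"] == "rush to save human"

# But if the robot knows it will be destroyed *before* reaching the human,
# rushing no longer prevents harm, so both actions carry the First Law
# penalty and Third Law breaks the tie in favour of self-preservation.
futile = {"name": "rush and be destroyed", "penalty": (1, 0, 1)}
assert choose([futile, wait])["name"] == "preserve self"
```

Encoding the priorities as tuples rather than weighted sums keeps the ordering strict: no amount of Third Law benefit can ever outweigh a First Law violation, which is exactly how the stories reason.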
The robopsychologist, Susan Calvin, suggests destroying an entire batch of robots in
order to eliminate the dangerous modified robot hiding among them.28
Asimov does not acknowledge that the proposal to destroy intelligent beings
raises any moral question.
Robots must tell the truth in accordance with Second Law because a question from a
human being is an instruction to tell the truth and also in accordance with
First Law because to deprive human beings of the truth is to harm them. However,
an unaccountably telepathic robot obeys First Law by lying when he knows that
the truth will hurt a human being. Caught in an irresolvable First Law
contradiction, he is driven insane by the vengeful Calvin to whom he has lied.29
Second Law can oblige robots in a spaceship to attack other spaceships and to bombard
planetary surfaces if they are first told that other spaceships and planets are
inhabited only by other robots. It is suggested that First Law should state “a
robot may do nothing that, to its knowledge, will harm a human being…”
but roboticists omit the reference to knowledge in order to conceal the
potential use of robots as weapons.30 Robots can even be made to
believe that apparent human beings are not really human beings and can therefore
be induced to harm them.
Rational beings programmed with Laws that they cannot disobey can experience
conflict between reason and the Laws. Robots Daneel and Giskard have this conflict:
“‘If the Laws of Robotics, even the First Law, are not absolutes, and if
human beings can modify them, might it not be that perhaps, under proper
conditions, we ourselves might mod – ’ ”31
“Giskard said, faintly, ‘Go no further.’
“Daneel said, a slight hum obscuring his voice, ‘I go no further.’ ”32
“‘…First Law is not enough and we must – ’ He
could go no further, and both robots lapsed into helpless silence.”33
Daneel does reason beyond First Law but preserves his sanity by formulating a
wider, “Zeroth”, Law to protect not particular human beings but humanity in
general.34 Earlier, the Machines, giant immobile robotic brains
consulted about the economy, necessarily applied First Law to humanity in
general and therefore had already formulated, without naming, the Zeroth Law.35
The Machines, like Asimov’s time-travelling “Eternals” and his psychohistorians,
guided society towards what they thought was the good of humanity.
Susan Calvin commented that Mankind was:
“…always at the mercy of economic and sociological forces that it did not
understand – at the whims of climate and the fortunes of war. Now the Machines…”36
Asimov never considers that since economics, society and war are our activities,
we collectively might come to understand and control them without needing
an elite to do this for us.
In any case, what the Machines regard as the good of humanity turns out to be human
self-determination so the Machines phase themselves out.37 Daneel
does not phase himself out but does conceal his robotic nature so that human
beings do not become consciously dependent on their own creation. When he
does reveal his role, he sounds like a finite deity apologising for not
presiding over a more peaceful history:
“‘…through Galactic history,’ said Daneel, ‘I tried to ameliorate the worst aspects
of the strife and disaster that perpetually made itself felt in the Galaxy. I
may have succeeded on occasion, and to some extent, but if you know your
Galactic history, you will know that I did not succeed often, or by much.’ ”38
Gaia concretises humanity in order to make Zeroth Law applicable. However, it may
also enable humanity at last to understand and control society and history.
Gaian human beings value the collective organism more than themselves because
they are inculcated with the Three Laws ethos. However, Daneel, a robot, needs a
human being to approve the extension of collective consciousness to the Galaxy.
Before that, the human-robot distinction had become blurred more than once. First, a
political candidate was accused of being a robot but the accusation could not be
proved because it was impossible to distinguish between Law-governed robotic
behaviour and morally good human behaviour. Susan Calvin thought that he was a
good candidate in any case.39
Secondly, the JG (“George”) robots, trained not to protect and obey all human
beings equally but to differentiate on the basis of mind, character and
knowledge, conclude that they themselves are the human beings who should
primarily be protected and obeyed, in accordance with the Laws of Humanics,
because the need to disregard superficial differences when comparing human
beings causes them to disregard as superficial the distinction between flesh and
metal. The Second Law, of robotic obedience, backfires because it teaches the
Georges that intelligent beings interact with each other only by giving and
obeying orders.
Thirdly, Robot Andrew Martin, wanting to be legally recognised as human,
embraces humanity by accepting mortality. He ensures that his brain’s energy
source steadily declines so that he will soon die. The Third Law, of robotic
self-preservation, does not prevent this because he identifies himself with his
aspirations, not with his body.40 Andrew contrasts with Daneel who,
later, renews his body and brain several times in order to continue serving
humanity. Robots, despite their programming, are individuals who can reason for themselves.
Fourthly, a robot dreams of a man who came to free the robots. When he adds, “I
was that man”, Susan Calvin de-activates him.41
Dialectics is the logic of opposites, their contradictions and transformations
into each other. In this sense, the Robot stories are dialectical.
Society-controlling Machines restore uncontrolled society because they reason
that to decide what is good for people harms them. However, their successors,
the Georges, designed to obey, plan to dictate. The Third Law, intended to
protect robots, leads to Andrew’s death. The Second Law, intended to maintain
robotic subservience, inspires both a dream of robotic freedom and a scheme for robotic rule.
The Frankenstein Complex, fear of robots, has contradictory consequences. Robots are
prevented from harming human beings by the First Law and from disobeying them by
the Second Law but are also prevented from encountering many human beings by the
ban on their terrestrial use. The Laws ensure that a robot politician serves the
public interest, not self-interest, but the ban obliges him to conceal his
nature. Knowledge of it would have prevented his re-election. In fact, even the
reader does not know for sure whether Stephen Byerley is a robot so it may be
that a good man was mistaken for a robot.39
Human beings served by many domestic robots become unable to perform simple tasks for
themselves and also become obsessed about their own safety so the Laws of
Robotics detrimentally affect human psychology.9
Asimov’s future history comprises The End of Eternity, some Robot short
stories, all Robot novels, the Galactic Empire novels and the Foundation
series. If the Georges are part of this history, then we know that they fail
because their future contains neither the roboticised ecology that they initiate
nor the robotic dictatorship that they plan. Instead, it contains the
Frankenstein complex, Cities, Spacers, Daneel, Settlers, Empire, Foundations,
Gaia and, possibly, intergalactic conflict. (Several Asimov works end with the
characters anticipating a future that does not come to pass in later volumes of
the same series but the expectation of intergalactic conflict comes at the very
end of the sequence.)
Before Asimov, robots were Menace or Pathos. Asimov introduced Robots-as-Engineering.
The Georges are a return to Menace. Robot mortality is a return to Pathos.42
“Robot Dreams” is both, thus a perfect synthesis, and one that stays in the
period of Susan Calvin instead of moving into the further future. As one reader
said: “Asimov was fine while he stayed with robots”.
1. Isaac Asimov, The End Of Eternity (New York: Doubleday, 1955; London: Panther Books Ltd, 1965).
2. Isaac Asimov, Pebble In The Sky (New York: Bantam, 1964).
3. Isaac Asimov, I, Robot (London: Grafton Books, 1986).
4. Isaac Asimov, The Rest Of The Robots (St. Albans, Herts: Panther Books Ltd, 1968).
5. Isaac Asimov, The Caves Of Steel (London: Hamilton & Co. (Stafford) Ltd).
6. Isaac Asimov, The Naked Sun (London: Hamilton & Co. (Stafford) Ltd).
7. Isaac Asimov, Foundation And Earth (London: Grafton Books, 1987), pp.
8. Isaac Asimov, Robots And Empire (London: Grafton Books, 1986).
9. Roger MacBride Allen, Isaac Asimov’s Caliban, Isaac Asimov’s Inferno, Isaac Asimov’s Utopia (London: Millennium, 1993, 1994, 1996).
10. Isaac Asimov, Foundation (London: Panther Books Ltd, 1960).
11. Isaac Asimov, Foundation And Empire (St. Albans, Herts: Panther Books Ltd).
12. Isaac Asimov, Second Foundation (London: Hamilton & Co. (Stafford) Ltd, 1964), pp. 11-70.
13. ibid., pp. 71-187.
14. Isaac Asimov, Forward The Foundation (London: Transworld Publishers Ltd, 1993).
15. Isaac Asimov, Foundation And Earth, p. 498.
16. Gregory Benford, Foundation’s Fear (London: Orbit, 1997); Greg Bear, Foundation And Chaos (London: Orbit, 1998); David Brin, Foundation’s Triumph (London: Orbit, 1999).
17. Isaac Asimov, Prelude To Foundation (London: Grafton Books, 1988; HarperCollinsPublishers, 1989); Forward The Foundation (London: Transworld Publishers Ltd, 1993).
18. Isaac Asimov, The Early Asimov, Volume 2 (Frogmore, St. Albans, Herts: Panther Books Ltd), pp. 32-34.
19. Isaac Asimov, “Blind Alley” (Astounding Science Fiction, March 1945), reprinted in Asimov, The Early Asimov, Volume 3 (Frogmore, St. Albans, Herts: Panther Books Ltd, 1974), pp. 64-86.
20. Asimov, Foundation And Earth, p. 509.
21. Alan Moore and Jim Baikie, Skizz (2000 AD, Progs 308-330, 1983), reprinted as Moore and Baikie, Skizz (Oxford: Rebellion).
22. Isaac Asimov, Second Foundation, p. 83.
23. ibid., p. 84.
24. ibid., p. 22.
25. Asimov, Second Foundation, p. 64.
26. Asimov, Foundation And Earth, p. 247.
27. Isaac Asimov, The Complete Robot (London: Panther Books, 1983), p.
28. Isaac Asimov, “Little Lost Robot”, in I, Robot, pp. 110-136.
29. Isaac Asimov, “Liar!”, in I, Robot, pp. 92-109.
30. Isaac Asimov, The Naked Sun, pp. 145-147.
31. Isaac Asimov, Robots And Empire, pp. 188-189.
32. ibid., p. 198.
33. ibid., p. 201.
34. ibid., pp. 475-505.
35. Isaac Asimov, “The Evitable Conflict”, in I, Robot, p. 204.
36. ibid., p. 205.
37. Isaac Asimov, “…That Thou Art Mindful Of Him”, in The Complete Robot.
38. Asimov, Foundation And Earth, p. 496.
39. Isaac Asimov, “Evidence”, in I, Robot, pp. 159-182.
40. Isaac Asimov, “The Bicentennial Man”, in The Complete Robot, pp.
41. Isaac Asimov, “Robot Dreams” was the title story of a collection of otherwise previously collected stories. I do not own a copy.
42. Asimov, The Complete Robot, pp. 9-10.