
Global Catastrophic Risks Prevention Plan
Plan A1
“Super UN”
Timeline: 2015, 2020, 2030, 2040, 2050
Research
Social support
UN role is
growing
Risk control
Creation of united worldwide
risk prevention authority
• Comprehensive list of risks
• Integration of different
approaches
• Probability assessment
• Prevention roadmap
• Cooperative scientific community
with shared knowledge
and low rivalry (CSER)
• Popularization (articles, books,
forums, media)
• Public support, street action
(anti-nuclear protests in the 80s)
• Political support (parties)
• All states in the world
contribute to the UN to fight
some global risks
• A smaller catastrophe could
help unite humanity (pandemic,
small asteroid, local nuclear war)
Technology bans
• International ban of dangerous technologies or
voluntary relinquishment
(like not creating new flu strains)
• Freezing potentially dangerous projects for 30 years
• Lowering international confrontation
Technology speedup
• Differential technological development: develop
safety and control technologies first (Bostrom)
• Laws and economic stimuli (Richard Posner,
carbon trading)
Surveillance
• International control systems (IAEA)
• Transparent society (David Brin)
Values
transformation
Elimination of certain risks
• Universal vaccines
• Asteroid detection (WISE)
• Public desire for life extension and
global security is growing
• Reduction of radical religious (ISIS) or
nationalistic values
• Popularity of transhumanism
Second half of the 21st century
• Worldwide video-surveillance
and control
• Center for quick response to any
emerging risk
• Peaceful unification of the planet based on
a system of international treaties
Plan A2
Friendly AI
Plan A3
Rising robustness
Plan A4
Space
• Study of Friendly AI theory
• Promotion of Friendly AI
(Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects
(recruiting scientists)
Improving
sustainability of
civilization
• Intrinsically safe critical systems
• Growing diversity of human beings
and habitats
• Universal methods of prevention
(resistant structures,
strong medicine)
• War for world domination
• One country uses bioweapons to kill the
whole world’s population except its own,
which is immunized
• Use of a super-technology (like nanotech)
to quickly gain global military dominance
• Blackmail with a Doomsday Machine for world
domination
• Space stations as temporary
asylums (ISS)
• Cheap and safe launch systems
• Creation of space colonies on the
Moon and Mars (Elon Musk)
Plan C
Preparation
• Fundraising and promotion
• Textbook to rebuild civilization
(Dartnell’s book “The Knowledge”)
• Hoards with knowledge, seeds and
raw materials (Doomsday vault in
Norway)
• Survivalist communities
Time capsules
with information
Leave backups
• Underground storage
with information and DNA for future
non-human civilizations
• Eternal disks from the Long Now
Foundation (or “M-discs”)
Plan D
Saved by non-human
intelligence
Deus ex machina
Bad plans
• Maybe extraterrestrials are
looking for us and will save us
• Maybe we live in a simulation
and the simulators will save us
• The Second Coming,
a miracle, or life after death
• Send radio messages into space
asking for help if a catastrophe is
inevitable
• Narrow AI
• Human emulations
• Value loading
Improvement of
human intelligence
and morality
Needed to quickly pass the
risk window
• Nootropics, brain stimulation, and gene
therapy for higher IQ
• Education, rationality, and fighting
cognitive biases
• High empathy needed in new geniuses
to prevent them from becoming
superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Lower proportion of destructive beliefs,
risky behaviour, and selfishness
• Investment in super-technologies
(nanotech, biotech)
• High-speed technological progress
helps to outpace the slow
process of resource depletion
• Underground bunkers
• Remote islands
• Nuclear submarines
Messages to
ET civilizations
Quantum immortality
• If the many-worlds interpretation of QM is
true, an observer will survive any sort
of death, including any global catastrophe
(Moravec, Tegmark)
• It may be possible to create an almost
one-to-one correspondence between the survival
of an observer and the survival of a group of people
(e.g. if all are in a submarine; see the sketch below)
• Other human civilizations must exist in
the infinite Universe
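A minimal formalization of the quantum immortality reasoning (an illustrative sketch added here, not part of the original roadmap; p is an assumed parameter): if a catastrophe leaves the observer alive in only a fraction p of branches, branches without the observer contribute no observer-moments, so the observer’s next experience always lies in a surviving branch:
\[
% Sketch only: p = fraction of branches in which the observer survives (assumption)
P(\text{next observer-moment is in a surviving branch})
  = \frac{p}{p + (1-p)\cdot 0} = 1
  \quad \text{for any } p > 0 .
\]
The objective survival probability p can be arbitrarily small; the argument concerns only subjective expectation.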
Prevent x-risk research
because it only increases risk
Controlled regression
• Keep dangerous information
and ideas secret from
terrorists
• Do not advertise the idea of
man-made global catastrophe
• Don’t try to control risks, as this
would only raise them
• As we can’t measure the probability of a global catastrophe,
it may be unreasonable to try to
change it
• Use a small catastrophe to prevent a larger
one (Willard Wells)
• Luddism (Kaczynski):
relinquishment of dangerous science
• Creation of an ecological civilization without
technology (“World Made by Hand”, anarcho-primitivism)
• Limitation of personal and
collective intelligence to prevent
dangerous science
• Exploring the Universe
Unfriendly AI
• Kills all people and maximizes non-human values
(paperclip maximiser)
• People are alive but suffer extensively
at the highest possible level
• Nanotech-based immortal body
• Diversification of humanity into several successor species capable of living in space
• Mind uploading
• Integration with AI
AI based on uploading
of its creator
• Friendly to the value system of its creator
• Its values consistently evolve during its self-improvement
• Limited self-improvement may solve the friendliness
problem
Interstellar distributed
humanity
• Many unconnected human civilizations
• New types of space risks (space wars, planetary and stellar
explosions, AI and nanoreplicators, ET civilizations)
Rebuilding civilization after the catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Resurrection by
another civilization
• Mechanical life
• Preservation of information about
humanity for billions of years
• Safe narrow AI
Technological
precognition
• Prediction of the future based
on advanced quantum technology
and avoiding dangerous
world-lines
• Search for potential terrorists using
new scanning technologies
• Special AI to predict and
prevent new x-risks
Depopulation
• Could provide resource preservation
and make
control simpler
• Natural causes:
pandemics, war, hunger
(Malthus)
• Birth control (Bill Gates)
• Deliberate small
catastrophe (bio-weapons)
Manipulating the
extinction probability
using the
Doomsday argument
(see the sketch below)
• Changing the total number of future births
(Bostrom’s UN++ method)
• Changing birth density to
get more time
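A hedged sketch of the Gott-style estimate this plan manipulates (an illustrative addition, not part of the original roadmap; n and q are assumed values): if one’s birth rank n is treated as uniformly random among all N humans who will ever be born, then
\[
% Sketch only: n = births so far (assumed ~10^{11}), q = confidence parameter (assumed 0.05)
P\!\left(\frac{n}{N} > q\right) = 1 - q
\quad\Longrightarrow\quad
N \le \frac{n}{q} \ \text{with confidence } 1 - q .
\]
With n ≈ 10^11 and q = 0.05 this gives N ≤ 2 × 10^12 total births; UN++ works by committing to change the number of future births, which shifts what this estimate predicts.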
Unfriendly AI may be better
than nothing
• Any super AI will retain some
memory of humanity
• It will use simulations of human
civilization to study the probability of
its own existence
• It may share some human
values and distribute them
through the Universe
Reboot of civilization
• Several reboots may happen
• Eventually there will be either total collapse
or a new supercivilization level
Robot replicators in space
• Create conditions for the
appearance of new intelligent
life on Earth
• Directed panspermia
(Mars, Europa, space dust)
• Preservation of biodiversity
(apes, habitats)
A random strategy may help
us to escape some dangers
that killed all previous
civilizations in space
• Colonization of the Galaxy
Timely achievement of immortality
• “Orion” style nuclear generation ships with colonists
• Starships on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Strange strategy
to escape
the Fermi paradox
• Colonization of the solar system, interstellar travel
and Dyson spheres
Global
catastrophe
Interstellar travel
Preservation of
earthly life
• Interstellar radio messages
with encoded human DNA
• Hoards on the Moon,
frozen brains
• Voyager-style spacecraft with
information about humanity
• A wrong command makes the shield system
attack people
• Geoengineering goes awry
• World totalitarianism leads to catastrophe
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death,
and existential risks
• AI Nanny – a hypothetical variant of super
AI that acts only to prevent
existential risks (Ben Goertzel)
Readiness
Building
Fatal mistakes in world control system
Creation of a small AI capable
of recursive self-improvement and based
on Friendly AI theory
Colonization of the Solar system
• Self-sustaining colonies on Mars
and large asteroids
• Terraforming of planets and asteroids using
self-replicating robots
• Millions of independent colonies inside asteroids
and comet bodies in the Oort cloud
• “A world order in which there is a
single decision-making agency
at the highest level” (Bostrom)
• Super AI which prevents all possible risks and
provides immortality and happiness to humanity
Superintelligent AI
High-speed
tech development
Singleton
• Worldwide government system based on AI
Seed AI
Practical AI studies
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Temporary asylums in space
Space colonization
Plan B
Survive
Solid Friendly AI theory
• Geoengineering against global warming
• Worldwide missile defense and
anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Merger of the state, the Internet,
and worldwide AI into a uniform
monitoring, security, and control system
• Isolation of risk sources at a great distance
from Earth
Planetary unification war
Global
catastrophe
Study and Promotion
Active shields
• Creation of a civilization which has many
values and traits in common with humans
• Resurrection of specific people
Control of the simulation
(if we are in one)
• Live an interesting life so our
simulation isn’t switched off
• Don’t let them know that we know
that we live in a simulation
• Hack the simulation and
control it
• Negotiation with the simulators,
or pray for help
Attracting a good
outcome by
positive thinking
• Preventing negative
thoughts about the end of
the world and about
violence
• Maximum positive
attitude “to attract” a
positive outcome
• Start partying now
Global
catastrophe
Plans A1–A4 represent the best possible ways to prevent a global catastrophe. These plans could be pursued simultaneously
until late stages.
Plans B, C, and D are successively worse and offer only a small chance of survival.
(c) Alexey Turchin, 2014