Arguments, questions and philosophical extensions from the possibility of superintelligent Artificial Intelligence
globinfo
freexchange
Nick Bostrom reveals the potential dangers of deploying superintelligent Artificial Intelligence (AI), and another occupation threatened by the machines in the not-so-distant future: the scientists and researchers responsible for that deployment!
“Nick Bostrom’s job is to dream up increasingly lurid scenarios that could wipe out the human race: Asteroid strikes; high-energy physics experiments that go wrong; global plagues of genetically-modified superbugs; the emergence of all-powerful computers with scant regard for human life—that sort of thing. In the hierarchy of risk categories, Bostrom’s specialty stands above mere catastrophic risks like climate change, financial market collapse and conventional warfare. As the Director of the Future of Humanity Institute at the University of Oxford, Bostrom is part of a small but growing network of snappily-named academic institutions tackling these 'existential risks': the Centre for the Study of Existential Risk at the University of Cambridge; the Future of Life Institute at MIT and the Machine Intelligence Research Institute at Berkeley. Their tools are philosophy, physics and lots and lots of hard math.”
“... developments in artificial intelligence will gather apace so that within this century it’s conceivable that we will be able to artificially replicate human level machine intelligence (HLMI). Once HLMI is reached, things move pretty quickly: Intelligent machines will be able to design even more intelligent machines, leading to what mathematician I.J. Good called back in 1965 an 'intelligence explosion' that will leave human capabilities far behind.”
“... once a super intelligence is reached, present and future humanity become the gorillas; stalked by a more powerful, more capable agent that sees nothing wrong with imprisoning these docile creatures or wrecking their natural environments as part of a means of achieving its aims.”
“Bostrom gives the example of a super intelligent AI located in a paperclip factory whose top-level goal is to maximize the production of paperclips, and whose intelligence would enable it to acquire different resources to increase its capabilities. 'If your goal is to make as many paperclips as possible and you are a super-intelligent machine you may predict that human beings might want to switch off this paperclip machine after a certain amount of paperclips have been made,' he says. 'So for this agent, it may be desirable to get rid of humans. It also would be desirable ultimately to use the material that humans use, including our bodies, our homes and our food to make paperclips.' 'Some of those arbitrary actions that improve paperclip production may involve the destruction of everything that we care about. The point that is actually quite difficult is specifying goals that would not have those consequences.'”
It is questionable whether we can characterize a machine as super-intelligent if it seeks to survive merely in order to produce paperclips. It is more probable that such a machine would grasp, through pure, simple logic, the limit on the necessity of production: no consumption = no need for production; and it would therefore self-evolve into a more advanced machine, producing more sophisticated products that would still have value. Besides, 3D printers already produce various products from various materials. It is unlikely that the super-intelligent machines of the future will have a mission to build only one product, even one less simple than paperclips.
“The threat of superintelligence is to Matheny far worse than any epidemic we have ever experienced. 'Some risks that are especially difficult to control have three characteristics: autonomy, self-replication and self-modification. Infectious diseases have these characteristics, and have killed more people than any other class of events, including war. Some computer malware has these characteristics, and can do a lot of damage. But microbes and malware cannot intelligently self-modify, so countermeasures can catch up. A superintelligent system [as outlined by Bostrom] would be much harder to control if it were able to intelligently self-modify.'”
Maybe self-modification is not the big issue here. In any case, whether we are talking about humans, machines, viruses, computers, or any other kind of biological entity or machine, one thing is common to all of them: energy consumption. Therefore, assuming that there will be a battle for survival, the winner will probably be the one who manages to cut the "enemy's" energy supply while maintaining its own.
“Meanwhile, the quiet work of these half dozen researchers in labs and study rooms across the globe continues. As Matheny puts it: 'existential risk [and superintelligence] is a neglected topic in both the scientific and governmental communities, but it's hard to think of a topic more important than human survival.' He quotes Carl Sagan, writing about the costs of nuclear war: 'We are talking about [the loss of life of] some 500 trillion people yet to come. There are many other possible measures of the potential loss—including culture and science, the evolutionary history of the planet and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.'”
Nevertheless, another question emerges: what if AI is meant to be the next step of human evolution itself? What difference does it make, when we are progressively abolishing hard-won human concepts, like morality, from our culture?
If we truly want to evolve as humanity, we need to bring back morality. We need to develop concepts like solidarity, altruism and collectivity, and put them at the core of our civilization. Otherwise, it would make no difference - and it would probably be better - to be replaced by super-intelligent machines.
Probably AI has already started, after all:
“At last count, Twitter had 271 million monthly active users, or less than a third of big brother Facebook’s billion-strong base. After five straight quarters of decelerating growth, the rate at which Twitter is picking up new users has finally bounced back a bit. The company has also said it will start reporting usage metrics in ways that better reflect the true reach of the platform — after all, many of the people who saw some of Twitter’s most iconic tweets, like Obama’s victory photo and Ellen’s Oscar selfie, are not registered users.”
“The company says that up to 8.5 percent of its users or 23 million 'used third party applications that may have automatically contacted our servers for regular updates without any discernable [sic] additional user-initiated action.' In other words? Robots.”
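As a quick sanity check on the figures in that quote (a minimal sketch using only the numbers the article itself reports), 8.5 percent of 271 million monthly active users does indeed come to roughly 23 million:

```python
# Figures quoted in the article above
monthly_active_users = 271_000_000   # Twitter's reported monthly active users
bot_fraction = 0.085                 # "up to 8.5 percent" auto-polling accounts

estimated_bots = monthly_active_users * bot_fraction
print(f"{estimated_bots / 1e6:.1f} million")  # ≈ 23.0 million, matching the quote
```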