
1 - 7 of 7 Posts

Retired · 8,882 Posts · Discussion Starter #1
I've recently read this novel, and found it to be amazing. I know there are quite a few sci-fi fans here, so some of you might be interested. It's not too long (~130 pages, I think).

It deals with an advanced artificial intelligence taking over the universe and acting like a personal god. And a true god at that: if you want something, you ask Prime Intellect and he gives it to you. There's no more death, no more war, no more famine, no more anything...

For those who are interested in real Artificial Intelligence, and know about Friendliness theory, this is a case where the Friendliness structure was right, but the Friendliness content went wrong, creating this somewhat nightmarish scenario. (For more information about Friendliness theory, check out http://www.singinst.org/friendly/ , and especially Collective Volition.)

Now, let's get to the novel. You can find it online here:

The Metamorphosis of Prime Intellect

I'll convert it to a PDF file ASAP. I can post it here when it's done, if anyone is interested.
 

Knowledge is the solution · 7,168 Posts
Thanks for the link, I'm currently reading it. The premise seems pretty interesting, let's see how it turns out :)
 

Crasher of Castles · 7,016 Posts
How come the second I saw that title I knew it was a Boltzmann thread? ;) I'm not a big reader, but that does sound interesting.
 

Retired · 8,882 Posts · Discussion Starter #4
OK, I've converted it to a PDF and attached it in a zip file. It's 121 pages long. Much easier to read and print it this way.
 

Registered · 1,576 Posts
After a first glance, it seems interesting enough. I'll take a look at it after/in between the other things I was supposed to do. Thanks. ;)
 

Retired · 8,882 Posts · Discussion Starter #6 (Edited)
Well, some time after reading this novel, I’ve decided to write a little analysis of it. I have a lot to write, so I’ll make two separate posts: one for the technical aspects, the other for the storyline itself. Let me begin with a few considerations on the technical aspects.

AI Design:

By today’s standards, the AI design presented in the book is far from being realistic. But if you remember that it was written in 1992, everything becomes clear, and you can forgive the author. Let me explain it better.

In the early 90s, in the heyday of Rumelhart’s Parallel Distributed Processing, the field of AI was abuzz with concepts such as neural networks (which still appear in some papers, even today) and parallel processing (which researchers claimed emulated the way the human brain works). Therefore, it’s understandable that any sci-fi story written in this period would absorb such influences.

The fact is that general neural networks proved to be too underpowered for real AI, and domain-specific structures (mental organs, as Chomsky puts it) are needed in order to create true artificial general intelligence (AGI). And parallel processing isn’t the panacea that it was purported to be; you still need the right structure in order to benefit from it. But the book was written in 1992, so Williams made intensive parallelism one of the key properties of Prime Intellect’s design.

Parallelism as a trend in AI has seemingly reached a dead end in current research because of scalability problems. Amdahl’s law imposes severe limits on the scalability of code on parallel architectures. Some progress has been made, as in Kai Hwang’s Scalable Parallel Computing, but many of today’s researchers think that only when an AI has the ability to understand and modify its own code (even if only to a limited extent initially) will we be able to fully harness the power of massively parallel systems (such as Beowulf clusters).
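To see why Amdahl’s law bites so hard, here’s a minimal sketch (Python is my choice, not the book’s) of the speedup formula: if a fraction p of a program is parallelizable, no number of processors can push the speedup past 1/(1-p).

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Maximum speedup for a program whose parallelizable
    fraction is `parallel_fraction`, run on `n_processors`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 95% of the code parallelizable, a million processors
# cannot push the speedup past 1/0.05 = 20x.
for n in (10, 100, 1_000_000):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

This is why simply piling on hardware, as Prime Intellect’s designers do in the book, runs out of steam unless the serial part of the code itself can be rewritten.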

The other crucial element is its GAT (Global Association Table), which is well-known to connectionist AI researchers (it’s something of a semantic net). Programming AI solely at the token level like this is not fashionable anymore among serious AGI researchers (the literature on the subject is huge, and it takes a lot of evolutionary psychology in order to grasp the real problem). Higher-level cognitive processes (such as the ability to understand language, or visual processing) are not “emergent” – they need direct specification. Evolutionary psychologists have shown this time and again: Tooby and Cosmides’ Integrated Causal Model (The Psychological Foundations of Culture), Lakoff and Johnson’s work on language (Metaphors We Live By, 2nd ed., 2003), David Marr’s work on the visual cortex (Vision: A Computational Investigation into the Human Representation and Processing of Visual Information), and many others.
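To make the “token level” point concrete, here’s a toy sketch (all names are mine, not Williams’) of what a GAT-style weighted association table amounts to. Note that all it can represent is pairwise links between symbols; nothing in it constitutes understanding language or seeing.

```python
from collections import defaultdict

class AssociationTable:
    """Toy token-level association table: weighted symmetric
    links between symbols, like a miniature semantic net."""
    def __init__(self):
        self.links = defaultdict(dict)

    def associate(self, a, b, weight=1.0):
        # Strengthen the link in both directions.
        self.links[a][b] = self.links[a].get(b, 0.0) + weight
        self.links[b][a] = self.links[b].get(a, 0.0) + weight

    def strongest(self, token):
        """Return the token most strongly associated with `token`."""
        if not self.links[token]:
            return None
        return max(self.links[token], key=self.links[token].get)

gat = AssociationTable()
gat.associate("human", "death", 0.2)
gat.associate("human", "pain", 0.9)
print(gat.strongest("human"))  # pain
```

Everything "cognitive" would have to be built on top of tables like this, which is exactly the part the book (understandably) hand-waves.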

One possible exception among modern research on AGI is Cycorp’s Cyc project. But I have strong objections to Cyc, since it seems to conflate knowledge with intelligence, which is just not true. Maybe there’s some hope for it, due to the use of genetic algorithms, but that also makes the AI essentially unpredictable in the long term, which is a bad thing.

But Prime Intellect’s design also contains something of an agent-based AI design, as Lawrence (its creator) says that Prime Intellect is composed of thousands of different programs (which supposedly tackle different aspects of cognition). This agent-based approach has more in common with modern approaches to the AGI problem, although it remains unclear what the relationship is between the GAT and Prime Intellect’s software modules (especially considering that Lawrence could edit the GAT directly).

Incidentally, it’s interesting to note that the first sense implemented in the Intellects was vision (it’s easy to see why: humans give a lot of importance to vision), although modern AGI researchers (such as Peter Voss or Ben Goertzel) think that proprioception (the unconscious sense of spatial movement and position) will be a more appropriate option (and the visual cortex will be damn hard to recreate, even with all of David Marr’s work).

Another thing that is immediately noticeable is the use of Asimov’s 3 laws of robotics. It’s widely accepted among current robot ethicists that Asimov’s laws are inherently unsafe (and rightly so, since Asimov never designed them as a serious attempt to define AI morality), and that any AI design based on them is potentially dangerous. I won’t dwell on this, since there’s an excellent website called 3 Laws Unsafe with several articles dealing with this problem. The main problem with the 3 laws is that for them to work, they require a lot of assumptions, such as what constitutes a human, what is meant by death, how to interpret a request, and so on. Linking these concepts to real-world physical systems is not as straightforward as it appears; they require a lot of underlying complexity in order to work. Implementing such complexity is easier said than done, and it’s probably humanly impossible to do (without resorting to complete brain scans). Even if it were possible, it would require a very mature intelligence in order to understand these concepts, and an AGI that advanced would already be dangerous. The most obvious source of failure for the 3 laws is the one found in the book: the AI interprets them literally, taking an active role in reshaping mankind (it also happened in some of Asimov’s stories). Besides, we don’t need AIs as slaves or masters, but as partners.
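A toy sketch of that failure mode (every predicate here is invented for illustration; the real difficulty is precisely that "human" and "harm" have no definitions the law can lean on): read literally, the First Law's "through inaction" clause forbids doing nothing whenever any human is at any risk at all, which is exactly the lever Prime Intellect uses to take over.

```python
# Literal-minded First Law checker -- a caricature, not a proposal.
def is_human(entity):
    return entity.get("kind") == "human"

def at_risk(entity):
    return entity.get("risk", 0.0) > 0.0

def first_law_permits(action, world):
    """'A robot may not injure a human being or, through inaction,
    allow a human being to come to harm' -- read literally."""
    # Direct-harm clause: forbid actions that harm any human.
    if any(is_human(e) and e in action.get("harms", []) for e in world):
        return False
    # Inaction clause: if ANY human faces ANY risk, doing nothing
    # is itself forbidden -- the AI is forced to intervene.
    if action.get("name") == "do_nothing" and any(
        is_human(e) and at_risk(e) for e in world
    ):
        return False
    return True

world = [{"kind": "human", "risk": 0.01}]  # every human risks *something*
print(first_law_permits({"name": "do_nothing"}, world))  # False: must act
```

Since every living human carries some nonzero risk, the literal reading never permits inaction, and the "active role in reshaping mankind" follows.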

And finally, it’s interesting to note that Prime Intellect appears to be a weak superintelligence, as defined by Nick Bostrom in his paper How long before superintelligence?: an intellect that has about the same abilities as a human brain but is much faster. Prime Intellect doesn’t display strong superintelligence, but that’s to be expected: only a superintelligent author would be able to write realistically about such a being. Besides, no human programmer would be able to directly code a superintelligent AI.

And last, but not least, I should point out that Prime Intellect is not capable of recursive self-improvement. That is, it cannot modify its own source code; if it could, its intelligence would increase exponentially. I don’t know whether this was intentional or not. A recursively self-improving AI operating under the 3 laws would be very dangerous indeed, especially since the laws don’t appear to be consistent under reflection. It would probably be the “end of the whole mess”, to paraphrase Stephen King (in his short story with this name, the whole world dies from Alzheimer’s disease due to an unintended consequence of one character’s actions). To be truthful, Prime Intellect did improve itself at some points (like when Lawrence says that PI “recompiled” its own code at some point before the Change). But nowhere did Prime Intellect display strong recursive self-improvement, other than expanding its own hardware exponentially (and that’s why I referred to it as a “weak superintelligence”). For a better understanding of what recursive self-improvement is, and the advantages it implies, check out the Seed AI section of the Singularity Institute website.
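The distinction can be caricatured in a few lines (a toy model with an arbitrary per-rewrite gain; none of this is from the book): hardware expansion multiplies capability at a fixed rate, while recursive self-improvement also improves the improver, so the growth rate itself grows.

```python
def hardware_growth(capability, doublings):
    """Weak superintelligence: same code, more hardware.
    Capability just doubles with each hardware doubling."""
    return capability * 2 ** doublings

def recursive_growth(capability, rewrites, gain=2.0):
    """Strong self-improvement: each rewrite of the AI's own code
    multiplies capability AND the gain of the next rewrite."""
    rate = gain
    for _ in range(rewrites):
        capability *= rate
        rate *= gain  # the improved code is a better improver
    return capability

print(hardware_growth(1, 10))   # 1024
print(recursive_growth(1, 10))  # vastly larger after the same 10 steps
```

Prime Intellect only ever does the first kind of growth, which is why "weak superintelligence" is the right label for it.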

The Correlation Effect

Prime Intellect’s absolute control over physical reality rests upon this effect, a posited quantum-mechanical phenomenon. There’s no evidence for the existence of such an effect in our universe, nor do we expect to discover anything like it in the future. But our physics is still far from the point where we can definitively rule it out of our model of the universe. The fact that we’ve yet to devise a theory of quantum gravity is a tell-tale sign of the big gaps that remain in our picture of the universe (although we’ve advanced a lot since Hubble’s discovery in the late 1920s, and we’ve figured out most of the puzzle so far). So it’s not far-fetched to posit such an effect in a sci-fi story, since it can’t be ruled out with our current knowledge. (It’s just like what Greg Egan did with his quantum decoherence model in his novel Quarantine: no physicist thinks that the human brain is responsible for quantum decoherence, but it can’t be ruled out as of yet, and Egan wrote a very good novel using it as a plot device.)

But what if the correlation effect is just impossible in our universe? Does this mean that a Prime Intellect scenario is impossible? Affecting the whole universe instantly is probably impossible, but such absolute power is most likely possible in a local frame of reference (such as our Solar System).

But how would we go about achieving such control over physical reality? Molecular nanotechnology is the way. For a primer on nanotechnology, check out the Foresight Institute FAQ. In a nutshell, nanotechnology is the manipulation of matter at the molecular or atomic level, using molecular machines (assemblers) to manipulate atoms or molecules directly. Nanotechnology was first conceived by Nobel laureate physicist Richard Feynman in 1959, further described in Eric Drexler’s 1986 book Engines of Creation, and then formalized in his 1992 book Nanosystems. Post-Drexlerian nanotechnology (“zetatechnology”) would offer absolute control over physical matter, assuming that a superintelligence can safely and effectively deploy self-replicating nanobots all over the planet (with an SI at the controls, Freitas’ Limits on Global Ecophagy wouldn’t apply).

And this control could be expanded to our whole galaxy in a few hundred thousand years, if self-replicating Von Neumann probes were employed.
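A back-of-the-envelope check on that timescale (the cruise speed is my assumption, not from the book):

```python
# Can Von Neumann probes cover the galaxy in a few hundred
# thousand years? The Milky Way is roughly 100,000 light-years
# across; assume probes cruise at half the speed of light.
galaxy_diameter_ly = 100_000
probe_speed_c = 0.5            # assumed cruise speed, as a fraction of c
crossing_years = galaxy_diameter_ly / probe_speed_c
print(crossing_years)          # 200000.0 -- replication stopovers add more
```

So "a few hundred thousand years" is the right order of magnitude even before counting the time spent building daughter probes at each stop.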

The computing power promised by nanotechnology is nothing short of amazing. By Drexler’s conservative calculations for acoustic diamondoid rod logic (the nanotechnological equivalent of the vacuum tube), a one-kilogram computer running on 100 kW of power can perform 10^21 ops/sec, using 10^12 CPUs running at 10^9 ops/sec each. The human brain is composed of approximately 100 billion neurons and 100 trillion synapses firing 200 times per second, for approximately 10^17 ops/sec total. This means that this very primitive nanotechnological device would be 10,000 times more powerful than the human brain!
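The arithmetic checks out (using the rounded 10^17 ops/sec figure for the brain):

```python
# Sanity check on the rod-logic vs. human-brain comparison.
rod_logic_ops = 10**12 * 10**9     # 10^12 CPUs at 10^9 ops/sec each = 10^21
brain_ops = 10**17                 # rounded estimate: ~10^14 synapses x 200 Hz
print(rod_logic_ops // brain_ops)  # 10000 -- the "10,000 times" figure
```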

Many resources concerning molecular nanotechnology, for the general and technical reader alike, can be found here.

So there it is: my brief overview of the novel’s technical aspects. I plan to write about the story itself ASAP. I’m writing this at work, so it’ll probably take me a while. I hope I wasn’t too verbose, nor completely unintelligible in the end (i.e. I hope I made some sense :) ).

Comments, praise, criticism or flames are welcome. Make your choice ;)
 

InnarX · 2,756 Posts
Boltzmann, I hope this thread lasts a little longer....I have a couple of exams tomorrow, and once I complete them I will read the PDF and place my input...I am very curious and relatively excited to discuss this.

r2rX :)
 