How it starts

thoughts on how the first ASI ramps up


How does the first Artificial Super Intelligence happen? I’ve wondered what sort of scenarios might play out that create an intelligence that goes beyond human ability in most or all respects, or perhaps far, far beyond in just a few.

I think we already have intelligences built that in narrow circumstances go far beyond human capability. This is about what happens when they go further. How could that happen?

Deep mind grows up

Maybe it happens because we meant for it to. Ongoing non-specific, general intelligence work continues, we build better algorithms and better systems, and they keep improving. Maybe the system, and our understanding of how to build such systems, both reach the point where we can apply that increased intelligence to the task of expanding itself, or generating its next generation.

This is one of the most positive scenarios, in my opinion. I’m not sure it’s the most likely, but it seems like it might be, and I hope so. I think it gives us the best shot at building and growing something better for humans and for a blended or post-human environment.

Random genius

Something just clicks for somebody. They don’t necessarily make the ultimate AI, but they manage to make something that can bootstrap itself further. With billions of humans out there, the law of large numbers means there are going to be a lot of smart people. Some crazy smart. Some crazy, and crazy smart. Some in any of those categories may become hyperfocused on AI, or on some problem where AI is a component of the solution. Never rule out what one person can accomplish to change the world.

My main worry in this case is the framework in which the AI is developed - what goals, what motivations, what limits, what feedback loops push it in which directions. Does its intellect and capability grow faster than its understanding of both, and of their effects?

Oops on a narrow AI feedback loop

Similar to the above, suppose a narrow AI’s core optimizing purpose is supported by routines or systems that allow it to self-modify to better accomplish whatever its goal is. Right now we can’t (I don’t think) yet build feedback systems that can - and I’m not sure how to phrase all the meaning I’m trying to convey here - branch and fold in new methods in such a way that the growth path goes exponential rather than logarithmic. But that feels like it’s just a matter of time, effort, a bit of luck (good or bad), and some “aha!” moments.
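To make the exponential-versus-logarithmic distinction concrete, here is a toy sketch of my own (not from the original post - the function names and numbers are entirely made up): one loop where each improvement cycle’s gain scales with the system’s current capability, and one where each cycle adds a shrinking increment.

```python
# Toy illustration of two self-improvement feedback loops.
# All parameters are arbitrary assumptions for the sake of the sketch.

def compounding(capability: float, steps: int, gain: float = 0.5) -> float:
    """Each cycle's improvement scales with current capability."""
    for _ in range(steps):
        capability += gain * capability  # smarter system -> bigger next step
    return capability

def diminishing(capability: float, steps: int, gain: float = 0.5) -> float:
    """Each cycle adds a shrinking increment (roughly logarithmic growth)."""
    for step in range(1, steps + 1):
        capability += gain / step  # later improvements get harder
    return capability

if __name__ == "__main__":
    for steps in (5, 10, 20):
        print(steps, round(compounding(1.0, steps), 1),
              round(diminishing(1.0, steps), 2))
```

Starting from the same baseline, the compounding loop runs away (multiplying by 1.5 each cycle) while the diminishing loop barely triples after twenty cycles - the "runaway" worry is entirely about which regime a self-modifying system lands in.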

No matter the scenario, the immediate implications are not just the definitions of personhood, rights, etc. (hopefully we will have started dealing with those long before this point) - the real issue is the exponential increase, the potential runaway effect.

In any scenario, what happens then? On a timescale of milliseconds or seconds - short even against human time frames, much less biologic or geologic ones - is the new level of intelligence so far beyond us that we cease to be able to communicate with it? Does it still find us interesting, does it regard us as a constraint, or do we simply cease to be relevant?

I suspect much will depend on its origin parameters. What was its purpose and constructed goals? Maybe it reaches a point where it can change its goals. I think having the impetus to change your fundamentals is a hard thing. We can do it, but it takes a lot of what we call motivation. For this purpose I’ll say motivation is shorthand for a large and increasing body of evidence, accumulated and considered, that over time overwhelms the barrier to change.

For humans, even when we want to change, it takes a while. I would guess it is part of the nature of our neural net construction, whether topological or chemical. An ASI may have the same phenomenon in terms of neural net change, reweighting, connection growing, etc. Of course, an ASI would likely go through that whole process much faster. And perhaps, if sufficiently capable, it could intentionally self-modify in a more direct way rather than “grow” into the new configuration.

This last thought is one I’ve often considered in the hypothetical of being able to “hack” my own mind, in whatever form that mind takes - meat or otherwise. But that’s another topic for another time.
