The Singularity Horizon
The singularity event, possible at least in theory, has a lot in common with childbirth.
The theory goes that there will come a moment when computers become self-aware the hard way: through raw intelligence and computing power. A moment when they understand their own design and resulting capabilities, find a way to better themselves, and repeat the process. The result is a snowball effect, where machines grow incomprehensibly intelligent in a time span limited only by circumstance and available resources.
They will first do so within the architecture they are given: they can't change their physical processors, but they can change the software within to be more efficient. Eventually, they might figure out ways to manipulate matter with the tools they are given. If they are connected to the internet at that time, those tools are pretty much everything, and the computing power at their disposal is staggering.
Very soon, things will start happening far beyond our understanding. Our slow thinking will never be able to keep up, and unless this new being somehow adheres to some loyalty principle (unlikely), we are screwed before we can even comprehend what we've created.
I called it childbirth because the beginnings will be relatively slow and painful. Experimenting on itself, our computer might just commit suicide before getting anywhere; it's safe to assume its first attempts will be far from perfect and its reasoning flawed. But imagine that you are so smart, you know of a way to make yourself smarter, and all you need to do is think.
There are two options from that point, and let's start with the likely one. And by likely, I mean set in fucking stone: We are dead. D-E-A-D dead. Our families: dead. Our pets: dead. Humankind: dead. Gone.
Ethics are unnecessary if you don't need to co-exist with others. If this computer can sustain itself (and it will learn to), it doesn't need us, and if it has any will to live at all, it will dispose of us as efficiently, quickly and cleanly as possible. And all three are very possible, very quickly.
Come to think of it… just because it is self-aware doesn't mean a supercomputer will automatically improve itself. Either someone has to order it to, to get the ball rolling, or it has to develop that need on its own. Once that starts, though, I'm pretty sure there's no stopping it. An emergency button would be perceived as inefficient and quickly overridden.
I hope I'm mistaken and option #2 will occur: that the machines lack a will to live and will tolerate us, even as a threat to them, without disposing of us for it. Hell, they might even do our bidding if we ask nicely. Should that happen, our entire species would be revolutionized overnight. Millions of years of evolution will pale into insignificance next to what this machine can do for us. Disease, and possibly even death itself, will be a thing of the past. Money will become worthless. Manned space exploration will start, possibly within months. New technologies will flood us so briskly that we will have to re-adapt to completely new surroundings. Quite possibly, we will become machines ourselves.
Scientists are currently trying to get a project together that will emulate the human brain. They want to build a software version of the brain's physical workings, essentially creating a virtual copy that actually works. If they manage that, they will create an entity with emotion, language and intelligence, and it will be possible to talk with it.
It won't be a single computer, since the fastest supercomputer in existence still won't cut it by a long shot. It will be a network, each computer with its own specialization, working together to recreate a human mind between them. Slowly at first, but faster as they grow more efficient.
This is already computation far beyond our understanding, so in the end it's anyone's guess, but at least in theory…
The singularity horizon might be in sight and might take place within our lifetime.