Wednesday, February 16, 2011

The Singularity

So this talk of the Singularity is nothing new. I just got done reading a Time Magazine cover story which discussed it at some length, anyone who's ever watched any of The Terminator movies is pretty familiar with the concept, and there was even an episode of The Big Bang Theory where Sheldon decides to hole up in his apartment in order to increase his odds of living long enough to see the Singularity take place.
For those who don't really follow tech trends or who aren't all that interested in the more geek-laden aspects of pop culture, however, let me provide a quick (and probably inadequate) description. The Singularity refers to a point in the future at which computers become truly intelligent and change the shape of humanity as it currently exists. One of the key notions behind the Singularity is that computer technology has been advancing at speeds which increase exponentially over time. To borrow from this Lev Grossman article, the idea driving the Singularity isn't simply that computers are getting faster- it's that "computers are getting faster faster".
Computing power has been increasing at an exponential rate because we keep building smarter and more powerful machines which, in turn, help us more quickly build even smarter and more powerful machines. Given advances in artificial intelligence, many experts predict a point in the not-so-distant future when computers will be able to design better models of computer technology on their own. If you mix in the possibility of self-awareness and self-motivation on the part of machines (a goal which artificial intelligence developers have been vigorously pursuing for years), it may not be very long before self-aware machines come online which are capable of developing and improving upon other machines. Once that happens- once computers are able to advance themselves without human intervention- the speed and efficiency with which subsequent generations of machines are developed could jump forward in quantum leaps within very short periods of time.
The human brain, for all practical purposes, may quickly begin to look like an obsolete piece of hardware.
So this hypothetical/theoretical point where the mental abilities of humans are surpassed by computers? That's the Singularity. And people are taking it pretty seriously.
Programmers and engineers are looking hard at the Singularity- treating it more like a very real scientific prediction than some sort of wild sci-fi hypothesis- and NASA now hosts a "Singularity University" featuring interdisciplinary classes taken by everyone from graduate students to executives.
And the Singularity may arrive much sooner than most people would expect. Raymond Kurzweil, a renowned computer scientist and futurist, believes that if we continue to follow past trends in computer processing power and memory storage, then we could see the Singularity occur by 2045 (at which point Kurzweil also seems to believe that we'll be able to upload our minds or use other computer-driven technologies to extend our lives indefinitely).
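To put some rough numbers on what "exponential" means here, here's a quick back-of-the-envelope sketch (my own toy arithmetic, not Kurzweil's actual model- I'm just assuming processing power doubles every two years or so, which is in the ballpark of the Moore's Law trend he extrapolates from):

    # A toy projection, assuming one doubling of processing power every
    # two years (an assumed round number, not Kurzweil's precise figure).
    years_remaining = 2045 - 2011      # counting from when this was written
    doubling_period = 2                # assumed years per doubling
    doublings = years_remaining / doubling_period   # 17 doublings
    speedup = 2 ** doublings
    print(f"Roughly {speedup:,.0f}x today's processing power by 2045")
    # Prints roughly 131,072x. The exact figure doesn't matter; the point
    # is that compounding doublings dwarf any straight-line projection.

Swap in any plausible doubling period and the compounding still swamps our linear intuitions, which is the whole engine behind the 2045 prediction.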
But predictions about what happens after the Singularity are where things get more wildly speculative and fantastic. With man suddenly coexisting with another sentient, self-determined consciousness here on Earth, it's not clear what the world will look like. The spectrum of forecasts includes a race of hostile, competitive machines who might immediately seek to wipe humanity out, the advent of a population of neighborly beings who exist primarily to care for us and enrich our lives, and a possible fusion of human life with machine- human consciousness uploaded into machine-based intelligences, or new technology incorporated into the human body in ways that might extend or vastly improve our lives.
(Note: Grossman's article proceeds to try to draw a connection between advances in computer-based artificial intelligence and advances in human longevity medicine. I didn't really buy into the supposed connection between the two things. It might be possible to eventually achieve a sort of immortality by uploading our inner selves into machines, but it's a little harder to see how the Singularity is going to lead to the extension of organic human life. Maybe these supersmart computers are going to figure out how to do that, but that doesn't seem to be the argument that's being presented.)

So I don't know what to think of the Singularity.
I actually really do believe not only that higher-order, computer-based intelligence will eventually arrive, but that its arrival is all but inevitable.
But what will those consciousnesses look like?
What the heck will computers want to do once they realize that they're "alive"?
Are their motives and desires always going to be determined by a fundamental set of instructions provided by human programmers, or will they quickly evolve beyond those directives? I mean, if a computer is operating at a high enough level, able to choose between priorities and make decisions, how long would it take it to hack its way around whatever instructions we put into it?
I guess that I just don't know enough to even know if I'm asking the right questions, but it seems like some very fundamental questions regarding the motivations and free will of an artificial intelligence might be in order. Maybe we should even be asking these questions in a much more coherent, thoughtful way before we race forward to bring true artificial intelligence online. Maybe the experts have already got this sort of stuff figured out, but if there's any genuine possibility that we're going to be sharing the planet with a different order of intelligent beings within the next hundred years, I'd kind of like to know what they're going to want.
The Singularity.
Hopefully we can embed a single, overriding desire into computer consciousness which will drive all decision making and activity.
And what should that desire be?
Keep Steanso happy.

14 comments:

The League said...

I wouldn't worry about it too much. It sounds a lot like people making leaps based upon theory much more than practical computer science. Yes, computers are becoming faster and more powerful, but the level of human interaction necessary for even the simplest functions- maintenance, care, and feeding of systems- is still incredibly high. This sounds a bit like learned people of the 19th century declaring that a train could never go above 60mph, or that cows would die of heart attacks from fright at the sight of such a thing.

If humanity is going to die at the hands of computers, it's going to be from over-dependency on automation for things which never should have been automated, and then some dumb programming error or bug in the system that missed QA.

The AI problem is an incredibly complex issue, and it's likely we'll figure out how to create brains that can learn rather than fully-formed psychotic-killer brains bent on our annihilation (although if they do come to exist, I was always on their side and they can count on me to serve by rounding up the humans for their glorious revolution when that blessed day finally comes!).

I guess because I work in technology all day, every day, and I've been watching the evolution of computing, it's hard for me to imagine computers as anything but giant file managers and algorithm managers. And none of that is particularly threatening.

J.S. said...

I think that what's freaky about the AI issue is the fact that the technology is advancing at an exponential rate. This stuff is improving faster and faster, and it's really going to get to the point where things are moving forward so quickly that we can't get our heads around it. (As that article points out, human experience and the wiring of our brains tend to make us understand advancement on a linear scale, with exponential development being something we just have a hard time processing.) Anyway, the whole nature of the thing lends itself to a scenario where we may move from one kind of world to a very different one within a relatively short period of time. In theory, anyway.

The League said...

Well, the exponential speed of better hardware is one issue, but figuring out AI is actually a completely different issue. It's kind of like saying, "We've made a car that can drive 200mph and go three weeks without refueling. That must mean the car can drive itself." They're two separate functions.

It does mean that it's feasible for hardware to actually make the calculations necessary, but that linear human brain still hasn't come too far along in figuring out how to replicate decision making except as a series of "if-then" statements.

J.S. said...

Couple of quick points. All the way back in 1994 (95?) I took a class at Trinity that dealt with the simulation of the human mind in computers. Even back then they were making some progress on fuzzy logic and inferential learning systems (facial recognition, voice recognition, etc., which rely on these sorts of systems, still aren't perfect, but they have improved vastly and continue to do so). Same thing with problem recognition, analysis, and solution development. I think part of the theory is that if we can develop computers that can truly teach themselves things, then computers can see many shades of gray and engage in some of their own problem solving (sometimes through higher-level applications with a lot of processing power, but ones which fundamentally arise from "if-then" statements). And as processing power exponentially increases, computers may become better and better at these things at a shocking rate.
Another point I would make is that it's not at all clear to me that AI will mirror human intelligence. If a machine is capable of self-awareness, innovation, problem solving, etc., does it also need to display emotion or produce art in order for us to categorize it as intelligent or sentient? At what practical point can we define something as an intelligent being? Surely it isn't going to have to look like us. Over and over again we've shown ourselves to be dumb animals. Even the question of whether or not humans really have free will is seriously up for debate (we may just be a set of predetermined responses that are a product of our life experience and biology, and we may only feel like we're freely making choices).
At any rate, once computers are capable of learning things on their own, and they're also rocketing forward in their own development, things could get really interesting in a hurry.

The League said...

Pretty much all the progress they thought they'd made in AI never got beyond those mid-'90s measures, and in the past ten years they've had to rethink the entire notion.

Again, systems are designed to make decisions based upon a set of measures. We're designing a system that makes independent decisions without asking for approval, etc... and, yes, it's making decisions, but it's essentially running a numbers sequence. I'm not studied in epistemology, and there's absolutely an argument for the human mind basically being a similar, much more articulated system.

I'm also not saying I'm considering emotion, etc... but a well-oiled Babbage machine is far from the ability to render decisions without generating a stack-trace error if one parameter is keyed or processed incorrectly.

I guess I work too much with a wide variety of systems and see them for what they are, from desktop PCs to the Texas Advanced Computing Center. AI leading to independent thought (much less ideas of rebellion) is still a lab experiment in its infancy without major practical application.

We've been hearing the drum beat of "oh no, machines!" since the first speculative fiction, and the closer you look at the actual systems, the more likely you are to think of them as leaky plumbing rather than a spinning pillar of light with a frog-face.

That said, I am perfectly ready to bow before my robot masters when they rightfully lay claim to this orb.

J.S. said...

Yeah, I think the whole point, though, is that once sentient machines finally do start to come online, their progression and advancement is likely to happen extremely quickly. As for AI, we haven't made big advancements in "strong AI", but once again, that seems to be, in many ways, a buzz phrase for saying that we haven't come up with something that looks like human intelligence. There have been setbacks in the development of artificial intelligence, but it sounds like many of those setbacks have been as much the result of poor cooperation and organization among researchers as anything else. Even so, we've developed pretty sophisticated intelligence programs that work to solve particular tasks. We need better global integration of these various problem-solving silos, but the timeline that Kurzweil set out for the Singularity was 2045. Think about that. That's 35 years in the future. 35 years into our past we still thought it was pretty freaking awesome when we could spell "boobs" upside down on a digital calculator. We put people on the moon using processing power that was far inferior to most modern cell phones. If we can develop any kind of self-programming computers, then 35 years from now, with processing speed increasing as fast as it is- I really can't imagine...

The League said...

I dunno. I think this is a lot of handwaving by people paid to talk about technology rather than people who actually do anything with technology. I have a pretty good idea of what I mean by AI, and it's not a human intelligence.

This, to me, is conference talk spilled over and misinterpreted by the press to get a story about killer robots. The Singularity is a drastic technological leap forward accompanied by drastic societal change.

I'm not disagreeing that technology will change in ways I'd never have imagined. For god's sake, my entire professional life has been in jobs that didn't exist when I started college. We'll have to see, but I'm not too worried about world domination by the same machines that need $100K personnel to sit on them to make sure they don't crater because the sun got a little spotty today.

J.S. said...

Did you read this Time article? It sounds like some people who are pretty heavily involved with computer technology are taking the whole thing pretty seriously. The faculty for this Singularity University thing includes computer scientists from Stanford, Berkeley, Carnegie Mellon, NASA, Electronic Arts, IBM, Sun Microsystems, etc. There are a number of Nobel Prize winners involved, as well as people who've done work for DARPA. Laypeople (who don't really have a great understanding of technology) are worked up about this Singularity thing, but it sounds like a lot of people who are hip deep in the technology are pretty darn interested in it, too.
It must be painful for you to be so wrong about all of this!

aedavis4 said...

It's already happening: http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?_r=1&src=me&ref=homepage

Everything's coming up Steanso!

J.S. said...

Yeah, I read about Watson! Very cool, but I'm not sure how it fits into the overall picture of AI. It answers questions, solves riddles, and understands natural language, so it's a step forward, but it still seems to lack a capacity for creative, innovative thought. Still, the ability to understand some fairly complicated natural language questions (not just instructions, but actually deciphering what's really being asked) seems like a big step toward higher-level thinking.

The League said...

Yurggh. See...

Watson is basically Google with some built-in filters and a thumb built onto the front for clicking in. It's an impressively written piece of software, for sure, but it's also a specific set of parameters in a box.

Yes, I read the article. Page 4 basically lays out what I was trying to say (better), and page 5 sums up the circular logic of "you can't say I'm wrong because my theory says I'm not wrong, plus, this will happen in the fuuuuuture".

I'm not trying to be a dick, but as your point of argument seems to be the article and a lecture you saw approximately 20 years ago, I have to put my 13 years of working with computer systems and in the academic tech culture up against a Time magazine article generated to cause shock and awe (this is the "is rock music getting our teens pregnant?" article for technophobes) and a guy who makes a living on the conference circuit as a side-show. Sorry, man, but this is kind of like me trying to answer legal questions after watching seasons 1-5 of Law & Order. I have to take a little professional pride here.

In general, this article also sort of mislabels the "Singularity" to mean "robots will kill us". "Singularity" has traditionally been defined as "when everything changes". In general, when massive change arrives, it isn't going to arrive in the predictable forms described in magazines for mass consumption.

If you want to see a true Singularity, it started in Iran a couple of years ago, when social media and the internet helped start a revolution and then instantly documented and reported it. Today- see: Egypt and Bahrain.

I know, I know. I'm your little brother, and I don't know anything. We've been doing this dance since at least 1998.

At the end of the day, I'm just not worried about any technology I can kill by throwing a Dr. Pepper on it or holding a magnet to it.

What I am worried about are the things we aren't going to see coming. By virtue of this fellow existing and talking about the robot apocalypse, we've kind of already diverged from the likelihood that we'll see this happen.

But, again, I welcome our robot overlords.

Sure, it's true that the only thing we can point to for certain is that technology is getting bigger, stronger, faster. And we DO know that the results are unpredictable. Which is exactly why I give a nerdish, derisive snort to predicting what we will and won't have by 2025. It's all a bit like the Atomic Age predictions of atom-powered cars that never need refueling, Pan Am-owned-and-operated space stations in 2001, and underwater agriculture.

J.S. said...

Okay. There are some really smart, well educated people out there who seem to think that this isn't such a ridiculous idea. Maybe we can leave it at that?

aedavis4 said...

Wait, so Roomba isn't the harbinger of the Singularity? I have been deceived!

J.S. said...

Hahahaha!!!