Wednesday, December 10, 2014

Q&A - 10/12

Oren Etzioni

It’s time to intelligently discuss artificial intelligence. I am an AI researcher and I’m not scared. Here’s why [..]. [A]s an active researcher in the field for over 20 years, and now the CEO of the Allen Institute for Artificial Intelligence, why am I not afraid?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations. A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets, or keeping up to date on medical research. Like calculators, AI tools require human input and human directions.

Now, autonomous computer programs exist and some are scary — such as viruses or cyber-weapons. But they are not intelligent. And most intelligent software is highly specialized; the program that can beat humans in narrow tasks, such as playing Jeopardy, has zero autonomy. IBM’s Watson is not champing at the bit to take on Wheel of Fortune next. Moreover, AI software is not conscious. As the philosopher John Searle put it, “Watson doesn't know it won Jeopardy!” [..]

So where does this confusion between autonomy and intelligence come from? From our fears of becoming irrelevant in the world. If AI (and its cousin, automation) takes over our jobs, then what meaning (to say nothing of income) will we have as a species? Since Mary Shelley’s Frankenstein, we have been afraid of mechanical men, and according to Isaac Asimov’s Robot novels, we will probably become even more afraid as mechanical men become closer to us, a phenomenon he called the Frankenstein Complex.

At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.


Now, Etzioni is an active researcher. And by active I mean someone whose work I should have read or heard about, someone I know to be in the field. I do know about Etzioni. A month ago I read one of his papers about optimizing the purchase time for airline tickets. He uses [geek] a variation of reinforcement learning, time series analysis, and a rule learner, out of which he creates an ensemble [/geek]. I tried to learn from it, mulled it over, and so on. So the guy is in there, doing stuff. In comparison, one of the few AI researchers (?) talking about the "AI calamity", Kurzweil, did some work, but I see nothing recent, and the things he has done, compared to what is going on today, are, I am sorry to say, the equivalent of Pong. The rest seem to be from "shit can go wrong" fields such as physics.
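To give a feel for the ensemble idea: here is a minimal, purely illustrative sketch, not Etzioni's actual system — the individual "predictors" below (a trend check, a days-to-departure rule, a stand-in for a learned policy) are hypothetical toys I made up. What it does show is the general shape: several weak predictors each vote "buy" or "wait", and the majority decides.

```python
# Illustrative ensemble sketch -- not Etzioni's actual code.
# Three hypothetical predictors vote; the majority wins.
from collections import Counter

def time_series_vote(prices):
    # Hypothetical rule: if the fare is trending down, wait for a better deal.
    return "wait" if prices[-1] < prices[0] else "buy"

def rule_learner_vote(days_to_departure):
    # Hypothetical learned rule: fares tend to spike close to departure.
    return "buy" if days_to_departure <= 14 else "wait"

def rl_policy_vote(prices):
    # Hypothetical stand-in for a learned policy: buy when the current
    # fare is at or below the average seen so far.
    return "buy" if prices[-1] <= sum(prices) / len(prices) else "wait"

def ensemble_decision(prices, days_to_departure):
    votes = [time_series_vote(prices),
             rule_learner_vote(days_to_departure),
             rl_policy_vote(prices)]
    return Counter(votes).most_common(1)[0][0]

print(ensemble_decision([320, 310, 295], days_to_departure=30))  # -> wait
```

The point of the ensemble is that each predictor is wrong in different situations, so the majority vote tends to beat any single one of them.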

A lot of fearful people do not seem to understand how much work it takes to create a functioning AI / ML system. I can understand outsiders being wild-eyed about this stuff and having gross expectations of it, and ML practitioners can derive a certain enjoyment from that, maybe because they managed to impress a lot of people with what they created. But there are many details that would impede a Skynet, let alone an army of T-1000s that can unequivocally, guaranteed kick your ass. Even in cases of specialized software, i.e. Deep Blue beating Kasparov at chess, some details get no attention. Before the match, Deep Blue had access to all of Kasparov's previous games, but Kasparov had none of Deep Blue's playing patterns. So even if Deep Blue had been playing against another AI, the game was not exactly fair. In an AI calamity scenario there would be many such details (bugs and so on) that would make the much-feared widespread ass-kicking impossible.


Innovation has slowed down [..]


Again I need to argue the exact opposite: innovation has sped up so insanely that people have lost the ability to follow what is going on. The same goes for some people in and around tech, including a fella named Peter Thiel. But before I rant more, let me give another example.

The video card in your computer has brought the art of parallelism to new heights. This happened because, from the start, video card makers had to worry about processing a lot of pixels at the same time; necessity bred innovation, and the current state of the art is the end result. Now, separately from this, some time ago someone had the brilliant idea of submitting regular computation tasks to the video card as if they were graphical tasks. They would exploit the card's innate parallelism this way, because that capability is much higher than a regular CPU's. Pure crazy. The technique became known as GPU computing (as opposed to CPU computing); a lot of Deep Learning actually makes use of this new type of processing, and it is spreading. I recently saw the video card maker NVidia selling a stand-alone board called Jetson which can be plugged into a regular computer. It has 192 GPU cores on it for the cost of 192 dollars.
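The key property GPGPU exploits can be shown without any GPU at all. Here is a sketch in plain Python (this is an analogy, not actual GPU code): when the same operation is applied to every element independently, the elements can in principle all be processed at once — on a real GPU each element would run on its own core.

```python
# Analogy sketch, not actual GPU code: the property GPGPU exploits is
# that one operation applies to every element with no dependence
# between elements, so the work is embarrassingly parallel.

def brighten_sequential(pixels, amount):
    # CPU-style: process one element after another.
    out = []
    for p in pixels:
        out.append(min(p + amount, 255))
    return out

def brighten_data_parallel(pixels, amount):
    # GPU-style in spirit: one "kernel" (the lambda) applied to every
    # element independently; nothing forces these to run in sequence.
    kernel = lambda p: min(p + amount, 255)
    return list(map(kernel, pixels))

pixels = [10, 200, 250, 90]
assert brighten_sequential(pixels, 20) == brighten_data_parallel(pixels, 20)
```

Pixel operations like this are exactly why video cards grew thousands of small cores, and the GPGPU trick was realizing that matrix math, simulations, and neural network training have the same independent-elements shape.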

Now, in terms of the amount of innovation, we are talking about a jump like moving from black-and-white TV to color TV here. But did anyone take notice? Not really. Even practitioners in the field gave a collective "meh" and moved on to crazier stuff.

It is sad that science and tech are disconnecting from the public this way; it is even sadder that someone like Peter Thiel, who deals with tech a lot, doesn't see what the frack is going on. Surely we can have more innovation. Once we rid ourselves of the current defunct second-wave educational / societal shit infrastructure we'll have even more of it. But the current state of things is not so bad.

Keith Wiley

[The movie] Interstellar might depict AI slavery


I was literally shaking my head in disbelief. I mean, so if future AI is not depicted at one extreme, it has to be at another extreme..? The bots in Interstellar were useful and helpful, they were not T-1000s hooked up to Skynet, so logically it follows that they must be slaves..(!) Isn't there some sort of middle ground?