Google Blogoscoped

Forum

Larry Page On AI

Seth Finkelstein [PersonRank 10]

Tuesday, February 20, 2007

Bleh.

Every few years, a pundit predicts AI is a few years away. He's playing to the Google-Is-Mysticism cult.

" And so, your program algorithms probably aren’t that complicated, ..."

This is false. The program algorithms are VERY complicated (not beyond understanding, but very complicated). It doesn't take a lot of space to write a program whose algorithms are complicated. That's his fallacy.
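A toy illustration of this point (my own sketch, nothing to do with Google's actual code): the logistic map fits in a handful of lines, yet at r = 3.9 it is chaotic, and two almost-identical starting points end up wildly apart.

```python
def max_divergence(x0, eps=1e-7, r=3.9, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) from two nearly
    identical starting points and track how far apart they drift."""
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        worst = max(worst, abs(a - b))
    return worst

# A starting gap of 0.0000001 blows up to a gap of order 0.1 or more.
print(max_divergence(0.2))
```

Three lines of arithmetic, and the behavior is still famously hard to analyze — program length says little about algorithmic complexity.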

We don't even know all the details of how, e.g. the immune system works, and that's much simpler and more observable.

mak [PersonRank 5]

17 years ago

Is he attempting the same "simplistic" approach that they usually practice on products?
That may not be bad though!

/pd [PersonRank 10]

17 years ago

yeah, it's pretty complicated.. let's try the Turing test again... I think it was HAL that spoke :)-

Brock [PersonRank 0]

17 years ago

I think Larry is largely correct.

The brain is complicated because it has billions of neurons (and trillions of connections between them) interacting in a cascading manner that creates an emergent intelligence apparently greater than the sum of its parts. It is not complicated because individual neurons are super-complicated, or because the interactions between neurons are super-complicated. Put another way, really stupid creatures (like ants or slugs) have neurons and nervous systems. We just have a bigger one, which can do more computation.

As for this prediction on when it will happen, I dunno. He's in a better position to guess than the rest of us, but he could also be too close to the work to be objective about it. I take no position on it. Time will tell.

Veky [PersonRank 10]

17 years ago

Yes, and elephants and blue whales have even bigger ones, so what?
They should be smarter than us? :-)

Utills [PersonRank 10]

17 years ago

I think you're giving him a hard time here. Just cast your minds back to before Google came along with their search algorithms. At that time the theory was that search was exceptionally difficult, since computers could not "understand" what the user actually wanted.

The way Google approached this was not to think about actual AI or understanding of the search terms, but about how the data was effectively linked together. The PageRank algorithm isn't a huge, long, complicated piece of software.
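For illustration, here is a minimal power-iteration sketch of the PageRank idea — a textbook simplification, not Google's production code; the damping factor 0.85 is the value from the original paper:

```python
def pagerank(links, d=0.85, iters=50):
    """links maps each page to the list of pages it links to.
    A page's rank is fed by the ranks of the pages linking to it."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += d * rank[p] / len(pages)
        rank = new
    return rank

# Tiny web: 'b' and 'c' link to 'a', 'a' links back to 'b',
# so 'a' ends up ranked highest, then 'b', then 'c'.
ranks = pagerank({'a': ['b'], 'b': ['a'], 'c': ['a']})
```

The whole idea fits in a dozen lines; the length of the code was never where the difficulty of search lived.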

In the same manner, we can say that the brain just takes in lots and lots of data and is able to interpret that data correctly. Perhaps it will just take lots of computational analysis of all the data the brain takes in (don't forget that the computation the brain does is much, much more powerful than any supercomputer) to get the right result.

Ultimately the goal of AI for firms such as Google is understanding context the way humans do. We as humans are always taking in data and storing it in an intelligent manner, often storing an interpretation of data rather than data in the raw format.

Once the promise of the semantic web comes about, that is when we will be able to add value to the data currently stored in Google's huge databases, and will be able to leverage the data in a manner similar to the way humans use it.

It is not about the algorithms themselves but about the data being supplied and interpreted "correctly".

Suresh S [PersonRank 10]

17 years ago

Why can't we create a programming language that generates an algorithm by itself given the facts, tests the algorithm iteratively, and modifies it?

Alex Kapranoff [PersonRank 1]

17 years ago

Modern operating systems are big because they contain a lot of data. You know, "Microsoft Sound.wav" and the like. Executable code is a very small part of their size – for a very long time, probably since late DOS times.

Larry will do everything to convince smart people that they should continue their AI research using the largest computational platform in the world – the one which is freely available to Google employees.

Gerald Steffens [PersonRank 1]

17 years ago

Life is complicated. Our logical models are inconsistent. Does a perfect answer really exist? ;-)

Brock [PersonRank 0]

17 years ago

"Yes, and elephants and blue whales have even bigger ones, so what?
They should be smarter than us? :-)"

No. For reasons that aren't totally clear, bigger bodies need bigger brains to control them. Humans have the largest brain-to-body weight ratio of any creature, by a large margin. The next largest ratios (though nowhere close to human) are in the creatures you would expect: chimps, bottlenose dolphins, etc.

I would bet that if a computer had the processing power of just our cerebral cortex, that would be enough to do "AI stuff." A computer doesn't have to have as many processing nodes as the brain has neurons. A lot of our neurons are doing stuff that computers just don't need to do, like regulating our breathing or blood pressure.

I still think Larry is largely correct. Intelligence is an emergent quality, and emergent qualities only arise out of a vast sea of simple but highly parallel and highly iterative processes. One day a 10,000-core CPU might be able to do that on a desktop, but a large server farm (such as Google possesses) will be able to do it first.
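A classic, tiny demonstration of that kind of emergence (my example, not Brock's): Conway's Game of Life has dead-simple per-cell rules, yet a "glider" pattern travels across the grid — behavior stated nowhere in the rules themselves.

```python
from collections import Counter

def step(cells):
    """One Game of Life generation over a set of live (x, y) cells:
    a cell is alive next turn iff it has 3 live neighbors, or has
    2 live neighbors and is already alive."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The five-cell "glider" pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# Four generations later the same shape reappears, shifted one cell
# down and one cell right: the pattern "moves".
```

The per-cell rule fits in one line; the gliding is purely emergent — which is exactly the kind of simple-but-massively-iterated process Brock is describing.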

Alex Kapranoff [PersonRank 1]

17 years ago

No, Brock, see http://en.wikipedia.org/wiki/Brain_to_body_mass_ratio.

Shrews have the largest brain-to-body ratio, much larger than humans.

Hong Xiaowan [PersonRank 10]

17 years ago

For hardware, a CPU made of organic materials would be the beginning of "real AI".
The brain has many kinds of links that a silicon computer cannot copy.
Right now, AI is just many rules that human beings have given it, not anything created by the computer itself. That kind of fake AI can only make things somewhat better. Just so-so.
The fastest computer is not cleverer than a small ant. It is a different thing indeed; there is still a long way to go.

Mysterius [PersonRank 10]

17 years ago

I do believe that true AI is potentially a lot closer than many people think, and that the solution will come from uncomplicated, massive computation, instead of complicated programming.

@Seth Finkelstein: The only part of the human body that would need to be replicated is the brain, and not even all of that. Intelligence and consciousness come from the firing of a massive number of neurons. Is it not conceivable that the exponential increase in computing power will give rise to an AI in the near-to-medium future?

Of course, it's possible that a massive concentration of computing power will stay dormant for a long time until it's given the proper stimulus or setup that allows it to develop intelligence, but the possibility will be there.

This would be a sort of brute-force AI, though. Such an AI wouldn't be much different from an upgraded human, with unlimited memory, the ability to augment its computational abilities, etc. I think that developing smarter, programmed AIs will actually be much harder. However, the development of a brute-forced AI, assuming it's benevolent, might actually help accelerate the development of such programmed AIs, as the brute-force AI fine-tunes itself and tries its hand at creating successors/progeny. It'll be fascinating (and perhaps more than a little frightening) to watch.

Mike Archbold [PersonRank 0]

17 years ago

In order to understand AI fully you need to come to grips with something as simple as the finite and the infinite. Hegel held that the individual was a single finite one juxtaposed against an infinite thought. We get the conception of "one" because, for the most part, our thinking is unified. Thus the one stands in stark distinction from the infinite.

But here you are saying that AI will be just a lot of computation.... no clever whiteboard stuff?

Philipp Lenssen [PersonRank 10]

17 years ago

By the way, if we are able to replicate "only" human intelligence, then we have created another type of human – a human we can't just enslave to work for us all day, not just for moral reasons, but because that human will not accept just any order handed out. Additionally, if we create "only" another human brain, then this brain will need to be trained like any child needs to be trained; it will have to explore the environment, touch things, and take years to learn about life. And this human brain will *not* be able to answer questions about the birthdate of Einstein unless it is trained to be an expert on Einstein. (We *may* be able to speed up this process through virtual "hi-speed" training, of course, or some kind of memory storage copies.)

So what we really want is not a replica of the human brain, or at least, not only that. What we want is a system that can have a meaningful relationship with all the data that is indexed, and at the same time be able to converse normally, and at the same time not have actual emotions and motivations that go beyond this task. What we want is a *controlled* AI. If we accept Larry's theory, then we also have to ask: with a brute-force computing approach to replicating a brain, which may be successful some day, how will we be able to control it? Isaac Asimov proposed 3 simple "blackboard/whiteboard" rules, for example, but how do you program these into a brute-force brain, which you may not even fully understand thanks to its "brute forceness" or evolutionary algorithms?

Harish TM [PersonRank 0]

17 years ago

What Page is talking about is inaccurate both biologically and technologically. [http://www.searchme.co.in/2007/02/larry-and-me-not-on-same-page.html]

That in itself is not surprising; what is shocking is the fact that the majority of the media/blogosphere seems to be accepting what he says implicitly. Frankly speaking, his argument is a joke.

siggi [PersonRank 1]

17 years ago

Those who do are working in Mountain View; those who are "proving" that it's all nonsense are posting to message boards and blogs. What's more fun?

Mentifex [PersonRank 0]

17 years ago

Primitive but genuine artificial intelligence is being demonstrated at http://mentifex.virtualentity.com/jsaimind.html where humans may interact with the AI Mind and watch the process of deep thought as the AI responds to them or follows a meandering chain of thought. The underlying AI algorithms at http://mind.sourceforge.net/aisteps.html break the rudimentary artificial intelligence into three dozen interacting mind-modules.
