
Ready made algorithm for futuristic computers?

Started by August 28, 2008 04:47 PM
39 comments, last by Hnefi 16 years, 5 months ago
If we were suddenly in possession of computers with CPU frequencies of 10^24 Hz with no memory bandwidth drawbacks, and enough RAM, fast enough to scale properly with it, are there any algorithms that would be best suited for it? Of all the different approaches to and implementations of AI, which would scale best with that kind of frequency and give the quickest super-AI results? I'm talking about the area of general learning and the simulation of human- or super-human-level intelligence in problem solving.
http://www.sharpnova.com
The main problem in "general intelligence" AI isn't performance; it's that we have absolutely no idea how to make it work. Fundamentally, we don't understand how human intelligence works; heck, we can't even define it. So even if we had infinite computing resources, we still wouldn't have "general intelligence" AI, because we have no idea how to even start building it.

You can't implement an algorithm that doesn't exist [smile]

-me
I'm just basically asking what current algorithm would work best with this kind of hardware.

For example: if I had infinite computing resources, I could just make a neural network of n nodes, each node having f(n) connections to P(n, f(n)) other nodes with associated weights and thresholds, and link it to text input and text output.

I could run this for random values of n from 10 billion to a trillion, and for random numbers and directions of connections, until I got one that took the text of some famous math problem as input and output the correct answer as text, and that, upon further testing, did this for other problems as well (to prove it wasn't just a completely random fluke of input/output).

I could run this algorithm for a finite amount of time and get an enormous number of neural solutions that could solve problems humans can't.

This is just one hypothetical way you could make use of infinite or near-infinite computing resources without any special algorithm and still get super-intelligent AI - roughly along the lines of the sketch below.
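To make the idea concrete, here is a toy version of that random search in C++. The single-input "problem" (square the number 3), the one-hidden-layer layout and all the constants are invented purely for illustration - on real hardware you would of course never find anything useful this way:

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// One hidden layer of n nodes with random weights and sigmoid activations.
struct Network {
    std::vector<double> wIn;   // input  -> hidden weights
    std::vector<double> wOut;  // hidden -> output weights

    double Evaluate(double x) const {
        double out = 0.0;
        for (std::size_t i = 0; i < wIn.size(); ++i) {
            double hidden = 1.0 / (1.0 + std::exp(-wIn[i] * x));  // node activation
            out += wOut[i] * hidden;
        }
        return out;
    }
};

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int>     sizeDist(2, 64);       // "random values of n"
    std::uniform_real_distribution<double> weightDist(-5.0, 5.0); // random connection strengths

    const double input = 3.0, target = 9.0;  // toy "problem": square the input

    for (long attempt = 1; attempt <= 100000000L; ++attempt) {
        Network net;
        int n = sizeDist(rng);
        for (int i = 0; i < n; ++i) {
            net.wIn.push_back(weightDist(rng));
            net.wOut.push_back(weightDist(rng));
        }
        if (std::fabs(net.Evaluate(input) - target) < 0.01) {     // a lucky network
            std::printf("found one after %ld random tries\n", attempt);
            return 0;
        }
    }
    std::printf("gave up - not enough computing power after all\n");
    return 0;
}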

What I'm asking is which currently known algorithm would be best suited for such a hypothetical super-machine.
http://www.sharpnova.com
Quote:
computers with cpu frequencies of 10^24 Hz
...running on gamma rays, it seems. ;-) Also, in order for the clock to reach all parts of this CPU before the next cycle began, the CPU would need to be smaller than an atomic nucleus!
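For the curious, the arithmetic behind that (assuming signals are limited to the speed of light): at 10^24 Hz one cycle lasts 10^-24 s, so a signal covers only c / f = (3 x 10^8 m/s) / (10^24 Hz) = 3 x 10^-16 m per cycle - less than the ~10^-15 m diameter of an atomic nucleus.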

(I know, I know; that wasn't your point.)

Still, I think this emphasizes another point: chances are that we'll get more parallelism in the future but not much more speed out of individual processors -- so I'd pose a similar but different question: what interesting massively-parallel AI algorithms are out there that aren't currently feasible, but would be with a billion cores?

(My answer: Anything NP-hard.)
That's not intelligence, that's just brute-force computing...
Although what constitutes intelligence is perhaps a question for the philosophers rather than the computer scientists.

cheers,
metal
Probably a first-order logic system, such as Cyc.

FOL systems can actually produce good results for small data sets. Their drawback is that they don't scale well; the computational requirement tends to increase exponentially with the data size. But hey, if we have some magical awesome computer, maybe that's not a problem.
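For flavour, here's an absolutely minimal forward-chaining sketch in C++. It's only propositional - no variables or unification, which is what a real FOL system like Cyc adds, and which is where the exponential blow-up really bites - and the facts and rules are invented for the example:

#include <cstdio>
#include <set>
#include <string>
#include <vector>

// A Horn rule: if every premise is a known fact, conclude the consequent.
struct Rule {
    std::vector<std::string> premises;
    std::string conclusion;
};

int main() {
    std::set<std::string> facts = { "human(socrates)" };
    std::vector<Rule> rules = {
        { { "human(socrates)" },  "mortal(socrates)"   },
        { { "mortal(socrates)" }, "buriable(socrates)" },
    };

    // Forward chaining: keep applying rules until no new facts appear.
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Rule& r : rules) {
            bool allHold = true;
            for (const std::string& p : r.premises)
                if (!facts.count(p)) { allHold = false; break; }
            if (allHold && facts.insert(r.conclusion).second)
                changed = true;   // derived something new, go around again
        }
    }
    for (const std::string& f : facts) std::printf("%s\n", f.c_str());
    return 0;
}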

I don't think there are currently any really large neural-network implementations that do anything useful. Throwing more computing power at an NN doesn't make anything easier, because you still need to solve hard problems like: how do you prevent overtraining to a sample set, and how do you prevent destructive changes that undo your previous training? It's difficult to solve these problems for *small* NNs, and they only get harder as the size increases.
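(For what it's worth, the usual band-aid for the overtraining half of that is early stopping against held-out data. A toy sketch below - the two loss curves are faked so it runs standalone, and the names are made up for illustration:)

#include <cmath>
#include <cstdio>

// Stand-ins for a real training setup: training loss keeps falling, but the
// held-out (validation) loss turns around once the net starts memorising.
double TrainingLoss(int epoch)   { return 1.0 / (1.0 + epoch); }
double ValidationLoss(int epoch) { return 1.0 / (1.0 + epoch) + 0.001 * epoch; }

int main() {
    double bestVal = 1e9;
    int sinceImprovement = 0;
    const int patience = 5;   // give up after 5 epochs with no held-out improvement

    for (int epoch = 0; epoch < 1000; ++epoch) {
        double val = ValidationLoss(epoch);
        if (val < bestVal) {
            bestVal = val;
            sinceImprovement = 0;
        } else if (++sinceImprovement >= patience) {
            std::printf("early stop at epoch %d (validation loss rising)\n", epoch);
            break;
        }
    }
    return 0;
}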
Quote:
Original post by AlphaCoder
If we were suddenly in possession of computers with cpu frequencies of 10^24 Hz with no memory bandwidth drawbacks, and enough and fast enough RAM to scale properly with it, are there any algorithms that would be best suited for it?
I'm going off topic here, but to be realistic, computers aren't getting much faster these days - Moore's law still holds, though, so they are getting more parallel. If you want a more realistic hypothetical question, I'd ask: what if you had a computer running at 5GHz with 10^24 individual CPUs connected via a really fast bus ;)

This kind of parallel computer is actually extremely well suited to simulating Neural Nets.
Think of your brain (a real Neural Net); each cell is a very, very simple kind of CPU, but there are so damn many of them that it becomes a very powerful parallel computer.
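As a rough illustration of why that maps so well onto lots of cores: if every "neuron" only reads the previous state, each update is independent and the work splits cleanly across however many threads you have. The update rule and sizes below are arbitrary toy choices:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int numNeurons = 1 << 16;
    std::vector<double> state(numNeurons, 0.5), next(numNeurons);
    std::vector<double> weight(numNeurons, 0.01);   // toy: one weight per neuron

    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            // Each thread handles an interleaved slice of the neurons.
            for (int i = static_cast<int>(c); i < numNeurons; i += static_cast<int>(cores)) {
                // Toy rule: fire on the weighted sum of the two neighbours.
                double sum = weight[i] * (state[(i + 1) % numNeurons] +
                                          state[(i + numNeurons - 1) % numNeurons]);
                next[i] = 1.0 / (1.0 + std::exp(-sum));   // sigmoid "firing rate"
            }
        });
    }
    for (auto& t : workers) t.join();
    std::printf("updated %d neurons on %u cores\n", numNeurons, cores);
    return 0;
}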

Quote:
Original post by metalmidget
That's not intelligence that's just brute force computing...
Although what constitutes intelligence is perhaps a question for the philosophers rather than the computer scientists.
I read an article recently that said that computers haven't yet beaten humans at chess - because while Garry Kasparov was playing chess, the computer was actually playing "chess" (i.e. the computer didn't know what it was doing; in its version of the game, it just searched a lot of lists and picked results with big numbers next to them - there was no real reasoning process as to why it should perform actions).
Quote:
I read an article recently that said that computers haven't yet beaten humans at chess - because while Garry Kasparov was playing chess, the computer was actually playing "chess" (i.e. the computer didn't know what it was doing; in its version of the game, it just searched a lot of lists and picked results with big numbers next to them - there was no real reasoning process as to why it should perform actions).

It just depends on how you look at it. But for the most part I agree with that statement.

Computers don't understand chess; they just run a brute-force negamax with all kinds of pruning optimizations. But there is one essential element of a chess program: the evaluation function. That's the closest thing to chess understanding in the engine.

And this proves the point further because this evaluation is probably only about as smart as a FIDE master and on its own wouldn't have an icicle's chance in hell vs. Kasparov.
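To show the search+eval split, here's a bare-bones negamax over a deliberately silly stand-in game (a pile of stones, remove 1-3 per turn, taking the last stone wins), with a crude evaluation function standing in for the big hand-tuned chess eval. No alpha-beta or any of the real pruning tricks - everything here is a made-up illustration, not engine code:

#include <algorithm>
#include <cstdio>
#include <vector>

struct State { int stones; };

// Heuristic guess from the side to move's point of view (piles that are a
// multiple of 4 are lost with best play in this toy game).
int Evaluate(const State& s) {
    return (s.stones % 4 == 0) ? -100 : +100;
}

std::vector<int> LegalMoves(const State& s) {
    std::vector<int> moves;
    for (int take = 1; take <= 3 && take <= s.stones; ++take) moves.push_back(take);
    return moves;
}

// Plain negamax: the value of a position is the negation of the best value
// the opponent can reach after our best move; leaves fall back on Evaluate().
int Negamax(const State& s, int depth) {
    if (s.stones == 0) return -1000;      // previous player took the last stone: we lost
    if (depth == 0) return Evaluate(s);   // search horizon: trust the evaluation function
    int best = -1000000;
    for (int take : LegalMoves(s)) {
        State child{ s.stones - take };
        best = std::max(best, -Negamax(child, depth - 1));
    }
    return best;
}

int main() {
    State start{ 10 };
    std::printf("negamax value of 10 stones: %d\n", Negamax(start, 6));
    return 0;
}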

On a somewhat tangential point, Monte Carlo analysis really looks to be the next great step for computer chess engine strength. And it scales with multi-core CPUs better than search+eval on its own ever could.
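The reason it parallelises so well: a Monte Carlo estimate is just a pile of independent random playouts, so every core can run its own batch. A single-threaded toy sketch on the same stand-in stones game as above (not real chess, names made up):

#include <algorithm>
#include <cstdio>
#include <random>

// Estimate the value of a move by playing random games to the end after it
// and averaging the results. Every playout is independent of every other one.
double EstimateMove(int stonesAfterMove, int playouts, std::mt19937& rng) {
    std::uniform_int_distribution<int> take(1, 3);
    int wins = 0;
    for (int p = 0; p < playouts; ++p) {
        int stones = stonesAfterMove;
        bool opponentToMove = true;                    // we just made the move being scored
        while (stones > 0) {
            stones -= std::min(stones, take(rng));     // random legal move
            if (stones == 0 && !opponentToMove) ++wins;  // we took the last stone: a win
            opponentToMove = !opponentToMove;
        }
    }
    return double(wins) / playouts;
}

int main() {
    std::mt19937 rng(1);
    // From 10 stones, compare the three possible first moves.
    for (int take = 1; take <= 3; ++take)
        std::printf("take %d: estimated win rate %.2f\n", take,
                    EstimateMove(10 - take, 10000, rng));
    return 0;
}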
http://www.sharpnova.com
Quote:
Original post by AlphaCoder
I could run this for random values of n from 10 billion to a trillion, and for random numbers and directions of connections, until I got one that took the text of some famous math problem as input and output the correct answer as text, and that, upon further testing, did this for other problems as well (to prove it wasn't just a completely random fluke of input/output).

but it was proved decades ago (Hilbert's tenth problem, settled by Matiyasevich in 1970 on foundations laid by Gödel and others in the 1930s) that there literally CANNOT be an algorithm for solving general Diophantine equations, and therefore no algorithm for solving math problems in general.

(this is in response to you if you are implying that you could create a computer to solve general math problems)
// Keep hammering random keys until the typewriter's output contains the
// complete works of Shakespeare.
class Monkey
{
public:
    Monkey( Typewriter& tw ) : tw(tw) {}

    void Simulate()
    {
        while( tw.GetOutput().find( GetCompleteWorksOfShakespeare() ) == std::string::npos )
        {
            tw.PressKey( GetRandomInRange( 0, tw.NumKeys() ) );
        }
    }

private:
    Typewriter& tw;
};

This topic is closed to new replies.
