Hacker News
IBM announces advances toward a computer that works like a human brain (siliconvalley.com)
36 points by fogus on Nov 18, 2009 | 23 comments


http://www.modha.org/ has much more detail:

"The model reproduces a number of physiological and anatomical features of the mammalian brain. The key functional elements of the brain, neurons, and the connections between them, called synapses, are simulated using biologically derived models. The neuron models include such key functional features as input integration, spike generation and firing rate adaptation, while the simulated synapses reproduce time and voltage dependent dynamics of four major synaptic channel types found in cortex. Furthermore, the synapses are plastic, meaning that the strength of connections between neurons can change according to certain rules, which many neuroscientists believe is crucial to learning and memory formation. ... We were able to deliver a stimulus to the model then watch as it propagated within and between different populations of neurons. We found that this propagation showed a spatiotemporal pattern remarkably similar to what has been observed in experiments with real brains. In other simulations, we also observed oscillations between active and quiet periods, as is often observed in the brain during sleep or quiet waking."
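The quoted description names "input integration" and "spike generation" as key neuron features but gives no equations. As a rough illustration only, here is a leaky integrate-and-fire neuron in Python, a far simpler model than the biologically derived ones described above; all parameters are made up for the sketch and are not taken from the IBM work:

```python
# Minimal leaky integrate-and-fire neuron: a toy stand-in for the
# "input integration" and "spike generation" features described above.
# Parameters (mV, ms) are illustrative, not from the IBM model.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a neuron driven by a current trace."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Input integration: dv/dt = (v_rest - v + i_in) / tau
        v += dt * ((v_rest - v) + i_in) / tau
        if v >= v_thresh:            # spike generation at threshold
            spikes.append(step * dt)
            v = v_reset              # reset after the spike
    return spikes

# Constant drive strong enough to cross threshold repeatedly
spike_times = simulate_lif([20.0] * 200)
print(len(spike_times))  # a handful of regularly spaced spikes
```

Firing-rate adaptation and the plastic synapses mentioned in the quote would be additional state on top of this, which is exactly the part the toy model leaves out.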

I'm pretty skeptical about this line of work. The thing that's interesting about neural circuitry is not so much large scale population activity patterns but transformation of information (e.g. construction of a receptive field, place cells in hippocampus). As far as I can tell this work captures some of the large scale oscillatory dynamics but says nothing about the fine-grained information processing that is actually where the rubber meets the road.

----

Edit: that said, the tools for large scale computation being developed here could be an interesting foundation for further work -- given the right stimuli and learning rules, can one recapitulate formation of receptive fields? If yes, what would the relationship be between the simulation's implementation of receptive field structure, and the implementations found in biology?

But the media hype is pretty off scale for what is, essentially, preliminary work.


I heartily agree with your skepticism.

In my opinion, rather than attempting to simulate progressively larger brains, we should instead focus on really nailing the details of smaller ones, perhaps starting with insects.

As an example, Portia jumping spiders display very complex hunting behaviors, including trial-and-error learning in novel circumstances. Even with tiny eyes, they have the visual acuity of a cat. However, unlike a cat, they are only able to see a small portion of the visual field at a time, and must therefore scan the environment, keeping the rest of what has been scanned in memory. Portia spiders are known to spend up to an hour "analyzing" their environment before deciding on a course of action! Their sharp eyesight allows them to recognize the species of the spider they are currently stalking. One common tactic is to pluck deceptively at the web of their victim in order to manipulate it into reacting and moving into a more favorable position before the attack. Different species of spiders will respond in different ways, and so the Portia learn to intelligently apply different patterns of plucking to fit the circumstances. When encountering an unknown type of spider, Portia will try various experimental plucking motions, the successful results of which will be remembered and reused correctly in future attacks.

It somehow manages to produce this amazingly rich and adaptive behavior with a mere 500k neurons, about 2,000 times fewer than the number used by Blue Gene in the experiment.


I'm always suspicious of anything that people claim "works like the human X". In many cases (the brain especially) we have only the most rudimentary understanding of how things actually work. Our ability to create something that works the same way is therefore only as good as that rudimentary understanding. That said, it's clear they are trying to use this computer to understand more about brains. But whenever I read an article like this, I can't help but think of all those grainy movies of early failed attempts at manned flight by people trying to imitate birds.


I think you have nailed it. I don't think we actually have a clue as to how the brain works. A while ago we thought it was like a telephone exchange, then we thought it was like a computer, and now we think it is like the cloud. The true advances in interesting computer results come not from modeling the brain, but from low-grade mimicking of the evolutionary process.


A while ago we thought it was like a telephone exchange, then we thought it was like a computer, and now we think it is like the cloud.

There's a big difference between a metaphor and a theory. No one actually thinks the brain works like a computer. It's just a useful metaphor for working memory and other psychological phenomena. Quite far removed from the neuroscience underlying it.

The true advances in interesting computer results come not from modeling the brain, but from low-grade mimicking of the evolutionary process.

What does this even mean?


Arguably, some late-sixties AI really did think an imperative program could simulate human intelligence.

It seems like there has been enough research that present day researchers don't think the brain is very much like either a single-CPU computer or "the cloud". This indicates some progress.

If anything, I'd say contemporary research errs towards mostly doing simulation rather than attempting to understand what "general intelligence" is. This has different dangers: if we really do wind up able to build intelligent machines but are not able to understand them, dangers that were once confined to sci-fi will become more real.


Biologically Inspired Algorithms, in particular Genetic Algorithms and Genetic Programming.
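For readers unfamiliar with the term: a genetic algorithm evolves a population of candidate solutions by selection, crossover, and mutation. A minimal sketch in Python (the toy "one-max" fitness, population size, and rates are all illustrative choices, not part of any particular research system):

```python
import random

random.seed(0)  # deterministic toy run

def fitness(genome):
    return sum(genome)  # "one-max": count of 1-bits

def evolve(genome_len=32, pop_size=50, generations=100, mut_rate=0.02):
    # Random initial population of bit-strings
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            # Per-bit mutation: flip with probability mut_rate
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the maximum of 32
```

Genetic programming applies the same loop to evolving program trees rather than fixed-length bit-strings.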


I would argue that brain simulations are the ultimate "biologically-inspired algorithm".


Many believe in 'emergent' phenomena: put enough components together, put them in the right environment, and stress them properly, and they will react in unexpected and coherent ways that, while describable and maybe useful from an external perspective, can't be fully analyzed because the interrelationships are too complex.

People were building bridges before they knew why the atoms in a beam of wood stuck together -- it probably wasn't even a matter they contemplated. They built complex bridges before knowing fully how to analyze tensions and torques in such a structure. Likewise, people will probably build useful biological wetware components and assemblies before knowing fully how they work.


I find this a fascinating area of research; however, a prediction of my dad's always comes to mind: "A computer as intelligent as a human will take 18 years to 'program.'"


I know that's just supposed to be a piece of folksy wisdom, but it doesn't represent a very good understanding of how different AI is from our normal expectations.

It's not Data on Star Trek. There's no reason for it to be walking around talking. You plug it directly into whatever you want it to understand: weather readings, big data sets, etc. You can 'program' it as fast as its processors allow it to consume. The analogy is how humans understand language or music, although it's still far more direct.

Humans don't represent an upper bound on intelligence. The question isn't a computer as smart as a human; it's one several orders of magnitude faster or slower than one.


That may be true, but presumably it could be copied and run in separate instances instantly, just like modern software. That limitation might disappear pretty quickly :)


Not if it's a computer doing the programming.


If it is programming, is it still a computer, or is it a programmer?

A computer with a human-like brain seems to be a bit of an oxymoron in that respect.


quote: "Modha imagines a cognitive computer that could analyze a flood of constantly updated data from trading floors, banking institutions and even real estate markets around the world — sorting through the noise to identify key trends and their consequences."

... and at the end we'll have a pseudo-human opinion that will also totally fail to see the housing bubble. It will also have opinions about the existence of God and Ginger vs. Mary Ann, and cry because its haircut turned out poorly.


That quote along with

"A cognitive computer might also help soldiers analyze and react to chaotic events on a battlefield."

seems to be there for the benefit of the people controlling the purse strings of the grant money: DARPA (or IARPA now) and potential backers from the financial world (although, do finance folks actually fund this kind of research?).

It may actually work at predicting financial trends, for a while. Then some trading floor will utilize that information to trade, and then their competitors will, then these machines will totally fail to see some other consequence of the complex trades they are making with each other.


There's a lower limit on how much processing power it will take to run a human-level intelligence. The highest that lower limit can be is at the level where we can simulate each neuron in a human brain; at that point, we can build an AI merely by simulation. It seems likely, though, that by the time we can simulate an entire human brain, we'll long since have passed the actual lower limit for human-level intelligence, if we knew exactly how to build it. Assuming we survive, it'll be interesting to find out, once we understand enough to figure it out, when supercomputers first passed that lower limit.


Right now, it's possible to simulate one biologically-accurate neuron at a compartment-level (where you model a neuron by dividing it into many connected compartments), on one CPU core in real-time.

Here's the kicker: it's also been shown that these simulations can scale linearly with the number of processors (given sufficient interconnect bandwidth). So with N cores, you can simulate kN neurons for some constant k (and I think that k=1).

This implies that the limit is the number of CPU cores you throw at the problem. The human brain has around 100 billion neurons so, in theory, if you had a 100-billion core supercomputer, you could simulate the whole thing. That's a lot, but my point is that scaling a simulation is an engineering problem.
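The arithmetic above is trivial but worth making explicit. A sketch, where k = 1 neuron per core in real time is the comment's assumption, not a measured figure:

```python
# Back-of-envelope version of the scaling argument above.

HUMAN_BRAIN_NEURONS = 100 * 10**9  # ~10^11 neurons, the figure cited
NEURONS_PER_CORE_K = 1             # k: assumed from the comment, not measured

# Linear scaling claim: N cores simulate k*N neurons in real time,
# so a whole-brain, real-time run needs neurons / k cores.
cores_needed = HUMAN_BRAIN_NEURONS // NEURONS_PER_CORE_K
print(cores_needed)  # 100,000,000,000 cores
```

Any improvement in k (more neurons per core, e.g. with simpler compartment models) divides the core count directly, which is why the constant matters as much as the linearity.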

This doesn't say anything about the accuracy of the model; that depends on accurate characterisations of each part of the neuron, and of how they're arranged and connected. But the key point here is that these are low-level characteristics that can be individually examined, making whole-brain simulations a tractable problem.

Source: Djurfeldt et al., "Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer," IBM Journal of Research and Development (2008).


The highest that lower limit can be is at the level where we can simulate each neuron in a human brain; at that point, we can build an AI merely by simulation.

This assumes, of course, that we'll be able to observe every possible action of a neuron, and that these actions are all finitely computable.


Man is working so hard to learn and recreate the human body. However, many believe that the human body was created by chance and chaos.


I believe that hills were created by chaos and chance. Doesn't mean we're incapable of constructing ramps.


I think it's great that we're learning and exploring the human body, and trying to mimic it in other areas. I'm really attempting to challenge the beliefs.

*edit - wording


Could someone tell me why my comments were downvoted? Was it because the downvoter disagreed? Why not polite discussion rather than downvoting?




