
New Blue Gene Project to Model the Brain


By Salvatore Salamone

July 20, 2005 | IBM and the Swiss research institute École Polytechnique Fédérale de Lausanne (EPFL) have announced a joint research initiative, dubbed the Blue Brain Project, to create a 3-D model of the brain.

Scientists from both organizations will use more than 10 years of wet lab experimental data collected at EPFL to develop a computer model of the brain’s workings at the molecular level. Specifically, the researchers will develop a 3-D model of the neocortex that takes into account the high-speed electrochemical interactions of the brain’s interior.

The model will then be used to run simulations to better understand how the brain functions.

“Modeling the brain at the cellular level is a massive undertaking because of the hundreds of thousands of parameters that need to be taken into account,” says Henry Markram, founder of the EPFL Brain and Mind Institute and the EPFL professor heading up the project. “With our [IBM’s and EPFL’s] combined resources and expertise, we are embarking on one of the most ambitious research initiatives ever undertaken in the field of neuroscience.”
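To give a rough sense of what cellular-level modeling involves, the sketch below simulates a single leaky integrate-and-fire neuron in Python. It is a deliberately tiny stand-in for the detailed, many-parameter cell models the project describes, and every constant in it (time step, membrane time constant, threshold, injected current) is an illustrative assumption rather than a value from the article.

    # Minimal leaky integrate-and-fire neuron: a toy stand-in for the far more
    # detailed, multi-parameter cell models described in the article.
    # All constants below are illustrative assumptions.

    dt = 0.1e-3            # time step: 0.1 ms
    tau_m = 20e-3          # membrane time constant: 20 ms
    v_rest = -70e-3        # resting potential: -70 mV
    v_thresh = -50e-3      # spike threshold: -50 mV
    v_reset = -65e-3       # reset potential after a spike
    r_m = 10e6             # membrane resistance: 10 megaohms
    i_inj = 2.5e-9         # injected current: 2.5 nA

    v = v_rest
    spike_times = []
    for step in range(int(0.5 / dt)):          # simulate 500 ms
        dv = (-(v - v_rest) + r_m * i_inj) * dt / tau_m
        v += dv
        if v >= v_thresh:                      # threshold crossing -> spike, then reset
            spike_times.append(step * dt)
            v = v_reset

    print(f"{len(spike_times)} spikes in 500 ms of simulated time")

A full-scale model multiplies this by tens of thousands of cells, each with far richer membrane dynamics and connectivity, which is where the parameter counts Markram describes come from.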

To perform the simulations, the project will use a four-rack Blue Gene/L system that will deliver a peak processing power of about 22.8 teraFLOPS (22.8 trillion floating-point operations per second). To put this processing power into perspective, a year ago such a system would have been roughly the world’s second most powerful computer, according to the June 2004 list of the Top 500 supercomputers.

“This modeling is not possible without the processing power of Blue Gene/L,” says Tilak Agerwala, vice president of systems at IBM Research. He notes that the work with EPFL is a collaborative effort and that researchers from both organizations will develop the model and interpret the results.
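The 22.8-teraFLOPS figure lines up with the published per-rack peak of Blue Gene/L. A back-of-the-envelope check, assuming 1,024 dual-core PowerPC 440 compute nodes per rack at 700 MHz with four floating-point operations per core per cycle (these hardware figures are assumptions drawn from public Blue Gene/L specifications, not from the article):

    # Back-of-the-envelope peak-FLOPS check for a four-rack Blue Gene/L system.
    # Assumed hardware figures (public Blue Gene/L specs, not from the article):
    racks = 4
    nodes_per_rack = 1024          # compute nodes per rack
    cores_per_node = 2             # dual-core PowerPC 440 per node
    clock_hz = 700e6               # 700 MHz clock
    flops_per_core_cycle = 4       # double FPU: two fused multiply-adds per cycle

    peak_flops = racks * nodes_per_rack * cores_per_node * clock_hz * flops_per_core_cycle
    print(f"Peak: {peak_flops / 1e12:.1f} teraFLOPS")   # ~22.9, close to the quoted 22.8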

The project will start with the collection of data Markram has gathered over the last decade. This information will be used to develop the computer model of the neocortex.

“The model will simulate the observable results,” says Charles Peck, an IBM researcher whose area of expertise is biometaphorical computing. “Once the model is validated, it can then be used to [accelerate] the research.” For example, the model could be used to help researchers select what type of experiments to do in the lab.

One benefit of the model is that it will allow researchers to “see” things — interactions, for example — that cannot be observed in the lab. For instance, researchers will be able to “play back” simulations and note interactions that lead to changes in the brain’s workings.

The group expects the simulation and modeling work will significantly accelerate research efforts. “With an accurate computer-based model of the brain, much of the pre-testing and planning normally required for a major experiment could be done in silico rather than in the laboratory,” says Markram. “With certain simulations we anticipate that a full day’s worth of wet lab research could be done in a matter of seconds on Blue Gene.”


1 Comment


November 30, 2010

Dear Sir,
I am Dr. Swallow, age 75, a scientist presently doing research on designing and building neocortex simulators intended to exhibit full human or even superhuman intellectual performance. I have degrees in engineering physics, electrical engineering, and biophysics. I have looked over all robot research efforts and find none of them attempting to reverse engineer the human brain or to develop full human brain performance in their robots. Generally, the approach of robotic research projects is to program a subset of human behaviors into their robots, therefore falling far short of the robots' potential performance.

My research over 40 years has sought to reverse engineer the human brain. The brain is actually quite simple in that the cortex structure is the same over the cortex gray matter. There are only two structures that matter. One is the area-to-area connection scheme, which I believe is easily modeled. The other is the detail of the neuron structure of the gray matter of the cortex. To model the cortex gray matter as simply layers of neurons would produce a worthless net of neurons. Neurons must do normalized correlations to be effective. I have discovered how to produce a neuron net that does normalized correlations. It simply requires that the neurons be organized into groups of mutually inhibiting neurons. There actually is some measured data that the neurons of the cortex are mutually inhibiting, but those experiments by neurologists failed to conclude that fact. The point is, neurons themselves cannot do normalized correlations. Only by connecting them in mutually inhibiting groups do they gain the ability to do normalized correlations. It is well known in electrical engineering that unnormalized correlations are generally worthless.

With mutually inhibiting layers of neurons, the cortex should be able to remain flexible and functional over all time. It should be able to
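A minimal Python sketch of one way to read the commenter's "normalized correlation via mutual inhibition" idea: the raw responses of a group of units are divided by the group's summed activity, a divisive-normalization reading of mutual inhibition. The group size, weights, and input below are illustrative assumptions, not anything specified in the comment.

    import numpy as np

    # A toy "group of mutually inhibiting neurons": each unit's raw response is a
    # dot product (an unnormalized correlation) between its weight vector and the
    # input. Mutual inhibition is modeled here as divisive normalization across
    # the group, so responses encode relative rather than absolute match strength.

    rng = np.random.default_rng(0)

    n_neurons, n_inputs = 5, 16            # illustrative sizes
    weights = rng.normal(size=(n_neurons, n_inputs))
    stimulus = rng.normal(size=n_inputs)

    raw = weights @ stimulus               # unnormalized correlations
    raw = np.maximum(raw, 0.0)             # rectify: firing rates are non-negative

    normalized = raw / (raw.sum() + 1e-9)  # divisive "mutual inhibition" within the group

    print("raw responses:       ", np.round(raw, 3))
    print("normalized responses:", np.round(normalized, 3))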


