Last modified: Monday, March 10, 2008
Data processing through a fly's eye
FOR IMMEDIATE RELEASE
March 7, 2008
BLOOMINGTON, Ind. -- A team of scientists from Indiana University, Princeton University and the Los Alamos National Laboratory recently gained new insight into how blowflies process visual information. The findings, published in a Public Library of Science journal, show that the precise, sub-millisecond timing of "spikes" from motion-sensitive visual nerve cells encodes complex, detailed information about what the fly is seeing.
"There's a long-standing debate over whether precise, millisecond-scale timing is important to encode information in the nervous system," said Robert de Ruyter van Steveninck, a biophysics professor at IU who conducted many of the experiments. "Depending on the nature of the information, in some cases it might not be. But for motion sensitive neurons in the blowfly visual system, we show that timing is obviously important, especially in the context of natural visual stimulation."
Blowflies can be nearly impossible to swat. The tiny acrobats -- nicknamed the "Ferrari of the insect world" -- zip and zoom at relatively high speeds wherever they go.
For a human, the constantly changing scenery might be unsettling, bordering on overstimulation. But for a blowfly, it's a normal part of daily life, and its nervous system quickly processes the information so that the fly can respond to what it sees within about 30 milliseconds.
According to de Ruyter van Steveninck, the blowfly is an ideal candidate for this type of research not only because of its highly developed processing of sensory information, but also because it is easy to work with and abundant in nature.
For the experiment, the scientists monitored a single motion-sensitive neuron in a blowfly's brain. The recording needle remained within microns of the neuron even though the blowfly was mounted on a rotating stepper motor, which could subtly change speed -- or not-so-subtly change direction -- every two milliseconds. The apparatus was placed in a wooded area so that natural scenes stimulated the fly visually, while the stepper motor mimicked the changes in angular velocity that a blowfly experiences during natural flight.
To analyze the neural signals, time was chopped into bins of one millisecond or even shorter, during which the neuron either fired a spike or it didn't -- it was either on or off. With one-millisecond bins, each 30-millisecond trial period theoretically allows more than one billion (2^30) on/off combinations. The authors wanted to know whether these different combinations of ons and offs were like words, each with its own meaning, or whether only the number of spikes during the 30-millisecond period mattered.
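The binning scheme described above can be sketched in a few lines of code. This is an illustrative toy, not the authors' analysis pipeline: the spike times, trial length, and bin width are made-up example values, and the real study used information-theoretic estimates over many recorded trials.

```python
# Toy sketch of the binning scheme: each trial becomes a binary "word",
# one bit per time bin (1 = the neuron fired in that bin, 0 = it didn't).
# All numbers here are illustrative examples, not data from the study.

def spikes_to_word(spike_times_ms, trial_ms=30, bin_ms=1.0):
    """Convert spike times (in ms) within one trial into a binary word."""
    n_bins = int(trial_ms / bin_ms)
    bits = [0] * n_bins
    for t in spike_times_ms:
        b = int(t / bin_ms)          # which bin this spike falls into
        if 0 <= b < n_bins:
            bits[b] = 1
    return tuple(bits)

# Two hypothetical trials with the same spike COUNT but different TIMING
# produce different words -- exactly the distinction the study tested.
word_a = spikes_to_word([2.4, 7.1, 19.8])
word_b = spikes_to_word([3.6, 7.1, 25.2])

print(sum(word_a) == sum(word_b))   # True: same number of spikes
print(word_a == word_b)             # False: different temporal words
print(2 ** 30)                      # 1073741824 possible words per trial
```

If only the spike count mattered, `word_a` and `word_b` would carry the same message; the study's finding is that the fly's visual system distinguishes such temporally different words.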
Not only did they discover that precise sub-millisecond timing of the spikes was important, they found it might matter over even shorter time scales than their instrumentation could resolve. Furthermore, they found that the blowfly encodes detailed information at this precision even when the stimulus it was responding to varied about three hundred times more slowly than the encoding itself.
It seemed that each different "word" had a subtle, but significantly different meaning in the fly's world of vision -- a remarkable analysis for a tiny brain of only a half-million nerve cells to handle in such a short time frame.
"As it moves around, the blowfly is subjected to an amazingly complicated jumble of visual information from a complicated world," said de Ruyter van Steveninck. "But because of its agility in flight, it is a reasonable guess that the way in which it analyzes and encodes motion information is nearly optimal."
It is for this reason that de Ruyter van Steveninck and his colleagues study the visual systems of blowflies. If they can understand how the blowfly's relatively small brain makes sense of multiple streams of complex data efficiently to form a single sophisticated conclusion, they may be able to formulate principles that artificial systems should use to handle complex visual input. De Ruyter van Steveninck has high hopes that this line of research could lead to faster, more accurate visual and motion identification programs using cameras and computers.
De Ruyter van Steveninck and colleague Geoffrey D. Lewen, then at the NEC Research Institute in Princeton, performed the experiments while colleagues Ilya Nemenman of the Los Alamos National Laboratory and William Bialek of Princeton University handled the theoretical side. In fact, it was a mathematical breakthrough within the past decade by Nemenman and Bialek that allowed many of the computations to be made.
The research was funded by the National Science Foundation, the Gill Foundation at Indiana University, the Department of Energy, and the Swartz Foundation.