A huge new data set pushes the limits of neuroscience

So neuroscientists use an approach called “dimensionality reduction” to make such a visualization possible: They take data from thousands of neurons and, by applying clever linear algebra techniques, describe their activity using just a few variables. This is exactly what psychologists did in the 1990s to define their five main domains of human personality: openness, agreeableness, conscientiousness, extraversion, and neuroticism. They found that just by knowing how an individual scored on those five traits, they could effectively predict how that person would answer hundreds of questions on a personality test.

But the variables extracted from the neural data cannot be expressed in a single word like “openness”. They are more like motifs, patterns of activity that span entire neuronal populations. Some of these motifs can define the axes of a plot, with each point representing a different combination of those motifs and thus a unique activity profile.
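Principal component analysis (PCA) is one common way to perform this kind of dimensionality reduction, though the article does not specify which technique the researchers use. The minimal sketch below, with simulated recordings standing in for real data and a hypothetical count of five motifs, shows how activity from thousands of neurons can be compressed into a handful of variables:

```python
# Minimal sketch: reducing simulated neural activity to a few variables with PCA.
# The data are synthetic stand-ins, not recordings from the Allen Institute.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_neurons, n_timepoints = 2000, 500   # thousands of neurons, many time bins
n_motifs = 5                          # hypothetical number of underlying motifs

# Each simulated neuron mixes a few shared, slowly varying "motifs" with private noise.
motifs = np.cumsum(rng.standard_normal((n_timepoints, n_motifs)), axis=0)
weights = rng.standard_normal((n_motifs, n_neurons))
activity = motifs @ weights + 0.5 * rng.standard_normal((n_timepoints, n_neurons))

# Describe the population with just a few variables instead of 2,000 neurons.
pca = PCA(n_components=n_motifs)
low_dim = pca.fit_transform(activity)   # shape: (n_timepoints, n_motifs)

print("variance explained by each motif-like component:",
      pca.explained_variance_ratio_.round(3))
```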

There are drawbacks to reducing data from thousands of neurons to just a few variables. Much as a 2D image of a 3D cityscape hides some buildings entirely, compressing a complex set of neural data into just a few dimensions strips out a lot of detail. But working in a few dimensions is much more manageable than examining thousands of individual neurons at once. Scientists can plot evolving patterns of activity on axes defined by the motifs to observe how the behavior of neurons changes over time. This approach has proven especially fruitful in the motor cortex, a region where the confusing and unpredictable responses of individual neurons had long baffled researchers. Viewed collectively, however, the neurons follow regular, often circular, paths. The characteristics of these trajectories correlate with particular aspects of the movement; a trajectory’s location, for example, relates to the speed of the movement.
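To illustrate the trajectory view, the sketch below, again using synthetic data, projects population activity onto its two dominant components and plots how the state evolves over time; the roughly circular path is built into the simulation rather than derived from real motor-cortex recordings:

```python
# Hedged illustration: population activity viewed as a low-dimensional trajectory.
# The rotation is baked into the synthetic data; real recordings would differ.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

n_neurons, n_timepoints = 1000, 300
t = np.linspace(0, 2 * np.pi, n_timepoints)

# Two latent motifs trace a rotation, mixed into many noisy neurons.
latents = np.column_stack([np.cos(t), np.sin(t)])
mixing = rng.standard_normal((2, n_neurons))
activity = latents @ mixing + 0.3 * rng.standard_normal((n_timepoints, n_neurons))

# The plot's axes are defined by the dominant motifs found by PCA.
trajectory = PCA(n_components=2).fit_transform(activity)

plt.plot(trajectory[:, 0], trajectory[:, 1], lw=1)
plt.xlabel("motif 1")
plt.ylabel("motif 2")
plt.title("Population activity traced as a trajectory over time")
plt.show()
```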

Olsen says he hopes scientists will use dimensionality reduction to extract interpretable patterns from complex data. “We can’t go neuron by neuron,” he says. “We need statistical tools, machine learning tools, that can help us find structure in big data.”

But this line of research is still in its infancy, and scientists struggle to agree on what the patterns and trajectories mean. “People fight all the time about whether these things are factual,” says John Krakauer, a professor of neurology and neuroscience at Johns Hopkins University. “Are they real? Can they be interpreted as easily [as single-neuron responses]? They don’t feel as grounded and concrete.”

Bringing these trajectories down to earth will require the development of new analytical tools, Churchland says, a task that will surely be made easier by the availability of large-scale data sets like those from the Allen Institute. And the institute, with its deep pockets and large research staff, is uniquely equipped to produce more data to test those tools. The institute, Olsen says, functions like an astronomical observatory: no laboratory could pay for its technologies on its own, but the entire scientific community benefits from and contributes to its experimental capabilities.

Currently, he says, the Allen Institute is working on a system through which scientists from across the research community can suggest what kinds of stimuli the animals should be shown and what kinds of tasks they should perform while thousands of their neurons are recorded. As recording capabilities continue to grow, researchers are working toward richer, more realistic experimental paradigms to observe how neurons respond to the kinds of challenging real-world tasks that call on their collective capabilities. “If we really want to understand the brain, we can’t keep showing the cortex oriented bars,” says Fusi. “We really have to move on.”
