Tag: neuroscience

A Hidden Mathematical Rule Governs the Distribution of Neurons in the Brain

Neuron densities in cortical areas in the mammalian brain follow a consistent distribution pattern. Image: Morales-Gregorio

Human Brain Project (HBP) researchers have uncovered how neuron densities are distributed across and within cortical areas in the mammalian brain. As reported in Cerebral Cortex, they have revealed a fundamental organisational principle of cortical cytoarchitecture: the ubiquitous lognormal distribution of neuron densities.

Numbers of neurons and their spatial arrangement play a crucial role in shaping the brain’s structure and function. Yet, despite the wealth of available cytoarchitectonic data, the statistical distributions of neuron densities remain largely undescribed. This new study from the HBP at Forschungszentrum Jülich and the University of Cologne (Germany) advances our understanding of the organisation of mammalian brains.

The team accessed nine publicly available datasets covering seven species: mouse, marmoset, macaque, galago, owl monkey, baboon and human. After analysing the cortical areas of each, they found that neuron densities within these areas follow a consistent pattern – a lognormal distribution – pointing to a fundamental organisational principle underlying the densities of neurons in the mammalian brain.

A lognormal distribution is a statistical distribution characterised by a skewed bell-shaped curve. It arises, for instance, when taking the exponential of a normally distributed variable. It differs from a normal distribution in several ways. Most importantly, the curve of a normal distribution is symmetric, while the lognormal one is asymmetric with a heavy tail.

These findings are relevant for modelling the brain accurately. “Not least because the distribution of neuron densities influences the network connectivity,” says Sacha van Albada, leader of the Theoretical Neuroanatomy group at Forschungszentrum Jülich and senior author of the paper. “For instance, if the density of synapses is constant, regions with lower neuron density will receive more synapses per neuron,” she explains. Such aspects are also relevant for the design of brain-inspired technology such as neuromorphic hardware.

“Furthermore, as cortical areas are often distinguished on the basis of cytoarchitecture, knowing the distribution of neuron densities can be relevant for statistically assessing differences between areas and the locations of the borders between areas,” van Albada adds.

These results are in agreement with the observation that surprisingly many characteristics of the brain follow a lognormal distribution. “One reason why it may be very common in nature is because it emerges when taking the product of many independent variables,” says Alexander van Meegen, joint first author of the study. In other words, the lognormal distribution arises naturally as a result of multiplicative processes, similarly to how the normal distribution emerges when many independent variables are summed.
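The mechanism van Meegen describes can be demonstrated in a few lines. The sketch below is a toy illustration, not the study’s developmental model: it multiplies many independent growth factors and compares the skew of the raw products with the skew of their logarithms.

```python
import math
import random
import statistics

random.seed(0)

# A lognormal variable arises when many independent positive factors are
# multiplied: the log of the product is a sum of logs, which the central
# limit theorem drives towards a normal distribution.
def product_of_factors(n_factors: int) -> float:
    return math.prod(random.uniform(0.8, 1.25) for _ in range(n_factors))

samples = [product_of_factors(50) for _ in range(20_000)]
logs = [math.log(s) for s in samples]

def skewness(xs):
    """Standardised third moment: 0 for symmetric, > 0 for a heavy right tail."""
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return statistics.fmean([((x - mu) / sd) ** 3 for x in xs])

print(f"skew of products:     {skewness(samples):+.2f}")  # strongly right-skewed
print(f"skew of log-products: {skewness(logs):+.2f}")     # roughly symmetric
```

The products show the asymmetric, heavy-tailed shape described above, while their logarithms are approximately normal – the defining property of a lognormal distribution.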

“Using a simple model, we were able to show how the multiplicative proliferation of neurons during development may lead to the observed neuron density distributions,” explains van Meegen.

According to the study, in principle, cortex-wide organisational structures might be by-products of development or evolution that serve no computational function; but the fact that the same organisational structures can be observed for several species and across most cortical areas suggests that the lognormal distribution serves some purpose.

“We cannot be sure how the lognormal distribution of neuron densities will influence brain function, but it will likely be associated with high network heterogeneity, which may be computationally beneficial,” says Aitor Morales-Gregorio, first author of the study, citing previous works that suggest that heterogeneity in the brain’s connectivity may promote efficient information transmission. In addition, heterogeneous networks support robust learning and enhance the memory capacity of neural circuits.

Source: Human Brain Project

Brain Transmission Speeds Increase Until Middle Age

Source: CC0

It has long been believed that the speed of information transmission among regions of the brain stabilises during early adolescence. A study in Nature Neuroscience has instead found that transmission speeds continue to increase into early adulthood, which may explain the emergence of mental health problems over this period. In fact, transmission speeds increase until around age 40, reaching twice the speed seen in a 4-year-old child.

As mental health problems such as anxiety, depression and bipolar disorders can emerge in late adolescence and early adulthood, a better understanding of brain development may lead to new treatments.

“A fundamental understanding of the developmental trajectory of brain circuitry may help identify sensitive periods of development when doctors could offer therapies to their patients,” says senior author Dora Hermes, PhD, a biomedical engineer at Mayo Clinic.

Called the human connectome, the structural system of neural pathways in the brain or nervous system develops as people age. But how structural changes affect the speed of neuronal signalling has not been well described.

“Just as transit time for a truck would depend on the structure of the road, so does the transmission speed of signals among brain areas depend on the structure of neural pathways,” Dr Hermes explains. “The human connectome matures during development and aging, and can be affected by disease. All these processes may affect the speed of information flow in the brain.”

In the study, Dr Hermes and colleagues stimulated pairs of electrodes with a brief electrical pulse to measure the time it took signals to travel among brain regions in 74 research participants between the ages of 4 and 51. The intracranial measurements were done in a small population of patients who had electrodes implanted for epilepsy monitoring at University Medical Center Utrecht, Netherlands.

The response delays in connected brain regions showed that transmission speeds in the human brain increase throughout childhood and even into early adulthood. They plateau around 30 to 40 years of age.

The team’s data indicate that adult transmission speeds were about twice as fast as those typically found in children. Transmission speeds were also typically faster in 30- or 40-year-old subjects than in teenagers.

Brain transmission delays are measured in milliseconds, a unit of time equal to one-thousandth of a second. For example, in a 4-year-old patient the researchers measured a delay of 45 milliseconds for a signal to travel from the frontal to the parietal regions of the brain. In a 38-year-old patient, the same pathway was measured at 20 milliseconds. For comparison, the blink of an eye takes about 100 to 400 milliseconds.
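Since the pathway length is fixed, transmission speed scales inversely with the measured delay, so the two figures above imply the speed-up directly. A simple ratio, using the article’s numbers:

```python
# Delays for the same frontal-to-parietal pathway (figures from the article).
delay_child_ms = 45  # 4-year-old patient
delay_adult_ms = 20  # 38-year-old patient

# Same distance, shorter delay: the speed ratio is the inverse delay ratio.
speedup = delay_child_ms / delay_adult_ms
print(f"adult signals travel {speedup:.2f}x faster over the same pathway")
```

The resulting ratio of 2.25 is consistent with the roughly twofold speed-up reported in the study.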

The researchers are working to characterise electrical stimulation-driven connectivity in the human brain. One of the next steps is to better understand how transmission speeds change with neurological diseases. They are collaborating with paediatric neurosurgeons and neurologists to understand how diseases change transmission speeds compared to what would be considered within the normal range for a certain age group.

Source: Mayo Clinic

Neurons in Developing Brains are Connected by Nanoscopic Tunnels

Example of 3D imaging of segmented granule cells shown in green and orange, with nuclei in blue and purple respectively, and mitochondria in yellow. A thin connection can be seen between the two cells in blue, with subcompartments attached to the tube containing the mitochondria, shown in pink. Credit: Diego Cordero / Membrane Traffic and Pathogenesis Unit, Institut Pasteur

Over a hundred years after the discovery of the neuron by neuroanatomist Santiago Ramón y Cajal, scientists continue to deepen their knowledge of the brain and its development. Now, scientists detail novel insights into how cells in the outer layers of the brain interact immediately after birth during formation of the cerebellum, the brain region towards the back of the skull. Publishing their results in Science Advances, the scientists demonstrated a novel type of connection between neural precursor cells via nanotubes, even before synapses form.

In 2009, Chiara Zurzolo’s team from the Institut Pasteur identified a novel mechanism for direct communication between neuronal cells in culture via nanoscopic tunnels, known as tunnelling nanotubes. These are involved in the spread of various toxic proteins that accumulate in the brain during neurodegenerative diseases – but may also be tapped for the treatment of diseases or cancers.

In this new study, the researchers discovered nanoscopic tunnels that connect precursor cells in the brain, more specifically the cerebellum – an area that develops after birth and is important for making postural adjustments to maintain balance – as they mature into neurons. These tunnels, although similar in size, vary in shape from one to another: some contain branches while others don’t, some are enveloped by the cells they connect while others are exposed to their local environment. The authors believe these intercellular connections (ICs) may enable the exchange of molecules that help pre-neuronal cells physically migrate across various layers and reach their final destination as the brain develops.

Intriguingly, ICs share anatomical similarities with bridges formed when cells finish dividing. “ICs could derive from cellular division but persist during cell migration, so this study could shed light on the mechanisms allowing coordination between cell division and migration implicated in brain development. On the other hand, ICs established between cells post mitotically could allow direct exchange between cells beyond the usual synaptic connections, representing a revolution in our understanding of brain connectivity. We show that there are not only synapses allowing communication between cells in the brain, there are also nanotubes,” says Dr Zurzolo, senior author and head of the Membrane Traffic and Pathogenesis Unit (Institut Pasteur/CNRS).

To achieve these discoveries, the researchers used a three-dimensional (3D) electron microscopy method and brain cells from mouse models to study how brain regions communicate with each other. Very high-resolution neural network maps could thus be reconstructed. The 3D cerebellum volume produced and used for the study contains over 2000 cells. “If you really want to understand how cells behave in a three-dimensional environment, and map the location and distribution of these tunnels, you have to reconstruct an entire ecosystem of the brain, which requires extraordinary effort with twenty or so people involved over 4 years,” said the article’s first author Diego Cordero.

To meet the challenges of working with the wide range of cell types the brain contains, the authors used an AI tool to automatically distinguish the cortical layers in the tissue block, which was reconstructed from brain section images. They also developed an open-source program called CellWalker to characterise the morphological features of 3D segments. Being freely available, this program will enable scientists to quickly and easily analyse the complex anatomical information embedded in these types of microscope images.

The next step will be to identify the biological function of these cellular tunnels to understand their role in the development of the central nervous system and in other brain regions, and their function in communication between brain cells in neurodegenerative diseases and cancers.

Source: Institut Pasteur

How the Brain Blocks out Unwanted Memories

Photo by Brett Sayles on Pexels

In order to prevent the mind becoming flooded with unwanted memories, a brain region determines when a person is about to think of an unwanted memory and then signals other regions to suppress it. The discovery was recently published in JNeurosci.

Preventing unwanted memories from coming to mind is an adaptive ability of humans. This ability relies on inhibitory control processes in the prefrontal cortex to modulate hippocampal retrieval processes. How and when reminders to unwelcome memories come to trigger prefrontal control mechanisms remains unknown.

Crespo García et al. measured participants’ brain activity with both EEG and fMRI while they completed a memory task. The participants memorised sets of words (eg, gate and train) and were asked to either recall a cue word’s pair (see gate, think about train) or only focus on the cue word (see gate, only think about gate). During proactive memory suppression, activity increased in the anterior cingulate cortex (ACC), a brain region involved in cognitive control, within the first 500 milliseconds of the task. The ACC relayed information to the dorsolateral prefrontal cortex (DLPFC), which then inhibited activity in the hippocampus, a key region for memory recall. The activity levels in the ACC and DLPFC remained low for the rest of the trial, a sign of success — the memory was stopped early enough that no more suppression was needed. If the memory was not suppressed in time, the ACC generated a reactive alarm, increasing its activity to signal to the DLPFC to stop the intrusion.

Source: EurekAlert!

Experiments to Test Consciousness All Fall Short

Image by Fakurian Design on Unsplash

A study examining various experiments each designed to prove one of four conflicting theories of consciousness has discovered that they are all flawed: predetermined to prove the theory they are designed to test. The surprising conclusion is that the nature of the experiment largely determines its result.

In neuroscience, there are currently four leading theories trying to explain how the experience of consciousness emerges from neural activity. In this unique study, researchers re-examined hundreds of experiments that support contradictory theories.

The study, published in Nature Human Behaviour, shows that the inconsistencies in the experiments’ findings stem mainly from methodological differences: the methodological choices made by the researchers effectively predetermine their results.

Employing artificial intelligence, the researchers re-examined 412 experiments and found that scientists’ methodological choices largely determined the result of the experiment – so much so that an algorithm could predict which theory an experiment was designed to support with 80% accuracy.
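The idea that design choices alone can predict an experiment’s conclusion can be illustrated with a deliberately tiny sketch. The data below are invented (the real study analysed 412 experiments with many more features), and the majority-vote rule is a crude stand-in for the study’s actual machine-learning classifier:

```python
from collections import Counter

# Invented toy data, not the study's corpus: each experiment is a dict of
# methodological choices plus the theory it ended up supporting
# (GNW = Global Neuronal Workspace, HOT = Higher Order Thought,
#  RPT = Recurrent Processing, IIT = Integrated Information).
experiments = [
    ({"state": "content", "measure": "connectivity"}, "IIT"),
    ({"state": "content", "measure": "connectivity"}, "IIT"),
    ({"state": "content", "measure": "evoked"},       "GNW"),
    ({"state": "content", "measure": "evoked"},       "GNW"),
    ({"state": "level",   "measure": "connectivity"}, "IIT"),
    ({"state": "content", "measure": "higher-order"}, "HOT"),
    ({"state": "content", "measure": "recurrence"},   "RPT"),
    ({"state": "content", "measure": "recurrence"},   "RPT"),
]

def predict(features, training):
    """Majority vote among training experiments with an identical design."""
    votes = Counter(theory for f, theory in training if f == features)
    return votes.most_common(1)[0][0] if votes else None

# Leave-one-out evaluation: hold each experiment out and predict its
# supported theory from the design choices of the remaining ones.
hits = sum(
    predict(f, experiments[:i] + experiments[i + 1:]) == theory
    for i, (f, theory) in enumerate(experiments)
)
print(f"design-only accuracy: {hits}/{len(experiments)}")
```

Even this crude rule recovers most labels from the design alone; the study’s point is that a real classifier reached around 80% on the actual corpus without ever seeing the experiments’ results.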

“The big question is how consciousness is born out of activity in the brain, or what distinguishes between conscious processing and unconscious processing,” explained Professor Liad Mudrik, who led the study. “For example, if I see a red rose, my visual system processes the information and reports that there is a red stimulus in front of me. But what allows me – unlike a computer for example – to experience this colour? To know how it feels? In recent years, a number of neuroscientific theories have been proposed to explain how conscious experience arises from neural activity. And although the theories provide utterly different explanations, each of them was able to gather empirical evidence to justify itself, based on multiple experiments that were conducted. We re-examined all these experiments, and showed that the parameters of the experiment actually determine its results. The artificial intelligence we used knew how to predict with an 80% success rate which theory the experiment would support, based solely on the researchers’ methodological choices.”

The study of consciousness has four leading theories, with contradicting predictions about the neural underpinnings of conscious experience. The Global Neuronal Workspace Theory maintains that there is a central neural network, and when information enters it, it is broadcast throughout the brain, becoming conscious. The Higher Order Thought Theory claims that there is a higher order neural state that ‘points’ at activity in lower-level areas, marking this content as conscious. A third theory, called Recurrent Processing Theory, claims that information that is reprocessed within the sensory areas themselves, in the form of recurrent processing, becomes conscious. And finally, a fourth theory – Integrated Information Theory – defines consciousness as integrated information in the brain, claiming that the posterior regions are the physical substrates of consciousness.

“Each of these theories offers convincing experiments to support them, so the field is polarized, with no agreed-upon neuroscientific account of consciousness,” said Prof Mudrik.

In-depth analysis of all 412 experiments designed to test the four leading theories showed that they were constructed differently. For example, some experiments focused on different states of consciousness, such as a coma or a dream, and others studied changes in the content of consciousness of healthy subjects. Some experiments tested connectivity metrics, and others did not. “Researchers make a series of decisions as they build their experiment, and we demonstrated that these decisions alone – without even knowing the results of the experiments – already predict which theory these experiments will support. That is, these theories were tested in different manners, though they try to explain the same phenomenon,” Prof Mudrik said.

“Another one of our findings was that the vast majority of the experiments we analysed supported the theories, rather than challenging them. There appears to be a built-in confirmation bias in our scientific praxis, though the philosopher of science Karl Popper said that science advances by refuting theories, not by confirming them,” added Prof. Mudrik. “Moreover, when you put together all of the findings that were reported in these experiments, it seems like almost the entire brain is involved in creating the conscious experience, which is not consistent with any of the theories. In other words, it would appear that the real picture is larger and more complex than any of the existing theories suggest. It would seem that none of them is consistent with the data, when aggregated across studies, and that the truth lies somewhere in the middle.”

Source: EurekAlert!

Scientists Unravel Neurological Origins of the Placebo Effect

Researchers at Massachusetts General Hospital (MGH) have discovered a network of brain regions activated by the placebo effect overlaps with several regions targeted by brain-stimulation therapy for depression.

The findings of this study, published in Molecular Psychiatry, will help in understanding the neurobiology of placebo effects and could inform how brain stimulation trial results are interpreted. In addition, this could provide insights on how to harness placebo effects for the treatment of a variety of conditions.

The placebo effect occurs when a patient’s symptoms improve because they expect a therapy to help (due to a variety of factors), but not from the specific effects of the treatment itself. Recent research indicates that there is a neurological basis for the placebo effect, with imaging studies identifying a pattern of changes that happen in certain brain regions when a person experiences this phenomenon.

Brain-stimulation techniques have gained wider use in recent years for patients with depression that doesn’t respond adequately to medication or psychotherapy. Transcranial magnetic stimulation (TMS) delivers electromagnetic pulses to the brain, and its effect on brain activity has been established over the last three decades in animal and human research studies, with several TMS devices approved by the Food and Drug Administration for treating depression. In addition, deep brain stimulation (DBS, which requires an implanted device) has shown some promise for treating depression.

Senior author Emiliano Santarnecchi, PhD, saw studies of brain stimulation as a unique opportunity to learn more about the neurobiology of the placebo effect. Santarnecchi and his co-investigators conducted a meta-analysis and review of neuroimaging studies involving healthy subjects and patients to create a “map” of brain regions activated by the placebo effect. They also analysed studies of people treated with TMS and DBS for depression to identify brain regions targeted by the therapies. The team found that several sites in the brain that are activated by the placebo effect overlap with brain regions targeted by TMS and DBS.

Dr Santarnecchi and his colleagues believe that this overlap has critical importance in interpreting the results of research on brain stimulation for conditions such as depression. In clinical trials, a significant portion of depression patients receiving brain stimulation improve — but so do many patients receiving placebo (sham) treatment, in which no stimulation is administered, which has led to confusion over the therapy’s benefits.

A possible explanation is “that there is a significant placebo effect when you do any form of brain stimulation intervention,” said Dr Santarnecchi. TMS involves a clinical setting, with loud clicks as the pulse is delivered. “So the patient thinks, ‘Wow, they are really activating my brain’, so you get a lot of expectation,” said Dr Santarnecchi.

Elevated placebo effects associated with brain stimulation may create problems when studying the intervention, said first author Matthew Burke, MD, a cognitive neurologist. If brain stimulation and the placebo effect overlap in activating the same brain regions, then those circuits could be maximally activated by placebo effects, which could make it difficult to show any additional benefit from TMS or DBS, said Dr Burke. If so, this could explain the disparity of results in neurostimulation treatment of depression. Screening out placebo effects from brain stimulation’s direct impact on brain activity will help in designing studies where the real potential of techniques such as TMS can be more easily quantified, thus improving treatment protocols.

The findings from this study also suggest broad applications for the placebo effect, said Dr Santarnecchi. “We think this is an important starting point for understanding the placebo effect in general, and learning how to modulate and harness it, including using it as a potential therapeutic tool by intentionally activating brain regions of the placebo network to elicit positive effects on symptoms,” he said.

Dr Santarnecchi and his colleagues are currently designing trials that they hope will “disentangle” the effects of brain stimulation from placebo effects and offer insights about how they can be leveraged in clinical settings.

Source: Massachusetts General Hospital

Brain Surgeons versus Rocket Scientists: Who’s Brainier?

Source: Sammy Williams on Unsplash

A light-hearted research article published in the Christmas edition of the BMJ sought to settle once and for all who is ‘brainier’: brain surgeons or rocket scientists.

Brain surgeons and rocket scientists are often put on a pedestal as the exemplars of intellectual endeavour. But which of them is smarter and deserves the accolade more? Or at all? A group of neurosurgeons – who were, of course, totally unbiased – decided to resolve this conundrum.

Delving into the background of the phrases, the research team, led by University College London neuroscientist Inga Usher, explained in their new paper: “The phrase ‘It’s not rocket science’ is thought to have originated in America in the 1950s when German rocket scientists were brought over to support the developing space program and design of military rockets.”

“The origin of ‘It’s not brain surgery’ is less clear. It is tempting to speculate that the pioneering techniques of the polymath and neurosurgeon Harvey Cushing captured the attention of the public and promulgated the phrase.”

Their study aimed to settle the debate once and for all, and to “provide rocket scientists and brain surgeons with evidence to support their self-assuredness in the company of the other party.” The researchers tested participants across cognitive domains such as emotional discrimination and motor control. Eschewing an overall winner, they assessed the cognitive characteristics of each specialty using a validated online test, the Great British Intelligence Test (GBIT). This test had been used to measure distinct aspects of human cognition, spanning planning and reasoning, working memory, attention, and emotion processing abilities in more than 250 000 members of the British public. Rather than being an IQ test, it is intended to more finely discriminate aspects of cognitive ability. The dataset also let the researchers benchmark both specialties against the general population.

The neurosurgeons showed significantly higher scores than the aerospace engineers in semantic problem solving (possibly attributable to their familiarity with Latin and Greek scientific terminology). Aerospace engineers showed significantly higher scores in mental manipulation and attention. Domain scores for memory, spatial problem solving, problem solving speed, and memory recall speed were similar for both groups. When each group’s scores for the six domains were compared with those in the general population, only two differences were significant: the neurosurgeons’ problem solving speed was quicker and their memory recall speed was slower. No significant difference was found between aerospace engineers and the control population in any of the domains. 

The researchers observed that, “despite the stereotypes depicted by the phrases ‘It’s not rocket science’ and ‘It’s not brain surgery’, all three groups showed a wide range of cognitive abilities. In the original GBIT, 90% of Britons scored above average on at least one aspect of intelligence, illustrating the importance of studying multiple domains that make up a concept of intelligence rather than a single measure.”

The researchers came to the conclusion that, based on the findings, in situations that do not require rapid problem solving, it might be more correct to use the phrase “It’s not brain surgery”. It is possible that both neurosurgeons and aerospace engineers are unnecessarily placed on a pedestal and that “It’s a walk in the park” or another phrase unrelated to careers might be more appropriate. Other specialties might deserve to be on that pedestal, and future work should aim to determine the most deserving profession.

On a more serious note, they also considered that fewer young people are choosing surgery or engineering as a career path, and that such pursuits are commonly seen as ‘masculine’, deterring many females at an early stage. Their results, however, showed that neither field differed significantly in cognitive aspects from the general public, which should help reassure future candidates that there is no ‘requirement’ for any particular type of personality trait.

Source: The British Medical Journal

Just Ten Minutes of Running Boosts Cognitive Function

Photo by Ketut Subiyanto on Pexels

Researchers have found that a mere ten minutes of running at moderate intensity boosts blood flow to the bilateral prefrontal cortex, improving cognitive function and mood. These findings, published in Scientific Reports, may contribute to the development of a wider range of treatment recommendations to benefit mental health.

A large body of evidence shows that physical activity has many benefits, such as the ability to lift mood, but previous studies have most often examined cycling as the form of exercise. Running, however, has always played an important role in human well-being. The unique form and efficiency of human running, including the ability to sustain this kind of exertion (ie, by jogging as opposed to sprinting), is closely linked to human evolutionary success.

Despite this fact, researchers had not yet looked closely at the effects of running on brain regions that control mood and executive functions. “Given the extent of executive control required in coordinating balance, movement, and propulsion during running, it is logical that there would be increased neuronal activation in the prefrontal cortex and that other functions in this region would benefit from this increase in brain resources,” explained senior author Professor Hideaki Soya at the University of Tsukuba, Japan.

To test their hypothesis, the research team used the well-established Stroop Colour–Word Test and measured haemodynamic changes associated with brain activity while participants were engaged in each task. For example, in one task, incongruent information is shown, eg the word ‘red’ is written in green, and the participant must name the colour rather than read out the word. To do so, the brain must process both sets of information and inhibit the extraneous information. The Stroop interference effect was quantified by the difference in response times for this task and those for a simpler version of the task – stating the names of colour swatches.
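The interference measure described above is simply a difference of mean response times. A minimal sketch, using hypothetical response times rather than data from the study:

```python
# Hypothetical response times in seconds (illustrative, not study data).
incongruent_rt = [0.84, 0.91, 0.78, 0.88, 0.95]  # e.g. 'red' written in green
swatch_rt      = [0.61, 0.66, 0.58, 0.64, 0.70]  # naming plain colour swatches

def mean(xs):
    return sum(xs) / len(xs)

# Stroop interference: the extra time needed when the written word
# conflicts with the ink colour, relative to simple colour naming.
interference_ms = (mean(incongruent_rt) - mean(swatch_rt)) * 1000
print(f"Stroop interference: {interference_ms:.0f} ms")
```

A smaller interference value after exercise would indicate faster resolution of the conflicting information, which is the effect the study measured.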

The results show that, after ten minutes of moderate-intensity running, there was a significant reduction in Stroop interference effect time. Furthermore, bilateral prefrontal activation had significantly increased during the Stroop task and participants also reported being in a better mood. “This was supported by findings of coincident activations in the prefrontal cortical regions involved in mood regulation,” noted first author Chorphaka Damrongthai.

Given that many characteristics of the human prefrontal cortex are uniquely human, this study not only sheds light on the present benefits of running but also on the possible role that these benefits may have played in the evolutionary past of humans.

Source: EurekAlert!

Why Antidepressants Take Weeks to Provide Relief

A healthy neuron. Credit: NIH

The findings of a study published in Science Translational Medicine paint a new picture of how current antidepressant drugs work and suggest a new drug target in depression. As with most drugs, antidepressants were developed through trial and observation. Some 40% of patients with the disorder don’t respond adequately to the drugs, and when they do work, antidepressants take weeks to provide relief. Why this is has remained largely a mystery.

To figure out why these drugs have a delayed onset, the team examined a mouse model of chronic stress that leads to changes in behaviours controlled by the hippocampus. The hippocampus is vulnerable to stress and atrophies in people with major depression or schizophrenia. Mice exposed to chronic stress show cognitive deficits, a hallmark of impaired hippocampal function.

“Cognitive impairment is a key feature of major depressive disorder, and patients often report that difficulties at school and work are some of the most challenging parts of living with depression. Our ability to model cognitive impairment in lab mice gives us the chance to try and understand how to treat these kinds of symptoms,” said Professor Dane Chetkovich, MD, PhD, who led the study.

The study focussed on an ion transporter channel in nerve cell membranes known as the HCN channel. Previous work has shown HCN channels to play a role in depression and, separately, a role in the regulation of cognition. According to the authors, this was the first study to explicitly link the two observations.

Examination of postmortem hippocampal samples led the team to establish that HCN channels are more highly expressed in people with depression. HCN channel activity is modulated by a small signalling molecule called cAMP, which is increased by antidepressants. The team used protein receptor engineering to increase cAMP signalling in mice and establish in detail the effects this has on hippocampal HCN channel activity and, through that connection, on cognition.

Turning up cAMP was found to initially increase HCN channel activity, limit the intended effects of antidepressants, and negatively impact cognition (as measured in standard lab tests).

However, a total reversal took place over a period of some weeks. Previous work by the researchers had established that an auxiliary subunit of the HCN channel, TRIP8b, is essential for the channel’s role in regulating animal behaviour. The new study shows that, over weeks, a sustained increase in cAMP starts to interfere with TRIP8b’s ability to bind to the HCN channel, thereby quieting the channel and restoring cognitive abilities.

“This leaves us with acute and chronic changes in cAMP, of the sort seen in antidepressant drug therapy, seen here for the first time to be regulating the HCN channel in the hippocampus in two distinct ways, with opposing effects on behaviour,” Prof Chetkovich said. “This appears to carry promising implications for new drug development, and targeting TRIP8b’s role in the hippocampus more directly could help to more quickly address cognitive deficits related to chronic stress and depression.”

Source: Vanderbilt University

Human Neurons Differ From Animal Ones in a Surprising Way

A healthy neuron. Credit: National Institutes of Health

In a surprising new finding published in Nature, neuroscientists have shown that human neurons have a much smaller number of ion channels than expected, compared with the neurons of other mammals.

Ion channels are integral membrane proteins that contain pathways through which ions can flow. By shifting between closed and open conformational states, a process known as gating, they control passive ion flow through the plasma membrane.
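The voltage dependence of gating is often summarised by a steady-state open probability. As a rough sketch only: the Boltzmann function below mimics an HCN-like channel, which opens as the membrane hyperpolarises; the `v_half` and `slope` values are illustrative placeholders, not measurements from either study.

```python
import math

def open_probability(v_mV, v_half=-90.0, slope=8.0):
    """Steady-state open probability of a hypothetical HCN-like channel.

    HCN channels are activated by hyperpolarisation, so open probability
    rises as the membrane potential becomes more negative. v_half is the
    potential of half-maximal activation; both parameters are illustrative.
    """
    return 1.0 / (1.0 + math.exp((v_mV - v_half) / slope))

# More hyperpolarised potentials give a higher open probability.
for v in (-120, -90, -60):
    print(f"{v} mV -> p_open = {open_probability(v):.2f}")
```

At `v_half` the channel is open half the time, and the probability climbs towards 1 as the cell hyperpolarises further, which is the sense in which these channels are "gated" by voltage.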

The researchers hypothesise that lower channel density may have helped the human brain evolve energy efficiency, letting it divert resources elsewhere.

“If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes,” said senior author Mark Harnett, an associate professor of brain and cognitive sciences.

Analysing neurons from 10 different mammals, the researchers identified a “building plan” that holds true for every examined species — save humans. They found that as the size of neurons increases, the density of channels found in the neurons also increases.

However, human neurons proved to be a striking exception to this rule.

“Previous comparative studies established that the human brain is built like other mammalian brains, so we were surprised to find strong evidence that human neurons are special,” said lead author and former MIT graduate student Lou Beaulieu-Laroche.

Neurons in the mammalian brain can receive electrical signals from thousands of other cells, and that input determines whether or not they will fire an electrical impulse called an action potential. In 2018, Prof Harnett and Beaulieu-Laroche discovered that human and rat neurons differ in some of their electrical properties, primarily in dendrites.

One of the findings from that study was that human neurons had a lower density of ion channels than neurons in the rat brain. The researchers were surprised by this observation, as ion channel density was generally assumed to be constant across species. In their new study, Harnett and Beaulieu-Laroche decided to compare neurons from several different mammalian species to see if they could find any patterns that governed the expression of ion channels. They studied two types of voltage-gated potassium channels and the HCN channel, which conducts both potassium and sodium, in layer 5 pyramidal neurons, a type of excitatory neurons found in the brain’s cortex.

They were able to obtain brain tissue from a range of 10 mammalian species, including human tissue removed from patients with epilepsy during brain surgery. This variety allowed the researchers to cover a range of cortical thicknesses and neuron sizes across the mammalian kingdom.

In nearly every mammalian species the researchers examined, the density of ion channels increased as the size of the neurons went up. Human neurons bucked this trend, having a much lower density of ion channels than expected.

The increase in channel density across species was a surprise, Prof Harnett explained, because the more channels there are, the more energy is required to pump ions in and out of the cell. However, it started to make sense once the researchers began thinking about the number of channels in the overall volume of the cortex, he said.

In the tiny brain of the Etruscan shrew, which is packed with very small neurons, there are more neurons in a given volume of tissue than in the same volume of tissue from the rabbit brain, which has much larger neurons. But because the rabbit neurons have a higher density of ion channels, the density of channels in a given volume of tissue is the same in both species, as it is across all of the nonhuman species the researchers analysed.

“This building plan is consistent across nine different mammalian species,” Prof Harnett said. “What it looks like the cortex is trying to do is keep the numbers of ion channels per unit volume the same across all the species. This means that for a given volume of cortex, the energetic cost is the same, at least for ion channels.”
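The trade-off Harnett describes is simple arithmetic: channels per unit volume equals neurons per unit volume times channels per neuron. The toy figures below are invented purely to illustrate the principle (they are not data from the paper): the small-neuron species packs four times as many cells into a cubic millimetre, but each cell carries a quarter of the channels, so the volumetric channel density comes out the same.

```python
# Invented, illustrative numbers (not measurements) showing the
# cross-species "building plan": smaller neurons pack more cells per
# volume but carry fewer channels each, so channels per unit volume match.
species = {
    # name: (neurons per mm^3, ion channels per neuron)
    "shrew-like":  (200_000,  5_000),
    "rabbit-like": ( 50_000, 20_000),
}

for name, (neurons_per_mm3, channels_per_neuron) in species.items():
    channels_per_mm3 = neurons_per_mm3 * channels_per_neuron
    print(f"{name}: {channels_per_mm3:.2e} channels per mm^3")
# Both species come out at 1.00e+09 channels per mm^3.
```

On this view, human cortex breaks the pattern by sitting below the shared volumetric density rather than on it.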

The human brain represents a striking deviation from this building plan, however. Instead of increased density of ion channels, the researchers found a dramatic decrease in the expected density of ion channels for a given volume of brain tissue.

The researchers believe this lower density may have evolved as a way to expend less energy on pumping ions, which allows the brain to use that energy for something else, like creating more complicated synaptic connections between neurons or firing action potentials at a higher rate.

“We think that humans have evolved out of this building plan that was previously restricting the size of cortex, and they figured out a way to become more energetically efficient, so you spend less ATP per volume compared to other species,” Prof Harnett said.

He now hopes to study where that extra energy might be going, and whether there are specific gene mutations that help neurons of the human cortex achieve this high efficiency. The researchers are also interested in exploring whether primate species that are more closely related to humans show similar decreases in ion channel density.

Source: Massachusetts Institute of Technology