Somewhat unbelievably, we are now in the last 2 weeks of formal lectures, although as usual there is still an array of optional and supplementary talks going on at the various affiliated institutions. However, the taught aspect of the MSc is now all but over. This leaves only the prospect of the research project looming, lumbering into sight like some gigantic beast from a 1950s B-movie, pulverising anything that dare get in its way.
Tomorrow I meet with my supervisor/collaborator to really get the process started, and we will draw up exactly what will happen, when, and who will be responsible for what.
A few weeks ago I posted here a simplified explanation of our research proposal. I will now attempt to explain, in as accessible a form as possible, what this research has got to do with the real world. But first, a very quick recap of the basics.
The field is auditory psychophysics. At first the name alone was enough to make me want to run for the hills, but it's not half as scary as it sounds. As is so often the case in neuroscience the intimidating nomenclature masks a surprisingly simple concept. Psychophysics, it turns out, is just the study of how the brain processes the information we get from the senses, in this case, sound. Therefore the grand old title 'auditory psychophysics' really just means 'what the brain does with everything you hear'.
There is now a wealth of evidence to suggest that the structure of the brain, i.e. which cells connect with which and how strong the connections are, is constantly changing, and that this change is driven by the importance of the sensory information we receive.
In the case of sound, incoming information is processed mainly in the primary auditory cortex, or A1. I have explained before the concept of tonotopic organisation, but as a little refresher imagine that a little part of the surface of the brain is like the keys of a piano. When you hear a high-pitched sound, the brain cells at the top of the piano are activated, and as the pitch gets deeper the activity moves further down the keyboard.
As a result of this type of organisation, each cell in A1 has what is termed a best frequency, or BF. This is the frequency to which the cell responds most strongly; as the stimulus frequency moves away from the BF the response gets smaller, until eventually there is no response at all.
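For the programmatically inclined, this idea can be captured in a purely illustrative toy model (not from the research itself, and with made-up numbers): a neuron's response is often pictured as a bell-shaped tuning curve centred on its BF, falling off with distance in octaves.

```python
import math

def tuning_response(freq_hz, best_freq_hz, bandwidth_octaves=0.5):
    """Toy bell-shaped tuning curve: the response is strongest at the
    best frequency (BF) and falls off as the stimulus moves away from
    it, with distance measured in octaves (log frequency)."""
    distance_in_octaves = math.log2(freq_hz / best_freq_hz)
    return math.exp(-(distance_in_octaves ** 2) / (2 * bandwidth_octaves ** 2))

# A cell with a BF of 1000 Hz responds maximally at 1000 Hz,
# more weakly an octave away, and barely at all three octaves away.
print(tuning_response(1000, 1000))
print(tuning_response(2000, 1000))
print(tuning_response(8000, 1000))
```

The bandwidth parameter is invented for illustration; real A1 neurons have tuning curves of many shapes and widths.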
So, what happens if a certain frequency suddenly becomes very important to your behaviour? For example, consider the sound of a screeching predator, a very good indicator that you should make yourself scarce as soon as possible. It would be very helpful if you processed these behaviourally relevant sounds more quickly than irrelevant background sounds.
Well, when a sound like this suddenly becomes important, we find that more of the auditory neurons shift their BF towards the frequency in question, making the animal more sensitive to that sound.
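Again as a toy sketch only (my own invented numbers and parameters, not the actual model used in the research): you can picture this re-tuning as each neuron's BF drifting partway towards the important frequency, with neurons already tuned nearby shifting the most.

```python
import math

def retune_population(best_freqs_hz, important_freq_hz,
                      shift_fraction=0.3, influence_octaves=1.0):
    """Toy model of plasticity: each neuron's BF moves a fraction of
    the way (in log-frequency space) towards a behaviourally important
    frequency. Neurons tuned far from that frequency barely move."""
    new_bfs = []
    for bf in best_freqs_hz:
        octave_gap = math.log2(important_freq_hz / bf)
        # Weight the shift so that distant neurons are hardly affected.
        weight = math.exp(-(octave_gap ** 2) / (2 * influence_octaves ** 2))
        new_bfs.append(bf * 2 ** (octave_gap * shift_fraction * weight))
    return new_bfs

population = [250, 500, 1000, 2000, 4000]   # BFs in Hz
after = retune_population(population, important_freq_hz=1000)
# Every cell's BF has drifted towards 1000 Hz, nearby cells most of all.
```

The fractions and the one-octave "influence" window are assumptions for the sake of the sketch; the real effect sizes are exactly what experiments like ours set out to measure.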
It sounds fairly straightforward, but we are talking about a small cluster of cells amongst tens of billions, a great many of which show a similar adaptability within their own specialisation. So this will be going on not just for sound frequency but also for other properties of sound, such as loudness. Plasticity has also been shown in other senses, such as vision, touch and smell. And that is just the senses: our own internal states are constantly being monitored in a similar fashion.
The bigger picture is one of a brain that is constantly adapting to perform at its peak in whatever environment it finds itself.
This plasticity is greatest in infancy. Babies are born with far more connections between brain cells than are present in adults, perhaps as many as double. This is because most of our adaptation to our environment happens in the first few years of life. Once the infant has adapted to its environment, the irrelevant connections are pruned away, remaining largely dormant rather than dead.
This extreme early adaptability has a few intriguing consequences. For example, if a human baby is exposed to enough monkey faces early in development it will be able to distinguish monkey faces just as well as human faces (presumably into adulthood), something that is almost impossible for an adult to learn. Another example of early adaptability and pruning is seen in language: babies can initially distinguish all the speech sounds used in languages around the world, even sounds that are almost indistinguishable to Western adults, such as the click consonants of some southern African languages. This universal sensitivity does not last long, and beyond the first couple of years of life we become locked into the sound inventory of our first language (which, incidentally, is why native Japanese speakers find it so hard to distinguish between R and L, a contrast that is not present in Japanese).
However, as I said, the connections that are pruned after infancy remain dormant rather than dead, and plasticity experiments suggest that with appropriate training they can be revived to some degree.
Plasticity, therefore, is like Darwinism happening in real time. It takes many generations for a species to physically adapt to its environment, but the clever old brain can do it in a matter of hours.