Here you go: as promised, a rewritten project proposal aimed at beginners to neuroscience. Not quite a plain-English rewording, but simplified and with key terms defined. This is what I will be doing up until September...
The way in which the brain processes our main senses, such as sight and hearing, is fascinating (and by the way, don’t go thinking you only have the 5 senses we all know about; there are far more!). The areas that handle them are what we call ‘topographically organised’, which basically means that the surface of the brain, the cortex, acts as a little map of what we perceive. For example, if you could look down on the main visual area of your brain, called V1, and see what each brain cell is processing, it would look more or less the same as the picture you see out of your eye (this is a hugely simplified explanation, but you get the general idea).
The hearing process is slightly different. In another simplified explanation, sound is processed in terms of what we perceive as the ‘pitch’ of a sound, or how ‘high’ or ‘low’ it sounds; its frequency. Each frequency (or note, if that makes it simpler) is processed by a separate part of the primary auditory cortex, or A1. These areas of A1 are also organised by frequency, a bit like the keys of a piano running from low notes up to high ones. So if you were to scan the brain while running your hand along a piano keyboard, you would see a ripple of activation move along A1. The range of frequencies that each auditory brain cell responds to is called its ‘receptive field’.
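If you like a bit of code, the piano analogy can be sketched in a few lines of Python. The piano-key formula is standard equal temperament (key 49 = A4 = 440 Hz), but the ‘A1 axis’ here is purely illustrative: real tonotopic maps are roughly logarithmic in frequency, and the positions below are just a toy mapping, not real cortical coordinates.

```python
import math

def piano_key_frequency(key_number):
    """Frequency in Hz of a piano key (key 49 = A4 = 440 Hz, equal temperament)."""
    return 440.0 * 2.0 ** ((key_number - 49) / 12)

def tonotopic_position(freq_hz, low_hz=27.5, high_hz=4186.0):
    """Map a frequency to a position between 0 and 1 along a toy 'A1 axis'.

    Tonotopic maps are roughly logarithmic in frequency, so equal steps
    in pitch give equal steps along the map.
    """
    return (math.log2(freq_hz) - math.log2(low_hz)) / (math.log2(high_hz) - math.log2(low_hz))

# Running a hand up the keyboard sweeps the 'activation' smoothly along the map:
for key in (1, 25, 49, 73, 88):
    f = piano_key_frequency(key)
    print(f"key {key:2d}: {f:8.1f} Hz -> position {tonotopic_position(f):.2f}")
```

Each octave (doubling of frequency) moves the ‘activation’ the same distance along the toy map, which is the ripple described above.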
If you read the entry I wrote recently about neuroplasticity you will know that the organisation of the brain, or its ‘wiring’, can be changed. When this occurs in our sensory areas we can think of it as ‘remapping’. Research in animals has shown that the receptive fields of neurons in A1 can undergo these ‘plastic’ changes very rapidly, as a result of what the animal learns to associate particular sounds with. Amazingly, these changes occur within minutes.
If something happens to make the animal associate a particular frequency with something external, the tone acquires behavioural relevance and a large number of these A1 cells shift their preferred frequency and begin to respond more to the new one. This effect has been shown to depend on the animal paying direct attention to the stimulus. In one experiment, two groups of rats were trained to respond to musical tones: one group to the frequency of a tone, the other to its volume. Each group was played a series of tones at different frequencies or volumes, and had to respond only to the frequency or volume it had been trained on. The frequency rats showed changes to the frequency map in A1, with more cells firing in response to the trained tone. This didn’t happen in the rats trained to respond to loudness, but they did show an increase in the type of cells that respond to volume rather than frequency. This supports the idea that brain cells can be reassigned depending on what an animal needs them for, and which sounds hold particular relevance for it. Other studies have achieved the same result without training the animal at all, instead electrically stimulating parts of the brain, such as the nucleus basalis, while the animal heard a particular frequency.
However, very little work has been done on this subject in humans. So our study aims to determine whether conditioning to a particular frequency can lead to improved performance in detection and/or discrimination of that frequency amid others, as would be predicted if human receptive fields show similar plasticity to that documented in animals.
What we are planning to do is compare people’s ability to detect a particular frequency, as well as to discriminate between that frequency and others close by. In the detection task, subjects will have to decide which of two successive bursts of white noise contained a ‘hidden’ pure tone embedded within it. In the discrimination task, participants will be required to decide whether the second of two successively presented pure tones was higher or lower than the first.
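For the programmers among you, here is a rough Python sketch of how trial stimuli like these could be generated. The sample rate, tone levels, and durations are my own placeholder choices for illustration, not the actual study parameters.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (a common audio rate; an assumption here)

def pure_tone(freq_hz, duration_s, amplitude=0.1):
    """A pure sine tone at the given frequency."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def detection_trial(target_hz, duration_s=0.5, rng=None):
    """Two bursts of white noise; one (chosen at random) has the target tone embedded.

    Returns the two intervals and the index (0 or 1) of the interval with the
    tone, which is the answer the listener has to guess.
    """
    rng = rng or np.random.default_rng()
    n = int(SAMPLE_RATE * duration_s)
    noise_only = rng.normal(0, 0.1, n)
    noise_plus_tone = rng.normal(0, 0.1, n) + pure_tone(target_hz, duration_s)
    if rng.random() < 0.5:
        return noise_plus_tone, noise_only, 0
    return noise_only, noise_plus_tone, 1

def discrimination_trial(standard_hz, delta_hz, rng=None):
    """Two pure tones in a row; the second is higher or lower by delta_hz.

    Returns the two tones and "higher" or "lower" as the correct answer.
    """
    rng = rng or np.random.default_rng()
    direction = "higher" if rng.random() < 0.5 else "lower"
    second_hz = standard_hz + delta_hz if direction == "higher" else standard_hz - delta_hz
    return pure_tone(standard_hz, 0.5), pure_tone(second_hz, 0.5), direction
```

In a real experiment the arrays would be played through calibrated headphones and the tone level in the detection task would be set near each listener’s threshold, so the task is genuinely hard.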
After this initial detection / discrimination task, subjects will undergo a training method known as ‘classical conditioning’, repeatedly pairing one distinct target frequency tone with an electric shock to the forearm so the participant comes to associate that frequency with receiving a physical shock. After this association is established, the detection / discrimination tasks are repeated, with an occasional “topping up” of the shock conditioning. After 40 minutes, the detection / discrimination task will continue without further conditioning. The absence of reinforcement of the target frequency will then lead to what is called “extinction” of the association between frequency and shock. We will then compare the ability of participants to spot the tone to which they were conditioned before and after the task.
If the animal studies apply to humans, a greater number of brain cells in A1 should come to respond to the target frequency, because it has become associated with the electric shock. More cells responding to that frequency should make people better at spotting it. If we find a significant effect we may then go on to repeat the experiment while monitoring brain activity using a method known as magnetoencephalography (MEG), to see what is going on in the brain during the experiment.