Sunday, 20 December 2009
The way in which the brain processes our main senses, such as sight and hearing, is fascinating (and by the way, don’t go thinking you only have the five senses we all know about; there are far more!). These sensory areas are what we call ‘topographically organised’, which basically means that the surface of the brain, the cortex, acts as a little map of what we perceive. For example, if you could look down on the main visual area of your brain, called V1, and see what each brain cell is processing, it would look more or less the same as the picture you see out of your eye (a hugely simplified explanation, but you get the general idea).
The hearing process is slightly different. In another simplified explanation, sound is processed in terms of what we perceive as the ‘pitch’ of a sound, or how ‘high’ or ‘low’ it sounds: its frequency. Each frequency (or note, if that makes it simpler) is processed by a separate part of the primary auditory cortex, or A1. These areas of A1 are also organised by frequency, a bit like the keys of a piano going from low notes up to high-pitched ones. So if you were to scan the brain while running your hand up a piano, you would see a ripple of activation move along A1. The range of frequencies that each auditory brain cell responds to is called its ‘receptive field’.
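To make the piano analogy concrete, here is a toy sketch of how a tonotopic map might assign each frequency a position along the cortex. It is purely illustrative (the frequency range and the idea of a neat 0-to-1 axis are my own simplifications, not real anatomy), but it captures the key point that the mapping is roughly logarithmic, so each octave takes up a similar stretch of cortex, just like the keys of a piano:

```python
import math

def tonotopic_position(freq_hz, f_min=20.0, f_max=20000.0):
    """Map a frequency to a position along a 0-1 'cortical axis'.

    Tonotopy is roughly logarithmic: each octave occupies a
    similar amount of cortical distance. (Toy model only.)
    """
    return math.log2(freq_hz / f_min) / math.log2(f_max / f_min)

# Running a hand up the piano moves activation smoothly along the axis:
for note_hz in [27.5, 440.0, 4186.0]:   # lowest A, concert A, top C
    print(f"{note_hz:7.1f} Hz -> position {tonotopic_position(note_hz):.2f}")
```

Doubling the frequency always moves you the same distance along the axis, which is why a scale played on a piano would sweep a steady ripple along A1.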
If you read the entry I wrote recently about neuroplasticity you will know that the organisation of the brain, or its ‘wiring’, can be changed. When this occurs in our sensory areas we can think of it as ‘remapping’. Research in animals has shown that the receptive fields of neurons in A1 can undergo these ‘plastic’ changes very rapidly, as a result of what the animal learns to associate particular sounds with. Amazingly, these changes occur within minutes.
If something happens to make the animal associate a particular frequency with something external, the tone acquires behavioural relevance and a large number of these A1 cells shift their preferred frequency, beginning to respond more to the new frequency. This effect has been shown to depend on the animal paying direct attention to the stimulus. In one experiment, two groups of rats were trained to respond to musical tones: one group responded to the frequency of a tone, the other to its volume. Each group was played the same series of tones at different frequencies and volumes, and had to respond only to the dimension it had been trained on. The frequency rats showed changes to the frequency map in A1, with more cells firing in response to the trained tone. This didn’t happen in the rats trained to respond to loudness, but they did show an increase in the type of cells that respond to volume rather than frequency. This supports the idea that brain cells can be reassigned depending on what an animal needs them for, and which sounds hold particular relevance for it. Other studies have achieved the same result without training the animal at all, instead electrically stimulating parts of the brain, such as the nucleus basalis, whenever the animal heard a particular frequency.
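As a rough illustration of that remapping idea, here is a toy model (my own sketch, not taken from any of the studies above) in which cells shift their preferred frequency part-way toward a newly relevant target. Cells tuned near the target drift toward it, while the target-tuned cell stays put, so more of the population ends up responding to the conditioned frequency:

```python
def condition(preferred_freqs, target_hz, shift=0.3):
    """After conditioning, each cell's preferred (best) frequency moves
    a fraction of the way toward the behaviourally relevant target.
    (Toy caricature of tonotopic remapping, not a real model.)"""
    return [f + shift * (target_hz - f) for f in preferred_freqs]

cells = [200.0, 500.0, 1000.0, 2000.0, 4000.0]   # preferred frequencies in Hz
print(condition(cells, target_hz=1000.0))
# -> [440.0, 650.0, 1000.0, 1700.0, 3100.0]
```

After ‘conditioning’, the whole population has bunched up around 1000 Hz, which is the over-representation of the target frequency that the animal studies report.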
However, very little work has been done on this subject in humans. So our study aims to determine whether conditioning of a particular frequency can lead to improved performance in detection and/or discrimination of that frequency amid others, as would be predicted if human receptive fields show similar plasticity to that documented in animals.
What we are planning to do is to compare the ability of people to detect a particular frequency as well as to discriminate between that frequency and others close by. In the detection task subjects will have to decide which of two successive bursts of white noise contained a ‘hidden’ pure tone. In the discrimination experiment participants will be required to decide whether the second of two successively presented pure tones was higher or lower than the first.
After this initial detection/discrimination task, subjects will undergo a training method known as ‘classical conditioning’: one distinct target frequency is repeatedly paired with an electric shock to the forearm, so the participant comes to associate that frequency with receiving a physical shock. Once this association is established, the detection/discrimination tasks are repeated, with occasional ‘topping up’ of the shock conditioning. After 40 minutes, the detection/discrimination task will continue without further conditioning; the absence of reinforcement of the target frequency will then lead to what is called ‘extinction’ of the association between frequency and shock. We will then compare participants’ ability to spot the conditioned tone before and after conditioning.
If the animal studies are applicable to humans there is likely to be a greater number of brain cells in A1 detecting the target frequency, because it has become associated with the electric shock. More cells responding to that frequency should make people better at spotting it. If we find a significant effect we may then go on to repeat the experiment while monitoring brain activity using a method known as MEG, to see what is going on in the brain during the experiment.
Saturday, 19 December 2009
Friday, 18 December 2009
Rapid plastic changes in Auditory Cortex: a classical conditioning paradigm
Chris Fassnidge, Dr Christian Kluge and Professor Jon Driver
This study seeks to determine whether detection and/or discrimination of a pure auditory tone can be improved by classical conditioning, pairing a target frequency with an electric shock.
Work by Merzenich, Weinberger, Irvine and others has shown that receptive field properties of neurons in primary auditory cortex (AI) can undergo rapid plastic changes in response to behavioural learning in animals (reviewed in Weinberger, 2004; Weinberger, 2007; Irvine, 2007). Remarkably, these changes occur within minutes. During learning, when a target frequency acquires behavioural relevance, a large number of AI pyramidal cells shift their best frequency towards this distinct frequency. This effect has been shown to depend on attention, i.e. behavioural relevance (Polley et al., 2006). Two groups of rats underwent operant conditioning with identical stimulus sets. One group responded to a target frequency and demonstrated tonotopic changes resulting in an increased representation of the target frequency, while the second group performed the task (with exactly the same stimuli) in response to a target loudness, which led to changes in the topographic organisation of neurons’ preferred loudness. In non-human primates, Blake et al. (2006) demonstrated the crucial role of active cognitive control and involvement for tonotopic re-mapping to occur.
Later mechanistic assessment has revealed that the neurotransmitter acetylcholine (ACh) is crucially involved in these plastic processes. Pairing brief ACh infusions with the purely passive presentation of tones induced changes in the AI tonotopic maps similar to the ones observed in the experiments described above. In addition, stimulation of the nucleus basalis, the main source of corticopetal cholinergic projections, led to identical remapping. These findings are intriguing because they strongly argue against the long-held view that primary sensory cortices are merely passive input structures in which plastic changes of receptive fields occur only during early ontogeny. Instead, the studies summarised indicate that the sensitivity and perhaps even local network resonance patterns can be dynamically adapted to current behavioural requirements.
Very little work has been done in humans on this subject. Thus, we aim to behaviourally determine whether conditioning of one or another frequency can lead to improved performance in detection and/or discrimination of pure tones, as would be predicted if human receptive fields show similar plasticity to that documented in animals.
Materials and Method
In a within-subject design (with conditioned frequencies counterbalanced over subjects), we will compare the detection (experiment A) as well as the discrimination (experiment B) of pure tones. The detection task will employ a two alternative forced choice (2AFC) scheme in which subjects have to decide which of two successively presented white noise stimuli actually contained a pure tone. In the discrimination experiment, participants will be required to decide whether the second of two successively presented pure tones was higher or lower than the first one. In both experiments tones of a range of frequencies will be used and this part of the experiment will last about 15 minutes.
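As an illustration of the detection task, a single 2AFC trial could be generated along the following lines. This is a minimal sketch only; the sample rate, burst duration and tone amplitude are placeholder values I have chosen for the example, not the study's actual stimulus parameters:

```python
import math
import random

FS = 8000     # sample rate in Hz (illustrative assumption)
DUR = 0.3     # burst duration in seconds (illustrative assumption)

def noise_burst(tone_hz=None, tone_amp=0.15):
    """A white-noise burst; if tone_hz is given, a faint pure tone is mixed in."""
    n = int(FS * DUR)
    samples = []
    for i in range(n):
        s = random.gauss(0.0, 1.0)                          # the noise
        if tone_hz is not None:                             # the 'hidden' tone
            s += tone_amp * math.sin(2 * math.pi * tone_hz * i / FS)
        samples.append(s)
    return samples

def detection_trial(target_hz=1000.0):
    """One 2AFC detection trial: two intervals, one hiding the tone.

    Returns the two bursts and the index (0 or 1) of the tone interval,
    which is what the subject's response is scored against.
    """
    tone_interval = random.randrange(2)
    bursts = [noise_burst(target_hz if i == tone_interval else None)
              for i in (0, 1)]
    return bursts, tone_interval

bursts, answer = detection_trial()
```

In a real experiment the tone amplitude would be set near each subject's threshold (e.g. by an adaptive staircase) so that performance sits usefully between chance and ceiling.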
After this initial detection/discrimination block, subjects will undergo classical conditioning, pairing one distinct target frequency tone with an electric shock to the forearm. After this association is established, the detection/discrimination 2AFC routines are repeated, interleaved with further conditioning blocks (“topping up”). After 40 minutes, the detection/discrimination task will cease to be interrupted by further conditioning. The absence of reinforcement of the target frequency will then lead to extinction of the association between frequency and shock.
A number of potential follow-up studies are conceivable. First, the work by Blake and colleagues (2006) suggests that operant conditioning might be more effective in inducing tonotopic changes. Thus, modifications of the paradigm employing reward or punishment depending on performance are possible. Also, there are potential MEG versions of all experiments described which would, through analysis of early latency auditory components of the evoked magnetic fields, allow for a direct assessment of the underlying neurophysiological principles.
This series of experiments allows for three possible outcomes:
1. Conditioning may improve tone detection performance but not tone discrimination. This situation would allow for the conclusion that a greater number of neurons are responding to the target frequency after conditioning, but that this improvement does not involve a sharpening of best-frequency tuning curves.
2. Conditioning may improve tone discrimination performance but not tone detection. This outcome could be interpreted as a potential increase in the local signal-to-noise ratio. This situation seems somewhat unlikely, however, since previous studies reported best-frequency shifts in large numbers of cells rather than sharpening of existing tuning curves.
3. Finally, if conditioning leads to performance improvements in both detection and discrimination our interpretation would be that although there was an increase in the number of neurons responding to the target frequency, this change does not come at the expense of frequencies around it. In this situation it would be interesting to study the underlying compensatory mechanisms in a later MEG experiment.
The data will be analyzed with ANOVA (random effects) using the SPSS statistics software package. Further analysis may be required depending on results.
Preparatory work: January - March 2010
(generation of stimuli, programming of the actual experiment, pilot measurements)
Data collection: March - June 2010
(16 to 20 subjects per group)
Analysis and write up: June - July 2010
Participants will be reimbursed for their time and effort using existing research grants of the ICN attention group. No investment in equipment or software will be necessary.
Full ethical approval will be sought from the Graduate School Research Ethics Committee prior to pilot data collection. The ethics application will be submitted in early January 2010.
Blake, D. T., Heiser, M. A., Caywood, M., & Merzenich, M. M. (2006). Experience-dependent adult cortical plasticity requires cognitive association between sensation and reward. Neuron, 52(2), 371-381.
Irvine, D. R. F. (2007). Auditory cortical plasticity: Does it provide evidence for cognitive processing in the auditory cortex? Hearing Research, 229(1-2), 158-170.
Polley, D. B., Steinberg, E. E., & Merzenich, M. M. (2006). Perceptual Learning Directs Auditory Cortical Map Reorganization through Top-Down Influences. The Journal of Neuroscience, 26(18), 4970–4982.
Weinberger, N. M. (2004). Specific long-term memory traces in primary auditory cortex. Nature Reviews Neuroscience, 5(4), 279-290.
Weinberger, N. M. (2007). Associative representational plasticity in the auditory cortex: a synthesis of two disciplines. Learning & Memory, 14(1-2), 1-16.
Thursday, 10 December 2009
There are times on this course when I really need to take a deep breath and swallow down the surge of inadequacy that I feel building up inside me. It is something akin to those moments in life when one almost throws up but somehow at the last second manages to swallow down the noxious brew. Both avoid leaving you in an unpleasant position, but equally leave you with a revolting taste in your mouth.
A tad overdramatic? Perhaps.
It is fair to say, however, that I often feel out of my depth on this course. Those of you who have read my past entries in this blog will know that my school record was far from exemplary. You will also know that I have felt more than a sliver of trepidation at being accepted to study at such a prestigious institution, a world leader no less.
It has not been the finest of weeks. Monday started off optimistically enough, with a meeting at the Institute of Cognitive Neuroscience (ICN) to discuss my upcoming research project with my soon-to-be collaborator, Dr Christian Kluge. I left the ICN buzzing at having drawn up an exciting and original piece of research with Dr Kluge, and feeling a lot more confident about what would in the new year become the embryonic stage of my thesis.
How crushed I was then, on returning home to see in my inbox the following words from Dr Kluge:
“There is an ongoing project with quite similar designs that i did not know of. Therefore, we will probably have to re-think what we want to do”.
A lesson in expectations management, perhaps. That’ll teach me to curb my enthusiasm!
Anyway, there was little time to waste, as in seven days’ time I would be required to make a brief presentation to my peers and the course administrators outlining my research project. What Dr Kluge proposed was that we meet with Professor Jon Driver, one of the directors of the ICN and the man who would be supervising the project that Dr Kluge and I will spend the next nine months on.
It didn’t go well.
I came across as a bumbling, poorly read amateur. Prof Driver was clearly unimpressed, and Dr Kluge was visibly embarrassed at having brought me into his office. Nevertheless, between us we managed to thrash out a viable research project which certainly has potential to be an exciting piece of work.
The one advantage of making such a poor first impression on such an important figure within the ICN is that there is now only one direction in which his opinion of me can go. The last thing Professor Driver asked of me, as he was on his way into a two-hour meeting, was to draw up the slides for my presentation on Monday, and to do it before Friday so he could take a look at them.
It was in his inbox by the time he left the meeting.
The best thing about fighting down those feelings of inadequacy is that it resets one’s perspective, so that we may replace them with feelings of pride and self-congratulation; quite rare for me to feel and even rarer to voice. But if I am honest I have done bloody well to get here. It can be off-putting at times to hear some of my fellow students list their accomplishments, to hear them reel off terminology that leaves me perplexed, and to need things explained to me in simple terms.
I should not lose sight of the fact that I came into this course without a single science A-level, and ten years after a decidedly average performance at GCSE science. In addition, contrary to what some may think, a psychology degree is far from the ideal prerequisite for a cognitive neuroscience MSc, let alone one with as little scientific content as my bachelor’s degree had. But then again, I had no experience of psychology before undertaking my degree, and emerged with first class honours.
This may all be new to me now, and I may have to endure some snobbery, condescending comments and pangs of self-doubt, but I will learn fast, improve exponentially and come out the other side with one heck of a valuable qualification.
And then, just maybe, repeat the whole cycle again with a PhD.
Thursday, 3 December 2009
"Men ought to know that from nothing else but the brain come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations. And by this, in an especial manner, we acquire wisdom and knowledge, and see and hear, and know what are foul and what are fair, what are bad and what are good, what are sweet, and what unsavoury... And by the same organ we become mad and delirious, and fears and terrors assail us... All these things we endure from the brain"
Hippocrates, c. 400 B.C.
Wednesday, 25 November 2009
I am currently trying to think of a research project which will form my thesis. I have a pretty good idea who will be supervising me, a guy over at the Institute of Cognitive Neuroscience who is researching plasticity in the auditory cortex.
Plasticity is something that fascinates me greatly, and one book in particular, 'The Brain That Changes Itself' by Norman Doidge, was more or less the main reason why I applied for this particular course.
Essentially, plasticity (or neuroplasticity) is the ability of the brain to 'rewire' itself, to make new connections. For the century or so since the 'neuron doctrine' came to the forefront of thinking about the brain, it was believed that the structure of the brain was more or less fixed from adolescence onward. Scientists thought that no new brain cells could be formed, and if a part of the brain was damaged then it was lost for life.
Of course, there has to be some degree of plasticity in the brain, otherwise we couldn't form new memories, but we now know that the brain is far, far more plastic than previously thought.
The most exciting research being done in this field that I am aware of is in the area of sensory substitution, most notably the work carried out by a guy by the name of Paul Bach-y-Rita.
Bach-y-Rita developed a device which could allow the brain to recover one lost sense from another. If I wanted to sensationalise this, I would say he made blind people see again. But that wouldn't quite do him justice - he made them see out of their tongue.
Sounds insane, but it is actually possible. The key concept here is that we don't see with our eyes, just as we don't hear with our ears. All of our senses are essentially electrical information carried to and processed in the brain. For example, the actual physical image of what you see doesn't get any further than the back of the retina, before it becomes a complex series of electrical pulses that are then carried to the visual areas at the back of the brain.
When somebody is blind, it is generally because of a problem with their eyes, rather than the visual areas of the brain. Therefore if an alternative pathway could be found to get these electrical pulses to the visual cortex, you would have vision.
What Bach-y-Rita invented was a small device that sits on your tongue and converts a picture from a video camera into electrical information. Imagine a strip of plastic with hundreds of tiny electrical points covering its surface. Each one of these points is like the pixels that make up a digital image, the brighter the pixel the stronger the electrical pulse. Multiply this over the entire surface of the tongue and you can make up a crude image of your visual field. Apparently, when this device is activated it feels like those old 'popping candy' sweets you can get that fizz in your mouth.
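The core idea, brightness mapped to stimulation strength, can be sketched in a few lines. This is a toy model of my own (real devices also downsample the camera image to the electrode grid and calibrate the intensity range per user), but it shows the pixel-to-electrode mapping:

```python
def frame_to_electrodes(gray_frame, levels=8):
    """Map a grayscale camera frame to electrode stimulation strengths.

    gray_frame: 2D list of pixel brightnesses, 0-255.
    Each 'electrode' pulses harder for brighter pixels, so the tongue
    receives a crude intensity map of the visual field. (Toy sketch.)
    """
    return [[round(px / 255 * (levels - 1)) for px in row]
            for row in gray_frame]

frame = [[0, 128, 255],
         [255, 128, 0]]
print(frame_to_electrodes(frame))   # -> [[0, 4, 7], [7, 4, 0]]
```

The brain's job is then the hard part: learning, over days of use, to read those patterns of fizzing on the tongue as spatial images.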
The amazing thing is, over time your brain learns how to process the signals as images, and slowly the signal becomes a valid, if slightly basic, black and white image. Brain scans using MRI machines have even confirmed that this information is being processed in the visual parts of the brain.
So this now becomes a philosophical question - if visual information is being processed in the visual cortex, but it just so happens it is relayed via the tongue by a video camera - is this still eyesight?
Take a look at these two videos and decide for yourself.
Monday, 9 November 2009
I realise that my last entry was pretty dull, so I found out an interesting little fact to keep your interest.
There are 100-150 billion neurons in the human brain.
Each neuron may connect with around 10,000 other neurons.
If each neuron connected with every other single neuron, our brain would be 12.5 miles in diameter (Nelson & Bower, 1990). This is the size of Greater London.
Taken from Jamie Ward's book 'The Student's Guide to Cognitive Neuroscience', chapter 2.
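The arithmetic behind these figures is easy to check. Taking the lower estimate of 100 billion neurons at 10,000 connections each, and comparing it with the hypothetical every-neuron-to-every-other-neuron case:

```python
neurons = 100e9          # lower estimate quoted above
synapses_each = 10_000   # connections per neuron

actual_connections = neurons * synapses_each      # what we actually have
full_connectivity = neurons * (neurons - 1) / 2   # every possible pair

print(f"actual:          {actual_connections:.0e}")   # -> 1e+15
print(f"fully connected: {full_connectivity:.0e}")    # -> 5e+21
```

So the real brain has around a thousand trillion synapses, yet that is still millions of times fewer than full connectivity would require, which is why a fully connected brain would need to be so absurdly large.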
Sunday, 8 November 2009
There isn't a great deal to report about the last fortnight. We had some interesting lectures and some achingly dull ones. We learnt a little about how computational modelling can help us to understand the brain, had another case study at the hospital, and a whole load more stats.
I got my marks back for the first statistics test as I was handing in the second assignment. I did reasonably well, considering the conditions under which I wrote the last one (see my previous entry on my late-night rewrite). The marks I dropped were, I think, due mostly to the length of time it has been since I had to do any statistical analysis, and a couple of silly mistakes. I feel a lot more confident about the second test, and it would seem that I am back into the swing of things.
My main worry now is the exam in January (the only exam on this course), which is on the neurophysiological side of the course. Of course, this stuff is bloody hard, and Marty has just put a few example questions on his website, which really put the fear into me. As I was rereading my notes on how brain cells communicate with one another, I was struck by just how easy this would be if the brain were intrinsically self-aware. My neurons are firing right now, in many different parts of my brain, as are yours, as are all of ours, all the time, many many times over. We should be experts in this. Given the frequency with which our neurons fire you could argue that it is the single most practised act any human has ever performed.
So why then is this exam shaping up to be such a struggle?
Friday, 23 October 2009
The third week in, and things are getting tougher. On the plus side, though, certain things are getting more interesting.
For example, we had a lecture explaining how an MRI (magnetic resonance imaging) scanner is able to capture such detailed images of the brain. Would you like to know how it works?
Essentially, an MRI scanner is a tube containing a very strong magnetic field, tens of thousands of times stronger than the Earth’s magnetic field. This field is kept constant, so it is absolutely uniform at every point within the scanner.
The protons inside the head are all busy jiggling around all over the place, randomly veering around at different angles, but once inside the scanner they begin to align with the field, each proton facing the same direction, due to the huge magnetic force generated by the machine.
The operator can then send a sudden radio-frequency pulse through the scanner, which flips every proton in the scanner chamber to one side. To help you imagine this, picture all the protons facing north, and then suddenly being knocked to face east (of course this is not exactly how it happens, but it can be easily visualised).
Now, this is the clever bit. The protons will then flip back to align with ‘north’, only in every different type of tissue this process will take slightly different amounts of time. So, bone, grey matter, white matter, even oxygenated and lesser oxygenated blood, each will take a slightly different time for their protons to recover from the knock in order to face ‘north’.
This different timing for each type of tissue is crucial: the scanner keeps a record of how long the protons took to recover, and this affects whether a light or dark patch appears on the screen, giving us a detailed image of the skull, brain and surrounding tissue.
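A toy calculation shows how those different recovery times become image contrast. Recovery roughly follows 1 − exp(−t/T1), so tissue with a short T1 has ‘stood back up’ further by readout time and gives a stronger signal. The T1 values below are ballpark textbook figures for a 1.5 T scanner, not exact numbers:

```python
import math

# Approximate T1 recovery times at 1.5 T, in milliseconds
# (rough textbook figures; exact values vary by scanner and source).
T1_MS = {"white matter": 600, "grey matter": 900, "CSF": 4000}

def recovered_signal(t1_ms, t_ms=500.0):
    """Fraction of magnetisation recovered t_ms after the flip.

    Shorter T1 means faster recovery, hence a brighter voxel
    in a T1-weighted image. (Simplified model.)
    """
    return 1.0 - math.exp(-t_ms / t1_ms)

for tissue, t1 in T1_MS.items():
    print(f"{tissue:12s} -> relative signal {recovered_signal(t1):.2f}")
```

White matter recovers fastest and so appears brightest, cerebrospinal fluid slowest and darkest; that ordering is exactly the light-and-dark contrast described above.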
Amazingly, the machine can do this in 3D, so you can then work through sections of the brain as if you were travelling through the body.
It is quite an amazing procedure, and if you are interested you can see more here
On an unrelated note, I had my first bit of coursework due in yesterday, a particularly nasty statistics worksheet. The night before it was to be handed in I had got it all finished, printed off, done and dusted. My first early night in ages, I was actually in bed by 11 which is unheard of.
12:30 a.m., I get a phone call. It is Melissa, a girl on my course, who I collaborated with on the work. She tells me there was an error in our data, meaning all our calculations were wrong. So, at one in the morning I had no choice but to rewrite the bloody thing from scratch, redoing all my calculations, graphs and tables. I got to bed some time after 4 a.m., grabbed a bit of sleep, and handed it in at ten the next morning.
Sunday, 18 October 2009
Tuesday, 13 October 2009
The topic this week focused on the development of the brain in the womb, with a detailed explanation of how a small cluster of cells slowly divides to form the beginnings of the spine and the eyes, and gradually folds over on itself to take shape as the infant brain. This was fascinating, and I was really impressed by Marty’s knowledge, with frequent deviations to discuss the brains of other mammals and assorted creatures of all different sizes.
He then went on to discuss the visual system, and how surprisingly complex it is. We have no idea quite how much our brain needs to do in order to make sense of the world around us. For example, what we perceive as a single field of vision is actually processed in several different parts of the brain, such as an area for the periphery, an area for our immediate focus etc. These different maps are then seamlessly patched together in order that they be perceived as one single image. This on top of flipping the image (the lens projects it onto the retina upside-down), and creating a 3D image of the world via depth perception, all happening simultaneously without us knowing it. He also discussed the limitations of our knowledge of just how this is done, with particular reference to objects moving across our field of vision.
We had an extended lecture with Marty, 3 hours rather than the usual 2. He actually managed to maintain my full and undivided attention for almost the full 3 hours, until that is he started talking about areas of the visual cortex known as (I kid you not) ‘blobs’. This was about the point when both my interest and understanding trailed off, the final straw being the point that the areas between the ‘blobs’ are known as ‘interblobs’. Frankly if scientists can’t be bothered to name these things properly, I can’t be bothered to understand.
The reading Marty set us this week is much more accessible too. One bit I found particularly interesting is how the brain of a tennis player will come to perceive his or her racquet as an extension of their arm, expanding its mental map of their immediate surroundings to compensate for this increased ‘body shape’.
This follows research by Iriki et al. (1996) on monkeys using sticks as primitive tools, and the science is explained thus:
“The visual receptive fields expand when the monkey uses the rake as an extension of its hand, while the somatosensory receptive fields are unchanged. This is interpreted as a change in the body image: The enlargement of the visual receptive field reflects the neural correlate of a hand representation that now incorporates the tool. The visual receptive fields return to their original size within a few minutes after tool use is discontinued. They do not expand at all if the monkey simply holds the rake without intending to use it. These rapid changes in visual receptive field size indicate that the neural connections that allow for the expansion must be in place all along.”
(Colby, C.L. and M.E. Goldberg (1999) Space and attention in parietal cortex. Annual Review of Neurosciences 22:319-349).
Anyway, as challenging as this module is I feel it may end up being a very rewarding one, and perhaps even a personal favourite, as long as I can keep up with the reading material, and maintain the pretence of understanding it.
The afternoon talk was delivered by a guest lecturer who gave a talk on autism and Williams syndrome, both examples of what can happen when certain parts of the brain don’t function as they should. Frustratingly, all she did was read out her lecture slides without any elaboration, meaning she may as well have just emailed us all and we could have read the bloody things in 20 minutes. It didn’t help that she pitched the thing at GCSE to A-level standard, not far above the quality of science you would expect on ‘Loose Women’. And yes, I am fully aware that last week I was complaining about things being too hard, only to complain this week that it is too easy.
But then I’m a fussy little so-and-so.
Friday, 9 October 2009
Now, much had been made by the lecture staff about the wide variety of backgrounds and experiences we, the students, had prior to the course. For some this is their second or third masters, but for the majority it is their first postgraduate qualification. Many, like me, graduated with a BSc or BA in Psychology, but there is also a wealth of other talent on board, from computer science students to those who studied linguistics. Some studied neuroscience, and others had a more biomedical background.
Therefore one would hope that the lecture staff would really start with the basics, and build up to more complex and specialist information. Sadly not. My main complaint about Tuesday morning is that Marty Sereno, who teaches 'Structure and Measurement of the Brain' (which encompasses Neuroanatomy, Neurophysiology, and Neuroimaging Physics), seemed to assume we were all already familiar with pretty advanced maths, physics and chemistry concepts, and made little effort to explain much of the terminology he employed. For poor old me, who hasn't done any maths or real science in the last decade (not since the last century, in fact), this was a bit of a problem.
This was exacerbated by the fact that, rather than starting at the beginning, Marty seemed to deliver the points of his lecture in a bizarre mixed up order of concepts, often explaining how a process works far before explaining what the process actually is and does. Passionate he certainly is, and his knowledge seemed superb. But when I asked a question (basically 'what on earth are you talking about') he merely stated he would come back to my question, and then promptly forgot.
Luckily, when I went home and began to plough through the reading, things started to make sense. I now have a basic understanding of the signalling processes employed by neurons (or brain cells), which is far too dull to even bother explaining here, but in very simple terms it has to do with flipping the balance of positively and negatively charged ions inside and outside the cell membrane to create a sudden electrical charge which then fires along to the next brain cell to convey the signal. Simple, and it only took a few hours of reading.
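For the curious, the gist can be caricatured in a few lines of code: charge builds up with each input, leaks away over time, and when it crosses a threshold the cell ‘fires’ a spike and resets. This is the classic ‘integrate-and-fire’ cartoon, nothing like the real ion-channel biophysics in the reading, but it captures the all-or-nothing firing:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Toy neuron: membrane 'voltage' accumulates input and leaks each
    step; crossing the threshold produces a spike and a reset.
    (Cartoon model only, not biophysically realistic.)"""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x          # leak a little, then add the new input
        if v >= threshold:
            spikes.append(1)      # fire...
            v = reset             # ...and reset the voltage
        else:
            spikes.append(0)
    return spikes

print(integrate_and_fire([0.4, 0.4, 0.4, 0.0, 1.2]))   # -> [0, 0, 1, 0, 1]
```

Three weak inputs in a row are enough to push the cell over threshold, while a single strong input fires it immediately: the all-or-nothing behaviour that makes a spike a clean signal to the next cell.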
The rest of Tuesday was pretty much administrative stuff, learning about the departmental intranet and IT services and a little information about our major project for the course. We also discussed applying for PhD funding, which needs to be done very soon indeed. This presents me with a bit of a problem, as a large part of my reason for doing this course is that I don't yet know what field I want to go into for my PhD, and so was hoping that the masters would give me extra knowledge of all aspects of neuroscience. So it came as a bit of a shock to learn that I would need to apply for funding within the first month of the course. Arse.
By the time my second day of lectures came around, I was still worrying about my experience of Monday’s class. Surely it couldn't all be this hard? If week one was going to be that complex then what the hell was week ten, twenty or fifty going to be like?
Luckily, I was pleasantly surprised by Thursday, as much of it was very basic revision of the statistical concepts I had learnt at undergraduate level. I would say I was already familiar with about 90% of what was discussed in our first 'Advanced Quantitative Methods' class, and looking over the timetable for the coming months most of this module does seem to build on concepts with which I am already familiar. I had been dreading statistics, but I left the class feeling very, very relieved.
Which brings me on to my final class of the week, 'Lesion approaches'. This class covers what we can learn about the brain by examining what effect is observed when a specific area of the brain is damaged, and the impairments it causes. I have a feeling this may become a personal favourite of mine, and although week one was very basic and introductory in tone, it did not disappoint.
So there you have it, week one. Sorry there wasn't much to report, but due to the nature of these things this first week was mostly spent getting to know the staff, finding out where we needed to be for each class and discussing what we will be learning over the next 12 months.
I would imagine that as of Monday, the exciting stuff starts! But don't worry, I will be here to keep you posted!