My interest in neuroscience began when I read Oliver Sacks's "The Man Who Mistook His Wife for a Hat" near the end of secondary school. I was captivated not only by the many ways processes in the brain can go wrong, but by the bizarre manner in which they can do so. From that point on I was determined to study neuroscience, although it wasn't until my second year at Cardiff University that I settled on the specific field I wanted to pursue. A series of lectures on memory, and the mechanisms by which it is believed to be stored, introduced me to the concept of the engram, which fascinated me; how the connections between neurons encode the complex memories we experience is a question that still eludes the field.
I went on to complete a year-long placement with Professor John Aggleton studying memory and attention in rats, looking at the anatomical basis of these functions and how different brain areas may coordinate to support decision making. Although the project was deeply interesting, I realised that I wanted to study memory at a deeper level and examine how neuronal interactions define the experiences we encode. This led me to Dr Andrew Lin and my current project, which was advertised as a multifaceted approach: examining encoded representations at the single-neuron level, studying the effect of noise on interactions between neurons, and using computational modelling to explore how experimentally identified changes may contribute to the formation of representations in Drosophila. The relative simplicity of the fly system excited me, as I believed it would be an effective way to investigate the core mechanisms at play in the formation of simple representations, mechanisms that could be applicable to more complex systems.
Many learning networks demonstrate analogous features, suggesting there may be something 'optimal' about the broad architecture, cellular morphologies and forms of synaptic plasticity used for different functions in these networks across phyla. The Drosophila mushroom body utilises a three-layered expand-converge architecture, of which we will investigate two aspects. The first is the homeostatic mechanisms used by odour-encoding Kenyon cells in the expanding part of this network. Neurocomputational models suggest that the average level of activity should be consistent across Kenyon cells, and we aim to identify the internal mechanisms by which they achieve this. Specifically, we hypothesise that Kenyon cells adapt their dendritic morphology to maintain a consistent level of activity. We will label single Kenyon cells with GFP, artificially increase their activity using the heat-sensitive TrpA channel, and quantify changes in dendrite length and in the size and number of claws. This will be supported by modelling to determine the effect that any identified changes could have on Kenyon cell activity.
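The intuition behind this hypothesis can be illustrated with a minimal toy model (the thresholds, input statistics and parameter values below are illustrative assumptions, not measured quantities, and this is a sketch rather than the planned analysis): a Kenyon cell that grows or prunes claws until its odour response rate matches a fixed target will settle on fewer inputs when its excitability is artificially raised, mirroring the structural compensation we aim to quantify.

```python
import numpy as np

rng = np.random.default_rng(1)

THETA = 12.0    # KC firing threshold (arbitrary units; assumed)
TARGET = 0.05   # target response rate: fraction of odours that drive firing
MAX_CLAWS = 50

def response_rate(n_claws, gain=1.0, n_odours=2000):
    """Fraction of simulated odours driving the KC above threshold.
    Each claw receives an independent, exponentially distributed input."""
    drive = gain * rng.exponential(1.0, size=(n_odours, n_claws)).sum(axis=1)
    return (drive > THETA).mean()

def homeostatic_claws(gain, n_claws=6, steps=200):
    """Grow or prune one claw at a time until activity matches the target."""
    for _ in range(steps):
        r = response_rate(n_claws, gain)
        if r > TARGET and n_claws > 1:
            n_claws -= 1    # too active: prune a claw
        elif r < TARGET and n_claws < MAX_CLAWS:
            n_claws += 1    # too quiet: grow a claw
    return n_claws

# a KC with doubled excitability (loosely analogous to TrpA-driven
# overactivation) settles on markedly fewer claws than a normal one
print(homeostatic_claws(gain=1.0), homeostatic_claws(gain=2.0))
```

In this sketch the overactivated cell compensates purely structurally, which is the kind of dendritic change the GFP labelling experiments would look for.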
The second aspect is the nature of the noise found at Kenyon cell-mushroom body output neuron (KC-MBON) synapses on the converging side of this network. Associative learning in Drosophila occurs via depression (weakening) of the synapses associated with the 'incorrect' action, but it is not clear what benefit depression has over potentiation in this role. We hypothesise that noise at KC-MBON synapses is multiplicative, and that depression is therefore preferable to potentiation: by lowering synaptic weights, it reduces the impact that overlap (Kenyon cells activated by both aversive and approach-associated odours) has on the overall probability that a fly will approach or avoid an odour. We will test this using dual-colour calcium imaging at KC-MBON synapses, measuring the variability of responses to different stimuli over successive trials.
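The logic of this hypothesis can be sketched numerically (again a toy model, with made-up pattern sizes, overlap and noise scale rather than real data): if the trial-to-trial noise on each synapse scales with its weight, then depressing the synapses driven by the punished odour shrinks both the mean response and its noise, whereas potentiating the synapses driven by the safe odour achieves the same separation in mean responses at the cost of more noise, much of it contributed by the overlapping Kenyon cells.

```python
import numpy as np

rng = np.random.default_rng(0)

N_KC = 100
SIGMA = 0.3     # multiplicative noise: per-synapse std is SIGMA * weight

# binary KC activity patterns for two odours sharing 20 KCs (the overlap)
odour_a = np.zeros(N_KC); odour_a[:50] = 1     # punished odour
odour_b = np.zeros(N_KC); odour_b[30:80] = 1   # safe odour

def mbon_response(w, odour, trials=20000):
    """Summed KC->MBON drive with weight-proportional (multiplicative) noise."""
    noise = 1 + SIGMA * rng.standard_normal((trials, N_KC))
    return (w * noise * odour).sum(axis=1)

w = np.ones(N_KC)
w_dep = w.copy(); w_dep[odour_a == 1] *= 0.5   # depress punished-odour synapses
w_pot = w.copy(); w_pot[odour_b == 1] *= 1.5   # potentiate safe-odour synapses

for name, wts in [("depression", w_dep), ("potentiation", w_pot)]:
    gap = mbon_response(wts, odour_b) - mbon_response(wts, odour_a)
    print(f"{name}: mean gap {gap.mean():.1f}, trial-to-trial std {gap.std():.2f}")
```

With these (assumed) numbers both plasticity rules produce the same mean separation between the two odours' responses, but depression yields a clearly smaller trial-to-trial spread, so the fly's approach/avoid decision is less corrupted by noise from the overlapping cells.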
This two-pronged approach could shed light on key mechanisms underlying homeostatic plasticity across learning networks and provide important insight into the distinct roles of depression and potentiation in learning.