Juluka

One of my favorite bands from ‘back in the day’, Juluka has been described as “Jethro Tull meets Ladysmith Black Mambazo”. If that doesn’t catch your interest, Juluka was a mixed-race band playing in South Africa at the time of apartheid. This was enough – combined with their political lyrics, no doubt – to get them repeatedly arrested and beaten by police — yet they kept performing. Founding member Johnny Clegg came to Africa from England as a boy, trained with African musicians and mastered a form of Zulu dancing, featured in the video. He is known as Le Zoulou Blanc (“The White Zulu”). Juluka was disbanded in 1986, when co-founder Sipho Mchunu returned home to his village in order to look after the family cattle. Hope you like it.

Why Are You Anonymous?

[Annotations in BOLD were made following comments on twitter, and below]

Many tweeters and bloggers are anonymous. [Several people have pointed out that what I really mean is pseudonymous, e.g., Acclimatrix below. This is correct! And I am not talking about, e.g., anonymous manuscript review.] An exchange today (18 November 2013) on twitter between @vinwalsh (who is Professor Vincent Walsh, UCL), and @neurobollocks (who is, erm, neurobollocks), regarding the appropriateness (or not) of anonymity, prompted me to write my 2 pence about the subject.

This is because I started thinking about this issue when one anonymous tweeter – whose tweets I admire – let out enough information for me to know that they were in my circle of friends – so they know me, and who I am on the internet and in real life. But I cannot know who they are. Furthermore, they were planning to be at (at least) two parties/socials at the Society for Neuroscience conference, where I was planning to be — yet I didn’t know who they were! I guess I felt a bit frustrated, or at the very least, that it was a lost opportunity.

The feeling was compounded when I went to the #SFNbanter party, and the first thing that happened was someone told me to take my name badge off. So, no real-name badges allowed, and no badges with twitter handles either. In other words, it was very difficult to meet any of the people who I know on twitter – anonymous or otherwise. [But importantly, it turns out there was actually NO official no-badge policy at #SFNbanter! And I want it to be clear: I did meet some people, and had a great time.]

So I did what I always do when I have a question about the internet: I talked to Baxter (that’s Professor Mark Baxter, Mount Sinai, @markgbaxter). Profs like me, Baxter (and, I am guessing, Vince) are at a point where we care a lot less than we used to about what people think of us. It is one of the great joys of getting older. We’re at a stage at which we can afford to say what we think (well, more or less).

But undergrads, PhD students, even post-docs, are battling fierce competition for their next position every few years. They can’t afford to make enemies. So I can see why they might feel the need to be anonymous, in order to be healthily sceptical and constructively critical, as every scientist should be … but with (relative) impunity.

The downside to that approach, however, is this: I’ve encountered young (I presume) tweeters/bloggers in my area whose thoughts, writing and attitude impress me greatly. If I had a position going, or knew of someone with such a position going, I’d contact or recommend them immediately. Problem is … they’re anonymous. Another missed opportunity.

So in a way, with anonymity, something is definitely lost. I don’t have a wide-ranging solution for this problem, of course. And I can’t change people; all I can do is ask what I might be able to do differently. What I come up with is probably pie-in-the-sky, but maybe what us PIs need to do is work on our egos such that we don’t make it personal, and hold grudges, when someone merely disagrees with us. As long as they’re not rude about it. Then maybe young people won’t be so scared to be themselves.

I did say it was pie-in-the-sky.

So to summarise, I guess I understand why a young scientist might feel the need to be anonymous.

But, if you’re a Prof using a pseudonym – dude, sort yourself out. [By this I meant ‘Prof’ as it is used in the U.K., to mean the final career stage (i.e., not N.A. Assistant or Associate Prof). So this a mild jokey-poke at my Prof colleagues. It’s been pointed out, though, that there are very good reasons anyone might choose to be pseudonymous — see, e.g., Zen Faulkes’ comment below.]

The Appendix Theory of Neurogenesis

We’ve all had “theories”, haven’t we. That’s “theory”, not theory. You know what I mean by “theories”: those not-necessarily-at-all-well-formulated, probably untestable crap ideas that pass briefly through your mind when staring out the window of a bus, or repeat in your mind when you’re trying to sleep. Thankfully for the world, they usually end up going nowhere.

A “theory” of mine that I once considered for an appropriate time of about 2 seconds is The Appendix Theory of (Adult Hippocampal) Neurogenesis. If you don’t know, in the 90s it was confirmed that — contrary to Cajal’s (somewhat disappointing) assumption that brain cells are never born but just die — new brain cells are born, and one of the places that happens is the dentate gyrus of the hippocampus, a structure neuroscientists automatically and immutably pair-associate with memory function. Once AHN was discovered, the critical question for those of us interested in function was: what, specifically, did these new neurons do for us? What is AHN for?

You’ve already guessed the idea behind Appendix Theory. It is that AHN is vestigial; the young neurons thus formed add nothing to our cognitive abilities. Indeed, these neurons are weird; they are over-responsive and highly plastic and may contribute little to the DG circuitry, other than maybe some noise.

Appendix Theory was immediately challenged and apparently proved predictably daft by experiments that found that knocking down AHN by irradiation or other methods led to impairments in spatial memory. But not everyone got this result: in some experiments knock-down had no effect, a finding consistent with Appendix Theory. A possible resolution to this ambiguity came when it was reported that AHN knock-down impaired the discrimination of similar locations – thought to require the formation of non-overlapping representations through a computational process called ‘pattern separation’ – without affecting other aspects of spatial cognition (Clelland et al., 2009). The previous inconsistencies could now be explained, because the load on pattern separation in these tasks was never controlled – in some studies it might have been high, and in some low (e.g., ambiguous cues in one water maze room, but not another). Appendix Theory received a number of subsequent coffin-nails from other groups reporting that AHN knock-down impairs the discrimination of similar locations/contexts, and indeed that increasing AHN can enhance this function. Support for this function of AHN has now come from several species, using a variety of behavioural paradigms, and from methods including lesions, patient populations, and neuroimaging. Appendix Theory seems to be dead, and right or wrong, a function for AHN in pattern separation has become the assumption.

A new paper, however, could yet breathe new life into Appendix Theory. Groves et al. (2013) used a novel method to knock down AHN, and found no impairments on any spatial tasks. How could all of those studies, using all those species, paradigms and methodologies, be wrong? It’s possible. For example, methods of knocking down AHN, like any lesion method, could have off-target effects. In Clelland et al., what we really should have done is include a group with selective lesions of the mature neurons in the DG — but I have no idea how that could be done! (Clever suggestions below the line please.)

The trouble is, most of the tests in this new paper – standard spatial tests like water maze and fear conditioning – are irrelevant because, as described above, plenty of people have shown little effect of AHN knock-down, and these past inconsistent results appear to have been resolved by reassessment of those old data in terms of pattern separation. However, Groves et al. did include a test of pattern separation that had been used previously, a radial maze task similar to that used in Clelland et al. Unfortunately, for some reason the small separation condition wasn’t more difficult than the large one. In another test, the condition that was meant to tax pattern separation elicited significantly fewer errors from the rats. So it is hard to be convinced the rats in this study were really challenged appropriately. It’s worth noting that in some of the anti-Appendix experiments (including our own), the two conditions were equally difficult for controls. The difference is that those studies found impairments. When claiming no effect, however, it has to be clear the animals were adequately challenged.

Indeed, when claiming a negative, a minimum requirement is to show that the behavioural tests used are sensitive to comparable manipulations. The authors argue their tests are sensitive because full hippocampal lesions have an effect. (But in mice, not rats.) I couldn’t find their pattern separation test in any of their other papers, but let’s assume it’s sensitive to hippocampus lesions. Trouble is, a hippocampus lesion is hardly comparable to a lesion of 10% of the cells in the dentate gyrus. Before deciding that all those studies described above are wrong, and embracing Appendix Theory, we need to see that their task is sensitive enough to pick up the effects of a comparably sized lesion, ideally in the same species.
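To put some (entirely made-up) numbers on that worry: suppose, purely for illustration, that the behavioural effect size shrinks with the extent of the lesion. A quick back-of-envelope power calculation then shows why a task that detects a full hippocampal lesion may still be hopeless at detecting a lesion of ~10% of DG cells. The effect sizes and group size below are my hypothetical values, not anything from the paper.

```python
# Back-of-envelope sketch (my toy numbers, nothing from the paper):
# if the behavioural effect shrinks with lesion extent, sensitivity to a
# full hippocampal lesion says little about sensitivity to a ~10% DG lesion.
# Power of a two-sided, two-sample test via the normal approximation.
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power to detect effect size d with n animals per group."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - noncentrality)

n = 10            # a typical behavioural group size
d_full = 1.5      # hypothetical large effect of a full hippocampal lesion
d_partial = 0.15  # hypothetical (much smaller) effect of a ~10% DG lesion

print(f"power vs. full lesion:    {power_two_sample(d_full, n):.2f}")
print(f"power vs. partial lesion: {power_two_sample(d_partial, n):.2f}")
```

With these (invented) numbers, the same task that reliably detects the full lesion has essentially no power against the partial one, which is exactly why "full lesions have an effect" does not establish sensitivity.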

One can also reasonably ask whether this new knock-down method was functionally effective at all; after all, the point of the paper is that no impairments were found. There is indeed some effect on the elevated plus maze but, as the authors very honestly pointed out, it wasn’t significant at an appropriate level of statistical rigour. To be honest, though, that would be a bit of a picky criticism, for at least two reasons. First, if Appendix Theory is correct, there is no prospect of a ‘positive functional control’, because according to Appendix Theory AHN doesn’t have any function! And second, the knock-down of neurogenesis was 98%, which is huge! It seems to me this is the real strength of the study: a new method for creating a very substantial knock-down of AHN.

I should also mention that the authors add to their study a meta-analysis of studies of neurogenesis knock-down. That could be really useful. Trouble is, as far as I can tell the analysis included only two, seemingly arbitrarily chosen, papers testing pattern separation – and both of these used contextual fear conditioning, which is widely regarded as the most problematic paradigm (no manipulation of the load on pattern separation). Furthermore, they failed to consider any study testing the effects of enhancement of neurogenesis, which provides particularly strong evidence for functional efficacy, especially in studies in which the increase in neurogenesis was apparently very selective. In any case, such an analysis does not capture the persuasive power of the replication of a finding using very dissimilar methodologies and species, as is the case for AHN and pattern separation.

To conclude, this impressive new method for achieving a very substantial knock-down of neurogenesis could prove very valuable for studies of the functional role of AHN, and the molecular events associated with it. (Although it should be noted that this method has known off-target effects of its own: it affects not just neurons, but glia.) For now, though, it is clearly not quite time to embrace Appendix Theory.

Respect for Devo

Most people I know think of Devo as a comedy band who wore flowerpots on their heads. But I think they deserve much more respect. Devo were seriously innovative catalysts for the transition from punk to new wave and beyond. Furthermore, unlike many bands, they were driven by a kind of philosophy: De-evolution, from which they took their name. The idea came from a pamphlet – “Jocko Homo Heavenbound” (see figure below) – describing how human evolution has kicked into reverse. Read some recent newspaper stories and it’s hard to disagree. You might know “Whip It” … but try this earlier one. Remember, this was made in 1972. Pre-flowerpots.

Want more? They’ve re-formed. Find out more here.

And by the way, they’re not flower pots; they’re energy domes.

[Figure: the “Jocko Homo Heavenbound” pamphlet]

Touchscreen Dreams: A Reply to Michael R. Hunsaker

As some of you know, I and my colleague Lisa Saksida have been working together on cognitive testing using touchscreens for over 20 years. As you can imagine, along the way we’ve met with some serious scepticism (which is of course natural and healthy; this is science after all) and plenty of challenges that have sometimes taken years to surmount. Developing and validating new apparatus and tasks is not easy! But it’s fun.

So I found it very rewarding to read Michael R. Hunsaker’s recent blog — triggered by our publication of 3 touchscreen protocol papers — in which he is very positive indeed about the touchscreen approach. In his blog he touches on many of the reasons we think the approach is so promising, for example its high throughput, reproducibility, and the ability to test many aspects of cognition in the same setting, increasing comparability across different tests. I feel the need to write a reply to his lovely piece, but what can I possibly add? I guess there are a couple of things I can say …

First: The more the merrier!

Hunsaker’s right; we would love to see more people using touchscreens. The larger the database that accumulates, the more powerful the method becomes. But more than that, we’d love for people to develop their own new tasks, to add to the touchscreen toolkit. Some have already taken up this challenge. For example, Yogita Chudasama’s lab at McGill has developed delay- and probability-discounting tasks. Eva deRosa’s lab in Toronto has developed a visual search task that capitalizes on the ability to display multiple stimuli in multiple locations on the screen, something you just can’t do with any other method. Indeed, the ability to manipulate stimuli opens up myriad possibilities for cognitive testing. For example, stimuli can be ‘morphed’ together – they’re using this method at J&J for drug testing – allowing tests in which cue ambiguity can be parametrically manipulated. The possibilities for new tasks are almost endless.

Another ongoing development is the marriage of touchscreens with methods like electrophysiology and optogenetics. I know of at least three labs currently working in this area. It’s a natural progression for a computerised apparatus like the touchscreens.

Hunsaker himself suggests touchscreen testing in the home cage. Perhaps that is the future of touchscreen testing: one can imagine a day when animals test themselves, the experimenter relaxing in their office as they watch the data stream in. Some companies are already advertising apparatus to achieve exactly that.

For our part, the aim has always been improved translation, and our fantasy is that eventually, pools of chilly water and paper-and-pencil tests will be replaced with parallel touchscreen batteries in which every rodent task has a human analogue. Our preliminary work in this area has been promising. For example, anticholinesterase treatment ameliorates attentional impairments in a mouse model of AD, an effect which has been shown in the identical touchscreen task in humans with AD. More recently, mice with deletion of the schizophrenia-related gene dlg2 were impaired on a touchscreen object-location learning task, and four humans with deletion of the same gene (three of whom have been diagnosed with schizophrenia, one of whom has not) were all unable to learn exactly the same task we gave to the mice.

So, all good! But …

Second: There is a hell of a lot more work to do and it ain’t easy

Although we already have over a dozen tasks working well in the touchscreen, the full translational ‘vision’ is going to take an incredible amount of effort to achieve. To take just the goal of parallel rodent-human batteries, it is not enough to have tasks for the rodent and human that look similar, even identical. That provides face validity, which is nice, but what we really want is neurocognitive validity; we want to know that the same brain mechanisms are engaged during a task. This requires neuropsychological studies, imaging … and that is only validation at the level of brain structures; ideally we’d like pharmacological validation too, and eventually predictive validity, showing that treatments that work in the preclinical model on task X work in the clinic on task X. Of course this requirement for validation is not specific to touchscreens; indeed few behavioural tasks of any kind can boast this extent of validation. But paradoxically, the striking face validity of the touchscreen method actually serves to bring the issue of higher levels of validity into sharper relief, thus raising the bar to levels that are higher than for other, less face-valid methods. And frankly, so it should.

So it’s going to take a lot of work. But did I mention it’s fun?

Elephant Talk

And now, a post about … music. Where to start? How about Elephant Talk by King Crimson. Why not.

I could watch — and have watched — this video dozens of times. Sure, it’s “prog”, at least that’s how it gets labelled. But that term, with its negative connotations of bombast, fantasy-lit lyrics and 20-minute solos (not that I have anything against any of those things), just doesn’t do this justice. To be honest, I don’t know what this is. It probably needs a genre of its own. It has elements of rock and roll, world music, electronica. It has an anything-goes unhinged ethos, but at the same time is mathematically precise.

I watch this video over and over and keep finding something new. The song begins with some crazy sound coming from some crazy instrument you may never have seen before. It’s called a Chapman Stick, played by Tony Levin. There is no bass guitar in the song. That line playing on that instrument over and over would be enough for me all on its own.

Then there’s the drummer: Bill Bruford, one of my favourites of all time; he played with Yes, Genesis, and his own fusion band Bruford, and now plays jazz. Here he’s playing a kit made up of rototoms and octobans, with no ride cymbal. The repetitive rudiments on the rototoms and octobans combine with the Stick to produce an almost African feel.

But it’s the guitarists that are even more original – and complete opposites, almost comic foils for each other. First is Robert Fripp (he founded King Crimson in 1968), stage left, with his skinny tie, sensible dark suit and pocket protector (OK not really but he might as well have one) and perfect technique, playing repetitive lines with the precision of a sequencer. If you’ve followed this guy you’ll know he is usually pretty dour and serious, but here he is so pig-in-shit happy about what’s going on around him that he actually smiles – twice!

Adrian Belew (Bowie, Zappa, Talking Heads, Nine Inch Nails) is in the big hot-pink suit. He’s a drummer first and a guitarist second, but his guitar playing has to be some of the most bizarre and creative ever seen. He plays above the nut. He taps frantic messes of wrong notes out on the fretboard. He specialises in animal sounds (yes, that’s what I said) and makes his own Heath Robinson-style effects devices to do so. Here he has a mysterious silver box attached to his mike stand with which he makes elephant sounds. As you do.

After a couple of verses, Fripp and Belew trade solos. Fripp plays his guitar through a synth. Have you ever heard a guitar sound like that? No, you haven’t. Belew counters with whammy-bar madness and, of course, elephant sounds.

Add to all of this Adrian Belew’s Talking Heads-like vocals and lyrics which work through an alphabet of synonyms of “talk”, ending on ‘E’ for Elephant Talk, and my musical nerd-bliss is complete.

Hope you like it! Thanks for listening 🙂

WANT MORE? Check out Robert Fripp’s “Frippertronics”:

On optogenetics

Go Baxter!

Mark Baxter

I’ve kind of wanted for a while to write something about methodology in behavioral/cognitive neuroscience (I want to reclaim “cognitive neuroscience” from being code for human fMRI, but that’s another post). Optogenetic technology – using light-activated ion channels or other proteins to manipulate neural activity – has swept many areas of neuroscience in the last few years as it has become widely distributed and successfully implemented in many laboratories. 

There is no doubt that the ability to manipulate neural activity with extremely high temporal and spatial resolution is revolutionary and promises many advances in understanding neural information processing. However, I think we’re at a point in this technology where all the exuberance about it is leading to a certain amount of mindlessness in experimental design and interpretation.

I was really delighted by reaction norm’s post about optogenetics and the dangers of oversimplifying what happens when you start to modify neurophysiology…


Pattern Separation: What’s The Problem?!

I can’t believe I am writing a blog. Well, one entry at least, we’ll see where it goes from here. Maybe I’ll write about my two Big Obsessions: Science and Music. We’ll see.

For now, over on twitter (@Timothy_Bussey) we’ve been having a nice conversation about “pattern separation”. As you may know this putative process/construct/computation is suddenly of great interest to neuroscientists in part because it has been associated with adult neurogenesis (which itself is the topic of what has become a bit of a research sub-industry). There is even a website devoted to pattern separation!

www.patternseparation.com

So what is pattern separation? I think the website provides as good a definition of it as any:

“The process of reducing interference among similar inputs by using non-overlapping representations.”

The classic example of the kind of interference pattern separation reduces involves car parking. If I ask you about something you did 3 days ago, you can probably give me a good answer. But if you park your car in the same multi-storey car park every day, and I ask you where you parked your car 3 days ago, it is exceedingly difficult. The memories of parking your car every day are so similar that they are difficult to discriminate, and become confused in memory. Pattern separation is a process that helps to reduce this confusion – we’d be a lot more confused about all sorts of memories if we didn’t have it!
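For the computationally minded, the definition above can be sketched in a few lines of code. This is purely my toy illustration (random expansion recoding plus sparse, winner-take-all-style activity is one classic way of thinking about what the DG might do, not a model from any particular paper): two highly similar input patterns, like two days of parking in the same car park, are projected into a much larger population, and only the most active units are kept, which reduces their overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two highly similar binary "input" patterns: they differ in only 10 of
# 100 units, so their active units overlap heavily (like two very similar
# memories of parking in the same car park).
n_in = 100
base = rng.random(n_in) < 0.5
a = base.copy()
b = base.copy()
flip = rng.choice(n_in, size=10, replace=False)
b[flip] = ~b[flip]

def overlap(x, y):
    """Fraction of active units shared between two binary patterns (Jaccard)."""
    return np.sum(x & y) / np.sum(x | y)

# Expansion recoding: project into a much larger population (the DG has far
# more granule cells than its inputs) and keep only the k most active units
# (sparse, winner-take-all-style coding).
n_out, k = 2000, 40
W = rng.standard_normal((n_out, n_in))

def separate(x):
    h = W @ x
    out = np.zeros(n_out, dtype=bool)
    out[np.argsort(h)[-k:]] = True  # only the k most active units fire
    return out

print(f"input overlap:  {overlap(a, b):.2f}")
print(f"output overlap: {overlap(separate(a), separate(b)):.2f}")
```

The output patterns overlap far less than the inputs do: the similar inputs have been mapped to (more) non-overlapping representations, which is exactly the definition quoted above.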

Seems straightforward enough. However in “the field” there seems to be considerable confusion about, amongst other things, how people “define” pattern separation, if indeed they do (see below), and how it is best studied experimentally.

I thought I’d write down my preliminary thoughts about this because actually — I don’t see any problem at all! From where I’m coming from, the study of pattern separation seems to me to be nothing new or out of the ordinary. So I am very surprised by the confusion. Let me try to explain.

I am a behavioural/cognitive neuroscientist; my degrees are in Psychology. In behavioural/cognitive neuroscience we have a basic paradigm for working. We postulate a putative process/construct/computation in the brain, e.g., working memory, attention, whatever. Then we devise tasks to try to capture that function, e.g., delayed response, target detection, whatever. We try to have parameters that we can manipulate, e.g., delay, duration of target. If, say, prefrontal cortex (PFC) damage leads to, say, a delay-dependent impairment in our delayed response task, we take this as evidence that the PFC is involved in working memory.

The behavioural pattern separation experiments — e.g., lesion the dentate gyrus, test on a putative test of pattern separation — are more of the same. As pattern separation putatively reduces the confusability (that is, increases the discriminability) of events, the parameter we manipulate is the discriminability of events. There is nothing new under the sun here.

So when, for example, Adam Santoro writes that people, including me (Clelland et al., 2009, Science), define pattern separation as

“the literal behavioral ability to discriminate related stimuli”

and proceeds to rail against such a ‘definition’, I have no idea what he is talking about!

To return to the examples above, people who do those kinds of experiments on, e.g., working memory or attention are not defining delayed response as working memory, or target detection as attention. Those are just tasks, and they are using those tasks as assays of those putative processes/constructs/computations. (Of course one can always argue whether or not these are the right tasks to tap the constructs of interest, but that is a completely different issue.)

Now, having said that, Santoro is right in that some do seem to offer “behavioural” or “psychological” definitions of pattern separation — e.g. Hunsaker & Kesner — but I don’t really get that. There is no need for some separate behavioural definition of pattern separation. There are just tasks that we use to try to tap that putative function. Is this just semantics? I don’t think so; I think it’s important because talking about “behavioural definitions” will just fuel people’s misconception that there is something fundamentally different needed when studying pattern separation. But there isn’t — you don’t need a behavioural definition of pattern separation any more than there is a behavioural definition of working memory or attention.

So, What’s the problem?!

There isn’t one.

Discuss … ?