Touchscreen Dreams: A Reply to Michael R. Hunsaker

As some of you know, my colleague Lisa Saksida and I have been working together on cognitive testing using touchscreens for over 20 years. As you can imagine, along the way we’ve met with some serious scepticism (which is of course natural and healthy; this is science after all) and plenty of challenges that have sometimes taken years to surmount. Developing and validating new apparatus and tasks is not easy! But it’s fun.

So I found it very rewarding to read Michael R. Hunsaker’s recent blog — triggered by our publication of three touchscreen protocol papers — in which he is very positive indeed about the touchscreen approach. In his blog he touches on many of the reasons we think the approach is so promising, for example its high throughput, its reproducibility, and the ability to test many aspects of cognition in the same setting, increasing comparability across different tests. I feel the need to write a reply to his lovely piece, but what can I possibly add? I guess there are a couple of things I can say …

First: The more the merrier!

Hunsaker’s right; we would love to see more people using touchscreens. The larger the database that accumulates, the more powerful the method becomes. But more than that, we’d love for people to develop their own new tasks, to add to the touchscreen toolkit. Some have already taken up this challenge. For example, Yogita Chudasama’s lab at McGill has developed delay- and probability-discounting tasks. Eva deRosa’s lab in Toronto has developed a visual search task that capitalizes on the ability to display multiple stimuli in multiple locations on the screen, something you just can’t do with any other method. Indeed, the ability to manipulate stimuli opens up myriad possibilities for cognitive testing. For example, stimuli can be ‘morphed’ together (they’re using this method at J&J for drug testing), allowing tests in which cue ambiguity can be parametrically manipulated. The possibilities for new tasks are almost endless.

Another ongoing development is the marriage of touchscreens with methods like electrophysiology and optogenetics. I know of at least three labs currently working in this area. It’s a natural progression for a computerised apparatus like the touchscreens.

Hunsaker himself suggests touchscreen testing in the home cage. Perhaps that is the future of touchscreen testing: one can imagine a day when animals test themselves, the experimenter relaxing in their office as they watch the data stream in. Some companies are already advertising apparatus to achieve exactly that.

For our part, the aim has always been improved translation, and our fantasy is that eventually, pools of chilly water and paper-and-pencil tests will be replaced with parallel touchscreen batteries in which every rodent task has a human analogue. Our preliminary work in this area has been promising. For example, anticholinesterase treatment ameliorates attentional impairments in a mouse model of AD, an effect which has been shown in the identical touchscreen task in humans with AD. More recently, mice with deletion of the schizophrenia-related gene dlg2 were impaired on a touchscreen object-location learning task, and four humans with deletion of the same gene (three of whom have been diagnosed with schizophrenia, one of whom has not) were all unable to learn exactly the same task we gave to the mice.

So, all good! But …

Second: There is a hell of a lot more work to do and it ain’t easy

Although we already have over a dozen tasks working well in the touchscreen, the full translational ‘vision’ is going to take an incredible amount of effort to achieve. To take just the goal of parallel rodent-human batteries, it is not enough to have tasks for the rodent and human that look similar, even identical. That provides face validity, which is nice, but what we really want is neurocognitive validity; we want to know that the same brain mechanisms are engaged during a task. This requires neuropsychological studies, imaging … and that is only validation at the level of brain structures; ideally we’d like pharmacological validation too, and eventually predictive validity showing that treatments that work in the preclinical model on task X work in the clinic on task X. Of course this requirement for validation is not specific to touchscreens; indeed few behavioural tasks of any kind can boast this extent of validation. But paradoxically, the striking face validity of the touchscreen method actually serves to bring the issue of higher levels of validity into sharper relief, thus raising the bar to levels that are higher than for other, less face-valid methods. And frankly, so it should.

So it’s going to take a lot of work. But did I mention it’s fun?

Elephant Talk

And now, a post about … music. Where to start? How about Elephant Talk by King Crimson. Why not.

I could watch — and have watched — this video dozens of times. Sure, it’s “prog”, at least that’s how it gets labelled. But that term, with its negative connotations of bombast, fantasy-lit lyrics and 20-minute solos (not that I have anything against any of those things), just doesn’t do this justice. To be honest, I don’t know what this is. It probably needs a genre of its own. It has elements of rock and roll, world music, electronica. It has an anything-goes unhinged ethos, but at the same time is mathematically precise.

I watch this video over and over and keep finding something new. The song begins with some crazy sound coming from some crazy instrument you may never have seen before. It’s called a Chapman Stick, played by Tony Levin. There is no bass guitar in the song. That line playing on that instrument over and over would be enough for me all on its own.

Then there’s the drummer: Bill Bruford, one of my favourites of all time; played with Yes, Genesis, his own fusion band Bruford, and now plays jazz. Here he’s playing a kit made up of rototoms and octobans, with no ride cymbal. The repetitive rudiments on the rototoms and octobans combine with the Stick to produce an almost African feel.

But it’s the guitarists who are even more original – and complete opposites, almost comic foils for each other. First is Robert Fripp (he founded King Crimson in 1968), stage left, with his skinny tie, sensible dark suit and pocket protector (OK, not really, but he might as well have one) and perfect technique, playing repetitive lines with the precision of a sequencer. If you’ve followed this guy you’ll know he is usually pretty dour and serious, but here he is so pig-in-shit happy about what’s going on around him that he actually smiles – twice!

Adrian Belew (Bowie, Zappa, Talking Heads, Nine Inch Nails) is in the big hot-pink suit. He’s a drummer first and a guitarist second, but his guitar playing has to be some of the most bizarre and creative ever seen. He plays above the nut. He taps frantic messes of wrong notes out on the fretboard. He specialises in animal sounds (yes, that’s what I said) and makes his own Heath Robinson-style effects devices to do so. Here he has a mysterious silver box attached to his mike stand with which he makes elephant sounds. As you do.

After a couple of verses, Fripp and Belew trade solos. Fripp plays his guitar through a synth. Have you ever heard a guitar sound like that? No, you haven’t. Belew counters with whammy-bar madness and, of course, elephant sounds.

Add to all of this Adrian Belew’s Talking Heads-like vocals and lyrics which work through an alphabet of synonyms of “talk”, ending on ‘E’ for Elephant Talk, and my musical nerd-bliss is complete.

Hope you like it! Thanks for listening 🙂

WANT MORE? Check out Robert Fripp’s “Frippertronics”: