Touchscreen Dreams: A Reply to Michael R. Hunsaker

As some of you know, my colleague Lisa Saksida and I have been working together on cognitive testing using touchscreens for over 20 years. As you can imagine, along the way we’ve met with some serious scepticism (which is of course natural and healthy; this is science after all) and plenty of challenges that have sometimes taken years to surmount. Developing and validating new apparatus and tasks is not easy! But it’s fun.

So I found it very rewarding to read Michael R. Hunsaker’s recent blog — triggered by our publication of three touchscreen protocol papers — in which he is very positive indeed about the touchscreen approach. In his blog he touches on many of the reasons we think the approach is so promising, for example its high throughput, its reproducibility, and the ability to test many aspects of cognition in the same setting, increasing comparability across different tests. I feel the need to write a reply to his lovely piece, but what can I possibly add? I guess there are a couple of things I can say …

First: The more the merrier!

Hunsaker’s right; we would love to see more people using touchscreens. The larger the database that accumulates, the more powerful the method becomes. But more than that, we’d love for people to develop their own new tasks, to add to the touchscreen toolkit. Some have already taken up this challenge. For example, Yogita Chudasama’s lab at McGill has developed delay- and probability-discounting tasks. Eva deRosa’s lab in Toronto has developed a visual search task that capitalizes on the ability to display multiple stimuli in multiple locations on the screen, something you just can’t do with any other method. Indeed, the ability to manipulate stimuli opens up myriad possibilities for cognitive testing. For example, stimuli can be ‘morphed’ together (a method being used at J&J for drug testing), allowing tests in which cue ambiguity can be parametrically manipulated. The possibilities for new tasks are almost endless.

Another ongoing development is the marriage of touchscreens with methods like electrophysiology and optogenetics. I know of at least three labs currently working in this area. It’s a natural progression for a computerised apparatus like the touchscreens.

Hunsaker himself suggests touchscreen testing in the home cage. Perhaps that is the future of touchscreen testing: one can imagine a day when animals test themselves, the experimenter relaxing in their office as they watch the data stream in. Some companies are already advertising apparatus to achieve exactly that.

For our part, the aim has always been improved translation, and our fantasy is that eventually, pools of chilly water and paper-and-pencil tests will be replaced with parallel touchscreen batteries in which every rodent task has a human analogue. Our preliminary work in this area has been promising. For example, anticholinesterase treatment ameliorates attentional impairments in a mouse model of AD, an effect which has been shown in the identical touchscreen task in humans with AD. More recently, mice with deletion of the schizophrenia-related gene dlg2 were impaired on a touchscreen object-location learning task, and four humans with deletion of the same gene (three of whom have been diagnosed with schizophrenia, one of whom has not) were all unable to learn exactly the same task we gave to the mice.

So, all good! But …

Second: There is a hell of a lot more work to do and it ain’t easy

Although we already have over a dozen tasks working well in the touchscreen, the full translational ‘vision’ is going to take an incredible amount of effort to achieve. To take just the goal of parallel rodent-human batteries: it is not enough to have tasks for the rodent and human that look similar, even identical. That provides face validity, which is nice, but what we really want is neurocognitive validity; we want to know that the same brain mechanisms are engaged during a task. This requires neuropsychological studies, imaging … and that is only validation at the level of brain structures; ideally we’d like pharmacological validation too, and eventually predictive validity, showing that treatments that work in the preclinical model on task X work in the clinic on task X. Of course this requirement for validation is not specific to touchscreens; indeed, few behavioural tasks of any kind can boast this extent of validation. But paradoxically, the striking face validity of the touchscreen method actually serves to bring the issue of higher levels of validity into sharper relief, thus raising the bar to levels that are higher than for other, less face-valid methods. And frankly, so it should.

So it’s going to take a lot of work. But did I mention it’s fun?


2 thoughts on “Touchscreen Dreams: A Reply to Michael R. Hunsaker”

  1. I love that you pointed out the difficulty with content/construct/neurocognitive validity. That was always my primary worry with operant conditioning in general and the touchscreens in particular; that is, because the tasks look similar they are treated as exactly the same thing without a second thought, and the tasks are assumed to be perfect homologues.

    Not that I am trying to engage in self-promotion, but I would love to get this technology working and to do the type of lesion-mapping analyses that Ray and I have been working on since early 2000. In fact, this is one of those experimental paradigms wherein I see a clear benefit to DREADD and optogenetic mapping experiments. The marked reduction in confounds relative to exploratory behavior testing (e.g. distal cues interfering with object recognition, lack of motivation, etc.) may actually serve to increase the validity of the optogenetic lesions.

    I did not get into this, but it is also nice with the touchscreens that I can use rigorous methods from quantitative psychology to analyze the data (e.g. change-point algorithms), which in my experience are more powerful than one-rat-one-datapoint experiments.
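    (For readers unfamiliar with the idea: a change-point analysis treats the trial-by-trial outcome sequence as data, rather than collapsing each animal to a single score, and estimates the trial at which performance shifts. The sketch below is a hypothetical, minimal least-squares version for illustration only — it is not the commenter’s actual method, and real change-point packages use more sophisticated statistics.)

```python
# Minimal change-point sketch (hypothetical illustration): given a 0/1
# trial-outcome sequence, find the split index that best divides it into
# a "pre-learning" and a "post-learning" segment, by minimising the
# summed squared error around each segment's mean success rate.

def find_change_point(outcomes):
    """Return the index i (1 <= i < n) giving the best two-segment split."""
    n = len(outcomes)
    best_i, best_err = 1, float("inf")
    for i in range(1, n):
        left, right = outcomes[:i], outcomes[i:]
        m_left = sum(left) / len(left)
        m_right = sum(right) / len(right)
        err = (sum((x - m_left) ** 2 for x in left)
               + sum((x - m_right) ** 2 for x in right))
        if err < best_err:
            best_i, best_err = i, err
    return best_i

# Example: an animal performing near chance that "gets it" around trial 10.
trials = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(find_change_point(trials))  # → 10
```

    Applied per subject, the estimated change points (e.g. trials-to-acquisition) can then be compared across groups, which preserves far more information than a single end-of-training score.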

  2. Yep, face validity is not enough. But I do think that the similarity of the tasks can provide a ‘leg up’ — neurocognitive validity is more likely to be achieved if the tasks are at least a little similar, as opposed to having nothing at all in common, which is often the case.

    >change point algorithms, etc

    You’ll have to teach me 🙂

