I had always felt that the criticism of reverse inference, usually uttered with the same contempt one would have for a bag of doo-doo, was overblown. I wouldn’t fault scientists for overinterpreting their data. If I doubted their conclusions, I could always look at their results and draw my own inferences. If I didn’t believe their results, I wouldn’t cite them in my papers. Good and valid conclusions stand the test of time, while false ones fade into obscurity and are eventually forgotten.
The Dog Project would not only be relying on reverse inference, it would depend on reverse inference of a dog’s brain as if it were a human’s. Interspecies reverse inference. I could already imagine what my colleagues would say about this.
Fortunately, Andrew and I had decided to stick with what we knew—the reward system. Our task of deciphering function in the dog brain was going to be a lot easier. Unlike the cortex, with its labyrinthine folds, the reward system belongs to the evolutionarily older reptilian part of the brain. The heart of the reward system is the caudate. Because it is so ancient, all mammals have a caudate, and lucky for us, it looks pretty much the same in dogs and humans.
While neuroscientists can quibble about reverse inference in the cortex, when we did an analysis of reverse inference in the caudate, we found that activity in this region is almost always associated with the expectation of something good. As long as we stuck to the caudate, we would be safe in interpreting activity in this part of the dog’s brain as being a signal of a positive feeling. Everything else we found would have to be interpreted with caution.
Even if we limited ourselves to simple questions of whether the dog had positive feelings based on caudate activation, we could still accomplish a lot with brain imaging. No longer would we be stuck interpreting dogs’ behavior based on tail wagging, which is an imperfect indicator of the emotional state of a dog. Dogs wag their tails when they’re happy, when they’re anxious, or when they’re unsure of what else to do. I still wanted to know if our dogs reciprocated our love for them in any way. And although love is a complicated human emotion, the positive aspects of it have been consistently associated with caudate activation.
The first experiment was a proof of concept. Before we could move on to complicated questions, like love, we first had to demonstrate that we could measure caudate activity in the dog. But that wouldn’t be enough. We would have to show that we could interpret that activity in terms of how much the dogs liked something. Because hot dogs are so much better than peas, especially to a dog, the hand signal for hot dogs should cause more caudate activity than the signal for peas.
It seemed simple. But like everything else about the Dog Project, it was also completely wrong.
17
Peas and Hot Dogs
WITH THE APPARENT SUCCESS of the first scan session, Andrew quickly set to analyzing the data. We were giddy that we had not only captured images of the dogs’ brains, but that we had also succeeded in getting several runs of functional scans. These functional runs ranged in length from two to five minutes. At first glance, it looked like we had far exceeded our goal of acquiring a sequence of ten images. In McKenzie’s case, we had one run of 120 images. However, it soon became apparent that figuring out what we had actually captured was going to be far more difficult than we had imagined.
Once the excitement of looking at dog brains began to fade, the first thing we noticed was that the dogs didn’t keep their heads in exactly the same position. There were stretches of about ten seconds where the images appeared steady, almost as good as a scan of a human. And then the dog would move out of the field of view. This would be followed a few seconds later by the head reappearing, but not in exactly the same spot.
It was during these gaps that we had handed the treats to the dogs. Normally, a human would be lying on his back, nose up, almost touching the inside of the head coil. But because the dogs were in a sphinx position, they were facing toward the far end of the scanner, where Melissa and I were giving hand signals and dispensing the treats. At the end of each hand signal, we would grab either a pea or a tiny cube of hot dog and reach all the way to the dogs to let them eat it from our fingertips. Of course, there was no way the dogs could keep their heads still while eating, but they had seemed to settle down pretty quickly. Looking at the MRI images, it became apparent that the inconsistency of positioning was a bigger problem than we had expected.
Somehow, we needed to figure out a way to compensate for the different head positions. In the terminology of fMRI data processing, this is called motion correction. Normally, motion correction is done digitally with special computer software after all the data are collected. The software realigns the images automatically, shifting each one until it exactly overlays the first image of the sequence. For humans, it is pretty simple because they don't move much, and the corrections are generally less than a few millimeters. Because the dogs didn't return to the same position each time, the brain had shifted in location too much for the automated software to find it.
Instead, we reverted to an old-school approach of digitally defining landmarks in the brain. First, we identified blocks of scans in which the dog's head was in a steady position, regardless of where it was in the field of view. For each of these blocks, we then placed four digital markers on identifiable landmarks: the olfactory bulb at the front of the brain, the left and right sides of the brain, and the brainstem at the bottom. Then we used software to shift the images until the landmarks were all aligned. Any such movement can be described by how far the head slides, which is called translation, and by how much it rotates. If the dog moved its head to the left, we digitally shifted it back to the right to keep it centered. If she pitched her nose up a little, we digitally rotated the image so her nose was level.
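For readers curious how a computer recovers the translation and rotation from matched landmarks, here is a minimal sketch. It uses the standard Kabsch/Procrustes solution on 2-D points; the function names, the coordinates, and the use of four landmarks per block are illustrative assumptions, not the actual software we used.

```python
import numpy as np

def rigid_align(landmarks, reference):
    """Find the rotation and translation that best map one set of
    2-D landmark points (rows of x, y) onto a reference set."""
    # The centroid offset between the two point sets gives the
    # translation; centering removes it before solving for rotation.
    c_land = landmarks.mean(axis=0)
    c_ref = reference.mean(axis=0)
    A = landmarks - c_land
    B = reference - c_ref
    # The best-fit rotation comes from the SVD of the covariance
    # matrix (Kabsch algorithm); the sign fix rules out reflections.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c_ref - R @ c_land
    return R, t

def apply_correction(points, R, t):
    """Rotate and shift points back into the reference frame."""
    return points @ R.T + t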
Amazingly, this worked. When we viewed the sequence of images in a rapid movie loop, the head now appeared to remain steady in one position. Even Callie, who was not as consistent as McKenzie, appeared stable in the motion-corrected images. We were ready to analyze the actual activation patterns.
Naturally, we assumed that a hand signal indicating hot dog would be much more exciting than one for peas and that this difference would be reflected in the dogs’ brains. To decode how their brains processed these hand signals, we needed to compare the brain responses for each dog to the hot dog and pea signals. Using a standard technique in brain imaging, we separated all the trials into groups of hot dogs or peas. Next, we calculated the average brain response to each of these signals and subtracted the average pea response from the average hot dog response. If our hypothesis was correct, the difference would show up in the parts of the brain that respond to reward.
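The subtraction analysis itself is simple arithmetic. The sketch below, with made-up numbers and labels, shows the idea: group the trials by hand signal, average the responses within each group, and subtract the pea average from the hot-dog average.

```python
import numpy as np

def contrast_map(responses, labels):
    """Average the brain responses within each condition and return
    the hot-dog-minus-peas difference (a subtraction analysis).

    responses: one row per trial (e.g., signal over scan volumes,
    or over voxels); labels: which hand signal preceded each trial.
    Condition names here are illustrative."""
    responses = np.asarray(responses, dtype=float)
    labels = np.asarray(labels)
    mean_hotdog = responses[labels == "hotdog"].mean(axis=0)
    mean_peas = responses[labels == "peas"].mean(axis=0)
    return mean_hotdog - mean_peas
```

If the hypothesis held, this difference would be reliably positive in reward regions like the caudate; in our first pass, it was not.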
Instead, we got nothing. No matter how many different ways we looked at the brain responses, it didn’t appear that the dogs distinguished between the hand signals at all.
Melissa had said from the beginning of the Dog Project that McKenzie preferred toys to food. But we couldn’t give her toys to play with in the scanner. Think of the head movement that would cause as she shook her head back and forth! There wasn’t any way around using food as the reward. Callie, of course, was highly food motivated. In fact, she might have loved food too much.