All too often we fail to realize that other species, even those closest to us, can be downright alien in some respects, especially in how they perceive the world. This is especially true of dogs: despite having been one of humanity’s closest companions for nearly 15,000 years, the canine brain is wired quite differently from a person’s, and now, thanks to a combination of MRI, machine learning and some very patient pooches, canine cognition researchers may have fetched a glimpse of how our furry friends see the world.

Inspired by previous work on decoding how the human brain processes visual stimuli, a research team at Emory University’s Canine Cognitive Neuroscience Lab analyzed the brain patterns of two dogs as they watched videos of scenes a dog would encounter in real life; the patterns were then processed by machine learning algorithms to decode how the dogs process these everyday images.

“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” explains Gregory Berns, a professor of psychology at Emory and head of the lab. “The fact that we are able to do that is remarkable.”

The first stage of this experiment involved training dogs to sit still, unrestrained, in a functional magnetic resonance imaging (fMRI) machine; Berns was part of a team that pioneered such training techniques, paving the way for what he calls “The Dog Project,” a series of experiments that use fMRI to study the brains of dogs.

For the current study, the team found two dogs, Bhubo and Daisy, that had no problem sitting still through the 30-minute videos used in the experiment (according to some sources, that’s approximately 3.5 dog-hours); both dogs were able to remain still for the 90-minute sessions, three 30-minute viewings back to back, without needing a break.

“They didn’t even need treats,” says study lead Erin Phillips, a graduate student in ecology and evolutionary biology at Princeton University. Phillips monitored Bhubo and Daisy during their fMRI sessions. “It was amusing because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting kind of silly.”

The videos were produced using a recorder attached to a gimbaled selfie stick that allowed the team to shoot from the point of view of an average dog, somewhere around waist height. The scenes depicted situations a dog might encounter in its everyday life, such as being petted, walking on a leash, receiving a treat from someone, or watching a cat walk through a house or a scooter zip by on the sidewalk.

The fMRI data was then processed by a machine learning neural network called Ivis, trained to classify the content of the brain data, and cross-referenced against timestamps in the videos, which were annotated with object-based classifiers (such as dog, car, human or cat) and action-based classifiers (such as sniffing, playing or eating).
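The article doesn’t include the study’s code, but the decoding step it describes boils down to pairing each brain scan with the label of whatever the video showed at that moment, then testing how well those labels can be recovered from voxel patterns alone. Here is a minimal sketch of that idea using synthetic data, with a generic scikit-learn classifier standing in for Ivis; the array shapes, the events list and the label_scans helper are all illustrative assumptions, not the study’s actual pipeline.

```python
# Minimal sketch (not the authors' published pipeline): align fMRI scans
# with timestamped video labels and measure decoding accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 900 scans (one per 2-second acquisition over 30 minutes),
# each flattened to a 5,000-voxel vector. Real data would be preprocessed fMRI.
n_scans, n_voxels, tr_seconds = 900, 5000, 2.0
voxels = rng.normal(size=(n_scans, n_voxels))

# Hypothetical timestamped action annotations: (start_s, end_s, label).
events = [(0, 600, "sniffing"), (600, 1200, "playing"), (1200, 1800, "eating")]

def label_scans(events, n_scans, tr):
    """Assign each scan the action label active at its acquisition time."""
    labels = np.empty(n_scans, dtype=object)
    for i in range(n_scans):
        t = i * tr
        for start, end, name in events:
            if start <= t < end:
                labels[i] = name
    return labels

y = label_scans(events, n_scans, tr_seconds)

# Decode action categories from voxel patterns; cross-validated accuracy is
# the figure of merit the study reports. (Synthetic data will score near
# chance, about 33% here; the real signal is what lifts it above that.)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, voxels, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```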

A control group of two human subjects who watched the same videos produced 99 percent accuracy in mapping the brain data to both the object- and action-based classifiers. When it came to decoding the brain patterns of the patient pups, the program failed on the object classifiers but proved 75 to 88 percent accurate on the action classifiers, illustrating a fundamental difference between how a dog’s brain and a human’s process the world.

“We humans are very object oriented,” Berns explains. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”

This is also reflected in the difference between our respective species’ visual systems: while humans can perceive a wide range of colors—something useful for differentiating between various objects—dogs can only see in shades of blue and yellow, but have a higher density of visual receptor cells geared toward detecting motion.

“It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost,” Berns continues. “Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”

Phillips finds that understanding how different animals view the world is important to her own research into the ecological impact of predator reintroduction in Mozambique. “Historically, there hasn’t been much overlap in computer science and ecology,” she says. “But machine learning is a growing field that is starting to find broader applications, including in ecology.”

Another way to delve into the mind and soul of a dog is to follow the journey of Bob, the canine hero of Whitley’s 2015 ebook The Journey to Dog Heaven, available in Kindle and audiobook formats on Amazon.
