The brain is the most complex object in the known universe. Some 100 billion neurons release hundreds of neurotransmitters and peptides in dynamics spanning timescales from the microsecond to the lifetime. Given this complexity, neurobiologists can spend productive careers studying a single receptor. Might psychologists more productively understand the mind by ignoring the brain altogether?
Marr (1977) suggested that mental processes may be studied at three levels of analysis: computational (the goals of the process), algorithmic (the method), and implementation (the hardware). The separation implies that the same computational goals and algorithms may be accomplished by a human brain or a computer, and the physical medium—neuron or silicon—is irrelevant. This concept was fundamental to the cognitive science movement and has given its practitioners permission to comfortably ignore the brain. But it has been seriously challenged: A high-level computation (e.g., deciding the next move in a chess game) can be accomplished in a virtually infinite number of ways. Building a computer model that accomplishes the computational goal says little about whether it does so in the same way that a human would. The hardware provides critical constraints on the space of possible models.
The debate about whether we need to study the brain to understand the mind is now being conducted among a network of thousands of scientists and scholars worldwide. The emerging consensus appears to be that implementation is important. Interestingly, the inverse question is also being asked by neurobiologists—do we need to consider the mind to understand the brain?—and answered largely and increasingly in the affirmative.
We can learn much about the mind without knowing a neuron from an astrocyte. As I often repeat to myself and occasionally to others, “If you want to understand human performance, study human performance.” But brain data provide information about the mind that cannot be gleaned from even the most careful studies of behavior. In short, brain data provide a physical grounding that constrains the myriad otherwise-plausible models of cognition. They give us a direct window into which mental processes involve similar and different neurobiological processes, allowing us to use biology to ‘carve nature at its joints’ and understand the structure of mental processes (Kosslyn, 1994). Brain function also provides a common language for directly comparing and contrasting processes that are otherwise ‘apples and oranges,’ such as attention and emotion. This common language is a basis for the integration of knowledge across different types of research—basic and clinical, human and nonhuman.
As the general uses of neuroimaging have been eloquently discussed elsewhere, I focus here on a few examples of how functional magnetic resonance imaging (fMRI) has been useful in my work (see Jonides, Nee, & Berman, 2006). Also, as every method has its limitations, I discuss some of the pitfalls of making psychological inferences from neuroimaging data.
One use for me has been in understanding the structure of emotion and executive control processes, and the ways in which cognitive control operates in emotional and nonemotional situations. My colleagues and I have asked: Is pain different from negative emotions such as sadness and anger, or are they variants on a common theme? In meta-analyses we have found that pain and negative emotions activate distinct brain networks, but share features such as anterior cingulate and frontal cortex activity with a broader class of processes, including attention (Wager & Barrett, 2004; Wager, Reading & Jonides, 2004). In contrast, different varieties of negative emotion engage largely overlapping networks. Thus, pain appears to be distinct from negative emotion, but commonalities suggest ways in which they may share underlying processes such as heightened attention.