(In this post, I promise that the next post will be a continuation of this one. That didn’t end up happening. When I do write the second part of this post, I will just add it in to this one.)
The Hardest Problem
Suppose I’m looking at a red rose. Assuming I’m not blind, I presumably have a certain sort of subjective experience that accompanies this looking. There is something it is like to be looking at this red rose. For example, there’s something it’s like to experience this redness that I’m experiencing right now. This task of explaining the subjective character of my experiences—the fact that there’s something it’s like to be conscious—is what David Chalmers has called the “hard problem of consciousness.”
Chalmers is certainly right to identify this as a “hard problem,” and I’d go even further to say that it’s the hardest problem in all of philosophy. If anyone says that it isn’t that hard, either they don’t really understand the problem, they’re trying to ignore it, or they know something that they aren’t telling everyone else. That’s because almost all philosophers agree that we really don’t know how to solve this problem in a satisfactory way yet. Lots of philosophers have theories, but, for any theory that anyone proposes, there will be many people who find it completely unacceptable.
In this post, I want to explore one interesting way of dealing with the hard problem of consciousness that’s drawn from the work of the mid-20th-century philosopher Wilfrid Sellars. I’ll lay out the methodology, explain some difficulties that one might expect it to face, and offer some ways of thinking about how we might extend the proposal to resolve these difficulties. First, however, let me explain why this problem of consciousness is so hard by going through the difficulties that a scientific attempt at explaining consciousness will inevitably encounter.
Neuroscience Comes Up Short
Some people would like to claim that, eventually, neuroscience will give us a full answer to this hard problem of consciousness. While it’s not particularly controversial to claim that one’s neurobiological states are causally sufficient for one’s conscious states, the claim that one’s neurobiological states fully explain one’s conscious states is rather dubious. I think that a thought experiment conjured up by the philosopher Gottfried Wilhelm Leibniz in the early 1700s still makes this point pretty well. Imagine you could enlarge the brain to the size of a giant factory. If a neuroscientist were to give you a tour of that factory, you could see all the parts moving, and the neuroscientist might be able to point to some mechanisms and tell you which states of consciousness come about when which mechanisms are operative. No matter how thoroughly you tour this giant factory, however, the neuroscientist won’t be able to point out consciousness itself. Even worse, on the basis of looking at the mechanisms alone, it seems like the original question still remains: why does this qualitative state come about with that mechanism? We know from experimentation that it does, but why?
There is what philosophers of mind have called an explanatory gap between knowing the physical facts about the brain and knowing phenomenal facts about conscious experience. To illustrate this gap, the Australian philosopher Frank Jackson proposed a famous thought experiment. He has us imagine Mary, the world’s leading color scientist. She’s an unparalleled expert in the neurophysiology of vision, and she knows all there is to know about the brain processes that occur when one sees a red rose. Mary, however, has been completely color blind from birth. Given her knowledge of all the physical facts, does she now know what the subjective quality of seeing red is like? Or, when she gets an operation to restore her color vision and sees a red rose, will she learn something new about conscious experience? Our intuition tells us that Mary will learn something new about conscious experience by actually seeing the rose. She learns what this conscious experience is like.
The result of Jackson’s thought experiment, it seems, is that the subjective quality of a particular conscious experience can only be known through a first-person perspective, not through the third-person perspective of scientific investigation. But where does this leave us when it comes to actually explaining consciousness in scientific terms? Will we forever be in the dark when it comes to understanding why this experience of red accompanies a certain neurophysiological event that is causally related to certain wavelengths of light hitting the retina? Some philosophers, like Colin McGinn, think that we will be forever in the dark. On McGinn’s view, the solution to the problem of consciousness is simply beyond the scope of our feeble human minds, which didn’t evolve to worry about such difficult problems. Just as chimpanzees will never be able to understand the Big Bang theory, we will never be able to understand how consciousness arises out of inorganic matter. Most people think that McGinn is jumping to pessimistic conclusions a bit too hastily. Still, it seems clear that a radically new strategy is needed.
Another Way at the Problem
The most popular way of thinking we might reduce conscious experience to something less mysterious is by reducing it to brain states. But that’s not the only way we might try to reductively explain it. Another possibility is to try to explain first-person conscious experience in terms of a special set of judgments or attitudes towards propositions. Wilfrid Sellars, one of the most important philosophers of the 20th century, attempts to do just that. Sellars’ strategy is a linguistic one. He starts with perceptual reports, and then explains the role that “subjective seemings” can play in the context of such reports.
In my last post, I outlined what Sellars calls “the game of giving and asking for reasons,” as it has been developed by Robert Brandom. The game of giving and asking for reasons is a set of social norms in which various utterances come to be treated as entitling the speaker to some utterances, committing her to others, and precluding entitlement to still others. On neo-pragmatist accounts of language, the role an utterance plays in this game is what makes it a meaningful one. An utterance that plays no role in this game is just nonsense. That’s why a baby’s “Goobla-Glooba-Looba” is a nonsense utterance; it doesn’t do anything in this game. On the other hand, “That’s red,” does have consequences in the game of giving and asking for reasons. For example, it commits me to the claim “That’s colored,” and it precludes me from being entitled to the claim “That’s blue.”
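To make the inferential-role idea a bit more concrete, here is a toy formalization of the game of giving and asking for reasons. This is my own illustrative sketch, not anything from Sellars or Brandom: I simply record, for each assertion, what it commits the speaker to and what entitlements it precludes, and count an utterance as meaningful just in case it has some such consequences.

```python
# A toy model of the "game of giving and asking for reasons".
# Illustrative sketch only (not Sellars' or Brandom's actual machinery):
# an utterance is "meaningful" iff it has inferential consequences.

COMMITMENTS = {
    "That's red": {"That's colored"},
    "That's blue": {"That's colored"},
}

PRECLUSIONS = {
    "That's red": {"That's blue"},
    "That's blue": {"That's red"},
}

def is_meaningful(utterance):
    """An utterance counts as a move in the game only if asserting it
    commits the speaker to something or precludes some entitlement."""
    return bool(COMMITMENTS.get(utterance) or PRECLUSIONS.get(utterance))

def consequences(utterance):
    """What asserting the utterance commits one to, and which
    entitlements it precludes."""
    return {
        "commits_to": COMMITMENTS.get(utterance, set()),
        "precludes": PRECLUSIONS.get(utterance, set()),
    }

print(is_meaningful("That's red"))           # True: it has consequences
print(is_meaningful("Goobla-Glooba-Looba"))  # False: no role in the game
print(consequences("That's red"))
```

On this toy picture, the baby’s babble and the adult’s “That’s red” differ not in their acoustics but in their consequences within the table of norms, which is the point the paragraph above is making.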
Sellars thinks we ought to understand all of awareness in terms of this game of giving and asking for reasons. This might sound like we’re just ignoring the problem, or retreating to a crude form of verificationism in the face of it, but that’s not what we’re doing. Let me explain a bit. Like Kant, Sellars thinks that the judgment is the basic unit of contentful thought. Judgments are the basic contentful thing that can play a role in the game of giving and asking for reasons because they’re the basic unit of content for which we can take responsibility. “Inner judgments”—judgments about our subjective experience—only make sense against a background of judgments that are intersubjectively evaluable, since, if a judgment can’t be intersubjectively evaluated, we cannot be held accountable for it (this is the familiar conclusion of Wittgenstein’s famous “Private Language Argument”).
Now, with this way of thinking about language and awareness in mind, let’s turn to Sellars’ account of perception. Suppose I look at a rose and say, “That’s red.” On Sellars’ account, this perceptual report has two dimensions: First, it is a distinct and reliable response to a stimulus that is in fact red. This is made possible by the simple fact that I have cognitive faculties which have evolved to discriminate this stimulus in my environment, and I’ve been trained to make this sound when I’m struck with a stimulus of this sort. Second, it functions as a move in the “game of giving and asking for reasons,” where it carries a certain inferential weight. The inferential weight it has is directly tied to the fact that it is recognized by the participants in the game as a reliable response to a stimulus that is in fact red. In understanding that it has this inferential weight, I’m able to understand my utterance as not merely responding to the red stimulus, but as noninferentially reporting it. It is this latter dimension that distinguishes a human saying that something is red from a parrot squawking out “Red!” whenever there’s a red object in front of it.
In the context of this account of perception, Sellars thinks he can explain subjective “seemings”—the way things appear to subjective experience. A subjective seeming, according to Sellars, is just what we report if we’re inclined to make a perceptual report, but, for some reason, we hold back on actually doing so. If I’m looking at a rose, for example, and I say it looks red or appears as if it’s red, I’m making a report that’s weaker than saying it actually is red. If for some reason the rose wasn’t red (perhaps it’s actually a white rose with red light shined upon it in such a way as to make the rose itself appear red), this report would still be acceptable. In making it, I’m not committing myself to saying that the rose actually is red—only that I’m inclined to think so.
Now let’s look at the case of Mary the color scientist again from this Sellarsian perspective. For Sellars, having observational knowledge that something is red requires being reliably disposed to respond differentially to red stimuli and understanding the inferential significance that this response has in the game of giving and asking for reasons. Since Mary is, after all, a color scientist, she already understands the inferential weight of these various color concepts. She knows, for example, that if something is scarlet, it is also red, and that, if something is red all over, it can’t also be blue. When Mary first sees a rose after her operation, she’s suddenly disposed to form color judgments in a noninferential way. If Mary were a good Sellarsian, then upon getting the surgery she might say, “So this is what it’s like to be able to elicit color judgments noninferentially!” But there’s no fact of the matter about what that is—she’s just saying, “Now I’m able to elicit color judgments noninferentially.” The new “understanding” that she has just been endowed with is a practical one: an understanding of how to link noninferential judgments up with inferentially articulated concepts. She does learn something, but it’s this new ability, not some particular phenomenal fact. Accordingly, physicalism is not threatened.
I think this is a very promising way of trying to tackle the hard problem of consciousness. Still, many will doubt that this actually explains conscious experience. Perhaps it explains my experience-related behavior; however, if that’s all we’re after, we might as well just stick to a reduction in terms of brain states. The problem with a neuroscientific explanation isn’t that it fails to explain my consciousness-related behavior, but that it seems like it’d be able to explain my behavior just fine without conscious experience being what it is, and so consciousness remains unexplained. This same problem seems to arise for this account. Conscious experience, it seems, could be left out of the picture entirely.
To illustrate this worry a bit more elaborately, let’s employ a famous philosophical example: “philosophical zombies.” A philosophical zombie is a person who looks and acts exactly like a normal person, but who lacks conscious experience entirely. There’s nothing it’s like to be a philosophical zombie in much the same sense that there’s nothing it’s like to be a rock. Now, most people agree that philosophical zombies aren’t actually possible, but the puzzle is to explain why they’re not possible. Any satisfactory theory of consciousness must do this, and it’s surprisingly hard to do so.

The zombie issue raises some real concerns for any neuroscientific explanation of consciousness. It seems that all that neuroscience gives us is an explanation of the neural events that are tied to certain stimuli, and the behaviors that those neural events induce, but it leaves open the question of why there’s any subjective experience that accompanies these neural events. The zombie issue makes this explicit. Why couldn’t there be a zombie that has the same neural events as me that induce the same behaviors, but for whom there’s nothing it’s like to have these neural events? Presumably, there can’t be such a being. But why? Doing more neuroscience doesn’t seem like it’s going to answer this question. At the very least, we’re going to have to supplement this neuroscience with some serious philosophy to explain how it actually answers the question.
Does the Sellarsian account of conscious experience help us at all here? At first glance, it might seem just as hopeless. For starters, it isn’t clear that actually having conscious experience is a prerequisite for making the sort of observation reports that Sellars describes. All we must be able to do to make those observation reports is reliably discriminate things in our environment and make linguistic moves whose inferential significance is tied to these reliable discriminations. A zombie, it seems, could do both of these things. Even more, all that a report of a subjective seeming requires is the ability to reliably detect when one is inclined to make an observation report, and it seems like a zombie could have this capacity as well. If the Sellarsian account could be satisfied by a zombie, then it seems it hasn’t explained conscious experience at all.
In response to this objection, the Sellarsian should be quick to note that we’ve only just explained one particular set of judgments—reports of subjective seemings. These sorts of judgments have been paradigmatic of the hard problem of consciousness. However, the way we think of consciousness—the thing that separates us from zombies—includes a whole host of other judgments. When the objector concludes that a zombie could perform reports of subjective seemings in accord with the Sellarsian model, they are conceiving of the consciousness that the zombie would still lack by clinging to the other sorts of judgments that we make and have not yet explained. The bold Sellarsian claim is that, once we account for all of the judgments that a person might make that factor into our conception of conscious experience, there’s literally nothing else that needs explaining—we just need to figure out what all of these sorts of judgments are and how we might pragmatically explain them.
In the previous section, I used the Sellarsian strategy to explain just a few judgments: reports of subjective “seemings” and their relationship to ordinary observation reports. This gave us enough to explain something about what’s going on in the case of Mary the color scientist, but, of course, there is much more explaining to do in order to account for everything we include in our concept of conscious experience. The suggestion, however, is that we know how to do it in principle. All we have to do is explain the inferential significance of a move in the game of giving and asking for reasons, and how this move is tied to the various discriminative abilities we might possess. Holding fast to the bold Sellarsian claim means maintaining that there is a lot more explanatory work to be done, but insisting that we have the means to do it, and, as we do so, we will be getting a richer and richer understanding of conscious experience.
Some Lingering Doubts and the Path Ahead
Perhaps you’re doubtful about the potential of this approach to provide all the things a genuine account of consciousness is supposed to provide. Sellars’ approach, you might think, will never actually be able to account for the content of conscious experience. Some people think that this content of subjective experience is something called “qualia,” this strange, purely qualitative stuff that we first-personally know, but could never quite explain from outside of the first person. Other people will insist that the content of conscious experience is the objects in the world that we in fact experience. In either case, it doesn’t seem (at least on the face of it) that these things can be explained purely in terms of the discriminative capacities of organisms and the moves they’re able to make in the game of giving and asking for reasons. Why not? Well, it seems that the subjective character of our conscious experience—the what-it’s-like-ness of conscious experience—is essentially tied to the content of our experience. And it is hard to see how we could explain this on the Sellarsian approach. How could our holding back on commitments about the way the world is (the Sellarsian account of subjective “seemings”) somehow contribute to the content of our experiences?
In my next post, I will attempt to extend the Sellarsian approach to explain how this might be so. The strategy I employ, drawing from Hegel and Robert Brandom, is to explain the particular way in which we understand the contents of our consciousness as representations of things in the world. I will cash out representational content in such a way that it reduces to the Sellarsian raw materials we are allotted. One of the main attractions of the Sellarsian strategy is that these raw materials are naturalistically unproblematic (as I explained in my previous post). Using these raw materials to explain the nature and content of conscious experience will give us a crucial piece of the puzzle in connecting the scientific and the manifest image.