Wilfrid Sellars and the Hard Problem of Consciousness

(In this post, I promise that the next post will be a continuation of this one.  That didn’t end up happening.  When I do write the second part of this post, I will just add it in to this one.)

The Hardest Problem

Suppose I’m looking at a red rose. Assuming I’m not blind, I presumably have a certain sort of subjective experience that accompanies this looking. There is something it is like to be looking at this red rose. For example, there’s something it’s like to experience this redness that I’m experiencing right now. The task of explaining the subjective character of my experiences—the fact that there’s something it’s like to be conscious—is what David Chalmers has called the “hard problem of consciousness.”

Chalmers is certainly right to identify this as a “hard problem,” and I’d go even further to say that it’s the hardest problem in all of philosophy.  If anyone says that it isn’t that hard, they either don’t really understand the problem, they’re trying to ignore it, or they know something that they aren’t telling everyone else.  That’s because almost all philosophers agree that we really don’t know how to solve this problem in a satisfactory way yet.  Lots of philosophers have theories, but, for any theory that anyone proposes, there will be many people who find it completely unacceptable.

[Image: Wilfrid Sellars]

In this post, I want to explore one interesting way of dealing with the hard problem of consciousness that’s drawn from the work of the mid-20th-century philosopher Wilfrid Sellars. I’ll lay out the methodology, explain some difficulties that one might expect it to face, and offer some ways of thinking about how we might extend the proposal to resolve these difficulties. First, however, let me explain why this problem of consciousness is so hard by going through the difficulties that a scientific attempt at explaining consciousness will inevitably encounter.

Neuroscience Comes Up Short

Some people would like to claim that, eventually, neuroscience will give us a full answer to this hard problem of consciousness. While it’s not particularly controversial to claim that one’s neurobiological states are causally sufficient for one’s conscious states, the claim that one’s neurobiological states fully explain one’s conscious states is rather dubious. I think that a thought experiment conjured up by the philosopher Gottfried Wilhelm Leibniz in the early 1700s still makes this point pretty well. Imagine you could enlarge the brain to the size of a giant factory (Leibniz himself imagined a mill). If a neuroscientist were to give you a tour of that factory, you could see all the parts moving, and the neuroscientist might be able to point to some mechanisms and tell you which states of consciousness come about when which mechanisms are operative. No matter how thoroughly you tour this giant factory, however, the neuroscientist won’t be able to point out consciousness itself. Even worse, on the basis of looking at the mechanisms alone, it seems like the original question still remains: why does this qualitative state come about with that mechanism? We know from experimentation that it does, but why?

There is what philosophers of mind have called an explanatory gap between knowing the physical facts about the brain and knowing phenomenal facts about conscious experience. To illustrate this gap, the Australian philosopher Frank Jackson proposed a famous thought experiment. He has us imagine Mary, the world’s leading color scientist. She’s an unparalleled expert in the neurophysiology of vision, and she knows all there is to know about the brain processes that occur when one sees a red rose. Mary, however, has been completely colorblind from birth. Given her knowledge of all the physical facts, does she already know what the subjective quality of seeing red is like? Or, when she gets an operation to restore her color vision and sees a red rose, will she learn something new about conscious experience? Our intuition tells us that Mary will learn something new about conscious experience by actually seeing the rose. She learns what this conscious experience is like.

The result of Jackson’s thought experiment, it seems, is that the subjective quality of a particular conscious experience can only be known from a first-person perspective, not from the third-person perspective of scientific investigation. But where does this leave us when it comes to actually explaining consciousness in scientific terms? Will we forever be in the dark when it comes to understanding why this experience of red accompanies a certain neurophysiological event that is causally related to certain wavelengths of light hitting the retina? Some philosophers, like Colin McGinn, think that we will be forever in the dark. On McGinn’s view, the solution to the problem of consciousness is simply beyond the scope of our feeble human minds, which didn’t evolve to worry about such difficult problems. Just as chimpanzees will never be able to understand the Big Bang, we will never be able to understand how consciousness arises out of inorganic matter. Most people think that McGinn is jumping to pessimistic conclusions a bit too hastily. Still, it seems clear that a radically new strategy is needed.

Another Way at the Problem

The most popular way of thinking we might reduce conscious experience to something less mysterious is by reducing it to brain states. But that’s not the only way we might try to reductively explain it. Another possibility is to try to explain first-person conscious experience in terms of a special set of judgments or attitudes towards propositions. Wilfrid Sellars, one of the most important philosophers of the 20th century, attempts to do just that. Sellars’ strategy is a linguistic one. He starts with perceptual reports, and then explains the role that “subjective seemings” can play in the context of such reports.

In my last post, I outlined what Sellars calls “the game of giving and asking for reasons,” as it has been developed by Robert Brandom. The game of giving and asking for reasons is a set of social norms in which various utterances come to be treated as entitling the performer to some further utterances, committing her to others, and precluding entitlement to still others. On neo-pragmatist accounts of language, the role an utterance plays in this game is what makes it a meaningful one. An utterance that plays no role in this game is just nonsense. That’s why a baby’s “Goobla-Glooba-Looba” is a nonsense utterance; it doesn’t do anything in this game. On the other hand, “That’s red,” does have consequences in the game of giving and asking for reasons. For example, it commits me to the claim “That’s colored,” and it precludes me from being entitled to the claim “That’s blue.”
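
To make this structure a bit more concrete, here is a toy sketch in Python of how a speaker’s “score” in the game of giving and asking for reasons might be modeled. The particular rules and claims below are illustrative assumptions of mine, not anything drawn from Sellars or Brandom; the point is only that what an utterance means consists in the commitments and preclusions it carries.

    # Toy model (an illustrative sketch, not Brandom's own formalism) of a
    # speaker's normative "score" in the game of giving and asking for reasons.

    # Assumed inferential rules: asserting the key claim also commits the speaker
    # to the listed claims and precludes entitlement to the incompatible ones.
    COMMITS_TO = {"That's red": {"That's colored"}}
    INCOMPATIBLE_WITH = {"That's red": {"That's blue"}}

    class Scorekeeper:
        """Tracks what a speaker has become committed to by her assertions."""

        def __init__(self):
            self.commitments = set()

        def assert_claim(self, claim):
            # Making the assertion changes the speaker's score: she now carries
            # the claim itself plus whatever it commits her to.
            self.commitments.add(claim)
            self.commitments |= COMMITS_TO.get(claim, set())

        def entitled_to(self, claim):
            # A claim incompatible with an existing commitment is precluded.
            return not any(claim in INCOMPATIBLE_WITH.get(c, set())
                           for c in self.commitments)

    speaker = Scorekeeper()
    speaker.assert_claim("That's red")
    print(speaker.commitments)                 # includes "That's colored"
    print(speaker.entitled_to("That's blue"))  # False: entitlement is precluded

On this toy model, a nonsense utterance like “Goobla-Glooba-Looba” simply has no entry in the rule tables: asserting it would neither commit the speaker to anything further nor preclude anything, which is the model’s analogue of its doing no work in the game.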

Sellars thinks we ought to understand all of awareness in terms of this game of giving and asking for reasons. This might sound like we’re just ignoring the problem, or retreating to a crude form of verificationism in the face of it, but that’s not what we’re doing. Let me explain a bit. Like Kant, Sellars thinks that the judgment is the basic unit of contentful thought. Judgments are the basic contentful items that can play a role in the game of giving and asking for reasons because they’re the basic units of content for which we can take responsibility. “Inner judgments”—judgments about our subjective experience—only make sense against a background of judgments that are intersubjectively evaluable, since, if a judgment can’t be intersubjectively evaluated, we cannot be held accountable for it (this is the familiar conclusion of Wittgenstein’s famous “Private Language Argument”).

Now, with this way of thinking about language and awareness in mind, let’s turn to Sellars’ account of perception. Suppose I look at a rose and say, “That’s red.” On Sellars’ account, this perceptual report has two dimensions. First, it is a distinct and reliable response to a stimulus that is in fact red. This is made possible by the simple fact that I have cognitive faculties that have evolved to discriminate this stimulus in my environment, and I’ve been trained to make this sound when I’m struck with a stimulus of this sort. Second, it functions as a move in the “game of giving and asking for reasons,” where it carries a certain inferential weight. The inferential weight it has is directly tied to the fact that it is recognized by the participants in the game as a reliable response to a stimulus that is in fact red. In understanding that it has this inferential weight, I’m able to understand my utterance not merely as responding to the red stimulus, but as noninferentially reporting it. It is this latter dimension that distinguishes a human saying that something is red from a parrot squawking out “Red!” whenever there’s a red object in front of it.
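
Here is a minimal sketch, again in Python with made-up names and thresholds, of these two dimensions. A parrot (or a photodiode) can have the first, the reliable differential response; only the second, the grasp of the response’s inferential role, turns it into an observation report.

    # Minimal sketch (illustrative assumptions, not Sellars's own formulation)
    # of the two dimensions of a perceptual report.

    RED_NM = range(620, 750)   # assumed stand-in for "a stimulus that is in fact red"

    def reliable_response(stimulus_nm):
        # Dimension one: a trained, reliable differential response to red things.
        # A parrot squawking "Red!" has this much and no more.
        return "Red!" if stimulus_nm in RED_NM else None

    # Dimension two: the inferential weight the response carries in the game.
    INFERENTIAL_ROLE = {"Red!": {"commits_to": {"That's colored"},
                                 "precludes": {"That's blue"}}}

    def observation_report(stimulus_nm):
        # A genuine report couples the reliable response with an understanding
        # of its place in the game of giving and asking for reasons.
        response = reliable_response(stimulus_nm)
        if response is None:
            return None
        return {"utterance": response, **INFERENTIAL_ROLE[response]}

    print(reliable_response(650))    # the parrot's contribution: 'Red!'
    print(observation_report(650))   # 'Red!' together with its inferential weight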

In the context of this account of perception, Sellars thinks he can explain subjective “seemings”—the way things appear to subjective experience. A subjective seeming, according to Sellars, is just what we report when we’re inclined to make a perceptual report but, for some reason, hold back on actually doing so. If I’m looking at a rose, for example, and I say it looks red or appears as if it’s red, I’m making a report that’s weaker than saying it actually is red. If, for some reason, the rose weren’t red (perhaps it’s actually a white rose with red light shone on it in such a way as to make the rose itself appear red), this report would still be acceptable. In making it, I’m not committing myself to saying that the rose actually is red—only that I’m inclined to think so.
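
One way to picture this, again as a rough illustrative sketch of my own rather than Sellars’s formulation, is that a “looks” report has the same content as the corresponding “is” report but with the endorsement withheld:

    # Illustrative sketch: a "seems" report as a perceptual report with
    # endorsement withheld. The trigger for withholding (distrust of the
    # viewing conditions) is an assumption made for the example.

    from dataclasses import dataclass

    @dataclass
    class PerceptualReport:
        content: str      # e.g. "the rose is red"
        endorsed: bool    # True -> "That is red."  False -> "That merely looks red."

    def report(inclined_to_say, conditions_trusted):
        # If the viewing conditions are suspect (odd lighting, etc.), the speaker
        # withholds endorsement and reports only how things seem.
        return PerceptualReport(content=inclined_to_say, endorsed=conditions_trusted)

    print(report("the rose is red", conditions_trusted=True))
    # endorsed=True  -> "That's red."
    print(report("the rose is red", conditions_trusted=False))
    # endorsed=False -> "That looks red."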

Now let’s look at the case of Mary the color scientist again from this Sellarsian perspective. For Sellars, having observational knowledge that something is red requires being reliably disposed to respond differentially to red stimuli and understanding the inferential significance that this response has in the game of giving and asking for reasons. Since Mary is, after all, a color scientist, she already understands the inferential weight of these various color concepts. She knows, for example, that if something is scarlet, it is also red, and that, if something is red all over, it can’t also be blue. When Mary’s color vision is restored and she sees a rose for the first time, she’s suddenly disposed to form color judgments in a noninferential way. If Mary were a good Sellarsian, then upon getting the surgery she might say, “So this is what it’s like to be able to elicit color judgments noninferentially!” But there’s no further fact of the matter about what that is—she’s just saying, “Now I’m able to elicit color judgments noninferentially.” The new “understanding” that she has just been endowed with is a practical one: an understanding of how to link noninferential judgments up with inferentially articulated concepts. She does learn something, but it’s this new ability, not some particular phenomenal fact. Accordingly, physicalism is not threatened.
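
To put the Sellarsian reading of the Mary case in the same toy terms (once more, an illustrative sketch rather than Sellars’s own machinery): what changes after the operation is not Mary’s stock of inferentially articulated color knowledge, but the presence of a noninferential entry point into it.

    # Illustrative sketch of the Sellarsian reading of the Mary case: the
    # operation adds a noninferential entry point into an inferential network
    # Mary already commands. All names and rules are assumptions for the example.

    INFERENTIAL_ROLES = {          # complete before and after the operation
        "scarlet": {"commits_to": {"red"}},
        "red":     {"commits_to": {"colored"}, "precludes": {"blue"}},
    }

    class Mary:
        def __init__(self, color_vision):
            self.color_vision = color_vision
            self.inferential_roles = INFERENTIAL_ROLES   # unchanged by the operation

        def noninferential_judgment(self, seen_color):
            # The new practical ability: eliciting a color judgment directly from
            # the stimulus rather than inferring it from physical descriptions.
            if not self.color_vision:
                return None                              # pre-operation: no entry point
            return {"judgment": seen_color, **self.inferential_roles.get(seen_color, {})}

    print(Mary(color_vision=False).noninferential_judgment("red"))  # None
    print(Mary(color_vision=True).noninferential_judgment("red"))
    # Same inferential knowledge as before; what is new is the direct hook-up to it.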

But Zombies!

I think this is a very promising way of trying to tackle the hard problem of consciousness. Still, many will doubt that this actually explains conscious experience. Perhaps it explains my experience-related behavior; however, if that’s all we’re after, we might as well just stick to a reduction in terms of brain states. The problem with a neuroscientific explanation isn’t that it fails to explain my consciousness-related behavior, but that it seems like it’d be able to explain my behavior just fine without conscious experience being what it is, and so consciousness remains unexplained. This same problem seems to arise for this account. Conscious experience, it seems, could be left out of the picture entirely.

To illustrate this worry a bit more elaborately, let’s employ a famous philosophical example: “philosophical zombies.” A philosophical zombie is a person who looks and acts exactly like a normal person, but who lacks conscious experience entirely. There’s nothing it’s like to be a philosophical zombie in much the same sense that there’s nothing it’s like to be a rock. Now, most people agree that philosophical zombies aren’t actually possible, but the puzzle is to explain why they’re not possible. Any satisfactory theory of consciousness must do this, and it’s surprisingly hard to do so.

The zombie issue raises some real concerns for any neuroscientific explanation of consciousness. It seems that all that neuroscience gives us is an explanation of the neural events that are tied to certain stimuli, and the behaviors that those neural events induce, but it leaves open the question of why there’s any subjective experience that accompanies these neural events. The zombie issue makes this explicit. Why couldn’t there be a zombie that has the same neural events as me, inducing the same behaviors, but for whom there’s nothing it’s like to have these neural events? Presumably, there can’t be such a being. But why? Doing more neuroscience doesn’t seem like it’s going to answer this question. At the very least, we’re going to have to supplement that neuroscience with some serious philosophy to explain how it actually answers the question.

Does the Sellarsian account of conscious experience help us at all here? At first glance, it might seem just as hopeless. For starters, it isn’t clear that actually having conscious experience is a necessary prerequisite for making the sort of observation reports that Sellars describes. All we must be able to do to make those observation reports is reliably discriminate things in our environment and make linguistic moves whose inferential significance is tied to these reliable discriminations. A zombie, it seems, could do both of these things. Even more, all that a report of a subjective seeming requires is the ability to reliably detect when one is inclined to make an observation report, and it seems like a zombie could have this capacity as well. If the Sellarsian account explains nothing that a zombie couldn’t equally well possess, then it seems that conscious experience has once again been left out of the picture.

In response to this objection, the Sellarsian should be quick to note that we’ve only just explained one particular set of judgments—reports of subjective seemings. These sorts of judgments have been paradigmatic of the hard problem of consciousness. However, the way we think of consciousness—the thing that separates us from zombies—includes a whole host of other judgments. When the objector concludes that a zombie could perform reports of subjective seemings in accord with the Sellarsian model, they are conceiving of the consciousness that the zombie would still be lacking by clinging to the other sorts of judgments that we make and have not yet explained. The bold Sellarsian claim is that, once we account for all of the judgments a person might make that factor into our conception of conscious experience, there’s literally nothing else that needs explaining—we just need to figure out what all of these sorts of judgments are and how we might pragmatically explain them.

In the previous section, I used the Sellarsian strategy to explain just a few judgments: reports of subjective “seemings” and their relationship to ordinary observation reports. This gave us enough to explain something about what’s going on in the case of Mary the color scientist, but, of course, there is much more explaining to do in order to account for everything we include in our concept of conscious experience. The suggestion, however, is that we know how to do it in principle. All we have to do is explain the inferential significance of a move in the game of giving and asking for reasons, and how this move is tied to the various discriminative abilities we might possess. Holding fast to the bold Sellarsian claim means maintaining that there is a lot more explanatory work to be done, but insisting that we have the means to do it, and that, as we do it, we will get a richer and richer understanding of conscious experience.

Some Lingering Doubts and the Path Ahead

Perhaps you’re doubtful that this approach can deliver everything a genuine account of consciousness is supposed to provide. Sellars’ approach, you might think, will never actually be able to account for the content of conscious experience. Some people think that this content of subjective experience is something called “qualia”: strange, purely qualitative stuff that we know first-personally but could never quite explain from outside the first person. Other people will insist that the content of conscious experience is the objects in the world that we in fact experience. In either case, it doesn’t seem (at least on the face of it) that these things can be explained purely in terms of the discriminative capacities of organisms and the moves they’re able to make in the game of giving and asking for reasons. Why not? Well, it seems that the subjective character of our conscious experience—the what-it’s-like-ness of conscious experience—is essentially tied to the content of our experience. And it is hard to see how we could explain this on the Sellarsian approach. How could our holding back on commitments about the way the world is (the Sellarsian account of subjective “seemings”) somehow contribute to the content of our experiences?

In my next post, I will attempt to extend the Sellarsian approach to explain how this might be so. The strategy I employ, drawing from Hegel and Robert Brandom, is to explain the particular way in which we understand the contents of our consciousness as representations of things in the world. I will cash out representational content in such a way that it reduces to the Sellarsian raw materials we are allotted. One of the main allures of the Sellarsian strategy is that these raw materials are naturalistically unproblematic (as I explained in my previous post). Using these raw materials to explain the nature and content of conscious experience will give us a crucial piece of the puzzle in connecting the scientific and the manifest image.

2 thoughts on “Wilfrid Sellars and the Hard Problem of Consciousness”

  1. Very nicely written! This formulation of the problem I find highly inspirational. It makes me simultaneously think of many things, let me see if I can try and bring it together and see if something comes out of it…

    I have been teaching a little bit about Plato lately, and his view on knowledge: that knowledge cannot be conveyed, that it is a subjective experience, and how he goes on to define the knowledge that can be taught as mathematical, such that we can show the logic of a mathematical proof to a student, but to the student it will not become knowledge until she has internalised the understanding of the proof as a subjective experience.

    Reading your post makes me think that there must be a connection between understanding and consciousness. Understanding as knowledge, and knowledge as justified true belief; Justified, which is logical and therefore objective, True, which is collective (with reference to your previous post), and belief which is subjective (I’m trying to avoid absolute truth in this context, not because I don’t think that there is an objectively true reality (I do), but that it is anyway not available to us, so invoking truth in an absolute sense does not seem very fruitful, and it is my impression (possibly flawed) that critique of this definition of knowledge stems from a notion of absolute truth…)

    There must be a difference between consciousness and awareness, and there must be an element of reflection in consciousness. And our understandings are based on categories and taxonomies. A child does not understand; a child learns categories and mimics. It is in adolescence that we start to understand things in a real manner (barring the false binary of understanding/not understanding). To understand is to be conscious, and to be conscious is to understand reflectively.

    The strength of an understanding is that it becomes applicable in a context different from where it was formed. That is how we test our understandings, to see if they hold as universally as they appear, and that is how we update and transform our understandings. A philosophical zombie could possibly act as if it understands something, but it would fail in this regard, it would fail exactly in the testing of the understanding by attempting to apply it in a different context, and it would fail in updating it accordingly.

    So, observing a red rose happens both at the level of awareness and understanding. We can be made aware of the object rose and its quality red. These are of course both categories we have learned to internalise and the real object and its colour conform to those internal categories. We also have other associations which go beyond the mere awareness, such as roses symbolising romance, reminding us of their scents, and red being our favourite colour, or whatever associations we may have with the object and the quality which is all part of their categories. Could it be that in this understanding consciousness dwells?

    Also: Consciousness is discussed as if it itself was an absolute property of a creature, either we have it or we do not. If it is connected instead with reflective understanding, we are only conscious when we are aware–and aware of being aware in the world. So, there must be an element of meta-cognition which is certainly not a constant. I guess this is the phenomenological notion of being in the interaction between our inner categories and taxonomies, and our reflected comparisons with what they are meant to represent in the real world which creates an ever changing interplay between inner and outer reality…

    Does any of this make sense?

  2. I suspect the crowbar to open up this problem lies in the malleability of qualia.

    So, for example, there is synesthesia(?): that sounds can take on colors is the common report, but if I remember correctly other mixings of qualia are reported. Moreover, I believe there are cases of synesthesia being gained after an encephalitic disease. If these reports are accurate, we have a start on getting the parameters. For example, now blue is not simply a way of delineating the visual field but some sort of qualitative entity that can be recognized in the auditory field.

    Of course, people speak of, and I have, the distinct sensation of colors in odor, brown and green being the most prominent. Yet the suspicion that this is simply conditioning is hard to shake.

    We also have intense visualization techniques where pseudo-images can be brought into the visual field.

    Lastly, and this is what really piques my interest: while the confidence-to-report model of qualia is clever, it strikes me that credence-in-reports is more consistent with how people use qualia.

    It’s not that I am super confident in reporting to others that a rose is red; it is that I immediately and drastically downgrade my confidence in someone else’s report on roses if it seems oblivious to the fact that they are red: “What, are you blind!”

    In this way qualia may serve as a minimum-standards bullshit detector: a judgment so readily available to consciousness that if you get it wrong, you can be easily and confidently identified as a fraud. This could be key as a separating equilibrium, for everyone will be hesitant to form a faction with a leader who seems to be ignorant of an important quale, and everyone knows this. This common knowledge of doubt will lead even sympathetic supporters to the other side.

    So now the fraudster cannot even use social-political pressure to perpetuate the fraud.

    This is an exaggeration, of course, but it is notable that “The Emperor Has No Clothes” is a classic story precisely because it describes the seemingly untenable position in which social-political pressure has overcome qualia.

    Do you see? If so could you explain it to me? Cause I am in the dark here.
