Introspection and How It Is Used In Psychology Research
Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."
Amanda Tust is an editor, fact-checker, and writer with a Master of Science in Journalism from Northwestern University's Medill School of Journalism.
Introspection is a psychological process that involves looking inward to examine one's own thoughts, emotions, judgments, and perceptions.
In psychology, introspection refers to the informal process of exploring one's own mental and emotional states. Historically, however, the term also refers to a more formalized process that was once used as an experimental technique. Learn more about uses for introspection, a few examples, and how to be more introspective.
Uses for Introspection
Introspection is important for several reasons. Among them are that it helps us engage in reflection, it assists with research, and it can be a valuable tool in mental health treatments involving psychotherapy.
One way to use introspection is for reflection, which involves consciously examining our internal psychological processes. When we reflect on our thoughts, emotions, and memories and examine what they mean, we are engaged in introspection.
Doing a reflective dive into our own psychology can help improve our levels of self-awareness. Being self-aware and gaining self-insight through the act of reflection is connected with higher levels of resilience and lower levels of stress. In this way, introspective reflection aids in personal growth.
Research Technique
The term introspection is also used to describe a research technique that was first developed by psychologist Wilhelm Wundt. Also known as experimental self-observation, Wundt's technique involved training people to analyze the content of their own thoughts as carefully and objectively as possible.
Some historians suggest that introspection is not the most accurate term for the methods that Wundt utilized. They contend that introspection implies a kind of armchair soul-searching, whereas the methods Wundt used constituted a much more controlled and rigid experimental technique.
In everyday use, introspection is a way of looking inward and examining one's internal thoughts and feelings. As a research tool, however, the process was much more controlled and structured.
Psychotherapy
Introspection can also be useful in psychotherapy sessions. When both practitioners and patients have the ability to be introspective, this aids in the development of the therapeutic relationship and can even affect treatment outcomes.
Engaging in introspection-based activities has been found beneficial for certain mental health conditions. For example, when people with depression engaged in emotional introspection, they were able to downregulate activity in the amygdala, an area of the brain involved in emotional processing.
The term introspection can be used to describe both an informal reflection process and a more formalized experimental approach that was used early on in psychology's history. It's also used in psychotherapy sessions.
History of Introspection in Psychology
The process that Wundt used is what set his methods apart from casual introspection. In Wundt's lab, highly trained observers were presented with carefully controlled sensory events. Wundt believed that the observers needed to be in a state of high attention to the stimulus and in control of the situation. The observations were also repeated numerous times.
What was the purpose of these observations? Wundt believed that there were two key components that make up the contents of the human mind: sensations and feelings.
In order to understand the mind, Wundt believed that researchers needed to do more than simply identify its structure or elements. Instead, it was essential to look at the processes and activities that occur as people experience the world around them.
Wundt focused on making the introspection process as structured and precise as possible. Observers were highly trained and the process itself was rigid and highly controlled.
In many instances, observers were asked simply to respond with a "yes" or "no." In some cases, observers pressed a telegraph key to give their response. The goal of this process was to make introspection as scientific as possible.
Edward Titchener, a student of Wundt's, also utilized this technique, although he has been accused of misrepresenting many of Wundt's original ideas. While Wundt was interested in looking at the conscious experience as a whole, Titchener instead focused on breaking down mental experiences into individual components and asked individuals to describe their mental experiences of events.
Benefits of Introspection
While introspection has fallen out of favor as a research technique, there are many potential benefits to this sort of self-reflection and self-analysis. Among them are:
- Introspection can be a great source of personal knowledge, enabling you to better recognize and understand what you're thinking and feeling. This leads to a higher level of self-awareness, which can help promote mental health and increase happiness.
- The introspective process provides knowledge that is not available in any other way; no other process or approach can provide this information. The only way to understand why you think or feel a certain way is through self-analysis or reflection.
- Introspection can help people make connections between different experiences and their responses. For example, when engaging in self-reflection after a disagreement with your spouse, you may recognize that you responded defensively because you felt belittled or disrespected.
- Introspection can improve our capacity for empathy. The more we understand ourselves, the easier it becomes to understand others. We're able to put ourselves "in their shoes" and empathize with how they may feel.
- Introspection makes us stronger leaders. While some believe that being a good leader requires self-confidence, others contend that self-awareness is more important. People who understand themselves are better able to lead others effectively and often make better decisions.
Drawbacks of Introspection
Introspection is not a perfect process, and it can come with a few drawbacks.
People often give greater weight to their own introspections while judging others on their outward behavior. This can produce bias without the person ever recognizing that a bias exists.
Even when their introspections don't provide useful or accurate information, people often remain confident that their interpretations are correct. This is a phenomenon known as the introspection illusion.
Cognitive biases are a good example of how people are often unaware of their own thoughts and biases. Despite this, people tend to be very confident in their introspections.
Bias can also exist during research studies using introspection. Because observers have to first be trained by researchers, there is always the possibility that this training introduces a bias to the results.
This bias can influence what they observe. Put another way, observers engaged in introspection might be thinking or feeling things because of how they have been influenced and trained by the experimenters.
Rumination involves obsessing over things or having them run through your mind over and over again. When trying to figure out the inner workings of the mind, one can end up ruminating on these "discoveries," which can have a negative impact on mental health.
For example, in a study of adolescents with depression, researchers found that these teens tended toward maladaptive introspection with high levels of rumination, contributing to the worsening of their symptoms.
Subjectivity
While Wundt's experimental techniques did a great deal to advance psychology as a more scientific discipline, the introspective method had a number of notable limitations. One is that the process is subjective, making it impossible to verify or replicate the results.
When using introspection in research, different observers often provided significantly different responses to the exact same stimuli. Even the most highly trained observers were not consistent in their responses.
Limited Use
Another problem with introspection as a research technique is its limited use. Complex subjects such as learning, personality, mental disorders, and development are difficult or even impossible to study with this technique. This technique is also difficult to use with children and impossible to use with animals.
Examples of Introspection
Sometimes, seeing examples can help increase your understanding of a particular concept or idea. Some examples of introspection in everyday life include:
- Engaging in mindfulness activities designed to increase self-awareness
- Journaling your thoughts and feelings
- Practicing meditation to better understand your inner self
- Reflecting on a situation and how you feel about it
- Talking with a mental health professional while exploring your mental and emotional states
How to Be Introspective
If you want to be more introspective, there are a few things you can do to assist with this.
- Ask yourself "what" questions. When trying to figure out our thoughts and emotions, we often ask ourselves "why" we feel the way we do. However, research indicates that "what" questions are more effective for improving introspection. For instance, instead of asking why you feel sad, ask what makes you feel sad. This can yield more concrete self-insight.
- Be more mindful. Introspection is a thoughtful exploration of what you're thinking and feeling in the moment. This requires being present, or more mindful. Greater mindfulness can be achieved in many different ways, including journaling and meditation.
- Expand your curiosity. Curiosity about your inner self can help you better understand your emotions, reflect on your past, and explore your identity and purpose. Get in touch with your curious side. With curiosity comes exploration, providing a clearer understanding of your psychological workings.
- Spend some time alone, doing nothing. If the world around you is always busy, it can be difficult to quiet your mind enough to explore its inner workings. Regularly make time to be alone, removing all distractions from your surroundings. This helps create an environment in which you can take a deeper dive into your psychological processes.
The use of introspection as a tool for looking inward is an important part of self-awareness and is even used in psychotherapy as a way to help clients gain insight into their own feelings and behavior.
While Wundt's efforts contributed a great deal to the development and advancement of experimental psychology, researchers now recognize the numerous limitations and pitfalls of using introspection as an experimental technique.
Cowden RG, Meyer-Weitz A. Self-reflection and self-insight predict resilience and stress in competitive tennis. Soc Behav Personal. 2016;44(7):1133-1149. doi:10.2224/sbp.2016.44.7.1133
Brock AC. The history of introspection revisited. In: Clegg JW, editor. Self-Observation in the Social Sciences. London: Taylor & Francis; 2018:25-44. doi:10.4324/9781351296809-3
Anders A. Introspection and psychotherapy. SFU Res Bull. 2019. doi:10.15135/2019.7.2.55-70
Herwig U, Opialla S, Cattapan K, Wetter TC, Jäncke L, Brühl A. Emotion introspection and regulation in depression. Psychiat Res: Neuroimag. 2018;277:7-13. doi:10.1016/j.psychresns.2018.04.008
Hergenhahn B. An Introduction to the History of Psychology.
Pal M. Promoting mental health through self awareness among the disabled and non-disabled students at higher education level in North 24 Parganas. Int Res J Modern Eng Tech Sci. 2021;3(1):51-53.
Jubraj B, Barnett NL, Grimes L, Varia S, Chater A, Auyeung V. Why we should understand the patient experience: clinical empathy and medicines optimisation. Int J Pharm Pract. 2016;24(5):367-370. doi:10.1111/ijpp.12268
Zhao X. On leadership and self-awareness. Wharton Magazine.
Lilienfeld SO, Basterfield C. Reflective practice in clinical psychology: Reflections from basic psychological science. Clin Psychol Sci Pract. 2020;27(4):e12352. doi:10.1111/cpsp.12352
Kaiser RH, Kang MS, Lew Y, et al. Abnormal frontoinsular-default network dynamics in adolescent depression and rumination: a preliminary resting-state co-activation pattern analysis. Neuropsychopharmacol. 2019;44:1604-1612. doi:10.1038/s41386-019-0399-3
Eurich T. What self-awareness really is (and how to cultivate it). Harvard Business Review.
Litman JA, Robinson OC, Demetre JD. Intrapersonal curiosity: Inquisitiveness about the inner self. Self Ident. 2017;16(2):231-250. doi:10.1080/15298868.2016.1255250
Pronin E, Kugler MB. Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot. J Exp Soc Psychol. 2007;43(4):565-578. doi:10.1016/j.jesp.2006.05.011
Wilhelm Wundt: Father of Psychology
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
Wilhelm Wundt opened the Institute for Experimental Psychology at the University of Leipzig in Germany in 1879. This was the first laboratory dedicated to psychology, and its opening is usually thought of as the beginning of modern psychology. Indeed, Wundt is often regarded as the father of psychology.
Wundt was important because he separated psychology from philosophy by analyzing the workings of the mind in a more structured way, with the emphasis being on objective measurement and control.
This laboratory became a focus for those with a serious interest in psychology, first for German philosophers and psychology students, then for American and British students as well. All subsequent psychological laboratories were closely modeled in their early years on the Wundt model.
Wundt’s background was in physiology, and this was reflected in the topics with which the Institute was concerned, such as the study of reaction times, sensory processes, and attention. For example, participants would be exposed to a standard stimulus (e.g. a light or the sound of a metronome) and asked to report their sensations.
Wundt’s aim was to record thoughts and sensations, and to analyze them into their constituent elements, in much the same way as a chemist analyses chemical compounds, in order to get at the underlying structure. The school of psychology founded by Wundt is known as voluntarism, for its emphasis on the will’s power to organize the contents of the mind.
During his academic career Wundt trained 186 graduate students (116 in psychology). This is significant as it helped disseminate his work. Indeed, parts of Wundt’s theory were developed and promoted by his one-time student, Edward Titchener, who described his system as Structuralism , or the analysis of the basic elements that constitute the mind.
Wundt wanted to study the structure of the human mind (using introspection). Wundt believed in reductionism. That is, he believed consciousness could be broken down (or reduced) to its basic elements without sacrificing any of the properties of the whole.
Wundt argued that conscious mental states could be scientifically studied using introspection. Wundt’s introspection was not a casual affair, but a highly practiced form of self-examination. He trained psychology students to make observations that were not biased by personal interpretation or previous experience, and used the results to develop a theory of conscious thought.
Highly trained assistants would be given a stimulus such as a ticking metronome and would reflect on the experience. They would report what the stimulus made them think and feel. The same stimulus, physical surroundings and instructions were given to each person.
Wundt’s method of introspection did not remain a fundamental tool of psychological experimentation past the early 1920s. His greatest contribution was to show that psychology could be a valid experimental science.
Therefore, one way Wundt contributed to the development of psychology was by doing his research in carefully controlled conditions, i.e., using experimental methods. This encouraged other researchers, such as the behaviorists, to follow the same experimental approach and be more scientific. However, psychologists such as Skinner have argued that introspection was not really scientific, even if the methods used to introspect were. Skinner claimed that the results of introspection are subjective and cannot be verified, because only observable behavior can be objectively measured.
Wundt concentrated on three areas of mental functioning: thoughts, images, and feelings. Some of these areas are still studied in cognitive psychology today. This means that the study of perceptual processes can be traced back to Wundt. Wundt’s work stimulated interest in cognitive psychology.
On the basis of his work, and the influence it had on psychologists who were to follow him, Wundt can be regarded as the founder of experimental psychology, so securing his place in the history of psychology. At the same time, Wundt himself believed that the experimental approach was limited in scope, and that other methods would be necessary if all aspects of human psychology were to be investigated.
Introspection: the analysis of consciousness.
At least, consciousness is something we know we have. According to Descartes, that we are conscious is the only thing we can know for sure. This certainty formed the basis for Descartes' insights, Cogito, ergo sum ("I think, therefore I am") and sum res cogitans ("I am a thing that thinks").
From the time of Descartes on, introspection remained the primary -- no, the only -- method for investigating consciousness. After all, the philosophical method consists of introspection and reasoning.
- Based on his introspections, Descartes concluded that body could be studied with the methods of science, but mind must be studied through introspection.
- Gottfried Wilhelm Leibniz argued that apperception (inner awareness, or self-consciousness) was the essence of consciousness. In his view, perception was possible without consciousness, but consciousness entails awareness of perception -- and the ability to introspect on what one has perceived.
- Kant discussed apperception as an "internal sense" analogous to sense-perception.
How did Descartes know he was conscious? How do we know that we are? Because we experience ourselves as observing, sensing, perceiving, knowing, remembering, thinking, intuiting, feeling, wanting, willing, intending, and doing. These paradigm cases of the monitoring and controlling aspects of consciousness are what consciousness is all about.
Consciousness is the totality of sensations, perceptions, memories, ideas, attitudes, feelings, desires, activities, etc., of which we are aware at any given time. Consciousness consists in our awareness of events and of the meaning we give to them, and of the strategies that we plan and execute to deal with them.
Actually, the word "consciousness" means a lot of different things. Thomas Natsoulas of UC Davis published a very useful paper in which he analyzed the seven different definitions of consciousness provided by the Oxford English Dictionary (American Psychologist, 1978): joint or mutual knowledge, internal knowledge or conviction, awareness, direct awareness, personal unity, normal waking state, and double consciousness. A later paper (Journal of Personality & Social Psychology, 1981) analyzed the many different "problems" of consciousness studied by philosophers and psychologists: conscious experience, Intentionality, imagination, awareness, introspection, personal unity, the subject, "consciousness" (as more or less), the normal waking state, conscious behavior, and explicit consciousness. Although these papers were purely exegetical in nature, and contained no empirical data, they were milestones in what we can call the "Consciousness Revolution" in cognitive psychology.
William James on the Stream of Consciousness
William James (1842-1910) -- trained as a physician, employed as a professor of philosophy, pioneering American psychologist -- serves as a link between strictly philosophical and psychological analyses of consciousness. Called "the greatest of the 19th-century introspective psychologists" (Farthing, 1992, p. 25), James nonetheless had little interest in the tightly controlled, experimental or "analytical" introspection of Wundt and Titchener. James assembled a collection of "brass instruments" for experimental introspection at the Harvard psychological laboratory, but he himself never used them, and as soon as he could he arranged for a new colleague, Hugo Munsterberg, to be hired to take over the laboratory work so that he could get back to his writing, based "first and foremost and always" on the method of "looking into our own minds and reporting what we there discover" (James, 1890, p. 185).
James's Introspective Analysis
The introspective analysis of "the stream of consciousness" that James offered in his Principles of Psychology (1890/1980, Chapter 9) has never been equaled.
Here's a summary:
- Personal Subjectivity: Consciousness is a property of individual minds, something that each person possesses him- or herself.
- Constant Change: We never have quite the same conscious state twice, because the second instance has been affected by the first.
- Continuity Despite Change: Consciousness flows continuously from the time we wake up until the time we fall asleep.
- Intentionality: Consciousness is always consciousness of something: there are no "pure", content-free states of consciousness.
- Selective Attention: We can voluntarily direct our attention toward some contents and away from others.
Unconscious Mental Life
Obviously, introspection is limited to conscious mental life, raising the question of whether there is an unconscious mental life consisting of percepts, memories, thoughts, feelings, and desires of which we have no phenomenal awareness. James's position on unconscious mental life was complicated. Because he identified consciousness with thought, the notion of unconscious mental states (as opposed to unconscious brain processes) struck him as a contradiction in terms. Further, James adopted the doctrine of esse est sentiri: the essence of consciousness (its "to be") is to be sensed. Mental states are felt; therefore they cannot be unconscious.
In Chapter 6 on "The Mind-Stuff Theory" (which otherwise was devoted to a critique of structuralism), James considered and rebutted 10 "proofs" of the existence of unconscious mental states. These 10 ostensible proofs are stated as follows.
- "The minimum visible, the minimum audible, are objects composed of parts, which affect the whole without themselves being separately sensible." Therefore, these petites perceptions (Leibniz) are unconscious. This "proof" relates to the modern concept of "subliminal" perception and "preconscious" processing. But for James, if they're not sensed, they can't be mental states, because of the doctrine of esse est sentiri.
- Learned habits start out as deliberate, and then become automatic and take place outside of consciousness. This "proof" obviously anticipates the modern interest in automaticity and attention. But for James, these automatic processes are performed consciously, but so quickly that they leave no traces in memory. Remember that for James, introspection is really retrospection.
- Thoughts of A can evoke thoughts of C through the logical link of B, even though we are not conscious of B. But for James, B may have been consciously thought, but quickly forgotten. Or else, B is just a brain-process, and not a mental state.
- Incubation in problem solving during sleep, rational behavior of somnambulists, and awakening at a predetermined time without benefit of an alarm, all indicate that thought goes on while we are asleep, and thus unconscious. But for James, thought in sleep is conscious, but forgotten.
- Epileptic or hysterical patients, and hypnotic subjects, will engage in complex behaviors without being aware of them upon regaining consciousness. But for James, the explanation is rapid forgetting of a mental state that was once conscious.
- Musical concordance is produced by simple ratios which must be "counted" unconsciously. But for James, concordance is the product of brain processes, not counting.
- We often make judgments and have reactions for which we cannot give logical explanations: "We know more than we can say" (note the reversal of this proposition in Nisbett and Wilson's 1977 article, "Telling More Than We Can Know..."). But for James, these are just brain-processes.
- Instincts seem intelligent, because they pursue goals, but the intelligence is unconscious, because the goals are not in awareness. But for James, all of this happens via brain processes, without requiring any mental states at all.
- Perception is the product of unconscious inference (Helmholtz). But for James, as for Gibson more recently, rapid perceptual judgments are the products of cerebral associations, with no mental states being involved.
- We frequently discover that what we thought we believed we do not believe, and that we really believe the opposite -- this "real" belief, then, was unconscious. But for James, such a situation merely involves giving a name to a mental state which has not as yet been named, even though it has been in consciousness all along.
Some of these refutations, frankly, strike me as strained, glib, hand-waving. They are not, in my view, James at his best. And in some cases, James has simply been proved wrong. There is, now, good evidence of subliminal perception, of the automatization of mental processes, and of unconscious inference in perception. There are dissociations between explicit and implicit memory, etc., in hysteria and hypnosis. There is some evidence of incubation in problem-solving. All of these empirical facts seem to show that some of James's refutations were empirically wrong, and that there is "something it is like" to be an unconscious mental state after all.
In fact, James was already well aware of some of this evidence, in 1890, and even in the Principles he describes in positive terms evidence of apparent "unconsciousness" in hypnosis, hysteria, and multiple personality. For example, in hysterical blindness, the person claims to be unable to see, while continuing to respond to visual stimuli. This looks like "unconscious" vision.
James accepted the evidence of hypnosis and hysteria as legitimate, but his interpretation was different. Rather than postulate unconscious mental states, he referred to mental states of which we were unaware as co-conscious, subconscious, or as representing a secondary or tertiary (etc.) consciousness. This is not just playing with words. Remember that, for James, "thought tends to personal form". For James, consciousness could be divided into parallel streams, each associated with a representation of the self. Each of them is a fully conscious condition, but each of them is unaware of the others. When we ask what a person is aware of, the result of the inquiry will depend on which stream is being tapped. If we tap the primary stream, which is usually the case, the person will seem unaware of what is in the secondary stream(s); but if we tap one of the secondary streams, one of the other selves, we will see immediately that consciousness is there. Esse est sentiri, still, but it depends on who's being asked -- or, put another way, who's doing the feeling.
All of this sounds a little odd, but it's what seems to happen in hypnosis and "hysteria" -- about which more later.
Experimental Introspection
Introspection, the philosopher's traditional method for investigating consciousness, became the psychologist's method as well. And not just James (who, after all, was a philosopher -- and physiologist -- before he became a psychologist). In the hands of Wundt, Titchener (Wundt's most famous student, who carried the method to America), and other "Structuralists", introspection came to be the method for a "mental chemistry" by which complex conscious states could be analyzed into their constituent elements (for comprehensive reviews, see Boring, 1953; Danziger, 1980).
To quote from E.B. Titchener's Text-Book of Psychology (1910):
Scientific method may be summed up in the single word 'observation'.... The method of psychology, then, is observation. To distinguish it from the observation of physical science, which is inspection, a looking-at, psychological observation has been termed introspection, a looking-within. But this difference of name must not blind us to the essential likeness of the methods. In principle, then, introspection is very like inspection. The objects of observation are different: they are objects of dependent, not of independent experience; they are likely to be transient, elusive, slippery. Sometimes they refuse to be observed while they are in passage; they must be preserved in memory, as a delicate tissue is preserved in hardening fluid, before they can be examined. And the standpoint of the observer is different; it is the standpoint of human life and of human interest, not of detachment and aloofness. But, in general, the method of psychology is much the same as the method of physics.
Titchener (1898) also laid out the general rules for introspection (there were also specific rules, depending on the nature of the mental state being introspected):
- Be impartial ("Take consciousness as it is").
- Be attentive ("Take the experiment seriously").
- Be comfortable ("Take the experiment pleasantly").
- Be perfectly fresh ("Take the experiment vigorously").
Or, as Titchener advised: "The rule of psychological work is this: Live impartially, attentively, comfortably, freshly, the part of your mental life you wish to understand."
The big rule, however, was to avoid what Titchener (1905; Boring, 1921) called the stimulus-error . That is, the introspective observer should not confuse the sensation with the stimulus and its meaning. Observers were to base their reports on "mental material", not on the objects which gave rise to their mental states. The stimulus-error consists in describing the objects of perception and their meanings. But, for Titchener, the description of the stimulus, independent of experience, reflects the point of view of physics, not psychology.
In any event, as Boring (1953) made clear, classical experimental introspection, as practiced by Wundt (1896), Titchener (1905, 1910), and other Structuralists, was a kind of mental chemistry (Boring should know, as he was a student of Titchener's and knew Wundt). Consciousness contains complexes, analogous to molecules, which are composed of sensory elements, analogous to atoms. Oswald Külpe, another Structuralist, identified these elements as intensity, extensity, duration, and, most important -- because it was inherently psychological in nature -- quality. The quest for identifying the basic qualities of sensation is discussed in the lectures on Psychophysics, to which we will turn shortly.
Titchener was clear that, to quote James (1890/1980, p. 187), all introspection is retrospection (later, Jean-Paul Sartre said much the same thing in Being and Nothingness, 1957, p. 11). The Structuralists understood clearly that observing and reporting on experience would necessarily interfere with having the experience -- a kind of psychological anticipation of Heisenberg's (1927) uncertainty principle in physics. Accordingly, observers were carefully trained to have the experience first, and then report it from memory. This training, like training in avoidance of the stimulus-error, was painstaking, and involved as many as 10,000 trials (an anticipation of Anders Ericsson's "10,000-Hour Rule").
Titchener was also clear that experimental introspection involved going above and beyond mere verbal reports. Verbal reports, in his terms, were responses to the stimulus. Introspections were observations of experience.
In the final analysis, the psychologist's introspection was distinguished from the philosopher's introspection by the "scientific" means by which it was conducted:
- in a laboratory setting , with only a very short interval between perception and observation;
- employing experienced observers , for whom observation is an automatic habit with no self-consciousness attached;
- and replication of stimulus conditions , with the expectation that identical stimuli should generate identical experiences, time after time.
Critique of Introspection
James' analysis of mental life relied primarily on introspection. He had a collection of "brass instruments" in his teaching laboratory, but he rarely used them. He preferred to introspect and then psychologize. However, there were some differences between James's approach to introspection and that of the structuralists. (1) He believed that introspection was essentially memory-based, rather than on-line (i.e., "All introspection is retrospection"); the implication is that the introspective mental state (saying "I feel tired") is different from the pre-introspective mental state (feeling tired). (2) He believed that introspection was unreliable, and had to be checked by other means.
To this end, James outlined a number of methods to supplement introspection: (1) connecting conscious states with physical conditions; (2) analyzing space perception; (3) measuring the duration of mental processes; (4) reproducing sensory experiences and intervals of space and time; (5) studying how mental states influence each other (e.g., excitation and inhibition; span of apprehension); and (6) studying the laws of memory.
Still, introspection remained James' preferred method of psychological analysis -- and he thought that its results far outweighed those obtained (so far) by experimental analyses employing "brass instruments". But James was not entirely persuasive on this score, and as psychology developed, three quite different critiques of introspection emerged.
The Critique from Inside
Even the Structuralists understood that there were methodological problems with introspection.
- Despite their acknowledgement that "all introspection is retrospection", and the careful training of observers to observe and then report, they understood that introspections could be distorted by the very act of observation.
- And precisely because they understood that "all introspection is retrospection", they appreciated the possibility that forgetting, reconstruction, and inference could contaminate introspective reports.
- They also worried about self-censorship (though it's not clear that any of the stimuli used in introspective studies were particularly threatening).
- They understood the difficulty of verbal description -- not just with respect to the stimulus error, but also the ineffability of qualia.
- They also intuitively appreciated the importance of what Martin Orne (1962) would later call the demand characteristics of the psychological experiment -- that is, the tendency of subjects to perform the way they think the experimenter thinks they should.
- And they appreciated that, because consciousness is inherently subjective, there was no possibility for independent verification of their observers' reports -- aside from whatever consensus was achieved across observers.
Aside from these methodological problems, which investigators like Titchener did their best to surmount, there was One Big Problem with introspection -- which was that scientific psychology was gradually abandoning introspection in favor of an emphasis on human performance.
- Partly this was due to an expansion of the subject matter deemed appropriate for experimental study. Wundt and Titchener, like Fechner and Helmholtz, had confined themselves to problems of sensation and perception -- to immediate experience, closely tied to stimulation. But beginning with Ebbinghaus, and also with the studies of animal learning by Pavlov and Thorndike, psychology turned to aspects of mental life that could not be studied with introspection. Ebbinghaus (1884), by measuring memory strength in terms of savings in relearning, and James McKeen Cattell (1885), by determining the span of apprehension to be approximately 7 items, shifted the emphasis in psychology from introspection, or even self-report, to behavioral performance.
- This trend was exacerbated by the increasing interest in applied psychology, such as the invention of the intelligence test by Alfred Binet and Théodore Simon.
But the decisive critique of introspection came in John B. Watson's manifesto for behaviorism.
The Behaviorist Critique
The behaviorist critique of introspection is pretty straightforward: mental states are subjective and private, and science is based on objective, publicly available observations. Therefore you can't have a science based on introspection. You can only have a science based on what's observable, which is behavior and the stimulus circumstances under which it occurs.
Watson had other criticisms of introspection, such as the interminable controversies over topics like whether there was imageless thought (about which Karl Bühler and Wundt battled endlessly). Watson actually didn't object to introspection in studies of sensation and perception, where the stimuli can be controlled by the experimenter. The problems really arose when introspection was applied to the "higher" mental processes. If someone is going to introspect on thought processes, how could we be sure that two different observers were actually introspecting on the same thought? But these were merely methodological objections. The behaviorist critique of introspection was principled: you can't base a science on introspection; and psychology should be redefined as a science of behavior rather than as a science of mental life.
Watson's critique was echoed by B.F. Skinner, who wrote (among many other things) Science and Human Behavior (1953), intended to be an introductory textbook of psychology based on strict, radical behaviorism.
A Modern "Cognitivist" Critique
After the behaviorists were overthrown in the cognitive revolution, you might think that introspection would have been let back in. And in some sense it was.
In the first place, the basic data of cognitive psychology are self-report and response latency -- that is, what people report and how fast they report it.
- In a perception experiment, the experimenter presents a stimulus, and the subject reports what he sees.
- In a recognition experiment, the subject studies a list of words, the experimenter presents a test consisting of targets and lures, the subject indicates which is which by pressing a key, and the computer records both the judgment and the response latency.
- In a lexical decision task, the experimenter presents a series of letter strings on a computer screen, the subject presses a key to indicate whether the string is a legal English word, and the computer again records the judgment and response latency.
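The recognition and lexical-decision procedures just described share the same trial logic: present a stimulus, record a binary judgment, and time it. A minimal sketch of that logic, assuming Python (all names here are illustrative; a real experiment would collect timed keypresses with a package such as PsychoPy, and the simulated responder below merely stands in for a human subject):

```python
import random
import time

# Hypothetical stimulus sets for a lexical decision task.
WORDS = ["house", "table", "river"]       # legal English words (targets)
NONWORDS = ["hable", "tirev", "souhe"]    # pronounceable non-words (lures)

def run_trial(stimulus, respond):
    """Present one letter string; record the judgment and its latency."""
    start = time.perf_counter()
    judgment = respond(stimulus)              # True = "word", False = "non-word"
    latency = time.perf_counter() - start     # seconds from onset to response
    return {"stimulus": stimulus, "judgment": judgment, "latency": latency}

def simulated_subject(stimulus):
    # Stand-in for the human subject: a simple dictionary lookup.
    return stimulus in WORDS

stimuli = WORDS + NONWORDS
random.shuffle(stimuli)                       # randomize presentation order
trials = [run_trial(s, simulated_subject) for s in stimuli]

# Accuracy: proportion of trials on which the judgment matched word status.
accuracy = sum(t["judgment"] == (t["stimulus"] in WORDS) for t in trials) / len(trials)
```

The point of the sketch is only that the dependent measures are the judgment and its latency -- behavioral performance, not anything introspective.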
More substantively, introspections provided the data for one of the landmarks of the cognitive revolution, Allen Newell and Herbert Simon's (1972) "General Problem Solver". One of the first examples of artificial intelligence, GPS employed means-end analysis to solve all sorts of mathematical and scientific problems, and was explicitly based on subjects' reports of how they went about solving various kinds of problems -- a technique known as protocol analysis, which is basically introspective in nature. (Simon won the Nobel Prize in Economics in part for this work.) Later, K. Anders Ericsson, who was a student of Simon and Newell, introduced the "Model of Verbalization of Thinking" (Ericsson & Simon, 1990, 1993) -- a refinement of protocol analysis that is, again, essentially introspective in nature.
But that didn't mean that there weren't still problems with introspection that worried investigators (including Simon and Ericsson).
- First, and foremost, was the persisting problem that individuals' mental states are privileged, and introspection doesn't permit the kind of intersubjectivity that science traditionally mandates. For this reason, some psychologists and other cognitive scientists have preferred physiological data, including brain-imaging data -- measuring things like event-related potentials instead of self-reports. But any bio-marker (including brain-imaging data) must be validated against self-reports anyway, so that doesn't really solve the problem. When it comes to the science of mental life, there really isn't any way of avoiding self-reports.
- Setting this aside, there was James's point that "thought is in constant change". Sensations and perceptions might be stable enough to permit introspection, but thinking might be too dynamic -- with thoughts changing as they pass through our minds -- to permit observers to describe them in any detail.
- Relatedly, there is the problem that the very act of introspection -- attending to, describing, and reporting on one's thoughts -- may change the thoughts themselves. The problem here is a little like Heisenberg's Uncertainty Principle in physics, where the act of measurement may change the thing that's being measured. In the same way, thinking about a mental state can't help but change that mental state.
The cognitive revolution made consciousness a legitimate topic of scientific research again, but -- as we'll see later -- it also legitimized the study of unconscious mental life -- that is, percepts, memories, thoughts, and the like of which we have no awareness. This, in turn, drew attention to a further limitation of introspection -- which is that introspection, by definition, offers us a view limited to conscious mental life. You simply can't introspect on unconscious mental life. And if the scope of unconscious mental life is broad and deep, rather than narrow and shallow, introspection may miss as much as, or even more than, it hits.
This argument was made expressly by Richard Nisbett and Timothy Wilson, in a paper entitled "Telling More Than We Can Know: Verbal Reports on Mental Processes" (Psychological Review, 1977), which argued that people simply have "little or no direct introspective access to higher order cognitive processes". They reviewed old evidence, and presented new studies, supporting the following points:
- People can be unaware of a stimulus that influenced their behavior.
- People can be unaware of their response to a stimulus.
- People can be aware of the stimulus, and also of their behavior, but unaware of the causal connection between the stimulus and their behavior.
For example, Nisbett and Wilson conducted one of their studies in a department store, under the guise of a consumer survey. In one version of the study, the subjects -- actual shoppers, or at least window-shoppers -- were asked to evaluate four different nightgowns; in another version, they were asked to evaluate four pairs of women's stockings; in each case, the items were actually identical. Both studies revealed a marked position bias, such that items on the right-hand side of the display were much more likely to be preferred than those on the left. But when asked why they preferred the items they chose, not a single subject mentioned position. So, it seems, subjects were unaware of the connection between the position of the objects and their preferences. Nisbett and Wilson argue that this is the case more often than not.
What's the problem? Nisbett and Wilson distinguish between content and process . It's one thing, they say, to be aware of some mental state, like our preference for one nightgown over another, and it's quite another to be aware of the processes by which that mental state is constructed. And in general, they argue that mental processes are largely inaccessible to conscious awareness. So, if we want people to tell us what they like, they can do that (usually). But if we want people to tell us why they like it, we may be asking them to tell us more than they can know.
It might be said that the "nightgown" study and its like have certain methodological problems. For example, the study described doesn't really allow subjects a rational basis for their decisions. In the stocking version, for example, the four pairs presented for evaluation were, in fact, identical, so there was no way to choose between them. But the subjects were forced to express a choice, and they did. To be sure, they didn't seem to realize that their choices were biased by position -- and, more to the point, even if they did they would never have said so. Position is a ridiculous basis for preferring one pair of stockings over another, and subjects might think that, if they referred to position, they would be accused of not taking their job seriously. So even if they were aware of the influence, they wouldn't admit it. Distinguishing between what people are genuinely aware of, and what people are aware of but won't report, is a serious (but not unmanageable) problem in the scientific study of unconscious mental life.
Still, the content-process distinction is one that turns out to be important. As will be discussed later, in the lectures on " Attention and Automaticity ", a lot of mental operations appear to be performed automatically, and it's a property of automatic mental processes that they are unconscious in the strict sense that they are simply unavailable to introspective phenomenal awareness under any circumstances. Nisbett and Wilson do not explicitly refer to automaticity in their paper -- it was written before the distinction between automatic and controlled processes really took off. But if the argument is that we only have introspective access to controlled processes, but not to automatic processes, Nisbett and Wilson were onto something.
In addition, the philosopher Jerry Fodor ( The Modularity of Mind , 1983) has argued that some cognitive systems are modular in nature, performed by dedicated mental systems that are associated with a fixed neural architecture. Cognitive modules take some input, perform some transformation on it, and output this transformation to other parts of the cognitive system. According to Fodor's doctrine of modularity, the internal operations of these modules are inaccessible to other parts of the cognitive system -- which means, essentially, that these processes are inaccessible to introspection. Moreover, in the course of performing these transformations, the information may pass through one or more distinct states. Although these distinct states count as mental contents, and so might be accessible to introspection (by virtue of the process-content distinction discussed earlier), Fodor argues that these contents are also inaccessible to phenomenal awareness, and thus to introspection, precisely because they are encapsulated in these modules.
And finally, as Nisbett and Wilson also point out, there are some stimuli that are "subliminal" -- too weak, or too briefly presented, to be consciously perceptible. There is now a considerable amount of evidence that such "subliminal" stimuli can have palpable effects on experience, thought, or action. We'll discuss this evidence later, in the lectures on " The Explicit and the Implicit ".
So, Nisbett and Wilson were onto something, which is that there are limits to introspection. We can't introspect on subliminal stimuli, we can't introspect on automatic processes, and we can't introspect on the inner workings of cognitive modules. But that doesn't mean that introspection is always invalid -- that we're always, or even often, telling more than we can know. We know a lot about the stimuli in our environment, about our responses to them, and about what comes in between.
Philosophical Analyses of Consciousness
Many philosophers identify consciousness with phenomenal experience. As the philosopher Thomas Nagel argued in a famous essay, "What is it like to be a bat?" (1979), there is something it is like to be conscious. Conscious organisms have certain subjective experiences. This phenomenal experience, in turn, comes in several forms -- but how many?
Actually, some cognitive ethologists have tried to figure out what it's like to be a bat -- well, if not a bat, exactly, some other kind of nonhuman animal. Nagel's point, that there's something it's like to be conscious, directly inspired Bird Sense: What It's Like to Be a Bird (2012) by Tim Birkhead, an English behavioral ecologist. In an earlier book, The Wisdom of Birds: An Illustrated History of Ornithology (2008), Birkhead traced the evolution of our understanding of bird behavior. In Bird Sense, he tries to get inside the head of birds, to develop some idea of what their sensory experience is like. For example, birds can see in the ultraviolet range of the electromagnetic spectrum, meaning that a bird that appears quite drab to us may look spectacular to another bird. And the asymmetrical placement of an owl's ears permits it to triangulate on the noise made by its prey in a way that is not possible for us. Never mind the special magnetic sense that may enable birds to navigate over long migratory distances. Birkhead understands that it's not really possible to know what the bird sees in the ultraviolet range, or how it feels the pull of geomagnetism. But from his objective standpoint, he takes us closer than anyone before to the subjective life of another species. Rachel Carson, author of Silent Spring (1962), the book that raised the alarm about environmental pollution and triggered the environmentalist movement, was a marine biologist who, earlier in her career, wrote a trilogy of books about the world's oceans and their inhabitants: Under the Sea-Wind (1941), The Sea Around Us (1951), and The Edge of the Sea (1955) -- all reissued by the Library of America in 2022 (reviewed by Rebecca Giggs in "The Sea, the Sea", New York Review of Books, 11/22/2022). Giggs writes: The vantage Under the Sea-Wind takes on its characters is close-range, but it is not internalized and so the animals' feelings remain inaccessible....
The effect of this is that, though Under the Sea-Wind takes no overt stance on animal consciousness, the outlook on the ocean is inflected by whichever creature Carson places us next to. What we see gets tinted by a sensibility seeped out of nonhuman bodies and minds, as though color gels are being affixed to a lens. The ocean is manifold and unalike, as it turns out, to an owl, a raven, and a sanderling, or to a trout, an eel, or an anglerfish. To borrow an expression from the American critic Lawrence Buell, Carson's writing in Under the Sea-Wind proves an exercise in "disciplined extrospection" -- the studied relinquishment of a self-centered perspective, guided by reaching out toward, but never quite enclosing, the viewpoint of another species.
Carl Safina, in Beyond Words: What Animals Think and Feel (2015), approvingly quotes Voltaire on Descartes: "What a pitiful, what a sorry thing to have said that animals are machines bereft of understanding and feeling".
Thomas Thwaites, for example, realized that "to inhabit the mental life of a goat, he would need to relate to his surroundings in a goatlike way", and built a kind of prosthetic device which enabled him to do so, after a fashion -- an experience he wrote about in GoatMan: How I Took a Holiday from Being Human (2016). Actually, he initially wanted to be an elephant, and the Wellcome Trust, a British foundation that supports scientific research, approved his proposal. But when he consulted a Dutch shaman, she said his proposal was "idiotic", and that he should become a deer, or a sheep, or a goat instead. And so he did.
Charles Foster tried to accomplish much the same goal simply by living like a badger, as well as he could, for six weeks in the woods -- a story he told in Being a Beast (2016; reviewed by Vicki Constantine Croke in "'I Want to Know What It Is Like to Be a Wild Thing'", New York Times Book Review, 07/17/2016). Foster also tried his hand at being a fox, a red deer, and an otter -- the last a project in which he also enrolled his children.
Goats and badgers are, at least, mammals. In What a Fish Knows: The Inner Lives of Our Underwater Cousins (2016), Jonathan Balcombe tries to understand how the world appears to a fish -- and, more importantly, marshals anecdotal and scientific evidence about intelligent problem solving in various fish species.
And, for good measure, Andrew Barron and Colin Klein argue that insects and other invertebrates also possess at least the limited degree of self-awareness that comes with knowing where their bodies are in space and what they are doing ("What Insects Can Tell Us About the Origins of Consciousness", PNAS, 2016).
From a more conventionally scientific basis, Frans de Waal, in Are We Smart Enough to Know How Smart Animals Are? (2016), argues that the whole enterprise of comparative psychology mistakenly tries to compare other animals to humans. Instead, we should recognize that each animal species has its own unique, self-centered, subjective world -- what Jakob von Uexküll called its Umwelt -- which cannot be fully comprehended by any other species (reviewed by Elizabeth Kolbert in "He Tried To Be a Badger", New York Review of Books, 06/23/2016, which also reviews Foster's and Balcombe's books).
See also An Immense World: How Animal Senses Reveal the Hidden Realms Around Us by Ed Yong, a science writer (reviewed by Elizabeth Kolbert in "Contact", New Yorker, 06/13/2022, in an essay that also covers recent books on animal communication). Yong notes that different species have very different sensory capacities, leading them to perceive the world quite differently from the way we do (Yong refers to Nagel's essay, and uses von Uexküll's word Umwelt to refer to an animal's subjective world). Scallops, for example, have dozens or hundreds of eyes, but apparently don't see anything. Rather, their eyes function more as motion detectors, so that when a large enough object passes slowly enough through the water, they send a signal that opens the scallop's shell to catch some food. Yong covers a wide range of species, including the black ghost knifefish. Kolbert writes:
The black ghost knifefish is, as its name implies, a nocturnal hunter. By firing a specialized organ in its tail, a knifefish creates an electric field that surrounds it like an aura. Receptors embedded in its skin then enable it to detect anything nearby that conducts electricity, including other organisms. One researcher suggests to Yong that this mode of perception, known as active electrolocation, is analogous to sensing hot and cold. Another posits that it's like touching something, only without making contact. No one can really say, though, since humans lack both electric organs and electroreceptors. "Who knows what it's like for the fish?".... Yong's response to Nagel... runs along the lines of "Yes, but...". Yes, we can never know what it's like for a bat to be a bat (or for a knifefish to be a knifefish). But we can learn a lot about echolocation and electrolocation and the many other methods that animals use to sense their surrounds. And this experience is, for us, mind-expanding.
On the occasion of the publication of his book, Yong wrote an OpEd piece in the New York Times entitled "How Animals See Themselves" (06/21/2022), which also cites Nagel's essay. Yong is critical of nature documentaries, even the best of them, which always seem to portray animals' lives through a filter of human narratives: "An elephant family searches for water.... A lonely sloth swims in search of a mate...." The result is a subtle form of anthropomorphism, in which animals are of interest only if they satisfy familiar human tropes of violence, sex, companionship and perseverance. They're worth viewing only when we're secretly viewing a reflection of ourselves.
A tick’s Umwelt is limited to the touch of hair, the odor that emanates from skin and the heat of warm blood. A human’s Umwelt is far wider but doesn’t include the electric fields that sharks and platypuses are privy to, the infrared radiation that rattlesnakes and vampire bats track or the ultraviolet light that most sighted animals can see. The Umwelt concept is one of the most profound and beautiful in biology. It tells us that the all-encompassing nature of our subjective experience is an illusion, and that we sense just a small fraction of what there is to sense. It hints at flickers of the magnificent in the mundane, and the extraordinary in the ordinary.... By thinking about our surroundings through other Umwelten, we gain fresh appreciation not just for our fellow creatures, but also for the world we share with them. Through the nose of an albatross, a flat ocean becomes a rolling odorscape, full of scented mountains and valleys that hint at the presence of food. To the whiskers of a seal, seemingly featureless water roils with turbulent currents left behind by swimming fish — invisible tracks that the seal can follow. To a bee, a plain yellow sunflower has an ultraviolet bull’s-eye at its center, and a distinctive electric field around its petals. To the sensitive eyes of an elephant hawk moth, the night isn’t black, but full of colors.
Reviewing a number of such books, Martha Nussbaum, an ethics philosopher at the University of Chicago, writes:
The world we share with the other animals is stranger and more wondrous than humans have typically realized.... As de Waal puts it...: We used to think in terms of a linear ladder of intelligence with humans on top, but nowadays we realize it is more like a bush with lots of different branches, in which each species evolves the mental powers it needs to survive.
The new learning about animal lives and their complexity has large ethical implications. At the most general level we must face up to the fact that many, if not most, animals are not automata or "brute beasts" but creatures with a point of view on the world and diverse ends toward which they strive -- and that we interfere with these forms of life in countless ways, even when we do not directly cause pain.
For more on animal consciousness, see the lectures on "The Origins of Mind"; see also "The Metamorphosis" by Joshua Rothman (New Yorker, 05/30/2016), from which some of these quotes are drawn.
Technically, Brian Farrell, a British philosopher, first posed the question of "what it would be like to be a bat" in a paper entitled "Experience" (Mind, 1950). But Nagel popularized the question, and it's his essay with that title that has entered the canon of philosophical examinations of consciousness.
Faculties of Mind
In the late 18th century, the philosopher Immanuel Kant offered a tripartite classification of "irreducible" mental faculties: knowledge, feeling and desire -- or, as the 20th-century psychologist Ernest R. ("Jack") Hilgard (1980) put it, the "trilogy of mind": cognition (having to do with knowledge and belief), emotion (having to do with feeling, affect, and mood), and motivation (having to do with desires, goals and drives). Cognition, emotion, and motivation are three different mental functions, but they also serve as three broadly different types of mental states. According to this view, perceiving and remembering are different mental states, but they have in common that they are cognitive states of knowing; anger and fear are also different mental states, but they have in common that they are emotional states of feeling; hunger and thirst are different mental states, but they have in common that they are motivational states of desire.
Based on the Kant-Hilgard analysis, then, as a first pass we can identify three different qualitative states of mind, each corresponding to one of his "irreducible" mental faculties. Put another way,
- cognitive states of "knowing" something are qualitatively different from
- emotional states of "feeling" something, which in turn are qualitatively different from
- motivational states of "desiring" something.
Kant asserted that the trilogy of mind was irreducible, in that states of feeling and desire, for example, could not be further reduced to states of knowledge and belief. However, this point is controversial. Within both psychology and cognitive science, some theorists believe that cognition is the fundamental faculty, and that emotional and motivational states are reducible to cognitive states. Put another way, emotions and motives are cognitive constructions. In this view, the basic mental state is one of belief, and feelings and desires are actually beliefs about our feelings and desires. I call this situation the hegemony of the cognitive, and it is not a figment of my imagination. Cognitive psychology and cognitive science are both full of theorists who take the view that feelings and desires are cognitive constructions. Chief among these are Stanley Schachter (of Columbia University) and Richard Lazarus (late of UC Berkeley).
Disciplines of Mind
Whether you believe that cognition, emotion, and motivation are irreducible, or that emotion and motivation are the products of cognitive construction, has implications for academic organization that lie at the heart of the relations between psychology and cognitive science. If everything boils down to cognition, then cognitive science can be a complete science of the mind. But if emotional and motivational states are independent of cognition, then it follows that cognitive science can't do it all, and must be supplemented by affective and conative sciences.
In fact, at the end of the 20th century a new interdisciplinary field began to emerge known as affective neuroscience, modeled on cognitive neuroscience but dedicated to the proposition that the principles of emotion were different from the principles of cognition -- otherwise, you wouldn't need a new field, would you? If affective neuroscience takes hold, can a conative neuroscience be far behind? And if you're going to have separate fields for cognition, affection, and conation, why not just do psychology, which already encompasses all three?
It's important to remember that cognitive science arose in reaction to the dominance of behaviorism within psychology, its rejection of mentalistic concepts, and its unwillingness to consider mental processes as mediating between stimulus and response. In the final analysis, however, there's nothing that cognitive science does that psychology can't do, and psychology provides coverage of the mind that cognitive science can't.
If the term "cognitive" in cognitive science is really a euphemism, and cognitive science is really concerned with the mind in its entirety, including emotion and motivation, then it might just as well be called psychology .
Philosophers often describe mental states in terms of qualia , or the phenomenal qualities of conscious experience -- "raw feels", if you will. These are the conscious experiences that Descartes could not bring himself to doubt -- the bundles of distinct sensory qualities that make up our conscious experiences. Qualia (singular quale ) refer to the distinctive states of mind associated with various sensory experiences. There is "something it is like" to see rather than hear, or to smell rather than taste, and there is "something it is like" to see red as opposed to blue, or to taste sweet as opposed to sour, and these "somethings" are the differences between and among qualia.
In his essay, "Quining Qualia", the philosopher Daniel Dennett has listed four ostensible properties of qualia:
- Ineffable : One cannot describe elementary sensory experiences to someone else.
- Intrinsic : They are somehow atomic and unanalyzable.
- Private : We cannot make interpersonal comparisons of qualia.
- Directly Apprehended : Qualia do not depend on any mediating processes or inferences.
It's important to note that Dennett doesn't actually believe that qualia exist. We'll go into the reasons for this in the lectures on "Mind and Body", but for now understand that Dennett is simply summarizing what the traditional view of qualia is.
Of these qualities, perhaps the most important is ineffability (the others are either implications or effects of ineffability): There is no linguistic description of an experience such that understanding the description would enable someone who has never had the experience to know what that experience is like.
To illustrate this point, the philosopher Frank Jackson (1982, 1986) asks us to imagine the experience of " Mary, the color-blind scientist ". Mary, a visual neuroscientist, is raised from birth in an achromatic chamber, so that she is completely deprived of exposure to all color stimuli. Meanwhile, she learns all there is to know about the nervous system, and in particular, all there is to know about color vision. What happens if Mary should leave her chamber, and be exposed to color stimuli? Will she have an altogether new experience, of color? Jackson proposes that she will have entirely new experiences of color. Knowing how physical and neural processes give rise to the experience of color does not enable us to know what the experience itself is like.
Jackson offered yet another thought experiment, this time of " Fred, the Scientist with Super-Vision " (not his actual title, but it will do). Imagine Fred, another vision neuroscientist, who has a range of visual sensitivity that extends beyond the normal -- into the infra red (> 780 nm), say, or into the ultraviolet (<380 nm). An observer -- a colleague of Mary and Fred, for example -- knows all there is to know about the visual system, and all there is to know about color. Does that observer have any knowledge of what Fred's infra-red or ultra-violet vision looks like?
Jackson's story is one of those thought experiments that philosophers love to pose, and debate, but there is actually anecdotal evidence that bears on the question.
" Although I have acquired a thorough theoretical knowledge of the physics of colors and the physiology of the color receptor mechanisms, nothing of this can help me to understand the true nature of colours.... From the history of art I have also learned about the meanings often attributed to colours and how colours have been used at different times, but this too does not give me an understanding of the essential character or quality of colours.
The point of Jackson's stories, and of the two actual cases, is that our objective linguistic, conceptual, and scientific knowledge of color and color vision is not enough to give rise to the subjective experience of color. It is in this sense that the experience of red and green, yellow and blue is ineffable. It also implies that objective, third-person descriptions of color are not sufficient to yield the subjective, first-person experience of color.
Intentionality
Analyses of qualia are important in the study of consciousness, but we rarely experience disembodied reds and blues, sweets and sours. Rather, "red" and "sweet" are properties of the things we see and taste. For this reason, mental states are also often described in terms of their Intentionality . We don't think, or believe, or feel or desire in the abstract. Rather, we always think (etc.) something . Put another way, consciousness is representational -- it is always about something other than itself.
In fact, the 19th-century philosopher Franz Brentano argued that Intentionality is the mark of the mental . Specifically,
- all mental states are Intentional in nature, and
- only mental states are Intentional in nature.
Intentional states are typically expressed as propositional attitudes, which relate a person to some proposition about the world. For example:
- If the proposition P is that it is raining , then
- my mental state is that I believe (etc.) that it is raining , where
- relations such as "perceive", "know", and "remember", among others, may substitute for "believe".
The notion of Intentionality is a little confusing, because the word intention has a double meaning. In philosophical discourse, such as Brentano's aphorism that "Intentionality is the mark of the mental" cited above, Intentionality refers to the object toward which a mental state is directed, or its directedness ; in ordinary discourse, however, it simply refers to a property of an action, as in the familiar excuse that "I didn't intend to do it". Unfortunately, both these senses are relevant to the study of consciousness: Intentionality is a property of mental states, and it is also a feature of deliberate, conscious action. In order to keep the two senses separate, following the practice introduced by the UCB philosopher John Searle (in Intentionality: An Essay in the Philosophy of Mind , 1983), I will write Intentionality with an initial capital "I" when referring to the philosophical concept, and intentionality in all lower case when referring to the term in ordinary language. This makes some linguistic sense: Brentano wrote in German, and in German all nouns are capitalized -- like Intentionalität , meaning "directedness"; the German word for intention, in the ordinary-language sense, is Absicht , or perhaps the verb wollen ("to want").
Intentionality in Emotion and Motivation
The notion of Intentionality has been pretty well worked out for cognitive states like believing and knowing, but the corresponding solution for emotional and motivational states has always been a little unsatisfactory.
As already noted, Brentano argued that Intentionality is the mark of the mental: all mental states are Intentional in nature, and only mental states are Intentional in nature. Later, Bertrand Russell argued that Intentional states are represented by propositional attitudes, which state a relation between a person and some proposition P , where that proposition entails the person believing, knowing, feeling, or wanting (etc.) P . For John Searle, Intentional states are the means by which our minds relate us to the world.
Thus, in the statement John believes that it is raining outside, the proposition about the world is that it is raining, and John's relation to that proposition is an attitude of belief. From this point of view, propositions have truth value -- they are either true or false (it is either raining or not) -- or, as Searle prefers, truth conditions -- that is, they are true under certain conditions (i.e., when it really is raining outside).
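The structure of a propositional attitude can be sketched as a small data type. This is purely an illustrative toy model of my own, not anything proposed by Russell or Searle; the names `PropositionalAttitude` and `truth_value` are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class PropositionalAttitude:
    """A person stands in some attitude (believes, knows, wants...)
    toward a proposition about the world."""
    person: str       # e.g., "John"
    attitude: str     # e.g., "believes"
    proposition: str  # e.g., "it is raining outside"

    def describe(self) -> str:
        return f"{self.person} {self.attitude} that {self.proposition}"

# The proposition itself carries the truth conditions: it is true or
# false depending on the state of the world, independent of the attitude.
def truth_value(proposition: str, world: dict) -> bool:
    """Toy model: the 'world' maps propositions to facts."""
    return world.get(proposition, False)

state = PropositionalAttitude("John", "believes", "it is raining outside")
world = {"it is raining outside": True}
```

Here `state.describe()` yields "John believes that it is raining outside": the attitude relates John to the proposition, while the proposition's truth is settled by the world alone.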
The point is important because if Intentionality is the mark of the mental -- if all mental states are Intentional states -- then all conscious mental states must be Intentional in nature. The problem is that some conscious mental states don't seem to have propositional content.
- In the cognitive state John believes that pizza is nutritious food , the proposition is that pizza is nutritious food , and John believes it.
- But in the emotional state John likes pizza , there doesn't seem to be any propositional content.
- The situation is even worse in an emotional state like John is happy .
- As Searle and other philosophers have pointed out, certain pathological states, such as generalized anxiety disorder -- which is, essentially, pathological fear without an object -- don't have Intentionality in the Brentano-Russell sense.
- Similarly, in the motivational state John wants pizza there doesn't seem to be any propositional content either. Certainly there isn't any proposition; thus there is no truth value and there are no truth conditions.
- Nor is there any propositional content in the even more generalized state John is hungry .
The implication is that emotional and motivational states aren't mental states, because they aren't Intentional in nature. But that doesn't seem to be right either: feelings and desires are epitomes of conscious mental states.
One solution to this problem is to re-frame the emotional and motivational states as beliefs, the way cognitive constructivists do. Thus, the emotional state John likes pizza becomes the cognitive state John believes that he likes pizza which includes a propositional attitude, and a proposition that has truth value or truth conditions. In the same way, John believes he is happy .
We can pull the same trick with the motivational state John wants pizza by transforming it into the cognitive state John believes that he wants pizza . And, John believes that he's hungry.
This works, but the upshot of this tack is that emotional and motivational states are not irreducible after all, because they can be reduced to cognitive states -- to beliefs about our feelings and desires. That is fine if you're the kind of person -- a cognitive constructionist -- who approves of the "hegemony of the cognitive" in psychology and cognitive science, but it's bad if you're Immanuel Kant, who argued that knowledge (cognition), feeling (emotion), and desire (motivation) are irreducible faculties of mind. And the conclusion will also make other people, who think that emotional and motivational processes are at least partially independent of cognition, a little nervous. So what to do?
Searle on Intentionality
One solution is suggested by Searle's reanalysis of Intentionality, which downplays the importance of propositional content. From his point of view, all Intentional states have four components:
- Type , which specifies a particular relation between the person and the world; type is similar to attitude, but goes beyond "belief" to include perceiving, remembering, feeling, wanting, etc.;
- Content , which specifies some specific feature of the world; content is similar to the proposition but, as will become clearer below, is not necessarily propositional;
- Direction of Fit , which specifies the way the Intentional state relates the mind to the world -- the way in which the content of the state may be satisfied; and
- Conditions of Satisfaction , a feature similar to truth conditions, but which covers Intentional states that are not propositional in nature, and thus are neither true nor false.
Within this framework, cognitive and motivational (conative) states are clearly distinguishable.
Cognitive states (e.g., beliefs, percepts, and memories) have propositional content, of course, but they have what Searle calls mind-to-world direction of fit, by which he means that the mind is describing a current reality that exists independently of it. Thus, in the cognitive state George believes that Martha likes him the question is whether the mental state is an accurate reflection of the world outside the mind, and the condition of satisfaction is whether the description of the world is true or false -- or, more precisely, the conditions under which the description is true.
Conative states (e.g., motivational states of want, need, and desire), also have propositional content, but they have a world-to-mind direction of fit. That is, the mind is anticipating a future reality that does not presently exist. In the conative state George wants Martha to like him the question is whether the world can be brought to match the mental state, and the condition of satisfaction is whether the desire is satisfied -- or, more precisely, the conditions under which the desire might be satisfied.
Thus, both cognitive and conative states have propositional content (something about Martha liking George) but differ in terms of direction of fit:
- in cognitive states, the direction is mind-to-world (does the mind represent the world accurately?);
- in conative states, the direction is world-to-mind (can the world be changed so that it is represented accurately?).
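Searle's four components, and the contrast in direction of fit, can be sketched as a toy data model. The class and field names here are my own illustrative assumptions, not Searle's notation; the sketch just makes the structural contrast between the George-and-Martha examples explicit.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DirectionOfFit(Enum):
    MIND_TO_WORLD = "mind-to-world"  # cognitive: mind describes an existing reality
    WORLD_TO_MIND = "world-to-mind"  # conative: world is to be brought to match the mind
    NONE = "none"                    # affective: the content is already satisfied

@dataclass
class IntentionalState:
    type: str                # the relation: "believes", "wants", "is glad that", ...
    content: str             # what the state is about
    direction_of_fit: DirectionOfFit
    conditions_of_satisfaction: Optional[str]  # None if the state "just is what it is"

cognitive = IntentionalState(
    "believes", "Martha likes George",
    DirectionOfFit.MIND_TO_WORLD,
    "true just in case Martha really does like George")

conative = IntentionalState(
    "wants", "Martha to like George",
    DirectionOfFit.WORLD_TO_MIND,
    "satisfied just in case Martha comes to like George")

affective = IntentionalState(
    "is glad that", "Martha likes George",
    DirectionOfFit.NONE,
    None)  # no conditions of satisfaction: the content is taken as already true
```

On this sketch, the cognitive and conative states share their content but differ only in direction of fit, while the affective state is distinguished by lacking conditions of satisfaction altogether.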
John Searle takes up Brentano's position, arguing that Intentional states are the means by which our minds relate us to the world. All Intentional states take the form John believes that P , where P is some proposition about the world and believing is, in this instance, the relation between the person and that proposition.
Within this framework, emotional (affective) states differ from both cognitive and conative states because their content is not propositional in nature: the state just refers to some feature of the world. Thus, in the affective state George was glad that Martha liked him there is propositional content (something about Martha liking George), but no direction of fit because the propositional content is already satisfied. It's just true (or, at least, George believes that it's true) that Martha likes him. So emotional states differ from cognitive and motivational states because there are no conditions of satisfaction: they just are what they are.
This is especially the case for our most abstract emotional states, such as John was happy .
Given this analysis, cognition, emotion, and motivation may be irreducible after all, just as Kant asserted: they are clearly distinguished from each other by their conditions of satisfaction (emotional states don't have any) and direction of fit (cognitive states fit mind to world, and conative states fit world to mind).
This analysis (if it's actually correct) will satisfy the Kantians among us, but it does create something of a paradox. Brentano argues that Intentionality is the mark of the mental, and Searle argues that all Intentional states have conditions of satisfaction. But emotional states don't have conditions of satisfaction, and therefore can't be mental states. If they're not Intentional, then they're not mental. But emotional states are mental states, aren't they?
The solution suggested by Searle is that emotional states are partly or largely constituted by beliefs and desires. Therefore they have propositional content after all, and conditions of satisfaction, and direction of fit. Thus, in an affective state like George likes Martha the belief is something like George believes that Martha is likable and the desire is something like George wants Martha to like him in return. And in an affective state like George is glad that Martha likes him the belief is something like George believes that he is a likable person and the desire is something like George wants Martha to like him .
So, in terms of Searle's formulation at least, affective states are reducible after all, to combinations of cognitive and conative states. That's bad for those who want to develop an independent affective (neuro)science, but it still opposes the hegemony of cognition, because something besides belief (i.e., desire) is needed, and that something is not reducible to cognition (because cognitive and conative states differ in direction of fit). So perhaps Kant was wrong after all, and there are only two absolutely irreducible faculties of mind: knowledge and desire. If so, somebody better get started developing a conative (neuro)science, because we don't have one right now.
The problem is worse than this, however, because there are some affective and motivational states that don't have any propositional content at all.
Consider, for example, George is happy : there is no propositional content to be satisfied in either direction; and therefore there is no direction of fit. And in another example, Martha is hungry , again, there's no propositional content to be satisfied, no direction of fit, and no conditions of satisfaction. George is just happy, and Martha is just hungry. These are clearly mental states, aren't they? And if they are mental states, they are mental states that lack propositional content, direction of fit, and conditions of satisfaction.
Still and all, Intentionality and propositional attitudes lie at the heart of the philosophical doctrine of mentalism , which lies at the core of psychology:
Mental states stand in relation to actions as cause to effect.
The doctrine of mentalism lies at the core of psychology simply because psychological explanations of behavior invoke mental states as causal entities. The behaviorist movement in psychology rejected mentalism, and argued that behavior is caused by environmental stimuli, without any intervening mental states. Some philosophers (and, for that matter, some self-hating psychologists) also argue that mental states are irrelevant to behavior, leading to the positions described by Owen Flanagan as conscious inessentialism and epiphenomenalism .
So, we don't perceive qualia in the abstract; rather, our states of mind are "about" something. Intentionality has to do with this "aboutness", or the fact that consciousness is representational. Brentano proposed that Intentionality is the mark of the mental. Intentional states are represented by propositional attitudes (a term coined by Bertrand Russell), which state a relation (of believing, etc.) between a person and some proposition about the world.
- I put on my raincoat because I believe that it is raining .
Actually, of course, there has to be something else that stands between my belief and my action -- for example, a desire to achieve, or avoid, certain consequences. Thus,
- Because I believe it is raining, and I do not wish to get wet, I put on my raincoat.
According to the Doctrine of Mentalism in philosophy, these propositional attitudes cause us to behave the way we do; however, according to the contrary Doctrine of Epiphenomenalism, propositional attitudes are actually irrelevant to our behavior.
Subjectivity
The discussion of Intentionality has been a little misleading, because it has been framed in the third person, illustrated by the mental states of other people -- namely, George, Martha, and John.
But, as James put it so well, "Thought tends to personal form.... The universal conscious fact is not 'feelings and thoughts exist ' but ' I think' and ' I feel'."
Or, as Thomas Nagel put it, "there is something that it is like" to be conscious.
Or, as John Searle has put it, "Conscious states exist only as they are experienced by a human or animal subject".
This is the element of subjectivity. Consciousness is inherently subjective, and any analysis of consciousness that leaves it out misses the mark.
On the matter of subjectivity, there has been considerable ambiguity and confusion about the distinction between the objective and the subjective, which Searle has been at pains to try to straighten out.
In the first place, there is a distinction between objective and subjective ontology :
- Some things, like rocks and solar systems, are ontologically objective because they exist independent of the mind, attitudes, and feelings of the observer. Quoting from Searle's Mind, An Introduction (2004), "mountains, molecules, and tectonic plates" have an objective ontology. This is also referred to as a third-person ontology . They exist regardless of whether there is anyone to experience them.
- Other things, like conscious mental states, are ontologically subjective because they exist only insofar as they are experienced by an observer. Again quoting Searle (2004), "pains, tickles, suspicions, and impressions" have a subjective ontology. This is also referred to as a first-person ontology .
Conscious states would not exist if there were no one to experience them, so they have a subjective ontology. Their existence depends on an observer who experiences them.
In the second place, there is a distinction between objective and subjective epistemology .
- Some pieces of knowledge are epistemically objective because their truth value is independent of the attitudes and feelings of the knower.
- To use Searle's own examples, "Jones is six feet tall" is objectively true or false regardless of what anyone believes. So is the statement that "Rembrandt was born in 1606".
- Other pieces of knowledge are epistemically subjective because their truth value depends on the attitudes and feelings of the observer.
- To use Searle's examples again, "Jones is a nicer person than Smith" is not objectively true, because the validity of the statement depends on the attitudes and feelings of an observer. This is also true of "Rembrandt was the best Dutch painter ever".
The situation is further complicated, as Searle notes, by the distinction between observer-independent and observer-dependent (or observer-relative ) features of the world.
- Some entities are observer-independent, in that their existence is independent of human attitudes. Two of Searle's examples are "mountains and molecules", whose features are intrinsic to their physics.
- Other entities are observer-relative, in that their existence depends on human attitudes. Two of Searle's examples are "marriage and money", which are created by conscious mental activity.
Ordinarily, we'd think of ontologically objective entities as observer-independent and ontologically subjective entities as observer-relative, and that would be that. But Searle argues that consciousness is both ontologically subjective and observer independent.
- Consciousness is ontologically subjective because it exists only insofar as it is experienced.
- But consciousness is observer independent because "If I am in pain, it doesn't matter what anyone else thinks".
Conscious mental states are ontologically subjective, in that they do not exist independently of an observer. That is the challenge for a "scientific" approach to consciousness, and what makes some cognitive scientists nervous about the whole topic -- part of Owen Flanagan's "conscious shyness" is the "positivist suspicion" that consciousness cannot be studied scientifically precisely because it is subjective and private, while science is public and objective.
But as Searle points out, "Ontological subjectivity of the subject matter does not preclude an epistemically objective science of that same subject matter". The whole point of a course entitled "Scientific Approaches to Consciousness" is to achieve epistemically objective knowledge about ontologically subjective states of mind. If that's not possible, then we should all go home. But of course, it is possible. As Searle notes, psychology especially, but also neurology, cognitive science, and cognitive neuroscience are all dedicated to developing an epistemically objective knowledge of mind, including consciousness.
One common scientific approach is to reduce consciousness to brain processes (think of Lord Rutherford, who is said to have stated that "All science is physics; all the rest is stamp-collecting"). But, Searle argues, this tack must fail. According to him, you can't reduce ontologically subjective facts (e.g., about consciousness) to ontologically objective facts (e.g., about brain processes), because any such reduction leaves out subjectivity -- which is the thing that is supposed to be explained by the reduction.
Nor, for that matter, is it possible to explain consciousness with observer-relative facts. This would be circular, because observer-relative facts already presuppose consciousness.
One final point about observer-relativity and science. Observer-independent entities lie, generally, in the domain of the natural sciences, such as physics, chemistry, and biology. Observer-relative entities, which include all of the phenomena created by consciousness, lie in the domain of the social sciences, such as history and sociology. Psychology, as the science of mental life, is both a natural and a social science.
- As a natural science, psychology is concerned with observer-independent mental processes, and discovering universal laws like Miller's "magical number seven, plus or minus two" or Stevens' Law (which I'll discuss in the lectures on Psychophysics ).
- As a social science, psychology is concerned with observer-dependent mental contents, such as what a person knows, and what events mean to a person.
Psychologists sometimes think that they have to choose between these positions, allying themselves either with biology or the social sciences. But they don't, because psychology is both a natural science and a social science .
Another philosopher, Ned Block, has made a distinction between two kinds of consciousness, based on subjectivity.
- Phenomenal consciousness (P-consciousness) refers to experiential states -- that is, mental states that are subjectively experienced by someone.
- Access consciousness ( A-consciousness ) refers to information that interacts with conscious mental states, but is not itself accessible to phenomenal awareness.
For Block, you can have P-consciousness without A-consciousness, as when there is background noise, but you don't pay any attention to it, so it doesn't interact with what you're thinking. And you can have A-consciousness without P-consciousness, as in cases of blindsight, where a person can make judgments about the visual properties of an object without consciously seeing that object. A-consciousness without P-consciousness is a characteristic of unconscious mental life, about which we will have more to say later.
The Self in Subjectivity
Let's return to James for a moment, and his idea that thought tends to personal form . Just as Intentionality suggests that a focus on qualia is not enough, so subjectivity suggests that a focus on Intentionality is not enough either. That is, a description of a mental state such as George believes that Martha likes him isn't an accurate description of consciousness, because it leaves out personal subjectivity. The appropriate description is I believe that Martha likes me . All conscious thoughts, feelings, and desires are personal thoughts, feelings, and desires: they take the self as their subject. Paraphrasing James, the universal conscious fact is not that "George thinks" and "George feels" but rather "I think" and "I feel".
Going beyond syntax, it appears that self-reference takes one of four forms, depending on what the UCB linguist Charles Fillmore (1968; see also Brown & Fish, 1983) referred to as semantic role -- by which he meant that the subject of any sentence can play one of four semantic roles:
- Agent or Patient of some action:
- I gave a present to Lucy (Agent)
- Lucy gave a present to me (Patient)
- Stimulus or Experiencer of some state:
- I made Lucy happy (Stimulus)
- Lucy made me happy (Experiencer)
Objectively Studying the Subjective: Synesthesia
As noted earlier, the property of subjectivity seems to make a scientific study of consciousness impossible: how do you make an objective study, based on public observations, of something that is inherently private and subjective? How can we know what a person is really seeing, remembering, thinking, or feeling?
This epistemological issue is brought to a head by synesthesia . In this phenomenon, a stimulus in one modality elicits sensation in another: for example, presentation of a sound may elicit the visual experience of color. Alternatively, the modality of experience may remain the same, but some unpresented quality may be added to the perceptual experience: for example, letters or digits presented in black-and-white may be experienced in color. (Apparently, different synesthetes have different item-color relations.)
In 1883, Sir Francis Galton took note of the individuality of synesthetic experiences: "To ordinary individuals one of these accounts seems just as wild and lunatic as another, but when the account of one seer is submitted to another seer, the latter is scandalized and almost angry at the heresy of the former".
Roman Jakobson, the linguist, described a multilingual woman with phoneme-color synesthesia, who saw colors when she heard certain consonants and vowels: "As time went on words became simply sound, differently colored, and the more outstanding one color was, the better it remained in my memory. That is why, on the other hand, I have great difficulty with short English words like jut , jug , lie , lag , etc.: their colors simply run together." For her, Russian has "a lot of long, black and brown words", while German scientific expressions "are accompanied by a strange, dull yellowish glimmer".
Subject MLS displays letter-color synesthesia (Mills et al., 2002). Each letter of the alphabet is associated with a different color. In fact, MLS is multilingual, fluent in Russian as well as German, French, English, and Polish. In her synesthesia, she has one set of colors for Roman letters, and another set for Cyrillic.
Subject C displays digit-color synesthesia (Dixon et al., 2000, 2001, 2002a, 2002b). C was first studied for her extraordinary memory, as reflected in her ability to remember lists of 9-digit strings over intervals of as long as 2 months. In the course of investigating how she accomplished this feat, she happened to mention that she sees color whenever she sees, hears, or thinks of digits. When digits are presented in conventional black-on-white form, the color overlays the printed item.
Cases of synesthesia are often labeled in pairwise combinations of the stimulus inducer and the concurrent experience : thus, in grapheme-color synesthesia , certain units of written language elicit color. In this way, we can distinguish between synesthetic experiences and other anomalies of perception:
- In illusions , the inducer is not perceived correctly.
- In hallucinations , the experience occurs in the absence of an inducer.
According to Cytowic (1989, 1993), synesthetic experiences tend to have a number of features in common:
- involuntary : they are automatically elicited by certain stimuli;
- projected outside the body : they are experienced as percepts, not images;
- durable : they endure for a lifetime;
- discrete : they have signature qualities (for example, the digit "3" may be experienced in a particular shade of red, while the digit "7" might be experienced in another shade of red);
- generic : they are elementary sensations (like blobs), not complex percepts (like scenes);
- memorable : synesthetic experiences are encoded in memory, so that the person can report them retrospectively;
- emotional : there is a sense of personal conviction attached to the experience;
- noetic : they are experienced directly, without cognitive mediation; and
- unidirectional -- for example, digits might elicit colors, but colors do not elicit digits in turn.
In theory at least, the relationship between the inducer and the concurrent is automatic (for a detailed discussion of automaticity, see the lectures on Attention and Automaticity ). This can be demonstrated with a variant on the Stroop test, in which color names are replaced with the inducer. So, for example, a subject who sees the letter A as red would see a string of As -- e.g., AAAAA -- printed either in the concurrent color (e.g., red) or some other color (e.g., green). The subject is then asked simply to name the color in which the string is printed. Response latencies are reduced when the string is printed in the concurrent color, and increased when it is printed in another color, which suggests that the synesthetic subject cannot help seeing red, and this interferes with the correct perception -- or, at least, naming -- of green (Mattingly, 2001).
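The logic of this Stroop variant can be sketched in a few lines. The grapheme-color mapping below is a hypothetical synesthete of my own invention, not data from any actual study; the sketch only shows how trials are classified as congruent or incongruent with the subject's concurrent color.

```python
# A hypothetical synesthete's grapheme -> concurrent-color mapping.
synesthete_colors = {"A": "red", "B": "green", "C": "blue"}

def trial_type(letter: str, print_color: str) -> str:
    """Classify a Stroop-variant trial relative to the subject's
    synesthetic concurrent color for the inducing letter."""
    concurrent = synesthete_colors.get(letter)
    if concurrent is None:
        return "neutral"  # the letter induces no concurrent color
    return "congruent" if concurrent == print_color else "incongruent"

# Prediction: color naming is faster on congruent trials and slower on
# incongruent trials, because the concurrent color is elicited
# automatically and cannot be suppressed.
congruent = trial_type("A", "red")      # "AAAAA" printed in red
incongruent = trial_type("A", "green")  # "AAAAA" printed in green
```

A non-synesthetic control subject, for whom every letter is "neutral", should show no latency difference between the two trial types.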
Theories of Synesthesia
It would be easy to suggest that synesthetic relations are metaphoric in nature, such as when we describe the taste of cheese as sharp, or the feeling of sadness as "being blue". However, synesthetes insist that their experiences are perceptual in nature.
According to another theory, synesthetic relations reflect associative learning during childhood. For example, the associations between letters and numbers on the one hand, and colors on the other, might have been learned through experience with blocks. But if that is the case, why aren't there more synesthetes around?
Perhaps the most interesting theory of synesthesia, proposed by Ramachandran and Hubbard (2001), is that it is the result of sensory linkage -- a kind of "cross-wiring" between brain centers that generate the different modalities of sensation, and different qualities of sensation within each modality. This proposal is consistent with Müller's Doctrine of Specific Nerve Energies (discussed in the lectures on Psychophysics ) -- and especially with the view that sensory experience is generated by the projection areas where sensory impulses end up. So, for example, sound-color synesthesia might be generated by a cross-wiring that carries neural impulses from the auditory projection area in the temporal lobe to Area V4 of the occipital lobe, which is involved in color perception.
An alternative theory, proposed by Grossenbacher and Lovelace (2001), is in terms of disinhibited feedback from the association areas to the sensory projection areas of the cortex. Under normal circumstances, it is proposed, the sensory projection areas send information to the association areas, but not the other way around: that direction of influence is inhibited. But if there is disinhibition, then information being processed in the association areas -- e.g., for reading words -- can leak back to the sensory areas (like the color area in V4), causing letters to have colors.
In either case, synesthesia illustrates the Doctrine of Specific Nerve Energies and the associated Doctrine of Specific Fiber Energies: Conscious experience is not tied to the stimulus (if it were, then synesthetes could never perceive letters printed in black-and-white to be colored). Instead, conscious experience is tied to the brain that processes the stimulus. For example, studies have sometimes (but not always) found that synesthetic color activates area V4.
Most theories of synesthesia assume that inter-modal sensation reflects some sort of "crossing of the wires", and the further assumption that the neural organization that gives rise to synesthesia is in some sense innate -- synesthetes' brains are just wired that way. However, Witthoft and Winawer (2013) identified 11 synesthetes whose color-grapheme correspondences were identical. The investigators attributed this coincidence to the fact that each of the subjects had been exposed in childhood to the same set of magnetic letters, in which different letters and numbers were printed in different colors. This means that synesthesia can be learned, presumably incidentally. All of the subjects met the standard criterion for synesthesia -- their cross-modal experiences were specific, automatic, and stable over time. So they were real synesthetes, not just mnemonists. Any theory of synesthesia will have to make room for learning and memory as well as neural connections.
Experimental Research
As interesting as the sensory linkage theory is, what synesthesia needs now is more experimental research to answer fundamental questions about the phenomenon -- such as whether it's really perceptual. While past studies of synesthesia were mostly clinical and impressionistic in nature, more recent work has applied carefully controlled experimental paradigms to determine what synesthetic subjects actually experience. Among the most interesting of these was a pioneering set of studies by Ramachandran and Hubbard (2000) at UCSD (Hubbard majored in Cognitive Science as a UCB undergraduate).
Ramachandran & Hubbard (2000) studied two subjects who experience digit and letter-color synesthesia:
- Subject JC experiences colors when viewing both digits and letters.
- Subject ER experiences colors only when viewing digits.
In one experiment, the subjects performed a perceptual grouping task with an array of digits printed in black and white. When random digits are spaced out evenly in the array, by chance we would expect roughly half of subjects to group them into vertical columns, and the others to group them into horizontal rows -- and to do so pretty much arbitrarily. However, rows of similar digits, like 3 and 8 , tend to bias grouping toward the horizontal, as a reflection of the familiar Gestalt principle of organization by similarity of shape.
As it happened, subject ER saw 3s and 7s as red, and 8s and 0s as blue. As a result, ER grouped the array by columns instead of rows. In general, the synesthetic subjects tended to organize the arrays by similarity of color, while controls tended to organize the arrays by similarity of shape.
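The competing grouping principles can be sketched as a simple partition: controls group cells whose digits share a shape class, while a digit-color synesthete's percept partitions the array by concurrent color. A minimal Python sketch, using ER's reported mapping (3s and 7s red, 8s and 0s blue); the function and variable names are illustrative:

```python
# ER's reported grapheme-color mapping (from the lecture text).
SYNESTHETIC_COLORS = {"3": "red", "7": "red", "8": "blue", "0": "blue"}

def group_by(array, feature_map):
    """Partition cell coordinates according to the feature each digit maps to."""
    groups = {}
    for r, row in enumerate(array):
        for c, digit in enumerate(row):
            groups.setdefault(feature_map[digit], []).append((r, c))
    return groups

# In this toy array each column shares a synesthetic color, so ER's
# percept segregates the display into columns rather than rows.
array = [["3", "8"],
         ["7", "0"]]
color_groups = group_by(array, SYNESTHETIC_COLORS)
# color_groups["red"] holds the left column, color_groups["blue"] the right.
```

Swapping in a shape-based feature map for `SYNESTHETIC_COLORS` would model the controls' grouping instead.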
In another experiment, on visual search , they exploited the phenomenon of pop-out , in which distinctive targets are identified automatically. The digits 2 and 5 have pretty much the same features, and so it is hard to find 2s embedded in an array of 5s -- especially when, as in this experiment, the array is presented only for a single second. Moreover, even when they identify one or more 2s, most people usually fail to detect patterns of targets that may be embedded in the array.
However, the task is quite different for digit-color synesthetes, for whom the target is perceived in a color that distinguishes it from the other digits in the array. Accordingly, the synesthetic subjects were more likely to detect the target, and were more likely to detect the hidden pattern as well. Their synesthetic experience made the target "pop out" in a way that was not the case for the control subjects.
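The pop-out advantage can be illustrated with a toy search display: distractor 5s with target 2s forming a hidden pattern. For a digit-color synesthete each digit carries a concurrent color, so the minority-colored cells segregate trivially. A hedged sketch, with a hypothetical color mapping and illustrative names throughout:

```python
# Hypothetical concurrent colors for the two digits in the display.
SYNESTHETIC_COLORS = {"2": "red", "5": "green"}

def make_display(rows, cols, target_cells):
    """Grid of distractor 5s, with target 2s at the given (row, col) cells."""
    return [["2" if (r, c) in target_cells else "5" for c in range(cols)]
            for r in range(rows)]

def pop_out(display):
    """Cells whose concurrent color differs from the majority color --
    the cells that 'pop out' for the synesthete."""
    colors = [SYNESTHETIC_COLORS[d] for row in display for d in row]
    majority = max(set(colors), key=colors.count)
    return {(r, c) for r, row in enumerate(display)
            for c, d in enumerate(row)
            if SYNESTHETIC_COLORS[d] != majority}

targets = {(0, 0), (1, 1), (2, 2)}   # a diagonal "hidden pattern"
display = make_display(3, 3, targets)
```

For the synesthete, `pop_out` recovers exactly the target cells (and hence the diagonal pattern) from color alone; a control subject, seeing only black digits, must instead search the shapes serially.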
Research on synesthesia is just beginning (the first international conference on synesthesia was held at UCB in Fall 2004). However, the studies of Ramachandran and Hubbard show what some of the possibilities are -- possibilities that are constrained only by the ingenuity of the experimenters.
At one level, synesthetic experiences represent anomalies in qualia, because the subject perceives a sensory quality that is not "in" the stimulus. Synesthesia reminds us that conscious experience is not given by the stimulus -- it's constructed in the perceiver's mind, by the perceiver's brain.
Reframing Questions About Consciousness
In any event, the discussion of Intentionality permits us to frame certain questions about consciousness more precisely, so that they become somewhat more tractable:
- Can nonhuman animals have Intentional states?
- Can machines?
- Can you have consciousness without "aboutness"? Some altered states of consciousness, either induced by "psychedelic" drugs or by non-Western meditative disciplines, seem to lack Intentionality -- they're not about anything but themselves.
- Can qualia or Intentional states be unconscious? Does it make any sense to talk about qualia or Intentional states that are outside of awareness?
- Are all conscious states Intentional in nature? Some conscious states don't seem to have propositional content, as in a motivational state like John wants a pizza or the emotional state John likes pizza . If they are lacking in propositional content, then they don't have truth conditions. So, either:
- there are mental states that are not Intentional;
- or the notion of propositional content has to be expanded;
- or motivational and emotional states must be reducible to cognitive states that do have propositional content.
- What is the role of self-reference in unconscious mental states?
These are questions for philosophical debate, but they are also questions for scientific research, so we can hope that empirical work on consciousness will shed light on these difficult issues. For now, however, we will turn to examining the earliest scientific approach to consciousness:
Continue to the Lecture Supplement on Psychophysics .
This page last modified 07/21/2023.
Introspection
Introspection, as the term is used in contemporary philosophy of mind, is a means of learning about one's own currently ongoing, or perhaps very recently past, mental states or processes. You can, of course, learn about your own mind in the same way you learn about others' minds—by reading psychology texts, by observing facial expressions (in a mirror), by examining readouts of brain activity, by noting patterns of past behavior—but it's generally thought that you can also learn about your mind introspectively , in a way that no one else can. But what exactly is introspection? No simple characterization is widely accepted.
Introspection is a key concept in epistemology, since introspective knowledge is often thought to be particularly secure, maybe even immune to skeptical doubt. Introspective knowledge is also often held to be more immediate or direct than sensory knowledge. Both of these putative features of introspection have been cited in support of the idea that introspective knowledge can serve as a ground or foundation for other sorts of knowledge.
Introspection is also central to philosophy of mind, both as a process worth study in its own right and as a court of appeal for other claims about the mind. Philosophers of mind offer a variety of theories of the nature of introspection; and philosophical claims about consciousness, emotion, free will, personal identity, thought, belief, imagery, perception, and other mental phenomena are often thought to have introspective consequences or to be susceptible to introspective verification. For similar reasons, empirical psychologists too have discussed the accuracy of introspective judgments and the role of introspection in the science of the mind.
1. General Features of Introspection
1.1 Necessary Features of an Introspective Process
Introspection is generally regarded as a process by means of which we learn about our own currently ongoing, or very recently past, mental states or processes. Not all such processes are introspective, however: Few would say that you have introspected if you learn that you're angry by seeing your facial expression in the mirror. However, it's unclear and contentious exactly what more is required for a process to qualify as introspective. A relatively restrictive account of introspection might require introspection to involve attention to and direct detection of one's ongoing mental states; but many philosophers think attention to or direct detection of mental states is impossible or at least not present in many paradigmatic instances of introspection.
For a process to qualify as “introspective” as the term is ordinarily used in contemporary philosophy of mind, it must minimally meet the following three conditions:
The mentality condition : Introspection is a process that generates, or is aimed at generating, knowledge, judgments, or beliefs about mental events, states, or processes, and not about affairs outside one's mind, at least not directly. In this respect, it is different from sensory processes that normally deliver information about outward events or about the non-mental aspects of the individual's body. The border between introspective and non-introspective knowledge can begin to seem blurry with respect to bodily self-knowledge such as proprioceptive knowledge about the position of one's limbs or nociceptive knowledge about one's pains. But it seems that in principle the introspective part of such processes, pertaining to judgments about one's mind—e.g., that one has the feeling as though one's arms were crossed or of toe-ishly located pain—can be distinguished from the non-introspective judgment that one's arms are in fact crossed or one's toe is being pinched.
The first-person condition : Introspection is a process that generates, or is aimed at generating, knowledge, judgments, or beliefs about one's own mind only and no one else's, at least not directly. Any process that in a similar manner generates knowledge of one's own and others' minds is by that token not an introspective process. (Some philosophers have contemplated peculiar or science fiction cases in which we might introspect the contents of others' minds directly—for example in telepathy or when two individuals' brains are directly wired together—but the proper interpretation of such cases is disputable; see, e.g., Gertler 2000.)
The temporal proximity condition : Introspection is a process that generates knowledge, beliefs, or judgments about one's currently ongoing mental life only; or, alternatively (or perhaps in addition) immediately past (or even future) mental life, within a certain narrow temporal window (sometimes called the specious present; see the entry on the experience and perception of time ). You may know that you were thinking about Montaigne yesterday during your morning walk, but you cannot know that fact by current introspection alone—though perhaps you can know introspectively that you currently have a vivid memory of having thought about Montaigne. Likewise, you cannot know by introspection alone that you will feel depressed if your favored candidate loses the election in November—though perhaps you can know introspectively what your current attitude is toward the election or what emotion starts to rise in you when you consider the possible outcomes. Whether the target of introspection is best thought of as one's current mental life or one's immediately past mental life may depend on one's model of introspection: On self-detection models of introspection, according to which introspection is a causal process involving the detection of a mental state (see Section 2.2 below), it's natural to suppose that a brief lapse of time will transpire between the occurrence of the mental state that is the introspective target and the final introspective judgment about that state, which invites (but does not strictly imply) the idea that introspective judgments generally pertain to immediately past states. On self-shaping and self-fulfillment models of introspection, according to which introspective judgments create or embed the very state introspected (see Sections 2.3.1 and 2.3.2 below), it seems more natural to think that the target of introspection is one's current mental life or perhaps even the immediate future.
Few contemporary philosophers of mind would call a process “introspective” if it does not meet some version of the three conditions above, though in ordinary language the temporal proximity condition may sometimes be violated. (For example, in ordinary speech we might describe as “introspective” a process of thinking about why you abandoned a relationship last month or whether you're really as kind to your children as you think you are.) However, many philosophers of mind will resist calling a process that meets these three conditions “introspective” unless it also meets some or all of the following three conditions:
The directness condition : Introspection yields judgments or knowledge about one's own current mental processes relatively directly or immediately . It's difficult to articulate exactly what directness or immediacy involves in the present context, but some examples should make the import of this condition relatively clear. Gathering sensory information about the world and then drawing theoretical conclusions based on that information should not, according to this condition, count as introspective, even if the process meets the three conditions above. Seeing that a car is twenty feet in front of you and then inferring from that fact about the external world that you are having a visual experience of a certain sort does not, by this condition, count as introspective. However, as we will see in Section 2.3.4 below, those who embrace transparency theories of introspection may reject at least strong formulations of this condition.
The detection condition : Introspection involves some sort of attunement to or detection of a pre-existing mental state or event, where the introspective judgment or knowledge is (when all goes well) causally but not ontologically dependent on the target mental state. For example, a process that involved creating the state of mind that one attributes to oneself would not be introspective, according to this condition. Suppose I say to myself in silent inner speech, “I am saying to myself in silent inner speech, ‘haecceities of applesauce’”, without any idea ahead of time how I plan to complete the embedded quotation. Now, what I say may be true, and I may know it to be true, and I may know its truth (in some sense) directly, by a means by which I could not know the truth of anyone else's mind. That is, it may meet all four conditions above, and yet we may resist calling such a self-attribution introspective. Self-shaping (Section 2.3.2 below), expressivist (Section 2.3.3 below), and transparency (Section 2.3.4 below) accounts of self-knowledge emphasize the extent to which our self-knowledge often does not involve the detection of pre-existing mental states; and because something like the detection condition is implicitly or explicitly accepted by many philosophers, some philosophers (including some but not all of those who endorse self-shaping, expressivist, and/or transparency views) would regard it as inappropriate to regard such accounts of self-knowledge as accounts of introspection proper.
The effort condition : Introspection is not constant, effortless, and automatic . We are not every minute of the day introspecting. Introspection involves some sort of special reflection on one's own mental life that differs from the ordinary un-self-reflective flow of thought and action. The mind may monitor itself regularly and constantly without requiring any special act of reflection by the thinker—for example, at a non-conscious level certain parts of the brain or certain functional systems may monitor the goings-on of other parts of the brain and other functional systems, and this monitoring may meet all five conditions above—but this sort of thing is not what philosophers generally have in mind when they talk of introspection. However, this condition, like the directness and detection conditions, is not universally accepted. For example, philosophers who think that conscious experience requires some sort of introspective monitoring of the mind and who think of conscious experience as a more or less constant feature of our lives may reject the effort condition (Armstrong 1968, 1999; Lycan 1996).
Though not all philosophical accounts that are put forward by their authors as accounts of “introspection” meet all of conditions 4–6, most meet at least two of those. Because of differences in the importance accorded to conditions 4–6, it is not unusual for authors with otherwise similar accounts of self-knowledge to differ in their willingness to describe their accounts as accounts of “introspection”.
1.2 The Targets of Introspection
Accounts of introspection differ in what they treat as the proper targets of the introspective process. No major contemporary philosopher believes that all of mentality is available to be discovered by introspection. For example, the cognitive processes involved in early visual processing and in the detection of phonemes are generally held to be introspectively impenetrable and nonetheless (in some important sense) mental (Marr 1983; Fodor 1983). Many philosophers also accept the existence of unconscious beliefs or desires, in roughly the Freudian sense, that are not introspectively available (e.g., Gardner 1993; Velleman 2000; Moran 2001; Wollheim 2003; though see Lear 1998). Although in ordinary English usage we sometimes say we are “introspecting” when we reflect on our character traits, contemporary philosophers of mind generally do not believe that we can directly introspect character traits in the same sense in which we can introspect some of our other mental states (especially in light of research suggesting that we sometimes have poor knowledge of our traits, reviewed in Taylor and Brown 1988; Paulhus and John 1998; Vazire 2010).
The two most commonly cited classes of introspectible mental states are attitudes , such as beliefs, desires, evaluations, and intentions, and conscious experiences , such as emotions, images, and sensory experiences. (These two groups may not be wholly, or even partially, disjoint: Depending on other aspects of her view, a philosopher may regard some or all conscious experiences as involving attitudes, and/or she may regard attitudes as things that are or can be consciously experienced.) It of course does not follow from the fact (if it is a fact) that some attitudes are introspectible that all attitudes are, or from the fact that some conscious experiences are introspectible that all conscious experiences are. Some accounts of introspection focus on attitudes (e.g., Nichols and Stich 2003), while others focus on conscious experiences (e.g., Hill 1991; Goldman 2006; Schwitzgebel 2012); and it is sometimes unclear to what extent philosophers intend their remarks about the introspection of one type of target to apply to the other type. There is no guarantee that the same mechanism or process is involved in introspecting all the different potential targets.
Generically, this article will describe the targets of introspection as mental states , though in some cases it may be more apt to think of the targets as processes rather than states. Also, in speaking of the targets of introspection as targets , no presupposition is intended of a self-detection view of introspection as opposed to a self-shaping or containment or expressivist view (see Section 2 below). The targets are simply the states self-ascribed as a consequence of the introspective process if the process works correctly, or if the introspective process fails, the states that would have been self-ascribed.
1.3 The Products of Introspection
Though philosophers have not explored the issue very thoroughly, accounts also differ regarding the products of introspection. Most philosophers hold that introspection yields something like beliefs or judgments about one's own mind, but others prefer to characterize the products of introspection as “thoughts”, “representations”, “awareness”, or the like. For ease of exposition, this article will describe the products of the introspective process as judgments, without meaning to beg the question against competing views.
2. Introspective Versus Non-Introspective Accounts of Self-Knowledge
This section will outline several approaches to self-knowledge. Not all deserve to be called introspective, but an understanding of introspection requires an appreciation of this diversity of approaches—some for the sake of the contrast they provide to introspection proper and some because it's disputable whether they should be classified as introspective. These approaches are not exclusive. Surely there is more than one process by means of which we can obtain self-knowledge. Unavoidably, some of the same territory covered here is also covered, rather differently, in the entry on self-knowledge .
2.1 Self/Other Parity Accounts
Symmetrical or self/other parity accounts of self-knowledge treat the processes by which we acquire knowledge of our own minds as essentially the same as the processes by which we acquire knowledge of other people's minds. A simplistic version of this view is that we know both our own minds and the minds of others only by observing outward behavior. On such a view, introspection strictly speaking is impossible, since the first-person condition on introspection (condition 2 in Section 1.1) cannot be met: There is no distinctive process that generates knowledge of one's own mind only. Twentieth-century behaviorist principles tended to encourage this view, but no prominent treatment of self-knowledge accepts this view in its most extreme and simple form. Advocates of parity accounts sometimes characterize our knowledge of our own minds as arising from “theories” that we apply equally to ourselves and others (as in Nisbett and Ross 1980; Gopnik 1993a, 1993b). Consequently, this approach to self-knowledge is sometimes called the theory theory .
Among leading researchers, Bem (1972) perhaps comes closest to a simple self/other parity view, arguing on the basis of psychological research that our knowledge of the “internal states” of both self and other derives largely from the same types of behavioral evidence and employs the same principles of inference. We notice how we behave, and then we infer the attitudes evidenced by those behaviors—and we do so even when we actually lack the ascribed attitude. For example, Bem cites classic research in social psychology suggesting that when induced to perform an action for a small reward, people will attribute to themselves a more positive attitude toward that action than when they are induced by a large reward (Festinger and Carlsmith 1959; see also Section 4.2.2 below). When we notice ourselves doing something with minimal compensation, we infer a positive attitude toward that activity, just as we would if we saw someone else perform the same activity with minimal compensation. Likewise, we might know we like Thai food because we've noticed that we sometimes drive all the way across town to get it; we might know that we're happy because we see or feel ourselves smiling. Bem argues that social psychology has consistently failed to show that we have any appreciable access to private information that might tell against such externally-driven self-attributions. On Bem's view, if we are better at discerning our own motives and attitudes, it's primarily because we have observed more of our own behavior than of anyone else's.
Nisbett, Wilson, and their co-authors (Nisbett and Bellows 1977; Nisbett and Wilson 1977; Nisbett and Ross 1980; Wilson 2002) similarly argue for self/other parity in our knowledge of the bases or causes of our own and others' attitudes and behavior, describing cases in which people seem to show poor knowledge of these bases or causes. For example, people queried in a suburban shopping center about why they chose a particular pair of stockings appeared to be ignorant of the influence of position on that choice, including explicitly denying that influence when it was suggested to them. People asked to rate various traits of supposed job applicants were unaware that their judgments of the applicant's flexibility were greatly influenced by having been told that the applicant had spilled coffee during the job interview (see also Section 4.2.2 below). In such cases, Nisbett and his co-investigators found that subjects' descriptions of the causal influences on their own behavior closely mirrored the influences hypothesized by outside observers. From this finding, they infer that the same mechanism drives the first-person and third-person attributions, a mechanism that does not involve any special private access to the real causes of one's attitudes and behavior and instead relies heavily on intuitive psychological theories.
Gopnik (1993a, 1993b; Gopnik and Meltzoff 1994) deploys developmental psychological evidence to support a parity theory of self-knowledge. She points to evidence that for a wide variety of mental states, including believing, desiring, and pretending, children develop the capacity to ascribe those states to themselves at the same age they develop the capacity to ascribe those states to others. For example, children do not seem to be able to ascribe to themselves past false beliefs (after having been tricked by the experimenter) any earlier than they can ascribe false beliefs to other people. This appears to be so even when that false belief is in the very recent past, having only just been revealed to be false. According to Gopnik, this pervasive parallelism shows that we are not given direct introspective access to our beliefs, desires, pretenses, and the like. Rather, we must develop a “theory of mind” in light of which we interpret evidence underwriting our self-attributions. The appearance of the immediate givenness of one's mental states is, Gopnik suggests, merely an “illusion of expertise”: Experts engage in all sorts of tacit theorizing that they don't recognize as such—the expert chess player for whom the strength of a move seems simply visually given, the doctor who immediately intuits cancer in a patient. Since we are all experts at mental state attribution, we don't recognize the layers of theory underwriting the process.
The empirical evidence behind self/other parity views remains contentious (White 1988; Nichols and Stich 2003). Furthermore, though Bem, Nisbett, Wilson, and Gopnik all stress the parallelism between mental state attribution to oneself and others and the inferential and theoretical nature of such attributions, they all also leave some room for a kind of self-awareness different in kind from the awareness one has of others' mental lives. Thus, none endorses a purely symmetrical or self/other parity view. Bem acknowledges that the parallelism only holds “to the extent that internal cues are weak, ambiguous, or uninterpretable” (1972, 5). With this caveat in mind, he states that our self-knowledge is “partially” based on external cues. Nisbett and Wilson stress that we lack access only to the “processes” or causes underlying our behavior and attitudes. Our attitudes themselves and our current sensations, they say, can be known with “near certainty” (1977, 255; though contrast Nisbett and Ross 1980, 200–202, which seems sympathetic to Bem's skepticism about special access even to our attitudes). Gopnik allows that we “may be well equipped to detect certain kinds of internal cognitive activity in a vague and unspecified way”, and that we have “genuinely direct and special access to certain kinds of first-person evidence [which] might account for the fact that we can draw some conclusions about our own psychological states when we are perfectly still and silent”, though we can “override that evidence with great ease” (1993a, 11–12). Ryle (1949) similarly stresses the importance of outward behavior in the self-attribution of mental states while acknowledging the presence of “twinges”, “thrills”, “tickles”, and even “silent soliloquies”, which we know of in our own case and that do not appear to be detectable by observing outward behavior. However, none of these authors develops an account of this apparently more direct self-knowledge. 
Their theories are consequently incomplete. Regardless of the importance of behavioral evidence and general theories in driving our self-attributions, in light of the considerations that drive Bem, Nisbett, Wilson, Gopnik, and Ryle to these caveats, it is probably impossible to sustain a view on which there is complete parity between first- and third-person mental state attributions. There must be some sort of introspective, or at least uniquely first-person, process.
Self/other parity views can also be restricted to particular subclasses of mental states: Any mental state that can only be known by cognitive processes identical to the processes by which we know about the same sorts of states in other people is a state to which we have no distinctively introspective access. States for which parity is often asserted include personality traits, unconscious motives, early perceptual processes, and the bases of our decisions (see Section 4.2.1 below for more on this). We learn about these states in ourselves, perhaps, in much the same way we learn about such states in other people. Carruthers (2011; see also Section 4.2.2 below) presents a case for parity of access to propositional attitudes like belief and desire (in contrast to inner speech, visual imagery, and the like, which he holds to be introspectible).
2.2 Self-Detection Accounts
Etymologically, the term “introspection”—from the Latin “looking into”—suggests a perceptual or quasi-perceptual process. Locke writes that we have a faculty of “Perception of the Operation of our own Mind” which, “though it be not Sense, as having nothing to do with external Objects; yet it is very like it, and might properly enough be call'd internal Sense” (1690/1975, 105, italics suppressed). Kant (1781/1997) says we have an “inner sense” by which we learn about mental aspects of ourselves that is in important ways parallel to the “outer sense” by which we learn about outer objects.
But what does it mean to say that introspection is like perception? In what respects? As Shoemaker (1994a, 1994b, 1994c) points out, in a number of respects introspection is plausibly unlike perception. For example, introspection does not involve a dedicated organ like the eye or ear (though as Armstrong 1968 notes, neither does bodily proprioception). Both friends and foes of self-detection accounts have tended to agree that introspection does not involve a distinctive phenomenology of “introspective appearances” (Shoemaker 1994a, 1994b, 1994c; Lycan 1996; Rosenthal 2001; Siewert 2012): The visual experience of redness has a distinctive sensory quality or phenomenology that would be difficult or impossible to convey to a blind person; analogously for the olfactory experience of smelling a banana, the auditory experience of hearing a pipe organ, the experience of touching something painfully hot. To be analogous to sensory experience in this respect, introspection would have to generate an analogously distinctive phenomenology—some quasi-sensory phenomenology in addition to, say, the visual phenomenology of seeing red that is the phenomenology of the introspective appearance of the visual phenomenology of seeing red. This would seem to require two layers of appearance in introspectively attended sensory perception: a visual appearance of the outward object and an introspective appearance of that visual appearance. (This isn't to say, however, that introspection, or at least conscious introspection, doesn't involve some sort of “cognitive phenomenology”—if there is such a thing—of the sort that accompanies conscious thoughts in general: See Bayne and Montague, eds., 2011.)
Contemporary proponents of quasi-perceptual models of introspection concede the existence of such disanalogies (e.g., Lycan 1996). We might consider an account of introspection to be quasi-perceptual, or less contentiously to be a “self-detection” account, if it meets the first five conditions described in Section 1.1—that is, the mentality condition, the first-person condition, the temporal proximity condition, the directness condition, and the detection condition. One aspect of the detection condition deserves special emphasis here: that detection requires the ontological independence of the target mental state and the introspective judgment—the two states will be causally connected (assuming that all has gone well) but not constitutively connected. (Shoemaker (1994a, 1994b, 1994c) calls models of self-knowledge that meet this aspect of the detection condition “broad perceptual” models.) Maybe on a liberal understanding of “detection” that does not require ontological independence, containment or other accounts of introspection (see Section 2.3.1 below) might qualify as involving “detection”. However, that is not how “detection” is being used in the present taxonomy.
Self-detection accounts of self-knowledge seem to put introspection epistemically on a par with sense perception. To many philosophers, this has seemed a deficiency in these accounts. A long and widespread philosophical tradition holds that self-knowledge is epistemically special, that we have specially “privileged access” to—perhaps even infallible or indubitable knowledge of—at least some portion of our mentality, in a way that is importantly different in kind from our knowledge of the world outside us (see Section 4 below). Both self/other parity accounts (Section 2.1 above) and self-detection accounts (this section) of self-knowledge either deny any special epistemic privilege or characterize that privilege as similar to the privilege of being the only person to have an extended view of an object or a certain sort of sensory access to that object. Other accounts of self-knowledge to be discussed later in Section 2.3 are more readily compatible with, and often to some extent driven by, more robust notions of the epistemic differences between self-knowledge and knowledge of environmental objects.
Armstrong (1968, 1981, 1999) is perhaps the leading defender of a quasi-perceptual, self-detection account of introspection. He describes introspection as a “self-scanning process in the brain” (1968, 324), and he stresses what he sees as the important ontological distinction between the state of awareness produced by the self-scanning procedure and the target mental state of which one is aware by means of that scanning—the distinction, for example, between one's pain and one's introspective awareness of that pain.
Armstrong also appears to hold that the quasi-perceptual introspective process proceeds at a fairly low level cognitively—quick and simple, typically without much interference by or influence from other cognitive or sensory processes. He describes introspection as “completely non-inferential”, similar to the simple detection of pressure on one's back (1968, 97), and he says it can be (and presumably typically is) continuous and “reflex”, involving no more than keeping “a watching brief on our own current mental contents, but without making much of a deal of it” (1999, 115). Since Armstrong allows that inferences are often non-conscious, based on sensory or other cues that the inferring person cannot herself discern, his claim that the introspective process is non-inferential is a substantial commitment to the simplicity of the process. He contrasts this reflexive self-monitoring with more sophisticated acts of deliberate introspection which he thinks are also possible (1999, 114). Note that in calling reflexive self-monitoring “introspection”, Armstrong violates the effort condition from Section 1.1, which requires that introspection not be constant and automatic. Lycan (1996) endorses a similar view, though unlike Armstrong, Lycan characterizes introspection as involving attentional mechanisms, thus presumably treating introspection as more demanding of cognitive resources (though still perhaps nearly constant).
Nichols and Stich (2003) employ a model of the mind on which having a propositional attitude such as a belief or desire is a matter of having a representation stored in a functionally-defined (and metaphorical) “belief box” or “desire box” (see also the entries on belief and functionalism). On their account, self-awareness of these attitudes typically involves the operation of a simple “Monitoring Mechanism” that merely takes the representations from these boxes, appends an “I believe that …”, “I desire that …”, or whatever (as appropriate) to that representation, and adds it back into the belief box. For example, if I desire that my father flies to Hong Kong on Sunday, the Monitoring Mechanism can copy the representation in my desire box with the content “my father flies to Hong Kong on Sunday” and produce a new representation in my belief box—that is, create a new belief—with the content “I desire that my father flies to Hong Kong on Sunday”. Nichols and Stich also propose an analogous but somewhat more complicated mechanism (they leave the details unspecified) that takes percepts as its input and produces beliefs about those percepts as its output.
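By way of illustration only (the code and names below are this article's, not Nichols and Stich's), the Monitoring Mechanism can be sketched as a trivial copy-and-prefix operation over functionally defined stores:

```python
# A toy sketch of Nichols and Stich's "Monitoring Mechanism" (2003).
# The class and method names are illustrative inventions; the point is
# only that monitoring copies a stored representation, prefixes an
# attitude ascription, and deposits the result in the belief box.

class Mind:
    def __init__(self):
        # Metaphorical, functionally defined "boxes" for attitudes.
        self.belief_box = set()
        self.desire_box = set()

    def monitor_desire(self, content):
        """Copy a representation from the desire box into the belief
        box, wrapped in an "I desire that ..." ascription."""
        if content in self.desire_box:
            self.belief_box.add(f"I desire that {content}")
            return True
        return False

mind = Mind()
mind.desire_box.add("my father flies to Hong Kong on Sunday")
mind.monitor_desire("my father flies to Hong Kong on Sunday")
print(mind.belief_box)
# The belief box now contains:
# "I desire that my father flies to Hong Kong on Sunday"
```

The triviality of the operation is the philosophical point: on this account, nothing resembling perception or inference is needed for such self-attribution, which is why Goldman's objection below focuses on how the mechanism could detect which box a representation occupies.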
Nichols and Stich emphasize that this Monitoring Mechanism does not operate in isolation, but often co-operates or competes with a second means of acquiring self-knowledge, which involves deploying theories along the lines suggested by Gopnik (see Section 2.1.2 above). They offer a “double dissociation” argument for this view. That is, they present, on the one hand, cases which they interpret as cases showing a breakdown in the Monitoring Mechanism, while the capacity for theoretical inference about the mind remains intact and, on the other hand, cases in which the capacity for theoretical inference about the mind is impaired but the Monitoring Mechanism continues to function normally, suggesting that theoretical inference and self-monitoring are distinct and separable processes. Nichols and Stich argue that autistic people have very poor theoretical knowledge of the mind, as suggested by their very poor performance in “theory of mind” tasks (tasks like assessing when someone will have a false belief), and yet they succeed in monitoring their mental states as shown by their ability to describe their mental states in autobiographies and other forms of self-report. Conversely, Nichols and Stich argue that schizophrenic people remain excellent theorizers about mental states but monitor their own mental states very poorly—for example, when they fail to recognize certain actions as their own and struggle to report, or deny the existence of, ongoing thoughts.
Goldman (2006) criticizes the account of Nichols and Stich (see Section 2.2.1 above) for not describing how the Monitoring Mechanism detects the attitude type of the representation (belief, desire, etc.). If talk of “belief boxes” and the like is shorthand for talk of functional role (as Nichols and Stich say), then the Monitoring Mechanism must somehow detect the functional role of the detected representation. But functional role is a matter of what is apt to cause a particular mental state and what that mental state is apt to cause (see the entry on functionalism), and Goldman argues that a simple mechanism could not discern such dispositional and relational facts (though Nichols and Stich might be able to avoid this concern by describing introspection as involving not just one but rather a cluster of similar mechanisms: 2003, 162). Goldman also argues that the Nichols and Stich account leaves unclear how we can discern the strength or intensity of our beliefs, desires, and other propositional attitudes.
Goldman's positive account starts with the idea that introspection is a quasi-perceptual process that involves attention: “Attention seems to act like an orienting organ in introspection, analogous to the shift of eye gaze or the sniffing of the nose” (2006, 244). Individual attended mental states are then classified into broad categories (similarly, in visual perception we can classify seen objects into broad categories). However, on Goldman's view this process can only generate introspective knowledge of the general types of mental states (such as belief, happiness, bodily sensation) and some properties of those mental states (such as degree of confidence for belief, and “a multitude of finely delineated categories” for bodily sensation). Specific contents, especially of attitudes like belief, are too manifold, Goldman suggests, for pre-existing classificational categories to exist for each one. Rather, we represent the specific content of such mental states by “redeploying” the representational content of the mental state, that is, simply copying the content of the introspected mental state into the content of the introspective belief or judgment (somewhat like in the Nichols and Stich account). Finally, Goldman argues that some mental states require “translation” into the mental code appropriate to belief if they are to be introspected. Visual representations, he suggests, have a different format or mental code than beliefs, and therefore cognitive work will be necessary to translate the fine-grained detail of visual experience into mental contents that can be believed introspectively.
Hill (1991, 2009) also offers a multi-process self-detection account of introspection. Like Goldman, Hill sees attention (in some broad, non-sensory sense) as central to introspection, though he also allows for introspective awareness without attention (1991, 117–118). Hill emphasizes dissimilarities between introspection and perception, while retaining a broadly self-detection account. Hill (2009) argues that introspection is a process that produces judgments about, rather than perceptual awareness of, the target states, and suggests that the processes that generate these judgments vary considerably, depending on the target state, and are often complex. For example, judgments about enduring beliefs and desires must, he says, involve complex procedures for searching “vast and heterogeneous” long-term memory stores. Central to Hill's (1991) account is an emphasis on the capacity of introspective attention to transform—especially to amplify and enrich, even to create—the target experience. In this respect Hill argues that the introspective act differs from the paradigmatic observational act which does not transform the object perceived (though of course both scientific and ordinary—especially gustatory—observation can affect what is perceived); and thus Hill's account contains a “self-fulfillment” or “self-shaping” aspect in the sense of Section 2.3.1 and Section 2.3.2 below, and only qualifiedly and conditionally meets the detection condition on accounts of introspection as described in Section 1.1 above—the condition that introspection involves attunement to or detection of a pre-existing mental state or event.
Like Hill, Prinz (2004) argues that introspection must involve multiple mechanisms, depending both on the target states (e.g., attitudes vs. perceptual experiences) and the particular mode of access to those states. Access might involve controlled attention or it might be more of a passive noticing; it might involve the verbal “captioning” or labeling of experiences or it might involve the kind of non-verbal access that even monkeys have to their mental states. Prinz (2007) sharply distinguishes between the conceptual classification of our conscious experiences into various types that can be recognized and re-identified over time—classifications which he thinks must necessarily be somewhat crude—and non-conceptual knowledge of ongoing conscious experiences attained by “pointing” at them with attention. The latter type of knowledge, Prinz argues, is much more detailed and finely structured than the former but cannot be expressed or retained over time. Prinz also follows Hill in emphasizing that introspection often intensifies or otherwise modifies the target experience. In such cases, Prinz argues, introspective “access” is only access in an attenuated sense.
2.3 Introspection Without Self-Detection?
There are several ways to generate judgments, or at least statements, about one's own current mental life—self-ascriptions, let's call them—that are reliably true though they do not involve the detection of a pre-existing state. Consider the following four types of case:
Automatically self-fulfilling self-ascriptions: I think to myself, “I am thinking”. Or: I judge that I am making a judgment about my own mental life. Or: I say to myself in inner speech “I am saying to myself in inner speech: ‘blu-bob’”. Such self-ascriptions are automatically self-fulfilling. Their existence conditions are a subset of their truth conditions.
Self-ascriptions that prompt self-shaping: I declare that I have a mental image of a pink elephant. At the same time I make this declaration, I deliberately cause myself to form the mental image of a pink elephant. Or: A man uninitiated in romantic love declares to a prospective lover that he is the kind of person who sends flowers to his lovers. At the same time he says this, he successfully resolves to be the kind of person who sends flowers to his lovers. The self-ascription either precipitates a change or buttresses what already exists in such a way as to make the self-ascription accurate. In these cases, unlike the cases described in (A), some change or self-maintenance is necessary to render the self-ascription true, beyond the self-ascriptional event itself.
Accurate self-ascription through self-expression: I learn to say “I'm in pain!” instead of “ow!” as an automatic, unreflective response to painful stimuli. Or: I use the self-attributive sentence “I believe Russell changed his mind about pacifism” simply as a cautious way of expressing the belief that Russell changed his mind about pacifism, this expression being the product of reflecting upon Russell rather than a product of reflection upon my own mind. Self-expressions of this sort are assumed here to flow naturally from the states expressed in roughly the same way that facial expressions and non-self-attributive verbal expressions flow naturally from those same states—that is, without being preceded by any attempt to detect the state self-ascribed.
Self-ascriptions derived from judgments about the outside world: From the non-self-attributive fact that Stanford is south of Berkeley I derive the self-attributive conclusion that I believe that Stanford is south of Berkeley. Or: From the non-self-attributive fact that it would be good to go home now, I derive the self-attributive judgment that I want to go home now. These derivations may be inferences, but if so, such inferences require no specific premises about ongoing mental states. Perhaps one embraces a general inference principle like “from P, it is permissible to derive I believe that P”, or “normally, if something is good, I want it”.
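The derivational character of case (D) can be made vivid with a toy sketch (the function and names are this article's illustrative inventions, not drawn from any author discussed here). Nothing in the procedure consults a premise about any ongoing mental state; the only input is the first-order, world-directed judgment:

```python
# A toy sketch of the ascent principle in case (D): from the
# world-directed judgment that P, derive "I believe that P".
# No representation of any mental state appears among the inputs.

def derive_self_ascription(p, attitude="believe"):
    """Apply the general inference principle: from P, derive
    "I <attitude> that P"."""
    return f"I {attitude} that {p}"

print(derive_self_ascription("Stanford is south of Berkeley"))
# prints: I believe that Stanford is south of Berkeley
```

The sketch also makes the philosophical worry easy to state: since the derivation is purely formal, its reliability depends entirely on the background principle being sound, not on any detection of the believed state itself.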
The following accounts of self-knowledge all take advantage of one or more of these facts about self-ascription. Because these ways of obtaining self-knowledge all violate the detection condition on introspection (condition 5 in Section 1.1 above), and because philosophers are divided about whether methods of obtaining self-knowledge that violate that condition count as introspective methods strictly speaking, philosophers are divided about whether accounts of self-knowledge of the sort described in this section should be regarded as accounts of introspection.
An emphasis on infallible knowledge through self-fulfilling self-ascriptions goes back at least to Augustine (c. 420 C.E./1998) and is most famously deployed by Descartes in his Discourse on Method (1637/1985) and Meditations (1641/1984), where he takes the self-fulfilling thought that he is thinking as indubitably true, immune to even the most radical skepticism, and a secure ground on which to build further knowledge.
Contemporary self-fulfillment accounts tend to exploit the idea of containment . In a 1988 essay, Burge writes:
When one knows one is thinking that p, one is not taking one's thought (or thinking) that p merely as an object. One is thinking that p in the very event of thinking knowledgeably that one is thinking it. It is thought and thought about in the same mental act. (654)
This is the case, Burge argues, because “by its reflexive, self-referential character, the content of the second-order [self-attributive] judgment is locked (self-referentially) onto the first-order content which it both contains and takes as its subject matter” (1988, 659–660; cf. Heil 1988; Gertler 2000, 2001; Heil and Gertler describe such thoughts as introspective while Burge appears not to think of self-knowledge so structured as introspective: 1998, 244; see also 1988, 652). In judging that I am thinking of a banana, I thereby necessarily think of a banana: The self-attributive judgment contains, as a part, the very thought self-ascribed, and thus cannot be false. In a 1996 essay, Burge extends his remarks to include not just self-attributive “thoughts” as targets but also (certain types of) “judgments” (e.g., “I judge, herewith, that there are physical entities” and other judgments with “herewith”-like reflexivity, 92).
Shoemaker (1994a, 1994b, 1994c) deploys the containment idea very differently, and over a much wider array of introspective targets, including conscious states like pains and propositional attitudes like belief. Shoemaker speculates that the relevant containment relation holds not between the contents or concepts employed in the target state and in the self-ascriptive state but rather between their neural realizations in the brain. To develop this point, Shoemaker distinguishes between a mental state's “core realization” and its “total realization”. One might think of mental processes as transpiring in fairly narrow regions of the brain (their core realization), and yet, Shoemaker suggests, it's not as though we could simply carve off those regions from all others and still have the mental state in question. To be the mental state it is, the process must be embedded in a larger causal network involving more of the brain (the total realization). Relationships of containment or overlap between core realization and total realization between the target state and the self-ascriptive judgment might then underwrite introspective accuracy. For example, the total brain-state realization of the state of pain may simply be a subset of the total brain-state realization of the state of believing that one is in pain. Introspective accuracy might then be explained by the fact that the introspective judgment is not an independently existing state.
More recently, philosophers have applied Burge-like content-containment models (as opposed to Shoemaker-like realization-containment models) to self-knowledge of conscious states, or “phenomenology”, in particular—for example, Gertler (2001), Papineau (2002), Chalmers (2003), and Horgan and Kriegel (2007). Husserl (1913/1982) offers an early phenomenal containment approach, arguing that we can at any time put our “cogitatio”—our conscious experiences—consciously before us through a kind of mental glancing, with the self-perception that arises containing as a part the conscious experience toward which it is directed, and incapable of existing without it. Papineau offers a “quotational” account on which in introspection we self-attribute “the experience: ___”, where the blank is completed by the experience itself. Chalmers writes that “direct phenomenal beliefs” about our experiences are “partly constituted by an underlying phenomenal quality”, in that the two will be tightly coupled across “a wide range of nearby conceptually possible cases” (2003, 235).
One possible difficulty with such accounts is that while it seems plausible to suppose that an introspective thought or judgment might contain another thought or judgment as a part, it's less clear how a self-attributive judgment or belief might contain a piece of conscious experience as a part. Beliefs, and other belief-like mental states like judgments, one might think, contain concepts, not conscious experiences, as their constituents (Fodor 1998); or, alternatively, one might think that beliefs are functional or dispositional patterns of response to input (Dennett 1987; Schwitzgebel 2002), again rendering it unclear how a piece of phenomenology could be part of belief. Perhaps with this concern in mind, advocates of containment accounts often appeal to “phenomenal concepts” that are, like the introspective judgments to which they contribute, partly constituted by the conscious experiences that are the contents of those concepts. Such concepts are often thought to be obtained by demonstrative attention to our conscious experiences as they are ongoing.
It would seem, at least, that beliefs, concepts, or judgments containing pieces of phenomenology would have to expire once the phenomenology has passed and thus that the introspective judgments could not be used in later inferences without recreating the state in question. Chalmers (2003) concedes the temporal locality of such phenomenology-containing introspective judgments and consequently their limited use in speech and in making generalizations. Papineau (2002), in contrast, embraces a theory in which the imaginative recreation of phenomenology in thinking about past experience is commonplace.
Although we can seemingly at least sometimes arrive at true self-ascriptions through the self-shaping and the self-expression procedures (B and C) described at the beginning of Section 2.3, and although such procedures may meet the first three conditions on an account of introspection as described in Section 1.1—that is, they may (depending on how they are described and developed) be procedures that can yield only knowledge or judgments (or at least self-ascriptions) about one's own currently ongoing or very recently past mental states—few philosophers would describe such procedures as “introspective”. Nonetheless, they warrant brief treatment here, partly for the same reason self/other parity accounts warranted treatment in Section 2.1 above—that is, as skeptical accounts suggesting that the scope of introspection may be considerably narrower than is generally thought—and partly as background for the “transparency” accounts to be discussed in Section 2.3.4 below, with which they are often married.
It is difficult to find accounts of self-knowledge that stress the self-shaping technique in its purest, forward-looking, causal form—perhaps because it's clear that self-knowledge must involve considerably more than this (Gertler 2011). However, McGeer (1996, 2008; McGeer and Pettit 2002) puts considerable emphasis on self-shaping, writing that “we learn to use our intentional self-ascriptions to instill or reinforce tendencies and inclinations that fit with these ascriptions, even though such tendencies and inclinations may at best have been only nascent at the time we first made the judgments” (1996, 510). If I describe myself as brave in battle, or as a committed vegetarian—especially if I do so publicly—I create commitments and expectations for myself that help to make those self-ascriptions true. McGeer compares self-knowledge to the knowledge a driver has, as opposed to a passenger, of where the car is going: The driver, unlike the passenger, can make it the case that the car goes where she says it is going (505).
There are also strains in Dennett (though Dennett may not have an entirely consistent view on these matters; see Schwitzgebel 2007) that suggest either a self-fulfillment or a self-shaping view. In some places, Dennett compares “introspective” self-reports about consciousness to works of fiction, immune to refutation in the same way that fictional claims are—one could no more go wrong about one's consciousness, Dennett says, than Doyle could go wrong about the color of Holmes's easy chair (e.g., 1991, 81, 94). Such remarks are consistent with either an anti-realist view of fiction (there are no facts about the easy chair or about consciousness; see 366–367) or a self-fulfillment or self-shaping realist view (Doyle creates facts about Holmes as he thinks or writes about him; we create facts about what it's like to be us in thinking or making claims about our consciousness, as perhaps on 81 and 94). More moderately, in discussing attitudes, Dennett emphasizes how the act of formulating an attitude in language—for example, when ordering a menu item—can involve self-attributing a degree of specification in one's attitudes that was not present before, thereby committing one to, and partially or wholly creating, the specific attitude self-ascribed (1987, 20).
Wittgenstein writes:
[H]ow does a human being learn the meaning of the names of sensations?—of the word “pain” for example. Here is one possibility: words are connected with the primitive, the natural, expressions of the sensation and used in their place. A child has hurt himself and he cries; and then adults talk to him and teach him exclamations and, later, sentences. They teach the child new pain-behaviour.
“So you are saying that the word ‘pain’ really means crying?”—On the contrary: the verbal expression of pain replaces crying and does not describe it. (1953/1968, sec. 244)
“It can't be said of me at all (except perhaps as a joke) that I know I am in pain. What is it supposed to mean—except perhaps that I am in pain?” (1953/1968, sec. 246).
On Wittgenstein's view, it is both true that I am in pain and that I say of myself that I am in pain, but the utterance in no way emerges from a process of detecting one's pain.
A simple expressivist view—sometimes attributed to Wittgenstein on the basis of these and related passages—denies that the expressive utterances (e.g., “that hurts!”) genuinely ascribe mental states to the individuals uttering them. Such a view faces serious difficulties accommodating the evident semantics of self-ascriptive utterances, including their use in inference and the apparent symmetries between present-tense and past-tense uses and between first-person and third-person uses (Wright 1998; Bar-On 2004). Consequently, Bar-On advocates, instead, what she calls a neo-expressivist view according to which expressive utterances can share logical and semantic structure with non-expressive utterances, despite the epistemic differences between them.
Expressivists have not always been clear about exactly the range of target mental states expressible in this way, but it seems plausible that at least in principle some true (or apt) self-ascriptions could arise in this manner, with no intervening introspective self-detection. The question would then be whether this is how we generally arrive at true self-ascriptions, for some particular class of mental states, or whether some more archetypically introspective process is also available. (For a more detailed treatment of expressivism, consult the section about the expressivist model of self-knowledge in the entry on self-knowledge.)
Evans writes:
[I]n making a self-ascription of belief, one's eyes are, so to speak, or occasionally literally, directed outward—upon the world. If someone asks me, “Do you think there is going to be a third world war?”, I must attend, in answering him, to precisely the same outward phenomena as I would attend to if I were answering the question “Will there be a third world war?” I get myself into the position to answer the question whether I believe that p by putting into operation whatever procedure I have for answering the question whether p. (1982, 225)
Transparency approaches to self-knowledge, like Evans', emphasize cases in which it seems that one arrives at an accurate self-ascription not by means of attending to, or thinking about, one's own mental states, but rather by means of attending to or thinking about the external states of the world that the target mental states are about. Note that this claim has both a negative and a positive aspect: We do not learn about our minds by as it were gazing inward; and we do learn about our minds by reflecting on the aspects of the world that our mental states are about. The positive and negative theses are separable: A pluralist might accept the positive thesis without the negative one; an advocate of a self/other parity theory or an expressivist account of self-knowledge (with respect to a certain class of target states) might accept the negative thesis without the positive. (N.B.: In the philosophical literature on self-knowledge “transparency” is also sometimes used to mean something like self-intimation in the sense of Section 4.1.1 below, for example in Wright 1998; Bilgrami 2006. This is a completely different usage, not to be confused with the present usage.) Because transparency accounts stress the outward focus of our thought in arriving at self-ascriptions, calling such accounts accounts of “introspection” strains against the etymology of the term. Nonetheless, some prominent advocates of transparency accounts, such as Dretske (1995) and Tye (2000), offer them explicitly as accounts of introspection.
The range of target states to which transparency applies is a matter of some dispute. Among philosophers who accept something like transparency, belief is generally regarded as transparent (Gordon 1995, 2007; Gallois 1996; Moran 2001; Fernández 2003; Byrne 2005). Perceptual states or perceptual experiences are also often regarded as transparent in the relevant sense. Harman's example is the most cited:
When Eloise sees a tree before her, the colors she experiences are all experienced as features of the tree and its surroundings. None of them are experienced as intrinsic features of her experience. Nor does she experience any features of anything as intrinsic features of her experiences. And that is true of you too. There is nothing special about Eloise's visual experience. When you see a tree, you do not experience any features as intrinsic features of your experience. Look at a tree and try to turn your attention to intrinsic features of your visual experience. I predict you will find that the only features there to turn your attention to will be features of the presented tree. (Harman 1990, 667)
Harman's emphasis here is on the negative thesis, which goes back at least to Moore (1903; though Moore does not unambiguously endorse it). The view that it is impossible to attend directly to perceptual experience has recently been especially stressed by Tye (1995, 2000, 2002; see also Evans 1982; Van Gulick 1993; Shoemaker 1994a; Dretske 1995; Martin 2002; Stoljar 2004), and directly conflicts with accounts according to which we learn about our sensory experience primarily by directing introspective attention to it (e.g., Goldman 2006; Petitmengin 2006; Hill 2009; Siewert 2012; and back at least to Wundt 1888 and Titchener 1908/1973).
Gordon (2007) argues (contra Nichols and Stich 2003 and Goldman 2006) that Evans-like ascent routines (ascending from “ p ” to “I believe that p ”) can drive the accurate self-ascription of all the attitudes, not just belief. He makes his case by wedding the transparency thesis to something like an expressive account of self-ascription: To answer a question about what I want—for example, which flavor ice cream do I want?—I think not about my desires but rather about the different flavors available, and then I express the resulting attitude self-ascriptively. Similarly for hopes, fears, wishes, intentions, regrets, etc. Gordon points out that from a very early age, before they likely have any self-ascriptive intent, children learn to express their attitudes self-ascriptively, for example with simple phrases like “[I] want banana!” (see also Bar-On 2004).
The transparency thesis is in fact consistent, not just with expressivism, but with any of the four non-detection-based self-ascription procedures described at the beginning of this section (and indeed Aydede and Güzeldere 2005 attempt to reconcile aspects of the transparency view with a broadly detection-like approach to introspection). This manifold compatibility highlights the fact that by itself the transparency thesis does not go far toward a positive view of the mechanisms of self-knowledge.
Moran (2001) brings together transparency and self-shaping in his commissive account of self-knowledge. Moran argues that normally when we are prompted to think about what we believe, desire, or intend (and he limits his account primarily to these three mental states), we reflect on the (outward) phenomena in question and make up our minds about what to believe, desire, or do. Rather than attempting to detect a pre-existing state, we open or re-open the matter and come to a resolution. Since we normally do believe, desire, and intend what we resolve to believe, desire, and do, we can therefore accurately self-ascribe those attitudes. Falvey (2000) embraces a similar view, and furthermore joins it with expressivism, a move Moran resists. (See also Boyle 2009; and see the discussion of the commitment model of self-knowledge in the entry self-knowledge for a more detailed treatment of commissive accounts.)
Byrne (2005) and Dretske (1995) bring together transparency and something like a derivational model of self-knowledge—a model on which I derive the conclusion that I believe that P directly from P itself, or the conclusion that I am representing x as F from the fact that x is F —a fact which must of course, to serve as a premise in the derivation, be represented (or believed) by me. Byrne argues that just as one might abide by the following epistemic rule:
DOORBELL: If the doorbell rings, believe that there is someone at the door
so also might someone abide by the rule:
BEL: If P , believe that you believe that P .
To determine whether you believe that P , first determine whether P is the case, then follow the rule BEL. Byrne (2011a, 2011b, 2011c, 2012) offers similar accounts of self-knowledge of intention, thinking, seeing, and desire.
Dretske analogizes introspection to ordinary cases of “displaced perception”—cases in which one perceives that something is the case by way of directly perceiving some other thing (e.g., hearing that the mail carrier has arrived by hearing the dog's barking; seeing that you weigh 110 pounds by seeing the dial on the bathroom scale): One perceives that one represents x as F by way of perceiving the F -ness of x . Dretske notes, however, two points of disanalogy between the cases. In the case of hearing that the mail carrier has arrived by hearing the dog's bark, the conclusion (that the mail carrier has arrived) is only established if the premise about the dog's barking is true, and furthermore it depends on a defeasible connecting belief, that the dog's barking is a reliable indicator of the mail's arrival. In the introspective case, however, the inference, if it is an inference, does not require the truth of the premise about x 's being F . Even if x is not F , the conclusion that I'm representing x as F is supported. Nor does there seem to be any sort of defeasible connecting belief.
Tye also emphasizes transparency in his account of introspection, though he limits his remarks to the introspection of conscious experience or “phenomenal character”. In his 2000 book, Tye develops a view like Dretske's, analogizing introspection to displaced perception, though Tye unlike Dretske explicitly denies that inference is involved, instead proposing a mechanism similar to the sort of mechanism envisioned by simple monitoring accounts like those of Nichols and Stich (2003; see Section 2.2.1 above), a reliable process that, in the case of perceptual self-awareness, takes awareness of external things as its input and yields as its output awareness of phenomenal character. (The key difference between Tye's 2000 account on the one hand and the Nichols and Stich account on the other that warrants the classification of Tye's view here rather than in the section on self-detection models is this: Tye rejects the idea that the process is one of internal detection, while Nichols and Stich stress that idea. To adjudicate the dispute between those two positions, and to determine whether it might, in fact, be merely nominal, it would be helpful to have a clearer sense than has so far been given of what it means to say that one subpersonal system detects, or “monitors” or “scans”, the states or contents of another.) However, in his 2009 book, Tye rejects the displaced perception model in favor of a version of the transparency view that identifies phenomenal character with external qualities in the world, so that perceiving features of the world just is perceiving phenomenal character—a view that he recognizes is then charged with the difficult task of explaining how phenomenal character is a property (or “quality”) of external objects rather than, as is generally assumed, a property only of experiences of those objects.
Several authors have challenged the idea that sensory experience necessarily eludes attention—that is, they have denied the central claim of transparency theories about sensory experience. Block (1996), Kind (2003), and Smith (2008) have argued that phosphenes—those little lights you see when you press on your eyes—and visual blurriness are aspects of sensory experiences that can be directly attended. Siewert (2004) has argued that what's intuitively appealing in the transparency view is primarily the observation that in reflecting on sensory experience one does not withdraw attention from the objects sensed; but, he argues, this is compatible with also devoting a certain sort of attention to the sensory experience itself. In early discussions of attention, perceptual attention was sometimes distinguished from “intellectual attention” (James 1890/1981; Baldwin 1901–1905; see also Peacocke 1998; Mole 2011), that is, from the kind of attention we can devote to purely imagined word puzzles or to philosophical issues. If non-sensory forms of attention are possible, then the transparency thesis for sensory experience will require restatement: Is it only sensory attention to sensory experience that is impossible? Or is it any kind of attention whatsoever? Simply to say we don't attend sensorily to our mental states is to make only a modest claim, akin to the claim that we see objects rather than seeing our visual experiences of objects; but to say that we cannot attend to our mental states even intellectually appears extreme. In light of this, it remains unclear how to cast the transparency intuition to better bring out the core idea that is meant to be conveyed by the slogan that introspecting sensory experience is not a matter of attending to one's own mind.
Philosophers discussing self-knowledge often write as if approaches highlighting one of these methods of generating self-ascriptions conflict with approaches that highlight others of these methods, and also as if approaches of this general sort conflict with self-detection approaches (Section 2.2 above). While conflicts will certainly exist between different accounts intended to serve as exhaustive approaches to self-knowledge, it is implausible that any one or even any few of these approaches to self-knowledge is exhaustive. Plausibly, all of the non-self-detection approaches described above can lead, at least occasionally, to accurate self-ascriptions. Enthusiasts for another of the models, or for a self-detection model, needn't deny this. It also seems hard to deny that we at least sometimes reach conclusions about our mental lives based on the kind of theoretical inference or self-interpretation emphasized by advocates of self/other parity accounts (Section 2.1 above). Finally, even philosophers concerned about strong or oversimple self-scanning views might wish to grant that the mind can do some sort of tracking of its own present or recently past states—for example, when we trace back a stream of recently past thoughts that presumably can't (because past) be self-ascribed by self-fulfillment, self-shaping, self-expression, or transparency methods.
Schwitzgebel (2012) elevates this pluralism into a kind of negative account of introspection. Introspective judgments, he says, arise from a shifting confluence of many processes, recruited opportunistically, none of which can be called introspection proper. Just as there is no single, unified faculty of poster-taking-in that one employs when trying to take in a poster at a psychological conference or science fair, there is, on Schwitzgebel's view, no single, unified faculty of introspection or one underlying core process. Instead, the introspector, like the poster-viewer, brings to bear a diverse range of cognitive resources as suits the occasion. However, he says, the process wouldn't be worth calling “introspective” unless the introspector aimed to reach a judgment about her current or very recently past conscious experience, in a way that uses at least some resources specific to the first-person case, and in a way that involves some relatively direct sensitivity to the target state.
3. The Role of Introspection in Scientific Psychology
Philosophers have long made introspective claims about the human mind—or, to speak more cautiously, they've made claims seemingly at least in part introspectively grounded. Aristotle (4th c. BCE/1961) asserts that thought does not occur without imagery. Mengzi (3rd c. BCE/2008) argues that our hearts are pleased by moral goodness and revolted by evil, even if the pleasure and revulsion are not evident in our outward behavior. Berkeley finds in himself no “abstract ideas” like that of a triangle that is, in Locke's terms, “neither oblique, nor rectangle, neither equilateral, equicrural, nor scalenon, but all and none of these at once” (Berkeley 1710/1965, 12; Locke 1689/1975, 596). James Mill (1829/1878) attempts a catalog of the varieties of sense experience.
Although a number of early modern philosophers had aimed to initiate the scientific study of the mind, it wasn't until the middle of the 19th century—with the appearance of quantitative introspective methods, especially regarding sensory consciousness—that the study of the mind took shape as a progressive, mathematical, laboratory-based science. Early quantitative psychologists such as Helmholtz (1856/1962), Fechner (1860/1964), and Wundt (1896/1902) sought quantitative answers to questions like: By how much must two physical stimuli differ for the experiences of them to differ noticeably? How weak a stimulus can still be consciously perceived? What is the mathematical relationship between stimulus intensity and the intensity of the resulting sensation? (The Weber-Fechner law holds that the relationship is logarithmic.) Along what dimensions, exactly, can sense experience vary? (The “color solid” [see the link to the Munsell solid in Other Internet Resources, below], for example, characterizes color experience by appeal to just three dimensions of variation: hue, saturation, and lightness or brightness.) Although from very early on, psychologists also employed non-introspective methods (e.g., performance on memory tests, reaction times), most early characterizations of the field placed introspection at the center. James, for example, wrote that “introspective observation is what we have to rely on first and foremost and always” (1890/1981, 185).
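The logarithmic relationship mentioned above can be made concrete. The sketch below assumes the standard textbook form of the Weber-Fechner law, S = k·ln(I/I₀), where I₀ is the absolute threshold and k a modality-specific constant; the parameter values are illustrative, not drawn from any of the psychologists discussed.

```python
import math

def sensation_magnitude(intensity, threshold, k=1.0):
    """Weber-Fechner law: perceived magnitude grows as the logarithm
    of physical intensity relative to the absolute threshold.
    S = k * ln(I / I0); k and the threshold are placeholders here."""
    return k * math.log(intensity / threshold)

# The law's signature prediction: each doubling of the stimulus adds
# the same fixed increment to sensation (a multiple of ln 2).
one_doubling = sensation_magnitude(2.0, 1.0)
two_doublings = sensation_magnitude(4.0, 1.0)
```

This captures why equal ratios of stimulus intensity, rather than equal absolute differences, were predicted to feel equally far apart.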
In contrast with the dominant philosophical tradition that has, since Descartes, stressed the special privilege or at least high accuracy of introspective judgments about consciousness (see Section 4.1 below), many early introspective psychologists held that the introspection of currently ongoing or recently past conscious experience is difficult and prone to error if the introspective observer is insufficiently trained. Wundt, for example, reportedly did not credit the introspective reports of people with fewer than 50,000 trials of practice in observing their conscious experience (Boring 1953). Titchener, a leading American introspective psychologist, wrote a 1600-page introspective training manual for students, arguing that introspective observation is at least as difficult as observation in the physical sciences (Titchener 1901–1905; see also Wundt 1874/1908; Müller 1904; for contemporary discussions of introspective training see Varela 1996; Nahmias 2002; Schwitzgebel 2011b). This difference in optimism about untrained introspection may partly reflect differences in the types of judgments foregrounded in the two disciplines. Philosophers stressing privilege tend to focus on coarse and (seemingly) simple judgments such as “I'm having a visual experience of redness” or “I believe it's raining”. The projects of interest to introspective psychologists often required much finer judgments—such as determining with mathematical precision whether one visual sensation has twice the “intensity” of another or determining along what dimensions emotional experience can vary.
Early introspective psychologists' theoretical discussions of the nature of introspection were often framed in reaction to skepticism about the scientific viability of introspection, especially the concern that the introspective act interferes with or destroys the mental state or process that is its target.[1] The most influential formulation of this concern was Comte's:
But as for observing in the same way intellectual phenomena at the time of their actual presence, that is a manifest impossibility. The thinker cannot divide himself into two, of whom one reasons whilst the other observes him reason. The organ observed and the organ observing being, in this case, identical, how could observation take place? This pretended psychological method is then radically null and void (1830, using the translation of James 1890/1981, 188).
Introspective psychologists tended to react to this concern in one of three ways. The most concessive approach—recommended, for example, by James (1890/1981; see also Mill 1865/1961; Lyons 1986)—was to grant Comte's point for concurrent introspection, that is, introspection simultaneous with the target state or process, and to emphasize in contrast immediate retrospection , that is, reflecting on or attending to the target process (usually a conscious experience) very shortly after it occurs. Since the scientific observation occurs only after the target process is complete, it does not interfere with that process; but of course the delay between the process and the observation must be as brief as possible to ensure that the process is accurately remembered.
Brentano (1874/1973) responded to Comte's concern by distinguishing between “inner observation” [ innere Beobachtung ] and “inner perception” [ innere Wahrnehmung ]. Observation, as Brentano characterizes it, involves dedicating full attention to a phenomenon, with the aim of apprehending it accurately. This dedication of attention necessarily interferes with the process to be observed if the process is a mental one; therefore, he says, inner observation is problematic as a scientific psychological method. Inner perception , in contrast, according to Brentano, does not involve attention to our mental lives and thus does not objectionably disturb them. While our “attention is turned toward a different object … we are able to perceive, incidentally, the mental processes which are directed toward that object” (1874/1973, 30). Brentano concedes that inner perception necessarily lacks the advantages of attentive observation, so he recommends conjoining it with retrospective methods.
Wundt (1888) agrees with Comte and Brentano that observation necessarily involves attention and so often interferes with the process to be observed, if that process is an inner, psychological one. To a much greater extent than Brentano, however, Wundt emphasizes the importance to scientific psychology of direct attention to experience, including planful and controlled variation. The psychological method of “inner perception” is, for Wundt, the method of holding and attentively manipulating a memory image or reproduction of a past psychological process. Although Wundt sees some value in this retrospective method, he thinks it has two crucial shortcomings: First, one can only work with what one remembers of the process in question—the manipulation of a memory-image cannot discover new elements. And second, foreign elements may be unintentionally introduced through association—one might confuse one's memory of a process with one's memory of another associated process or object.
Therefore, Wundt suggests, the science of psychology must depend upon the attentive observation of mental processes as they occur. He argues that those who think attention necessarily distorts the target mental process are too pessimistic. A subclass of mental processes remains relatively unperturbed by attentive observation—the “simpler” mental processes, especially of perception (1896/1902, 27–28). The experience of seeing red, Wundt claims, is more or less the same whether or not one is attending to the psychological fact that one is experiencing redness. Wundt also suggests that the basic processes of memory, feeling, and volition can be observed systematically and without excessive disruption. These alone, he thinks, can be studied by introspective psychology (see also Wundt 1874/1904; 1896/1902; 1907). Other aspects of our psychology must be approached through non-introspective methods such as the observation of language, mythology, culture, and human and animal development.
Although introspective psychologists were able to build scientific consensus on some issues concerning sense experience—issues such as the limits of sensory perception in various modalities and some of the contours of variation in sensory experience—by the early 20th century it was becoming clear that on many issues consensus was elusive. The most famous dispute concerned the existence of “imageless thought” (see the discussion of the imageless thought controversy in the entry mental imagery; see also Humphrey 1951; Kusch 1999); but other topics proved similarly resistant, such as the structure of emotion or “feeling” (James 1890/1981; Külpe 1893/1895; Wundt 1896/1902; Titchener 1908/1973) and the experiential changes brought about by shifts in attention (Wundt 1896/1902; Pillsbury 1908; Titchener 1908/1973; Chapman 1933).
By the 1910s, behaviorism (which focused simply on the relationship between outward stimuli and behavioral response) had declared war on introspective psychology, portraying it as bogged down in irresolvable disputes between differing introspective “experts”, and also rebuking the introspectivists' passive taxonomizing of experience, recommending that psychology focus instead on socially useful paradigms for modifying behavior (e.g., Watson 1913). In the 1920s and 1930s, introspective studies were increasingly marginalized. Although strict behaviorism declined in the 1960s and 1970s, its main replacement, cognitivist functionalism (which treats functionally defined internal cognitive processes as central to psychological inquiry), generally continued to share behaviorism's disdain of introspective methods.
Psychophysics (the study of the relationship between physical sensory input and consequent psychological state or response), where the introspective psychologists had found their greatest success, underwent a subtle shift in this period from a focus on subjective methods—methods that involve asking subjects to report on their experiences or percepts—to a focus on objective methods such as asking subjects to report on states of the outside world, including insisting that subjects guess even when they feel they don't know or have no relevant conscious experience (especially with the rise of “signal detection theory” in psychophysics: Green and Swets 1966; Cheesman and Merikle 1986; Macmillan and Creelman 1991; Merikle, Smilek, and Eastwood 2001). Perhaps in accord with transparency views of introspection (Section 2.3.4 above), the two types of instruction to subjects seem very similar (compare the subjective “tell me if you visually experience a flash of light” with the objective “tell me if the light flashes”). On the other hand, perhaps in tension with transparency views, subjective and objective instructions seem sometimes to differ importantly, especially in cases of known illusion, Gestalt effects such as perceived grouping, stimuli near the limits of perceivability, and the experience of ambiguous figures (Boring 1921; Merikle, Smilek, and Eastwood 2001; Siewert 2004).
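As a concrete illustration of the “objective” turn, signal detection theory separates a subject's perceptual sensitivity from their response bias using only world-directed reports (hits and false alarms on trials where subjects must guess). A minimal Python sketch of the standard d′ computation follows; the example rates are hypothetical, and the helper name is our own.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' from signal detection theory: the distance
    between the z-transformed hit and false-alarm rates. Unlike raw
    accuracy, d' is unaffected by a subject's bias toward saying "yes"."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# A hypothetical observer: 84% hits, 16% false alarms.
sensitivity = d_prime(0.84, 0.16)  # roughly 2 (both z-scores near +/-1)
```

A subject who says “yes” on every trial scores 100% hits but also 100% false alarms, and so gets no credit for sensitivity; this is precisely how the objective method sidesteps disputes about what the subject consciously experienced.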
In no period, however, were introspective methods entirely abandoned by psychologists, and in the last few decades, they have begun to make something of a comeback, especially with the rise of the interdisciplinary field of “consciousness studies” (see, e.g., Jack and Roepstorff, eds., 2003, 2004). Ericsson and Simon (1984/1993; to be discussed further in Section 4.2.3 below) have advocated the use of “think-aloud protocols” and immediately retrospective reports in the study of problem solving. Other researchers have emphasized introspective methods in the study of imagery (Marks 1985; Kosslyn, Reisberg, and Behrmann 2006) and emotion (Lambie and Marcel 2002; Barrett et al. 2007).
Beeper methodologies have been developed to facilitate immediate retrospection, especially by Hurlburt (1990, 2011; Hurlburt and Heavey 2006; Hurlburt and Schwitzgebel 2007) and Csikszentmihalyi (Larson and Csikszentmihalyi 1983; Hektner, Schmidt, and Csikszentmihalyi 2007). Traditional immediately retrospective methods required the introspective observer in the laboratory somehow to intentionally refrain from introspecting the target experience as it occurs, arguably a difficult task. Hurlburt and Csikszentmihalyi, in contrast, give participants beepers to wear during ordinary, everyday activity. The beepers are timed to sound only at long intervals, surprising participants and triggering an immediately retrospective assessment of their “inner experience”, emotion, or thoughts in the moment before the beep.
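The scheduling idea behind these beeper methods—a handful of signals at unpredictable times, spaced far enough apart that participants cannot anticipate them—can be sketched as follows. All parameters (waking hours, number of beeps, minimum gap) are invented for illustration, not taken from Hurlburt's or Csikszentmihalyi's actual protocols.

```python
import random

def schedule_beeps(waking_hours=(9, 21), n_beeps=6, min_gap_minutes=45, seed=None):
    """Draw a day's worth of beep times (minutes after midnight) uniformly
    at random within waking hours, rejecting schedules whose beeps fall
    too close together. Parameter values are illustrative only."""
    rng = random.Random(seed)
    start, end = waking_hours[0] * 60, waking_hours[1] * 60
    while True:
        times = sorted(rng.randint(start, end) for _ in range(n_beeps))
        gaps = [b - a for a, b in zip(times, times[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return times

beeps = schedule_beeps(seed=0)  # six well-spaced, unpredictable times
```

The rejection loop enforces the long intervals the method relies on: if beeps clustered, participants could settle into an introspective "ready" state and the retrospection would no longer sample ordinary, unprepared experience.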
Introspective or subjective reports of conscious experience have also played an important role in the search for the “neural correlates of consciousness” (as reviewed in Rees and Frith 2007; Tononi and Koch 2008; Prinz 2012; see also Varela 1996). One paradigm is for researchers to present ambiguous sensory stimuli, holding them constant over an extended period, noting what neural changes correlate with changes in subjective reports of experience. For example, in “binocular rivalry” methods, two different images (e.g., a face and a house) are presented, one to each eye. Participants typically say that only one image is visible at a time, with the visible image switching every few seconds. Researchers have sometimes reported finding evidence that activity in “early” visual areas (such as V1) is not temporally coupled with reported changes in visual experience, while changes in conscious percept are better temporally coupled with activity in frontal and parietal areas further downstream and with large-scale changes in neural synchronization or oscillation; but the evidence is disputed (Lumer, Friston, and Rees 1998; Tong et al. 1998; Tononi et al. 1998; Polonsky et al. 2000; Kreiman, Fried, and Koch 2002; Moutoussis and Zeki 2002; Tong, Meng, and Blake 2006; Kamphuisen, Bauer, and van Ee 2008; Sandberg et al. 2013; Ishiku and Zeki 2014). Another version of the ambiguous sensory stimuli paradigm involves presenting the subject with an ambiguous figure such as the Rubin faces-vase figure.
Using this paradigm, researchers have found neuronal changes both in early visual areas and in later areas, as well as changes in widespread neuronal synchrony, that correspond temporally with subjective reports of flipping between one way and another of seeing the ambiguous figure (Kleinschmidt et al. 1998; Rodriguez et al. 1999; Ilg et al. 2008; Parkkonen et al. 2008; de Graaf et al. 2011). In masking paradigms, stimuli are briefly presented and then followed by a “mask”. On some trials, subjects report seeing the stimuli, while on others they don't. In trials in which the subject reports that the stimulus was visually experienced, researchers have tended to find higher levels of activity through at least some of the downstream visual pathways as well as spontaneous electrical oscillations near 40 Hz (Dehaene et al. 2001; Summerfield, Jack, and Burgess 2002; Del Cul, Baillet, and Dehaene 2007; Quiroga et al. 2008). However, it remains contentious how properly to interpret such attempts to find neural correlates of consciousness (Noë and Thompson 2004; Overgaard, Sandberg, and Jensen 2008; Tononi and Koch 2008; Dehaene and Changeux 2011; Aru, Bachmann, Singer, and Melloni 2012; de Graaf, Hsieh, and Sack 2012).
If we report our attitudes by introspecting upon them, then much of survey research is also introspective, though psychologists have not generally explicitly described it as such. As with subjective vs. objective methods in psychophysics, there appears to be only a slight difference between subjectively phrased questions (“Do you approve of the President's handling of the war?”, “Do you think gay marriage should be legalized?”) and objectively phrased questions (“Has the President handled the war well?”, “Should gay marriage be legalized?”). This would seem to support the observation at the core of transparency theory (discussed in Section 2.3.4 above) that questions about the mind and questions about the outside world often call for the same type of reflection.
4. The Accuracy of Introspection
4.1 Varieties of Privilege
It's plausible to suppose that people have some sort of privileged access to at least some of their own mental states or processes: You know about your own mind, or at least some aspects of it, in a different way and better than you know about other people's minds, and maybe also in a different way and better than you know about the outside world. Consider pain. It seems you know your own pains differently and better than you know mine, differently and (perhaps) better than you know about the coffee cup in your hand. If so, perhaps that special “first-person” privileged knowledge arises through something like introspection, in one or more of the senses described in Section 2 above.
Just as there is a diversity of methods for acquiring knowledge of or reaching judgments about one's own mental states and processes, to which the label “introspection” applies with more or less or disputable accuracy, so also is there a diversity of forms of “privileged access”, with different kinds of privilege and to which the idea of access applies with more or less or disputable accuracy. And as one might expect, the different introspective methods do not all align equally well with the different varieties of privilege.
A substantial philosophical tradition, going back at least to Descartes (1637/1985; 1641/1984; also Augustine c. 420 C.E./1998), ascribes a kind of epistemic perfection to at least some of our judgments (or thoughts or beliefs or knowledge) about our own minds—infallibility, indubitability, incorrigibility, or self-intimation. Consider the judgment (thought, belief, etc.) that P , where P is a proposition self-ascribing a mental state or process (for example P might be I am in pain , or I believe that it is snowing , or I am thinking of a dachshund ). The judgment that P is infallible just in case, if I make that judgment, it is not possible that P is false. It is indubitable just in case, if I make the judgment, it is not possible for me to doubt the truth of P . It is incorrigible just in case, if I make the judgment, it is not possible for anyone else to show that P is false. And it is self-intimating if it is not possible for P to be true without my reaching the judgment (thought, belief, etc.) that it is true. Note that the direction of implication for the last of these is the reverse of the first three. Infallibility, indubitability, and incorrigibility all have the form: “If I judge (think, believe, etc.) that P , then …”, while self-intimation has the form “If P , then I judge (think, believe, etc.) that P ”. All four theses also admit of weakening by adding conditions to the antecedent “if” clause (e.g., “If I judge that P as a result of normal introspective processes, then …”). (See Alston 1971 for a helpful dissection of these distinctions; all admit of variations and nuance. Also note that some philosophers [e.g. Ayer 1936/1946; Armstrong 1963; Chalmers 2003; Tye 2009] use “incorrigibility” to mean infallibility as defined here, while others [e.g., Ayer 1963; Alston 1971; Rorty 1970; Dennett 2000] use it with the more etymologically specific meaning of [something like] “incapable of correction”.)
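The logical structure of the four theses, and the reversal of direction noted above, can be summarized schematically. Writing J(P) for “I judge (think, believe) that P” and using standard conditional and possibility notation—a shorthand gloss of the definitions just given, not the source's own notation:

```latex
\begin{align*}
\textit{Infallibility:}   \quad & J(P) \rightarrow P\\
\textit{Indubitability:}  \quad & J(P) \rightarrow \neg\Diamond\,\mathrm{Doubt}_{\mathrm{me}}(P)\\
\textit{Incorrigibility:} \quad & J(P) \rightarrow \neg\Diamond\,\mathrm{ShownFalse}_{\mathrm{others}}(P)\\
\textit{Self-intimation:} \quad & P \rightarrow J(P)
\end{align*}
```

The first three run from judgment to fact (or to immunity from doubt or correction), while self-intimation alone runs from fact to judgment; the weakened versions mentioned in the text simply conjoin further conditions into each antecedent.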
Descartes (1641/1984) famously endorsed the indubitability of “I think”, which he extends also to such mental states as doubting, understanding, affirming, and seeming to have sensory perceptions. He also appears to claim that the thought or affirmation that I am in such states is infallibly true. He was followed in this—especially in his infallibilism—by Locke (1690/1975), Hume (1739/1978), twentieth-century thinkers such as Husserl (1913/1982), Ayer (1936/1946, 1963), Lewis (1946), and the early Shoemaker (1963), and many others. Historical arguments for indubitability and infallibility have tended to center on intuitive appeals to the apparent impossibility of doubting or going wrong about such matters as whether one is having a thought with a certain content or is experiencing pain or having a visual experience as of seeing red.
Recent infallibilists have added to this intuitive appeal structural arguments based on self-fulfillment accounts of introspection or self-knowledge (see Section 2.3.1 above)—generally while also narrowing the scope of infallibility, for example to thoughts about thoughts (Burge 1988, 1996), or to “pure” phenomenal judgments about consciousness (Chalmers 2003; see also Wright 1998; Gertler 2001; Horgan, Tienson, and Graham 2006; Horgan and Kriegel 2007; Tye 2009; with important predecessors in Brentano 1874/1973; Husserl 1913/1982). The intuitive idea behind all these structural arguments is that somehow the self-ascriptive thought or judgment contains the mental state or process self-ascribed: the thought that I am thinking of a pink elephant contains the thought of a pink elephant; the judgment that I am having a visual experience of redness contains the red experience itself.
In contrast, self/other parity (Section 2.1) and self-detection (Section 2.2) accounts of introspection or self-knowledge appear to stand in tension with infallibilism. If introspection or self-knowledge involves a causal process from a mental state to an ontologically distinct self-ascription of that state, it appears that, however reliable such a process may generally be, there is inevitably room in principle for interference and error. Minimally, it seems, stroke, quantum accident, or clever neurosurgery could break otherwise generally reliable relationships between target mental states and the self-ascriptions of those states. Similar considerations apply to self-shaping (Section 2.3.2) and expressivist (Section 2.3.3) accounts, to the extent that these are interpreted causally rather than constitutively.
Introspective incorrigibility, as opposed to either infallibility or indubitability, was held by Rorty (1970) to be “the mark of the mental”—and thus as applying to a wide range of mental states—and has also been embraced more recently by Dennett (2000, 2002). The idea behind incorrigibility, recall, is that no one else could show your self-ascriptions to be false; or we might say, more qualifiedly and a bit differently, that if you arrive at the right kind of self-ascriptive judgment (perhaps an introspectively based judgment about a currently ongoing conscious process that survives critical reflection), then no one else, perhaps not even you in the future, aware of this, can rationally hold that judgment to be mistaken. If I judge that right now I am in severe pain, and I do so as a result of considering introspectively whether I am indeed in such pain (as opposed to, say, merely inferring that I am in pain based on outward behavior), and if I pause to think carefully about whether I really am in pain and conclude that I indeed am, then no one else who is aware of this can rationally believe that I'm not in pain, regardless of what my outward behavior might be (say, calm and relaxed) or what shows up in the course of brain imaging (say, no activation in brain centers normally associated with pain).
Incorrigibility does not imply infallibility: I may not actually be in pain, even if no one could show that I'm not. Consequently, incorrigibility is compatible with a broader array of sources of self-knowledge than is infallibility. Neither Rorty nor Dennett, for example, appears to defend incorrigibility by appeal to self-fulfillment accounts of introspection (though in both cases, interpreting their positive accounts is difficult). Causal accounts of self-knowledge may be compatible with incorrigibility if the causal connections underwriting the incorrigible judgments make those judgments vastly more trustworthy than judgments obtained without the benefit of this sort of privileged access. Of course, unless one embraces a strict self-fulfillment account, with its attendant infallibilism, one will want to rule out abnormal cases such as quantum accident; hence the need for qualifications.
Self-intimating mental states are those such that, if a person (or at least a person with the right background capacities) has them, she necessarily believes or judges or knows that she does. Conscious states are often held to be in some sense self-intimating, in that the mere having of them involves, requires, or implies some sort of representation or awareness of those states. Brentano argues that consciousness, for example, of an outward stimulus like a sound, “clearly occurs together with consciousness of this consciousness”, that is, the consciousness is “of the whole mental act in which the sound is presented and in which the consciousness itself exists concomitantly” (1874/1995, 129; see also phenomenological approaches to self-consciousness ). Recent “higher order” and “same order” theories of consciousness (Armstrong 1968; Rosenthal 1990, 2005; Gennaro 1996; Lycan 1996; Carruthers 2005; Kriegel 2009; see also higher-order theories of consciousness ) explain consciousness in terms of some thought, perception, or representation of the mental state that is conscious—the presence of that thought, perception, or representation being what makes the target state conscious. (On same order theories, the target mental state, or an aspect of it, represents itself, with no need for a distinct higher order state.) Thus, Horgan and others have described consciousness as “self-presenting” (Horgan, Tienson, and Graham 2005; Horgan and Kriegel 2007; the usage appears to follow Chisholm 1981, but Chisholm actually has an indubitability rather than a self-intimation thesis in mind). Shoemaker (1995) argues that beliefs—as long as they are “available” (i.e., readily deployed in inference, assent, practical reasoning, etc.), which needn't require that they are occurrently conscious—are self-intimating for individuals with sufficient cognitive capacity. 
Shoemaker's idea is that if the belief that P is available in the relevant sense, then one is disposed to do things like say “I believe P”, and such dispositions are themselves constitutive of believing that one believes that P.
Self-intimation claims (unlike infallibility, indubitability, and incorrigibility claims) are not usually cast as claims about “introspection”. This may be because knowledge acquired through self-intimation would appear to be constant and automatic, thus violating the effort condition on introspection (condition 6 in Section 1.1 above).
A number of philosophers have argued for forms of first-person privilege involving some sort of epistemic guarantee—not just conditional accuracy as a matter of empirical fact, but something more robust than that—without embracing infallibility, indubitability, incorrigibility, or self-intimation in the senses described in Section 4.1.1 above.
Shoemaker (1968), for example, argues that self-knowledge of certain psychological facts such as “I am waving my arm” or “I see a canary”, when arrived at “in the ordinary way (without the aid of mirrors, etc.)”, is immune to error through misidentification relative to the first-person pronoun (see also Campbell 1999; Pryor 1999; Bar-On 2004; Hamilton 2008). That is, although one may be wrong about waving one's arm (perhaps the nerves to your arm were recently severed unbeknownst to you) or about seeing a canary (perhaps it's a goldfinch), one cannot be wrong due to mistakenly identifying the person waving the arm or seeing the canary as you, when in fact it is someone else. This immunity arises, Shoemaker argues, because there is no need for identification in the first place, and thus no opportunity for mis-identification. In this respect, Shoemaker argues, knowledge that a particular arm that is moving is your arm (not immune to misidentification, since maybe it's someone else's arm, misidentified in the mirror) is different from the knowledge that you are moving your arm—knowledge, that is, of what Searle (1983) calls an “intention in action”.
Shoemaker has also argued for the conceptual impossibility of introspective self-blindness with respect to one's beliefs, desires, and intentions, and for somewhat different reasons one's pains (1988, 1994b). A self-blind creature, by Shoemaker's definition, would be a rational creature with a conception of the relevant mental states, and who can entertain the thought that she has this or that belief, desire, intention, or pain, but who nonetheless utterly lacks introspective access to the type of mental state in question. A self-blind creature could still gain “third person” knowledge of the mental states in question, through observing her own behavior, reading textbooks, and the like. (Thus, strict self/other parity accounts of self-knowledge of the sort described in Section 2.1 are accounts according to which one is self-blind in Shoemaker's sense.) Shoemaker's case against self-blindness with respect to belief turns on the dilemma of whether the self-blind creature can avoid “Moore-paradoxical” sentences (see Moore 1942, 1944/1993; Shoemaker 1995) like “it's raining but I don't believe that it's raining” in which the subject asserts both P and that she doesn't believe that P . If the subject is truly self-blind, Shoemaker suggests, there should be cases in which her best evidence is both that P and that she doesn't believe that P (the latter, perhaps, based on misleading facts about her behavior). But if the subject asserts “ P but I don't believe that P ” in such cases, she does not (contra the initial supposition) really have a rational command of the nature of belief and assertion; and thus it's not a genuine case of self-blindness as originally intended. Alternatively, perhaps the creature can reliably avoid such Moore-paradoxical sentences, self-attributing belief in an apparently normal way. But then, Shoemaker suggests, it seems that she is indistinguishable from normal people in thought and behavior and hence not self-blind. 
For desire, intention, and pain, too, Shoemaker aims to reveal incoherences between having a rational command of the concepts in question and behaving as though one were systematically ignorant of or mistaken about those states. Shoemaker uses his case against self-blindness as part of his argument against self-detection accounts of introspection (described in Section 2.2 above): If introspection were a matter of detecting the presence of states that exist independently of the introspective judgment or belief, then it ought to be possible for the faculty enabling the detection to break down entirely, as in the case of blindness, deafness, etc., in outward perception (see also Nichols and Stich 2003, who argue that schizophrenia provides such a case).
Burge has influentially asserted that brute errors about “present, ordinary, accessible propositional attitudes [such as belief and desire]” are impossible or at least subject to “severe limits”—where a “brute error” is an error that “indicates no rational failure and no malfunction in the mistaken individual” such as commonly occur in ordinary perception due to “misleading natural conditions or look-alike substitutes” (1988, 657–658; 1996, 103–104). However, Burge offers little argument for this claim, apart from the argument mentioned in Sections 2.3.1 and 4.1.1 above that for certain sorts of self-ascriptions error in general (and not just “brute error”) is impossible, due to the “self-verifying” nature of such self-ascriptions.
Dretske (1995, 2004) argues that we have infallible knowledge of the content of our attitudes without necessarily knowing (or even having a very good idea about) the attitude we take toward those contents. For example, if I believe that it will rain tomorrow, I have infallibly accurate information, which I may then access introspectively, regarding the presence of a mental state with a certain content—the content “it will rain tomorrow”—but I may often have little or no information about the fact that my attitude toward that content is the particular attitude it is—belief, in this case (as opposed to supposition or hope). This view follows from Dretske's accepting something like a containment account of the introspection of the content of the attitude (the introspective judgment employing the same content as the target attitude; see Section 2.3.1 above, especially the discussion of Burge), while he sees knowledge of the attitude one has toward that content as requiring complex information about the causal role and history of that mental state.
Transcendental arguments for the accuracy of certain sorts of self-knowledge offer a different sort of epistemic guarantee—“transcendental arguments” being arguments that assume the existence of some sort of experience or capacity, then develop insights about the background conditions necessary for that experience or capacity, and finally conclude that those background conditions must in fact be met. Burge (1996; see also Shoemaker 1988) argues that to be capable of “critical reasoning” one must be able to recognize one's own attitudes, knowledgeably evaluating, identifying, and reviewing one's beliefs, desires, commitments, suppositions, etc., where these mental states are known to be the states they are. Since we are (by assumption, for the sake of transcendental argument) capable of critical reasoning, we must have some knowledge of our attitudes. Bilgrami (2006) argues that we can only be held responsible for actions if we know the beliefs and desires that “rationalize” our actions; since we can (by assumption) sometimes be held responsible, we must sometimes know our beliefs and desires. Wright (1989) argues that the “language game” of ascribing “intentional states” such as belief and desire to oneself and others requires as a background condition that self-ascriptions have special authority within that game. Given that we successfully play this language game, we must indeed have the special authority that we assume and others grant us in the context of the game.
Developing an analogy from Wright (1998), if it's your turn with the kaleidoscope, you have a type of privileged perspective on the shapes and colors it presents. If someone else in the room wants to know what color dominates, for example, the most straightforward course would be to ask you. But this type of privileged access comes with no guarantee. At least in principle, you might be quite wrong about the tumbling shapes. You might be dazzled by afterimages, or momentarily confused, or hallucinating, or (unbeknownst to you) colorblind. (Yes, people often don't know they are colorblind, a point stressed by Kornblith 1998.) It is also at least in principle possible that others may know better than you, perhaps even systematically so, what is transpiring in the kaleidoscope. You might think the figure shows octagonal symmetry, but the rest of us, familiar with the kaleidoscope's design, might know that the symmetry is hexagonal. A brilliant engineer may invent a kaleidoscope state detector that can dependably reveal from outside the shape, color, and position of the tumbling chunks.
Wright raises this analogy to suggest that people's privilege with respect to certain aspects of their mental lives must be different from that of the person with the kaleidoscope; but other philosophers, especially those who embrace self-detection accounts of introspection, should find the analogy at least somewhat apt: Introspective privilege is akin to the privilege of having a unique and advantageous sensory perspective on something. Metaphorically speaking, we are the only ones who can gaze directly at our attitudes or our stream of experience, while others must rely on us or on outward signs. Less metaphorically, in generating introspective judgments (or beliefs or knowledge) about one's own mentality one employs a detection process available to no one else. It is then an empirical question how accurate the deliverances of this process are; but on the assumption that the deliverances are in a broad range of conditions at least somewhat accurate and more accurate than the typical judgments other people make about those same aspects of your mind, you have a “privileged” perspective. Typically, advocates of self-detection models of introspection regard the mechanism or cognitive process generating introspective judgments or beliefs as highly reliable in roughly this way, but not infallible, and not immune to correction by other people (Armstrong 1968; Churchland 1988; Hill 1981, 2009; Lycan 1996; Nichols and Stich 2003; Goldman 2000, 2006).
4.2 Empirical Evidence on the Accuracy of Introspection
The arguments of the previous section are a priori in at least the broad sense of that term (the psychologists' sense): They depend on general conceptual considerations and armchair folk psychology rather than on empirical research. To these might be added the argument, due to Boghossian (1989), that “externalism” about the content of our attitudes (the view that our attitudes depend constitutively not just on what is going on internally but also on facts about our environment; Putnam 1975; Burge 1979) seems to problematize introspective self-knowledge of those attitudes. This issue will not be treated here, since it is amply covered in the entries on externalism about mental content and externalism and self-knowledge.
Now we turn to empirical research on our self-knowledge of those aspects of our minds often thought to be accessible to introspection. Since character traits are not generally regarded as introspectible aspects of our mentality, we'll skip the large literature on the accuracy or inaccuracy of our judgments about them (e.g., Taylor and Brown 1988; Paulhus and John 1998; Funder 1999; Vazire 2010; see also Haybron's 2008 skeptical perspective on our knowledge of how happy we are); nor will we discuss self-knowledge of subpersonal, nonconscious mental processes, such as the processes underlying visual recognition of color and shape.
As a general matter, while a priori accounts of the epistemology of introspection have tended to stress its privilege and accuracy, empirical accounts have tended to stress its failures.
Perhaps the most famous argument in the psychological literature on introspection and self-knowledge is Nisbett and Wilson's argument that we have remarkably poor knowledge of the causes of, and processes underlying, our behavior and attitudes (Nisbett and Wilson 1977; Nisbett and Ross 1980; Wilson 2002). Section 2.1 above briefly mentioned their emblematic finding that people in a shopping mall were often ignorant of a major factor—position—influencing their judgments about the quality of pairs of stockings. In Nisbett and Bellows (1977), also briefly mentioned above, participants were asked to assess the influence of various factors on their judgments about features of a supposed job applicant. As in Nisbett and Wilson's stocking study, participants denied the influence of some factors that were in fact influential; for example, they denied that the information that they would meet the applicant influenced their judgments about the applicant's flexibility. (It actually had a major influence, as assessed by comparing the judgments of participants who were told and not told that they would meet the applicant.) Participants also attributed influence to factors that were not in fact influential; for example, they falsely reported that the information that the applicant accidentally knocked over a cup of coffee during the interview influenced “how sympathetic the person seems” to them. Nisbett and Bellows found that ordinary observers' hypothetical ratings of the influence of the various factors on the various judgments closely paralleled the participants' own ratings of the factors influencing them—a finding used by Nisbett to argue that people have no special access to causal influences on their judgments and instead rely on the same sorts of theoretical considerations outside observers rely on (the self/other parity view described in Section 2.1). 
Despite some objections (such as White 1988), both psychologists and philosophers now tend to accept Nisbett's and Wilson's view that there is at best only a modest first-person advantage in assessing the factors influencing our judgments and behavior.
In a series of experiments, Gazzaniga (1995) presented commissurotomy patients (people with a severed corpus callosum) with different visual stimuli to each hemisphere of the brain. With cross-hemispheric communication severely impaired due to the commissurotomy, the left hemisphere, controlling speech, had information about one part of the visual stimulus, while the right hemisphere, controlling some aspects of movement (especially of the left hand), had information about a different part. Gazzaniga reported finding that when these “split brain” patients were asked to explain why they did something, when that action was clearly caused by input to the right, non-verbal hemisphere, the left hemisphere would sometimes fluently confabulate an explanation. For example, Gazzaniga reports presenting an instruction like “laugh” to the right hemisphere, making the patient laugh. When asked why he laughed, the patient would say something like “You guys come up and test us every month. What a way to make a living!” (1393). When a chicken claw was shown to the left hemisphere and a snow scene to the right, and the patient was asked to select an appropriate picture from an array, the right hand would point to a chicken and the left hand to a snow shovel; when asked why she selected those two things, the patient would say something like “Oh, that's simple. The chicken claw goes with the chicken and you need a shovel to clean out the chicken shed” (ibid.). Similar confabulation about motives is sometimes (but not always) seen in people whose behavior is, unbeknownst to them, driven by post-hypnotic suggestion (Richet 1884; Moll 1889/1911), and in disorders such as denial of paralysis (anosognosia), blindness denial (Anton's syndrome), and Korsakoff's syndrome (Hirstein 2005).
In a normal population, Johansson and collaborators (Johansson et al. 2005; Johansson et al. 2006) manually displayed to participants pairs of pictures of women's faces. On each trial, the participant was to point to the face he found more attractive. The picture of that face was then centered before the participant while the other face was hidden. On some trials, participants were asked to explain the reasons for their choices while continuing to look at the selected face. On a few key trials, the experimenters used sleight-of-hand to present to the participant the face that was not selected as though it had been the face selected. Strikingly, the switch was noticed only 28% of the time. What's more, when the change was not detected participants actually gave explanations for their choice that appealed to specific features of the unselected face that were not possessed by the selected face 13% of the time. For example, one participant claimed to have chosen the face before him “because I love blondes” when in fact he had chosen a dark-haired face (Johansson et al. 2006, 690). Johansson and colleagues failed to find any systematic differences in the explanations of choice between the manipulated and non-manipulated trials, using a wide variety of measures. They found, for example, no difference in linguistic markers of confidence (including pauses in speech), emotionality, specificity of detail, complexity or length of description, or general position in semantic space. These results, like Nisbett's and Wilson's, suggest that at least some of the time when people think they are explaining the bases of their decisions, they are instead merely theorizing or confabulating.
Wegner has found that people can often be manipulated into believing that they willed or intended behavior that is in fact caused by another person's manipulation and, conversely, that they exerted no control over movements that were in fact their own—as with Ouija boards, with or without a cheating, intentionally directive confederate (Wegner and Wheatley 1999; Wegner 2002). The literature on “cognitive dissonance” is replete with cases in which participants' attitudes appear to change for reasons they do, or would, deny. According to cognitive dissonance theory, when people behave or appear to behave counternormatively (e.g., incompetently, foolishly, immorally), they will tend to adjust their attitudes so as to make the behavior seem less counternormative or “dissonant” (Festinger 1957; Aronson 1968; Cooper and Fazio 1984; Stone and Cooper 2001). For example, people induced to falsely describe as enjoyable a monotonous task they've just completed will tend, later, to report having a more positive attitude toward the task than those not induced to lie (though much less so if they were handsomely paid to lie, in which case the behavior is not clearly counternormative; Festinger and Carlsmith 1959; but see Bem 1967, 1972 for an argument that the attitude doesn't change but only the report of it). Presumably, if such attitude changes were known to the subject they would generally fail to have their dissonance-reducing effect. Research psychologists have also confirmed such familiar phenomena as “sour grapes” (Lyubomirsky and Ross 1999; Kay, Jimenez, and Jost 2002) and “self-deception” (Mele 2001), which presumably also involve ignorance of the factors driving the relevant judgments and actions. And of course the Freudian psychoanalytic tradition has also long held that people often have only poor knowledge of their motives and the influences on their attitudes (Wollheim 1981; Cavell 2006).
In light of this empirical research, no major philosopher now holds (perhaps no major philosopher ever held) that we have infallible, indubitable, incorrigible, or self-intimating knowledge of the causes of our judgments, decisions, and behavior. Perhaps weaker forms of privilege also come under threat. But the question arises: Whatever failures there may be in assessing the causes of our attitudes and behavior, are those failures failures of introspection , properly construed? Psychologists tend to cast these results as failures of “introspection”, but if it turns out that a very different and more trustworthy process underwrites our knowledge of some other aspects of our minds—such as what our present attitudes are (however caused) or our currently ongoing or recently past conscious experience—then perhaps we can call only that process introspection, thereby retaining some robust form of introspective privilege while acceding to the psychological consensus regarding (what we would now call non-introspective) first-person knowledge of causes. Indeed, few contemporary philosophical accounts of introspection or privileged self-knowledge highlight, as the primary locus of privilege, the causes of our attitudes and behavior (though Bilgrami 2006 is a notable exception). Thus, the literature reviewed in this section can be interpreted as suggesting that the causes of our behavior are not, after all, the sorts of things to which we have introspective access.
Research psychologists have generally not been as skeptical of our knowledge of our attitudes as they have been of our knowledge of the causes of our attitudes (Section 4.2.1 above). In fact, many of the same experiments that purport to show inaccurate knowledge of the causes of our attitudes nonetheless rely unguardedly on self-report for assessment of the attitudes themselves—a feature of those experiments criticized by Bem (1967). Attitudinal surveys in psychology and social science generally rely on participants' self-report as the principal source of evidence about attitudes (de Vaus 1985/2002; Sirken et al. (eds.) 1999). However, as in the case of motives and causes, there's a long tradition in clinical psychology skeptical of our self-knowledge of our attitudes, giving a large role to “unconscious” motives and attitudes.
A key challenge in assessing the accuracy of people's beliefs or judgments about their attitudes is the difficulty of accurately measuring attitudes independently of self-report. There is at present no tractable measure of attitude that is generally seen by philosophers as overriding individuals' own reports about their attitudes. However, in the psychological literature, “implicit” measures of attitudes—measures of attitudes that do not rely on self-report—have recently been gaining considerable attention (see Wittenbrink and Schwarz, eds., 2007; Petty, Fazio, and Briñol, eds., 2009). Such measures are sometimes thought capable of revealing unconscious attitudes or implicit attitudes either unavailable to introspection or erroneously introspected (Wilson, Lindsey, and Schooler 2000; Kihlstrom 2004; Lane et al. 2007; though see Hahn et al. forthcoming).
Much of the leading research on implicit attitude measures has concerned racism, in accord with the view that racist attitudes, though common, are socially undesirable and therefore often not self-ascribed even when present. For example, Campbell, Kruskal, and Wallace (1966) explored the use of seating distance as an index of racial attitudes, noting that racially Black and White students tended to aggregate in classroom seating arrangements. Using facial electromyography (EMG), Vanman et al. (1997) found (racially) White participants to display facial responses indicative of negative affect more frequently when asked to imagine co-operative activity with Black than with White partners—results interpreted as indicative of racist attitudes. Cunningham et al. (2004) showed White and Black faces to White participants while the participants were undergoing fMRI brain imaging. They found less amygdala activation when participants looked at faces from their own group than when they looked at other faces; and since amygdala activation is generally associated with negative emotion, they interpreted this tendency as suggesting a negative attitude toward outgroup members (see also Hart et al. 2000; and for discussion Ito and Cacioppo 2007).
Much of the recent implicit attitude research has focused on response priming and interference in speeded tasks. In priming research, a stimulus (the “prime”) is briefly displayed, followed by a mask that hides it, and then a second stimulus (the “target”) is displayed. The participant's task is to respond as swiftly as possible to the target, typically with a classification judgment. In evaluative priming, for example, the participant is primed with a positively or negatively valenced word or picture (e.g., a snake), then asked to make a swift judgment about whether the subsequently presented target word (e.g., “disgusting”) is good or bad, or has some other feature (e.g., belongs to a particular category). Generally, negative primes will speed response for negative targets while delaying response for positive targets, and positive primes will do the reverse. Researchers have found that photographs of Black faces, whether presented visibly or so quickly as to be subliminal, tend to facilitate the categorization of negative targets and delay the categorization of positive targets for White participants—a result widely interpreted as revealing racist attitudes (Fazio et al. 1995; Dovidio et al. 1997; Wittenbrink, Judd, and Park 1997). In the Implicit Association Test, respondents are asked to respond disjunctively to combined categories, giving, for example, one response if they see either a dark-skinned face or a positively valenced word and a different response if they see either a light-skinned face or a negatively valenced word. As in evaluative priming tasks, White respondents tend to respond more slowly when asked to pair dark-skinned faces with positively valenced words than with negatively valenced words, which is interpreted as revealing a negative attitude or association (Greenwald, McGhee, and Schwartz 1998; Lane et al. 2007).
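The basic logic of such response-latency measures can be illustrated with a toy computation. This is a hypothetical sketch only, not the actual scoring algorithm used in the IAT literature (which involves additional normalization and error handling); the function name and reaction-time data are invented for illustration:

```python
# Toy illustration of the congruency-effect logic behind response-latency
# measures: responses in "incongruent" pairings (e.g., a dark-skinned face
# and a positive word sharing a response key) are predicted to be slower
# than in "congruent" pairings, and the difference is read as an index of
# association strength.

from statistics import mean

def congruency_effect(congruent_rts, incongruent_rts):
    """Return mean incongruent RT minus mean congruent RT, in milliseconds.

    A positive value indicates slower responding in the incongruent
    condition, interpreted as a stronger implicit association.
    """
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical reaction times (ms) for a single respondent.
congruent = [620, 580, 640, 600]
incongruent = [720, 690, 750, 700]

effect = congruency_effect(congruent, incongruent)
print(effect)  # 105.0
```

Note that the computation itself is neutral about what the latency difference reveals; whether it reflects a genuine attitude, a mere association, or something else is precisely the interpretive question discussed below.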
As mentioned above, such implicit measures are often interpreted as revealing attitudes to which people have poor or no introspective access. The evidence that people lack introspective knowledge of such attitudes generally turns on the low correlations between such implicit measures of racism and more explicit measures such as self-report—though due to the recognized social undesirability of racial prejudice, it is difficult to disentangle self-presentational from self-knowledge factors in self-reports (Fazio et al. 1995; Greenwald, McGhee, and Schwartz 1998; Wilson, Lindsey, and Schooler 2000; Greenwald and Nosek 2009). People who appear racist by implicit measures might disavow racism and inhibit racist patterns of response on explicit measures (such as when asked to rate the attractiveness of faces of different races) because they don't want to be seen as racist—a motivation that might drive them whether or not they have accurate self-knowledge of their racist attitudes. Still, it seems prima facie plausible that people have at best limited knowledge of the patterns of association that drive their responses on priming and other implicit measures.
But what do such tests really measure? In philosophy, Zimmerman (2007) and Gendler (2008a, 2008b) have argued that measures like the Implicit Association Test do not measure actual racist beliefs but rather something else, something under less rational control (Gendler calls them “aliefs”). In psychology, Gawronski and Bodenhausen (2006) advance a model according to which there is a substantial difference between implicit attitudes, defined in terms of associative processes, and explicit attitudes which have a propositional structure and are guided by standards of truth and consistency (see also Wilson, Lindsey, and Schooler 2000; Greenwald and Nosek 2009). On such a model, as on Zimmerman's and Gendler's views, a person with implicit racist associations may nonetheless have fully and genuinely egalitarian propositional beliefs. To the extent attitudes are held to be reflected in, or even defined by, our explicit judgments about the matter in question and also, differently but perhaps not wholly separably (see Section 2.3.4 above), our explicit judgments about our attitudes toward the matter in question, our self-knowledge would seem to be correspondingly secure and implicit measures beside the point. To the extent attitudes are held to crucially involve swift and automatic, or unreflective, patterns of reaction and association, our self-knowledge of them would appear to be correspondingly problematic, corrigible by data from implicit measures (Bohner and Dickel 2011; Schwitzgebel 2011a).
In a different vein, Carruthers (2011; see also Rosenthal 2001; Bem 1967, 1972) argues that the evidence of Nisbett, Gazzaniga, Wegner, and others (reviewed in Section 4.2.1 above) shows that people confabulate not just in reporting the causes of their attitudes but also in reporting the attitudes themselves. For example, Carruthers suggests that if someone in Nisbett and Wilson's famous 1977 study confabulates “I thought this pair was softest” as an explanation of her choice of the rightmost pair of stockings, she errs not only about the cause of her choice but also in ascribing to herself the judgment that the pair was softest. On this basis, Carruthers adopts a self/other parity view (see Section 2.1 above) of our self-knowledge of our attitudes, holding that we can only introspect, in the strict sense, conscious experiences like those that arise in perception and imagery.
Currently ongoing conscious experience—or maybe immediately past conscious experience (if we hold that introspective judgment must temporally follow the state or process introspected, or if we take seriously the concerns raised in Section 3.2 about the self-undermining of the introspective process)—is both the most universally acknowledged target of the introspective process and the target most commonly thought to be known with a high degree of privilege. Infallibility, indubitability, incorrigibility, and self-intimation claims (see Section 4.1.1) are most commonly made for self-knowledge of states such as being in pain or having a visual experience as of the color red, where these states are construed as qualitative states, or subjective experiences, or aspects of our phenomenology or consciousness. (All these terms are intended interchangeably to refer to what Block [1995], Chalmers [1996], and other contemporary philosophers call “phenomenal consciousness”.) If attitudes are sometimes conscious, then we might also be capable of introspecting those attitudes as part of our capacity to introspect conscious experience generally (Goldman 2006; Hill 2009).
It's difficult to study the accuracy of self-ascriptions of conscious experience for the same reasons it's difficult to study the accuracy of our self-ascriptions of attitudes (Section 4.2.2): There's no widely accepted measure to trump or confirm self-report. In the medical literature on pain, for example, no behavioral or physiological measure of pain is generally thought capable of overriding self-report of current pain, despite the fact that scaling issues remain a problem both within and especially between subjects (Williams, Davies, and Chadury 2000) as does retrospective assessment (Redelmeier and Kahneman 1996). When physiological markers of pain and self-report dissociate, it's by no means clear that the physiological marker should be taken as the more accurate index (for methodological recommendations see Price and Aydede 2005). Corresponding remarks apply to the case of pleasure (Haybron 2008).
As mentioned in Section 3.3 above, early introspective psychologists both asserted the difficulty of accurately introspecting conscious experience and achieved only mixed success in their attempts to obtain scientifically replicable (and thus presumably accurate) data through the use of trained introspectors. In some domains they achieved considerable success and replicability, such as in the construction of the “color solid” (a representation of the three primary dimensions of variation in color experience: hue, saturation, and lightness or brightness), the mapping of the size of “just noticeable differences” between sensations and the “liminal” threshold below which a stimulus is too faint to be experienced, and the (at least roughly) logarithmic relationship between the intensity of a sensory stimulus and the intensity of the resulting experience (the “Weber-Fechner law”). Contemporary psychophysics—the study of the relation between physical stimuli and the resulting sense experiences or percepts—is rooted in these early introspective studies. However, other sorts of phenomena proved resistant to cross-laboratory introspective consensus—such as the possibility or not of imageless thought (see the entry on “ mental imagery ”), the structure of emotion, and the experiential aspects of attention. Perhaps these facts about the range of early introspective agreement and apparently intractable disagreement cast light on the range over which careful and well-trained introspection is and is not reliable.
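The Weber-Fechner law itself is compact enough to state directly: sensation magnitude grows as the logarithm of stimulus intensity above the liminal threshold, so equal stimulus *ratios* produce equal *increments* in experienced intensity. A minimal sketch, with the scaling constant and threshold as free illustrative parameters:

```python
import math

def fechner_sensation(intensity, threshold=1.0, k=1.0):
    """Weber-Fechner law: sensation grows as the log of stimulus
    intensity relative to the absolute ("liminal") threshold.
    k and threshold are illustrative free parameters, fit per modality."""
    if intensity < threshold:
        return 0.0  # below the liminal threshold: no experience at all
    return k * math.log(intensity / threshold)

# Equal stimulus ratios yield equal sensation increments:
# doubling from 10 to 20 feels like the same step as doubling
# from 100 to 200, despite the tenfold difference in absolute change.
step_small = fechner_sensation(20) - fechner_sensation(10)
step_large = fechner_sensation(200) - fechner_sensation(100)
```

This ratio-to-increment structure is what the early mapping of “just noticeable differences” was tracking.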
Ericsson and Simon (1984/1993; Ericsson 2003) discuss and review relationships between the subject's performance on various problem-solving tasks, her concurrent verbalizations of conscious thoughts (“think aloud protocols”), and her immediately retrospective verbalizations. The existence of good relationships in the predicted directions in many problem-solving tasks lends empirical support to the view that people's reports about their stream of thoughts often accurately reflect those thoughts. For example, Ericsson and Simon find that think-aloud and retrospective reports of thought processes correlate with predicted patterns of eye movement and response latency. Ericsson and Simon also cite studies like that of Hamilton and Sanford (1978), who asked subjects to make yes or no judgments about whether pairs of letters were in alphabetical order (like MO) or not (like RP) and then to describe retrospectively their method for arriving at the judgments. When subjects retrospectively reported knowing the answer “automatically” without an intervening conscious process, reaction times were swift and did not depend on the distance between the letters. When subjects retrospectively reported “running through” a sequential series of letters (such as “LMNO” when prompted with “MO”) reaction times correlated nicely with reported length of run-through. On the other hand, Flavell, Green, and Flavell (1995) report gross and widespread introspective error about recently past and even current (conscious) thought in young children; and Smallwood and Schooler (2006) review literature that suggests that people are not especially good at detecting when their mind is wandering.
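The evidential force of the Hamilton and Sanford result rests on a simple structural prediction, which can be sketched as follows (the timing parameters are hypothetical illustrations, not Hamilton and Sanford's data):

```python
import string

ALPHABET = string.ascii_uppercase

def in_order(pair):
    """Judge whether a two-letter pair is in alphabetical order
    (e.g. 'MO' -> True, 'RP' -> False)."""
    return ALPHABET.index(pair[0]) < ALPHABET.index(pair[1])

def run_through_length(pair):
    """Length of the letter sequence a sequential 'run-through' strategy
    visits. Starting one letter before the earlier member reproduces the
    reported protocol: prompted with 'MO', subjects run through 'LMNO'."""
    a, b = sorted(ALPHABET.index(c) for c in pair)
    return (b - a) + 2

def predicted_rt(pair, base_ms=400, per_letter_ms=100):
    """Predicted latency under the run-through strategy: linear in the
    run-through's length. ('Automatic' responses, by contrast, should be
    flat in letter distance; base_ms and per_letter_ms are hypothetical.)"""
    return base_ms + per_letter_ms * run_through_length(pair)
```

If retrospective reports of strategy were mere confabulation, there would be no reason for reported run-through length to predict measured latency in this way; the observed correlation is what supports the reports' accuracy.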
In the 20th century, philosophers arguing against infallibilism often devised hypothetical examples in which they suggested it was plausible to attribute introspective error; but even if such examples succeed, they are generally confined to far-fetched scenarios, pathological cases, or very minor or very brief mistakes (e.g., Armstrong 1963; Churchland 1988; Kornblith 1998, with an eye to the distinction between mistakes about current conscious experience and other sorts of mistakes). In the 21st century, philosophical critics of the accuracy of introspective judgments about consciousness shifted their focus to cases of widespread disagreement or (putative) error, either among ordinary people or among research specialists. Dennett (1991), Blackmore (2002), and Schwitzgebel (2011b), for example, argue that most people are badly mistaken about the nature of the experience of peripheral vision. These authors argue that people experience visual clarity only in a small and rapidly moving region of about 1–2 degrees of visual arc, contrary to the (they say) widespread impression most people have that they experience a substantially broader range of stable clarity in the visual field. Other recent arguments against the accuracy of introspective judgments about conscious experience turn on citing the widespread disagreement about whether there is a “phenomenology of thinking” beyond that of imagery and emotion, about whether sensory experience as a whole is “rich” (including for example constant tactile experience of one's feet in one's shoes) or “thin” (limited mostly just to what is in attention at any one time), and about the nature of visual imagery experience (Hurlburt and Schwitzgebel 2007; Bayne and Spener 2010; Schwitzgebel 2011b; though see Hohwy 2011). 
Irvine (2013) has argued that the methodological problems in this area are so severe that the term “consciousness” should be eliminated from scientific discourse as impossible to effectively operationalize or measure.
- Alston, William P., 1971, “Varieties of privileged access”, American Philosophical Quarterly , 8: 223–241.
- Amedi, Amir, Rafael Malach, and Alvaro Pascual-Leone, 2005, “Negative BOLD differentiates visual imagery and perception”, Neuron , 48: 859–872.
- Aristotle, 4th c. BCE/1961, De Anima , W.D. Ross (ed.), Oxford: Oxford University Press.
- Armstrong, David M., 1963, “Is introspective knowledge incorrigible?”, Philosophical Review , 72: 417–432.
- –––, 1968, A materialist theory of the mind , London: Routledge.
- –––, 1981, The nature of mind and other essays , Ithaca, NY: Cornell University Press.
- –––, 1999, The mind-body problem , Boulder, CO: Westview.
- Aronson, Elliot, 1968, “Dissonance theory: Progress and problems”, in Theories of cognitive consistency , Robert P. Abelson, et al. (eds.), Chicago: Rand McNally, 112–139.
- Aru, Jaan, Talis Bachmann, Wolf Singer, and Lucia Melloni, 2012, “Distilling the neural correlates of consciousness”, Neuroscience and Biobehavioral Reviews , 36: 737–746.
- Augustinus, Aurelius, c. 420 C.E./1998, The city of God against the pagans , R.W. Dyson (tr.), Cambridge: Cambridge University Press.
- Aydede, Murat, and Güven Güzeldere, 2005, “Cognitive architecture, concepts, and introspection: An information-theoretic solution to the problem of phenomenal consciousness”, Noûs , 39: 197–255.
- Ayer, A.J., 1936/1946, Language, truth, and logic , 2nd ed., London: Gollancz.
- –––, 1963, The concept of a person , New York: St. Martin's.
- Baldwin, James Mark, 1901–1905, Dictionary of philosophy and psychology , New York: Macmillan.
- Bar-On, Dorit, 2004, Speaking my mind , Oxford: Oxford.
- Barrett, Lisa Feldman, Batja Mesquita, Kevin N. Ochsner, and James J. Gross, 2007, “The experience of emotion”, Annual Review of Psychology , 58: 373–403.
- Bayne, Tim, and Michelle Montague, eds., 2011, Cognitive phenomenology , Oxford: Oxford University Press.
- Bayne, Tim, and Maja Spener, 2010, “Introspective humility”, Philosophical Issues , 20: 1–22.
- Bem, Daryl J., 1967, “Self-perception: An alternative interpretation of cognitive dissonance phenomena”, Psychological Review , 74: 183–200.
- –––, 1972, “Self-perception theory”, Advances in Experimental Social Psychology , 6: 1–62.
- Berkeley, George, 1710/1965, A Treatise Concerning the Principles of Human Knowledge , in Principles, Dialogues, and Philosophical Correspondence , Colin M. Turbayne (ed.), New York: Macmillan, 3–101.
- Bilgrami, Akeel, 2006, Self-knowledge and resentment , Cambridge, MA: Harvard University Press.
- Blackmore, Susan, 2002, “There is no stream of consciousness”, Journal of Consciousness Studies , 9(5–6): 17–28.
- Block, Ned, 1995, “On a confusion about a function of consciousness”, Behavioral and Brain Sciences , 18: 227–247.
- –––, 1996, “Mental paint and mental latex”, Philosophical Issues , 7: 19–49.
- Boghossian, Paul, 1989, “Content and self-knowledge”, Philosophical Topics , 17: 5–26.
- Bohner, Gerd, and Nina Dickel, 2011, “Attitudes and attitude change”, Annual Review of Psychology , 62: 391–417.
- Boring, Edwin G., 1921, “The stimulus-error”, American Journal of Psychology , 32: 449–471.
- –––, 1953, “A history of introspection”, Psychological Bulletin , 50: 169–189.
- Boyle, Matthew, 2009, “Two kinds of self-knowledge”, Philosophy & Phenomenological Research , 78: 133–164.
- Brentano, Franz, 1874/1995, Psychology from an empirical standpoint , 2nd English edition, Antos C. Rancurello, D. B. Terrell and Linda L. McAlister (trans.), New York: Routledge.
- Burge, Tyler, 1979, “Individualism and the mental”, Midwest Studies in Philosophy , 4: 73–121.
- –––, 1988, “Individualism and self-knowledge”, Journal of Philosophy , 85: 649–663.
- –––, 1996, “Our entitlement to self-knowledge”, Proceedings of the Aristotelian Society , 96: 91–116.
- –––, 1998, “Reason and the first person”, in Knowing our own minds , Crispin Wright, Barry C. Smith, and Cynthia Macdonald (eds.), Oxford: Oxford University Press, 243–270.
- Byrne, Alex, 2005, “Introspection”, Philosophical Topics , 33(1): 79–104.
- –––, 2011a, “Knowing that I am thinking”, in Self-knowledge , Anthony Hatzimoysis (ed.), Oxford: Oxford University Press.
- –––, 2011b, “Knowing what I want”, in Consciousness and the Self , JeeLoo Liu and John Perry (eds.), Cambridge: Cambridge University Press.
- –––, 2011c, “Transparency, belief, intention”, Aristotelian Society Supplementary Volume , 85: 201–221.
- –––, 2012, in Introspection and Consciousness , Declan Smithies and Daniel Stoljar (eds.), Oxford: Oxford University Press.
- Campbell, Donald T., William H. Kruskal, and William P. Wallace, 1966, “Seating aggregation as an index of attitude”, Sociometry , 29: 1–15.
- Campbell, John, 1999, “Immunity to error through misidentification and the meaning of a referring term”, Philosophical Topics , 25(1–2): 89–104.
- Carruthers, Peter, 2005, Consciousness: Essays from a higher-order perspective , Oxford: Oxford University Press.
- –––, 2011, The Opacity of Mind , Oxford: Oxford University Press.
- Cavell, Marcia, 2006, Becoming a subject , Oxford: Oxford University Press.
- Chalmers, David J., 1996, The conscious mind , New York: Oxford.
- –––, 2003, “The content and epistemology of phenomenal belief”, in Consciousness: New philosophical perspectives , Quentin Smith and Aleksandar Jokic (eds.), Oxford: Oxford, 220–272.
- Chapman, Dwight W., 1933, “Attensity, clearness, and attention”, American Journal of Psychology , 45: 156–165.
- Cheesman, Jim, and Philip M. Merikle, 1986, “Distinguishing conscious from unconscious perceptual processes”, Canadian Journal of Psychology , 40: 343–367.
- Chisholm, Roderick M., 1981, The first person , Brighton, UK: Harvester.
- Churchland, Paul M., 1988, Matter and consciousness , rev. ed., Cambridge, MA: MIT Press.
- Comte, Auguste, 1830, Cours de philosophie positive , volume 1, Paris: Bacheleier, Libraire pour les Mathématiques.
- Cooper, Joel, and Russell H. Fazio, 1984, “A new look at dissonance theory”, Advances in Experimental Social Psychology , 17: 229–266.
- Cui, Xu, Cameron B. Jeter, Dongni Yang, P. Read Montague, and David M. Eagleman, 2007, “Vividness of mental imagery: Individual variability can be measured objectively”, Vision Research , 47: 474–478.
- Cunningham, William A., et al., 2004, “Separable neural components in the processing of Black and White faces”, Psychological Science , 15: 806–813.
- de Graaf, Tom A., Maartje C. de Jong, Rainer Goebel, Raymond van Ee, and Alexander T. Sack, 2011, “On the functional relevance of frontal cortex for passive and voluntarily controlled bistable vision”, Cerebral Cortex , 21: 2322–2331.
- de Graaf, Tom A., Po-Jang Hsieh, and Alexander T. Sack, 2012, “The 'correlates' in neural correlates of consciousness”, Neuroscience and Biobehavioral Reviews , 36: 191–197.
- De Vaus, David, 1985/2002, Surveys in social research , London: Routledge.
- Dehaene, Stanislas, et al., 2001, “Cerebral mechanisms of word masking and unconscious repetition priming”, Nature Neuroscience , 4: 752–758.
- Dehaene, Stanislas, and Jean-Pierre Changeux, 2011, “Experimental and theoretical approaches to conscious processing”, Neuron , 70: 200–227.
- Del Cul, Antoine, Sylvain Baillet, and Stanislas Dehaene, 2007, “Brain dynamics underlying the nonlinear threshold for access to consciousness”, PLoS Biology , 5(10): e260.
- Dennett, Daniel C., 1987, The intentional stance , Cambridge, MA: MIT Press.
- –––, 1991, Consciousness explained , Boston: Little, Brown, and Co.
- –––, 2000, “The case for rorts”, in Rorty and his critics , R.B. Brandom (ed.), Malden, MA: Blackwell, 91–101.
- –––, 2002, “How could I be wrong? How wrong could I be?”, Journal of Consciousness Studies , 9(5–6): 13–6.
- Descartes, René, 1637/1985, Discourse on the method , in The philosophical writings of Descartes , vol. 1, John Cottingham, Robert Stoothoff, and Dugald Murdoch (editors and translators), Cambridge: Cambridge University Press, 111–151.
- –––, 1641/1984, Meditations on first philosophy , in The philosophical writings of Descartes , vol. 2, John Cottingham, Robert Stoothoff, and Dugald Murdoch (editors and translators,), Cambridge: Cambridge University Press, 1–62.
- Dovidio, John F., Kerry Kawakami, Craig Johnson, Brenda Johnson, and Adaiah Howard, 1997, “On the nature of prejudice: Automatic and controlled processes”, Journal of Experimental Social Psychology , 33: 510–540.
- Dretske, Fred, 1995, Naturalizing the mind , Cambridge, MA: MIT.
- –––, 2004, “Knowing what you think vs. knowing that you think it”, in The externalist challenge , Richard Schantz (ed.), Berlin: Walter de Gruyter, 389–399.
- Ebbinghaus, Hermann, 1885/1913, Memory: A contribution to experimental psychology , Henry A. Ruger and Clara E. Bussenius (translators), New York: Columbia.
- Ericsson, K. Anders, 2003, “Valid and non-reactive verbalization of thoughts during performance of tasks: Towards a solution to the central problems of introspection as a source of scientific data”, Journal of Consciousness Studies , 10(9–10): 1–18.
- Ericsson, K. Anders, and Herbert A. Simon, 1984/1993, Protocol analysis , rev. ed., Cambridge, MA: MIT.
- Evans, Gareth, 1982, The varieties of reference , John McDowell (ed.), Oxford: Clarendon; New York: Oxford University Press.
- Fazio, Russell H., Joni R. Jackson, Bridget C. Dunton, and Carol J. Williams, 1995, “Variability in automatic activation as an unobtrusive measure of racial attitudes: A bona fide pipeline?”, Journal of Personality and Social Psychology , 69(6): 1013–1027.
- Fechner, Gustav, 1860/1964, Elements of Psychophysics , Helmut E. Adler, Davis H. Howes, and Edwin G. Boring (ed. and trans.), New York: Holt, Rinehart, and Winston.
- Fernández, Jordi, 2003, “Privileged access naturalized”, Philosophical Quarterly , 53: 352–372.
- Festinger, Leon, 1957, A theory of cognitive dissonance , Stanford, CA: Stanford.
- Festinger, Leon, and James M. Carlsmith, 1959, “Cognitive consequences of forced compliance”, Journal of Abnormal and Social Psychology , 58: 203–210.
- Flavell, John H., Frances L. Green, and Eleanor R. Flavell, 1995, Young children's knowledge about thinking , Monographs of the Society for Research in Child Development , 60(1).
- Fodor, Jerry A., 1983, Modularity of mind , Cambridge, MA: MIT.
- –––, 1998, Concepts: Where cognitive science went wrong , Oxford: Oxford University Press.
- Funder, David C., 1999, Personality judgment , London: Academic.
- Gallois, Andre, 1996, The world without, the mind within , Cambridge: Cambridge.
- Galton, Francis, 1869/1891, Hereditary genius , rev. ed., New York: Appleton.
- Gardner, Sebastian, 1993, Irrationality and the philosophy of psychoanalysis , Cambridge: Cambridge University Press.
- Gazzaniga, Michael S., 1995, “Consciousness and the cerebral hemispheres”, in The Cognitive Neurosciences , Michael S. Gazzaniga (ed.), Cambridge, MA: MIT, 1391–1400.
- Gawronski, Bertram, and Galen V. Bodenhausen, 2006, “Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change”, Psychological Bulletin , 132: 692–731.
- Gendler, Tamar Szabó, 2008a, “Alief and Belief”, Journal of Philosophy , 105: 634–663.
- –––, 2008b, “Alief in Action, and Reaction”, Mind & Language , 23: 552–585.
- Gennaro, Rocco J., 1996, Consciousness and Self-Consciousness , Amsterdam: John Benjamins.
- Gertler, Brie, 2000, “The mechanics of self-knowledge”, Philosophical Topics , 28: 125–146.
- –––, 2001, “Introspecting phenomenal states”, Philosophy and Phenomenological Research , 63: 305–328.
- –––, 2011, “Self-knowledge and the transparency of belief”, in Self-knowledge , Anthony Hatzimoysis (ed.), Oxford: Oxford University Press.
- Goldman, Alvin I., 1989, “Interpretation psychologized”, Mind and Language , 4: 161–185.
- –––, 2000, “Can science know when you're conscious?”, Journal of Consciousness Studies , 7(5): 3–22.
- –––, 2006, Simulating minds , Oxford: Oxford.
- Gopnik, Alison, 1993a, “How we know our minds: The illusion of first-person knowledge of intentionality”, Behavioral and Brain Sciences , 16: 1–14.
- –––, 1993b, “Psychopsychology”, Consciousness and Cognition , 2: 264–280.
- Gopnik, Alison, and Andrew N. Meltzoff, 1994, “Minds, bodies and persons: Young children's understanding of the self and others as reflected in imitation and ‘theory of mind’ research”, in Self-awareness in animals and humans , Sue Taylor Parker, Robert W. Mitchell, and Maria L. Boccia (eds.), New York: Cambridge, 166–186.
- Gordon, Robert M., 1995, “Simulation without introspection or inference from me to you”, in Mental simulation , Martin Davies and Tony Stone (eds.), Oxford: Blackwell.
- –––, 2007, “Ascent routines for propositional attitudes”, Synthese , 159: 151–165.
- Green, David M., and John A. Swets, 1966, Signal detection theory and psychophysics , Oxford: Wiley.
- Greenwald, Anthony G., Debbie E. McGhee, and Jordan L.K. Schwartz, 1998, “Measuring individual differences in implicit cognition: The Implicit Association Test”, Journal of Personality and Social Psychology , 74: 1464–1480.
- Greenwald, Anthony G., and Brian A. Nosek, 2009, “Attitudinal dissociation: What does it mean?”, in Attitudes: Insights from the New Implicit Measures , Richard E. Petty, Russell H. Fazio, and Pablo Briñol (eds.), New York: Taylor and Francis, 65–82.
- Hahn, Adam, Charles M. Judd, Holen K. Hirsh, and Irene V. Blair, forthcoming, “Awareness of implicit attitudes”, Journal of Experimental Psychology: General .
- Hamilton, Andy, 2007, “Memory and self-consciousness: Immunity to error through misidentification”, Synthese , 171: 409–417.
- Hamilton, J.M.E., and A.J. Sanford, 1978, “The symbolic distance effect for alphabetic order judgements: A subjective report and reaction time analysis”, Quarterly Journal of Experimental Psychology , 30: 33–43.
- Harman, Gilbert, 1990, “The intrinsic quality of experience”, in James Tomberlin, (ed.), Philosophical Perspectives , 4, Atascadero, CA: Ridgeview, 31–52.
- Hart, Allen J., Paul J. Whalen, Lisa M. Shin, Sean C. McInerney, Hakan Fischer, and Scott L. Rauch, 2000, “Differential response in the human amygdala to racial outgroup vs ingroup face stimuli”, NeuroReport , 11: 2351–2355.
- Haybron, Daniel M., 2008, The pursuit of unhappiness , Oxford: Oxford University Press.
- Hektner, Joel M., Jennifer A. Schmidt, and Mihaly Csikszentmihalyi, 2007, Experience sampling method , Thousand Oaks, CA: Sage.
- Heil, John, 1988, “Privileged access”, Mind , 97: 238–251.
- Helmholtz, Hermann, 1856/1962, Helmholtz's Treatise on Physiological Optics , James P.C. Southall (ed.), New York: Dover. [Translation based on 1924 edition.]
- Hill, Christopher S., 1991, Sensations: A defense of type materialism , Cambridge: Cambridge University Press.
- –––, 2009, Consciousness , Cambridge: Cambridge University Press.
- Hirstein, William, 2005, Brain fiction , Cambridge, MA: MIT.
- Hohwy, Jakob, 2011, “Phenomenal variability and introspective reliability”, Mind & Language , 26: 261–286.
- Horgan, Terence, John L. Tienson, and George Graham, 2006, “Internal-world skepticism and mental self-presentation”, in Self-representational approaches to consciousness , Uriah Kriegel and Kenneth Williford (eds.), Cambridge, MA: MIT, 191–207.
- Horgan, Terence, and Uriah Kriegel, 2007, “Phenomenal epistemology: What is consciousness that we may know it so well?”, Philosophical Issues , 17(1): 123–144.
- Hume, David, 1739/1978, A treatise of human nature , L.A. Selby-Bigge and P.H. Nidditch (eds.), Oxford: Clarendon.
- –––, 1748/1975, An enquiry concerning human understanding , in Enquiries concerning human understanding and concerning the principles of morals , L.A. Selby-Bigge and P.H. Nidditch (eds.), Oxford: Clarendon, 1–165.
- Humphrey, George, 1951, Thinking: An introduction to its experimental psychology , London: Methuen.
- Hurlburt, Russell T., 1990, Sampling normal and schizophrenic inner experience , New York: Plenum.
- Hurlburt, Russell T., 2011, Investigating pristine inner experience , Cambridge: Cambridge.
- Hurlburt, Russell T., and Christopher L. Heavey, 2006, Exploring inner experience , Amsterdam: John Benjamins.
- Hurlburt, Russell T., and Eric Schwitzgebel, 2007, Describing inner experience? Proponent meets skeptic , Cambridge, MA: MIT.
- Husserl, Edmund, 1913/1982, Ideas , Book I, T.E. Klein and W.E. Pohl (trs.), Dordrecht: Kluwer.
- Ilg, Rüdiger, Afra M. Wohlschläger, Stefan Burazanis, Andreas Wöller, Sabine Nunnemann, and Mark Mühlau, 2008, “Neural correlates of spontaneous percept switches in ambiguous stimuli: An event-related functional magnetic resonance imaging study”, European Journal of Neuroscience , 28: 2325–2332.
- Irvine, Elizabeth, 2013, Consciousness as a scientific concept , Dordrecht: Springer.
- Ito, Tiffany A., and John T. Cacioppo, 2007, “Attitudes as mental and neural states of readiness”, in Implicit measures of attitudes , Bernd Wittenbrink and Norbert Schwarz (eds.), New York: Guilford, 125–158.
- Jack, Anthony, and Andreas Roepstorff, 2003, Trusting the subject , vol. 1, special issue of the Journal of Consciousness Studies , no. 10(9–10).
- –––, 2004, Trusting the subject , vol. 2, special issue of the Journal of Consciousness Studies , 11(7–8).
- James, William, 1890/1981, The principles of psychology , Cambridge, MA: Harvard.
- Jaynes, Julian, 1976, The origin of consciousness in the breakdown of the bicameral mind , New York: Houghton Mifflin.
- Johansson, Petter, Lars Hall, Sverker Sikström, and Andreas Olsson, 2005, “Failure to detect mismatches between intention and outcome in a simple decision task”, Science , 310: 116–119.
- Johansson, Petter, Lars Hall, Sverker Sikström, Betty Tärning, and Andreas Lind, 2006, “How something can be said about telling more than we can know: On choice blindness and introspection”, Consciousness and Cognition , 15: 673–692.
- Kamphuisen, Allard, Markus Bauer, and Raymond van Ee, 2008, “No evidence for widespread synchronized networks in binocular rivalry: MEG frequency tagging entrains primary early visual cortex”, Journal of Vision , 8(5): article 4.
- Kant, Immanuel, 1781/1997, The critique of pure reason , Paul Guyer and Allen W. Wood (eds. and trs.), Cambridge: Cambridge.
- Kay, Aaron C., Maria C. Jimenez, and John T. Jost, 2002, “Sour grapes, sweet lemons, and the anticipatory rationalization of the status quo”, Personality and Social Psychology Bulletin , 28: 1300–1312.
- Kihlstrom, John F., “Implicit methods in social psychology”, in The SAGE handbook of methods in social psychology , Carol Sansone, Carolyn C. Morf, and A.T. Panter (eds.), Thousand Oaks, CA: Sage, 195–212.
- Kind, Amy, 2003, “What's so transparent about transparency?”, Philosophical Studies , 115: 225–244.
- Kleinschmidt, A., C. Büchel, S. Zeki, and R.S.J. Frackowiak, 1998, “Human brain activity during spontaneously reversing perception of ambiguous figures”, Proceedings of the Royal Society B , 265: 2427–2433.
- Knapen, Tomas, Jan Brascamp, Joel Pearson, Raymond van Ee, and Randolph Blake, 2011, “The role of frontal and parietal areas in bistable perception”, Journal of Neuroscience , 31: 10293–10301.
- Kornblith, Hilary, 1998, “What is it like to be me?”, Australasian Journal of Philosophy , 76: 48–60.
- Kosslyn, Stephen M., Daniel Reisberg, and Marlene Behrmann, 2006, “Introspection and mechanism in mental imagery”, in The Dalai Lama at MIT , Anne Harrington and Arthur Zajonc (eds.), Cambridge, MA: Harvard, 79–90.
- Kreiman, Gabriel, Itzhak Fried, and Christof Koch, 2002, “Single-neuron correlates of subjective vision in the human medial temporal lobe”, Proceedings of the National Academy of Sciences , 99: 8378–8383.
- Kriegel, Uriah, 2009, Subjective consciousness , Oxford: Oxford.
- Külpe, Oswald, 1893/1895, Outlines of psychology , London: Swan Sonnenschein.
- Kusch, Martin, 1999, Psychological knowledge , London, Routledge.
- Lambie, John A., and Anthony J. Marcel, 2002, “Consciousness and the varieties of emotion experience: A theoretical framework”, Psychological Review , 109: 219–259.
- Lane, Kristin A., Mahzarin R. Banaji, Brian A. Nosek, and Anthony G. Greenwald, 2007, “Understanding and using the Implicit Association Test: IV”, in Implicit measures of attitudes , Bernd Wittenbrink and Norbert Schwarz (eds.), New York: Guilford, 59–102.
- Larson, Reed, and Mihaly Csikszentmihalyi, 1983, “The Experience Sampling Method”, in Harry T. Reis (ed.), Naturalistic approaches to studying social interaction , San Francisco: Jossey-Bass, 41–56.
- Lear, Jonathan, 1998, Open-minded , Cambridge, MA: Harvard.
- Lewis, C.I., 1946, An analysis of knowledge and valuation , La Salle, IL: Open Court.
- Locke, John, 1690/1975, An essay concerning human understanding , Peter H. Nidditch (ed.), Oxford: Oxford University Press.
- Lumer, Erik D., Karl J. Friston, and Geraint Rees, 1998, “Neural correlates of perceptual rivalry in the human brain”, Science , 280: 1930–1934.
- Lycan, William G., 1996, Consciousness and experience , Cambridge, MA: MIT.
- Lyons, William, 1986, The disappearance of introspection , Cambridge, MA: MIT.
- Lyubomirsky, Sonja, and Lee Ross, 1999, “Changes in attractiveness of elected, rejected, and precluded alternatives: A comparison of happy and unhappy individuals”, Journal of Personality and Social Psychology , 76: 988–1007.
- Macmillan, Neil A., and C. Douglas Creelman, 1991, Detection theory , Cambridge: Cambridge University Press.
- Marks, David F., 1985, “Imagery paradigms and methodology” Journal of Mental Imagery , 9: 93–105.
- Marr, David, 1983, Vision , New York: Freeman.
- Martin, Michael G.F., 2002, “The transparency of experience”, Mind and Language , 17: 376–425.
- Maudsley, Henry, 1867/1977, Physiology and pathology of the mind , Daniel N. Robinson (ed.), Washington, DC: University Publications of America.
- McGeer, Victoria, 1996, “Is ‘self-knowledge’ an empirical problem? Renegotiating the space of philosophical explanation”, Journal of Philosophy , 93: 483–515.
- –––, 2008, “The moral development of first-person authority”, European Journal of Philosophy , 16: 81–108.
- McGeer, Victoria, and Philip Pettit, 2002, “The self-regulating mind”, Language and Communication , 22: 281–299.
- Mele, Alfred, 2001, Self-deception unmasked , Princeton, NJ: Princeton.
- Mengzi, 3rd c. BCE/2008, B.W. Van Norden (tr.), Indianapolis: Hackett.
- Merikle, Philip M., Daniel Smilek, and John D. Eastwood, 2001, “Perception without awareness: Perspectives from cognitive psychology”, Cognition , 79: 115–134.
- Mill, James, 1829/1878, Analysis of the Phenomena of the Human Mind , John Stuart Mill (ed.), London: Longmans, Green, Reader, and Dyer.
- Mill, John Stuart, 1865/1961, Auguste Comte and positivism , Ann Arbor, MI: University of Michigan.
- Mole, Christopher, 2011, Attention is cognitive unison , Oxford: Oxford University Press.
- Moll, Albert, 1889/1911, Hypnotism , Arthur F. Hopkirk (ed.), New York: Charles Scribner's Sons.
- Moore, George Edward, 1903, “The refutation of idealism”, Mind , 12: 433–453.
- –––, 1942, “A reply to my critics”, in The philosophy of G.E. Moore , P.A. Schilpp (ed.), New York: Tudor, 535–677.
- –––, 1944/1993, “Moore's paradox”, in G.E. Moore, Selected writings , Thomas Baldwin (ed.), London: Routledge, 207–212.
- Moran, Richard, 2001, Authority and estrangement , Princeton: Princeton.
- Müller, G.E., 1904, Die Gesichtspunkte und die Tatsachen der psychophysischen Methodik , Wiesbaden: J.F. Bergmann.
- Nahmias, Eddy, 2002, “Verbal reports on the contents of consciousness: Reconsidering introspectionist methodology”, Psyche , 8(21).
- Nichols, Shaun, and Stephen P. Stich, 2003, Mindreading , Oxford: Oxford University Press.
- Nisbett, Richard E., and Nancy Bellows, 1977, “Verbal reports about causal influences on social judgments: Private access versus public theories”, Journal of Personality and Social Psychology , 35: 613–624.
- Nisbett, Richard E., and Lee Ross, 1980, Human inference , Englewood Cliffs, NJ: Prentice-Hall.
- Nisbett, Richard E., and Timothy DeCamp Wilson, 1977, “Telling more than we can know: Verbal reports on mental processes”, Psychological Review , 84: 231–259.
- Noë, Alva, 2004, Action in perception , Cambridge, MA: MIT Press.
- Noë, Alva, and Evan Thompson, 2004, “Are there neural correlates of consciousness?”, Journal of Consciousness Studies , 11(1): 3–28.
- Overgaard, Morten, Kristian Sandberg, and Mads Jensen, 2008, “The neural correlate of consciousness?”, Journal of Theoretical Biology , 254: 713–715.
- Papineau, David, 2002, Thinking about consciousness , Oxford: Oxford University Press.
- Parkkonen, Lauri, Jesper Andersson, Matti Hämäläinen, and Riitta Hari, 2008, “Early visual brain areas reflect the percept of an ambiguous scene”, Proceedings of the National Academy of Sciences , 105: 20500–20504.
- Paulhus, Delroy L., and Oliver P. John, 1998, “Egoistic and moralistic biases in self-perception: The interplay of self-deceptive styles with basic traits and motives”, Journal of Personality , 66: 1025–1060.
- Peacocke, Christopher, 1998, “Conscious attitudes, attention, and self-knowledge”, in Knowing our own minds , Crispin Wright, Barry C. Smith, and Cynthia Macdonald (eds.), Oxford: Oxford University Press, 63–99.
- Petitmengin, Claire, 2006, “Describing one's subjective experience in the second person: An interview method for the science of consciousness”, Phenomenology and the Cognitive Sciences , 5: 229–269.
- Petty, Richard E., Russell H. Fazio, and Pablo Briñol (eds.), 2009, Attitudes: Insights from the new implicit measures , New York: Taylor and Francis.
- Pillsbury, W.B., 1908, Attention , London: Swan Sonnenschein.
- Price, Donald D., and Murat Aydede, 2005, “The experimental use of introspection in the scientific study of pain and its integration with third-person methodologies: The experiential-phenomenological approach”, in Pain: New essays on its nature and the methodology of its study , Murat Aydede (ed.), Cambridge, MA: MIT, 243–273.
- Prinz, Jesse, 2004, “The fractionation of introspection”, Journal of Consciousness Studies , 11(7–8): 40–57.
- –––, 2007, “Mental pointing: Phenomenal knowledge without concepts”, Journal of Consciousness Studies , 14(9–10): 184–211.
- –––, 2012, The conscious brain , Oxford: Oxford.
- Pryor, James, 1999, “Immunity to error through misidentification”, Philosophical Topics , 26(1–2): 271–304.
- Putnam, Hilary, 1975, “The meaning of ‘meaning’” in Hilary Putnam, Philosophical papers , vol. 2, Cambridge: Cambridge University Press, 215–271.
- Quiroga, R. Quian, R. Mukamel, E.A. Isham, and I. Fried, 2008, “Human single-neuron responses at the threshold of conscious recognition”, Proceedings of the National Academy of Sciences , 105: 3599–3604.
- Redelmeier, Donald A., and Daniel Kahneman, 1996, “Patients' memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures”, Pain , 66: 3–8.
- Rees, Geraint, and Chris Frith, 2007, “Methodologies for identifying the neural correlates of consciousness”, in The Blackwell Companion to Consciousness , Max Velmans and Susan Schneider (eds.), Malden, MA: Blackwell, 553–566.
- Richet, Charles, 1884, L'homme et l'intelligence , Paris: F. Alcan.
- Rodriguez, Eugenio, Nathalie George, Jean-Philippe Lachaux, Jacques Martinerie, Bernard Renault, and Francisco J. Varela, 1999, “Perception's shadow: Long-distance synchronization of human brain activity”, Nature , 397: 430–433.
- Rorty, Richard, 1970, “Incorrigibility as the mark of the mental”, Journal of Philosophy , 67: 399–424.
- Rosenthal, David M., 1990, “Two concepts of consciousness”, Philosophical Studies , 49: 329–359.
- –––, 2001, “Introspection and self-interpretation”, Philosophical Topics , 28(2): 201–233.
- –––, 2005, Consciousness and Mind , Oxford: Oxford University Press.
- Ryle, Gilbert, 1949, The concept of mind , New York: Barnes and Noble.
- Sandberg, Kristian, Bahador Bahrami, Ryota Kanai, Gareth Robert Barnes, Morten Overgaard, and Geraint Rees, 2013, “Early visual responses predict conscious face perception within and between subjects during binocular rivalry”, Journal of Cognitive Neuroscience , 25: 969–985.
- Schwitzgebel, Eric, 2002, “A phenomenal, dispositional account of belief”, Noûs , 36: 249–275.
- –––, 2005, “Difference tone training”, Psyche , 11(6).
- –––, 2007, “No unchallengeable epistemic authority, of any sort, regarding our own conscious experience—contra Dennett?”, Phenomenology and the Cognitive Sciences , 6: 107–113.
- –––, 2011a, “Knowing your own beliefs”, Canadian Journal of Philosophy , 35 (supplement): 41–62 ( Belief and Agency , D. Hunter (ed.)).
- –––, 2011b, Perplexities of consciousness , Cambridge, MA: MIT.
- –––, 2012, “Introspection, what?”, in Introspection and consciousness , Declan Smithies and Daniel Stoljar (eds.), Oxford: Oxford.
- Scollon, Christie Napa, Ed Diener, Shigehiro Oishi, and Robert Biswas-Diener, 2005, “An experience-sampling and cross-cultural investigation of the relation between pleasant and unpleasant affect”, Cognition and Emotion , 19: 27–52.
- Searle, John R., 1983, Intentionality , Cambridge: Cambridge.
- –––, 1992, The rediscovery of the mind , Cambridge, MA: MIT Press.
- Shoemaker, Sydney, 1963, Self-knowledge and self-identity , Ithaca, NY: Cornell University Press.
- –––, 1968, “Self-reference and self-awareness”, Journal of Philosophy , 65: 555–567.
- –––, 1988, “On knowing one's own mind”, Philosophical Perspectives , 2: 183–209.
- –––, 1994a, “Self-knowledge and ‘inner sense’. Lecture I: The object perception model”, Philosophy and Phenomenological Research , 54: 249–269.
- –––, 1994b, “Self-knowledge and ‘inner sense’. Lecture II: The broad perceptual model”, Philosophy and Phenomenological Research , 54: 271–290.
- –––, 1994c, “Self-knowledge and ‘inner sense’. Lecture III: The phenomenal character of experience”, Philosophy and Phenomenological Research , 54: 291–314.
- –––, 1995, “Moore's paradox and self-knowledge”, Philosophical Studies , 77: 211–228.
- Siewert, Charles, 2004, “Is experience transparent?”, Philosophical Studies , 117: 15–41.
- –––, 2012, “On the phenomenology of introspection”, in Introspection and consciousness , Declan Smithies and Daniel Stoljar (eds.), Oxford: Oxford.
- Sirken, Monroe G., Douglas J. Herrmann, Susan Schechter, Norbert Schwarz, Judith N. Tanur, Roger Tourangeau (eds.), 1999, Cognition and survey research , New York: John Wiley and Sons.
- Smallwood, Jonathan, and Jonathan W. Schooler, 2006, “The restless mind”, Psychological Bulletin , 132: 946–958.
- Smith, A.D., 2008, “Translucent experiences”, Philosophical Studies , 140: 197–212.
- Spener, Maja, forthcoming, “Disagreement about cognitive phenomenology”, in Cognitive phenomenology , Tim Bayne and Michelle Montague (eds.), Oxford: Oxford University Press.
- Stoljar, Daniel, 2004, “The argument from diaphanousness”, in New essays in the philosophy of language and mind , Maite Ezcurdia, Robert J. Stainton, and Christopher Viger (eds.), Calgary: University of Calgary, 341–390.
- Stone, Jeff, and Joel Cooper, 2001, “A self-standards model of cognitive dissonance”, Journal of Experimental Social Psychology , 37: 228–243.
- Summerfield, Christopher, Anthony Ian Jack, and Adrian Philip Burgess, 2002, “Induced gamma activity is associated with conscious awareness of pattern masked nouns”, International Journal of Psychophysiology , 44: 93–100.
- Taylor, Shelley E., and Jonathon D. Brown, 1988, “Illusion and well-being: A social psychological perspective on mental health”, Psychological Bulletin , 103: 193–210.
- Thomas, Nigel, 1999, “Are theories of imagery theories of imagination?”, Cognitive Science , 23: 207–245.
- Titchener, E.B., 1901–1905, Experimental psychology , New York: Macmillan.
- –––, 1908/1973, Lectures on the elementary psychology of feeling and attention , New York: Arno.
- –––, 1912a, “Prolegomena to a study of introspection”, American Journal of Psychology , 23: 427–448.
- –––, 1912b, “The schema of introspection”, American Journal of Psychology , 23: 485–508.
- Tong, Frank, Ming Meng, and Randolf Blake, 2006, “Neural bases of binocular rivalry”, Trends in Cognitive Sciences , 10: 502–511.
- Tong, Frank, Ken Nakayama, J. Thomas Vaughan, and Nancy Kanwisher, 1998, “Binocular rivalry and visual awareness in human extrastriate cortex”, Neuron , 21: 753–759.
- Tononi, Giulio, and Christof Koch, 2008, “The neural correlates of consciousness: An update”, Annals of the New York Academy of Sciences: The Year in Cognitive Neuroscience 2008 , 1124: 239–261.
- Tononi, Giulio, Ramesh Srinivasan, D. Patrick Russell, and Gerald M. Edelman, 1998, “Investigating neural correlates of conscious perception by frequency-tagged neuromagnetic responses”, Proceedings of the National Academy of Sciences , 95: 3198–3203.
- Tye, Michael, 1995, Ten problems about consciousness , Cambridge, MA: MIT.
- –––, 2000, Consciousness, color, and content , Cambridge, MA: MIT.
- –––, 2002, “Representationalism and the transparency of experience” Noûs , 36: 137–151.
- –––, 2009, Consciousness revisited , Cambridge, MA: MIT.
- Van Gulick, Robert, 1993, “Understanding the phenomenal mind: Are we all just armadillos?”, in Consciousness: Psychological and philosophical essays , Martin Davies and Glyn W. Humphreys (eds.), Oxford: Blackwell, 134–154.
- Vanman, Eric J., Brenda Y. Paul, Tiffany A. Ito, and Norman Miller, 1997, “The modern face of prejudice and structural features that moderate the effect of cooperation on affect” Journal of Personality and Social Psychology , 73: 941–959.
- Varela, Francisco J., 1996, “Neurophenomenology: A methodological remedy for the hard problem”, Journal of Consciousness Studies , 3(4): 330–49.
- Vazire, Simine, 2010, “Who knows what about a person? The Self-Other Knowledge Asymmetry (SOKA) model”, Journal of Personality and Social Psychology , 98: 281–300.
- Velleman, J. David, 2000, The possibility of practical reason , Oxford: Oxford University Press.
- Watson, John B., 1913, “Psychology as the behaviorist views it”, Psychological Review , 20: 158–177.
- Wegner, Daniel M., 2002, The illusion of conscious will , Cambridge, MA: MIT.
- Wegner, Daniel M. and Thalia Wheatley, 1999, “Apparent mental causation”, American Psychologist , 54: 480–492.
- White, Peter A., 1988, “Knowing more about what we can tell: ‘Introspective access’ and causal report accuracy ten years later”, British Journal of Psychology , 79: 13–45.
- Williams, Amanda C. de C., Huw Talfryn Oakley Davies, and Yasmin Chadury, 2000, “Simple pain rating scales hide complex idiosyncratic meanings”, Pain , 85: 457–463.
- Wilson, Timothy D., 2002, Strangers to ourselves , Cambridge, MA: Harvard.
- Wilson, Timothy D., Samuel Lindsey, and Tonya T. Schooler, 2000, “A model of dual attitudes”, Psychological Review , 107: 101–126.
- Wittenbrink, Bernd, Charles M. Judd, and Bernadette Park, 1997, “Evidence for racial prejudice at the implicit level and its relationship with questionnaire measures”, Journal of Personality and Social Psychology , 72: 262–274.
- Wittenbrink, Bernd, and Norbert Schwarz (eds.), 2007, Implicit measures of attitudes , New York: Guilford.
- Wittgenstein, Ludwig, 1953/1968, Philosophical investigations , 3rd edition, G.E.M. Anscombe (translator), New York: Macmillan.
- Wollheim, Richard, 1981, Sigmund Freud , New York: Cambridge.
- –––, 2003, “On the Freudian unconscious”, Proceedings and Addresses of the American Philosophical Association , 77(2): 23–35.
- Wright, Crispin, 1989, “Wittgenstein's later philosophy of mind: Sensation, privacy, and intention”, Journal of Philosophy , 86: 622–634.
- –––, 1998, “Self-knowledge: The Wittgensteinian legacy”, in Knowing our own minds , Crispin Wright, Barry C. Smith, and Cynthia Macdonald (eds.), Oxford: Oxford.
- Wundt, Wilhelm, 1874/1908, Grundzüge der physiologischen Psychologie (6th ed.), Leipzig: Wilhelm Engelmann.
- –––, 1888, “Selbstbeobachtung und innere Wahrnehmung”, Philosophische Studien , 4: 292–309.
- –––, 1896/1902, Outlines of psychology (4th ed.), 2nd English ed., Charles Hubbard Judd (trans.), Leipzig: Wilhelm Engelmann.
- –––, 1907, “Über Ausfrageexperiments und über die Methoden zur Psychologie des Denkens”, Psychologische Studien , 3: 301–360.
- Zimmerman, Aaron, 2007, “The nature of belief”, Journal of Consciousness Studies , 14(11): 61–82.
- Implicit Association Test , from Project Implicit, Harvard University.
- Difference Tone Training , Schwitzgebel's (2005) recreation of an introspective training procedure from Titchener's (1901–1905) lab manual.
- Color Wheels; Color Systems , an image of the Munsell color solid, in the pages for the course 2D Design (Art 107) by Curt Heuer at the University of Wisconsin at Green Bay.
behaviorism | belief | Brentano, Franz | consciousness | consciousness: and intentionality | consciousness: higher-order theories | consciousness: representational theories of | consciousness: unity of | delusion | Descartes, René: epistemology | folk psychology: as a theory | folk psychology: as mental simulation | functionalism | Helmholtz, Hermann von | James, William | Kant, Immanuel: view of mind and consciousness of self | mental content: externalism about | mental content: narrow | mental imagery | pain | perception: the problem of | phenomenology | propositional attitude reports | qualia | Ryle, Gilbert | self-consciousness: phenomenological approaches to | self-deception | self-knowledge | Wundt, Wilhelm Maximilian
Copyright © 2014 by Eric Schwitzgebel <eschwitz@citrus.ucr.edu>
The Stanford Encyclopedia of Philosophy is copyright © 2016 by The Metaphysics Research Lab , Center for the Study of Language and Information (CSLI), Stanford University
Library of Congress Catalog Data: ISSN 1095-5054
Introspectionism
- Reference work entry
- First Online: 01 January 2020
- pp 2424–2427
- Rui Miguel Costa
Introspection is the method of studying psychological processes based on systematic self-observation of thoughts, perceptions, feelings, and bodily sensations. The term “introspectionism” appears to have been introduced by behaviorist critics who subsumed all the diverse views and meanings of introspection under a common umbrella; there was never an introspectionist movement (Danziger 1980; Freitas Araujo and Maluf de Souza 2016).
Wilhelm Wundt (1832–1920)
Wilhelm Wundt is the psychologist most closely linked to introspection in experimental psychology. Yet he made very limited use of introspection, and when he did, it was essentially for recording reactions to the presentation of external stimuli in the study of perception and sensation. In this regard, he distinguished between Selbstbeobachtung (self-observation) and innere Wahrnehmung (internal perception), both of which have been translated into English as introspection. It was on internal perceptions that experimental psychology should rely, that is, quick and attentive observations that avoid the interference of self-reflection and insufficient memory (Danziger 1980). The greater reliance of experimental psychology on introspection was a later development, one criticized by Wundt (Blumenthal 1975; Danziger 1980).
Banissy, M. J., Jonas, C., & Cohen Kadosh, R. (2014). Synesthesia: An introduction. Frontiers in Psychology, 5 , 1414.
Barrett, F. S., & Griffiths, R. R. (2018). Classic hallucinogens and mystical experiences: Phenomenology and neural correlates. Current Topics in Behavioral Neurosciences, 36 , 393–430.
Berkovich-Ohana, A., & Wittmann, M. (2017). A typology of altered states according to the consciousness state space (CSS) model: A special reference to subjective time. Journal of Consciousness Studies, 24 , 37–61.
Berkovich-Ohana, A., Dor-Ziderman, Y., Glicksohn, J., & Goldstein, A. (2013). Alterations in the sense of time, space, and body in the mindfulness-trained brain: A neurophenomenologically-guided MEG study. Frontiers in Psychology, 4 , 912.
Blumenthal, A. L. (1975). A reappraisal of Wilhelm Wundt. American Psychologist, 30 , 1081–1088.
Cannon, W. B. (1927). The James-Lange theory of emotions: A critical examination and an alternative theory. American Journal of Psychology, 39 , 106–124.
Costa, R. M., Pestana, J., Costa, D., & Wittmann, M. (2016). Altered states of consciousness are related to higher sexual responsiveness. Consciousness and Cognition, 42 , 135–141.
Costa, R. M., Oliveira, G., Pestana, J., Costa, D., & Oliveira, R. F. (2018). Do psychosocial factors moderate the relation between testosterone and female sexual desire? The role of interoception, alexithymia, defense mechanisms, and relationship status. Adaptive Human Behavior and Physiology . https://doi.org/10.1007/s40750-018-0102-7 .
Craig, A. D. (2011). Significance of the insula for the evolution of human awareness of feelings from the body. Annals of the New York Academy of Sciences, 1225 , 72–82.
Danziger, K. (1979). The positivist repudiation of Wundt. Journal of the History of the Behavioural Sciences, 15 , 205–230.
Danziger, K. (1980). The history of introspection reconsidered. Journal of the History of the Behavioural Sciences, 16 , 241–262.
Freitas Araujo, S., & Maluf de Souza, R. (2016). “… to rely on first and foremost and always”: Revisiting the role of introspection in William James’s early psychological work. Theory & Psychology, 26 , 96–111.
Hood, R. W., Ghorbani, N., Watson, P. J., Ghramaleki, A. F., Bing, M. N., Davison, H. K., … Williamson, W. P. (2001). Dimensions of the mysticism scale: Confirming the three-factor structure in the United States and Iran. Journal for the Scientific Study of Religion, 40 , 691–705.
James, W. (1884). On some omissions of introspective psychology. Mind, 9 , 1–29.
James, W. (1894/1994). The physical basis of emotion. Psychological Review, 101 , 205–210.
James, W. (1902/1917). The varieties of religious experience . New York: Longmans, Green, and Co.
Levin, J., & Steele, L. (2005). The transcendent experience: Conceptual, theoretical, and epidemiologic perspectives. Explore, 1 , 89–101.
Nielsen, J., Kruger, T. H. C., Hartmann, U., Passie, T., Fehr, T., & Zedler, M. (2013). Synesthesia and sexuality: The influence of synaesthetic perceptions on sexual experience. Frontiers in Psychology, 4 , 751.
Northoff, G. (2012). From emotions to consciousness – a neuro-phenomenal and neuro-relational approach. Frontiers in Psychology, 3 , 303.
Speth, J., Speth, C., Kaelen, M., Schloerscheidt, A. M., Feilding, A., Nutt, D. J., & Carhart-Harris, R. L. (2016). Decreased mental time travel to the past correlates with default-mode network disintegration under lysergic acid diethylamide. Journal of Psychopharmacology, 30 , 344–353.
Stanley, S. (2012). Intimate distances: William James’ introspection, Buddhist mindfulness, and experiential inquiry. New Ideas in Psychology, 30 , 201–211.
Studerus, E., Gamma, A., & Vollenweider, F. X. (2010). Psychometric evaluation of the altered states of consciousness rating scale (OAV). PLoS One, 5 , e12412.
Tagliazucchi, E., Roseman, L., Kaelen, M., Orban, C., Muthukumaraswamy, S. D., Murphy, K., … Carhart-Harris, R. (2016). Increased global functional connectivity correlates with LSD-induced ego-dissolution. Current Biology, 26 , 1043–1050.
Winkler, P., & Csémy, L. (2014). Self-experimentations with psychedelics among mental health professionals: LSD in the former Czechoslovakia. Journal of Psychoactive Drugs, 46 , 11–19.
Wittmann, M. (2015). Modulations of the experience of self and time. Consciousness and Cognition, 38 , 172–181.
Author information
- Rui Miguel Costa, WJCR – William James Center for Research, ISPA – Instituto Universitário, Lisbon, Portugal
Editor information
- Virgil Zeigler-Hill, Oakland University, Rochester, MI, USA
- Todd K. Shackelford, Oakland University, Rochester, MI, USA
Section Editor information
- John F. Rauthmann, Department of Psychology, Universität zu Lübeck, Lübeck, Germany
© 2020 Springer Nature Switzerland AG
Costa, R.M. (2020). Introspectionism. In: Zeigler-Hill, V., Shackelford, T.K. (eds) Encyclopedia of Personality and Individual Differences. Springer, Cham. https://doi.org/10.1007/978-3-319-24612-3_691
Published : 22 April 2020
Publisher Name : Springer, Cham
Print ISBN : 978-3-319-24610-9
Online ISBN : 978-3-319-24612-3
- Find a journal
- Track your research
- Subscriber Services
- For Authors
- Publications
- Archaeology
- Art & Architecture
- Bilingual dictionaries
- Classical studies
- Encyclopedias
- English Dictionaries and Thesauri
- Language reference
- Linguistics
- Media studies
- Medicine and health
- Names studies
- Performing arts
- Science and technology
- Social sciences
- Society and culture
- Overview Pages
- Subject Reference
- English Dictionaries
- Bilingual Dictionaries
Recently viewed (0)
- Save Search
- Share This Facebook LinkedIn Twitter
Related Content
Related overviews.
behaviourism
infinite regress
metacognition
protocol analysis
See all related overviews in Oxford Reference »
More Like This
Show all results sharing these subjects:
introspection
Quick reference.
A method of data collection in which observers examine, record, and describe their own internal mental processes and experiences. It can be traced back to the writings of the Greek philosopher Aristotle (384–322 BC), who described in his essay On Memory in his Parva Naturalia how the process of recalling autumn was preceded by related thoughts such as milk, white, air, fluid , and only then autumn . When psychology emerged as an independent empirical science in Germany in the 1880s, Wilhelm (Max) Wundt (1832–1920) and other experimental researchers regarded uncontrolled introspection as unreliable and introduced methods based on introspection by trained observers under experimental conditions. In 1913 the US psychologist John B(roadus) Watson (1878–1958) launched an attack on introspection, declaring that psychology ought to be confined to the prediction and control of overt behaviour, but he advocated the study of reasoning processes via think-aloud methods, on which modern techniques of protocol analysis are based. See also infinite regress, metacognition, phenomenology, structuralism (2). Compare free association. introspect vb. [From Latin intro- towards the inside + specere, spectum to look + -ion indicating an action, process, or state]
From: introspection in A Dictionary of Psychology »