Descriptive statistics in research: a critical component of data analysis

15 min read

With any data, the objective is to describe the population at large. But what does that mean, and what processes, methods and measures are used to uncover insights from that data? In this short guide, we explore descriptive statistics and how it's applied to research.
What do we mean by descriptive statistics?
With any kind of data, the main objective is to describe a population at large — and using descriptive statistics, researchers can quantify and describe the basic characteristics of a given data set.
For example, researchers can condense large data sets, which may contain thousands of individual data points or observations, into a series of statistics that provide useful information on the population of interest. We call this process “describing data”.
In the process of producing summaries of the sample, we use measures like mean, median, variance, graphs, charts, frequencies, histograms, box and whisker plots, and percentages. For datasets with just one variable, we use univariate descriptive statistics. For datasets with multiple variables, we use bivariate correlation and multivariate descriptive statistics.
Here's what each of these terms means:
Univariate descriptive statistics: this is when you want to describe data with only one characteristic or attribute
Bivariate correlation: this is when you simultaneously analyze (compare) two variables to see if there is a relationship between them
Multivariate descriptive statistics: this is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable
Then, after describing and summarizing the data, as well as using simple graphical analyses, we can start to draw meaningful insights from it to help guide specific strategies. It's also important to note that descriptive statistics can employ both quantitative and qualitative research.
Describing data is undoubtedly the most critical first step in research as it enables the subsequent organization, simplification and summarization of information — and every survey question and population has summary statistics. Let’s take a look at a few examples.
Examples of descriptive statistics
Consider for a moment a number used to summarize how well a striker is performing in football: goal conversion rate. This is simply the number of goals scored divided by the number of shots taken (reported to three decimal places). If a striker is converting at 0.333, that's one goal for every three shots. If they're scoring one in four, that's 0.250.
A classic example is a student's grade point average (GPA). This single number describes the general performance of a student across a range of course experiences and classes. It doesn't tell us anything about the difficulty of the courses the student is taking, or what those courses are, but it does provide a summary that enables a degree of comparison with other students or units of data.
Ultimately, descriptive statistics make it incredibly easy for people to understand complex (or data intensive) quantitative or qualitative insights across large data sets.
Types of descriptive statistics
To quantitatively summarize the characteristics of raw, ungrouped data, we use the following types of descriptive statistics:
- Measures of Central Tendency,
- Measures of Dispersion, and
- Measures of Frequency Distribution.
Following the application of any of these approaches, the raw data then becomes ‘grouped’ data that’s logically organized and easy to understand. To visually represent the data, we then use graphs, charts, tables etc.
Let’s look at the different types of measurement and the statistical methods that belong to each:
Measures of Central Tendency are used to describe data by determining a single representative or central value, for example the mean, median or mode.

Measures of Dispersion are used to determine how spread out a data distribution is with respect to a central value such as the mean, median or mode. While central tendency gives you the average or central value, it doesn't describe how the data is distributed within the set.
Measures of Frequency Distribution are used to describe the occurrence of data within the data set (count).
The methods of each measure are summarized in the table below:
Mean: The most popular and well-known measure of central tendency. The mean is equal to the sum of all the values in the data set divided by the number of values in the data set.
Median: The median is the middle score for a set of data that has been arranged in order of magnitude. If you have an even number of data points, e.g. 10, take the two middle scores and average them.
Mode: The mode is the most frequently occurring observation in the data set.
Range: The difference between the highest and lowest value.
Standard deviation: Standard deviation measures the dispersion of a data set relative to its mean and is calculated as the square root of the variance.
Quartile deviation: Quartile deviation measures the spread of the middle 50% of the data; it is half the interquartile range (the difference between the third and first quartiles).
Variance: Variance measures variability around the mean; it is the average of the squared deviations from the mean.
Absolute deviation: The absolute deviation of a dataset is the average distance between each data point and the mean.
Count: How often each value occurs.
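The measures above can all be computed with Python's standard library. A brief sketch, using an invented set of test scores:

```python
import statistics
from collections import Counter

# Hypothetical test scores, used only for illustration
scores = [4, 8, 6, 5, 3, 8, 7, 6, 5, 8]

# Measures of central tendency
mean = statistics.mean(scores)            # 6.0
median = statistics.median(scores)        # middle value of the sorted data
mode = statistics.mode(scores)            # most frequent value (8)

# Measures of dispersion
value_range = max(scores) - min(scores)   # highest minus lowest
variance = statistics.variance(scores)    # sample variance
std_dev = statistics.stdev(scores)        # square root of the variance
# Absolute deviation: average distance of each point from the mean
abs_dev = sum(abs(x - mean) for x in scores) / len(scores)
# Quartile deviation: half the spread of the middle 50% of the data
q1, q2, q3 = statistics.quantiles(scores, n=4)
quartile_dev = (q3 - q1) / 2

# Measure of frequency distribution: how often each value occurs
counts = Counter(scores)
```

Note that `statistics.quantiles` defaults to the "exclusive" method of computing quartiles; different tools (and textbooks) use slightly different conventions, so quartile values can vary between packages.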
Scope of descriptive statistics in research
Descriptive statistics (or analysis) is considered broader in scope than other quantitative and qualitative methods, as it provides a much wider picture of an event, phenomenon or population.
But that's not all: it can use any number of variables, and because it describes data as collected, without manipulation, it's also far more representative of the world as it exists.
However, it’s also important to consider that descriptive analyses lay the foundation for further methods of study. By summarizing and condensing the data into easily understandable segments, researchers can further analyze the data to uncover new variables or hypotheses.
Mostly, this practice is all about the ease of data visualization. With data presented in a meaningful way, researchers have a simplified interpretation of the data set in question. That said, while descriptive statistics helps to summarize information, it only provides a general view of the variables in question.
It is, therefore, up to the researchers to probe further and use other methods of analysis to discover deeper insights.
Things you can do with descriptive statistics
Define subject characteristics
If a marketing team wanted to build out accurate buyer personas for specific products and industry verticals, they could use descriptive analyses on customer datasets (procured via a survey) to identify consistent traits and behaviors.
They could then ‘describe’ the data to build a clear picture and understanding of who their buyers are, including things like preferences, business challenges, income and so on.
Measure data trends
Let’s say you wanted to assess propensity to buy over several months or years for a specific target market and product. With descriptive statistics, you could quickly summarize the data and extract the precise data points you need to understand the trends in product purchase behavior.
Compare events, populations or phenomena
How do different demographics respond to certain variables? For example, you might want to run a customer study to see how buyers in different job functions respond to new product features or price changes. Are all groups as enthusiastic about the new features and likely to buy? Or do they have reservations? This kind of data will help inform your overall product strategy and potentially how you tier solutions.
Validate existing conditions
When you have a belief or hypothesis but need to prove it, you can use descriptive techniques to ascertain underlying patterns or assumptions.
Form new hypotheses
With the data presented and surmised in a way that everyone can understand (and infer connections from), you can delve deeper into specific data points to uncover deeper and more meaningful insights — or run more comprehensive research.
Guiding your survey design to improve the data collected
To use your surveys as an effective tool for customer engagement and understanding, every survey goal and item should answer one simple, yet highly important question:
What am I really asking?
It might seem trivial, but by having this question frame survey research, it becomes significantly easier for researchers to develop the right questions that uncover useful, meaningful and actionable insights.
Planning becomes easier, questions clearer and perspective far wider and yet nuanced.
Hypothesize – what’s the problem that you’re trying to solve? Far too often, organizations collect data without understanding what they’re asking, and why they’re asking it.
Finally, focus on the end result. What kind of data do you need to answer your question? Also, are you asking a quantitative or qualitative question? Here are a few things to consider:
- Clear questions are clear for everyone. It takes time to make a concept clear
- Ask about measurable, evident and noticeable activities or behaviors.
- Make rating scales easy. Avoid long lists, confusing scales or “don’t know” or “not applicable” options.
- Ensure your survey makes sense and flows well. Reduce the cognitive load on respondents by making it easy for them to complete the survey.
- Read your questions aloud to see how they sound.
- Pretest by asking a few uninvolved individuals to answer.
Furthermore…
As well as understanding what you’re really asking, there are several other considerations for your data:
Keep it random
How you select your sample is what makes your research replicable and meaningful. Having a truly random sample helps prevent bias, increasing the quality of the evidence you find.
Plan for and avoid sample error
Before starting your research project, have a clear plan for avoiding sample error. Use larger sample sizes, and apply random sampling to minimize the potential for bias.
Don’t over sample
Remember, a simple random sample of around 500 respondents will typically reflect the actual population to within a few percentage points at a 95% confidence level; sampling far beyond that adds cost with diminishing gains in precision.
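The intuition behind a 500-respondent sample comes from the standard margin-of-error formula for a proportion estimated from a simple random sample. A quick sketch (the 1.96 multiplier is the z-score for 95% confidence, and p = 0.5 is the most conservative assumption about the true proportion):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# With 500 random respondents, estimates land within roughly +/- 4.4 points
print(round(margin_of_error(500) * 100, 1))  # -> 4.4

# Quadrupling the sample only halves the margin, which is why oversampling
# quickly stops paying off
print(round(margin_of_error(2000) * 100, 1))  # -> 2.2
```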
Think about the mode
Match your survey methods to the sample you select. For example, how do your current customers prefer communicating? Do they have any shared characteristics or preferences? A mixed-method approach is critical if you want to drive action across different customer segments.
Use a survey tool that supports you with the whole process
Survey research software can support researchers throughout the whole process, and ready-made templates provide a head start:
- Employee satisfaction survey template
- Employee exit survey template
- Customer satisfaction (CSAT) survey template
- Ad testing survey template
- Brand awareness survey template
- Product pricing survey template
- Product research survey template
- Employee engagement survey template
- Customer service survey template
- NPS survey template
- Product package testing survey template
- Product features prioritization survey template
These considerations have been included in Qualtrics’ survey software , which summarizes and creates visualizations of data, making it easy to access insights, measure trends, and examine results without complexity or jumping between systems.
Uncover your next breakthrough idea with Stats iQ™
What makes Qualtrics so different from other survey providers is that it is built in consultation with trained research professionals and includes high-tech statistical software like Qualtrics Stats iQ .
With just a click, the software can run specific analyses or automate statistical testing and data visualization. Testing parameters are automatically chosen based on how your data is structured (e.g. categorical data will run a statistical test like Chi-squared), and the results are translated into plain language that anyone can understand and put into action.
Get more meaningful insights from your data
Stats iQ includes a variety of statistical analyses, including: describe, relate, regression, cluster, factor, TURF, and pivot tables — all in one place!
Confidently analyze complex data
Built-in artificial intelligence and advanced algorithms automatically choose and apply the right statistical analyses and return the insights in plain English so everyone can take action.
Integrate existing statistical workflows
For more experienced stats users, built-in R code templates allow you to run even more sophisticated analyses by adding R code snippets directly in your survey analysis.
Advanced statistical analysis methods available in Stats iQ
Regression analysis – Measures the degree of influence of independent variables on a dependent variable (the relationship between two or multiple variables).
Analysis of Variance (ANOVA) test – Commonly used alongside a regression study to find out what effect independent variables have on the dependent variable. It can compare the means of multiple groups simultaneously to see whether they differ significantly.
Conjoint analysis – Asks people to make trade-offs when making decisions, then analyzes the results to reveal which combinations of features people value most. Helps you understand why people make the complex choices they do.
T-Test – Helps you compare whether two data groups have different mean values and allows the user to interpret whether differences are meaningful or merely coincidental.
Crosstab analysis – Used in quantitative market research to analyze categorical data (variables that fall into mutually exclusive categories), allowing you to compare the relationship between two variables in contingency tables.
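To illustrate the t-test idea with a concrete calculation, here is a minimal sketch of Welch's unequal-variance form using only the standard library (the two customer groups and their scores are invented; real tools also report a p-value, which requires the t distribution):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic: difference in means scaled by the combined standard error."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical satisfaction scores for two customer groups
group_a = [7, 8, 6, 9, 7, 8]
group_b = [5, 6, 5, 7, 6, 5]

t = welch_t(group_a, group_b)
# A large |t| (here roughly 3.4) suggests the difference in means
# is unlikely to be mere coincidence
```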
Go from insights to action
Now that you have a better understanding of descriptive statistics in research and how to apply statistical analysis methods correctly, it's time to use a tool that can take your research and subsequent analysis to the next level.
Try out a Qualtrics survey software demo so you can see how it can take you through descriptive research and further research projects from start to finish.
Related resources
- Mixed methods research (17 min read)
- Market intelligence (10 min read)
- Marketing insights (11 min read)
- Ethnographic research (11 min read)
- Qualitative vs quantitative research (13 min read)
- Qualitative research questions (11 min read)
- Qualitative research design (11 min read)
Qualitative Descriptive Methods in Health Science Research
Karen Jiggins Colorafi, PhD, MBA, RN; Bronwynne Evans, PhD, RN, FNGNA, ANEF, FAAN
Corresponding Author: Karen Jiggins Colorafi, PhD, MBA, RN, College of Nursing & Health Innovation, Arizona State University, 550N. 3rd Street, Phoenix, AZ 85004, USA. [email protected]
Issue date 2016 Jul.
Reprints and permission: sagepub.com/journalsPermissions.nav
Purpose:
The purpose of this methodology paper is to describe an approach to qualitative design known as qualitative descriptive that is well suited to junior health sciences researchers because it can be used with a variety of theoretical approaches, sampling techniques, and data collection strategies.
Background:
It is often difficult for junior qualitative researchers to pull together the tools and resources they need to embark on a high-quality qualitative research study and to manage the volumes of data they collect during qualitative studies. This paper seeks to pull together much-needed resources and provide an overview of methods.
Method:
A step-by-step guide to planning a qualitative descriptive study and analyzing the data is provided, utilizing exemplars from the authors' research.
Results:
This paper presents steps to conducting a qualitative descriptive study under the following headings: describing the qualitative descriptive approach, designing a qualitative descriptive study, steps to data analysis, and ensuring rigor of findings.
Conclusions:
The qualitative descriptive approach results in a summary in everyday, factual language that facilitates understanding of a selected phenomenon across disciplines of health science researchers.
Keywords: qualitative descriptive, qualitative methodology, rigor, qualitative design, qualitative analysis
There is an explosion in qualitative methodologies among health science researchers because social problems lend themselves toward thoughtful exploration, such as when issues of interest are complex, have variables or concepts that are not easily measured, or involve listening to populations who have traditionally been silenced ( Creswell, 2013 ). Creswell (2013 , p. 48) suggests qualitative research is preferred when health science researchers seek to (a) share individual stories, (b) write in a literary, flexible style, (c) understand the context or setting of issues, (d) explain mechanisms or linkages in causal theories, (e) develop theories, and (f) when traditional quantitative statistical analyses do not fit the problem at hand. Typically, qualitative textbooks present learners with five approaches for qualitative inquiry: narrative, phenomenological, grounded theory, case study, and ethnography. Yet eminent researcher Margarete Sandelowski argues that in “the now vast qualitative methods literature, there is no comprehensive description of qualitative description as a distinctive method of equal standing with other qualitative methods, although it is one of the most frequently employed methodological approaches in the practice disciplines” ( Sandelowski, 2000 ). Qualitative description is especially amenable to health environments research because it provides factual responses to questions about how people feel about a particular space, what reasons they have for using features of the space, who is using particular services or functions of a space, and the factors that facilitate or hinder use.
The purpose of this methodology article is to define and outline qualitative description for health science researchers, providing a starter guide containing important primary sources for those who wish to become better acquainted with this methodological approach.
Describing the Qualitative Descriptive Approach
In two seminal articles, Sandelowski promotes the mainstream use of qualitative description ( Sandelowski, 2000 , 2010 ) as a well-developed but unacknowledged method which provides a “comprehensive summary of an event in the every day terms of those events” ( Sandelowski, 2000 , p. 336). Such studies are characterized by lower levels of interpretation than are high-inference qualitative approaches such as phenomenology or grounded theory and require a less “conceptual or otherwise highly abstract rendering of data” ( Sandelowski, 2000 , p. 335). Researchers using qualitative description “stay closer to their data and to the surface of words and events” ( Sandelowski, 2000 , p. 336) than many other methodological approaches. Qualitative descriptive studies focus on low-inference description, which increases the likelihood of agreement among multiple researchers. The difference between high and low inference approaches is not one of rigor but refers to the amount of logical reasoning required to move from a data-based premise to a conclusion. Researchers who use qualitative description may choose to use the lens of an associated interpretive theory or conceptual framework to guide their studies, but they are prepared to alter that framework as necessary during the course of the study ( Sandelowski, 2010 ). These theories and frameworks serve as conceptual hooks upon which hang study procedures, analysis, and re-presentation. Findings are presented in straightforward language that clearly describes the phenomena of interest.
Other cardinal features of the qualitative descriptive approach include (a) a broad range of choices for theoretical or philosophical orientations, (b) the use of virtually any purposive sampling technique (e.g., maximum variation, homogenous, typical case, criterion), (c) the use of observations, document review, or minimally to moderately structured interview or focus group questions, (d) content analysis and descriptive statistical analysis as data analysis techniques, and (e) the provision of a descriptive summary of the informational contents of the data organized in a way that best fits the data ( Neergaard, Olesen, Andersen, & Sondergaard, 2009 ; Sandelowski, 2000 , 2001 , 2010 ).
Designing a Qualitative Descriptive Study
Methodology
Unlike traditional qualitative methodologies such as grounded theory, which are built upon a particular, prescribed constellation of procedures and techniques, qualitative description is grounded in the general principles of naturalistic inquiry. Lincoln and Guba suggest that naturalistic inquiry deals with the concept of truth, whereby truth is "a systematic set of beliefs, together with their accompanying methods" (Lincoln & Guba, 1985, p. 16). Using an often eclectic compilation of sampling, data collection, and data analysis techniques, the researcher studies something in its natural state and does not attempt to manipulate or interfere with the ordinary unfolding of events. Taken together, these practices lead to "true understanding" or "ultimate truth." Table 1 describes design elements in two exemplar qualitative descriptive studies and serves as a guide to the following discussion.
Example of Study Design Elements for Two Studies.
Adapted from Jiggins Colorafi (2015) .
Adapted from Evans, Belyea, Coon, and Ume (2012) ; Evans, Belyea, and Ume (2011)
Theoretical Framework
Theoretical frameworks serve as organizing structures for research design: sampling, data collection, analysis, and interpretation, including coding schemes and formulating hypotheses for further testing (Evans, Coon, & Ume, 2011; Miles, Huberman, & Saldana, 2014; Sandelowski, 2010). Such frameworks affect the way in which data are ultimately viewed; qualitative description supports and allows for the use of virtually any theory (Sandelowski, 2010). Creswell's chapter on "Philosophical Assumptions and Interpretative Frameworks" (2013) is a useful place to gain understanding about how to embed a theory into a study.
Sampling

Sampling choices place a boundary around the conclusions you can draw from your qualitative study and influence the confidence you and others place in them (Miles et al., 2014). A hallmark of the qualitative descriptive approach is the acceptability of virtually any sampling technique (e.g., maximum variation, where you aim to collect as many different cases as possible, or homogeneous, whereby participants are mostly the same). See Miles, Huberman, and Saldana's (2014, p. 30) "Bounding the Collection of Data" discussion to select an appropriate and congruent purposive sampling strategy for your qualitative study.
Data Collection
In qualitative descriptive studies, data collection attempts to discover "the who, what and where of events" or experiences (Sandelowski, 2000, p. 339). This includes, but is not limited to, focus groups, individual interviews, observation, and the examination of documents or artifacts.
Data Analysis
Content analysis refers to a technique commonly used in qualitative research to analyze words or phrases in text documents. Hsieh and Shannon (2005) present three types of content analysis, any of which could be used in a qualitative descriptive study. Conventional content analysis is used in studies that aim to describe a phenomenon where existing research and theory are limited. Data are collected from open-ended questions, read word for word, and then coded; notes are made and codes are categorized. Directed content analysis is used in studies where existing theory or research is available: it can be used to further describe phenomena whose descriptions are incomplete or would benefit from elaboration. Initial codes are created from theory or research and applied to the data, and unlabeled portions of text are given new codes. Summative content analysis is used to quantify and interpret words in context, exploring their usage; data sources are typically seminal texts or electronic word searches.
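The directed approach can be illustrated with a toy sketch (the codes, keywords, and interview segments below are invented): initial codes derived from theory are applied first, and segments that match no existing code are set aside as candidates for new, emergent codes.

```python
# Hypothetical starter codes derived from theory, each with indicator keywords
coding_scheme = {
    "family_support": ["family", "daughter", "son", "spouse"],
    "financial_strain": ["cost", "afford", "insurance", "bills"],
}

# Invented interview segments
segments = [
    "My daughter drives me to every appointment.",
    "I worry about whether we can afford the medication.",
    "The clinic waiting room always feels crowded.",
]

coded, uncoded = [], []
for segment in segments:
    text = segment.lower()
    # Apply every initial code whose keywords appear in the segment
    matches = [code for code, keywords in coding_scheme.items()
               if any(word in text for word in keywords)]
    if matches:
        coded.append((segment, matches))
    else:
        uncoded.append(segment)  # unlabeled text: candidates for new codes
```

Real directed content analysis is of course a human, interpretive act; keyword matching only mimics the mechanics of applying a pre-built coding scheme and flagging unlabeled text.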
Quantitative data can be included in qualitative descriptive studies if they aim to more adequately or fully describe the participants or phenomenon of interest. Counting is conceptualized as a "means to an end, not the end itself" by Sandelowski (2000, p. 338), who emphasizes that careful descriptive statistical analysis is an effort to understand the content of data, not simply the means and frequencies, and results in a highly nuanced description of the patterns or regularities of the phenomenon of interest (Sandelowski, 2000, 2010). The use of validated measures can assist with generating dependable and meaningful findings, especially when the instrument (e.g., survey, questionnaire, or list of questions) used in your study has been used in others, helping to build theory, improve predictions, or make recommendations (Miles et al., 2014).
Data Re-Presentation
In clear and simple terms, the "expected outcome of qualitative descriptive studies is a straightforward descriptive summary of the informational contents of data organized in a way that best fits the data" (Sandelowski, 2000, p. 339). Data re-presentation techniques allow for tremendous creativity and variation among researchers and studies. Several good resources are provided to spur imagination (Miles et al., 2014; Munhall & Chenail, 2008; Wolcott, 2009).
Steps to Data Analysis
It is often difficult for junior health science researchers to know what to do with the volumes of data collected during a qualitative study, and formal coursework in traditional qualitative methods courses is typically sparse regarding the specifics of data management. It is for those reasons that this section of our article provides a detailed description of the data analysis techniques used in qualitative descriptive methodology. The following steps are case examples from a study undertaken by one author (K.J.C.) after completing a data management course offered by another author (B.E.). Examples are offered from the two studies noted in Table 1. The steps are offered in list format for general readability, but the qualitative researcher should recognize that qualitative analyses are iterative and recursive by nature.
Prior to initiating data collection, a coding manual containing a beginning list of codes ( Fonteyn, Vettese, Lancaster, & Bauer-Wu, 2008 ; Hsieh & Shannon, 2005 ; Miles et al., 2014 ) derived from the theoretical framework, literature, and the analysis of preliminary data, was developed. Codes are action-oriented words or labels assigned to designated portions (chunks or meaning units) of text reflecting themes or topics that occur with regularity ( Miles et al., 2014 , p. 71). In the coding manual (see example in Table 2 ), themes which were conceptually similar were grouped together using an ethnographic technique of domain analysis ( Spradley, 1980 ). A domain analysis contains a series of themes, a semantic relationship such as “is a component of” or “is a type of,” and the name of the domain. It is read from the bottom up, hence, “Acknowledging the importance of la familia” “is a result of” “cultural expectation.” Between the semantic relationship (is a result of) and the domain name, we inserted a definition of the domain itself (values, beliefs, and activities seen as normative by members of the culture who learn, share, and transmit this knowledge to others).
Example of a Coding Manual.
Note . SES = socioeconomic status.
Reading from the left in Table 2 , codes were given a number and letter for use in marking sections of text. Next, the code name indicating a theme was entered in boldface type with a definition in the code immediately under it. The second column provided an exemplar of each code, along with a notation indicating where it was found in the data, so that coders could recognize instances of that particular code when they saw them.
The coding manual was tested against data gathered in a preliminary study and revised when codes were found to overlap or to be missing entirely. We continued to revise it iteratively as data collection and analysis proceeded and then used it to recode previously coded data, revisiting the data several times in the process.
Each transcribed document was formatted with wide right margins that allowed the investigator to apply codes and generate marginal remarks by hand. Marginal remarks are handwritten comments entered by the investigator. They represent an attempt to stay “alert” about analysis, forming ideas and recording reactions to the meaning of what is seen in the data. Marginal remarks often suggest new interpretations, leads, and connections or distinctions with other parts of the data ( Miles et al., 2014 ). Such remarks are preanalytic and add meaning and clarity to transcripts.
The investigator took sentences or paragraphs in the transcripts and divided them into meaning units, which are segments of text that contain a single idea (Table 3). One or more codes were applied to each meaning unit during first-level coding, which is highly descriptive in nature. In Table 3, reading from left to right, the first column contains text that has been separated into meaning units by color. The second column lists codes that were applied to each meaning unit, also color coded for clarity. First-level codes are in gerund form: a verb with an "ing" ending that denotes action. Gerunds are used to help the researcher focus on participant behaviors and actions in the transcript. Table 3 is an example of first-level or coarse coding (applying fewer codes to bigger "chunks" of material). Alternatively, individual researchers may choose to code finely (applying more codes to smaller "chunks" of material). Coding is a form of analysis; codes "are prompts or triggers for deeper reflection" (Miles et al., 2014, p. 73). Because coding is a way to condense data, the researcher may choose to put "chunks" of coded material in large or small groupings, effectively slicing the data in a fine or coarse manner.
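The mechanics of this step (a coding manual, meaning units, first-level codes, and the later counting of code frequencies) can be sketched in a few lines. The manual entries, codes, and meaning units below are invented for illustration:

```python
from collections import Counter

# Hypothetical coding-manual entries: id -> (gerund-form code name, definition)
coding_manual = {
    "1a": ("Acknowledging family", "References to relatives as part of care"),
    "2b": ("Managing costs", "Mentions of money, insurance, or affordability"),
}

# Meaning units (single-idea text segments) with first-level codes applied by hand;
# the third unit shows coarse coding, where one unit carries several codes
coded_units = [
    ("My son checks on me daily", ["1a"]),
    ("The copay went up again", ["2b"]),
    ("My daughter pays the copay", ["1a", "2b"]),
]

# Frequency of each code across all meaning units: the counting step often
# supported by software such as Dedoose
frequencies = Counter(code for _, codes in coded_units for code in codes)
print(frequencies["1a"], frequencies["2b"])  # -> 2 2
```

The analytic work (deciding where a meaning unit begins and ends, and which codes apply) remains a human judgment; the data structure only records and tallies those decisions.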
Conceptually similar codes were organized into categories (increasingly abstract groupings of related codes) by revisiting the theory framing the study (asking, “does this system of coding make sense according to the chosen theory?”). Miles et al. (2014) provide many examples of creating, categorizing, and revising codes, including a technique from Corbin and Strauss ( Corbin & Strauss, 2015 ) in which a growing list of codes is relabeled with slightly more abstract terms, creating new categories of codes with each revision. This is often referred to as second-level or pattern coding, a way of grouping data into a smaller number of sets, themes, or constructs. During the analysis of data, patterns were generated and the researcher spent significant amounts of time with different categorizations, asking questions, checking relationships, and generally resisting the urge to be “locked too quickly into naming a pattern” ( Miles et al., 2014 , p. 69).
During this phase of analysis, pattern codes were revised and redefined in the coding manual and exemplars were used to clarify the understanding of each code. Miles et al. (2014) suggest that software can be helpful during this categorization (counting) step, so lists of observed engagement behaviors were also recorded in Dedoose software ( Dedoose, 2015 ) by code so that frequencies could be captured and analyzed. Despite the assistance of Dedoose, the researcher found that hand sorting codes into themes and categories was best done on paper.
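The counting step described above, tallying how often each code was applied so that frequencies can be compared, can be sketched with the standard library. This is a minimal illustration under invented data, not a reproduction of the Dedoose workflow or the study’s actual code list.

```python
# Hypothetical sketch of capturing code frequencies during analysis.
# The code labels below are invented for illustration.
from collections import Counter

# Every application of a code across the coded transcripts.
applied_codes = [
    "checking the portal",
    "seeking help from family",
    "checking the portal",
    "checking the portal",
    "establishing a routine",
]

# Tally applications per code, most frequent first.
frequencies = Counter(applied_codes)
for code, count in frequencies.most_common():
    print(f"{code}: {count}")
```

A tally like this supports the categorization step without replacing it; as the article notes, hand sorting codes into themes may still work best on paper.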
Analytic memos are defined by Miles et al. (2014 , p. 95) as a “brief or extended narrative that documents the researcher’s reflections and thinking processes about the data.” Memos (see Figure 1 for an example) aided in data reduction by tying together different pieces of data into conceptual clusters. Memos were personal, methodological, or substantive in nature. These analytic memos were further analyzed by summarizing them and creating additional analytic memos for groups of observations that contained similarities, effectively reducing the data collected through observation. Memoing was conducted throughout the analysis, beginning with data collection and continuing through the write-up of the dissertation findings chapters.
Data displays (matrices), or visual representations of concepts or variables, were helpful in analyzing the data ( Table 4 ). Data displays help the investigator draw conclusions through an iterative process in which collected data are condensed into displays, reducing the data and prompting further analysis ( Miles et al., 2014 ). Data displays are used extensively to categorize, organize, and analyze data. Such displays provide an opportunity to combine quantitative and qualitative findings, triangulating data collected by standardized measures, forms, observations, and interviews both within case and across cases. Triangulation refers to the use of more than one approach to investigating the research question in order to enhance confidence in the findings ( Creswell & Plano-Clark, 2007 ; Denzin & Lincoln, 1994 ; Denzin, Lincoln, & Giardina, 2006 ; Sandelowski, 2001 ).
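A data display of this kind can be sketched as a small matrix that places qualitative codes alongside quantitative scores for each case, supporting within-case and cross-case comparison. The structure and values below are assumptions for illustration, not the study’s actual Table 4.

```python
# Minimal sketch of a within-case/cross-case data display (matrix) that
# juxtaposes quantitative scores with qualitative codes for triangulation.
# Participant rows, scores, and code labels are invented for illustration.

cases = {
    "Participant 01": {"CG Strain": 7, "CG Gain": 4,
                       "dominant codes": "checking the portal"},
    "Participant 02": {"CG Strain": 3, "CG Gain": 8,
                       "dominant codes": "seeking help from family"},
}

# Render the matrix: one row per case, one column per variable.
columns = ["CG Strain", "CG Gain", "dominant codes"]
print("Case".ljust(16) + " | " + " | ".join(columns))
for case, row in cases.items():
    print(case.ljust(16) + " | " +
          " | ".join(str(row[col]) for col in columns))
```

Reading across a row supports within-case analysis; reading down a column supports cross-case comparison and triangulation against the interview and observation data.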
Finally, the data are re-presented in a creative but rigorous way that is judged to best fit the findings ( Miles et al., 2014 ; Sandelowski & Leeman, 2012 ; Stake, 2010 ; Wolcott, 2009 ).
Level 1 Coding With Meaning Units.
Example of an analytic memo used in qualitative description analysis.
Data Matrix.
Note . The CLOX is an executive clock drawing task that tests cognition and was used in this study with the caregiver (CG) and the care recipient (CR). The CG Strain and the CG Gain scores were derived by the researcher through a qualitative content analysis ( Evans, Coon, & Belyea, 2006 ).
Strategies for Ensuring Rigor of Findings
Many qualitative researchers do not provide enough information in their reports about the analytic strategies used to ensure verisimilitude, or the “ring of truth,” of their conclusions. Miles, Huberman, and Saldana (2014) outline 13 tactics for generating meaning from data and another 13 for testing or confirming findings. They also provide five standards for assessing the quality of conclusions. The techniques relied upon most heavily during a qualitative descriptive study ought to be addressed within the research report. It is important to establish “trustworthiness” and “authenticity” in qualitative research, concepts analogous to validity and reliability in quantitative research. The five standards (objectivity, dependability, credibility, transferability, and application) typically used in qualitative descriptive studies to assess the quality and legitimacy (trustworthiness and authenticity) of the conclusions are discussed in the next sections ( Lincoln & Guba, 1985 ; Miles et al., 2014 ).
Objectivity
First, objectivity (confirmability) is conceptualized as relative neutrality and reasonable freedom from researcher bias and can be addressed by (a) describing the study’s methods and procedures in explicit detail, (b) sharing the sequence of data collection, analysis, and presentation methods to create an audit trail, (c) being aware of and reporting personal assumptions and potential bias, and (d) retaining study data and making them available to collaborators for evaluation.
Dependability
Second, dependability (reliability or auditability) can be fostered by consistency in procedures across participants over time through various methods, including the use of semistructured interview questions and an observation data collection worksheet. Quality control ( Miles et al., 2014 ) can be fostered by:
deriving study procedures from clearly outlined research questions and conceptual theory, so that data analysis could be linked back to theoretical constructs;
clearly describing the investigator’s role and status at the research site;
demonstrating parallelism in findings across sources (i.e., interview vs. observation, etc.);
triangulation through the use of observations, interviews, and standardized measures to more adequately describe various characteristics of the sample population ( Denzin & Lincoln, 1994 );
demonstrating consistency in data collection for all participants (i.e., using the same investigator and preprinted worksheets, asking the same questions in the same order);
developing interview questions and observation techniques based on theory, then revising and testing them during preliminary work;
developing a coding manual a priori to guide data analysis, containing a “start list” of codes derived from the theoretical framework and relevant literature ( Fonteyn et al., 2008 ; Hsieh & Shannon, 2005 ; Miles et al., 2014 ); and
developing a monitoring plan (fidelity) to ensure that researchers, especially junior ones, do not go “beyond the data” ( Sandelowski, 2000 ) in interpretation. In keeping with the qualitative tradition, data analysis and collection should occur simultaneously, giving the investigator the opportunity to correct errors or make revisions.
Credibility
Third, credibility or verisimilitude (internal validity) is defined as the truth value of data: Do the findings of the study make sense? ( Miles et al., 2014 , p. 312). Credibility in qualitative work promotes descriptive and evaluative understanding, which can be addressed by (a) providing context-rich “thick descriptions,” that is, the work of interpretation based on data ( Sandelowski, 2004 ), (b) checking with other practitioners or researchers that the findings “ring true,” (c) providing a comprehensive account, (d) using triangulation strategies, (e) searching for negative evidence, and (f) linking findings to a theoretical framework.
Transferability
Fourth, transferability (external validity or “fittingness”) speaks to whether the findings of your study have larger import and application to other settings or studies. This includes a discussion of generalizability. Sample-to-population generalizability is important to quantitative researchers but less helpful to qualitative researchers, who seek an analytic or case-to-case transfer ( Miles et al., 2014 ). Nonetheless, transferability can be aided by (a) describing the characteristics of the participants fully so that comparisons with other groups may be made, (b) adequately describing potential threats to generalizability in the sample and setting sections, (c) using theoretical sampling, (d) presenting findings that are congruent with theory, and (e) suggesting ways that findings from your study could be tested further by other researchers.
Application
Finally, Miles et al. (2014) speak to the utilization, application, or action orientation of the data. “Even if we know that a study’s findings are valid and transferable,” they write, “we still need to know what the study does for its participants and its consumers” ( Miles et al., 2014 , p. 314). To address application, findings of qualitative descriptive studies are typically made accessible to potential consumers of information through the publication of manuscripts, poster presentations, and summary reports written for consumers. In addition, qualitative descriptive study findings may stimulate further research, promote policy discussions, or suggest actual changes to a product or environment.
Implications for Practice
The qualitative description clarified and advocated by Sandelowski (2000 , 2010 ) is an excellent methodological choice for the designer of healthcare environments, the practitioner, or the health sciences researcher because it provides rich descriptive content from the subjects’ perspective. Qualitative description allows the investigator to select from any number of theoretical frameworks, sampling strategies, and data collection techniques. The various content analysis strategies described in this paper introduce the investigator to methods for data analysis that promote staying “close” to the data, thereby avoiding high-inference techniques that are likely to challenge the novice investigator. Finally, the devotion to thick description (interpretation based on data) and the flexibility in re-presenting study findings are likely to produce meaningful information for designers and healthcare leaders. The practical, step-by-step nature of this article should serve as a starting guide for researchers interested in this technique as a way to answer their own burning questions.
Acknowledgments
The author would like to recognize the other members of her dissertation committee for their contributions to the study: Gerri Lamb, Karen Dorman Marek, and Robert Greenes.
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Research assistance for data analysis and manuscript development was supported by training funds from the National Institutes of Health/National Institute on Nursing Research (NIH/NINR), award T32 1T32NR012718-01 Transdisciplinary Training in Health Disparities Science (C. Keller, P.I.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the NINR. This research was supported through the Hartford Center of Gerontological Nursing Excellence at Arizona State University College of Nursing & Health Innovation.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). Thousand Oaks, CA: Sage.
- Creswell, J. (2013). Qualitative inquiry and research design: Choosing among five approaches (3rd ed.). Los Angeles, CA: Sage.
- Creswell, J., & Plano-Clark, V. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.
- Dedoose. (2015). Dedoose (Version 6.1.18) [Web application for managing, analyzing, and presenting qualitative and mixed method research data]. Los Angeles, CA: SocioCultural Research Consultants, LLC. Retrieved from www.dedoose.com
- Denzin, N., & Lincoln, Y. (1994). The handbook of qualitative research. New York, NY: Sage.
- Denzin, N., Lincoln, Y., & Giardina, M. (2006). Disciplining qualitative research. International Journal of Qualitative Studies in Education, 19, 769–782.
- Evans, B. C., Belyea, M. J., Coon, D. W., & Ume, E. (2012). Activities of daily living in Mexican American caregivers: The key to continuing informal care. Journal of Family Nursing, 18, 439–466. doi: 10.1177/1074840712450210
- Evans, B. C., Belyea, M. J., & Ume, E. (2011). Mexican-American males providing personal care for their mothers. Hispanic Journal of Behavioral Sciences, 33, 234–260.
- Evans, B., Coon, D., & Belyea, M. (2006). Worry among Mexican American caregivers of community-dwelling elders. Public Health Nursing, 23(3), 284–291. doi: 10.1111/j.1525-1446.2006.230312.x
- Evans, B. C., Coon, D. W., & Ume, E. (2011). Use of theoretical frameworks as a pragmatic guide for mixed methods studies: A methodological necessity? Journal of Mixed Methods Research, 5, 276–292.
- Fonteyn, M. E., Vettese, M., Lancaster, D. R., & Bauer-Wu, S. (2008). Developing a codebook to guide content analysis of expressive writing transcripts. Applied Nursing Research, 21, 165–168. doi: 10.1016/j.apnr.2006.08.005
- Hsieh, H., & Shannon, S. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15, 1277–1288.
- Jiggins Colorafi, K. (2015). Patient centered health information technology: Engagement with the plan of care among older adults with multi-morbidities. Retrieved from ProQuest.
- Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. New York, NY: Sage.
- Miles, M., Huberman, M., & Saldana, J. (2014). Qualitative data analysis: A methods sourcebook (3rd ed.). Thousand Oaks, CA: Sage.
- Munhall, P., & Chenail, R. (2008). Qualitative research proposals and reports: A guide (3rd ed.). Boston, MA: Jones and Bartlett.
- Neergaard, M. A., Olesen, F., Andersen, R. S., & Sondergaard, J. (2009). Qualitative description - the poor cousin of health research? BMC Medical Research Methodology, 9, 52. doi: 10.1186/1471-2288-9-52
- Sandelowski, M. (2000). Whatever happened to qualitative description? Research in Nursing & Health, 23, 334–340.
- Sandelowski, M. (2001). Focus on research methods. Real qualitative researchers do not count: The use of numbers in qualitative research. Research in Nursing & Health, 24, 230–240.
- Sandelowski, M. (2004). Counting cats in Zanzibar. Research in Nursing & Health, 27, 215–216.
- Sandelowski, M. (2010). What’s in a name? Qualitative description revisited. Research in Nursing & Health, 33, 77–84. doi: 10.1002/nur.20362
- Sandelowski, M., & Leeman, J. (2012). Writing usable qualitative health research findings. Qualitative Health Research, 22, 1404–1413.
- Spradley, J. (1980). Participant observation. Belmont, CA: Wadsworth Cengage Learning.
- Stake, R. (2010). Qualitative research: Studying how things work. New York, NY: The Guilford Press.
- Wolcott, H. (2009). Writing up qualitative research (3rd ed.). Los Angeles, CA: Sage.