GEOG 30323: Data Analysis & Visualization
Assignment 8: Data visualization
As we’ve discussed in class, there are many different ways you can visualize data! You’ve learned several techniques for data visualization in this class thus far. This assignment will focus explicitly on data visualization, with a greater emphasis on plot customization.
The dataset we’ll be using in this assignment is the popular “Baby names” dataset from the Social Security Administration, available at http://ssa.gov/oact/babynames/limits.html. We’ll be using a pre-processed dataset available in the R package babynames, which is a long-form table of baby names from 1880 to 2017. Download the dataset from TCU Online and upload it to Colab or your Drive. Next, import the necessary libraries for this assignment, then read in the dataset and take a quick look:
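A minimal sketch of that step is below; the filename babynames.csv is an assumption, so adjust it to match the file you uploaded.

```python
import pandas as pd
import seaborn as sns

# Read in the baby names data (filename is hypothetical -- match your upload)
df = pd.read_csv('babynames.csv')

# Take a quick look at the first few rows
df.head()
```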
The data frame has the following columns: year, which is the year the baby was born; sex, the sex of the baby; name, the name of the baby; n, the number of babies born with that name for that sex in that year; and prop, the proportion of babies of that sex in that year with that name. As you can see, over 7 percent of female babies in 1880 were given the name Mary! Now let’s take a look at the size of our data frame.
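One quick way to check, continuing with the df data frame from above:

```python
# Number of rows and columns in the data frame
df.shape
```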
Our data frame has 1.92 million rows! As such, this isn’t a dataset that you could reasonably deal with manually. Also: Excel worksheets cannot handle data of this size, as they have a row limit of 1,048,576 rows, which takes us only up to around 1989. This is not a dataset that is “big” by standard definitions, as it is only about 49 MB in size given the small number of columns. However, it is much better suited to a computational approach to data analysis like Python/pandas.
Granted, with 1.9 million rows in our dataset, we’ll need to carefully consider our research questions and how they can help us cut down the size of our dataset. In this notebook, I’d like you to get experience with three skills in Python plot customization:
Modifying chart properties
Annotation/labeling
Small multiples
To do this, we are going to focus on three topics:
What were the most popular names in 2017 (the last year in the dataset), and how did their popularity change over the past 10 years?
How does the release of Disney princess movies influence the popularity of baby names?
How have various gender-neutral names shifted in popularity between male & female over time?
You’ll then get a chance to do some of this on your own at the end of the assignment.
Question 1: What were the most popular names in 2017, and how did their popularity change over the past 10 years?
To get started with this question, we need to do some subsetting, which you are very familiar with by now. Let’s look specifically at males for this first question. First and foremost, however, we need to figure out the most popular male baby names in 2017. A few pandas/Python methods that you’ve learned in previous assignments can get this done.
Notice what we are doing here: you can think of the line of code as a chain of methods in which we manipulate the df data frame in turn (sketched in the code after this list).
First, we subset the data frame for only those male records in 2017;
Then, we sort the data frame in descending order by count;
Then, we slice the data frame to get back the top 15 rows;
Finally, we ask pandas to generate a list of names from our subsetted and sorted data frame.
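Here is a sketch of that chain; the variable name top15 is my own choice.

```python
# Subset to 2017 males, sort by count, keep the top 15, and extract the names
top15 = (df[(df['sex'] == 'M') & (df['year'] == 2017)]
         .sort_values('n', ascending=False)
         .head(15)['name']
         .tolist())

top15
```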
pandas returns a Python list of the top 15 baby names in 2017 for boys. We can then pass this list to the .isin() method to get back entries for all of those names since 2000, and calculate their frequency per 1000 records in the dataset.
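One plausible way to do this is sketched below, assuming the per-1000 figure can be obtained by scaling the prop column; the names top_since_2000 and per_1000 are mine.

```python
# Keep male records since 2000 for the top-15 names
top_since_2000 = df[df['name'].isin(top15) &
                    (df['sex'] == 'M') &
                    (df['year'] >= 2000)].copy()

# Express the proportion as a frequency per 1000 male births in each year
top_since_2000['per_1000'] = top_since_2000['prop'] * 1000
```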
We are just about ready to visualize the data now. There are multiple ways these data could be visualized; in this instance, we’ll use a heatmap, which we discussed in class. A heatmap is a grid of cells in which the shading of each cell is proportional to its value. Generally, darker cells represent a greater value. When applied to temporal data, it can be an effective way to show the variation of values for multiple data series over time.
Heatmaps in seaborn take a wide-format data frame with the y-values in the index, the x-values as the columns, and the data values in the cells. We will use the .pivot() method to reshape our data and produce this type of data frame, then pass it to the heatmap() function.
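For example, with the data frame built above:

```python
# Names down the rows, years across the columns, per-1000 values in the cells
wide = top_since_2000.pivot(index='name', columns='year', values='per_1000')

sns.heatmap(wide)
```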
The plot looks nice by default; we can see some trends such as the ascension of Liam, Aiden, and Noah and the relative descent of Michael and Jacob (although both of those names are still in the top 15, of course). However, you may still want to customize your chart.
seaborn plots have many plot customization options built in; you’ll learn how to use a few later in the assignment. seaborn plots, however, are also matplotlib objects; matplotlib is the core plotting library in Python. In turn, you can use the wealth of functions available in matplotlib to modify your seaborn plots. You’ll learn a few of those methods in this assignment.
Note the code below and what we are doing. We’ll import the pyplot module from matplotlib in the standard way as plt. pyplot gives us access to many different plot customization functions. We can set the figure size before calling the plotting function, then rotate the x-tick labels, remove the axis labels, and add a title to our chart. Also, notice the arguments passed to sns.heatmap(). The annot parameter allows us to annotate the heatmap with data values, and the cmap parameter allows us to adjust the colors. It accepts all ColorBrewer palettes as well as the built-in matplotlib palettes.
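A sketch of those customizations follows; the figure size, palette, and title text are my own choices rather than required values.

```python
import matplotlib.pyplot as plt

# Set the figure size before calling the plotting function
plt.figure(figsize=(12, 8))

# annot labels each cell with its value; cmap sets the color palette
sns.heatmap(wide, annot=True, fmt='.1f', cmap='Purples')

# Rotate the x-tick labels, remove the axis labels, and add a title
plt.xticks(rotation=45)
plt.xlabel('')
plt.ylabel('')
plt.title('Top male baby names per 1,000 births, 2000-2017')
plt.show()
```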
Question 2: How does the release of Disney movies influence the popularity of baby names?
Baby names can sometimes be responsive to trends in popular culture. For example, “Daenerys” showed up in the dataset for the first time in 2012, and 82 baby girls were named Daenerys in 2015! In this exercise, we’ll examine how the release of Disney Princess movies relates to baby names.
Let’s examine trends in female baby names since 1980 for four Disney Princess names: Jasmine, Ariel, Elsa, and Tiana.
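A possible subset, using the same filtering pattern as before (princess_df is my name for the result):

```python
# Female records for the four princess names, 1980 onward
princesses = ['Jasmine', 'Ariel', 'Elsa', 'Tiana']
princess_df = df[df['name'].isin(princesses) &
                 (df['sex'] == 'F') &
                 (df['year'] >= 1980)]

# Peek at the earliest years in the subset
princess_df.sort_values('year').head(10)
```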
Clearly, Jasmine was a popular name in the early 1980s prior to the release of Aladdin. Tiana, Ariel, and Elsa, however, were not as popular. So how did their popularity shift over time?
We’ll make a line chart using the lineplot() function in seaborn. sns.lineplot() takes a long-form data frame like our babynames data frame along with a mapping of x and y values for a given dataset. The hue argument, if specified, will divide up the data into groups within a given column and plot a separate line, with different colors, for each group.
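For example, with the subset built above:

```python
# One colored line per name, with year on the x-axis and count on the y-axis
sns.lineplot(data=princess_df, x='year', y='n', hue='name')
```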
We can start to get a sense here of some “spikes” in the data - for example, a clear spike in babies named Ariel is evident after 1989, which is when The Little Mermaid was released. We can also note small spikes for Tiana and Elsa after the release of their respective movies.
However - how can we express this on the chart in clearer terms? One way to accomplish this is through annotation , which refers to the placement of text on the plot to highlight particular data points. Before doing this, let’s figure out approximately what the values are for each princess name when its movie was released:
I accomplished this with a bit of new Python code. I’ve mentioned the dict before: a type of Python object enclosed in curly braces ({}) that can hold key-value pairs. The key comes before the colon, the value comes after the colon, and each element of the dictionary is separated by a comma.
In this case, our dictionary holds the name of the Disney princess, and the year that the corresponding film was released. Dictionaries can be iterated through with for and the .items() method; in this example, princess represents the key in the loop, and year represents the value. Within the loop, we can first create a princess and year-specific subset of our data frame, then extract the corresponding value from it.
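A sketch of that loop is below. The release years are the films’ actual release dates, but the variable names (and printing the result) are my own choices.

```python
# Film release year for each princess name
releases = {'Ariel': 1989, 'Jasmine': 1992, 'Tiana': 2009, 'Elsa': 2013}

for princess, year in releases.items():
    # Single row for this name in its release year
    row = princess_df[(princess_df['name'] == princess) &
                      (princess_df['year'] == year)]
    if not row.empty:
        print(princess, year, row['n'].values[0])
```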
Spend some time reading through the plt.annotate() code below so that you can understand it. We’re using a number of parameters here:
The annotation text is the first argument. Python will interpret the string ‘\n’ as a line break, which allows us to put the text on multiple lines.
The xy parameter refers to the data coordinates of the point we want to annotate, given that we’ve specified this with the 'data' argument supplied to the xycoords parameter. We’ll use the year of the film release for the X value, and the data values we obtained above (approximately) for the Y value.
In this case, however, we don’t want to put the text right on top of the lines themselves; as such, we can specify an offset and connect our text to the data point with an arrow. We use the xytext and textcoords parameters to do this; have a look at the plot and see where this puts the text. The arguments supplied to arrowprops, which take the form of a dict, govern the appearance of the arrow.
Annotation often takes iteration and patience to get it right. Try changing some of the arguments in the plt.annotate() calls below and see how the text and arrows move around!
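If the original code cell isn’t in front of you, here is a minimal stand-in showing one such annotation; the xy and xytext coordinates are placeholders, not the actual values from the data.

```python
plt.figure(figsize=(10, 6))
sns.lineplot(data=princess_df, x='year', y='n', hue='name')

# Annotate the Ariel spike after The Little Mermaid (1989);
# replace the placeholder y-values with the numbers found above
plt.annotate('The Little Mermaid\nreleased',
             xy=(1989, 3000), xycoords='data',
             xytext=(1994, 4000), textcoords='data',
             arrowprops=dict(arrowstyle='->', color='gray'))
plt.show()
```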
Question 3: How have gender-neutral names shifted in popularity between male and female over time?
For the third and final question, we’ll be looking at how four gender-neutral names have shifted between male and female over time. Let’s produce a new data frame from our original data frame that subsets for four popular gender-neutral names: Jordan, Riley, Peyton, and Taylor. We’ll take rows for years 1960 and later, and fill NaN values with 0.
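One way to build that subset (neutral_df is my name for it):

```python
# Four gender-neutral names, 1960 onward, with any NaN counts set to 0
neutral = ['Jordan', 'Riley', 'Peyton', 'Taylor']
neutral_df = df[df['name'].isin(neutral) & (df['year'] >= 1960)].fillna(0)
```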
In Assignment 6, you learned how to make faceted plots with the catplot() function, which is appropriate for charts that have a categorical axis. The companion relplot() function can be used for plots with two continuous axes, such as scatterplots or line plots. Let’s try plotting faceted line charts that show how counts for these names vary by gender over time:
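For example:

```python
# One facet per name, with a separate line (and color) for each sex
sns.relplot(data=neutral_df, x='year', y='n', hue='sex',
            col='name', kind='line')
```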
We can start to get a sense of some of the variations here; Taylor is more popular among girls than boys, whereas the opposite is true for Jordan. Let’s make a few modifications to the plot to improve its clarity. We will add a col_wrap argument to specify how many columns to create in our plot grid. We can also change the colors with the argument supplied to palette , and we can specify a height argument to modify the plot size.
Additionally, plot objects themselves have methods that you can use to modify the chart appearance; we’ll use .set_axis_labels() to improve the look of our axes, and we can modify the title of the legend as well.
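A sketch of those modifications is below; the palette, height, and label text are my choices, and the legend object is reached here through the grid’s _legend attribute, which may differ slightly across seaborn versions.

```python
g = sns.relplot(data=neutral_df, x='year', y='n', hue='sex',
                col='name', col_wrap=2, kind='line',
                palette='Set1', height=4)

# Nicer axis labels on every facet
g.set_axis_labels('Year', 'Number of babies')

# Re-title the legend
g._legend.set_title('Sex')
```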
Exercises
To get credit for this assignment, you are going to apply what you’ve learned to some additional tasks using the baby names dataset. Some of this will involve reproducing some of the analyses in the notebook, but for different prompts.
Exercise 1: Re-create the heatmap from Question 1, but this time for females. What trends do you observe?
Exercise 2: Create a line chart that shows how a name of your choice has varied in popularity over time. Find out the year when your chosen name peaked in popularity, and annotate your chart to show where this peak is located on the line.
Exercise 3: In Question 2, we looked at the possible influence of Disney princess movies on female baby names. Pick four other names (male or female) from popular culture over the past 30 years and produce a chart that illustrates their influence (or lack thereof) on baby names. Be strategic with your name decisions! You can create a single line chart with four series, or a small multiples chart with facets - pick the one you think is ideal!
6.894: Interactive Data Visualization
Assignment 2: Exploratory data analysis
In this assignment, you will identify a dataset of interest and perform an exploratory analysis to better understand the shape & structure of the data, investigate initial questions, and develop preliminary insights & hypotheses. Your final submission will take the form of a report consisting of captioned visualizations that convey key insights gained during your analysis.
Step 1: Data Selection
First, you will pick a topic area of interest to you and find a dataset that can provide insights into that topic. To streamline the assignment, we've pre-selected a number of datasets for you to choose from.
However, if you would like to investigate a different topic and dataset, you are free to do so. If working with a self-selected dataset, please check with the course staff to ensure it is appropriate for the course. Be advised that data collection and preparation (also known as data wrangling) can be a very tedious and time-consuming process. Be sure you have sufficient time to conduct exploratory analysis after preparing the data.
After selecting a topic and dataset – but prior to analysis – you should write down an initial set of at least three questions you'd like to investigate.
Step 2: Exploratory Visual Analysis
Next, you will perform an exploratory analysis of your dataset using a visualization tool such as Tableau. You should consider two different phases of exploration.
In the first phase, you should seek to gain an overview of the shape & structure of your dataset. What variables does the dataset contain? How are they distributed? Are there any notable data quality issues? Are there any surprising relationships among the variables? Be sure to also perform "sanity checks" for patterns you expect to see!
In the second phase, you should investigate your initial questions, as well as any new questions that arise during your exploration. For each question, start by creating a visualization that might provide a useful answer. Then refine the visualization (by adding additional variables, changing sorting or axis scales, filtering or subsetting data, etc.) to develop better perspectives, explore unexpected observations, or sanity check your assumptions. You should repeat this process for each of your questions, but feel free to revise your questions or branch off to explore new questions if the data warrants.
Final Deliverable
Your final submission should take the form of a Google Docs report – similar to a slide show or comic book – that consists of 10 or more captioned visualizations detailing your most important insights. Your "insights" can include important surprises or issues (such as data quality problems affecting your analysis) as well as responses to your analysis questions. To help you gauge the scope of this assignment, see this example report analyzing data about motion pictures. We've annotated and graded this example to help you calibrate for the breadth and depth of exploration we're looking for.
Each visualization image should be a screenshot exported from a visualization tool, accompanied with a title and descriptive caption (1-4 sentences long) describing the insight(s) learned from that view. Provide sufficient detail for each caption such that anyone could read through your report and understand what you've learned. You are free, but not required, to annotate your images to draw attention to specific features of the data. You may perform highlighting within the visualization tool itself, or draw annotations on the exported image. To easily export images from Tableau, use the Worksheet > Export > Image... menu item.
The end of your report should include a brief summary of main lessons learned.
Recommended Data Sources
To get up and running quickly with this assignment, we recommend exploring one of the following provided datasets:
World Bank Indicators, 1960–2017. The World Bank has tracked global human development through indicators such as climate change, economy, education, environment, gender equality, health, and science and technology since 1960. The linked repository contains indicators that have been formatted to facilitate use with Tableau and other data visualization tools. However, you're also welcome to browse and use the original data by indicator or by country. Click on an indicator category or country to download the CSV file.
Chicago Crimes, 2001–present (click Export to download a CSV file). This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system.
Daily Weather in the U.S., 2017. This dataset contains daily U.S. weather measurements in 2017, provided by the NOAA Daily Global Historical Climatology Network. This data has been transformed: some weather stations with only sparse measurements have been filtered out. See the accompanying weather.txt for descriptions of each column.
Social mobility in the U.S. Raj Chetty's group at Harvard studies the factors that contribute to (or hinder) upward mobility in the United States (i.e., will our children earn more than we will). Their work has been extensively featured in The New York Times. This page lists data from all of their papers, broken down by geographic level or by topic. We recommend downloading data in the CSV/Excel format, and encourage you to consider joining multiple datasets from the same paper (under the same heading on the page) for a sufficiently rich exploratory process.
The Yelp Open Dataset provides information about businesses, user reviews, and more from Yelp's database. The data is split into separate files (business, checkin, photos, review, tip, and user), and is available in either JSON or SQL format. You might use this to investigate the distributions of scores on Yelp, look at how many reviews users typically leave, or look for regional trends about restaurants. Note that this is a large, structured dataset and you don't need to look at all of the data to answer interesting questions. In order to download the data you will need to enter your email and agree to Yelp's Dataset License.
Additional Data Sources
If you want to investigate datasets other than those recommended above, here are some possible sources to consider. You are also free to use data from a source different from those included here. If you have any questions on whether your dataset is appropriate, please ask the course staff ASAP!
- data.boston.gov - City of Boston Open Data
- MassData - State of Massachusetts Open Data
- data.gov - U.S. Government Open Datasets
- U.S. Census Bureau - Census Datasets
- IPUMS.org - Integrated Census & Survey Data from around the World
- Federal Elections Commission - Campaign Finance & Expenditures
- Federal Aviation Administration - FAA Data & Research
- fivethirtyeight.com - Data and Code behind the Stories and Interactives
- Buzzfeed News
- Socrata Open Data
- 17 places to find datasets for data science projects
Visualization Tools
You are free to use one or more visualization tools in this assignment. However, in the interest of time and for a friendlier learning curve, we strongly encourage you to use Tableau . Tableau provides a graphical interface focused on the task of visual data exploration. You will (with rare exceptions) be able to complete an initial data exploration more quickly and comprehensively than with a programming-based tool.
- Tableau - Desktop visual analysis software. Available for both Windows and MacOS; register for a free student license.
- Data Transforms in Vega-Lite. A tutorial on the various built-in data transformation operators available in Vega-Lite.
- Data Voyager, a research prototype from the UW Interactive Data Lab, combines a Tableau-style interface with visualization recommendations. Use at your own risk!
- R, using the ggplot2 library or with R's built-in plotting functions.
- Jupyter Notebooks (Python), using libraries such as Altair or Matplotlib.
Data Wrangling Tools
The data you choose may require reformatting, transformation or cleaning prior to visualization. Here are tools you can use for data preparation. We recommend first trying to import and process your data in the same tool you intend to use for visualization. If that fails, pick the most appropriate option among the tools below. Contact the course staff if you are unsure what might be the best option for your data!
Graphical Tools
- Tableau Prep - Tableau provides basic facilities for data import, transformation & blending. Tableau Prep is a more sophisticated data preparation tool.
- Trifacta Wrangler - Interactive tool for data transformation & visual profiling.
- OpenRefine - A free, open source tool for working with messy data.
Programming Tools
- JavaScript data utilities and/or the Datalib JS library.
- Pandas - Data table and manipulation utilities for Python.
- dplyr - A library for data manipulation in R.
- Or, the programming language and tools of your choice...
The assignment score is out of a maximum of 10 points. Submissions that squarely meet the requirements will receive a score of 8. We will determine scores by judging the breadth and depth of your analysis, whether visualizations meet the expressiveness and effectiveness principles, and how well-written and synthesized your insights are.
We will use the following rubric to grade your assignment. Note that rubric cells may not map exactly to specific point scores.
Submission Details
This is an individual assignment. You may not work in groups.
Your completed exploratory analysis report is due by noon on Wednesday 2/19 . Submit a link to your Google Doc report using this submission form . Please double check your link to ensure it is viewable by others (e.g., try it in an incognito window).
Resubmissions. Resubmissions will be regraded by teaching staff, and you may earn back up to 50% of the points lost in the original submission. To resubmit this assignment, please use this form and follow the same submission process described above. Include a short 1 paragraph description summarizing the changes from the initial submission. Resubmissions without this summary will not be regraded. Resubmissions will be due by 11:59pm on Saturday, 3/14. Slack days may not be applied to extend the resubmission deadline. The teaching staff will only begin to regrade assignments once the Final Project phase begins, so please be patient.