
Hip Hop’s 2023 Heavyweights

With over 15 million listeners, Spotify’s RapCaviar has been called “the most influential playlist in music.” RapCaviar is curated by Spotify’s editorial team and updated daily to represent the latest and greatest hip-hop and rap tracks.

For the last year, I’ve saved a daily snapshot of the playlist using the Spotify API to empirically determine the biggest rappers in hip hop today. In this post, we’ll use hard data to approximate influence, hustle, and longevity for rap’s biggest names during 2023.

Methodology

To collect the data, I scheduled a Python script to run daily to (1) hit Spotify’s API to collect the RapCaviar track list and (2) save the resulting data frame as a .csv file to an S3 bucket. After pulling down and combining all daily files from S3 using an R script, the tidied dataset contains 11 fields:

Field Name | Sample Value
Playlist Id | 37i9dQZF1DX0XUsuxWHRQd
Playlist Name | RapCaviar
Track Playlist Position | 2
Track Name | Mad Max
Track Id | 2i2qDe3dnTl6maUE31FO7c
Track Release Date | 2022-12-16
Track Added At | 2022-12-30
Artist Track Position | 1
Artist Name | Lil Durk
Artist Id | 3hcs9uc56yIGFCSy9leWe7
Date | 2023-01-02
The value for “artist track position” helps distinguish track owners from featured artists. For example, both Lil Durk and Future appear on the track Mad Max. Since it’s Lil Durk’s song and Future is the feature, two rows exist in the dataset, with “artist track position” set to 1 (Lil Durk) and 2 (Future).

After cleaning and de-duplication, the dataset contains 469 total tracks with 271 distinct artists represented across 351 distinct playlist snapshots from January 1 to December 27, 2023.
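For reference, the R side of that pipeline (combining the daily snapshots) boils down to a few lines. Here’s a minimal sketch, assuming the daily .csv files have already been pulled from S3 into a local folder and that their columns match the fields above; the folder path and data frame name are placeholders:

library(tidyverse)

# Each file is one daily snapshot of the playlist
snapshot_files <- list.files("data/snapshots", pattern = "\\.csv$", full.names = TRUE)

rapcaviar <- snapshot_files %>%
  map_dfr(read_csv, show_col_types = FALSE) %>%   # read and stack every daily file
  distinct()                                      # drop exact duplicates from re-runs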

Metrics

Influence

Let’s start with influence: what percent of available days was a given artist represented on the playlist? For example, if an artist appeared in 50 of the 351 possible daily snapshots, their “influence” score would be 14.2%.
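In code, influence is just a grouped count of distinct snapshot dates per artist. A rough dplyr sketch, using the rapcaviar data frame assumed in the sketch above (artist_name and date are assumed column names):

n_snapshots <- n_distinct(rapcaviar$date)   # 351 daily snapshots in 2023

influence <- rapcaviar %>%
  group_by(artist_name) %>%
  summarise(days_represented = n_distinct(date), .groups = "drop") %>%
  mutate(pct_of_available = days_represented / n_snapshots) %>%
  arrange(desc(days_represented))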

Here are the top ten rappers ranked by this influence metric, for 2023:

Name | Days Represented | Percent of Available
Drake | 351 | 100%
Future | 351 | 100%
Gucci Mane | 351 | 100%
Travis Scott | 351 | 100%
21 Savage | 344 | 98%
Kodak Black | 330 | 94%
Yeat | 328 | 93%
Latto | 311 | 89%
Lil Uzi Vert | 309 | 88%
Quavo | 302 | 86%

Impressively, four artists wielded enough influence to maintain a presence on the playlist every day of the year: Drake, Future, Gucci Mane, and Travis Scott. Here’s a visual representation of their dominant year:

Each colored line represents a unique track. With the y-axis reversed, the chart shows how new tracks enter the playlist positioned near the top and then descend over time. The biggest surprise to me is Gucci Mane, who managed to maintain his presence on the playlist via 14 distinct tracks released throughout the year:

The hustle shown here reminds me of my favorite Lil Wayne clip of all time.

Notably, 21 Savage was only a week short of full coverage, coming in at 98%.

Looking at the distribution of influence scores for all artists appearing at least once during the year, 38 (14%) were present in the RapCaviar playlist more than half of the year:

Density

It’s one thing for an artist to have one of their tracks represented on RapCaviar, but the heavyweights often have several at once. “Density” is calculated as a distinct count of tracks by artist and day.
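In code, density is one grouped summary away. A sketch, with the same assumed rapcaviar data frame (artist_name, track_id, and date columns):

density <- rapcaviar %>%
  group_by(artist_name, date) %>%
  summarise(n_tracks = n_distinct(track_id), .groups = "drop")

# Each artist's single best day on the playlist
peak_density <- density %>%
  group_by(artist_name) %>%
  slice_max(n_tracks, n = 1, with_ties = FALSE) %>%
  arrange(desc(n_tracks))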

The highest density score for 2023 was 6, a score achieved by just four rappers:

Density | Artist | Dates
6 tracks | 21 Savage | Jun 23 – Jul 13 (21 days)
6 tracks | Lil Wayne | Nov 10 – 16 (7 days)
6 tracks | Drake | Oct 7 – 12 (6 days)
6 tracks | Travis Scott | Aug 3 (1 day)

Most impressive is 21 Savage’s dominant 21-day, 6-track run over the summer, preceded by a 20-day, 5-track run. Notably, during the 6-track spree, all six were features or joint tracks:

  1. Pull Up (feat. 21 Savage)
  2. Wit Da Racks (feat. 21 Savage, Travis Scott & Yak Gotti)
  3. Peaches & Eggplants (feat. 21 Savage)
  4. 06 Gucci (feat. DaBaby & 21 Savage)
  5. War Bout It (feat. 21 Savage)
  6. Spin Bout U (with Drake)

Contributing to more than 10% of the playlist’s track count simultaneously is truly impressive (RapCaviar usually has 50 tracks total); rap’s heavyweights are dense.

Longevity

Finally, let’s consider longevity: how long an artist’s tracks remain on the playlist. Here are the top ten songs by lifespan on the RapCaviar track list during 2023:

Track | Artist | Days | First Day | Last Day
f*kumean | Gunna | 179 | Jun 19 | Dec 14
Turn Yo Clic Up | Quavo | 167 | Jul 14 | Dec 27
Search & Rescue | Drake | 161 | Apr 7 | Sep 15
500lbs | Lil Tecca | 159 | Jul 21 | Dec 27
I KNOW ? | Travis Scott | 153 | Jul 28 | Dec 27
Paint The Town Red | Doja Cat | 146 | Aug 4 | Dec 27
MELTDOWN | Travis Scott | 143 | Aug 1 | Dec 21
Private Landing | Don Toliver | 136 | Feb 24 | Jul 13
Superhero | Metro Boomin | 135 | Jan 2 | May 25
All My Life | Lil Durk | 133 | May 12 | Sep 21

Importantly, four of the tracks in the top ten were still active at the end of the data window (those with a last day of Dec 27 above), so there’s a decent chance Turn Yo Clic Up could outlive f*kumean. Speaking of which, Gunna’s first top-ten solo single managed to spend almost six months on RapCaviar, complete with a position surge in mid-August:

Zooming in, here’s the position history for all of those top ten tracks:

Most of the time, a track will debut on the playlist and then fade out over time, sinking deeper in the set list before falling off. Good examples are All My Life, Private Landing, and Search & Rescue. Hits like 500lbs and Paint The Town Red are more anomalous, with momentum building within the playlist over time.
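For reference, the per-track lifespan figures above reduce to a grouped min/max over the daily snapshots. A sketch, again using the assumed rapcaviar data frame:

longevity <- rapcaviar %>%
  group_by(track_id, track_name) %>%
  summarise(
    first_day = min(date),
    last_day  = max(date),
    days      = n_distinct(date),   # days actually present on the playlist
    .groups   = "drop"
  ) %>%
  arrange(desc(days))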

To close this metric out, let’s look at the top ten rappers with the highest average longevity per track, for those artists with three or more distinct tracks ever appearing on the playlist during the year:

Name | Median Longevity (days) | Average Longevity (days) | Track Count
Gunna | 116 | 103 | 4
Metro Boomin | 92 | 96 | 6
Ice Spice | 96 | 72 | 4
Latto | 75 | 74 | 4
Moneybagg Yo | 75 | 69 | 6
Lil Uzi Vert | 68 | 64 | 8
Don Toliver | 41 | 64 | 5
Toosii | 79 | 63 | 3
Key Glock | 61 | 62 | 4
Sexyy Red | 63 | 62 | 4

Conclusion

The influence and density metrics point toward the same heavyweights: 21 Savage, Drake, and Travis Scott. This is intuitive since the two metrics are correlated. The longevity metric shines the spotlight on a different subset of rappers, like Gunna, Metro Boomin, and Ice Spice.

Either way, it was a great year for rap. Thanks for reading!

GitHub Actions for Data Analysts

Web scraping is a useful tool for data practitioners, to state the obvious. Often, scraping is most valuable when performed on a scheduled basis, to incorporate new or refreshed values into the dataset.

In the past, I’ve paid a (small) monthly fee to PythonAnywhere to run scraping jobs. However, there’s a better, free alternative offered by a familiar platform: GitHub Actions. While GitHub Actions is largely designed for code deployment automation (testing pull requests, deploying merged pull requests to production) it can also be used to run jobs, including web scraping jobs.

This post walks through the implementation of a simple GitHub Action that scrapes the headline mortgage rates posted on Freddie Mac’s home page each day.

Setup

To get started, create a directory called .github/workflows in your repository. Within the .github/workflows directory, create a .yml file. This will contain the details of the action workflow.

The .yml file structure has two basic parts: on: specifies when the job should run, and jobs: defines what steps should be taken.

This action has been scheduled to run daily:

on:
  workflow_dispatch:      # allows the workflow to be triggered manually from the Actions tab
  schedule:
    - cron: '0 8 * * *'   # runs once a day at 08:00 UTC

Copy the scraper.yml and modify it as needed for your use case. Update the .py script and requirements.txt file in the root directory accordingly. This tutorial and the official GitHub documentation are good resources for building on this template.

Here, the Python script grabs the mortgage rates posted on Freddie Mac’s homepage and saves them to a new .csv file.
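The script in the repo is written in Python, but the scrape-and-save pattern is simple enough to sketch in R (the language used for the other examples on this blog). The URL and CSS selector below are placeholders that would need to be checked against the live page:

library(rvest)
library(readr)

page <- read_html("https://www.freddiemac.com/")   # placeholder URL

rates <- page %>%
  html_elements(".mortgage-rate") %>%              # placeholder selector
  html_text2()

# One small .csv per daily run
tibble::tibble(date = Sys.Date(), rate = rates) %>%
  write_csv(paste0("data/rates_", Sys.Date(), ".csv"))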

Over time, enough snapshots accumulate to do something meaningful with this data!

Analysis

Let’s get a quick sense of how mortgage rates have changed since the action was first configured on December 18, 2021.

This .R script reads, joins, and cleans the saved files, and creates a trend plot:
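That script isn’t reproduced here, but its shape is straightforward: read every saved .csv, stack them, and plot the rate over time. A minimal sketch, assuming files named like the ones in the scraper sketch above, with date and rate columns:

library(tidyverse)

rates <- list.files("data", pattern = "^rates_.*\\.csv$", full.names = TRUE) %>%
  map_dfr(read_csv, show_col_types = FALSE)

ggplot(rates, aes(x = date, y = rate)) +
  geom_line() +
  labs(x = NULL, y = "Rate (%)", title = "Freddie Mac headline mortgage rates")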

The takeaway? It’s clear that mortgage rates are rising rapidly from their historically low levels, propelled by the expectation of rate hikes by the Fed to counter inflation.

Thanks for following along. Check out this repo for all the components of the walkthrough.

Meetinghouses: A Proxy for Growth in the LDS Church

Background

When the Church of Jesus Christ of Latter-day Saints was organized in a small town in New York in 1830, there were only six members. Today, the Mormon Church has grown to over 16 million members, with congregations in 160 countries.

Leaders of the church have taught that this extraordinary growth is a fulfillment of Old Testament prophecy. Daniel 2:31–45 describes a stone “cut out of the mountain without hands” which would roll forth to fill the whole earth. Like the stone, the Church is prophesied to spread and fill “every nation, kindred, tongue, and people” (see D&C 42:58).

Given the ambitious scope of that prophecy, it’s no wonder that many parties, inside and outside the organization, are interested in measuring and tracking the growth of the LDS Church. While high-level membership metrics are shared bi-annually by church leadership, country- or state-specific trends are not provided. This project is an attempt to measure church growth more precisely by tracking changes in the number and distribution of meetinghouses and wards over time.

A ward is the basic organizational unit of the Church (i.e., a congregation).

Data Source

To help members or visitors locate worship services nearby, the Church provides a meetinghouse locator tool. After entering an address, the user is shown nearby meetinghouses and hours of ward worship services.

Since there are many thousands of meetinghouses owned by the Church across the world, it would be very difficult to collect meetinghouse and ward details manually. However, using the back-end web service that powers the meetinghouse locator, it’s possible to query the full list of meetinghouses, along with the ward units assigned to those meetinghouses. You can find a copy of the code used to extract and clean the data here.

Note: The meetinghouse locator is publicly available online and is not restricted to authenticated users. Consequently, the underlying meetinghouse data is presumed to be open and available for collection and analysis.

Data Structure

There are currently two data outputs: (1) a list of the ~19,000 meetinghouses owned or operated by the Church and (2) a list of the ~30,000 wards or other organizational units “assigned” to those meetinghouses. Below is a simple example to make this relationship clear:

Meetinghouses Table

id | address | city | state | country | latitude | longitude
5272017-01-01 | 6695 S 2200 W | West Jordan | Utah | USA | 40.629395 | -111.9480540

Meetinghouse Assignments Table

meetinghouse_id | assignment_id | assignment_type | assignment_name
5272017-01-01 | 125857 | ward | Colonial Park Ward
5272017-01-01 | 170534 | ward | Meadowland Ward

In other words, the first table says there’s a meetinghouse (church building) at 6695 South 2200 West in West Jordan, UT. The second table says that two wards meet in that building: the Colonial Park Ward and the Meadowland Ward.

Use Cases

The data described above represents a snapshot in time. It describes the distribution of meetinghouses and wards around the world at the moment the data is compiled, and could be used to answer these kinds of questions:

  • How many meetinghouses are there currently in Sao Paulo, Brazil?
  • How many wards are there currently in Coconut Creek, Florida?

While these are interesting questions, the bigger prize is understanding how the number of meetinghouses and wards in each country, state, or zip code is changing over time. To accomplish this, we need to capture and compare snapshots at regular intervals.

For example, by comparing the list of meetinghouses in January 2020 and June 2021, we could infer which meetinghouses are new, which have been closed, and where each is located, as sketched below. Ultimately, this should serve as a kind of (imperfect) proxy for growth or migration effects within the church. My intention is to capture monthly snapshots of this data, and then stitch them together to analyze trends in growth (or decline).
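A sketch of that comparison, assuming two snapshot data frames shaped like the meetinghouses table above (the snapshot names are placeholders):

library(dplyr)

# Present in the newer snapshot but not the older one: likely new meetinghouses
new_meetinghouses    <- anti_join(snapshot_jun_2021, snapshot_jan_2020, by = "id")

# Present in the older snapshot but not the newer one: likely closed or removed
closed_meetinghouses <- anti_join(snapshot_jan_2020, snapshot_jun_2021, by = "id")

# Net change by country
bind_rows(
  mutate(new_meetinghouses,    change =  1),
  mutate(closed_meetinghouses, change = -1)
) %>%
  count(country, wt = change, name = "net_change")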

Data Download

Visit ldsmeetinghouses.com or the GitHub repo for the latest files for download.

Have a question, suggestion, or idea? Create a new issue via GitHub here.

Minivan Wars: Visualizing Prices in the Used Car Market

With the recent birth of our second child, it was time to face a harsh reality: the impending necessity of a minivan. After trying to cope by dreaming up a list of alternative “family” cars, the truth set in: with young kids, features like sliding doors, captain’s chairs, and ample storage space can’t be beat.

Looking to get acquainted with prices in the used minivan market, I scraped 20 years’ worth of monthly average price data from CarGurus for five minivan models: Kia Sedona, Toyota Sienna, Chrysler Pacifica, Honda Odyssey, and Dodge Grand Caravan. May the best car win!

Source: motortrend.com

As one of the most visited car shopping sites in the United States, CarGurus tracks prices for millions of used car listings every year. With a bit of web scraping (using R), I compiled a dataset to visualize how car prices for used minivans have changed over time.

Here’s the result, for minivan models released between 2015 and 2019:

At first glance, my impression is that the Honda Odyssey and Toyota Sienna fall in the “premium” segment of the minivan market (You be the judge: is premium minivan an oxymoron?). On average, prices are higher compared to the Kia Sedona and Dodge Grand Caravan.

Second, I was struck by how steadily depreciation appears to occur for the Honda Odyssey. Roughly speaking, you can expect your Odyssey to depreciate by about $5k a year in the early years of ownership.

Finally, the impact of the COVID-19 pandemic and related semiconductor shortage becomes really clear in this picture. Notice the uptick in average price across the board for almost all make-model and year combinations. Because of the reduced supply of new vehicles (thanks to the semiconductor shortage), would-be buyers of new cars have moved into the used car market, driving up prices.

Bottom line, this visual helped me develop a better feel for the prices we’ll encounter in the used minivan market. You can find the script used to create the dataset here (and below), and the dataset itself here. Thanks for reading!

Exploring the Marvel Cinematic Universe in Tableau

The first Marvel Avengers movie was released right around the time I graduated from high school. In fact, I saw it in theaters during Senior Day with my graduating class. Since then, over the last 10 years, the Marvel Cinematic Universe (MCU) franchise has grown astronomically, far out-grossing other major film franchises like Star Wars and Harry Potter.

While re-watching parts of the series during paternity leave, I compiled a dataset measuring things like budget, box office sales, and Rotten Tomatoes rating for the 23 movies. Using this data, I created an interactive visual in Tableau that allows comparison of measures across the films in different orderings, like release date and in-universe chronological order.

Screenshot of the MCU dashboard

You can find the visual on Tableau Public here, and the dataset here.

The first takeaway is that these movies were (and are) big money-makers. You have to admire the way Bob Iger gathered quality intellectual property (e.g., Pixar, Marvel, Lucasfilm) under the Disney umbrella via acquisition during his 15-year tenure as CEO, creating a deep catalog of content for the Disney+ streaming service. According to the data I collected from Wikipedia, total gross box office revenue for the MCU franchise is north of $22 billion.

Second, I’m always interested in comparing critic and audience ratings on Rotten Tomatoes. While ratings were generally in sync for most films in the franchise, there were some notable exceptions. For example, the average rating for Captain Marvel among critics was 79%, compared to 45% for audiences, a 34 point difference! Sadly, there were reports of review bombing with troll comments attacking the film for perceived feminism.

Rotten Tomatoes Audience Score in ranked order

This was a light-hearted project, and a fun way to practice more advanced Tableau techniques like parameters, nested calculated fields, and custom shapes. You can explore the viz for yourself here.

How to send yourself a notification when your code is done running

I have a couple of Python scripts scheduled to run daily and hourly using PythonAnywhere. These scripts help automate tasks for me, like tracking cryptocurrency prices or sending texts to friends on their birthdays.

Sometimes, the jobs fail and the code doesn’t run. Since I’d like to know when that happens, I add a few lines of code to send myself a text when something goes wrong.

To get started, create a Twilio account. You’ll need three things from your Twilio account to make the code work: an Account SID, an auth token, and an active Twilio phone number.

Locate your account sid and auth token from the Twilio dashboard home page.

Copy the code snippet below, switching out the account sid, auth token, and sender and recipient phone numbers. Note that the phone numbers should be formatted like this: +18299321023. This is the machine-readable version of (829) 932-1023.
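The original snippet uses Twilio’s Python helper library; since the other code examples on this page are written in R, here’s the same idea sketched against Twilio’s REST API with httr. The environment variable names and phone numbers are placeholders to swap for your own values:

library(httr)

send_text <- function(body,
                      account_sid = Sys.getenv("TWILIO_ACCOUNT_SID"),
                      auth_token  = Sys.getenv("TWILIO_AUTH_TOKEN"),
                      from = "+18299321023",    # your Twilio number (placeholder)
                      to   = "+18299321024") {  # your cell number (placeholder)
  POST(
    url = paste0("https://api.twilio.com/2010-04-01/Accounts/", account_sid, "/Messages.json"),
    authenticate(account_sid, auth_token),
    body   = list(From = from, To = to, Body = body),
    encode = "form"
  )
}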

In practice, you can set up your automated scripts with a structure like this:
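Roughly speaking: wrap the job in an error handler and only send the text when something breaks. In R that’s a tryCatch (the Python original would use try/except); run_daily_job() is a placeholder for whatever the script normally does:

tryCatch(
  {
    run_daily_job()   # placeholder for the script's real work
  },
  error = function(e) {
    send_text(paste("Daily job failed:", conditionMessage(e)))
  }
)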

Pretty easy, right? Twilio’s API makes it simple to get SMS alerts in just a few lines of code.

5 simple ways to improve your digital security

When it comes to online security and privacy, it doesn’t hurt to be a bit paranoid.

Even if you follow best practices like using two-factor authentication or password management software, it’s possible that loved ones like parents, grandparents, siblings, or close friends are not doing the same. For example, chances are good you know someone who uses a single password for every account!

Hoping to provide some straightforward, actionable steps to help my family members get serious about their digital footprint, I put together a deck with five simple tips for improving digital security. Even though better security usually means less convenience, helping others get serious about managing their digital life is worth the effort.

Here are the slides. You can also download a PDF copy here.

How to scrape IMDb and analyze your favorite TV shows like a true nerd

Like many people, my wife and I relax by watching a show before going to bed at the end of the day. My preference is light-hearted comedy that doesn’t require much brainpower. Not surprisingly, frequent picks include episodes from sitcoms like The Office and Community.

Curious to see how well-liked some of my favorite shows were in their time, I scraped 818 episode ratings and descriptions from IMDb.com for my top shows: The Office, Parks & Recreation, Modern Family, Community, New Girl, and The Good Place. I used IMDb’s crowd-sourced episode ratings to plot popularity across seasons, and extracted character name counts from episode descriptions to loosely quantify character importance.

Rating Trends

IMDb lists five data elements for each episode: name, release date, average rating, number of votes, and description:

For example, here’s how episode one of season one of The Office looks.

Looking for high-level rating trends, I plotted all 800+ episode ratings by release date for all six shows in a single chart, with an overlaid bold line to emphasize the trend.

A few show-specific observations:

The Office: It’s pretty easy to spot the impact of Michael’s (Steve Carell) departure from the show at the end of the seventh season. The final season punches below average until the last three episodes, which audiences appeared to adore (9.1, 9.5, and 9.8, respectively).

Community: Something is obviously off in season four. Wikipedia notes: “The [fourth] season marked the departure of show-runner Dan Harmon and overall received mixed reviews from critics. In the fifth season, Harmon returned as show-runner, and the fourth season was referred to retroactively as ‘the gas-leak year’.”

Modern Family: There’s a clear downward trend in average rating, but the show’s longevity definitely speaks to some kind of loyal fan base.

Character Importance

A dynamic and likeable cast of characters is really a key ingredient to any sitcom; personalities like Schmidt from New Girl, Ron Swanson from Parks & Recreation, or Abed from Community keep audiences coming back for more.

As a proxy for character “importance”, I counted the number of times a character’s name appeared in the IMDb descriptions, divided by the total number of episodes.

Here’s the calculation: The Office has 188 episodes. “Michael” appeared in the episode descriptions 128 times, so his “character importance” is ~68%. The actual value doesn’t matter as much as its relative position compared to other characters.

Community and The Good Place seem to have a fairly balanced character lineup. In contrast, Parks & Recreation and The Office have an obvious “main” character (Leslie Knope and Michael Scott, respectively), with a solid cast of supporting personalities.

Keep in mind, this metric is a pretty rough proxy for character importance; a much better measure would be something like percentage of screen time or dialogue.

Code Walkthrough

Let’s start by pulling in the necessary packages.
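The exact package list isn’t preserved here, but something like the following covers the scraping and wrangling below:

library(tidyverse)   # dplyr, ggplot2, purrr, stringr, readr, tidyr
library(rvest)       # web scraping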

Next, we’ll create a tibble (tidyverse data frame) containing a list of TV shows to scrape from IMDb.

Sourcing the imdb_id is easy: just search for the show you’re interested in and pull the last component of the URL (e.g., imdb.com/title/tt1442437 for Modern Family).
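A sketch of that lookup table. Modern Family’s ID comes from the URL above; the remaining IDs are placeholders to fill in the same way:

shows <- tribble(
  ~show,           ~imdb_id,     ~n_seasons,
  "Modern Family", "tt1442437",  11,
  "The Office",    "tt0000000",   9,   # placeholder ID -- pull it from the show's IMDb URL
  "Community",     "tt0000000",   6    # ...and so on for the remaining shows
)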

Next, define a scraper function to extract the key data elements (like episode name, average rating, and description). Here we loop over the list of shows and seasons previously defined.
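The original scraper isn’t reproduced here, but a rough sketch of the approach is below. IMDb’s episode listings live at imdb.com/title/<imdb_id>/episodes?season=<n>; the CSS selectors are placeholders that would need to be matched to the current page markup:

scrape_season <- function(imdb_id, season) {
  url  <- paste0("https://www.imdb.com/title/", imdb_id, "/episodes?season=", season)
  page <- read_html(url)

  # Placeholder selectors -- inspect the live page and adjust
  tibble(
    episode      = page %>% html_elements(".episode-title")   %>% html_text2(),
    release_date = page %>% html_elements(".episode-airdate") %>% html_text2(),
    rating       = page %>% html_elements(".episode-rating")  %>% html_text2() %>% parse_number(),
    description  = page %>% html_elements(".episode-plot")    %>% html_text2()
  )
}

# Loop over every show/season combination
episodes <- shows %>%
  mutate(season = map(n_seasons, seq_len)) %>%
  unnest(season) %>%
  mutate(data = map2(imdb_id, season, scrape_season)) %>%
  unnest(data)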

After some cleaning, the data is ready to visualize. The geom_smooth function powers the overlaid bold line to emphasize the overall trend.
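The original plotting code isn’t shown on this page; here’s a minimal sketch of the idea, assuming release_date has been parsed to a Date during cleaning:

ggplot(episodes, aes(x = release_date, y = rating, color = show)) +
  geom_point(alpha = 0.4, size = 1) +          # one point per episode
  geom_smooth(se = FALSE, linewidth = 1.2) +   # the overlaid bold trend line
  labs(x = NULL, y = "IMDb rating", color = NULL)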

Code along these lines produces the ratings trend chart:

Next, I used the str_detect() function from the stringr package to count the number of times a character name appeared in the episode descriptions. For example, Michael, Dwight, and Jim would have been counted once in the description below:

Ready to finalize his deal for a new condo, Michael is away with Dwight while Jim rallies the staff together for office games.

“Office Olympics”, The Office Season 2

As described previously, the “character importance” calculation is as simple as dividing the number of times a character’s name appears in the episode descriptions by the total number of episodes in the show.
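A sketch of that count, assuming the episodes data frame from above plus a small hand-made characters table (columns show and character) listing each show’s main cast:

importance <- characters %>%
  mutate(
    mentions = map2_int(show, character, function(s, ch) {
      sum(str_detect(episodes$description[episodes$show == s], ch))
    })
  ) %>%
  left_join(count(episodes, show, name = "n_episodes"), by = "show") %>%
  mutate(importance = mentions / n_episodes) %>%
  arrange(show, desc(importance))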

Code along these lines produces the character importance chart:

That’s it. Now you can impress (or bore) your friends and family with data-driven TV sitcom trivia, like a true nerd. You can find the full code here.

The Rise of Rap: A Genre Popularity Analysis

Today it feels like rap is bigger and more mainstream than ever. A casual scan of the charts reveals that many of today’s biggest music icons are rappers. How long has it been this way? I remember a time when pop legends like Katy Perry, Lady Gaga, and Rihanna ruled the charts.

Looking for more than anecdotal evidence of the rise of rap as a genre in the mainstream music landscape, I developed a data-driven methodology to measure the high-level trend in music genre popularity over time.

Using Billboard’s Hot 100 Artist data, and mapping each artist to a genre using the Spotify API, I calculated what percent of the artists were represented within each genre over time, from 2006 to the present.

Here’s the trend:

Here’s another view with the same data, as a line chart:

It seems like the data supports my observation that rap has gone mainstream, with the percentage of rap artists in the Billboard Hot 100 growing steadily since 2014, and surpassing pop artists in 2018.

According to Rolling Stone, much of rap’s growth can be attributed to its strength on streaming services, with 92% of the genre’s total consumption coming from streaming channels. The timeline fits, with streaming giants like Apple Music launching in 2015 and Spotify reaching 40 million subscribers in 2016.

While pop and country have maintained a relatively stable level of popularity, rock appears to be trending down, with rock artists composing less than 5% of the Billboard Hot 100 artist list in 2019.

What does the future hold? As the lines between genres continue to blur, with artists like Post Malone and Lil Nas X cutting across pop, rap, country, and even rock, it stops making sense to box artists into a single genre. In the age of the playlist, it’s easier than ever to rebel against the very idea of genre.

Walkthrough

The first step of the project was scraping the historical list of Hot 100 Artists from Billboard. Using the tidyverse and rvest packages in R, I quickly looped over the 13 years of available data:
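The loop itself isn’t reproduced here; a rough sketch is below. The year-end chart URL pattern and the CSS selector are assumptions to verify against the live site:

library(tidyverse)
library(rvest)

scrape_year <- function(year) {
  url     <- paste0("https://www.billboard.com/charts/year-end/", year, "/top-artists/")  # assumed URL pattern
  page    <- read_html(url)
  artists <- page %>%
    html_elements(".chart-row__artist") %>%   # placeholder selector
    html_text2()

  tibble(year = year, rank = seq_along(artists), artist = artists)
}

hot100_artists <- map_dfr(2006:2019, scrape_year)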

Below is a preview of the first few rows of the resulting dataset:

Next, using a Python script and the Spotify API, I looped through each of the artists from the Billboard Hot 100 dataset and collected a list of their corresponding sub-genres. For example, Spotify associates Sean Paul with the sub-genres of dance pop, dance hall, and pop rap.

Here’s a preview of what the resulting data looks like:

The next step took some thought. I needed a way to map back each of the thousands of sub-genres labeled by Spotify into a few core genres, like pop, rap, country, and rock. Tapping into the work of the Every Noise project, which attempts to create “an algorithmically-generated, readability-adjusted scatter-plot of the musical genre-space”, I developed logic to assign a single genre to each of the artists in the original Billboard Hot 100 artist table:
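The mapping can be expressed as a set of ordered keyword rules: scan an artist’s sub-genre strings and take the first core genre that matches. A simplified sketch of that kind of logic (the keyword lists are illustrative, not the full Every Noise mapping, and artist_genres is the assumed artist/sub-genre table from the previous step):

core_genre <- function(sub_genres) {
  s <- str_to_lower(paste(sub_genres, collapse = " "))
  case_when(
    str_detect(s, "rap|hip hop|trap") ~ "rap",
    str_detect(s, "country")          ~ "country",
    str_detect(s, "rock|metal|punk")  ~ "rock",
    str_detect(s, "pop")              ~ "pop",
    TRUE                              ~ "other"
  )
}

artist_core_genre <- artist_genres %>%
  group_by(artist) %>%
  summarise(genre = core_genre(sub_genre), .groups = "drop")

The rule order matters: checking rap before pop is what sends a sub-genre like “pop rap” into the rap bucket.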

Using this logic, each artist was assigned to a single genre bucket:

The last step was to merge the billboard artist and artist genre tables and calculate the genre percentage breakdown over time, from 2006 to 2019.
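Sketching that last step, with hot100_artists from the scrape above and the artist_core_genre table from the mapping step:

genre_share <- hot100_artists %>%
  left_join(artist_core_genre, by = "artist") %>%
  count(year, genre) %>%
  group_by(year) %>%
  mutate(share = n / sum(n)) %>%
  ungroup()

ggplot(genre_share, aes(x = year, y = share, color = genre)) +
  geom_line() +
  scale_y_continuous(labels = scales::percent) +
  labs(x = NULL, y = "Share of Billboard Hot 100 artists", color = NULL)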

R code along those lines produces the chart below, visualizing relative genre popularity over time:

You can find the GitHub repo for this project here.

Building a Scripture Search Tool with R Shiny

Many religions have texts that contain beliefs, ritual practices, or commandments. The Quran is the central religious text of Islam, believed by Muslims to be a revelation from Allah. The Bible is a collection of religious texts sacred to Christians, Jews, and others. Unique to the Latter Day Saint movement is the Book of Mormon.

The study of these texts is a core religious practice of believers. Looking for a way to quickly understand what the scriptures say on a given topic, I developed a simple Shiny app using R as a study tool:

When a user enters a search term (e.g. “faith”, “gospel”, “sacrifice”, etc.) and clicks “Search”, the app returns a summary table and a detail table. The summary table shows the number of verses that contain the search term by book of scripture. The detail table shows the actual text of all the verses containing the search term.
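A stripped-down sketch of that structure, assuming a verses data frame (columns volume, reference, and text) has already been loaded:

library(shiny)
library(dplyr)
library(stringr)

ui <- fluidPage(
  textInput("term", "Search term", placeholder = "e.g. faith"),
  actionButton("go", "Search"),
  tableOutput("summary"),
  tableOutput("detail")
)

server <- function(input, output, session) {
  hits <- eventReactive(input$go, {
    filter(verses, str_detect(str_to_lower(text), str_to_lower(input$term)))
  })

  output$summary <- renderTable(count(hits(), volume, name = "verses"))
  output$detail  <- renderTable(select(hits(), volume, reference, text))
}

shinyApp(ui, server)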

You can find a full-screen version of the web app here.

In the future, I’d like to enhance the app by adding the ability to search for a phrase (e.g. holy ghost), instead of just a single word. I’d also like to add functionality to compare the presence of multiple words and phrases in different volumes of scriptures. For example, comparing the frequency of the appearance of words like “man” and “woman”.

Hopefully this simple scripture search app can be a helpful tool in your own study. You can find the R code for this project here and access the Shiny app directly here.
