An approach to Web Scraping in Python with BeautifulSoup

There are mainly two ways to extract data from a website:

1. Use the API of the website (if it exists). For example, Facebook has the Facebook Graph API, which allows retrieval of data posted on Facebook.
2. Access the HTML of the webpage and extract useful information/data from it. This technique is called web scraping, web harvesting, or web data extraction.

This article discusses the steps involved in web scraping using the Python library Beautiful Soup.

Steps involved in web scraping:

1. Send an HTTP request to the URL of the webpage you want to access. The server responds to the request by returning the HTML content of the webpage. For this task, we will use a third-party Python HTTP library called requests.
2. Once we have accessed the HTML content, we are left with the task of parsing the data. Since most HTML data is nested, we cannot extract it through simple string processing; we need a parser that can create a nested/tree structure from the HTML. There are many HTML parser libraries available; one of the most lenient (it parses pages the way a web browser does) is html5lib.
3. Now, all we need to do is navigate and search the parse tree that we created, i.e. tree traversal. For this task, we will use another third-party Python library, Beautiful Soup. It is a Python library for pulling data out of HTML and XML files.

Step 1: Installing the required third-party libraries

The easiest way to install external libraries in Python is to use pip. pip is a package management system used to install and manage software packages written in Python. All you need to do is run the following (e.g. in the Anaconda CMD prompt):

pip install requests
pip install html5lib
pip install bs4

Step 2: Accessing the HTML content from the webpage

import requests

URL = ""
r = requests.get(URL)
print(r.content)  # the output is large, so you don't need to run it

Let us try to understand this piece of code.

First of all, import the requests library. Then, specify the URL of the webpage you want to scrape. Send an HTTP request to the specified URL and save the response from the server in a response object called r. Now, print r.content to get the raw HTML content of the webpage. It is of bytes type (use r.text if you want a str instead).

Step 3: Parsing the HTML content

#This will not run on online IDE
import requests
from bs4 import BeautifulSoup

URL = ""
r = requests.get(URL)

soup = BeautifulSoup(r.content, 'html5lib')  # if this line causes an error, run 'pip install html5lib'
print(soup.prettify())  # prints the parse tree; the output below is truncated
<!DOCTYPE html>
<html class="no-js" dir="ltr" lang="en-US">
  <title>Inspirational Quotes - Motivational Quotes - Leadership Quotes |</title>
  <meta charset="utf-8"/>
  <meta content="text/html; charset=utf-8" http-equiv="content-type"/>
  <meta content="IE=edge" http-equiv="X-UA-Compatible"/>
  <meta content="width=device-width,initial-scale=1.0" name="viewport"/>
  <meta content="The Foundation for a Better Life | Pass It" name="description"/>
  <link href="/apple-touch-icon.png" rel="apple-touch-icon" sizes="180x180"/>
  <link href="/favicon-32x32.png" rel="icon" sizes="32x32" type="image/png"/>
  <link href="/favicon-16x16.png" rel="icon" sizes="16x16" type="image/png"/>
  <link href="/site.webmanifest" rel="manifest"/>
  <link color="#c8102e" href="/safari-pinned-tab.svg" rel="mask-icon"/>
  <meta content="#c8102e" name="msapplication-TileColor"/>
  <meta content="#ffffff" name="theme-color"/>

A really nice thing about the Beautiful Soup library is that it is built on top of HTML parsing libraries like html5lib, lxml, html.parser, etc. So the BeautifulSoup object can be created and the parser library specified at the same time.

In the example above,

soup = BeautifulSoup(r.content, 'html5lib')

We create a BeautifulSoup object by passing two arguments:

r.content : the raw HTML content.
'html5lib' : the HTML parser we want to use.

Now, when soup.prettify() is printed, it gives a visual representation of the parse tree created from the raw HTML content.
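To see how the two-argument pattern works without fetching a real page, here is a minimal self-contained sketch. The tiny HTML string is made up for illustration, and Python's built-in 'html.parser' is used so no extra install is needed:

```python
from bs4 import BeautifulSoup

# a tiny, made-up HTML snippet standing in for a real webpage
html_doc = "<html><body><p>Hello, <b>world</b>!</p></body></html>"

# same pattern as above: the markup, then the parser name
soup = BeautifulSoup(html_doc, "html.parser")

print(soup.prettify())  # indented, tree-like view of the document
print(soup.p.text)      # text inside the first <p>: Hello, world!
```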

Step 4: Searching and navigating through the parse tree

Now, we would like to extract some useful data from the HTML content. The soup object contains all the data in the nested structure which could be programmatically extracted. In our example, we are scraping a webpage consisting of some quotes. So, we would like to create a program to save those quotes (and all relevant information about them).

#Python program to scrape website
#and save quotes from website
import requests
from bs4 import BeautifulSoup
import csv

URL = ""
r = requests.get(URL)

soup = BeautifulSoup(r.content, 'html5lib')

quotes=[] # a list to store quotes

table = soup.find('div', attrs = {'id':'all_quotes'})

for row in table.findAll('div',
						attrs = {'class':'col-6 col-lg-3 text-center margin-30px-bottom sm-margin-30px-top'}):
	quote = {}
	quote['theme'] = row.h5.text
	quote['url'] = row.a['href']
	quote['img'] = row.img['src']
	quote['lines'] = row.img['alt'].split(" #")[0]
	quote['author'] = row.img['alt'].split(" #")[1]
	quotes.append(quote)

filename = 'inspirational_quotes.csv'
with open(filename, 'w', newline='') as f:
	w = csv.DictWriter(f, ['theme','url','img','lines','author'])
	w.writeheader()
	for quote in quotes:
		w.writerow(quote)

Before moving on, we recommend you to go through the HTML content of the webpage which we printed using soup.prettify() method and try to find a pattern or a way to navigate to the quotes.

Notice that all the quotes are inside a div container whose id is 'all_quotes'. So, we find that div element (called table in the code above) using the find() method:

table = soup.find('div', attrs = {'id':'all_quotes'}) 

The first argument is the HTML tag you want to search for, and the second argument is a dictionary specifying additional attributes associated with that tag. The find() method returns the first matching element. You can print table.prettify() to get a sense of what this piece of code does.
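The first-match behaviour of find() can be checked on a toy document (the ids below are invented for the example):

```python
from bs4 import BeautifulSoup

# made-up document with several divs
html_doc = """
<div id="intro">first</div>
<div id="all_quotes">second</div>
<div id="all_quotes">third</div>
"""
soup = BeautifulSoup(html_doc, "html.parser")

# find() returns only the FIRST element matching the tag and attributes
match = soup.find("div", attrs={"id": "all_quotes"})
print(match.text)  # second
```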

Now, within the table element, one can notice that each quote is inside a div container with a particular class (the long class string used in the code below). So, we iterate through each such div container using the findAll() method, which takes the same arguments as find() but returns a list of all matching elements. Each quote is then accessed through a variable called row. Now consider this piece of code:

for row in table.findAll('div', attrs = {'class': 'col-6 col-lg-3 text-center margin-30px-bottom sm-margin-30px-top'}):
    quote = {}
    quote['theme'] = row.h5.text
    quote['url'] = row.a['href']
    quote['img'] = row.img['src']
    quote['lines'] = row.img['alt'].split(" #")[0]
    quote['author'] = row.img['alt'].split(" #")[1]
    quotes.append(quote)
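The difference between find() and findAll(), and the alt-attribute splitting trick, can be checked on a made-up row. The class name and alt format below only mimic the convention described above; they are not the real page's markup:

```python
from bs4 import BeautifulSoup

# two fake quote rows imitating the structure of the real page
html_doc = """
<div class="quote-row"><img alt="Stay curious. #Einstein" src="a.jpg"/></div>
<div class="quote-row"><img alt="Keep going. #Churchill" src="b.jpg"/></div>
"""
soup = BeautifulSoup(html_doc, "html.parser")

# findAll returns a list of ALL matching elements
rows = soup.findAll("div", attrs={"class": "quote-row"})
print(len(rows))  # 2

for row in rows:
    alt = row.img["alt"]
    # text before " #" is the quote, text after it is the author
    lines, author = alt.split(" #")
    print(lines, "-", author)
```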

We create a dictionary to save all information about a quote. The nested structure can be accessed using dot notation. To access the text inside an HTML element, we use .text :

quote['theme'] = row.h5.text

We can add, remove, modify and access a tag’s attributes. This is done by treating the tag as a dictionary:

quote['url'] = row.a['href']
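Treating a tag as a dictionary works for reading, writing, and deleting attributes alike. A small sketch on a made-up anchor tag:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="/quotes/1" class="link">a quote</a>', "html.parser")
tag = soup.a

print(tag["href"])         # read an attribute: /quotes/1
tag["href"] = "/quotes/2"  # modify it
tag["target"] = "_blank"   # add a new one
del tag["class"]           # remove one
print(tag)
```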

Lastly, all the quotes are appended to the list called quotes.

Finally, we would like to save all our data in some CSV file.

filename = 'inspirational_quotes.csv'
with open(filename, 'w', newline='') as f:
    w = csv.DictWriter(f, ['theme','url','img','lines','author'])
    w.writeheader()
    for quote in quotes:
        w.writerow(quote)

Here we create a CSV file called inspirational_quotes.csv and save all the quotes in it for any further use.
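Because DictWriter writes a header row, the file can be read straight back into dictionaries with csv.DictReader. A quick round-trip check, using a throwaway temp file and a made-up quote rather than the real scrape output:

```python
import csv
import os
import tempfile

quotes = [{'theme': 'Courage', 'author': 'Anonymous'}]

# write, following the same pattern as the scraper
path = os.path.join(tempfile.gettempdir(), 'quotes_demo.csv')
with open(path, 'w', newline='') as f:
    w = csv.DictWriter(f, ['theme', 'author'])
    w.writeheader()
    for quote in quotes:
        w.writerow(quote)

# read back: each row comes back as a dict keyed by the header
with open(path, newline='') as f:
    rows = list(csv.DictReader(f))
print(rows)  # [{'theme': 'Courage', 'author': 'Anonymous'}]
```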


Data Visualization Using Python

In this example we’ll create different data visualization charts from population data. There’s an easy way to create visuals directly from Pandas, and we’ll see how it works in detail in this tutorial.

Install necessary Libraries

To easily create interactive visualizations, we need to install Cufflinks. This is a library that connects Pandas with Plotly, so we can create visualizations directly from Pandas (in the past you had to learn workarounds to make them work together, but now it’s simpler). First, make sure you install Pandas and Plotly by running the following commands in the terminal:

Install the following libraries in this order (on the Conda CMD prompt):

pip install pandas
pip install plotly
pip install cufflinks

Import the following Libraries

import pandas as pd
import cufflinks as cf
from IPython.display import display, HTML

# set the theme referenced below and enable offline plotting
cf.set_config_file(sharing='public', theme='ggplot', offline=True)

In this case, I’m using the ‘ggplot’ theme, but feel free to choose any theme you want. Run the command cf.getThemes() to get all the available themes. To create data visualizations with Pandas in the following sections, we only need to use the syntax dataframe.iplot().

The data we’ll use is a population dataframe. First, download the CSV file, move the file to where your Python script is located, and then read it into a Pandas dataframe as shown below.

# read the population CSV into a dataframe
df_population = pd.read_csv('documents/population/population.csv')
df_population.head(10)
  country    year    population
0   China  2020.0  1.439324e+09
1   China  2019.0  1.433784e+09
2   China  2018.0  1.427648e+09
3   China  2017.0  1.421022e+09
4   China  2016.0  1.414049e+09
5   China  2015.0  1.406848e+09
6   China  2010.0  1.368811e+09
7   China  2005.0  1.330776e+09
8   China  2000.0  1.290551e+09
9   China  1995.0  1.240921e+09

This dataframe is almost ready for plotting; we just have to drop the null values, reshape it, and then select a couple of countries to test our interactive plots. The code shown below does all of this.

# dropping null values
df_population = df_population.dropna()
# reshaping the dataframe
df_population = df_population.pivot(index="year", columns="country", values="population")
# selecting 5 countries
df_population = df_population[['United States', 'India', 'China', 'Nigeria', 'Spain']]
country  United States         India         China      Nigeria       Spain
year
1955.0     171685336.0  4.098806e+08  6.122416e+08   41086100.0  29048395.0
1960.0     186720571.0  4.505477e+08  6.604081e+08   45138458.0  30402411.0
1965.0     199733676.0  4.991233e+08  7.242190e+08   50127921.0  32146263.0
1970.0     209513341.0  5.551898e+08  8.276014e+08   55982144.0  33883749.0
1975.0     219081251.0  6.231029e+08  9.262409e+08   63374298.0  35879209.0
1980.0     229476354.0  6.989528e+08  1.000089e+09   73423633.0  37698196.0
1985.0     240499825.0  7.843600e+08  1.075589e+09   83562785.0  38733876.0
1990.0     252120309.0  8.732778e+08  1.176884e+09   95212450.0  39202525.0
1995.0     265163745.0  9.639226e+08  1.240921e+09  107948335.0  39787419.0
2000.0     281710909.0  1.056576e+09  1.290551e+09  122283850.0  40824754.0
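The dropna/pivot/column-selection pipeline above can be sanity-checked on a tiny hand-made frame:

```python
import pandas as pd

# tiny made-up long-format population table (one row per country-year)
df = pd.DataFrame({
    'country':    ['China', 'Spain', 'China', 'Spain', 'China'],
    'year':       [2019,     2019,    2020,    2020,    None],
    'population': [1.43e9,   4.7e7,   1.44e9,  4.7e7,   None],
})

df = df.dropna()  # drop the incomplete row

# long -> wide: one row per year, one column per country
wide = df.pivot(index='year', columns='country', values='population')
print(wide)
```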


Lineplot

Let’s make a lineplot to compare how much the population has grown from 1955 to 2020 for the 5 countries selected. As mentioned before, we will use the syntax df_population.iplot(kind='name_of_plot') to make plots as shown below.

df_population.iplot(kind='line',xTitle='Years', yTitle='Population',
                    title='Population (1955-2020)')


We can make a single barplot or barplots grouped by categories. Let’s have a look.

Single Barplot

Let’s create a barplot that shows the population of each country by the year 2020. To do so, first, we select the year 2020 from the index and then transpose rows with columns to get the year in the column. We’ll name this new dataframe df_population_2020 (we’ll use this dataframe again when plotting piecharts).

df_population_2020 = df_population[df_population.index.isin([2020])]
df_population_2020 = df_population_2020.T
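The select-one-year-then-transpose step can be seen on a toy frame with made-up numbers:

```python
import pandas as pd

# wide frame: years as the index, countries as columns
df = pd.DataFrame({'China': [1.40e9, 1.44e9], 'Spain': [4.6e7, 4.7e7]},
                  index=[2019, 2020])

row_2020 = df[df.index.isin([2020])]  # keep only the 2020 row
col_2020 = row_2020.T                 # transpose: countries become the index
print(col_2020)
```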

Now we can plot this new dataframe with .iplot(). In this case, I’m going to set the bar color to blue using the color argument.

df_population_2020.iplot(kind='bar', color='blue',
                         xTitle='Years', yTitle='Population',
                         title='Population in 2020')

Barplot grouped by “n” variables

Now let’s see the evolution of the population at the beginning of each decade.

# filter years out
df_population_sample = df_population[df_population.index.isin([1980, 1990, 2000, 2010, 2020])]
# plotting
df_population_sample.iplot(kind='bar', xTitle='Years',
                           yTitle='Population', title='Population by Decade')

Naturally, all of them increased their population throughout the years, but some did it at a faster rate.


Boxplot

Boxplots are useful when we want to see the distribution of the data. A boxplot reveals the minimum value, first quartile (Q1), median, third quartile (Q3), and maximum value. The easiest way to see those values is by creating an interactive visualization. Let’s see the population distribution of China.

df_population['China'].iplot(kind='box', color='green',
                             yTitle='Population', title='Population of China')
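The five values a boxplot encodes can be computed directly with Pandas, which is handy for sanity-checking the chart (toy numbers below):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
summary = s.describe()

# the five boxplot values: min, Q1 (25%), median (50%), Q3 (75%), max
print(summary[['min', '25%', '50%', '75%', 'max']])
```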

Let’s say now we want to get the same distribution but for all the selected countries.

df_population.iplot(kind='box', xTitle='Countries',
                    yTitle='Population', title='Population Distribution')

As we can see, we can also filter out any country by clicking on the legends on the right.


Histogram

A histogram represents the distribution of numerical data. Let’s see the population distribution of the USA and Nigeria.

df_population[['United States', 'Nigeria']].iplot(kind='hist',
                                                  xTitle='Population',
                                                  title='Population Distribution')


Piechart

Let’s compare the population by the year 2020 again, but now with a piechart. To do so, we’ll use the df_population_2020 dataframe created in the “Single Barplot” section. However, to make a piechart we need “country” as a column and not as an index, so we use .reset_index() to get the column back. Then we rename the 2020 column to the string '2020'.

# transforming data
df_population_2020 = df_population_2020.reset_index()
df_population_2020 = df_population_2020.rename(columns={2020: '2020'})
# plotting
df_population_2020.iplot(kind='pie', labels='country', values='2020',
                         title='Population in 2020 (%)')


Scatterplot

Although population data is not well suited to a scatterplot (the data follows a common pattern), we’ll make this plot for the purposes of this guide. Making a scatterplot is similar to making a line plot, but we have to add the mode argument.

df_population.iplot(kind='scatter', mode='markers')

Voilà! Now you’re ready to make your own beautiful interactive visualizations with Pandas.