Web Scraping with Python
One of the awesome things about Python is how relatively simple it is to do pretty complex and impressive tasks. A great example of this is web scraping.
This is an article about web scraping with Python. In it we will look at the basics of web scraping using the popular libraries requests and beautifulsoup.
Topics covered:
- What is web scraping?
- What are requests and beautifulsoup?
- Using CSS selectors to target data on a web-page
- Getting product data from a demo book site
- Storing scraped data in CSV and JSON formats
What is Web Scraping?
Some websites can contain a large amount of valuable data. Web scraping means extracting data from websites, usually in an automated fashion using a bot or web crawler. The kinds of data available are as wide-ranging as the internet itself. Common tasks include
- scraping stock prices to inform investment decisions
- automatically downloading files hosted on websites
- scraping data about company contacts
- scraping data from a store locator to create a list of business locations
- scraping product data from sites like Amazon or eBay
- scraping sports stats for betting
- collecting data to generate leads
- collating data available from multiple sources
Legality of Web Scraping
There has been some confusion in the past about the legality of scraping data from public websites. This has been cleared up somewhat recently (I’m writing in July 2020) by a court case where the US Court of Appeals denied LinkedIn’s requests to prevent HiQ, an analytics company, from scraping its data.
The decision was a historic moment in the data privacy and data regulation era. It showed that any data that is publicly available and not copyrighted is potentially fair game for web crawlers.
However, proceed with caution. You should always honour the terms and conditions of a site that you wish to scrape data from, as well as the contents of its robots.txt file. You also need to ensure that any data you scrape is used in a legal way. For example, you should consider copyright issues and data protection laws such as GDPR. Also, be aware that the court decision could be reversed and other laws may apply. This article is not intended to provide legal advice, so please do your own research on this topic. One place to start is Quora, where there are some good and detailed questions and answers on the subject.
One way you can avoid any potential legal snags while learning how to use Python to scrape websites for data is to use sites which either welcome or tolerate your activity. One great place to start is toscrape.com – a web scraping sandbox which we will use in this article.
An example of Web Scraping in Python
You will need to install two common scraping libraries to use the following code. This can be done using pip install requests and pip install beautifulsoup4 in a command prompt. For details on how to install packages in Python, check out Installing Python Packages with Pip.
The requests library handles connecting to and fetching data from your target web-page, while beautifulsoup enables you to parse and extract the parts of that data you are interested in.
Let’s look at an example:
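Here is a minimal sketch of such an example, scraping book titles and prices from the books.toscrape.com sandbox (the exact structure follows the description given below; details such as the encoding fix are assumptions):

```python
import requests
from bs4 import BeautifulSoup

page = requests.get("http://books.toscrape.com/")
page.encoding = "utf-8"  # make sure the pound sign decodes correctly
soup = BeautifulSoup(page.text, "html.parser")

data = []
for book in soup.find_all("article", class_="product_pod"):
    title = book.h3.text
    # find the price element, drop the pound sign and convert to float
    price = float(book.find("div", class_="product_price")
                      .find("p", class_="price_color").text.strip("£"))
    data.append((title, price))

print("### RESULTS ###")
for item in data:
    print(item)
```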
So how does the code work?
In order to be able to do web scraping with Python, you will need a basic understanding of HTML and CSS, so that you understand the territory you are working in. You don’t need to be an expert, but you do need to know how to navigate the elements on a web-page using an inspector such as Chrome dev tools. If you don’t have this basic knowledge, you can go off and get it (w3schools is a great place to start), or if you are feeling brave, just follow along and pick up what you need as you go.
To see what is happening in the code above, navigate to http://books.toscrape.com/. Place your cursor over a book price, right-click your mouse and select “inspect” (that’s the option on Chrome – it may be something slightly different like “inspect element” in other browsers). When you do this, a new area will appear showing you the HTML which created the page. You should take particular note of the “class” attributes of the elements you wish to target.
In our code we have
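The line in question looks something like the following (a sketch – here a tiny inline HTML sample stands in for the real page):

```python
from bs4 import BeautifulSoup

# a tiny stand-in for the real page
html = '<article class="product_pod"><h3>A Book</h3></article>'
soup = BeautifulSoup(html, "html.parser")

# find every element whose class attribute is product_pod
books = soup.find_all("article", class_="product_pod")
```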
This uses the class attribute and returns a list of elements with the class product_pod.
Then, for each of these elements we have:
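A sketch of those two lines, again demonstrated against a small inline sample of one product element:

```python
from bs4 import BeautifulSoup

# a stand-in for one product_pod element from the page
html = ('<article class="product_pod"><h3>A Book</h3>'
        '<div class="product_price">'
        '<p class="price_color">£51.77</p></div></article>')
book = BeautifulSoup(html, "html.parser").article

# the title text of the current product
title = book.h3.text
# find the price, drop the pound sign and convert to float
price = float(book.find("div", class_="product_price")
                  .find("p", class_="price_color").text.strip("£"))
```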
The first line is fairly straightforward and just selects the text of the h3 element for the current product. The next line does lots of things, and could be split into separate lines. Basically, it finds the p tag with class price_color within the div tag with class product_price, extracts the text, strips out the pound sign and finally converts it to a float. This last step is not strictly necessary as we will be storing our data in text format, but I’ve included it in case you need an actual numeric data type in your own projects.
Storing Scraped Data in CSV Format
CSV (comma-separated values) is a very common and useful file format for storing data. It is lightweight and does not require a database.
Add this code above the if __name__ == '__main__': line, and just before the line print('### RESULTS ###'), add this:

store_as_csv(data, headings=['title', 'price'])
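A minimal sketch of the store_as_csv helper itself (the output file name is an assumption; adjust to taste):

```python
import csv

def store_as_csv(data, headings):
    """Write the scraped rows to a CSV file, headings first."""
    with open("books.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headings)
        writer.writerows(data)
```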
When you run the code now, a file will be created containing your book data in csv format. Pretty neat huh?
Storing Scraped Data in JSON Format
Another very common format for storing data is JSON (JavaScript Object Notation), which is basically a collection of lists and dictionaries (called arrays and objects in JavaScript).
Add this extra code above the if __name__ == '__main__': line, and add store_as_json(data) above the print('### RESULTS ###') line.
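A minimal sketch of the store_as_json helper (again, the file name is an assumption):

```python
import json

def store_as_json(data):
    """Write the scraped data to a JSON file."""
    with open("books.json", "w") as f:
        json.dump(data, f, indent=2)
```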
So there you have it – you now know how to scrape data from a web-page, and it didn’t take many lines of Python code to achieve!
Full Code Listing for Python Web Scraping Example
Here’s the full listing of our program for your convenience.
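The listing below is a reconstruction from the description in this article, so treat the function and file names as assumptions rather than the author’s exact code:

```python
import csv
import json

import requests
from bs4 import BeautifulSoup


def get_book_data():
    """Scrape the title and price of each book on the demo page."""
    page = requests.get("http://books.toscrape.com/")
    page.encoding = "utf-8"  # make sure the pound sign decodes correctly
    soup = BeautifulSoup(page.text, "html.parser")
    data = []
    for book in soup.find_all("article", class_="product_pod"):
        title = book.h3.text
        price = float(book.find("div", class_="product_price")
                          .find("p", class_="price_color").text.strip("£"))
        data.append((title, price))
    return data


def store_as_csv(data, headings):
    """Write the scraped rows to a CSV file, headings first."""
    with open("books.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headings)
        writer.writerows(data)


def store_as_json(data):
    """Write the scraped data to a JSON file."""
    with open("books.json", "w") as f:
        json.dump(data, f, indent=2)


if __name__ == "__main__":
    data = get_book_data()
    store_as_csv(data, headings=["title", "price"])
    store_as_json(data)
    print("### RESULTS ###")
    for item in data:
        print(item)
```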
One final note. We have used requests and beautifulsoup for our scraping, and a lot of the existing code on the internet in articles and repositories uses those libraries. However, there is a newer library which performs the task of both of these put together, and has some additional functionality which you may find useful later on. This newer library is requests-HTML, and it is well worth looking at once you have got a basic understanding of what you are trying to achieve with web scraping. Another library which is often used for more advanced projects spanning multiple pages is scrapy, but that is a more complex beast altogether, for a later article.
Working through the contents of this article will give you a firm grounding in the basics of web scraping in Python. I hope you find it helpful. Happy computing!
The situation: I wanted to extract chemical identifiers of a set of ~350 chemicals offered by a vendor to compare it to another list. Unfortunately, there is no catalog that neatly tabulates this information, but there is a product catalog PDF that has the list of product numbers. The detailed information for each product (including the chemical identifier) can be found on the vendor’s website at a URL like vendor.com/product/[product_no]. Let me show you how to solve this problem with bash and Python.
Let’s break the problem down into steps:
- Extract list of product numbers (call it list A)
- Iterate over list A and webscrape chemical id to get a list (call it list B)
- Compare list B with desired list C
Steps 1 and 3 look easy – just some text manipulation. Step 2 is basically the automated version of going to the product webpage and copy-pasting the chemical identifier, repeated ~350 times (yup, not going to do that).
Step 1
I have pdf catalogue that looks like this:
Plate | Well | Product | Product No. |
---|---|---|---|
1 | A1 | chemical x | 1111 |
1 | A2 | chemical y | 2222 |
… | … | … | … |
And of course, when copy-pasted to a text file, it is messed up…
Well, that is quite easy to fix. If we are sure that each table row becomes 4 lines, we can do some bash magic:
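The rejoining can be done with paste: giving it standard input four times makes it consume four lines per output row. A sketch, using an inline sample in place of the real copy-pasted file (file names here are illustrative):

```shell
# a small sample of the copy-pasted catalogue: one table cell per line
printf '1\nA1\nchemical x\n1111\n1\nA2\nchemical y\n2222\n' > temp

# paste with four '-' arguments joins every four lines into one
# tab-separated row
paste - - - - < temp
```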
and we will get each table row back as a single tab-separated line.
But beware of empty cells! These may cause a table row to become fewer than 4 lines and mess up your data. This is why I chose paste in this case, even though we could have just extracted every 4th line with $ sed -n '0~4p' temp. With a quick glance you can easily verify that the data is reformatted to look like the original table.
So, inspecting that the reformatted table looks fine, extract the product number, i.e. the 4th column:
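A sketch of that extraction, again with an inline sample standing in for the reformatted file:

```shell
# table.tsv: the reformatted tab-separated table from the previous step
printf '1\tA1\tchemical x\t1111\n1\tA2\tchemical y\t2222\n' > table.tsv

# keep only the fourth column, the product numbers (list A)
cut -f4 table.tsv > listA
cat listA
```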
Step 2
Let’s do a test by scraping from the webpage of one product. Go to the webpage in your browser and do “Inspect element” to inspect the HTML underneath. I found my chemical identifier nicely contained in a <div> tag which has the id inchiKey.
Make sure you have the packages requests and BeautifulSoup installed, and run this Python script:
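A sketch of such a script; the vendor URL pattern is hypothetical, and an offline check of the parsing logic is included at the end:

```python
import requests
from bs4 import BeautifulSoup

def fetch_inchikey(product_no):
    """Fetch a product page and return the text of its
    <div id="inchiKey"> element (the URL pattern is hypothetical)."""
    resp = requests.get(f"https://vendor.com/product/{product_no}", timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    tag = soup.find("div", id="inchiKey")
    return tag.text.strip() if tag else None

# offline check of the parsing logic on a snippet of such a page
sample = '<div id="inchiKey"> ABCDEFGHIJKLMN-OPQRSTUVWX-Y </div>'
print(BeautifulSoup(sample, "html.parser").find("div", id="inchiKey").text.strip())
```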
Do you get the correct chemical identifier? If so, it’s time to wrap this in a loop that iterates over the list of product numbers:
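A sketch of that loop; the product numbers and URL pattern are hypothetical, and network errors are caught so a failed fetch just prints an empty identifier:

```python
import requests
from bs4 import BeautifulSoup

# the product numbers extracted in step 1; in practice read them from
# the file, e.g. product_numbers = open("listA").read().split()
product_numbers = ["1111", "2222"]  # hypothetical values

for no in product_numbers:
    try:
        resp = requests.get(f"https://vendor.com/product/{no}", timeout=10)
        tag = BeautifulSoup(resp.text, "html.parser").find("div", id="inchiKey")
        inchikey = tag.text.strip() if tag else ""
    except requests.RequestException:
        inchikey = ""
    # print the product number next to each identifier so that invalid
    # numbers, which yield no identifier, stand out
    print(no, inchikey)
```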
Together with the chemical id, I printed out the product number again to ensure correspondence – some product numbers may be invalid and thus won’t yield the chemical id! This is guarding against that.
The output contains extraneous spaces and blank lines. Instead of trying to wrangle Python into producing more consistently formatted output, I cleaned it up with bash – it’s much easier:
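A sketch of that clean-up, with a hypothetical raw output standing in for the real one:

```shell
# hypothetical raw output: extra spaces and blank lines between rows
printf '1111   ABC-111\n\n2222   ABC-222\n\n' > raw.txt

# squeeze runs of spaces and drop the blank lines
tr -s ' ' < raw.txt | grep -v '^$' > cleaned.txt
cat cleaned.txt
```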
You can confirm that each product number corresponds to a chemical identifier, then extract just the identifiers like in Step 1:
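For example (the identifiers shown are placeholders):

```shell
# cleaned.txt: one product number and identifier per line
printf '1111 ABC-111\n2222 ABC-222\n' > cleaned.txt

# keep only the second column, the identifiers (list B)
awk '{print $2}' cleaned.txt > listB
cat listB
```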
Step 3
Easy:
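A sketch of the comparison with placeholder lists; note that comm expects its inputs to be sorted:

```shell
# listC: the desired identifiers; listB: the scraped identifiers
printf 'ABC-111\nABC-333\n' | sort > listC
printf 'ABC-111\nABC-222\n' | sort > listB

# -12 suppresses the lines unique to each file, leaving the intersection
comm -12 listC listB
```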
comm outputs 3 columns: (1) C − B, (2) B − C, (3) C ∩ B. The flag -12 suppresses columns 1 and 2. You can similarly suppress the other columns to output what you need.
Bottom line
- Verify, verify your data at every step
- Freely switch between bash and Python according to your needs
