The Simple Way to Scrape an HTML Table: Google Docs

Raw data is the best data, but a lot of public data can still only be found in tables rather than as directly machine-readable files. One example is the FDIC’s List of Failed Banks. Here is a simple trick to scrape such data from a website: Use Google Docs.

The table on that page is relatively nice, as it even includes some JavaScript to make it sortable. But a large table with close to 200 entries is still not exactly the best way to analyze that data.

I first tried DabbleDB for this task, and it worked in principle. The only problem was that it only extracted 17 rows for some reason. I have no idea what the issue was, but I didn’t want to invest the time to figure it out.

After some digging around and even considering writing my own throw-away extraction script, I remembered having read something about Google Docs being able to import tables from websites. And indeed, it has a very useful function called ImportHtml that will scrape a table from a page.

To extract a table, create a new spreadsheet and enter the following expression in the top left cell: =ImportHtml(URL, "table", num). URL here is the URL of the page (between quotation marks), "table" is the element to look for (Google Docs can also import lists), and num is the number of the element, in case there are more on the same page (which is rather common for tables). The latter supposedly starts at 1, but I had to use 0 to get it to pick up the correct table on the FDIC page.
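For the failed-banks list, for example, the filled-in formula might look something like the line below. The URL shown here is an assumption (the FDIC may have moved the page since this was written), and the index may need to be 1 rather than 0 depending on the page's markup and the current version of Google Docs:

=ImportHtml("http://www.fdic.gov/bank/individual/failed/banklist.html", "table", 0)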

Once this is done, Google Docs retrieves the data and inserts it into the spreadsheet, including the headers. The last step is to download the spreadsheet as a CSV file.

This is very simple and quick, and a much better idea than writing a custom script. Of course, the real solution would be to offer all data as a CSV file in addition to the table to begin with. But until that happens, we will need tools like this to get the data into a format that is actually useful.

Comments

  1. Jon Peltier says

    It took under a minute to set up a web query in Excel to extract the data. Data menu > Import External Data > New Web Query. Enter the URL, then select the table you want imported. Whenever you want, you can click on the imported table. The External Data toolbar pops up, and you can click on the icon with the exclamation point to update the query. It’s a simple matter to save the sheet with the data as a CSV.

  2. says

    This is a very nice trick. For more complex data extraction needs, I’ve been playing with Open Dapper (http://www.dapper.net/open/) and I’ve found it quite powerful: it’s able to extract data from almost any page showing any kind of regularity – not just tables – and export it as CSV or RSS (and you can access the RSS from a URL that stays live, so the feed updates as the original page updates, etc.)

    (I’ve searched the site for “Dapper” and the search engine returned no results, so I thought it could be useful to point it out to you – thanks for this blog and for your work!)

  3. Robert Kosara says

    That’s probably where I saw this. I actually did this a while ago and couldn’t find the page where I had seen the ImportHtml trick when I wrote the posting. I’ll add a link.

  4. says

    I would just use Yahoo Pipes for this. Very easy (with a small investment of time to learn), and very flexible too. Great tool for scraping.

  5. Robert Kosara says

    The good thing about HTML is that tables are very clearly structured in the markup. That’s not the case in PDF, where it’s just stuff that happens to line up and maybe lines that are drawn in-between. Best option is probably to copy&paste from the PDF into Excel or another spreadsheet app.

  6. says

    Let’s combine your climate data and web scraping posts.

    Here’s an example where I scraped climate science data for a visualization of the IR absorption properties of 5 greenhouse gases.

    I wanted to scrape the spectrum data from 5 NIST Chemistry WebBook data web pages and generate this chart automatically.

    Jon Peltier is right about Excel’s external data capabilities; however, Jon would need a VBA procedure to retrieve the data and reproduce my 5-panel chart.

    Bill Dedman’s suggestion about using Excel’s cut-and-paste approach would be quite time-consuming, and Bill would have a challenge generating the 5-panel chart.

    Here’s a link to my post; it includes a link to my R script, available on Google Docs.

    http://chartsgraphs.wordpress.com/2009/12/07/understanding-the-science-of-co2%E2%80%99s-role-in-climate-change-3-%E2%80%93-how-green-house-gases-trap-heat/

  7. kristineh says

    This is a great website, but you have some syntax errors:

    =ImportHtml(URL, "table", num)

    should be:
    =ImportHtml(URL; "table"; num)

  8. says

    You could have used MS Excel’s “Data > From Web” option to fetch tables from websites. It’s easier and works well, and it can also update automatically every time the spreadsheet opens. Cheers…

  9. says

    I’m a newbie with complex ideas and no programming skills. I need to extract table data from an internal web portal page. I can’t find the table id. I’m in Firefox and viewed the frame source. All I can find is a table class. Thoughts? Ideas? If I can make this work, I’ve just saved myself hours per week.

  10. Jason Pittman says

    Another good way to import data into a spreadsheet is from JSON data. If you use json-csv.com you can upload text or enter a URL and a spreadsheet will be produced.

  11. Alex says

    Data menu > Import External Data > New Web Query

    Just found this today… I was doing it by hand for years… Thanks, man!
