
How To Generically Crawl Different Websites Using Python?

I want to extract comments from any article on Dawn.com as well as Tribune.com. The way I'm extracting comments is to target the class

Solution 1:

It is not easy to write an algorithm that can generically grab the wanted content from arbitrary websites because, as you've mentioned, there is no common pattern. One site might put its comments in an element with a class name like comments or site_comments, while another puts them somewhere else entirely under a different name. So you generally need to figure out, per site, which class names (or other selectors) to target when scraping the content.
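
If you do end up needing per-site logic, one lightweight way to keep it in a single crawler is a small mapping from each site to the class name its comments use. This is only a sketch; the URLs and class names below are placeholders, not the sites' actual markup:

from bs4 import BeautifulSoup
import requests

# Hypothetical configuration: these class names are placeholders,
# not the real ones used by Dawn.com or Tribune.com.
SITE_COMMENT_CLASSES = {
    "https://www.dawn.com": "comments",
    "https://tribune.com.pk": "site_comments",
}

def fetch_comments(url, comment_class):
    # Download the page, failing loudly on HTTP errors
    response = requests.get(url)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html5lib")
    # Return every tag whose class matches this site's configured name
    return soup.find_all(class_=comment_class)

for url, comment_class in SITE_COMMENT_CLASSES.items():
    comments = fetch_comments(url, comment_class)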

Nevertheless, in your case, if you don't want to write separate code for each site, you can use BeautifulSoup's support for matching attributes with regular expressions.

For example, you can do something like this:

from bs4 import BeautifulSoup
import requests
import re

site_urls = [first_site, second_site]
for site in site_urls:
    # this is just an example and in real life situations
    # you should do some error checking
    site_content = requests.get(site).text
    soup = BeautifulSoup(site_content, 'html5lib')
    # this is the list of html tags whose class matches the pattern,
    # i.e. the current site's comments; do whatever you want with them
    comments = soup.find_all(class_=re.compile("(comment)|(content)"))
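
Once you have the matching tags, you can, for instance, pull the visible text out of each one:

for comment in comments:
    # get_text() strips the markup and returns only the visible text
    print(comment.get_text(strip=True))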

BeautifulSoup has very good documentation. You should check it.
