
Start_urls In Scrapy

I am trying to fetch some information from this website: http://www.go-on.fi/tyopaikat. As you can see, the table is paginated, so whenever you click the second or third page, the URL changes (for example to http://www.go-on.fi/tyopaikat?start=20), and I am not sure how to set start_urls so that the spider crawls every page.

Solution 1:

What you could do is set start_urls to the main page and then, based on the number of pages shown in the footer pagination (3 in this case), loop over the pages and yield a Request for each one:

from scrapy.http import Request

# inside your Spider subclass:
allowed_domains = ["go-on.fi"]
start_urls = ["http://www.go-on.fi/tyopaikat"]

def parse(self, response):
    # Read the last page number from the footer pagination
    # (the link just before the "next" arrow).
    pages = int(response.xpath('//ul[@class="pagination"]/li[last()-1]/a/text()').extract()[0])
    page = 1
    start = 0
    while page <= pages:
        # Each page shows 20 rows, so the offset grows in steps of 20.
        url = "http://www.go-on.fi/tyopaikat?start=" + str(start)
        start += 20
        page += 1
        yield Request(url, callback=self.parse_page)
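
The parse_page callback below stores the scraped fields in a JobData item. The question does not show that class; a minimal sketch, assuming it lives in the project's items.py and only needs the two fields used here, could be:

import scrapy

class JobData(scrapy.Item):
    # Only the fields referenced in parse_page; extend as needed.
    header = scrapy.Field()
    link = scrapy.Field()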

def parse_page(self, response):
    # Each table row holds one job posting; the first cell contains the title link.
    for row in response.xpath("//tr"):
        item = JobData()
        item['header'] = row.xpath("./td[1]/a/text()").extract()
        item['link'] = row.xpath("./td[1]/a/@href").extract()
        yield item
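
If you already know how many pages the listing has, you can also skip the first parse step and generate the paginated URLs up front by overriding start_requests(). This is just an alternative sketch, assuming the same 20-row offset and a fixed count of 3 pages:

from scrapy.http import Request

def start_requests(self):
    # Offsets 0, 20 and 40 correspond to pages 1-3 of the listing.
    for start in range(0, 3 * 20, 20):
        yield Request("http://www.go-on.fi/tyopaikat?start=" + str(start),
                      callback=self.parse_page)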
