Continuing earlier work to crawl all news results for a query and return the titles and URLs, I am refining the crawler to fetch the results from every page of Google News. The current code seems to return only the first page of Google News search results. I would appreciate knowing how to get the results from all pages. Many thanks! Google News crawler pagination

My code is below:

import requests
from bs4 import BeautifulSoup
import time
import datetime
from random import randint
import numpy as np
import pandas as pd


query2Google = input("What do you want from Google News?\n")

def QGN(query2Google):
    s = '"' + query2Google + '"'  # quoted keywords for the query
    s = s.replace(" ", "+")
    date = str(datetime.datetime.now().date())  # timestamp for the filename
    filename = query2Google + "_" + date + "_" + 'SearchNews.csv'  # csv filename
    # URL for a news query restricted to the past year
    url = "http://www.google.com.sg/search?q=" + s + "&tbm=nws&tbs=qdr:y"

    time.sleep(randint(0, 2))  # random wait before the request

    htmlpage = requests.get(url)
    print("Status code: " + str(htmlpage.status_code))
    soup = BeautifulSoup(htmlpage.text, 'lxml')

    df = []
    for result_table in soup.findAll("div", {"class": "g"}):
        a_click = result_table.find("a")
        title = a_click.get_text()  # result title
        link = a_click.get("href")  # result URL
        if link.startswith('/url?q='):  # drop Google's redirect prefix
            link = link[len('/url?q='):]
        brief = result_table.find("div", {"class": "st"}).get_text()  # snippet
        df = np.append(df, [title, link, brief])

    df = np.reshape(df, (-1, 3))  # one row per result
    df1 = pd.DataFrame(df, columns=['Title', 'URL', 'Brief'])
    print("Search Crawl Done!")

    df1.to_csv(filename, index=False, encoding='utf-8')
    return

QGN(query2Google)

Answer


There used to be an AJAX API for this, but it is no longer available.
However, if you want to fetch several pages you can modify the script with a for loop, and if you want all of the pages you can use a while loop (sketched after the example below).
Example:

url = "http://www.google.com.sg/search?q="+s+"&tbm=nws&tbs=qdr:y&start=" 
pages = 10 # the number of pages you want to crawl # 

for next in range(0, pages*10, 10) : 
    page = url + str(next) 
    time.sleep(randint(1, 5)) # you may need longer than that # 
    htmlpage = requests.get(page) # you should add User-Agent and Referer # 
    print("Status code: " + str(htmlpage.status_code)) 
    if htmlpage.status_code != 200 : 
     break # something went wrong # 
    soup = BeautifulSoup(htmlpage.text, 'lxml') 

    ... process response here ... 

    next_page = soup.find('td', { 'class':'b', 'style':'text-align:left' }) 
    if next_page is None or next_page.a is None : 
     break # there are no more pages # 
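
For the while-loop variant mentioned above (crawl until Google stops offering a next page), a minimal sketch, assuming the same url, randint, and next-page selector as the example:

start = 0
while True:  # keep going until there is no next page
    page = url + str(start)
    time.sleep(randint(1, 5))
    htmlpage = requests.get(page)
    if htmlpage.status_code != 200:
        break  # blocked, or no such page

    soup = BeautifulSoup(htmlpage.text, 'lxml')

    # ... process the response here ...

    next_page = soup.find('td', {'class': 'b', 'style': 'text-align:left'})
    if next_page is None or next_page.a is None:
        break  # no navigation bar or no "Next" link
    start += 10  # advance to the next page of 10 results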

Keep in mind that Google does not like bots, and you may get banned.
You can add 'User-Agent' and 'Referer' to the headers to simulate a web browser, and use time.sleep(random.uniform(2, 6)) to simulate a human, or use Selenium.
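
As a minimal sketch of that suggestion (the header values here are illustrative, not ones the answer specified):

import random
import time

import requests

# Illustrative header values; any realistic browser User-Agent works.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/58.0.3029.110 Safari/537.36',
    'Referer': 'https://www.google.com.sg/',
}

time.sleep(random.uniform(2, 6))  # human-like pause between requests
htmlpage = requests.get(page, headers=headers)  # `page` as built in the loop above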


Hi Adam, the code works perfectly when there are multiple result pages, but I realized that this line: if soup.find('td', {'class':'b', 'style':'text-align:left'}).a is None: break raises AttributeError: 'NoneType' object has no attribute 'a' when the search results fit on a single page and there is no pagination. Is there a way around this? – Sun


That line checks whether there is a next page in the navigation bar; you can see it if you use "Inspect Element" in a web browser. If for some reason there is no navigation bar, that line raises an exception; I will update my code to catch it –
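
A minimal sketch of catching that error instead of guarding for None, assuming the same soup object as in the answer's loop:

try:
    next_td = soup.find('td', {'class': 'b', 'style': 'text-align:left'})
    has_next = next_td.a is not None  # raises AttributeError if next_td is None
except AttributeError:
    has_next = False  # no navigation bar: a single page of results

if not has_next:
    print("No more pages")  # stop paginating here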