2016-04-25

Scraping multiple pages from a website (BeautifulSoup, Requests, Python 3)

I would like to know how to scrape several different pages of one website with Beautiful Soup / Requests without having to repeat my code over and over.

Below is my current code, which crawls the tourist attractions of certain cities:

import requests
from bs4 import BeautifulSoup

RegionIDArray = [187147, 187323, 186338]
dict = {187147: 'Paris', 187323: 'Berlin', 186338: 'London'}
already_printed = set()

for reg in RegionIDArray:
    for page in range(1, 700, 30):
        r = requests.get("https://www.tripadvisor.de/Attractions-c47-g" + str(reg) + "-oa" + str(page) + ".html")
        soup = BeautifulSoup(r.content, "html.parser")

        g_data = soup.find_all("div", {"class": "element_wrap"})

        for item in g_data:
            header = item.find_all("div", {"class": "property_title"})
            item = header[0].text.strip()
            if item not in already_printed:
                already_printed.add(item)
                print("POI: " + str(item) + " | " + "Location: " + str(dict[reg]) + " | " + "Art: Museum ")

So far, everything works. As a next step, I would like to crawl the most popular museums of these cities in addition to the tourist attractions.

To get all the museums, I therefore have to modify the request by changing the -c parameter:

r = requests.get("https://www.tripadvisor.de/Attractions-c" + str(museumIDArray) +"-g" + str(reg) + "-oa" + str(page) + ".html") 

My code then looks like this:

import requests
from bs4 import BeautifulSoup

RegionIDArray = [187147, 187323, 186338]
museumIDArray = [47, 49]
dict = {187147: 'Paris', 187323: 'Berlin', 186338: 'London'}
already_printed = set()

for reg in RegionIDArray:
    for page in range(1, 700, 30):
        r = requests.get("https://www.tripadvisor.de/Attractions-c" + str(museumIDArray) + "-g" + str(reg) + "-oa" + str(page) + ".html")
        soup = BeautifulSoup(r.content, "html.parser")

        g_data = soup.find_all("div", {"class": "element_wrap"})

        for item in g_data:
            header = item.find_all("div", {"class": "property_title"})
            item = header[0].text.strip()
            if item not in already_printed:
                already_printed.add(item)
                print("POI: " + str(item) + " | " + "Location: " + str(dict[reg]) + " | " + "Art: Museum ")

That does not seem quite right, though. The output I get does not include all of the museums and tourist attractions for some of the cities.

Can anyone help me? Any feedback is appreciated.
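One likely reason the combined request returns incomplete results is how Python stringifies a list: interpolating `museumIDArray` directly embeds brackets and a space into the `-c` parameter. A minimal sketch below uses the URL pattern from the question; the per-ID loop at the end is a hypothetical fix, not the asker's code:

```python
museumIDArray = [47, 49]
reg = 187147

# str() on a list embeds "[47, 49]" verbatim into the URL:
bad = ("https://www.tripadvisor.de/Attractions-c" + str(museumIDArray)
       + "-g" + str(reg) + "-oa1.html")
print(bad)  # ...Attractions-c[47, 49]-g187147-oa1.html

# One request per category id keeps the URL in the known-working -c47 form:
for cat in museumIDArray:
    url = ("https://www.tripadvisor.de/Attractions-c" + str(cat)
           + "-g" + str(reg) + "-oa1.html")
    print(url)
```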

Your code will give an error; also, why is `dict` in your code shadowing the Python builtin? –

@PadraicCunningham What do you mean by "shadowing a python builtin"? Sorry if I am getting on your nerves, but I am still a beginner –

`dict` is a Python type/function; it is best to avoid shadowing, i.e. giving your variables the same names as builtins. Can you add a link and explain what you are trying to parse? –

Answer

All the names are in anchor tags inside the divs with the property_title class.

for reg in RegionIDArray:
    for page in range(1, 700, 30):
        r = requests.get("https://www.tripadvisor.de/Attractions-c" + str(museumIDArray) + "-g" + str(reg) + "-oa" + str(page) + ".html")
        soup = BeautifulSoup(r.content, "html.parser")

        for item in (a.text for a in soup.select("div.property_title a")):
            if item not in already_printed:
                already_printed.add(item)
                print("POI: " + str(item) + " | " + "Location: " + str(dct[reg]) + " | " + "Art: Museum ")
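The `soup.select("div.property_title a")` call here is the CSS-selector equivalent of the `find_all` chain from the question. A small self-contained sketch (the HTML fragment is invented for illustration):

```python
from bs4 import BeautifulSoup

html = '<div class="property_title"><a href="/x">Louvre</a></div>'
soup = BeautifulSoup(html, "html.parser")

# CSS-selector form used in the answer...
names_select = [a.text for a in soup.select("div.property_title a")]
# ...is equivalent to the find_all form from the question.
names_find = [d.find("a").text for d in soup.find_all("div", {"class": "property_title"})]
print(names_select, names_find)  # ['Louvre'] ['Louvre']
```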

It is also better to get the links from the pagination div:

from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin


RegionIDArray = [187147, 187323, 186338]
museumIDArray = [47, 49]
dct = {187147: 'Paris', 187323: 'Berlin', 186338: 'London'}
already_printed = set()

def get_names(soup):
    for item in (a.text for a in soup.select("div.property_title a")):
        if item not in already_printed:
            already_printed.add(item)
            print("POI: {} | Location: {} | Art: Museum ".format(item, dct[reg]))

base = "https://www.tripadvisor.de"
for reg in RegionIDArray:
    r = requests.get("https://www.tripadvisor.de/Attractions-c[47,49]-g{}-oa.html".format(reg))
    soup = BeautifulSoup(r.content, "html.parser")

    # get links to all next pages.
    all_pages = (urljoin(base, a["href"]) for a in soup.select("div.unified.pagination a.pageNum.taLnk")[1:])
    # use helper function to print the names.
    get_names(soup)

    # visit all remaining pages.
    for url in all_pages:
        soup = BeautifulSoup(requests.get(url).content, "html.parser")
        get_names(soup)
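The pagination variant relies on `urljoin` to turn the relative `href` values from the pagination div into absolute URLs; a minimal sketch of that behavior (the path is a made-up example in the site's URL pattern):

```python
from urllib.parse import urljoin

base = "https://www.tripadvisor.de"
# A root-relative href, like those in the pagination links, becomes absolute:
print(urljoin(base, "/Attractions-c47-g187147-oa30.html"))
# https://www.tripadvisor.de/Attractions-c47-g187147-oa30.html
```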
Thank you very much for your feedback. But now I get the following error message: Traceback (most recent call last): File "C:/Users/Raju/Desktop/Scripts/nnnn.py", line 25, in get_names(soup) File "C:/Users/Raju/Desktop/Scripts/nnnn.py", line 15, in get_names print("POI: {} | Location: {} | " + "Art: Museum".format(item.dict[reg])) AttributeError: 'str' object has no attribute 'dict' Can you help me? What is wrong? –

@SeriousRuffy, which dict are you using? –

@Padraic It should be dct, as you stated in the code above. I only tried it with "dict". Still, I get the same error message –