
I want to scrape this site: https://www.99acres.com. How do I scrape multiple pages when the URL never changes - Python and BeautifulSoup

So far I have written code with BeautifulSoup that extracts data from the site; however, my current code only gets me the first page. I would like to know whether there is a way to reach the other pages, because when I click "next page" the URL does not change, so I cannot simply iterate over different URLs.

Here is my code so far:

import io 
import csv 
import requests 
from bs4 import BeautifulSoup 

response = requests.get('https://www.99acres.com/search/property/buy/residential-all/hyderabad?search_type=QS&search_location=CP1&lstAcn=CP_R&lstAcnId=1&src=CLUSTER&preference=S&selected_tab=1&city=269&res_com=R&property_type=R&isvoicesearch=N&keyword_suggest=hyderabad%3B&bedroom_num=3&fullSelectedSuggestions=hyderabad&strEntityMap=W3sidHlwZSI6ImNpdHkifSx7IjEiOlsiaHlkZXJhYmFkIiwiQ0lUWV8yNjksIFBSRUZFUkVOQ0VfUywgUkVTQ09NX1IiXX1d&texttypedtillsuggestion=hy&refine_results=Y&Refine_Localities=Refine%20Localities&action=%2Fdo%2Fquicksearch%2Fsearch&suggestion=CITY_269%2C%20PREFERENCE_S%2C%20RESCOM_R&searchform=1&price_min=null&price_max=null') 
html = response.text 
soup = BeautifulSoup(html, 'html.parser') 
rows = []

# each search-result card sits inside a div with class "srpWrap"
dealer = soup.findAll('div', {'class': 'srpWrap'})

for item in dealer:
    try:
        p = item.contents[1].find_all("div", {"class": "_srpttl srpttl fwn wdthFix480 lf"})[0].text
    except (IndexError, AttributeError):
        p = ''
    try:
        d = item.contents[1].find_all("div", {"class": "lf f13 hm10 mb5"})[0].text
    except (IndexError, AttributeError):
        d = ''

    rows.append([p, d])


with open('project.txt', 'w', encoding="utf-8") as file:
    writer = csv.writer(file)
    for row in rows:
        writer.writerow(row)   # one row per listing: [title, details]

Answers


I have never worked with BeautifulSoup, but here is a general approach: when you page through the results, the site loads the next page through an Ajax request, so you should inspect that request and its response instead of the page URL. Here is a sample using curl:

curl 'https://www.99acres.com/do/quicksearch/getresults_ajax' -H 'pragma: no-cache' -H 'origin: https://www.99acres.com' -H 'accept-encoding: gzip, deflate, br' -H 'accept-language: en-US,en;q=0.8,de;q=0.6,da;q=0.4' -H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36' -H 'content-type: application/x-www-form-urlencoded' -H 'accept: */*' -H 'cache-control: no-cache' -H 'authority: www.99acres.com' -H 'cookie: 99_ab=37; NEW_VISITOR=1; 99_FP_VISITOR_OFFSET=87; 99_suggestor=37; 99NRI=2; PROP_SOURCE=IP; src_city=-1; 99_citypage=-1; sl_prop=0; 99_defsrch=n; RES_COM=RES; kwp_last_action_id_type=2784981911907674%2CSEARCH%2C402278484965075610; 99_city=38; spd=%7B%22P%22%3A%7B%22a%22%3A%22R%22%2C%22b%22%3A%22S%22%2C%22c%22%3A%22R%22%2C%22d%22%3A%22269%22%2C%22j%22%3A%223%22%7D%7D; lsp=P; 99zedoParameters=%7B%22city%22%3A%22269%22%2C%22locality%22%3Anull%2C%22budgetBucket%22%3Anull%2C%22activity%22%3A%22SRP%22%2C%22rescom%22%3A%22RES%22%2C%22preference%22%3A%22BUY%22%2C%22nri%22%3A%22YES%22%7D; GOOGLE_SEARCH_ID=402278484965075610; _sess_id=1oFlv%2B%2FPAnDwWEEZiIGqNUTFrkARButJKqqEYu%2Fcv5WKMZCNYvpc89tievPnYatE28uBWbcd0PTpvCp9k3O20w%3D%3D; newRequirementsByUser=0' -H 'referer: https://www.99acres.com/3-bhk-property-in-hyderabad-ffid?orig_property_type=R&search_type=QS&search_location=CP1&pageid=QS' --data 'src=PAGING&static_search=1&nextbutton=Next%20%BB&page=2&button_next=2&lstAcnId=2784981911907674&encrypted_input=UiB8IFFTIHwgUyB8IzcjICB8IENQMSB8IzQjICB8IDMgIzE1I3wgIHwgMzExODQzMzMsMzExODM5NTUgfCAgfCAyNjkgfCM1IyAgfCBSICM0MCN8ICA%3D&lstAcn=SEARCH&sortby=&is_ajax=1' --compressed 

That way you can adjust the page parameter.
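
The same request can be made from Python. Below is a minimal sketch assuming the endpoint and form fields shown in the curl command above; lstAcnId and encrypted_input are session-specific, so the placeholder values here would have to be copied from your own browser's network tab (some of the cookies may be required as well):

import requests 

# endpoint and form fields taken from the curl command above 
AJAX_URL = "https://www.99acres.com/do/quicksearch/getresults_ajax" 

payload = { 
    "src": "PAGING", 
    "static_search": "1", 
    "page": "2",                               # the page number - change this per request 
    "button_next": "2", 
    "lstAcn": "SEARCH", 
    "lstAcnId": "<copy from browser>",         # placeholder: session-specific value 
    "encrypted_input": "<copy from browser>",  # placeholder: session-specific value 
    "is_ajax": "1", 
} 

headers = { 
    "content-type": "application/x-www-form-urlencoded", 
    "user-agent": "Mozilla/5.0", 
    "referer": "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid", 
} 

response = requests.post(AJAX_URL, data=payload, headers=headers) 
print(response.status_code) 
print(response.text[:500])    # inspect the returned fragment before parsing it 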


Sorry to bother you, but I do not understand what you are telling me. Could you modify my code so that I can extract the data from the next pages?


Here is the modified code; it does not retrieve any data.

import time 
import io 
import csv 
import requests 
from bs4 import BeautifulSoup 

rows = []
for i in range(1, 101):
    time.sleep(2)
    url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-{0}".format(i)
    response = requests.get(url)
    html = response.text
    soup = BeautifulSoup(html, 'html.parser')

    dealer = soup.findAll('div', {'class': 'srpWrap'})

    for item in dealer:
        try:
            p = item.contents[1].find_all("div", {"class": "_srpttl srpttl fwn wdthFix480 lf"})[0].text
        except (IndexError, AttributeError):
            p = ''
        try:
            d = item.contents[1].find_all("div", {"class": "lf f13 hm10 mb5"})[0].text
        except (IndexError, AttributeError):
            d = ''

        rows.append([p, d])

# write everything once, after all pages have been fetched
with open('project.txt', 'w', encoding="utf-8") as file:
    writer = csv.writer(file)
    for row in rows:
        writer.writerow(row)
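
A quick way to see why nothing comes back is to check what the server actually returns for one of the paged URLs, e.g. whether the srpWrap divs are present in that response at all. The User-Agent header below is only an assumption; some sites serve different markup to clients that do not look like a browser:

import requests 
from bs4 import BeautifulSoup 

url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-2" 
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}) 
print(response.status_code, len(response.text)) 

soup = BeautifulSoup(response.text, "html.parser") 
# how many result cards does the returned HTML actually contain? 
print(len(soup.findAll('div', {'class': 'srpWrap'}))) 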

Try this. It will give you the different property names from page 1 to page 3.

import requests 
from bs4 import BeautifulSoup 

base_url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-{0}"
for url in [base_url.format(i) for i in range(1, 4)]:
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    # each listing title is an anchor whose id starts with "desc_"
    for title in soup.select("a[id^=desc_]"):
        print(title.text.strip())
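
If the goal is still to write the results to project.txt as in the original code, the snippet above can be extended along these lines (same URL pattern and selector; only the CSV writing is added):

import csv 
import requests 
from bs4 import BeautifulSoup 

base_url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-{0}"
rows = []

for url in [base_url.format(i) for i in range(1, 4)]:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for title in soup.select("a[id^=desc_]"):
        rows.append([title.text.strip()])

# one title per row, as in the original project.txt output
with open('project.txt', 'w', encoding="utf-8", newline='') as f:
    csv.writer(f).writerows(rows)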