2017-03-04

I am using the code below to download publicly available PDF documents and save them to individual files. The code pulls the PDF links from a file I load, then downloads each document to a specified folder. A sample link looks like this:

http://askebsa.dol.gov/BulkFOIARequest/Listings.aspx/GetImage?ack_id=20151008144858P040036764801001&year=2014

It saves each PDF file under a name like this:

GetImage?ack_id=20151008144858P040036764801001&year=2014.pdf

I would like the downloaded name to contain only the unique ID number, so it looks like this: 20151008144858P040036764801001.pdf

This code was given to me by a more advanced coder whom I can no longer reach, and I am struggling to figure out how to change the naming.

I have tried editing the few lines of the code that I thought would change the name, but I cannot get it to work. I tried adjusting:

out_name=str_c(base %>% str_extract("[^/]*.$"), tail, ".pdf")

mutate_each(funs(. %>% str_replace("^.*(?=\\?)", "")), link, facsimile_link) %>%

I am hoping someone more experienced can find and insert the right code so that I can save the PDF documents under just the ID number. Thanks for your help, R community.
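For context, the file name is built by the `out_name=` argument inside `DownLink` in the code below. A minimal sketch of pulling just the ID out of the query-string `tail` (assuming every tail looks like the sample link above) could be:

```r
# Example query-string "tail" as stored in the index (taken from the sample link)
tail <- "?ack_id=20151008144858P040036764801001&year=2014"

# Keep only the characters between "ack_id=" and the next "&"
id <- sub("^.*ack_id=([^&]+).*$", "\\1", tail)

id
# "20151008144858P040036764801001"
```

Note that `DownFile` below appends the `.pdf` extension itself, so only the bare ID needs to be passed as `out_name`.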

# General 
# ------------------------------------------------------------------------------ 

Create <- function(
    var_name, # (character) name of the variable to assign to. 
    expr # (character) the expression to be parsed and evaluated for assignment. 
) { 

    # If a variable `var_name` does not exist then an expression `expr` is 
    # evaluated and assigned to it. 

    # If the variable exists, then do nothing: 
    if(exists(var_name)) {return()} 

    # Evaluate expression: 
    parse(text=expr) %>% 
    eval %>% 
    # Assign to variable in global environment: 
    assign(x=var_name, value=., envir=globalenv()) 
} 



# Indices 
# ------------------------------------------------------------------------------ 

AnnualIxUrls <- function(
    base_url=ix_base, # (character) base URL 
    years=annual_ix # (integer) years with annual index files 
) { 

    # Create annual index URLs. 
    file.path(base_url, "YEARLY_BY_PLAN_YEAR", str_c(years, ".zip")) 
} 

MonthlyIxUrls <- function(
    base_url=ix_base, # (character) base URL 
    years=monthly_ix # (integer) years with monthly index files 
) { 

    # Create monthly index URLs. 
    file.path(base_url, "MONTHLY", years, str_c(years, "-", month.name, ".zip")) 
} 

IxDown <- function() { 
    # Download all the index files (as ZIP files). 
    c(AnnualIxUrls(), MonthlyIxUrls()) %>% 
    llply(.progress="text", DownFile, di=ix_dir) 
} 

# Unzip all the index files: 
IxUnzip <- . %>% {list.files(ix_dir, ".zip$", full.names=T) %>% 
    llply(.progress="text", unzip, exdir=ix_dir)} 

IxRead <- . %>% # Read all the index files into one data frame 
    {list.files(ix_dir, ".txt$", full.names=T)} %>% 
    ldply(.parallel=T, IxLoad) %T>% 
    # Replace empty strings with NAs: 
    {.$link[.$link == ""] <- NA} %T>% 
    {.$facsimile_link[.$facsimile_link == ""] <- NA} %>% 
    # Remove URL headers from links: 
    mutate_each(funs(. %>% str_replace("^.*(?=\\?)", "")), link, facsimile_link) %>% 
    tbl_df 

IxLoad <- function(
    pat, #(character) input file path 
    nm=in_colnames #(character) index column names to use 
) { 

    # Loads the index file into a data frame. 

    fread(pat, data.table=F, sep="|") %>% 
    setNames(nm) %>% 
    tbl_df 
} 


# Images 
# ------------------------------------------------------------------------------ 

Link <- . %$% {str_c(link_base, "?dln=", ack_id, "&year=", filing_year)} 

DownLink <- function(
    base, #(character) 
    tail #(character) 
) { 
    if(is.na(tail)) {return(NA)} 
    DownFile(url=str_c(base, tail), di=pdf_dir, 
    out_name=str_c(base %>% str_extract("[^/]*.$"), tail, ".pdf") 
) 
} 


DlRow <- . %$% { 
    DownLink(link_base, link) 
    DownLink(facs_base, facsimile_link) 
    TRUE 
} 

DlRows <- . %>% adply(.margins=1, .fun=DlRow, .progress="text") 


# General 
# ------------------------------------------------------------------------------ 

DownFile <- function(
    url, # (character) 
    di, # (character) output directory. 
    out_name=NA # (character) output file name. 
) { 

    # Downloads and saves a file from the DOL site. 

    if(is.na(out_name)) {out_name <- str_extract(url, "[^/]*$")} 

    # Set up a CURL handle: 
    curl <- getCurlHandle() 

    # Add options to CURL handle (cookie and to follow redirects): 
    curlSetOpt(
    cookiefile=file.path(in_dir, cookie_file), 
    curl=curl, 
    followLocation=T 
) 

    # Download the binary data: 
    getBinaryURL(url, curl=curl) %>% 
    # Save the binary data: 
    writeBin(file.path(di, str_c(out_name, ".pdf"))) 
} 


ProcessIndex <- function(
    i=LoadIndex() #(data frame) the data loaded from index file 
) { 

    # Processes the index: downloads each of the documents listed in the file. 

    # Define a functional sequence to apply to every entry: 
    {. %$% { 

    # Download the "link" variable if defined (DownFile appends ".pdf" itself): 
    if(!is.na(link) & str_length(link)) { 
     DownFile(url=link, di=pdf_dir, out_name=ack_id) 
    } 

    # Download the "facsimile_link" variable if defined: 
    if(!is.na(facsimile_link) & str_length(facsimile_link)) { 
     DownFile(url=facsimile_link, di=pdf_dir, out_name=str_c(ack_id, "_facs")) 
    } 

    TRUE 
    }} %>% 
    # Apply this functional sequence to each row in the index data frame: 
    adply(.data=i,.progress="text", .fun=., .margins=1) 
} 


# Sample 
# ------------------------------------------------------------------------------ 

# Download all the sample files. 
SampleDown <- . %$% LINK %>% llply(.progress="text", DownFile, sample_dir) 

Answer


Your original code uses a regular expression to extract a given part of the URL string. I suggest extracting the ID with a string replacement instead:

out_name <- str_replace(url, "^.*ack_id=(.*)&.*$", "\\1")

We match the whole string and create a capture group (the part between the parentheses) containing everything between ack_id= and the &. The last argument of str_replace is the replacement: "\\1" means the first capture group, which is the ID you want to use for the PDF name.
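To make this concrete, here is a small self-contained check against the sample URL from the question, using base R's `sub()` (the equivalent of `str_replace` without loading stringr):

```r
url <- "http://askebsa.dol.gov/BulkFOIARequest/Listings.aspx/GetImage?ack_id=20151008144858P040036764801001&year=2014"

# "\\1" replaces the whole match with the first capture group
out_name <- sub("^.*ack_id=(.*)&.*$", "\\1", url)

out_name
# "20151008144858P040036764801001"
```

Pass this as the `out_name=` argument in `DownLink`; `DownFile` adds the `.pdf` extension when writing the file.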


Works perfectly, thank you so much @zelite!!!!! – richiepop2