
I have a CSV that starts with three columns: a cumulative percentage cost column, a cost column, and a keyword column. The R script below works on small files, but dies completely (never finishes) when I feed it the actual file, which has a million rows. Can you help me make this script more efficient? Token.Count is the column I have been unable to create. Thanks! What is the best and most efficient way to count token words?

# Token Histogram 

# Import CSV data from Report Downloader API Feed 
Mydf <- read.csv("Output_test.csv.csv", sep=",", header = TRUE, stringsAsFactors=FALSE) 

# Limits the dataframe according to the HTT segment 
# Change number to: 
# .99 for big picture 
# .8 for HEAD 
limitor <- Mydf$CumuCost <= .8 
# Uncomment to ONLY measure TORSO 
#limitor <- (Mydf$CumuCost <= .95 & Mydf$CumuCost > .8) 
# Uncomment to ONLY measure TAIL 
#limitor <- (Mydf$CumuCost <= 1 & Mydf$CumuCost > .95) 
# Uncomment to ONLY measure Non-HEAD 
#limitor <- (Mydf$CumuCost <= 1 & Mydf$CumuCost > .8) 

# Creates a column with HTT segmentation labels 
# Creates a dataframe 
HTT <- data.frame() 
# Populates dataframe according to conditions 
HTT <- ifelse(Mydf$CumuCost <= .8,"HEAD",ifelse(Mydf$CumuCost <= .95,"TORSO","TAIL")) 
# Add the column to Mydf and rename it HTT 
Mydf <- transform(Mydf, HTT = HTT) 
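(As an aside, the nested ifelse() can be replaced by a single cut() call; this is only a sketch of an equivalent, not part of the original script: 

# Same three-way segmentation: (-Inf,.8] -> HEAD, (.8,.95] -> TORSO, (.95,Inf] -> TAIL 
Mydf$HTT <- cut(Mydf$CumuCost, breaks = c(-Inf, 0.8, 0.95, Inf), 
                labels = c("HEAD", "TORSO", "TAIL")) 
)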

# Count all KWs in account by using the dimension function 
KWportfolioSize <- dim(Mydf)[1] 

# Percent of portfolio 
PercentofPortfolio <- sum(limitor)/KWportfolioSize 

# Length of Keyword -- TOO SLOW 
# Uses the Tau package 
# My function takes the row number and returns the number of tokens 
library(tau) 
Myfun = function(n) { 
    sum(sapply(Mydf$Keyword.text[n], textcnt, split = "[[:space:][:punct:]]+", method = "string", n = 1L))} 
# Creates a dataframe to hold the results 
Token.Count <- data.frame() 
# Loops until last row and store it in data.frame 
for (i in c(1:dim(Mydf)[1])) {Token.Count <- rbind(Token.Count,Myfun(i))} 
# Add the column to Mydf 
Mydf <- transform(Mydf, Token.Count = Token.Count) 
# Not quite sure why but the column needs renaming in this case 
colnames(Mydf)[dim(Mydf)[2]] <- "Token.Count" 

Can you link to a chunk of sample data? Feel free to make it synthetic, just representative, so people can test their approaches and make sure they are faster. – 2010-12-10 21:12:56


CumuCost      Cost    Keyword.text 
0.004394288   678.5   north+face+outlet 
0.006698245   80.05   kinect sensor 
0.008738991   79.51   x box 360 250 
– datayoda 2010-12-10 22:47:12


'data.frame': 74231 obs. of 5 variables: 
 $ CumuCost    : num 0.00439 0.0067 0.00874 0.01067 0.01258 ... 
 $ Cost        : num 1678 880 780 736 731 ... 
 $ Keyword.text: chr "north + face + outlet" "kinect sensor" "x box 360 250" ... 
 $ HTT         : Factor w/ 1 level "HEAD": 1 1 1 1 1 1 1 1 1 1 ... 
 $ Token.Count : int 3 2 4 1 4 2 2 2 2 1 ... 
– datayoda 2010-12-10 22:51:07

Answer


Never do what you are doing here: growing an object with rbind or cbind inside a loop. Pre-allocate your storage before the loop and fill it in as you go. R has to copy the object and allocate more storage on every iteration of the loop, and that overhead is what is crippling your code.

Create Token.Count with enough rows and columns up front and fill it inside the loop. Something like:

Token.Count <- matrix(ncol = ?, nrow = nrow(Mydf)) 
for (i in seq_len(nrow(Mydf))) { 
    Token.Count[i, ] <- Myfun(i) 
} 
Token.Count <- data.frame(Token.Count) 

Sorry I can't be more specific, but I don't know how many columns Myfun returns.
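Since Myfun as written in the question returns a single number per row (the sum of the token counts), a pre-allocated numeric vector is enough; a minimal sketch, assuming Myfun and Mydf as defined above: 

# One slot per row, filled in place -- no copying on each iteration 
Token.Count <- numeric(nrow(Mydf)) 
for (i in seq_len(nrow(Mydf))) { 
    Token.Count[i] <- Myfun(i) 
} 
Mydf$Token.Count <- Token.Count 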


Update 1: Having taken a look at textcnt, I think you may be able to avoid the loop entirely. You have a data frame something like this:

DF <- data.frame(CumuCost = c(0.00439, 0.0067), Cost = c(1678, 880), 
       Keyword.text = c("north+face+outlet", "kinect sensor"), 
       stringsAsFactors = FALSE) 

If we pull out the keywords and convert them to a list:

keywrds <- with(DF, as.list(Keyword.text)) 
head(keywrds) 

then we can call textcnt recursively on that list to count the words in each list component:

countKeys <- textcnt(keywrds, split = "[[:space:][:punct:]]+", method = "string", 
        n = 1L, recursive = TRUE) 
head(countKeys) 

The above is almost what you had, except I have added recursive = TRUE so that each input vector is processed separately. The last step is to sapply the sum function over countKeys to get the number of words:

> sapply(countKeys, sum) 
[1] 3 2 

This seems to be what you were trying to achieve with your loop and function. Have I got that right?
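Put together and applied to the full data frame, the loop-free version would be something like this sketch (assuming Mydf as read in at the top of the question): 

library(tau) 
# Count the tokens of every keyword in one pass -- no loop, no rbind 
Mydf$Token.Count <- sapply(textcnt(as.list(Mydf$Keyword.text), 
                                   split = "[[:space:][:punct:]]+", 
                                   method = "string", n = 1L, 
                                   recursive = TRUE), 
                           sum) 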


Update 2: OK, if fixing the pre-allocation problem or using the textcnt approach is still not as fast as you would like, we can look at other ways of counting words. You quite likely don't need all the functionality of textcnt for what you want to do. [I can't check that the solution below works for all your data, but it is much faster.]

One potential solution is to split the Keyword.text vector into words using the built-in strsplit function with the same split regex as above. For example, on just the first element of keywrds:

> length(unlist(strsplit(keywrds[[1]], split = "[[:space:][:punct:]]+"))) 
[1] 3 

To make use of this idea, it is probably easier to wrap it in a small function:

fooFun <- function(x) { 
    length(unlist(strsplit(x, split = "[[:space:][:punct:]]+"), 
        use.names = FALSE, recursive = FALSE)) 
} 

which we can then apply over the keywrds list:

> sapply(keywrds, fooFun) 
[1] 3 2 

For this simple example data set we get the same result. What about computation time? First the textcnt solution, combining the two steps from Update 1 into a single call:

> system.time(replicate(10000, sapply(textcnt(keywrds, 
+          split = "[[:space:][:punct:]]+", 
+          method = "string", n = 1L, 
+          recursive = TRUE), sum))) 
    user system elapsed 
    4.165 0.026 4.285 

and then the fooFun solution from Update 2: 

> system.time(replicate(10000, sapply(keywrds, fooFun))) 

So even for this small sample, the call to textcnt involves considerable overhead; whether that difference holds up when the two approaches are applied to the full data set remains to be seen.

Finally, note that the strsplit approach can be vectorised to work directly on the Keyword.text vector in DF:

> sapply(strsplit(DF$Keyword.text, split = "[[:space:][:punct:]]+"), length) 
[1] 3 2 

This gives the same result as the other two approaches, and is slightly faster than the non-vectorised use of strsplit:

> system.time(replicate(10000, sapply(strsplit(DF$Keyword.text, 
+        split = "[[:space:][:punct:]]+"), length))) 
    user system elapsed 
    0.732 0.001 0.734 

Are these any faster on your full data set?
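If they are, dropping the vectorised version into your script is a one-liner; again just a sketch, assuming Mydf from the question: 

# Split every keyword on whitespace/punctuation and count the pieces 
Mydf$Token.Count <- sapply(strsplit(Mydf$Keyword.text, 
                                    split = "[[:space:][:punct:]]+"), 
                           length) 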

Minor update: Replicating DF to give 130 rows of data (with keywrds2 built from DF2 in the same way keywrds was built from DF) and timing the three approaches shows that the last one, the vectorised strsplit(), scales much better:

> DF2 <- rbind(DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF) 
> dim(DF2) 
[1] 130 3 
> system.time(replicate(10000, sapply(textcnt(keywrds2, split = "[[:space:][:punct:]]+", method = "string", n = 1L, recursive = TRUE), sum))) 
    user system elapsed 
238.266 1.790 241.404 
> system.time(replicate(10000, sapply(keywrds2, fooFun))) 
    user system elapsed 
28.405 0.007 28.511 
> system.time(replicate(10000, sapply(strsplit(DF2$Keyword.text,split = "[[:space:][:punct:]]+"), length))) 
    user system elapsed 
    7.497 0.011 7.528 
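Incidentally, the long rbind() call above can be written more compactly; a sketch producing the same DF2, plus the keywrds2 list assumed in the timings: 

# 65 copies of the 2-row DF -> 130 rows, same result as the explicit rbind 
DF2 <- do.call(rbind, rep(list(DF), 65)) 
keywrds2 <- with(DF2, as.list(Keyword.text)) 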

Not lightning fast, but it works fine. Thanks! – datayoda 2010-12-11 00:21:18