I've just started using the tm package in R and can't seem to get past this problem, even though my tokenizer functions appear to work correctly: TermDocumentMatrix in R only creates 1-grams.
library(tm)     # TermDocumentMatrix, findFreqTerms
library(RWeka)  # NGramTokenizer, Weka_control

uniTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=1, max=1))
biTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=2, max=2))
triTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=3, max=3))
uniTDM <- TermDocumentMatrix(corpus, control=list(tokenize = uniTokenizer))
biTDM <- TermDocumentMatrix(corpus, control=list(tokenize = biTokenizer))
triTDM <- TermDocumentMatrix(corpus, control=list(tokenize = triTokenizer))
But when I try to pull 2-grams from biTDM, only 1-grams come out...
findFreqTerms(biTDM, 50)
[1] "after" "and" "most" "the" "were" "years" "love"
[8] "you" "all" "also" "been" "did" "from" "get"
Meanwhile, the 2-gram tokenizer itself seems to be working fine:
x <- biTokenizer(corpus)
head(x)
[1] "c in" "in the" "the years"
[4] "years thereafter" "thereafter most" "most of"
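For context, a commonly reported cause of this symptom (an assumption on my part, since the question doesn't show how `corpus` was built) is that `Corpus()` in recent versions of tm returns a `SimpleCorpus`, which silently ignores a custom `tokenize` function. A frequently suggested sketch of a fix is to build the corpus explicitly with `VCorpus()`:

```r
library(tm)
library(RWeka)

# `docs` is a placeholder character vector of documents (not from the question)
docs <- c("in the years thereafter most of them were gone")

# VCorpus (unlike the SimpleCorpus that Corpus() may return) honors a
# custom tokenizer passed via the control list
corpus <- VCorpus(VectorSource(docs))

biTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
biTDM <- TermDocumentMatrix(corpus, control = list(tokenize = biTokenizer))

# With a VCorpus, the matrix terms should now be 2-grams
Terms(biTDM)
```

This is only a sketch under the assumption above; if `corpus` was already a `VCorpus`, the cause lies elsewhere.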
Including a [minimal reproducible example](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example) in your question will increase your chances of getting an answer. – jsb