
Answer


Take a look at the documentation for sklearn.decomposition.LatentDirichletAllocation.components_:

components_: array, [n_topics, n_features]

Topic-word distribution. components_[i, j] represents word j in topic i.
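Note that these values are unnormalized pseudo-counts rather than probabilities; per the scikit-learn docs, dividing each row of components_ by its sum gives the actual topic-word distribution. A minimal sketch, assuming a fitted LatentDirichletAllocation instance named model:

import numpy as np

# each row of components_ is an unnormalized pseudo-count vector; dividing
# by the row sum turns it into a probability distribution over the vocabulary
topic_word_dist = model.components_ / model.components_.sum(axis=1)[:, np.newaxis]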

Here is a small example:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

data = ['blah blah foo bar', 'foo foo foo foo bar', 'bar bar bar bar foo',
        'foo bar bar bar baz foo', 'foo foo foo bar baz', 'blah banana',
        'cookies candy', 'more text please', 'hey there are more words here',
        'bananas', 'i am a real boy', 'boy', 'girl']

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(data)

# in scikit-learn >= 1.0, use vectorizer.get_feature_names_out() instead
vocab = vectorizer.get_feature_names()

n_top_words = 5
k = 2

# in scikit-learn >= 0.19, the n_topics parameter is named n_components
model = LatentDirichletAllocation(n_topics=k, random_state=100)

id_topic = model.fit_transform(X)

topic_words = {}

for topic, comp in enumerate(model.components_):
    # np.argsort(comp) returns the indices that would sort comp in
    # ascending order, e.g. np.argsort([3, 7, 1, 0, 3, 6]) -> [3, 2, 0, 4, 5, 1];
    # reversing with [::-1] puts them in descending order, so the first
    # n_top_words entries are the indices of the words most relevant to
    # this topic
    word_idx = np.argsort(comp)[::-1][:n_top_words]

    # store the words most relevant to the topic
    topic_words[topic] = [vocab[i] for i in word_idx]

Check out the results:

for topic, words in topic_words.items(): 
    print('Topic: %d' % topic) 
    print(' %s' % ', '.join(words)) 

Topic: 0 
    more, blah, here, hey, words 
Topic: 1 
    foo, bar, blah, baz, boy 

Obviously you should try this on a larger body of text, but this is one way to get the most informative words for a given number of topics.
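As a follow-up, fit_transform also returns the per-document topic distribution (id_topic above), so you can assign each document to its dominant topic. A minimal sketch, continuing from the variables defined in the example:

# id_topic has shape [n_documents, k]; each row is that document's
# distribution over the k topics, so argmax picks the dominant topic
for doc, topic in zip(data, id_topic.argmax(axis=1)):
    print('%s -> Topic %d' % (doc, topic))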