
Spark 1.4 MLlib LDA topicDistributions() returns the wrong number of documents

I have an LDA model running on a corpus of 12,054 documents, with a vocabulary of 9,681 words and 60 clusters. I am trying to get the topic distribution over the documents by calling .topicDistributions() or .javaTopicDistributions(). Both methods return an RDD of topic distributions over the documents. As I understand it, the number of rows should be the number of documents and the number of columns should be the number of topics. But when I count the RDD after calling topicDistributions(), I get a count of 11,665 (fewer than the number of documents passed to the model). Each document does have the correct number of topics (60). Why is this?

The demo is here: http://spark.apache.org/docs/latest/mllib-clustering.html

And the docs: https://spark.apache.org/docs/1.4.0/api/java/org/apache/spark/mllib/clustering/DistributedLDAModel.html

The code is below:


// Imports needed by this snippet:
// import scala.Tuple2;
// import org.apache.spark.api.java.JavaPairRDD;
// import org.apache.spark.api.java.JavaRDD;
// import org.apache.spark.api.java.function.Function;
// import org.apache.spark.mllib.clustering.DistributedLDAModel;
// import org.apache.spark.mllib.clustering.LDA;
// import org.apache.spark.mllib.clustering.LDAModel;
// import org.apache.spark.mllib.linalg.Vector;
// import org.apache.spark.mllib.linalg.Vectors;
// import org.apache.spark.rdd.RDD;

// Parse tf vectors from the corpus. `data` is a JavaRDD<String> loaded elsewhere;
// each line is a bracketed, comma-separated list of term counts.
JavaRDD<Vector> parsedData = data.map(
    new Function<String, Vector>() {
        public Vector call(String s) {
            s = s.substring(1, s.length() - 1); // strip the surrounding brackets
            String[] sarray = s.trim().split(",");
            double[] values = new double[sarray.length];
            for (int i = 0; i < sarray.length; i++) {
                values[i] = Double.parseDouble(sarray[i]);
            }
            return Vectors.dense(values);
        }
    }
);

System.out.println(parsedData.count()); // prints 12,054

// Index documents with unique IDs 

JavaPairRDD<Long, Vector> corpus = JavaPairRDD.fromJavaRDD(parsedData.zipWithIndex().map(
    new Function<Tuple2<Vector, Long>, Tuple2<Long, Vector>>() {
        public Tuple2<Long, Vector> call(Tuple2<Vector, Long> doc_id) {
            return doc_id.swap();
        }
    }
));

System.out.println(corpus.count()); // prints 12,054

LDA lda = new LDA();
LDAModel ldaModel = lda.setK(k.intValue()).run(corpus); // k is 60 in this run

RDD<scala.Tuple2<Object, Vector>> topic_dist_over_docs = ((DistributedLDAModel) ldaModel).topicDistributions();
System.out.println(topic_dist_over_docs.count()); // prints 11,655 ???

JavaPairRDD<Long, Vector> topic_dist_over_docs2 = ((DistributedLDAModel) ldaModel).javaTopicDistributions();
System.out.println(topic_dist_over_docs2.count()); // also prints 11,655 ???
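
To see which documents are being dropped, the two key sets can be compared directly. This is a minimal diagnostic sketch reusing corpus and topic_dist_over_docs2 from above; subtractByKey and the manual nonzero count are just one way to do it:

// Diagnostic sketch: which document IDs from `corpus` got no topic distribution?
JavaPairRDD<Long, Vector> missing = corpus.subtractByKey(topic_dist_over_docs2);
System.out.println(missing.count()); // should equal 12,054 minus the count above

// Inspect a few of the dropped documents, e.g. to check whether they are empty.
for (Tuple2<Long, Vector> doc : missing.take(10)) {
    int nonzeros = 0;
    for (double v : doc._2().toArray()) {
        if (v != 0.0) nonzeros++;
    }
    System.out.println("doc " + doc._1() + ": " + nonzeros + " nonzero term counts");
}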

Answer


This appears to be a bug in topicDistributions() in Spark 1.4. After updating to the experimental build of Spark 1.5, I was able to resolve the problem.
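
For anyone stuck on 1.4: if the diagnostic in the question shows the dropped documents are empty (all-zero term-count vectors), filtering them out before training is a cheap guard. A minimal sketch, assuming the corpus pair RDD from the question; the empty-document explanation is an assumption, not confirmed:

// Sketch (assumption: all-zero documents are the ones being dropped):
// keep only documents that contain at least one nonzero term count.
JavaPairRDD<Long, Vector> nonEmptyCorpus = corpus.filter(
    new Function<Tuple2<Long, Vector>, Boolean>() {
        public Boolean call(Tuple2<Long, Vector> doc) {
            for (double v : doc._2().toArray()) {
                if (v != 0.0) return true;
            }
            return false;
        }
    }
);
LDAModel model = new LDA().setK(60).run(nonEmptyCorpus);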