2017-08-17

I noticed that computing the model's accuracy takes almost as long as building the model itself, which doesn't seem right. I have a cluster of six virtual machines. The most expensive step is the first iteration of the `for item in range(numClasses)` loop. What RDD operations are happening behind the scenes here? Why does the `.precision` method of the `MulticlassMetrics` object take so much time?

Code:

%pyspark
from pyspark import StorageLevel
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import DecisionTree
from pyspark.mllib.evaluation import MulticlassMetrics
from timeit import default_timer

def decision_tree(train, test, numClasses, CatFeatInf):
    ref = default_timer()
    training_data = train.rdd.map(lambda row: LabeledPoint(row[-1], row[:-1])).persist(StorageLevel.MEMORY_ONLY)
    testing_data = test.rdd.map(lambda row: LabeledPoint(row[-1], row[:-1])).persist(StorageLevel.MEMORY_ONLY)
    print('transformed into dense data in: %.3f seconds' % (default_timer() - ref))

    ref = default_timer()
    model = DecisionTree.trainClassifier(training_data,
                                         numClasses=numClasses,
                                         maxDepth=7,
                                         categoricalFeaturesInfo=CatFeatInf,
                                         impurity='entropy',
                                         maxBins=max(CatFeatInf.values()))
    print('model created in: %.3f seconds' % (default_timer() - ref))

    ref = default_timer()
    predictions_and_labels = model.predict(testing_data.map(lambda r: r.features)) \
                                  .zip(testing_data.map(lambda r: r.label))
    print('predictions made in: %.3f seconds' % (default_timer() - ref))

    ref = default_timer()
    metrics = MulticlassMetrics(predictions_and_labels)

    res = {}
    for item in range(numClasses):
        try:
            res[item] = metrics.precision(item)
        except Exception:
            res[item] = 0.0
    print('accuracy calculated in: %.3f seconds' % (default_timer() - ref))
    return res

transformed into dense data in: 0.074 seconds

model created in: 0.095 seconds

accuracy calculated in: 355.276 seconds

predictions made in: 346.497 seconds

Answer


Probably some pending RDD operations are being executed the first time I call metrics.precision(0).
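That is consistent with Spark's lazy evaluation: transformations such as `map` and `zip` (and `model.predict` applied to an RDD) only build a lineage, and no work runs until an action forces it, so the first `metrics.precision(...)` call can end up paying for the whole deferred pipeline. A minimal pure-Python sketch of the same effect, using a generator as a stand-in for a lazy RDD (the names `slow_transform`, `pipeline`, and the sleep-based cost are illustrative, not Spark API):

```python
import time

def slow_transform(xs):
    # Lazy, like a Spark transformation: creating this generator does no work.
    for x in xs:
        time.sleep(0.005)  # simulate per-record processing cost
        yield x * 2

start = time.time()
pipeline = slow_transform(range(50))  # returns almost instantly
build_time = time.time() - start

start = time.time()
result = list(pipeline)  # the "action": all deferred work happens here
action_time = time.time() - start

print('build: %.3f s, action: %.3f s' % (build_time, action_time))
# Nearly all the time is charged to the action, just as the first
# metrics.precision(...) call triggers the full predict/zip pipeline.
```

To attribute time to the right stage in the posted code, one option is to materialize the pairs before starting the metrics timer, e.g. `predictions_and_labels.persist()` followed by `predictions_and_labels.count()`; the prediction timer then reflects the actual prediction cost.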