2017-01-30 81 views
0

I have a labelled intrusion-detection dataset that I want to use to test different supervised machine learning techniques. Creating an RDD: too many fields => case class for the RDD

So here is part of my code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

object parser_dataset { 

    val conf = new SparkConf() 
     .setMaster("local[2]") 
     .setAppName("kdd") 
     .set("spark.executor.memory", "8g") 
     conf.registerKryoClasses(Array(
     classOf[Array[Any]], 
     classOf[Array[scala.Tuple3[Int, Int, Int]]], 
     classOf[String], 
     classOf[Any] 
    )) 
    val context = new SparkContext(conf) 

    def load(file: String): RDD[(Int, String, String,String,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Int,Double,Double,Double,Double,Double,Double,Double, Int, Int,Double, Double, Double, Double, Double, Double, Double, Double, String)] = { 

     val data = context.textFile(file) 

     val res = data.map(x => { 

      val s = x.split(",") 
     (s(0).toInt, s(1), s(2), s(3), s(4).toInt, s(5).toInt, s(6).toInt, s(7).toInt, s(8).toInt, s(9).toInt, s(10).toInt, s(11).toInt, s(12).toInt, s(13).toInt, s(14).toInt, s(15).toInt, s(16).toInt, s(17).toInt, s(18).toInt, s(19).toInt, s(20).toInt, s(21).toInt, s(22).toInt, s(23).toInt, s(24).toDouble, s(25).toDouble, s(26).toDouble, s(27).toDouble, s(28).toDouble, s(29).toDouble, s(30).toDouble, s(31).toInt, s(32).toInt, s(33).toDouble, s(34).toDouble, s(35).toDouble, s(36).toDouble, s(37).toDouble, s(38).toDouble, s(39).toDouble, s(40).toDouble, s(41))  
     }) 
     .persist(StorageLevel.MEMORY_AND_DISK) 
    res 
    } 


    def main(args: Array[String]) { 
    val data = this.load("/home/hvfd8529/Datasets/KDDCup99/kddcup.data_10_percent_corrected") 

    data.collect.foreach(println) 
    val distinctData = data.distinct() // distinct() returns a new RDD; assign it, or the call is a no-op 

    } 

}

This is not originally my code; it was given to me and I only modified some parts (especially the RDD and the split part), and I am a newbie in Scala and Spark :)

EDIT: So I added the case classes above my load function, like this:

case class BasicFeatures(duration:Int, protocol_type:String, service:String, flag:String, src_bytes:Int, dst_bytes:Int, land:Int, wrong_fragment:Int, urgent:Int) 

case class ContentFeatures(hot:Int, num_failed_logins:Int, logged_in:Int, num_compromised:Int, root_shell:Int, su_attempted:Int, num_root:Int, num_file_creations:Int, num_shells:Int, num_access_files:Int, num_outbound_cmds:Int, is_host_login:Int, is_guest_login:Int) 

case class TrafficFeatures(count:Int, srv_count:Int, serror_rate:Double, srv_error_rate:Double, rerror_rate:Double, srv_rerror_rate:Double, same_srv_rate:Double, diff_srv_rate:Double, srv_diff_host_rate:Double, dst_host_count:Int, dst_host_srv_count:Int, dst_host_same_srv_rate:Double, dst_host_diff_srv_rate:Double, dst_host_same_src_port_rate:Double, dst_host_srv_diff_host_rate:Double, dst_host_serror_rate:Double, dst_host_srv_serror_rate:Double, dst_host_rerror_rate:Double, dst_host_srv_rerror_rate:Double, attack_type:String) 

But now I am confused: how can I use these to solve my problem, since I still need an RDD where one feature = one field? Here is one line of the file I want to parse:

0,tcp,ftp_data,SF,491,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,150,25,0.17,0.03,0.17,0.00,0.00,0.00,0.05,0.00,normal,20 
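As a quick illustration (a hypothetical snippet, not part of the original code), splitting that sample line shows why a single flat tuple cannot represent it: the record has far more than 22 comma-separated fields.

```scala
object FieldCount {
  // Sample record quoted above, from kddcup.data_10_percent_corrected.
  val line: String = "0,tcp,ftp_data,SF,491,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,150,25,0.17,0.03,0.17,0.00,0.00,0.00,0.05,0.00,normal,20"

  def main(args: Array[String]): Unit = {
    val fields = line.split(",")
    // Prints the field count, which is well over Scala's 22-element tuple limit.
    println(s"fields: ${fields.length}")
  }
}
```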
+0

To be honest, in my opinion using tuples with 22 fields is very bad practice; a tuple doesn't describe anything. Setting aside that problem, consider using your own classes, which makes sense. If you have to modify your code a year from now, you'll say "thank you" ;) –

+1

I agree with @T.Gawęda. You'll also want to read [this](http://stackoverflow.com/questions/20258417/how-to-get-around-the-scala-case-class-limit-of-22-fields)! – eliasah

+1

Possible duplicate of [How to get around the Scala case class limit of 22 fields?](http://stackoverflow.com/questions/20258417/how-to-get-around-the-scala-case-class-limit-of-22-fields) –

Answer

0

The maximum tuple size Scala supports is 22, and Scala functions are likewise limited to 22 parameters. Therefore you cannot create a tuple with more than 22 elements, which is why the 42-element tuple returned by `load` does not compile.
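A common workaround (a sketch, not the original poster's final code; `Connection`, `Parser`, and `parseLine` are hypothetical names) is to nest the three case classes from the edit inside a parent case class, so that no single constructor or tuple ever exceeds 22 parameters:

```scala
// Sketch: group the 40+ columns into the three case classes from the
// question, each well under the 22-parameter limit, and wrap them in a
// parent case class. `Connection` and `parseLine` are illustrative names.
case class BasicFeatures(duration: Int, protocol_type: String, service: String,
  flag: String, src_bytes: Int, dst_bytes: Int, land: Int,
  wrong_fragment: Int, urgent: Int)

case class ContentFeatures(hot: Int, num_failed_logins: Int, logged_in: Int,
  num_compromised: Int, root_shell: Int, su_attempted: Int, num_root: Int,
  num_file_creations: Int, num_shells: Int, num_access_files: Int,
  num_outbound_cmds: Int, is_host_login: Int, is_guest_login: Int)

case class TrafficFeatures(count: Int, srv_count: Int, serror_rate: Double,
  srv_error_rate: Double, rerror_rate: Double, srv_rerror_rate: Double,
  same_srv_rate: Double, diff_srv_rate: Double, srv_diff_host_rate: Double,
  dst_host_count: Int, dst_host_srv_count: Int, dst_host_same_srv_rate: Double,
  dst_host_diff_srv_rate: Double, dst_host_same_src_port_rate: Double,
  dst_host_srv_diff_host_rate: Double, dst_host_serror_rate: Double,
  dst_host_srv_serror_rate: Double, dst_host_rerror_rate: Double,
  dst_host_srv_rerror_rate: Double, attack_type: String)

case class Connection(basic: BasicFeatures, content: ContentFeatures,
  traffic: TrafficFeatures)

object Parser {
  // Same field indices and conversions as the original load function,
  // but grouped into named case classes instead of one giant tuple.
  def parseLine(line: String): Connection = {
    val s = line.split(",")
    val basic = BasicFeatures(s(0).toInt, s(1), s(2), s(3), s(4).toInt,
      s(5).toInt, s(6).toInt, s(7).toInt, s(8).toInt)
    val content = ContentFeatures(s(9).toInt, s(10).toInt, s(11).toInt,
      s(12).toInt, s(13).toInt, s(14).toInt, s(15).toInt, s(16).toInt,
      s(17).toInt, s(18).toInt, s(19).toInt, s(20).toInt, s(21).toInt)
    val traffic = TrafficFeatures(s(22).toInt, s(23).toInt, s(24).toDouble,
      s(25).toDouble, s(26).toDouble, s(27).toDouble, s(28).toDouble,
      s(29).toDouble, s(30).toDouble, s(31).toInt, s(32).toInt,
      s(33).toDouble, s(34).toDouble, s(35).toDouble, s(36).toDouble,
      s(37).toDouble, s(38).toDouble, s(39).toDouble, s(40).toDouble, s(41))
    Connection(basic, content, traffic)
  }
}
```

With this in place, `load` can return `RDD[Connection]` via `data.map(Parser.parseLine)`, and every feature remains an individually named field (e.g. `conn.traffic.attack_type`).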