In my recent big-data project I needed to use Spark. The first requirement for data matching with Spark's Java RDD API is as follows:
We have two sets of data coming from different data sources, say one from a flat file and the other from HDFS.
The datasets may or may not have columns in common, but we have mapping rules in hand, for example:

function1(data1.columnA) == function2(data2.columnB)
I tried to achieve this by executing a foreach over one RDD inside a transformation of the other, but Spark does not allow that:
org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
    at org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$sc(RDD.scala:87)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.foreach(RDD.scala:910)
    at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:332)
    at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:46)
    at com.pramod.engine.DataMatchingEngine.lambda$execute$4e658232$1(DataMatchingEngine.java:44)
    at com.pramod.engine.DataMatchingEngine$$Lambda$9/1172080526.call(Unknown Source)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
Please help me achieve this in the best possible way.
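One common way around SPARK-5063 is to avoid nesting RDD operations entirely: key each RDD by the result of its mapping function and then join the two pair RDDs. The sketch below is a minimal, hypothetical illustration of that idea; the sample data, the comma-separated row format, and the use of toLowerCase as a stand-in for function1/function2 are all assumptions, not part of the original question.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class DataMatchingSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("data-matching-sketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Hypothetical stand-ins for the two sources (flat file and HDFS);
            // in practice these would come from sc.textFile(...).
            JavaRDD<String> data1 = sc.parallelize(Arrays.asList("Alice,1", "Bob,2"));
            JavaRDD<String> data2 = sc.parallelize(Arrays.asList("ALICE,x", "CAROL,y"));

            // Key each RDD by its mapping function, so that
            // function1(data1.columnA) and function2(data2.columnB)
            // become the join keys (here both are a lower-casing of column 0).
            JavaPairRDD<String, String> keyed1 =
                data1.mapToPair(row -> new Tuple2<>(row.split(",")[0].toLowerCase(), row));
            JavaPairRDD<String, String> keyed2 =
                data2.mapToPair(row -> new Tuple2<>(row.split(",")[0].toLowerCase(), row));

            // join pairs up records whose derived keys are equal,
            // with no RDD operation nested inside another.
            JavaPairRDD<String, Tuple2<String, String>> matched = keyed1.join(keyed2);
            matched.collect().forEach(System.out::println);
        }
    }
}
```

With the sample data above, only the "Alice"/"ALICE" records share a key, so the join would emit a single matched pair; unmatched records could instead be kept with leftOuterJoin or fullOuterJoin if the matching report needs them.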
I think you need to provide more details (at least I don't understand). What exactly do you need to do? –