
UDF on a text column of a PySpark DataFrame

I'm trying to do some NLP text cleaning of a Unicode column in a PySpark DataFrame. I've tried it in Spark 1.3, 1.5, and 1.6 and can't get it to work for the life of me. I've also tried both Python 2.7 and Python 3.4.

I've created a very simple UDF, shown below, that should just return a string into a new column for every record. Other functions will manipulate the text and then return the changed text into a new column.

import pyspark 
from pyspark.sql import SQLContext 
from pyspark.sql.types import * 
from pyspark.sql.functions import udf 

# Trivial UDF: returns the same string for every record
def dummy_function(data_str): 
    cleaned_str = 'dummyData' 
    return cleaned_str 

dummy_function_udf = udf(dummy_function, StringType()) 

Some sample data can be unzipped from here.

Below is the code I use to import the data and then apply the UDF.

# Load a text file and convert each line to a Row. 
lines = sc.textFile("classified_tweets.txt") 
parts = lines.map(lambda l: l.split("\t")) 
training = parts.map(lambda p: (p[0], p[1])) 

# Create dataframe 
training_df = sqlContext.createDataFrame(training, ["tweet", "classification"]) 

training_df.show(5) 
+--------------------+--------------+ 
|    tweet|classification| 
+--------------------+--------------+ 
|rt @jiffyclub: wi...|  python| 
|rt @arnicas: ipyt...|  python| 
|rt @treycausey: i...|  python| 
|what's my best op...|  python| 
|rt @raymondh: #py...|  python| 
+--------------------+--------------+ 

# Apply UDF function 
df = training_df.withColumn("dummy", dummy_function_udf(training_df['tweet'])) 
df.show(5) 

I get the following error when I run df.show(5). I understand the problem most likely doesn't stem from show() itself, but the traceback doesn't give me much to go on.

--------------------------------------------------------------------------- 
Py4JJavaError                             Traceback (most recent call last) 
<ipython-input-19-0b21c233c724> in <module>() 
     1 df = training_df.withColumn("dummy", dummy_function_udf(training_df['tweet'])) 
----> 2 df.show(5) 
/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/dataframe.py in show(self, n, truncate) 
    255   +---+-----+ 
    256   """ 
--> 257   print(self._jdf.showString(n, truncate)) 
    258 
    259  def __repr__(self): 
/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args) 
    811   answer = self.gateway_client.send_command(command) 
    812   return_value = get_return_value(
--> 813    answer, self.gateway_client, self.target_id, self.name) 
    814 
    815   for temp_arg in temp_args: 
/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/utils.py in deco(*a, **kw) 
    43  def deco(*a, **kw): 
    44   try: 
---> 45    return f(*a, **kw) 
    46   except py4j.protocol.Py4JJavaError as e: 
    47    s = e.java_exception.toString() 
/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 
    306     raise Py4JJavaError(
    307      "An error occurred while calling {0}{1}{2}.\n". 
--> 308      format(target_id, ".", name), value) 
    309    else: 
    310     raise Py4JError(
Py4JJavaError: An error occurred while calling o474.showString. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 0.0 in stage 10.0 (TID 10, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last): 
    File "/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main 
    process() 
    File "/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process 
    serializer.dump_stream(func(split_index, iterator), outfile) 
    File "/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream 
    vs = list(itertools.islice(iterator, batch)) 
    File "<ipython-input-12-4bc30395aac5>", line 4, in <lambda> 
IndexError: list index out of range 

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.next(PythonRDD.scala:129) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.next(PythonRDD.scala:125) 
    at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43) 
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:913) 
    at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:929) 
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:968) 
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:972) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452) 
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280) 
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1741) 
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858) 
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212) 
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165) 
    at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174) 
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1538) 
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1538) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) 
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2125) 
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1537) 
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1544) 
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1414) 
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1413) 
    at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2138) 
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1413) 
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1495) 
    at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:171) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:497) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last): 
    File "/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main 
    process() 
    File "/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process 
    serializer.dump_stream(func(split_index, iterator), outfile) 
    File "/Users/dreyco676/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream 
    vs = list(itertools.islice(iterator, batch)) 
    File "<ipython-input-12-4bc30395aac5>", line 4, in <lambda> 
IndexError: list index out of range 

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.next(PythonRDD.scala:129) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.next(PythonRDD.scala:125) 
    at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43) 
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:913) 
    at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:929) 
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:968) 
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:972) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452) 
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280) 
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1741) 
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239) 

The actual function I want to apply:

from nltk import pos_tag  # POS tagger used below (import not shown in the original snippet)

def tag_and_remove(data_str): 
    cleaned_str = ' ' 
    # noun tags 
    nn_tags = ['NN', 'NNP', 'NNPS', 'NNS'] 
    # adjectives 
    jj_tags = ['JJ', 'JJR', 'JJS'] 
    # verbs 
    vb_tags = ['VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ'] 
    nltk_tags = nn_tags + jj_tags + vb_tags 

    # break string into 'words' 
    text = data_str.split() 

    # tag the text and keep only words with the right tags 
    tagged_text = pos_tag(text) 
    for tagged_word in tagged_text: 
        if tagged_word[1] in nltk_tags: 
            cleaned_str += tagged_word[0] + ' ' 

    return cleaned_str 


tag_and_remove_udf = udf(tag_and_remove, StringType()) 
+2

Are you sure l.split('\t') returns multiple items? The IndexError probably comes from training = parts.map(...). What does your data look like, and are you sure tabs are used everywhere? – AChampion

+0

Yes, I can confirm the data has two columns. I scrubbed all whitespace other than spaces from the data before putting it into the flat file. I'll put a small sample up above. – dreyco676

+2

You're not splitting on whitespace, only on tabs; l.split() would split on any whitespace. – AChampion
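To make the point in that comment concrete, here is a small plain-Python illustration (the sample lines are made up, not from the actual dataset):

# A line containing a tab splits into two fields; a line without one does not,
# so p[1] raises IndexError for that record.
good_line = "rt @jiffyclub: wide build times...\tpython"
bad_line = "a tweet that is missing its classification field"

good_line.split("\t")   # ['rt @jiffyclub: wide build times...', 'python']
bad_line.split("\t")    # ['a tweet that is missing its classification field']
bad_line.split()        # splits on any whitespace: ['a', 'tweet', 'that', ...]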

Answers

3

Your dataset isn't clean. 985 lines split('\t') to only a single value:

>>> from operator import add 
>>> lines = sc.textFile("classified_tweets.txt") 
>>> parts = lines.map(lambda l: l.split("\t")) 
>>> parts.map(lambda l: (len(l), 1)).reduceByKey(add).collect() 
[(2, 149195), (1, 985)] 
>>> parts.filter(lambda l: len(l) == 1).take(5) 
[['"show me the money!」 at what point do you start trying to monetize your #startup? tweet us with #startuplife.'], 
['a good pitch can mean money in the bank for your #startup. see how body language plays a key role: (via: ajalumnify)'], 
['100+ apps in five years? @2359media did it using microsoft #azure: #azureapps'], 
['does buying better coffee make you a better leader? little things can make a big difference: (via: @jmbrandonbb)'], 
['[email protected] graduates pitched\xa0#homeautomation #startups to #vcs! check out how they celebrated: ']] 

So change your code to:

>>> training = parts.filter(lambda l: len(l) == 2).map(lambda p: (p[0], p[1].strip())) 
>>> training_df = sqlContext.createDataFrame(training, ["tweet", "classification"]) 
>>> df = training_df.withColumn("dummy", dummy_function_udf(training_df['tweet'])) 
>>> df.show(5) 
+--------------------+--------------+---------+ 
|    tweet|classification| dummy| 
+--------------------+--------------+---------+ 
|rt @jiffyclub: wi...|  python|dummyData| 
|rt @arnicas: ipyt...|  python|dummyData| 
|rt @treycausey: i...|  python|dummyData| 
|what's my best op...|  python|dummyData| 
|rt @raymondh: #py...|  python|dummyData| 
+--------------------+--------------+---------+ 
only showing top 5 rows 
+0

Thanks. I learned that show(), if you don't specify N, doesn't necessarily cause a full parse of the data. – dreyco676
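Since show(5) is lazy and only evaluates as much of the data as it needs, one way to surface malformed lines up front is to force a full pass over the RDD first; a minimal sketch along the lines of the answer above (not from the original thread):

# count() evaluates every partition, so malformed lines are found immediately
# instead of surfacing later inside show()/withColumn().
parts = lines.map(lambda l: l.split("\t"))
print(parts.filter(lambda l: len(l) != 2).count())   # number of bad lines
training = parts.filter(lambda l: len(l) == 2).map(lambda p: (p[0], p[1].strip()))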

4

I think you may be misdefining the problem: you've probably simplified your lambda for the purposes of this question, but that hides the real issue.

Your stack trace reads:

File "<ipython-input-12-4bc30395aac5>", line 4, in <lambda> 
IndexError: list index out of range 

When I run this code, it works fine:

import pyspark 
from pyspark.sql import SQLContext 
from pyspark.sql.types import * 
from pyspark.sql.functions import udf 

training_df = sqlContext.sql("select 'foo' as tweet, 'bar' as classification") 

def dummy_function(data_str): 
    cleaned_str = 'dummyData' 
    return cleaned_str 

dummy_function_udf = udf(dummy_function, StringType()) 
df = training_df.withColumn("dummy", dummy_function_udf(training_df['tweet'])) 
df.show() 

+-----+--------------+---------+ 
|tweet|classification| dummy| 
+-----+--------------+---------+ 
| foo|   bar|dummyData| 
+-----+--------------+---------+ 

Are you sure there isn't some other error in your dummy_function_udf? What "real" udf are you using besides this sample version?
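One way to rule that out (a suggestion, not part of the original answer) is to call the real function directly on a sample string before wrapping it in a udf, so any failure shows up as a plain Python traceback instead of a Py4J one:

# If this raises (e.g. missing nltk data for the POS tagger), the problem is in
# the function itself rather than in the Spark UDF machinery.
sample = "rt @jiffyclub: wide build times are the worst"
print(tag_and_remove(sample))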

+0

Thanks very much for your answer. It looks like text data is always evil and will break your parser. I expected any parsing errors to show up in training_df.show(5), but it seems to parse only the first N records if no other transformation is applied. – dreyco676

+0

Thanks for your answer. I have a similar problem. May I follow up and ask what "udf" means? I copied the code into my shell and got the following error: Traceback (most recent call last): File "", line 1, NameError: name 'udf' is not defined – yuqli

+0

User-defined function. –
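For the NameError in the comment above, the likely cause is simply a missing import; the question's own imports cover it:

from pyspark.sql.functions import udf   # udf() wraps a plain Python function as a Spark SQL UDF
from pyspark.sql.types import StringType

def dummy_function(data_str):
    return 'dummyData'

dummy_function_udf = udf(dummy_function, StringType())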