
Converting an RDD to a DataFrame in PySpark

Posted: 2019-09-23 20:18:51


I want to convert my RDD to a DataFrame in PySpark.

My RDD:

[(['abc', '1,2'], 0), (['def', '4,6,7'], 1)]

I would like the RDD as a DataFrame in this form:

Index  Name  Number
0      abc   [1,2]
1      def   [4,6,7]

I tried:

rd2=rd.map(lambda x,y: (y, x[0] , x[1]) ).toDF(["Index", "Name" , "Number"])

But I got this error:

An error occurred while calling
z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 62.0 failed 1 times, most recent failure: Lost task 0.0
in stage 62.0 (TID 88, localhost, executor driver):
org.apache.spark.api.python.PythonException: Traceback (most recent call last):

Can you tell me where I am going wrong?
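
For reference, a minimal illustration of the likely cause (plain Python; the variable names here are illustrative, not from the post): RDD.map hands each element to the function as a single tuple, so a two-argument lambda such as lambda x, y never receives a second argument, fails inside the Python worker, and surfaces as the PythonException above.

# One element of the RDD, as a plain Python tuple
element = (['abc', '1,2'], 0)

two_arg = lambda x, y: (y, x[0], x[1])
# two_arg(element) raises TypeError: missing 1 required positional argument: 'y'

one_arg = lambda x: (x[1], x[0][0], x[0][1])
print(one_arg(element))   # (0, 'abc', '1,2')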

Update:

rd2=rd.map(lambda x: (x[1], x[0][0] , x[0][1]))

I now have an RDD of the following form:

[(0, 'abc', '1,2'), (1, 'def', '4,6,7')]

To convert it to a DataFrame:

rd2.toDF(["Index", "Name" , "Number"])

It still gives me an error:

An error occurred while calling o2271.showString.
: java.lang.IllegalStateException: SparkContext has been shutdown
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)
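
A self-contained sketch of the whole conversion (the app name "rdd_to_df" and variable names are illustrative, not from the post). The second error, "SparkContext has been shutdown", usually means the session was stopped, often after the earlier stage failure, so starting a fresh session and rerunning the corrected code is typically enough. Splitting the comma-separated string also makes Number a list, matching the desired output above.

from pyspark.sql import SparkSession

# Fresh session; the previous one is assumed to have shut down.
spark = SparkSession.builder.appName("rdd_to_df").getOrCreate()

rd = spark.sparkContext.parallelize([(['abc', '1,2'], 0), (['def', '4,6,7'], 1)])

# One-argument lambda: each element arrives as a single (list, index) tuple.
rd2 = rd.map(lambda x: (x[1], x[0][0], x[0][1].split(',')))

df = rd2.toDF(["Index", "Name", "Number"])
df.show()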
