My RDD looks like this:
[((String, String, String), (String, String))]
Sample data:
((10,1,a),(x,3))
((10,2,b),(y,5))
((11,2,b),())
((11,3,c),(z,4))
So, if the second string inside the key is 2 or 3, replace it with 2-3; if it is 1, or if the record looks like the third one (an empty value), drop that record.
So the expected output is:
((10,2-3,b),(y,5))
((11,2-3,c),(z,4))
Answer (score: 0):
Given the input data as:
val rdd = spark.sparkContext.parallelize(Seq(
(("10","1","a"),("x","3")),
(("10","2","b"),("y","5")),
(("11","2","b"),()),
(("11","3","c"),("z","4"))
))
you can do the following to get the desired output:
rdd.filter(x => x._1._2 != "1")   // drop records whose key's second field is "1"
  .filter(x => x._2 != ())        // drop records with an empty value, like the third one
  .map(x => {
    if (x._1._2 == "2" || x._1._2 == "3") ((x._1._1, "2-3", x._1._3), x._2)
    else ((x._1._1, x._1._2, x._1._3), x._2)
  })
Your output will be:
((10,2-3,b),(y,5))
((11,2-3,c),(z,4))
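As an aside, the two filters and the map can be collapsed into a single pattern match with `collect`, which takes a partial function (Spark's `RDD.collect(PartialFunction)` behaves the same way as on plain collections). A minimal sketch on an ordinary Scala `Seq`, using the same sample data:

```scala
val data = Seq(
  (("10", "1", "a"), ("x", "3")),
  (("10", "2", "b"), ("y", "5")),
  (("11", "2", "b"), ()),
  (("11", "3", "c"), ("z", "4"))
)

// Keep only rows whose value is a real pair and whose key's second field
// is "2" or "3", rewriting that field to "2-3". Rows with key "1" or an
// empty () value simply fail the match and are dropped.
val result = data.collect {
  case ((a, b, c), v: (String, String) @unchecked) if b == "2" || b == "3" =>
    ((a, "2-3", c), v)
}
```

The `v: (String, String)` type pattern only checks that the value is a `Tuple2` at runtime (the element types are erased, hence `@unchecked`), which is enough here to exclude the empty `()` value.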
Thanks to philantrovert for pointing out that the fields must be String and not Int.