Spark: recursively fixing circular references in a class

Date: 2019-04-23 14:34:42

Tags: scala apache-spark recursion linked-list self-reference

My initial data structure contains a self-reference, which Spark does not support (encoder derivation walks the case class fields and rejects circular references):

initial.toDF
java.lang.UnsupportedOperationException: cannot have circular references in class, but got the circular reference

The initial data structure:

case class FooInitial(bar:String, otherSelf:Option[FooInitial])
val initial = Seq(FooInitial("first", Some(FooInitial("i1", Some(FooInitial("i2", Some(FooInitial("finish", None))))))))

To work around this, here is a semantically similar representation that is close to what I want:

case class Inner(value:String)
case class Foo(bar:String, otherSelf:Option[Seq[Inner]])
val first = Foo("first", None)
val intermediate1 = Inner("i1")
val intermediate2 = Inner("i2")
val finish = Foo("finish", Some(Seq(intermediate1, intermediate2)))
val basic = Seq(first, finish)

basic.foreach(println)
val df = basic.toDF
df.printSchema
df.show
+------+------------+
|   bar|   otherSelf|
+------+------------+
| first|        null|
|finish|[[i1], [i2]]|
+------+------------+
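
For reference, df.printSchema should print roughly the following (the array element struct corresponds to Inner):

root
 |-- bar: string (nullable = true)
 |-- otherSelf: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- value: string (nullable = true)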

What is a nice functional way to convert from the initial representation to the other, non-self-referencing one?

1 answer:

Answer 0 (score: 0)

This recursively dereferences the objects:

import scala.annotation.tailrec
import scala.collection.mutable.ListBuffer

class MyCollector {
  // Accumulates the flattened, non-self-referencing elements.
  val intermediateElems = new ListBuffer[Foo]

  def addElement(initialElement: FooInitial): MyCollector = {
    intermediateElems += Foo(initialElement.bar, None)
    // ++= (not ++) is required so the dereferenced chain is actually appended.
    intermediateElems ++= addIntermediateElement(initialElement.otherSelf, ListBuffer.empty[Foo])
    this
  }

  // Walks the otherSelf links until None, collecting one Foo per element.
  @tailrec private def addIntermediateElement(intermediate: Option[FooInitial], l: ListBuffer[Foo]): ListBuffer[Foo] = {
    intermediate match {
      case None => l
      case Some(s) =>
        l += Foo(s.bar + "_inner", None)
        addIntermediateElement(s.otherSelf, l)
    }
  }
}

initial.foldLeft(new MyCollector)((myColl, stay) => myColl.addElement(stay)).intermediateElems.toArray.foreach(println)

The result is a list:

Foo(first,None)
Foo(i1_inner,None)
Foo(i2_inner,None)
Foo(finish_inner,None)

This now works nicely with Spark.


Note: this is not a 1:1 answer to what I originally asked for, but it is good enough for me.
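
For reference, a purely functional alternative without a mutable collector is also possible. The following is a minimal sketch (flatten is a hypothetical helper, not part of the answer above) that unfolds each chain with a tail-recursive loop:

import scala.annotation.tailrec

// Hypothetical helper: unfolds the self-referencing chain of a FooInitial
// into a flat Seq[Foo], mirroring the collector's output above.
def flatten(initialElement: FooInitial): Seq[Foo] = {
  @tailrec
  def loop(current: Option[FooInitial], acc: Vector[Foo]): Vector[Foo] =
    current match {
      case None    => acc
      case Some(f) => loop(f.otherSelf, acc :+ Foo(f.bar + "_inner", None))
    }
  Foo(initialElement.bar, None) +: loop(initialElement.otherSelf, Vector.empty)
}

// Usage: yields the same flat list as the foldLeft above.
initial.flatMap(flatten).foreach(println)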
