Writing to Nebula from Spark

Spark: 2.2.0
Nebula: 3.0.0
Our production Spark version is below 2.4.0, so we can't use the nebula-spark connector API for writing. Instead, we obtain a session via the Java client and perform the writes in a ForeachPartitionFunction over the Dataset. This works fine in the test environment, but in production, where the data volume is large, it fails with:

Caused by: com.vesoft.nebula.client.graph.exception.NotValidConnectionException: No extra connection: All servers are broken.
at com.vesoft.nebula.client.graph.net.NebulaPool.getConnection(NebulaPool.java:215)
at com.vesoft.nebula.client.graph.net.NebulaPool.getSession(NebulaPool.java:137)
at graph.write.NebulaForeachPartition.call(NebulaForeachPartition.java:60)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$2.apply(Dataset.scala:2691)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$2.apply(Dataset.scala:2691)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:935)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:935)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
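The per-partition write pattern described above can be sketched as follows. This is a minimal sketch, not the poster's actual code: the tag/property names (`player`, `name`), the batch size, and `buildInsertStmt` are illustrative assumptions, and the `NebulaPool`/`Session` lifecycle is shown only in comments since it needs a live Nebula cluster. Batching statements and keeping a small, properly closed pool per partition is one common way to reduce connection pressure that can surface as "All servers are broken" at scale.

```java
import java.util.ArrayList;
import java.util.List;

public class NebulaBatchWriter {
    // Batching keeps the number of executed statements low, which reduces
    // pressure on graphd connections under large data volumes (an assumed
    // mitigation, not a confirmed root cause of the error above).
    static final int BATCH_SIZE = 500;

    // Build one batched nGQL INSERT VERTEX statement for a hypothetical tag
    // "player" with a single property "name"; each row is {vid, name}.
    static String buildInsertStmt(List<String[]> rows) {
        StringBuilder sb = new StringBuilder("INSERT VERTEX player(name) VALUES ");
        for (int i = 0; i < rows.size(); i++) {
            String[] r = rows.get(i); // r[0] = vertex id, r[1] = name
            if (i > 0) sb.append(", ");
            sb.append('"').append(r[0]).append("\":(\"").append(r[1]).append("\")");
        }
        return sb.toString();
    }

    // Inside ForeachPartitionFunction.call(Iterator<Row> rows), one would
    // (sketch only; requires the nebula-java client and a running cluster):
    //   NebulaPoolConfig cfg = new NebulaPoolConfig();
    //   cfg.setMaxConnSize(10);              // keep the per-executor pool small
    //   NebulaPool pool = new NebulaPool();
    //   pool.init(Arrays.asList(new HostAddress(graphdHost, 9669)), cfg);
    //   Session session = pool.getSession(user, password, true);
    //   ... accumulate rows, session.execute(buildInsertStmt(batch))
    //       every BATCH_SIZE rows ...
    //   session.release();
    //   pool.close();                        // always close, e.g. in finally

    public static void main(String[] args) {
        List<String[]> batch = new ArrayList<>();
        batch.add(new String[]{"p1", "Tom"});
        batch.add(new String[]{"p2", "Jerry"});
        // Prints: INSERT VERTEX player(name) VALUES "p1":("Tom"), "p2":("Jerry")
        System.out.println(buildInsertStmt(batch));
    }
}
```

If the pool is created once per partition like this, it must also be closed in that partition; leaking pools across many tasks is another plausible way to end up with no usable connections.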

Buddy, isn't your issue still unresolved? Please post updates on your original thread; this one is being closed. Just update the original post instead of raising the same question in multiple places.