Ping failed when using the Nebula Java client in Spark

Question template:

  • Nebula version: nebula-graph-2.6.1.el7.x86_64.rpm (Nebula 2.6)
    Spark version: 3.0
  • Deployment: distributed
  • Installation method: RPM
  • Production deployment: N
  • Hardware information
    • Disk (SSD recommended)
    • CPU and memory
  • Concrete description of the problem
  • Relevant meta / storage / graph info log entries (plain text preferred, for searchability)

If you include logs or code, please wrap them in Markdown syntax (as below) to improve readability and help responders resolve the issue faster.

2021-12-16 14:19:52,210 ERROR [com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer] - ping failed
com.vesoft.nebula.client.graph.exception.IOErrorException: java.net.ConnectException: Cannot assign requested address (connect failed)
        at com.vesoft.nebula.client.graph.net.SyncConnection.open(SyncConnection.java:107)
        at com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer.ping(RoundRobinLoadBalancer.java:81)
        at com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer.updateServersStatus(RoundRobinLoadBalancer.java:67)
        at com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer.isServersOK(RoundRobinLoadBalancer.java:92)
        at com.vesoft.nebula.client.graph.net.ConnObjectPool.init(ConnObjectPool.java:88)
        at com.vesoft.nebula.client.graph.net.NebulaPool.init(NebulaPool.java:109)
        at netflow.ForeachWriterVersionEntityVersionMulti.executeNebulaSql(ForeachWriterVersionEntityVersionMulti.java:126)
        at netflow.ForeachWriterVersionEntityVersionMulti.writeEdge(ForeachWriterVersionEntityVersionMulti.java:194)
        at netflow.ForeachWriterVersionEntityVersionMulti.writeMapToNebula(ForeachWriterVersionEntityVersionMulti.java:159)
        at netflow.ForeachWriterVersionEntityVersionMulti.process(ForeachWriterVersionEntityVersionMulti.java:82)
        at netflow.ForeachWriterVersionEntityVersionMulti.process(ForeachWriterVersionEntityVersionMulti.java:36)
        at org.apache.spark.sql.execution.streaming.sources.ForeachDataWriter.write(ForeachWriterTable.scala:140)
        at org.apache.spark.sql.execution.streaming.sources.ForeachDataWriter.write(ForeachWriterTable.scala:125)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:441)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:127)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Please paste your Spark version, the Java client version, and the Java client configuration you are using.

Are you leaving connections unclosed?

Previously I released the session but never closed the pool — is that not allowed? I have also tried the other way (closing the pool and releasing the session), and the same error still occurs.

Hard to say without seeing how your code is written.

It's the standard pattern: Spark Structured Streaming micro-batch processing, with the Nebula Java client used inside foreach to update the graph database.

NebulaPoolConfig nebulaPoolConfig = new NebulaPoolConfig();
nebulaPoolConfig.setMaxConnSize(100);
nebulaPoolConfig.setTimeout(0);
nebulaPoolConfig.setWaitTime(0);
List<HostAddress> addresses = Arrays.asList(new HostAddress("xxx.xxx.xxx.xxx", 9669));
if (pool == null) {
    pool = new NebulaPool();
    pool.init(addresses, nebulaPoolConfig);
}
if (session == null) {
    session = pool.getSession("root", "nebula", false);
}

// on close:
session.release();
pool.close();
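One common cause of a client-side `Cannot assign requested address` is ephemeral-port exhaustion from repeatedly creating and tearing down connections — for example, running `pool.init(...)` and `pool.close()` in every micro-batch or task. A possible mitigation is to keep exactly one pool per executor JVM and close it only at JVM shutdown. The sketch below shows that pattern with a double-checked-locking singleton; note that `PoolHolder` and the stub `Pool` class are illustrative assumptions standing in for `NebulaPool`, not the real nebula-java API:

```java
// Sketch: one shared pool per executor JVM, created once and closed at exit.
// The nested Pool class is a stub standing in for
// com.vesoft.nebula.client.graph.net.NebulaPool, so this compiles standalone.
public class PoolHolder {

    public static class Pool {
        void init() { /* real code: pool.init(addresses, nebulaPoolConfig) */ }
        void close() { /* real code: close all pooled connections */ }
    }

    // volatile is required for safe double-checked locking.
    private static volatile Pool pool;

    /** Lazily create the pool once per JVM; every task reuses the same instance. */
    public static Pool get() {
        Pool p = pool;
        if (p == null) {
            synchronized (PoolHolder.class) {
                p = pool;
                if (p == null) {
                    p = new Pool();
                    p.init();
                    pool = p;
                    // Close the pool only when the executor JVM exits,
                    // not at the end of each micro-batch.
                    Runtime.getRuntime().addShutdownHook(new Thread(p::close));
                }
            }
        }
        return p;
    }
}
```

With this arrangement, the ForeachWriter's per-partition lifecycle would only acquire and release sessions (e.g. get a session from the shared pool in `open`, release it in `close`), while the pool itself outlives every batch — matching the "release the session, keep the pool" direction discussed above.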

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.