Question template for reference:
- Nebula version: nebula-graph-2.6.1.el7.x86_64.rpm (Nebula 2.6)
- Spark version: 3.0
- Deployment: distributed
- Installation method: RPM
- Production environment: N
- Hardware info
  - Disk (SSD recommended)
  - CPU and memory
- Detailed description of the problem
- Relevant meta / storage / graph info logs (plain text preferred so they are searchable)

If you include logs or code, wrap them in Markdown code fences (as below) to improve readability and help responders solve your problem faster~
```
2021-12-16 14:19:52,210 ERROR [com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer] - ping failed
com.vesoft.nebula.client.graph.exception.IOErrorException: java.net.ConnectException: Cannot assign requested address (connect failed)
	at com.vesoft.nebula.client.graph.net.SyncConnection.open(SyncConnection.java:107)
	at com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer.ping(RoundRobinLoadBalancer.java:81)
	at com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer.updateServersStatus(RoundRobinLoadBalancer.java:67)
	at com.vesoft.nebula.client.graph.net.RoundRobinLoadBalancer.isServersOK(RoundRobinLoadBalancer.java:92)
	at com.vesoft.nebula.client.graph.net.ConnObjectPool.init(ConnObjectPool.java:88)
	at com.vesoft.nebula.client.graph.net.NebulaPool.init(NebulaPool.java:109)
	at netflow.ForeachWriterVersionEntityVersionMulti.executeNebulaSql(ForeachWriterVersionEntityVersionMulti.java:126)
	at netflow.ForeachWriterVersionEntityVersionMulti.writeEdge(ForeachWriterVersionEntityVersionMulti.java:194)
	at netflow.ForeachWriterVersionEntityVersionMulti.writeMapToNebula(ForeachWriterVersionEntityVersionMulti.java:159)
	at netflow.ForeachWriterVersionEntityVersionMulti.process(ForeachWriterVersionEntityVersionMulti.java:82)
	at netflow.ForeachWriterVersionEntityVersionMulti.process(ForeachWriterVersionEntityVersionMulti.java:36)
	at org.apache.spark.sql.execution.streaming.sources.ForeachDataWriter.write(ForeachWriterTable.scala:140)
	at org.apache.spark.sql.execution.streaming.sources.ForeachDataWriter.write(ForeachWriterTable.scala:125)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:441)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```
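One thing worth noting from the trace: `NebulaPool.init()` is reached from `executeNebulaSql()`, which is called from `process()` — so a new pool (and new TCP connections) appears to be created for every processed row or micro-batch. `java.net.ConnectException: Cannot assign requested address` on the client side usually means the host has run out of local ephemeral ports, which is exactly what many short-lived connections left in TIME_WAIT will cause. A minimal sketch of the usual fix, assuming this diagnosis: hold one pool per executor JVM and reuse it. `createPool()` below is a placeholder standing in for the real `NebulaPool`/`NebulaPoolConfig` setup seen in the stack trace; the host name and port in the comment are assumptions.

```java
// Sketch, not the author's code: lazily create ONE pool per JVM via
// double-checked locking, instead of calling NebulaPool.init() inside
// process()/executeNebulaSql() on every record.
public final class PoolHolder {
    private static volatile Object pool; // would be com.vesoft...NebulaPool
    private static int initCount = 0;    // for illustration/verification only

    private PoolHolder() {}

    /** Returns the shared pool, creating it exactly once. */
    public static Object get() {
        if (pool == null) {                 // first check, no lock
            synchronized (PoolHolder.class) {
                if (pool == null) {         // second check, under lock
                    pool = createPool();
                }
            }
        }
        return pool;
    }

    private static Object createPool() {
        initCount++;
        // Real version would mirror the Java client API from the trace
        // (host/port are placeholder assumptions):
        //   NebulaPool p = new NebulaPool();
        //   p.init(Arrays.asList(new HostAddress("graphd-host", 9669)),
        //          new NebulaPoolConfig());
        //   return p;
        return new Object();
    }

    /** How many times the pool was built (should stay at 1). */
    public static int initCount() { return initCount; }
}
```

With a holder like this, `ForeachWriterVersionEntityVersionMulti.process()` would call `PoolHolder.get()` and borrow a session from the existing pool, rather than re-initializing it; the pool can be closed in a JVM shutdown hook. Whether this matches your code is a guess from the stack trace, so please confirm where `NebulaPool.init()` is actually invoked.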