Error when writing edge data to Nebula from Spark

- Nebula version: 3.4.0
- Deployment: distributed
- Installation: RPM
- In production: Y
- Hardware
  - Disk: SSD
  - CPU / memory: three machines, each with 24 CPU cores and 251 GB of memory
- Problem description
  Spark reads the CSV data without issue, but the job fails when importing into Nebula. Smaller files of the same type import fine; larger ones fail.
  The Nebula service logs show no error output at all.

Spark has already read the data:

 2023-06-21 09:15:29 [main] INFO  org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator - Code generated in 10.784938 ms
+-------------+-------------------+------+
|        srcId|              dstId|  name|
+-------------+-------------------+------+
|DLGB200406020|1669305794820182017|关键词|
|NRPJ201924137|1669305794820182018|关键词|
|ZRZK201708018|1669305794820182018|关键词|
|WFXY202005026|1669305794820182018|关键词|
|DDLY201805226|1669305794820182018|关键词|
|NRPJ202004149|1669305794820182018|关键词|
|TXWL202022095|1669305794820182018|关键词|
|SHLG202202017|1669305794820182018|关键词|
|QYZL201951056|1669305794820182018|关键词|
|XDBY201712089|1669305794820182018|关键词|
|ZXQX201807030|1669305794820182018|关键词|
|DYKJ201834091|1669305794820182018|关键词|
|DDLY201912060|1669305794820182018|关键词|
|JMSJ201812256|1669305794820182018|关键词|
|CUYN201912046|1669305794820182018|关键词|
|SJSM201806254|1669305794820182018|关键词|
|WHYK201810064|1669305794820182018|关键词|
|SXZX200405018|1669305794820182019|关键词|
|GSKJ200711116|1669305794820182020|关键词|
|NMGS2004S2040|1669305794820182021|关键词|
+-------------+-------------------+------+
only showing top 20 rows

The error occurs when writing to Nebula:

2023-06-21 09:25:20 [task-result-getter-0] WARN  org.apache.spark.scheduler.TaskSetManager - Lost task 26.0 in stage 4.0 (TID 30) (10.27.107.33 executor 4): java.lang.NullPointerException: Cannot invoke "Object.toString()" because the return value of "org.apache.spark.sql.catalyst.InternalRow.get(int, org.apache.spark.sql.types.DataType)" is null
        at com.vesoft.nebula.connector.writer.NebulaExecutor$.extraID(NebulaExecutor.scala:57)
        at com.vesoft.nebula.connector.writer.NebulaEdgeWriter.write(NebulaEdgeWriter.scala:57)
        at com.vesoft.nebula.connector.writer.NebulaEdgeWriter.write(NebulaEdgeWriter.scala:17)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:442)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:480)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:381)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:136)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)

The column in your source data used as the VID contains null values (the stack trace shows `InternalRow.get` returning null inside `NebulaExecutor.extraID`, which extracts the vertex ID). Filter on the VID columns before calling the write API.
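A minimal sketch of that filter, assuming a DataFrame named `df` with the `srcId`/`dstId` columns shown in the output above (the variable names are illustrative, not from the original post):

```scala
import org.apache.spark.sql.functions.col

// Drop rows whose source or destination VID is null (the cause of the NPE),
// and optionally also rows where the VID is an empty string.
val cleaned = df.filter(
  col("srcId").isNotNull && col("dstId").isNotNull &&
  col("srcId") =!= "" && col("dstId") =!= ""
)

// Then pass `cleaned` to the nebula-spark-connector writer as before.
```

If you want to know how many rows were dropped, compare `df.count()` with `cleaned.count()` before writing.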

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.