Bulk import via spark-connector fails when the data volume is large

2021-12-10 15:29:07,293 ERROR SparkUncaughtExceptionHandler: [Container in shutdown] Uncaught exception in thread Thread[Executor task launch worker for task 2220,5,main]
java.lang.OutOfMemoryError: Java heap space
at com.esotericsoftware.kryo.io.Output.require(Output.java:172)
at com.esotericsoftware.kryo.io.Output.writeString_slow(Output.java:467)
at com.esotericsoftware.kryo.io.Output.writeString(Output.java:368)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.write(DefaultSerializers.java:195)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.write(DefaultSerializers.java:188)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
at com.twitter.chill.TraversableSerializer$$anonfun$write$1.apply(Traversable.scala:29)
at com.twitter.chill.TraversableSerializer$$anonfun$write$1.apply(Traversable.scala:27)
at scala.collection.immutable.List.foreach(List.scala:392)
at com.twitter.chill.TraversableSerializer.write(Traversable.scala:27)
at com.twitter.chill.TraversableSerializer.write(Traversable.scala:21)
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:351)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:456)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Spark submit parameters:
--deploy-mode client
--num-executors 200
--executor-memory 6g
--driver-memory 20g
--executor-cores 2
About 20,000,000 vertex records are imported in a single batch.
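For reference, those flags map onto the following Spark configuration keys (a minimal sketch, assuming a hypothetical application name; note that in client mode the driver memory only takes effect when passed at submit time, before the driver JVM starts):

import org.apache.spark.sql.SparkSession

// Programmatic equivalent of the spark-submit flags above.
// The app name is a placeholder; spark.driver.memory is listed for
// completeness but must still be set at submit time in client mode.
val spark = SparkSession.builder()
  .appName("nebula-bulk-import")
  .config("spark.submit.deployMode", "client")
  .config("spark.executor.instances", "200")
  .config("spark.executor.memory", "6g")
  .config("spark.driver.memory", "20g")
  .config("spark.executor.cores", "2")
  .getOrCreate()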

What versions of Exchange and Nebula are you running? Is Nebula deployed with Docker?


I'm not importing through Exchange now. I'm using the dependency

<dependency>
    <groupId>com.vesoft</groupId>
    <artifactId>nebula-spark-connector</artifactId>
    <version>2.5.1</version>
</dependency>

to build a Spark application for the import. A batch of 500,000 records works fine, but a batch of 800,000 records fails with the following error:
Caused by: com.facebook.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at com.facebook.thrift.transport.TSocket.open(TSocket.java:175)
at com.vesoft.nebula.client.meta.MetaClient.getClient(MetaClient.java:104)
at com.vesoft.nebula.client.meta.MetaClient.doConnect(MetaClient.java:99)
at com.vesoft.nebula.client.meta.MetaClient.connect(MetaClient.java:89)
at com.vesoft.nebula.connector.nebula.MetaProvider.<init>(MetaProvider.scala:22)
at com.vesoft.nebula.connector.writer.NebulaWriter.<init>(NebulaWriter.scala:24)
at com.vesoft.nebula.connector.writer.NebulaVertexWriter.<init>(NebulaVertexWriter.scala:19)
at com.vesoft.nebula.connector.writer.NebulaVertexWriterFactory.createDataWriter(NebulaSourceWriter.scala:28)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:113)
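For context, the write path used here is roughly the following (a minimal sketch against nebula-spark-connector 2.5.x; the meta/graph addresses, space, tag, vid field, and batch size are assumptions, and df stands for the DataFrame being imported):

import com.vesoft.nebula.connector.{NebulaConnectionConfig, WriteNebulaVertexConfig}
import com.vesoft.nebula.connector.connector.NebulaDataFrameWriter

// Connection to metad/graphd; the addresses are placeholders.
val connectionConfig = NebulaConnectionConfig
  .builder()
  .withMetaAddress("127.0.0.1:9559")
  .withGraphAddress("127.0.0.1:9669")
  .build()

// Vertex write settings; space, tag, vid field and batch size are placeholders.
val writeVertexConfig = WriteNebulaVertexConfig
  .builder()
  .withSpace("test_space")
  .withTag("person")
  .withVidField("id")
  .withBatch(1000)
  .build()

// df is the DataFrame holding the rows to import.
df.write.nebula(connectionConfig, writeVertexConfig).writeVertices()

Note that, as the stack trace shows, every write task opens its own MetaClient when the vertex writer is constructed, so with --num-executors 200 and --executor-cores 2 there can be a few hundred concurrent connections hitting metad at once.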

@nicole Tagging you, could you please take a look?

Looks like Nebula's meta service went down during the import.

@nicole Could you help take a look: how can I confirm that the meta service went down, and how do I figure out what caused it to crash?
I keep seeing this log:
I1214 11:43:09.990777 7128 MetaDaemon.cpp:127] Waiting for the leader's clusterId
I1214 11:43:10.990895 7128 KVBasedClusterIdMan.h:84] There is no clusterId existed in kvstore!
I1214 11:43:10.990952 7128 MetaDaemon.cpp:127] Waiting for the leader’s clusterId
I1214 11:43:11.991068 7128 KVBasedClusterIdMan.h:84] There is no clusterId existed in kvstore!
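To rule out basic connectivity, one option is to call the same MetaClient the connector uses in the stack trace above (a minimal sketch; the host and the default meta port 9559 are assumptions and should be replaced with your actual metad address):

import com.vesoft.nebula.client.meta.MetaClient

// MetaClient comes in with the connector's nebula-java dependency.
// Host and port are placeholders for the real metad address.
val metaClient = new MetaClient("127.0.0.1", 9559)
try {
  // Fails with a TTransportException if metad refuses the connection.
  metaClient.connect()
  println("metad is reachable")
} catch {
  case t: Throwable => println(s"metad is NOT reachable: ${t.getMessage}")
} finally {
  metaClient.close()
}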
