nebula-spark-connector ClassNotFoundException

  • Nebula version: 2.5

  • Deployment: Docker

  • Production deployment: N

  • Problem description
    When a job using the Nebula Spark Connector is submitted to Spark, it fails with
    "java.lang.ClassNotFoundException: com.vesoft.nebula.connector.reader.NebulaVertexPartition".
    I have confirmed that the dependency is declared in the build:

    implementation("org.slf4j:slf4j-api:1.7.25")
    implementation("org.slf4j:slf4j-log4j12:1.7.25")
    implementation("org.apache.spark:spark-core_2.11:2.4.4")
    implementation("org.apache.spark:spark-sql_2.11:2.4.4")
    implementation("org.apache.spark:spark-sql_2.11:2.4.4")
    implementation("com.vesoft:nebula-spark-connector:2.5.0")

  • Full error message

Driver stacktrace:
21/09/15 07:35:00 INFO DAGScheduler: Job 0 failed: show at NebulaSparkReaderExample.scala:60, took 0.579329 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 172.19.0.11, executor 0): java.lang.ClassNotFoundException: com.vesoft.nebula.connector.reader.NebulaVertexPartition
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1925)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2099)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1625)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2344)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2268)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2126)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1625)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2344)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2268)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2126)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1625)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:465)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:376)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:365)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3389)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
        at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:2550)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:2764)
        at org.apache.spark.sql.Dataset.getRows(Dataset.scala:254)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:291)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:751)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:710)
        at com.misha.NebulaSparkReaderExample$.readVertex(NebulaSparkReaderExample.scala:60)
        at com.misha.NebulaSparkReaderExample$.main(NebulaSparkReaderExample.scala:32)
        at com.misha.NebulaSparkReaderExample.main(NebulaSparkReaderExample.scala)
Caused by: java.lang.ClassNotFoundException: com.vesoft.nebula.connector.reader.NebulaVertexPartition
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1925)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2099)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1625)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2344)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2268)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2126)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1625)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2344)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2268)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2126)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1625)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:465)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:376)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Update: submitting the job with spark-submit works fine.
The problem was that I had originally been running it directly with java -jar.
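
For anyone hitting the same error: spark-submit registers the application jar via spark.jars,
so Spark ships it to every executor, while java -jar only puts it on the driver's classpath,
which is why only the executor tasks threw ClassNotFoundException. A sketch of a working
invocation (the master URL and jar path are placeholders; the class name comes from the stack
trace above):

    spark-submit \
      --class com.misha.NebulaSparkReaderExample \
      --master spark://<master-host>:7077 \
      build/libs/example-all.jar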
