Error writing data with Spark

I'm using spark-2.12.3 and nebula-2.5.0, installed with docker-compose; nebula-spark-connector is the latest, pulled from GitHub.

First I built the connector:

cd nebula-spark-utils/nebula-spark-connector
mvn clean package -Dmaven.test.skip=true -Dgpg.skip -Dmaven.javadoc.skip=true

Then I copied the generated jar into my project's lib directory:

cp -riv target/*jar /path/to/project/lib

My build.sbt looks like this:

name := "Example"

version := "0.0"

scalaVersion := "2.12.3"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.1.2"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.1.2"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "3.1.2"

My code looks like this:


import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD
import org.apache.spark
import scala.io.Source
import scala.util.control.Breaks._

import org.apache.spark.sql.SparkSession

import java.nio.charset.StandardCharsets

import collection.mutable.ArrayBuffer

import com.vesoft.nebula.connector.connector.{NebulaDataFrameReader, NebulaDataFrameWriter}
import com.vesoft.nebula.connector.{NebulaConnectionConfig, ReadNebulaConfig, WriteNebulaConfig, WriteNebulaVertexConfig}


object Example {

  def main(args: Array[String]) {

    try_database()
  }

  def try_database() {
 
    val spark_conf = new SparkConf
    spark_conf
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val spark = SparkSession
      .builder()
      .appName("test")
      .master("local")
      .config(spark_conf)
      .getOrCreate()
    val v_df = spark.read.json("../data/vertices.json")
    v_df.show()
    val e_df = spark.read.json("../data/edges.json")
    e_df.show()

    // write the DataFrame into the database
    val nbl_config = NebulaConnectionConfig.builder()
      .withMetaAddress("10.128.61.8:49221") // local port mapped by the docker-compose container
      .withGraphAddress("10.128.61.8:49238")
      .build()
    val nbl_write_vert_conf = WriteNebulaVertexConfig.builder()
      .withSpace("test") // the space in the database
      .withTag("vert") // the vertex tag
      .withVidField("vertexId") // the column in this DataFrame that holds the vid
      // .withVidPolicy("hash") // add this if the vid is not an integer; otherwise leave it out
      .withVidAsProp(false)
      .withUser("root")
      .withPasswd("nebula")
      .withBatch(1000)
      .build()
      
    v_df.write.nebula(nbl_config, nbl_write_vert_conf).writeVertices()
  }
}

When I run it, I get this error:

Exception in thread "main" java.lang.NoClassDefFoundError: com/vesoft/nebula/connector/WriteNebulaConfig
	at Example.main(main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.vesoft.nebula.connector.WriteNebulaConfig
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 13 more

Can you tell me what I'm doing wrong?

Spark has a 2.12? That's the Scala version, isn't it.

Do a clean build and try again.

My mistake, Spark is 3.12.10. The commands I ran are:

sbt clean
sbt package
spark-submit target/scala-2.12/example_2.12-0.0.jar

Then the error above appears. What should I do?

Spark 3 is not supported. Did you add the nebula-spark-connector dependency in your example?
PS: there is no Spark 3.12.10 either; the latest release is 3.1.2.

OK, I'll switch to Spark 2 and try again. Also, is the way to bring in nebula-spark-connector just to put the jar in my project's lib directory, or do I need to add something to build.sbt when building with sbt?

Use it the same way you use spark-core. Add the dependency to your pom file:

<dependency>
    <groupId>com.vesoft</groupId>
    <artifactId>nebula-spark-connector</artifactId>
    <version>2.5-SNAPSHOT</version>
</dependency>

What is the repository URL for downloading it? After switching to Spark 2.4.2, I added this to build.sbt:

libraryDependencies += "com.vesoft" %% "nebula-spark-connector" % "2.5-SNAPSHOT"

It still fails:

[error] sbt.librarymanagement.ResolveException: Error downloading com.vesoft:nebula-spark-connector_2.12:2.5-SNAPSHOT
[error]   Not found
[error]   Not found
[error]   not found: /root/.ivy2/localcom.vesoft/nebula-spark-connector_2.12/2.5-SNAPSHOT/ivys/ivy.xml
[error]   not found: https://repo1.maven.org/maven2/com/vesoft/nebula-spark-connector_2.12/2.5-SNAPSHOT/nebula-spark-connector_2.12-2.5-SNAPSHOT.pom
[error] 	at lmcoursier.CoursierDependencyResolution.unresolvedWarningOrThrow(CoursierDependencyResolution.scala:258)
[error] 	at lmcoursier.CoursierDependencyResolution.$anonfun$update$38(CoursierDependencyResolution.scala:227)

All the other packages download fine from the Maven site; part of the log looks like this:

https://repo1.maven.org/maven2/io/dropwizard/metrics/metrics-core/3.1.5/metrics-core-3.1.5.jar
  100.0% [##########] 117.6 KiB (482.1 KiB / s)
https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-server-nodemanager/2.6.5/hadoop-yarn-server-no…
  100.0% [##########] 651.9 KiB (2.7 MiB / s)
https://repo1.maven.org/maven2/org/scala-lang/modules/scala-parser-combinators_2.12/1.1.0/scala-parser-comb…
  100.0% [##########] 219.8 KiB (886.2 KiB / s)
https://repo1.maven.org/maven2/commons-net/commons-net/3.1/commons-net-3.1.jar
  100.0% [##########] 267.0 KiB (1.0 MiB / s)

Does the nebula dependency need to be downloaded from somewhere else?

You need to figure out whether you are using the 2.5.0 release or the 2.5-SNAPSHOT of the spark-connector.
The SNAPSHOT builds are here: Index of /repositories/snapshots/com/vesoft/nebula-spark-connector. They are not in the central repository.
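In sbt terms, pulling a SNAPSHOT means adding that snapshot repository as a resolver in build.sbt. A minimal sketch, assuming the index above is the Sonatype OSS snapshots repository and that the artifact is published without a Scala suffix:

```scala
// build.sbt -- sketch: tell sbt where to find the SNAPSHOT builds
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"

// single % (not %%) so sbt does not append the Scala suffix to the artifact id
libraryDependencies += "com.vesoft" % "nebula-spark-connector" % "2.5-SNAPSHOT"
```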

I tried both. If I download nebula-spark-connector-2.5-20210825.031923-1.jar from the link you gave and put it in lib, compilation passes, but at runtime I get this error:

Exception in thread "main" java.lang.NoClassDefFoundError: com/vesoft/nebula/connector/WriteNebulaConfig
	at Example.main(main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

If instead I add this to build.sbt:

libraryDependencies += "com.vesoft" %% "nebula-spark-connector" % "2.5.0"

or

libraryDependencies += "com.vesoft" %% "nebula-spark-connector" % "2.5"

then compilation fails with:

[error] sbt.librarymanagement.ResolveException: Error downloading com.vesoft:nebula-spark-connector_2.12:2.5.0
[error]   Not found
[error]   Not found
[error]   not found: /root/.ivy2/localcom.vesoft/nebula-spark-connector_2.12/2.5.0/ivys/ivy.xml
[error]   not found: https://repo1.maven.org/maven2/com/vesoft/nebula-spark-connector_2.12/2.5.0/nebula-spark-connector_2.12-2.5.0.pom

Am I missing something?

  1. Putting it straight into lib doesn't help; the dependency still won't be found.
  2. We don't publish com.vesoft:nebula-spark-connector_2.12:2.5.0. I don't know where your 2.12 is coming from.
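A side note on point 1: `sbt package` produces a jar containing only the project's own classes, so even when the connector jar in lib/ satisfies the compiler, `spark-submit` won't see those classes at runtime. One sketch of a way around this is bundling dependencies into a fat jar with sbt-assembly (the plugin version below is an assumption):

```scala
// project/plugins.sbt -- sketch: add the sbt-assembly plugin
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.15.0")

// build.sbt -- mark Spark itself as "provided" so it is not bundled;
// spark-submit supplies those classes at runtime anyway
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.2" % "provided"
```

Then `sbt assembly` builds a single jar to submit. Alternatively, pass the connector jar explicitly at submit time with `spark-submit --jars /path/to/connector.jar`.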

I don't know where the 2.12 comes from either; maybe it's the Scala version? My build.sbt is:

name := "Example"

version := "0.0"

scalaVersion := "2.12.3"

/* // spark-3.1.2 */
/* libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.1.2" */
/* libraryDependencies += "org.apache.spark" %% "spark-core" % "3.1.2" */
/* libraryDependencies += "org.apache.spark" %% "spark-mllib" % "3.1.2" */
/* libraryDependencies += "com.vesoft" %% "nebula-spark-connector" % "2.5" */

// spark-2.4.2
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.2"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.2"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.4.2"

libraryDependencies += "com.vesoft" %% "nebula-spark-connector" % "2.5.0"

Is there an example project that uses sbt?

Your sbt setup is the problem. It's right that spark-sql and the others automatically get the 2.12 suffix, but nebula-spark-connector should not have 2.12 appended. Look at how to handle this in your build.sbt file; we don't have an sbt example.
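The 2.12 comes from sbt's `%%` operator, which appends the project's Scala binary version to the artifact id. A tiny self-contained sketch of that expansion (the helper and the hardcoded version are illustrative only, not sbt internals):

```scala
object CrossVersionDemo {
  // sbt's %% appends "_" + the Scala binary version to the artifact id
  val scalaBinaryVersion = "2.12"

  def crossName(artifact: String): String =
    s"${artifact}_$scalaBinaryVersion"

  def main(args: Array[String]): Unit = {
    // "com.vesoft" %% "nebula-spark-connector" asks the repository for this artifact:
    println(crossName("nebula-spark-connector")) // nebula-spark-connector_2.12
    // a single % keeps the artifact id exactly as published:
    println("nebula-spark-connector")
  }
}
```

So in build.sbt the fix would be a single `%`, e.g. `libraryDependencies += "com.vesoft" % "nebula-spark-connector" % "2.5.0"`, assuming the unsuffixed coordinate is the published one.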

Solved it. I'm on Scala 2.12 and the jar in the repository was built for 2.11. I pulled the source from GitHub again, modified pom.xml, and rebuilt the jar; that one works. It seems different Scala versions really aren't interchangeable.

