- nebula version: 2.0.1
- Deployment (distributed / standalone / Docker / DBaaS): distributed
- Production environment: Y
- Hardware
- Disk (SSD recommended): SSD
- CPU / memory:
- Problem description
Cluster versions:
Hadoop 3.1.1.3.1.4.0-315
Hive 3.1.0.3.1.4.0-315
Spark 2.3.2.3.1.4.0-315
Scala 2.11.12 (Java HotSpot™ 64-Bit Server VM, Java 1.8.0_144)
Guava: guava-28.0-jre.jar
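For context: `HostAndPort.getHostText()` was removed from Guava in version 22 (replaced by `getHost()`), so code compiled against an older Guava throws `NoSuchMethodError` when guava-28 is on the classpath. This can be confirmed by inspecting the jar's methods (the jar path below is a placeholder for your environment):

```shell
# List HostAndPort's members in the cluster's guava jar;
# on guava-28.0-jre this shows getHost() but no getHostText(),
# which matches the NoSuchMethodError below.
javap -classpath /path/to/guava-28.0-jre.jar com.google.common.net.HostAndPort | grep -i getHost
```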
I built nebula-spark-utils-2.1.0 directly from source, and submit the job with:
```shell
spark-submit \
  --conf spark.app.name="g1" \
  --master "local" \
  --class com.vesoft.nebula.exchange.Exchange \
  /data/nebula-spark-utils210/nebula-exchange-2.1.0.jar \
  -c /data/nebula/exchange/configs/imp_test1.conf -h
```
The error output is:

```
21/08/11 23:18:20 INFO HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
21/08/11 23:18:20 INFO HiveMetaStoreClient: Connected to metastore.
21/08/11 23:18:20 INFO RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=xxx (auth:SIMPLE) retries=1 delay=5 lifetime=0
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.net.HostAndPort.getHostText()Ljava/lang/String;
	at com.vesoft.nebula.exchange.MetaProvider$$anonfun$1.apply(MetaProvider.scala:30)
	at com.vesoft.nebula.exchange.MetaProvider$$anonfun$1.apply(MetaProvider.scala:29)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.MetaProvider.<init>(MetaProvider.scala:29)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor.process(VerticesProcessor.scala:109)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:152)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:129)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:129)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/08/11 23:18:22 INFO SparkContext: Invoking stop() from shutdown hook
```
I have read several threads on this forum about the same error, but none of them gives a concrete fix. I am not very familiar with Spark jobs. What I have tried so far:
1. Changed the component versions in the pom file to match the cluster's versions; the build failed.
2. Placed the guava-14 jar under spark/jars; the job failed with the same error.
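A further variant I have not tried yet (a sketch only; the guava jar path below is a placeholder, and I am not sure it applies here): instead of replacing jars under spark/jars, ship a compatible guava with the job and ask Spark to prefer the user classpath over the cluster's:

```shell
# Sketch: force user-supplied jars ahead of the cluster classpath.
# /data/libs/guava-14.0.1.jar is an assumed local path.
spark-submit \
  --conf spark.app.name="g1" \
  --master "local" \
  --conf spark.executor.userClassPathFirst=true \
  --conf spark.driver.userClassPathFirst=true \
  --jars /data/libs/guava-14.0.1.jar \
  --class com.vesoft.nebula.exchange.Exchange \
  /data/nebula-spark-utils210/nebula-exchange-2.1.0.jar \
  -c /data/nebula/exchange/configs/imp_test1.conf
```

Note that `spark.driver.userClassPathFirst` only takes effect in cluster mode, so with `--master "local"` the driver side may instead need `--driver-class-path /data/libs/guava-14.0.1.jar`.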
Questions:
1. How should this error actually be resolved?
2. The documentation requires that "Hadoop services have been installed and started, and the Hive Metastore database (MySQL in this example) has been started." Does this mean the machine that submits the job must have network connectivity to the Hive Metastore database?