nebula-exchange: which Spark version is required?

  • Nebula version: 2.5.1
  • Deployment: distributed
  • Installation method: RPM
  • Production environment: No
  • Scenario: using nebula-exchange to import Hive data into a Nebula cluster

    Our environment already has a Hadoop cluster and a Spark cluster, and the Spark version is 2.2.0. However, nebula-exchange 2.5.1 depends on Spark 2.4.x. Is there a way to lower the dependency from 2.4.x to 2.2.0, then recompile and package it so it runs on our cluster?
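In principle you could try overriding the Spark version in Exchange's pom.xml before rebuilding; the property name below is a sketch and may differ in your checkout, so check the actual `<properties>` section:

```xml
<!-- Hypothetical override in nebula-exchange's pom.xml; verify the real
     property name in the project's <properties> section before building. -->
<properties>
  <spark.version>2.2.0</spark.version>
</properties>
```

Even if Maven accepts the override, compilation will most likely fail, because Exchange 2.5.x calls Spark APIs (notably the DataSource V2 interfaces) that only exist from Spark 2.3 onward.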
21/11/11 08:24:45 INFO SparkContext: Running Spark version 2.2.0
21/11/11 08:24:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
QueueChecker WARN: remove spark.sf.userId=01394612
QueueChecker WARN: remove spark.sf.jobFrom=scheduler
QueueChecker INFO: queue=null, from=scheduler, userId=01394612
21/11/11 08:24:46 INFO SparkContext: Submitted application: com.vesoft.nebula.exchange.Exchange
21/11/11 08:24:46 INFO SecurityManager: Changing view acls to: hive
21/11/11 08:24:46 INFO SecurityManager: Changing modify acls to: hive
21/11/11 08:24:46 INFO SecurityManager: Changing view acls groups to: 
21/11/11 08:24:46 INFO SecurityManager: Changing modify acls groups to: 
21/11/11 08:24:46 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hive); groups with view permissions: Set(); users  with modify permissions: Set(hive); groups with modify permissions: Set()
21/11/11 08:24:46 INFO Utils: Successfully started service 'sparkDriver' on port 54391.
21/11/11 08:24:46 INFO SparkEnv: Registering MapOutputTracker
21/11/11 08:24:46 INFO SparkEnv: Registering BlockManagerMaster
21/11/11 08:24:46 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/11/11 08:24:46 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/11/11 08:24:46 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-402fbda9-cd1f-4ea8-899c-58606dc7e27e
21/11/11 08:24:46 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
21/11/11 08:24:46 INFO SparkEnv: Registering OutputCommitCoordinator
21/11/11 08:24:47 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/11/11 08:24:47 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.202.105.76:4040
21/11/11 08:24:47 INFO SparkContext: Added JAR file:/app/INC-BDP-SCH-AIO-APP/data/script/37870/1003/nebula-exchange-2.5.1.jar at spark://10.202.105.76:54391/jars/nebula-exchange-2.5.1.jar with timestamp 1636590287135
21/11/11 08:24:47 INFO Executor: Starting executor ID driver on host localhost
21/11/11 08:24:47 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37523.
21/11/11 08:24:47 INFO NettyBlockTransferService: Server created on 10.202.105.76:37523
21/11/11 08:24:47 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/11/11 08:24:47 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.202.105.76, 37523, None)
21/11/11 08:24:47 INFO BlockManagerMasterEndpoint: Registering block manager 10.202.105.76:37523 with 366.3 MB RAM, BlockManagerId(driver, 10.202.105.76, 37523, None)
21/11/11 08:24:47 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.202.105.76, 37523, None)
21/11/11 08:24:47 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.202.105.76, 37523, None)
21/11/11 08:24:47 WARN DFSUtil: Namenode for test-rtc remains unresolved for ID nn1.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
21/11/11 08:24:47 WARN DFSUtil: Namenode for test-rtc remains unresolved for ID nn2.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
21/11/11 08:24:48 INFO EventLoggingListener: Logging events to hdfs://test-cluster-log/sparkHistory/local-1636590287187
21/11/11 08:24:48 INFO SparkContext: Registered listener org.apache.spark.sql.hive.DagUsageListener
21/11/11 08:24:48 INFO SharedState: loading hive config file: file:/app/spark/conf/hive-site.xml
21/11/11 08:24:48 INFO SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('hdfs://test-cluster/user/hive/warehouse').
21/11/11 08:24:48 INFO SharedState: Warehouse path is 'hdfs://test-cluster/user/hive/warehouse'.
21/11/11 08:24:49 WARN DFSUtil: Namenode for test-rtc remains unresolved for ID nn1.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
21/11/11 08:24:49 WARN DFSUtil: Namenode for test-rtc remains unresolved for ID nn2.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
21/11/11 08:24:49 INFO SessionCatalog: metastoreNames = Set(bdp)
21/11/11 08:24:49 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
21/11/11 08:24:49 INFO Exchange$: Processing Tag player
21/11/11 08:24:49 INFO Exchange$: field keys: age, name
21/11/11 08:24:49 INFO Exchange$: nebula keys: age, name
21/11/11 08:24:49 INFO Exchange$: Loading from Hive and exec select playerid, age, name from tmp_cz.player
21/11/11 08:24:49 INFO SparkSqlParser: Parsing command: select playerid, age, name from tmp_cz.player
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/StreamWriteSupport
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:370)
	at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
	at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
	at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
	at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
	at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
	at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:529)
	at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
	at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
	at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:52)
	at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:41)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
	at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile.apply(rules.scala:41)
	at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile.apply(rules.scala:36)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
	at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
	at scala.collection.immutable.List.foldLeft(List.scala:84)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:66)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:624)
	at com.vesoft.nebula.exchange.reader.HiveReader.read(ServerBaseReader.scala:71)
	at com.vesoft.nebula.exchange.Exchange$.com$vesoft$nebula$exchange$Exchange$$createDataSource(Exchange.scala:261)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:136)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:128)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:128)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:792)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:217)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:242)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.sources.v2.StreamWriteSupport
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 70 more
21/11/11 08:24:50 INFO SparkContext: Invoking stop() from shutdown hook
21/11/11 08:24:50 INFO SparkUI: Stopped Spark web UI at http://10.202.105.76:4040
21/11/11 08:24:50 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/11/11 08:24:50 INFO MemoryStore: MemoryStore cleared
21/11/11 08:24:50 INFO BlockManager: BlockManager stopped
21/11/11 08:24:50 INFO BlockManagerMaster: BlockManagerMaster stopped
21/11/11 08:24:50 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/11/11 08:24:50 INFO SparkContext: Successfully stopped SparkContext
21/11/11 08:24:50 INFO ShutdownHookManager: Shutdown hook called
21/11/11 08:24:50 INFO ShutdownHookManager: Deleting directory /tmp/spark-5c55d130-8bb1-4ad8-b9ed-11f1cb977f0b
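The stack trace points at the root cause: `org.apache.spark.sql.sources.v2.StreamWriteSupport` was introduced in Spark 2.3, so a jar built against Spark 2.4.x cannot load that class on a 2.2.0 runtime. A minimal probe (a hypothetical helper, not part of Exchange) can check which API level the active classpath provides:

```java
// Classpath probe: StreamWriteSupport appeared in Spark 2.3, so its
// absence at class-load time is exactly what produces the
// NoClassDefFoundError above when the 2.4-built jar runs on Spark 2.2.0.
public class SparkApiProbe {
    static boolean hasStreamWriteSupport() {
        try {
            Class.forName("org.apache.spark.sql.sources.v2.StreamWriteSupport");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasStreamWriteSupport()
                ? "Spark 2.3+/2.4 DataSource V2 streaming API present"
                : "Spark 2.3+/2.4 API missing; Exchange 2.5.x cannot run here");
    }
}
```

Run it with the same classpath your spark-submit uses (e.g. the jars under `$SPARK_HOME/jars`); on a Spark 2.2.0 classpath it reports the API as missing.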

Spark 2.4 is not compatible with 2.2 at the moment. Could you install Spark 2.4 in your environment?
The Spark cluster does not need to be on the same machines as Nebula or Hadoop.
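Since only the submitting side needs a newer Spark, one low-friction option is to unpack a prebuilt Spark 2.4.x distribution alongside the existing 2.2.0 install and submit the Exchange job with its own spark-submit against the same YARN/Hadoop configuration. The paths, the 2.4.x minor version, and the config-file name below are illustrative:

```shell
# Download and unpack a prebuilt Spark 2.4.x (version and paths are illustrative)
wget https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
tar -xzf spark-2.4.8-bin-hadoop2.7.tgz -C /opt

# Reuse the existing cluster's Hadoop/Hive configuration
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_HOME=/opt/spark-2.4.8-bin-hadoop2.7

# Submit with the 2.4.x client instead of the cluster-wide 2.2.0 one
# (-c is the Exchange config file, -h enables the Hive source)
$SPARK_HOME/bin/spark-submit \
  --master yarn --deploy-mode cluster \
  --class com.vesoft.nebula.exchange.Exchange \
  nebula-exchange-2.5.1.jar -c application.conf -h
```

This avoids touching the shared 2.2.0 installation that other jobs depend on.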

