Exchange reports errors when importing vertices

  • Nebula version: 2.0.0

  • Deployment mode: distributed

  • Production deployment: N

  • Problem description
    When submitting the Exchange job via spark-submit, it starts up and then fails with an error, so no data can be imported. The log output is below:

21/07/31 17:43:41 INFO config.Configs$: DataBase Config com.vesoft.nebula.exchange.config.DataBaseConfigEntry@ca12197c
21/07/31 17:43:41 INFO config.Configs$: User Config com.vesoft.nebula.exchange.config.UserConfigEntry@3161b833
21/07/31 17:43:41 INFO config.Configs$: Connection Config Some(Config(SimpleConfigObject({"retry":3,"timeout":3000})))
21/07/31 17:43:41 INFO config.Configs$: Execution Config com.vesoft.nebula.exchange.config.ExecutionConfigEntry@7f9c3944
21/07/31 17:43:41 INFO config.Configs$: Source Config File source path: hdfs://172.20.62.118:39000/input/car.csv, separator: Some(;), header: Some(false)
21/07/31 17:43:41 INFO config.Configs$: Sink Config File source path: hdfs://172.20.62.118:39000/input/car.csv, separator: Some(;), header: Some(false)
21/07/31 17:43:41 INFO config.Configs$: name Car  batch 256
21/07/31 17:43:41 INFO config.Configs$: Tag Config: Tag name: Car, source: File source path: hdfs://172.20.62.118:39000/input/car.csv, separator: Some(;), header: Some(false), sink: Nebula sink addresses: [172.20.62.119:9669], vertex field: _c1, vertex policy: None, batch: 256, partition: 32.
21/07/31 17:43:41 INFO exchange.Exchange$: Config Configs(com.vesoft.nebula.exchange.config.DataBaseConfigEntry@ca12197c,com.vesoft.nebula.exchange.config.UserConfigEntry@3161b833,com.vesoft.nebula.exchange.config.ConnectionConfigEntry@c419f174,com.vesoft.nebula.exchange.config.ExecutionConfigEntry@7f9c3944,com.vesoft.nebula.exchange.config.ErrorConfigEntry@55508fa6,com.vesoft.nebula.exchange.config.RateConfigEntry@fc4543af,,List(Tag name: Car, source: File source path: hdfs://172.20.62.118:39000/input/car.csv, separator: Some(;), header: Some(false), sink: Nebula sink addresses: [172.20.62.119:9669], vertex field: _c1, vertex policy: None, batch: 256, partition: 32.),List(),None)
21/07/31 17:43:41 INFO spark.SparkContext: Running Spark version 2.4.8
21/07/31 17:43:41 INFO spark.SparkContext: Submitted application: com.vesoft.nebula.exchange.Exchange
21/07/31 17:43:41 INFO spark.SecurityManager: Changing view acls to: hadoop
21/07/31 17:43:41 INFO spark.SecurityManager: Changing modify acls to: hadoop
21/07/31 17:43:41 INFO spark.SecurityManager: Changing view acls groups to: 
21/07/31 17:43:41 INFO spark.SecurityManager: Changing modify acls groups to: 
21/07/31 17:43:41 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
21/07/31 17:43:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 41372.
21/07/31 17:43:41 INFO spark.SparkEnv: Registering MapOutputTracker
21/07/31 17:43:41 INFO spark.SparkEnv: Registering BlockManagerMaster
21/07/31 17:43:41 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/07/31 17:43:41 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/07/31 17:43:41 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-d50de68d-64c5-48c5-b871-17c026c67ed3
21/07/31 17:43:41 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
21/07/31 17:43:41 INFO spark.SparkEnv: Registering OutputCommitCoordinator
21/07/31 17:43:42 INFO util.log: Logging initialized @2091ms to org.spark_project.jetty.util.log.Slf4jLog
21/07/31 17:43:42 INFO server.Server: jetty-9.4.z-SNAPSHOT; built: unknown; git: unknown; jvm 1.8.0_144-b01
21/07/31 17:43:42 INFO server.Server: Started @2190ms
21/07/31 17:43:42 INFO server.AbstractConnector: Started ServerConnector@70197c93{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
21/07/31 17:43:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@267bbe1a{/jobs,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d6f197e{/jobs/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6ef7623{/jobs/job,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5c089b2f{/jobs/job/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6999cd39{/stages,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@14bae047{/stages/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7ed9ae94{/stages/stage,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2bc12da{/stages/stage/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3122b117{/stages/pool,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@534ca02b{/stages/pool/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@29a23c3d{/storage,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4b6ac111{/storage/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6fe46b62{/storage/rdd,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@591fd34d{/storage/rdd/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61e45f87{/environment,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7c9b78e3{/environment/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3068b369{/executors,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@17ca8b92{/executors/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5491f68b{/executors/threadDump,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@736ac09a{/executors/threadDump/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6ecd665{/static,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@378bd86d{/,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2189e7a7{/api,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@644abb8f{/jobs/job/kill,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1a411233{/stages/stage/kill,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://node03:4040
21/07/31 17:43:42 INFO spark.SparkContext: Added JAR file:/home/hadoop/wangbin/test/nebula-exchange-2.0.0.jar at spark://node03:41372/jars/nebula-exchange-2.0.0.jar with timestamp 1627724622303
21/07/31 17:43:42 INFO executor.Executor: Starting executor ID driver on host localhost
21/07/31 17:43:42 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42169.
21/07/31 17:43:42 INFO netty.NettyBlockTransferService: Server created on node03:42169
21/07/31 17:43:42 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/07/31 17:43:42 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, node03, 42169, None)
21/07/31 17:43:42 INFO storage.BlockManagerMasterEndpoint: Registering block manager node03:42169 with 366.3 MB RAM, BlockManagerId(driver, node03, 42169, None)
21/07/31 17:43:42 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, node03, 42169, None)
21/07/31 17:43:42 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, node03, 42169, None)
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26722665{/metrics/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO exchange.Exchange$: Processing Tag Car
21/07/31 17:43:42 INFO exchange.Exchange$: field keys: _c0, _c1, _c2, _c3, _c4, _c5
21/07/31 17:43:42 INFO exchange.Exchange$: nebula keys: Hphm, Hphmzl, Hpys, Csys, Hpzl, Clpp
21/07/31 17:43:42 INFO exchange.Exchange$: Loading CSV files from hdfs://172.20.62.118:39000/input/car.csv
21/07/31 17:43:42 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/hadoop/wangbin/test/spark-warehouse').
21/07/31 17:43:42 INFO internal.SharedState: Warehouse path is 'file:/home/hadoop/wangbin/test/spark-warehouse'.
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d192aef{/SQL,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1416cf9f{/SQL/json,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2dfe5525{/SQL/execution,null,AVAILABLE,@Spark}
21/07/31 17:43:42 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1290c49{/SQL/execution/json,null,AVAILABLE,@Spark}
21/07/31 17:43:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@75361cf6{/static/sql,null,AVAILABLE,@Spark}
21/07/31 17:43:43 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
21/07/31 17:43:44 INFO datasources.InMemoryFileIndex: It took 72 ms to list leaf files for 1 paths.
21/07/31 17:43:44 INFO datasources.InMemoryFileIndex: It took 4 ms to list leaf files for 1 paths.
21/07/31 17:43:46 INFO datasources.FileSourceStrategy: Pruning directories with: 
21/07/31 17:43:46 INFO datasources.FileSourceStrategy: Post-Scan Filters: (length(trim(value#0, None)) > 0)
21/07/31 17:43:46 INFO datasources.FileSourceStrategy: Output Data Schema: struct<value: string>
21/07/31 17:43:46 INFO execution.FileSourceScanExec: Pushed Filters: 
21/07/31 17:43:46 INFO codegen.CodeGenerator: Code generated in 220.019558 ms
21/07/31 17:43:46 INFO codegen.CodeGenerator: Code generated in 20.40764 ms
21/07/31 17:43:46 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 297.4 KB, free 366.0 MB)
21/07/31 17:43:47 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 26.4 KB, free 366.0 MB)
21/07/31 17:43:47 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on node03:42169 (size: 26.4 KB, free: 366.3 MB)
21/07/31 17:43:47 INFO spark.SparkContext: Created broadcast 0 from csv at FileBaseReader.scala:85
21/07/31 17:43:47 INFO execution.FileSourceScanExec: Planning scan with bin packing, max size: 14309981 bytes, open cost is considered as scanning 4194304 bytes.
21/07/31 17:43:47 INFO spark.SparkContext: Starting job: csv at FileBaseReader.scala:85
21/07/31 17:43:47 INFO scheduler.DAGScheduler: Got job 0 (csv at FileBaseReader.scala:85) with 1 output partitions
21/07/31 17:43:47 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (csv at FileBaseReader.scala:85)
21/07/31 17:43:47 INFO scheduler.DAGScheduler: Parents of final stage: List()
21/07/31 17:43:47 INFO scheduler.DAGScheduler: Missing parents: List()
21/07/31 17:43:47 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at csv at FileBaseReader.scala:85), which has no missing parents
21/07/31 17:43:47 INFO memory.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 8.9 KB, free 366.0 MB)
21/07/31 17:43:47 INFO memory.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.6 KB, free 366.0 MB)
21/07/31 17:43:47 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on node03:42169 (size: 4.6 KB, free: 366.3 MB)
21/07/31 17:43:47 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1184
21/07/31 17:43:47 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at csv at FileBaseReader.scala:85) (first 15 tasks are for partitions Vector(0))
21/07/31 17:43:47 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
21/07/31 17:43:47 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, ANY, 8256 bytes)
21/07/31 17:43:47 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
21/07/31 17:43:47 INFO executor.Executor: Fetching spark://node03:41372/jars/nebula-exchange-2.0.0.jar with timestamp 1627724622303
21/07/31 17:43:47 INFO client.TransportClientFactory: Successfully created connection to node03/172.20.62.119:41372 after 45 ms (0 ms spent in bootstraps)
21/07/31 17:43:47 INFO util.Utils: Fetching spark://node03:41372/jars/nebula-exchange-2.0.0.jar to /tmp/spark-3b505ece-e021-447a-9a23-dff70fe99d14/userFiles-2b3a0458-ee5a-4451-8071-5f164a4b75d6/fetchFileTemp449888639441390634.tmp
21/07/31 17:43:47 INFO executor.Executor: Adding file:/tmp/spark-3b505ece-e021-447a-9a23-dff70fe99d14/userFiles-2b3a0458-ee5a-4451-8071-5f164a4b75d6/nebula-exchange-2.0.0.jar to class loader
21/07/31 17:43:47 INFO datasources.FileScanRDD: Reading File path: hdfs://172.20.62.118:39000/input/car.csv, range: 0-10115677, partition values: [empty row]
21/07/31 17:43:47 INFO codegen.CodeGenerator: Code generated in 10.203916 ms
21/07/31 17:43:48 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 1262 bytes result sent to driver
21/07/31 17:43:48 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 816 ms on localhost (executor driver) (1/1)
21/07/31 17:43:48 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
21/07/31 17:43:48 INFO scheduler.DAGScheduler: ResultStage 0 (csv at FileBaseReader.scala:85) finished in 0.936 s
21/07/31 17:43:48 INFO scheduler.DAGScheduler: Job 0 finished: csv at FileBaseReader.scala:85, took 1.006232 s
21/07/31 17:43:48 INFO datasources.FileSourceStrategy: Pruning directories with: 
21/07/31 17:43:48 INFO datasources.FileSourceStrategy: Post-Scan Filters: 
21/07/31 17:43:48 INFO datasources.FileSourceStrategy: Output Data Schema: struct<value: string>
21/07/31 17:43:48 INFO execution.FileSourceScanExec: Pushed Filters: 
21/07/31 17:43:48 INFO codegen.CodeGenerator: Code generated in 9.854946 ms
21/07/31 17:43:48 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 297.4 KB, free 365.7 MB)
21/07/31 17:43:48 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 26.4 KB, free 365.7 MB)
21/07/31 17:43:48 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on node03:42169 (size: 26.4 KB, free: 366.2 MB)
21/07/31 17:43:48 INFO spark.SparkContext: Created broadcast 2 from csv at FileBaseReader.scala:85
21/07/31 17:43:48 INFO execution.FileSourceScanExec: Planning scan with bin packing, max size: 14309981 bytes, open cost is considered as scanning 4194304 bytes.
Exception in thread "main" java.lang.IllegalStateException
	at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
	at com.google.common.net.HostAndPort.getPort(HostAndPort.java:106)
	at com.vesoft.nebula.exchange.MetaProvider$$anonfun$1.apply(MetaProvider.scala:28)
	at com.vesoft.nebula.exchange.MetaProvider$$anonfun$1.apply(MetaProvider.scala:27)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.MetaProvider.<init>(MetaProvider.scala:27)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor.process(VerticesProcessor.scala:109)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:145)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:122)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:122)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:855)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:930)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:939)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/07/31 17:43:48 INFO spark.SparkContext: Invoking stop() from shutdown hook
21/07/31 17:43:48 INFO server.AbstractConnector: Stopped Spark@70197c93{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
21/07/31 17:43:48 INFO ui.SparkUI: Stopped Spark web UI at http://node03:4040
21/07/31 17:43:48 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/07/31 17:43:48 INFO memory.MemoryStore: MemoryStore cleared
21/07/31 17:43:48 INFO storage.BlockManager: BlockManager stopped
21/07/31 17:43:48 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
21/07/31 17:43:48 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/07/31 17:43:48 INFO spark.SparkContext: Successfully stopped SparkContext
21/07/31 17:43:48 INFO util.ShutdownHookManager: Shutdown hook called
21/07/31 17:43:48 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-3b505ece-e021-447a-9a23-dff70fe99d14
21/07/31 17:43:48 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-cf0bedc5-8fc9-475d-b69f-bb882c24b00c

If anyone has run into this before or knows the cause, please give me a few pointers; I am new to this. Many thanks!

Check the graph and meta addresses in your configuration.


This is the address configured in Exchange's application.conf:
(screenshot: application.conf — snipaste_20210802_164050)

This is the address configured in nebula-metad.conf:
(screenshot: nebula-metad.conf)

This is the address configured in nebula-graphd.conf:
(screenshot: nebula-graphd.conf)

Is there anything wrong with this configuration?

The meta address is misconfigured; it must use the list-of-strings format ["", "", ""].
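For context, a minimal sketch of the address section in Exchange 2.0.0's application.conf. The graph address is taken from the log output above; the meta host and port 9559 are assumptions (9559 is the metad default), not the poster's actual values:

```hocon
nebula: {
  address: {
    # Each entry must be a "host:port" string. A missing port is what makes
    # Guava's HostAndPort.getPort throw the IllegalStateException seen in
    # the stack trace.
    graph: ["172.20.62.119:9669"]
    meta: ["172.20.62.118:9559"]
  }
}
```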


After correcting the addresses, a different error appears:

Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Authenticate failed: Expected protocol id ffffff82 but got 0
	at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:59)
	at com.vesoft.nebula.client.graph.net.NebulaPool.getSession(NebulaPool.java:108)
	at com.vesoft.nebula.exchange.GraphProvider.getGraphClient(GraphProvider.scala:35)
	at com.vesoft.nebula.exchange.writer.NebulaGraphClientWriter.<init>(ServerBaseWriter.scala:137)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor.com$vesoft$nebula$exchange$processor$VerticesProcessor$$processEachPartition(VerticesProcessor.scala:68)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor$$anonfun$process$4.apply(VerticesProcessor.scala:251)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor$$anonfun$process$4.apply(VerticesProcessor.scala:251)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2107)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2107)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:411)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:417)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Is this a client-side problem, or is the server misconfigured? The server was built from master, while Exchange was built from 2.0.0.

Version mismatch. Either use 2.0.0 (or 2.0.1) for both, or use master for both.
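One way to keep the two sides aligned is to build server and Exchange from the same release tag, so the Java client Exchange links against matches the server's RPC protocol. A rough sketch of the Maven pin on the 2.0.0 line (coordinates assumed from the nebula-java 2.0.0 release; verify against the actual Exchange pom):

```xml
<!-- Hypothetical pin: the client version should match the server release. -->
<dependency>
  <groupId>com.vesoft</groupId>
  <artifactId>client</artifactId>
  <version>2.0.0</version>
</dependency>
```

A mismatched client and server speak different Thrift framings, which is consistent with the "Expected protocol id ffffff82 but got 0" message in the stack trace above.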