NebulaGraph 2.6.2: SST ingest error

  • NebulaGraph version: 2.6.2
  • Deployment: distributed
  • Installation: RPM
  • Production environment: yes
  • Hardware
    • Disk: 1 TB (SSD recommended)
    • CPU / RAM: 32 cores, 32 GB
  • Problem description: SST ingest fails in NebulaGraph 2.6.2
  • Relevant meta / storage / graph logs (text preferred, for searchability):
    Ingest Failed: Corruption: block checksum mismatch: stored = 227964223, computed = 3404695789 in /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/4/data/000410.sst offset 0 size 465
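For context on what this error means: every data block in an SST file carries a checksum written at creation time, and at ingest RocksDB recomputes it and compares. "block checksum mismatch: stored = X, computed = Y" therefore means the bytes changed after the file was written (a bad disk, a truncated copy, or a file overwritten on the way). A conceptual sketch of the check, assuming nothing about RocksDB internals (it actually uses masked CRC32C, not zlib's CRC32; this is illustrative only):

```python
import zlib

def make_block(payload: bytes) -> bytes:
    """Append a 4-byte checksum to a data block when the file is written."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + crc.to_bytes(4, "little")

def verify_block(block: bytes) -> bool:
    """Recompute the checksum over the payload and compare to the stored value.
    This is the comparison that fails with 'block checksum mismatch' at ingest."""
    payload, stored = block[:-4], int.from_bytes(block[-4:], "little")
    return (zlib.crc32(payload) & 0xFFFFFFFF) == stored

block = make_block(b"some key/value bytes")
assert verify_block(block)                 # intact block: ingest would proceed

corrupted = bytearray(block)
corrupted[0] ^= 0xFF                       # flip one bit, as a bad copy would
assert not verify_block(bytes(corrupted))  # -> stored and computed values differ
```

The practical consequence: this is an integrity check on the file as it sits on disk, so regenerating the SST only helps if the corruption happened during generation, not if it happens during transfer or is caused by files clobbering each other.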

The generated SST file is corrupted. Try regenerating it and running the ingest again.

I have regenerated the SST files several times; the ingest fails the same way.

Try ingesting in smaller batches and see whether a subset goes through.

Could someone take another look? This is a production environment and it is urgent.
I have regenerated the SST files several times and the ingest still fails the same way. Checking the data, part of it was actually imported. So what exactly is the problem? These SST files all come from the same generation run — why can some be ingested while others cannot? Is this a bug in the Exchange tool?

] Ingesting extra file: /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/download/3/3-497.sst
I0608 10:46:41.550320 555443 EventListener.h:93] Ingest external SST file: column family default, the external file path /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/download/3/3-497.sst, the internal file path /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/data/000076.sst, the properties of the table: # data blocks=778; # entries=58082; # deletions=0; # merge operands=0; # range deletions=0; raw key size=2381362; raw average key size=41.000000; raw value size=1045476; raw average value size=18.000000; data block size=1339018; index block size (user-key? 0, delta-value? 0)=22681; filter block size=0; (estimated) table size=1361699; filter policy name=N/A; prefix extractor name=nullptr; column family ID=N/A; column family name=N/A; comparator name=leveldb.BytewiseComparator; merge operator name=nullptr; property collectors names=[]; SST file compression algo=Snappy; SST file compression options=N/A; creation time=0; time stamp of earliest key=0; file creation time=0; DB identity=; DB session identity=;
I0608 10:46:41.550355 555443 NebulaStore.cpp:791] Ingesting extra file: /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/download/3/3-309.sst
I0608 10:46:41.554502 555443 EventListener.h:93] Ingest external SST file: column family default, the external file path /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/download/3/3-309.sst, the internal file path /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/data/000077.sst, the properties of the table: # data blocks=778; # entries=57966; # deletions=0; # merge operands=0; # range deletions=0; raw key size=2376606; raw average key size=41.000000; raw value size=1043388; raw average value size=18.000000; data block size=1334268; index block size (user-key? 0, delta-value? 0)=22567; filter block size=0; (estimated) table size=1356835; filter policy name=N/A; prefix extractor name=nullptr; column family ID=N/A; column family name=N/A; comparator name=leveldb.BytewiseComparator; merge operator name=nullptr; property collectors names=[]; SST file compression algo=Snappy; SST file compression options=N/A; creation time=0; time stamp of earliest key=0; file creation time=0; DB identity=; DB session identity=;
I0608 10:46:41.554535 555443 NebulaStore.cpp:791] Ingesting extra file: /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/download/3/3-355.sst
I0608 10:46:41.558632 555443 EventListener.h:93] Ingest external SST file: column family default, the external file path /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/download/3/3-355.sst, the internal file path /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/data/000078.sst, the properties of the table: # data blocks=777; # entries=57973; # deletions=0; # merge operands=0; # range deletions=0; raw key size=2376893; raw average key size=41.000000; raw value size=1043514; raw average value size=18.000000; data block size=1331506; index block size (user-key? 0, delta-value? 0)=22586; filter block size=0; (estimated) table size=1354092; filter policy name=N/A; prefix extractor name=nullptr; column family ID=N/A; column family name=N/A; comparator name=leveldb.BytewiseComparator; merge operator name=nullptr; property collectors names=[]; SST file compression algo=Snappy; SST file compression options=N/A; creation time=0; time stamp of earliest key=0; file creation time=0; DB identity=; DB session identity=;
I0608 10:46:41.558668 555443 NebulaStore.cpp:791] Ingesting extra file: /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/download/3/3-529.sst
E0608 10:46:41.573429 555443 RocksEngine.cpp:402] Ingest Failed: Corruption: block checksum mismatch: stored = 2330542018, computed = 2431487495  in /home/service/var/data/nebula-graph-2.6.2/storaged/nebula/2/data/000038.sst offset 0 size 1083
E0608 10:46:41.573515 555443 StorageHttpIngestHandler.cpp:95] SSTFile Ingest Failed: E_UNKNOWN
E0608 10:46:41.573534 555443 StorageHttpIngestHandler.cpp:71] SSTFile ingest failed

Are the Exchange logs from the SST generation still available? Please paste them here.

1. SST generation showed no errors and the Spark job succeeded; only the ingest command fails.
2. When I ingest vertices and edges separately it succeeds, but ingesting them together can fail. That seems odd.
日志.txt (395.0 KB)

Please share your Exchange configuration files.

edgeAll.conf (2.0 KB)
Thing.conf (3.3 KB)

Now SST generation itself is also failing. Please take a look.
日志 (1).txt (190.3 KB)

Please post the executor logs; what you attached is the driver log, and the actual error is in the executors.

I found the problem: there was an issue with upper/lowercase conversion of the tag names, so the names in the generated conf files did not match the schema.
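For anyone hitting the same thing: the tag/edge names in the Exchange conf must match the schema exactly, including letter case. A quick sanity check you could run before a big generation job — the name lists here are hypothetical placeholders; substitute the output of `SHOW TAGS` / `SHOW EDGES` and the `name` fields from your conf files:

```python
def check_names(conf_names, schema_names):
    """Report conf names that don't match the schema exactly, flagging those
    that differ only in letter case (the bug found in this thread)."""
    schema = set(schema_names)
    by_lower = {s.lower(): s for s in schema_names}
    problems = []
    for name in conf_names:
        if name in schema:
            continue                                   # exact match: fine
        if name.lower() in by_lower:
            problems.append(f"{name}: case mismatch, schema has {by_lower[name.lower()]}")
        else:
            problems.append(f"{name}: not in schema at all")
    return problems

# hypothetical example
print(check_names(["Thing", "edgeall"], ["Thing", "edgeAll"]))
# -> ['edgeall: case mismatch, schema has edgeAll']
```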

You can mark your own reply as the solution, so that people who hit a similar problem later can find the answer quickly.

Was this also the cause of the earlier `Corruption` error during ingest?

Now, while generating SST files for a large dataset (about 10 billion records), we hit the error below. Is the data at fault? Please help diagnose.
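The log below fails with `org.rocksdb.RocksDBException: Keys must be added in order`: `SstFileWriter.put` requires each key to be strictly greater (in comparator order) than the previous one within a file, so an out-of-order key — or a duplicate key, e.g. a repeated record encoding to the same storage key — aborts the write. A minimal sketch of that constraint, not the Exchange code itself (the class and method names here only mimic the real writer):

```python
class SstFileWriterSketch:
    """Mimics the strict-ordering check that SstFileWriter.put enforces."""

    def __init__(self):
        self._last = None
        self.entries = []

    def put(self, key: bytes, value: bytes):
        # Keys must be strictly increasing in bytewise order; equal keys
        # (duplicates) are rejected too.
        if self._last is not None and key <= self._last:
            raise RuntimeError("Keys must be added in order")
        self._last = key
        self.entries.append((key, value))

w = SstFileWriterSketch()
w.put(b"a", b"1")
w.put(b"b", b"2")
try:
    w.put(b"b", b"3")   # duplicate key within one file: rejected
except RuntimeError as e:
    print(e)            # -> Keys must be added in order
```

So with data at this scale, the usual suspects are duplicate source records (same vertex ID, or same edge src/dst/rank) landing in one Spark partition, or a sorting step that doesn't match the writer's key order.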

22/06/17 10:37:35 WARN TaskSetManager: Lost task 78.2 in stage 1.0 (TID 1878, cdh-bjht-1304, executor 35): TaskKilled (Stage cancelled)
22/06/17 10:37:35 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 354 in stage 1.0 failed 4 times, most recent failure: Lost task 354.3 in stage 1.0 (TID 1680, cdh-bjht-7210, executor 3): org.rocksdb.RocksDBException: Keys must be added in order
	at org.rocksdb.SstFileWriter.put(Native Method)
	at org.rocksdb.SstFileWriter.put(SstFileWriter.java:132)
	at com.vesoft.nebula.exchange.writer.NebulaSSTWriter.write(FileBaseWriter.scala:43)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3$$anonfun$apply$5.apply(EdgeProcessor.scala:288)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3$$anonfun$apply$5.apply(EdgeProcessor.scala:258)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3.apply(EdgeProcessor.scala:258)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3.apply(EdgeProcessor.scala:246)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 354 in stage 1.0 failed 4 times, most recent failure: Lost task 354.3 in stage 1.0 (TID 1680, cdh-bjht-7210, executor 3): org.rocksdb.RocksDBException: Keys must be added in order
	at org.rocksdb.SstFileWriter.put(Native Method)
	at org.rocksdb.SstFileWriter.put(SstFileWriter.java:132)
	at com.vesoft.nebula.exchange.writer.NebulaSSTWriter.write(FileBaseWriter.scala:43)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3$$anonfun$apply$5.apply(EdgeProcessor.scala:288)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3$$anonfun$apply$5.apply(EdgeProcessor.scala:258)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3.apply(EdgeProcessor.scala:258)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3.apply(EdgeProcessor.scala:246)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:933)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
	at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:933)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2736)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2736)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2736)
	at org.apache.spark.sql.Dataset$$anonfun$withNewRDDExecutionId$1.apply(Dataset.scala:3350)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
	at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3346)
	at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2735)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor.process(EdgeProcessor.scala:246)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$3.apply(Exchange.scala:190)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$3.apply(Exchange.scala:168)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:168)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 67.3 in stage 1.0 (TID 1838, cdh-bjht-0643, executor 97): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 2.3 in stage 1.0 (TID 1769, cdh-bjht-1729, executor 38): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 24.3 in stage 1.0 (TID 1980, cdh-bjht-7376, executor 27): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 171.2 in stage 1.0 (TID 1654, cdh-bjht-3339, executor 33): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 57.3 in stage 1.0 (TID 2013, cdh-bjht-7376, executor 27): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 134.3 in stage 1.0 (TID 1746, cdh-bjht-7376, executor 27): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 194.3 in stage 1.0 (TID 1760, cdh-bjht-7334, executor 73): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 227.3 in stage 1.0 (TID 1831, cdh-bjht-8366, executor 26): TaskKilled (Stage cancelled)
22/06/17 10:37:35 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 354 in stage 1.0 failed 4 times, most recent failure: Lost task 354.3 in stage 1.0 (TID 1680, cdh-bjht-7210, executor 3): org.rocksdb.RocksDBException: Keys must be added in order
	at org.rocksdb.SstFileWriter.put(Native Method)
	at org.rocksdb.SstFileWriter.put(SstFileWriter.java:132)
	at com.vesoft.nebula.exchange.writer.NebulaSSTWriter.write(FileBaseWriter.scala:43)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3$$anonfun$apply$5.apply(EdgeProcessor.scala:288)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3$$anonfun$apply$5.apply(EdgeProcessor.scala:258)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3.apply(EdgeProcessor.scala:258)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor$$anonfun$process$3.apply(EdgeProcessor.scala:246)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:933)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
	at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:933)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2736)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2736)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2736)
	at org.apache.spark.sql.Dataset$$anonfun$withNewRDDExecutionId$1.apply(Dataset.scala:3350)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
	at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3346)
	at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2735)
	at com.vesoft.nebula.exchange.processor.EdgeProcessor.process(EdgeProcessor.scala:246)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$3.apply(Exchange.scala:190)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$3.apply(Exchange.scala:168)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:168)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 212.3 in stage 1.0 (TID 1808, cdh-bjht-0435, executor 46): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 438.2 in stage 1.0 (TID 1872, cdh-bjht-7210, executor 3): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 363.3 in stage 1.0 (TID 1864, cdh-bjht-7210, executor 3): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 7.3 in stage 1.0 (TID 1858, cdh-bjht-5653, executor 7): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 332.3 in stage 1.0 (TID 1854, cdh-bjht-7210, executor 3): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 142.2 in stage 1.0 (TID 1729, cdh-bjht-3339, executor 33): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 235.3 in stage 1.0 (TID 1802, cdh-bjht-7974, executor 100): TaskKilled (Stage cancelled)
22/06/17 10:37:35 INFO SparkContext: Invoking stop() from shutdown hook
22/06/17 10:37:35 WARN TaskSetManager: Lost task 13.3 in stage 1.0 (TID 1796, cdh-bjht-7368, executor 113): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 291.3 in stage 1.0 (TID 2035, cdh-bjht-1466, executor 91): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 250.2 in stage 1.0 (TID 1464, cdh-bjht-1466, executor 91): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 232.2 in stage 1.0 (TID 1709, cdh-bjht-8383, executor 103): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 88.3 in stage 1.0 (TID 1783, cdh-bjht-6872, executor 56): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 476.2 in stage 1.0 (TID 1699, cdh-bjht-1392, executor 80): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 37.3 in stage 1.0 (TID 1905, cdh-bjht-0116, executor 43): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 135.3 in stage 1.0 (TID 1759, cdh-bjht-5323, executor 72): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 90.3 in stage 1.0 (TID 1827, cdh-bjht-7210, executor 3): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 255.2 in stage 1.0 (TID 1587, cdh-bjht-0116, executor 43): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 254.2 in stage 1.0 (TID 1689, cdh-bjht-7376, executor 27): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 33.3 in stage 1.0 (TID 1731, cdh-bjht-7974, executor 100): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 140.2 in stage 1.0 (TID 1692, cdh-bjht-5313, executor 51): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 459.2 in stage 1.0 (TID 1813, cdh-bjht-5641, executor 122): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 76.3 in stage 1.0 (TID 1724, cdh-bjht-6872, executor 56): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 98.3 in stage 1.0 (TID 1780, cdh-bjht-7240, executor 88): TaskKilled (Stage cancelled)
22/06/17 10:37:35 INFO SparkUI: Stopped Spark web UI at http://cdh-bjht-2264:26576
22/06/17 10:37:35 WARN TaskSetManager: Lost task 226.3 in stage 1.0 (TID 1799, cdh-bjht-5653, executor 7): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 470.3 in stage 1.0 (TID 2033, cdh-bjht-1202, executor 60): TaskKilled (Stage cancelled)
22/06/17 10:37:35 INFO YarnAllocator: Driver requested a total number of 0 executor(s).
22/06/17 10:37:35 WARN TaskSetManager: Lost task 61.3 in stage 1.0 (TID 1770, cdh-bjht-1745, executor 98): TaskKilled (Stage cancelled)
22/06/17 10:37:35 INFO YarnClusterSchedulerBackend: Shutting down all executors
22/06/17 10:37:35 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
22/06/17 10:37:35 WARN TaskSetManager: Lost task 176.2 in stage 1.0 (TID 1605, cdh-bjht-1304, executor 35): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 437.2 in stage 1.0 (TID 1608, cdh-bjht-1304, executor 35): TaskKilled (Stage cancelled)
22/06/17 10:37:35 WARN TaskSetManager: Lost task 339.3 in stage 1.0 (TID 1713, cdh-bjht-1304, executor 35): TaskKilled (Stage cancelled)
22/06/17 10:37:35 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
22/06/17 10:37:35 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:160)
	at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:140)
	at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:655)
	at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:274)
	at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:105)
	at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	at java.lang.Thread.run(Thread.java:748)
22/06/17 10:37:35 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
	at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:160)
	at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:140)
	at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:655)
	at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:274)
	at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:105)
	at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	at java.lang.Thread.run(Thread.java:748)
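The Spark trace above (`Could not find CoarseGrainedScheduler`) only indicates that executors were still sending messages after the driver's SparkContext had shut down; it is a symptom of the job dying, not the root cause. The root cause reported earlier, `block checksum mismatch` during `INGEST`, usually means the SST file was corrupted either when it was generated or while it was transferred to the storaged host. A minimal diagnostic sketch (directory names are hypothetical) that compares digests of the generated SST files against the downloaded copies can help narrow down where the corruption happens:

```python
import hashlib
from pathlib import Path


def file_md5(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading in chunks to bound memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def compare_sst_dirs(src_dir: str, dst_dir: str) -> list[str]:
    """Return names of .sst files that are missing from, or differ in,
    the destination (downloaded) directory versus the source (generated) one."""
    mismatched = []
    for src in sorted(Path(src_dir).glob("*.sst")):
        dst = Path(dst_dir) / src.name
        if not dst.exists() or file_md5(str(src)) != file_md5(str(dst)):
            mismatched.append(src.name)
    return mismatched
```

If the digests already differ between the Exchange output directory and the files under `download/` on the storaged host, the corruption happened in transfer; if they match but `INGEST` still fails on specific files, the files were written corrupt at generation time.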

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.