spark-submit in yarn-cluster mode importing Hive data reports that the configuration file does not exist

I have the same problem as this post.

If I follow the suggested solution, the project's code that reads the data can no longer import data from Hive. Is there another way?

I checked and the driver machine does have the file, so why can't it be read?

Hi, this post asks the same question as the other post you opened, so I've hidden this one. Please add the relevant debug information under the other post following the question template, so the devs can troubleshoot. Thanks!

Could you paste the error? Follow the tracking URL and check. You launched in client mode, so in principle the config file should definitely be found, unless your config file path is wrong.
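For context: in client mode the driver runs on the machine you submit from, so ./application.conf resolves against your local working directory; in yarn-cluster mode the driver runs inside the ApplicationMaster container on a NodeManager, so a relative path only resolves if the file has been shipped into that container. A rough sketch of a cluster-mode submit, reusing the jar and flags from the command posted later in this thread and assuming Spark's --files option is used to localize the config into the container (not verified against this cluster):

spark-submit --master yarn --deploy-mode cluster \
  --class com.vesoft.nebula.tools.importer.Exchange \
  --files ./application.conf \
  ./exchange-1.1.0.jar -c application.conf -h

With --files, the file is copied into the container's working directory, so -c then refers to it by file name only rather than by a path on the submit machine.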

application.conf
app.log
app.sql
curl_result
exchange-1.1.0.jar
exchange-1-1.1.0.jar
exchange.jar
hive_application.conf
info
main
main_origin
preCommands.txt
Application Id: application_1608793700198_22265362, Tracking URL: http://bigdata-nmg-hdprm00.nmg01.bigdata.intra.xiaojukeji.com:8088/proxy/application_1608793700198_22265362/
Notice: There is a serious problem of syntax errors and exceptions in your code. Please check your code and dependencies!
DriverContainer URL: http://bigdata-nmg-hdp5396.nmg01.diditaxi.com.bigdata.intra.xiaojukeji.com:8042/node/containerlogs/container_e20_1608793700198_22265362_01_000002/xxxx

21/01/29 12:24:04 ERROR Client: Application diagnostics message: Application application_1608793700198_22265362 failed 2 times due to AM Container for appattempt_1608793700198_22265362_000002 exited with exitCode: 13
For more detailed output, check application tracking page:http://bigdata-nmg-hdprm00.nmg01.bigdata.intra.xiaojukeji.com:8088/proxy/application_1608793700198_22265362/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e20_1608793700198_22265362_02_000001
Exit code: 13
Stack trace: ExitCodeException exitCode=13:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:372)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:310)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:85)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : user is yarn
main : requested yarn user is xxx

Container exited with a non-zero exit code 13. Last 4096 bytes of stderr :
ster: Preparing Local resources
21/01/29 12:23:59 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1608793700198_22265362_000002
21/01/29 12:24:00 INFO ApplicationMaster: Starting the user application in a separate Thread
21/01/29 12:24:00 INFO ApplicationMaster: Waiting for spark context initialization…
21/01/29 12:24:00 ERROR ApplicationMaster: User class threw exception: java.lang.IllegalArgumentException: ./application.conf not exist
java.lang.IllegalArgumentException: ./application.conf not exist
at com.vesoft.nebula.tools.importer.config.Configs$.parse(Configs.scala:182)
at com.vesoft.nebula.tools.importer.Exchange$.main(Exchange.scala:76)
at com.vesoft.nebula.tools.importer.Exchange.main(Exchange.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:735)
21/01/29 12:24:00 INFO ApplicationMaster: Final app status: FAILED, exitCode: 13, (reason: User class threw exception: java.lang.IllegalArgumentException: ./application.conf not exist
at com.vesoft.nebula.tools.importer.config.Configs$.parse(Configs.scala:182)
at com.vesoft.nebula.tools.importer.Exchange$.main(Exchange.scala:76)
at com.vesoft.nebula.tools.importer.Exchange.main(Exchange.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:735)
)
21/01/29 12:24:00 ERROR ApplicationMaster: Uncaught exception:
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:480)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:308)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:245)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:830)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:829)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:244)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:854)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Caused by: java.lang.IllegalArgumentException: ./application.conf not exist
at com.vesoft.nebula.tools.importer.config.Configs$.parse(Configs.scala:182)
at com.vesoft.nebula.tools.importer.Exchange$.main(Exchange.scala:76)
at com.vesoft.nebula.tools.importer.Exchange.main(Exchange.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:735)
21/01/29 12:24:00 INFO ApplicationMaster: Deleting staging directory hdfs://difed/user/xxxx/.sparkStaging/application_1608793700198_22265362

Failing this attempt. Failing the application.
Exception in thread "main" org.apache.spark.SparkException: Application application_1608793700198_22265362 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1281)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1678)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:931)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:940)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

[2021-01-29 12:24:04] *************** Run failed [EXIT CODE: 1] ***************

ls ./
spark-submit --master yarn --deploy-mode client --class com.vesoft.nebula.tools.importer.Exchange ./exchange-1.1.0.jar -c ./application.conf -h

Running ls ./ shows that the file is there.
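Note that the command above uses --deploy-mode client, while the failing attempt in the log dies inside the ApplicationMaster (yarn-cluster behavior), so ls ./ on the submit machine only proves the file exists locally, not that it was localized into the driver's container. As a rough way to check, assuming the standard YARN CLI is available on the gateway machine, you can pull the full driver log for the failed application and look around the "Preparing Local resources" lines (visible in the stderr snippet above) to see which files actually landed in the container, for example:

yarn logs -applicationId application_1608793700198_22265362 | grep -i -B 2 -A 10 "local resources"

If application.conf is not listed there, it was never distributed with the job, and the relative path passed to -c cannot resolve on the driver side.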

You have a duplicate post. To avoid repliers having to keep two threads in sync, please treat the other post you published, spark-submit yarn-cluster 模式 导入hive数据 报配置文件not existed - #7 by bobbi, as the main one. I've closed replies on this post (other community members can reach this post through the link you shared externally, which would split the replies across two places).