- Deployment (distributed / standalone / Docker / DBaaS): distributed 1.2.1
- Production release: N
I found forum posts with similar errors and checked against them several times, but still could not solve this.
Below are the error message, config, schema, and spark-submit command. Please help me analyze them. Thanks!
1. Error message:
21/04/20 15:55:50 ERROR meta.MetaClientImpl: List Spaces Error Code: -11
21/04/20 15:55:50 ERROR meta.MetaClientImpl: Get tags Error: -23
Exception in thread "main" java.util.NoSuchElementException: key not found: vId
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at com.vesoft.nebula.tools.importer.utils.NebulaUtils$$anonfun$getDataSourceFieldType$1.apply(NebulaUtils.scala:65)
at com.vesoft.nebula.tools.importer.utils.NebulaUtils$$anonfun$getDataSourceFieldType$1.apply(NebulaUtils.scala:64)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at com.vesoft.nebula.tools.importer.utils.NebulaUtils$.getDataSourceFieldType(NebulaUtils.scala:64)
at com.vesoft.nebula.tools.importer.processor.VerticesProcessor.process(VerticesProcessor.scala:138)
at com.vesoft.nebula.tools.importer.Exchange$$anonfun$main$2.apply(Exchange.scala:174)
at com.vesoft.nebula.tools.importer.Exchange$$anonfun$main$2.apply(Exchange.scala:152)
at scala.collection.immutable.List.foreach(List.scala:392)
at com.vesoft.nebula.tools.importer.Exchange$.main(Exchange.scala:152)
at com.vesoft.nebula.tools.importer.Exchange.main(Exchange.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
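Reading the trace: the two meta errors above (-11 on List Spaces, -23 on Get tags) suggest the tag schema was never fetched, so the field-type map that getDataSourceFieldType builds is presumably empty, and the first lookup of "vId" throws. A rough Python analogue of that failure mode (the map name and flow are my assumption, not Exchange's actual code):

```python
# Hypothetical sketch: if the meta client fails to fetch the tag schema,
# the name -> type map stays empty, and the first field lookup fails the
# same way the stack trace above does (key not found: vId).
schema_fields = {}  # empty: the "Get tags" call returned error -23

def get_field_type(name):
    if name not in schema_fields:
        raise KeyError(f"key not found: {name}")
    return schema_fields[name]

try:
    get_field_type("vId")
except KeyError as e:
    print(e.args[0])  # key not found: vId
```

If that reading is right, the NoSuchElementException is a symptom; the real question is why the meta client calls fail.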
2. Test data:
{"vId":"DHHM_13549999999","area":"020","city":"北京","operator":"联通"}
{"vId":"DHHM_13550000008","area":"020","city":"广州","operator":"移动"}
{"vId":"DHHM_13549999991","area":"020","city":"重庆","operator":"联通"}
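These rows can be sanity-checked against the `fields` list in the import config with a small script (a sketch; the rows are copied verbatim from the sample above):

```python
import json

# Sample rows from the test data above; the importer expects every line
# to be a standalone JSON object containing all mapped fields.
lines = [
    '{"vId":"DHHM_13549999999","area":"020","city":"北京","operator":"联通"}',
    '{"vId":"DHHM_13550000008","area":"020","city":"广州","operator":"移动"}',
    '{"vId":"DHHM_13549999991","area":"020","city":"重庆","operator":"联通"}',
]

required = {"vId", "area", "city", "operator"}  # matches `fields` in the config

for line in lines:
    row = json.loads(line)
    missing = required - row.keys()
    assert not missing, f"missing fields: {missing}"
print("all rows contain the mapped fields")
```

All three rows pass, so the source data itself does carry the `vId` field the error complains about.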
3. Import config:
{
  name: dhhm
  type: {
    source: json
    sink: client
  }
  path: "hdfs://10.210.141.9:8020/graph/data/nebula/DHHM/"
  fields: ["vId","area","city","operator"]
  nebula.fields: ["vId","area","city","operator"]
  vertex: {
    field: vId
    policy: "hash"
  }
  # vertex: source
  batch: 256
  partition: 32
  isImplicit: true
}
4. Schema info:
(root@nebula) [graphSpace]> show hosts;
=================================================================================================
| Ip | Port | Status | Leader count | Leader distribution | Partition distribution |
=================================================================================================
| 10.210.141.9 | 44500 | online | 33 | graphSpace: 33 | graphSpace: 33 |
-------------------------------------------------------------------------------------------------
| 10.210.141.77 | 44500 | online | 34 | graphSpace: 34 | graphSpace: 34 |
-------------------------------------------------------------------------------------------------
| 10.210.141.135 | 44500 | online | 33 | graphSpace: 33 | graphSpace: 33 |
-------------------------------------------------------------------------------------------------
| Total | | | 100 | graphSpace: 100 | graphSpace: 100 |
-------------------------------------------------------------------------------------------------
(root@nebula) [graphSpace]> show create tag dhhm;
=====================================================================================================================================
| Tag | Create Tag |
=====================================================================================================================================
| dhhm | CREATE TAG `dhhm` (
`vId` string,
`area` string,
`city` string,
`operator` string
) ttl_duration = 0, ttl_col = "" |
-------------------------------------------------------------------------------------------------------------------------------------
Got 1 rows (Time spent: 1.378/2.251 ms)
Tue Apr 20 16:03:52 2021
(root@nebula) [graphSpace]>
(root@nebula) [graphSpace]> show create tag sfzh;
============================================================================================================
| Tag | Create Tag |
============================================================================================================
| sfzh | CREATE TAG `sfzh` (
`vId` string,
`name` string,
`age` int
) ttl_duration = 0, ttl_col = "" |
------------------------------------------------------------------------------------------------------------
5. Submit command:
#!/bin/bash
sh /home/louyp/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit \
--master yarn-client \
--name spark-nebula-load \
--executor-memory 10G \
--executor-cores 1 \
--num-executors 24 \
--total-executor-cores 27 \
--class com.vesoft.nebula.tools.importer.Exchange \
./exchange-1.1.0.jar \
-c ./application1.2.1.conf
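Since the script uses relative paths for the jar and config, a quick pre-flight check that they resolve from the launch directory rules out one trivial failure cause (a minimal sketch using the same paths as above):

```python
import os

def missing_artifacts(paths):
    """Return the subset of paths that do not exist as regular files."""
    return [p for p in paths if not os.path.isfile(p)]

# The relative paths used by the submit script above; they resolve
# against the directory spark-submit is launched from.
artifacts = ["./exchange-1.1.0.jar", "./application1.2.1.conf"]
for p in missing_artifacts(artifacts):
    print("missing:", p)
```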