Spark import failure

Simulating an import of players.csv from HDFS into the player tag in the nba space.
1. The players.csv file
id,name,age
100,zs,10
101,ls,12

2. The player tag
CREATE TAG player(name string, age int);
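
For intuition, the mapping configured in section 3 below amounts to turning each CSV row into an nGQL insert: the id column becomes the vertex ID, and name and age become the player tag's properties. A toy sketch of that transformation in plain Scala (only an illustration of the mapping, not the importer's actual code):

object RowToNGQL {
  // One row of players.csv from section 1, already parsed.
  case class Player(id: Long, name: String, age: Int)

  def toInsert(p: Player): String =
    s"""INSERT VERTEX player(name, age) VALUES ${p.id}:("${p.name}", ${p.age});"""

  def main(args: Array[String]): Unit = {
    val rows = Seq(Player(100, "zs", 10), Player(101, "ls", 12))
    rows.map(toInsert).foreach(println)
    // INSERT VERTEX player(name, age) VALUES 100:("zs", 10);
    // INSERT VERTEX player(name, age) VALUES 101:("ls", 12);
  }
}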
3. The configuration file:

{
  # Spark configuration
  # See: http://spark.apache.org/docs/latest/configuration.html
  spark: {
    app: {
      name: Spark Writer
    }

    driver: {
      cores: 1
      maxResultSize: 1G
    }

    cores {
      max: 16
    }
  }

  # Nebula Graph configuration
  nebula: {
    # Query engine address list
    addresses: ["127.0.0.1:3699"]

    # Username and password for connecting to the Nebula Graph service
    user: user
    pswd: password

    # Nebula Graph space name
    space: test

    # Thrift timeout (ms) and retry count
    # Defaults are 3000 and 3 if not set
    connection {
      timeout: 3000
      retry: 3
    }

    # nGQL execution retry count
    # Default is 3 if not set
    execution {
      retry: 3
    }
  }

  # Tag processing
  tags: [

    # Load data from an HDFS file; the data type here is CSV
    # The tag name is player
    # The id, name, and age fields in the CSV are written to the player tag
    # The vertex ID column is id
    {
      name: player
      type: csv
      path: "hdfs://192.168.96.221:8020/nebula/players.csv"
      fields: {
        id: id,
        name: name,
        age: age
      }
      vertex: id
      batch : 16
    }
  ]


}

4. The command
/bdp/spark/bin/spark-submit --class com.vesoft.nebula.tools.generator.v2.SparkClientGenerator --master local /bdp/spark/jars/sst.generator-1.0.0-rc4-spark2.4.4.jar -c /bdp/spark/conf/spark_nebula.conf -h -d -D

5. The error

Exception in thread "main" com.typesafe.config.ConfigException$WrongType: /bdp/spark/conf/spark_nebula.conf: 46: tags has type LIST rather than OBJECT
        at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:163)
        at com.typesafe.config.impl.SimpleConfig.findOrNull(SimpleConfig.java:174)
        at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:188)
        at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:193)
        at com.typesafe.config.impl.SimpleConfig.getObject(SimpleConfig.java:268)
        at com.typesafe.config.impl.SimpleConfig.getObject(SimpleConfig.java:41)
        at com.vesoft.nebula.tools.generator.v2.SparkClientGenerator$.main(SparkClientGenerator.scala:149)
        at com.vesoft.nebula.tools.generator.v2.SparkClientGenerator.main(SparkClientGenerator.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/06/22 16:26:41 INFO SparkContext: Invoking stop() from shutdown hook
20/06/22 16:26:41 INFO SparkUI: Stopped Spark web UI at http://w540:4040
20/06/22 16:26:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/06/22 16:26:41 INFO MemoryStore: MemoryStore cleared
20/06/22 16:26:41 INFO BlockManager: BlockManager stopped
20/06/22 16:26:41 INFO BlockManagerMaster: BlockManagerMaster stopped
20/06/22 16:26:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/06/22 16:26:41 INFO SparkContext: Successfully stopped SparkContext
20/06/22 16:26:41 INFO ShutdownHookManager: Shutdown hook called
20/06/22 16:26:41 INFO ShutdownHookManager: Deleting directory /tmp/spark-f672a344-55e3-4549-a689-405259e3a016
20/06/22 16:26:41 INFO ShutdownHookManager: Deleting directory /tmp/spark-5f30b22d-e5cd-44c5-9c30-4d27c4b91897
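
For context on the exception: Typesafe Config throws ConfigException$WrongType when code requests a path as one type but the file defines it as another. Here getObject was called on tags, which the file defines as a LIST (and the working run below parses the same layout fine). A minimal sketch reproducing both the working and the failing accessor, assuming only the Typesafe Config library on the classpath (not the tool's actual code):

import com.typesafe.config.ConfigFactory
import scala.collection.JavaConverters._

object TagsConfigCheck {
  def main(args: Array[String]): Unit = {
    // Same shape as the tags section of spark_nebula.conf above.
    val config = ConfigFactory.parseString(
      """tags: [
        |  { name: player, type: csv, vertex: id }
        |]""".stripMargin)

    // The matching accessor for a LIST works:
    config.getConfigList("tags").asScala
      .foreach(tag => println(tag.getString("name"))) // prints "player"

    // Asking for an OBJECT at the same path throws exactly the error above:
    // com.typesafe.config.ConfigException$WrongType: tags has type LIST rather than OBJECT
    config.getObject("tags")
  }
}

Since the same layout parses fine as a LIST, the mismatch points at the jar that was executed rather than at the config file, which is consistent with the reply below suggesting to check what was compiled.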

I ran with your configuration on Spark 2.4.4 and it works fine. Can you check whether the source was actually compiled?

> ~/spark-2.4.4/bin/spark-submit --class com.vesoft.nebula.tools.generator.v2.SparkClientGenerator --master local target/sst.generator-1.0.0-rc4.jar -c /home/darion/conf.nebula/tag.test.conf -d -D
20/06/22 16:55:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (com.vesoft.nebula.tools.generator.v2.SparkClientGenerator$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/06/22 16:55:55 INFO SparkContext: Running Spark version 2.4.4
20/06/22 16:55:55 INFO SparkContext: Submitted application: Spark Writer
20/06/22 16:55:56 INFO SecurityManager: Changing view acls to: darion
20/06/22 16:55:56 INFO SecurityManager: Changing modify acls to: darion
20/06/22 16:55:56 INFO SecurityManager: Changing view acls groups to:
20/06/22 16:55:56 INFO SecurityManager: Changing modify acls groups to:
20/06/22 16:55:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(darion); groups with view permissions: Set(); users  with modify permissions: Set(darion); groups with modify permissions: Set()
20/06/22 16:55:56 INFO Utils: Successfully started service 'sparkDriver' on port 46745.
20/06/22 16:55:56 INFO SparkEnv: Registering MapOutputTracker
20/06/22 16:55:56 INFO SparkEnv: Registering BlockManagerMaster
20/06/22 16:55:56 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/06/22 16:55:56 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/06/22 16:55:56 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e14e6e5d-0291-462f-a97b-2462afba3991
20/06/22 16:55:56 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/06/22 16:55:56 INFO SparkEnv: Registering OutputCommitCoordinator
20/06/22 16:55:56 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/06/22 16:55:56 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://host01-cluster.vesoft:4040
20/06/22 16:55:56 INFO SparkContext: Added JAR file:/home/darion/nebula/src/tools/spark-sstfile-generator/target/sst.generator-1.0.0-rc4.jar at spark://host01-cluster.vesoft:46745/jars/sst.generator-1.0.0-rc4.jar with timestamp 1592816156611
20/06/22 16:55:56 INFO Executor: Starting executor ID driver on host localhost
20/06/22 16:55:56 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46205.
20/06/22 16:55:56 INFO NettyBlockTransferService: Server created on host01-cluster.vesoft:46205
20/06/22 16:55:56 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/06/22 16:55:56 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, host01-cluster.vesoft, 46205, None)
20/06/22 16:55:56 INFO BlockManagerMasterEndpoint: Registering block manager host01-cluster.vesoft:46205 with 366.3 MB RAM, BlockManagerId(driver, host01-cluster.vesoft, 46205, None)
20/06/22 16:55:56 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, host01-cluster.vesoft, 46205, None)
20/06/22 16:55:56 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, host01-cluster.vesoft, 46205, None)
20/06/22 16:55:56 INFO SparkClientGenerator$: Processing Tag player
20/06/22 16:55:56 INFO SparkClientGenerator$: Loading csv from /tmp/players.csv
20/06/22 16:55:56 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/darion/nebula/src/tools/spark-sstfile-generator/spark-warehouse').
20/06/22 16:55:56 INFO SharedState: Warehouse path is 'file:/home/darion/nebula/src/tools/spark-sstfile-generator/spark-warehouse'.
20/06/22 16:55:57 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/06/22 16:55:57 WARN DataSource: Multiple sources found for csv (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat, com.databricks.spark.csv.DefaultSource15), defaulting to the internal datasource (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat).
20/06/22 16:55:57 WARN DataSource: Multiple sources found for csv (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat, com.databricks.spark.csv.DefaultSource15), defaulting to the internal datasource (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat).
20/06/22 16:55:58 INFO FileSourceStrategy: Pruning directories with:
20/06/22 16:55:58 INFO FileSourceStrategy: Post-Scan Filters: (length(trim(value#0, None)) > 0)
20/06/22 16:55:58 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
20/06/22 16:55:58 INFO FileSourceScanExec: Pushed Filters:
20/06/22 16:55:59 INFO CodeGenerator: Code generated in 197.510777 ms
20/06/22 16:55:59 INFO CodeGenerator: Code generated in 15.564154 ms
20/06/22 16:55:59 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 220.9 KB, free 366.1 MB)
20/06/22 16:55:59 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.8 KB, free 366.1 MB)
20/06/22 16:55:59 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on host01-cluster.vesoft:46205 (size: 20.8 KB, free: 366.3 MB)
20/06/22 16:55:59 INFO SparkContext: Created broadcast 0 from csv at SparkClientGenerator.scala:737
20/06/22 16:55:59 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
20/06/22 16:55:59 INFO SparkContext: Invoking stop() from shutdown hook
20/06/22 16:55:59 INFO SparkUI: Stopped Spark web UI at http://host01-cluster.vesoft:4040
20/06/22 16:55:59 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/06/22 16:55:59 INFO MemoryStore: MemoryStore cleared
20/06/22 16:55:59 INFO BlockManager: BlockManager stopped
20/06/22 16:55:59 INFO BlockManagerMaster: BlockManagerMaster stopped
20/06/22 16:55:59 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/06/22 16:55:59 INFO SparkContext: Successfully stopped SparkContext
20/06/22 16:55:59 INFO ShutdownHookManager: Shutdown hook called
20/06/22 16:55:59 INFO ShutdownHookManager: Deleting directory /tmp/spark-f18dd142-5792-44cf-ba7a-8e9be1846a0f
20/06/22 16:55:59 INFO ShutdownHookManager: Deleting directory /tmp/spark-fc81ec5b-d1fc-4e95-9329-0d0c3a40d035

The source was indeed compiled.

OP, when you imported data with Spark, did you run into the problem of fields not being found when reading the CSV file?
Is it that HDFS currently only supports JSON and Parquet?

Fields not found? CSV is actually supported. Can you describe it in more detail?
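
One common way Spark fails to find CSV fields is reading the file without the header option: the first row is then treated as data and the columns are auto-named _c0, _c1, ..., so lookups for id, name, or age come up empty. A minimal sketch of the difference with plain Spark (hypothetical local path, not the importer's code):

import org.apache.spark.sql.SparkSession

object CsvHeaderCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("csv-header-check")
      .master("local[*]")
      .getOrCreate()

    val path = "/tmp/players.csv" // hypothetical path; same layout as section 1

    // Without header=true, columns come back as _c0, _c1, _c2 (all strings)
    // and the first data row is the "id,name,age" header itself.
    spark.read.csv(path).printSchema()

    // With header=true, the columns are id, name, age as the mapping expects.
    val df = spark.read.option("header", "true").csv(path)
    df.printSchema()
    df.select("id", "name", "age").show()

    spark.stop()
  }
}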