nebula exchange3: why does importing from PostgreSQL report PSQLException: ERROR: type "string" does not exist?

[quote=“lsz4123, post:1, topic:2116, full:true”]
Question: which part of the configuration file is written incorrectly?

Question template:

  • nebula version: 2.0
  • exchange: 2.0
  • nGQL schema:
CREATE SPACE user_relate;
USE user_relate;
CREATE TAG ve(vid string);
CREATE EDGE ed();
  • csv
Vertex file:
1
2
3
...
50000

Edge file:
rand(1,50000),rand(1,50000) (10 million rows of random pairs)
  • conf file
{
  # Spark relation config
  spark: {
    app: {
      name: Nebula Exchange 2.0
    }

    driver: {
      cores: 1
      maxResultSize: 1G
    }

    executor: {
        memory: 1G
    }

    cores:{
      max: 16
    }
  }

  nebula: {
    address:{
      graph:["192.168.10.188:3699"]
      meta:["192.168.10.188:45500"]
    }
    user: root
    pswd: nebula
    space: user_relate

    connection {
      timeout: 3000
      retry: 3
    }

    execution {
      retry: 3
    }

    error: {
      max: 32
      output: /tmp/errors
    }

    rate: {
      limit: 1024
      timeout: 1000
    }
  }

  tags: [

    {
      name: ve
      type: {
        source: csv
        sink: client
      }
      path: "hdfs://ip:port/user/lsz/user_relate/vertex.csv"
      fields: [_c0]
      nebula.fields: [vid]
      vertex: _c0
      separator: ","
      header: false
      batch: 256
      partition: 32
    }

  ]

  edges: [
    {
      name: ed
      type: {
        source: csv
        sink: client
      }
      path: "hdfs://ip:port/user/lsz/user_relate/edge.csv"
      fields: [_c0,_c1]
      nebula.fields: []
      source: {
        field: _c1
      }
      target: {
        field: _c0
      }
      separator: ","
      header: false
      batch: 256
      partition: 32
    }
  ]
}
  • Error log
20/12/18 17:42:29 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 3)
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
	at org.apache.spark.sql.Row$class.getString(Row.scala:257)
	at org.apache.spark.sql.catalyst.expressions.GenericRow.getString(rows.scala:166)
	at com.vesoft.nebula.tools.importer.processor.Processor$class.extraValue(Processor.scala:50)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor.extraValue(VerticesProcessor.scala:42)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1$$anonfun$3.apply(VerticesProcessor.scala:120)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1$$anonfun$3.apply(VerticesProcessor.scala:119)
	at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:683)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:682)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1.apply(VerticesProcessor.scala:119)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1.apply(VerticesProcessor.scala:100)
	at org.apache.spark.sql.execution.MapElementsExec$$anonfun$7$$anonfun$apply$1.apply(objects.scala:237)
	at org.apache.spark.sql.execution.MapElementsExec$$anonfun$7$$anonfun$apply$1.apply(objects.scala:237)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
	at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:1074)
	at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:1089)
	at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1126)
	at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1130)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor.com$vesoft$nebula$tools$importer$processor$VerticesProcessor$$processEachPartition(VerticesProcessor.scala:69)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$process$2.apply(VerticesProcessor.scala:137)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$process$2.apply(VerticesProcessor.scala:137)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
20/12/18 17:42:29 INFO TaskSetManager: Starting task 1.0 in stage 3.0 (TID 4, localhost, executor driver, partition 1, ANY, 7767 bytes)
20/12/18 17:42:29 INFO Executor: Running task 1.0 in stage 3.0 (TID 4)
20/12/18 17:42:29 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks
20/12/18 17:42:29 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
20/12/18 17:42:29 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 3, localhost, executor driver): java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
	at org.apache.spark.sql.Row$class.getString(Row.scala:257)
	at org.apache.spark.sql.catalyst.expressions.GenericRow.getString(rows.scala:166)
	at com.vesoft.nebula.tools.importer.processor.Processor$class.extraValue(Processor.scala:50)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor.extraValue(VerticesProcessor.scala:42)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1$$anonfun$3.apply(VerticesProcessor.scala:120)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1$$anonfun$3.apply(VerticesProcessor.scala:119)
	at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:683)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:682)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1.apply(VerticesProcessor.scala:119)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1.apply(VerticesProcessor.scala:100)
	at org.apache.spark.sql.execution.MapElementsExec$$anonfun$7$$anonfun$apply$1.apply(objects.scala:237)
	at org.apache.spark.sql.execution.MapElementsExec$$anonfun$7$$anonfun$apply$1.apply(objects.scala:237)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
	at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:1074)
	at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:1089)
	at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1126)
	at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1130)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor.com$vesoft$nebula$tools$importer$processor$VerticesProcessor$$processEachPartition(VerticesProcessor.scala:69)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$process$2.apply(VerticesProcessor.scala:137)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$process$2.apply(VerticesProcessor.scala:137)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

20/12/18 17:42:29 ERROR TaskSetManager: Task 0 in stage 3.0 failed 1 times; aborting job
20/12/18 17:42:30 INFO NebulaPool: Get connection to 192.168.10.188:3699
20/12/18 17:42:30 INFO GraphProvider: switch space user_relate
20/12/18 17:42:30 INFO NebulaGraphClientWriter: Connection to List(192.168.10.188:45500)
20/12/18 17:42:30 INFO TaskSchedulerImpl: Cancelling stage 3
20/12/18 17:42:30 INFO TaskSchedulerImpl: Killing all running tasks in stage 3: Stage cancelled
20/12/18 17:42:30 INFO Executor: Executor is trying to kill task 1.0 in stage 3.0 (TID 4), reason: Stage cancelled
20/12/18 17:42:30 INFO TaskSchedulerImpl: Stage 3 was cancelled
20/12/18 17:42:30 INFO DAGScheduler: ResultStage 3 (foreachPartition at VerticesProcessor.scala:137) failed in 0.222 s due to Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 3, localhost, executor driver): java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
	at org.apache.spark.sql.Row$class.getString(Row.scala:257)
	at org.apache.spark.sql.catalyst.expressions.GenericRow.getString(rows.scala:166)
	at com.vesoft.nebula.tools.importer.processor.Processor$class.extraValue(Processor.scala:50)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor.extraValue(VerticesProcessor.scala:42)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1$$anonfun$3.apply(VerticesProcessor.scala:120)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1$$anonfun$3.apply(VerticesProcessor.scala:119)
	at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:683)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:682)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1.apply(VerticesProcessor.scala:119)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$1.apply(VerticesProcessor.scala:100)
	at org.apache.spark.sql.execution.MapElementsExec$$anonfun$7$$anonfun$apply$1.apply(objects.scala:237)
	at org.apache.spark.sql.execution.MapElementsExec$$anonfun$7$$anonfun$apply$1.apply(objects.scala:237)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
	at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:1074)
	at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:1089)
	at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1126)
	at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1130)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor.com$vesoft$nebula$tools$importer$processor$VerticesProcessor$$processEachPartition(VerticesProcessor.scala:69)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$process$2.apply(VerticesProcessor.scala:137)
	at com.vesoft.nebula.tools.importer.processor.VerticesProcessor$$anonfun$process$2.apply(VerticesProcessor.scala:137)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
20/12/18 17:42:30 INFO Executor: Executor killed task 1.0 in stage 3.0 (TID 4), reason: Stage cancelled

[/quote]
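A note on the quoted thread: java.lang.Integer cannot be cast to java.lang.String suggests that Spark's CSV reader inferred column _c0 (bare integers 1 through 50000) as an integer type, while Exchange calls Row.getString on it because the ve tag declares vid as string. A minimal sketch of one workaround, assuming the IDs really are plain integers, is to align the tag property with the inferred type:

CREATE TAG ve(vid int64);

Quoting the values in the CSV so Spark loads them as strings should also work; both are sketches, not verified against Exchange 2.0's CSV reader options. The PostgreSQL question below hits the same family of type mismatch, just on the database side.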

jdbc.config configuration file:
tags: [
    {
      name: test_Company
      type: {
        source: jdbc
        sink: client
      }
      url: "jdbc:postgresql://xxxxxx:1921/dm"
      driver: "org.postgresql.Driver"
      user: "xxxx"
      password: "JDYA_ldhe_9381"
      sentence: "select id,company_name,company_code,uni_code,company_type,cast(update_time as string) as update_time,legal_rep from dm.dws.dw_lget_company_info_nebula_v_base_20251013_nebula_v_base_20251013_03"
      fields: [id, company_name,company_code,uni_code,company_type,update_time,legal_rep]
      nebula.fields: [id, company_name,company_code,uni_code,company_type,update_time,legal_rep]
      vertex: id
      batch: 2000
      partition: 60
    }
  ]
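
(The suspect here is cast(update_time as string) inside sentence: that cast target is Spark SQL syntax, but Exchange hands the query to PostgreSQL as-is. A corrected query is sketched after the error log below.)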

Error:

25/10/14 08:32:45 INFO Exchange$: >>>>> Processing Tag test_Company
25/10/14 08:32:45 INFO Exchange$: >>>>> field keys: id, company_name, company_code, uni_code, company_type, update_time, legal_rep
25/10/14 08:32:45 INFO Exchange$: >>>>> nebula keys: id, company_name, company_code, uni_code, company_type, update_time, legal_rep
25/10/14 08:32:45 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/data/nebula-exchange-master/nebula-exchange_spark_2.4/target/spark-warehouse').
25/10/14 08:32:45 INFO SharedState: Warehouse path is 'file:/data/nebula-exchange-master/nebula-exchange_spark_2.4/target/spark-warehouse'.
25/10/14 08:32:45 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
Exception in thread "main" org.postgresql.util.PSQLException: ERROR: type "string" does not exist
  Position: 94
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
        at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
        at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
        at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:168)
        at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:116)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
        at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
        at com.vesoft.nebula.exchange.reader.JdbcReader.read(ServerBaseReader.scala:429)
        at com.vesoft.nebula.exchange.Exchange$.com$vesoft$nebula$exchange$Exchange$$createDataSource(Exchange.scala:359)
        at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:130)
        at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:117)
        at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:143)
        at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:136)
        at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:972)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
        at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
        at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:969)
        at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
        at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
        at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
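
The trace shows the failure happens while Spark resolves the JDBC schema (JDBCRDD$.resolveTable), meaning the sentence is executed verbatim by PostgreSQL, and PostgreSQL has no type named string. Position 94 appears to point exactly at the word string: Spark probes the schema with SELECT * FROM (<sentence>) WHERE 1=0, and that 15-character prefix plus the cast's offset within the sentence lands on 94. A minimal sketch of a fix, assuming text is an acceptable target type for update_time, is to cast to a native PostgreSQL type in the sentence:

select id, company_name, company_code, uni_code, company_type,
       cast(update_time as text) as update_time,  -- text/varchar exist in PostgreSQL; string does not
       legal_rep
  from dm.dws.dw_lget_company_info_nebula_v_base_20251013_nebula_v_base_20251013_03

update_time::varchar would work equally well. This is inferred from the error position and PostgreSQL's type system, not from Exchange documentation.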


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.