Exchange errors out when importing basketballplayer

  • Nebula version: 2.6.0
  • Deployment: single machine
  • Installation method: Docker Compose
  • Production environment: N
  • Exchange version: 2.6 (nebula-exchange-2.6.3.jar)
  • Spark version: 2.4.8
  • Scala version: 2.11.12
  • Hardware
    • 4C6G (4 cores, 6 GB RAM)
  • Problem description
    Importing the basketballplayer dataset with Exchange fails with an error
  • Spark error log
[wjc@localhost bin]$ ./spark-submit --master local --class com.vesoft.nebula.exchange.Exchange ~/nebula-exchange-2.6/nebula-exchange/target/nebula-exchange-2.6.3.jar -c ~/nebula-exchange-2.6/nebula-exchange/target/classes/csv_application.conf 
22/03/16 11:18:20 WARN Utils: Your hostname, localhost resolves to a loopback address: 127.0.0.1; using 172.20.10.14 instead (on interface ens33)
22/03/16 11:18:20 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/03/16 11:18:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (com.vesoft.nebula.exchange.config.Configs$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/03/16 11:18:21 INFO SparkContext: Running Spark version 2.4.8
22/03/16 11:18:21 INFO SparkContext: Submitted application: com.vesoft.nebula.exchange.Exchange
22/03/16 11:18:21 INFO SecurityManager: Changing view acls to: wjc
22/03/16 11:18:21 INFO SecurityManager: Changing modify acls to: wjc
22/03/16 11:18:21 INFO SecurityManager: Changing view acls groups to: 
22/03/16 11:18:21 INFO SecurityManager: Changing modify acls groups to: 
22/03/16 11:18:21 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(wjc); groups with view permissions: Set(); users  with modify permissions: Set(wjc); groups with modify permissions: Set()
22/03/16 11:18:22 INFO Utils: Successfully started service 'sparkDriver' on port 46097.
22/03/16 11:18:22 INFO SparkEnv: Registering MapOutputTracker
22/03/16 11:18:22 INFO SparkEnv: Registering BlockManagerMaster
22/03/16 11:18:22 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/03/16 11:18:22 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/03/16 11:18:22 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3250b5a2-226c-486b-b2fd-e65bdcb1e60e
22/03/16 11:18:22 INFO MemoryStore: MemoryStore started with capacity 366.1 MB
22/03/16 11:18:22 INFO SparkEnv: Registering OutputCommitCoordinator
22/03/16 11:18:22 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/03/16 11:18:22 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.20.10.14:4040
22/03/16 11:18:22 INFO SparkContext: Added JAR file:/home/wjc/nebula-exchange-2.6/nebula-exchange/target/nebula-exchange-2.6.3.jar at spark://172.20.10.14:46097/jars/nebula-exchange-2.6.3.jar with timestamp 1647400702966
22/03/16 11:18:23 INFO Executor: Starting executor ID driver on host localhost
22/03/16 11:18:23 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33021.
22/03/16 11:18:23 INFO NettyBlockTransferService: Server created on 172.20.10.14:33021
22/03/16 11:18:23 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/03/16 11:18:23 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 172.20.10.14, 33021, None)
22/03/16 11:18:23 INFO BlockManagerMasterEndpoint: Registering block manager 172.20.10.14:33021 with 366.1 MB RAM, BlockManagerId(driver, 172.20.10.14, 33021, None)
22/03/16 11:18:23 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 172.20.10.14, 33021, None)
22/03/16 11:18:23 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 172.20.10.14, 33021, None)
22/03/16 11:18:23 INFO Exchange$: Processing Tag player
22/03/16 11:18:23 INFO Exchange$: field keys: _c1, _c2
22/03/16 11:18:23 INFO Exchange$: nebula keys: age, name
22/03/16 11:18:23 INFO Exchange$: Loading CSV files from file:///home/wjc/daoshu/dataset/vertex_player.csv
22/03/16 11:18:23 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/wjc/spark-2.4.8-bin-hadoop2.6/bin/spark-warehouse').
22/03/16 11:18:23 INFO SharedState: Warehouse path is 'file:/home/wjc/spark-2.4.8-bin-hadoop2.6/bin/spark-warehouse'.
22/03/16 11:18:24 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
22/03/16 11:18:24 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
22/03/16 11:18:24 INFO InMemoryFileIndex: It took 34 ms to list leaf files for 1 paths.
22/03/16 11:18:24 INFO InMemoryFileIndex: It took 1 ms to list leaf files for 1 paths.
22/03/16 11:18:27 INFO FileSourceStrategy: Pruning directories with: 
22/03/16 11:18:27 INFO FileSourceStrategy: Post-Scan Filters: (length(trim(value#0, None)) > 0)
22/03/16 11:18:27 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
22/03/16 11:18:27 INFO FileSourceScanExec: Pushed Filters: 
22/03/16 11:18:27 INFO CodeGenerator: Code generated in 262.076914 ms
22/03/16 11:18:28 INFO CodeGenerator: Code generated in 21.848622 ms
22/03/16 11:18:28 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 200.3 KB, free 366.0 MB)
22/03/16 11:18:28 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 21.0 KB, free 365.9 MB)
22/03/16 11:18:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.20.10.14:33021 (size: 21.0 KB, free: 366.1 MB)
22/03/16 11:18:28 INFO SparkContext: Created broadcast 0 from csv at FileBaseReader.scala:86
22/03/16 11:18:28 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4195699 bytes, open cost is considered as scanning 4194304 bytes.
22/03/16 11:18:28 INFO SparkContext: Starting job: csv at FileBaseReader.scala:86
22/03/16 11:18:28 INFO DAGScheduler: Got job 0 (csv at FileBaseReader.scala:86) with 1 output partitions
22/03/16 11:18:28 INFO DAGScheduler: Final stage: ResultStage 0 (csv at FileBaseReader.scala:86)
22/03/16 11:18:28 INFO DAGScheduler: Parents of final stage: List()
22/03/16 11:18:28 INFO DAGScheduler: Missing parents: List()
22/03/16 11:18:28 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at csv at FileBaseReader.scala:86), which has no missing parents
22/03/16 11:18:29 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 8.9 KB, free 365.9 MB)
22/03/16 11:18:29 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.6 KB, free 365.9 MB)
22/03/16 11:18:29 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.20.10.14:33021 (size: 4.6 KB, free: 366.1 MB)
22/03/16 11:18:29 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1184
22/03/16 11:18:29 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at csv at FileBaseReader.scala:86) (first 15 tasks are for partitions Vector(0))
22/03/16 11:18:29 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
22/03/16 11:18:29 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8265 bytes)
22/03/16 11:18:29 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
22/03/16 11:18:29 INFO Executor: Fetching spark://172.20.10.14:46097/jars/nebula-exchange-2.6.3.jar with timestamp 1647400702966
22/03/16 11:18:29 INFO TransportClientFactory: Successfully created connection to /172.20.10.14:46097 after 43 ms (0 ms spent in bootstraps)
22/03/16 11:18:29 INFO Utils: Fetching spark://172.20.10.14:46097/jars/nebula-exchange-2.6.3.jar to /tmp/spark-ce0772b1-f41c-4aa0-acea-07a42345bfaa/userFiles-469bd7eb-e976-4904-bc17-73aee2d96a75/fetchFileTemp6355599499474469828.tmp
22/03/16 11:18:31 INFO Executor: Adding file:/tmp/spark-ce0772b1-f41c-4aa0-acea-07a42345bfaa/userFiles-469bd7eb-e976-4904-bc17-73aee2d96a75/nebula-exchange-2.6.3.jar to class loader
22/03/16 11:18:31 INFO FileScanRDD: Reading File path: file:///home/wjc/daoshu/dataset/vertex_player.csv, range: 0-1395, partition values: [empty row]
22/03/16 11:18:31 INFO CodeGenerator: Code generated in 17.728156 ms
22/03/16 11:18:31 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1259 bytes result sent to driver
22/03/16 11:18:31 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2375 ms on localhost (executor driver) (1/1)
22/03/16 11:18:31 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
22/03/16 11:18:31 INFO DAGScheduler: ResultStage 0 (csv at FileBaseReader.scala:86) finished in 2.560 s
22/03/16 11:18:31 INFO DAGScheduler: Job 0 finished: csv at FileBaseReader.scala:86, took 2.675931 s
22/03/16 11:18:31 INFO FileSourceStrategy: Pruning directories with: 
22/03/16 11:18:31 INFO FileSourceStrategy: Post-Scan Filters: 
22/03/16 11:18:31 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
22/03/16 11:18:31 INFO FileSourceScanExec: Pushed Filters: 
22/03/16 11:18:31 INFO CodeGenerator: Code generated in 16.122899 ms
22/03/16 11:18:31 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 200.3 KB, free 365.7 MB)
22/03/16 11:18:31 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 21.0 KB, free 365.7 MB)
22/03/16 11:18:31 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.20.10.14:33021 (size: 21.0 KB, free: 366.1 MB)
22/03/16 11:18:31 INFO SparkContext: Created broadcast 2 from csv at FileBaseReader.scala:86
22/03/16 11:18:31 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4195699 bytes, open cost is considered as scanning 4194304 bytes.
Exception in thread "main" com.facebook.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
	at com.facebook.thrift.transport.TSocket.open(TSocket.java:206)
	at com.vesoft.nebula.client.meta.MetaClient.getClient(MetaClient.java:145)
	at com.vesoft.nebula.client.meta.MetaClient.doConnect(MetaClient.java:124)
	at com.vesoft.nebula.client.meta.MetaClient.connect(MetaClient.java:113)
	at com.vesoft.nebula.exchange.MetaProvider.<init>(MetaProvider.scala:56)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor.process(VerticesProcessor.scala:110)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:150)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:126)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:126)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:855)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:930)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:939)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at com.facebook.thrift.transport.TSocket.open(TSocket.java:201)
	... 22 more
22/03/16 11:18:31 INFO SparkContext: Invoking stop() from shutdown hook
22/03/16 11:18:31 INFO SparkUI: Stopped Spark web UI at http://172.20.10.14:4040
22/03/16 11:18:31 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/03/16 11:18:31 INFO MemoryStore: MemoryStore cleared
22/03/16 11:18:31 INFO BlockManager: BlockManager stopped
22/03/16 11:18:31 INFO BlockManagerMaster: BlockManagerMaster stopped
22/03/16 11:18:31 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/03/16 11:18:31 INFO SparkContext: Successfully stopped SparkContext
22/03/16 11:18:31 INFO ShutdownHookManager: Shutdown hook called
22/03/16 11:18:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-f664c208-638d-4554-831a-273e42352a98
22/03/16 11:18:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-ce0772b1-f41c-4aa0-acea-07a42345bfaa
  • application.conf contents
{

  spark: {
    app: {
      name: Nebula Exchange 2.0
    }

    master:local

    driver: {
      cores: 1
      maxResultSize: 128M
    }

    executor: {
        memory:1513M
    }

    cores:{
      max: 1
    }
  }

  nebula: {
    address:{
      graph:["172.18.0.9:49177","172.18.0.8:9669","172.18.0.10:49178"]
      meta:["172.18.0.3:49160","172.18.0.2:49159","172.18.0.4:49161"]
    }
    user: root
    pswd: nebula
    space: basketballplayer

    connection {

      timeout: 30000
    }

    error: {

      max: 1

      output: /tmp/errors
    }

    rate: {

      limit: 1024

      timeout: 1000
    }
  }

  tags: [

    {

      name: player
      type: {

        source: csv

        sink: client
      }

      path: "file:///home/wjc/daoshu/dataset/vertex_player.csv"

      fields: [_c1, _c2]

      nebula.fields: [age, name]

      vertex: {
        field:_c0

      }

      separator: ","

      header: false

      batch: 128

      partition: 32
    }

    {

      name: team
      type: {

        source: csv

        sink: client
      }

      path: "file:///home/wjc/daoshu/dataset/vertex_team.csv"

      fields: [_c1]

      nebula.fields: [name]

      vertex: {
        field:_c0

      }

      separator: ","

      header: false

      batch: 128

      partition: 32
    }

  ]

  edges: [

    {

      name: follow
      type: {

        source: csv

        sink: client
      }

      path: "file:///home/wjc/daoshu/dataset/edge_follow.csv"

      fields: [_c2]

      nebula.fields: [degree]

      source: {
        field: _c0
      }
      target: {
        field: _c1
      }

      separator: ","

      header: false

      batch: 128

      partition: 32
    }

    {

      name: serve
      type: {

        source: csv

        sink: client
      }

      path: "file:///home/wjc/daoshu/dataset/edge_serve.csv"

      fields: [_c2,_c3]

      nebula.fields: [start_year, end_year]

      source: {
        field: _c0
      }
      target: {
        field: _c1
      }

      separator: ","

      header: false

      batch: 128

      partition: 32
    }

  ]

}

Are the graphd and metad IPs and port numbers configured correctly here?

For these ip:port entries, am I supposed to write each container's ip:port, or the local machine's IP and port numbers?
I tried the local machine's 172.20.10.14:9669 and 9559 earlier, but both gave errors,
so I'm not sure which one to fill in.

Ah, no. These have to match your Nebula deployment's configuration: whatever IP addresses and port numbers your Nebula deployment uses, that's what goes here, because Exchange reads data from and writes data to those addresses.
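For a single-machine deployment, that usually means something like this sketch (<host-ip> is a placeholder for whatever address the machine running spark-submit can actually reach your services at; the ports depend on your deployment):

  nebula: {
    address: {
      # both must be reachable from the machine where spark-submit runs
      graph: ["<host-ip>:9669"]
      meta: ["<host-ip>:9559"]
    }
  }

The point is reachability from the Spark side, not from inside the Nebula deployment itself.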

Whatever I put there, I get the exact same error anyway :neutral_face:

That error means the configuration doesn't match. - -. If you take a key to a door to fetch or store something but you're at the wrong door, of course it won't open. How is your Nebula configured?

I connect to Nebula from both Console and Studio using 172.20.10.14:9669, yet Exchange fails.

Check which IPs and port numbers Exchange is using. - -. As I said above, they have to stay consistent with the Nebula configuration. And for a single-machine deployment... why are there 3 IPs and 3 port numbers…

:joy: OK, I changed it back, and rerunning it gives exactly the same error

[wjc@localhost bin]$ cat ~/nebula-exchange-2.6/nebula-exchange/target/classes/csv_application.conf 
{
  # Spark relation config
  spark: {
    app: {
      name: Nebula Exchange 2.0
    }

    master:local

    driver: {
      cores: 1
      maxResultSize: 128M
    }

    executor: {
        memory:1513M
    }

    cores:{
      max: 1
    }
  }

  # if the hive is hive-on-spark with derby mode, you can ignore this hive configure
  # get the config values from file $HIVE_HOME/conf/hive-site.xml or hive-default.xml

  #  hive: {
  #    warehouse: "hdfs://NAMENODE_IP:9000/apps/svr/hive-xxx/warehouse/"
  #    connectionURL: "jdbc:mysql://your_ip:3306/hive_spark?characterEncoding=UTF-8"
  #    connectionDriverName: "com.mysql.jdbc.Driver"
  #    connectionUserName: "user"
  #    connectionPassword: "password"
  #  }


  # Nebula Graph relation config
  nebula: {
    address:{
      graph:["172.20.10.14:9669"]
      meta:["172.20.10.14:9559"]
    }
    user: root
    pswd: nebula
    space: basketballplayer
[wjc@localhost bin]$ ./spark-submit --master local --class com.vesoft.nebula.exchange.Exchange ~/nebula-exchange-2.6/nebula-exchange/target/nebula-exchange-2.6.3.jar -c ~/nebula-exchange-2.6/nebula-exchange/target/classes/csv_application.conf 
22/03/16 14:41:52 WARN Utils: Your hostname, localhost resolves to a loopback address: 127.0.0.1; using 172.20.10.14 instead (on interface ens33)
22/03/16 14:41:52 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/03/16 14:41:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (com.vesoft.nebula.exchange.config.Configs$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/03/16 14:41:53 INFO SparkContext: Running Spark version 2.4.8
22/03/16 14:41:53 INFO SparkContext: Submitted application: com.vesoft.nebula.exchange.Exchange
22/03/16 14:41:53 INFO SecurityManager: Changing view acls to: wjc
22/03/16 14:41:53 INFO SecurityManager: Changing modify acls to: wjc
22/03/16 14:41:53 INFO SecurityManager: Changing view acls groups to: 
22/03/16 14:41:53 INFO SecurityManager: Changing modify acls groups to: 
22/03/16 14:41:53 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(wjc); groups with view permissions: Set(); users  with modify permissions: Set(wjc); groups with modify permissions: Set()
22/03/16 14:41:54 INFO Utils: Successfully started service 'sparkDriver' on port 41472.
22/03/16 14:41:54 INFO SparkEnv: Registering MapOutputTracker
22/03/16 14:41:54 INFO SparkEnv: Registering BlockManagerMaster
22/03/16 14:41:54 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/03/16 14:41:54 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/03/16 14:41:54 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-bf766ad0-33c9-464f-80dc-fdcd9aabe8db
22/03/16 14:41:54 INFO MemoryStore: MemoryStore started with capacity 366.1 MB
22/03/16 14:41:54 INFO SparkEnv: Registering OutputCommitCoordinator
22/03/16 14:41:54 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/03/16 14:41:54 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.20.10.14:4040
22/03/16 14:41:54 INFO SparkContext: Added JAR file:/home/wjc/nebula-exchange-2.6/nebula-exchange/target/nebula-exchange-2.6.3.jar at spark://172.20.10.14:41472/jars/nebula-exchange-2.6.3.jar with timestamp 1647412914778
22/03/16 14:41:54 INFO Executor: Starting executor ID driver on host localhost
22/03/16 14:41:54 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42341.
22/03/16 14:41:54 INFO NettyBlockTransferService: Server created on 172.20.10.14:42341
22/03/16 14:41:54 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/03/16 14:41:55 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 172.20.10.14, 42341, None)
22/03/16 14:41:55 INFO BlockManagerMasterEndpoint: Registering block manager 172.20.10.14:42341 with 366.1 MB RAM, BlockManagerId(driver, 172.20.10.14, 42341, None)
22/03/16 14:41:55 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 172.20.10.14, 42341, None)
22/03/16 14:41:55 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 172.20.10.14, 42341, None)
22/03/16 14:41:55 INFO Exchange$: Processing Tag player
22/03/16 14:41:55 INFO Exchange$: field keys: _c1, _c2
22/03/16 14:41:55 INFO Exchange$: nebula keys: age, name
22/03/16 14:41:55 INFO Exchange$: Loading CSV files from file:///home/wjc/daoshu/dataset/vertex_player.csv
22/03/16 14:41:55 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/wjc/spark-2.4.8-bin-hadoop2.6/bin/spark-warehouse').
22/03/16 14:41:55 INFO SharedState: Warehouse path is 'file:/home/wjc/spark-2.4.8-bin-hadoop2.6/bin/spark-warehouse'.
22/03/16 14:41:56 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
22/03/16 14:41:56 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
22/03/16 14:41:56 INFO InMemoryFileIndex: It took 35 ms to list leaf files for 1 paths.
22/03/16 14:41:56 INFO InMemoryFileIndex: It took 2 ms to list leaf files for 1 paths.
22/03/16 14:41:59 INFO FileSourceStrategy: Pruning directories with: 
22/03/16 14:41:59 INFO FileSourceStrategy: Post-Scan Filters: (length(trim(value#0, None)) > 0)
22/03/16 14:41:59 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
22/03/16 14:41:59 INFO FileSourceScanExec: Pushed Filters: 
22/03/16 14:41:59 INFO CodeGenerator: Code generated in 250.724245 ms
22/03/16 14:42:00 INFO CodeGenerator: Code generated in 28.390387 ms
22/03/16 14:42:00 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 200.3 KB, free 366.0 MB)
22/03/16 14:42:00 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 21.1 KB, free 365.9 MB)
22/03/16 14:42:00 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.20.10.14:42341 (size: 21.1 KB, free: 366.1 MB)
22/03/16 14:42:00 INFO SparkContext: Created broadcast 0 from csv at FileBaseReader.scala:86
22/03/16 14:42:00 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4195699 bytes, open cost is considered as scanning 4194304 bytes.
22/03/16 14:42:00 INFO SparkContext: Starting job: csv at FileBaseReader.scala:86
22/03/16 14:42:00 INFO DAGScheduler: Got job 0 (csv at FileBaseReader.scala:86) with 1 output partitions
22/03/16 14:42:00 INFO DAGScheduler: Final stage: ResultStage 0 (csv at FileBaseReader.scala:86)
22/03/16 14:42:00 INFO DAGScheduler: Parents of final stage: List()
22/03/16 14:42:00 INFO DAGScheduler: Missing parents: List()
22/03/16 14:42:00 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at csv at FileBaseReader.scala:86), which has no missing parents
22/03/16 14:42:01 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 8.9 KB, free 365.9 MB)
22/03/16 14:42:01 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.6 KB, free 365.9 MB)
22/03/16 14:42:01 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.20.10.14:42341 (size: 4.6 KB, free: 366.1 MB)
22/03/16 14:42:01 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1184
22/03/16 14:42:01 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at csv at FileBaseReader.scala:86) (first 15 tasks are for partitions Vector(0))
22/03/16 14:42:01 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
22/03/16 14:42:01 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8265 bytes)
22/03/16 14:42:01 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
22/03/16 14:42:01 INFO Executor: Fetching spark://172.20.10.14:41472/jars/nebula-exchange-2.6.3.jar with timestamp 1647412914778
22/03/16 14:42:01 INFO TransportClientFactory: Successfully created connection to /172.20.10.14:41472 after 59 ms (0 ms spent in bootstraps)
22/03/16 14:42:01 INFO Utils: Fetching spark://172.20.10.14:41472/jars/nebula-exchange-2.6.3.jar to /tmp/spark-b177f202-528a-4d7b-b3dc-6ca81dda8526/userFiles-29304ad3-eeca-4da2-9583-7067f25406bf/fetchFileTemp8932770341953309027.tmp
22/03/16 14:42:02 INFO Executor: Adding file:/tmp/spark-b177f202-528a-4d7b-b3dc-6ca81dda8526/userFiles-29304ad3-eeca-4da2-9583-7067f25406bf/nebula-exchange-2.6.3.jar to class loader
22/03/16 14:42:02 INFO FileScanRDD: Reading File path: file:///home/wjc/daoshu/dataset/vertex_player.csv, range: 0-1395, partition values: [empty row]
22/03/16 14:42:02 INFO CodeGenerator: Code generated in 20.423799 ms
22/03/16 14:42:02 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1259 bytes result sent to driver
22/03/16 14:42:02 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1576 ms on localhost (executor driver) (1/1)
22/03/16 14:42:02 INFO DAGScheduler: ResultStage 0 (csv at FileBaseReader.scala:86) finished in 1.784 s
22/03/16 14:42:02 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
22/03/16 14:42:02 INFO DAGScheduler: Job 0 finished: csv at FileBaseReader.scala:86, took 1.899759 s
22/03/16 14:42:02 INFO FileSourceStrategy: Pruning directories with: 
22/03/16 14:42:02 INFO FileSourceStrategy: Post-Scan Filters: 
22/03/16 14:42:02 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
22/03/16 14:42:02 INFO FileSourceScanExec: Pushed Filters: 
22/03/16 14:42:02 INFO CodeGenerator: Code generated in 13.389406 ms
22/03/16 14:42:02 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 200.3 KB, free 365.7 MB)
22/03/16 14:42:02 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 21.1 KB, free 365.7 MB)
22/03/16 14:42:02 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.20.10.14:42341 (size: 21.1 KB, free: 366.1 MB)
22/03/16 14:42:02 INFO SparkContext: Created broadcast 2 from csv at FileBaseReader.scala:86
22/03/16 14:42:02 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4195699 bytes, open cost is considered as scanning 4194304 bytes.
Exception in thread "main" com.facebook.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
	at com.facebook.thrift.transport.TSocket.open(TSocket.java:206)
	at com.vesoft.nebula.client.meta.MetaClient.getClient(MetaClient.java:145)
	at com.vesoft.nebula.client.meta.MetaClient.doConnect(MetaClient.java:124)
	at com.vesoft.nebula.client.meta.MetaClient.connect(MetaClient.java:113)
	at com.vesoft.nebula.exchange.MetaProvider.<init>(MetaProvider.scala:56)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor.process(VerticesProcessor.scala:110)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:150)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:126)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:126)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:855)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:930)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:939)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at com.facebook.thrift.transport.TSocket.open(TSocket.java:201)
	... 22 more
22/03/16 14:42:02 INFO SparkContext: Invoking stop() from shutdown hook
22/03/16 14:42:03 INFO SparkUI: Stopped Spark web UI at http://172.20.10.14:4040
22/03/16 14:42:03 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/03/16 14:42:03 INFO MemoryStore: MemoryStore cleared
22/03/16 14:42:03 INFO BlockManager: BlockManager stopped
22/03/16 14:42:03 INFO BlockManagerMaster: BlockManagerMaster stopped
22/03/16 14:42:03 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/03/16 14:42:03 INFO SparkContext: Successfully stopped SparkContext
22/03/16 14:42:03 INFO ShutdownHookManager: Shutdown hook called
22/03/16 14:42:03 INFO ShutdownHookManager: Deleting directory /tmp/spark-637e201f-c071-4eb4-aa51-4c513fb98fcb
22/03/16 14:42:03 INFO ShutdownHookManager: Deleting directory /tmp/spark-b177f202-528a-4d7b-b3dc-6ca81dda8526

Check whether the Spark cluster can reach the NebulaGraph cluster.
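Note where the stack trace fails: MetaClient.connect. Console and Studio only talk to graphd on 9669, while Exchange also connects to metad on 9559, so graphd being reachable proves nothing about the meta port. A plain TCP probe from the Spark host covers both (a minimal sketch, assuming the host IP and ports from the config above; not Nebula-specific):

import socket

# Probe the graph and meta ports from csv_application.conf.
# "Connection refused" here reproduces the TTransportException above.
for host, port in [("172.20.10.14", 9669), ("172.20.10.14", 9559)]:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((host, port))
        print(f"{host}:{port} reachable")
    except OSError as e:
        print(f"{host}:{port} NOT reachable: {e}")
    finally:
        s.close()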

[wjc@localhost bin]$ ./pyspark 
Python 3.6.15 (default, Mar 16 2022, 15:29:52) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
22/03/16 15:42:59 WARN Utils: Your hostname, localhost resolves to a loopback address: 127.0.0.1; using 172.20.10.14 instead (on interface ens33)
22/03/16 15:42:59 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/03/16 15:42:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.8
      /_/

Using Python version 3.6.15 (default, Mar 16 2022 15:29:52)
SparkSession available as 'spark'.
>>> from nebula2.gclient.net import ConnectionPool
>>> from nebula2.Config import Config
>>> config = Config()
>>> config.max_connection_pool_size = 10
>>> connection_pool = ConnectionPool()
>>> ok = connection_pool.init([('172.20.10.14', 9669)], config)
>>> session = connection_pool.get_session('root', 'nebula')
[2022-03-16 15:44:27,709] INFO     [ConnectionPool.py:176]:Get connection to ('172.20.10.14', 9669)
>>>

Does this count as connected?

Getting a session alone doesn't prove much about the Graph service; try executing a query statement?

[wjc@localhost bin]$ ./pyspark
Python 3.6.15 (default, Mar 16 2022, 15:29:52) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
22/03/16 16:02:49 WARN Utils: Your hostname, localhost resolves to a loopback address: 127.0.0.1; using 172.20.10.14 instead (on interface ens33)
22/03/16 16:02:49 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/03/16 16:02:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.8
      /_/

Using Python version 3.6.15 (default, Mar 16 2022 15:29:52)
SparkSession available as 'spark'.
>>> from nebula2.gclient.net import ConnectionPool
>>> from nebula2.Config import Config
>>> 
>>> # define a config
... config = Config()
>>> config.max_connection_pool_size = 10
>>> # init connection pool
... connection_pool = ConnectionPool()
>>> # if the given servers are ok, return true, else return false
... ok = connection_pool.init([('172.20.10.14', 9669)], config)
>>> 
>>> # option 1 control the connection release yourself
... # get session from the pool
... session = connection_pool.get_session('root', 'nebula')
[2022-03-16 16:03:24,928] INFO     [ConnectionPool.py:176]:Get connection to ('172.20.10.14', 9669)
>>> session.execute("show spaces")
ResultSet(keys: ['Name'], values: ["basketballplayer"])
>>>

Check whether the meta service port is open.

How do I check that?
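Two things are worth checking: that metad is listening inside its container, and that the port is actually published to the host. For the second part, docker port prints the host binding of a given container port, for example (container name taken from the session below; the output depends on your setup):

docker port nebula-docker-compose-260-metad1-1 9559

If that prints nothing, the port isn't published at all; if it prints 0.0.0.0 with some ephemeral port rather than 9559, then 172.20.10.14:9559 will still refuse connections even though metad itself is running.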

[root@fb607114d944 scripts]# ./nebula.service status metad
[INFO] nebula-metad(3ba41bd): Running as 1, Listening on 9559 
[root@fb607114d944 scripts]# exit
exit
[wjc@localhost nebula-docker-compose-2.6.0]$ docker exec -it nebula-docker-compose-260-metad1-1 bash
[root@17d8df8448cc nebula]# ./scripts/nebula.service status metad
[INFO] nebula-metad(3ba41bd): Running as 1, Listening on 9559 
[root@17d8df8448cc nebula]# exit
exit
[wjc@localhost nebula-docker-compose-2.6.0]$ docker exec -it nebula-docker-compose-260-metad2-1 bash
[root@cdfc15134a96 nebula]# ./scripts/nebula.service status metad
[INFO] nebula-metad(3ba41bd): Running as 1, Listening on 9559 
[root@cdfc15134a96 nebula]#

Do the Docker ports need to be configured before they can be reached from outside?

I haven't used Docker before, so I'm not sure about this.

Paste your Docker Compose configuration file.

version: '3.4'
services:
  metad0:
    image: vesoft/nebula-metad:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --local_ip=metad0
      - --ws_ip=metad0
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://metad0:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9559
      - 19559
      - 19560
    volumes:
      - ./data/meta0:/data/meta
      - ./logs/meta0:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  metad1:
    image: vesoft/nebula-metad:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --local_ip=metad1
      - --ws_ip=metad1
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://metad1:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9559
      - 19559
      - 19560
    volumes:
      - ./data/meta1:/data/meta
      - ./logs/meta1:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  metad2:
    image: vesoft/nebula-metad:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --local_ip=metad2
      - --ws_ip=metad2
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://metad2:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9559
      - 19559
      - 19560
    volumes:
      - ./data/meta2:/data/meta
      - ./logs/meta2:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  storaged0:
    image: vesoft/nebula-storaged:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --local_ip=storaged0
      - --ws_ip=storaged0
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://storaged0:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9779
      - 19779
      - 19780
    volumes:
      - ./data/storage0:/data/storage
      - ./logs/storage0:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  storaged1:
    image: vesoft/nebula-storaged:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --local_ip=storaged1
      - --ws_ip=storaged1
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://storaged1:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9779
      - 19779
      - 19780
    volumes:
      - ./data/storage1:/data/storage
      - ./logs/storage1:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  storaged2:
    image: vesoft/nebula-storaged:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --local_ip=storaged2
      - --ws_ip=storaged2
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://storaged2:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9779
      - 19779
      - 19780
    volumes:
      - ./data/storage2:/data/storage
      - ./logs/storage2:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  graphd:
    image: vesoft/nebula-graphd:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --port=9669
      - --local_ip=graphd
      - --ws_ip=graphd
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - storaged0
      - storaged1
      - storaged2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://graphd:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9669:9669"
      - 19669
      - 19670
    volumes:
      - ./logs/graph:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  graphd1:
    image: vesoft/nebula-graphd:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --port=9669
      - --local_ip=graphd1
      - --ws_ip=graphd1
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - storaged0
      - storaged1
      - storaged2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://graphd1:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9669
      - 19669
      - 19670
    volumes:
      - ./logs/graph1:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

  graphd2:
    image: vesoft/nebula-graphd:v2.6.0
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
      - --port=9669
      - --local_ip=graphd2
      - --ws_ip=graphd2
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - storaged0
      - storaged1
      - storaged2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://graphd2:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9669
      - 19669
      - 19670
    volumes:
      - ./logs/graph2:/logs
    networks:
      - nebula-net
    restart: on-failure
    cap_add:
      - SYS_PTRACE

networks:
  nebula-net:

It's the stock file downloaded with nebula-docker-compose, no changes.
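That's the problem, then: in the stock compose file only the first graphd publishes a fixed host port ("9669:9669"), while every metad lists bare container ports (ports: - 9559), which Docker maps to random ephemeral host ports. So 172.20.10.14:9669 works (hence Console, Studio, and the pyspark session above), but 172.20.10.14:9559 is refused, which is exactly the TTransportException in the Exchange log. A sketch of a fix is to publish one metad's port to the host (only one service can claim host port 9559):

  metad0:
    ...
    ports:
      - "9559:9559"
      - 19559
      - 19560

then bring the cluster up again with docker-compose up -d and point the meta address at 172.20.10.14:9559. One caveat: the services register themselves under their in-network names (metad0, storaged0, ...), so a client outside the Docker network may still be handed addresses it cannot reach; if the import still fails after this change, running Exchange on the same Docker network, or deploying Nebula with host-reachable addresses, is the more robust route.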