nebula-importer error: write: broken pipe

NebulaGraph basic information:

  • Nebula version: 3.1.0
  • Deployment: single node
  • Installation method: RPM
  • Production environment: No
  • Hardware
    • 300 GB disk
    • 10-core CPU, 64 GB RAM

nebula-importer basic information:

  • nebula-importer version: 3.1.0
  • Installation method: tried both Docker and the binary

Description

  • Tried both the Docker deployment and running the binary directly; both produced the write: broken pipe error.
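Roughly how both were invoked (a sketch; the --config flag follows the nebula-importer docs, the image tag and paths are abbreviated):

./nebula-importer --config ./test.yaml

docker run --rm -ti --network=host -v ${PWD}:/root vesoft/nebula-importer:v3.1.0 --config /root/test.yaml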

CSV data to import

  • vertices.csv:
1
2
3
4
5
6
7
8
9
10
  • edges.csv:
1 2
3 6
7 10

Pre-created graph space and schema

// Create the graph space for testing: test
CREATE SPACE IF NOT EXISTS test (partition_num = 20, replica_factor = 1, vid_type = INT64);
SHOW SPACES;

// Use test
USE test;

// Create the schema: tag and edge type
CREATE TAG IF NOT EXISTS person();
CREATE EDGE IF NOT EXISTS follow();

YAML file

version: v3
description: test
removeTempFiles: false

clientSettings:
  retry: 3
  concurrency: 10
  channelBufferSize: 128
  space: test
  connection:
    user: root
    password: nebula
    address: 192.168.8.80:9669

logPath: ./err/test.log


files:
  - path: ./vertices.csv
    failDataPath: ./err/verticeserr.csv
    batchSize: 128
    limit: 100
    inOrder: false
    type: csv
    csv:
      withHeader: false
      withLabel: false
      delimiter: " "
    schema:
      type: vertex
      vertex:
        vid:
           index: 0
           type: int
        tags:
          - name: person

  - path: ./edges.csv
    failDataPath: ./err/edgeserr.csv
    batchSize: 128
    limit: 100
    inOrder: false
    type: csv
    csv:
      withHeader: false
      withLabel: false
      delimiter: " "
    schema:
      type: edge
      edge:
        name: follow
        srcVID:
          type: int
          index: 0
        dstVID:
          type: int
          index: 1

Error messages

2022/05/26 15:17:20 --- START OF NEBULA IMPORTER ---
2022/05/26 15:17:20 [INFO] clientmgr.go:31: Create 10 Nebula Graph clients
2022/05/26 15:17:20 [INFO] reader.go:49: The delimiter of /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv is U+0020 ' '
2022/05/26 15:17:20 [INFO] reader.go:49: The delimiter of /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv is U+0020 ' '
2022/05/26 15:17:20 [INFO] reader.go:68: Start to read file(0): /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv, schema: < :VID(int) >
2022/05/26 15:17:20 [INFO] reader.go:68: Start to read file(1): /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv, schema: < :SRC_VID(int),:DST_VID(int) >
2022/05/26 15:17:20 [INFO] reader.go:184: Total lines of file(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv) is: 3, error lines: 0
2022/05/26 15:17:20 [INFO] reader.go:184: Total lines of file(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv) is: 10, error lines: 0
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 2 fail to execute: INSERT EDGE `follow`() VALUES  3->6:() ;, Error: write tcp 192.168.8.80:44604->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 4 fail to execute: INSERT VERTEX `person`() VALUES  4: ();, Error: write tcp 192.168.8.80:44608->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 1 fail to execute: INSERT EDGE `follow`() VALUES  1->2:() ;, Error: write tcp 192.168.8.80:44602->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 6 fail to execute: INSERT VERTEX `person`() VALUES  6: ();, Error: write tcp 192.168.8.80:44612->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 8 fail to execute: INSERT VERTEX `person`() VALUES  8: ();, Error: write tcp 192.168.8.80:44616->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 5 fail to execute: INSERT VERTEX `person`() VALUES  5: ();, Error: write tcp 192.168.8.80:44610->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 3 fail to execute: INSERT EDGE `follow`() VALUES  7->10:() ;, Error: write tcp 192.168.8.80:44606->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 7 fail to execute: INSERT VERTEX `person`() VALUES  7: ();, Error: write tcp 192.168.8.80:44614->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [INFO] statsmgr.go:89: Done(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv): Time(4.37s), Finished(7), Failed(7), Read Failed(0), Latency AVG(0us), Batches Req AVG(0us), Rows AVG(1.60/s)
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 0 fail to execute: INSERT VERTEX `person`() VALUES  10: ();, Error: write tcp 192.168.8.80:44600->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:24 [ERROR] handler.go:63: Client 9 fail to execute: INSERT VERTEX `person`() VALUES  9: ();, Error: write tcp 192.168.8.80:44618->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:25 [INFO] statsmgr.go:89: Tick: Time(5.00s), Finished(10), Failed(10), Read Failed(0), Latency AVG(0us), Batches Req AVG(0us), Rows AVG(2.00/s)
2022/05/26 15:17:27 [ERROR] handler.go:63: Client 2 fail to execute: INSERT VERTEX `person`() VALUES  2: ();, Error: write tcp 192.168.8.80:44604->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [ERROR] handler.go:63: Client 3 fail to execute: INSERT VERTEX `person`() VALUES  3: ();, Error: write tcp 192.168.8.80:44606->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [ERROR] handler.go:63: Client 1 fail to execute: INSERT VERTEX `person`() VALUES  1: ();, Error: write tcp 192.168.8.80:44602->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [INFO] statsmgr.go:89: Done(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv): Time(7.37s), Finished(13), Failed(13), Read Failed(0), Latency AVG(0us), Batches Req AVG(0us), Rows AVG(1.76/s)
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44600->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44602->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44604->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44606->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44608->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44610->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44612->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44614->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44616->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44618->192.168.8.80:9669: write: broken pipe
2022/05/26 15:17:27 Total 13 lines fail to insert into nebula graph database
2022/05/26 15:17:28 --- END OF NEBULA IMPORTER ---

Has the space been created?

Yes, it has.

Your configuration file is incorrect. Please revise it against this example: https://docs.nebula-graph.com.cn/3.1.0/nebula-importer/config-without-header/#_3

If my vertices have no properties and only an ID, how should I write the props part?

Leave it empty, but you still need to include the props key, something like this:
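(A sketch for the edge section, using the names from your config; the key point is the trailing empty props key:)

schema:
  type: edge
  edge:
    name: follow
    srcVID:
      type: int
      index: 0
    dstVID:
      type: int
      index: 1
    props: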

Configure the vertex section the same way. Also double-check whether your delimiter is a space or a tab.

Hi, I added props everywhere in the YAML as you suggested:

version: v3
description: test
removeTempFiles: false
clientSettings:
  retry: 3
  concurrency: 10
  channelBufferSize: 128
  space: test
  connection:
    user: root
    password: nebula
    address: 192.168.8.80:9669
logPath: ./err/test.log
files:
  - path: ./vertices.csv
    failDataPath: ./err/verticeserr.csv
    batchSize: 128
    limit: 100
    inOrder: false
    type: csv
    csv:
      withHeader: false
      withLabel: false
      delimiter: ","
    schema:
      type: vertex
      vertex:
        # Vertex ID settings.
        vid:
           index: 0
           type: int
        tags:
          - name: person
            props:

  - path: ./edges.csv
    failDataPath: ./err/edgeserr.csv
    batchSize: 128
    limit: 100
    inOrder: false
    type: csv
    csv:
      withHeader: false
      withLabel: false
      delimiter: ","
    schema:
      type: edge
      edge:
        name: follow
        srcVID:
          type: int
          index: 0
        dstVID:
          type: int
          index: 1
        props:

I also changed the space delimiter in the CSV files to a comma, but after rerunning, the same problem still occurs:

2022/05/26 16:22:42 --- START OF NEBULA IMPORTER ---
2022/05/26 16:22:42 [INFO] clientmgr.go:31: Create 10 Nebula Graph clients
2022/05/26 16:22:42 [INFO] reader.go:49: The delimiter of /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv is U+002C ','
2022/05/26 16:22:42 [INFO] reader.go:68: Start to read file(0): /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv, schema: < :VID(int) >
2022/05/26 16:22:42 [INFO] reader.go:184: Total lines of file(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv) is: 10, error lines: 0
2022/05/26 16:22:42 [INFO] reader.go:49: The delimiter of /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv is U+002C ','
2022/05/26 16:22:42 [INFO] reader.go:68: Start to read file(1): /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv, schema: < :SRC_VID(int),:DST_VID(int) >
2022/05/26 16:22:42 [INFO] reader.go:184: Total lines of file(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv) is: 3, error lines: 0
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 4 fail to execute: INSERT VERTEX `person`() VALUES  4: ();, Error: write tcp 192.168.8.80:44994->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 3 fail to execute: INSERT VERTEX `person`() VALUES  3: ();, Error: write tcp 192.168.8.80:44992->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 9 fail to execute: INSERT VERTEX `person`() VALUES  9: ();, Error: write tcp 192.168.8.80:45004->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 8 fail to execute: INSERT VERTEX `person`() VALUES  8: ();, Error: write tcp 192.168.8.80:45002->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 5 fail to execute: INSERT VERTEX `person`() VALUES  5: ();, Error: write tcp 192.168.8.80:44996->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 1 fail to execute: INSERT VERTEX `person`() VALUES  1: ();, Error: write tcp 192.168.8.80:44988->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 6 fail to execute: INSERT VERTEX `person`() VALUES  6: ();, Error: write tcp 192.168.8.80:44998->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 2 fail to execute: INSERT VERTEX `person`() VALUES  2: ();, Error: write tcp 192.168.8.80:44990->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 7 fail to execute: INSERT VERTEX `person`() VALUES  7: ();, Error: write tcp 192.168.8.80:45000->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [ERROR] handler.go:63: Client 0 fail to execute: INSERT VERTEX `person`() VALUES  10: ();, Error: write tcp 192.168.8.80:44986->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:46 [INFO] statsmgr.go:89: Done(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv): Time(3.98s), Finished(10), Failed(10), Read Failed(0), Latency AVG(0us), Batches Req AVG(0us), Rows AVG(2.51/s)
2022/05/26 16:22:47 [INFO] statsmgr.go:89: Tick: Time(5.00s), Finished(10), Failed(10), Read Failed(0), Latency AVG(0us), Batches Req AVG(0us), Rows AVG(2.00/s)
2022/05/26 16:22:49 [ERROR] handler.go:63: Client 3 fail to execute: INSERT EDGE `follow`() VALUES  7->10:() ;, Error: write tcp 192.168.8.80:44992->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [ERROR] handler.go:63: Client 2 fail to execute: INSERT EDGE `follow`() VALUES  3->6:() ;, Error: write tcp 192.168.8.80:44990->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [ERROR] handler.go:63: Client 1 fail to execute: INSERT EDGE `follow`() VALUES  1->2:() ;, Error: write tcp 192.168.8.80:44988->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [INFO] statsmgr.go:89: Done(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv): Time(6.99s), Finished(13), Failed(13), Read Failed(0), Latency AVG(0us), Batches Req AVG(0us), Rows AVG(1.86/s)
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44986->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44988->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44990->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44992->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44994->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44996->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:44998->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:45000->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:45002->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:45004->192.168.8.80:9669: write: broken pipe
2022/05/26 16:22:49 Total 13 lines fail to insert into nebula graph database
2022/05/26 16:22:50 --- END OF NEBULA IMPORTER ---

Also, every time nebula-importer hits this error, the nebula-graphd service exits and has to be restarted:

[INFO] nebula-metad(33fd35e): Running as 12705, Listening on 9559
[INFO] nebula-graphd(33fd35e): Exited
[INFO] nebula-storaged(33fd35e): Running as 12813, Listening on 9779
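
(For reference, a restart sketch using the bundled service script, assuming the default RPM install path /usr/local/nebula:)

sudo /usr/local/nebula/scripts/nebula.service status all
sudo /usr/local/nebula/scripts/nebula.service restart graphd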

Please post the graphd logs so we can take a look.

  • nebula-graphd.ERROR
Log file created at: 2022/05/26 16:22:42
Running on machine: localhost.localdomain
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
E20220526 16:22:42.110040 12994 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20220526 16:22:42.110057 12993 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20220526 16:22:42.110394 13002 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20220526 16:22:42.110057 12996 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)

  • nebula-graphd.INFO
Log file created at: 2022/05/26 16:22:39
Running on machine: localhost.localdomain
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
I20220526 16:22:39.531677 12766 GraphDaemon.cpp:130] Starting Graph HTTP Service
I20220526 16:22:39.544327 12776 WebService.cpp:124] Web service started on HTTP[19669]
I20220526 16:22:39.544554 12766 GraphDaemon.cpp:144] Number of networking IO threads: 20
I20220526 16:22:39.544608 12766 GraphDaemon.cpp:153] Number of worker threads: 20
I20220526 16:22:39.572134 12766 MetaClient.cpp:80] Create meta client to "127.0.0.1":9559
I20220526 16:22:39.572208 12766 MetaClient.cpp:81] root path: /home/nebula, data path size: 0
I20220526 16:22:41.651983 12766 MetaClient.cpp:3079] Load leader of "192.168.8.80":9779 in 1 space
I20220526 16:22:41.652053 12766 MetaClient.cpp:3079] Load leader of "192.168.8.80":9000 in 0 space
I20220526 16:22:41.652073 12766 MetaClient.cpp:3079] Load leader of "192.168.8.80":9669 in 0 space
I20220526 16:22:41.652091 12766 MetaClient.cpp:3085] Load leader ok
I20220526 16:22:41.669402 12766 MetaClient.cpp:148] Register time task for heartbeat!
I20220526 16:22:41.677089 12766 GraphSessionManager.cpp:331] Total of 0 sessions are loaded
I20220526 16:22:41.679409 12766 Snowflake.cpp:16] WorkerId init success: 1
I20220526 16:22:41.681720 13019 GraphServer.cpp:59] Starting nebula-graphd on 192.168.8.80:9669
I20220526 16:22:42.071838 12826 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:44986
I20220526 16:22:42.074986 12827 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:44988
I20220526 16:22:42.077724 12825 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:44990
I20220526 16:22:42.080451 12827 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:44992
I20220526 16:22:42.082906 12825 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:44994
I20220526 16:22:42.090145 12827 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:44996
I20220526 16:22:42.092768 12825 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:44998
I20220526 16:22:42.095053 12827 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:45000
I20220526 16:22:42.097340 12825 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:45002
I20220526 16:22:42.099947 12827 GraphService.cpp:68] Authenticating user root from [::ffff:192.168.8.80]:45004
I20220526 16:22:42.102653 12827 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.104986 12827 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.105041 12817 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.105082 12808 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.105121 12816 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.105154 12812 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.105316 12818 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.105051 12827 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.107751 12827 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.108137 12806 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
I20220526 16:22:42.109292 12808 SwitchSpaceExecutor.cpp:37] Graph switched to `test', space id: 1
E20220526 16:22:42.110040 12994 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20220526 16:22:42.110057 12993 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20220526 16:22:42.110394 13002 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20220526 16:22:42.110057 12996 StorageClientBase-inl.h:206] Request to "192.168.8.80":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)

Run SHOW HOSTS to check the status, and also post the storaged logs.

  • SHOW HOSTS
+----------------+------+-----------+-----------+--------------+----------------------+------------------------+---------+
| Host           | Port | HTTP port | Status    | Leader count | Leader distribution  | Partition distribution | Version |
+----------------+------+-----------+-----------+--------------+----------------------+------------------------+---------+
| "192.168.8.80" | 9779 | 19669     | "ONLINE"  | 10           | "test:10"            | "test:10"              | "3.1.0" |
| "192.168.8.80" | 9000 | 19669     | "OFFLINE" | 0            | "No valid partition" | "No valid partition"   |         |
| "192.168.8.80" | 9669 | 19669     | "OFFLINE" | 0            | "No valid partition" | "test:10"              |         |
+----------------+------+-----------+-----------+--------------+----------------------+------------------------+---------+

  • nebula-storaged.ERROR
Log file created at: 2022/05/17 18:43:25
Running on machine: localhost.localdomain
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
E20220517 18:43:25.595103  4316 FileUtils.cpp:377] Failed to read the directory "/home/nebula/data/storage/nebula" (2): No such file or directory

  • nebula-storaged.INFO
Log file created at: 2022/05/26 16:42:17
Running on machine: localhost.localdomain
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
I20220526 16:42:17.660050 14354 StorageDaemon.cpp:129] localhost = "192.168.8.80":9779
I20220526 16:42:17.660634 14354 StorageDaemon.cpp:144] data path= /home/nebula/data/storage
I20220526 16:42:17.718219 14354 MetaClient.cpp:80] Create meta client to "127.0.0.1":9559
I20220526 16:42:17.718348 14354 MetaClient.cpp:81] root path: /home/nebula, data path size: 1
I20220526 16:42:17.721895 14354 FileBasedClusterIdMan.cpp:53] Get clusterId: 5910483749134832870
I20220526 16:42:20.813230 14354 MetaClient.cpp:3079] Load leader of "192.168.8.80":9779 in 1 space
I20220526 16:42:20.813346 14354 MetaClient.cpp:3079] Load leader of "192.168.8.80":9000 in 0 space
I20220526 16:42:20.813367 14354 MetaClient.cpp:3079] Load leader of "192.168.8.80":9669 in 0 space
I20220526 16:42:20.813385 14354 MetaClient.cpp:3085] Load leader ok
I20220526 16:42:20.816754 14354 MetaClient.cpp:148] Register time task for heartbeat!
I20220526 16:42:20.816805 14354 StorageServer.cpp:200] Init schema manager
I20220526 16:42:20.816823 14354 StorageServer.cpp:203] Init index manager
I20220526 16:42:20.816836 14354 StorageServer.cpp:206] Init kvstore
I20220526 16:42:20.816880 14354 NebulaStore.cpp:51] Start the raft service...
I20220526 16:42:20.822304 14354 NebulaSnapshotManager.cpp:25] Send snapshot is rate limited to 10485760 for each part by default
I20220526 16:42:20.837890 14354 RaftexService.cpp:46] Start raft service on 9780
I20220526 16:42:20.838078 14354 NebulaStore.cpp:85] Scan the local path, and init the spaces_
I20220526 16:42:20.838168 14354 NebulaStore.cpp:92] Scan path "/home/nebula/data/storage/nebula/0"
I20220526 16:42:20.838191 14354 NebulaStore.cpp:92] Scan path "/home/nebula/data/storage/nebula/1"
I20220526 16:42:20.838574 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option max_bytes_for_level_base=268435456
I20220526 16:42:20.838615 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option max_write_buffer_number=4
I20220526 16:42:20.838631 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option write_buffer_size=67108864
I20220526 16:42:20.838994 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option block_size=8192
I20220526 16:42:20.867664 14354 RocksEngine.cpp:97] open rocksdb on /home/nebula/data/storage/nebula/1/data
I20220526 16:42:20.867882 14354 NebulaStore.cpp:196] Load space 1 from disk
I20220526 16:42:20.867913 14354 NebulaStore.cpp:205] Need to open 10 parts of space 1
I20220526 16:42:21.450711 14565 NebulaStore.cpp:228] Load part 1, 3 from disk
I20220526 16:42:21.494879 14564 NebulaStore.cpp:228] Load part 1, 1 from disk
I20220526 16:42:21.524407 14566 NebulaStore.cpp:228] Load part 1, 5 from disk
I20220526 16:42:21.578505 14567 NebulaStore.cpp:228] Load part 1, 7 from disk
I20220526 16:42:21.758921 14565 NebulaStore.cpp:228] Load part 1, 11 from disk
I20220526 16:42:21.796762 14564 NebulaStore.cpp:228] Load part 1, 9 from disk
I20220526 16:42:21.820132 14566 NebulaStore.cpp:228] Load part 1, 13 from disk
I20220526 16:42:21.846478 14567 NebulaStore.cpp:228] Load part 1, 15 from disk
I20220526 16:42:21.979348 14565 NebulaStore.cpp:228] Load part 1, 19 from disk
I20220526 16:42:22.034783 14564 NebulaStore.cpp:228] Load part 1, 17 from disk
I20220526 16:42:22.034860 14354 NebulaStore.cpp:262] Load space 1 complete
I20220526 16:42:22.034893 14354 NebulaStore.cpp:271] Init data from partManager for "192.168.8.80":9779
I20220526 16:42:22.034915 14354 NebulaStore.cpp:369] Data space 1 has existed!
I20220526 16:42:22.034935 14354 NebulaStore.cpp:430] [Space: 1, Part: 1] has existed!
I20220526 16:42:22.034945 14354 NebulaStore.cpp:430] [Space: 1, Part: 3] has existed!
I20220526 16:42:22.034951 14354 NebulaStore.cpp:430] [Space: 1, Part: 5] has existed!
I20220526 16:42:22.034957 14354 NebulaStore.cpp:430] [Space: 1, Part: 7] has existed!
I20220526 16:42:22.034962 14354 NebulaStore.cpp:430] [Space: 1, Part: 9] has existed!
I20220526 16:42:22.034968 14354 NebulaStore.cpp:430] [Space: 1, Part: 11] has existed!
I20220526 16:42:22.034984 14354 NebulaStore.cpp:430] [Space: 1, Part: 13] has existed!
I20220526 16:42:22.034991 14354 NebulaStore.cpp:430] [Space: 1, Part: 15] has existed!
I20220526 16:42:22.034997 14354 NebulaStore.cpp:430] [Space: 1, Part: 17] has existed!
I20220526 16:42:22.035002 14354 NebulaStore.cpp:430] [Space: 1, Part: 19] has existed!
I20220526 16:42:22.035027 14354 NebulaStore.cpp:78] Register handler...
I20220526 16:42:22.035034 14354 StorageServer.cpp:209] Init LogMonitor
I20220526 16:42:22.035158 14354 StorageServer.cpp:95] Starting Storage HTTP Service
I20220526 16:42:22.035518 14354 StorageServer.cpp:99] Http Thread Pool started
I20220526 16:42:22.041631 14600 WebService.cpp:124] Web service started on HTTP[19779]
I20220526 16:42:22.041702 14354 TransactionManager.cpp:24] TransactionManager ctor()
I20220526 16:42:22.042048 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option max_bytes_for_level_base=268435456
I20220526 16:42:22.042062 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option max_write_buffer_number=4
I20220526 16:42:22.042068 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option write_buffer_size=67108864
I20220526 16:42:22.042215 14354 RocksEngineConfig.cpp:366] Emplace rocksdb option block_size=8192
I20220526 16:42:22.048866 14354 RocksEngine.cpp:97] open rocksdb on /home/nebula/data/storage/nebula/0/data
I20220526 16:42:22.048928 14354 AdminTaskManager.cpp:22] max concurrent subtasks: 10
I20220526 16:42:22.049050 14354 AdminTaskManager.cpp:40] exit AdminTaskManager::init()
I20220526 16:42:22.049077 14621 AdminTaskManager.cpp:227] waiting for incoming task
I20220526 16:42:41.647656 14562 MetaClient.cpp:3079] Load leader of "192.168.8.80":9779 in 1 space
I20220526 16:42:41.647758 14562 MetaClient.cpp:3079] Load leader of "192.168.8.80":9000 in 0 space
I20220526 16:42:41.647780 14562 MetaClient.cpp:3079] Load leader of "192.168.8.80":9669 in 0 space
I20220526 16:42:41.647797 14562 MetaClient.cpp:3085] Load leader ok

The meta service has probably gone down. Please check the meta configuration.
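
(A quick liveness check for metad, assuming the default ws_http_port 19559, would be something like:)

curl http://192.168.8.80:19559/status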

  • nebula-metad.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-metad.pid

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=metad-stdout.log
--stderr_log_file=metad-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# wether logging files' name contain time stamp, If Using logrotate to rotate logging files, than should set it to true.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta Server addresses
--meta_server_addrs=127.0.0.1:9559
# Local IP used to identify the nebula-metad process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=127.0.0.1
# Meta daemon listening port
--port=9559
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19559
# Port to listen on Storage with HTTP protocol, it corresponds to ws_http_port in storage's configuration file
--ws_storage_http_port=19779

########## storage ##########
# Root data path, here should be only single path for metad
--data_path=data/meta

########## Misc #########
# The default number of parts when a space is created
--default_parts_num=100
# The default replica factor when a space is created
--default_replica_factor=1

--heartbeat_interval_secs=10
--agent_heartbeat_interval_secs=60

Please also post the storage configuration.

  • nebula-graphd.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-graphd.pid
# Whether to enable optimizer
--enable_optimizer=true
# The default charset when a space is created
--default_charset=utf8
# The default collate when a space is created
--default_collate=utf8_bin
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=graphd-stdout.log
--stderr_log_file=graphd-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# wether logging files' name contain time stamp.
--timestamp_in_logfile_name=true
########## query ##########
# Whether to treat partial success as an error.
# This flag is only used for Read-only access, and Modify access always treats partial success as an error.
--accept_partial_success=false
# Maximum sentence length, unit byte
--max_allowed_query_size=4194304

########## networking ##########
# Comma separated Meta Server Addresses
--meta_server_addrs=127.0.0.1:9559
# Local IP used to identify the nebula-graphd process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=192.168.8.80
# Network device to listen on
--listen_netdev=any
# Port to listen on
--port=9669
# To turn on SO_REUSEPORT or not
--reuse_port=false
# Backlog of the listen socket, adjust this together with net.core.somaxconn
--listen_backlog=1024
# The number of seconds Nebula service waits before closing the idle connections
--client_idle_timeout_secs=28800
# The number of seconds before idle sessions expire
# The range should be in [1, 604800]
--session_idle_timeout_secs=28800
# The number of threads to accept incoming connections
--num_accept_threads=1
# The number of networking IO threads, 0 for # of CPU cores
--num_netio_threads=0
# The number of threads to execute user queries, 0 for # of CPU cores
--num_worker_threads=0
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19669
# storage client timeout
--storage_client_timeout_ms=60000
# Port to listen on Meta with HTTP protocol, it corresponds to ws_http_port in metad's configuration file
--ws_meta_http_port=19559

########## authentication ##########
# Enable authorization
--enable_authorize=true
# User login authentication type, password for nebula authentication, ldap for ldap authentication, cloud for cloud authentication
--auth_type=password

########## memory ##########
# System memory high watermark ratio, cancel the memory checking when the ratio greater than 1.0
--system_memory_high_watermark_ratio=0.8

########## metrics ##########
--enable_space_level_metrics=false

########## experimental feature ##########
# if use experimental features
--enable_experimental_feature=false

  • nebula-storaged.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-storaged.pid
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Wether logging files' name contain time stamp.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta server addresses
--meta_server_addrs=127.0.0.1:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=192.168.8.80
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# heartbeat with meta service
--heartbeat_interval_secs=10

######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
## recycle Raft WAL
--wal_ttl=14400

########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
--data_path=data/storage

# Minimum reserved bytes of each data path
--minimum_reserved_bytes=268435456

# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=4
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb

# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4

# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=

# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false

# Statslevel used by rocksdb to collection statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers

# Whether or not to enable rocksdb's prefix bloom filter, enabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable rocksdb's whole key bloom filter, disabled by default.
--enable_rocksdb_whole_key_filtering=false

############## Key-Value separation ##############
# Whether or not to enable BlobDB (RocksDB key-value separation support)
--rocksdb_enable_kv_separation=false
# RocksDB key value separation threshold in bytes. Values at or above this threshold will be written to blob files during flush or compaction.
--rocksdb_kv_separation_threshold=100
# Compression algorithm for blobs, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
--rocksdb_blob_compression=lz4
# Whether to garbage collect blobs during compaction
--rocksdb_enable_blob_garbage_collection=true

############## rocksdb Options ##############
# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
--rocksdb_db_options={}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_column_family_options={"write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_block_based_table_options={"block_size":"8192"}

Change every 127.0.0.1 to 192.168.8.80, then restart the services and try again?
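
Roughly like this (a sketch, assuming the default RPM install path /usr/local/nebula):

cd /usr/local/nebula/etc
sudo sed -i 's/127\.0\.0\.1/192.168.8.80/g' nebula-metad.conf nebula-graphd.conf nebula-storaged.conf
sudo /usr/local/nebula/scripts/nebula.service restart all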

Hi, after changing the IPs and restarting the services, the same problem still occurs. Here is the updated configuration:

  • nebula-graphd.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-graphd.pid
# Whether to enable optimizer
--enable_optimizer=true
# The default charset when a space is created
--default_charset=utf8
# The default collate when a space is created
--default_collate=utf8_bin
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=graphd-stdout.log
--stderr_log_file=graphd-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# wether logging files' name contain time stamp.
--timestamp_in_logfile_name=true
########## query ##########
# Whether to treat partial success as an error.
# This flag is only used for Read-only access, and Modify access always treats partial success as an error.
--accept_partial_success=false
# Maximum sentence length, unit byte
--max_allowed_query_size=4194304

########## networking ##########
# Comma separated Meta Server Addresses
--meta_server_addrs=192.168.8.80:9559
# Local IP used to identify the nebula-graphd process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=192.168.8.80
# Network device to listen on
--listen_netdev=any
# Port to listen on
--port=9669
# To turn on SO_REUSEPORT or not
--reuse_port=false
# Backlog of the listen socket, adjust this together with net.core.somaxconn
--listen_backlog=1024
# The number of seconds Nebula service waits before closing the idle connections
--client_idle_timeout_secs=28800
# The number of seconds before idle sessions expire
# The range should be in [1, 604800]
--session_idle_timeout_secs=28800
# The number of threads to accept incoming connections
--num_accept_threads=1
# The number of networking IO threads, 0 for # of CPU cores
--num_netio_threads=0
# The number of threads to execute user queries, 0 for # of CPU cores
--num_worker_threads=0
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19669
# storage client timeout
--storage_client_timeout_ms=60000
# Port to listen on Meta with HTTP protocol, it corresponds to ws_http_port in metad's configuration file
--ws_meta_http_port=19559

########## authentication ##########
# Enable authorization
--enable_authorize=true
# User login authentication type, password for nebula authentication, ldap for ldap authentication, cloud for cloud authentication
--auth_type=password

########## memory ##########
# System memory high watermark ratio, cancel the memory checking when the ratio greater than 1.0
--system_memory_high_watermark_ratio=0.8

########## metrics ##########
--enable_space_level_metrics=false

########## experimental feature ##########
# if use experimental features
--enable_experimental_feature=false

  • nebula-metad.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-metad.pid

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=metad-stdout.log
--stderr_log_file=metad-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# wether logging files' name contain time stamp, If Using logrotate to rotate logging files, than should set it to true.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta Server addresses
--meta_server_addrs=192.168.8.80:9559
# Local IP used to identify the nebula-metad process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=192.168.8.80
# Meta daemon listening port
--port=9559
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19559
# Port to listen on Storage with HTTP protocol, it corresponds to ws_http_port in storage's configuration file
--ws_storage_http_port=19779

########## storage ##########
# Root data path, here should be only single path for metad
--data_path=data/meta

########## Misc #########
# The default number of parts when a space is created
--default_parts_num=100
# The default replica factor when a space is created
--default_replica_factor=1

--heartbeat_interval_secs=10
--agent_heartbeat_interval_secs=60

  • nebula-storaged.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-storaged.pid
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Wether logging files' name contain time stamp.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta server addresses
--meta_server_addrs=192.168.8.80:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=192.168.8.80
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# heartbeat with meta service
--heartbeat_interval_secs=10

######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
## recycle Raft WAL
--wal_ttl=14400

########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
--data_path=data/storage

# Minimum reserved bytes of each data path
--minimum_reserved_bytes=268435456

# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=4
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb

# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4

# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=

# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false

# Statslevel used by rocksdb to collection statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers

# Whether or not to enable rocksdb's prefix bloom filter, enabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable rocksdb's whole key bloom filter, disabled by default.
--enable_rocksdb_whole_key_filtering=false

############## Key-Value separation ##############
# Whether or not to enable BlobDB (RocksDB key-value separation support)
--rocksdb_enable_kv_separation=false
# RocksDB key value separation threshold in bytes. Values at or above this threshold will be written to blob files during flush or compaction.
--rocksdb_kv_separation_threshold=100
# Compression algorithm for blobs, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
--rocksdb_blob_compression=lz4
# Whether to garbage collect blobs during compaction
--rocksdb_enable_blob_garbage_collection=true

############## rocksdb Options ##############
# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
--rocksdb_db_options={}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_column_family_options={"write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_block_based_table_options={"block_size":"8192"}

  • SHOW HOSTS after the change
+----------------+------+-----------+-----------+--------------+----------------------+------------------------+---------+
| Host           | Port | HTTP port | Status    | Leader count | Leader distribution  | Partition distribution | Version |
+----------------+------+-----------+-----------+--------------+----------------------+------------------------+---------+
| "192.168.8.80" | 9779 | 19669     | "ONLINE"  | 10           | "test:10"            | "test:10"              | "3.1.0" |
| "192.168.8.80" | 9000 | 19669     | "OFFLINE" | 0            | "No valid partition" | "No valid partition"   |         |
| "192.168.8.80" | 9669 | 19669     | "OFFLINE" | 0            | "No valid partition" | "test:10"              |         |
+----------------+------+-----------+-----------+--------------+----------------------+------------------------+---------+

  • Error messages

2022/05/27 13:21:34 --- START OF NEBULA IMPORTER ---
2022/05/27 13:21:34 [INFO] clientmgr.go:31: Create 10 Nebula Graph clients
2022/05/27 13:21:34 [INFO] reader.go:49: The delimiter of /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv is U+002C ','
2022/05/27 13:21:34 [INFO] reader.go:68: Start to read file(0): /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv, schema: < :VID(int) >
2022/05/27 13:21:34 [INFO] reader.go:184: Total lines of file(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv) is: 10, error lines: 0
2022/05/27 13:21:34 [INFO] reader.go:49: The delimiter of /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv is U+002C ','
2022/05/27 13:21:34 [INFO] reader.go:68: Start to read file(1): /home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv, schema: < :SRC_VID(int),:DST_VID(int) >
2022/05/27 13:21:34 [INFO] reader.go:184: Total lines of file(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv) is: 3, error lines: 0
2022/05/27 13:21:39 [ERROR] handler.go:63: Client 3 fail to execute: INSERT VERTEX `person`() VALUES  3: ();, Error: write tcp 192.168.8.80:46152->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:39 [ERROR] handler.go:63: Client 5 fail to execute: INSERT VERTEX `person`() VALUES  5: ();, Error: write tcp 192.168.8.80:46156->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:39 [ERROR] handler.go:63: Client 9 fail to execute: INSERT VERTEX `person`() VALUES  9: ();, Error: write tcp 192.168.8.80:46164->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:39 [ERROR] handler.go:63: Client 1 fail to execute: INSERT VERTEX `person`() VALUES  1: ();, Error: write tcp 192.168.8.80:46148->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:39 [ERROR] handler.go:63: Client 2 fail to execute: INSERT VERTEX `person`() VALUES  2: ();, Error: write tcp 192.168.8.80:46150->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:39 [ERROR] handler.go:63: Client 7 fail to execute: INSERT VERTEX `person`() VALUES  7: ();, Error: write tcp 192.168.8.80:46160->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:39 [INFO] statsmgr.go:89: Done(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/vertices.csv): Time(4.55s), Finished(10), Failed(6), Read Failed(0), Latency AVG(161158us), Batches Req AVG(162688us), Rows AVG(2.20/s)
2022/05/27 13:21:39 [INFO] statsmgr.go:89: Tick: Time(5.00s), Finished(10), Failed(6), Read Failed(0), Latency AVG(161158us), Batches Req AVG(162688us), Rows AVG(2.00/s)
2022/05/27 13:21:42 [ERROR] handler.go:63: Client 3 fail to execute: INSERT EDGE `follow`() VALUES  7->10:() ;, Error: write tcp 192.168.8.80:46152->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [ERROR] handler.go:63: Client 2 fail to execute: INSERT EDGE `follow`() VALUES  3->6:() ;, Error: write tcp 192.168.8.80:46150->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [ERROR] handler.go:63: Client 1 fail to execute: INSERT EDGE `follow`() VALUES  1->2:() ;, Error: write tcp 192.168.8.80:46148->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [INFO] statsmgr.go:89: Done(/home/luyilun/Documents/yuanmou/gdbm_experiments/nebula/edges.csv): Time(7.55s), Finished(13), Failed(9), Read Failed(0), Latency AVG(123968us), Batches Req AVG(125145us), Rows AVG(1.72/s)
2022/05/27 13:21:42 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:46148->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:46150->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:46152->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:46156->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:46160->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 [WARN] session.go:280: [nebula-go] Sign out failed, write tcp 192.168.8.80:46164->192.168.8.80:9669: write: broken pipe
2022/05/27 13:21:42 Total 9 lines fail to insert into nebula graph database
2022/05/27 13:21:43 --- END OF NEBULA IMPORTER ---

Judging from this error, 6 of the 10 vertices failed to import while the other 4 succeeded. What could be causing this?

Your port is configured as 9559, so why does SHOW HOSTS list a host on port 9000? Please confirm that this is an RPM installation and that only one Nebula deployment is running.

Hi, the 9000 host was accidentally added when I was running the console earlier; I just removed it with DROP HOSTS. I don't know where the offline 9669 host came from, and DROP HOSTS cannot remove it.

I've confirmed it is indeed an RPM installation. The first install just went to the wrong location, so I removed it and reinstalled following the tutorial. So there should only be one Nebula deployment.

SHOW HOSTS META; SHOW HOSTS STORAGE; SHOW HOSTS GRAPH;

Please run these separately and share the results.