Nebula reliably errors after bulk-inserting data past a certain point

The total dataset is about 200 GB. Every time the insert reaches roughly 40 GB, one storaged instance goes offline, and I don't know why.
Question template:

  • NebulaGraph version: 3.6
  • Deployment: distributed (cluster)
  • Installation method: RPM
  • In production: Y
Debug dump file log
This dump file has an exception of interest stored in it.
The stored exception information can be accessed via .ecxr.
(f0f0f0f0.144e): Unknown exception - code 00000006 (first/second chance not available)
For analysis of this file, run !analyze -v
*** WARNING: Unable to verify timestamp for libc.so.6
*** WARNING: Unable to verify timestamp for nebula-storaged

graphd log
 Request to "172.25.136.150":9779 failed: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20230922 04:43:01.416117  5773 StorageClientBase-inl.h:143] There some RPC errors: RPC failure in StorageClient: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused)
E20230922 04:43:01.416316  5773 StorageAccessExecutor.h:47] InsertEdgesExecutor failed, error E_RPC_FAILURE, part 4
E20230922 04:43:01.416342  5773 StorageAccessExecutor.h:47] InsertEdgesExecutor failed, error E_RPC_FAILURE, part 1
 
storaged log
Rocksdb compaction completed column family: default because of LevelL0FilesNum, status: IO error: While open a file for random read: /home/nebuladata/nebula/29/data/004369.sst: Too many open files, compacted 45 files into 0, base level is 0, output level is 1
I20230922 07:20:49.055927  5490 EventListener.h:147] BackgroundError: because of Compaction IO error: While open a file for random read: /home/nebuladata/nebula/29/data/004369.sst: Too many open files
F20230922 07:20:49.104537  5198 RaftPart.cpp:1097] [Port: 9780, Space: 29, Part: 5] Failed to commit logs
F20230922 07:20:49.104573  5192 RaftPart.cpp:1097] [Port: 9780, Space: 29, Part: 2] Failed to commit logs
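The RocksDB errors above point to the process hitting its open-file-descriptor limit. A quick way to confirm this before changing anything is to compare how many descriptors the storaged process actually holds against its limit. The snippet below is a minimal sketch; it assumes the process name `nebula-storaged` from the dump log above and a Linux host with `/proc` available:

```shell
# Find the storaged process and compare its open fds against its limit.
pid=$(pgrep -f nebula-storaged | head -n 1)
if [ -n "$pid" ]; then
  echo "open fds:  $(ls "/proc/$pid/fd" | wc -l)"
  # "Max open files" row shows the soft and hard nofile limits in effect
  grep 'open files' "/proc/$pid/limits"
fi
```

If the open-fd count is close to the soft limit shown, the `Too many open files` errors are explained.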

This looks like a system ulimit problem. Please paste the output of ulimit -a from your shell. We recommend setting the kernel parameters as described in the docs: Kernel configuration - NebulaGraph Database manual.
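For reference, raising the open-files limit usually involves both the shell limit and a persistent setting. This is a sketch, not the exact steps from the manual; the value 130000 is the figure commonly suggested in the NebulaGraph kernel-configuration docs, so adjust it to your workload, and note that the systemd unit name `nebula-storaged` is an assumption about your install:

```shell
# Inspect the current soft limit for open files
ulimit -n

# Temporary fix for the current shell session (before restarting storaged from it)
ulimit -n 130000

# Persistent fix: append nofile limits to /etc/security/limits.conf
cat <<'EOF' | sudo tee -a /etc/security/limits.conf
*  soft  nofile  130000
*  hard  nofile  130000
EOF
```

If storaged is managed by systemd, the service also needs `LimitNOFILE=130000` in its `[Service]` section, followed by `systemctl daemon-reload` and a restart, because systemd services do not read limits.conf.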