nebula-storaged high memory usage

  1. Background
  • v3.6.0
  • 7 machines, each 64 cores / 256 GB RAM
  • 3 metad, 7 graphd, 7 storaged

Historical data was imported with SST files. After loading the SST files into Nebula (https://docs.nebula-graph.com.cn/3.6.0/import-export/nebula-exchange/use-exchange/ex-ug-import-from-sst/), nebula-storaged memory usage climbs until the process is OOM-killed.
Memory usage on the 256 GB node was as follows (screenshot omitted):


Restarting storaged reliably reproduces the process above.
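To capture the growth curve before the OOM kill, a minimal procfs sampler can be used (Linux-only sketch; `rss_kb` is a hypothetical helper name, not part of Nebula):

```shell
#!/bin/sh
# Sketch: read a process's resident set size (kB) from /proc (Linux only).
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Example: poll nebula-storaged every 5 s until it exits; the loop ends
# when the kernel OOM-kills the process.
# pid=$(pgrep -f nebula-storaged | head -1)
# while kill -0 "$pid" 2>/dev/null; do
#   echo "$(date +%T) $(rss_kb "$pid") kB"
#   sleep 5
# done
```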

  2. Configuration
########## basics ##########
--daemonize=true
--pid_file=pids/nebula-storaged-listener.pid
--local_config=true

########## networking ##########
--meta_server_addrs=xxxxx.1:9559,xxxx.2:9559,xxxxx.3:9559
--local_ip=xxxxxxxxx
--port=9779
--ws_ip=0.0.0.0
--ws_http_port=19779
--heartbeat_interval_secs=10

######### Raft #########
--raft_heartbeat_interval_secs=30
--raft_rpc_timeout_ms=5000
--wal_ttl=1440

########## Disk ##########
--data_path=/data/software/nebula/data/storage,/data1/software/nebula/data/storage


--minimum_reserved_bytes=268435456
--rocksdb_batch_size=4096
--rocksdb_block_cache=102400
--disable_page_cache=true
--rocksdb_compression=lz4

# Set different compressions for different levels
--rocksdb_compression_per_level=
--num_compaction_threads=16

############## rocksdb Options ##############
--rocksdb_db_options={"max_subcompactions":"4","max_background_jobs":"4","skip_checking_sst_file_sizes_on_db_open":"true","max_open_files":"30","max_background_compactions":"8","max_background_flushes":"16","compaction_readahead_size":"32K"}

--rocksdb_column_family_options={"disable_auto_compactions":"true","write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
--rocksdb_block_based_table_options={"block_size":"16384","cache_index_and_filter_blocks":"true"}


--enable_rocksdb_statistics=false
--rocksdb_stats_level=kExceptHistogramOrTimers

--enable_rocksdb_prefix_filtering=true
--enable_rocksdb_whole_key_filtering=false

############### misc ####################
--query_concurrently=true
--auto_remove_invalid_space=true
--num_io_threads=32
--num_max_connections=0
--num_worker_threads=32
--max_concurrent_subtasks=10
--snapshot_part_rate_limit=10485760
--snapshot_batch_size=1048576
--rebuild_index_part_rate_limit=4194304
--rebuild_index_batch_size=1048576

########## memory tracker ##########
--memory_tracker_limit_ratio=0.3
--memory_tracker_untracked_reserved_memory_mb=10240

--memory_tracker_detail_log=true
--memory_tracker_detail_log_interval_ms=60000

--memory_purge_enabled=true
--memory_purge_interval_seconds=10

--timezone_name=UTC+08:00
--max_edge_returned_per_vertex=10000
--storage_client_timeout_ms=60000
--enable_partitioned_index_filter=false
--reader_handlers=32
--max_batch_size=1024
#--rocksdb_rate_limit=30
--optimize_appendvertices=1
#rocksdb_filtering_prefix_length=16
--move_files=true
--rocksdb_disable_wal=true
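A back-of-envelope check of the memory tracker flags above on a 256 GB node, assuming the tracked limit is roughly (total − untracked reserved) × ratio as the flag names suggest (the exact formula lives in the storaged source; this is only an estimate):

```shell
#!/bin/sh
# Rough arithmetic for --memory_tracker_limit_ratio=0.3 and
# --memory_tracker_untracked_reserved_memory_mb=10240 on a 256 GiB node.
total_mb=$((256 * 1024))      # 262144 MiB
untracked_mb=10240
awk -v t="$total_mb" -v u="$untracked_mb" \
    'BEGIN { printf "tracker limit: %.0f MiB\n", (t - u) * 0.3 }'
```

So the tracker would start refusing allocations around 74 GiB, well before a 256 GB node is exhausted; memory growing past that would point at allocations the tracker does not see (for example RocksDB-internal buffers).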
  3. jeprof memory analysis

nebula-storaged.pdf (15.4 KB)

  4. SST file size
du -sh ./nebula/74/data/
319G    ./nebula/74/data/

ll ./nebula/74/data/*.sst | wc -l 
3743
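From the two numbers above, a quick average gives a sense of how large each SST file is (integer arithmetic, rounded down):

```shell
#!/bin/sh
# 319 GiB of data spread across 3743 SST files (numbers from above).
total_gib=319
files=3743
avg_mib=$(( total_gib * 1024 / files ))
echo "average SST size: ${avg_mib} MiB"
```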
  5. sst_dump

TODO: sst_dump errors out

./sst_dump  --file=/data/software/nebula/data/storage/nebula/74/data/003814.sst  --compression_types=kLZ4Compression

options.env is 0xcea600
Process /data/software/nebula/data/storage/nebula/74/data/003814.sst
Sst file format: block-based
/data/software/nebula/data/storage/nebula/74/data/003814.sst: Not implemented: Unsupported compression method for this build: LZ4
/data/software/nebula/data/storage/nebula/74/data/003814.sst is not a valid SST file

There are similar posts, but none of them resolve this.

Have a look at this post; is anything in it useful?

UncompressBlockContentsForCompressionType seems to point at max_open_files. Have you changed it?

The post "记一次 nebula-storaged 内存占用高解决的过程" (a writeup of resolving high nebula-storaged memory usage) also pointed at this at first.


max_open_files = 30 still doesn't help.

What do you see after that? Memory usage should change once it is adjusted, right?

No change. The test above was already run with max_open_files limited.

Tried those parameters; they have no effect.

When memory is climbing, check what it is being spent on:
curl -sv "http://127.0.0.1:19779/rocksdb_property?space=$value&property=rocksdb.estimate-table-readers-mem"
curl -sv "http://127.0.0.1:19779/rocksdb_property?space=$value&property=rocksdb.block-cache-usage"
curl -sv "http://127.0.0.1:19779/rocksdb_property?space=$value&property=rocksdb.cur-size-all-mem-tables"
curl -sv "http://127.0.0.1:19779/rocksdb_property?space=$value&property=rocksdb.block-cache-pinned-usage"
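The four queries above can be wrapped in a small loop (a sketch; `prop_url` and `query_space` are hypothetical helper names, and the host/port are taken from the config earlier in the thread):

```shell
#!/bin/sh
# Sketch: build the storaged rocksdb_property URL and query the four
# memory-related RocksDB properties for one space.
prop_url() {
  printf 'http://127.0.0.1:19779/rocksdb_property?space=%s&property=%s' "$1" "$2"
}

query_space() {
  for prop in rocksdb.estimate-table-readers-mem \
              rocksdb.block-cache-usage \
              rocksdb.cur-size-all-mem-tables \
              rocksdb.block-cache-pinned-usage; do
    curl -s "$(prop_url "$1" "$prop")"
    echo
  done
}

# Usage: query_space 74
```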


That step requires storaged to be up; right now storaged goes down before it even finishes starting.

Startup fails because the SST files are too many and too large, but I haven't found a way around that yet.

Turn off the block cache first: --rocksdb_block_cache=0
You may also need to add
--enable_partitioned_index_filter=true

Give it a try.
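Put together, the suggested change to the storaged configuration would look like the fragment below (a sketch of the two flags named above; with partitioned index/filter only the top-level index stays resident, so per-SST index/filter memory should shrink):

```
--rocksdb_block_cache=0
--enable_partitioned_index_filter=true
```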


Already applied; no effect.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.