After the nebula storaged service finished syncing data, its memory usage keeps growing by roughly 300 MB per day; over a longer time window the trend is very obvious.

  • nebula version: 3.3.0
  • Deployment: distributed (cluster)
  • Installation: RPM
  • In production: N
  • Problem description
    There is nothing unusual in the logs. After nebula storaged finished syncing data, its memory usage has been growing by roughly 300 MB per day, and this has gone on for a week. I expected the growth to level off at some point, but so far there is no sign of it stabilizing, and I am worried the service will crash. (Note: almost no new data has been written since the sync completed. The screenshot below shows the memory usage; I have confirmed the growth comes from the storaged service.)
    (screenshot: memory usage)
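To put a number on the trend before digging into config, a minimal sketch for estimating the daily growth rate from periodic RSS readings. The sample values below are made up for illustration; in practice you would collect one reading per day with something like `ps -o rss= -C nebula-storaged`.

```python
# Back-of-envelope check of the growth rate from periodic RSS samples.
def daily_growth_mb(samples):
    """samples: list of (day_index, rss_in_mb); returns average MB grown per day."""
    (d0, r0), (d1, r1) = samples[0], samples[-1]
    return (r1 - r0) / (d1 - d0)

# Hypothetical week of daily readings showing steady growth.
samples = [(0, 4100), (1, 4420), (2, 4700), (3, 5010), (4, 5320), (5, 5600), (6, 5900)]
print(daily_growth_mb(samples))  # 300.0 MB/day
```

A steady, near-linear slope like this usually points at something that grows with background work (caches, filters, statistics) rather than with write volume.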

The nebula storaged configuration is as follows:

########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-storaged.pid
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level: 1, 2, 3, or 4; the higher the level, the more verbose the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Whether logging file names contain a timestamp.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta server addresses
--meta_server_addrs={host_ip}:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip={ip}
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# heartbeat with meta service
--heartbeat_interval_secs=10

######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
## recycle Raft WAL
--wal_ttl=14400

########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
--data_path=data/storage
--enable_partitioned_index_filter=true
# Minimum reserved bytes of each data path
--minimum_reserved_bytes=268435456

# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=3072
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb

# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4

# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=

# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false

# Stats level used by rocksdb to collect statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers

# Whether or not to enable rocksdb's prefix bloom filter, enabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable rocksdb's whole key bloom filter, disabled by default.
--enable_rocksdb_whole_key_filtering=false

############## Key-Value separation ##############
# Whether or not to enable BlobDB (RocksDB key-value separation support)
--rocksdb_enable_kv_separation=false
# RocksDB key value separation threshold in bytes. Values at or above this threshold will be written to blob files during flush or compaction.
--rocksdb_kv_separation_threshold=100
# Compression algorithm for blobs, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
--rocksdb_blob_compression=lz4
# Whether to garbage collect blobs during compaction
--rocksdb_enable_blob_garbage_collection=true
--disable_page_cache=true
############## rocksdb Options ##############
# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
--rocksdb_db_options={"max_open_files":"800"}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_column_family_options={"disable_auto_compactions":"true","write_buffer_size":"46976204","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_block_based_table_options={"block_size":"8192","block_cache":"4096","cache_index_and_filter_blocks":"false"}

############### misc ####################
# Whether remove outdated space data
--auto_remove_invalid_space=true
# Network IO threads number
--num_io_threads=16
# Worker threads number to handle request
--num_worker_threads=32
# Maximum subtasks to run admin jobs concurrently
--max_concurrent_subtasks=10
# The rate limit in bytes when leader synchronizes snapshot data
--snapshot_part_rate_limit=10485760
# The amount of data sent in each batch when leader synchronizes snapshot data
--snapshot_batch_size=1048576
# The rate limit in bytes when leader synchronizes rebuilding index
--rebuild_index_part_rate_limit=4194304
# The amount of data sent in each batch when leader synchronizes rebuilding index
--rebuild_index_batch_size=1048576
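The RocksDB settings above imply a rough fixed memory budget per storaged instance. A sketch of the arithmetic, assuming `--rocksdb_block_cache=3072` is the effective cache size (note the config also sets `"block_cache":"4096"` inside `--rocksdb_block_based_table_options`, so it is worth confirming which one wins) and ignoring allocator overhead and OS page cache:

```python
# Rough RocksDB memory budget implied by the configuration above.
block_cache_mb = 3072          # --rocksdb_block_cache (MB)
write_buffer_bytes = 46976204  # write_buffer_size per memtable
max_write_buffers = 4          # max_write_buffer_number

memtables_mb = write_buffer_bytes * max_write_buffers / 1024 / 1024
total_mb = block_cache_mb + memtables_mb
print(round(memtables_mb), round(total_mb))  # 179 3251
```

That is already about 3.2 GB of the 8 GB machine before counting index/filter blocks; with `cache_index_and_filter_blocks=false`, those blocks live outside the block cache and can keep growing as more of the 60 GB dataset is read.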

Each machine has 8 GB of RAM, and the synced data is about 60 GB. Could someone help me figure out where the problem is?

The service crashed last night. Can anyone help take a look?

Could you provide the stderr log? Also, I suggest raising the log verbosity from `--v=0` to `--v=4` to record more detailed logs.

On the storage side, turn on enable_partitioned_index_filter by setting it to true.


https://docs.nebula-graph.com.cn/3.6.0/5.configurations-and-logs/1.configurations/4.storage-config/


It is already set that way.

storaged-stderr.log is empty.

Then it does look like the memory is too small. Try setting enable_rocksdb_prefix_filtering to false and watch the memory usage trend.

OK, I'll give it a try.

But why does it keep increasing? This has gone on for one or two weeks with essentially no new data written. If the problem were simply too much data for the available memory, I would expect it to show up soon after the sync finished, wouldn't I?

That feature trades memory for speed: it computes structures over the data in the background, record by record, and keeps them in memory. Setting enable_rocksdb_prefix_filtering to false, as suggested above, turns this feature off.
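For a sense of scale, a back-of-envelope estimate of how much memory bloom filter blocks can occupy, assuming RocksDB's common default of about 10 bits per key; the key count below is hypothetical:

```python
# Rough bloom filter footprint: num_keys * bits_per_key, converted to MB.
def bloom_filter_mb(num_keys, bits_per_key=10):
    return num_keys * bits_per_key / 8 / 1024 / 1024

# e.g. a hypothetical 1 billion keys -> roughly 1.2 GB of filter blocks
print(round(bloom_filter_mb(1_000_000_000)))  # 1192
```

On a 60 GB dataset with a 3 GB block cache already configured, a footprint of this order can plausibly account for steady growth on an 8 GB machine.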

If only one of the three machines in the cluster has this setting changed, should the effect be visible? At the moment the trend after the restart still looks like growth.
(screenshot: memory usage after restart)

After changing the config, memory still keeps increasing. Is some memory not being reclaimed? Do I need to run compaction manually?

Automatic compaction runs once a day, just not at a fixed time. Has your memory still not dropped after a full day?

Yes. I have disable_auto_compactions set to true. Running compaction manually does bring memory down somewhat, but it doesn't seem to return to the level of a day earlier.
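For reference, with auto compaction disabled a manual compaction can be submitted from the console per graph space; the space name and job id below are placeholders:

```ngql
nebula> USE my_space;        -- hypothetical space name
nebula> SUBMIT JOB COMPACT;  -- returns a job id
nebula> SHOW JOB <job_id>;   -- check the job's progress
```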

For your data volume, 8 GB of RAM may simply be too small.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.