NebulaGraph 2.5.0 queries report: Used memory(178593436KB) hits the high watermark(0.800000) of total system memory(196268364KB).

This can be set with system_memory_high_watermark_ratio in the graphd configuration file, nebula-graph.conf.

I upgraded from 2.0.1 to 2.5.0 and could not find this setting in nebula-graph.conf.

This option did not exist in 2.0.1. You can add the following line to nebula-graph.conf:
--system_memory_high_watermark_ratio=0.9
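The arithmetic behind the error message at the top of the thread can be checked directly: the error fires when used memory divided by total memory exceeds the configured ratio. This is a sketch inferred from the message format, not from NebulaGraph internals:

```shell
# Used/total values copied from the error message at the top of this thread.
# 178593436 / 196268364 is roughly 0.91, above the 0.8 default watermark.
awk 'BEGIN {
  r = 178593436 / 196268364
  printf "ratio=%.2f watermark=0.80 exceeded=%s\n", r, (r > 0.8 ? "yes" : "no")
}'
```

So with these numbers the default 0.8 threshold trips even before any single query is at fault; raising the ratio only moves the threshold, it does not reduce usage.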

Yes. I added the setting you suggested, and then got another error (one I had also run into before).

That error is caused by frequent requests; you can try running it again after a while.

The error output in Studio's nohup.out:

2021/09/02 22:33:50 ErrorCode: -1005, ErrorMsg: Used memory(107210216KB) hits the high watermark(0.800000) of total system memory(131860204KB).
2021/09/02 22:33:50.972 [D] [server.go:2867]  |      127.0.0.1| 200 |5m21.277283826s|   match| POST     /api/db/exec   r:/api/db/exec

The configuration in nebula-graph.conf:

########## Authentication ##########
# User login authentication type, password for nebula authentication, ldap for ldap authentication, cloud for cloud authentication
--auth_type=password
--system_memory_high_watermark_ratio = 0.9
--storage_client_timeout_ms=60000

As shown above I set 0.9, so why does the error still report 0.8? And even once it reports 0.9, if memory is still the problem, how can I reduce memory use, trading time for memory? 128 GB is not small for a test environment.

Did you ever solve this? I changed it to 0.9 and still get the same error.

Change it to 1.0.


I set 1.0 and it still reports 0.8; it did not take effect. What is going on? It worked before.

  1. Make sure the configuration file contains --local_config=true
  2. Restart the service for the new configuration to take effect
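The two steps above can be sketched as a quick check before restarting. The paths here are placeholders, not from the thread; note that the misspelled flag posted later in this thread ran without any complaint from the service, so a typo in a flag name is easy to miss:

```shell
# Sketch with a throwaway file; substitute your real etc/nebula-graphd.conf.
conf=/tmp/nebula-graphd.conf
cat > "$conf" <<'EOF'
--local_config=true
--system_memory_high_watermark_ratio=1.0
EOF

# Verify the exact flag spellings are present before restarting.
grep -c -- '--local_config=true' "$conf"
grep -c -- '--system_memory_high_watermark_ratio=' "$conf"
# then restart, e.g.: scripts/nebula.service restart all
```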
########## basics ##########
# Whether to run as a daemon process
--local_conf=true
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-graphd.pid
# Whether to enable optimizer
--enable_optimizer=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=graphd-stdout.log
--stderr_log_file=graphd-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2

########## query ##########
# Whether to treat partial success as an error.
# This flag is only used for Read-only access, and Modify access always treats partial success as an error.
--accept_partial_success=false

########## networking ##########
# Comma separated Meta Server Addresses
--meta_server_addrs=172.19.208.25:9559,172.19.208.20:9559,172.19.208.42:9559
# Local IP used to identify the nebula-graphd process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=172.19.208.25
# Network device to listen on
--listen_netdev=any
# Port to listen on
--port=9669
# To turn on SO_REUSEPORT or not
--reuse_port=false
# Backlog of the listen socket, adjust this together with net.core.somaxconn
--listen_backlog=1024
# Seconds before the idle connections are closed, 0 for never closed
--client_idle_timeout_secs=0
# Seconds before the idle sessions are expired, 0 for no expiration
--session_idle_timeout_secs=0
# The number of threads to accept incoming connections
--num_accept_threads=1
# The number of networking IO threads, 0 for # of CPU cores
--num_netio_threads=0
# The number of threads to execute user queries, 0 for # of CPU cores
--num_worker_threads=4
# HTTP service ip
--ws_ip=172.19.208.25
# HTTP service port
--ws_http_port=19669
# HTTP2 service port
--ws_h2_port=19670

# The default charset when a space is created
--default_charset=utf8
# The default collate when a space is created
--default_collate=utf8_bin

########## authorization ##########
# Enable authorization
--enable_authorize=false

########## Authentication ##########
# User login authentication type, password for nebula authentication, ldap for ldap authentication, cloud for cloud authentication
--auth_type=password
--system_memory_high_watermark_ratio = 1.0
--storage_client_timeout_ms=60000

I have had -local_config=true configured all along, all three servers are set to 1.0, and I ran restart all.

The parameter name is wrong: your config dump above has --local_conf, but the flag is --local_config.
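For anyone comparing against the config dump above, the one-line fix looks like this (comment style as in the posted nebula-graph.conf):

```
# wrong, as posted above (had no effect in this thread):
# --local_conf=true
# correct:
--local_config=true
```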


It was misspelled the whole time, yet data imports still ran. Now that I have corrected it, I get the following error:

2021/09/03 12:55:09 ErrorCode: -1005, ErrorMsg: Storage Error: part: 37, error: E_RPC_FAILURE(-3).
2021/09/03 12:55:09.681 [D] [server.go:2867]  |      127.0.0.1| 200 |3m51.736671116s|   match| POST     /api/db/exec   r:/api/db/exec


CPU usage is not high. How can I make it compute on multiple cores? CPU only reaches 100%.

Is 128 GB of memory still not enough? It ran out of memory and the executing node crashed again. How can I trade time for space to save memory?


I am running the statement below. Is there room to optimize the query? Could you advise? (The goal is to generate new edges from the queried vertices and edges; the new edges carry aggregate statistics, so the whole graph has to be queried.)

MATCH (t1)<-[:tagged_by]-(c:Content)-[:tagged_by]->(t2) 
WITH t1, t2, sum(c.count1) as weight, min(c.timestamp1) as earliest, max(c.timestamp1) as latest  
WHERE  id(t1) < id(t2)  
return count(*)
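No rewritten query was posted in the thread, but one standard tweak worth trying (my suggestion, not from the respondents) is to apply the id(t1) < id(t2) filter before the aggregation rather than after it, so each unordered (t1, t2) pair is aggregated only once instead of being aggregated twice and then half discarded:

```
MATCH (t1)<-[:tagged_by]-(c:Content)-[:tagged_by]->(t2)
WHERE id(t1) < id(t2)
WITH t1, t2, sum(c.count1) AS weight, min(c.timestamp1) AS earliest, max(c.timestamp1) AS latest
RETURN count(*)
```

If the schema allows it, adding a tag to t1 and t2 (as already done for c:Content) can also shrink the match frontier.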

I tried this today: after importing a few hundred vertices and edges, with no cycles, a MATCH over a bidirectional variable-length relationship blew through 32 GB of memory with this same error message, while querying along a single edge direction was fine and fast. The statement looks roughly like this:
match p=(v1:concept) -[e:related_to*1..100]- (v2:concept)
where v1.concept_code == "C00000001" and v2.concept_code == "C00000184"
RETURN p
Changing -[e:related_to*1..100]- to <-[e:related_to*1..100]- fixes it.



I have 2 TB of memory and only a fraction of it is used. Why do I hit this too?

Same here: 256 GB with 170 GB used, and we get the same error.

Have you tried changing the 0.8 to 1?

For versions below 2.5.1, we recommend setting this parameter to 1; the issue has been fixed in 2.6.0.


Has anyone here solved this?