Balance leader keeps failing

I just set up a new 3.1.0 cluster with 3 storage hosts. After creating a graph space and importing some data, `show hosts` showed the leaders were unbalanced, so I ran `balance leader`, but it keeps failing and I can't figure out why.
metad logs:
I20220727 11:04:36.995651 11603 MetaJobExecutor.cpp:31] partiton failed to transfer leader
I20220727 11:04:36.995703 11603 JobManager.cpp:201] Job dispatch failed
I20220727 11:04:36.995715 11603 JobManager.cpp:223] jobFinished, spaceId=2, jobId=12, result=FAILED
storaged logs:
I20220727 11:10:05.787549 5367 AdminProcessor.h:115] Can't find leader for space 2 part 128 on "172.17.141.117":9779
I20220727 11:10:05.787608 5366 AdminProcessor.h:115] Can't find leader for space 2 part 147 on "172.17.141.117":9779
I20220727 11:10:05.789521 5368 AdminProcessor.h:115] Can't find leader for space 2 part 130 on "172.17.141.117":9779
I20220727 11:10:05.791451 5369 AdminProcessor.h:115] Can't find leader for space 2 part 126 on "172.17.141.117":9779
I20220727 11:10:05.792737 5370 AdminProcessor.h:115] Can't find leader for space 2 part 148 on "172.17.141.117":9779
I20220727 11:10:05.794687 5371 AdminProcessor.h:115] Can't find leader for space 2 part 117 on "172.17.141.117":9779
I20220727 11:10:05.796669 5372 AdminProcessor.h:115] Can't find leader for space 2 part 133 on "172.17.141.117":9779
I20220727 11:10:05.800477 5373 AdminProcessor.h:115] Can't find leader for space 2 part 149 on "172.17.141.117":9779
I20220727 11:10:05.800575 5374 AdminProcessor.h:115] Can't find leader for space 2 part 144 on "172.17.141.117":9779
I20220727 11:10:05.802902 5375 AdminProcessor.h:115] Can't find leader for space 2 part 125 on "172.17.141.117":9779
I20220727 11:10:05.802989 5376 AdminProcessor.h:115] Can't find leader for space 2 part 146 on "172.17.141.117":9779
I20220727 11:10:05.804837 5377 AdminProcessor.h:115] Can't find leader for space 2 part 120 on "172.17.141.117":9779
I20220727 11:10:05.806766 5378 AdminProcessor.h:115] Can't find leader for space 2 part 150 on "172.17.141.117":9779
I20220727 11:10:05.807058 5379 AdminProcessor.h:115] Can't find leader for space 2 part 4 on "172.17.141.117":9779
I20220727 11:10:05.807211 5380 AdminProcessor.h:115] Can't find leader for space 2 part 132 on "172.17.141.117":9779
I20220727 11:10:05.807704 5381 AdminProcessor.h:115] Can't find leader for space 2 part 140 on "172.17.141.117":9779
I20220727 11:10:05.808131 5382 AdminProcessor.h:115] Can't find leader for space 2 part 131 on "172.17.141.117":9779
I20220727 11:10:05.808565 5383 AdminProcessor.h:115] Can't find leader for space 2 part 139 on "172.17.141.117":9779
I20220727 11:10:05.808991 5384 AdminProcessor.h:115] Can't find leader for space 2 part 129 on "172.17.141.117":9779
I20220727 11:10:05.809340 5385 AdminProcessor.h:115] Can't find leader for space 2 part 134 on "172.17.141.117":9779
I20220727 11:10:05.809839 5386 AdminProcessor.h:115] Can't find leader for space 2 part 143 on "172.17.141.117":9779
I20220727 11:10:06.020207 4554 MetaClient.cpp:2518] Send heartbeat to "172.17.141.116":9559, clusterId 8219658889359283833
I20220727 11:10:06.020357 4427 ThriftClientManager-inl.h:47] Getting a client to "172.17.141.116":9559
I20220727 11:10:06.020399 4427 MetaClient.cpp:702] Send request to meta "172.17.141.116":9559
I20220727 11:10:06.022291 4427 MetaClient.cpp:2533] Metad last update time: 1658891125765

Could you post your storage config and meta config?

meta:

########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-metad.pid

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=3
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=metad-stdout.log
--stderr_log_file=metad-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Whether logging file names contain a timestamp. If using logrotate to rotate logging files, this should be set to true.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta Server addresses
--meta_server_addrs=172.17.141.116:9559,172.17.141.117:9559,172.17.141.118:9559,10.200.90.67:9559,10.200.90.114:9559
# Local IP used to identify the nebula-metad process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=172.17.141.116
# Meta daemon listening port
--port=9559
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19559
# Port to listen on Storage with HTTP protocol, it corresponds to ws_http_port in storage's configuration file
--ws_storage_http_port=19779

########## storage ##########
# Root data path, here should be only single path for metad
--data_path=data/meta

########## Misc #########
# The default number of parts when a space is created
--default_parts_num=100
# The default replica factor when a space is created
--default_replica_factor=1

--heartbeat_interval_secs=10
--agent_heartbeat_interval_secs=60

I think you pasted the wrong one...

Both got pasted as metad's config. That said, I only changed two settings, local_ip and meta_server_addrs; everything else is the default config shipped with the package.

storage config:

########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-storaged.pid
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=3
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Whether logging file names contain a timestamp.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta server addresses
--meta_server_addrs=172.17.141.116:9559,172.17.141.117:9559,172.17.141.118:9559,10.200.90.67:9559,10.200.90.114:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=172.17.141.117
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# heartbeat with meta service
--heartbeat_interval_secs=10

######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
## recycle Raft WAL
--wal_ttl=600

########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
--data_path=data/storage

# Minimum reserved bytes of each data path
--minimum_reserved_bytes=268435456

# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=4
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb

# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4

# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=

# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false

# Statslevel used by rocksdb to collection statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers

# Whether or not to enable rocksdb's prefix bloom filter, enabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable rocksdb's whole key bloom filter, disabled by default.
--enable_rocksdb_whole_key_filtering=false

############## Key-Value separation ##############
# Whether or not to enable BlobDB (RocksDB key-value separation support)
--rocksdb_enable_kv_separation=false
# RocksDB key value separation threshold in bytes. Values at or above this threshold will be written to blob files during flush or compaction.
--rocksdb_kv_separation_threshold=100
# Compression algorithm for blobs, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
--rocksdb_blob_compression=lz4
# Whether to garbage collect blobs during compaction
--rocksdb_enable_blob_garbage_collection=true

############## rocksdb Options ##############
# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
--rocksdb_db_options={}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_column_family_options={"write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_block_based_table_options={"block_size":"8192"}

Have you checked the storaged logs? Does the storage side report that it can't elect a leader?

I've already cleared the earlier logs, but I did see errors about not finding a leader in them, like this:

I20220729 10:59:54.877018 37108 AdminProcessor.h:115] Can't find leader for space 40 part 1 on "172.17.141.116":9779

But I just noticed the cluster clocks were off. After syncing the clocks, the balance actually succeeded. Is this related to the machine clocks? Balance had never succeeded on this cluster before; only after I calibrated the time did job 42 finish. See the job history below:

(root@nebula) [test]> show jobs;
+--------+------------------+------------+----------------------------+----------------------------+
| Job Id | Command          | Status     | Start Time                 | Stop Time                  |
+--------+------------------+------------+----------------------------+----------------------------+
| 42     | "LEADER_BALANCE" | "FINISHED" | 2022-07-29T03:02:40.000000 | 2022-07-29T03:02:40.000000 |
| 41     | "LEADER_BALANCE" | "FAILED"   | 2022-07-29T02:59:54.000000 | 2022-07-29T03:00:19.000000 |
| 36     | "LEADER_BALANCE" | "FAILED"   | 2022-07-28T10:24:18.000000 | 2022-07-28T10:24:58.000000 |
| 35     | "LEADER_BALANCE" | "FAILED"   | 2022-07-28T11:00:25.000000 | 2022-07-28T11:01:05.000000 |
| 33     | "STATS"          | "FINISHED" | 2022-07-28T10:06:17.000000 | 2022-07-28T10:06:17.000000 |
| 22     | "STATS"          | "FINISHED" | 2022-07-27T07:39:07.000000 | 2022-07-27T07:39:07.000000 |
| 21     | "STATS"          | "FINISHED" | 2022-07-27T07:36:35.000000 | 2022-07-27T07:36:35.000000 |
| 20     | "STATS"          | "FINISHED" | 2022-07-27T07:35:30.000000 | 2022-07-27T07:35:30.000000 |
| 19     | "STATS"          | "FINISHED" | 2022-07-27T07:35:05.000000 | 2022-07-27T07:35:05.000000 |
| 18     | "STATS"          | "FINISHED" | 2022-07-27T07:34:44.000000 | 2022-07-27T07:34:44.000000 |
| 17     | "STATS"          | "FINISHED" | 2022-07-27T07:33:57.000000 | 2022-07-27T07:33:57.000000 |
| 15     | "LEADER_BALANCE" | "FAILED"   | 2022-07-27T03:15:35.000000 | 2022-07-27T03:16:00.000000 |
| 14     | "COMPACT"        | "FINISHED" | 2022-07-27T03:14:36.000000 | 2022-07-27T03:14:47.000000 |
| 13     | "LEADER_BALANCE" | "FAILED"   | 2022-07-27T03:09:20.000000 | 2022-07-27T03:09:45.000000 |
| 12     | "LEADER_BALANCE" | "FAILED"   | 2022-07-27T03:04:11.000000 | 2022-07-27T03:04:36.000000 |
| 10     | "STATS"          | "FINISHED" | 2022-07-25T09:47:27.000000 | 2022-07-25T09:47:27.000000 |
| 9      | "STATS"          | "FINISHED" | 2022-07-25T09:45:52.000000 | 2022-07-25T09:45:52.000000 |
| 8      | "STATS"          | "FINISHED" | 2022-07-25T08:55:04.000000 | 2022-07-25T08:55:04.000000 |
| 7      | "STATS"          | "FINISHED" | 2022-07-25T08:54:34.000000 | 2022-07-25T08:54:34.000000 |
| 6      | "STATS"          | "FINISHED" | 2022-07-25T08:54:22.000000 | 2022-07-25T08:54:22.000000 |
| 5      | "STATS"          | "FINISHED" | 2022-07-25T09:00:37.000000 | 2022-07-25T09:00:37.000000 |
| 4      | "STATS"          | "FINISHED" | 2022-07-25T08:59:59.000000 | 2022-07-25T08:59:59.000000 |
| 3      | "LEADER_BALANCE" | "FAILED"   | 2022-07-25T08:55:51.000000 | 2022-07-25T08:56:26.000000 |
+--------+------------------+------------+----------------------------+----------------------------+
Got 23 rows (time spent 7791/9314 us)
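The clock-skew effect can be illustrated with a small sketch. This is not NebulaGraph's actual implementation; it only shows why distributed coordination that compares wall-clock timestamps taken on different machines can misjudge liveness when one clock runs ahead (the `HEARTBEAT_TTL_S` value mirrors the `--heartbeat_interval_secs=10` setting in the configs above, and the timestamps are made up):

```python
# Illustration only: many distributed schedulers answer "has this host
# heartbeated within the last N seconds?" by subtracting a timestamp that
# was stamped on one machine from "now" on another machine. If the
# observer's clock runs ahead, a fresh heartbeat can look expired.

HEARTBEAT_TTL_S = 10  # mirrors --heartbeat_interval_secs=10 above

def host_looks_alive(last_heartbeat_ts: float, now_ts: float) -> bool:
    """True if the last heartbeat is within the TTL window."""
    return now_ts - last_heartbeat_ts <= HEARTBEAT_TTL_S

# Observer's clock matches the host that stamped the heartbeat: looks alive.
assert host_looks_alive(last_heartbeat_ts=1000.0, now_ts=1005.0)

# Same heartbeat judged by an observer whose clock runs 60 s ahead:
# it falsely looks expired, even though only 5 s really passed.
assert not host_looks_alive(last_heartbeat_ts=1000.0, now_ts=1065.0)
```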

For the earlier space, balance leader somehow started succeeding, as I mentioned in a previous reply. But I just created a new space and it's the same story again: the leaders are unbalanced right after creation, then balance leader fails, and the storaged logs again show "can't find leader" messages.
The CREATE SPACE statement:

CREATE SPACE social(partition_num=150, replica_factor=3, vid_type=fixed_string(30));
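For reference, here is what a balanced outcome for this space looks like. This is not NebulaGraph's actual placement algorithm, just a round-robin sketch: with 150 partitions, replica_factor 3, and 3 storage hosts, each host should end up leading about 150 / 3 = 50 partitions.

```python
# Round-robin sketch of an ideally balanced leader distribution for a
# space with partition_num=150 across the 3 storage hosts in this thread.
hosts = ["172.17.141.116", "172.17.141.117", "172.17.141.118"]

leaders = {h: 0 for h in hosts}
for part in range(1, 151):                   # partition ids 1..150
    leaders[hosts[part % len(hosts)]] += 1   # assign leaders round-robin

# Every host leads exactly 50 partitions in the balanced state.
assert all(count == 50 for count in leaders.values())
```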

Partial storaged logs:

I20220729 15:41:08.572850 39745 AdminProcessor.h:44] Receive transfer leader for space 45, part 19, to [172.17.141.118, 9779]
I20220729 15:41:08.572865 39754 AdminProcessor.h:44] Receive transfer leader for space 45, part 59, to [172.17.141.118, 9779]
I20220729 15:41:08.573043 39748 AdminProcessor.h:44] Receive transfer leader for space 45, part 57, to [172.17.141.118, 9779]
I20220729 15:41:08.573093 39776 AdminProcessor.h:44] Receive transfer leader for space 45, part 62, to [172.17.141.118, 9779]
I20220729 15:41:08.572973 39765 AdminProcessor.h:44] Receive transfer leader for space 45, part 92, to [172.17.141.118, 9779]
I20220729 15:41:08.573168 39769 AdminProcessor.h:44] Receive transfer leader for space 45, part 133, to [172.17.141.118, 9779]
I20220729 15:41:08.572901 39771 AdminProcessor.h:44] Receive transfer leader for space 45, part 132, to [172.17.141.118, 9779]
I20220729 15:41:08.573091 39760 AdminProcessor.h:44] Receive transfer leader for space 45, part 71, to [172.17.141.118, 9779]
I20220729 15:41:08.573091 39752 AdminProcessor.h:44] Receive transfer leader for space 45, part 82, to [172.17.141.118, 9779]
I20220729 15:41:08.575111 39746 AdminProcessor.h:44] Receive transfer leader for space 45, part 69, to [172.17.141.118, 9779]
I20220729 15:41:08.575131 39747 AdminProcessor.h:44] Receive transfer leader for space 45, part 66, to [172.17.141.118, 9779]
I20220729 15:41:08.575150 39749 AdminProcessor.h:44] Receive transfer leader for space 45, part 23, to [172.17.141.118, 9779]
I20220729 15:41:08.576858  3670 AdminProcessor.h:115] Can't find leader for space 45 part 89 on "172.17.141.117":9779
I20220729 15:41:08.579135  3672 AdminProcessor.h:115] Can't find leader for space 45 part 86 on "172.17.141.117":9779
I20220729 15:41:08.579707  3673 AdminProcessor.h:115] Can't find leader for space 45 part 97 on "172.17.141.117":9779
I20220729 15:41:08.580261  3674 AdminProcessor.h:115] Can't find leader for space 45 part 61 on "172.17.141.117":9779
I20220729 15:41:08.582315  3675 AdminProcessor.h:115] Can't find leader for space 45 part 7 on "172.17.141.117":9779
I20220729 15:41:08.583707  3676 AdminProcessor.h:115] Can't find leader for space 45 part 114 on "172.17.141.117":9779
I20220729 15:41:08.585008  3677 AdminProcessor.h:115] Can't find leader for space 45 part 56 on "172.17.141.117":9779
I20220729 15:41:08.586318  3678 AdminProcessor.h:115] Can't find leader for space 45 part 101 on "172.17.141.117":9779
I20220729 15:41:08.588399  3679 AdminProcessor.h:115] Can't find leader for space 45 part 118 on "172.17.141.117":9779
I20220729 15:41:08.590456  3680 AdminProcessor.h:115] Can't find leader for space 45 part 64 on "172.17.141.117":9779
I20220729 15:41:08.592540  3681 AdminProcessor.h:115] Can't find leader for space 45 part 93 on "172.17.141.117":9779
I20220729 15:41:08.594523  3682 AdminProcessor.h:115] Can't find leader for space 45 part 91 on "172.17.141.117":9779
I20220729 15:41:08.595894  3683 AdminProcessor.h:115] Can't find leader for space 45 part 17 on "172.17.141.117":9779
I20220729 15:41:08.597286  3684 AdminProcessor.h:115] Can't find leader for space 45 part 119 on "172.17.141.117":9779
I20220729 15:41:08.598595  3685 AdminProcessor.h:115] Can't find leader for space 45 part 74 on "172.17.141.117":9779
I20220729 15:41:08.601514  3686 AdminProcessor.h:115] Can't find leader for space 45 part 90 on "172.17.141.117":9779
I20220729 15:41:08.603531  3687 AdminProcessor.h:115] Can't find leader for space 45 part 87 on "172.17.141.117":9779
I20220729 15:41:08.606462  3688 AdminProcessor.h:115] Can't find leader for space 45 part 60 on "172.17.141.117":9779
I20220729 15:41:08.607748  3689 AdminProcessor.h:115] Can't find leader for space 45 part 79 on "172.17.141.117":9779
I20220729 15:41:08.609031  3690 AdminProcessor.h:115] Can't find leader for space 45 part 130 on "172.17.141.117":9779
I20220729 15:41:08.609115 39768 AdminProcessor.h:44] Receive transfer leader for space 45, part 3, to [172.17.141.116, 9779]
I20220729 15:41:08.609362 39768 AdminProcessor.h:44] Receive transfer leader for space 45, part 5, to [172.17.141.116, 9779]
I20220729 15:41:08.609719  3691 AdminProcessor.h:115] Can't find leader for space 45 part 83 on "172.17.141.117":9779

Can anyone still help with this?... Bumping the thread.

That means the elections are still in progress; waiting a bit should resolve it. And yes, it's best to keep the clocks synced as well.
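A minimal sketch of that advice as a polling loop. The `fetch` callable is hypothetical: in practice it would query the graph service (e.g. parse `SHOW HOSTS` output via a client library) and return a `{host: leader_count}` map; the loop just waits for the distribution to settle instead of retrying `balance leader` immediately:

```python
import time
from typing import Callable, Dict

def is_balanced(leader_counts: Dict[str, int], tolerance: int = 1) -> bool:
    """A distribution counts as balanced when the spread between the
    busiest and the idlest host is within `tolerance` partitions."""
    counts = list(leader_counts.values())
    return max(counts) - min(counts) <= tolerance

def wait_until_balanced(fetch: Callable[[], Dict[str, int]],
                        timeout_s: float = 120.0,
                        poll_s: float = 5.0) -> bool:
    """Poll a hypothetical fetch() returning {host: leader_count} until the
    leader distribution settles, or give up after timeout_s seconds.
    Uses time.monotonic(), which is immune to wall-clock adjustments."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_balanced(fetch()):
            return True
        time.sleep(poll_s)
    return False
```

For example, `wait_until_balanced(lambda: {"h1": 50, "h2": 50, "h3": 50})` returns immediately, while a skewed map such as `{"h1": 150, "h2": 0, "h3": 0}` keeps polling until the timeout.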
