Cluster import fails after installing Dashboard: "too many component existed in one node"

Nebula version: 3.0.0
Deployment: cluster
Dashboard installation: tar package
Problem description:
Deployed following the documentation and got the following error:

Hi. For a cluster managed by Dashboard, each machine can run at most one metad, one storaged, and one graphd. Please check whether any other nebula services are running on those machines.
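A quick way to check this on each host (just a sketch; adjust the pattern to your own deployment):

    # List all NebulaGraph daemons on this host; if metad, graphd or
    # storaged appears more than once, the one-per-host rule is violated.
    ps -ef | grep -E 'nebula-(metad|graphd|storaged)' | grep -v grep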


Okay.
One more question: where exactly is the permission insufficient, logging in to the host or operating the Dashboard files?

I started Dashboard as root.

The username/password here is root/nebula.

The username/password here should be the host's account credentials.

I checked that, but I'm still confused.

My guess is that the SSH account you entered when authorizing the machines doesn't have permission to operate the nebula services on them.

The machines were authorized with root. Can even root not operate nebula?

Could you post your SSH user and the user/group of the nebula deployment path?
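For example, on one of the hosts (the install path below is only a placeholder; use your actual deployment path):

    # Which user the SSH session runs as
    id
    # Owner and group of the nebula deployment directory
    ls -ld /usr/local/nebula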

root is definitely fine. Are all the machines using root?

All three authorized hosts use root.

What I filled in is as follows:

Could you post the webserver log from when the import failed? It's in logs/webserver.log.
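For example, from the Dashboard install directory:

    # Last 100 lines of the Dashboard webserver log
    tail -n 100 logs/webserver.log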

2022/04/15 15:28:07 /__w/nebula-dashboard-ent/nebula-dashboard-ent/source/nebula-dashboard-ent/backend/internal/dao/machine.go:48 SLOW SQL >= 200ms
[1066.467ms] [rows:0] SELECT * FROM machines WHERE machines.deleted_at IS NULL

2022/04/15 15:28:07 /__w/nebula-dashboard-ent/nebula-dashboard-ent/source/nebula-dashboard-ent/backend/internal/dao/alert_rule.go:78 SLOW SQL >= 200ms
[1066.389ms] [rows:0] SELECT * FROM alert_rules WHERE alert_rules.deleted_at IS NULL
1.6500076870569985e+09 info monitor/prometheus.go:57 [Monitor] Sync Prometheus Config Success…
1.6500076870581e+09 info monitor/node_exporter.go:129 [Monitor] Sync Node Exporter Deploy Success…
1.6500076916862304e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.184

1.6500076916864135e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.2.184

1.6500076916864395e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.184
1.6500076916864817e+09 info task/task.go:62 + [ Serial ] -
[INFO] 2022/04/15 15:28 200 1.856621875s 192.168.11.207 POST /api/v1/machines/approve
1.6500077023382425e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.26.123

1.6500077023384268e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.26.123

1.6500077023384683e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.26.123
1.6500077023385236e+09 info task/task.go:62 + [ Serial ] -
[INFO] 2022/04/15 15:28 200 543.366887ms 192.168.11.207 POST /api/v1/machines/approve
1.6500077201345026e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.181
GetNebulaConfig: host=192.168.2.181, configPath=root
UserSSH: user=root, host=192.168.2.184
GetNebulaConfig: host=192.168.2.184, configPath=root
UserSSH: user=root, host=192.168.26.123
GetNebulaConfig: host=192.168.26.123, configPath=root
UserSSH: user=root, host=192.168.2.184
GetNebulaConfig: host=192.168.2.184, configPath=root
UserSSH: user=root, host=192.168.2.181
GetNebulaConfig: host=192.168.2.181, configPath=root
UserSSH: user=root, host=192.168.2.184
GetNebulaConfig: host=192.168.2.184, configPath=root
UserSSH: user=root, host=192.168.26.123
GetNebulaConfig: host=192.168.26.123, configPath=root
1.6500077201347306e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.26.123
GetNebulaConfig: host=192.168.26.123, configPath=root
1.6500077201347346e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.2.184
GetNebulaConfig: host=192.168.2.184, configPath=root
1.6500077201347637e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.2.184
GetNebulaConfig: host=192.168.2.184, configPath=root
1.6500077201348004e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.184
1.6500077201347775e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.2.181
GetNebulaConfig: host=192.168.2.181, configPath=root
1.650007720134813e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.2.184
GetNebulaConfig: host=192.168.2.184, configPath=root
1.650007720134821e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.181
1.6500077201348314e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.184
1.6500077201348417e+09 info task/task.go:62 + [ Serial ] - GetNebulaConfig: host=192.168.2.184, configPath=root
1.6500077201348388e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.26.123
GetNebulaConfig: host=192.168.26.123, configPath=root
1.6500077201348848e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.26.123
1.65000772013484e+09 info task/task.go:91 + [ Parallel ] - UserSSH: user=root, host=192.168.2.181
GetNebulaConfig: host=192.168.2.181, configPath=root
1.6500077201349132e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.181
1.6500077201348839e+09 info task/task.go:62 + [ Serial ] - GetNebulaConfig: host=192.168.2.184, configPath=root
1.6500077201349375e+09 info task/task.go:62 + [ Serial ] - GetNebulaConfig: host=192.168.2.181, configPath=root
1.6500077201348429e+09 info task/task.go:62 + [ Serial ] - GetNebulaConfig: host=192.168.2.181, configPath=root
1.6500077201347787e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.2.184
1.6500077201350732e+09 info task/task.go:62 + [ Serial ] - GetNebulaConfig: host=192.168.2.184, configPath=root
1.6500077201348984e+09 info task/task.go:62 + [ Serial ] - GetNebulaConfig: host=192.168.26.123, configPath=root
1.6500077201347525e+09 info task/task.go:62 + [ Serial ] - UserSSH: user=root, host=192.168.26.123
1.6500077201351724e+09 info task/task.go:62 + [ Serial ] - GetNebulaConfig: host=192.168.26.123, configPath=root
1.6500077202795458e+09 info task/nebula_auto_find.go:35 auto find {"host": "192.168.26.123", "component": "metad", "pid": "645527"}
1.650007720279539e+09 info task/nebula_auto_find.go:35 auto find {"host": "192.168.26.123", "component": "storaged", "pid": "1011312"}
1.6500077204111357e+09 info task/nebula_auto_find.go:51 auto find {"host": "192.168.26.123", "runtimeBin": "/data3/nebula300/bin/nebula-storaged"}
1.6500077204155827e+09 info task/nebula_auto_find.go:51 auto find {"host": "192.168.26.123", "runtimeBin": "/data3/nebula300/bin/nebula-metad"}
1.650007720718025e+09 info task/nebula_auto_find.go:35 auto find {"host": "192.168.2.184", "component": "storaged", "pid": "24640"}
1.6500077207371838e+09 info task/nebula_auto_find.go:35 auto find {"host": "192.168.2.184", "component": "graphd", "pid": "24594"}
1.6500077207743928e+09 info task/nebula_auto_find.go:35 auto find {"host": "192.168.2.184", "component": "metad", "pid": ""}
1.6500077211666338e+09 info task/nebula_auto_find.go:51 auto find {"host": "192.168.2.184", "runtimeBin": "/data3/nebula300/bin/nebula-storaged"}
1.650007721231351e+09 warn task/nebula_auto_find.go:47
1.650007721279324e+09 info task/nebula_auto_find.go:51 auto find {"host": "192.168.2.184", "runtimeBin": "/data3/nebula300/bin/nebula-graphd"}
1.6500077230827627e+09 info task/nebula_auto_find.go:35 auto find {"host": "192.168.2.181", "component": "storaged", "pid": "476\n2716\n17134\n21727"}
1.6500077230864327e+09 info task/nebula_auto_find.go:35 auto find {"host": "192.168.2.181", "component": "metad", "pid": "2620\n8710\n19432\n28389\n29116"}
1.6500077230865002e+09 warn core/handler.go:31 permissionDenied
[INFO] 2022/04/15 15:28 200 3.004330623s 192.168.11.207 POST /api/v1/clusters/import
1.6500077460356033e+09 info monitor/nebula_exporter.go:103 [Monitor] Sync Nebula Exporter Config Success…
1.6500077460363114e+09 info monitor/nebula_process.go:45 [Monitor] Sync Nebula Process Status Success…
1.6500077460375643e+09 info monitor/node_exporter.go:129 [Monitor] Sync Node Exporter Deploy Success…
1.6500077460401647e+09 info monitor/nebula_cluster.go:48 [Monitor] Sync Nebula Cluster Status Success…
1.6500077460420246e+09 info monitor/prometheus.go:57 [Monitor] Sync Prometheus Config Success…
[INFO] 2022/04/15 15:29 200 14.002327ms 192.168.11.207 GET /api/v1/alerts
[INFO] 2022/04/15 15:29 200 8.753764ms 192.168.11.207 GET /api/v1/tasks/current
1.6500078060022056e+09 info monitor/prometheus.go:57 [Monitor] Sync Prometheus Config Success…
1.6500078060144346e+09 info monitor/nebula_process.go:45 [Monitor] Sync Nebula Process Status Success…
1.65000780601456e+09 info monitor/nebula_cluster.go:48 [Monitor] Sync Nebula Cluster Status Success…
1.6500078060149014e+09 info monitor/nebula_exporter.go:103 [Monitor] Sync Nebula Exporter Config Success…
1.6500078060153995e+09 info monitor/node_exporter.go:129 [Monitor] Sync Node Exporter Deploy Success…
1.650007865994806e+09 info monitor/prometheus.go:57 [Monitor] Sync Prometheus Config Success…
1.650007865998235e+09 info monitor/nebula_cluster.go:48 [Monitor] Sync Nebula Cluster Status Success…
1.6500078659984486e+09 info monitor/nebula_process.go:45 [Monitor] Sync Nebula Process Status Success…
1.6500078659985294e+09 info monitor/nebula_exporter.go:103 [Monitor] Sync Nebula Exporter Config Success…
1.6500078659994085e+09 info monitor/node_exporter.go:129 [Monitor] Sync Node Exporter Deploy Success…
1.6500078889555478e+09 warn core/handler.go:61 authorizationInvalid
[INFO] 2022/04/15 15:31 200 704.31µs 192.168.11.207 GET /api/v1/alerts
[INFO] 2022/04/15 15:31 200 357.779µs 192.168.11.207 GET /login
[INFO] 2022/04/15 15:31 200 401.707µs 192.168.11.207 GET /api/v1/system/settings
[INFO] 2022/04/15 15:31 200 168.535µs 192.168.11.207 GET /api/v1/system/settings

Here are the last 100 lines of the log.

It looks like the pids found for storaged and metad on 181, and for metad on 184, are off. The current service-discovery logic is fairly rough: it just looks through ps -ef. Can you run ps -ef|grep nebula-metad on 181 and 184 and check whether there are other processes with the same name?
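Roughly like this on 181 and 184 (a quick sketch of the same check the discovery step performs):

    # Processes the discovery step would match
    ps -ef | grep nebula-metad    | grep -v grep
    ps -ef | grep nebula-storaged | grep -v grep
    # Count of metad processes; Dashboard expects at most 1 per host
    pgrep -fc nebula-metad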

On 181, running ps -ef|grep nebula-metad shows the following:

Right, there are 4 metad services on 181, which breaks Dashboard's constraint of at most one metad per machine. You can stop the metad instances that don't belong to the cluster you are importing.
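If those extra instances are tarball deployments, something like this should work (the path below is only an example, pointing at an instance that does not belong to the cluster):

    # Stop the extra metad from its own install directory
    /path/to/other-nebula/scripts/nebula.service stop metad
    # Or stop it by pid if there is no service script
    kill <pid-of-extra-metad>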

Why can't I kill the processes?

Looks like they were started with docker-compose. Can you find the docker-compose file and run docker-compose down?
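If you're not sure where the compose file lives, the container labels usually record its working directory (assuming these really are compose-managed containers):

    # Find the metad container and its compose working directory
    docker ps | grep metad
    docker inspect --format '{{ index .Config.Labels "com.docker.compose.project.working_dir" }}' <container-id>
    # Then, in that directory:
    docker-compose down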

Okay, I'll give that a try. Thanks for the help.
