NebulaGraph Multi-Host Cluster Deployment
This article walks through an example of deploying a cluster with Docker Compose.
Deploy NebulaGraph with Docker Compose
Docker Compose can quickly deploy NebulaGraph services based on prepared configuration files. This method is recommended only for testing NebulaGraph features.
Prerequisites
Install the following applications on the hosts.
Application | Recommended version | Official installation reference |
---|---|---|
Docker | Latest | Install Docker Engine |
Docker Compose | Latest | Install Docker Compose |
Git | Latest | Download Git |
Cluster plan
Hostname | IP | NebulaGraph services |
---|---|---|
spark1 | 192.168.2.10 | graphd-0, metad-0, storaged-0 |
spark2 | 192.168.2.140 | graphd-1, metad-1, storaged-1 |
spark3 | 192.168.2.74 | graphd-2, metad-2, storaged-2 |
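The plan above addresses the nodes by IP throughout, so hostname resolution is not required. If you nevertheless want the spark hosts to resolve one another by name, a hypothetical /etc/hosts fragment matching the plan would look like this (an optional convenience, not part of the original setup):

```
# Hypothetical /etc/hosts entries on each spark node, matching the cluster plan.
192.168.2.10   spark1
192.168.2.140  spark2
192.168.2.74   spark3
```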
Deploy NebulaGraph
-
Clone the release-3.8 branch of the nebula-docker-compose repository to the hosts.

```shell
$ git clone -b release-3.8 https://github.com/vesoft-inc/nebula-docker-compose.git
```
-
Switch to the nebula-docker-compose directory.

```shell
$ cd nebula-docker-compose/
```
-
Configure the docker-compose file.

```yaml
version: '3.4'
services:
  metad:
    image: docker.io/vesoft/nebula-metad:v3.8.0
    environment:
      USER: root
      TZ: "${TZ}"
    command:
      - --meta_server_addrs=192.168.2.10:9559,192.168.2.140:9559,192.168.2.74:9559
      - --local_ip=${HOST_IP}
      - --ws_ip=${HOST_IP}
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://${HOST_IP}:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9559:9559"
      - "19559:19559"
      - "19560:19560"
    volumes:
      - ./data/meta:/data/meta
      - ./logs/meta:/logs
    restart: on-failure
    network_mode: host
    cap_add:
      - SYS_PTRACE
  storaged:
    image: docker.io/vesoft/nebula-storaged:v3.8.0
    environment:
      USER: root
      TZ: "${TZ}"
    command:
      - --meta_server_addrs=192.168.2.10:9559,192.168.2.140:9559,192.168.2.74:9559
      - --local_ip=${HOST_IP}
      - --ws_ip=${HOST_IP}
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://${HOST_IP}:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9779:9779"
      - "19779:19779"
      - "19780:19780"
    volumes:
      - ./data/storage:/data/storage
      - ./logs/storage:/logs
    network_mode: host
    restart: on-failure
    cap_add:
      - SYS_PTRACE
  graphd:
    image: docker.io/vesoft/nebula-graphd:v3.8.0
    environment:
      USER: root
      TZ: "${TZ}"
    command:
      - --meta_server_addrs=192.168.2.10:9559,192.168.2.140:9559,192.168.2.74:9559
      - --port=9669
      - --local_ip=${HOST_IP}
      - --ws_ip=${HOST_IP}
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - storaged
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://${HOST_IP}:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9669:9669"
      - "19669:19669"
      - "19670:19670"
    volumes:
      - ./logs/graph:/logs
    network_mode: host
    restart: on-failure
    cap_add:
      - SYS_PTRACE
  console:
    image: docker.io/vesoft/nebula-console:v3.8
    entrypoint: ""
    command:
      - sh
      - -c
      - |
        for i in `seq 1 60`;do
          var=`nebula-console -addr graphd -port 9669 -u root -p nebula -e 'ADD HOSTS "192.168.2.10":9779,"192.168.2.140":9779,"192.168.2.74":9779'`;
          if [[ $$? == 0 ]];then
            break;
          fi;
          sleep 1;
          echo "retry to add hosts.";
        done && tail -f /dev/null
    depends_on:
      - graphd
    network_mode: host
```
Copy this docker-compose.yaml to each spark node accordingly.
-
Run the following command on every spark node to start the NebulaGraph services.

```shell
[nebula-docker-compose]$ HOST_IP=$(hostname -I | awk '{print $1}') docker-compose up -d
```
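Each node passes its own address to `--local_ip`/`--ws_ip` through the `HOST_IP` variable. The sketch below shows how that one-liner picks out the first address; the sample string is hypothetical, while on a real host `hostname -I` prints all of its addresses separated by spaces:

```shell
# Hypothetical output of `hostname -I` on spark1: primary NIC first, then docker0.
sample="192.168.2.10 172.17.0.1"

# awk '{print $1}' keeps only the first whitespace-separated field.
HOST_IP=$(echo "$sample" | awk '{print $1}')
echo "$HOST_IP"   # → 192.168.2.10
```

If the address you want is not the first one `hostname -I` reports, set HOST_IP explicitly instead.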
Connect to NebulaGraph
There are two ways to connect to NebulaGraph:
-
Use NebulaGraph Console from outside the containers. Because the configuration file fixes the Graph service's externally mapped port at 9669, you can connect directly through the default port.
-
Log in to the container where NebulaGraph Console is installed, and then connect to the Graph service. This section describes this method.
-
Check the NebulaGraph Console container name with the docker-compose ps command.

```shell
[root@stmt-k8s-node02 nebula-docker-compose]# HOST_IP=$(hostname -I | awk '{print $1}') docker-compose ps
NAME                               COMMAND                  SERVICE    STATUS              PORTS
nebula-docker-compose-console-1    "sh -c 'for i in `se…"   console    running
nebula-docker-compose-graphd-1     "/usr/local/nebula/b…"   graphd     running (healthy)
nebula-docker-compose-metad-1      "/usr/local/nebula/b…"   metad      running (healthy)
nebula-docker-compose-storaged-1   "/usr/local/nebula/b…"   storaged   running (healthy)
```
Note
nebula-docker-compose-console-1 and nebula-docker-compose-graphd-1 are the container names.
-
Enter the NebulaGraph Console container.

```shell
$ docker exec -it nebula-docker-compose-console-1 /bin/sh
/ #
```
-
Connect to NebulaGraph through NebulaGraph Console.

```shell
/ # /usr/local/bin/nebula-console -u <user_name> -p <password> --address=graphd --port=9669
```

Note
By default, authentication is disabled, so you can log in only with an existing username (root by default) and any password.
-
Check the cluster status.

```shell
nebula> SHOW HOSTS;
+-----------------+------+----------+--------------+---------------------+------------------------+---------+
| Host            | Port | Status   | Leader count | Leader distribution | Partition distribution | Version |
+-----------------+------+----------+--------------+---------------------+------------------------+---------+
| "192.168.2.10"  | 9779 | "ONLINE" | 2            | "sf01:1, sf10:1"    | "sf01:1, sf10:1"       | "3.8.0" |
| "192.168.2.74"  | 9779 | "ONLINE" | 2            | "sf01:1, sf10:1"    | "sf01:1, sf10:1"       | "3.8.0" |
| "192.168.2.140" | 9779 | "ONLINE" | 2            | "sf01:1, sf10:1"    | "sf01:1, sf10:1"       | "3.8.0" |
+-----------------+------+----------+--------------+---------------------+------------------------+---------+
```

Run exit twice to leave the container.
Deploy Studio
-
Configure the docker-compose file.

```yaml
version: '3.4'
services:
  web:
    image: vesoft/nebula-graph-studio:v3.10
    environment:
      USER: root
    ports:
      - 57008:7001
```

57008:7001 means that the container's internal port 7001 is mapped to external port 57008.
-
Build and start the Studio service. The -d flag runs the service container in the background.

```shell
docker-compose up -d
```

When docker-compose reports that the container has started, Studio for Docker has been launched successfully.
Run Importer with Docker
Pull the NebulaGraph Importer image and mount the local configuration file and CSV data files into the container. The commands are as follows:
$ docker pull vesoft/nebula-importer:<version>
$ docker run --rm -ti \
--network=host \
-v <config_file>:<config_file> \
-v <data_dir>:<data_dir> \
vesoft/nebula-importer:<version> \
--config <config_file>
<config_file>: fill in the absolute path of the YAML configuration file.
<data_dir>: fill in the absolute path of the CSV data files. If the files are not stored locally, ignore this parameter.
<version>: fill in the Importer version number.
For example:
docker run --rm -ti \
--network=host \
-v /data/logs:/home/logs \
-v /data/tanwei/nebula-importer.yaml-sf01:/home/nebula-importer-sf01.yaml \
-v /data/tanwei/0.1/dynamic/person_0_0.csv:/home/0.1/dynamic/person_0_0.csv \
-v /data/tanwei/0.1/dynamic/comment_0_0.csv:/home/0.1/dynamic/comment_0_0.csv \
-v /data/tanwei/0.1/dynamic/post_0_0.csv:/home/0.1/dynamic/post_0_0.csv \
-v /data/tanwei/0.1/dynamic/forum_0_0.csv:/home/0.1/dynamic/forum_0_0.csv \
-v /data/tanwei/0.1/static/organisation_0_0.csv:/home/0.1/static/organisation_0_0.csv \
-v /data/tanwei/0.1/static/place_0_0.csv:/home/0.1/static/place_0_0.csv \
-v /data/tanwei/0.1/static/tag_0_0.csv:/home/0.1/static/tag_0_0.csv \
-v /data/tanwei/0.1/static/tagclass_0_0.csv:/home/0.1/static/tagclass_0_0.csv \
-v /data/tanwei/0.1/dynamic/person_knows_person_0_0.csv:/home/0.1/dynamic/person_knows_person_0_0.csv \
-v /data/tanwei/0.1/dynamic/person_likes_comment_0_0.csv:/home/0.1/dynamic/person_likes_comment_0_0.csv \
-v /data/tanwei/0.1/dynamic/person_likes_post_0_0.csv:/home/0.1/dynamic/person_likes_post_0_0.csv \
-v /data/tanwei/0.1/dynamic/post_hasCreator_person_0_0.csv:/home/0.1/dynamic/post_hasCreator_person_0_0.csv \
-v /data/tanwei/0.1/dynamic/comment_hasCreator_person_0_0.csv:/home/0.1/dynamic/comment_hasCreator_person_0_0.csv \
-v /data/tanwei/0.1/dynamic/comment_hasTag_tag_0_0.csv:/home/0.1/dynamic/comment_hasTag_tag_0_0.csv \
-v /data/tanwei/0.1/dynamic/comment_isLocatedIn_place_0_0.csv:/home/0.1/dynamic/comment_isLocatedIn_place_0_0.csv \
-v /data/tanwei/0.1/dynamic/comment_replyOf_comment_0_0.csv:/home/0.1/dynamic/comment_replyOf_comment_0_0.csv \
-v /data/tanwei/0.1/dynamic/comment_replyOf_post_0_0.csv:/home/0.1/dynamic/comment_replyOf_post_0_0.csv \
-v /data/tanwei/0.1/dynamic/forum_containerOf_post_0_0.csv:/home/0.1/dynamic/forum_containerOf_post_0_0.csv \
-v /data/tanwei/0.1/dynamic/forum_hasMember_person_0_0.csv:/home/0.1/dynamic/forum_hasMember_person_0_0.csv \
-v /data/tanwei/0.1/dynamic/forum_hasModerator_person_0_0.csv:/home/0.1/dynamic/forum_hasModerator_person_0_0.csv \
-v /data/tanwei/0.1/dynamic/forum_hasTag_tag_0_0.csv:/home/0.1/dynamic/forum_hasTag_tag_0_0.csv \
-v /data/tanwei/0.1/dynamic/person_hasInterest_tag_0_0.csv:/home/0.1/dynamic/person_hasInterest_tag_0_0.csv \
-v /data/tanwei/0.1/dynamic/person_isLocatedIn_place_0_0.csv:/home/0.1/dynamic/person_isLocatedIn_place_0_0.csv \
-v /data/tanwei/0.1/dynamic/person_studyAt_organisation_0_0.csv:/home/0.1/dynamic/person_studyAt_organisation_0_0.csv \
-v /data/tanwei/0.1/dynamic/person_workAt_organisation_0_0.csv:/home/0.1/dynamic/person_workAt_organisation_0_0.csv \
-v /data/tanwei/0.1/dynamic/post_hasTag_tag_0_0.csv:/home/0.1/dynamic/post_hasTag_tag_0_0.csv \
-v /data/tanwei/0.1/dynamic/post_isLocatedIn_place_0_0.csv:/home/0.1/dynamic/post_isLocatedIn_place_0_0.csv \
-v /data/tanwei/0.1/static/organisation_isLocatedIn_place_0_0.csv:/home/0.1/static/organisation_isLocatedIn_place_0_0.csv \
-v /data/tanwei/0.1/static/place_isPartOf_place_0_0.csv:/home/0.1/static/place_isPartOf_place_0_0.csv \
-v /data/tanwei/0.1/static/tag_hasType_tagclass_0_0.csv:/home/0.1/static/tag_hasType_tagclass_0_0.csv \
-v /data/tanwei/0.1/static/tagclass_isSubclassOf_tagclass_0_0.csv:/home/0.1/static/tagclass_isSubclassOf_tagclass_0_0.csv \
vesoft/nebula-importer:v4.1 \
--config /home/nebula-importer-sf01.yaml
Configure the nebula-importer.yaml file
client:
version: v3
address: "192.168.2.10:9669,192.168.2.140:9669,192.168.2.74:9669"
user: root
password: nebula
ssl:
enable: false
certPath: "/home/xxx/cert/importer.crt"
keyPath: "/home/xxx/cert/importer.key"
caPath: "/home/xxx/cert/root.crt"
insecureSkipVerify: false
concurrencyPerAddress: 10
reconnectInitialInterval: 1s
retry: 3
retryInitialInterval: 1s
manager:
spaceName: sf01
batch: 1024
readerConcurrency: 50
importerConcurrency: 50
statsInterval: 10s
hooks:
before:
- statements:
- |
DROP SPACE IF EXISTS sf01;
CREATE SPACE IF NOT EXISTS sf01(partition_num=3, replica_factor=1, vid_type=FIXED_STRING(64));
USE sf01;
CREATE TAG Person(firstName string,lastName string,gender string,birthday string,creationDate string,locationIP string,browserUsed string,language string,email string);
CREATE TAG `Comment`(creationDate string,locationIP string,browserUsed string,content string,length string);
CREATE TAG Post(imageFile string,creationDate string,locationIP string,browserUsed string,language string,content string,length string);
CREATE TAG Forum(title string,creationDate string);
CREATE TAG Organisation(type string,name string,url string);
CREATE TAG Place(name string,url string,type string);
CREATE TAG `Tag`(name string,url string);
CREATE TAG TagClass(name string,url string);
CREATE EDGE PersonKnowsPerson(creationDate string);
CREATE EDGE PersonLikesComment(creationDate string);
CREATE EDGE PersonLikesPost(creationDate string);
CREATE EDGE PostHasCreatorPerson();
CREATE EDGE CommentHasCreatorPerson();
CREATE EDGE CommentHasTagTag();
CREATE EDGE CommentIsLocatedInPlace();
CREATE EDGE CommentReplyOfComment();
CREATE EDGE CommentReplyOfPost();
CREATE EDGE ForumContainerOfPost();
CREATE EDGE ForumHasMemberPerson(joinDate string);
CREATE EDGE ForumHasModeratorPerson();
CREATE EDGE ForumHasTagTag();
CREATE EDGE PersonHasInterestTag();
CREATE EDGE PersonIsLocatedInPlace();
CREATE EDGE PersonStudyAtOrganisation(classYear string);
CREATE EDGE PersonWorkAtOrganisation(workFrom string);
CREATE EDGE PostHasTagTag();
CREATE EDGE PostIsLocatedInPlace();
CREATE EDGE OrganisationIsLocatedInPlace();
CREATE EDGE PlaceIsPartOfPlace();
CREATE EDGE TagHasTypeTagClass();
CREATE EDGE TagClassIsSubclassOfTagClass();
wait: 10s
after:
- statements:
- |
SHOW SPACES;
log:
level: INFO
console: true
files:
- /home/logs/nebula-importer.log
sources:
- path: /home/0.1/dynamic/person_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: Person
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "firstName"
type: "STRING"
index: 1
- name: "lastName"
type: "STRING"
index: 2
- name: "gender"
type: "STRING"
index: 3
- name: "birthday"
type: "STRING"
index: 4
- name: "creationDate"
type: "STRING"
index: 5
- name: "locationIP"
type: "STRING"
index: 6
- name: "browserUsed"
type: "STRING"
index: 7
- name: "language"
type: "STRING"
index: 8
- name: "email"
type: "STRING"
index: 9
- path: /home/0.1/dynamic/comment_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: Comment
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "creationDate"
type: "STRING"
index: 1
- name: "locationIP"
type: "STRING"
index: 2
- name: "browserUsed"
type: "STRING"
index: 3
- name: "content"
type: "STRING"
index: 4
- name: "length"
type: "STRING"
index: 5
- path: /home/0.1/dynamic/post_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: Post
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "imageFile"
type: "STRING"
index: 1
- name: "creationDate"
type: "STRING"
index: 2
- name: "locationIP"
type: "STRING"
index: 3
- name: "browserUsed"
type: "STRING"
index: 4
- name: "language"
type: "STRING"
index: 5
- name: "content"
type: "STRING"
index: 6
- name: "length"
type: "STRING"
index: 7
- path: /home/0.1/dynamic/forum_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: Forum
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "title"
type: "STRING"
index: 1
- name: "creationDate"
type: "STRING"
index: 2
- path: /home/0.1/static/organisation_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: Organisation
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "type"
type: "STRING"
index: 1
- name: "name"
type: "STRING"
index: 2
- name: "url"
type: "STRING"
index: 3
- path: /home/0.1/static/place_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: Place
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "name"
type: "STRING"
index: 1
- name: "url"
type: "STRING"
index: 2
- name: "type"
type: "STRING"
index: 3
- path: /home/0.1/static/tag_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: Tag
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "name"
type: "STRING"
index: 1
- name: "url"
type: "STRING"
index: 2
- path: /home/0.1/static/tagclass_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
tags:
- name: TagClass
mode: INSERT
id:
type: "STRING"
index: 0
props:
- name: "name"
type: "STRING"
index: 1
- name: "url"
type: "STRING"
index: 2
- path: /home/0.1/dynamic/person_knows_person_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PersonKnowsPerson
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
props:
- name: "creationDate"
type: "STRING"
index: 2
- path: /home/0.1/dynamic/person_likes_comment_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PersonLikesComment
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
props:
- name: "creationDate"
type: "STRING"
index: 2
- path: /home/0.1/dynamic/person_likes_post_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PersonLikesPost
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
props:
- name: "creationDate"
type: "STRING"
index: 2
- path: /home/0.1/dynamic/post_hasCreator_person_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PostHasCreatorPerson
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/comment_hasCreator_person_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: CommentHasCreatorPerson
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/comment_hasTag_tag_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: CommentHasTagTag
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/comment_isLocatedIn_place_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: CommentIsLocatedInPlace
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/comment_replyOf_comment_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: CommentReplyOfComment
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/comment_replyOf_post_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: CommentReplyOfPost
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/forum_containerOf_post_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: ForumContainerOfPost
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/forum_hasMember_person_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: ForumHasMemberPerson
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
props:
- name: "joinDate"
type: "STRING"
index: 2
- path: /home/0.1/dynamic/forum_hasModerator_person_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: ForumHasModeratorPerson
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/forum_hasTag_tag_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: ForumHasTagTag
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/person_hasInterest_tag_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PersonHasInterestTag
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/person_isLocatedIn_place_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PersonIsLocatedInPlace
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/person_studyAt_organisation_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PersonStudyAtOrganisation
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
props:
- name: "classYear"
type: "STRING"
index: 2
- path: /home/0.1/dynamic/person_workAt_organisation_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PersonWorkAtOrganisation
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
props:
- name: "workFrom"
type: "STRING"
index: 2
- path: /home/0.1/dynamic/post_hasTag_tag_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PostHasTagTag
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/dynamic/post_isLocatedIn_place_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PostIsLocatedInPlace
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/static/organisation_isLocatedIn_place_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: OrganisationIsLocatedInPlace
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/static/place_isPartOf_place_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: PlaceIsPartOfPlace
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/static/tag_hasType_tagclass_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: TagHasTypeTagClass
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
- path: /home/0.1/static/tagclass_isSubclassOf_tagclass_0_0.csv
batch: 1024
csv:
delimiter: "|"
withHeader: true
lazyQuotes: false
edges:
- name: TagClassIsSubclassOfTagClass
mode: INSERT
src:
id:
type: "STRING"
index: 0
dst:
id:
type: "STRING"
index: 1
Check the status and ports of the NebulaGraph services
Run docker-compose ps to list the status and ports of the NebulaGraph services.
$ docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
tanwei-web-1 "./server" web running 0.0.0.0:57008->7001/tcp, :::57008->7001/tcp
If a service is abnormal, first identify the name of the abnormal container (for example nebula-docker-compose_graphd2_1), and then log in to the container to troubleshoot:
$ docker exec -it nebula-docker-compose_graphd2_1 bash
Check the data and logs of the NebulaGraph services
All data and logs of NebulaGraph are stored persistently in the nebula-docker-compose/data and nebula-docker-compose/logs directories.
The directory structure is as follows:
nebula-docker-compose/
├── docker-compose.yaml
├── data
│ ├── meta
│ └── storage
└── logs
├── graph
├── meta
└── storage
Modify configurations
The Docker Compose configuration file is located at nebula-docker-compose/docker-compose.yaml. Modify the configurations in this file and restart the services for the new configurations to take effect.
The configurations in the docker-compose.yaml file override the configuration files inside the service containers (/usr/local/nebula/etc), so you can also configure the services by modifying the docker-compose.yaml file.
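As a sketch (not from the original repository file): to make the Graph service log more verbosely, you could change the existing `--v=0` entry in the graphd service's `command` list.

```yaml
# Sketch: inside the graphd service definition in docker-compose.yaml.
command:
  # ...other flags unchanged...
  - --v=2            # was --v=0; higher glog verbosity
```

Because the flag is part of the container definition, recreate the service (for example with `docker-compose up -d graphd`) so the new value takes effect.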
Restart NebulaGraph services
Restart all NebulaGraph services:
$ docker-compose restart
Restarting nebula-docker-compose_console_1 ... done
Restarting nebula-docker-compose_graphd_1 ... done
Restarting nebula-docker-compose_graphd1_1 ... done
Restarting nebula-docker-compose_graphd2_1 ... done
Restarting nebula-docker-compose_storaged1_1 ... done
Restarting nebula-docker-compose_storaged0_1 ... done
Restarting nebula-docker-compose_storaged2_1 ... done
Restarting nebula-docker-compose_metad1_1 ... done
Restarting nebula-docker-compose_metad2_1 ... done
Restarting nebula-docker-compose_metad0_1 ... done
Restart multiple services, for example the graphd and storaged0 services:
$ docker-compose restart graphd storaged0
Restarting nebula-docker-compose_graphd_1 ... done
Restarting nebula-docker-compose_storaged0_1 ... done
Stop and remove NebulaGraph services
Run the following command to stop and remove all NebulaGraph services started by Docker Compose:
Danger
This command stops and removes the containers of all NebulaGraph services as well as the related networks. If volumes are defined in docker-compose.yaml, the related data is preserved.
The -v option of the command docker-compose down -v removes all local data. If you are using a nightly release and run into compatibility issues, try this command.
$ docker-compose down
If the following information is returned, the services have been stopped successfully.
Stopping nebula-docker-compose_console_1 ... done
Stopping nebula-docker-compose_graphd1_1 ... done
Stopping nebula-docker-compose_graphd_1 ... done
Stopping nebula-docker-compose_graphd2_1 ... done
Stopping nebula-docker-compose_storaged1_1 ... done
Stopping nebula-docker-compose_storaged0_1 ... done
Stopping nebula-docker-compose_storaged2_1 ... done
Stopping nebula-docker-compose_metad2_1 ... done
Stopping nebula-docker-compose_metad0_1 ... done
Stopping nebula-docker-compose_metad1_1 ... done
Removing nebula-docker-compose_console_1 ... done
Removing nebula-docker-compose_graphd1_1 ... done
Removing nebula-docker-compose_graphd_1 ... done
Removing nebula-docker-compose_graphd2_1 ... done
Removing nebula-docker-compose_storaged1_1 ... done
Removing nebula-docker-compose_storaged0_1 ... done
Removing nebula-docker-compose_storaged2_1 ... done
Removing nebula-docker-compose_metad2_1 ... done
Removing nebula-docker-compose_metad0_1 ... done
Removing nebula-docker-compose_metad1_1 ... done
Removing network nebula-docker-compose_nebula-net