I have reinstalled twice. Both times I rebuilt the index before inserting any data, and after inserting a single vertex, MATCH returned nothing.
The first time I deleted the directories of the three components but did not uninstall the packages; after bringing the cluster back up, the metadata (tags and so on) was still there, only the data was gone.
The second time I both uninstalled the packages and deleted the component directories, then redeployed from scratch.
(root@nebula) [(none)]> show hosts;
+--------------+------+----------+--------------+---------------------+------------------------+
| Host         | Port | Status   | Leader count | Leader distribution | Partition distribution |
+--------------+------+----------+--------------+---------------------+------------------------+
| "10.xx.xx.1" | 9779 | "ONLINE" | 5            | "logplatform:5"     | "logplatform:5"        |
+--------------+------+----------+--------------+---------------------+------------------------+
| "10.xx.xx.2" | 9779 | "ONLINE" | 5            | "logplatform:5"     | "logplatform:5"        |
+--------------+------+----------+--------------+---------------------+------------------------+
| "10.xx.xx.3" | 9779 | "ONLINE" | 5            | "logplatform:5"     | "logplatform:5"        |
+--------------+------+----------+--------------+---------------------+------------------------+
| "Total"      |      |          | 15           | "logplatform:15"    | "logplatform:15"       |
+--------------+------+----------+--------------+---------------------+------------------------+
Got 4 rows (time spent 1097/2337 us)
Wed, 09 Jun 2021 09:39:11 CST
-- Below is the verification following the steps you posted. I have in fact verified this many times already; MATCH and LOOKUP really do return nothing.
(root@nebula) [(none)]> CREATE SPACE test_space (partition_num=1,replica_factor=1, vid_type=fixed_string(30));
Execution succeeded (time spent 10692/12008 us)
Wed, 09 Jun 2021 09:40:39 CST
(root@nebula) [(none)]> USE test_space
Execution succeeded (time spent 1196/2348 us)
Wed, 09 Jun 2021 09:43:56 CST
(root@nebula) [test_space]> create tag t1(c1 fixed_string(40))
Execution succeeded (time spent 8580/10120 us)
Wed, 09 Jun 2021 09:44:06 CST
(root@nebula) [test_space]> INSERT VERTEX t1(c1) VALUES "1":("row_1")
Execution succeeded (time spent 1287/2730 us)
Wed, 09 Jun 2021 09:45:33 CST
(root@nebula) [test_space]> lookup on t1 where t1.c1 == "row_1"
[ERROR (-8)]: IndexNotFound: No valid index found
Wed, 09 Jun 2021 09:45:56 CST
(root@nebula) [test_space]> fetch prop on t1 '1'
+------------------------+
| vertices_              |
+------------------------+
| ("1" :t1{c1: "row_1"}) |
+------------------------+
Got 1 rows (time spent 1804/3320 us)
Wed, 09 Jun 2021 09:46:23 CST
(root@nebula) [test_space]> create tag index i1 on t1(c1)
Execution succeeded (time spent 11651/13052 us)
Wed, 09 Jun 2021 09:46:50 CST
(root@nebula) [test_space]> rebuild tag index i1;
+------------+
| New Job Id |
+------------+
| 9          |
+------------+
Got 1 rows (time spent 5435/6608 us)
Wed, 09 Jun 2021 09:47:14 CST
(root@nebula) [test_space]> show job 9
+----------------+---------------------+------------+-------------------------+-------------------------+
| Job Id(TaskId) | Command(Dest)       | Status     | Start Time              | Stop Time               |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 9              | "REBUILD_TAG_INDEX" | "FINISHED" | 2021-06-09T01:47:41.000 | 2021-06-09T01:47:41.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 0              | "10.xx.xx.15"       | "FINISHED" | 2021-06-09T01:47:41.000 | 2021-06-09T01:48:02.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 1              | "10.xx.xx.22"       | "FINISHED" | 2021-06-09T01:47:41.000 | 2021-06-09T01:48:02.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 2              | "10.xx.xx.114"      | "FINISHED" | 2021-06-09T01:47:41.000 | 2021-06-09T01:48:02.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
Got 4 rows (time spent 1115/2449 us)
Wed, 09 Jun 2021 09:47:19 CST
(root@nebula) [test_space]>
(root@nebula) [test_space]>
(root@nebula) [test_space]>
(root@nebula) [test_space]> lookup on t1 where t1.c1 == "row_1"
Empty set (time spent 1547/2720 us)
Wed, 09 Jun 2021 09:47:25 CST
(root@nebula) [test_space]>
The graphd configuration is as follows:
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=/data/pids/nebula-graphd.pid
# Whether to enable optimizer
--enable_optimizer=true
########## logging ##########
# The directory to host logging files
--log_dir=/data/graphlogs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=graphd-stdout.log
--stderr_log_file=graphd-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
########## query ##########
# Whether to treat partial success as an error.
# This flag is only used for Read-only access, and Modify access always treats partial success as an error.
--accept_partial_success=false
########## networking ##########
# Comma separated Meta Server Addresses
--meta_server_addrs=10.xx.xx.22:9559
# Local IP used to identify the nebula-graphd process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=0.0.0.0
# Network device to listen on
--listen_netdev=any
# Port to listen on
--port=9669
# To turn on SO_REUSEPORT or not
--reuse_port=false
# Backlog of the listen socket, adjust this together with net.core.somaxconn
--listen_backlog=1024
# Seconds before the idle connections are closed, 0 for never closed
--client_idle_timeout_secs=0
# Seconds before the idle sessions are expired, 0 for no expiration
--session_idle_timeout_secs=60000
# The number of threads to accept incoming connections
--num_accept_threads=1
# The number of networking IO threads, 0 for # of CPU cores
--num_netio_threads=0
# The number of threads to execute user queries, 0 for # of CPU cores
--num_worker_threads=0
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19669
# HTTP2 service port
--ws_h2_port=19670
# Heartbeat interval of communication between meta client and graphd service
--heartbeat_interval_secs=10
########## authorization ##########
# Enable authorization
--enable_authorize=true
########## authentication ##########
# User login authentication type, password for nebula authentication, ldap for ldap authentication, cloud for cloud authentication
--auth_type=password
The metad configuration:
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=/data/pids/nebula-metad.pid
########## logging ##########
# The directory to host logging files
--log_dir=/data/metalogs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=metad-stdout.log
--stderr_log_file=metad-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
########## networking ##########
# Comma separated Meta Server addresses
--meta_server_addrs=10.116.148.15:9559,10.116.148.22:9559,10.116.148.114:9559
# Local IP used to identify the nebula-metad process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=10.116.148.15
# Meta daemon listening port
--port=9559
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19559
# HTTP2 service port
--ws_h2_port=19560
########## storage ##########
# Root data path, here should be only single path for metad
--data_path=/data/meta
########## Misc #########
# The default number of parts when a space is created
--default_parts_num=100
# The default replica factor when a space is created
--default_replica_factor=1
--heartbeat_interval_secs=10
The storaged configuration:
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=/data/pids/nebula-storaged.pid
########## logging ##########
# The directory to host logging files
--log_dir=/data/storlogs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4, the higher of the level, the more verbose of the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
########## networking ##########
# Comma separated Meta server addresses
--meta_server_addrs=10.xx.xx.22:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=10.xx.xx.15
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# HTTP2 service port
--ws_h2_port=19780
# heartbeat with meta service
--heartbeat_interval_secs=10
######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
# recycle Raft WAL
--wal_ttl=14400
########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
--data_path=/data/storage
# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable. (MB)
# recommend: 1/3 of all memory
--rocksdb_block_cache=4096
# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4
# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=
############## rocksdb Options ##############
# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
--rocksdb_db_options={"max_subcompactions":"4","max_background_jobs":"4"}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_column_family_options={"disable_auto_compactions":"false","write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_block_based_table_options={"block_size":"8192"}
# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false
# Statslevel used by rocksdb to collection statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers
# Whether or not to enable rocksdb's prefix bloom filter, disabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable the whole key filtering.
--enable_rocksdb_whole_key_filtering=true
# The prefix length for each key to use as the filter value.
# can be 12 bytes(PartitionId + VertexID), or 16 bytes(PartitionId + VertexID + TagID/EdgeType).
--rocksdb_filtering_prefix_length=16
############### misc ####################
--max_handlers_per_req=1
Let's rule out the problems step by step. If you follow my procedure, it should run.
- Shut down the cluster.
- Change the meta config file to keep only one meta. The meta settings in graphd and storaged must match the ip:port in metad.
- Change heartbeat_interval_secs to 1 everywhere.
- Delete all meta and storage data directories; there is no need to remove the installed packages.
- Delete all cluster.id files.
- Restart the cluster.
- Then execute exactly the following statements, and do not run REBUILD:
- CREATE SPACE test_space (partition_num=1,replica_factor=1, vid_type=fixed_string(30));
- USE test_space
- create tag t1(c1 fixed_string(40))
- create tag index i1 on t1(c1)
- INSERT VERTEX t1(c1) VALUES "1":("row_1")
- lookup on t1 where t1.c1 == "row_1"
I also see that --local_ip is 0.0.0.0 in some of the config files; it is best to change it to the physical IP.
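The reset steps above can be sketched as a small shell script. This is a dry run that only prints each command rather than executing it; the data paths come from the configs in this thread, while the `nebula.service` script location and install prefix are assumptions to adjust for your deployment:

```shell
#!/bin/sh
# Dry-run sketch of the clean-reinstall steps: prints the commands instead of
# executing them. Data paths follow the configs above; the nebula.service
# location is an assumed default install prefix -- adjust as needed.
DATA_DIRS="/data/meta /data/storage"
NEBULA_HOME="/usr/local/nebula"   # assumption: default install prefix

run() { echo "+ $*"; }            # change to: run() { "$@"; } to really execute

run "$NEBULA_HOME/scripts/nebula.service" stop all     # 1. shut down the cluster
for d in $DATA_DIRS; do
  run rm -rf "$d"                                      # 2. wipe meta and storage data
done
run find "$NEBULA_HOME" -name cluster.id -delete       # 3. delete every cluster.id
run "$NEBULA_HOME/scripts/nebula.service" start all    # 4. restart the cluster
```

Once the cluster is back up, run the six nGQL statements listed above in that exact order (index created before the insert, no REBUILD).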
(root@nebula) [(none)]> CREATE SPACE test_space (partition_num=1,replica_factor=1, vid_type=fixed_string(30));
Execution succeeded (time spent 8668/9798 us)
Thu, 10 Jun 2021 10:13:11 CST
(root@nebula) [(none)]> USE test_space
Execution succeeded (time spent 917/2029 us)
Thu, 10 Jun 2021 10:13:18 CST
(root@nebula) [test_space]> create tag t1(c1 fixed_string(40));
Execution succeeded (time spent 8728/10070 us)
Thu, 10 Jun 2021 10:13:31 CST
(root@nebula) [test_space]> create tag index i1 on t1(c1);
Execution succeeded (time spent 11619/12746 us)
Thu, 10 Jun 2021 10:14:24 CST
(root@nebula) [test_space]> INSERT VERTEX t1(c1) VALUES "1":("row_1");
Execution succeeded (time spent 2443/3559 us)
Thu, 10 Jun 2021 10:15:00 CST
(root@nebula) [test_space]> lookup on t1 where t1.c1 == "row_1"
+----------+
| VertexID |
+----------+
| "1"      |
+----------+
Got 1 rows (time spent 1072/2227 us)
Thu, 10 Jun 2021 10:15:07 CST
(root@nebula) [test_space]> match (v:t1) where v.c1 == "row_1" return v;
Empty set (time spent 3154/4431 us)
Thu, 10 Jun 2021 10:15:35 CST
(root@nebula) [test_space]>
(root@nebula) [test_space]> rebuild tag index i1;
+------------+
| New Job Id |
+------------+
| 4          |
+------------+
Got 1 rows (time spent 23675/24852 us)
Thu, 10 Jun 2021 10:16:56 CST
(root@nebula) [test_space]> show jobs 4;
[ERROR (-7)]: SyntaxError: syntax error near `4;'
Thu, 10 Jun 2021 10:17:00 CST
(root@nebula) [test_space]> show job 4;
+----------------+---------------------+------------+-------------------------+-------------------------+
| Job Id(TaskId) | Command(Dest)       | Status     | Start Time              | Stop Time               |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 4              | "REBUILD_TAG_INDEX" | "FINISHED" | 2021-06-10T02:17:25.000 | 2021-06-10T02:17:25.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 0              | "10.xx.xx.15"       | "FINISHED" | 2021-06-10T02:17:25.000 | 2021-06-10T02:17:26.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 1              | "10.xx.xx.22"       | "FINISHED" | 2021-06-10T02:17:25.000 | 2021-06-10T02:17:26.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 2              | "10.xx.xx.114"      | "FINISHED" | 2021-06-10T02:17:25.000 | 2021-06-10T02:17:26.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
Got 4 rows (time spent 985/2138 us)
Thu, 10 Jun 2021 10:17:03 CST
(root@nebula) [test_space]> match (v:t1) where v.c1 == "row_1" return v;
Empty set (time spent 1773/2891 us)
Thu, 10 Jun 2021 10:17:08 CST
(root@nebula) [test_space]>
Hi expert, the above is the verification done after redeploying the cluster according to your plan. The result: LOOKUP can now find the data, but MATCH still matches nothing.
Looking at the logs, storage reports no errors.
Please help take another look. Thanks.
(root@nebula) [(none)]> create space logplatform(partition_num=15,replica_factor=1,vid_type=fixed_string(64));
Execution succeeded (time spent 7267/8328 us)
Fri, 11 Jun 2021 08:15:52 CST
(root@nebula) [(none)]> use logplatform;
Execution succeeded (time spent 828/1942 us)
Fri, 11 Jun 2021 08:15:58 CST
(root@nebula) [logplatform]>
(root@nebula) [logplatform]>
(root@nebula) [logplatform]> create tag serialnumber(sn string);
Execution succeeded (time spent 7548/8806 us)
Fri, 11 Jun 2021 08:17:25 CST
(root@nebula) [logplatform]> create tag index idx_serialnumber_sn on serialnumber(sn(30));
Execution succeeded (time spent 9894/11056 us)
Fri, 11 Jun 2021 08:17:33 CST
(root@nebula) [logplatform]> insert vertex serialnumber(sn) values 'sn1':('sn1');
Execution succeeded (time spent 1728/2845 us)
Fri, 11 Jun 2021 08:17:53 CST
(root@nebula) [logplatform]> match (v:serialnumber) where v.sn == 'sn1' return v;
Empty set (time spent 2347/3442 us)
Fri, 11 Jun 2021 08:17:58 CST
(root@nebula) [logplatform]> fetch prop on serialnumber 'sn1';
+----------------------------------+
| vertices_                        |
+----------------------------------+
| ("sn1" :serialnumber{sn: "sn1"}) |
+----------------------------------+
Got 1 rows (time spent 1066/2183 us)
Fri, 11 Jun 2021 08:18:06 CST
(root@nebula) [logplatform]> create tag atename(name string);
Execution succeeded (time spent 8115/9234 us)
Fri, 11 Jun 2021 08:18:12 CST
(root@nebula) [logplatform]> create tag index idx_atename_name on atename(name(20));
Execution succeeded (time spent 21246/22335 us)
Fri, 11 Jun 2021 08:18:17 CST
(root@nebula) [logplatform]>
(root@nebula) [logplatform]> create tag testtime(start_time datetime,end_time datetime);
Execution succeeded (time spent 13250/14378 us)
Fri, 11 Jun 2021 08:18:33 CST
(root@nebula) [logplatform]> create tag index idx_testtime_starttime on testtime(start_time);
Execution succeeded (time spent 18062/19150 us)
Fri, 11 Jun 2021 08:18:53 CST
(root@nebula) [logplatform]> create tag testlog(task_id string,sn string,ate_name string,content string,r1_guid string,service_name string,service_ver string,node_code string,localer string,caller string,create_time datetime);
Execution succeeded (time spent 7673/8732 us)
Fri, 11 Jun 2021 08:19:08 CST
(root@nebula) [logplatform]> create tag index idx_testlog_taskid on testlog(task_id(30));
Execution succeeded (time spent 11500/12590 us)
Fri, 11 Jun 2021 08:19:14 CST
(root@nebula) [logplatform]>
(root@nebula) [logplatform]> create edge sn2testlog(task_id string,atename string);
Execution succeeded (time spent 7580/8759 us)
Fri, 11 Jun 2021 08:19:38 CST
(root@nebula) [logplatform]> create edge ate2testtime(dt date);
Execution succeeded (time spent 8059/9315 us)
Fri, 11 Jun 2021 08:19:50 CST
(root@nebula) [logplatform]> create edge time2testlog(log_level string);
Execution succeeded (time spent 7690/8777 us)
Fri, 11 Jun 2021 08:19:58 CST
(root@nebula) [logplatform]> lookup on serialnumber where serialnumber.sn == 'sn1';
Empty set (time spent 1757/2880 us)
Fri, 11 Jun 2021 08:21:32 CST
(root@nebula) [logplatform]> fetch prop on serialnumber 'sn1';
+----------------------------------+
| vertices_                        |
+----------------------------------+
| ("sn1" :serialnumber{sn: "sn1"}) |
+----------------------------------+
Got 1 rows (time spent 975/2125 us)
Fri, 11 Jun 2021 08:21:52 CST
(root@nebula) [logplatform]> rebuild tag index idx_serialnumber_sn;
+------------+
| New Job Id |
+------------+
| 17         |
+------------+
Got 1 rows (time spent 8925/10286 us)
Fri, 11 Jun 2021 08:22:03 CST
(root@nebula) [logplatform]> show job 17
+----------------+---------------------+------------+-------------------------+-------------------------+
| Job Id(TaskId) | Command(Dest)       | Status     | Start Time              | Stop Time               |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 17             | "REBUILD_TAG_INDEX" | "FINISHED" | 2021-06-11T00:22:32.000 | 2021-06-11T00:22:32.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 0              | "10.xx.xx.15"       | "FINISHED" | 2021-06-11T00:22:32.000 | 2021-06-11T00:22:58.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 1              | "10.xx.xx.22"       | "FINISHED" | 2021-06-11T00:22:32.000 | 2021-06-11T00:22:58.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
| 2              | "10.xx.xx.114"      | "FINISHED" | 2021-06-11T00:22:32.000 | 2021-06-11T00:22:58.000 |
+----------------+---------------------+------------+-------------------------+-------------------------+
Got 4 rows (time spent 1108/2265 us)
Fri, 11 Jun 2021 08:22:08 CST
(root@nebula) [logplatform]>
(root@nebula) [logplatform]> fetch prop on serialnumber 'sn1';
+----------------------------------+
| vertices_                        |
+----------------------------------+
| ("sn1" :serialnumber{sn: "sn1"}) |
+----------------------------------+
Got 1 rows (time spent 997/2112 us)
Fri, 11 Jun 2021 08:22:11 CST
(root@nebula) [logplatform]> lookup on serialnumber where serialnumber.sn == 'sn1';
Empty set (time spent 1133/2232 us)
Fri, 11 Jun 2021 08:22:14 CST
(root@nebula) [logplatform]> match (v:serialnumber) where v.sn == 'sn1' return v;
Empty set (time spent 1235/2264 us)
Fri, 11 Jun 2021 08:22:29 CST
(root@nebula) [logplatform]>
Hi expert, this morning I again created a new graph space with vertices and edges in the test environment, then inserted verification data. FETCH returns the data, but LOOKUP and MATCH both return nothing again, and running REBUILD does not help either. Business verification is urgent; please help take another look. Many thanks!
(root@nebula) [test_space]> explain lookup on t1 where t1.c1 == "row_1";
Execution succeeded (time spent 139/1263 us)
Execution Plan (optimize time 27 us)
-----+-----------+--------------+----------------+-----------------------------------
| id | name | dependencies | profiling data | operator info |
-----+-----------+--------------+----------------+-----------------------------------
| 2 | IndexScan | 0 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [ |
| | | | | "VertexID" |
| | | | | ], |
| | | | | "name": "__IndexScan_1", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: |
| | | | | space: 1 |
| | | | | dedup: false |
| | | | | limit: 9223372036854775807 |
| | | | | filter: |
| | | | | orderBy: [] |
| | | | | schemaId: 2 |
| | | | | isEdge: false |
| | | | | returnCols: [ |
| | | | | "_vid" |
| | | | | ] |
| | | | | indexCtx: [ |
| | | | | { |
| | | | | "columnHints": [ |
| | | | | { |
| | | | | "endValue": "EMPTY", |
| | | | | "beginValue": "row_1", |
| | | | | "column": "c1", |
| | | | | "scanType": "PREFIX" |
| | | | | } |
| | | | | ], |
| | | | | "index_id": 3, |
| | | | | "filter": "" |
| | | | | } |
| | | | | ] |
-----+-----------+--------------+----------------+-----------------------------------
| 0 | Start | | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [], |
| | | | | "name": "__Start_0", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
-----+-----------+--------------+----------------+-----------------------------------
Fri, 11 Jun 2021 09:29:28 CST
(root@nebula) [test_space]>
(root@nebula) [test_space]> explain match (v:t1) where v.c1 == "row_1" return v;
Execution succeeded (time spent 262/1517 us)
Execution Plan (optimize time 60 us)
-----+-------------+--------------+----------------+----------------------------------------------------
| id | name | dependencies | profiling data | operator info |
-----+-------------+--------------+----------------+----------------------------------------------------
| 10 | Project | 9 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [ |
| | | | | "v" |
| | | | | ], |
| | | | | "name": "__Project_10", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: __Filter_9 |
| | | | | columns: [ |
| | | | | "$v" |
| | | | | ] |
-----+-------------+--------------+----------------+----------------------------------------------------
| 9 | Filter | 8 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [ |
| | | | | "v", |
| | | | | "__COL_0" |
| | | | | ], |
| | | | | "name": "__Filter_9", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: __Filter_8 |
| | | | | condition: ($v.c1=="row_1") |
-----+-------------+--------------+----------------+----------------------------------------------------
| 8 | Filter | 7 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [ |
| | | | | "v", |
| | | | | "__COL_0" |
| | | | | ], |
| | | | | "name": "__Filter_8", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: __Project_7 |
| | | | | condition: (hasSameEdgeInPath($-.__COL_0)==false) |
-----+-------------+--------------+----------------+----------------------------------------------------
| 7 | Project | 6 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [ |
| | | | | "v", |
| | | | | "__COL_0" |
| | | | | ], |
| | | | | "name": "__Project_7", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: __Project_6 |
| | | | | columns: [ |
| | | | | "startNode($-._path) AS v", |
| | | | | "reversePath(PathBuild[$-._path]) AS __COL_0" |
| | | | | ] |
-----+-------------+--------------+----------------+----------------------------------------------------
| 6 | Project | 5 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [ |
| | | | | "_path" |
| | | | | ], |
| | | | | "name": "__Project_6", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: __Filter_5 |
| | | | | columns: [ |
| | | | | "PathBuild[VERTEX]" |
| | | | | ] |
-----+-------------+--------------+----------------+----------------------------------------------------
| 5 | Filter | 13 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [], |
| | | | | "name": "__Filter_5", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: __GetVertices_4 |
| | | | | condition: ("t1" IN tags(VERTEX)) |
-----+-------------+--------------+----------------+----------------------------------------------------
| 13 | GetVertices | 11 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [], |
| | | | | "name": "__GetVertices_4", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: __IndexScan_1 |
| | | | | space: 1 |
| | | | | dedup: true |
| | | | | limit: 9223372036854775807 |
| | | | | filter: |
| | | | | orderBy: [] |
| | | | | src: $_vid |
| | | | | props: [ |
| | | | | { |
| | | | | "props": [ |
| | | | | "c1", |
| | | | | "_tag" |
| | | | | ], |
| | | | | "tagId": 2 |
| | | | | } |
| | | | | ] |
| | | | | exprs: [] |
-----+-------------+--------------+----------------+----------------------------------------------------
| 11 | IndexScan | 0 | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [ |
| | | | | "_vid" |
| | | | | ], |
| | | | | "name": "__IndexScan_1", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
| | | | | inputVar: |
| | | | | space: 1 |
| | | | | dedup: false |
| | | | | limit: 9223372036854775807 |
| | | | | filter: |
| | | | | orderBy: [] |
| | | | | schemaId: 2 |
| | | | | isEdge: false |
| | | | | returnCols: [ |
| | | | | "_vid" |
| | | | | ] |
| | | | | indexCtx: [ |
| | | | | { |
| | | | | "columnHints": [ |
| | | | | { |
| | | | | "endValue": "EMPTY", |
| | | | | "beginValue": "row_1", |
| | | | | "column": "c1", |
| | | | | "scanType": "PREFIX" |
| | | | | } |
| | | | | ], |
| | | | | "index_id": 3, |
| | | | | "filter": "" |
| | | | | } |
| | | | | ] |
-----+-------------+--------------+----------------+----------------------------------------------------
| 0 | Start | | | outputVar: [ |
| | | | | { |
| | | | | "colNames": [], |
| | | | | "name": "__Start_0", |
| | | | | "type": "DATASET" |
| | | | | } |
| | | | | ] |
-----+-------------+--------------+----------------+----------------------------------------------------
Fri, 11 Jun 2021 09:29:44 CST
(root@nebula) [test_space]>
(root@nebula) [test_space]> lookup on t1 where t1.c1 == "row_1";
+----------+
| VertexID |
+----------+
| "1"      |
+----------+
Got 1 rows (time spent 1160/2367 us)
Fri, 11 Jun 2021 09:30:03 CST
(root@nebula) [test_space]> match (v:t1) where v.c1 == "row_1" return v;
Empty set (time spent 1792/2907 us)
Fri, 11 Jun 2021 09:30:12 CST
(root@nebula) [test_space]>
The execution plans of MATCH and LOOKUP seem to differ a lot, as shown above.
Hi expert, we just confirmed it clearly in our test environment: when the space's fixed_string length exceeds 36, neither LOOKUP nor MATCH works; at 36 or below, both LOOKUP and MATCH work normally.
What you just said wasn't entirely clear. You mean it needs to be compiled and installed, right? Here we install directly from RPM. We can set the length to 36 for now and keep using it; there is time for you to analyze and fix the root cause. Thanks.
OK, use that for now. I also tested 64 on the new version and it works; reach out anytime if there are problems.
OK, thank you. One more question: what is the version number of the new release, and how soon will an RPM package be available?
The timing of the next release is not certain yet, but I will dig into that 64-length issue; wait for my update.
Thanks a lot!
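Since the agreed interim workaround is to declare the space with `vid_type=fixed_string(36)`, a small client-side guard can reject VIDs that would not fit before any INSERT VERTEX is built. A minimal sketch; the 36-byte cap is the workaround value from this thread, and `check_vid` is a hypothetical helper name:

```shell
#!/bin/sh
# Guard for the fixed_string(36) workaround: refuse VIDs longer than 36 bytes
# before building INSERT VERTEX statements for the cluster.
# Note: ${#vid} counts characters; for ASCII VIDs this equals bytes.
MAX_VID_LEN=36

check_vid() {
  vid="$1"
  if [ "${#vid}" -le "$MAX_VID_LEN" ]; then
    echo "OK: '$vid' (${#vid} bytes)"
  else
    echo "TOO LONG: '$vid' (${#vid} bytes > $MAX_VID_LEN)"
  fi
}

check_vid "sn1"                             # a short VID passes
check_vid "$(printf 'x%.0s' $(seq 1 64))"   # a 64-byte VID is rejected
```

Once a fixed release supports longer VIDs, the cap can simply be raised back to the space's real `fixed_string` length.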

