Auth failed: Authenticate failed: Expected protocol id ffffff82 but got 0

  • Nebula version: 2.5.0, installed from the RPM package
  • Deployment: single machine
  • Installation method: RPM
  • Production environment: No
  • Hardware information
    • Disk (SSD recommended)
    • CPU and memory
  • Problem description
    Example code used:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.flink.connector.nebula.connection.NebulaClientOptions;
import org.apache.flink.connector.nebula.connection.NebulaGraphConnectionProvider;
import org.apache.flink.connector.nebula.connection.NebulaMetaConnectionProvider;
import org.apache.flink.connector.nebula.sink.NebulaBatchOutputFormat;
import org.apache.flink.connector.nebula.sink.NebulaSinkFunction;
import org.apache.flink.connector.nebula.statement.ExecutionOptions;
import org.apache.flink.connector.nebula.statement.VertexExecutionOptions;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FlinkConnectorExample {
    private static final Logger LOG = LoggerFactory.getLogger(FlinkConnectorExample.class);

    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<List<String>> playerSource = constructVertexSourceData(env);
        sinkVertexData(env, playerSource);
    }

    /**
     * construct flink data source
     */
    public static DataStream<List<String>> constructVertexSourceData(
            StreamExecutionEnvironment env) {
        List<List<String>> player = new ArrayList<>();
        List<String> fields1 = Arrays.asList("15", "Bob", "38");
        List<String> fields2 = Arrays.asList("16", "Tina", "39");
        List<String> fields3 = Arrays.asList("17", "Jena", "30");
        List<String> fields4 = Arrays.asList("18", "Tom", "30");
        List<String> fields5 = Arrays.asList("19", "Viki", "35");
        List<String> fields6 = Arrays.asList("20", "Jime", "33");
        List<String> fields7 = Arrays.asList("21", "Jhon", "36");
        List<String> fields8 = Arrays.asList("22", "Crea", "30");
        player.add(fields1);
        player.add(fields2);
        player.add(fields3);
        player.add(fields4);
        player.add(fields5);
        player.add(fields6);
        player.add(fields7);
        player.add(fields8);
        DataStream<List<String>> playerSource = env.fromCollection(player);
        return playerSource;
    }

    /**
     * sink Nebula Graph with default INSERT mode
     */
    public static void sinkVertexData(StreamExecutionEnvironment env,
                                      DataStream<List<String>> playerSource) {
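        // addresses of the Nebula graphd (9669) and metad (9559) services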
        NebulaClientOptions nebulaClientOptions =
                new NebulaClientOptions.NebulaClientOptionsBuilder()
                        .setGraphAddress("127.0.0.1:9669")
                        .setMetaAddress("127.0.0.1:9559")
                        .build();
        NebulaGraphConnectionProvider graphConnectionProvider =
                new NebulaGraphConnectionProvider(nebulaClientOptions);
        NebulaMetaConnectionProvider metaConnectionProvider =
                new NebulaMetaConnectionProvider(nebulaClientOptions);

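        // field 0 is the vertex id; the fields at positions 1 and 2 map to the
        // tag properties "name" and "age", written in batches of 2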
        ExecutionOptions executionOptions = new VertexExecutionOptions.ExecutionOptionBuilder()
                .setGraphSpace("en_relation")
                .setTag("en_tag")
                .setIdIndex(0)
                .setFields(Arrays.asList("name", "age"))
                .setPositions(Arrays.asList(1, 2))
                .setBatch(2)
                .builder();

        NebulaBatchOutputFormat outPutFormat =
                new NebulaBatchOutputFormat(graphConnectionProvider, metaConnectionProvider)
                        .setExecutionOptions(executionOptions);
        NebulaSinkFunction nebulaSinkFunction = new NebulaSinkFunction(outPutFormat);
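        // convert each List<String> record into a Flink Row for the sink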
        DataStream<Row> dataStream = playerSource.map(row -> {
            Row record = new Row(row.size());
            for (int i = 0; i < row.size(); i++) {
                record.setField(i, row.get(i));
            }
            return record;
        });
        dataStream.addSink(nebulaSinkFunction);
        try {
            env.execute("Write Nebula");
        } catch (Exception e) {
            LOG.error("error when writing to Nebula Graph, ", e);
            System.exit(-1);
        }
    }
}

After running it, I get the error shown in the title.

The backend services all show as running normally (see the attached screenshot), and Studio can also connect without problems. Could anyone point me in the right direction?

Which version of Nebula-Flink are you using?

Version 2.0.0.

You can probably guess what I'm about to say: the versions are not aligned. Try switching to the 2.5 branch of Nebula-Flink. As for Studio, v3.0.0 is the release aligned with the v2.5.0 Nebula core.


Version 2.5.0 doesn't support UPDATE operations on tags. Is there any way to deal with duplicate data?

- -. It has always been supported, and the docs say so. Why do you think it isn't?

Version 2.0 let you set a writeMode, but that method no longer exists in 2.5.0. Does the underlying layer now deduplicate automatically?

Neither 2.0.0 nor 2.5.0 supports UPDATE, so I'm not sure where that method you are using came from; perhaps the version number you gave earlier was wrong. The feature was added in September and shipped with the official 2.6.0 release.
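A minimal sketch of how that would look once the project is on the 2.6.0 connector, reusing the builder from the example above; setWriteMode, WriteModeEnum, and the org.apache.flink.connector.nebula.utils package path are assumptions about the 2.6.0 API, so check that release's javadoc before relying on them:

import org.apache.flink.connector.nebula.utils.WriteModeEnum;  // assumed 2.6.0 package path

ExecutionOptions executionOptions = new VertexExecutionOptions.ExecutionOptionBuilder()
        .setGraphSpace("en_relation")
        .setTag("en_tag")
        .setIdIndex(0)
        .setFields(Arrays.asList("name", "age"))
        .setPositions(Arrays.asList(1, 2))
        // switch from the default INSERT to UPDATE (added in connector 2.6.0)
        .setWriteMode(WriteModeEnum.UPDATE)
        .setBatch(2)
        .builder();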


Pulling 2.6.0 from Maven fails; the latest I can get is 2.5.0 :joy:
