24/03/29 17:12:00 INFO Configs$: Source Config Hive source exec: select ssn as `身份证号`,name as `姓名` from bigData.fakedata20240304_test
24/03/29 17:12:00 INFO Configs$: Sink Config Hive source exec: select ssn as `身份证号`,name as `姓名` from bigData.fakedata20240304_test
24/03/29 17:12:00 INFO Configs$: Edge Config: Edge name: 身份证关联人, source: Hive source exec: select ssn as `身份证号`,name as `姓名` from bigData.fakedata20240304_test, sink: Nebula sink addresses: [10.26.120.55:9669, 10.26.120.53:9669], writeMode: insert, source field: 身份证号, source policy: Some(hash), ranking: None, target field: 姓名, target policy: Some(hash), batch: 256, partition: 32, ignoreIndex: false, srcVertexUdf: NonedstVertexUdf: None.
24/03/29 17:12:00 INFO Exchange$: >>>>> Config Configs(DataBaseConfigEntry:{graphAddress:List(10.26.120.55:9669, 10.26.120.53:9669), space:test汉语, metaAddress:List(10.26.120.55:9559, 10.26.120.53:9559)},UserConfigEntry{user:root, password:xxxxx},cConnectionConfigEntry:{timeout:3000, retry:3},ExecutionConfigEntry:{timeout:2147483647, retry:3},ErrorConfigEntry:{errorPath:file:///tmp/errors, errorMaxSize:32},RateConfigEntry:{limit:1024, timeout:1000},SslConfigEntry:{enableGraph:false, enableMeta:false, signType:ca},,List(),List(Edge name: 身份证关联人, source: Hive source exec: select ssn as `身份证号`,name as `姓名` from bigData.fakedata20240304_test, sink: Nebula sink addresses: [10.26.120.55:9669, 10.26.120.53:9669], writeMode: insert, source field: 身份证号, source policy: Some(hash), ranking: None, target field: 姓名, target policy: Some(hash), batch: 256, partition: 32, ignoreIndex: false, srcVertexUdf: NonedstVertexUdf: None.),None)
24/03/29 17:12:00 INFO Exchange$: >>>>> you don't com.vesoft.exchange.common.config hive source, so using hive tied with spark.
The printed Edge Config above shows no trace of the prefix setting being recognized. The Exchange version in use is nebula-exchange_spark_3.0-3.7.0.jar.
The UDF features all work correctly; only when prefix is set does the job produce no output.
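For context, here is a minimal sketch of how the prefix setting would appear in the edge section of the Exchange application.conf. This is a hypothetical illustration based on the job shown in the log, assuming the `prefix` option described in the Exchange 3.x documentation; whether `prefix` is honored alongside `policy: hash` may depend on the version:

```hocon
# Hypothetical sketch of the edge section of application.conf.
# Field names and the exec statement mirror the job in the log above;
# the prefix values ("p_") are placeholders for illustration only.
edges: [
  {
    name: 身份证关联人
    type: {
      source: hive
      sink: client
    }
    exec: "select ssn as `身份证号`, name as `姓名` from bigData.fakedata20240304_test"
    fields: []
    nebula.fields: []
    source: {
      field: 身份证号
      policy: hash
      # Expected to prepend this value to the source VID.
      # Per the report above, this setting does not show up in the
      # printed Edge Config with nebula-exchange_spark_3.0-3.7.0.jar.
      prefix: "p_"
    }
    target: {
      field: 姓名
      policy: hash
      prefix: "p_"
    }
    batch: 256
    partition: 32
  }
]
```

If the prefix were parsed, one would expect it to appear in the `Edge Config:` line next to `source policy` / `target policy`; its absence there is consistent with the setting being ignored rather than applied silently.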