exchange: nebula_codec error when reading CSV and writing SST files

I don't remember; I haven't run into this myself. My guess is that the JNI and RocksDB build environments don't match.

Running nebula-algorithm-1.1.0.jar also fails.
The client was built from source in order to replace the guava package.
How should this be handled?
21/01/19 18:56:13 INFO NativeUtils: Load /tmp/nativeutils11409569081598089/libnebula_codec.so as libnebula_codec.so
21/01/19 18:56:13 ERROR NebulaCodec: no nebula_codec in java.library.path
java.lang.UnsatisfiedLinkError: no nebula_codec in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at com.vesoft.nebula.NebulaCodec.(NebulaCodec.java:23)
at com.vesoft.nebula.data.RowReader.decodeValue(RowReader.java:86)
at com.vesoft.nebula.data.RowReader.decodeValue(RowReader.java:81)
at com.vesoft.nebula.client.storage.processor.ScanEdgeProcessor.process(ScanEdgeProcessor.java:62)
at com.vesoft.nebula.client.storage.processor.ScanEdgeProcessor.process(ScanEdgeProcessor.java:25)
at com.vesoft.nebula.reader.ScanEdgeIterator.hasNext(ScanEdgeIterator.java:64)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:216)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$2.apply(ShuffleExchangeExec.scala:295)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$2.apply(ShuffleExchangeExec.scala:266)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

The native library hasn't been added to java.library.path.
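One quick way to confirm whether the library's directory is actually on java.library.path is a small diagnostic like the sketch below. The class and method names here are my own for illustration, not part of any Nebula project:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class NativeLibDiagnostics {
    /** Split a java.library.path-style string into its directory entries. */
    public static List<String> splitLibraryPath(String libraryPath) {
        List<String> dirs = new ArrayList<>();
        for (String entry : libraryPath.split(File.pathSeparator)) {
            if (!entry.isEmpty()) {
                dirs.add(entry);
            }
        }
        return dirs;
    }

    /** Return the path entries that actually contain the given library file. */
    public static List<String> findLibrary(String libraryPath, String fileName) {
        List<String> hits = new ArrayList<>();
        for (String dir : splitLibraryPath(libraryPath)) {
            if (new File(dir, fileName).isFile()) {
                hits.add(dir);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        String path = System.getProperty("java.library.path", "");
        // On Linux, mapLibraryName("nebula_codec") yields "libnebula_codec.so".
        String lib = System.mapLibraryName("nebula_codec");
        System.out.println("java.library.path entries: " + splitLibraryPath(path));
        System.out.println("Entries containing " + lib + ": " + findLibrary(path, lib));
    }
}
```

If the directory containing libnebula_codec.so does not show up, one option is to pass it explicitly, e.g. via -Djava.library.path=... in spark.executor.extraJavaOptions (or spark.executor.extraLibraryPath) when running under Spark.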

WARNING: Logging before InitGoogleLogging() is written to STDERR
F0119 20:07:19.159057 1935 RowReader.cpp:118] Check failed: ver == schema->getVersion() (2 vs. 0)
/usr/local/java/bin/java: symbol lookup error: /tmp/nativeutils11413834438420331/libnebula_codec.so: undefined symbol: _Ux86_64_getcontext

After adding it, this looks like a version mismatch.

The tool version should match the graph service version (ideally down to the minor version). Nebula Graph moves fast and the docs don't always keep up.

We hit the same error with matching versions. Did you manage to solve it?

Let me look at the code again. At the moment this .so library affects two projects: nebula-algorithm and the exchange SST sink.

I ran into the same problem. The coredump points to rocksdb::SstFileWriter::Finish(rocksdb::ExternalSstFileInfo*). Is there a fix for this?

Check whether a RocksDB .so file is present in your environment.
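A simple way to do that check from the JVM side is to scan a few candidate directories for files named librocksdb*. This is a sketch; the class name and the search locations are assumptions, so adjust them to your deployment:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RocksDbSoCheck {
    /** List the absolute paths of files whose names start with prefix in any of the given dirs. */
    public static List<String> findByPrefix(List<String> dirs, String prefix) {
        List<String> found = new ArrayList<>();
        for (String dir : dirs) {
            File[] files = new File(dir).listFiles();
            if (files == null) continue;  // directory missing or unreadable
            for (File f : files) {
                if (f.getName().startsWith(prefix)) {
                    found.add(f.getAbsolutePath());
                }
            }
        }
        return found;
    }

    public static void main(String[] args) {
        // Common system library dirs plus whatever is on java.library.path.
        List<String> dirs = new ArrayList<>(Arrays.asList("/usr/lib", "/usr/lib64", "/usr/local/lib"));
        for (String entry : System.getProperty("java.library.path", "").split(File.pathSeparator)) {
            if (!entry.isEmpty()) dirs.add(entry);
        }
        System.out.println("rocksdb shared libraries found: " + findByPrefix(dirs, "librocksdb"));
    }
}
```

If no librocksdb*.so turns up, the SST writer has nothing to link against at runtime, which would be consistent with the crash inside rocksdb::SstFileWriter::Finish.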