
Spark reader fails at runtime

Running NebulaReaderExample from the master branch fails with the following error:

ERROR [Executor task launch worker for task 0] - Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.UnsatisfiedLinkError: /private/var/folders/vx/tsqg0pgs7mbdf_wjd2t88_ch0000gn/T/nativeutils235051160004036/libnebula_codec.so: dlopen(/private/var/folders/vx/tsqg0pgs7mbdf_wjd2t88_ch0000gn/T/nativeutils235051160004036/libnebula_codec.so, 1): no suitable image found.  Did find:
	/private/var/folders/vx/tsqg0pgs7mbdf_wjd2t88_ch0000gn/T/nativeutils235051160004036/libnebula_codec.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03
	/private/var/folders/vx/tsqg0pgs7mbdf_wjd2t88_ch0000gn/T/nativeutils235051160004036/libnebula_codec.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03
	at java.lang.ClassLoader$NativeLibrary.load(Native Method)
	at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1934)
	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1817)
	at java.lang.Runtime.load0(Runtime.java:809)
	at java.lang.System.load(System.java:1086)
	at com.vesoft.nebula.utils.NativeUtils.loadLibraryFromJar(NativeUtils.java:56)
	at com.vesoft.nebula.data.RowReader.<clinit>(RowReader.java:35)
	at com.vesoft.nebula.client.storage.processor.ScanVertexProcessor.process(ScanVertexProcessor.java:47)
	at com.vesoft.nebula.client.storage.processor.ScanVertexProcessor.process(ScanVertexProcessor.java:26)
	at com.vesoft.nebula.reader.ScanVertexIterator.hasNext(ScanVertexIterator.java:60)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$2.hasNext(WholeStageCodegenExec.scala:636)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:255)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:836)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:836)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

The test code is unchanged except for the graph store's address.
JDK: 1.8
macOS: 10.15.6
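The "first eight bytes: 0x7F 0x45 0x4C 0x46 ..." in the error above are the ELF magic number, which means the `libnebula_codec.so` extracted from the jar is a Linux binary that `dlopen` on macOS cannot load (macOS expects Mach-O). A minimal sketch, not part of nebula-java, of classifying a native library by its magic number:

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class MagicCheck {
    // Classify a file's leading 4 bytes, read big-endian as by readInt().
    static String classify(int magic) {
        switch (magic) {
            case 0x7F454C46:                 // 0x7F 'E' 'L' 'F'
                return "ELF (Linux)";
            case 0xFEEDFACF:                 // 64-bit Mach-O, big-endian file
            case 0xCFFAEDFE:                 // 64-bit Mach-O, little-endian file
                return "Mach-O (macOS)";
            default:
                return "unknown";
        }
    }

    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            System.out.println(classify(in.readInt()));
        }
    }
}
```

Running this against the extracted `.so` from the stack trace would report "ELF (Linux)", confirming a wrong-platform binary rather than a corrupt file.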

libnebula_codec.so needs to be recompiled. We'll check whether it can be built on a Mac, but we recommend a Linux environment.

Our development and debugging is currently on a Mac, so ideally it should run there. I'll wait for your update; feel free to contact me if you need anything from my side.
Also, is this related to the server-side version or environment?

How do I use it once it's compiled?

Run `mvn install` for the nebula-utils jar:

mvn install:install-file -Dfile=your-nebula-utils.jar -DgroupId=com.vesoft -DartifactId=nebula-utils -Dversion=1.0.0-rc4 -Dpackaging=jar
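Once installed into the local repository, the artifact can be referenced from the project's `pom.xml` using the same coordinates as the command above (a sketch, assuming a standard Maven project layout):

```xml
<dependency>
  <groupId>com.vesoft</groupId>
  <artifactId>nebula-utils</artifactId>
  <version>1.0.0-rc4</version>
</dependency>
```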

Sorry, I've updated the steps. You need to modify nebula/src/jni/CMakeLists.txt and nebula/src/jni/src/CMakeLists.txt

When running `mvn install`, add `-Dpackaging=jar`.

:+1::+1:

The jar that gets built contains no .so file

What does `jar tf nebula-utils-1.0.0-rc4.jar` show?

Same result; the jar is only 7 KB.
Not sure why the .so file wasn't included.
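A programmatic equivalent of `jar tf` for this check, sketched here (the default jar path is an assumption; point it at your actual build output):

```java
import java.io.IOException;
import java.util.jar.JarFile;

public class JarCheck {
    // True if any entry in the jar is the native codec library.
    static boolean containsNative(JarFile jar) {
        return jar.stream().anyMatch(e -> e.getName().endsWith("libnebula_codec.so"));
    }

    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "nebula-utils-1.0.0-rc4.jar";
        try (JarFile jar = new JarFile(path)) {
            jar.stream().forEach(e -> System.out.println(e.getName()));
            System.out.println(containsNative(jar) ? "native library packaged"
                                                   : "native library MISSING");
        }
    }
}
```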

In what environment did you compile it?

JDK: 1.8
macOS: 10.15.6
Everything else was compiled following the instructions above

We probably don't support building on a Mac.

We'd like to run Spark jobs on a Mac for debugging; anything else is inconvenient.
One question: judging by its name, is this .so used for encoding/decoding, or for reading from storage? The java-client works, so why doesn't Spark? Aren't both just Java calls? Just curious.

Tried it a long while ago: it can be built on a Mac, but there are some pitfalls. The java client may also need some code changes, and the dynamic library produced won't have a .so suffix; it'll be something like .dylib.
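The .so/.dylib naming difference can be seen directly from the JDK; `System.mapLibraryName` reports the current platform's convention for a base library name:

```java
public class LibName {
    public static void main(String[] args) {
        // Maps a base name to the platform convention:
        // Linux  -> libnebula_codec.so
        // macOS  -> libnebula_codec.dylib
        // Windows -> nebula_codec.dll
        System.out.println(System.mapLibraryName("nebula_codec"));
    }
}
```

This is why a library built on a Mac comes out as `libnebula_codec.dylib`, and why the loading code would also need to account for the name when extracting it from the jar.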
