Running GraphX against a Nebula Graph 2.0 cluster

  • Nebula version: 2.0 GA
  • Deployment: a 3-node cluster on virtual machines, following https://github.com/vesoft-inc/nebula-docker-compose
  • Hardware
    • SSD disks
    • 3 × 4-core / 32 GB VMs

Nebula Graph cluster information:

# docker-compose ps
              Name                             Command                  State                                              Ports
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
nebula-docker-compose_graphd1_1     /usr/local/nebula/bin/nebu ...   Up (healthy)   0.0.0.0:49163->19669/tcp, 0.0.0.0:49162->19670/tcp, 0.0.0.0:49164->9669/tcp
nebula-docker-compose_graphd2_1     /usr/local/nebula/bin/nebu ...   Up (healthy)   0.0.0.0:49166->19669/tcp, 0.0.0.0:49165->19670/tcp, 0.0.0.0:49167->9669/tcp
nebula-docker-compose_graphd_1      /usr/local/nebula/bin/nebu ...   Up (healthy)   0.0.0.0:49175->19669/tcp, 0.0.0.0:49174->19670/tcp, 0.0.0.0:9669->9669/tcp
nebula-docker-compose_metad0_1      ./bin/nebula-metad --flagf ...   Up (healthy)   0.0.0.0:49160->19559/tcp, 0.0.0.0:49159->19560/tcp, 0.0.0.0:49161->9559/tcp,
                                                                                    9560/tcp
nebula-docker-compose_metad1_1      ./bin/nebula-metad --flagf ...   Up (healthy)   0.0.0.0:49154->19559/tcp, 0.0.0.0:49153->19560/tcp, 0.0.0.0:49155->9559/tcp,
                                                                                    9560/tcp
nebula-docker-compose_metad2_1      ./bin/nebula-metad --flagf ...   Up (healthy)   0.0.0.0:49157->19559/tcp, 0.0.0.0:49156->19560/tcp, 0.0.0.0:49158->9559/tcp,
                                                                                    9560/tcp
nebula-docker-compose_storaged0_1   ./bin/nebula-storaged --fl ...   Up (healthy)   0.0.0.0:49177->19779/tcp, 0.0.0.0:49176->19780/tcp, 9777/tcp, 9778/tcp,
                                                                                    0.0.0.0:49178->9779/tcp, 9780/tcp
nebula-docker-compose_storaged1_1   ./bin/nebula-storaged --fl ...   Up (healthy)   0.0.0.0:49169->19779/tcp, 0.0.0.0:49168->19780/tcp, 9777/tcp, 9778/tcp,
                                                                                    0.0.0.0:49170->9779/tcp, 9780/tcp
nebula-docker-compose_storaged2_1   ./bin/nebula-storaged --fl ...   Up (healthy)   0.0.0.0:49172->19779/tcp, 0.0.0.0:49171->19780/tcp, 9777/tcp, 9778/tcp,
                                                                                    0.0.0.0:49173->9779/tcp, 9780/tcp

Nebula Graph schema:

CREATE SPACE csv(partition_num = 15, replica_factor = 1);

CREATE TAG user(id string);

CREATE EDGE action(startId string, endId string);

A batch of data was then imported.
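The import statements themselves aren't shown; as an illustrative sketch only (the IDs are made up), inserts matching the schema above would look like:

```ngql
// Illustrative only: populate the csv space defined above
USE csv;
INSERT VERTEX user(id) VALUES "u1":("u1"), "u2":("u2"), "u3":("u3");
INSERT EDGE action(startId, endId) VALUES "u1" -> "u2":("u1", "u2");
```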

The GraphX code that connects to Nebula Graph and runs PageRank:

import java.text.SimpleDateFormat
import com.facebook.thrift.protocol.TCompactProtocol
import org.apache.spark.graphx.Graph
import com.vesoft.nebula.connector.connector.NebulaDataFrameReader
import com.vesoft.nebula.connector.{NebulaConnectionConfig, ReadNebulaConfig}
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
val sparkConf = new SparkConf
sparkConf
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array[Class[_]](classOf[TCompactProtocol]))
val spark = SparkSession
  .builder()
  .master("local")
  .config(sparkConf)
  .getOrCreate()

val config = NebulaConnectionConfig
  .builder()
  .withMetaAddress("127.0.0.1:9669")
  .build()
val nebulaReadVertexConfig = ReadNebulaConfig
  .builder()
  .withSpace("csv")
  .withLabel("user")
  .withNoColumn(false)
  .build()
val nebulaReadEdgeConfig = ReadNebulaConfig
  .builder()
  .withSpace("csv")
  .withLabel("action")
  .withNoColumn(false)
  .build()

val vertexRDD = spark.read.nebula(config, nebulaReadVertexConfig).loadVerticesToGraphx()
val edgeRDD = spark.read.nebula(config, nebulaReadEdgeConfig).loadEdgesToGraphx()
val graph = Graph(vertexRDD, edgeRDD)

println("===============edges count================ is " + graph.edges.count())

println("===============Start run pageRank================time is " + df.format(System.currentTimeMillis()))

val ranks = graph.pageRank(0.0001).vertices

println("==============pageRank compute end================time is " + df.format(System.currentTimeMillis()))

println(ranks.top(10).mkString("\n"))

It fails with the following error:

# spark-shell -i NebulaSparkReaderExample.scala
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/spark/spark-2.4.7-bin-hadoop2.7/jars/nebula-spark-connector-2.0.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/spark/spark-2.4.7-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
21/04/01 01:31:45 WARN Utils: Your hostname, tigergraph9001 resolves to a loopback address: 127.0.0.1; using 10.27.20.112 instead (on interface eth0)
21/04/01 01:31:45 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/04/01 01:31:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://tigergraph9001:4040
Spark context available as 'sc' (master = local[*], app id = local-1617240714905).
Spark session available as 'spark'.
21/04/01 01:31:59 WARN SparkSession$Builder: Using an existing SparkSession; some spark core configurations may not take effect.
21/04/01 01:31:59 WARN ReadNebulaConfig$: returnCols is empty and your result will contain all properties for user
21/04/01 01:31:59 WARN ReadNebulaConfig$: returnCols is empty and your result will contain all properties for action
com.facebook.thrift.TApplicationException: Method name getSpace not found
  at com.facebook.thrift.TApplicationException.read(TApplicationException.java:130)
  at com.vesoft.nebula.meta.MetaService$Client.recv_getSpace(MetaService.java:513)
  at com.vesoft.nebula.meta.MetaService$Client.getSpace(MetaService.java:488)
  at com.vesoft.nebula.client.meta.MetaClient.getSpace(MetaClient.java:131)
  at com.vesoft.nebula.client.meta.MetaClient.getTag(MetaClient.java:175)
  at com.vesoft.nebula.connector.nebula.MetaProvider.getTag(MetaProvider.scala:37)
  at com.vesoft.nebula.connector.reader.NebulaSourceReader.getSchema(NebulaSourceReader.scala:68)
  at com.vesoft.nebula.connector.reader.NebulaSourceReader.readSchema(NebulaSourceReader.scala:31)
  at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation$.create(DataSourceV2Relation.scala:175)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:223)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:186)
  at com.vesoft.nebula.connector.connector.package$NebulaDataFrameReader.loadVerticesToDF(package.scala:121)
  at com.vesoft.nebula.connector.connector.package$NebulaDataFrameReader.loadVerticesToGraphx(package.scala:153)
  ... 65 elided
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.7
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.

The port is misconfigured; please check it again carefully.

PS: Since your services are deployed with Docker, the storaged addresses that the metad service returns are Docker-internal addresses, which your Spark program may not be able to reach.

Given my Docker info (the same docker-compose ps output as at the top of this thread), which port should I fill in?

The API is explicit: you need to configure metaAddress, so it has to be the port of your metad service. Any of these three works: 49161, 49155, or 49158.
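Concretely, with the port mappings from the docker-compose ps output, the connection config should point at one of the host-mapped metad ports rather than graphd's 9669 (sketch; only the address changes relative to the original code):

```scala
// Point withMetaAddress at a host-mapped metad port (e.g. 49161 -> metad0's 9559),
// not at graphd's 9669 -- the connector talks the meta protocol here.
val config = NebulaConnectionConfig
  .builder()
  .withMetaAddress("127.0.0.1:49161")
  .build()
```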

PS: After you fix the port, if the storaged connection is refused, there are two fixes:

  1. Configure host network mode in the docker-compose configuration.
  2. Deploy the Nebula services directly on the server, then use the Java storage client / spark-connector.

Sure enough, the storaged connection was refused:

21/04/01 04:54:39 ERROR ScanEdgeResultIterator: get storage client error,
java.util.NoSuchElementException: Unable to activate object
        at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:400)
        at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:277)
        at com.vesoft.nebula.client.storage.StorageConnPool.getStorageConnection(StorageConnPool.java:42)
        at com.vesoft.nebula.client.storage.scan.ScanEdgeResultIterator.lambda$next$0(ScanEdgeResultIterator.java:79)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: storaged1
        at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
        at java.net.InetAddress.getAllByName(InetAddress.java:1192)
        at java.net.InetAddress.getAllByName(InetAddress.java:1126)
        at java.net.InetAddress.getByName(InetAddress.java:1076)
        at com.vesoft.nebula.client.storage.GraphStorageConnection.open(GraphStorageConnection.java:36)
        at com.vesoft.nebula.client.storage.StorageConnPoolFactory.activateObject(StorageConnPoolFactory.java:59)
        at com.vesoft.nebula.client.storage.StorageConnPoolFactory.activateObject(StorageConnPoolFactory.java:16)
        at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:391)
        ... 8 more

Could you explain how to configure host network mode in the docker-compose configuration?

Refer to the configuration in this post:
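Roughly, it amounts to setting network_mode: host on each service and removing the ports: mappings. A hedged sketch of one service (field values are placeholders, not the upstream compose file verbatim):

```yaml
# Sketch only: host-network variant of one metad service.
# network_mode: host shares the host's network namespace, so the addresses
# that storaged registers with metad become reachable by outside clients.
services:
  metad0:
    network_mode: host      # replaces the default bridge network
    # drop the "ports:" section entirely; the service listens on the host itself
    command:
      - --meta_server_addrs=<host-ip>:9559
      - --local_ip=<host-ip>
```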

I couldn't get that working, so I'm giving up on Docker.
If I install the single-machine version from the RPM package, what are the ports?

Check from the install directory:
nebula/scripts/nebula.service status all

# ../node/scripts/nebula.service status all
[INFO] nebula-metad: Running as 30057, Listening on 9559
[INFO] nebula-graphd: Running as 30127, Listening on 9669
[INFO] nebula-storaged: Running as 30146, Listening on 9779

The code:

import java.text.SimpleDateFormat
import com.facebook.thrift.protocol.TCompactProtocol
import org.apache.spark.graphx.Graph
import com.vesoft.nebula.connector.connector.NebulaDataFrameReader
import com.vesoft.nebula.connector.{NebulaConnectionConfig, ReadNebulaConfig}
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
val sparkConf = new SparkConf
sparkConf
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array[Class[_]](classOf[TCompactProtocol]))
val spark = SparkSession
  .builder()
  .master("local")
  .config(sparkConf)
  .getOrCreate()

val config = NebulaConnectionConfig
  .builder()
  .withMetaAddress("127.0.0.1:9559")
  .build()
val nebulaReadVertexConfig = ReadNebulaConfig
  .builder()
  .withSpace("csv")
  .withLabel("user")
  .withNoColumn(false)
  .build()
val nebulaReadEdgeConfig = ReadNebulaConfig
  .builder()
  .withSpace("csv")
  .withLabel("action")
  .withNoColumn(false)
  .build()

val vertexRDD = spark.read.nebula(config, nebulaReadVertexConfig).loadVerticesToGraphx()
val edgeRDD = spark.read.nebula(config, nebulaReadEdgeConfig).loadEdgesToGraphx()
val graph = Graph(vertexRDD, edgeRDD)

println("===============edges count================ is " + graph.edges.count())

println("===============Start run pageRank================time is " + df.format(System.currentTimeMillis()))

val ranks = graph.pageRank(0.0001).vertices

println("==============pageRank compute end================time is " + df.format(System.currentTimeMillis()))

println(ranks.top(10).mkString("\n"))

Run output:

# spark-shell -i NebulaSparkReaderExample.scala
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/spark/spark-2.4.7-bin-hadoop2.7/jars/nebula-spark-connector-2.0.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/spark/spark-2.4.7-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
21/04/01 10:24:31 WARN Utils: Your hostname, tigergraph9001 resolves to a loopback address: 127.0.0.1; using 10.27.20.112 instead (on interface eth0)
21/04/01 10:24:31 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/04/01 10:24:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://tigergraph9001:4040
Spark context available as 'sc' (master = local[*], app id = local-1617272679465).
Spark session available as 'spark'.
21/04/01 10:24:44 WARN SparkSession$Builder: Using an existing SparkSession; some spark core configurations may not take effect.
21/04/01 10:24:44 WARN ReadNebulaConfig$: returnCols is empty and your result will contain all properties for user
21/04/01 10:24:44 WARN ReadNebulaConfig$: returnCols is empty and your result will contain all properties for action
[Stage 0:>                                                        (0 + 4) / 100]21/04/01 10:24:57 ERROR ScanEdgeResultIterator: get storage client error,
java.util.NoSuchElementException: Unable to activate object
        at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:400)
        at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:277)
        at com.vesoft.nebula.client.storage.StorageConnPool.getStorageConnection(StorageConnPool.java:42)
        at com.vesoft.nebula.client.storage.scan.ScanEdgeResultIterator.lambda$next$0(ScanEdgeResultIterator.java:79)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: com.facebook.thrift.transport.TTransportException: java.net.ConnectException: Cannot assign requested address
        at com.facebook.thrift.transport.TSocket.open(TSocket.java:204)
        at com.vesoft.nebula.client.storage.GraphStorageConnection.open(GraphStorageConnection.java:40)
        at com.vesoft.nebula.client.storage.StorageConnPoolFactory.activateObject(StorageConnPoolFactory.java:59)
        at com.vesoft.nebula.client.storage.StorageConnPoolFactory.activateObject(StorageConnPoolFactory.java:16)
        at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:391)
        ... 8 more
Caused by: java.net.ConnectException: Cannot assign requested address
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at com.facebook.thrift.transport.TSocket.open(TSocket.java:199)
        ... 12 more
21/04/01 10:24:57 WARN BlockManager: Putting block rdd_22_1 failed due to exception com.vesoft.nebula.client.meta.exception.ExecuteFailedException: Execute failed: noparts succeed, error message: Unable to activate object.
21/04/01 10:24:57 WARN BlockManager: Putting block rdd_22_0 failed due to exception com.vesoft.nebula.client.meta.exception.ExecuteFailedException: Execute failed: noparts succeed, error message: Unable to activate object.
21/04/01 10:24:57 WARN BlockManager: Putting block rdd_22_3 failed due to exception com.vesoft.nebula.client.meta.exception.ExecuteFailedException: Execute failed: noparts succeed, error message: Unable to activate object.
21/04/01 10:24:57 WARN BlockManager: Block rdd_22_3 could not be removed as it was not found on disk or in memory
21/04/01 10:24:57 WARN BlockManager: Putting block rdd_22_2 failed due to exception com.vesoft.nebula.client.meta.exception.ExecuteFailedException: Execute failed: noparts succeed, error message: Unable to activate object.
21/04/01 10:24:57 WARN BlockManager: Block rdd_22_1 could not be removed as it was not found on disk or in memory
21/04/01 10:24:57 WARN BlockManager: Block rdd_22_0 could not be removed as it was not found on disk or in memory
21/04/01 10:24:57 WARN BlockManager: Block rdd_22_2 could not be removed as it was not found on disk or in memory
21/04/01 10:24:57 ERROR Executor: Exception in task 2.0 in stage 0.0 (TID 2)
com.vesoft.nebula.client.meta.exception.ExecuteFailedException: Execute failed: no parts succeed, error message: Unable to activate object
        at com.vesoft.nebula.client.storage.scan.ScanResultIterator.throwExceptions(ScanResultIterator.java:99)
        at com.vesoft.nebula.client.storage.scan.ScanEdgeResultIterator.next(ScanEdgeResultIterator.java:133)
        at com.vesoft.nebula.connector.reader.NebulaEdgePartitionReader.next(NebulaEdgePartitionReader.scala:65)
        at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:49)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at org.apache.spark.graphx.EdgeRDD$$anonfun$1.apply(EdgeRDD.scala:107)
        at org.apache.spark.graphx.EdgeRDD$$anonfun$1.apply(EdgeRDD.scala:105)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:875)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:875)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:359)
        at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:357)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1165)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
        at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
        at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:357)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:308)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:123)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
21/04/01 10:24:57 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
com.vesoft.nebula.client.meta.exception.ExecuteFailedException: Execute failed: no parts succeed, error message: Unable to activate object
        at com.vesoft.nebula.client.storage.scan.ScanResultIterator.throwExceptions(ScanResultIterator.java:99)
        at com.vesoft.nebula.client.storage.scan.ScanEdgeResultIterator.next(ScanEdgeResultIterator.java:133)
        at com.vesoft.nebula.connector.reader.NebulaEdgePartitionReader.next(NebulaEdgePartitionReader.scala:65)
        at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:49)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at org.apache.spark.graphx.EdgeRDD$$anonfun$1.apply(EdgeRDD.scala:107)
        at org.apache.spark.graphx.EdgeRDD$$anonfun$1.apply(EdgeRDD.scala:105)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:875)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:875)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:359)

I updated it, but it still failed; see the previous post for the details…

Please paste the storage service's configuration file. My blind guess is that your Nebula services are deployed on a server with their address configured as 127.0.0.1, in which case the service at 127.0.0.1:9779 cannot be reached from other machines.

Change 127.0.0.1 in the configuration file to the real IP.
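Concretely, the networking flags in nebula-storaged.conf should carry the machine's real IP rather than the loopback address (sketch; 10.27.20.112 is the address your Spark log reported earlier):

```ini
# nebula-storaged.conf -- relevant networking flags only (sketch).
# local_ip is the address storaged registers with metad, i.e. what
# metad hands back to clients such as the spark-connector.
--meta_server_addrs=10.27.20.112:9559
--local_ip=10.27.20.112
--port=9779
```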

Then paste storaged's configuration file, and also run show hosts in the console and share the result.

What you pasted still shows 127.0.0.1. As I said above, change it to the real IP; accessing 127.0.0.1:9779 from your program is bound to fail.

I'm running the Spark program on the very machine where Nebula is deployed, which is why the code uses .withMetaAddress("127.0.0.1:9559").

To spare the responders from repeating the same work, this thread is now closed. For the solution, see this post: Nebula Graph2.0GA单机版运行GraphX问题 - #8 by zhaochuanyun. BTW, please don't re-ask the same question.