Running the reader example in nebula-spark-connector's example directory throws an error

  • nebula version: 2.0.0
  • Deployment type (distributed / standalone / Docker / DBaaS): docker-compose
  • Production version: Y
  • Description: running the Spark reader example fails with "get storage client error"
Error in the terminal:
ERROR [pool-3438-thread-2] - get storage client error, 
java.util.NoSuchElementException: Unable to activate object
	at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:400)
	at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:277)
	at com.vesoft.nebula.client.storage.StorageConnPool.getStorageConnection(StorageConnPool.java:42)
	at com.vesoft.nebula.client.storage.scan.ScanVertexResultIterator.lambda$next$0(ScanVertexResultIterator.java:89)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: storaged1
	at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
	at java.net.InetAddress.getAllByName(InetAddress.java:1193)
	at java.net.InetAddress.getAllByName(InetAddress.java:1127)
	at java.net.InetAddress.getByName(InetAddress.java:1077)
	at com.vesoft.nebula.client.storage.GraphStorageConnection.open(GraphStorageConnection.java:36)
	at com.vesoft.nebula.client.storage.StorageConnPoolFactory.activateObject(StorageConnPoolFactory.java:59)
	at com.vesoft.nebula.client.storage.StorageConnPoolFactory.activateObject(StorageConnPoolFactory.java:16)
	at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:391)
	... 9 more
Code:
import com.vesoft.nebula.connector.connector.NebulaDataFrameReader // implicit that enables spark.read.nebula
import com.vesoft.nebula.connector.{NebulaConnectionConfig, ReadNebulaConfig}
import org.apache.spark.sql.SparkSession

def readVertex(spark: SparkSession): Unit = {
    LOG.info("start to read nebula vertices")
    // connection config: the metad address as seen from the Spark driver
    val config =
      NebulaConnectionConfig
        .builder()
        .withMetaAddress("127.0.0.1:51295")
        .withConenctionRetry(2) // (sic) this is how the method is spelled in the connector API
        .build()
    // read config: which space/tag and which columns to scan
    val nebulaReadVertexConfig: ReadNebulaConfig = ReadNebulaConfig
      .builder()
      .withSpace("test_xsy")
      .withLabel("company")
      .withNoColumn(false)
      .withReturnCols(List("name"))
      .withLimit(10)
      .withPartitionNum(10)
      .build()
    val vertex = spark.read.nebula(config, nebulaReadVertexConfig).loadVerticesToDF()
    vertex.printSchema()
    vertex.show(20)
    println("vertex count: " + vertex.count())
  }


The error is thrown right at vertex.show().

@nicole @steam could you take a look?

That's a curious port... are you sure your local machine can actually reach it?

Try searching the forum first; there are plenty of similar issues.

Try changing the IP to the real IP; that should fix it.

I changed it to the real IP in the code and still get the same error.

Could you explain what that sentence means?

Click through and read that post; your situation is exactly the same.

Simply put, the connector application outside the containers cannot reach the nebula services inside them. You could try switching nebula's network mode to host, so the nebula containers use the host machine's IP and ports directly (I only believe this works in theory). A sketch of what that change could look like follows.
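A minimal docker-compose.yaml sketch of the host-network idea, assuming the standard nebula-docker-compose layout; the image tag and flag values are illustrative and must match your own deployment:

services:
  storaged1:
    image: vesoft/nebula-storaged:v2.0.0
    network_mode: host                      # container shares the host's network stack
    command:
      - --meta_server_addrs=<host-ip>:9559  # real host IP instead of a compose service name
      - --local_ip=<host-ip>
      - --port=9779
    # network_mode: host cannot be combined with the networks: and ports:
    # sections, so remove those from this service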

Where do I change that?

Have a look at the output of:

docker network ls
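To see which containers are attached to the compose network (and confirm that the host running your Spark application is not on it), you can also inspect it. The network name below is a guess based on the default nebula-docker-compose project name; check the docker network ls output for the real one:

docker network inspect nebula-docker-compose_nebula-net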


I did read it; it analyzes the cause, but it never says how to solve it.

The main problem is that the network is unreachable. You can edit the docker-compose configuration and replace meta_server_addrs, local_ip, and so on with real IP addresses before deploying. Then SHOW HOSTS will return the real addresses, and the client will be able to reach storaged via ip:port. Something like the sketch below.
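For illustration, roughly what that edit could look like in docker-compose.yaml (192.168.1.100 stands in for your host's real IP; 9559 and 9779 are the 2.0 default metad/storaged ports):

services:
  metad0:
    command:
      - --meta_server_addrs=192.168.1.100:9559
      - --local_ip=192.168.1.100
      - --port=9559
  storaged1:
    command:
      - --meta_server_addrs=192.168.1.100:9559
      - --local_ip=192.168.1.100
      - --port=9779   # give each storaged instance a distinct port if several share one host

After restarting, SHOW HOSTS should list 192.168.1.100:9779 instead of the service name storaged1, and that is the address the Spark connector will dial.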

Only metad's, or graphd's, metad's, and storaged's as well?


I changed meta.conf and ran it again; still the same error. Do the containers need to be restarted?

First docker-compose down, then change the IPs in docker-compose.yaml (and remember to remove the networks section), then docker-compose up -d. Roughly the steps sketched below.
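Spelled out, the sequence would look roughly like this (the directory name is illustrative):

cd nebula-docker-compose    # wherever your docker-compose.yaml lives
docker-compose down         # stop and remove the old containers
vi docker-compose.yaml      # replace service names with the real host IP; delete the networks: section
docker-compose up -d        # recreate the containers with the new configuration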
