nebula-python version: 1.1.1
I run client.execute_query and get back a nebula.graph.ttypes.ExecutionResponse. I convert it to a pd.DataFrame with the code below, but on large result sets the two nested for loops become very slow. Is there a faster approach?
nebulaObj = gClient.execute_query("XXXXX")
if nebulaObj.column_names is not None:
    columnList = [colItem.decode("utf8") for colItem in nebulaObj.column_names]
else:
    return pd.DataFrame([])
dataList = []
if nebulaObj.rows is not None:
    for rowItem in nebulaObj.rows:
        rowList = []
        for colItem in rowItem.columns:
            if type(colItem.value) == bytes:
                rowList.append(colItem.value.decode("utf8"))
            else:
                rowList.append(colItem.value)
        dataList.append(rowList.copy())
    else:
        return pd.DataFrame([])
return pd.DataFrame(dataList, columns=columnList).drop_duplicates()
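A modest but reliable speedup is to replace the explicit double loop with nested list comprehensions, which removes the per-element `.append()` calls and the unnecessary `rowList.copy()`. Below is a minimal sketch; the function name `to_dataframe` is mine, and it only assumes the response object behaves as shown in the snippet above (`.column_names` is a list of bytes, each row's `.columns` items carry a `.value`):

```python
import pandas as pd

def to_dataframe(resp):
    """Convert an ExecutionResponse-like object into a DataFrame.

    Assumes `resp.column_names` is a list of bytes and each item of
    `resp.rows` has a `.columns` list whose elements carry a `.value`.
    """
    if resp.column_names is None or resp.rows is None:
        return pd.DataFrame([])

    columns = [name.decode("utf8") for name in resp.column_names]

    def decode(v):
        # Decode only byte strings; pass every other value type through.
        return v.decode("utf8") if isinstance(v, bytes) else v

    # Nested comprehensions build the row lists in one pass, avoiding
    # the per-element .append() and per-row .copy() of the double loop.
    data = [[decode(col.value) for col in row.columns] for row in resp.rows]
    return pd.DataFrame(data, columns=columns).drop_duplicates()
```

If that is still too slow, the next step is usually to build the DataFrame from the raw values first and decode whole columns at once with `Series.str.decode("utf8")`, which moves the decoding out of the Python-level inner loop; note that this only works cleanly for columns whose values are entirely bytes.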