I am trying to get the array of scores out of a TF-IDF result vector. For example:
rescaledData.select("words", "features").show()
+-----------------------------+---------------------------------------------------------------------------------------------+
|words |features |
+-----------------------------+---------------------------------------------------------------------------------------------+
|[a, b, c] |(4527,[0,1,31],[0.6363067860791387,1.0888040725098247,4.371858972705023]) |
|[d] |(4527,[8],[2.729945780576634]) |
+-----------------------------+---------------------------------------------------------------------------------------------+
rescaledData.select(rescaledData['features'].getItem('values')).show()
But instead of an array I get an error:
AnalysisException: u"Can't extract value from features#1786: need struct type but got struct<type:tinyint,size:int,indices:array<int>,values:array<double>>;"
What I want is:
+--------------------------+-----------------------------------------------------------+
|words |features |
+--------------------------+-----------------------------------------------------------+
|[a, b, c] |[0.6363067860791387, 1.0888040725098247, 4.371858972705023]|
+--------------------------+-----------------------------------------------------------+
How can I solve this?
Another option is to create a udf that extracts the values from the sparse vector:
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType, ArrayType
sparse_values = udf(lambda v: v.values.tolist(), ArrayType(DoubleType()))
df.withColumn("features", sparse_values("features")).show(truncate=False)
+---------+-----------------------------------------------------------+
|words    |features                                                   |
+---------+-----------------------------------------------------------+
|[a, b, c]|[0.6363067860791387, 1.0888040725098247, 4.371858972705023]|
|[d] |[2.729945780576634] |
+---------+-----------------------------------------------------------+
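The error message itself shows why `getItem('values')` fails: the `features` column is a Spark ML vector (`struct<type:tinyint,size:int,indices:array<int>,values:array<double>>`), not a plain struct column, so `values` cannot be extracted with column syntax. The udf works because it runs on the deserialized `SparseVector` object, whose `.values` attribute holds only the non-zero entries. A minimal stdlib-only sketch of that extraction, using a hypothetical `MockSparseVector` class standing in for `pyspark.ml.linalg.SparseVector`:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class MockSparseVector:
    """Stand-in for pyspark.ml.linalg.SparseVector: (size, indices, values)."""
    size: int
    indices: List[int]
    values: List[float]


def sparse_values(v: MockSparseVector) -> List[float]:
    # Mirrors the udf body above (lambda v: v.values.tolist()):
    # return only the non-zero entries, not a dense array of length `size`.
    return list(v.values)


row = MockSparseVector(
    size=4527,
    indices=[0, 1, 31],
    values=[0.6363067860791387, 1.0888040725098247, 4.371858972705023],
)
print(sparse_values(row))
# [0.6363067860791387, 1.0888040725098247, 4.371858972705023]
```

Note: on Spark 3.0+ there is also `pyspark.ml.functions.vector_to_array`, which converts a vector column without a udf, but it returns the full dense array (length 4527 here) rather than just the non-zero values.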