Issue
I have the following Spark DataFrame:
+--------------------+--------------------+
| f1| f2|
+--------------------+--------------------+
| [380.1792652309408]|[-91793.40296983652]|
|[-18662.02751719936]|[-99674.18149372772]|
|[-736.5125444921572]| [-23736.3626879109]|
|[-143436.24812848...|[-136748.6250801389]|
|[-10325.057466551...|[-108747.85455021...|
|[-9771.868356757912]|[-164454.02688403...|
+--------------------+--------------------+
But I want to convert the values in these columns from vector type to plain doubles. How can I do so?
Sample output:
+--------------------+--------------------+
|                  f1|                  f2|
+--------------------+--------------------+
|   380.1792652309408|  -91793.40296983652|
|  -18662.02751719936|  -99674.18149372772|
|  -736.5125444921572|   -23736.3626879109|
| -143436.24812848...|  -136748.6250801389|
| -10325.057466551...| -108747.85455021...|
|  -9771.868356757912| -164454.02688403...|
+--------------------+--------------------+
Solution
Updated answer; an improvement on the original answer, which did not use Row.
With the enforced downtime I have been doing some PySpark and machine learning work in the background. This example focuses on a Vector with cardinality > 1 (and the same cardinality in all rows, as would make sense), and also renames the columns.
You can use this example now:
%python
from pyspark.ml.linalg import Vectors
from pyspark.sql import Row

# Create a DataFrame in-line with a dense vector column of cardinality 3
source_data = [
    Row(city="AMS", temps=Vectors.dense([-1.0, -2.0, -3.0])),
    Row(city="BRU", temps=Vectors.dense([-7.0, -7.0, -5.0])),
]
df = spark.createDataFrame(source_data)

# Expand each vector into a tuple of plain Python floats, one per element
def convertToCols(row):
    return tuple(row.temps.toArray().tolist())

# Only the first column is named explicitly; Spark names the rest _2, _3, ...
df2 = df.rdd.map(convertToCols).toDF(["C1"])

# Rename the default _N columns to CN
df3 = df2.toDF(*(c.replace('_', 'C') for c in df2.columns))
df3.show()
returns:
+----+----+----+
| C1| C2| C3|
+----+----+----+
|-1.0|-2.0|-3.0|
|-7.0|-7.0|-5.0|
+----+----+----+
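The same map-and-toDF approach can be applied to the single-element vectors from the question. A minimal sketch, assuming the question's DataFrame is available as df_q (a hypothetical name) with DenseVector columns f1 and f2:

# Sketch: df_q is assumed to be the question's DataFrame with
# single-element DenseVector columns f1 and f2
def toDoubles(row):
    # A DenseVector supports indexing, so take the single element of each vector
    return (float(row.f1[0]), float(row.f2[0]))

df_doubles = df_q.rdd.map(toDoubles).toDF(["f1", "f2"])
df_doubles.show()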
It was important in my example to use Row, since I was creating the DataFrame in-line.
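As a side note, on Spark 3.0+ the same conversion can be done without dropping to the RDD by using pyspark.ml.functions.vector_to_array. A minimal sketch, with df_q again standing for the question's DataFrame:

from pyspark.ml.functions import vector_to_array  # available from Spark 3.0

# Replace each single-element vector column with its first (only) element as a double
df_doubles = (df_q
              .withColumn("f1", vector_to_array("f1").getItem(0))
              .withColumn("f2", vector_to_array("f2").getItem(0)))
df_doubles.show()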
Answered By - thebluephantom