Issue
I have a list in Python that contains duplicate DataFrames. The goal is to remove these duplicate DataFrames entirely. Here is some code:
import pandas as pd
import numpy as np
##Creating Dataframes
data1_1 = [[1, 2018, 80], [2, 2018, 70]]
data1_2 = [[1, 2017, 77], [3, 2017, 62]]
df1 = pd.DataFrame(data1_1, columns=['ID', 'Year', 'Score'])
df2 = pd.DataFrame(data1_2, columns=['ID', 'Year', 'Score'])
###Creating list with duplicates
all_df_list = [df1, df1, df1, df2, df2, df2]
The desired result is this:
###Desired results
desired_list = [df1,df2]
Is there a way to remove duplicated DataFrames from a Python list?
Thank you
Solution
We can use pandas DataFrame.equals with a list comprehension and enumerate to compare each item in the list with its predecessor, keeping only those that differ (the first item is always kept, so the comparison never wraps around to the end of the list):
desired_list = [df for i, df in enumerate(all_df_list) if i == 0 or not df.equals(all_df_list[i - 1])]
print(desired_list)
[ ID Year Score
0 1 2018 80
1 2 2018 70, ID Year Score
0 1 2017 77
1 3 2017 62]
DataFrame.equals returns True if the compared DataFrames are equal:
df1.equals(df1)
True
df1.equals(df2)
False
Note
As Wen-Ben noted in the comments, your list should be sorted so that duplicates are adjacent, e.g. [df1, df1, df1, df2, df2, df2], or with more DataFrames: [df1, df1, df2, df2, df3, df3].
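If the duplicates are not guaranteed to be adjacent, one possible variant (not from the original answer; the helper name dedupe_dataframes is made up here) is to compare each DataFrame against the ones already kept, which works regardless of order:

```python
import pandas as pd

def dedupe_dataframes(dfs):
    """Keep the first occurrence of each distinct DataFrame.

    Works on unsorted lists: each candidate is compared with
    DataFrame.equals against everything already kept.
    """
    unique = []
    for df in dfs:
        if not any(df.equals(kept) for kept in unique):
            unique.append(df)
    return unique

# Same DataFrames as above, but interleaved rather than sorted
df1 = pd.DataFrame([[1, 2018, 80], [2, 2018, 70]],
                   columns=['ID', 'Year', 'Score'])
df2 = pd.DataFrame([[1, 2017, 77], [3, 2017, 62]],
                   columns=['ID', 'Year', 'Score'])

result = dedupe_dataframes([df1, df2, df1, df2, df1])
print(len(result))  # 2
```

This trades the sorted-input requirement for a quadratic number of comparisons, which is fine for short lists of DataFrames.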
Answered By - Erfan