Issue
I'm trying to import large Excel files with multiple sheets into my SQL Server database, but it is taking too much time. Is there any way I can make it more efficient, or a better way to do it? I'm kinda new to this language, so any help would be appreciated. Here is my code:
import pandas as pd
# conn (an open database connection, e.g. pyodbc) and sql_insert
# (a parameterized INSERT statement) are assumed to be defined elsewhere

# taking file input and reading it
filename = input("Input the Filename: ")
dfs = pd.read_excel(filename, sheet_name=None)

# my main function
d = {k: v[['SR_NO', 'NTN']].values.tolist()
     for k, v in pd.read_excel(filename, sheet_name=None).items()
     }
for k, v in dfs.items():
    cols = ['SR_NO', 'NTN']
    dfs = pd.read_excel(filename, usecols=cols, sheet_name=None)
    records = pd.concat(df for df in dfs.values()).to_numpy().tolist()
    d[k] = records

# for test sheetnames
print(d.keys())

# cursor connection to insert records
try:
    cursor = conn.cursor()
    cursor.executemany(sql_insert, records)
    cursor.commit()
except Exception as e:
    cursor.rollback()
    print(str(e))
finally:
    print('Task is complete.')
    cursor.close()
    conn.close()
Solution
I may be misunderstanding your intent, but I suspect all you need is this. Read the file ONCE. The for loop is already enumerating through the dfs values, so you don't need another loop inside. And you don't need concat if you only have one thing.
filename = input("Input the Filename: ")
# read every sheet in a single pass, keeping only the two needed columns
dfs = pd.read_excel(filename, usecols=['SR_NO', 'NTN'], sheet_name=None)
d = {}
for k, v in dfs.items():
    d[k] = v.to_numpy().tolist()
# for test sheetnames
print(d.keys())
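To finish the job from the question, the per-sheet records still need to be inserted. Here is a minimal sketch of how that part might look, assuming conn is an open pyodbc connection and sql_insert is the parameterized INSERT statement from the question (both defined elsewhere). If you are on pyodbc, setting fast_executemany on the cursor can speed up bulk inserts into SQL Server considerably, since rows are sent in batches instead of one round trip each.

# Sketch under the assumptions above; sql_insert would look something like
# "INSERT INTO my_table (SR_NO, NTN) VALUES (?, ?)" (table name is hypothetical)
cursor = conn.cursor()
cursor.fast_executemany = True  # pyodbc-specific flag: batch the parameter sets
try:
    for sheet_name, records in d.items():
        cursor.executemany(sql_insert, records)  # insert every row of this sheet
    conn.commit()  # commit once, after all sheets succeed
except Exception as e:
    conn.rollback()
    print(e)
finally:
    cursor.close()
    conn.close()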
Answered By - Tim Roberts