Issue
I am trying to implement DeepSurv for survival analysis with the Python package pycox. The package author also provides a notebook with a coding example, so I tried to transfer the code to my data. However, there seems to be a problem defining x_train due to the proposed Feature transforms with DataFrameMapper.
In the notebook it says:
from sklearn.preprocessing import StandardScaler
from sklearn_pandas import DataFrameMapper

# standardize the numerical covariates; pass the binary ones through unchanged
cols_standardize = ['x0', 'x1', 'x2', 'x3', 'x8']
cols_leave = ['x4', 'x5', 'x6', 'x7']
standardize = [([col], StandardScaler()) for col in cols_standardize]
leave = [(col, None) for col in cols_leave]
x_mapper = DataFrameMapper(standardize + leave)
x_train = x_mapper.fit_transform(df_train).astype('float32')
x_val = x_mapper.transform(df_val).astype('float32')
x_test = x_mapper.transform(df_test).astype('float32')
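For completeness, the y_train used further below comes from the notebook's label-transforms step; as far as I can tell, for a Cox model nothing is discretized, so the labels are simply a tuple of NumPy arrays (the notebook's columns are named 'duration' and 'event'):
get_target = lambda df: (df['duration'].values, df['event'].values)
y_train = get_target(df_train)  # tuple of (durations, events)
y_val = get_target(df_val)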
In the notebook they standardize the five numerical covariates, but I have nothing to standardize. So I changed the code to:
cols_standardize = []
cols_leave = df_train.columns.values.tolist()
standardize = [([col], StandardScaler()) for col in cols_standardize]
leave = [(col, None) for col in cols_leave]
x_mapper = DataFrameMapper(standardize + leave)
x_train = x_mapper.fit_transform(df_train).astype('float32')
x_val = x_mapper.transform(df_val).astype('float32')
x_test = x_mapper.transform(df_test).astype('float32')
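If I understand the mapper correctly, with an empty standardize list it only selects the columns, so this should be equivalent to:
x_train = df_train[cols_leave].values.astype('float32')
x_val = df_val[cols_leave].values.astype('float32')
x_test = df_test[cols_leave].values.astype('float32')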
But when I start training the model, this error occurs:
batch_size = 256
lrfinder = model.lr_finder(x_train, y_train, batch_size, tolerance=10)
_ = lrfinder.plot()
RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Long
Is it maybe because of the batch_size? What does batch_size actually mean?
However, I also tried to skip the whole Feature transforms step, so I just cast my DataFrames to floats:
x_train = df_train.astype('float32')
x_val = df_val.astype('float32')
x_test = df_test.astype('float32')
But then, when I go on to train the model, it says:
All objects in 'data' doesn't have the same type.
I am really confused about how to prepare my data for pycox. Especially the label-transforms step with standardization seems confusing. I would be glad for any help!
Solution
There is no problem with the code. Try upgrading PyTorch, as suggested here: https://github.com/huggingface/transformers/issues/2126
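A few additional notes, offered as a sketch rather than a definitive fix. The RuntimeError complains about Long vs. Float tensors, so if upgrading is not an option, explicitly casting the labels to float32 may also resolve the mismatch (this assumes your duration and event columns are named 'duration' and 'event'):
get_target = lambda df: (df['duration'].values.astype('float32'),
                         df['event'].values.astype('float32'))
y_train = get_target(df_train)
y_val = get_target(df_val)
The second error ("All objects in 'data' doesn't have the same type.") most likely comes from passing pandas DataFrames directly: pycox models expect NumPy arrays (or tensors), and mixing types inside one input triggers that check. Converting with .values, e.g. x_train = df_train.values.astype('float32'), should avoid it. Finally, batch_size is simply the number of samples processed per gradient step during training; it is not the cause of either error.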
Answered By - Giacomo F