Issue
Could you tell me why I failed to use CIFAR10DataModule()?
First, I ran this code on Google Colab:

from pl_bolts.datamodules import CIFAR10DataModule
dm = CIFAR10DataModule()
Then I ran the following fine-tuning code:

import torch
from torch.optim import Adam
from torch.nn.functional import cross_entropy

optimizer = Adam(finetune_layer.parameters(), lr=1e-4)
for epoch in range(10):
    for batch in dm.train_loader:
        x, y = batch
        with torch.no_grad():
            features = backbone(x)
        preds = finetune_layer(features)
        loss = cross_entropy(preds, y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        print(loss.item())
However, running it raised: AttributeError: 'CIFAR10DataModule' object has no attribute 'train_loader'.
When I ran the following code to inspect dm:

for batch in dm.train_dataloader:
    x, y = batch
    print(x.shape, y.shape)
    break

it raised TypeError: 'method' object is not iterable.
The code looks the same as the example, so why are these errors raised?
Solution
There are two problems with your code.

First, the way to get the underlying PyTorch dataloader is dm.train_dataloader(), not dm.train_loader. It is a method, not a property.
for batch in dm.train_dataloader():
    x, y = batch
    ...
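Your second error has the same root cause: the for-loop was handed the bound method object itself instead of the iterable it returns. A minimal pure-Python sketch (the `DataModule` class here is invented purely for illustration, not part of pl_bolts):

```python
class DataModule:
    """Toy stand-in for a class that exposes its data via a method."""

    def train_dataloader(self):
        # Returns an iterable, like LightningDataModule.train_dataloader()
        return [([1, 2], 0), ([3, 4], 1)]


dm = DataModule()

# Forgetting the parentheses hands the bound method to the for-loop,
# which raises TypeError: 'method' object is not iterable.
try:
    for batch in dm.train_dataloader:
        pass
except TypeError as e:
    print(e)

# Calling the method returns the iterable, so this works.
for x, y in dm.train_dataloader():
    print(x, y)
```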
Second, since you are trying to use a LightningDataModule without a Trainer, you need to manually invoke

dm.prepare_data()
dm.setup()

in order for the dataloader to be available via .train_dataloader().
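To show the required call order without downloading CIFAR-10, here is a minimal pure-Python stand-in (the `DummyDataModule` class and its data are invented for illustration) that mimics what a LightningDataModule expects when used without a Trainer: prepare_data(), then setup(), then train_dataloader():

```python
class DummyDataModule:
    """Invented stand-in mimicking the LightningDataModule call order."""

    def __init__(self):
        self._data = None

    def prepare_data(self):
        # In CIFAR10DataModule this step downloads the dataset to disk.
        pass

    def setup(self, stage=None):
        # In CIFAR10DataModule this step builds the train/val/test datasets.
        self._data = [([0.1, 0.2], 0), ([0.3, 0.4], 1)]

    def train_dataloader(self):
        # A method returning the iterable; it only works after setup().
        if self._data is None:
            raise RuntimeError("call prepare_data() and setup() first")
        return self._data


dm = DummyDataModule()
dm.prepare_data()
dm.setup()

for x, y in dm.train_dataloader():
    print(x, y)
```

With the real CIFAR10DataModule the same two calls go right after constructing dm, before the training loop.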
Answered By - ayandas