Issue
I have a pretrained convolutional neural network which produces an output of shape (X, 164), where X is the number of test examples, so the output layer has 164 nodes. I want to feed this output to a second network: a simple fully connected network whose first layer has 64 nodes and whose output layer has a single node with a sigmoid activation. How can I do that? My first network looks like:
import torch
import torch.nn as nn
from functools import reduce

class LambdaBase(nn.Sequential):
    def __init__(self, fn, *args):
        super(LambdaBase, self).__init__(*args)
        self.lambda_func = fn

    def forward_prepare(self, input):
        output = []
        for module in self._modules.values():
            output.append(module(input))
        return output if output else input

class Lambda(LambdaBase):
    def forward(self, input):
        return self.lambda_func(self.forward_prepare(input))

class LambdaMap(LambdaBase):
    def forward(self, input):
        return list(map(self.lambda_func, self.forward_prepare(input)))

class LambdaReduce(LambdaBase):
    def forward(self, input):
        return reduce(self.lambda_func, self.forward_prepare(input))
def get_model(load_weights=True):
    pretrained_model_reloaded_th = nn.Sequential(  # Sequential,
        nn.Conv2d(4, 300, (19, 1)),
        nn.BatchNorm2d(300),
        nn.ReLU(),
        nn.MaxPool2d((3, 1), (3, 1)),
        nn.Conv2d(300, 200, (11, 1)),
        nn.BatchNorm2d(200),
        nn.ReLU(),
        nn.MaxPool2d((4, 1), (4, 1)),
        nn.Conv2d(200, 200, (7, 1)),
        nn.BatchNorm2d(200),
        nn.ReLU(),
        nn.MaxPool2d((4, 1), (4, 1)),
        Lambda(lambda x: x.view(x.size(0), -1)),  # Reshape,
        nn.Sequential(Lambda(lambda x: x.view(1, -1) if 1 == len(x.size()) else x), nn.Linear(2000, 1000)),  # Linear,
        nn.BatchNorm1d(1000, 1e-05, 0.1, True),  # BatchNorm1d,
        nn.ReLU(),
        nn.Dropout(0.3),
        nn.Sequential(Lambda(lambda x: x.view(1, -1) if 1 == len(x.size()) else x), nn.Linear(1000, 1000)),  # Linear,
        nn.BatchNorm1d(1000, 1e-05, 0.1, True),  # BatchNorm1d,
        nn.ReLU(),
        nn.Dropout(0.3),
        nn.Sequential(Lambda(lambda x: x.view(1, -1) if 1 == len(x.size()) else x), nn.Linear(1000, 164)),  # Linear,
        nn.Sigmoid(),
    )
    if load_weights:
        sd = torch.load('pretrained_model.pth')
        pretrained_model_reloaded_th.load_state_dict(sd)
    return pretrained_model_reloaded_th

model = get_model(load_weights=True)
model = get_model(load_weights = True)
If I want to get the output of this model on my test set, I can simply do:
output = model(X.float())
This produces a final output of shape (X, 164). Now I want to take this output and feed it to the second network described above. How can I combine these two networks, and how can I optimise them together? Insights will be appreciated.
Edit: My second model is:
# define second model architecture
next_model = nn.Sequential(
    nn.Linear(164, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid()
)

# print model architecture
print(next_model)
And my classifier is trained as:
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
Solution
If the first model's output needs no adapting before it enters the second model, you can simply chain the two with an nn.Sequential:

>>> network = nn.Sequential(model, next_model)

And use it the same way as you did with model:

>>> output = network(X.float())

which corresponds to next_model(model(X.float())).
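To optimise the two networks jointly, it is enough to build a single optimizer over the combined module's parameters, since nn.Sequential registers the parameters of both sub-models; gradients then flow from the final loss back through both. Below is a minimal, self-contained sketch of this idea. The two tiny nn.Sequential stand-ins, the Adam optimizer, learning rate, and BCELoss choice are illustrative assumptions, not part of the original post; in practice you would use the `model` and `next_model` defined above.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two models in the question, kept small
# so this sketch runs on its own; substitute the real `model`/`next_model`.
model = nn.Sequential(nn.Linear(10, 164), nn.Sigmoid())
next_model = nn.Sequential(nn.Linear(164, 64), nn.ReLU(),
                           nn.Linear(64, 1), nn.Sigmoid())

# Chaining them as in the answer; `network.parameters()` now yields the
# parameters of BOTH sub-models.
network = nn.Sequential(model, next_model)

# One optimizer over the combined parameters updates both networks.
optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)
criterion = nn.BCELoss()  # matches the final sigmoid output

# Dummy batch: 8 examples, 10 features, binary-style targets in [0, 1].
X = torch.randn(8, 10)
y = torch.rand(8, 1)

# One joint training step: the loss on next_model's output backpropagates
# through model as well.
optimizer.zero_grad()
loss = criterion(network(X), y)
loss.backward()
optimizer.step()
```

If you instead want to keep the pretrained model frozen and train only the second network, pass only `next_model.parameters()` to the optimizer.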
Answered By - Ivan