Issue
I am trying to use PyTorch's `nn.TransformerEncoder` module for a classification task.
I have data points of varying lengths, i.e. my sequences have different lengths. Each sequence has one corresponding output (a target that is either 0 or 1).
(Images omitted: one outlines the dataset, the other shows how the sequences vary in length.)
However, the entries within each sequence all have the same length.
I want to use this dataset to train the encoder part of a Transformer to predict the corresponding outputs. How can I go about doing this, and are there any examples I can check online?
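For concreteness, a toy stand-in for this data layout (the shapes and values below are made up purely for illustration) would be a list of 2-D tensors with differing row counts but a fixed number of columns, plus one 0/1 target per sequence:

```python
import torch

# Toy stand-in for the dataset described above (random values):
# 3 sequences of lengths 5, 3 and 7; every entry has the same size (8 features).
sequences = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(7, 8)]
# One binary target per sequence.
targets = torch.tensor([1.0, 0.0, 1.0])
```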
Solution
It depends on what your data actually looks like and what kind of output you expect. In general, I would suggest using the Transformers library from Hugging Face; they have extensive documentation and detailed code examples you can build on, plus an active forum. Here is a link to their description of Encoder-Decoder Models. I hope that helps you a bit.
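If you want to stay with plain PyTorch instead, a minimal sketch of the approach the question describes looks like this (the class name, dimensions, and toy data are illustrative assumptions, not part of the original answer): pad the variable-length sequences to a common length, pass a `src_key_padding_mask` to `nn.TransformerEncoder` so padded positions are ignored, mean-pool the non-padded outputs, and feed the pooled vector to a binary classification head trained with `BCEWithLogitsLoss`.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

class TransformerClassifier(nn.Module):
    """Illustrative model: encode a padded batch and predict one logit per sequence."""
    def __init__(self, feature_dim, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(feature_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)  # single logit for the 0/1 target

    def forward(self, x, padding_mask):
        # x: (batch, max_len, feature_dim); padding_mask: (batch, max_len), True = padded.
        h = self.encoder(self.input_proj(x), src_key_padding_mask=padding_mask)
        keep = (~padding_mask).unsqueeze(-1).float()
        # Mean-pool only over the real (non-padded) time steps.
        pooled = (h * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        return self.head(pooled).squeeze(-1)

# Pad the variable-length sequences and build the padding mask from their lengths.
sequences = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(7, 8)]
targets = torch.tensor([1.0, 0.0, 1.0])
x = pad_sequence(sequences, batch_first=True)                          # (3, 7, 8)
lengths = torch.tensor([s.size(0) for s in sequences])
padding_mask = torch.arange(x.size(1))[None, :] >= lengths[:, None]    # (3, 7), True where padded

model = TransformerClassifier(feature_dim=8)
logits = model(x, padding_mask)
loss = nn.BCEWithLogitsLoss()(logits, targets)
loss.backward()
```

Note that this sketch omits positional encodings, which you would normally add before the encoder, and it runs on a single toy batch only to show the shapes.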
Answered By - Stimmot