Issue
I've trained a model in the cloud and saved it. Now I'm trying to download it to my local machine and also share it with other people. However, because the file is large (700 MB), it gets truncated every time I try to download it: it ends up 300 MB instead of 700 MB. I really need this model, and it takes days to re-train. Is there any way to get a file this large out of the Jupyter Notebook instance?
Solution
In your pipeline you can define a ModelExportOP step that exports the model to a Google Cloud Storage bucket you have created and referenced in the pipeline.
Once the model has been exported to the bucket, grant yourself read access to that bucket and download the file from there.
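After downloading, it's worth confirming the file arrived intact before sharing it, since the original symptom here was silent truncation. A minimal sketch (the paths and expected values are hypothetical; you can get the remote object's size and MD5 with `gsutil ls -L` or `gsutil hash` and compare against them):

```python
import hashlib
import os

def file_md5(path, chunk_size=8 * 1024 * 1024):
    """Compute the MD5 of a file in chunks so a 700 MB model fits in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, expected_bytes, expected_md5=None):
    """Return True if the local copy matches the expected size (and hash, if given)."""
    actual = os.path.getsize(path)
    if actual != expected_bytes:
        print(f"Truncated: got {actual} bytes, expected {expected_bytes}")
        return False
    if expected_md5 is not None and file_md5(path) != expected_md5:
        print("Checksum mismatch")
        return False
    return True

# Hypothetical usage -- substitute your own path and the values reported by gsutil:
# verify_download("model.ckpt", expected_bytes=734003200, expected_md5="...")
```

If the sizes disagree, re-download from the bucket rather than trusting the notebook's file-download UI, which is where the truncation occurred.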
Answered By - Jake Nelson