Issue
I'm running machine learning (ML) jobs that use very little GPU memory, so I could run several of them on a single GPU.
To achieve that, I tried adding multiple lines to the gres.conf file that specify the same device. However, the Slurm daemon doesn't seem to accept this; the service fails with:
fatal: Gres GPU plugin failed to load configuration
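For illustration, a gres.conf of roughly this shape (the device path is an assumption) is what triggers the error, since slurmd rejects duplicate device files:

```
# Hypothetical gres.conf listing the same GPU device twice —
# slurmd refuses to load this configuration.
Name=gpu File=/dev/nvidia0
Name=gpu File=/dev/nvidia0
```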
Is there any option I'm missing to make this work?
Or is there a different way to achieve this with Slurm?
This question is somewhat similar to How to run multiple jobs on a GPU grid with CUDA using SLURM, but that one seems specific to CUDA code with compilation enabled, which is much narrower than my general case (as far as I understand).
Solution
I don't think Slurm lets you oversubscribe GPUs, so I see two options:
- You can configure the CUDA Multi-Process Service (MPS), or
- you can pack multiple calculations into a single job that has one GPU and run them in parallel.
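Both options can be sketched as a single Slurm batch script. This is a rough outline, not a tested recipe: the `./train.sh` workload, its config files, and the MPS directory paths are all hypothetical placeholders, and MPS must be available on the node.

```shell
#!/bin/bash
#SBATCH --gres=gpu:1          # one physical GPU for the whole job
#SBATCH --cpus-per-task=4

# Option 1: start the CUDA Multi-Process Service so the concurrent
# processes below share the GPU efficiently (paths are per-job choices).
export CUDA_MPS_PIPE_DIRECTORY="$TMPDIR/mps-pipe"
export CUDA_MPS_LOG_DIRECTORY="$TMPDIR/mps-log"
mkdir -p "$CUDA_MPS_PIPE_DIRECTORY" "$CUDA_MPS_LOG_DIRECTORY"
nvidia-cuda-mps-control -d

# Option 2: pack several low-memory workloads into this one job and
# run them in parallel in the background.
./train.sh config1.yml &
./train.sh config2.yml &
./train.sh config3.yml &
wait   # block until all background workloads finish

# Shut down the MPS control daemon before the job ends.
echo quit | nvidia-cuda-mps-control
```

Without MPS, the background processes still time-share the GPU, just with less efficient context switching.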
Answered By - Marcus Boden