Issue
I tried to train a model using PyTorch on my MacBook Pro, which has the new-generation Apple M1 chip. However, PyTorch could not detect my GPU:
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
Does anyone know of a solution?
I have already updated all of the libraries to their latest versions.
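For reference, you can confirm from Python what accelerators PyTorch itself reports. A minimal check is sketched below; the torch.backends.mps lookup is guarded with getattr because that attribute only exists in newer PyTorch builds, so on older installs it simply prints False:

import torch

print("PyTorch:", torch.__version__)
# CUDA will always be False on Apple Silicon; there is no NVIDIA GPU.
print("CUDA available:", torch.cuda.is_available())

# The MPS (Apple GPU) backend only exists in newer PyTorch builds,
# so guard the attribute lookup rather than assuming it is present.
mps = getattr(torch.backends, "mps", None)
print("MPS available:", mps.is_available() if mps is not None else False)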
Solution
It looks like PyTorch support for the M1 GPU is in the works but not yet complete.
From @soumith on GitHub:
"So, here's an update.
We plan to get the M1 GPU supported. @albanD, @ezyang and a few core-devs have been looking into it. I can't confirm/deny the involvement of any other folks right now.
So, what we have so far is that we had a prototype that was just about okay. We took the wrong approach (more graph-matching-ish), and the user-experience wasn't great -- some operations were really fast, some were really slow, there wasn't a smooth experience overall. One had to guess-work which of their workflows would be fast.
So, we're completely re-writing it using a new approach, which I think is a lot closer to your good ole PyTorch, but it is going to take some time. I don't think we're going to hit a public alpha in the next ~4 months.
We will open up development of this backend as soon as we can."
That post: https://github.com/pytorch/pytorch/issues/47702#issuecomment-965625139
TL;DR: a public alpha is at least ~4 months out.
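In the meantime, the practical pattern is to fall back to the CPU. The sketch below prefers Apple's GPU backend (which later shipped as the MPS backend) whenever the installed PyTorch build exposes it; the pick_device helper name is just for illustration and is not part of the original answer:

import torch

def pick_device() -> torch.device:
    # 'pick_device' is a hypothetical helper name for illustration.
    # Prefer Apple's GPU backend (MPS, present only in newer PyTorch
    # builds), otherwise fall back to the CPU.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8, device=device)
print(device, model(x).shape)

On a build without the backend this runs entirely on the CPU, so the same script works before and after M1 GPU support arrives.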
Answered By - Peter LaMantia