Issue
I want to implement a local DP model using TFF, that is, each client trains its own differentially private model and sends noisy gradients to the server, and the server just aggregates and distributes in a standard FL fashion. I tried changing the client optimizer to the Keras DP optimizer, but that didn't work. Any suggestions are appreciated.
Solution
First, perhaps have a look at the Differential Privacy in TFF tutorial, which shows how to do central DP training in TFF. Once you understand that, I can see two different ways to change it to provide some local DP guarantees.
- Look at how `tff.learning.dp_aggregator` is implemented. Instead of using the pre-packaged `tff.aggregators.DifferentiallyPrivateFactory` as-is, instantiate it with a `tfp.DPQuery` object that implements the local DP mechanism you are interested in. Perhaps an implementation of the mechanism you need already exists somewhere.
- Implement a custom aggregator from scratch doing exactly what you need. See the Implementing Custom Aggregations tutorial for a starting point.
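To make the first option concrete, here is a minimal, framework-free sketch of what a local DP mechanism does before aggregation: each client clips its own update to a fixed L2 norm and adds Gaussian noise locally, so the server only ever sees noisy updates. The function name, the parameters (`l2_clip`, `noise_multiplier`), and the two-client data are all hypothetical illustrations, not TFF or TensorFlow Privacy APIs; in TFF you would express this logic inside a `tfp.DPQuery` passed to `tff.aggregators.DifferentiallyPrivateFactory`.

```python
import numpy as np

def clip_and_noise(update, l2_clip, noise_multiplier, rng):
    """Local DP sketch (hypothetical helper): clip the client's update to an
    L2 norm of l2_clip, then add Gaussian noise scaled to the clip norm,
    all on the client before anything is sent to the server."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, l2_clip / max(norm, 1e-12))
    noise = rng.normal(0.0, l2_clip * noise_multiplier, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
# Two toy client updates (illustrative data only).
client_updates = [np.array([3.0, 4.0]), np.array([0.3, -0.4])]
noisy_updates = [
    clip_and_noise(u, l2_clip=1.0, noise_multiplier=0.1, rng=rng)
    for u in client_updates
]
# The server then aggregates exactly as in standard FedAvg.
server_update = np.mean(noisy_updates, axis=0)
```

The key difference from the central DP setup in the tutorial is where the noise is added: here each client noises its own clipped update, rather than the server noising the aggregate.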
Answered By - Jakub Konecny