Research Activities

A8

Design of attacks with active adversaries in FL. We intend to design membership inference attacks (to determine whether a given subject's data was used to train the model) and property inference attacks (to infer whether the training data satisfies a given property).
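
As a minimal sketch, membership inference can be illustrated with a loss-threshold baseline: training samples typically incur lower loss than unseen ones, so an adversary guesses "member" when the loss falls below a threshold. The threshold and toy loss values below are illustrative assumptions, not the project's actual attack:

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict membership: samples with loss below the threshold are
    guessed to have been part of the training set (members)."""
    return losses < threshold

# Toy loss values: members tend to have lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.20, 0.70])

preds_members = loss_threshold_attack(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_attack(nonmember_losses, threshold=0.5)

# Fraction of correct guesses over all six samples.
accuracy = (preds_members.sum() + (~preds_nonmembers).sum()) / 6
print(accuracy)  # 1.0 on this separable toy data
```

In practice the threshold is calibrated on shadow models or held-out data; here it is fixed by hand for clarity.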

A9

Evaluation of the level of privacy in FL. The aim is to define privacy metrics for FL that establish how much information an active adversary can obtain a priori (before running a given number of iterations of the learning algorithm) and a posteriori (after running that number of iterations). Real-time privacy-loss estimation methods will also be developed.
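
One simple way to make the a priori / a posteriori distinction concrete is differential-privacy budget accounting under basic sequential composition, where the privacy loss grows linearly in the number of iterations. The per-round epsilon and round counts below are illustrative, and real accountants use tighter composition bounds:

```python
def privacy_budget(eps_per_round, rounds):
    """Basic sequential composition: total privacy loss (epsilon)
    grows linearly with the number of learning iterations."""
    return eps_per_round * rounds

# A priori: worst-case bound computed before training, for the
# planned number of iterations.
planned_eps = privacy_budget(0.1, 100)   # bound for 100 planned rounds

# A posteriori: loss actually accumulated after the rounds that ran.
actual_eps = privacy_budget(0.1, 37)     # ~3.7 after 37 rounds
print(planned_eps, actual_eps)
```

A real-time estimator would update `actual_eps` after every round and halt training once a target budget is exhausted.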

A10

Design of secure aggregation techniques in (D)FL to prevent attacks against the aggregator(s) or the computation nodes of the learning network. Homomorphic encryption, secure multiparty computation, and differential privacy techniques will be applied. Many of these techniques provide post-quantum resistance in a natural way, since they are either based on hard problems over lattices or offer theoretical security under certain non-collusion assumptions.
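
The core idea of secure aggregation can be sketched with pairwise additive masking (the mechanism behind classic secure-aggregation protocols): each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum and the aggregator learns only the total. The integer updates and modulus below are toy assumptions:

```python
import random

def masked_updates(updates, modulus=2**16):
    """Pairwise additive masking: for each pair (i, j), client i adds a
    shared random mask and client j subtracts it, so the masks cancel
    when the aggregator sums all masked updates."""
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = random.randrange(modulus)
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    return masked

updates = [3, 5, 7]          # toy client updates (one scalar each)
masked = masked_updates(updates)
aggregate = sum(masked) % 2**16
print(aggregate)  # 15 — the sum survives; individual updates stay hidden
```

Real protocols additionally handle dropouts and derive the pairwise masks from key agreement rather than a trusted source of randomness.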

A11

Improving the efficiency and robustness of (D)FL algorithms, and designing authentication and accountability mechanisms. These mechanisms would improve behavior under adverse conditions such as failures, interrupted or delayed communications, and statistical heterogeneity in the data. In addition, they would ensure traceability of the learning processes.
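
Robustness to faulty or outlier clients is often obtained by replacing the mean with a robust aggregation rule. A minimal sketch, assuming a coordinate-wise median rule (one standard choice, not necessarily the project's) and made-up client updates:

```python
import statistics

def median_aggregate(client_updates):
    """Coordinate-wise median: tolerates a minority of faulty or
    adversarial client updates, unlike the plain mean."""
    return [statistics.median(coords) for coords in zip(*client_updates)]

updates = [
    [0.9, 1.1],       # honest client
    [1.0, 1.0],       # honest client
    [100.0, -50.0],   # faulty or adversarial client
]
print(median_aggregate(updates))  # [1.0, 1.0] — the outlier is ignored
```

The plain mean of the same updates would be pulled to roughly [34.0, -16.0], illustrating why robust rules matter under failures or attacks.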

A12

Detection of adversarial aggregators based on watermarking.
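
One way to realize this, sketched below: embed a secret trigger set into the model and verify that the aggregated model still answers it as expected; a low match rate suggests the aggregator tampered with the updates. The trigger set, toy models, and the 0.9 threshold are hypothetical placeholders:

```python
def verify_watermark(model_fn, trigger_set, threshold=0.9):
    """Check whether the model still answers the secret trigger set
    as expected; a match rate below the threshold flags tampering."""
    matches = sum(model_fn(x) == y for x, y in trigger_set)
    return matches / len(trigger_set) >= threshold

# Hypothetical trigger set: secret inputs paired with secret labels.
trigger_set = [(i, i % 2) for i in range(10)]

honest_model = lambda x: x % 2   # aggregation preserved the watermark
tampered_model = lambda x: 0     # tampering destroyed the watermark

print(verify_watermark(honest_model, trigger_set))    # True
print(verify_watermark(tampered_model, trigger_set))  # False
```

The threshold below 1.0 leaves slack for benign accuracy loss on the trigger set introduced by aggregation itself.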

A13

A use case in bank fraud detection, applying (D)FL techniques in a secure and privacy-preserving way.
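
As an illustrative sketch of the aggregation step such a use case could rely on, the snippet below applies FedAvg-style weighted averaging to per-bank fraud-model parameters. The weights and sample sizes are invented for the example:

```python
def fedavg(local_weights, sizes):
    """FedAvg: average local model parameters, weighting each
    participant by its local dataset size."""
    total = sum(sizes)
    dim = len(local_weights[0])
    return [sum(w[k] * n for w, n in zip(local_weights, sizes)) / total
            for k in range(dim)]

# Hypothetical fraud-model parameters from three banks (illustrative).
bank_weights = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]]
bank_sizes = [100, 300, 100]   # local transaction counts

global_weights = fedavg(bank_weights, bank_sizes)
print(global_weights)  # weighted toward the bank with the most data
```

In the secure variant, the weighted sums would be computed under secure aggregation so that no bank's parameters are revealed in the clear.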