Microsoft Research Develops Novel Approaches to Enforce Privacy in AI Models

A team of AI researchers at Microsoft has introduced two novel approaches for enforcing contextual integrity in large language models: PrivacyChecker, a lightweight, open-source module that acts as a privacy shield at inference time, and CI-CoT + CI-RL, a training method designed to teach models to reason about privacy.
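To make the idea of an inference-time privacy shield concrete, the following is a minimal, hypothetical sketch in Python. The class names, norm structure, and methods (`PrivacyShield`, `ContextualNorm`, `filter_response`) are illustrative assumptions, not the actual PrivacyChecker API; the sketch only shows the general pattern of checking whether an information flow fits the declared context before a model response is released.

```python
# Hypothetical sketch of an inference-time privacy shield based on contextual
# integrity. Names and API are illustrative assumptions, not PrivacyChecker itself.
from dataclasses import dataclass


@dataclass(frozen=True)
class ContextualNorm:
    """A simplified contextual-integrity norm: which data may flow to whom, for what purpose."""
    data_type: str   # e.g. "home_address"
    recipient: str   # e.g. "delivery_service"
    purpose: str     # e.g. "shipping"


class PrivacyShield:
    """Wraps model output and blocks drafts whose information flows violate the declared norms."""

    def __init__(self, allowed_norms: list[ContextualNorm]):
        self.allowed = {(n.data_type, n.recipient, n.purpose) for n in allowed_norms}

    def is_permitted(self, data_type: str, recipient: str, purpose: str) -> bool:
        """Return True if this information flow is allowed in the current context."""
        return (data_type, recipient, purpose) in self.allowed

    def filter_response(self, draft: str, detected_flows: list[tuple[str, str, str]]) -> str:
        """Release the draft only if every detected flow matches a permitted norm."""
        for flow in detected_flows:
            if not self.is_permitted(*flow):
                return "[Withheld: this response would share personal data outside its intended context.]"
        return draft


# Usage: sharing an address with a delivery service for shipping is allowed;
# the same data flowing to a marketing partner is not, so the draft is withheld.
shield = PrivacyShield([ContextualNorm("home_address", "delivery_service", "shipping")])
draft = "Sure, the customer's address is 12 Example Street."
print(shield.filter_response(draft, [("home_address", "marketing_partner", "advertising")]))
```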

By Sergio De Simone