Google makes it easier for AI developers to ensure data privacy

The tech giant has added a new module to TensorFlow that allows developers to enhance privacy with just a few lines of code

7 March

Google has moved to make it easier for AI developers to protect the privacy of users' data by introducing a new module for its machine learning (ML) framework. The module, TensorFlow Privacy, lets developers add a few lines of extra code to enhance privacy.

TensorFlow, the tech giant's ML framework, is used to build applications such as audio, image and text recognition systems.

"Modern ML is increasingly applied to create amazing new technologies and user experiences, many of which involve training machines to learn responsibly from sensitive data such as personal photos or email," the team at TensorFlow wrote in a Medium blog post announcing the new module. "Ideally, the parameters of trained-ML models should encode general patterns rather than facts about specific training examples.

"To ensure this, and to give strong privacy guarantees when the training data is sensitive, it is possible to use techniques based on the theory of differential privacy," the team added. "In particular, when training on users' data, those techniques offer strong mathematical guarantees that models do not learn or remember the details about any specific user."

Developers do not need any expertise in privacy or the underlying mathematics to use TensorFlow Privacy, nor do those using the module need to alter their model architectures, training procedures or processes.
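By way of illustration (this is a minimal sketch, not code from the announcement), the "few lines of extra code" typically amount to swapping a standard Keras optimizer for the library's differentially private one. The class and argument names below follow recent tensorflow_privacy releases, and the hyperparameter values are placeholders; check the project's documentation for the version you have installed.

```python
import tensorflow as tf
from tensorflow_privacy import DPKerasSGDOptimizer

# An ordinary Keras model; the architecture does not change.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# The key substitution: a DP-SGD optimizer that clips each example's
# gradient and adds calibrated Gaussian noise before updating weights.
# Values here are illustrative, not recommendations.
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # bound on each per-example gradient norm
    noise_multiplier=1.1,   # noise scale relative to the clipping norm
    num_microbatches=250,   # must evenly divide the batch size
    learning_rate=0.15,
)

# The loss must be computed per example (reduction=NONE) so the optimizer
# can clip gradients example by example before averaging.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE,
)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# Training then proceeds as usual, e.g.:
# model.fit(x_train, y_train, batch_size=250, epochs=1)
```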

Last year, Google published its Responsible AI Practices guidelines in which it highlighted the importance of data security when developing AI. 
