Part 1 – Data Privacy in the Age of AI: What Communicators Should Know

With the rapid rise of advanced technologies, including artificial intelligence, the threats to data privacy have grown exponentially. Communicators need to understand these risks and their role in protecting data.

January 27–31 is Data Privacy Week, an international campaign to raise awareness about the need to respect privacy and safeguard data. But whether or not your organization observes this event, protecting data should be a 24/7/365 effort.

By partnering with Data Security colleagues, communicators can play a vital role in helping employees manage their personal data and protect the organization’s most valuable asset.

The AI effect: The difference is the scale

Artificial Intelligence is having a profound effect on individual privacy. Jennifer King, a Privacy and Data Policy Fellow at Stanford University, says, “AI systems pose many of the same privacy risks we’ve been facing during the past decades of Internet commercialization and mostly unrestrained data collection. The difference is the scale: AI systems are so data-hungry that it is basically impossible for people using online products or services to escape systematic digital surveillance across most facets of life.”

Today, more and more companies are embracing AI to gain competitive advantage, but its application carries enormous risk. When used improperly or maliciously, AI can lead to significant security breaches, data theft, and exploitation of personal information.

Here are some examples:

  • Data privacy violations: AI systems often rely on vast amounts of data, some of which may be sensitive. Without proper safeguards, this data can be exposed to unauthorized access or misuse. In one recent case, Ireland’s Data Protection Commission fined Meta $1.3 billion for transferring personal data from the EU to the US without adequate data privacy safeguards.
  • Phishing: AI-driven tools can deceive employees into sharing proprietary information, making them more susceptible to phishing attacks. At a Belgian bank, a high-level executive’s email account was hacked, and employees were instructed to transfer money to the attacker’s account. The scam cost the company $78.5 million.
  • Bias and discrimination: AI algorithms can inadvertently perpetuate biases, leading to unfair treatment of employees or customers. In one widely reported example, Amazon’s AI-powered resume screening tool was found to exhibit gender bias by downgrading resumes containing terms associated with women.
  • Personal devices: Employees who work from home need to be vigilant about voice assistants and smart home devices, which gather data about their users. If that data is mishandled, both personal and professional security can suffer, so employees who use these technologies need to know how to protect corporate information. They should also be careful about what they post on social media.

Communicators can play a key role in educating the workforce on best practices for handling and managing data. Perhaps more than any other organizational concern, data privacy affects employees’ personal lives as well as their work. And we’ve found that employees are far more likely to sit up and pay attention to an issue that touches them both personally and professionally.

In Part 2 of this blog we discuss how communicators can partner with Data Security colleagues to help employees protect personal data at home and at work. In the meantime, if you’d like to learn more about communicating in the era of AI, the O’Keefe Group is here to help.