Protecting privacy takes precedence

By Amy Ta
Staff Writer

With the rapid development of artificial intelligence, it isn’t surprising that AI use jumped 35% in 2023. AI can be incredibly helpful in the workplace and in our daily lives. However, it can also do a lot of harm, especially when it comes to privacy.

One of the most prominent uses of AI is facial recognition technology (FRT), which is quickly becoming commonplace. The growing use of FRT in surveillance cameras raises a major fear: that it can collect personal data without our knowledge. With enough training, an FRT program can match a face to a name and then gather any personal information linked to that name.

However, the biggest issue stems from the fact that these programs can mine your biometric data from photos you post on social media. Biometric data includes your identifying physical features, such as your face, and can be linked to other sensitive details like your race and medical history. This is different from the consensual collection of personal data, such as location and usernames, which sites tend to gather for user engagement purposes. It’s scary to think that anyone could buy our biometric data, especially with social media so prevalent in our lives.

This has serious consequences, especially since the U.S. government has few regulations governing AI’s unethical gathering of personal data, probably because the government also uses these FRT programs. According to the U.S. Government Accountability Office, “18 of 24 agencies reported using FRT for one or more purposes, with digital access and domestic law enforcement [being] the most common.”

There is no excuse for our country’s lack of procedures concerning the collection of our personal data via AI. Sure, President Joseph Biden signed an executive order concerning “safe AI.” However, upon reading it, I found that it was incredibly vague and directed almost entirely at government agencies, placing few obligations on private companies.

Since private companies create these FRT programs, they are largely untouched by the executive order Biden signed. They don’t need to report to the government what they are doing with our personal information. The only thing required of the largest private companies is that they share the results of their AI safety tests with the government. Unfortunately, disclosing these safety tests does nothing to protect the private information that FRT collects.

The UK and EU have already set regulations against shady data collection by private companies, so why haven’t we? In the EU, FRT companies must follow privacy by design (PbD). PbD ensures that any intimate information an FRT system gathers, such as biometric data and medical history, is immediately destroyed. This is a viable approach, as it requires all companies to follow the rule, not just federal agencies.

Unfortunately, our government isn’t terribly focused on the ethical issues AI raises. But that doesn’t mean we can’t do anything about it. As corny as it sounds, the best way to have our concerns heard is by contacting our local representatives. If sponsored by a representative, a bill can be introduced in the House of Representatives and, hopefully, passed on to the Senate.

The next best thing you can do is be mindful of what you put online. Many FRT programs build a profile of you from what you post. Limit the number of times you post your face to social media, or, at the very least, be wary of how much time you spend on it. What you post can be used against you, even if your account is private.

Fight for our right to privacy; we deserve to have our private information remain private.