Ken Regum

On Artificial Intelligence and Privacy

I've always thought of Artificial Intelligence as a fad, but boy was I wrong. Not only was I thinking narrowly (AI as a concept has been around since the 1950s and encompasses many things I already interact with, like Google Maps, Minecraft, and Netflix), I was also wrong about the advantages of generative AI, which was the type of AI I thought was a fad. Can you blame me for being cautious and skeptical after the rise and fall of NFTs, various cryptocurrency scams, and chatbots that seem to devolve into neo-Nazis?

As a privacy lawyer, it's awesome to be at the forefront of AI politics. On one side, you have people who wish to push AI innovation; on the other, people who wish to be careful and put up guardrails lest AI go places we never intended it to (hence a growing field in privacy called AI governance). Interestingly, these can sometimes be two sides of the same coin, with people who work in AI themselves pushing for more regulation in the field. Who knows the dangers better than the people with technical and practical experience in it?

In any case, when it comes to AI, privacy law is concerned with how data subjects can exercise their rights under the law. When you feed AI personal data for training, even after transformation that makes it harder to link the data back to its source, data subjects should still be able to exercise rights like access, blocking, rectification, and erasure. The AI model itself should either contain no personal data at all or involve only anonymized data that can no longer identify an individual.

Moreover, there is the problem of collecting personal data for AI. Do you seek consent from data subjects? That would generally be impractical, considering the volume of data and the fact that consent can be withdrawn. Do you use legitimate interest instead? That would work, but it should be noted that, at least in the Philippines, legitimate interest can only be used as a legal basis to process ordinary personal information. Do you have the means to filter sensitive personal information out of your data set? There is also the matter of transparency: what if you are only now planning to use AI and have already collected data from previous engagements? Is it sufficient to update your privacy notice and use the data for another purpose? Should data subjects opt in or opt out of being training fodder for AI?

Finally (though this is certainly not all there is to it), there is the problem of accountability and ethics. Since we often cannot see how an AI system arrives at its output, who is accountable when it makes a decision that violates the rights of data subjects? Should it be the one who developed the system, the one who deployed it, or the one who acted on its output? And since AI is trained partly on human data, how do we prevent these systems from developing biases and discriminating against persons based on personal information like race, sex, gender, and health?

These are just some of the questions AI raises in the privacy field. And they do not even cover the security risks of developing AI, or the issues around profiling, automated decision-making, extensive surveillance, and cross-border data transfer (CBDT) rules.

According to NPC Advisory Opinion No. 2024-002:

…(W)e see no manifest conflict with the use of AI in relation to the provisions of the Data Privacy Act of 2012 (DPA). The DPA recognizes the policy of the State to ensure the free flow of information and to promote innovation and growth, alongside its duty to protect the fundamental human rights of privacy and of communication.

Section 4 of the DPA states that the law applies to the processing of all types of personal information, save for some exceptions. The DPA does not distinguish as to the type of technology used in the processing of personal information. Hence, whether the processing uses AI technology or not, the processing must abide by the provisions of the DPA as with other means and methods of processing information. In other words, personal information controllers (PICs) who are processing personal information using AI technology must adhere to the general principles of privacy, have a lawful basis for processing, implement reasonable and appropriate security measures, and uphold data subject rights, among other obligations under the DPA. Consequently, PICs are accountable for the means and methods they use in processing personal information.


#law #privacy