Unsure what NOYB is, even after skimming this, but an interesting bit in there about how people wouldn’t have the right to be forgotten once the AI has been trained.
I think there’s a „reasonable” qualifier in the right to be forgotten. For example, if you have old backups on tapes that you must keep for a few more years for whatever reason, you can decline to alter them if the cost would be exorbitant, as long as you ensure the users’ data won’t come back after a recovery from those backups.
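A minimal sketch of how that backup caveat is often handled in practice (all names here are illustrative, not any particular system): instead of rewriting old tapes on every erasure request, keep a “tombstone” list of erased user ids and replay it after any restore, so a forgotten user never comes back with the restored data.

```python
# Hypothetical sketch: erasure requests are recorded as tombstones that
# outlive the backups, and get re-applied after any restore.

tombstones = set()  # user ids whose erasure requests must survive restores

def erase(user_id, live_db):
    """Serve an erasure request against the live database."""
    live_db.pop(user_id, None)
    tombstones.add(user_id)  # stored independently of the tape backups

def restore_from_backup(backup):
    """Restore a copy of an old backup, re-applying past erasures."""
    restored = dict(backup)
    for uid in tombstones:
        restored.pop(uid, None)
    return restored
```

The point being that the tapes themselves stay untouched; only the restored copy is cleaned up before it goes anywhere.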
They might also train their models on pseudonymized datasets, so as long as it’s too expensive to de-anonymize the user data, it could be fine in terms of the GDPR.
For example: you generate car-trip stats per city in a country, per day. You could argue that you don’t need to delete user data that feeds into this set, as long as you ensure there are always enough trips recorded (so nobody can be de-anonymized from a single entry), and also because deleting it would falsify your historical stats.
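The trip-stats idea above can be sketched as a simple bucket-suppression rule (a k-anonymity-style threshold; `K` and the data shape are assumptions here, not anyone’s real pipeline):

```python
from collections import Counter

# Hypothetical sketch: aggregate trips per (city, day) and suppress any
# bucket with fewer than K entries, so a single trip can never be traced
# back to one user.
K = 10  # assumed minimum bucket size

def aggregate_trips(trips, k=K):
    """trips: iterable of (city, day) pairs from raw trip records."""
    counts = Counter(trips)
    return {bucket: n for bucket, n in counts.items() if n >= k}

stats = aggregate_trips([("Berlin", "2024-05-01")] * 12 +
                        [("Hamburg", "2024-05-01")] * 3)
# The Berlin bucket (12 trips) survives; Hamburg (3 trips) is suppressed.
```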
At my company, which likes to be super compliant, we do remove people from this kind of stats using pseudonymous references. If you delete your account, an event rewrites the historical analytics data and removes all traces of your activity. But that’s because we can and want to be cool (company culture principles).
Other data we have (website analytics) can’t go through this process, because we ensure we never know WHO did something. We only know what and when.
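A rough sketch of the pseudonymous-reference approach described above (names and storage are illustrative only): analytics rows carry only a random pseudonym, and a separate table links account to pseudonym. Deleting the account strips that pseudonym’s rows and then destroys the link.

```python
import secrets

# Hypothetical sketch: the pseudonyms table is the only way back from
# analytics rows to a person; deleting it severs that link for good.

pseudonyms = {}  # account_id -> pseudonym
analytics = []   # rows of (pseudonym, event, timestamp)

def track(account_id, event, ts):
    """Record an event under the account's pseudonym."""
    p = pseudonyms.setdefault(account_id, secrets.token_hex(16))
    analytics.append((p, event, ts))

def forget(account_id):
    """Account deletion: rewrite historical analytics, drop the link."""
    p = pseudonyms.pop(account_id, None)
    if p is not None:
        analytics[:] = [row for row in analytics if row[0] != p]
```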
I think there’s a „reasonable” qualifier in the right to be forgotten.
The original case (Google Spain v. Costeja, decided by the CJEU in 2014) was a Spaniard haunted by the top Google results for his name: 1998 newspaper notices about his property being auctioned off over old debts. No scandal or such, just an ordinary debt long since resolved, but he could argue it was no longer relevant and still hurting him.
He complained, and in the end Google had to delist the result, but only for searches on his name. Not when you search for the auction itself, and the newspaper didn’t have to do anything. As far as I know you can still find the article.
If you’re a journalist writing the guy’s biography, you’ll find it, push comes to shove in some offline archive. But random people won’t see him nailed to a virtual pillory; that’s what all this is about.
I don’t think it’s really an issue for AI, but it has to be engineered in. Ultimately it’s about judging relevancy.
Unsure what NOYB is, even after skimming this.
a quick overview of the legal battles of Max Schrems / NOYB: https://en.wikipedia.org/wiki/Max_Schrems