Slack Caught Using User Data To Train AI Models Without Explicit Permission
Several companies are using customer data to train their AI tools and models without getting explicit permission. Slack is one such company that was recently caught following this practice. Learn more about it in this article.
Recently, several companies have been found using customer data to train their AI tools without explicit consent. The latest to be caught doing this is Slack. According to the revelation, Slack has been using its users’ files, messages, and other data to train its AI features. Worse, users were automatically opted into this arrangement without their consent.
This incident has annoyed many people because the company didn’t make its intentions clear from the beginning. Corey Quinn, a Duckbill Group executive, expressed his anger in a post on X, quoting an excerpt from Slack’s Privacy Principles that reads, “To develop AI/ML models, our systems analyze Customer Data (e.g., messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement.”
Replying to the post, Slack confirmed that it uses customer content to train certain AI tools. However, it said the data doesn’t feed its premium Slack AI offering, which it markets as entirely isolated from customer data.
Still, many people argued that the company should have given a prominent heads-up and allowed people to opt out before data collection began.
Slack’s opt-out process is also problematic, as individual users cannot opt out on their own. Instead, an admin for the entire organization must request the opt-out by sending an email with a specific subject line to a particular email address. Inconsistencies in Slack’s policies have added fuel to the fire. On one hand, Slack claims that users need not worry about their data privacy and that their data isn’t used to train Slack AI. For example, one section of the page marketing Slack’s premium generative AI tools reads, “Work without worry. Your data is your data. We don’t use it to train Slack AI. Everything runs on Slack’s secure infrastructure, meeting the same compliance standards as Slack itself.”
On the other hand, the Privacy Principles excerpt quoted above seems to contradict that assurance.
More Companies Resort To Similar Tactics
The incident highlights rising tensions around artificial intelligence and data privacy as the race among businesses to provide AI offerings heats up. Slack is just one of several companies recently found to be using user data to train AI tools without consent. OpenAI was recently in trouble when The New York Times sued the company for allegedly using its archives without permission to train chatbots. The same outlet reported in 2020 that Clearview AI had harvested billions of social media images without user consent. In another incident, Getty Images sued Stability AI, the maker of Stable Diffusion, for copyright infringement.
To exacerbate the problem, many companies are making it difficult for users to opt out of this automatic opt-in arrangement. Again, Slack is just one example.
Stack Overflow recently announced that OpenAI would soon be able to train its AI models on the knowledge and answers users have contributed to the platform over the last 15 years. When one Stack Overflow user protested the platform’s decision to use contributed content without explicit permission, his account was suspended for a week. The company even reportedly told him that his content no longer belonged to him once it was posted to the platform. Several other users claimed to have had similar experiences. Simply put, the company makes it impossible for users to revoke its permission to publish, store, distribute, and use such content.
This appears to have become common practice in the tech industry, one that runs counter to the data privacy principle of giving people explicit choices over how their personal data is used.