User Outcry as Slack Scrapes Customer Data for AI Model Training

Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models.

By default, and without requiring users to opt in, Slack said its systems have been analyzing customer data and usage information (including messages, content and files) to build AI/ML models to improve the software.

The company insists it has technical controls in place to block Slack from accessing the underlying content and promises that data will not leak across workspaces but, despite these assurances, corporate Slack admins are scrambling to opt out of the data scraping.

This line in Slack’s communication sparked a social media controversy with the realization that content in direct messages and other sensitive material posted to Slack was being used to develop AI/ML models, and that opting out would require sending email requests:

“If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.”

Multiple CISOs polled by SecurityWeek said they’re not surprised to hear that Slack, like many big-tech vendors, is developing AI/ML models on data flowing through its platform, but grumbled that customers should not bear the burden of opting out of this data scraping.

In a social media post responding to critics, Slack said it has platform-level machine-learning models for features such as channel and emoji recommendations and search results, and insisted that customers can exclude their data from helping train those (non-generative) ML models.

The company said Slack AI, a gen-AI experience natively built into Slack, is a separately purchased add-on that uses Large Language Models (LLMs) but does not train those LLMs on customer data. “Because Slack AI hosts the models on its own infrastructure, your data remains in your control and exclusively for your organization’s use. It never leaves Slack’s trust boundary and no third parties, including the model vendor, will have access to it,” the company said.

In its documentation, Slack said data will not leak across workspaces. “For any model that will be used broadly across all of our customers, we do not build or train these models in such a way that they could learn, memorize, or be able to reproduce some part of customer data.”