Slack users across the web—on Mastodon, on Threads, and on Hacker News—have responded with alarm to an obscure privacy page that outlines the ways in which their Slack conversations, including DMs, are used to train what the Salesforce-owned company calls “Machine Learning” (ML) and “Artificial Intelligence” (AI) systems. The only way to opt out is for the admin of your company’s Slack setup to send an email to Slack requesting that the training be turned off.
The policy, which applies to all Slack instances—not just those that have opted into the Slack AI add-on—states that Slack systems “analyze Customer Data (e.g. messages, content and files) submitted to Slack as well as Other Information (including usage information) as defined in our privacy policy and in your customer agreement.”
So, basically, everything you type into Slack is used to train these systems. Slack states that data “will not leak across workspaces” and that there are “technical controls in place to prevent access.” Even so, we all know that conversations with AI chatbots are not private, and it’s not hard to imagine this going wrong somehow. Given the risk, the company must be offering something extremely compelling in return…right?
What are the benefits of letting Slack use your data to train AI?
The section outlining the potential benefits of Slack feeding all of your conversations into a large language model says this will allow the company to provide improved search results, better autocomplete suggestions, better channel recommendations, and (I wish I were kidding) improved emoji suggestions. If this all sounds useful to you, great! I personally don’t think any of these things—except possibly better search—will do much to make Slack more useful for getting work done.
The emoji thing, particularly, is absurd. Slack is literally saying that they need to feed your conversations into an AI system so that they can provide better emoji recommendations. Consider this actual quote, which I promise you is from Slack’s website and not The Onion:
Slack might suggest emoji reactions to messages using the content and sentiment of the message, the historic usage of the emoji and the frequency of use of the emoji in the team in various contexts. For instance, if 🎉 is a common reaction to celebratory messages in a particular channel, we will suggest that users react to new, similarly positive messages with 🎉.
I am overcome with awe just thinking about the implications of this incredible technology, and am no longer concerned about any privacy implications whatsoever. AI is truly the future of communication.
How to opt your company out of Slack’s AI training
The bad news is that you, as an individual user, cannot opt out of Slack using your conversation history to train its large language model. That can only be done by a Slack admin, which in most cases is going to be someone in the IT department of your company. And there’s no button in the settings for opting out—admins need to send an email asking for it to happen.
Here’s Slack’s exact language on the matter:
If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at [email protected] with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.
This smells like a dark pattern—making something deliberately annoying to do in order to discourage people from doing it. Hopefully the company will make the opt-out process easier in the wake of the earful it’s currently getting from customers.
A reminder that Slack DMs aren’t private
I’ll be honest, I’m a little amused at the prospect of my Slack data being used to improve search and emoji suggestions for my former employers. At previous jobs, I frequently sent DMs to work friends filled with negativity about my manager and the company leadership. I can just picture Slack recommending certain emojis every time a particular CEO is mentioned.
Funny as that idea is, though, the whole situation serves as a good reminder to employees everywhere: Nothing you say on Slack—even in a direct message—is private. Slack uses that information to train tools like this, yes, but the company you work for can also access those private messages pretty easily. I highly recommend using something not controlled by your company if you need to shit talk said company. Might I suggest Signal?