When It’s OK to Use AI at Work (and When It’s Not)


This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

Almost as soon as ChatGPT launched in late 2022, the world started debating how and when to use it. Is it ethical to use generative AI at work? Is that “cheating”? Or are we simply witnessing the next big technological innovation, one that everyone will either have to embrace or get left behind dragging their feet?

AI is now a part of work, whether you like it or not

AI, like anything else, is a tool first and foremost, and tools help us get more done than we could on our own. (My job would literally not be possible without my computer.) In that regard, there’s nothing wrong, in theory, with using AI to be more productive. In fact, some work apps have embraced AI completely. Just look at Microsoft: The company basically defined “computing at work,” and it’s now adding AI functionality directly into its products.

Since last year, the entire Microsoft 365 suite—including Word, PowerPoint, Excel, Teams, and more—has adopted “Copilot,” the company’s AI assistant. Think of it as Clippy from back in the day, only way more useful. In Teams, you can ask the bot to summarize your meeting notes; in Word, you can ask the AI to draft a work proposal from your bullet list, then ask it to tighten up specific paragraphs you aren’t thrilled with; in Excel, you can ask Copilot to analyze and model your data; in PowerPoint, you can ask it to build an entire slideshow from a single prompt.

These tools don’t just exist: They’re being actively built into the products we use for work, and their use is encouraged. It reminds me of how Microsoft advertised Excel itself back in 1990: The ad presents traditional spreadsheets as time-consuming, rigid, and featureless, while with Excel, you can put together a working presentation in the span of an elevator ride. We don’t see that as “cheating” at work: This is work.

Intelligently relying on AI is the same thing: Just as 1990’s Excel extrapolated data into cells you didn’t fill in yourself, 2023’s Excel will answer questions about your data and execute commands given in plain language rather than in formulas and functions. It’s a tool.

What work shouldn’t you use AI for?

Of course, there’s still an ethical line you can cross here. Tools can be used to make work better, but they can also be used to cheat. If you use the internet to hire someone else to do your job, then pass that work off as your own, that’s not using the tool to do your work better. That’s wrong. If you simply ask Copilot or ChatGPT to do your job for you in its entirety, same deal.

You also have to consider your own company’s guidelines when it comes to AI and outside technology. Given AI’s prominence over the past year and a half or so, it’s possible your organization has already established rules: Maybe your company gives you the green light to use AI tools within reason. If so, great! But if your company decides you can’t use AI for any work purpose, you might want to log out of ChatGPT during business hours.

But let’s be real: Your company probably isn’t going to know whether you use AI tools, as long as you’re using them responsibly. The bigger issue here is privacy and confidentiality, something not enough people think about when using AI in general.

In brief, generative AI tools work because they’re trained on huge sets of data. But AI is far from perfect, and the more data a system has to work with, the more it can improve. Unless the service specifically lets you opt out of this training, you’re training the AI with every prompt you give it. When you ask Copilot for help writing an email, it takes in the entire exchange, from how you reacted to its responses to the contents of the email itself.

As such, it’s a good rule of thumb to never give confidential or sensitive information to AI. An easy way to stay out of trouble is to treat AI like you would your work email: Only share information with something like ChatGPT that you’d be comfortable emailing a colleague. After all, your emails could very well be made public someday: Would you be OK with the world seeing what you said? If so, you should be fine sharing it with AI. If not, keep it away from the robots.
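If you’re the scripting type (or your team sends text to AI tools programmatically), you can even make that rule of thumb partly mechanical. Below is a minimal Python sketch of a pre-send scrub; the regex patterns and the scrub function are illustrative assumptions for this example, not a complete privacy filter.

```python
import re

# Illustrative patterns only (an assumption for this sketch, not an
# exhaustive privacy filter): obvious emails, US phone numbers, and SSNs.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched details with labeled placeholders like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

if __name__ == "__main__":
    draft = "Loop in jane.doe@acme-corp.com before Friday; her cell is 555-867-5309."
    print(scrub(draft))
    # Prints: Loop in [EMAIL] before Friday; her cell is [PHONE].
```

Treat a scrub like this as a backstop, not a guarantee: It can’t catch everything (names, project details, context), so the “would I email this to a colleague?” test still applies to whatever’s left.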

If the service offers you the choice, opt out of this training: Your interactions with the AI won’t be used to improve the service, and your previous chats will likely be deleted from the company’s servers after a set period of time. Even so, refrain from sharing private or corporate data with an AI chatbot at all: If the developer retains more data than we realize and is ever hacked, your work data could end up in a precarious place.


