Like the idea of AI, but wish you could use it without having to trust some large server somewhere? You can run large language models locally, giving you something like ChatGPT that works entirely offline.
Jan is a free and open source application that makes it easy to download multiple large language models and start chatting with them. There are simple installers for Windows, macOS, and Linux. Now, this isn’t perfect. The models aren’t necessarily as good as the latest ones from OpenAI or Google, and depending on how powerful your computer is, the results might take a while to come in. On the flip side, though, you can use this technology without having to worry about the privacy and security concerns that come with using an online AI service.
Credit: Justin Pot
After installing Jan, you will need to choose a model. If you don’t know which one to pick, I’d start with Mistral, at the top of the list; you can always try something else later if you don’t like the results. As soon as the model finishes downloading, you can start chatting.
You can provide a few general instructions for the bot in the right panel, if you want—the default is “you are a helpful assistant,” but you can change it to whatever you like for a bit more context. After that, you can start using the service the same way you would ChatGPT or Google Gemini. I tried asking for a quick summary of a recent article of mine; it did a decent job.
Credit: Justin Pot
Again, if you don’t like the results, or find that getting results takes too long, try a few different models. They’re all free and all optimized for different things: some are specifically for coding questions, for example, and some are optimized to run on computers without a lot of CPU power. It’s a matter of finding what works best for you.
And there’s another cool feature: while the application is running, it can also act as a local, OpenAI-compatible API. If you don’t know what that means, don’t worry—it’s a very geeky thing. In short, though, it means you can use Jan with apps that would otherwise require a paid ChatGPT subscription: just enable the API feature in the settings and point your other applications to the local IP address and port number instead of to OpenAI.
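If you're curious what "OpenAI-compatible" looks like in practice, here's a rough sketch of how a script could talk to Jan's local server using Python's standard library. This is illustrative, not official: the port number and the model ID are assumptions, so check Jan's API settings for the actual address and the exact model name you downloaded.

```python
import json
import urllib.request

# Assumed address of Jan's local API server; the real port is shown in
# (and configurable from) Jan's settings, so adjust this to match yours.
BASE_URL = "http://localhost:1337/v1"

# The request body follows the OpenAI chat completions format, which is
# why apps built for OpenAI can be pointed here instead.
payload = {
    "model": "mistral-7b",  # hypothetical model ID; use the one Jan shows you
    "messages": [
        # Same idea as the instructions box in Jan's right panel:
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize local LLMs in one sentence."},
    ],
}

def chat(base_url: str = BASE_URL) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses put the generated text here:
    return body["choices"][0]["message"]["content"]
```

The key point is that nothing in the request is specific to Jan: it's the same URL path and JSON shape the OpenAI API uses, just aimed at your own machine, which is why swapping the base address is usually all an existing app needs.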