I’m experimenting with the Auditor MCP server and so far I’m really liking how easy it is to mine my data quickly, without having to fumble through writing the appropriate query in Auditor directly.
However … I don’t have a subscription to OpenAI (or any other AI for that matter) so the limits with Claude are less than ideal (after just one question with some refinement it locked me out for 6 hours).
Are there any truly free clients I could use? All I am looking for is to work with the Auditor data, I don’t need anything outside of that.
Hi Matt. Good question. I don’t think you’ll find any unlimited/free hosted (SaaS) ones, but you could run an LLM locally, which should work fairly well. I’ve heard good things about the open source Goose agent (codename goose), which supports MCP. I’ve not had a chance to test this yet, but it’s on my list to try. Would love to hear any feedback if you’re able to check it out.
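If you want to poke at the Auditor server without any agent in the middle first, the official MCP Python SDK can talk to an MCP server directly over stdio and list its tools. A rough, untested sketch; the launch command below is just a placeholder, so swap in whatever command/args your Claude config uses for the Auditor server:

```python
# Rough sketch: connect to an MCP server over stdio and list its tools.
# The command/args below are placeholders; use whatever launches the
# Auditor MCP server in your existing Claude Desktop config.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="auditor-mcp-server",  # hypothetical command, substitute your own
    args=[],
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

if __name__ == "__main__":
    asyncio.run(main())
```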
@Grady … I’ll definitely have to take a look at that. From a quick glance I would likely need to pair it with a local LLM, but that’s not out of the realm of possibility.
@jordan.violet … I’d have the resources of a multi-node VMware cluster available for it. I could give it a bunch of Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz cores (maybe 16, possibly more) and maybe 32 GB of RAM without unbalancing the cluster … no GPU available though. It would be something with limited use for now (just me and my occasional Auditor usage), but if I can find good utility for other things I could definitely justify building out something bigger/better in the future.
@mlaski I see that the resources you have access to do not include a GPU. I wouldn’t even bother unless a GPU resource is available. If you can get a GPU resource, I would do the following:
Depending on your technical experience, download LM Studio (user friendly) or Ollama (more command-line experience needed).
Those will both provide you with a local LLM that you can use.
Goose is pretty good but that is just an agent that works on top of one of the LLMs you install locally.
As for the model, Google’s local Gemma models provide the best experience with limited hardware.
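Once Ollama is installed and you’ve pulled a Gemma model (e.g. `ollama pull gemma2`), a quick way to sanity-check it is to hit its local REST API. A minimal sketch in Python; the model name and prompt are just examples:

```python
# Minimal sketch: query a locally running Ollama server.
# Assumes Ollama is running on its default port (11434) and a Gemma
# model has already been pulled, e.g. `ollama pull gemma2`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2",  # example model name; use whatever you pulled
        "prompt": "Summarize what an MCP server does in one sentence.",
        "stream": False,    # return a single JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

LM Studio takes a slightly different approach: its local server exposes an OpenAI-compatible endpoint (localhost:1234 by default), so the request shape there is a bit different.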