From Local LLM to Local Agent: Tools, Prompts, and the Sharp Edges 🤖💻
An earlier post covered building a local LLM stack with Ollama, Open WebUI, and Continue. That stack was about getting something running — a chat box, a model in the IDE, a feel for what local inference looks like. This follow-up takes the next step: turning that stack into an...