Chat with local LLMs, generate code, and read markdown-rendered responses without leaving Visual Studio Code. Completely offline. No telemetry.
Enhance your coding workflow with local AI integration.
Powered by Ollama. Use models such as Llama 3, Mistral, and DeepSeek directly in your side panel.
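Note that the extension can only use models that have already been pulled through Ollama. As a quick sketch, assuming the standard Ollama CLI and the llama3 model tag (substitute any model you prefer):

$ ollama pull llama3   # download the model weights locally
$ ollama list          # confirm the model appears in your local library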
Full rendering support for bold, italics, tables, and lists. Read responses clearly.
Code blocks for Python, JS, JSON, and Bash are automatically highlighted for readability.
Before installing, ensure your environment is ready to run local LLMs efficiently.
$ python --version   # confirm a Python interpreter is available on your PATH
$ ollama serve       # start the Ollama server; it must be running for the extension to connect
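Once ollama serve is running, you can verify that the API is reachable. This check assumes Ollama's default address of localhost:11434:

$ curl http://localhost:11434/api/tags   # lists the models available locally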