# 🚀 Complete Installation Guide: OpenCode + LM Studio + Qwen 3.6 (Q4 Unsloth)

Started by Theo Gottwald, April 18, 2026, 03:11:41 PM


I have tested this combination. It is currently the BEST free coding combination.
FREE because everything runs on your own machine - nothing is sent to the cloud.

On an RTX 5090 it will run with a 260,000-token context. That is professional API quality - and the speed will also be excellent.
If you do not have a 5090, you will need to offload some of the model's layers to system RAM to keep the full context window. This is slower, but may still work fine!
I generally do not recommend going below Q4 (Q2 or similar), because the output can degrade into nonsense, which is not great for coding.

But it may still work, unless your computer is really old.
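As a rough sanity check before downloading, you can estimate a GGUF file's size from the parameter count and the quantization's bits per weight. This is only a back-of-the-envelope sketch - the 30B parameter count and the bits-per-weight figures below are illustrative assumptions, not specs for this particular model:

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters times bits per weight, over 8."""
    return params_billion * bits_per_weight / 8

# Illustrative figures: ~4.5 bits/weight is typical for Q4_K_M,
# ~8.5 for Q8_0; a hypothetical 30B-parameter model is assumed.
print(gguf_size_gb(30, 4.5))  # → 16.875 (GB at Q4_K_M)
print(gguf_size_gb(30, 8.5))  # → 31.875 (GB at Q8_0)
```

If that estimate exceeds your VRAM, expect layer offloading to RAM (and slower generation), as described above.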


This guide walks you through setting up a **fully local AI coding environment** using **OpenCode**, **LM Studio**, and the optimized **Qwen 3.6 Q4 (Unsloth)** model.
No cloud. No API costs. Full control. ⚡

---

## 🧩 1. Install LM Studio (Local LLM Runtime)

🔹 Download LM Studio
👉 https://lmstudio.ai

🔹 Install & Launch

* Run installer
* Start LM Studio
* Go to **"Discover" tab**

🔹 Enable Developer Mode

* Settings → Enable **Developer Mode**
* This unlocks API access (important!)

---

## 🤖 2. Download Qwen 3.6 Q4 (Unsloth)

🔍 In LM Studio → Search:

```
Qwen 3.6 Unsloth Q4
```

If you have a very weak system with little VRAM, you can try smaller quantizations such as Q2 or below - but expect a quality drop.

📦 Recommended:

* Quantization: **Q4_K_M or similar**
* Reason: Best balance between speed ⚡ and quality 🧠

⬇️ Click **Download**

---

## ▶️ 3. Load & Start the Model

* Go to **"Local Server" tab**
* Select the downloaded Qwen model
* Click **Start Server**

📡 Default API Endpoint:

```
http://127.0.0.1:1234/v1
```

✅ Now your local AI API is running
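A quick way to confirm the server is actually reachable is to query its `/models` endpoint, which is part of the OpenAI-compatible API that LM Studio exposes. A minimal sketch using only the standard library:

```python
import json
import urllib.request

def check_lmstudio(base_url: str = "http://127.0.0.1:1234/v1"):
    """Return the list of model ids served by LM Studio, or None if unreachable."""
    try:
        with urllib.request.urlopen(base_url + "/models", timeout=5) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except OSError:
        return None

models = check_lmstudio()
print(models if models is not None else "LM Studio server not reachable")
```

If this prints your Qwen model's id, the endpoint is ready for OpenCode.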

---

## 🧠 4. Install OpenCode

📦 Install via Node.js:

```
npm install -g opencode-ai
```

(If Node.js is missing, install it from https://nodejs.org)

---

## ⚙️ 5. Connect OpenCode to LM Studio

Create or edit config:

```
~/.opencode/config.json
```

Example (the `model` value should match the model identifier shown in LM Studio's server tab):

```json
{
  "providers": {
    "local": {
      "baseUrl": "http://127.0.0.1:1234/v1",
      "model": "qwen-3.6"
    }
  },
  "defaultProvider": "local"
}
```

---

## 🧪 6. Test Your Setup

Run:

```
opencode chat
```

💬 Try prompt:

```
Write a hello world in PowerBASIC
```

If everything is correct → 🎉 AI responds locally!
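Under the hood, OpenCode talks to the local server with standard OpenAI-style chat-completions requests. A minimal sketch of such a request body - the model name matches the config example above, and the field names follow the OpenAI chat API that LM Studio emulates:

```python
import json

def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_chat_request("qwen-3.6", "Write a hello world in PowerBASIC")
print(json.dumps(payload, indent=2))
# POST this JSON to http://127.0.0.1:1234/v1/chat/completions to get a reply
```

This is handy for debugging: if a hand-built request like this works but OpenCode does not, the problem is in the OpenCode config rather than in LM Studio.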

---

## 🧠 Performance Tips

⚡ Use GPU if available (RTX recommended)
⚡ Close unused apps → free VRAM
⚡ Q4 = best balance of speed and quality, Q2 = smaller but lower quality, Q6/Q8 = higher quality but larger

---

## 🧰 Optional Add-ons

✔ VS Code Integration
✔ MCP Server (for tool automation)
✔ Custom prompts / agents
✔ File system tools (auto-edit code)

---

## 🧨 Troubleshooting

❌ Model not responding
→ Check LM Studio server is running

❌ Connection error
→ Verify:

```
http://127.0.0.1:1234/v1
```

❌ Slow performance
→ Use smaller quant (Q4)

---

## 🎯 Result

You now have:

✅ Fully local AI coding assistant
✅ No API costs
✅ Fast inference with Qwen 3.6
✅ OpenCode automation ready

---

## 💬 Final Note

This setup is ideal for:

* Code generation 💻
* AI agents 🤖
* Local automation 🔧
* Privacy-focused workflows 🔐

---

🔥 Once running, you can extend this with MCP tools and turn it into a **full autonomous coding system**.

---