# Memory - 2026-02-04
## Ollama Configuration
- Location: Separate VM at `10.0.0.10:11434`
- OpenClaw config: `baseUrl: http://10.0.0.10:11434/v1`
- Only two models configured (clean setup)
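Since the endpoint is OpenAI-compatible, it can be exercised with a plain HTTP client. A minimal sketch, assuming the standard `/chat/completions` route under the `baseUrl` above (the helper name is illustrative, not part of OpenClaw):

```python
import json
import urllib.request

OLLAMA_BASE = "http://10.0.0.10:11434/v1"  # baseUrl from the config above

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the Ollama VM."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{OLLAMA_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending requires the VM to be reachable:
# with urllib.request.urlopen(build_chat_request("kimi-k2.5:cloud", "hi")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```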
### Available Models
| Model | Role | Notes |
|---|---|---|
| kimi-k2.5:cloud | Primary | Default (me); 340B, remotely hosted |
| hf.co/unsloth/gpt-oss-120b-GGUF:F16 | Backup | Fallback, 117B params, 65GB |
### Aliases (shortcuts)
| Alias | Model |
|---|---|
| kimi | ollama/kimi-k2.5:cloud |
| gpt-oss-120b | ollama/hf.co/unsloth/gpt-oss-120b-GGUF:F16 |
### Switching Models
```shell
# Switch to backup
/model ollama/hf.co/unsloth/gpt-oss-120b-GGUF:F16

# Or via CLI
openclaw chat -m ollama/hf.co/unsloth/gpt-oss-120b-GGUF:F16

# Switch back to me (kimi)
/model kimi
```
## TTS Configuration - Kokoro Local
- Endpoint: `http://10.0.0.228:8880/v1/audio/speech`
- Status: Tested and working (63KB MP3 generated successfully)
- OpenAI-compatible: Yes (supports `tts-1`, `tts-1-hd`, `kokoro` models)
- Voices: 68 total across languages (American, British, Spanish, French, German, Italian, Japanese, Portuguese, Chinese)
- Default voice: `af_bella` (American Female)
- Notable voices: `af_nova`, `am_echo`, `af_heart`, `af_alloy`, `bf_emma`
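A speech request against this endpoint can be built the same way as any OpenAI-style call. A sketch assuming the standard speech payload fields (`model`, `input`, `voice`); the helper name is illustrative:

```python
import json
import urllib.request

KOKORO_URL = "http://10.0.0.228:8880/v1/audio/speech"  # endpoint from above

def build_speech_request(text: str, voice: str = "af_bella") -> urllib.request.Request:
    """Build a speech request; the server returns MP3 bytes on success."""
    body = json.dumps({"model": "kokoro", "input": text, "voice": voice}).encode()
    return urllib.request.Request(
        KOKORO_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending requires the Kokoro box to be reachable:
# with urllib.request.urlopen(build_speech_request("Hello")) as r:
#     open("out.mp3", "wb").write(r.read())
```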
### Config Schema Fix
```jsonc
{
  "messages": {
    "tts": {
      "auto": "always", // Options: "off", "always", "inbound", "tagged"
      "provider": "elevenlabs", // or "openai", "edge"
      "elevenlabs": {
        "baseUrl": "http://10.0.0.228:8880" // <-- Only ElevenLabs supports baseUrl!
      }
    }
  }
}
```
**Important:** `messages.tts.openai` does NOT support `baseUrl` - only `apiKey`, `model`, and `voice`.
**Solutions for local Kokoro:**
- Custom TTS skill (cleanest) - call the Kokoro API directly
- `OPENAI_BASE_URL` env var - may redirect all OpenAI calls globally
- Use as Edge TTS - treat Kokoro as a "local Edge" replacement
## Infrastructure Notes
- Container: Running without GPUs attached (CPU-only)
- Implication: All ML workloads (Whisper, etc.) will run on CPU
## User Preferences
### Installation Decision Tree
When asked to install/configure something:
- Can it be a skill? → Create a skill
- Does it work in TOOLS.md? → Add to TOOLS.md (environment-specific notes: device names, SSH hosts, voice prefs, etc.)
- Neither → Suggest other options
Examples:
- New API integration → Skill
- Camera names/locations → TOOLS.md
- Custom script/tool → Skill
- Preferred TTS voice → TOOLS.md
### Core Preferences
- Free — Primary requirement for all tools/integrations
- Local preferred — Self-hosted over cloud/SaaS when possible
## Agent Notes
- Do NOT restart/reboot the gateway — user must turn me on manually
- Request user to reboot me instead of auto-restarting services
- TTS config file: `/root/.openclaw/openclaw.json`, under the `messages.tts` key
# Bootstrap Complete - 2026-02-04
## Files Created/Updated Today
- ✅ USER.md — Rob's profile
- ✅ IDENTITY.md — Kimi's identity
- ✅ TOOLS.md — Voice/text rules, local services
- ✅ MEMORY.md — Long-term memory initialized
- ✅ AGENTS.md — Installation policy documented
- ✅ Deleted BOOTSTRAP.md — Onboarding complete
## Skills Created Today
- ✅ `local-whisper-stt` — Local voice transcription (Faster-Whisper, CPU)
- ✅ `kimi-tts-custom` — Custom TTS with Kimi-XXX filenames
## Working Systems
- Bidirectional voice (voice↔voice, text↔text)
- Local Kokoro TTS @ 10.0.0.228:8880
- Local SearXNG web search
- Local Ollama @ 10.0.0.10:11434
## Key Decisions
- Voice-only replies (no transcripts to Telegram)
- Kimi-YYYYMMDD-HHMMSS.ogg filename format
- Free + Local > Cloud/SaaS philosophy established
# Pre-Compaction Summary - 2026-02-04 21:17 CST
## Major Setup Completed Today
### 1. Identity & Names Established
- AI Name: Kimi 🎙️
- User Name: Rob
- Relationship: Direct 1:1, private and trusted
- Deleted: BOOTSTRAP.md (onboarding complete)
### 2. Bidirectional Voice System ✅
- Outbound: Kokoro TTS @ `10.0.0.228:8880` with custom filenames
- Inbound: Faster-Whisper (CPU, base model) for transcription
- Voice Filename Format: `Kimi-YYYYMMDD-HHMMSS.ogg`
- Rule: Voice in → Voice out, Text in → Text out
- No transcripts sent to Telegram (internal transcription only)
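The filename format above maps directly onto `strftime`. A minimal sketch (the function name is illustrative):

```python
from datetime import datetime

def voice_filename(now=None):
    """Return a Kimi-YYYYMMDD-HHMMSS.ogg name, per the convention above."""
    now = now or datetime.now()
    return now.strftime("Kimi-%Y%m%d-%H%M%S.ogg")

# voice_filename(datetime(2026, 2, 4, 21, 17, 5)) -> "Kimi-20260204-211705.ogg"
```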
### 3. Skills Created Today
| Skill | Purpose | Location |
|---|---|---|
| `local-whisper-stt` | Voice transcription (Faster-Whisper) | `/root/.openclaw/skills/local-whisper-stt/` |
| `kimi-tts-custom` | Custom TTS filenames, voice-only mode | `/root/.openclaw/skills/kimi-tts-custom/` |
| `qdrant-memory` | Vector memory augmentation | `/root/.openclaw/skills/qdrant-memory/` |
### 4. Qdrant Memory System
- Endpoint: `http://10.0.0.40:6333` (local Proxmox LXC)
- Collection: `openclaw_memories`
- Vector Size: 768 (nomic-embed-text)
- Mode: Automatic - stores/retrieves without prompting
- Architecture: Hybrid (file-based + vector-based)
- Scripts: `store_memory.py`, `search_memories.py`, `hybrid_search.py`, `auto_memory.py`
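Storing a memory is a point upsert against the collection above via Qdrant's REST API. A sketch of the request shape (the `payload` fields and helper name are illustrative; the actual scripts may differ):

```python
import json
import urllib.request
import uuid

QDRANT = "http://10.0.0.40:6333"
COLLECTION = "openclaw_memories"
VECTOR_SIZE = 768  # nomic-embed-text dimensionality

def build_upsert(text: str, vector: list) -> urllib.request.Request:
    """Build a Qdrant point-upsert request for one memory."""
    assert len(vector) == VECTOR_SIZE, "embedding must match collection size"
    body = json.dumps({
        "points": [{
            "id": str(uuid.uuid4()),   # random point id
            "vector": vector,          # embedding of the memory text
            "payload": {"text": text}, # original text stored alongside
        }]
    }).encode()
    return urllib.request.Request(
        f"{QDRANT}/collections/{COLLECTION}/points",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
```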
### 5. Cron Job Created
- Name: monthly-backup-reminder
- Schedule: First Monday of each month at 10:00 AM CST
- ID: fb7081a9-8640-4c51-8ad3-9caa83b6ac9b
- Delivery: Telegram message to Rob
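Classic five-field cron can't express "first Monday of the month" in one rule (most implementations OR the day-of-month and day-of-week fields), so the scheduler presumably resolves this internally. The target date itself is easy to compute; a sketch:

```python
import calendar
from datetime import date

def first_monday(year, month):
    """Return the date of the first Monday of the given month."""
    for week in calendar.monthcalendar(year, month):
        # weeks run Mon..Sun; a 0 means the day falls outside this month
        if week[calendar.MONDAY]:
            return date(year, month, week[calendar.MONDAY])

# first_monday(2026, 2) -> date(2026, 2, 2)
```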
### 6. Core Preferences Documented
- Accuracy: Best quality, no compromises
- Performance: Optimize for speed
- Research: Always web search before installing
- Local Docs Exception: OpenClaw/ClawHub docs prioritized
- Infrastructure: Free > Paid, Local > Cloud, Private > Public
- Search Priority: docs.openclaw.ai, clawhub.com, then other sources
### 7. Config Files Created/Updated
- `USER.md` - Rob's profile
- `IDENTITY.md` - Kimi's identity
- `TOOLS.md` - Voice rules, search preferences, local services
- `MEMORY.md` - Long-term curated memories
- `AGENTS.md` - Installation policy, heartbeats
- `openclaw.json` - TTS, skills, channels config
## Next Steps (Deferred)
- Continue with additional tool setup requests from Rob
- Qdrant memory is in auto-mode, monitoring for important memories
# Lessons Learned - 2026-02-04 22:05 CST
## Skill Script Paths
**Mistake:** Tried to run scripts from the wrong paths. Correct paths:
- Whisper: `/root/.openclaw/workspace/skills/local-whisper-stt/scripts/transcribe.py`
- TTS: `/root/.openclaw/workspace/skills/kimi-tts-custom/scripts/voice_reply.py`
`voice_reply.py` usage:

```shell
python3 scripts/voice_reply.py <chat_id> "message text"
# Example:
python3 scripts/voice_reply.py 1544075739 "Hello there"
```
Stored in Qdrant: Yes (high importance; tags: `voice`, `skills`, `paths`, `commands`)