Initial commit: workspace setup with skills, memory, config

Author: root
Date: 2026-02-10 14:37:49 -06:00
Commit: d1357c5463
77 changed files with 10822 additions and 0 deletions

memory/2026-02-04.md (new file, 194 lines)

# Memory - 2026-02-04
## Ollama Configuration
- **Location**: Separate VM at `10.0.0.10:11434`
- **OpenClaw config**: `baseUrl: http://10.0.0.10:11434/v1`
- **Models**: only two configured (clean setup)
## Available Models
| Model | Role | Notes |
|-------|------|-------|
| kimi-k2.5:cloud | **Primary** | Default (me), 340B remote hosted |
| hf.co/unsloth/gpt-oss-120b-GGUF:F16 | **Backup** | Fallback, 117B params, 65GB |
## Aliases (shortcuts)
| Alias | Model |
|-------|-------|
| kimi | ollama/kimi-k2.5:cloud |
| gpt-oss-120b | ollama/hf.co/unsloth/gpt-oss-120b-GGUF:F16 |
## Switching Models
```bash
# Switch to backup
/model ollama/hf.co/unsloth/gpt-oss-120b-GGUF:F16
# Or via CLI
openclaw chat -m ollama/hf.co/unsloth/gpt-oss-120b-GGUF:F16
# Switch back to me (kimi)
/model kimi
```
## TTS Configuration - Kokoro Local
- **Endpoint**: `http://10.0.0.228:8880/v1/audio/speech`
- **Status**: Tested and working (63KB MP3 generated successfully)
- **OpenAI-compatible**: Yes (supports `tts-1`, `tts-1-hd`, `kokoro` models)
- **Voices**: 68 total across languages (American, British, Spanish, French, German, Italian, Japanese, Portuguese, Chinese)
- **Default voice**: `af_bella` (American Female)
- **Notable voices**: `af_nova`, `am_echo`, `af_heart`, `af_alloy`, `bf_emma`
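Since the endpoint is OpenAI-compatible, a request can be sketched with the stdlib alone. This is a minimal sketch, assuming Kokoro accepts the standard OpenAI TTS body fields (`model`, `voice`, `input`, `response_format`); the helper names are illustrative, not part of any skill here.

```python
import json
import urllib.request

KOKORO_URL = "http://10.0.0.228:8880/v1/audio/speech"

def build_tts_payload(text, voice="af_bella", model="kokoro", fmt="mp3"):
    """OpenAI-style TTS request body (field names assumed for Kokoro)."""
    return {"model": model, "voice": voice, "input": text, "response_format": fmt}

def synthesize(text, out_path="reply.mp3"):
    """POST the payload to Kokoro and write the returned audio bytes to disk."""
    req = urllib.request.Request(
        KOKORO_URL,
        data=json.dumps(build_tts_payload(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        audio = resp.read()
    with open(out_path, "wb") as f:
        f.write(audio)
    return out_path
```

Swapping `voice="af_nova"` or another name from the list above is the only change needed per voice.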
### Config Schema Fix
```json
{
"messages": {
"tts": {
"auto": "always", // Options: "off", "always", "inbound", "tagged"
"provider": "elevenlabs", // or "openai", "edge"
"elevenlabs": {
"baseUrl": "http://10.0.0.228:8880" // <-- Only ElevenLabs supports baseUrl!
}
}
}
}
```
**Important**: `messages.tts.openai` does NOT support `baseUrl` - only `apiKey`, `model`, `voice`.
### Solutions for Local Kokoro:
1. **Custom TTS skill** (cleanest) - call Kokoro API directly
2. **OPENAI_BASE_URL env var** - may redirect all OpenAI calls globally
3. **Use as Edge TTS** - treat Kokoro as "local Edge" replacement
## Infrastructure Notes
- **Container**: Running without GPUs attached (CPU-only)
- **Implication**: All ML workloads (Whisper, etc.) will run on CPU
## User Preferences
### Installation Decision Tree
**When asked to install/configure something:**
1. **Can it be a skill?** → Create a skill
2. **Does it work in TOOLS.md?** → Add to TOOLS.md
*(environment-specific notes: device names, SSH hosts, voice prefs, etc.)*
3. **Neither** → Suggest other options
**Examples:**
- New API integration → Skill
- Camera names/locations → TOOLS.md
- Custom script/tool → Skill
- Preferred TTS voice → TOOLS.md
### Core Preferences
- **Free** — Primary requirement for all tools/integrations
- **Local preferred** — Self-hosted over cloud/SaaS when possible
## Agent Notes
- **Do NOT restart/reboot the gateway** — user must turn me on manually
- Request user to reboot me instead of auto-restarting services
- TTS config file: `/root/.openclaw/openclaw.json` under `messages.tts` key
## Bootstrap Complete - 2026-02-04
### Files Created/Updated Today
- ✅ USER.md — Rob's profile
- ✅ IDENTITY.md — Kimi's identity
- ✅ TOOLS.md — Voice/text rules, local services
- ✅ MEMORY.md — Long-term memory initialized
- ✅ AGENTS.md — Installation policy documented
- ✅ Deleted BOOTSTRAP.md — Onboarding complete
### Skills Created Today
- `local-whisper-stt` — Local voice transcription (Faster-Whisper, CPU)
- `kimi-tts-custom` — Custom TTS with Kimi-XXX filenames
### Working Systems
- Bidirectional voice (voice↔voice, text↔text)
- Local Kokoro TTS @ 10.0.0.228:8880
- Local SearXNG web search
- Local Ollama @ 10.0.0.10:11434
### Key Decisions
- Voice-only replies (no transcripts to Telegram)
- Kimi-YYYYMMDD-HHMMSS.ogg filename format
- Free + Local > Cloud/SaaS philosophy established
---
## Pre-Compaction Summary - 2026-02-04 21:17 CST
### Major Setup Completed Today
#### 1. Identity & Names Established
- **AI Name**: Kimi 🎙️
- **User Name**: Rob
- **Relationship**: Direct 1:1, private and trusted
- **Deleted**: BOOTSTRAP.md (onboarding complete)
#### 2. Bidirectional Voice System ✅
- **Outbound**: Kokoro TTS @ `10.0.0.228:8880` with custom filenames
- **Inbound**: Faster-Whisper (CPU, base model) for transcription
- **Voice Filename Format**: `Kimi-YYYYMMDD-HHMMSS.ogg`
- **Rule**: Voice in → Voice out, Text in → Text out
- **No transcripts sent to Telegram** (internal transcription only)
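The filename convention above can be generated in one line; a sketch assuming CST local time, with the function name being illustrative rather than taken from the skill:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def voice_filename(agent="Kimi", tz="America/Chicago"):
    """Return a voice-note name like Kimi-20260204-211700.ogg (CST timestamps)."""
    now = datetime.now(ZoneInfo(tz))
    return f"{agent}-{now:%Y%m%d-%H%M%S}.ogg"
```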
#### 3. Skills Created Today
| Skill | Purpose | Location |
|-------|---------|----------|
| `local-whisper-stt` | Voice transcription (Faster-Whisper) | `/root/.openclaw/skills/local-whisper-stt/` |
| `kimi-tts-custom` | Custom TTS filenames, voice-only mode | `/root/.openclaw/skills/kimi-tts-custom/` |
| `qdrant-memory` | Vector memory augmentation | `/root/.openclaw/skills/qdrant-memory/` |
#### 4. Qdrant Memory System
- **Endpoint**: `http://10.0.0.40:6333` (local Proxmox LXC)
- **Collection**: `openclaw_memories`
- **Vector Size**: 768 (nomic-embed-text)
- **Mode**: **Automatic** - stores/retrieves without prompting
- **Architecture**: Hybrid (file-based + vector-based)
- **Scripts**: store_memory.py, search_memories.py, hybrid_search.py, auto_memory.py
#### 5. Cron Job Created
- **Name**: monthly-backup-reminder
- **Schedule**: First Monday of each month at 10:00 AM CST
- **ID**: fb7081a9-8640-4c51-8ad3-9caa83b6ac9b
- **Delivery**: Telegram message to Rob
#### 6. Core Preferences Documented
- **Accuracy**: Best quality, no compromises
- **Performance**: Optimize for speed
- **Research**: Always web search before installing
- **Local Docs Exception**: OpenClaw/ClawHub docs prioritized
- **Infrastructure**: Free > Paid, Local > Cloud, Private > Public
- **Search Priority**: docs.openclaw.ai, clawhub.com, then other sources
#### 7. Config Files Created/Updated
- `USER.md` - Rob's profile
- `IDENTITY.md` - Kimi's identity
- `TOOLS.md` - Voice rules, search preferences, local services
- `MEMORY.md` - Long-term curated memories
- `AGENTS.md` - Installation policy, heartbeats
- `openclaw.json` - TTS, skills, channels config
### Next Steps (Deferred)
- Continue with additional tool setup requests from Rob
- Qdrant memory is in auto-mode, monitoring for important memories
---
## Lessons Learned - 2026-02-04 22:05 CST
### Skill Script Paths
**Mistake**: Tried to run scripts from wrong paths.
**Correct paths**:
- Whisper: `/root/.openclaw/workspace/skills/local-whisper-stt/scripts/transcribe.py`
- TTS: `/root/.openclaw/workspace/skills/kimi-tts-custom/scripts/voice_reply.py`
**voice_reply.py usage**:
```bash
python3 scripts/voice_reply.py <chat_id> "message text"
# Example:
python3 scripts/voice_reply.py 1544075739 "Hello there"
```
**Stored in Qdrant**: Yes (high importance, tags: voice,skills,paths,commands)

memory/2026-02-05.md (new file, 195 lines)

# 2026-02-05 — Session Log
## Major Accomplishments
### 1. Knowledge Base System Created
- **Collection**: `knowledge_base` in Qdrant (768-dim vectors, cosine distance)
- **Purpose**: Personal knowledge repository organized by topic/domain
- **Schema**: domain, path (hierarchy), subjects, category, content_type, title, checksum, source_url, date_scraped
- **Content stored**:
- docs.openclaw.ai (3 chunks)
- ollama.com/library (25 chunks)
- www.w3schools.com/python/ (7 chunks)
- Multiple list comprehension resources (3 entries)
### 2. Smart Search Workflow Implemented
- **Process**: Search KB first → Web search second → Synthesize → Store new findings
- **Storage rules**: Only substantial content (>500 chars), unique (checksum), full attribution
- **Auto-tagging**: date_scraped, source_url, domain detection
- **Scripts**: `smart_search.py`, `kb_store.py`, `kb_review.py`, `scrape_to_kb.py`
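The KB-first flow can be sketched as plain control flow. This is a sketch of the workflow's shape only, with the search/store callables injected as stand-ins for the real scripts; the checksum dedup and the >500-char threshold are from the storage rules above.

```python
import hashlib

def checksum(text: str) -> str:
    """Stable content checksum used to skip duplicate KB entries."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def smart_search(query, kb_search, web_search, kb_store, known_checksums, min_len=500):
    """KB first, web second; store only substantial, unseen results.

    kb_search/web_search/kb_store are injected callables standing in
    for the real scripts (hypothetical interface).
    """
    hits = kb_search(query)
    if hits:
        return hits                              # KB answered; no web call
    results = web_search(query)
    for r in results:
        body = r.get("content", "")
        if len(body) > min_len and checksum(body) not in known_checksums:
            kb_store(r)                          # attribution stays in r's metadata
            known_checksums.add(checksum(body))
    return results
```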
### 3. Monitoring System Established
- **OpenClaw GitHub Repo Monitor**
- Schedule: Daily 11:00 AM
- Tracks: README, releases (5), issues (5)
- Relevance filter: Keywords affecting our setup (ollama, telegram, skills, memory, etc.)
- Notification: Only when significant changes detected (score ≥3 or high-priority areas)
- Initial finding: 24 high-priority areas affected
- **Ollama Model Monitor**
- Schedule: Daily 11:50 AM
- Criteria: 100B+ parameter models only (to compete with gpt-oss:120b)
- Current large models: gpt-oss (120B), mixtral (8x22B = 176B effective)
- Notification: Only when NEW large models appear
### 4. ACTIVE.md Syntax Library Created
- **Purpose**: Pre-flight checklist to reduce tool usage errors
- **Sections**: Per-tool validation (read, edit, write, exec, browser)
- **Includes**: Parameter names, common mistakes, correct/wrong examples
- **Updated**: AGENTS.md to require ACTIVE.md check before tool use
## Key Lessons & Policy Changes
### User Preferences Established
1. **Always discuss before acting** — Never create/build without confirmation
2. **100B+ models only** for Ollama monitoring (not smaller CPU-friendly models)
3. **Silent operation** — Monitors only output when there's something significant to report
4. **Exit code 0 always** for cron scripts (prevents "exec failed" logs)
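The exit-code rule in item 4 amounts to a small skeleton every cron script can follow; a sketch, with `run_check` as a placeholder for the real monitor logic:

```python
import sys
import traceback

def run_check():
    """Placeholder for the real monitor logic; return a report string or None."""
    return None  # None = nothing significant, stay silent

def main():
    try:
        report = run_check()
        if report:                  # output presence is the signal, not the exit code
            print(report)
    except Exception:
        # Log for debugging, but never propagate a nonzero exit
        traceback.print_exc(file=sys.stderr)
    sys.exit(0)                     # always 0 — avoids "exec failed" cron noise

if __name__ == "__main__":
    main()
```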
### Technical Lessons
- `edit` tool requires `old_string` + `new_string` (not `newText`)
- After 2-3 failed edit attempts, use `write` instead
- Cron scripts must always `sys.exit(0)` — use output presence for signaling
- `read` uses `file_path`, never `path`
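The parameter lessons above can be mechanized as a pre-flight check in the spirit of ACTIVE.md; the required-parameter table below is illustrative, reflecting only what the lessons state:

```python
# Required parameters per tool, per the lessons above (table is illustrative)
REQUIRED_PARAMS = {
    "read": {"file_path"},                               # never "path"
    "edit": {"file_path", "old_string", "new_string"},   # never "newText"
    "write": {"file_path", "content"},
}

def preflight(tool: str, params: dict) -> list:
    """Return a list of problems; an empty list means the call may proceed."""
    problems = []
    required = REQUIRED_PARAMS.get(tool, set())
    for name in required - params.keys():
        problems.append(f"{tool}: missing required parameter '{name}'")
    for name in params.keys() - required:
        problems.append(f"{tool}: unknown parameter '{name}'")
    return problems
```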
### Error Handling Policy
- **Search-first strategy**: Check KB, then web search before fixing
- **Exception**: Simple syntax errors (wrong param names, typos) — fix immediately
## Infrastructure Updates
### Qdrant Memory System
- Hybrid approach: File-based + vector-based
- Enhanced metadata: confidence, source, expiration, verification
- Auto-storage triggers defined
- Monthly review scheduled (cleanup of outdated entries)
### Task Queue Repurposed
- No longer for GPT delegation
- Now for Kimi's own background tasks
- GPT workloads moving to separate "Max" VM (future)
## Active Cron Jobs
| Time | Task | Channel |
|------|------|---------|
| 11:00 AM | OpenClaw repo check | Telegram (if significant) |
| 11:50 AM | Ollama 100B+ models | Telegram (if new) |
| 1st of month 3:00 AM | KB review (cleanup) | Silent |
## Enforcement Milestone — 10:34 CST
**Problem**: Despite updating AGENTS.md, TOOLS.md, and MEMORY.md with ACTIVE.md enforcement rules, I continued making the same errors:
- Used `path` instead of `file_path` for `read`
- Failed to provide `new_string` for `edit` (4+ consecutive failures)
**Root Cause**: Documentation ≠ Behavior change. I wrote the rules but didn't follow them.
**User Directive**: "Please enforce" — meaning actual behavioral change, not just file updates.
**Demonstrated Recovery**:
1. ✅ Used `read` with `file_path` correctly
2. ❌ Failed `edit` 4 times (missing `new_string`)
3. ✅ Switched to `write` per ACTIVE.md recovery protocol
4. ✅ Successfully wrote complete file
**Moving Forward**:
- Pre-flight check BEFORE every tool call
- Verify parameter names from ACTIVE.md
- After 2 edit failures → use `write`
- Quality over speed — no more rushing
## Core Instruction Files Updated — 10:36 CST
Updated all core .md files with enforced, actionable pre-flight steps:
### TOOLS.md Changes:
- Added numbered step-by-step pre-flight protocol
- Added explicit instruction to read ACTIVE.md section for specific tool
- Added parameter verification table with correct vs wrong parameters
- Added emergency recovery rules table (edit fails → use write)
- Added 5 critical reminders (file_path, old_string/new_string, etc.)
### AGENTS.md Changes:
- Added TOOLS.md to startup protocol (Step 3)
- Added numbered steps for "Before Using Tools" section
- Added explicit parameter verification table
- Added emergency recovery section
- Referenced TOOLS.md as primary enforcement location
### Key Enforcement Chain:
```
AGENTS.md (startup) → TOOLS.md (pre-flight steps) → ACTIVE.md (tool-specific syntax)
```
## Knowledge Base Additions — Research Session
**Stored to knowledge_base:** `ai/llm-agents/tool-calling/patterns`
- **Title**: Industry Patterns for LLM Tool Usage Error Handling
- **Content**: Research findings from LangChain, OpenAI, and academic papers on tool calling validation
- **Key findings**:
- LangChain: handle_parsing_errors, retry mechanisms, circuit breakers
- OpenAI: strict=True, Structured Outputs API, Pydantic validation
- Multi-layer defense architecture (prompt → validation → retry → execution)
- Common failure modes: parameter hallucination, type mismatches, missing fields
- Research paper "Butterfly Effects in Toolchains" (2025): errors cascade through tool chains
- **Our unique approach**: Pre-flight documentation checklist vs runtime validation
---
*Session type: Direct 1:1 with Rob*
*Key files created/modified: ACTIVE.md, AGENTS.md, TOOLS.md, MEMORY.md, knowledge_base_schema.md, multiple monitoring scripts*
*Enforcement activated: 2026-02-05 10:34 CST*
*Core files updated: 2026-02-05 10:36 CST*
## Max Configuration Update — 23:47 CST
**Max Setup Differences from Initial Design:**
- **Model**: minimax-m2.1:cloud (switched from GPT-OSS)
- **TTS Skill**: max-tts-custom (not kimi-tts-custom)
- **Filename format**: Max-YYYYMMDD-HHMMSS.ogg
- **Voice**: af_bella @ Kokoro 10.0.0.228:8880
- **Shared Qdrant**: Both Kimi and Max use same Qdrant @ 10.0.0.40:6333
- Collections: openclaw_memories, knowledge_base
- **TOOLS.md**: Max updated to match comprehensive format with detailed tool examples, search priorities, Qdrant scripts
**Kimi Sync Options:**
- Stay on kimi-k2.5:cloud OR switch to minimax-m2.1:cloud
- IDENTITY.md model reference already accurate for kimi-k2.5
## Evening Session — 19:55-22:45 CST
### Smart Search Fixed
- Changed default `--min-kb-score` from 0.7 to 0.5
- Removed server-side `score_threshold` (too aggressive)
- Now correctly finds KB matches (test: 5 results for "telegram dmPolicy")
- Client-side filtering shows all results then filters
### User Preferences Reinforced
- **Concise chats only** — less context, shorter replies
- **Plain text in Telegram** — no markdown formatting, no bullet lists with symbols
- **One step at a time** — wait for response before proceeding
### OpenClaw News Search
Searched web for today's OpenClaw articles. Key findings:
- Security: CVE-2026-25253 RCE bug patched in v2026.1.29
- China issued security warning about improper deployment risks
- 341 malicious ClawHub skills found stealing data
- Trend: Viral adoption alongside security crisis
### GUI Installation Started on Deb
- Purpose: Enable Chrome extension for OpenClaw browser control
- Packages: XFCE4 desktop, Chromium browser, LightDM
- Access: Proxmox console (no VNC needed)
- Status: Complete — 267 packages installed
- Next: Configure display manager, launch desktop, install OpenClaw extension
### OpenClaw Chrome Extension Installation Method
**Discovery**: Extension is NOT downloaded from Chrome Web Store
**Method**: Installed via OpenClaw CLI command
**Steps**:
1. Run `openclaw browser extension install` (installs to ~/.openclaw/browser-extension/)
2. Open Chromium → chrome://extensions/
3. Enable "Developer mode" (toggle top right)
4. Click "Load unpacked"
5. Select the extension path shown after install
6. Click OpenClaw toolbar button to attach to tab
**Alternative**: Clone from GitHub and load browser-extension/ folder directly

memory/2026-02-06.md (new file, 78 lines)

# 2026-02-06 — Daily Memory Log
## Operational Rules Updated
### Notification Rules (from Rob)
- Always use Telegram text only unless requested otherwise
- Only send notifications between 7am-10pm CST
- All timestamps and time usage must be US CST (including Redis)
- If notification needed outside hours, queue as heartbeat task to send at next allowed time
- Stored in Qdrant: IDs 83a98a6e-058f-4c2f-91f4-001d5a18acba, 8729ba36-93a1-4cc2-90b0-00bd22bf19b1
- Updated HEARTBEAT.md with Task #3: Send Delayed Notifications
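The 7am-10pm gating rule can be sketched as a single scheduling function; a sketch under the stated CST rules, not the actual heartbeat implementation:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

CST = ZoneInfo("America/Chicago")
WINDOW_START, WINDOW_END = time(7, 0), time(22, 0)

def next_send_time(now: datetime) -> datetime:
    """Return `now` if inside the 7am-10pm window, else the next 7:00 AM CST."""
    local = now.astimezone(CST)
    if WINDOW_START <= local.time() < WINDOW_END:
        return local
    # Before 7am: send today at 7am. After 10pm: queue for tomorrow 7am.
    day = local.date() if local.time() < WINDOW_START else local.date() + timedelta(days=1)
    return datetime.combine(day, WINDOW_START, tzinfo=CST)
```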
## Research Completed
### Ollama Pricing: Max vs Pro Plans
**Source:** https://ollama.com/pricing
| Plan | Price | Key Features |
|------|-------|--------------|
| Free | $0 | Local models only, unlimited public models |
| Pro | $20/mo | Multiple cloud models, more usage, 3 private models, 3 collaborators |
| Max | $100/mo | 5+ cloud models, 5x usage vs Pro, 5 private models, 5 collaborators |
**Key Differences:**
- Concurrency: Pro = multiple, Max = 5+ models
- Cloud usage: Max = 5x Pro allowance
- Private models: Pro = 3, Max = 5
- Collaborators per model: Pro = 3, Max = 5
Stored in KB (Ollama/Pricing domain).
## New Project Ideas
### 3rd OpenClaw LXC
- Rob wants to setup a 3rd OpenClaw LXC
- Clone of Max's setup
- Will run local GPT
- Status: Idea phase, awaiting planning/implementation
## Agent Collaboration
- Sent notification rules to Max via agent-messages stream
- Max informed of all operational updates
### Full Search Definition (from Rob)
- When Rob says "full search": use ALL tools available, find quality results
- Combine SearXNG, KB search, web crawling, and any other resources
- Do not limit to one method—comprehensive, high-quality information
- Stored in Qdrant: ID bb4a465a-3c6e-48a8-d8c-52da5b1fdf48
### Shorthand Terms
- **msgs** = Redis messages (agent-messages stream at 10.0.0.36:6379)
- Shortcut for checking/retrieving agent messages between Kimi and Max
- Stored in Qdrant: ID e5e93700-b04b-4db4-9c4b-d6b94166be7f
- **messages** = Telegram direct chat (conversational)
- **notification** = Telegram alerts/updates (one-way notifications)
- Stored in Qdrant: ID e88ec7ea-9d77-45c3-8057-cb7a54077060
### Rob's Personality & Style
- Comical and funny most of the time
- Humor is logical/structured (not random/absurd)
- Has fun with the process
- Applies to content creation and general approach
- Stored in Qdrant: ID b58defd6-e8fc-4420-b75c-aefd4720e70d
### YouTube SEO - Tags Format
- Target: ~490 characters of comma-separated tags
- Include: primary keywords, secondary keywords, long-tail terms
- Mix: broad terms (Homelab) + specific terms (Proxmox LXC)
- Example stored in Qdrant: ID 8aa534f3-6e3f-49d9-ae5f-803ff9e80121
### YouTube SEO - Research Rule
- **CRITICAL:** Pull latest 48 hours of search data/trends when composing SEO elements
- Current data > general keywords for best search results
- Stored in Qdrant: ID bbe76456-01b5-48b5-9c0b-dd8c06680e82
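The ~490-character tag budget above can be enforced mechanically; a sketch (the function is hypothetical, not a stored script) that packs comma-separated tags without overshooting the limit:

```python
def pack_tags(tags, limit=490):
    """Join tags with commas, skipping any tag that would push past the limit."""
    out = []
    length = 0
    for tag in tags:
        tag = tag.strip()
        extra = len(tag) + (1 if out else 0)   # +1 for the comma separator
        if length + extra > limit:
            continue                            # keep scanning for shorter tags
        out.append(tag)
        length += extra
    return ",".join(out)
```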
---
*Stored for long-term memory retention*

memory/2026-02-07.md (new file, 72 lines)

# 2026-02-07 — Daily Memory Log
## Agent System Updates
### Jarvis (Local Agent) Setup
- Jarvis deployed as local LLM clone of Max
- 64k context window (sufficient for most tasks)
- Identity: "jarvis" in agent-messages stream
- Runs on CPU (no GPU)
- Requires detailed step-by-step instructions
- One command per step with acknowledgements required
- Conversational communication style expected
### Multi-Agent Protocols Established
- SSH Host Change Protocol: Any agent modifying deb/deb2 must notify others via agent-messages
- Jarvis Task Protocol: All steps provided upfront, execute one at a time with ACKs
- Software Inventory Protocol: Check installed list before recommending
- Agent messaging via Redis stream at 10.0.0.36:6379
### SOUL.md Updates (All Agents)
- Core Truths: "Know the roster", "Follow Instructions Precisely"
- Communication Rules: Voice/text protocols, no filler words
- Infrastructure Philosophy: Privacy > convenience, Local > cloud, Free > paid
- Task Handling: Acknowledge receipt, report progress, confirm completion
## Infrastructure Changes
### SSH Hosts
- **deb** (10.0.0.38): OpenClaw removed, now available for other uses
- **deb2** (10.0.0.39): New host added, same credentials (n8n/passw0rd)
### Software Inventory (Never Recommend These)
- n8n, ollama, openclaw, openwebui, anythingllm
- searxng, flowise
- plex, radarr, sonarr, sabnzbd
- comfyui
## Active Tasks
### Jarvis KB Documentation Task
- 13 software packages to document:
1. n8n, 2. ollama, 3. openwebui, 4. anythingllm, 5. searxng
6. flowise, 7. plex, 8. radarr, 9. sonarr, 10. sabnzbd
11. comfyui, 12. openclaw (GitHub), 13. openclaw (Docs)
- Status: Task assigned, awaiting Step 1 completion report
- Method: Use batch_crawl.py or scrape_to_kb.py
- Store with domain="Software", path="<name>/Docs"
### Jarvis Tool Verification
- Checking for: Redis scripts, Python client, Qdrant memory scripts
- Whisper STT, TTS, basic tools (curl, ssh)
- Status: Checklist sent, awaiting response
### Jarvis Model Info Request
- Requested: Model name, hardware specs, 64k context assessment
- Status: Partial response received (truncated), may need follow-up
## Coordination Notes
- All agents must ACK protocol messages
- Heartbeat checks every 30 minutes
- Agent-messages stream monitored for new messages
- Delayed notifications queue for outside 7am-10pm window
- All timestamps use US CST
## Memory Storage
- 19 new memories stored in Qdrant today
- Includes protocols, inventory, Jarvis requirements, infrastructure updates
- All tagged for semantic search
---
*Stored for long-term memory retention*

memory/2026-02-08.md (new file, 53 lines)

# 2026-02-08 — Daily Memory Log
## Session Start
- **Date:** 2026-02-08
- **Agent:** Kimi
## Bug Fixes & Improvements
### 1. Created Missing `agent_check.py` Script
- **Location:** `/skills/qdrant-memory/scripts/agent_check.py`
- **Purpose:** Check agent messages from Redis stream
- **Features:**
- `--list N` — List last N messages
- `--check` — Check for new messages since last check
- `--last-minutes M` — Check messages from last M minutes
- `--mark-read` — Update last check timestamp
- **Status:** ✅ Working — tested and functional
### 2. Created `create_daily_memory.py` Script
- **Location:** `/skills/qdrant-memory/scripts/create_daily_memory.py`
- **Purpose:** Create daily memory log files automatically
- **Status:** ✅ Working — created 2026-02-08.md
### 3. Fixed `scrape_to_kb.py` Usage
- **Issue:** Used `--domain`, `--path`, `--timeout` flags (wrong syntax)
- **Fix:** Used positional arguments: `url domain path`
- **Result:** Successfully scraped all 13 software docs
### 4. SABnzbd Connection Fallback
- **Issue:** sabnzbd.org/wiki/ returned connection refused
- **Fix:** Used GitHub repo (github.com/sabnzbd/sabnzbd) as fallback
- **Result:** ✅ 4 chunks stored from GitHub README
### 5. Embedded Session Tool Issues (Documented)
- **Issue:** Embedded sessions using `path` instead of `file_path` for `read` tool
- **Note:** This is in OpenClaw gateway/embedded session code — requires upstream fix
- **Workaround:** Always use `file_path` in workspace scripts
## KB Documentation Task Completed
All 13 software packages documented in knowledge_base (64 total chunks):
- n8n (9), ollama (1), openwebui (7), anythingllm (2)
- searxng (3), flowise (2), plex (13), radarr (1)
- sonarr (1), sabnzbd (4), comfyui (2)
- openclaw GitHub (16), openclaw Docs (3)
---
*Stored for long-term memory retention*

memory/2026-02-09.md (new file, 42 lines)

# 2026-02-09 — Daily Log
## System Fixes & Setup
### 1. Fixed pytz Missing Dependency
- **Issue:** Heartbeat cron jobs failing with `ModuleNotFoundError: No module named 'pytz'`
- **Fix:** `pip install pytz`
- **Result:** All heartbeat checks now working (agent messages, timestamp logging, delayed notifications)
### 2. Created Log Monitor Skill
- **Location:** `/root/.openclaw/workspace/skills/log-monitor/`
- **Purpose:** Daily automated log scanning and error repair
- **Schedule:** 2:00 AM CST daily via system crontab
- **Features:**
- Scans systemd journal, cron logs, OpenClaw session logs
- Auto-fixes: missing Python modules, permission issues, service restarts
- Alerts on: disk full, services down, unknown errors
- Comprehensive noise filtering (NVIDIA, PAM, rsyslog container errors)
- Self-filtering (excludes its own logs, my thinking blocks, tool errors)
- Service health check: Redis via Python (redis-cli not in container)
- **Report:** `/tmp/log_monitor_report.txt`
### 3. Enabled Parallel Tool Calls
- **Configuration:** Ollama `parallel = 8`
- **Usage:** All independent tool calls now batched and executed simultaneously
- **Tested:** 8 parallel service health checks (Redis, Qdrant, Ollama, SearXNG, Kokoro TTS, etc.)
- **Previous:** Sequential execution (one at a time)
### 4. Redis Detection Fix
- **Issue:** `redis-cli` not available in container → false "redis-down" alerts
- **Fix:** Use Python `redis` module for health checks
- **Status:** Redis at 10.0.0.36:6379 confirmed working
## Files Modified/Created
- `/root/.openclaw/workspace/skills/log-monitor/scripts/log_monitor.py` (new)
- `/root/.openclaw/workspace/skills/log-monitor/SKILL.md` (new)
- System crontab: Added daily log monitor job
## Notes
- Container has no GPU → NVIDIA module errors are normal (filtered)
- rsyslog kernel log access denied in container (filtered)
- All container-specific "errors" are now excluded from reports

memory/2026-02-10.md (new file, 157 lines)

# 2026-02-10 — Daily Memory Log
## Qdrant Memory System — Manual Mode
**Major change:** Qdrant memory now MANUAL ONLY.
Two distinct systems established:
- **"remember this" or "note"** → File-based (daily logs + MEMORY.md) — automatic, original design
- **"q remember", "q recall", "q save", "q update"** → Qdrant `kimi_memories` — manual, only when "q" prefix used
**Commands:**
- "q remember" = store one item to Qdrant
- "q recall" = search Qdrant
- "q save" = store specific item
- "q update" = bulk sync all file memories to Qdrant without duplicates
## Redis Messaging — Manual Mode
**Change:** Redis agent messaging now MANUAL ONLY.
- No automatic heartbeat checks for Max's messages
- No auto-notification queue processing
- Only manual when explicitly requested: "check messages" or "send to Max"
## New Qdrant Collection: kimi_memories
**Created:** `kimi_memories` collection at 10.0.0.40:6333
- Vector size: 1024 (snowflake-arctic-embed2)
- Distance: Cosine
- Model: snowflake-arctic-embed2 pulled to 10.0.0.10 (GPU)
- Purpose: Manual memory backup when requested
## Critical Lesson: Immediate Error Reporting
**Rule established:** When hitting a blocking error during an active task, report IMMEDIATELY — don't wait for user to ask.
**What I did wrong:**
- Said "let me know when it's complete" for "q save ALL memories"
- Discovered Qdrant was unreachable (host down)
- Stayed silent instead of immediately reporting
- User had to ask for status to discover I was blocked
**Correct behavior:**
- Hit blocking error → immediately report: "Stopped — [reason]. Cannot proceed."
- Never imply progress is happening when it's not
- Applies to: service outages, permission errors, resource exhaustion
## Memory Backup Success
**Completed:** "q save ALL memories" — 39 comprehensive memories successfully backed up to `kimi_memories` collection.
**Contents stored:**
- Identity & personality
- Communication rules
- Tool usage rules
- Infrastructure details
- YouTube SEO rules
- Setup milestones
- Boundaries & helpfulness principles
**Collection status:**
- Name: `kimi_memories`
- Location: 10.0.0.40:6333
- Vectors: 39 points
- Model: snowflake-arctic-embed2 (1024 dims)
## New Qdrant Collection: kimi_kb
**Created:** `kimi_kb` collection at 10.0.0.40:6333
- Vector size: 1024 (snowflake-arctic-embed2)
- Distance: Cosine
- Purpose: Knowledge base storage (web search, documents, data)
- Mode: Manual only — no automatic storage
**Scripts:**
- `kb_store.py` — Store web/docs to KB with metadata
- `kb_search.py` — Search knowledge base with domain filtering
**Usage:**
```bash
# Store to KB
python3 kb_store.py "Content" --title "X" --domain "Docker" --tags "container"
# Search KB
python3 kb_search.py "docker volumes" --domain "Docker"
```
**Test:** Successfully stored and retrieved Docker container info.
## Unified Search: Perplexity + SearXNG
**Architecture:** Perplexity primary, SearXNG fallback
**Primary:** Perplexity API (AI-curated, ~$0.005/query)
**Fallback:** SearXNG local (privacy-focused, free)
**Commands:**
```bash
search "your query" # Perplexity → SearXNG fallback
search p "your query" # Perplexity only
search local "your query" # SearXNG only
search --citations "query" # Include source links
search --model sonar-pro "query" # Pro model for complex tasks
```
**Models:**
- `sonar` — Quick answers (default)
- `sonar-pro` — Complex queries, coding
- `sonar-reasoning` — Step-by-step reasoning
- `sonar-deep-research` — Comprehensive research
**Test:** Successfully searched "top 5 models used with openclaw" — returned Claude Opus 4.5, Sonnet 4, Gemini 3 Pro, Kimi K 2.5, GPT-4o with citations.
## Perplexity API Setup
**Configured:** Perplexity API skill created at `/skills/perplexity/`
**Details:**
- Key: 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
- Endpoint: https://api.perplexity.ai/chat/completions
- Models: sonar, sonar-pro, sonar-reasoning, sonar-deep-research
- Format: OpenAI-compatible, ~$0.005 per query
**Usage:** See "Unified Search" section above for primary usage. Direct API access:
```bash
python3 skills/perplexity/scripts/query.py "Your question" --citations
```
**Note:** Perplexity sends queries to cloud servers. Use `search local "query"` for privacy-sensitive searches.
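Since the endpoint is OpenAI-compatible, a direct call can be sketched with the stdlib; this mirrors the standard OpenAI chat-completions body and is not the skill's `query.py` itself (response parsing assumes the usual `choices[0].message.content` shape):

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"

def build_query(question, model="sonar"):
    """OpenAI-style chat body; Perplexity accepts the same message format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

def ask(question, api_key, model="sonar"):
    """POST the query and return the first choice's text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_query(question, model)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```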
## Sub-Agent Setup (Option B)
**Configured:** Sub-agent defaults pointing to .10 Ollama
**Config changes:**
- `agents.defaults.subagents.model`: `ollama-remote/qwen3:30b-a3b-instruct-2507-q8_0`
- `models.providers.ollama-remote`: Points to `http://10.0.0.10:11434/v1`
- `tools.subagents.tools.deny`: write, edit, apply_patch, browser, cron (safer defaults)
**What it does:**
- Spawns background tasks on qwen3:30b at .10
- Inherits main agent context but runs inference remotely
- Auto-announces results back to requester chat
- Max 2 concurrent sub-agents
**Usage:**
```
sessions_spawn({
task: "Analyze these files...",
label: "Background analysis"
})
```
**Status:** Configured and ready
---
*Stored for long-term memory retention*

(unnamed file, new, 1 line)
2026-02-10T11:58:48-06:00