# 2026-02-10 — Daily Memory Log
## Qdrant Memory System — Manual Mode

**Major change:** Qdrant memory is now MANUAL ONLY.

Two distinct systems established:

- **"remember this" or "note"** → file-based (daily logs + MEMORY.md) — automatic, the original design
- **"q remember", "q recall", "q save", "q update"** → Qdrant `kimi_memories` — manual, only when the "q" prefix is used

**Commands:**

- "q remember" = store one item to Qdrant
- "q recall" = search Qdrant
- "q save" = store a specific item
- "q update" = bulk-sync all file memories to Qdrant without duplicates
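One common way to make the "q update" bulk sync duplicate-free (a sketch of the idea, not necessarily how it is implemented here) is to derive each Qdrant point ID deterministically from the memory text: Qdrant upserts overwrite points that share an ID, so re-running the sync is idempotent. The helper names and namespace UUID below are illustrative:

```python
# Sketch: duplicate-free bulk sync via deterministic point IDs.
# Re-upserting the same memory text yields the same ID, so Qdrant
# overwrites the existing point instead of creating a duplicate.
import uuid

# Fixed (illustrative) namespace so identical text always maps to one ID.
MEMORY_NAMESPACE = uuid.UUID("00000000-0000-0000-0000-00000000beef")

def memory_point_id(text: str) -> str:
    """Deterministic UUID for a memory string (uuid5 is SHA-1 based)."""
    return str(uuid.uuid5(MEMORY_NAMESPACE, text))

def sync_memories(memories: list[str]) -> dict[str, str]:
    """Map point IDs to texts; upserting these IDs into Qdrant is idempotent."""
    return {memory_point_id(m): m for m in memories}
```

Feeding duplicate file memories through `sync_memories` collapses them to a single point before anything is sent to Qdrant.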
## Redis Messaging — Manual Mode

**Change:** Redis agent messaging is now MANUAL ONLY.

- No automatic heartbeat checks for Max's messages
- No auto-notification queue processing
- Manual only, when explicitly requested: "check messages" or "send to Max"
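The manual-only rule reduces to a simple gate: nothing polls Redis, and an inbox read happens only when the user's message contains one of the trigger phrases above. The `max:inbox` list name and helper names below are hypothetical:

```python
# Sketch of the manual-only gate: fetch messages only on an explicit
# request (trigger phrases from this log); no heartbeat, no polling.
MANUAL_TRIGGERS = ("check messages", "send to max")

def is_manual_request(user_text: str) -> bool:
    """True only for explicit requests; never triggered automatically."""
    t = user_text.lower()
    return any(trigger in t for trigger in MANUAL_TRIGGERS)

def maybe_check_inbox(user_text: str, redis_client=None):
    """Drain a (hypothetical) list-based inbox only on an explicit request."""
    if not is_manual_request(user_text) or redis_client is None:
        return None
    # e.g. with redis-py: read all pending entries from the list
    return redis_client.lrange("max:inbox", 0, -1)
```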
## New Qdrant Collection: kimi_memories

**Created:** `kimi_memories` collection at 10.0.0.40:6333

- Vector size: 1024 (snowflake-arctic-embed2)
- Distance: Cosine
- Model: snowflake-arctic-embed2 pulled to 10.0.0.10 (GPU)
- Purpose: manual memory backup when requested
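Creating a collection with these parameters maps onto Qdrant's `PUT /collections/{name}` REST endpoint. A minimal sketch of the request using only the values recorded above (the helper name is illustrative; the actual send is left commented out):

```python
# Sketch: build the Qdrant collection-creation request for kimi_memories.
import json
import urllib.request

def create_collection_request(host: str, name: str, size: int) -> urllib.request.Request:
    # Qdrant expects the vector config in the PUT body.
    body = {"vectors": {"size": size, "distance": "Cosine"}}
    return urllib.request.Request(
        url=f"http://{host}/collections/{name}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = create_collection_request("10.0.0.40:6333", "kimi_memories", 1024)
# urllib.request.urlopen(req)  # uncomment to actually create the collection
```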
## Critical Lesson: Immediate Error Reporting

**Rule established:** When hitting a blocking error during an active task, report IMMEDIATELY — don't wait for the user to ask.

**What I did wrong:**

- Said "let me know when it's complete" for "q save ALL memories"
- Discovered Qdrant was unreachable (host down)
- Stayed silent instead of reporting immediately
- The user had to ask for status to discover I was blocked

**Correct behavior:**

- On a blocking error, report immediately: "Stopped — [reason]. Cannot proceed."
- Never imply progress is happening when it's not
- Applies to service outages, permission errors, and resource exhaustion
## Memory Backup Success

**Completed:** "q save ALL memories" — 39 comprehensive memories successfully backed up to the `kimi_memories` collection.

**Contents stored:**

- Identity & personality
- Communication rules
- Tool usage rules
- Infrastructure details
- YouTube SEO rules
- Setup milestones
- Boundaries & helpfulness principles

**Collection status:**

- Name: `kimi_memories`
- Location: 10.0.0.40:6333
- Vectors: 39 points
- Model: snowflake-arctic-embed2 (1024 dims)
## New Qdrant Collection: kimi_kb

**Created:** `kimi_kb` collection at 10.0.0.40:6333

- Vector size: 1024 (snowflake-arctic-embed2)
- Distance: Cosine
- Purpose: knowledge-base storage (web search, documents, data)
- Mode: manual only — no automatic storage

**Scripts:**

- `kb_store.py` — store web/docs to the KB with metadata
- `kb_search.py` — search the knowledge base with domain filtering

**Usage:**

```bash
# Store to KB
python3 kb_store.py "Content" --title "X" --domain "Docker" --tags "container"

# Search KB
python3 kb_search.py "docker volumes" --domain "Docker"
```

**Test:** Successfully stored and retrieved Docker container info.
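Domain filtering in `kb_search.py` presumably maps the `--domain` flag onto a Qdrant payload filter. A sketch of the search request body under that assumption (the `domain` payload key is inferred from the CLI flag; the filter shape is Qdrant's standard `must`/`match` form):

```python
# Sketch: search body for Qdrant's points/search endpoint, with an
# optional payload filter on the (assumed) "domain" metadata key.
def build_search_body(query_vector, domain=None, limit=5) -> dict:
    body = {"vector": query_vector, "limit": limit, "with_payload": True}
    if domain:
        # Restrict hits to points whose payload has domain == <value>.
        body["filter"] = {"must": [{"key": "domain", "match": {"value": domain}}]}
    return body

body = build_search_body([0.0] * 1024, domain="Docker")
```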
## Unified Search: Perplexity + SearXNG

**Architecture:** Perplexity primary, SearXNG fallback

**Primary:** Perplexity API (AI-curated, ~$0.005/query)
**Fallback:** SearXNG local (privacy-focused, free)

**Commands:**

```bash
search "your query"              # Perplexity → SearXNG fallback
search p "your query"            # Perplexity only
search local "your query"        # SearXNG only
search --citations "query"       # Include source links
search --model sonar-pro "query" # Pro model for complex tasks
```
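The default `search` mode reduces to a small try/except: query Perplexity, and on any failure fall back to local SearXNG. A sketch with placeholder backend callables (the real wrapper's internals aren't recorded in this log):

```python
# Sketch: primary/fallback dispatch for unified search.
def unified_search(query: str, perplexity, searxng):
    """Try Perplexity first; on any failure, fall back to local SearXNG.

    Returns (backend_name, result) so the caller can label the source.
    """
    try:
        return "perplexity", perplexity(query)
    except Exception:
        # Covers API errors, timeouts, and network failures alike.
        return "searxng", searxng(query)
```

`search p` and `search local` then simply call one backend directly instead of going through the fallback path.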
**Models:**

- `sonar` — quick answers (default)
- `sonar-pro` — complex queries, coding
- `sonar-reasoning` — step-by-step reasoning
- `sonar-deep-research` — comprehensive research

**Test:** Successfully searched "top 5 models used with openclaw" — returned Claude Opus 4.5, Sonnet 4, Gemini 3 Pro, Kimi K 2.5, GPT-4o with citations.
## Perplexity API Setup

**Configured:** Perplexity API skill created at `/skills/perplexity/`

**Details:**

- Key: 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
- Endpoint: https://api.perplexity.ai/chat/completions
- Models: sonar, sonar-pro, sonar-reasoning, sonar-deep-research
- Format: OpenAI-compatible, ~$0.005 per query

**Usage:** See the "Unified Search" section above for primary usage. Direct API access:

```bash
python3 skills/perplexity/scripts/query.py "Your question" --citations
```

**Note:** Perplexity sends queries to cloud servers. Use `search local "query"` for privacy-sensitive searches.
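Since the endpoint is OpenAI-compatible, the skill's query boils down to a standard chat-completions POST. A sketch built only from the values above, reading the key from an environment variable rather than hard-coding it (`PPLX_API_KEY` is an illustrative name, not necessarily what the skill uses):

```python
# Sketch: OpenAI-compatible chat-completions request to Perplexity.
import json
import os
import urllib.request

def perplexity_request(question: str, model: str = "sonar") -> urllib.request.Request:
    payload = {"model": model, "messages": [{"role": "user", "content": question}]}
    return urllib.request.Request(
        url="https://api.perplexity.ai/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('PPLX_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = perplexity_request("What is Qdrant?")
# urllib.request.urlopen(req)  # uncomment to actually send the query
```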
## Sub-Agent Setup (Option B)

**Configured:** Sub-agent defaults pointing to the .10 Ollama instance

**Config changes:**

- `agents.defaults.subagents.model`: `ollama-remote/qwen3:30b-a3b-instruct-2507-q8_0`
- `models.providers.ollama-remote`: points to `http://10.0.0.10:11434/v1`
- `tools.subagents.tools.deny`: write, edit, apply_patch, browser, cron (safer defaults)
**What it does:**

- Spawns background tasks on qwen3:30b at .10
- Inherits the main agent's context but runs inference remotely
- Auto-announces results back to the requesting chat
- Max 2 concurrent sub-agents

**Usage:**

```
sessions_spawn({
  task: "Analyze these files...",
  label: "Background analysis"
})
```

**Status:** Configured and ready
---

*Stored for long-term memory retention*