# Agent with Persistent Memory

Build an agent that stores and recalls information across sessions using ZeroMemory.

## Prerequisites

- AINative API key (get one free)
## Store Memories

```python
import requests

TOKEN = "your-api-key"
BASE = "https://api.ainative.studio/api/v1/public/memory/v2"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Store user preferences
requests.post(f"{BASE}/remember", headers=HEADERS, json={
    "content": "User prefers dark mode and uses Python for backend",
    "entity_id": "user_123",
    "memory_type": "semantic",
    "importance": 0.9,
    "tags": ["preferences"],
})

# Store a conversation fact
requests.post(f"{BASE}/remember", headers=HEADERS, json={
    "content": "User is building a RAG chatbot for their company wiki",
    "entity_id": "user_123",
    "memory_type": "episodic",
    "importance": 0.7,
    "tags": ["project"],
})

# Store a relationship
requests.post(f"{BASE}/relate", headers=HEADERS, json={
    "subject": "user_123",
    "predicate": "works_on",
    "object": "RAG Chatbot Project",
    "confidence": 0.9,
})
```
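The three calls above repeat the same payload boilerplate. A small helper (hypothetical, not part of the ZeroMemory API) can build the `/remember` payload in one place; the 0–1 range check on `importance` is an assumption inferred from the values in this guide:

```python
def build_memory(content, entity_id, memory_type="semantic",
                 importance=0.5, tags=None):
    """Build a /remember payload dict. Defaults here are assumptions,
    not documented API behavior -- adjust to the real schema."""
    if not 0.0 <= importance <= 1.0:  # values in this guide are all 0-1
        raise ValueError("importance should be between 0 and 1")
    return {
        "content": content,
        "entity_id": entity_id,
        "memory_type": memory_type,
        "importance": importance,
        "tags": tags or [],
    }

# Used with the session from above, e.g.:
# requests.post(f"{BASE}/remember", headers=HEADERS, json=build_memory(
#     "User prefers dark mode", "user_123", importance=0.9, tags=["preferences"]))
```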
## Recall in a New Session

```python
# Later, in a completely new session...
response = requests.post(f"{BASE}/recall", headers=HEADERS, json={
    "query": "What is the user working on?",
    "entity_id": "user_123",
    "limit": 5,
})

for memory in response.json()["results"]:
    print(f"[{memory['score']:.2f}] {memory['content']}")
```

Output:

```
[0.94] User is building a RAG chatbot for their company wiki
[0.72] User prefers dark mode and uses Python for backend
```
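Recalled memories usually end up in an LLM prompt. A minimal sketch of that step, assuming each result has the `score` and `content` fields shown in the output above (the third, low-score memory is made-up sample data to show the threshold at work):

```python
def format_context(results, min_score=0.5):
    """Turn recall results into a context block for a system prompt,
    keeping only memories above a relevance threshold."""
    lines = [f"- {m['content']}" for m in results if m["score"] >= min_score]
    if not lines:
        return ""
    return "Known facts about this user:\n" + "\n".join(lines)

# Sample data shaped like the recall output above
results = [
    {"score": 0.94, "content": "User is building a RAG chatbot for their company wiki"},
    {"score": 0.72, "content": "User prefers dark mode and uses Python for backend"},
    {"score": 0.31, "content": "User once mentioned liking coffee"},  # filtered out
]
print(format_context(results))
```

Tuning `min_score` trades recall breadth against prompt noise; the 0.5 default is only a starting point.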
## Build a Profile

```python
profile = requests.get(
    f"{BASE}/profile/user_123",
    headers=HEADERS,
).json()
print(profile)
```
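The profile response schema isn't shown in this guide, so the field names below (`entity_id`, `memory_count`, `top_tags`) are assumptions; reading them defensively with `.get()` keeps the code from crashing if the real shape differs:

```python
def summarize_profile(profile):
    """One-line summary of a profile dict. Field names are assumptions --
    check an actual /profile response before relying on them."""
    entity = profile.get("entity_id", "unknown")
    count = profile.get("memory_count", 0)
    tags = ", ".join(profile.get("top_tags", [])) or "none"
    return f"{entity}: {count} memories (tags: {tags})"

sample = {"entity_id": "user_123", "memory_count": 3,
          "top_tags": ["preferences", "project"]}
print(summarize_profile(sample))
```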
## What to Try Next

- Use GraphRAG for relationship-aware retrieval
- Add reflection for AI-generated insights
- Connect via MCP for agent tool access