Frequently Asked Questions

Product Information & Core Concepts

What is FalkorDB?

FalkorDB is a high-performance graph database designed for managing complex relationships and enabling advanced AI applications. It is purpose-built for development teams working with interconnected data in real-time or interactive environments, supporting use cases like Text2SQL, security graphs, GraphRAG, agentic AI, and fraud detection. Learn more.

What is the mem0-falkordb plugin?

The mem0-falkordb plugin is a drop-in integration that registers FalkorDB as the graph backend for Mem0, enabling persistent, high-performance graph memory for AI agents. It uses Python runtime patching, so you don't need to fork Mem0 or modify its source code. View on GitHub.

How does graph memory differ from vector memory for LLM agents?

Graph memory stores typed relationships between entities, enabling multi-hop reasoning across connected facts. In contrast, vector memory retrieves semantically similar text chunks but lacks the ability to traverse logical relationships, which can limit reasoning and context for LLM agents.

What are the main use cases for FalkorDB?

FalkorDB is used for Text2SQL (natural language to SQL queries), security graphs (for CNAPP, CSPM, CIEM), GraphRAG (advanced graph-based retrieval), agentic AI and chatbots, fraud detection, and high-performance graph storage for complex relationships. See use cases.

How does FalkorDB handle per-user graph isolation?

FalkorDB, via the mem0-falkordb plugin, automatically maps each user to a dedicated graph (e.g., mem0_alice, mem0_bob), ensuring zero data leakage and constant query time as you scale from 10 to 10,000 users. This architecture also simplifies GDPR/CCPA deletion requests by allowing you to delete an entire user graph with a single command.

What is the primary purpose of FalkorDB?

FalkorDB's primary purpose is to provide an accurate, multi-tenant RAG (Retrieval-Augmented Generation) solution powered by a low-latency, scalable graph database. It is designed for developers working with complex, interconnected data in real-time or interactive environments.

How does the mem0-falkordb plugin work under the hood?

The plugin uses a runtime patching layer that intercepts Mem0's internal graph calls and translates them into FalkorDB-optimized Cypher queries. This allows seamless integration without forking Mem0, ensuring compatibility with future Mem0 releases.

What are the architecture advantages of per-user graph isolation?

Per-user graph isolation in FalkorDB ensures zero data leakage, constant query performance regardless of user count, trivial data cleanup for compliance, and optimized memory allocation per user, resulting in better cache hits and lower latency.

How is information structured in FalkorDB when used with Mem0?

Information added via Mem0 is parsed by the LLM, converted into entities and relationships, and persisted into FalkorDB as a logical graph structure. This enables agents to reason across facts and relationships, not just retrieve isolated text chunks.

What is the quickstart process for running the Mem0 + FalkorDB demo?

To run the demo: (1) Spin up FalkorDB using Docker, (2) set your OpenAI API key, and (3) clone the mem0-falkordb repo and run the demo scripts. Full instructions are available in the GitHub repository.

Features & Capabilities

What are the key performance metrics for FalkorDB?

FalkorDB delivers up to 496x lower latency and 6x better memory efficiency compared to competitors like Neo4j. It supports over 10,000 multi-graphs and offers flexible horizontal scaling, making it ideal for real-time, large-scale, and AI-driven applications. See benchmarks.

Does FalkorDB support multi-tenancy?

Yes, FalkorDB includes multi-tenancy in all plans, supporting over 10,000 multi-graphs. This is especially valuable for SaaS providers and organizations with diverse user bases.

What integrations does FalkorDB offer?

FalkorDB integrates with frameworks such as Graphiti (by ZEP), g.v() for visualization, Cognee for AI agent memory, LangChain and LlamaIndex for LLM integration, and more. See all integrations.

Does FalkorDB provide an API?

Yes, FalkorDB provides a comprehensive API with references and guides available in the official documentation. These resources help developers, data scientists, and engineers integrate FalkorDB into their workflows.

What technical documentation is available for FalkorDB?

FalkorDB offers complete guides and API references at docs.falkordb.com and release notes on the GitHub Releases Page. These resources cover setup, advanced configurations, and integration tips.

What are the key capabilities and benefits of FalkorDB?

FalkorDB supports 10,000+ multi-graphs, is open source, offers linear scalability, ultra-low latency, and is optimized for AI use cases like GraphRAG and agent memory. It provides trust, reliability, scalability, enhanced user experience, regulatory compliance, and built-in multi-tenancy.

How does FalkorDB optimize for AI applications?

FalkorDB is tailored for advanced AI use cases such as GraphRAG and agent memory, enabling intelligent agents and chatbots with real-time adaptability. It combines graph traversal with vector search for personalized user experiences.

What is the business impact of using FalkorDB?

Customers using FalkorDB can expect improved scalability, enhanced trust and reliability, reduced alert fatigue in cybersecurity, faster time-to-market, better user experience, regulatory compliance, and support for advanced AI applications. These outcomes help organizations unlock the full potential of their data and achieve strategic goals. See case studies.

Implementation & Onboarding

How easy is it to implement FalkorDB?

FalkorDB is built for rapid deployment, allowing teams to go from concept to enterprise-grade solutions in weeks, not months. Users can sign up for FalkorDB Cloud, try it for free, run it locally with Docker, or schedule a demo. Comprehensive documentation and community support are available for onboarding.

What support and training options are available?

FalkorDB provides comprehensive documentation, community support via Discord and GitHub Discussions, access to solution architects, and free trial/demo options. Tutorials and technical articles are also available on the FalkorDB blog.

What feedback have customers given about FalkorDB's ease of use?

Customers like AdaptX and 2Arrows have praised FalkorDB for its user-friendly design and high-speed performance. AdaptX highlighted rapid access to clinical insights, while 2Arrows' CTO called it a 'game-changer' for ease of running non-traversal queries compared to Neo4j. Read testimonials.

How can I get started with FalkorDB?

You can sign up for FalkorDB Cloud, try a free instance, run FalkorDB locally with Docker, or schedule a demo. Documentation, tutorials, and community support are available to help you get started quickly. Get started.

Security & Compliance

What security and compliance certifications does FalkorDB have?

FalkorDB is SOC 2 Type II compliant, meeting rigorous standards for security, availability, processing integrity, confidentiality, and privacy. This demonstrates a strong commitment to protecting customer data and regulatory compliance. Learn more.

How does FalkorDB ensure data privacy and protection?

FalkorDB's SOC 2 Type II compliance ensures protection against unauthorized access, operational availability, accurate data processing, confidentiality of sensitive information, and privacy of personal data. These controls are independently audited and verified.

How does FalkorDB handle GDPR or CCPA data deletion requests?

With per-user graph isolation, FalkorDB allows you to delete all data for a user by simply running DELETE GRAPH for that user's graph (e.g., mem0_alice), ensuring compliance with GDPR and CCPA requirements without complex queries.

Pricing & Plans

What pricing plans does FalkorDB offer?

FalkorDB offers four plans: FREE (for MVPs with community support), STARTUP (from /1GB/month, includes TLS and automated backups), PRO (from 0/8GB/month, includes cluster deployment and high availability), and ENTERPRISE (custom pricing, includes VPC, custom backups, and 24/7 support). See pricing.

What features are included in the Free plan?

The Free plan is designed for building a powerful MVP and includes community support. It is ideal for developers and small teams starting with graph database projects.

What features are included in the Startup plan?

The Startup plan starts at /1GB/month and includes TLS encryption and automated backups, making it suitable for growing teams that need additional security and reliability.

What features are included in the Pro plan?

The Pro plan starts at 0/8GB/month and includes advanced features such as cluster deployment, high availability, and more robust infrastructure for production workloads.

What features are included in the Enterprise plan?

The Enterprise plan offers tailored pricing and includes enterprise-grade features such as VPC deployment, custom backups, and 24/7 support, making it suitable for large organizations with advanced requirements.

Competition & Comparison

How does FalkorDB compare to Neo4j?

FalkorDB offers up to 496x lower latency and 6x better memory efficiency than Neo4j, supports flexible horizontal scaling, and includes multi-tenancy in all plans. Neo4j uses an on-disk storage model and offers multi-tenancy only in premium plans. See detailed comparison.

How does FalkorDB compare to AWS Neptune?

FalkorDB is open source, supports multi-tenancy, and delivers better latency performance compared to AWS Neptune, which is proprietary, closed-source, and lacks multi-tenancy support. FalkorDB also supports the Cypher query language and efficient vector search. See comparison.

How does FalkorDB compare to TigerGraph and ArangoDB?

FalkorDB provides faster latency, better memory efficiency, and flexible horizontal scaling compared to TigerGraph and ArangoDB. Both competitors offer multi-tenancy and vector search, but FalkorDB's performance and scalability make it a strong choice for demanding applications.

Why should a customer choose FalkorDB over alternatives?

FalkorDB stands out for its exceptional performance, scalability, built-in multi-tenancy, advanced AI integration, open-source licensing, and enhanced user experience. It is trusted by customers in healthcare, media, and AI development. See customer stories.

Use Cases & Customer Success

Who is the target audience for FalkorDB?

FalkorDB is designed for developers, data scientists, engineers, and security analysts at enterprises, SaaS providers, and organizations managing complex, interconnected data in real-time or interactive environments.

What industries are represented in FalkorDB's case studies?

FalkorDB case studies include healthcare (AdaptX), media and entertainment (XR.Voyage), and artificial intelligence/ethical AI development (Virtuous AI). Explore case studies.

Can you share specific customer success stories?

Yes. AdaptX uses FalkorDB for rapid clinical data analysis, XR.Voyage overcame scalability challenges in immersive media, and Virtuous AI built a high-performance, multi-modal data store for ethical AI. Read their stories.

Who are some of FalkorDB's customers?

Notable customers include AdaptX (healthcare), XR.Voyage (media/entertainment), and Virtuous AI (ethical AI development). Their success stories are publicly available on the FalkorDB website.

What core problems does FalkorDB solve?

FalkorDB addresses trust and reliability in LLM-based applications, scalability and data management, alert fatigue in cybersecurity, performance limitations of competitors, interactive data analysis, regulatory compliance, and support for agentic AI and chatbots.

What pain points does FalkorDB address for its users?

FalkorDB helps users overcome challenges such as trust and reliability in LLM-based apps, managing large-scale data, reducing alert fatigue in security, outperforming competitors in speed and memory, enabling interactive data analysis, and ensuring regulatory compliance.

Beyond Flat Memory: Persistent Graph-Structured LLM Memory with mem0-falkordb


Highlights

from mem0_falkordb import register
register()

from mem0 import Memory

config = {
    "graph_store": {
        "provider": "falkordb",
        "config": {
            "host": "localhost",
            "port": 6379,
            "database": "mem0",
        },
    },
    # Add your LLM and embedder config as usual
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini"},
    },
}

m = Memory.from_config(config)
m.add("I love pizza", user_id="alice")
results = m.search("what does alice like?", user_id="alice")

The current state of AI agent memory is, frankly, a bit like 1950s file cabinets. Most agentic frameworks rely on vector stores to provide long-term memory. While vector search is great for “find things that sound like this,” it’s inherently flat.

If an agent learns that “Alice is vegan” and “Alice is allergic to nuts,” a vector store treats these as two distinct points in a high-dimensional space. To the LLM, these are two separate fragments of information retrieved via nearest-neighbor search. But in reality, these aren’t just isolated strings; they are related attributes of a single entity (in this case, Alice).

When the agent’s memory is represented as a Graph, it doesn’t just store “Alice” and “Vegan” as embeddings. It stores a relationship: (Alice)-[:FOLLOWS_DIET]->(Vegan). This structural connectivity allows agents to reason across facts rather than just retrieving them.

To address this challenge, we've released the FalkorDB graph store plugin for Mem0. It adds persistent, high-performance graph memory to your AI agents with a single line of code.


Why Graph Memory?

Traversal vs. Search

To understand why your agent needs a graph, consider a common scenario from one of FalkorDB’s demos:

Alice is a vegan software engineer, allergic to tree nuts, and currently leading a GraphQL migration at her company. She is also planning a trip to Japan.

In a conventional vector-only memory setup, a query like “What should Alice eat in Japan?” triggers a similarity search. The results might return her trip to Japan and perhaps her nut allergy, but might miss her veganism if the “vector distance” between “Japan” and “Vegan” is too high in that specific context.

In a Knowledge Graph, the agent navigates the relationships. It starts at the node Alice, traverses to her DietaryPreferences, sees Vegan, traverses to Allergies, sees TreeNuts, and checks her Destination, Japan. The agent isn’t just “guessing” based on semantic similarity; it is traversing a verified web of facts. The result is a response that is logically sound.
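To make the contrast concrete, here is a toy sketch of the traversal side (ours, not the plugin's code): a tiny in-memory adjacency map stands in for the knowledge graph, and answering the question becomes a deterministic walk over typed edges rather than a similarity lookup.

```python
# Toy stand-in for a knowledge graph: entity -> {relation: target}.
# The entities and relation names mirror the Alice scenario above;
# the data structure itself is illustrative, not FalkorDB's.
facts = {
    "Alice": {
        "FOLLOWS_DIET": "Vegan",
        "ALLERGIC_TO": "TreeNuts",
        "PLANS_TO_VISIT": "Japan",
    },
}

def traverse(entity: str, relations: list[str]) -> dict[str, str]:
    """Walk outward from one node along the listed relation types."""
    edges = facts.get(entity, {})
    return {rel: edges[rel] for rel in relations if rel in edges}

# "What should Alice eat in Japan?" becomes a deterministic walk,
# not a nearest-neighbor guess:
profile = traverse("Alice", ["FOLLOWS_DIET", "ALLERGIC_TO", "PLANS_TO_VISIT"])
```

Every fact in `profile` is reached by following an explicit edge, which is why the resulting answer is logically grounded rather than probabilistically retrieved.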

The mem0-falkordb plugin

mem0-falkordb is a drop-in plugin that registers FalkorDB as the graph backend for Mem0. It uses Python runtime patching, meaning you don’t need to fork Mem0 or modify its source code. You simply import the plugin, register it, and your agents are backed by an ultra-fast graph database.

Setup

Setting it up is designed to be developer-friendly. Here is how you initialize a Mem0 instance with FalkorDB as the persistent graph store:

from mem0_falkordb import register
from mem0 import Memory

# 1. Register FalkorDB as a Mem0 provider
register()

# 2. Define your configuration
config = {
    "graph_store": {
        "provider": "falkordb",
        "config": {
            "host": "localhost",
            "port": 6379,
            "database": "mem0"
        },
    },
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-5-mini"}
    },
}

# 3. Initialize Memory
m = Memory.from_config(config)

# 4. Add data and search
m.add("I'm a vegan software engineer allergic to nuts", user_id="alice")
results = m.search("what can alice eat?", user_id="alice")

With this configuration, every piece of information added to m.add() is parsed by the LLM, converted into entities and relations, and persisted into FalkorDB.

The Standout Feature: Per-User Graph Isolation

One of the biggest headaches in building multi-tenant agent applications is data leakage and query performance. In many implementations today, you store all users in one massive graph, append a user_id property to every node, and filter every query with WHERE n.user_id = 'alice'.
mem0-falkordb takes a different, more robust architectural approach: Automatic Graph Isolation.
When you provide a user_id to Mem0, the FalkorDB plugin automatically maps that user to their own dedicated graph: mem0_alice, mem0_bob, mem0_carol.
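In code, the mapping amounts to deriving a graph key from the user_id. The sketch below is our illustration of the idea; the sanitization rule is an assumption, not the plugin's exact behavior:

```python
import re

def graph_for_user(user_id: str) -> str:
    """Map a Mem0 user_id to its dedicated FalkorDB graph name.

    The mem0_ prefix matches the naming shown above (mem0_alice, mem0_bob);
    the character sanitization is our assumption for illustration.
    """
    safe = re.sub(r"[^A-Za-z0-9_]", "_", user_id)
    return f"mem0_{safe}"
```

With this mapping in place, every query for Alice is scoped to `mem0_alice` before any Cypher runs, so isolation is enforced at the storage layer rather than by query filters.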

Architecture Advantages

Why This Is Superior

Per-user graph isolation delivers unmatched data safety, performance, and operational simplicity.

Zero Leakage

There is no physical way for a query for Bob to touch Alice's data. The graph engine is literally operating on a different data structure.

Performance at Scale

As you move from 10 users to 10,000, your query time remains constant. The graph engine only ever traverses the specific subgraph for that user, rather than filtering through a global index.

Trivial Cleanup

If a user requests their data be deleted (GDPR/CCPA), you don't need to run complex MATCH or DELETE queries with filters. You simply run DELETE GRAPH mem0_alice.
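A cleanup routine can therefore be a one-liner against any client that exposes a select-and-delete shape (falkordb-py's `select_graph(name).delete()` is one such shape; treat the exact API as an assumption in this sketch):

```python
def purge_user(client, user_id: str) -> str:
    """Drop a user's entire memory graph in one call.

    `client` is any FalkorDB-style client object exposing
    select_graph(name).delete(); no filtered MATCH/DELETE is needed.
    """
    graph_name = f"mem0_{user_id}"
    client.select_graph(graph_name).delete()
    return graph_name
```

Because the user's data lives in one physical graph, deletion is atomic from the application's point of view: there is no risk of a filter missing a stray node.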

Memory Efficiency

Individual graphs allow FalkorDB to optimize memory allocation per user, leading to better cache hits and lower latency.

Demo

The demo/demo.py script provides a full lifecycle look at how this works in practice across five distinct “scenes.”


Scene 1 & 2: Onboarding and Retrieval

We initialize three vastly different users:

  1. Alice Chen: A backend engineer hiking the NH 48.
  2. Bob: An Italian chef and “soccer dad” in Boston.
  3. Dr. Carol Martinez: A cardiologist and marathoner researching ML at MIT.

When we query "What is her research about?" specifically for user_id="carol", the system ignores everything stored for Alice and Bob and returns Carol's specific focus on ML and AFib.

Scene 3: Facts Over Time

Memory isn’t static. People change. In Scene 3, Alice informs the agent she is transitioning from vegan to pescatarian.

Mem0 and FalkorDB handle the resolution of contradictory facts. The graph updates, the IS_VEGAN relationship is superseded or modified, and subsequent queries reflect her new diet. This is significantly harder to achieve in a pure vector store where “old” embeddings often linger and pollute search results.
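A toy model of why typed edges make this tractable: if a (subject, relation) pair keys exactly one object, a new fact supersedes the old one instead of accumulating beside it. This dict sketch is ours; the real resolution logic lives in Mem0 and the graph store.

```python
# (subject, relation) -> object: the latest asserted fact wins.
memory: dict[tuple[str, str], str] = {}

def remember(subject: str, relation: str, obj: str) -> None:
    """Assert a fact, overwriting any prior fact with the same typed edge."""
    memory[(subject, relation)] = obj

remember("Alice", "FOLLOWS_DIET", "Vegan")
remember("Alice", "FOLLOWS_DIET", "Pescatarian")  # Scene 3 update supersedes
```

In a vector store there is no such key to overwrite: the old "Alice is vegan" embedding simply coexists with the new one and can still surface in search.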

Scene 4: Proving Isolation

We run a search for “marathons” against Alice’s graph. It returns nothing. We run the same search against Carol’s graph, and it returns her history in the Boston and NYC marathons. This proves that even with similar semantic concepts, the per-user isolation acts as a hard boundary.

Scene 5: Scaling Without Friction

The demo programmatically creates 10 synthetic users. Because of the isolation architecture, querying any one of them takes the same amount of time as it did when there was only one user. The total user count does not degrade the individual user’s experience.


Inspecting the Raw Graph

The “wow moment” for most developers comes when running inspect_graphs.py. This utility bypasses the Mem0 abstraction and connects directly to FalkorDB via falkordb-py to show you exactly what the LLM has synthesized.

When you run it after the demo, you’ll see an output similar to this:

Graph: mem0_alice
├── Alice --[IS_A]--> SoftwareEngineer
├── Alice --[FOLLOWS_DIET]--> Vegan
├── Alice --[PLANS_TO_VISIT]--> Japan
├── Alice --[PREFERS]--> Python
└── Alice --[ALLERGIC_TO]--> TreeNuts

Graph: mem0_carol
├── Carol --[OCCUPATION]--> Cardiologist
├── Carol --[RESEARCHES]--> AFib
├── Carol --[AFFILIATED_WITH]--> MIT
└── Carol --[COMPLETED]--> BostonMarathon


This structure is built automatically from natural language. You didn’t have to define a schema, write Cypher CREATE statements, or manage nodes manually. The agent heard a fact and organized it into a logical hierarchy.

How it Works Under the Hood

Developer Notes

Mem0 was designed around other graph stores. Our runtime patching layer intercepts Mem0's internal graph calls and translates them into FalkorDB-optimized Cypher: no fork, no waiting on upstream.

from mem0_falkordb import register

# Runtime patch registration (no mem0 fork required)
register()

from mem0 import Memory

memory = Memory.from_config({
    "graph_store": {
        "provider": "falkordb",
        "config": {"host": "localhost", "port": 6379, "database": "mem0"}
    }
})
| Neo4j-style call | FalkorDB translation | Why this matters |
| --- | --- | --- |
| db.index.vector.queryNodes(...) | db.idx.vector.queryNodes(...) | Hybrid vector + graph search works out of the box. |
| elementId(n) | id(n) | Preserves node identity semantics across engines. |
| SET n.embedding = $embedding | SET n.embedding = vecf32($embedding) | Faster, native float32 embedding writes. |
| CALL { ... UNION ... } | outgoing query + incoming query | Rewrites to directional scans that align with FalkorDB strengths. |

Quickstart

Run the Mem0 + FalkorDB Demo in 3 Steps

Spin up FalkorDB

docker run --rm -p 6379:6379 falkordb/falkordb:latest

Set your environment

export OPENAI_API_KEY='your-key-here'

Run the Demo

We recommend using uv for lightning-fast dependency management:

git clone https://github.com/FalkorDB/mem0-falkordb.git
cd mem0-falkordb/demo
uv sync
uv run python demo.py
uv run python inspect_graphs.py

Memory shouldn’t be a flat list of text chunks. For agents to truly act as assistants, they need to understand the entities they interact with and the complex web of relationships that define them.

By combining Mem0’s sophisticated memory management with FalkorDB’s speed and per-user isolation, you can build agents that are faster, safer, and significantly smarter.

Give it a spin, star the repo, and let us know what you build!

FAQ

How does graph memory differ from vector memory for LLM agents?

Vector search retrieves semantically similar text chunks. Graph memory stores typed relationships between entities, enabling multi-hop reasoning across connected facts, not just similarity scores.

Does the plugin require forking or modifying Mem0?

No. It uses Python runtime patching to translate Mem0's internal calls into FalkorDB-optimized Cypher. No forks, no upstream PRs needed to stay compatible as Mem0 evolves.

How do I delete all of a user's data for GDPR/CCPA?

Each user maps to a dedicated graph (e.g., mem0_alice). To purge all data for that user, run DELETE GRAPH mem0_alice. No complex filtered DELETE queries across a shared dataset.

References and citations

  1. Mem0 Graph Memory Documentation — Mem0’s official reference on entity extraction, relationship storage, and hybrid vector+graph retrieval: https://docs.mem0.ai/open-source/features/graph-memory[docs.mem0]​

  2. FalkorDB vs. Neo4j Performance Benchmarks — Sub-140ms p99 latency vs. Neo4j’s 46,923ms under equivalent workloads on aggregate expansion queries: https://www.falkordb.com/blog/graph-database-performance-benchmarks-falkordb-vs-neo4j/[falkordb]​

  3. Mem0 Research Paper (arXiv) — Empirical results showing Mem0 with graph memory achieves ~2% higher overall score vs. base vector-only configuration across multi-hop and temporal question categories: https://arxiv.org/abs/2504.19413[arxiv]​

  4. mem0-falkordb GitHub Repository — Plugin source code, demo scripts, and setup instructions: https://github.com/FalkorDB/mem0-falkordb

  5. Graphiti + FalkorDB Integration (Zep Blog) — Independent benchmark corroboration citing 496x faster p99 latency and 6x better memory efficiency for FalkorDB: https://blog.getzep.com/graphiti-knowledge-graphs-falkordb-support/[blog.getzep]​

  6. Mem0 Graph Memory for AI Agents (Mem0 Blog) — Breakdown of graph vs. vector memory tradeoffs, with data on 91% faster responses and 90% lower token costs in hybrid retrieval mode: https://mem0.ai/blog/graph-memory-solutions-ai-agents[mem0]​