Frequently Asked Questions

Product Information & Accuracy Benchmarks

What is GraphRAG and how does it differ from Vector RAG?

GraphRAG uses a knowledge graph as the retrieval substrate, explicitly encoding entity relationships for schema-aligned context. In contrast, Vector RAG relies on embedding-based similarity without structural alignment, which limits accuracy for schema-bound queries. (Source: FalkorDB Blog)

How did FalkorDB’s GraphRAG SDK improve LLM accuracy in enterprise benchmarks?

FalkorDB’s 2025 SDK pushed GraphRAG accuracy to over 90% for enterprise queries, up from 56.2% in Diffbot’s original benchmark, without needing rerankers or filters. This improvement is especially notable for KPI tracking and planning queries. (Source: FalkorDB Blog)

What are the main categories tested in the KG-LM Accuracy Benchmark?

The KG-LM Accuracy Benchmark evaluated LLM performance across four categories: day-to-day analytics, operational analytics, metrics & KPIs, and strategic planning. (Source: Diffbot Benchmark)

How does entity density affect LLM accuracy in retrieval tasks?

Accuracy degrades to 0% as the number of entities per query exceeds five in vector-only systems. GraphRAG sustains stable performance even with 10+ entities per query, making it essential for schema-intensive enterprise questions. (Source: FalkorDB Blog)

Why did vector RAG fail on Diffbot’s benchmark?

Vector RAG failed schema-bound queries because LLMs couldn’t reason over KPIs, relationships, or planning logic without entity alignment. GraphRAG, powered by FalkorDB, recovered performance in these scenarios. (Source: FalkorDB Blog)

Can vector search ever match GraphRAG for structured data?

No. Vectors cannot model relationships. Even with rerankers, vector search loses speed, trust, and context compared to graph-based retrieval. (Source: FalkorDB Blog)

What practical scenarios require GraphRAG instead of traditional RAG?

GraphRAG is recommended when queries involve business logic, metric definitions, multi-hop relationships, or require schema conformity (e.g., KPIs, forecasts, system state). (Source: FalkorDB Blog)

What is the business impact of using FalkorDB’s GraphRAG SDK?

FalkorDB’s GraphRAG SDK enables highly accurate, multi-tenant RAG solutions with low latency, reducing hallucinations and improving LLM response accuracy for enterprise applications. (Source: FalkorDB Blog)

Who is FalkorDB’s GraphRAG SDK best suited for?

It is ideal for technical teams handling complex, interconnected data in real-time, such as those building GenAI apps, chatbots, or enterprise analytics platforms. (Source: FalkorDB Blog)

What frameworks and tools integrate with FalkorDB for GraphRAG?

FalkorDB integrates with frameworks like LangChain for graph-based retrievers and hybrid pipelines, as well as its own GraphRAG SDK. (Source: FalkorDB Docs)

How does FalkorDB support schema retrieval for enterprise accuracy?

FalkorDB’s GraphRAG SDK enables schema retrieval with low latency, significantly improving enterprise accuracy for structured queries. (Source: FalkorDB Blog)

What are the measurable improvements in KPI tracking with FalkorDB?

Internal evaluations show measurable improvements in KPI tracking and planning queries, with accuracy rising above the original 56.2% benchmark, especially in schema-dense enterprise use cases. (Source: FalkorDB Blog)

How does FalkorDB reduce hallucinations in LLM responses?

By providing schema-aligned retrieval and explicit entity relationships, FalkorDB reduces hallucinations and increases response accuracy in LLM-powered applications. (Source: FalkorDB Blog)

What is the recommended backend for production LLM pipelines?

The KG-LM Accuracy Benchmark validates using a graph database like FalkorDB as the retrieval backend for production LLM pipelines, especially for schema-heavy queries. (Source: Diffbot Benchmark)

How does FalkorDB handle multi-tenant RAG solutions?

FalkorDB offers accurate, multi-tenant RAG solutions based on low-latency, scalable graph database technology, supporting complex, interconnected data in real-time. (Source: FalkorDB Blog)

What are the references for the KG-LM Accuracy Benchmark?

The KG-LM Accuracy Benchmark was published by Diffbot in November 2023. See Diffbot Benchmark PDF and Knowledge Graph Conference for presentations by Kurt Bollacker. (Source: FalkorDB Blog)

Who developed the GraphRAG-SDK?

Gal Shubeli, Software and AI Engineer, leads the development of GraphRAG-SDK, integrating knowledge graphs, ontology management, and state-of-the-art LLMs for accurate, customizable RAG workflows. (Source: FalkorDB Blog)

Features & Capabilities

What are the key performance metrics of FalkorDB?

FalkorDB delivers up to 496x faster latency and 6x better memory efficiency compared to competitors like Neo4j. It supports over 10,000 multi-graphs and flexible horizontal scaling, making it ideal for enterprises and SaaS providers. (Source: FalkorDB Benchmarks)

Does FalkorDB support advanced AI use cases?

Yes, FalkorDB is optimized for AI applications such as GraphRAG, agent memory, and chatbots, enabling intelligent agents with real-time adaptability. (Source: FalkorDB Website)

What integrations are available with FalkorDB?

FalkorDB integrates with frameworks like Graphiti (by ZEP), g.v(), Cognee, LangChain, and LlamaIndex, supporting AI agent memory, knowledge graph visualization, and LLM integration. (Source: FalkorDB Try Free)

Does FalkorDB provide an API?

Yes, FalkorDB offers a comprehensive API with references and guides available in its official documentation. (Source: FalkorDB Docs)

Is FalkorDB open source?

Yes, FalkorDB is open source, encouraging community collaboration and transparency. (Source: FalkorDB Website)

Pricing & Plans

What pricing plans does FalkorDB offer?

FalkorDB offers four plans: FREE (for MVPs, with community support), STARTUP (priced per 1 GB/month, includes TLS and backups), PRO (priced per 8 GB/month, includes cluster deployment and high availability), and ENTERPRISE (custom pricing with VPC, custom backups, and 24/7 support). (Source: FalkorDB Website)

Competition & Comparison

How does FalkorDB compare to Neo4j?

FalkorDB offers up to 496x faster latency, 6x better memory efficiency, flexible horizontal scaling, and multi-tenancy in all plans, while Neo4j’s multi-tenancy is only available in premium tiers. (Source: FalkorDB vs. Neo4j)

How does FalkorDB compare to AWS Neptune?

FalkorDB is open source, supports multi-tenancy, and delivers better latency performance compared to AWS Neptune, which is proprietary and lacks multi-tenancy. (Source: FalkorDB vs. AWS Neptune)

How does FalkorDB compare to TigerGraph and ArangoDB?

FalkorDB provides faster latency, more efficient memory usage, and flexible horizontal scaling compared to TigerGraph and ArangoDB, which have limited scaling and moderate memory efficiency. (Source: FalkorDB Website)

Use Cases & Benefits

What are the primary use cases for FalkorDB?

FalkorDB is used for Text2SQL, Security Graphs (CNAPP, CSPM, CIEM), GraphRAG, Agentic AI & Chatbots, Fraud Detection, and high-performance graph storage for complex relationships. (Source: FalkorDB Website)

Who can benefit from FalkorDB?

Developers, data scientists, engineers, and security analysts at enterprises, SaaS providers, and organizations managing complex, interconnected data in real-time or interactive environments. (Source: FalkorDB Demo)

What industries are represented in FalkorDB’s case studies?

Healthcare (AdaptX), Media & Entertainment (XR.Voyage), and Artificial Intelligence/Ethical AI Development (Virtuous AI). (Source: FalkorDB Case Studies)

Can you share specific customer success stories?

AdaptX uses FalkorDB for rapid access to clinical data insights; XR.Voyage overcame scalability challenges in immersive platforms; Virtuous AI built a high-performance, multi-modal data store for ethical AI development. (Source: FalkorDB Case Studies)

Technical Requirements & Support

How easy is it to implement FalkorDB?

FalkorDB enables rapid deployment, allowing teams to go from concept to enterprise-grade solutions in weeks. Users can sign up for FalkorDB Cloud, try for free, run locally with Docker, or schedule a demo. (Source: FalkorDB Demo)

Where can I find FalkorDB technical documentation?

Comprehensive guides and API references are available at docs.falkordb.com and the GitHub releases page for updates. (Source: FalkorDB Docs)

What support channels are available for FalkorDB users?

Support is available via Discord, GitHub Discussions, solution architects, and community forums. (Source: FalkorDB Website)

Security & Compliance

Is FalkorDB SOC 2 Type II compliant?

Yes, FalkorDB is SOC 2 Type II compliant, meeting rigorous standards for security, availability, processing integrity, confidentiality, and privacy. (Source: FalkorDB Demo)

What business impact does FalkorDB’s compliance have?

SOC 2 Type II compliance ensures FalkorDB protects against unauthorized access, delivers accurate data processing, and safeguards sensitive information, supporting enterprise trust and regulatory requirements. (Source: FalkorDB Demo)


How GraphRAG Outperforms Vector Search in Enterprise LLM Accuracy

“We replaced vector search with GraphRAG—accuracy jumped 3.4x.”

Why Enterprise Queries Need More Than Vectors

In late 2023, Diffbot released the KG-LM Accuracy Benchmark, a public study evaluating how knowledge graphs impact the performance of large language models (LLMs) in enterprise scenarios. The benchmark tests how well LLMs answer 43 business-relevant questions—with and without access to a structured knowledge graph.

The research compares traditional vector search pipelines to graph-based retrieval (GraphRAG), quantifying differences in accuracy across categories like operational analytics, KPI tracking, and strategic planning. For software architects evaluating RAG techniques in enterprise stacks, this benchmark surfaces a critical insight: vector embeddings are not enough when queries depend on structure.

If you’re still running vanilla RAG on high-entity queries, you’re flying blind. Here’s what the data proves—and why forward-leaning teams are already moving to graph-native stacks.

[Figure: KG-LM Accuracy Benchmark GraphRAG flowchart]

Defining GraphRAG

GraphRAG uses a knowledge graph as the retrieval substrate instead of unstructured document vectors. The graph explicitly encodes entity relationships, making it easier for an LLM to retrieve schema-aligned context. This contrasts with standard vector search, which uses embedding-based similarity to retrieve context without structural alignment.
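As a minimal sketch of the distinction (toy data and invented entity names, not FalkorDB's implementation), a graph retriever can compose a multi-hop answer that flat similarity lookup cannot:

```python
# Toy illustration: multi-hop retrieval over an explicit graph.
# All entities and relations here are invented for the example.

# Knowledge graph as (subject, relation, object) triples.
triples = [
    ("acme_corp", "IN_REGION", "emea"),
    ("emea", "TRACKS_KPI", "net_revenue_retention"),
    ("apac", "TRACKS_KPI", "active_users"),
]

def neighbors(entity, relation):
    """Follow one relation hop out of an entity."""
    return [o for s, r, o in triples if s == entity and r == relation]

# Question: "Which KPI does Acme's region track?"
# A graph retriever resolves this as two explicit hops:
region = neighbors("acme_corp", "IN_REGION")[0]   # hop 1
kpi = neighbors(region, "TRACKS_KPI")[0]          # hop 2
print(region, kpi)  # emea net_revenue_retention

# A flat (vector-style) retriever sees two separate facts that each
# match only part of the question; without the explicit join between
# them it cannot compose the final answer.
```

The point is not the tiny dataset but the join: the answer exists only along a path of edges, which is exactly the structure embedding similarity discards.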

Study Objective

The benchmark compares LLM performance across 43 enterprise-specific questions in four categories:

  • Day-to-day analytics
  • Operational analytics
  • Metrics & KPIs
  • Strategic planning

The benchmark measures accuracy with and without knowledge graph integration.

Results: GraphRAG Triples Accuracy in Enterprise Settings

Overall Accuracy

  • LLM without KG grounding: 16.7%
  • LLM with KG grounding (GraphRAG): 56.2%
  • Accuracy gain: 3.4x increase

“This study shows that using a knowledge graph is not just beneficial—it’s functionally required for certain classes of enterprise questions.” — Mike Tung, CEO, Diffbot [1]

Performance Breakdown

[Figure: LLM accuracy with and without knowledge graph grounding, 2023 results]

These results confirm that vector-only systems cannot handle schema-intensive queries. Both Metrics & KPIs and Strategic Planning categories saw zero accuracy from traditional vector RAG.

Schema Dependence and Entity Density

  • Accuracy degrades to 0% as the number of entities per query increases beyond five (without KG support).
  • GraphRAG sustains stable performance even with 10+ entities per query.

This trend reinforces that surface-level similarity alone is not sufficient for high-entity-density queries.
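A back-of-the-envelope model (our illustration, not the benchmark's methodology) shows why this degradation is expected: if a top-k similarity search surfaces each required entity independently with probability p, the chance that every entity a query needs lands in the context decays exponentially with entity count.

```python
# Assumed per-entity recall of a vector retriever (illustrative value).
p = 0.8

# Probability that ALL entities required by the query appear in the
# retrieved context, as the query grows more entity-dense.
for n_entities in [1, 2, 5, 10]:
    coverage = p ** n_entities
    print(f"{n_entities:>2} entities -> P(full context) = {coverage:.2f}")

# Graph traversal resolves each entity by following edges from the ones
# already found, so coverage does not decay this way as entity density grows.
```

Even with a generous 80% per-entity recall, full coverage falls to roughly a third at five entities and about a tenth at ten, consistent with the cliff the benchmark observed.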

“Graphs give structure to knowledge that language models alone can’t replicate.” — Kurt Bollacker, Data Scientist [2]

[Figure: KG-LM Accuracy Benchmark question categories]

Practical Implications for Developers

When to Use GraphRAG

Use GraphRAG instead of traditional RAG when:

  • Queries involve business logic or metric definitions
  • Answers require multi-hop relationships between entities
  • Schema conformity is critical (e.g., KPIs, forecasts, system state)

Tooling and Frameworks

  • FalkorDB: A high-throughput graph database optimized for GraphRAG use cases. See FalkorDB Docs.
  • LangChain: Supports graph-based retrievers and can integrate with FalkorDB for hybrid pipelines.
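To make the retrieval step concrete, here is the shape of a Cypher query a graph retriever might issue for a schema-bound KPI question. The node labels and relation names are illustrative assumptions, not a fixed FalkorDB schema; with a running FalkorDB instance you would send the string through the client's query interface (ideally parameterized rather than interpolated, as sketched here for brevity).

```python
# Sketch of a two-hop, schema-aligned Cypher query for the question
# "Which KPI does this account's region track?" Labels (Account, Region,
# KPI) and relations (IN_REGION, TRACKS_KPI) are hypothetical.

def kpi_query(account: str) -> str:
    """Build a two-hop Cypher query: account -> region -> tracked KPI."""
    return (
        f"MATCH (a:Account {{name: '{account}'}})"
        "-[:IN_REGION]->(r:Region)"
        "-[:TRACKS_KPI]->(k:KPI) "
        "RETURN r.name, k.name"
    )

q = kpi_query("Acme Corp")
print(q)
```

Because the query names the schema explicitly, the rows it returns are already aligned with the question's structure, which is what grounds the LLM's answer.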

GraphRAG is a Structural Requirement

Since the benchmark was published in 2023, FalkorDB has released a production-grade GraphRAG SDK that further improves retrieval alignment and LLM accuracy. In internal tests conducted in Q1 2025, average response accuracy for enterprise-style questions rose to over 90%, up from the 56.2% reported in the original benchmark.

The most significant improvements occurred in KPI tracking and planning queries, where structural fidelity is critical, particularly in schema-dense enterprise use cases.

GraphRAG outperforms vector-based retrieval when schema precision matters. The KG-LM Accuracy Benchmark shows a 3.4x accuracy gain overall, and recovery from 0% in the schema-heavy categories where vector search fails entirely. These results validate using a graph database like FalkorDB as the retrieval backend in production LLM pipelines.

Why did vector RAG fail on Diffbot’s benchmark?

It failed schema-bound queries. Without entity alignment, LLMs couldn’t reason over KPIs, relationships, or planning logic.

What changed with FalkorDB’s GraphRAG SDK in 2025?

It added schema retrieval with low latency, pushing enterprise accuracy to 90%+.

Can vector search ever match GraphRAG for structured data?

No. Vectors can’t model relationships. You can patch with rerankers, but you’ll keep losing speed, trust, and context.

Build fast and accurate GenAI apps with GraphRAG SDK at scale

FalkorDB offers an accurate, multi-tenant RAG solution based on our low-latency, scalable graph database technology. It’s ideal for highly technical teams that handle complex, interconnected data in real-time, resulting in fewer hallucinations and more accurate responses from LLMs.

References and citations