Frequently Asked Questions

Product Information

What is FalkorDB and what does it do?

FalkorDB is a high-performance graph database designed to manage complex relationships and enable advanced AI applications. It is purpose-built for development teams working with interconnected data in real-time or interactive environments, supporting use cases such as Text2SQL, security graphs, GraphRAG, agentic AI, chatbots, and fraud detection.

What are the main products and services offered by FalkorDB?

FalkorDB offers a graph database platform with features for managing interconnected data, advanced AI integrations, and multi-tenancy. Key offerings include Text2SQL, security graphs for CNAPP/CSPM/CIEM, GraphRAG for fast graph-based retrieval, agentic AI and chatbots, and fraud detection. The platform is available in Free, Startup, Pro, and Enterprise plans, with comprehensive documentation and support.

What is GraphRAG and how does it differ from VectorRAG?

GraphRAG is a retrieval-augmented generation architecture that explicitly encodes entity relationships in a graph, enabling multi-hop reasoning and logical consistency for complex queries. In contrast, VectorRAG relies on embedding-based similarity for broad retrieval but struggles with structured multi-hop reasoning and entity disambiguation. GraphRAG is better suited for relationship-intensive domains, while VectorRAG excels in content-heavy applications.

What is the primary purpose of FalkorDB?

FalkorDB is designed to deliver accurate, multi-tenant RAG solutions powered by a low-latency, scalable graph database. It addresses the needs of development teams working with complex, interconnected data in real-time or interactive environments, enabling trust, scalability, and advanced AI workflows.

Who is the target audience for FalkorDB?

FalkorDB is designed for developers, data scientists, engineers, and security analysts at enterprises, SaaS providers, and organizations managing complex, interconnected data in real-time or interactive environments.

Features & Capabilities

What are the key features of FalkorDB?

Key features of FalkorDB include support for over 10,000 multi-graphs (multi-tenancy), open-source licensing, linear scalability, ultra-low latency, GraphRAG and agent memory optimization, cloud and on-prem deployment, and advanced AI integrations. It also offers dashboards and custom views for interactive data analysis.

Does FalkorDB support advanced AI use cases?

Yes, FalkorDB is optimized for advanced AI use cases such as GraphRAG, agent memory, and agentic AI applications. It enables intelligent agents and chatbots with real-time adaptability and supports combining graph traversal with vector search for personalized user experiences.

What integrations are available with FalkorDB?

FalkorDB integrates with frameworks such as Graphiti (by ZEP) for AI agent memory, g.v() for knowledge graph visualization, Cognee for mapping knowledge graphs, LangChain and LlamaIndex for LLM integration, and more. FalkorDB is open to new integrations and provides documentation for developers.

Does FalkorDB provide an API?

Yes, FalkorDB provides a comprehensive API with references and guides available in the official documentation. These resources support developers, data scientists, and engineers in integrating FalkorDB into their workflows.

What technical documentation is available for FalkorDB?

FalkorDB offers comprehensive technical documentation and API references at docs.falkordb.com, including setup guides, advanced configurations, and release notes on the GitHub releases page.

How does FalkorDB handle multi-tenancy?

FalkorDB supports multi-tenancy in all plans, enabling management of over 10,000 multi-graphs. This is particularly beneficial for SaaS providers and organizations with diverse user bases, ensuring isolated tenant data and scalability.

Performance & Scalability

How does FalkorDB perform compared to other graph databases?

FalkorDB delivers up to 496x lower latency and 6x better memory efficiency than competitors such as Neo4j. It supports real-time data analysis, flexible horizontal scaling, and efficient handling of large-scale, high-dimensional data. For detailed benchmarks, visit benchmark.falkordb.com.

What are the scalability capabilities of FalkorDB?

FalkorDB supports flexible horizontal scaling and can handle over 10,000 multi-graphs, making it suitable for enterprises and SaaS providers managing complex, large-scale datasets.

How does GraphRAG compare to VectorRAG in terms of query performance?

GraphRAG enables direct traversal of structured relationships, preserving logical connections for multi-hop queries, but requires more upfront schema design. VectorRAG offers sub-second latency for broad retrieval but struggles with deep multi-hop reasoning and recall precision. Hybrid approaches can combine both for optimal performance.

What is the computational cost difference between VectorRAG and GraphRAG for real-time data updates?

VectorRAG requires O(n) re-embedding for modified documents, leading to higher computational overhead for updates. GraphRAG supports incremental node and edge updates, which are more efficient, but demands custom temporal versioning for relationship changes.

How does FalkorDB enable interactive data analysis?

FalkorDB enhances user experience by enabling fast, interactive analysis of complex data through dashboards and custom views, allowing users to filter, explore, and visualize data efficiently.

Use Cases & Benefits

What are the primary use cases for FalkorDB?

FalkorDB is used for Text2SQL (natural language to SQL queries), security graphs (CNAPP, CSPM, CIEM), GraphRAG (advanced graph-based retrieval), agentic AI and chatbots, and fraud detection (real-time pattern detection across transaction networks).

Which industries benefit most from GraphRAG over VectorRAG implementations?

Industries such as financial services (fraud detection), healthcare (treatment pathways), and supply chain (multi-tier analysis) benefit most from GraphRAG's relationship traversal and multi-hop reasoning capabilities, despite higher implementation complexity.

What business impact can customers expect from using FalkorDB?

Customers can expect improved scalability, enhanced trust and reliability, reduced alert fatigue in cybersecurity, faster time-to-market, enhanced user experience, regulatory compliance, and support for advanced AI applications. These outcomes empower businesses to unlock the full potential of their data and achieve strategic goals.

Can you share specific case studies or success stories of customers using FalkorDB?

Yes. AdaptX uses FalkorDB to analyze high-dimensional medical data, XR.Voyage overcame scalability challenges in immersive experiences, and Virtuous AI created a high-performance, multi-modal data store for ethical AI development. Read their stories on the FalkorDB case studies page.

What feedback have customers given about FalkorDB's ease of use?

Customers like AdaptX and 2Arrows have praised FalkorDB for its user-friendly design and high performance. AdaptX highlighted rapid access to clinical insights, while 2Arrows' CTO called FalkorDB a 'game-changer' for ease of running non-traversal queries compared to Neo4j.

How long does it take to implement FalkorDB and how easy is it to start?

FalkorDB is built for rapid deployment, allowing teams to go from concept to enterprise-grade solutions in weeks, not months. Users can sign up for FalkorDB Cloud, try a free instance, run locally with Docker, or schedule a demo. Comprehensive documentation and community support are available for onboarding.

Pricing & Plans

What pricing plans does FalkorDB offer?

FalkorDB offers four main plans: Free (for MVPs, with community support), Startup (per-GB monthly pricing, with TLS and automated backups), Pro (metered per 8 GB per month, with cluster deployment and high availability), and Enterprise (custom pricing, with VPC, custom backups, and 24/7 support).

What features are included in the Free plan?

The Free plan is designed for building a powerful MVP and includes community support. It is ideal for users who want to explore FalkorDB's capabilities before upgrading to paid plans.

What features are included in the Startup plan?

The Startup plan is priced per gigabyte per month and includes features such as TLS encryption and automated backups, making it suitable for small teams and startups that need a secure, reliable graph database.

What features are included in the Pro plan?

The Pro plan is metered per 8 GB per month and includes advanced features such as cluster deployment, high availability, and additional resources for scaling production workloads.

What features are included in the Enterprise plan?

The Enterprise plan offers tailored pricing and includes enterprise-grade features such as VPC deployment, custom backups, and 24/7 support, making it suitable for large organizations with advanced security and compliance needs.

Competition & Comparison

How does FalkorDB compare to Neo4j?

FalkorDB offers up to 496x lower latency and 6x better memory efficiency than Neo4j, supports flexible horizontal scaling, and includes multi-tenancy in all plans. Neo4j uses an on-disk storage model and offers multi-tenancy only in premium plans. For more details, see the FalkorDB vs. Neo4j comparison.

How does FalkorDB compare to AWS Neptune?

FalkorDB is open source, supports multi-tenancy, and provides better latency performance compared to AWS Neptune, which is proprietary, closed-source, and lacks multi-tenancy. FalkorDB also supports the Cypher query language and efficient vector search. See the FalkorDB vs. AWS Neptune comparison for more information.

How does FalkorDB compare to TigerGraph?

FalkorDB offers lower latency, better memory efficiency, and flexible horizontal scaling compared to TigerGraph, which has limited scaling and moderate memory efficiency. FalkorDB's latency is rated fast, while TigerGraph's is rated adequate.

How does FalkorDB compare to ArangoDB?

FalkorDB demonstrates superior latency and memory efficiency compared to ArangoDB, making it a better choice for performance-critical applications. Both support multi-tenancy and vector search, but FalkorDB offers more efficient scaling and faster performance.

Why should a customer choose FalkorDB over alternatives?

FalkorDB stands out for its exceptional performance (up to 496x lower latency), scalability, built-in multi-tenancy, advanced AI integration, open-source licensing, and enhanced user experience. It is trusted by customers in healthcare, media, and AI development, and is SOC 2 Type II compliant for security and compliance needs.

Security & Compliance

What security and compliance certifications does FalkorDB have?

FalkorDB is SOC 2 Type II compliant, meeting rigorous standards for security, availability, processing integrity, confidentiality, and privacy. This demonstrates FalkorDB's commitment to maintaining the highest standards of security and compliance.

How does FalkorDB ensure data security and privacy?

FalkorDB protects against unauthorized access, ensures system availability, delivers accurate and timely data processing, safeguards sensitive information, and complies with privacy regulations. These measures are validated by SOC 2 Type II certification.

Technical Requirements & Support

What technical resources are available for developers using FalkorDB?

Developers can access comprehensive documentation, API references, Docker deployment guides, community support via Discord and GitHub Discussions, and practical tutorials on the FalkorDB blog.

How can I get support or contact the FalkorDB team?

You can contact FalkorDB for support via their website, join the Discord server for real-time discussions, participate in GitHub Discussions, or schedule a demo for personalized assistance.

What is the process for integrating FalkorDB with other tools?

FalkorDB provides integration guides and documentation for connecting with frameworks like LangChain, LlamaIndex, Graphiti, and Cognee. Developers can follow official documentation or contact FalkorDB for custom integration support.

What are the supported deployment options for FalkorDB?

FalkorDB supports both cloud and on-premises deployments, allowing organizations to choose the best fit for their infrastructure and compliance requirements.

How does FalkorDB handle schema evolution and updates?

FalkorDB supports granular, incremental updates at the node and edge level, enabling efficient schema evolution and real-time data updates without full reprocessing. This is particularly advantageous for dynamic, high-volume data environments.

Industry & Customer Proof

What industries are represented in FalkorDB's case studies?

FalkorDB case studies feature industries such as healthcare (AdaptX), media and entertainment (XR.Voyage), and artificial intelligence/ethical AI development (Virtuous AI). Each case study demonstrates FalkorDB's impact on scalability, data management, and advanced AI workflows.

Who are some of FalkorDB's customers?

Notable FalkorDB customers include AdaptX (healthcare analytics), XR.Voyage (immersive media), and Virtuous AI (ethical AI development). Their success stories are available on the FalkorDB website.

What pain points does FalkorDB address for its customers?

FalkorDB addresses pain points such as trust and reliability in LLM-based applications, scalability and data management, alert fatigue in cybersecurity, performance limitations of competitors, interactive data analysis, regulatory compliance, and support for agentic AI and chatbots.

What core problems does FalkorDB solve?

FalkorDB solves core problems including trust and reliability in AI workflows, efficient management of large-scale, high-dimensional data, reducing alert fatigue in cybersecurity, outperforming competitors in speed and memory efficiency, enabling interactive data analysis, ensuring regulatory compliance, and supporting agentic AI and chatbots.

VectorRAG vs GraphRAG: Technical Challenges in Enterprise Generative AI

I thought VectorRAG would scale for multi-hop queries. I was wrong.

Highlights

Enterprise AI teams building Retrieval-Augmented Generation (RAG) systems must choose between VectorRAG and GraphRAG architectures. Each approach presents distinct technical challenges that affect accuracy, scalability, and query complexity.

  • VectorRAG relies on embedding-based similarity, offering efficient broad retrieval but struggling with structured multi-hop reasoning.
  • GraphRAG explicitly encodes entity relationships, improving logical consistency but introducing higher upfront schema complexity.


This article analyzes the core engineering challenges of both approaches, covering multi-hop reasoning, update mechanisms, schema trade-offs, query performance, and explainability.


Multi-Hop Reasoning: Structural Constraints and Scalability

VectorRAG Limitations

Vector-based retrieval operates on semantic similarity, which inherently flattens hierarchical information. This makes multi-step reasoning computationally expensive and prone to accuracy degradation.

  • Embeddings lack explicit structural awareness, forcing reliance on approximate similarity.
  • Queries requiring deep logical connections—e.g., tracing ESG controversies affecting a supplier’s financials—suffer from connection loss in high-dimensional space.
  • Performance degrades exponentially beyond two to three logical hops, making deep reasoning unreliable.
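The decay in multi-hop accuracy can be made concrete with a back-of-the-envelope model (an illustration under a stated assumption, not a benchmark): if each retrieval hop independently succeeds with probability p, a k-hop chain survives intact with probability p^k.

```python
# Assumption for illustration: each hop in a reasoning chain succeeds
# independently with probability p, so a k-hop chain survives with p**k.
def chain_accuracy(per_hop_accuracy: float, hops: int) -> float:
    """Probability that every hop in a k-hop reasoning chain succeeds."""
    return per_hop_accuracy ** hops

# Even with 90% per-hop retrieval accuracy, five hops drop below 60%.
for hops in (1, 2, 3, 5):
    print(hops, round(chain_accuracy(0.9, hops), 3))
```

Under this toy model, a system that looks reliable on single-hop queries becomes unreliable by the third hop, matching the degradation described above.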

GraphRAG Strengths and Complexity

Graph-based retrieval enables direct traversal of structured relationships (e.g., Company → Supplier → ESG Violation → Financial Impact).

  • Path-based reasoning preserves logical connections, ensuring accuracy across deep multi-hop queries.
  • Schema complexity grows with depth—more relationships require precise ontological design.
  • Implementation requires significant upfront effort in defining relationship types, constraints, and traversal rules.
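The traversal pattern above can be sketched with a toy in-memory graph (entity names and relationships are invented for illustration; this is not the FalkorDB API, which would express the same chain as a Cypher query):

```python
# Toy acyclic graph following Company -> Supplier -> ESG Violation -> Financial Impact.
from collections import deque

edges = {
    "AcmeCorp": [("SUPPLIED_BY", "GlobalParts")],
    "GlobalParts": [("CITED_FOR", "ESG-Violation-17")],
    "ESG-Violation-17": [("IMPACTS", "Q3-Revenue")],
}

def paths_to(start: str, target: str):
    """Breadth-first search returning every relationship path from start to target."""
    queue = deque([(start, [start])])
    found = []
    while queue:
        node, path = queue.popleft()
        for rel, nxt in edges.get(node, []):
            new_path = path + [rel, nxt]
            if nxt == target:
                found.append(new_path)
            else:
                queue.append((nxt, new_path))
    return found

print(paths_to("AcmeCorp", "Q3-Revenue"))
```

Because each hop follows an explicit edge rather than an approximate similarity match, the logical connection is preserved end to end regardless of depth.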

Hybrid Approaches

Some architectures use vector similarity for candidate selection, followed by graph verification to confirm structured relationships. This balances broad recall with logical precision.


Data Updates: Real-Time Constraints and Computational Overhead

VectorRAG: Re-Embedding Challenges

Updating vectorized data requires full re-embedding, leading to O(n) computational complexity.

  • Partial updates risk embedding drift, where inconsistencies arise between newly added and pre-existing vectors.
  • Continuous data streams (e.g., financial news, legal filings) face high reprocessing costs.
  • Embedding recalibration often requires offline batch processing, delaying real-time updates.

GraphRAG: Granular Incremental Updates

Graph-based architectures support node- and edge-level updates, avoiding full reprocessing.

  • Temporal edge versioning allows tracking of relationship evolution (e.g., contract amendments).
  • Graph synchronization requires custom consistency mechanisms, especially in distributed environments.
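The temporal edge versioning idea can be sketched as follows (the schema with valid_from/valid_to stamps is an assumption for illustration, not FalkorDB's storage model): instead of overwriting a relationship, each amendment closes the current version and opens a new one, so the history stays queryable.

```python
edges = []

def set_edge(src, rel, dst, at):
    """Close the currently open version of (src, rel), then record a new version."""
    for e in edges:
        if e["src"] == src and e["rel"] == rel and e["valid_to"] is None:
            e["valid_to"] = at
    edges.append({"src": src, "rel": rel, "dst": dst, "valid_from": at, "valid_to": None})

def edge_at(src, rel, at):
    """Return the destination of the relationship version valid at time `at`."""
    for e in edges:
        if (e["src"], e["rel"]) == (src, rel) and e["valid_from"] <= at \
                and (e["valid_to"] is None or at < e["valid_to"]):
            return e["dst"]

set_edge("AcmeCorp", "CONTRACTED_WITH", "SupplierA", at=1)
set_edge("AcmeCorp", "CONTRACTED_WITH", "SupplierB", at=5)  # contract amendment
print(edge_at("AcmeCorp", "CONTRACTED_WITH", at=3))
print(edge_at("AcmeCorp", "CONTRACTED_WITH", at=7))
```

Only the affected edge versions are touched per update; no re-embedding or bulk reprocessing is involved.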

Optimization Strategies

  • VectorRAG: Batch reprocessing with change-detection algorithms minimizes unnecessary updates.
  • GraphRAG: Temporal partitioning and incremental graph expansion optimize update efficiency.
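The change-detection strategy for VectorRAG can be sketched with content hashing (a common pattern; the document ids and hashing choice here are illustrative): only documents whose hash differs from the stored one are queued for re-embedding.

```python
import hashlib

def content_hash(text: str) -> str:
    """Stable fingerprint of a document's content."""
    return hashlib.sha256(text.encode()).hexdigest()

def docs_to_reembed(corpus: dict, seen_hashes: dict) -> list:
    """Return ids of documents whose content changed since the stored hash."""
    return [doc_id for doc_id, text in corpus.items()
            if seen_hashes.get(doc_id) != content_hash(text)]

# "a" was amended since last indexing; "b" is unchanged and is skipped.
seen = {"a": content_hash("old filing"), "b": content_hash("stable doc")}
corpus = {"a": "amended filing", "b": "stable doc"}
print(docs_to_reembed(corpus, seen))
```

Hashing is O(n) over document bytes but far cheaper than embedding, so the expensive model call runs only for genuinely modified documents.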

Schema Flexibility vs. Query Precision

VectorRAG: Schema-Agnostic Flexibility

Vector-based systems simplify ingestion but risk semantic dilution—unrelated concepts with similar embeddings may cluster incorrectly.

  • Example: “Java” (island) vs. “Java” (programming language) in a technical search.
  • Disambiguation requires embedding tuning, increasing implementation complexity.

GraphRAG: Schema-Dependent Rigor

Graph models enforce strict entity-relationship rules, improving precision but requiring ongoing ontology alignment.

  • Example: Pharmaceutical R&D models must manually curate Compound → Target Protein → Pathway relationships.
  • Schema evolution becomes a long-term maintenance challenge.

Domain-Specific Schema Considerations

  • Finance: Regulatory compliance and risk modeling require explicit entity relationships.
  • Healthcare: Standardized ontologies (SNOMED CT, RxNorm) ensure precise data linkage.
  • Manufacturing: Temporal tracking of supply chain relationships demands structured updates.

Query Performance and Latency Optimization

Vector Search: Speed vs. Recall Trade-offs

Vector-based retrieval achieves sub-second latency using Approximate Nearest Neighbor (ANN) algorithms (e.g., HNSW).

  • Speed comes at the cost of recall precision—important legal or compliance queries risk missing critical documents.
  • Sliding window chunking helps mitigate noise but fragments logical dependencies.
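Sliding window chunking can be sketched in a few lines (treating whitespace-separated words as tokens, an assumption; production systems chunk on model tokens): overlapping windows reduce the chance that a fact like "not compliant" is split across a chunk boundary, at the cost of duplicated content.

```python
def sliding_chunks(text: str, window: int = 4, stride: int = 2):
    """Split text into overlapping word windows of size `window`, advancing by `stride`."""
    words = text.split()
    return [" ".join(words[start:start + window])
            for start in range(0, max(len(words) - window, 0) + 1, stride)]

for chunk in sliding_chunks("the vendor is not compliant with the updated data regulation"):
    print(chunk)
```

With stride smaller than window, consecutive chunks share words, so a phrase near a boundary appears whole in at least one chunk.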

Graph Query Optimization

Graph queries use index-assisted traversal, reducing latency but struggling with dense subgraph bottlenecks.

  • Multi-hop queries (e.g., tracking supply chain dependencies) can exceed 300ms, limiting real-time feasibility.
  • Query planning and cache-based acceleration improve response times.

Hardware Acceleration

  • VectorRAG: GPU acceleration speeds up similarity calculations.
  • GraphRAG: Specialized graph processors enhance complex traversals.
  • Memory-optimized architectures support billion-node graphs efficiently.

Entity Disambiguation at Scale

VectorRAG: Embedding Disambiguation Challenges

Semantic embeddings struggle with polysemy—words with multiple meanings.

  • Example: “Apple” (fruit) vs. “Apple” (company).
  • Sense-specific embeddings require manual data tagging, increasing complexity.

GraphRAG: Context-Aware Disambiguation

Graphs resolve entities by leveraging surrounding context.

  • Example: A node labeled “Apple” connected to Cupertino and iPhone resolves ambiguity.
  • Indexing large-scale graphs significantly increases memory requirements.
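The context-aware resolution above can be sketched as neighbourhood overlap scoring (entity names and neighbour sets are invented; a real system would pull neighbours from the graph): each candidate entity is scored by how many of its graph neighbours appear in the mention's context.

```python
# Toy candidate entities and their graph neighbours.
neighbours = {
    "Apple (company)": {"Cupertino", "iPhone", "Tim Cook"},
    "Apple (fruit)": {"orchard", "vitamin C", "cider"},
}

def disambiguate(mention_context: set) -> str:
    """Pick the candidate whose graph neighbourhood overlaps the context most."""
    return max(neighbours, key=lambda cand: len(neighbours[cand] & mention_context))

# A mention of "Apple" near "iPhone" and "Cupertino" resolves to the company.
print(disambiguate({"iPhone", "Cupertino", "earnings"}))
```

The same mention with an orchard-flavoured context would resolve to the fruit, with no sense-specific embeddings or manual tagging required.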

Structural Information Preservation

VectorRAG: Information Loss

Vectors flatten syntactic structure, discarding critical elements like negation (“not compliant”).

  • Fine-tuning positional encoding partially restores structure but increases computational overhead.

GraphRAG: Explicit Structural Integrity

Graphs maintain structural integrity through direct relationships (e.g., Regulation → Compliance Check).

  • NLP-based graph extraction has 15-20% error rates, requiring post-processing verification.

Explainability and Compliance

VectorRAG: Opaque Similarity Scores

Vector retrieval lacks transparent logic, complicating compliance audits.

  • Post-hoc explanation models (e.g., LIME) introduce latency and only approximate retrieval reasoning.

GraphRAG: Native Explainability

Graph retrieval produces explicit query paths (e.g., Patient → Prescription → Drug Interaction).

  • Path explosion in cyclic graphs (corporate ownership loops) increases complexity.
  • Query visualization is necessary for human interpretability.
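Rendering a retrieval path for human review can be as simple as the sketch below (node and relationship labels are invented; in practice the path would come from the graph query result):

```python
def render_path(nodes, rels):
    """Format alternating nodes and relationships as an auditable path string."""
    parts = [nodes[0]]
    for rel, node in zip(rels, nodes[1:]):
        parts.append(f"-[{rel}]->")
        parts.append(node)
    return " ".join(parts)

print(render_path(
    ["Patient:4711", "Rx:Warfarin", "Interaction:Bleeding-Risk"],
    ["PRESCRIBED", "INTERACTS_WITH"],
))
```

Unlike a post-hoc similarity explanation, this string is the retrieval logic itself, which is what makes graph retrieval attractive for compliance audits.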

Key Takeaways

What's the computational cost difference between VectorRAG vs GraphRAG for real-time data updates?

VectorRAG requires O(n) re-embedding for modified documents, while GraphRAG supports incremental node/edge updates but demands custom temporal versioning for relationship changes.

How do VectorRAG vs GraphRAG architectures handle entity disambiguation differently?

VectorRAG struggles with polysemy, requiring sense-specific embeddings, while GraphRAG leverages graph context (connected nodes) for disambiguation but doubles memory overhead in billion-node graphs.

Which industries benefit most from GraphRAG over VectorRAG implementations?

Financial services (fraud detection), healthcare (treatment pathways), and supply chain (multi-tier analysis) benefit most from GraphRAG's relationship traversal, despite its higher implementation complexity.