Frequently Asked Questions

Product Information

What is FalkorDB and what does it do?

FalkorDB is a high-performance graph database designed to manage complex relationships and enable advanced AI applications. It is purpose-built for development teams working with interconnected data in real-time or interactive environments. FalkorDB supports use cases such as Text2SQL, Security Graphs, GraphRAG, agentic AI, chatbots, and fraud detection. Learn more.

What are the main products and services offered by FalkorDB?

FalkorDB offers a graph database platform with features for real-time data analysis, multi-tenancy, advanced AI integration, and regulatory compliance. Key services include cloud and on-prem deployment, comprehensive documentation, community support, and solution architect guidance. See full details.

What are the primary use cases for FalkorDB?

FalkorDB is used for Text2SQL (natural language to SQL queries), building security graphs for CNAPP, CSPM & CIEM, advanced graph-based retrieval (GraphRAG), agentic AI and chatbots, fraud detection, and high-performance graph storage for complex relationships. Explore use cases.

Who is the target audience for FalkorDB?

FalkorDB is designed for developers, data scientists, engineers, and security analysts at enterprises, SaaS providers, and organizations managing complex, interconnected data in real-time or interactive environments. Learn more.

How does FalkorDB address the needs of AI and LLM-based applications?

FalkorDB is optimized for AI use cases such as GraphRAG and agent memory, enabling intelligent agents and chatbots with real-time adaptability. It combines graph traversal with vector search for personalized user experiences and supports advanced AI workflows with low-latency, high-accuracy data retrieval. More info.

Features & Capabilities

What are the key features of FalkorDB?

Key features include ultra-low latency (up to 496x faster than Neo4j), 6x better memory efficiency, support for 10,000+ multi-graphs (multi-tenancy), open-source licensing, linear scalability, advanced AI integration (GraphRAG, agent memory), and flexible cloud/on-prem deployment. See all features.

Does FalkorDB support multi-tenancy?

Yes, FalkorDB supports multi-tenancy in all plans, enabling management of over 10,000 multi-graphs. This is especially valuable for SaaS providers and organizations with diverse user bases. Learn more.

How does FalkorDB enable real-time and interactive data analysis?

FalkorDB delivers ultra-low latency and high throughput, allowing users to perform fast, interactive analysis of complex data through dashboards and custom views. This enhances user experience and supports real-time decision-making. Details here.

What integrations does FalkorDB offer?

FalkorDB integrates with frameworks such as Graphiti (for AI agent memory), g.v() (for knowledge graph visualization), Cognee (for AI agent memory), LangChain (for LLM integration), and LlamaIndex (for advanced knowledge graph applications). See integration details.

Does FalkorDB provide an API and technical documentation?

Yes, FalkorDB provides a comprehensive API and technical documentation, including setup guides and advanced configuration references. Access the documentation at docs.falkordb.com and the latest releases at GitHub.

How does FalkorDB combine vector search and graph traversal?

FalkorDB allows users to build query contexts using a combination of vector search and graph traversals. For example, a user question can initiate a vector search to find relevant nodes, then continue with graph traversal to reach important, connected data fragments, resulting in richer and more relevant LLM context. Read the blog.

Performance & Scalability

How does FalkorDB perform compared to other graph databases?

FalkorDB offers up to 496x lower latency and 6x better memory efficiency compared to competitors like Neo4j. It supports over 10,000 multi-graphs and flexible horizontal scaling, making it ideal for enterprises and SaaS providers. See benchmarks.

What makes FalkorDB suitable for large-scale, high-dimensional data?

FalkorDB's memory efficiency, linear scalability, and support for 10,000+ multi-graphs enable it to handle large-scale, high-dimensional data efficiently. This is particularly beneficial for organizations with complex, interconnected datasets. Learn more.

How quickly can FalkorDB be implemented?

FalkorDB is built for rapid deployment, allowing teams to go from concept to enterprise-grade solutions in weeks, not months. Users can sign up for FalkorDB Cloud, try it for free, or run it locally via Docker. Get started.
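For local experimentation, the Docker route can be as simple as running the public image (a sketch; check the documentation for the current image tag and port mapping for your release):

```shell
# Run FalkorDB locally, exposing the default Redis-protocol port
docker run -p 6379:6379 -it --rm falkordb/falkordb:latest
```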

What customer feedback has FalkorDB received regarding performance and ease of use?

Customers like AdaptX and 2Arrows have praised FalkorDB for its rapid access to complex data and ease of use. AdaptX highlighted its ability to provide clinicians with fast access to SPC charts, while 2Arrows' CTO called it a 'game-changer' for performance and non-traversal queries. Read case studies.

Pricing & Plans

What pricing plans does FalkorDB offer?

FalkorDB offers four main plans: FREE (for MVPs, with community support), STARTUP (priced per 1GB/month, includes TLS and automated backups), PRO (priced per 8GB/month, includes cluster deployment and high availability), and ENTERPRISE (custom pricing with VPC, custom backups, and 24/7 support). See pricing.

What features are included in the FREE plan?

The FREE plan is designed for building a powerful MVP and includes community support. It is ideal for developers and small teams starting with FalkorDB. More info.

What features are included in the STARTUP plan?

The STARTUP plan is priced per 1GB/month and includes TLS encryption and automated backups, making it suitable for growing teams and early-stage companies. See details.

What features are included in the PRO plan?

The PRO plan is priced per 8GB/month and includes advanced features such as cluster deployment and high availability, targeting organizations with more demanding requirements. See details.

What features are included in the ENTERPRISE plan?

The ENTERPRISE plan offers tailored pricing and includes enterprise-grade features such as VPC, custom backups, and 24/7 support. It is designed for large organizations with complex needs. Contact sales.

Competition & Comparison

How does FalkorDB compare to Neo4j?

FalkorDB offers up to 496x lower latency, 6x better memory efficiency, and flexible horizontal scaling compared to Neo4j. Unlike Neo4j, FalkorDB includes multi-tenancy in all plans and is open source. See detailed comparison.

How does FalkorDB compare to AWS Neptune?

FalkorDB is open source, supports multi-tenancy, and provides better latency performance than AWS Neptune. It also supports the Cypher query language and offers more efficient vector search capabilities. See comparison.

How does FalkorDB compare to TigerGraph?

FalkorDB delivers faster latency, more efficient memory usage, and flexible horizontal scaling compared to TigerGraph. It is rated as 'fast' versus TigerGraph's 'adequate' latency and is open source. Learn more.

How does FalkorDB compare to ArangoDB?

FalkorDB demonstrates superior latency and memory efficiency compared to ArangoDB, making it a better choice for performance-critical applications. It also supports flexible horizontal scaling and multi-tenancy. See details.

What are the advantages of using a knowledge graph over a vector database for RAG?

Knowledge graphs, like those built with FalkorDB, capture entities and relationships, enabling richer context extraction for LLMs. In practical demos, graph databases answered more complex questions than vector databases, which are limited to semantically similar results. FalkorDB also supports combining vector search and graph traversal for optimal results. Read the blog.

Use Cases & Benefits

What problems does FalkorDB solve for its customers?

FalkorDB addresses trust and reliability in LLM-based applications, scalability and data management, alert fatigue in cybersecurity, performance limitations of competitors, interactive data analysis, regulatory compliance, and support for agentic AI and chatbots. See details.

What business impact can customers expect from using FalkorDB?

Customers can expect improved scalability, enhanced trust and reliability, reduced alert fatigue, faster time-to-market, enhanced user experience, regulatory compliance, and support for advanced AI applications. These outcomes empower businesses to unlock the full potential of their data. Learn more.

Which industries are represented in FalkorDB's case studies?

Industries include healthcare (AdaptX), media and entertainment (XR.Voyage), and artificial intelligence/ethical AI development (Virtuous AI). See case studies.

Can you share specific customer success stories using FalkorDB?

Yes. AdaptX used FalkorDB to analyze clinical data, XR.Voyage overcame scalability challenges in immersive media, and Virtuous AI built a high-performance, multi-modal data store for ethical AI. Read their stories.

Who are some of FalkorDB's customers?

Notable customers include AdaptX, XR.Voyage, and Virtuous AI. Their case studies are available on the FalkorDB website. See customer stories.

Security & Compliance

What security and compliance certifications does FalkorDB have?

FalkorDB is SOC 2 Type II compliant, meeting rigorous standards for security, availability, processing integrity, confidentiality, and privacy. Learn more.

How does FalkorDB ensure data security and privacy?

FalkorDB protects against unauthorized access, ensures system availability, delivers accurate data processing, safeguards sensitive information, and complies with privacy regulations as part of its SOC 2 Type II certification. More info.

Support & Implementation

How easy is it to get started with FalkorDB?

Getting started is straightforward: sign up for FalkorDB Cloud, try a free instance, run locally via Docker, or schedule a demo. Comprehensive documentation and community support are available. Start here.

What support and training resources are available for FalkorDB?

FalkorDB offers comprehensive documentation, community support via Discord and GitHub, solution architects for tailored advice, and practical guides and tutorials on its blog. See documentation.

Where can I find the latest updates and release notes for FalkorDB?

The latest updates and release notes are available on the FalkorDB GitHub Releases page.

How can I contact FalkorDB for sales or technical questions?

You can contact FalkorDB via the Contact Us page for sales, technical questions, or to discuss integrations.

RAG battle: vector database vs knowledge graph

LLMs today

The potential of using LLMs for knowledge extraction is nothing short of amazing. In the last couple of months we've seen a rush toward integrating large language models into a variety of tasks: data summarization, Q&A chatbots, and entity extraction are just a few examples of what people are doing with these models.

With this new technology, new disciplines and challenges emerge.

Current approach

Vector databases seem to have become the default option for indexing, storing, and retrieving data that is later presented as context, along with a question or task, to the LLM.
The flow is quite straightforward: consider a list of documents containing data we would like to query (these could be Wikipedia pages, proprietary corporate knowledge, or a list of recipes). The data is usually chunked into smaller pieces, an embedding is created for each piece, and finally the chunks, along with their embeddings, are stored in a vector database.

When it's time to ask a question, e.g. "suggest three Italian recipes which don't contain eggplant for a dinner party of four", the question itself gets embedded into a vector and the vector database is asked to provide K (let's say 20) semantically similar vectors (recipes in our case). It is these results from the DB which form the context presented to the LLM along with the original question, in the hope that the context is rich and accurate enough for the LLM to provide a suitable answer.
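The flow above can be sketched with a toy in-memory store. This is a deliberately simplified sketch: `embed` is a bag-of-words stand-in for a real embedding model, and a production setup would use a proper vector database.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index" each chunk by storing it alongside its embedding
chunks = [
    "Pasta al pomodoro is a simple Italian recipe with tomatoes and basil",
    "Moussaka is a Greek dish layered with eggplant and minced meat",
    "Risotto alla milanese is an Italian rice dish flavoured with saffron",
]
store = [(chunk, embed(chunk)) for chunk in chunks]

# At question time: embed the question, return the K closest chunks
def top_k(question, k=2):
    q = embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

context = top_k("suggest an Italian recipe without eggplant")
```

Note that a purely similarity-based ranking still scores the eggplant dish on the word "eggplant" even though the question wants it excluded; with a larger K it would enter the context anyway.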

One major flaw with this approach is that it is too limited: the backing DB will only provide results which are semantically "close" to the user's question, and as such the generated context can lack vital information the LLM needs to provide a decent answer.

 

 

Alternative

As an alternative, one can use a knowledge graph not only to store and query the original documents, but also to capture the different entities and relations embedded within one's data.

To utilize a graph DB as a knowledge base for LLMs, we start out by constructing a knowledge graph from our documents. This process includes identifying the different entities and the relationships among them, e.g. (Napoleon Bonaparte) – [IMPRISONED] -> (island of Saint Helena).

* Surprisingly, LLMs can be used for this extraction process as well.

Once the graph is constructed, we use it for context construction. A question presented by a user is translated into a graph query; at this point we are no longer limited to a set of K semantically similar vectors, but can utilize all of the connections stored within our graph to generate a much richer context. It is this context, along with the original question, that is presented to the LLM to get the final answer.
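As a minimal illustration of the idea, the same kind of (subject, relation, object) triples can be stored and traversed directly. This is a toy in-memory graph, not the FalkorDB API, and the triples beyond the Napoleon example from the text are invented for illustration:

```python
from collections import defaultdict

# A knowledge graph as (subject, relation, object) triples; the first is
# the extraction example from the text, the rest are made up for the demo
triples = [
    ("Napoleon Bonaparte", "IMPRISONED", "island of Saint Helena"),
    ("Napoleon Bonaparte", "BORN_IN", "Corsica"),
    ("Corsica", "PART_OF", "France"),
]

# Adjacency index: entity -> outgoing (relation, neighbour) pairs
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def context_for(entity, depth=2):
    """Collect facts reachable from an entity within `depth` hops,
    the kind of multi-hop context a graph query can return."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in graph[node]:
                facts.append(f"({node}) -[{rel}]-> ({obj})")
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

facts = context_for("Napoleon Bonaparte")
```

A two-hop traversal from "Napoleon Bonaparte" reaches the France fact via Corsica, a connection a purely semantic lookup of the entity alone would likely miss.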

 

[Figure: Context extraction. Querying the graph for context.]

[Figure: Graph generation. Entity and relation extraction from raw text.]

Demo

To put all of the above into practice, I've constructed a demo using LangChain that queries music-related Wikipedia pages, comparing a vector store setup against a knowledge graph (FalkorDB).

from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import FalkorDBGraph

def query_graph(graph_id, query):
    # Connect to a local FalkorDB instance and load the graph schema
    graph = FalkorDBGraph(graph_id, host="localhost", port=6380)
    graph.refresh_schema()

    # One LLM generates the Cypher query, another answers from its results
    chain = GraphCypherQAChain.from_llm(
        cypher_llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
        qa_llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
        graph=graph,
        verbose=True,
    )
    return chain.run(query)

query_graph("music", "Which musician had the most collaborations?")
query_graph("music", "Which two musicians are married?")
query_graph("music", "Which country produced the most talent?")

Q&A

Here are the questions and answers I got from each setup:

Vector –  Mark Hudson did the most collaborations.
Graph – Mark Hudson did the most collaborations with a total of 8.

Vector – There is no information provided about any musicians being married to each other.
Graph – Bob Thiele and Teresa Brewer are married musicians.

Vector – Usher won multiple awards, including Grammy Awards and Soul Train Music Awards.
Graph – Usher won a Grammy Award.

Vector – The document does not provide information about which country produced the most talent in country music.
Graph – The country that produced the most talent is the United States of America.

Vector – There is no indirect connection between Kylie Minogue and Simon Franglen.
Graph – Yes, there is an indirect connection between Kylie Minogue and Simon Franglen. The artists on that path are Whitney Houston, Barbra Streisand, Graham Stack, and Rod Stewart.

Conclusions

As can be seen, the vector database setup managed to answer only 2 of the 5 questions. This is quite expected: the questions asked are not semantically close to their answers, and as such we can't expect the documents retrieved from the vector DB to contain the information necessary to answer them.

On the other hand, the graph database setup did manage to answer all 5 questions. The success of this approach is primarily attributable to the auto-generated graph query used to build a much more relevant and richer LLM context.

Although in these examples we've seen the graph doing quite well, it is my belief that a more robust solution combines both worlds. This is why FalkorDB has introduced a vector index as part of its indexing suite: one can now start building a query context using a combination of vector search and graph traversals. Consider a user question which kicks off a vector search that ends with K nodes, from which graph traversal continues, reaching important fragments of data scattered across the graph.
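A minimal, self-contained sketch of that hybrid flow follows. Toy 2-d vectors and an in-memory edge list stand in for FalkorDB's vector index and graph traversal, and the collaboration edges are invented for illustration:

```python
import math

# Toy 2-d "embeddings" for graph nodes; a real setup would use a
# vector index over learned embeddings
node_vectors = {
    "Kylie Minogue":   (0.9, 0.1),
    "Whitney Houston": (0.8, 0.3),
    "Simon Franglen":  (0.1, 0.9),
}

# Graph edges (collaborations, invented for this sketch)
edges = {
    "Kylie Minogue": ["Whitney Houston"],
    "Whitney Houston": ["Simon Franglen"],
    "Simon Franglen": [],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def hybrid_context(query_vec, k=1, hops=2):
    # Step 1: vector search picks the K most similar seed nodes
    ranked = sorted(node_vectors, key=lambda n: cosine(query_vec, node_vectors[n]), reverse=True)
    seeds = ranked[:k]
    # Step 2: graph traversal expands from the seeds to connected nodes
    context, frontier = set(seeds), list(seeds)
    for _ in range(hops):
        frontier = [nbr for node in frontier for nbr in edges[node]]
        context.update(frontier)
    return context

ctx = hybrid_context((1.0, 0.0))
```

Here the vector step seeds the search with the single closest node, and two hops of traversal pull in connected artists the similarity search alone would not have surfaced.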