Frequently Asked Questions

Migration from Relational Database to Graph Database

Why should I migrate from a relational database to a graph database?

Graph databases excel at handling complex, interconnected data, offering superior scalability and query performance for AI/ML applications. They enable real-time analytics and relationship-based queries, which are often challenging for traditional relational databases. Migrating allows you to leverage advanced graph algorithms and vector indexing for dynamic, evolving datasets. Source

What are the main steps in migrating from a relational to a graph database?

The migration process includes analyzing your relational schema, designing the graph model, extracting and transforming data, loading it into the graph database, and optimizing the graph structure. Each step ensures your data is accurately represented and ready for advanced queries. Source

How do relational databases and graph databases differ in modeling data?

Relational databases organize data into tables with rows and columns, using primary and foreign keys for relationships. Graph databases represent entities as nodes and relationships as edges, allowing direct traversal and real-time exploration of connections without costly joins. Source

What is the recommended approach for transforming relational data to graph data?

Start by mapping tables to nodes and join keys to edges. Extract data in CSV or JSON format, transform it into Cypher CREATE queries, and use import tools to load it into FalkorDB. Validate and optimize the graph by indexing frequently queried nodes and refining traversal patterns. Source
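As a minimal sketch of that mapping (the `users` table and its columns are hypothetical examples), a single row can be formatted into a Cypher CREATE query like this:

```python
# Sketch: turn one relational row into a Cypher CREATE statement.
# The table name ("users") and its columns are hypothetical examples.

def row_to_create(table, columns, row):
    """Format a table row as a Cypher CREATE query for a labeled node."""
    props = ", ".join(f"{col}: '{val}'" for col, val in zip(columns, row))
    return f"CREATE (:{table} {{{props}}})"

query = row_to_create("users", ["id", "name"], [1, "Alice"])
print(query)  # CREATE (:users {id: '1', name: 'Alice'})
```

A real transformation would also handle quoting and type conversion, but the shape of the query stays the same.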

How can I verify that my migration to FalkorDB was successful?

After loading your data, use Cypher queries to count nodes and relationships, and visualize the graph using the FalkorDB Browser at http://localhost:3000. Run sample queries to ensure the graph behaves as expected. Source

What tools and libraries are required to migrate to FalkorDB?

You can install FalkorDB using Docker and use the Python client for data transformation and insertion. The FalkorDB Browser helps visualize data, and import tools like the bulk loader streamline large migrations. Source

How does FalkorDB support clustering and scalability during migration?

FalkorDB's cluster architecture enables horizontal scaling, live replication, and fault tolerance. You can deploy clusters using Docker or opt for FalkorDB Cloud for streamlined scaling and high availability. Cluster Documentation

What are some advanced migration considerations for complex data?

For complex data, adapt transformation scripts to represent properties as separate nodes or edges. For example, JSON properties can become distinct nodes linked to original entities, maximizing graph flexibility and query performance. Source

What is the role of Cypher query language in FalkorDB migration?

Cypher is used to create, query, and manipulate graph data in FalkorDB. During migration, transform relational data into Cypher CREATE queries for seamless insertion and validation. Cypher Documentation

How can I optimize my graph model after migration?

Refine your graph by indexing frequently queried nodes or relationships and optimizing traversal patterns. This enhances performance for AI/ML workflows and real-time analytics. Indexing Documentation

Features & Capabilities

What are the key features of FalkorDB?

FalkorDB offers ultra-low latency, native support for advanced graph algorithms, vector indexing, multi-tenancy (10K+ multi-graphs), open-source licensing, linear scalability, and optimized AI use cases like GraphRAG and agent memory. Source

Does FalkorDB support AI and ML applications?

Yes, FalkorDB is purpose-built for AI/ML workflows, including GraphRAG, agentic AI, chatbots, recommendation systems, and semantic search. Its advanced graph algorithms and vector indexing make it ideal for these use cases. Source

What integrations are available with FalkorDB?

FalkorDB integrates with frameworks like Graphiti (by ZEP), g.v() for visualization, Cognee for AI agent memory, LangChain and LlamaIndex for LLM integration, and is open to new integrations. Source

Does FalkorDB provide an API?

Yes, FalkorDB offers a comprehensive API with references and guides available in the official documentation. This supports developers, data scientists, and engineers in integrating FalkorDB into their workflows. API Documentation

Where can I find technical documentation for FalkorDB?

Technical documentation, including guides and API references, is available at docs.falkordb.com and the GitHub releases page for updates. GitHub Releases

What is the performance advantage of FalkorDB?

FalkorDB delivers up to 496x lower latency and 6x better memory efficiency than competitors like Neo4j. It supports real-time data analysis, interactive dashboards, and flexible horizontal scaling for large datasets. Benchmarks

Does FalkorDB support multi-tenancy?

Yes, FalkorDB supports multi-tenancy in all plans, enabling management of over 10,000 multi-graphs. This is essential for SaaS providers and enterprises with diverse user bases. Source

Is FalkorDB open source?

Yes, FalkorDB is open source, encouraging community collaboration and transparency. This differentiates it from proprietary solutions like AWS Neptune. Source

What security and compliance certifications does FalkorDB have?

FalkorDB is SOC 2 Type II compliant, meeting rigorous standards for security, availability, processing integrity, confidentiality, and privacy. Source

Pricing & Plans

What pricing plans does FalkorDB offer?

FalkorDB offers four plans: FREE (for MVPs with community support), STARTUP (from /1GB/month, includes TLS and automated backups), PRO (from 0/8GB/month, includes cluster deployment and high availability), and ENTERPRISE (custom pricing with VPC, custom backups, and 24/7 support). Source

What features are included in the FalkorDB PRO plan?

The PRO plan starts at 0/8GB/month and includes advanced features such as cluster deployment, high availability, and enhanced support for enterprise-grade solutions. Source

Is there a free trial or demo available for FalkorDB?

Yes, you can try FalkorDB for free by launching a cloud instance or running locally with Docker. Personalized demos are also available by scheduling with the FalkorDB team. Source

Competition & Comparison

How does FalkorDB compare to Neo4j?

FalkorDB offers up to 496x lower latency, 6x better memory efficiency, flexible horizontal scaling, and multi-tenancy in all plans. Neo4j uses an on-disk storage model and offers multi-tenancy only in premium plans. Comparison

How does FalkorDB compare to AWS Neptune?

FalkorDB is open source, supports multi-tenancy, delivers better latency performance, and offers highly efficient vector search. AWS Neptune is proprietary, has limited vector search, and does not support multi-tenancy. Comparison

How does FalkorDB compare to TigerGraph?

FalkorDB provides faster latency, better memory efficiency, and flexible horizontal scaling. TigerGraph offers multi-tenancy and vector search but has limited horizontal scaling and moderate memory efficiency. Source

How does FalkorDB compare to ArangoDB?

FalkorDB demonstrates superior latency and memory efficiency, flexible horizontal scaling, and robust multi-tenancy. ArangoDB offers multi-tenancy and vector search but has limited scaling and moderate memory efficiency. Source

Use Cases & Benefits

What are the primary use cases for FalkorDB?

FalkorDB is used for Text2SQL, security graphs (CNAPP, CSPM, CIEM), GraphRAG, agentic AI & chatbots, fraud detection, and high-performance graph storage for complex relationships. Source

Who can benefit from using FalkorDB?

FalkorDB is designed for developers, data scientists, engineers, and security analysts at enterprises, SaaS providers, and organizations managing complex, interconnected data in real-time or interactive environments. Source

What business impact can customers expect from FalkorDB?

Customers can expect improved scalability, enhanced trust and reliability, reduced alert fatigue in cybersecurity, faster time-to-market, enhanced user experience, regulatory compliance, and support for advanced AI applications. Source

What pain points does FalkorDB address?

FalkorDB addresses trust and reliability in LLM-based applications, scalability and data management, alert fatigue in cybersecurity, performance limitations of competitors, interactive data analysis, regulatory compliance, and agentic AI challenges. Source

How easy is it to implement FalkorDB?

FalkorDB is built for rapid deployment, enabling teams to go from concept to enterprise-grade solutions in weeks. Getting started is straightforward with cloud sign-up, Docker guides, demos, and comprehensive documentation. Source

What feedback have customers given about FalkorDB's ease of use?

Customers like AdaptX and 2Arrows have praised FalkorDB for its rapid access to insights, user-friendly dashboards, and superior performance for non-traversal queries. These testimonials highlight its intuitive design and frictionless user experience. AdaptX Case Study, 2Arrows Feedback

Can you share specific case studies of FalkorDB customers?

Yes, AdaptX uses FalkorDB for clinical data analysis, XR.Voyage for immersive experience scalability, and Virtuous AI for ethical AI development. Read their stories in the case studies section. Case Studies

What industries are represented in FalkorDB case studies?

Industries include healthcare (AdaptX), media and entertainment (XR.Voyage), and artificial intelligence/ethical AI development (Virtuous AI). Case Studies

Technical Requirements & Support

What are the technical requirements for running FalkorDB?

FalkorDB can be deployed in the cloud or on-premises, with Docker support for local installations. Cluster setups are recommended for large or growing datasets to enable horizontal scaling and high availability. Cluster Documentation

What support and training options are available for FalkorDB?

Support includes comprehensive documentation, community forums on Discord and GitHub, solution architects for tailored advice, and onboarding via free trials and demos. Documentation, Discord, GitHub Discussions


How to Migrate from Relational Database to Graph Database

Graph databases have become a cornerstone of modern AI and ML applications, powering breakthroughs in areas like Retrieval-Augmented Generation (GraphRAG), recommendation systems, and semantic search.

Unlike traditional relational databases, graph databases are designed to model complex relationships and interconnected data with unparalleled efficiency. They excel in scenarios where the rigid schemas of relational databases pose limitations, offering the flexibility and explainability required to handle dynamic and evolving data sets effectively. Modern graph databases, such as FalkorDB, are equipped with native support for advanced graph algorithms and vector indexing, making them a natural fit for developing agentic AI systems and multi-hop reasoning applications.

If your data currently lives in a relational database and you’d like to build AI applications that rely on dynamic, interconnected data, now’s the time to migrate to a graph database.

This guide will walk you through the migration process step-by-step, showing you how to transition seamlessly from any relational database to FalkorDB, an ultra-low latency graph database designed for building AI applications.

By the end, you’ll be equipped to harness the full power of graph technology and AI for even the most complex and demanding use cases.

Understanding the Migration Approach

Before we dive in, let’s quickly compare how relational databases and graph databases model data. Relational databases organize information into rigid, predefined tables with rows and columns, using primary and foreign keys to define relationships. While this works well for structured, tabular data, it starts to falter when handling deeply interconnected datasets, as joins can quickly become cumbersome and computationally expensive.

Fig. 1. Simple data transformation of relational data to graph data

Graph databases, on the other hand, take a completely different approach. They store data as knowledge graphs, naturally representing entities as nodes and relationships as edges within a graph structure. This allows you to traverse and query complex connections directly, eliminating the need for costly joins and enabling real-time exploration of relationships.

Migrating from a relational database to a graph database involves more than just copying data—it’s a shift in mindset. You’ll need to transition from a table-centric model to one that revolves around entities (nodes) and their relationships (edges). The process includes analyzing your relational schema, mapping tables and join keys to nodes and edges, and transforming your data into a graph-compatible format ready for insertion.

High-Level Steps for Migration

  1. Analyze the Relational Schema: Begin by understanding your current relational database schema. Identify key tables that represent entities and the relationships between them using primary and foreign keys.
  2. Design the Graph Model: Map entities to nodes and relationships to edges. Determine which attributes of your tables should be properties of nodes or edges to best represent your data in the graph.
  3. Extract Data from the Relational Database: Export the relevant tables and relationships from your relational database in a format such as CSV or JSON.
  4. Transform Data for the Graph Database: Transform the extracted data to match your graph model. Ensure that nodes and relationships are formatted into Cypher CREATE queries, so they can be inserted into the graph database easily.
  5. Load Data into the Graph Database: Use a script or import tools provided by your graph database to insert the transformed data into the database.
  6. Validate the Graph: Verify that the data has been correctly imported. Check node and relationship counts, and run sample queries to ensure that the graph behaves as expected.
  7. Optimize the Graph Model: Refine your graph by indexing frequently queried nodes or relationships and optimizing traversal patterns to enhance performance.
  8. Update the Applications: Modify your application’s data access layer to query the graph database using Cypher query language and ensure all workflows are functioning correctly.
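The mapping in steps 1 and 2 can be sketched as a small transformation over schema metadata; the `customers` and `orders` tables below are hypothetical examples:

```python
# Sketch: derive a graph model from relational metadata.
# Tables become node labels; foreign keys become edge types.
def design_graph_model(schema):
    nodes = list(schema)
    edges = [
        (table, fk["column"], fk["ref_table"])
        for table, details in schema.items()
        for fk in details.get("foreign_keys", [])
    ]
    return nodes, edges

# Hypothetical two-table schema: orders reference customers.
schema = {
    "customers": {"foreign_keys": []},
    "orders": {"foreign_keys": [{"column": "customer_id", "ref_table": "customers"}]},
}
nodes, edges = design_graph_model(schema)
print(nodes)  # ['customers', 'orders']
print(edges)  # [('orders', 'customer_id', 'customers')]
```

The actual migration script below follows this same idea, with full column and key metadata extracted from SQLite.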

Steps to Migrate from a Relational Database to a Graph Database

Now, let’s start with the actual implementation. For simplicity, we’ll use a SQLite database to demonstrate the process, but the same approach can be applied to any other database technology.

Step 1: Install FalkorDB and the Required Libraries

First, let’s install and start FalkorDB using Docker.

docker run -p 6379:6379 -p 3000:3000 -it -v ./data:/data falkordb/falkordb

You can visit http://localhost:3000 to launch the FalkorDB Browser and visualize the inserted data later. 

Additionally, you’ll need to install the Python client for FalkorDB. Start by setting up a virtual environment, then install the client using pip.

pip install falkordb

Step 2: Export the Database Schema

Next, we’ll write a function to fetch the table schema from the database.

import sqlite3

def extract_schema(db_path):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Fetch all table definitions
    cursor.execute("SELECT name, sql FROM sqlite_master WHERE type='table';")
    tables = cursor.fetchall()

    schema = {}
    for table_name, table_sql in tables:
        schema[table_name] = {"columns": [], "primary_key": None, "foreign_keys": []}

        # Get column details
        cursor.execute(f"PRAGMA table_info('{table_name}');")
        columns = cursor.fetchall()
        for column in columns:
            column_name = column[1]
            column_type = column[2]
            is_pk = column[5] == 1
            schema[table_name]["columns"].append({"name": column_name, "type": column_type})
            if is_pk:
                schema[table_name]["primary_key"] = column_name

        # Get foreign key details
        cursor.execute(f"PRAGMA foreign_key_list('{table_name}');")
        foreign_keys = cursor.fetchall()
        for fk in foreign_keys:
            schema[table_name]["foreign_keys"].append({
                "column": fk[3],         # Column in the current table
                "ref_table": fk[2],      # Referenced table
                "ref_column": fk[4]      # Referenced column
            })

    conn.close()
    return schema

# Extract the schema
db_path = "your_sqlite_db.db"
schema = extract_schema(db_path)

The extract_schema function starts by connecting to the specified SQLite database and retrieving the names and SQL definitions of all tables. For each table, it collects column details (name, type, and primary key) and organizes them into a dictionary. Additionally, it extracts foreign key relationships, including the source column, referenced table, and referenced column, and adds this information to the schema.
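The two PRAGMA statements the function relies on can be tried against a throwaway in-memory database; the `directors` and `movies` tables below are hypothetical examples:

```python
import sqlite3

# Sketch: the PRAGMA statements extract_schema relies on, run against a
# disposable in-memory database containing one foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE directors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE movies (
        id INTEGER PRIMARY KEY,
        title TEXT,
        director_id INTEGER REFERENCES directors(id)
    );
""")
cur = conn.cursor()

# table_info returns (cid, name, type, notnull, default, pk) per column
cur.execute("PRAGMA table_info('movies');")
columns = [row[1] for row in cur.fetchall()]
print(columns)  # ['id', 'title', 'director_id']

# foreign_key_list returns (id, seq, table, from, to, ...) per key
cur.execute("PRAGMA foreign_key_list('movies');")
fk = cur.fetchone()
print(fk[2], fk[3], fk[4])  # directors director_id id
conn.close()
```

This confirms the index positions (`fk[2]`, `fk[3]`, `fk[4]`) used in the schema dictionary above.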

We will use the extracted schema in the next step to transform and insert data into FalkorDB.

Step 3: Transform and Import the Data into FalkorDB

Next, we will write a function to transform the data based on the schema we have extracted, convert it into a Cypher query, and insert it into FalkorDB.

import sqlite3

from falkordb import FalkorDB

def populate_falkordb(schema, db_path):
    # Connect to FalkorDB
    db = FalkorDB(host='localhost', port=6379)
    graph = db.select_graph("MoviesGraph")
    try:
        graph.delete()  # Clear the graph if it already exists
    except Exception:
        pass  # The graph did not exist yet

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Add nodes and relationships to FalkorDB
    for table_name, details in schema.items():
        # Add nodes
        cursor.execute(f"SELECT * FROM {table_name};")
        for row in cursor.fetchall():
            properties = ", ".join(f"{col['name']}: '{row[idx]}'"
                                   for idx, col in enumerate(details["columns"]))
            graph.query(f"CREATE (:{table_name} {{{properties}}})")

        # Add relationships: the foreign key column in this table holds the
        # value that matches the referenced column in the referenced table
        for fk in details["foreign_keys"]:
            cursor.execute(f"SELECT {fk['column']} FROM {table_name};")
            for (fk_value,) in cursor.fetchall():
                graph.query(f"""
                    MATCH (a:{table_name} {{ {fk['column']}: '{fk_value}' }}),
                          (b:{fk['ref_table']} {{ {fk['ref_column']}: '{fk_value}' }})
                    CREATE (a)-[:{fk['column']}]->(b)
                """)

    conn.close()
    print("Data populated into FalkorDB successfully.")

populate_falkordb(schema, db_path)

The populate_falkordb function transfers data from an SQLite database into a FalkorDB graph. It uses the provided schema to create nodes for each table and edges based on foreign key relationships. The function fetches data from the SQLite database, formats it into Cypher queries, and populates it into FalkorDB. 
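Note that the function interpolates raw values into Cypher strings, so values containing single quotes would break the generated query. A minimal safeguard is to escape quotes before formatting; `cypher_quote` below is a hypothetical helper, not part of the FalkorDB client:

```python
# Sketch: escape single quotes so interpolated values don't break Cypher.
# cypher_quote is a hypothetical helper, not part of the FalkorDB client.
def cypher_quote(value):
    return "'" + str(value).replace("'", "\\'") + "'"

props = ", ".join(
    f"{col}: {cypher_quote(val)}"
    for col, val in [("title", "L'Avventura"), ("year", 1960)]
)
print(f"CREATE (:movies {{{props}}})")
```

For production migrations, parameterized queries (passing a params dictionary to `graph.query`) are a safer choice than string formatting.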

Once you execute this function, your graph database should be populated with data from your relational database. 

Step 4: Verify the Migration

You can now head to http://localhost:3000 to visualize the migrated data. To see the imported graph, run the following Cypher query:

MATCH (n)-[r]->(m)
RETURN n, r, m

To verify only the nodes, use the following query:

MATCH (n)
RETURN n

To see the relationships, run the following query:

MATCH ()-[r]->()
RETURN r

You can also count all the nodes to confirm that the total matches the number of rows across the tables in your SQLite database:

MATCH (n)
RETURN COUNT(n)

Additional Considerations

If your data is extensive or expected to grow rapidly, consider deploying a FalkorDB cluster. A cluster setup enables horizontal scaling by distributing data across multiple nodes, improving query performance and ensuring high availability. FalkorDB’s cluster architecture supports live replication and fault tolerance. You can set up a cluster using Docker and fine-tune it based on your workload. Alternatively, you can opt for FalkorDB Cloud for a more streamlined solution.

Additionally, you may need to adapt the populate_falkordb function to handle more complex data transformations. For instance, if certain properties in your relational database are better represented as separate nodes or edges in the graph, modify the function to account for this during the data transformation step.

For example, a JSON property in a relational table could be extracted into a separate node, with an edge linking it to the original entity. Customizing the transformation process in this way allows you to fully harness the flexibility of the graph model and ensures optimal performance for queries in your AI or ML workflows.
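As a sketch of that idea (the `movies` table and its `metadata` JSON column are hypothetical examples), the transformation might emit one extra node plus a linking edge per row:

```python
import json

# Sketch: split a JSON column into its own node plus a linking edge.
# Table/column names ("movies", "metadata") are hypothetical examples.
def json_property_to_queries(table, pk_col, pk_val, json_col, json_text):
    data = json.loads(json_text)
    props = ", ".join(f"{k}: '{v}'" for k, v in data.items())
    return [
        # Create the extracted node from the JSON payload
        f"CREATE (:{json_col} {{{props}}})",
        # Link it back to the original entity
        f"MATCH (a:{table} {{{pk_col}: '{pk_val}'}}), (b:{json_col} {{{props}}}) "
        f"CREATE (a)-[:HAS_{json_col.upper()}]->(b)",
    ]

queries = json_property_to_queries(
    "movies", "id", 1, "metadata", '{"rating": "PG", "runtime": 116}'
)
print(queries[0])  # CREATE (:metadata {rating: 'PG', runtime: '116'})
```

Nested JSON would need recursion, but the pattern of one node per extracted object and one edge back to its owner stays the same.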

Conclusion

Migrating from a relational database to a modern graph database like FalkorDB opens up new possibilities for working with dynamic, interconnected data. By embracing the graph model, you can build AI and ML workflows that are not only faster and more scalable but also inherently explainable. As you refine your graph model and scale your system, FalkorDB’s advanced features—such as vector embeddings and seamless clustering—will help you tackle even the most complex data challenges.


How does FalkorDB compare to other graph databases?

FalkorDB offers ultra-low latency, native support for advanced graph algorithms, and vector indexing, making it ideal for AI applications and real-time analytics. See benchmarks: https://benchmarks.falkordb.com