Bytecore News
AI News
    SurrealDB 3.0 wants to replace your five-database RAG stack with one

February 17, 2026 · 4 Mins Read
    Building retrieval-augmented generation (RAG) systems for AI agents often involves using multiple layers and technologies for structured data, vectors and graph information. In recent months it has also become increasingly clear that agentic AI systems need memory, sometimes referred to as contextual memory, to operate effectively.

    Keeping those separate data layers synchronized adds complexity that can lead to performance and accuracy issues. It's a challenge SurrealDB is looking to solve.

    SurrealDB on Tuesday launched version 3.0 of its namesake database alongside a $23 million Series A extension, bringing total funding to $44 million. The company takes a different architectural approach than relational databases like PostgreSQL, native vector databases like Pinecone or graph databases like Neo4j. The OpenAI engineering team recently detailed how it scaled Postgres to 800 million users using read replicas, an approach that works for read-heavy workloads. SurrealDB instead stores agent memory, business logic and multi-modal data directly inside the database. Rather than synchronizing across multiple systems, vector search, graph traversal and relational queries all run transactionally in a single Rust-native engine that maintains consistency.

    "People are running DuckDB, Postgres, Snowflake, Neo4j, Quadrant or Pinecone all together, and then they're wondering why they can't get good accuracy in their agents," CEO and co-founder Tobie Morgan Hitchcock told VentureBeat. "It's  because they're having to send five different queries to five different databases which only have the knowledge or the context that they deal with."

    The architecture has resonated with developers, with 2.3 million downloads and 31,000 GitHub stars to date for the database. Existing deployments span edge devices in cars and defense systems, product recommendation engines for major New York retailers, and Android ad serving technologies, according to Hitchcock.

    Agentic AI memory baked into the database

    SurrealDB stores agent memory as graph relationships and semantic metadata directly in the database, not in application code or external caching layers. 

    The Surrealism plugin system in SurrealDB 3.0 lets developers define how agents build and query this memory; the logic runs inside the database with transactional guarantees rather than in middleware.

    Here's what that means in practice: When an agent interacts with data, it creates context graphs that link entities, decisions and domain knowledge as database records. These relationships are queryable through the same SurrealQL interface used for vector search and structured data. An agent asking about a customer issue can traverse graph connections to related past incidents, pull vector embeddings of similar cases, and join with structured customer data — all in one transactional query.
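    That combined query pattern can be sketched in SurrealQL. Note that every table, field and relation name below is hypothetical, invented for illustration rather than taken from the article or any real deployment; the sketch simply assumes a vector index exists on the `embedding` field.

    ```
    -- Hypothetical schema for illustration only.
    -- Record an incident and link it into the agent's context graph.
    CREATE incident:1042 SET
        summary = "Checkout fails on retry",
        embedding = $summary_embedding,
        created_at = time::now();

    RELATE customer:acme->reported->incident:1042;
    RELATE incident:1042->similar_to->incident:877;

    -- One transactional query: KNN vector search (<|5|>), graph
    -- traversal (->similar_to->), and a join on customer fields.
    SELECT
        summary,
        ->similar_to->incident.summary AS related_cases,
        <-reported<-customer.plan AS customer_plan
    FROM incident
    WHERE embedding <|5|> $query_embedding;
    ```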

    "People don't want to store just the latest data anymore," Hitchcock said. "They want to store all that data. They want to analyze and have the AI understand and run through all the data of an organization over the last year or two, because that informs their model, their AI agent about context, about history, and that can therefore deliver better results."

    How SurrealDB's architecture differs from traditional RAG stacks

    Traditional RAG systems query databases based on data types. Developers write separate queries for vector similarity search, graph traversal, and relational joins, then merge results in application code. This creates synchronization delays as queries round-trip between systems.
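    A minimal sketch of that application-side glue, with stand-in dictionaries simulating what three separate systems might each return (the data and function are illustrative, not from the article); in a real stack each result set would arrive from its own network round-trip:

    ```python
    # Stand-in results simulating three separate systems' responses.
    vector_hits = [{"doc_id": "inc-877", "score": 0.91}]          # e.g. a vector DB
    graph_neighbors = {"inc-877": ["inc-512", "inc-344"]}          # e.g. a graph DB
    customer_rows = {"inc-877": {"customer": "acme", "plan": "enterprise"}}  # e.g. a relational DB

    def merge_context(hits, neighbors, rows):
        """Join three result sets in application code -- the glue layer
        a single-database query model is meant to eliminate."""
        context = []
        for hit in hits:
            doc = hit["doc_id"]
            context.append({
                "doc_id": doc,
                "score": hit["score"],
                "related": neighbors.get(doc, []),
                "customer": rows.get(doc, {}),
            })
        return context

    merged = merge_context(vector_hits, graph_neighbors, customer_rows)
    ```

    Each `.get()` call papers over the real risk: if one system lags behind another, the merged context is silently stale or incomplete.
    
    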

    In contrast, Hitchcock explained that SurrealDB stores data as binary-encoded documents with graph relationships embedded directly alongside them. A single query through SurrealQL can traverse graph relationships, perform vector similarity searches, and join structured records without leaving the database.

    That architecture also affects how consistency works at scale: Every node maintains transactional consistency, even at 50+ node scale, Hitchcock said. When an agent writes new context to node A, a query on node B immediately sees that update. No caching, no read replicas.

    "A lot of our use cases, a lot of our deployments are where data is constantly updated and the relationships, the context, the semantic understanding, or the graph connections between that data needs to be constantly refreshed," he said. "So no caching. There's no read replicas. In SurrealDB, every single thing is transactional."

    What this means for enterprise IT

    "It's important to say SurrealDB is not the best database for every task. I'd love to say we are, but it's not. And you can't be," Hitchcock said. "If you only need analysis over petabytes of data and you're never really updating that data, then you're going to be best going with object storage or a columnar database. If you're just dealing with vector search, then you can go with a vector database like Quadrant or Pinecone, and that's going to suffice."

    The inflection point comes when a workload needs multiple data types together, and the practical benefit shows up in development timelines: what used to take months to build with multi-database orchestration can now launch in days, Hitchcock said.

