
    LangChain Splits AI Agents Into Two Security Classes With Fleet Update

    Darius Baruo
    Mar 23, 2026 18:08

    LangSmith Fleet introduces Assistants and Claws agent types, solving a critical authorization problem for enterprise AI deployments.





    LangChain has formalized two distinct authorization models for AI agents in its LangSmith Fleet platform, addressing what’s become a thorny problem as enterprises deploy autonomous systems that need to access sensitive company data.

    The framework, detailed in a March 23 blog post, splits agents into “Assistants” that inherit end-user permissions and “Claws” that operate with fixed organizational credentials—a distinction that emerged partly from how OpenClaw changed developer expectations around agent identity.

    Why This Matters for Enterprise Adoption

    The authorization question sounds technical but has real consequences. When an AI agent pulls data from Slack or searches your company’s Notion workspace, whose permissions should it use? The wrong answer creates either security holes or useless agents.

    Consider an onboarding bot with access to HR systems. If it uses Alice’s credentials when Alice asks questions, that’s appropriate. But if Bob can query the same bot and accidentally access Alice’s private salary information, you’ve got a compliance nightmare.


    LangChain’s solution:

    Assistants authenticate through per-user OAuth. The agent inherits whatever access the invoking user already has—nothing more. Each user’s interactions remain siloed in their own Agent Inbox.

    Claws use a shared service account. Everyone interacting with the agent gets the same fixed permissions, regardless of who they are. This works for team-wide automations where individual identity doesn’t matter.

    The OpenClaw Factor

    The two-model approach reflects how agent usage patterns have evolved. Traditional thinking assumed agents always act “on-behalf-of” a specific user. Then OpenClaw popularized a different model—agents that creators expose to others through channels like email or social media.

    When someone creates an agent and shares it publicly, using the creator’s personal credentials becomes problematic. The agent could access private documents the creator never intended to expose. This pushed developers toward creating dedicated service accounts for their agents, effectively inventing the Claw pattern organically.

    Channel Limitations

    There’s a practical constraint: Assistants currently work only in channels where LangSmith can map external user IDs (like Slack) to LangSmith accounts. Claws face fewer restrictions but require more careful human-in-the-loop guardrails since they’re effectively opening fixed credentials to variable inputs.

    LangChain provided concrete examples from their own deployments. Their onboarding agent runs as an Assistant—it needs to respect individual Notion permissions. Their email agent operates as a Claw with human approval gates for sending messages, since it manages one person’s calendar regardless of who’s emailing.
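The email-agent pattern, a Claw whose side effects pass through a human gate, might look something like this in outline (the function and its signature are assumptions for illustration, not LangChain's implementation):

```python
from typing import Callable

# Hypothetical sketch of a human-in-the-loop gate for a Claw-style agent.
# Because a Claw runs with fixed organizational credentials regardless of
# who triggered it, side effects like sending email are held until a human
# approves the drafted action.

def send_with_approval(draft: str,
                       approve: Callable[[str], bool],
                       send: Callable[[str], None]) -> str:
    """Invoke `send` only after a human reviewer signs off on the draft."""
    if approve(draft):
        send(draft)
        return "sent"
    return "held for review"
```

The gate sits between drafting and sending, so the agent can still compose freely while a reviewer controls the only action that touches the outside world.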

    What’s Next

    The company flagged user-specific memory as an upcoming feature. Current memory permissions are binary—you either can edit an agent’s memory or you can’t. Future versions will prevent Assistants from leaking information learned from one user’s session into another’s.

    For enterprises evaluating agent platforms, the authorization model matters as much as the underlying AI capabilities. LangSmith Fleet launched March 19 with these identity controls baked in from the start.

    Image source: Shutterstock


