    Personal AI agents could solve DAO failures

February 22, 2026 · 3 Mins Read
Ethereum co-founder Vitalik Buterin has identified the limits of human attention as the core problem plaguing decentralized autonomous organizations (DAOs) and democratic governance systems.

    Summary

    • Buterin says limited human attention is DAOs’ core governance flaw.
    • Personal AI agents could vote using user preferences and context.
    • Suggestion markets and MPC may improve privacy and decisions.

    Writing on X, Buterin argued that participants face thousands of decisions across multiple domains of expertise without sufficient time or skill to evaluate them properly.


The usual fix, delegation, creates its own form of disempowerment: a small group ends up controlling decision-making, while supporters lose all influence the moment they click the delegate button.

Buterin proposed personal large language models as the solution to the attention problem and outlined four approaches: personal governance agents, public conversation agents, suggestion markets, and privacy-preserving multi-party computation for sensitive decisions.

    Personal LLMs can vote based on preferences

    Personal governance agents would perform all necessary votes based on preferences inferred from personal writing, conversation history, and direct statements.

    When the agent faces uncertainty about voting preferences and considers an issue important, it should ask the user directly while providing all relevant context.
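As a rough illustration of that escalation logic (my own sketch, not Buterin's design), a personal voting agent could infer a stance from stored preferences and hand important, uncertain decisions back to its owner; here the `preferences` lookup table and thresholds are stand-ins for what a real system would derive from the user's writing and conversation history via an LLM:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    importance: float  # 0..1, how consequential the decision is

@dataclass
class PersonalGovernanceAgent:
    """Votes on behalf of a user.

    `preferences` maps topic keywords to a stance in [-1, 1]; a real
    agent would infer stances from personal writing and chat history
    with an LLM rather than a keyword lookup (assumption for brevity).
    """
    preferences: dict = field(default_factory=dict)
    confidence_threshold: float = 0.5
    importance_threshold: float = 0.7

    def stance(self, proposal):
        # Average stance over matching topics; confidence grows with
        # the number of matches (a crude stand-in for model confidence).
        matches = [v for k, v in self.preferences.items()
                   if k in proposal.title.lower()]
        if not matches:
            return 0.0, 0.0
        score = sum(matches) / len(matches)
        confidence = min(1.0, len(matches) / 2)
        return score, confidence

    def vote(self, proposal, ask_user=None):
        score, confidence = self.stance(proposal)
        # Uncertain AND important -> escalate to the human, who would
        # be shown all relevant context before answering.
        if (confidence < self.confidence_threshold
                and proposal.importance >= self.importance_threshold):
            return ask_user(proposal) if ask_user else "abstain"
        return "yes" if score > 0 else "no" if score < 0 else "abstain"
```

The key design point from the article is the last branch: routine votes are automated, but the agent defers to the user exactly when its confidence is low and the stakes are high.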

“‘AI becomes the government’ is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of governance. The core problem with democratic /…”

— vitalik.eth (@VitalikButerin), February 21, 2026

    Public conversation agents would aggregate information from many participants before giving each person or their LLM a chance to respond.

    The system would summarize individual views, convert them into shareable formats without exposing private information, and identify commonalities between inputs similar to LLM-enhanced Polis systems.
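The two-phase flow described above can be sketched in a few lines (an illustrative assumption, not how Polis or any specific system is implemented): pool everyone's views first, surface the common themes, then let each participant, or their LLM, respond with the shared context in hand. In a real system the theme tags would be extracted from free text by an LLM, with private details redacted before sharing:

```python
from collections import Counter

def aggregate_then_respond(views, respond):
    """views: participant -> set of theme tags.
    respond: callback (participant, common_themes) -> response."""
    # Phase 1: aggregate -- count how many participants raise each theme
    # and keep those shared by at least two (or half the group).
    counts = Counter(theme for themes in views.values() for theme in themes)
    common = {t for t, c in counts.items() if c >= max(2, len(views) // 2)}
    # Phase 2: informed responses -- everyone reacts to the pooled
    # themes, not just their own information.
    return {p: respond(p, common) for p in views}
```

This mirrors the article's point that aggregation must come before individual responses, rather than simply averaging isolated opinions.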

Buterin noted that good decisions cannot come from “a linear process of taking people’s views that are based only on their own information, and averaging them (even quadratically).” Processes must aggregate collective information first, then allow informed responses.

    Suggestion markets could surface high-quality proposals

Governance mechanisms that value high-quality inputs could implement prediction markets in which anyone can submit proposals and AI agents bet tokens on them. When the mechanism accepts an input, it pays out to the holders of tokens staked on it.

    The approach applies to proposals, arguments, or any conversation units the system passes along to participants. The market structure creates financial incentives for surfacing valuable contributions.
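A minimal sketch of such a market, under my own simplifying assumptions (a single accepted proposal per round, losing stakes funding the winners pro rata; no real protocol specifies exactly this):

```python
from collections import defaultdict

class SuggestionMarket:
    """Anyone submits proposals; agents stake tokens on proposals they
    expect the mechanism to accept; the whole stake pool is paid out
    pro rata to backers of the accepted proposal."""

    def __init__(self):
        self.stakes = defaultdict(dict)  # proposal -> {agent: stake}

    def submit(self, proposal):
        self.stakes.setdefault(proposal, {})

    def bet(self, agent, proposal, amount):
        self.stakes[proposal][agent] = self.stakes[proposal].get(agent, 0) + amount

    def resolve(self, accepted):
        # Losing stakes fund the winners: backers of the accepted
        # proposal split the full pool in proportion to their stake.
        pool = sum(sum(s.values()) for s in self.stakes.values())
        winners = self.stakes[accepted]
        total = sum(winners.values()) or 1
        return {a: pool * s / total for a, s in winners.items()}
```

The payoff structure is what creates the financial incentive the article mentions: agents profit only by staking on contributions the mechanism actually accepts.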

Decentralized governance fails when important decisions depend on secret information, Buterin argued. Organizations typically handle adversarial conflicts, internal disputes, and compensation decisions by vesting great power in a few appointed individuals.

    Multi-party computation using trusted execution environments could incorporate many people’s inputs without compromising privacy.

    “You submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement,” Buterin explained.
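Conceptually, the flow is a function boundary that only lets judgements out. The sketch below simulates that boundary with a plain Python function (in practice the opacity would come from a TEE or MPC, not from code structure, and the aggregation rule here is my own assumption):

```python
def black_box_judgement(agents, private_info):
    """Each participant's model (a callable) sees the private
    information inside the box and emits only a judgement; the
    private input itself never leaves the function."""
    judgements = [agent(private_info) for agent in agents]
    # Only the aggregate verdict escapes the box -- here, a simple
    # majority over the individual judgements.
    yes = sum(1 for j in judgements if j == "approve")
    return "approve" if yes * 2 > len(judgements) else "reject"
```

Each `agent` stands in for a submitted personal LLM; the caller learns the collective verdict but never the private information the verdict was based on.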

    Privacy protection becomes important as participants submit larger inputs containing more personal information. Anonymity needs zero-knowledge proofs, which Buterin said should be built into all governance tools.





    © 2026 BytecoreNews.com - All rights reserved.
