AI labs today resemble Yahoo in the early internet era: relying on internal teams of employees to manually evaluate, rank, and curate vast amounts of information. This top-down model, while once necessary, is now the bottleneck. Just as Yahoo couldn’t keep up with the explosive growth of web content, no centralized team can organize the world’s intelligence fast enough to match the accelerating complexity of knowledge work and the required diversity of models, algorithms and programs.
The next multipliers in AI won’t come from adding more parameters to monolithic models. They will come from designing multi-agent systems that learn continuously from experience:
Human Experience Learning, as seen in methods like RLHF and DPO, aligns models more closely with lived behavior and subjective human preferences. To increase the bandwidth of these interactions, ChatGPT is becoming a social network, Gemini is integrating with Google products, Grok is merging into X, Meta is betting on smart glasses, and Tesla is building higher-throughput channels for collecting sensory signals between humans and their environment.
Machine Experience Learning, through agent coordination and verifiable rewards (GRPO, reward modeling in o3, AlphaZero, and DeepSeek), enables emergent intelligence. This is the domain of agent protocols like Anthropic’s Model Context Protocol (MCP, used by Claude) and Google’s Agent2Agent (A2A). Here the strategy for tech giants is to capitalize on experience signals from tool use, in addition to the reasoning traces produced by “reasoning models,” which are in fact multi-agent systems.
Together, these trends signal the rise of a new layer of the internet: the Internet of Experience. Here, the unit of value is not static content but the structured traces of interaction between agents, environments, and users. These interactions produce data with higher signal density, contextual relevance, and evolutionary feedback. In the same way Web 2.0 accelerated the production of the data used to train Large Language Models, the Internet of Experience will produce an order of magnitude more data for training and reinforcing the next generation of frontier models, taking us many steps closer to AGI.
According to the Sofrecom Internet Traffic Growth Report, the Cisco Annual Internet Report, and Statista's Digital Population statistics, the Internet of Experience is on a trajectory to reach 988 trillion tokens by 2030, roughly 19 times the human internet's 52 trillion tokens. At that point, about 95% of internet data would be experience-based signal, and on the current trajectory 38% of the total tokens will come from open-source traces.
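As a back-of-envelope sanity check, the headline figures above are mutually consistent (the 2030 projections themselves come from the cited reports; the arithmetic below only verifies the derived ratios):

```python
# Projected 2030 token volumes cited above.
experience_tokens = 988e12  # experiential ("Internet of Experience") tokens
human_tokens = 52e12        # "human internet" tokens

# Derived figures: the ~19x multiple, the ~95% share, and the open-source slice.
ratio = experience_tokens / human_tokens
share = experience_tokens / (experience_tokens + human_tokens)
open_source = 0.38 * (experience_tokens + human_tokens)  # 38% of total tokens

print(f"{ratio:.0f}x larger, {share:.0%} of total, {open_source/1e12:.0f}T open-source tokens")
# → 19x larger, 95% of total, 395T open-source tokens
```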
But if this experiential data remains trapped within centralized silos like OpenAI or Google, we replicate the Yahoo-era mistake: attempting to scale intelligence through centralized, manual validation. This legacy model fails both of its most essential contributors.
For human experts and knowledge workers, their nuanced insights—shaped through complex queries, interpretive reasoning, and domain-specific feedback—are harvested with little transparency or compensation. Their interactions become fuel for closed systems that accrue private value, while the contributors remain invisible within the training pipeline, receiving no authorship, recognition, or economic participation. This extractive dynamic erodes trust and suppresses the depth of engagement necessary for advancing human-aligned AI.
For open-source AI developers and researchers, the situation is no less constraining. Once a model is released into the world, the continuous stream of experiential feedback—how it's used, where it succeeds, where it fails—is lost in a sea of fragmented, unverified, and often inaccessible data. There exists no standardized substrate to federate, validate, or meaningfully reward that knowledge. As a result, OSS initiatives remain structurally disadvantaged compared to vertically integrated platforms that monopolize the full loop of deployment, usage, and improvement.
For customers of AI services, the problem is universal and has spanned every era of information systems: the desire to gather insights from experts, at velocity, in a trusted environment, and thereby participate in cultural evolution. Cultural evolution always needs validation: in academia, we review students’ work, we peer-review scientific papers, and we validate other papers by citing them, shaping the fabric of knowledge through weighted peer validations. Today this validation comes from a restricted team of humans working at tech giants, but it does not scale and is too slow to absorb the infinite potential of human creativity, so these AI models eventually converge toward mediocrity and customers grow frustrated.
This fragmentation stifles the emergence of a shared, evolving intelligence. It disincentivizes meaningful participation, impairs knowledge compounding, and renders the AI development process opaque and inefficient. Without a common protocol for verifiable feedback and stake-weighted consensus, there is no epistemic infrastructure to align incentives or anchor trust. As the Internet of Experience expands, the lack of a decentralized validation substrate will continue to bottleneck learning, waste insight, and stall the collaborative progress AI so critically depends upon. The need is not for more internal moderators, but for a paradigm shift: from gatekeeping to permissionless peer evaluation, from isolated value extraction to interoperable learning economies.
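To make the idea of stake-weighted peer evaluation concrete, here is a minimal sketch. All names, stakes, and scores are hypothetical illustrations, not Newcoin's actual protocol: each reviewer scores a contribution, and each score counts in proportion to the reviewer's stake, so the aggregate verdict can serve as a learning signal without a central moderator.

```python
# Hypothetical stake-weighted consensus: reviewers score a contribution in [0, 1]
# and each score is weighted by the reviewer's stake.
def stake_weighted_score(reviews):
    """reviews: list of (stake, score) pairs; returns the stake-weighted mean."""
    total_stake = sum(stake for stake, _ in reviews)
    if total_stake == 0:
        return 0.0
    return sum(stake * score for stake, score in reviews) / total_stake

# Illustrative reviews: a high-stake reviewer's verdict dominates the consensus.
reviews = [(100, 0.9), (50, 0.4), (10, 0.1)]
consensus = stake_weighted_score(reviews)
print(round(consensus, 3))  # (100*0.9 + 50*0.4 + 10*0.1) / 160 = 0.694
```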
The solution is not more employees working for tech giants—it’s a new protocol. Just as Google replaced Yahoo by turning the web into its own validator by counting the backlinks between web pages, we need an open, permissionless mechanism for evaluating intelligence. One where humans and machines review, stake, and learn from each other without relying on central authorities.
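The backlink-counting mechanism referenced above can be sketched as PageRank-style power iteration: each page earns its score from the pages that link to it, so the web acts as its own validator. This is an illustrative toy (the three-page graph and the classic 0.85 damping factor are assumptions), not the mechanism proposed here:

```python
# Minimal PageRank: iteratively redistribute each page's score along its
# outgoing links, so rank flows toward pages that receive many "votes."
def pagerank(links, damping=0.85, iters=50):
    pages = sorted(set(links) | {q for outs in links.values() for q in outs})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for src, outs in links.items():
            if outs:
                share = damping * rank[src] / len(outs)
                for dst in outs:
                    new[dst] += share
            else:  # dangling page: redistribute its rank evenly
                for p in pages:
                    new[p] += damping * rank[src] / n
        rank = new
    return rank

# Hypothetical three-page web: A and C both link to B, so B accumulates
# the most peer validation.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
print(max(ranks, key=ranks.get))  # → B
```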
This explosion of experiential data, driven by permissionless interactions, offers unprecedented scale for AI development. Yet, it also presents a foundational challenge: in an ecosystem where anyone or any agent can contribute, how do we ensure the trustworthiness of the learning signals themselves? The very decentralization that fuels scalability complicates the validation of signal quality and integrity, a critical hurdle when no single authority governs the flow of information.
Our team, with backgrounds at Holochain, Google Research, and DeepMind, and accelerated by NVIDIA, pioneered RLHF three years before ChatGPT and has researched agent-centric computing, learning theory, and agentic reinforcement learning. After six years of research and development, we have built a fully functioning system: a decentralized protocol where cultural evolution (human validation) and experiential learning (machine reward) converge. Newcoin is the only viable architecture for scaling intelligence in a world too fast and too rich for top-down control.