AI represents something fundamental and powerful: an entirely new computing paradigm that will reshape how we work, learn, and interact. But today's AI landscape offers limited options: either access AI models through commercial APIs or use "open source" models released by large tech companies. Neither approach fully addresses the opportunity for broader participation in AI development. As we wrote in our recent Four Futures post on where AI might head, we believe there is an opportunity for “Actually Open AI” that would expand our horizons substantially.
Today, we're excited to announce our investment in Pluralis Research, a company advancing an approach to AI model development that is “actually open”. USV and CoinFund co-led the $7.6M seed round, with participation from Variant, Topology, and other investors who share our vision for a more open AI ecosystem.
Pluralis enables collaborative, decentralized training of frontier AI models, an approach they call "Protocol Learning". As Pluralis founder Alexander Long writes in his post today:
"The key properties that define open-source—the ability for anyone to participate, innovate, and build on others’ work—do not exist at the foundation model layer. I believe this is a bad thing, and founded Pluralis Research to change it. We are developing a new approach, called Protocol Learning, that will enable truly open-source AI."
The core technical challenge is training large neural networks across distributed nodes of heterogeneous hardware, connected by standard internet connections. This is a radical departure from today’s model training, which requires high-end hardware co-located in a single data center, a design accessible only to the largest, best-resourced entities.
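To see why link bandwidth, rather than raw compute, becomes the bottleneck, consider a minimal sketch (ours, not Pluralis's actual method) of splitting a model across two nodes: only the layer activations and their gradients cross the connection between machines, while the far larger weights stay local.

```python
# A minimal sketch of a pipeline-parallel split across two imagined
# nodes; this illustrates the bandwidth problem, not Pluralis's
# actual Protocol Learning method.
import torch
import torch.nn as nn

hidden = 1024

# Two model shards, imagined as living on two different machines.
shard_a = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
shard_b = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 10))

opt_a = torch.optim.SGD(shard_a.parameters(), lr=1e-2)
opt_b = torch.optim.SGD(shard_b.parameters(), lr=1e-2)

x = torch.randn(32, hidden)           # a training batch held by node A
target = torch.randint(0, 10, (32,))  # labels held by node B

# Node A: forward pass through its shard.
act = shard_a(x)

# "Send" the activations across the link; detach marks the node boundary.
act_remote = act.detach().requires_grad_()

# Node B: finish the forward pass, compute the loss, backprop its shard.
loss = nn.functional.cross_entropy(shard_b(act_remote), target)
loss.backward()

# "Send" the activation gradient back; node A backprops its own shard.
act.backward(act_remote.grad)

opt_a.step()
opt_b.step()

# Only the activations and their gradient crossed the link; the shard
# weights (and their gradients) never left their respective nodes.
bytes_per_step = 2 * act.numel() * 4  # two float32 tensors of shape (32, 1024)
print(f"loss={loss.item():.4f}, ~{bytes_per_step} bytes over the link per step")
```

Making this kind of split work at frontier scale, over consumer-grade connections and mismatched hardware, is the hard research problem Pluralis is tackling.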
Another key innovation is that finished models remain within the protocol, with no single entity able to extract the full weights. In other words, the models are “protocol-owned”, with ongoing usage revenue distributed back to all contributors. This is a novel take on open source in AI, which today releases only the final weights while keeping the training data and process closed. Pluralis inverts this paradigm, opening up the full training process while preserving an economic interest for the contributors.
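As a toy illustration of what “protocol-owned” economics could look like, here is a minimal sketch assuming a simple pro-rata rule; the actual Pluralis mechanism is not described in this post, and the credit names are hypothetical:

```python
# A minimal sketch of protocol-owned revenue sharing, assuming a simple
# pro-rata rule; Pluralis's actual mechanism is not specified here.
from dataclasses import dataclass

@dataclass
class Contributor:
    name: str
    compute_credits: float   # e.g. verified GPU-hours supplied to training
    research_credits: float  # e.g. accepted architecture or data work

def distribute(revenue: float, contributors: list[Contributor]) -> dict[str, float]:
    """Split usage revenue in proportion to each contributor's total credits."""
    total = sum(c.compute_credits + c.research_credits for c in contributors)
    return {
        c.name: revenue * (c.compute_credits + c.research_credits) / total
        for c in contributors
    }

payouts = distribute(
    revenue=1_000.0,
    contributors=[
        Contributor("gpu_operator", compute_credits=700.0, research_credits=0.0),
        Contributor("researcher", compute_credits=0.0, research_credits=300.0),
    ],
)
print(payouts)  # {'gpu_operator': 700.0, 'researcher': 300.0}
```

The design point to notice is the separation: compute and intellectual contributions are credited and paid on equal footing.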
The implications are substantial. First, it prevents the concentration of power in AI by ensuring advanced capabilities aren't controlled by a handful of large companies. Second, it allows expertise, not just capital, to drive development by separately valuing computational contributions and intellectual contributions like model architecture design. This means AI researchers can monetize their expertise directly, without needing to own massive compute infrastructure or be employed by a large lab. Finally, by aggregating distributed compute, this approach could marshal resources that rival or even exceed what the largest tech companies can assemble on their own.
As Albert recently argued, there are compelling reasons to prefer a world with multiple competing AI systems. He notes that "The safest number of ASIs is 0. The least safe number is 1. Our odds get better the more there are." This perspective emphasizes the importance of distributed development and governance of frontier AI models, which Pluralis enables.
Pluralis has deep technical expertise. Their team of AI PhDs brings experience from Amazon, Oxford University, and other leading research institutions, and is developing pioneering techniques that we believe will make decentralized model training and protocol-ownership a practical reality. With this funding, Pluralis will continue developing their core technology while beginning to build their network of compute contributors. Their first collaborative training runs are expected in the coming months.
We believe Pluralis represents an important step toward more open, collaborative and economically sustainable AI development, and we are thrilled to support them on this journey.