Big Brain Breakdown: Key Trends in DeAI
Bittensor resources & trends we like
Sponsor Us | Follow us on X | Past Editions | TAO Times
Quick Catchup
Good morning everyone,
We're thrilled to see new subscribers joining our newsletter! Given this recent growth, we thought it would be an excellent opportunity to update you on the latest developments at Onchain Outpost.
As you may have noticed, we're enthusiastic about the Bittensor ecosystem. To share our excitement, we've developed a few engaging content streams for those of you who are interested in learning more.
TAO Times by Onchain Outpost: A weekly Bittensor newsletter summarizing all of the events across the ecosystem and subnet teams.
TAO Talk by Blocmates: A weekly Bittensor banter session where we chat through the week's updates. Last week we had CK, the co-founder of Tensorplex, on to discuss some project alpha that you won't want to miss…
Bittensor Resource Portal by Onchain Outpost: Landing page that showcases educational resources, tools, and products built by subnet teams.
While admittedly pretty confusing at first, Bittensor is a project that we are very excited about and recommend plugging into if you're interested in DeAI. We put together the resource portal to make your research even easier!
Now, it's been a while since we have done a Big Brain Breakdown, and trust us when we say, there's a lot to break down.
It's clear that the DeAI vertical is heating up fast. Here are the trends worth watching.
Key Trends
We have started to evaluate the DeAI sector by asking ourselves: "where does the convergence of crypto and AI actually make sense?" Wow, so profound! But no, seriously, we frequently discuss:
Where do blockchains unlock net-new benefits for AI development?
Which components of the AI stack are optimized through decentralized protocols?
Where are open-source DeAI applications reaching performance parity with their closed-source counterparts?
Looking at it from this perspective, we have noticed some key verticals in DeAI that are definitely worth paying attention to.
But before jumping into it, let's revisit this excellent DeAI ecosystem map built by Topology to get a lay of the land of where we should dig…
[Image: DeAI ecosystem map, via Topology VC]
If you're new here, you may be overwhelmed by the sheer number of micro-niche verticals within the DeAI space alone. But fret not, here are the big hitters we think you should focus on:
Coordination Layers
To us, this is probably the coolest and most important vertical to watch, simply because it addresses our first criterion: Where do blockchains unlock net-new benefits for AI development?
Fundamentally speaking, coordination layers are protocols designed to coordinate AI/ML developers to create "intelligence" by providing their models and resources in exchange for a reward, typically based on the value of the intelligence produced. This intelligence could be anything from a stock or token price prediction to the routing of an inference request or the generation of an image.
Why is this interesting? These protocols unlock net-new benefits for open-source AI development by coordinating independent actors, through crypto-economic incentives, to compete with one another to produce the highest-quality intelligence, otherwise referred to as digital commodities.
Never before in the history of software development has it been possible to organize a globally distributed network of developers, all economically aligned, to build open-source software. Yet here we are, building these structures for open-source AI.
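To make that incentive loop concrete, here's a minimal sketch (our own illustration, not any specific protocol's mechanism) of how a coordination layer might score competing miners and split emissions by the quality of the intelligence they produce:

```python
def score_submissions(submissions, ground_truth):
    # Validators score each miner's output; here, quality is simply
    # closeness of a numeric prediction to the realized value.
    return {miner: 1.0 / (1.0 + abs(pred - ground_truth))
            for miner, pred in submissions.items()}

def distribute_rewards(scores, emission=100.0):
    # Emissions are split in proportion to each miner's relative score,
    # so higher-quality intelligence earns a larger share.
    total = sum(scores.values())
    return {miner: emission * s / total for miner, s in scores.items()}

# Three hypothetical miners predicting a token price that settles at 42.0
submissions = {"miner_a": 41.5, "miner_b": 45.0, "miner_c": 39.0}
scores = score_submissions(submissions, ground_truth=42.0)
rewards = distribute_rewards(scores)
```

In a real network like Bittensor the scoring is far more involved (stake-weighted validators, consensus over scores, and so on), but the core idea is the same: rewards flow toward whoever produces the most valuable digital commodity.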
As a result, we believe that these types of protocol architectural models have the chance to completely flip AI development upside down and thrust the industry into a position where closed-source big tech players are playing catchup with open-source hackers, instead of the other way around.
But hey, don't just take our word for it. Bittensor, the most popular coordination layer, is on a tear, both in price and social mindshare.
[Chart: Bittensor mindshare, via Kaito AI]
Want to check out some more related projects? We wrote about some here.
Data
"Data is the new oil!" Yeah, 2015 called. They wanted their slogan back…
Nonetheless, data is a huge bottleneck that is always hindering the next AI breakthrough, and it's why teams in the DeAI space are trying to solve it. This addresses our next question: Which components of the AI stack are optimized through decentralized protocols?
By incentivizing people to contribute unique datasets to a globally distributed network of "authenticators," these networks capture and combine robust new datasets for model training. Since AI models are only as good as the data they are trained on, it's important that the sources are accurate, tamper-proof, and owned by the people who contributed them.
A few of the teams we are watching closely are Grass Network and Vana, both of which are building efficient, incentive-driven mechanisms that give people ownership over the data they help collect. While we have written about Grass in the past, let's focus on Vana.
In essence, Vana is creating a user-owned network for data that allows users to contribute data, earn ownership in AI models, and participate in an open marketplace for data and compute. Through Data DAOs, users contribute to unique datasets and are rewarded in relation to the demand AI developers have for that specific data. Some examples of early-stage Data DAOs:
[Image: early-stage Data DAOs, via Vana]
We anticipate that this data vertical will continue to explode, as developers begin to wrap their heads around the "data wall" problem and the limitations of purely synthetic datasets.
Distributed Model Training
What was once thought of as nearly impossible is now being achieved. Thanks to pioneers such as Nous Research and Prime Intellect, the open-source and DeAI breakthroughs in distributed model training are hitting their stride, addressing our final consideration: Where are open-source DeAI applications reaching performance parity with their closed-source counterparts?
Keeping it high-level, AI model training is an extremely resource-intensive process that involves feeding large datasets through a neural network to teach the model a specific task. The process iteratively adjusts the network's internal parameters (weights) to minimize the difference between its predictions and the correct answers, as measured by a loss function. This cycle of prediction, error calculation, and weight adjustment is repeated many times, gradually improving the model's performance until it can effectively perform the desired task on new, unseen data.
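That predict–measure–adjust cycle can be sketched with a toy one-parameter model trained by gradient descent (a hand-rolled illustration, not any framework's API):

```python
# Toy training loop: fit y = w * x to data generated with w_true = 3.0.
data = [(x, 3.0 * x) for x in range(1, 6)]  # (input, correct answer) pairs

w = 0.0    # the model's single internal parameter (weight)
lr = 0.01  # learning rate: how far each adjustment moves the weight

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x           # 1. prediction
        error = y_pred - y_true  # 2. error vs. the correct answer
        grad = 2 * error * x     # 3. gradient of the squared loss w.r.t. w
        w -= lr * grad           # 4. weight adjustment
```

After enough repetitions, w converges to the value that minimizes the loss (here, 3.0). A real LLM runs the same loop with billions of weights, which is why the resource and communication costs are so enormous.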
The bottleneck lies in passing data between computing resources at each step of the training process. However, model training, once favored to be conducted in data centers where the computing resources were colocated, can now be executed in a decentralized fashion thanks to two recent distributed training discoveries: Nous Research's DisTrO and Prime Intellect's DiLoCo optimization methods, both demonstrating significant advancements in reducing communication requirements for distributed training of LLMs.
In fact, OpenDiLoCo achieved an 857x reduction in bandwidth usage while matching the convergence and final loss of standard training methods, enabling the training of a 1.1B parameter model across multiple countries with high compute utilization.
On the other hand, DisTrO pushed even further, reporting 857x-3000x bandwidth reduction during pre-training and up to 10,000x for post-training and fine-tuning, while also matching standard AdamW+All-Reduce performance. Const even deployed this on a Bittensor subnet last week during the ecosystem community call.
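For a rough sense of what an 857x reduction means at that scale, here's some back-of-the-envelope arithmetic (our own illustration; actual payloads depend on precision, sharding, and each method's details):

```python
# Naive data-parallel training all-reduces the full gradient every sync.
params = 1.1e9       # 1.1B-parameter model, as in the OpenDiLoCo run
bytes_per_param = 4  # assuming fp32 gradients
full_sync_gb = params * bytes_per_param / 1e9  # ~4.4 GB exchanged per sync

reduction = 857      # OpenDiLoCo's reported bandwidth reduction
reduced_gb = full_sync_gb / reduction          # ~5 MB per sync instead
```

At roughly 5 MB per synchronization instead of 4.4 GB, syncing over ordinary internet links becomes feasible, which is exactly what the cross-country 1.1B-parameter run demonstrated.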
[Image: via Novelty Search]
Again, these breakthroughs in distributed model training were thought highly improbable just six months ago, and now they are in production and implemented on Bittensor. Incredible.
Conclusion
We often hear misconceptions about the DeAI space, which is why we started Onchain Outpost in the first place: to answer the question, where is the value here?
A wise man once said, "if they aren't FUDing you, then you aren't building anything worth FUDing," and we believe that to be the case with this vertical.
After all, we accept the FUD. It makes us take a step back and develop stronger frameworks and evaluations for verticals that are seemingly convoluted and difficult to interpret. Where there is information asymmetry, there lies opportunity, and we are excited to see how the opinions expressed here stand the test of time in the months, years, and decades ahead.
You read and share, we listen and improve. Send feedback to [email protected]
Disclaimer: This newsletter is provided for educational and informational purposes only and is not intended as legal, financial, or investment advice. The content is not to be construed as a recommendation to buy or sell any assets or to make any financial decisions. The reader should always conduct their own due diligence and consult with professional advisors for legal and financial advice specific to their situation.