Building Useful Stuff: Can Gaming Communities Train AI?
Beryl joined Nicholas Soler of Being Invested on an episode of OORT’s Building Useful Stuff podcast, where they discussed the crucial role of collaboration in developing AI technologies responsibly.
In early April, it was announced that YGG had joined the HumanAIx Alliance, launched by verifiable cloud computing platform OORT to pioneer decentralized AI. The partnership enables the YGG community to take part directly in model training: improving model quality, mitigating bias, and ensuring inclusivity through verifiable, distributed participation. The initiative also commits to advancing AI ethics by involving real communities in the feedback loop.
On an episode of OORT’s Building Useful Stuff podcast, YGG co-founder Beryl Li joined host Nicholas Soler to discuss how YGG’s gaming community is helping train AI models, improving their quality and reducing bias. They also talked about how emerging markets are accelerating AI development while opening up new economic opportunities, the potential for decentralized AI to rival programs like ChatGPT, and how AI can create lifelike simulations for fields such as space exploration.
The following is an excerpt from their conversation, in which Nick asks about the implications of HumanAIx building decentralized AI, pointing out that some big tech companies have argued that closed-source AI models are safer because they can guard against misuse. In response, Beryl highlights that builders retain responsibility for how they architect the technology, while acknowledging that both self-regulation and government regulation will be needed.
Can Gaming Communities Train AI? YGG Says Yes
Nick (06:18): I was reading about this question, or rather, this debate over open- versus closed-source AI. One justification a lot of these tech giants give for staying closed-source is that they're protecting humanity from misuse and unethical usage. My question is that HumanAIx is open-source, or heading in that direction. How do you address safety, given that decentralized AI is open-source? Is that discussion even being had right now?
Beryl (07:05): This is a very important discussion to have, especially with all parties trying to work with AI. It depends on the architect. There are two paths: I can use AI to build technology that replaces humans entirely, which a lot of builders intend to do. It’s much faster, cheaper, and less prone to human error. That is the intention. But if we're responsible, we should be building AI to be a companion that augments what humans can do.
For example, with gaming, as I mentioned earlier, you have an AI companion telling you, “Hey, look left, look right, shoot right.” The same applies to medicine. We're going to see AI companions augmenting doctors. There's already a shortage of medical professionals. What we want AI to do is work alongside them so they can do their job with greater accuracy and speed, by prompting and reminding them: “Hey, have you checked this?”, “You forgot to ask for this person's blood sugar levels or blood pressure.”, “You forgot about that.” You have this AI assisting you.
Back to your question. It’s the responsibility of builders to architect the technology. That's one. The second is that a lot of government bodies will have to be involved, and we also need to start self-regulating. For example, within HumanAIx, this could be a discussion among ourselves: how do we do this? What if cyberattacks happen? What about the data we store on the cloud? How do we deal with that? How do we secure people's data? Many of the biggest questions right now are about privacy, especially in today's world. How do we protect it? There are so many questions around it.
We need to have those conversations now, rather than later, because a lot of engineers building right now are thinking, “What can my product do?” rather than taking responsibility. We have a responsibility as builders and innovators in this space. There's a lot going on, and this is why HumanAIx, this collaborative initiative, is important. You get to brainstorm ideas and ask questions that matter for society and for what we build in the future.
You can listen to the full recording on YouTube.
Follow Beryl Li to learn more about AI and the Future of Work. Follow HumanAIx and OORT for the latest developments in decentralized AI.
Join the YGG Discord and follow YGG on X for future updates.