Building AI for the Public Good: Preliminary Insights from Our Responsible AI Pipeline
by Abi Leung & Carlos Salinas
The AI landscape is moving fast. From content creation to community safety, its capabilities are evolving quickly enough to reshape how we organize, communicate, and build power. At the same time, many of today’s tools are brittle, biased, and built without public accountability.
That tension between promise and risk is why we launched the Responsible AI Funding Cycle this year. We received over 200 submissions, and a clearer picture is emerging of where innovation is happening in the Responsible AI ecosystem and who is stepping up to lead.
Our preliminary insights (final insights will be shared in August) tell an interesting story. When it comes to demographics, 60% of nonprofit organizations have a CEO or Executive Director who identifies as a person of color (Black, Latine, Asian, Native, and others), while 68% of for-profit companies have founders or CEOs from these same communities. Gender diversity is also strong across the board: 50% of nonprofit organizations and 47% of for-profits are led by individuals who identify as women, gender nonbinary, or trans. We are encouraged to see AI tools being built by underrepresented communities, the very communities often harmed by the biases in existing LLMs, in the hope that these new tools can be more inclusive.
While we’ll release a longer report when we’ve made our decisions, here is a first look at the themes we are encountering in our pipeline:
For-profits
The Growth of Content Creation Tools & Ethical Concerns
AI is democratizing sophisticated content creation capabilities, particularly for mission-driven organizations and underrepresented voices. Applications include predictive journalism that identifies emerging stories hours before they break, automated content writing specifically designed for nonprofits, and intelligent advertising platforms that connect data directly with content creation.
However, democratized access to these tools does not resolve open questions about the quality and reliability of AI-generated content. We still lack well-established, unbiased training data for these models, not to mention policy standards for labeling AI-generated content and governing its public and political use. Applicants in our pipeline have not yet addressed these concerns directly in their proposals, and some have built their products on top of the current leading LLMs, which vary in their encryption, data-handling, and open-sourcing practices, making data security another concern in most cases.
Financial and Social Incentives for Impact
AI-powered gamification is emerging as a compelling strategy to drive civic engagement and social good, particularly among younger audiences. Tools in our pipeline use points, streaks, and rewards systems to encourage actions like environmental responsibility, educational achievement, and community participation. By integrating behavioral psychology and real-world incentives, these platforms offer a scalable way to turn abstract values into concrete habits, making civic action feel accessible, even fun.
The impact of gamification depends heavily on intentional design. While these tools show promise in motivating behavior, few applicants demonstrated how they are measuring long-term impact or avoiding unintended consequences like superficial engagement or reward fatigue.
Nonprofits
Civic Tech Tools That Center Transparency & Democratic Discourse
AI is being deployed to build civic tools that go beyond transparency: they’re becoming core infrastructure for democratic engagement. Platforms are enabling communities to track legislative activity in real time, surface public sentiment, and engage directly with local decision-makers. These tools could help shift participation from episodic bursts around elections to sustained, everyday involvement, making democracy more responsive and accessible.
Narrative Safety Is Becoming a Core Component of Digital Democracy
Organizations are developing AI systems designed to monitor digital threats in real time—flagging harmful narratives, tracking harassment campaigns, and protecting underrepresented leaders from online abuse. These tools mark a shift in focus: from reactive content moderation to proactive protection of civic spaces, especially for communities disproportionately targeted by disinformation and digital harassment.
What’s Next?
AI can’t solve our problems if it replicates the systems that caused them. But this year’s pipeline so far gives us hope. Founders are designing AI tools that are redistributing power, not just solving for efficiency. They’re asking hard questions: Whose data? Whose voice? Whose values? Whether they’re building for-profit or nonprofit startups, they’re rejecting extractive models and experimenting with care-centered design.
But we still need to bridge the gap between promise and practice. That means:
Pushing for accountability and labeling standards
Funding open-source, multilingual, community-first solutions
Creating on-ramps for small orgs and underrepresented founders to build responsibly, without compromising on scale
The future of AI is not inevitable; it’s a choice. And in a moment when democracy feels increasingly fragile, responsible AI offers us a chance to reimagine what power, participation, and progress can look like.
We’re choosing to invest in people who are building that future with vision, urgency, and care. We look forward to sharing more insights once we’ve finished reviewing all our submissions!