Building an Ethical AI Strategy from the Ground Up
We collaborated with a multi-site nonprofit organization to lay the groundwork for a comprehensive, responsible AI strategy. The project focused on aligning leadership, equipping teams with department-specific AI tools, and launching a clear governance structure. Together, we designed a practical roadmap for integrating AI across operations—guided by human values, safety, and mission relevance.

Strategic Alignment & Leadership Readiness
We kicked off with a two-day workshop that brought staff and leadership into a shared understanding of AI’s risks and potential. Through live demonstrations, ethical use case discussions, and platform evaluations, the organization mapped its highest-value opportunities. From this, we helped craft an AI mission statement and strategic goals that reflect both their aspirations and responsibilities.
Department-Specific GPTs for Real Workflows
To create immediate impact, we developed a set of custom GPT assistants tailored to HR, finance, early childhood services, facilities, and other departments. These tools now support critical workflows like onboarding, compliance tracking, grant writing, and budgeting. Paired with documentation and training, the assistants give staff a tangible entry point into safe and productive AI use.
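As an illustration of how assistants like these are typically configured, each one pairs a shared safety preamble (reflecting the kind of ethics agreement described below) with a department-specific role instruction. The following is a minimal Python sketch; all department names and prompt text are hypothetical, not the organization's actual configuration:

```python
# Hypothetical sketch: assembling department-specific system prompts
# for custom GPT assistants. Names and prompt text are illustrative.

SAFETY_PREAMBLE = (
    "Follow the organization's AI Safety & Ethics Agreement: "
    "never include personally identifiable information in outputs, "
    "and flag any request outside your department's scope."
)

DEPARTMENT_PROMPTS = {
    "hr": "You assist the HR team with onboarding checklists and policy questions.",
    "finance": "You assist the finance team with budgeting and compliance tracking.",
    "grants": "You assist staff drafting and reviewing grant proposals.",
}

def build_system_prompt(department: str) -> str:
    """Combine the shared safety preamble with a department-specific role."""
    if department not in DEPARTMENT_PROMPTS:
        raise ValueError(f"No assistant configured for department: {department}")
    return f"{SAFETY_PREAMBLE}\n\n{DEPARTMENT_PROMPTS[department]}"
```

Each assembled prompt would then be supplied as the system message (or custom-GPT instructions) for whichever chat platform the organization adopts, so every assistant inherits the same guardrails.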
Training, Governance & Long-Term Planning
We equipped the organization with an AI Safety & Ethics Agreement, an internal governance plan, and a phased implementation roadmap. Staff are now supported by an "AI Champions" model, ensuring sustainable learning and iteration. The strategy includes long-term impact metrics, such as adoption, time savings, and equity-of-access checks, ensuring that AI growth stays grounded in community outcomes.
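Impact metrics of the kind mentioned above are usually simple ratios tracked over time. A minimal sketch of two such calculations follows; the function names and sample figures are hypothetical, not the organization's actual data:

```python
# Hypothetical sketch of two long-term impact metrics (adoption and
# time savings). All field names and sample figures are illustrative.

def adoption_rate(active_users: int, total_staff: int) -> float:
    """Share of staff who used an AI assistant during the period."""
    if total_staff <= 0:
        raise ValueError("total_staff must be positive")
    return active_users / total_staff

def hours_saved(tasks_completed: int, minutes_saved_per_task: float) -> float:
    """Estimated staff hours saved, from task counts and a per-task estimate."""
    return tasks_completed * minutes_saved_per_task / 60

# Example period: 42 of 120 staff active; 300 assisted tasks at ~12 min saved each.
print(f"Adoption: {adoption_rate(42, 120):.0%}")    # → Adoption: 35%
print(f"Hours saved: {hours_saved(300, 12):.0f}")   # → Hours saved: 60
```

Reviewing figures like these alongside equity-of-access checks lets the governance group see not just whether AI use is growing, but who it is reaching.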
