AI Risk Governance Platform
Role: Researcher, Lead UX/UI Designer
Year: 2025 (ongoing)
Keywords: algorithmic accountability, AI governance, design
This project aims to develop an AI governance platform that supports the alignment of AI models with the conformity assessment requirements under the EU AI Act. The platform will help organizations demonstrate compliance and maintain assurance throughout the AI lifecycle.
My Role
- Designing a human-centered risk database: Synthesized 65+ frameworks from the MIT AI Risk Repository into a three-layer structure (high-level taxonomy → specific risk types → concrete case studies) that makes abstract regulatory requirements actionable for organizational decision-making. The result is a foundational AI risk database: a regularly maintained, searchable repository of AI-related incidents and uncovered vulnerabilities (a sketch of this structure follows this list).
- Leading UX/UI: Translated regulatory language and technical risk categories into intuitive interfaces that guide users through systematic compliance processes. Collaborated with legal teams and eight engineers to integrate the risk taxonomy with the case-study inventory and an LLM-based scenario generation tool.
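
To make the three-layer structure concrete, here is a minimal sketch of how it could be modeled in code. All class and field names are illustrative assumptions for this write-up, not the platform's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative model of the three-layer risk structure:
# taxonomy category -> specific risk type -> concrete case study.

@dataclass
class CaseStudy:
    """Layer 3: a concrete, documented AI incident or vulnerability."""
    title: str
    summary: str
    source_url: str  # e.g., a news report or incident-database entry

@dataclass
class RiskType:
    """Layer 2: a specific risk type synthesized from the source frameworks."""
    name: str        # e.g., "training data bias" (hypothetical label)
    description: str
    case_studies: list[CaseStudy] = field(default_factory=list)

@dataclass
class TaxonomyCategory:
    """Layer 1: a high-level category in the risk taxonomy."""
    name: str
    risk_types: list[RiskType] = field(default_factory=list)
```

Linking every case study upward through a risk type to a taxonomy category is what lets abstract regulatory requirements resolve to concrete, searchable evidence.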
Background
The EU AI Act imposes obligations on both providers and deployers of high-risk AI systems. Providers must complete conformity assessments to obtain CE marking by August 2026, yet organizations face an overwhelming landscape of compliance requirements without contextual guidance for conducting risk assessments.
Interviews
I have been conducting two types of user research (interviews and trial conformity assessments) to gain a deeper understanding of our potential users.
Interview summary
User story
Current workflow in the AI governance process.
Pain Points
Overwhelming Risk Assessment Without Actionable Context: AI providers conducting conformity assessments face hundreds of potential risk scenarios across 65+ frameworks without knowing which risks apply to their specific system. They waste time reviewing irrelevant scenarios or produce superficial assessments that fail conformity reviews.
Scattered Evidence: Organizations struggle to collect audit evidence (logs, model versions, data snapshots, and policies) scattered across departments and systems. Without a centralized repository that tags each item with its relevant regulation, teams waste time gathering information and lack visibility into what is compliant and what is missing.
Governance Lag: AI model updates occur continuously, but technical documentation updates lag behind, creating version mismatches. This governance lag increases workload, risks audit failures, and delays product launches due to outdated legal documentation.
Solution
LLM-supported AI Risk Scenario Generator: an LLM-based scenario generation tool built on a searchable risk database that automatically produces risk assessments tailored to users' specific system contexts, eliminating manual review of irrelevant scenarios.
I conducted deep research on this feature, the LLM-Supported AI Risk Scenario Generator, leading repository development, validation, and database design.
Model Monitoring and Audit Evidence Management: monitors AI model behavior in real time and centralizes all related logs and records, ensuring traceable, regulation-linked audit evidence.
Document Synchronization: automatically updates legal and compliance documents whenever AI systems change or monitoring events are triggered (see the sketch below).
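
The document-synchronization idea can be illustrated with a short sketch: when a model update or monitoring event is recorded, every compliance document linked to that model is flagged as stale. This is a hypothetical illustration of the concept, assuming names and fields that are not the platform's real implementation.

```python
from dataclasses import dataclass

@dataclass
class ComplianceDoc:
    doc_id: str
    model_id: str
    model_version: str       # version the document was written against
    needs_update: bool = False

def on_model_event(docs: list[ComplianceDoc],
                   model_id: str, deployed_version: str) -> list[ComplianceDoc]:
    """Flag documents whose recorded version lags the deployed model."""
    stale = []
    for doc in docs:
        if doc.model_id == model_id and doc.model_version != deployed_version:
            doc.needs_update = True
            stale.append(doc)
    return stale
```

Tying the flag to a version mismatch directly targets the governance lag described under Pain Points: documentation can never silently drift behind the deployed model.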
Repository Development
To enable contextualized risk scenario generation, I first built a comprehensive risk repository through thematic analysis of MIT's database of 70 AI risk taxonomies (from industry and research sources).
Database Design
Using this risk repository, I designed a database structure to enable LLM-powered scenario generation, integrating the taxonomy with concrete case studies and building a tool that automatically produces risk assessments tailored to users' specific system contexts, eliminating manual review of irrelevant scenarios.
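
A minimal sketch of the generation flow, building on the three-layer data model sketched earlier: filter risk types relevant to the user's system context, then ground the LLM prompt in the matching case studies. The keyword-overlap retrieval and prompt format are illustrative assumptions, not the production pipeline.

```python
def retrieve_relevant_risks(taxonomy, system_context: str, top_k: int = 5):
    """Naive keyword-overlap retrieval; a real system might use embeddings."""
    context_words = set(system_context.lower().split())
    scored = []
    for category in taxonomy:
        for risk in category.risk_types:
            overlap = len(context_words & set(risk.description.lower().split()))
            scored.append((overlap, risk))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [risk for _, risk in scored[:top_k]]

def build_prompt(system_context: str, risks) -> str:
    """Assemble a prompt grounded in retrieved risks and their case studies."""
    evidence = "\n".join(
        f"- {risk.name}: " + "; ".join(cs.title for cs in risk.case_studies)
        for risk in risks
    )
    return (
        f"System under assessment: {system_context}\n"
        f"Relevant risk types and documented cases:\n{evidence}\n"
        "Generate concrete risk scenarios for this system."
    )
```

Because generation is grounded in retrieved database entries rather than the model's open-ended output, users only review scenarios already filtered for their system context.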
Low-fi & Mid-fi
I created low-fidelity and mid-fidelity prototypes iteratively, incorporating feedback from engineers and lawyers to refine the user experience.
Low-fi
Mid-fi
Next Steps
- Feasibility assessment for automated data collection: Evaluate technical approaches for automated incident and vulnerability collection to reduce manual data entry and ensure the repository stays current
- Database implementation: Build and deploy the database model to support the risk taxonomy structure
- High-fidelity prototype and LLM integration: Design high-fidelity interfaces and integrate LLM-powered scenario generation capabilities into the platform
Credits
Masanobu Kikuchi: Product Manager
Yui Kondo: Researcher, Designer
Prof. Michael Osbourne: Repository Advisor
Alex Constantin: ML Engineer
Alessandra Tosi: ML Engineer
Marco Caselli: ML Engineer
Steph Clacksman: Software Engineer
Adam Hazell: Software Engineer
Keiji Tonomura: Law Consultant
Juliette Zaccour: Intern