Governing AI responsibly means developing and using AI systems while minimizing risks and maximizing benefits to society. But what should organizations do to govern their AI responsibly? And how should they measure their level of responsibility? The Responsible AI Governance Maturity Model answers these questions.

The maturity model, based on the NIST AI RMF, one of the most influential AI governance frameworks in the world, consists of a questionnaire and scoring guidelines for evaluating the social responsibility of AI governance in AI-enabled organizations. The questionnaire includes a list of statements divided into nine topics:

  1. Map impacts
  2. Identify requirements
  3. Responsibility mindset
  4. Measure impacts
  5. Transparency
  6. Risk mitigation plan
  7. Risk mitigation activities
  8. Pre-deployment checks
  9. Monitoring

The scoring guidelines help evaluators assess the company’s performance in these nine topics and explain their scoring decisions using concrete information about the company (read more about the maturity model here).

Light-it used the maturity model to evaluate itself. In addition, it set an example for other companies by sharing its experiences as part of the Responsible AI Governance Maturity Model Hackathon, in which participants used the framework to evaluate companies.

Light-it’s insights can help many other companies in their responsible AI governance journey, and we share them below.

Light-it’s self-evaluation process

Light-it’s evaluation was carried out by two people: Adam Mallát, Innovation Manager, and Javier Lempert, CTO and founder.

  1. The first step was meeting with our team to learn about the Maturity Model.
  2. Then, Adam filled out the questionnaire, and Javier reviewed it.
  3. The last step was another meeting with our team, in which we went over the evaluation and discussed learnings from it, some of which were shared at the hackathon and are presented below.

Why Light-it cares about AI responsibility

The Competitive Advantage of AI Responsibility

Light-it stays ahead of competitors, including much larger ones, thanks to its expertise in addressing safety and compliance issues. In AI-based applications, this advantage is even larger than in other software: because strict AI-specific regulations and guidelines do not exist yet, many competitors are still far behind on AI responsibility, creating a considerable opportunity for a competitive edge. Moreover, AI responsibility is key to ensuring compliance with sector-specific regulations, such as HIPAA.

“We managed to have a competitive edge by being experts at safety and compliance.”
– Adam Mallát

The Ethical Value of AI Responsibility

Having a positive social impact is important to the company. In the healthcare sector, the stakes are especially high: when patients communicate with chatbots about mental healthcare needs, lives may be at risk. For example, when developing such a chatbot, Light-it’s AI product Puppeteer ensures that if the chatbot identifies suicidal or self-harm thoughts, the chat stops and the person is immediately referred to a human care provider.

Light-it’s approach to explaining scores

Large companies often create extensive documentation and structured processes, which can then be used to demonstrate the company’s priorities. As a startup, however, Light-it places less emphasis on documentation. It uses an agile approach, which allows it to stay nimble and retain tight control over the product. Instead, its priorities are reflected in the company’s objectives and resource allocation. For example: How many employees are empowered to work on a topic? Is the topic reflected in their Objectives and Key Results (OKRs)? Do they track measurable metrics related to the topic?

How the evaluation process helped Light-it

Engaging with the questionnaire became an opportunity to think systematically and analytically about the company’s efforts in AI ethics.

“[Filling out the questionnaire] was the first time that I personally went into so much detail and became so analytical about what we do in the area of ethics.”
– Adam Mallát

AI responsibility growth opportunities Light-it identified

Risk Assessment and Mitigation

The questionnaire helped the company think carefully about some of the risks related to its products, such as bias. In particular, it helped the team understand that all AI systems face bias risks, and they have decided to further empower their engineers to identify and reduce these risks.

“Bias is one of the main points we will tackle in the future that we weren’t…We are now developing tools to allow developers to understand bias.”
– Javier Lempert

Documentation

Light-it is considering adding documentation related to AI responsibility. Its main priorities are documents related to risk assessment and risk management.

Final Insights

The collaboration with Light-it has been tremendously helpful to the maturity model and the hackathon. First, Light-it helped us understand how to make the maturity model serve startups better. The NIST framework, which is the foundation of the maturity model, has been criticized for favoring large companies over small ones. We learned from Light-it that one of the ways this happens is through NIST’s emphasis on documentation, which the maturity model inherited. As a result, we de-emphasized documentation in the questionnaire. Second, in the hackathon, Light-it set an example for evaluators and answered questions from a practitioner’s perspective. This was invaluable in empowering participants to do their own evaluations. In particular, on the theme of documentation, Light-it helped participants understand how to explain scores in ways that are fair to small companies.