Ensuring Ethical AI

Category: BRM Community, BRM Philosophy

With growing interest in building in-house AI teams, alongside the multiplication of AI regulations internationally, many new responsibilities are being created and spread across teams.

At the Applied AI Institute, our commitment to Responsible AI led to the creation of a novel role: the Responsible AI (RAI) Coordinator. Disentangling this role's overlap with other roles, such as Product Manager or BRM, is key to ensuring clear accountability.

From the outset, we found that it is virtually impossible to take on the responsibilities of product/project managers while also accounting for everything that could have a negative impact, or for potential blind spots.

The nature of AI product development and AI project implementation is highly iterative, making it highly challenging for the PMs who drive these cycles to also monitor risks, impacts, and blind spots.

While crucial to the development of solutions, the scope of knowledge of legal and compliance teams often stops precisely where PMs would most benefit from additional support, insight, and guidance. For example, beyond GDPR compliance, the RAI Coordinator ensures that processes are created to further inspect data for myriad forms of bias. RAI is a consideration throughout the entire lifecycle of an AI system.

The gap between the two underscores the value of, and need for, the RAI Coordinator role.

With the proliferation of AI teams across many departments, RAI Coordinators have horizontal visibility and are able to connect those teams. This visibility is essential to ensuring transparency.

Without visibility into the significant impacts of developing or deploying solutions (read "impacts" very broadly here, from data bias to how stakeholders will be affected), departments and organizations risk damaging their brand as well as the morale of their teams, especially given how much work and how many resources are invested in developing and deploying these solutions. Having to backtrack or abandon a project because of a negative impact can significantly damage team morale and revenue. Getting it right the first time means ensuring that the team has the capabilities and capacity to make more intentional decisions, and to catch balls before they drop.

The tools I have developed to keep our eyes on the ball are the 6 Dimensions and the Responsible AI Curve. The 6 Dimensions enable us to know all our stakeholders and to choose to take second-order impacts into account, even when we don't have direct control over them. For example, by keeping in mind that building a tool for our customer can transform their relationships with other teams and the general public, we ensure that we retain our share of responsibility.

The Responsible AI Curve is another tool we use to communicate within the team. It was born out of the need to look at responsible AI tools from a collaborative rather than a competitive perspective: we want technical measures of fairness to weigh as much as research on data harms. This tool invites us to consider the entire AI lifecycle from the outset when thinking about risks, impacts, and responsibility.

Acquiring most other technology services, or building them in-house, usually warrants careful thought before adoption; too often, this is simply not the case with AI. Consider questions such as:

Do we want to build a relationship with this supplier?

Will the tool meet our needs for years ahead?

How will we manage change when implementing this tool?

With AI products hitting the market faster than we can say "black box," given the current release-and-regulate-later model, many teams and departments don't have, or don't take, the time for this depth of consideration. While many cite the compounding cost of late AI adoption, fewer teams and organizations consider the compounding cost of implementing or developing solutions that optimize inadequate processes or actively break relationships with stakeholders.

It is likely that people on your teams are already partially taking on some of these responsibilities, and ultimately responsibility must be shared to be solid. However, the landscape of Responsible AI is changing rapidly, and it hasn't yet been fully operationalized.

Therefore, much remains to be done to grasp the scale of these responsibilities and effectively operationalize the field. The role of Responsible AI Coordinator is still evolving, but ultimately it can help teams navigate the storm!

Interested in learning more about this topic?

Watch the full video of the webinar on Ethics in AI!

Meghan Wester, Responsible AI Coordinator
Applied AI Institute

About the Author

Meghan is currently the Responsible AI Coordinator for the Applied AI Institute in Canada. With a background as a strategic foresight and policy analyst, Meghan brings a wealth of experience to the table. Her work in assessing the policy implications of generative AI has been instrumental in informing decision-making processes and shaping regulatory frameworks to accommodate technological advancements responsibly.

Previously, her research investigated the intricate dynamics of AI governance in public procurement in Canada. In recognition of her outstanding contributions to the field, Meghan was honored as the co-winner of the prestigious 2022 CRTC Prize for Excellence in Policy Research for her work on de-identification and privacy legislation.

