From the course: Building a Responsible AI Program: Context, Culture, Content, and Commitment

Documenting AI

- Several years ago, I joined a small startup in the HR recruitment space. We were fewer than 10 people, but we had a lot of software. On my second day on the job, I was given the technological keys to the castle. In essence, I became the de facto IT department. In addition to our social media accounts, website, suite of email and office tools, and our financial system, we also had an applicant tracking system to manage all the prospective job applicants for open roles, a learning management system where we offered courses, a CRM for customer management, and of course we had LinkedIn for recruitment. There were also technologies that tied these disparate systems together to provide some semblance of workflow. My point in sharing all of this is that it's not unusual for even a very small organization to have a lot of digital technologies, and increasingly, all of these digital tools will include some level of artificial intelligence. In addition, especially if you are a larger organization, you might have a data science team, and you may have built some AI tools specifically for your organization.

Your first challenge is documenting all of the digital tools that reside in your organization. Your IT department will likely have a list of the official tools, and that list is a good starting point. Your second challenge is mapping the instances of AI within these tool sets so that you can perform accurate risk assessments. However, before you can get to step two, you have to answer a question that might seem obvious: what is AI? How will you know whether a tool involves the use of AI if you don't have a working definition?

The term AI is problematic. It's an umbrella term that covers many things, from facial recognition technologies to spam filters for email to recommendation systems, to name just a few applications. In choosing a definition, you'll likely want to consult the definitions that have legal relevance for your jurisdiction as a starting point. You can go back to your legal scan to review those definitions. However, you may also wish to augment or tweak them with any additional considerations that are relevant to your business focus and priorities. The OECD, a well-respected international organization, currently defines AI as follows: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment." I like this definition, but you don't have to adopt it. Just make sure that however you define AI, your definition meets any relevant legal obligations or regulations.

In addition to reaching internal agreement on a definition of AI, you should also know that some things that are, by definition, AI might not pose significant risks for your organization. That might be surprising to hear, even counterintuitive to everything we've said so far. So let's take one example: spelling and grammar tools, and autocomplete, the feature that suggests the next word in a sentence. These tools are woven into most document creation systems. They involve AI, but do they create an ethical risk that you need to worry a lot about?
There might be interesting, big-picture philosophical takes on autocomplete's impact on language overall, but for the average business, this isn't really an AI ethical risk. Yet that same probabilistic language-prediction capability also powers generative AI, and generative AI is a technology that can create a lot of risk in your business if it is not adequately managed. Consider this an example of how we will use this list when it comes to assessing ethical risks.
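To make that concrete, here is a minimal sketch, in Python, of what an entry in such an AI inventory might look like. The field names, example tools, and risk labels are illustrative assumptions, not part of the course; adapt them to the definition of AI you adopt and to your own risk-assessment criteria.

# A hypothetical AI inventory record; field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    tool_name: str              # the digital tool, e.g., an applicant tracking system
    vendor: str                 # internal team or third-party supplier
    ai_capabilities: list[str]  # e.g., ["autocomplete", "candidate ranking"]
    meets_ai_definition: bool   # judged against your adopted definition (e.g., OECD)
    risk_notes: str = ""        # why it does or does not warrant a full risk assessment

inventory = [
    AIInventoryEntry("Document editor", "Third-party", ["spell check", "autocomplete"],
                     True, "Low concern: routine language suggestions"),
    AIInventoryEntry("Applicant tracking system", "Third-party", ["candidate ranking"],
                     True, "Higher concern: influences hiring decisions"),
]

# Flag the entries that should proceed to a full ethical risk assessment.
for entry in inventory:
    if entry.meets_ai_definition and "Higher concern" in entry.risk_notes:
        print(f"Assess: {entry.tool_name}")

However you record it, the point is the same: the inventory is the list you will return to when you assess ethical risks.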
