From the course: Building a Responsible AI Program: Context, Culture, Content, and Commitment
Connector: Start with context
- Conversations about responsible AI tend to dive straight into topics like algorithmic bias, data privacy, machine learning models, or the risks of ChatGPT. Those topics certainly need to be covered, but in my opinion, it's a mistake to start there when you're building a responsible AI program. You should be aware of these issues, but planning your program around the technology and the harms it can cause is too narrow a starting point. Instead, start with the bigger picture by looking at context. That can be challenging, because a specific incident may be the very catalyst for addressing responsible AI in the first place. Maybe your company had an incident involving AI, the kind that produces ugly headlines or even a lawsuit, and senior management wants a solution now. It's important to put out the fire, but don't let that one incident define your whole program. Hopefully you didn't have an ugly incident, and instead you're interested in responsible AI because you're being proactive. In either case, starting with a broader scan of the context will give you a more holistic perspective and help you decide what, specifically, your organization should focus on in its responsible AI program.