Generative Artificial Intelligence technology is advancing quickly. And with expansion and advantages also come risks and precautions. A recent article published by McKinsey partners alerts that leaders who want to design their teams and processes to mitigate those risks should act from the start—not only to meet fast-evolving regulatory requirements but also to protect their business and earn consumers’ digital trust.
Before getting started
In the paper, the authors (Michael Chui, Roger Roberts, Tanya Rodchenko, Alex Singla, Alex Sukharevsky, Lareina Yee, and Delphine Zurkiya) put together the following points of attention:
Fairness: “Models may generate algorithmic bias due to imperfect training data or decisions made by the engineers developing the models.”
Intellectual property (IP): “Training data and model outputs can generate significant IP risks, including infringing on copyrighted, trademarked, patented, or otherwise legally protected materials. Even when using a provider’s generative AI tool, organizations will need to understand what data went into training and how it’s used in tool outputs.”
Privacy: “Privacy concerns could arise if users input information that later ends up in model outputs in a form that makes individuals identifiable. Generative AI could also be used to create and disseminate malicious content such as disinformation, deepfakes, and hate speech.”
Security: “Generative AI may be used by bad actors to accelerate the sophistication and speed of cyberattacks. It also can be manipulated to provide malicious outputs. For example, through a technique called prompt injection, a third party gives a model new instructions that trick the model into delivering an output unintended by the model producer and end user.”
Explainability: “Generative AI relies on neural networks with billions of parameters, challenging our ability to explain how any given answer is produced.”
Reliability: “Models can produce different answers to the same prompts, impeding the user’s ability to assess the accuracy and reliability of outputs.”
Organizational impact: “Generative AI may significantly affect the workforce, and the impact on specific groups and local communities could be disproportionately negative.”
Social and environmental impact: “The development and training of foundation models may lead to detrimental social and environmental consequences, including an increase in carbon emissions (for example, training one large language model can emit about 315 tons of carbon dioxide).”
With that in mind, AI can perform several functions in organizations, including classifying, editing, summarizing, answering questions, and drafting new content. Here are some examples:
- A fraud-detection analyst can input transaction descriptions and customer documents into a generative AI tool and ask it to identify fraudulent transactions.
- A customer-care manager can use generative AI to categorize audio files of customer calls based on caller satisfaction levels.
- A copywriter can use generative AI to correct grammar and convert an article to match a client’s brand voice.
- A graphic designer can remove an outdated logo from an image.
- A production assistant can create a highlight video based on hours of event footage.
- A business analyst can create a Venn diagram that summarizes key points from an executive’s presentation.
- Employees of a manufacturing company can ask a generative AI–based “virtual expert” technical questions about operating procedures.
- A consumer can ask a chatbot questions about how to assemble a new piece of furniture.
- A software developer can prompt generative AI to create entire lines of code or suggest ways to complete partial lines of existing code.
- A marketing manager can use generative AI to draft various versions of campaign messaging.