Generative Artificial Intelligence technology is advancing quickly. And with expansion and advantages also come risks and precautions. A recent article published by McKinsey partners alerts that leaders who want to design their teams and processes to mitigate those risks should act from the start—not only to meet fast-evolving regulatory requirements but also to protect their business and earn consumers’ digital trust.
In the paper, the authors (Michael Chui, Roger Roberts, Tanya Rodchenko, Alex Singla, Alex Sukharevsky, Lareina Yee, and Delphine Zurkiya) put together the following points of attention:
Fairness: “Models may generate algorithmic bias due to imperfect training data or decisions made by the engineers developing the models.”
Intellectual property (IP): “Training data and model outputs can generate significant IP risks, including infringing on copyrighted, trademarked, patented, or otherwise legally protected materials. Even when using a provider’s generative AI tool, organizations will need to understand what data went into training and how it’s used in tool outputs.”
Privacy: “Privacy concerns could arise if users input information that later ends up in model outputs in a form that makes individuals identifiable. Generative AI could also be used to create and disseminate malicious content such as disinformation, deepfakes, and hate speech.”
Security: “Generative AI may be used by bad actors to accelerate the sophistication and speed of cyberattacks. It also can be manipulated to provide malicious outputs. For example, through a technique called prompt injection, a third party gives a model new instructions that trick the model into delivering an output unintended by the model producer and end user.”
Explainability: “Generative AI relies on neural networks with billions of parameters, challenging our ability to explain how any given answer is produced.”
Reliability: “Models can produce different answers to the same prompts, impeding the user’s ability to assess the accuracy and reliability of outputs.”
Organizational impact: “Generative AI may significantly affect the workforce, and the impact on specific groups and local communities could be disproportionately negative.”
Social and environmental impact: “The development and training of foundation models may lead to detrimental social and environmental consequences, including an increase in carbon emissions (for example, training one large language model can emit about 315 tons of carbon dioxide).”
With that in mind, AI can perform several functions in organizations, including classifying, editing, summarizing, answering questions, and drafting new content. Here are some examples:
—
Source: McKinsey