Everyone’s talking about it. Most of us are using it in one form or another. There’s a lot of noise surrounding Generative AI, but governance and policy are in short supply. Right now, businesses need guidance about what Generative AI can deliver, and how and when it should be used.
Understanding Generative AI
At its core, generative AI uses models, most prominently large language models (LLMs), trained on massive amounts of data, enabling it to create entirely new content, from text and images to music and code.
It represents the stage of artificial intelligence (AI) that we have currently reached, exemplified by popular and emerging applications such as ChatGPT and Gemini for text, DALL-E 2 for images, Amper for audio and Pictory for video.
Until recently, the best-known applications were consumer focused, but enterprise-oriented technologies are now widespread, including Microsoft Copilot, Google Gemini and the generative AI features built into Samsung's latest generation of handsets. Businesses are adopting these tools for various applications, including automating customer service, generating marketing content, and streamlining product development.
The governance gap
Despite the rapid uptake, there's a noticeable void: governance. As generative AI becomes embedded in daily operations, the need for robust policy guidance has never been more pronounced.
When we talk with our clients, it's clear that most organisations have yet to catch up, with a significant number operating without a clear policy for generative AI usage. This gap is more than a procedural oversight; it's a ticking time bomb of ethical dilemmas, privacy breaches, and biased outputs.
Why governance matters
Governance in the realm of generative AI isn't about control; it's about establishing a framework that ensures the ethical, secure, and lawful use of these technologies. Without it, businesses risk legal consequences, reputational damage, and the loss of stakeholder trust. Here are five examples:
- Bias and discrimination: Generative AI systems such as AI-powered recruitment applications have been found to perpetuate biases present in their training data. For instance, an AI system used for screening job applicants might favour certain demographics over others, leading to discrimination in hiring practices.
- Privacy concerns: With the capability to generate detailed synthetic data, generative AI poses significant privacy risks. For example, deepfake technology, which creates realistic video and audio recordings, can be used to produce misleading content to enable identity theft, fraud, and character defamation.
- Intellectual property rights: AI that can produce art, music, or written work challenges traditional notions of creativity and intellectual property. For instance, AI-generated paintings or music tracks could lead to disputes over copyright ownership, questioning whether the original creators or the AI developers hold the legal rights to the generated content.
- Social manipulation: Generative AI can be used to create realistic but false content, such as fake news articles or altered videos, which can easily spread misinformation. This use of AI in crafting and propagating deceptive content has serious implications for public opinion, democratic processes, and trust in media.
- Autonomous decisions and accountability: AI systems, particularly in areas like autonomous vehicles, make decisions that can have life-or-death consequences. The dilemma arises when these systems perform actions without direct human oversight, leading to questions about accountability and the moral responsibility for AI-driven decisions.
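To make the bias risk above concrete, here is a minimal, purely illustrative sketch of one common audit: comparing selection rates across candidate groups using the "four-fifths rule" heuristic. The function names and data are hypothetical, not taken from any real screening system; a real audit would use far richer data and statistical testing.

```python
# Hypothetical sketch: a simple disparate-impact check for an
# AI-assisted screening process, using the four-fifths rule.
# All names and data here are illustrative.

def selection_rate(outcomes):
    """Fraction of candidates marked as selected (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 is a common red flag (the four-fifths rule)."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Illustrative screening outcomes (True = passed AI screening)
group_a = [True, True, True, False, True]    # 80% selected
group_b = [True, False, False, False, True]  # 40% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: screening may disadvantage one group")
```

A check like this is one small input to the broader risk assessment described below; it flags a disparity but does not by itself establish its cause.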
Creating a Generative AI Policy
An effective AI governance policy should encompass ethical guidelines, data security measures, adherence to legal standards, and mechanisms for accountability. The process involves:
- Conducting a risk assessment to identify potential ethical and legal vulnerabilities.
- Developing a clear policy framework that outlines permissible uses of AI.
- Training employees on the importance of ethical AI use and compliance.
The advent of generative AI has opened a Pandora's box of possibilities and challenges. As we navigate this uncharted territory, it's clear that businesses must not only embrace the transformative power of AI but also champion the cause of its responsible use.
We can support you in developing AI capabilities, skills, and knowledge across your organisation.
If you'd like to chat about what's involved, get in touch. Email Colin at colin@cognitiveunion.com, or contact our team today to learn more.