Date: March 27, 2024
Category: AI
Reading Time: 5 minutes

When AI goes wrong: 3 ways to mitigate your risk

What do you do when you find an AI use case that’s promising — but also risky?

The Dangers of AI — How to Spot Them, Assess Them, and Avoid Them 

Quick recap: In our last email, we talked about the AI Alignment Matrix: a 2x2 framework for facilitating strategic AI discussions within your company.

It can also be used to evaluate use cases based on their potential risk. Regardless of the model, universal generative AI risks include:

  • Hallucinations: Occasionally, GenAI outputs are highly convincing yet entirely false.
  • Bias and Discrimination: AI systems reflect biases present in their training data, which can lead to discriminatory outputs.
  • Transparency and Ownership: AI can raise questions about content authenticity and copyright ownership. 
  • Dependency: Over-reliance on AI for writing, decision-making, and creative work can erode individual skills and leave organizational gaps in those areas.

In addition, companies should consider risks that are unique to them, like:

  • Industry regulation: Regulatory requirements that might complicate or prohibit AI’s use.  
  • Input risks: Risks inherent in the data required to perform the task, like personally identifiable information (PII) or trade secrets. 
  • Output risks: The potential damage (financial, reputational, legal, etc.) if the AI system fails to perform as expected. In other words, if the output is offensive, biased, or just plain incorrect. 
  • Investment: The cost of offloading the task to AI, which often correlates with how complex or essential the task is. Expenses might include data hygiene and preparation, prompt engineering, training and upskilling your team, the human oversight the use case requires, and so on.

So if you identify a use case with high value but also high risk, first ask: Is it possible to lower the risk? 


You’re reading part 4 of a 4-part series exploring frameworks and strategies for AI use cases.

3 Ways to Mitigate AI Risk 

There are three ways to think about risk mitigation when you’re implementing AI. Can you make it smaller, smarter, or safer?

1. Make it smaller

Reduce the scope to an even smaller use case. A task might be too risky to offload fully to AI, but that doesn’t mean efficiencies can’t be gained.

This is another case where “microproductivity” — breaking apart tasks into smaller components — is useful. If the task is too risky to automate, can a portion of it be automated? 

  • Instead of having AI draft complete legal documents, could it pre-fill standard clauses?
  • Instead of generating an entire client proposal, can you generate a template with placeholders for client details? 
  • Instead of an AI chatbot handling customer interactions, could an AI-powered knowledge base make customer service reps more efficient?

2. Make it smarter 

Increase human involvement in the final product. Some tasks (particularly unstructured ones) require more experience, expertise, or even instinct to navigate unexpected obstacles or complex decision points.

This is where Microsoft’s critical integration sandwich can be helpful: Don’t fully delegate the task to AI; instead, use AI as a productivity booster — a way to accelerate the process while a human remains deeply involved throughout the task.

It’s also why we recommend turning prompts into prompt SOPs: to insert necessary standards and checkpoints in your production process.

3. Make it safer 

Add more guardrails to the process, whether that’s masking sensitive data before it reaches the model or using an on-premise large language model (LLM), running the model on your organization’s own hardware and infrastructure to increase security, privacy, and control. A simple masking step might look like the sketch below.
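For illustration, here’s a minimal data-masking sketch in Python. Everything in it is hypothetical: the regex patterns, the placeholder labels, and the mask_pii helper are stand-ins, and a production pipeline would rely on a dedicated PII-detection tool rather than hand-rolled patterns.

    import re

    # Hypothetical sketch: mask obvious PII before a prompt leaves your network.
    # A real pipeline would use a dedicated PII-detection tool, not two regexes.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

    def mask_pii(text: str) -> str:
        """Replace email addresses and US-style phone numbers with placeholders."""
        text = EMAIL.sub("[EMAIL]", text)
        return PHONE.sub("[PHONE]", text)

    prompt = "Summarize this note from jane.doe@example.com, phone 555-867-5309."
    print(mask_pii(prompt))  # Summarize this note from [EMAIL], phone [PHONE].

The same idea scales up: scrub or tokenize anything sensitive on the way in, and the model never sees data it shouldn’t.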

A Question to Consider:

A challenge for you: What is the *smallest* use of AI that could still bring massive efficiency gains?  


Get a free template to organize your company's generative AI prompt library.

Not having a well-organized prompt library could lead to:

  • Lost knowledge. If you can't find it, you can't use it. And if you can't use it, you can't replicate it.
  • Redundancies. Multiple copies of the same or similar prompts create inefficiency and undermine version control.
  • Information silos. Without a shared repository of prompts and versions, improvements stay with individuals instead of making everyone's work better.
  • Compliance breaches. Unmanaged prompts can expose confidential information or produce harmful or biased work.

Use our free prompt library template to get started.

Let's start building your brand's unique story together.

Want to see what a difference a strong brand can make? Request a consultation today.
Get in Touch