
Kaspersky Warns on AI Risks for Businesses in Morocco

Kaspersky warns on AI risks for businesses in Morocco: what CEOs and CHROs must do this week to govern AI use in their organizations.

Naïm Bentaleb

AI Strategy & Governance Advisor

Kaspersky Warns on AI Risks for Businesses in Morocco: What Leaders Need to Know

Kaspersky has just published a study on AI-related risks in Moroccan businesses. This is not another alert to file away.

It is a signal that the issue is now documented, named, and taken seriously by a leading player in cybersecurity.

What the Study Actually Says

Kaspersky identifies three concrete risk categories for companies deploying AI without a structured framework: security risks (sensitive data leaks through unsanctioned AI tools), ethical risks (bias in automated decisions), and compliance risks (use of tools that do not meet local data protection requirements).

What is striking is that these risks do not come from a sophisticated external attack. They come from within. An employee pasting a client contract into ChatGPT to generate a summary. A recruitment tool filtering CVs with a model trained on biased data. A management team deploying a conversational agent without checking where the data is stored.

This is what we call unmanaged AI. And in Morocco, as elsewhere, it is growing faster than internal policies.

The Real Question Nobody Asks in Board Meetings

Do you know how many AI tools are currently being used in your organization without validation from your CIO or CHRO?

Probably more than you think.

This is not a problem of bad intent. It is a problem of a policy vacuum. When there is no clear policy, teams use whatever tools are available. And available tools are often free, fast, and non-compliant. The absence of an internal policy is more dangerous than the tools themselves.

Devoteam Morocco just partnered with Inteqy to deploy a human-controlled AI approach in large enterprises. That signal is worth noting: players in the Moroccan market are beginning to offer structured responses to this problem.

I have built a 6-dimension diagnostic framework to assess your organization’s exposure to these risks. Download the Board Pack AI 2026.

What This Means Concretely for You

If you are a CEO or CHRO, the Kaspersky study gives you three practical obligations.

First: map. You need to know which AI tools are being used, by whom, on what data. Not in six months. Now.

Second: decide. Either you ban unvalidated tools (which never really works), or you create a fast approval process that allows teams to use sanctioned tools.

Third: train. Not technical training. AI literacy training: what counts as sensitive data, what is algorithmic bias, when to ask for validation. As I explained in my analysis on change management with AI, resistance does not come from tools, it comes from the absence of a framework.

What I Would Do in Your Position

I would not launch a major 18-month AI governance project. I do not have the time, and neither do you.

I would do three things this week.

A rapid audit of AI tools used across the organization, including informal usage. A clear internal memo on what is authorized and what is not, pending a formal policy. And a conversation with my CIO and CHRO about who is accountable when an AI incident occurs.

That last point is often the most revealing. If nobody can answer that question, you have your diagnosis.

Morocco is moving forward on AI, as shown by the major AI projects launched in 2026. But moving fast without guardrails means taking risks you did not choose to take.

If you want to structure your approach in weeks rather than months, request a free diagnostic.

FAQ

What is unmanaged AI in a business context?

It refers to the use of artificial intelligence tools by employees without prior validation from management, the CIO, or legal teams. These tools may process sensitive data without the company’s knowledge.

What are the main risks Kaspersky identified for Moroccan businesses?

The study highlights three categories: security risks related to data leaks, ethical risks related to algorithmic bias, and compliance risks with local data protection regulations.

Where should I start to govern AI in my organization?

Start with an audit of existing usage. Before building a policy, you need to know what is actually happening. Then issue a clear internal memo on authorizations, followed by targeted AI literacy training.

Is AI governance only for large enterprises?

No. A simple, operational framework is accessible to any organization, regardless of size. The question is not scale, it is the clarity of internal accountability.


Next Step

Ready to structure AI governance in your organization?

Start with an AI Governance Sprint – a 2-3 week diagnostic that gives you a clear action plan.