The Intersection of AI, Customer Data, and Copyright in the Middle East: A Legal Perspective for Marketers
Technology
Digital Marketing
Marketing
16 October 2024

Here is the second article in my series, where I share insights from my conversations with senior marketing leaders and other stakeholders across the Middle East. We explore how they are incorporating AI into their marketing strategies and discuss their perspectives on its impact within their respective countries. The series was inspired by the podcast Decoding AI for Marketing by MMA Global, which explores how artificial intelligence could radically change the world of marketing.

In this article, we explore the intersection of AI in marketing, data privacy, and regulatory compliance – of course at a very high level. My guest, Laura Reynaud Esq., Chair of the Legal Committee at MMA MENA, brings a wealth of knowledge to the table and shares her insights with our readers.

AI in the Middle East: Opportunities and Hesitations
The discussion begins with a broad look at AI adoption in marketing in the Middle East. With its potential to transform customer experience, AI offers exciting possibilities. But there’s a catch—companies are reluctant to dive in, particularly when it comes to generative AI and content creation.

Nurcan: "In the Middle East, companies are eager to embrace AI technologies. The potential for hyper-personalized marketing is undeniable. However, there’s still some reluctance, particularly when it comes to generative AI for content creation. Regulations, especially around customer data, might not always be crystal clear, and businesses fear the potential repercussions of non-compliance."
Laura: "That makes complete sense. While there’s tremendous potential in AI, especially for creating more tailored marketing strategies, the regulatory landscape needs to be understood before jumping in. Right now, the laws in the Middle East are still evolving, especially when it comes to data privacy and AI usage. For instance, in the UAE, while there’s a federal privacy law, we don’t yet have detailed regulations on how to enforce it. In Saudi Arabia, however, the Personal Data Protection Law is now in effect."

Understanding the Legal Landscape of Data Usage
Moving beyond the hesitation, Laura dives deeper into the legal intricacies surrounding the use of personal data in AI-driven marketing, specifically focusing on the requirements for consent.

Nurcan: "Our marketers are increasingly using AI to analyze customer behavior. We know where they eat, where they shop, and can even predict whether they have kids based on their purchasing behavior. This level of insight is game-changing. But how do we ensure we’re using this data legally?"
Laura: "Consent is important in this context. You must obtain explicit and meaningful consent from the data subjects to use their personal data. This consent is also necessary before sending any advertising or awareness materials. Recipients (data subjects) need to clearly understand what they are consenting to, and there should be an easy way for them to opt-out of receiving your materials. Consent must also be documented for future verification.
If a recipient requests to stop receiving materials, you must comply immediately and at no cost to them. Additionally, if consent is withdrawn, you must delete their personal data. It’s important to collect only the minimum amount of data necessary and allow data subjects the opportunity to amend or destroy their data.
There are stricter rules around sensitive personal data and credit data. There are some restrictions around the transfer of data (especially sensitive data) outside of the country. You must be cognizant of such restrictions.
In Europe, the GDPR sets very clear consent requirements. However, in the Middle East, especially in the UAE, similar laws exist but enforcement may not be at the same level as in Europe. This enforcement gap allows practices such as unsolicited WhatsApp marketing, which would not be permitted in Europe, to continue."
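To make this more concrete for marketing and engineering teams, here is a minimal Python sketch of how consent might be documented for later verification, checked before any material is sent, and honoured on withdrawal by deleting the subject’s data. The class and field names are my own illustrative assumptions; this is a sketch, not a compliance implementation or legal advice.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    subject_id: str                  # pseudonymous customer identifier
    purposes: list[str]              # e.g. ["email_marketing", "profiling"]
    granted_at: datetime             # when explicit consent was captured
    notice_version: str              # which privacy notice the subject saw
    withdrawn_at: datetime | None = None


class ConsentStore:
    """Keeps consent documented so it can be produced for future verification."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, record: ConsentRecord) -> None:
        self._records[record.subject_id] = record

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        # Check before sending any advertising or awareness material.
        rec = self._records.get(subject_id)
        return bool(rec and rec.withdrawn_at is None and purpose in rec.purposes)

    def withdraw(self, subject_id: str, delete_personal_data) -> None:
        # On withdrawal: stop immediately, at no cost to the subject, and delete the data.
        rec = self._records.get(subject_id)
        if rec is not None:
            rec.withdrawn_at = datetime.now(timezone.utc)
            delete_personal_data(subject_id)  # caller-supplied deletion routine
```

The point of the caller-supplied deletion hook is that withdrawal must trigger actual deletion of the personal data, not just the flipping of a flag.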

Balancing Innovation and Regulation
As the conversation progresses, I delve into the challenge of balancing regulatory compliance with the need to innovate, especially when AI can offer so much to marketers.

Nurcan: "We don’t want to hold back innovation, but we also don’t want to overstep the boundaries of the law. How can we strike the right balance between leveraging AI’s capabilities and ensuring that we’re compliant with data privacy regulations?"
Laura: "The key is transparency and responsibility. For instance, AI-generated profiles are incredibly useful for marketing campaigns. You might build a digital twin of a customer to test how they would respond to different products. But even with this advanced capability, you still need to be transparent with your users about how their data is being used (and obtain all the necessary consents). More importantly, if AI makes a mistake, who is responsible? Companies need to ensure that there’s always a human being accountable for AI-driven decisions. A human being needs to be able to explain how an AI solution is making a decision. You have to ensure that AI doesn’t end up becoming a black box. This is where AI governance comes into play. Your company must have clear AI governance measures in place before you start using AI."

Defining Meaningful Consent in the Middle East
Laura elaborates on the complexity of obtaining meaningful consent, a critical aspect of data-driven AI marketing.

Nurcan: "Our audience may not always understand what they are consenting to. How can we ensure that the consent we’re getting from them is meaningful?"
Laura: "This is a big challenge. Consent must be provided with sufficient information for the individual to make an informed decision. They need to understand not only that you will use their data, but specifically which data, how you will use it, process it, where it will be transferred, how it will be stored, and how it will be maintained and destroyed. Without this understanding, the consent is not meaningful. Additionally, consent cannot be permanent; users must have the ability to withdraw it at any time. You must have proper mechanisms in place to destroy the data when consent is withdrawn."
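As a rough illustration of what "sufficient information" could cover, the sketch below lists the elements a consent request might spell out so the resulting consent is informed. The field names are assumptions I am using for illustration, not a statutory checklist.

```python
from dataclasses import dataclass


@dataclass
class ConsentNotice:
    data_categories: list[str]        # exactly which data will be collected
    purposes: list[str]               # how the data will be used and processed
    transfer_destinations: list[str]  # where the data may be transferred
    storage_description: str          # how and where it will be stored
    retention_and_destruction: str    # how long it is kept and how it is destroyed
    withdrawal_method: str            # how consent can be withdrawn at any time
```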

Copyright and AI-Generated Content
The conversation turns to a critical issue in AI marketing—copyright infringement when using generative AI for content creation.

Nurcan: "Another challenge we’ve faced is the copyright issues surrounding AI-generated content. For example, if we use a generative AI tool to create marketing copy, who owns the copyright to that text?"
Laura: "That’s a tricky area. According to the U.S. Copyright Office, only humans can create copyrighted works, meaning AI-generated content can’t be copyrighted. But that’s not the only issue. Many generative AI tools, like ChatGPT, are trained on vast amounts of data—much of which is copyrighted. If the AI uses copyrighted material without permission, it’s a copyright infringement."
There are currently several lawsuits against companies like OpenAI for copyright infringement. For example, the New York Times has initiated legal action against OpenAI and Microsoft, claiming that their AI technology unlawfully copied millions of Times articles to train ChatGPT and related services, and that this technology now competes with the Times by providing instant access to information drawn from its reporting.

The Twin Dilemma: Data Privacy and Digital Twins
I introduced the concept of digital twins in marketing—virtual representations of customers used to test marketing strategies—while exploring ethical considerations.

Nurcan: "We’re seeing more potential in using digital twins for marketing. Instead of targeting me directly, companies could target my digital twin and run simulations. But is this legally and ethically sound?"
Laura: "It’s an interesting concept, but it comes with significant ethical and legal challenges. Using a digital twin involves processing a substantial amount of personal data, which must be accurate and consented to by the individuals involved. Transparency is important; customers need to be aware of the existence of a digital twin and how it is being used. Additionally, you should only collect and use the minimum amount of data necessary, which can be difficult and costly to maintain. Consider whether your organization can manage this responsibly: Do you have the capability to collect only the necessary data with proper consent? Is there a data privacy officer available for data subjects to contact for corrections or deletions? Are you aware of all the data you hold? How are you protecting this data from threats and cyberattacks? Do you have all necessary safeguards in place? If your organization faces a cyberattack, do you have a crisis management plan? These are all critical considerations."

Cross-Border Data Challenges
I raised a specific concern about cross-border data sharing, a common issue for multinational corporations using AI. Many of us often collaborate with colleagues in other countries. If we share data with them—perhaps by simply showing them a screen during a video call—is that considered a data transfer, and is it legal?
Laura says that this should indeed be considered a transfer and is subject to the same rules as physically sending a file. For instance, in Saudi Arabia, personal data cannot be transferred outside the country unless it is for certain legitimate reasons, such as fulfilling agreements involving the Kingdom, centralized operations, contract performance (where the data subject is a contractual party), vital interests, and scientific research, among others. Although the law does not clearly define the term "transfer," the new Standard Contractual Clauses (SCCs) define it as "access." Even something as simple as screen sharing during a meeting may count as a transfer, so we need to be cautious. Additionally, consent alone is not sufficient to transfer data outside of Saudi Arabia.
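A simple way to picture this is a pre-transfer checklist that requires an enumerated basis and deliberately refuses to accept consent on its own. The sketch below does exactly that; the list of bases is paraphrased from our conversation and is illustrative only, not a definitive legal enumeration or legal advice.

```python
# Illustrative bases only, paraphrased from the discussion above.
PERMITTED_TRANSFER_BASES = {
    "agreement_involving_the_kingdom",
    "centralized_operations",
    "contract_performance_with_data_subject",
    "vital_interests",
    "scientific_research",
}


def may_transfer_outside_ksa(basis: str) -> bool:
    """Allow a transfer only when it relies on an enumerated basis.

    Remote access (e.g. screen sharing during a call) counts as a transfer too,
    so it goes through the same check. Consent alone is not an accepted basis,
    so a basis like "consent_only" simply is not in the permitted set.
    """
    return basis in PERMITTED_TRANSFER_BASES
```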

The Problem with Opt-Out Consent
The conversation shifts to a fundamental difference in how data privacy is handled in different regions, particularly around opt-in versus opt-out consent.
Nurcan: "Many companies in the Middle East still operate on an opt-out basis. Isn’t this problematic compared to the opt-in approach required by GDPR?"
Laura: "Opt-out is not considered valid consent in most data privacy frameworks, including the GDPR. The user must actively choose to give consent (and, again, that consent needs to be meaningful), not passively allow it by failing to opt out. Never assume that something is legally sound just because it is common practice."
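In practice the difference is small but decisive: preferences start switched off and are turned on only by an explicit, affirmative action, never pre-ticked or inferred from silence. A minimal sketch, with illustrative channel names:

```python
from dataclasses import dataclass


@dataclass
class MarketingPreferences:
    # Opt-in: every channel starts off and stays off until the user actively agrees.
    email_marketing: bool = False
    sms_marketing: bool = False
    whatsapp_marketing: bool = False


def record_opt_in(prefs: MarketingPreferences, channel: str) -> None:
    """Enable a channel only in response to an explicit, documented user action."""
    if not hasattr(prefs, channel):
        raise ValueError(f"Unknown channel: {channel}")
    setattr(prefs, channel, True)
```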

The Ethical Responsibility of AI in Marketing
The discussion takes a turn toward the ethical responsibilities that come with using AI in marketing, particularly when technology starts to make decisions autonomously.

Nurcan: "As AI becomes more autonomous in its decision-making, what are the ethical implications for marketers?"
Laura: "The ethical implications are huge. AI must be explainable: marketers need to understand and be able to explain how and why AI makes certain decisions. Accountability is also important; if an AI system makes a mistake, a human must be held responsible. You can’t hide behind technology.
Additionally, it’s important to use a diverse pool of data and ensure that the teams responsible for AI development, training, testing, and monitoring have diverse perspectives. A lack of diversity can lead to biased inputs and outcomes.
Organizations should label products and models as AI-based, both internally and externally. Consumers should be notified when they interact with AI or receive outputs or decisions generated by it. Privacy notices must disclose how personal information is used to develop and train AI. If personal information is used for automated profiling, organizations must obtain consent in accordance with applicable privacy regulations (e.g., the GDPR, the California Consumer Privacy Act, and other omnibus U.S. state privacy laws). Consumers should also be able to access and delete their personal information used to develop and train AI models, in compliance with applicable laws.
Finally, while developing and training AI models requires a lot of data, it’s important to minimize the amount of personal data used. Cyber intrusions, including the exfiltration of confidential information or the poisoning of AI models, must be mitigated through robust AI development practices."

This reminded me of the Air Canada case. In 2022, Air Canada’s chatbot promised a traveller a discount that was not actually available, and the airline later argued that the chatbot was "responsible for its own actions." In the end, the airline was held responsible for its chatbot giving the passenger bad advice. As the tribunal put it: "It makes no difference whether the information comes from a static page or a chatbot."
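To make the accountability point concrete, here is a minimal sketch of the kind of record an AI governance process might keep for each automated marketing decision: which model produced it, a plain-language explanation, the named human owner, and whether the consumer was told AI was involved. The structure and field names are my own assumptions, not an established standard.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class AIDecisionRecord:
    decision_id: str
    model_name: str          # the product/model, labelled internally and externally as AI-based
    input_summary: str       # what data the decision was based on
    explanation: str         # human-readable reasoning, so the model is not a black box
    accountable_owner: str   # the named person answerable for this decision
    consumer_notified: bool  # was the consumer told AI was involved?
    timestamp: datetime


def log_decision(record: AIDecisionRecord, audit_log: list) -> None:
    """Append the record; refuse to log a decision nobody is accountable for."""
    if not record.accountable_owner:
        raise ValueError("Every AI-driven decision needs a named human owner.")
    audit_log.append(record)
```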

Preparing for the Future of AI and Data Privacy
For our closing remarks, I asked Laura what companies should be doing now to prepare for future changes in AI regulations.
She listed transparency first. Companies need to be clear about how they’re using AI and customer data. Second, they should build flexibility into their systems to adapt to new regulations as they emerge and establish appropriate AI governance frameworks. Lastly, companies should prioritize ethical AI practices—doing the right thing, even when not explicitly required by law.
To build an AI governance framework, companies can use external frameworks such as those from NIST and the OECD as a foundation. They should also familiarize themselves with the EU AI Act, given its extraterritorial reach and its potential to serve as a blueprint for future AI regulations in our region, much as the GDPR inspired new privacy laws around the world.

The delicate balance between innovation and regulation in AI marketing will continue to need careful consideration. As companies push the boundaries of what’s possible, they must remain vigilant about the legal and ethical implications of their work. In this segment, I aimed to summarize the topic with my expert guest, Laura. However, as mentioned earlier, this is a high-level discussion, and we strongly encourage companies to consult their legal departments for a thorough evaluation and take appropriate actions based on their guidance.

Nurcan Bicakci Arcan
MMA MENA Board Member & Chair of AI & Data Committee
