AI Labeling Requirement

AI “Face Swap” Marketing Campaign Sparks Negative Public Sentiment; Game Marketing Must Prioritize Content Review

AI“换脸”宣发引负面舆情,游戏营销重点关注素材审核

January 8, 2026

Summary

The First Descendant sparked backlash after AI-generated deepfake videos of influencers were used in TikTok ads without their consent. Original creator videos were edited and altered by AI to make them appear to promote the game, which the creators later denied. NEXON said the ads were part of a Season 3 creator campaign and is investigating the production process with TikTok. The case highlights the legal and ethical risks of generative AI in marketing. South Korea’s AI Basic Act now requires labeling of deepfakes and AI-generated content, with significant fines for violations. Combined with earlier scrutiny of Netmarble over suspected AI art use, the article argues that misuse of AI creates lasting “reputational debt.” To manage risk, game companies should secure explicit consent, apply clear AI labels, adopt internal ethics frameworks, and conduct strict due diligence on third-party partners.

The First Descendant has drawn intense criticism for TikTok ads that used AI-generated versions of real influencers to promote the game, deepfaking their original video content without the hosts' knowledge or consent.

While the TikTok ads appear to show genuine streamers promoting The First Descendant, closer inspection reveals unnatural-sounding voices, lip-syncing that fails to match the scripted dialogue, and peculiar head movements. Take the deepfake video of horror game streamer DanieltheDemon, in which he appears to discuss playing and endorsing the game. DanieltheDemon later clarified that he has no affiliation with the game or the advertisement. His most popular TikTok video, which has garnered 8.3 million views, shows him playing the indie horror game The Guest. The fake advertisement appears to have extracted a clip from that video, mirror-flipped it, used AI to alter his lip movements and speech, and then spliced in footage from an entirely different game to make it seem he was promoting The First Descendant.

Following the public outcry, NEXON, the developer of The First Descendant, promptly issued a statement explaining that the advertisements were part of a marketing campaign for the game's third season: a creative challenge program inviting TikTok creators to voluntarily submit their content for use as advertising material. All submitted videos are verified through TikTok's system for copyright infringement before being approved as advertising content. The production process of the controversial ad, however, appears to have involved improper practices, and NEXON is now conducting a thorough joint investigation with TikTok to establish the facts.

Clearly, as AI technology continues to advance, copying or blending artistic styles is no longer a rare capability, and deepfakes and generative media have made similar strides: anyone can now use off-the-shelf AI tools to turn static works into dynamic, continuous footage tailored to their needs. The root cause of NEXON's controversial advertisement is precisely this misuse of increasingly capable AI technology.

To address challenges posed by generative AI, South Korea's “Basic Act on Artificial Intelligence” was enacted in December 2024. It mandates transparency obligations for AI service providers, including requiring operators of generative AI services to inform users when content is AI-generated and to apply labels or watermarks to deepfake content that may be mistaken for reality. Non-compliance may incur administrative fines of up to 30 million KRW.

NEXON's controversy is not an isolated case. Fellow South Korean gaming giant Netmarble, for instance, faced heightened player skepticism when launching a new title based on the globally popular IP Solo Leveling. After the game's release, numerous players posted on social platforms like Reddit, pointing out that many in-game art assets, including character portraits and cutscene illustrations, showed clear signs of AI generation: inconsistent detail handling, blurred texture blending, and illogical linework.

Although Netmarble has not issued an official response, the wave of skepticism itself reflects a deeper issue: a single confirmed instance of improper AI use saddles a company with long-term "reputational debt." Player communities then view all of the company's subsequent releases through a biased lens, proactively conducting "AI audits." This community-driven, ongoing scrutiny means that even companies using AI tools within regulatory boundaries face significant reputational risk. In the AI era, winning legal battles matters, but maintaining community trust may prove the more arduous and enduring challenge.

Embracing AI is clearly an unavoidable step toward improving R&D efficiency and supporting creative work. To mitigate legal risk, however, gaming industry participants are advised to adopt the following strategic measures, ensuring reasonable use of AIGC technology while protecting themselves from negative public sentiment and legal exposure:

1. Establish “Explicit Consent” as the Foundation: Before using any individual's likeness, voice, or other identifying characteristics to train or generate AI content, obtain their explicit, informed, and comprehensive authorization. Standard model release agreements may be insufficient for AI applications. Contracts should explicitly mention terms like “artificial intelligence,” “deepfakes,” and “synthetic generation,” clearly defining the scope and duration of use.

2. Proactive Transparency and Mandatory Labeling: Implement clear, prominent watermarking or labeling systems across all marketing materials and game content utilizing AIGC technology. This not only complies with global legal standards but also effectively mitigates accusations of “deception,” fostering user trust.

3. Establish and Enforce a Robust Internal AI Ethics Framework: Companies should develop explicit internal AI ethics guidelines covering data source legitimacy, algorithmic fairness, application transparency, and respect for creator rights. This framework not only supplements legal compliance but also serves as critical evidence demonstrating the company's “duty of care” in the event of disputes.

4. Strengthen Due Diligence for Third-Party Suppliers: For activities involving third parties or user-generated content, such as NEXON's "Creative Challenge," reliance on platform-level reviews alone is insufficient. Companies must implement rigorous due diligence processes to ensure all marketing partners and content-submitting users adhere to corporate standards regarding licensing, intellectual property, and ethics.
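Recommendations 1 and 2 above can be operationalized as a pre-publication gate that blocks any marketing asset lacking an AI label or the required consent. The sketch below is a minimal, hypothetical illustration in Python: the `MarketingAsset` schema, its field names, and the two rules encoded in `compliance_issues` are assumptions for this example, not an actual NEXON, TikTok, or statutory workflow.

```python
from dataclasses import dataclass

@dataclass
class MarketingAsset:
    """One piece of campaign material (hypothetical schema for illustration)."""
    asset_id: str
    uses_aigc: bool                 # any generative-AI content involved?
    depicts_real_person: bool       # likeness/voice of an identifiable person?
    consent_on_file: bool = False   # explicit AI-specific consent obtained?
    ai_label_applied: bool = False  # visible "AI-generated" label/watermark?

def compliance_issues(asset: MarketingAsset) -> list[str]:
    """Return the policy violations for one asset.

    Encodes two of the article's recommendations: mandatory labeling of
    AI-generated content, and explicit consent before using a real
    person's likeness or voice in AI-generated material.
    """
    issues = []
    if asset.uses_aigc and not asset.ai_label_applied:
        issues.append("missing AI-generated label/watermark")
    if asset.uses_aigc and asset.depicts_real_person and not asset.consent_on_file:
        issues.append("no explicit AI-specific consent for depicted person")
    return issues

# Usage: audit a campaign batch before anything is published.
campaign = [
    MarketingAsset("ad-001", uses_aigc=True, depicts_real_person=True),
    MarketingAsset("ad-002", uses_aigc=True, depicts_real_person=False,
                   ai_label_applied=True),
    MarketingAsset("ad-003", uses_aigc=False, depicts_real_person=True),
]
report = {a.asset_id: compliance_issues(a) for a in campaign}
for asset_id, issues in report.items():
    print(f"{asset_id}: {'OK' if not issues else '; '.join(issues)}")
```

In practice such a gate would sit in the creative-approval pipeline, and a non-empty issue list would block submission rather than merely print a report.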

