
Definition and Language Choices in AI Policy

Introduction


Since the onset of rapid technological development, it has been difficult for governments and policymakers to keep up with understanding and creating accurate legal definitions for terms that may change from month to month. These definitions may also vary across regions based on the real-world use of the technology itself. The language used to craft legislation plays a crucial role in shaping regulations and defining legal boundaries: it communicates goals and intentions, and thus influences how the law is interpreted and implemented. This is especially true in the era of online communication, machine learning, and artificial intelligence in its various forms.


Both China and the EU have created AI legislation within the past year that makes very deliberate language choices. The decisions about what to write into law and which terms to define explicitly within those regulations are worthy of analysis, as they imply certain intentions. This article explores the terminology used within recent regulations, comparing human-centred and tech-centred language choices to understand state priorities in AI regulatory policy.


Defining AI


Artificial Intelligence is notoriously hard to define, in both international and domestic settings. There is no standard legal or technological definition for the term, and there likely will not be one for the foreseeable future. This is due to the many different functions that artificial intelligence can carry out, from simplistic calculations to video generation (Maccario, 2023; O'Shaughnessy, 2022). Industry applications of AI vary so widely that each company could have a different definition based on its usage of AI, and this variance is reflected in policy as well. To narrow the topic to a manageable size, this discussion focuses on the definition and language choices surrounding AI in EU and Chinese government policy.


In practice, deriving policy definitions of AI from industry terms has had limited success. Industry-standard terms, from “machine learning” to “statistical models,” are grouped under the umbrella of AI in policy documentation (O'Shaughnessy, 2022). Realistically, this is a wide swathe of ground to cover, and it needs special consideration in order to avoid leaving large regulatory gaps. Because of this difficulty, policymakers and researchers working on this issue have defined AI in terms broad enough to encompass what is necessary, but specific enough to be clear. Many definitions in policy can be divided into two types: human-focused and technology-focused. The former defines AI in terms of its human-like intelligence capacity. While this method of defining AI does not place its technical ability – such as data sets or generation capabilities – at the forefront, it is a helpful approach in policymaking because it addresses a broad spectrum of technologies. By omitting the technical aspects in regulation and instead focusing on the possible societal impact of AI, a human-based definition can adapt to new developments (O'Shaughnessy, 2022).


However, a technology-based definition does offer more specificity to legislation. Through the use of precise language, regulation of AI can be more exacting, covering minutiae that a human-based definition cannot. Human intelligence cannot be defined as precisely as machine capability, and it carries nuances that technical definitions can do without. The trade-off of this legal specificity is that a technology-based definition cannot adapt to technological developments, so legislators must be agile in creating new regulation as technology emerges (O’Shaughnessy, 2022). Technology regulation and agility do not often go hand in hand, so policymakers generally opt for a human-based definition.


In recent regulation, these methods of defining AI have been combined to use both sets of advantages. For example, the recently published EU Artificial Intelligence Act defines an AI system as:


A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (European Commission, 2024).


This constitutes a very broad definition of AI that incorporates both technical and human elements. Without resorting to jargon, it covers the technical capabilities that varying AI systems can perform, while also recognising the probable effect these may have on humans and society. This combination of human and technical aspects is also seen in the “risk factor” characterisation of the EU AI Act, which categorises AI by its technical aspects as well as its possible socioeconomic effects.


In Chinese policy, AI is not defined within regulatory documentation in the same singular way as in the EU AI Act, but several definitions are present. The Shanghai and Shenzhen regulations each contain a definition of AI, as does the China White Paper on AI. They are listed as follows, in that order:


  1. The theories, methods, technologies, and application systems for using computers or computer-controlled machines to simulate, extend, and expand human intelligence, perceive the environments, acquire knowledge, and use knowledge to obtain the best results.

  2. The use of computers or computer-controlled devices, through the perception of environments, acquisition of knowledge, deductions, or other methods, to simulate, extend, and expand human intelligence. 

  3. [The] theories, methods, technologies, and application systems for using digital computers or digital computer-controlled machines to simulate, extend, and expand human intelligence, perceive the environments, acquire knowledge, and use knowledge to obtain the best results. (Schildkraut and Zhang, 2023)


One interesting commonality between all three definitions is the explicit mention of the human element of AI. Indeed, the characters for AI in Mandarin Chinese, 人工智能 (réngōng zhìnéng), define artificial intelligence literally as human-made intelligence. Because of this translation, it is appropriate to consider this a human-based definition, approaching AI as a tool meant to enhance the intelligence that is already present in humans (Maccario, 2023). 



Specific Definitions in AI Policy


The choice of terms used in Chinese and EU AI policy is also very different, and this is reflected in how these regulations emerged within each policy environment. As noted earlier, AI is defined within the policy documentation of both regimes, but this is not a regular and explicit practice within Chinese regulation. In fact, the most clearly defined terms in all of China’s AI regulation are those relating to “deep synthesis”, which refers to ‘the use of technologies such as deep learning and virtual reality, that use generative sequencing algorithms to create text, images, audio, video, virtual scenes, or other information’ (Cyberspace Administration of China, 2022; China Law Translate, 2022). This definition is followed by six bullet points of examples of what deep synthesis may look like, as well as several further definitions. Conversely, this concept is categorised in EU AI policy simply as ‘deep fakes’ and ‘general purpose AI models’ (European Commission, 2024). Additionally, China’s regulations take further care to differentiate between “deep synthesis” and “deep fake,” establishing that the former is the legal use of such technology and the latter the illegal and harmful method of use (Liu, 2023).


Examining other AI regulations enacted in China both before and after the Deep Synthesis Provisions shows that terms used in legislation are generally not clearly outlined, but rather left open to interpretation. One term that is clearly defined is “use of algorithmic recommendation technology,” which ‘refers to the use of algorithm technologies types such as generation and synthesis, individualised pushing, sequence refinement, search filtering, schedule decision-making, and so forth to provide users with information’ (Cyberspace Administration of China, 2021; China Law Translate, 2022). Two things can be derived from this definition that apply generally to how China developed its regulatory framework. The first is the open-ended nature of the included categories: by including the character 等 (děng), which can indicate “other,” “et cetera,” or “so forth,” this definition can be applied to other fitting recommendation algorithms as needed. The second is the explicit mention of people or users. As previously noted, Chinese regulation of AI leans toward the human-based method of definition, focusing on the possible effects and risks of AI to society.


This is an ongoing trend in much of Chinese AI regulation and Chinese policymaking in general. Definitions are often not explicitly delineated, and when terms are defined, they are left quite open in order to preserve regulatory flexibility. Where very explicit and definite terms are used, this signals an issue of particular concern to Chinese policymakers. The explicit distinction between deep synthesis and deep fake is the perfect example, given China’s regulatory emphasis on preserving societal order. The potential of computer-generated content to sow discord through fake images or videos is of deep concern to a regime that emphasises social order (Sheehan, 2023; Cyberspace Administration of China, 2022; Cyberspace Administration of China, 2023).


EU AI policy is more exacting in its definition of terms. In the comprehensive EU AI Act, sixty-two definitions are outlined within the general provisions. These define both technical and societal-impact terms in order to provide clear guidance to the Member States, covering actors within the AI regulatory sphere, actions those entities can take, technical aspects, regulatory bodies, legal requirements, and risk factors (European Commission, 2024). By clearly outlining these terms at the outset, the Act leaves much less regulatory ambiguity.


However, where China keeps this ambiguity to preserve regulatory flexibility, EU AI policy preserves flexibility by employing various regulatory strategies, which necessitates both human- and technology-based terminology. The terms the EU chose to define when regulating AI merge both kinds of classification through categories of risk. These categories outline the technical accommodations needed to mitigate the risk of AI through analysis of its possible impact on society. Here, the idea of risk is divided into a few definitions, the first being ‘the combination of the probability of an occurrence of harm and the severity of that harm’ (European Commission, 2024). This is followed by two other definitions relevant to the concept of risk in this document: widespread infringement and systemic risk to the Union. The former has a multi-part definition that protects the interests of the individual across different Member States, whether harm is already present or merely likely. The latter refers to the possible risks that certain general-purpose AI models pose to public health, safety, public security, fundamental rights, or society as a whole (European Commission, 2024).


It is these definitions of risk that determine which category an AI system belongs to within EU policy: the higher the likelihood that an AI system will cause negative impact, the higher the risk category it belongs to. In Chinese AI policy, however, the mitigation of risk is much more subjective. In sum, China’s measures require that generative AI reflect Socialist Core Values, prevent discrimination, respect intellectual property rights, be true and accurate, and respect the rights and interests of others (Cyberspace Administration of China, 2023; Huang et al., 2023). While each of these requirements is subsequently explained, there is no set definition beyond what constitutes Socialist Core Values and various examples of the lawful rights and interests of individuals. Under this regulation, risk and violation would be determined on a case-by-case basis, with reference to other relevant laws.



Conclusions


Through analysis of the language used in emerging AI policy in China and the EU, several priorities of both regimes can be highlighted. One commonality is a focus on providing real-world risk mitigation for the citizens under their jurisdiction, presented through the human-based language used to define how AI may cause harm to individuals as well as society. Meanwhile, the two territories differ in the exactitude of their definitions. The EU has to regulate for many Member States, each of which may understand the terms differently; precise definitions are therefore necessary. China, by contrast, is developing legislation only for itself, however large a country it may be, and generalises its definitions to maintain flexibility, keeping both the interests of citizens and of the government in mind.

This difference also translates directly into what is considered risk in both sets of regulation. The EU is inherently empowered by the contributions and feedback of the Member States, so it must develop a definition of risk that incorporates that mindset: risk, in the EU AI Act, is determined by the possible impact on Member States and the individuals who live within them, and by how that impact can be mitigated. China’s definition of risk is also based on protection of the individual, but with a heavy focus on mitigating the threat that AI may pose to the authorities. This difference may also explain why the EU chose to develop a set of regulatory tools to give resources to the Member States, whereas China allows for regulatory ambiguity and flexibility. For one, the main purpose is to protect others for oneself; for the other, to protect oneself for others.



ABOUT THE AUTHOR


Leanne Voshell is a researcher focused on technology policy and the associated developing regulatory environment between China, the EU, and the USA. She graduated from Leiden University with her Research Master's in 2022. Currently, she is looking for her start as a policy analyst and writes for European Guanxi. You can find her on LinkedIn.


This article was edited by Marina Ferrero, David Dinca and René Neumann.


BIBLIOGRAPHY


China Law Translate, 2022. Provisions on the Management of Algorithmic Recommendations in Internet Information Services. China Law Translate, 1 April. Available from: https://www.chinalawtranslate.com/en/algorithms/ [Accessed 03 April 2024].


China Law Translate, 2022. Provisions on the Administration of Deep Synthesis Internet Information Services, China Law Translate, 11 December. Available from: https://www.chinalawtranslate.com/en/deep-synthesis/ [Accessed 03 April 2024].


China. China State Council, 2017. Notice of the Development Plan for the New Generation of Artificial Intelligence. Beijing: Central People’s Government of the People’s Republic of China. Available from: https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm [Accessed 03 April 2024].


China. Cyberspace Administration of China, 2021. Internet Information Service Algorithm Recommendation Management Regulations. Beijing: Central People’s Government of the People’s Republic of China. Available from: https://www.gov.cn/zhengce/zhengceku/2022-01/04/content_5666429.htm [Accessed 03 April 2024].


China. Cyberspace Administration of China, 2023. Measures for the Administration of Generative Artificial Intelligence Services. Beijing: Office of the Central Cyberspace Administration of China. Available from: http://www.cac.gov.cn/2023-04/11/c_1682854275475410.htm [Accessed 03 April 2024].


China. Cyberspace Administration of China, 2022. Provisions on the In-depth Synthesis Management of Internet Information Services. Beijing: Central People’s Government of the People’s Republic of China. Available from: https://www.gov.cn/zhengce/zhengceku/2022-12/12/content_5731431.htm [Accessed 03 April 2024].


European Commission, 2018. Artificial Intelligence for Europe. Brussels: EUR-Lex. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN [Accessed 03 April 2024].


European Commission, 2021. Fostering a European approach to Artificial Intelligence. Brussels: EUR-Lex. Available from: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM%3A2021%3A205%3AFIN [Accessed 11 April 2024].


European Commission, 2024. The AI act explorer. Brussels: EU Artificial Intelligence Act. Available from: https://artificialintelligenceact.eu/ai-act-explorer/ [Accessed 03 April 2024].


Eugenesiow, 2020. Eugenesiow/AI-Glossary-Mandarin: An English to mandarin glossary of AI terminology, grouped topically by areas (e.g. NLP) and then by tasks (e.g. NER). 人工智能术语/词汇库, GitHub. Available from: https://github.com/eugenesiow/ai-glossary-mandarin [Accessed 14 April 2024]. 


Huang, S., Toner, H., Haluza, Z., Creemers, R., and Webster, G., 2023. Translation: Measures for the management of Generative Artificial Intelligence Services (draft for comment). DigiChina, 12 April. Available from: https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/ [Accessed 03 April 2024].


Industrie 4.0, 2020. AI Glossary, Plattform Industrie 4.0 - Glossary. Available from: https://www.plattform-i40.de/IP/Navigation/EN/Industrie40/Glossary/glossary.html [Accessed 14 April 2024]. 


Liu, H., 2023. Deep synthesis works compliance governance and copyright protection: Chinese perspective and practices. Scientific and Social Research, 5(12), pp.84–93. doi:10.26689/ssr.v5i12.5901. 


Maccario, G., 2023. AI what? AI 什么意思? AI Shénme Yìsī?. Imminent, n.d. Available from: https://imminent.translated.com/ai-what-ai-%E4%BB%80%E4%B9%88%E6%84%8F%E6%80%9D-ai-shenme-yisi [Accessed 14 April 2024]. 


O’Shaughnessy, M., 2022. One of the biggest problems in regulating AI is agreeing on a definition. Carnegie Endowment for International Peace, 6 October. Available from: https://carnegieendowment.org/2022/10/06/one-of-biggest-problems-in-regulating-ai-is-agreeing-on-definition-pub-88100 [Accessed 03 April 2024]. 


Schildkraut, P. and Zhang, H., 2023. What to know about China’s new AI regulations. Arnold & Porter, n.d. Available from: https://www.arnoldporter.com/-/media/files/perspectives/publications/2023/04/what-to-know-about-chinas-new-ai-regulations.pdf?rev=d872d730384040619c1301e098cd90ee [Accessed 30 April 2024].


Sheehan, M., 2023. China’s AI regulations and How They Get Made. Carnegie Endowment for International Peace, 10 July. Available from: https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117 [Accessed 03 April 2024].





