12-08-2024
Benjamin Arenas Sanchez
Human Rights Researcher
Global Human Rights Defense
The modernisation of today’s society has opened the door to innovation in a multitude of fields. In recent years, Artificial Intelligence (AI) has taken the world by storm, whether through funny memes, help with schoolwork, or its use in the music industry. AI’s influence is ever more present, and finding a way to regulate this powerful tool is the responsibility of the world’s leading powers. Japan has taken an approach that differs from that of many European countries, focusing on soft law and non-binding regulations that allow AI to evolve while respecting human rights and data protection.
Japan has long supported the use of soft law to oversee and regulate artificial intelligence. It has favoured integrating AI into society by creating rules that guide users and businesses without preventing AI’s development. Its policymaking is divided into two categories: regulation on AI and regulation of AI. Regulation on AI centres on the risks associated with AI and is addressed through soft law and non-binding guidelines that companies adopt voluntarily. Regulation of AI, on the other hand, consists of measures that promote the positive use of AI in society.
In 2019, the Japanese Government published the Social Principles of Human-Centric AI in an attempt to enhance the beneficial impacts of AI on society rather than suppress it for fear of risks. The principles are based on three philosophies: human dignity, diversity and inclusion, and sustainability. They aim to realise these philosophies through AI and its application, not to protect them by limiting AI. The three philosophies laid the groundwork for the policy’s seven principles: human-centricity; education and literacy; privacy protection; ensuring security; fair competition; fairness, accountability, and transparency; and innovation. Japan’s approach to regulation is built on these principles, which encompass both the security of users and guidance on the use of AI.
Japan has long taken a leading role in the regulation of AI, both within the G7 and bilaterally with other countries. In May 2023, the G7 launched the Hiroshima AI Process during Japan’s presidency, aiming to promote safe, secure, and trustworthy AI. This led to the first comprehensive international rules on generative AI in December 2023. In February 2024, Japan established the Japan AI Safety Institute to develop methods for evaluating AI safety, in collaboration with both domestic and international partners. Japan has also entered into a collaboration with the United States to strengthen the development and protection of new technologies, including AI.
Japan is a leading power in the regulation of AI, and with that position comes a great responsibility to regulate in a way that does not hinder AI’s development while still respecting human rights.
Sources and further reading:
Daisuke Akimoto, ‘Japan’s AI Diplomacy’ (The Diplomat, 26 April 2024) <https://thediplomat.com/2024/04/japans-ai-diplomacy/> accessed 12 August 2024.
Hiroki Habuka, ‘Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency’ (CSIS) <https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency> accessed 26 July 2024.
Inge Odendaal, ‘The Hiroshima AI Process: Japan’s Role in Shaping Global AI Governance’ (Stellenbosch University Japan Centre, 7 November 2023) <https://www0.sun.ac.za/japancentre/2023/11/07/the-hiroshima-ai-process-japans-role-in-shaping-global-ai-governance/> accessed 26 July 2024.
‘The Hiroshima AI Process: Leading the Global Challenge to Shape Inclusive Governance for Generative AI’ (The Government of Japan, JapanGov) <https://www.japan.go.jp/kizuna/2024/02/hiroshima_ai_process.html> accessed 12 August 2024.