1. Establishment of the Headquarters for Artificial Intelligence Strategy and Publication of the Outline of AI Basic Plan
On September 1, 2025, the Government of Japan established the Headquarters for Artificial Intelligence Strategy within the Cabinet pursuant to the Act on the Promotion of Research, Development and Utilization of Artificial Intelligence–related Technologies (AI Act). The Prime Minister serves as the head of the Headquarters, and all Cabinet ministers are members. Going forward, AI policies will be considered and implemented across the government under the leadership of this new body.
At the meeting of the Headquarters on September 12, 2025, the outline of the AI Basic Plan, which will serve as the government’s future policy guideline, was released. The outline sets out Japan’s goal of becoming “the most AI-friendly country in the world for development and utilization,” together with the following four pillars:
- Accelerated promotion of AI utilization (“Using AI”)
- Strategic reinforcement of AI development capabilities (“Creating AI”)
- Leadership in AI governance (“Enhancing reliability of AI”)
- Continuous transformation toward an AI-oriented society (“Collaborating with AI”)
Examples of specific initiatives under these pillars include support for the development and testing of AI in sectors facing labor shortages, such as healthcare and nursing care, as well as the expansion of essential infrastructure such as AI data centers to strengthen Japan’s development capacity. The formal AI Basic Plan is scheduled to be adopted by the end of 2025, following further discussions at the Headquarters and its expert committees.
At the same time, legal challenges surrounding AI are becoming more pronounced both inside and outside Japan. For instance, in August 2025, The Yomiuri Shimbun filed a lawsuit against the U.S. AI startup Perplexity, alleging copyright infringement. Subsequently, Nikkei Inc. and The Asahi Shimbun Company also filed suits against the same company seeking injunctions, deletion of content, and damages. These high-profile cases, which place the use of copyrighted content by generative AI squarely before the courts, are expected to significantly influence the future course of rights protection and AI regulation.
Against this backdrop, companies will need to pay close attention not only to forthcoming government policies, such as the AI Basic Plan to be finalized by the Headquarters, but also to the evolving positions of private stakeholders with regard to AI.
2. Japan Fair Trade Commission: Subordinate Legislation and Guidelines under the Mobile Software Competition Act (MSCA)
On July 29, 2025, the Japan Fair Trade Commission announced Cabinet Orders, Regulations, and Guidelines concerning the Act on the Promotion of Competition for Specified Smartphone Software (the “Mobile Software Competition Act” or “MSCA”). This followed the public comment period held from May 15 to June 13, 2025. The MSCA will come into full effect on December 18, 2025.
Pursuant to a Cabinet Order, and based on criteria established as of March 31, 2025, the designated operators subject to regulation under the MSCA are Apple Inc., iTunes K.K., and Google LLC.
Because the Guidelines address a wide range of issues, the following highlights, by way of example, the restrictions concerning app stores. Article 7, Item 1 of the MSCA prohibits designated operators from engaging in conduct that obstructs other operators from providing app stores. The Guidelines clarify that “obstructive conduct” includes acts that are highly likely to substantially impede the provision or use of alternative app stores, such as charging fees at levels that would make it difficult to operate an alternative app store.
Such acts, however, may be justified if they are necessary to achieve specified purposes, such as ensuring cybersecurity, protecting user information, or safeguarding minors, and if those purposes cannot reasonably be achieved through alternative means. In addition, the Cabinet Order expressly identifies two further purposes: (i) preventing significant delays in smartphone operation or other abnormal smartphone behavior, and (ii) preventing gambling or other criminal activities conducted via smartphones.
When considering any justification, an appropriate balance must be struck between these objectives and the overarching goal of promoting competition. The Guidelines provide examples where justification may be recognized, such as where a designated operator reviews alternative app stores against necessary standards from a cybersecurity perspective and prohibits their provision if those standards are not met.
3. Children and Families Agency: Summary of Issues and Discussion Points on Protecting Young People in Internet Use
On August 7, 2025, the Children and Families Agency compiled a summary of issues and discussion points from the Working Group on Protecting Young People in Internet Use (full report and executive summary). With the growing prevalence of smartphones and generative AI, Internet use has spread to younger age groups. At the same time, a wide range of risks are becoming increasingly apparent, including illegal and harmful content, underground recruitment for illicit part-time jobs, defamation and abuse, and AI-driven deepfakes. The summary outlines the basic direction for policy discussions as well as key issues which the government should address.
In some countries, age restrictions on the use of social media are already in place or under consideration. This summary, however, emphasizes that rather than imposing uniform legal restrictions on content or usage time based solely on age or developmental stage, the fundamental direction should be to secure an environment in which content and services appropriate to each young person’s age and maturity can be provided. This includes multifaceted and comprehensive measures such as age verification systems and strengthening media literacy among young people themselves.
The main issues identified include:
- Updating the Act on the Development of an Environment that Provides Safe and Secure Internet Use for Young People
- Promoting voluntary initiatives by private companies and other stakeholders
- Addressing harmful content risks such as adult advertising
- Responding to conduct/contact risks such as underground job recruitment, bullying, and sexting
- Addressing consumer-related risks
- Responding to cross-cutting risks arising from advanced technologies such as generative AI and VR
- Addressing overarching risks related to younger starting ages, long-term use, impacts on mental and physical health, and algorithm-driven risks
- Strengthening public awareness and outreach activities
Going forward, the Children and Families Agency will take the lead, in cooperation with relevant ministries and agencies, in implementing measures categorized into those that can be addressed in the short term and those requiring medium- to long-term study. For the latter, specific initiatives, including possible legal measures, will be compiled by around 2026. These will then be reflected in the Basic Plan on Measures to Ensure the Safe and Secure Use of the Internet by Young People (7th edition), scheduled to be finalized and released in 2027.