A digital shield for future generations: Azerbaijan in the global fight for children’s online safety
The decree of the President of the Republic of Azerbaijan, Ilham Aliyev, “On Measures to Protect Children from Harmful Content and Negative Impacts in the Digital Environment,” signed on 27 February 2026, is, without exaggeration, an event whose significance goes far beyond an ordinary regulatory act. The document establishes a fundamentally new approach by the state to a problem that today affects every country in the world—the rapid immersion of children into the digital environment, which is not accompanied by adequate mechanisms to safeguard their psychological, emotional, and social development. In effect, Baku is joining a developing global consensus whose essence is simple: the era of unrestricted influence by tech giants is over, and states must assume responsibility for the safety of the most vulnerable category of digital users.
To fully appreciate the scale and timeliness of Azerbaijan’s initiative, it is necessary to understand the context in which it emerged. The world is experiencing a tectonic shift regarding children’s presence on social media. What just five years ago seemed a matter of individual parental oversight has now become the subject of intense legislative battles at the level of national parliaments and supranational institutions. The reason is clear. Scientific studies conducted by leading institutes worldwide, in particular by the American Psychological Association, convincingly demonstrate a correlation between intensive social media use by adolescents and a rise in anxiety, depression, and, in the most tragic cases, suicidal behaviour.

Australia became a pioneer in this global movement by passing the Online Safety Amendment (Social Media Minimum Age) Act in November 2024, which introduced a complete ban on social media use for anyone under sixteen. The law came into effect on December 10, 2025 and affected the largest platforms: YouTube, Facebook, Instagram, TikTok, Snapchat, Reddit, X, Threads, Twitch, and Kick.
Notably, the Australian model allows no exceptions for parental consent—the state deliberately decided that it, rather than parents, must establish an insurmountable barrier between children and potentially harmful digital content. Companies violating the law face fines of up to 49.5 million Australian dollars. Within the first week of the law’s implementation, 4.7 million accounts were deactivated. Meta, which owns Facebook and Instagram, reported the removal of roughly half a million accounts.
However, two months after the ban came into force, experts at the University of Sydney noted that teenagers were finding ways to circumvent the restrictions, particularly by exploiting facial recognition systems, which often misidentified fifteen-year-olds as adults.
The Australian experiment became a powerful catalyst for Europe. In January 2026, the French National Assembly approved a bill banning social media use for individuals under fifteen by a vote of 130 in favour to 21 against. President Emmanuel Macron described this step as a “major milestone” in protecting French children and adolescents and personally pushed for the accelerated passage of the law, publicly stating:
“The brains of our children and teenagers are not for sale. Their emotions are not for sale and must not be manipulated—neither by American platforms nor by Chinese algorithms.”
In addition to the social media ban, the French law extends the existing 2018 prohibition on mobile phone use in primary and secondary schools to the upper grades of high schools. According to data from the French health authority, one in two teenagers in the country spends between two and five hours per day on their mobile phone, and 58 per cent of children aged twelve to seventeen regularly use social media.
The European Union has generally adopted a somewhat different, systemic strategy. Instead of direct age-based bans, Brussels has focused on the Digital Services Act (DSA), Article 28 of which requires all online platforms accessible to minors to ensure a high level of privacy, safety, and protection.
In July 2025, the European Commission published its final recommendations for protecting minors under the DSA—a document that, while formally non-binding, has effectively become the benchmark for evaluating platforms’ compliance with European law. The recommendations are built around the so-called “5C” risk model: content risks (exposure to harmful content), conduct risks (behaviour), contact risks (harmful interactions), consumer risks, and cross-cutting risks.
Platforms are required to set minors’ accounts to maximum protection by default, restrict interactions with strangers, disable geolocation and tracking features, block targeted advertising based on children’s data profiling, and deactivate design elements that encourage excessive use—such as autoplay, streaks, and late-night push notifications. The European Commission has also announced the development of its own age-verification app, which is intended to become the minimum standard for all platforms operating within the EU.
The United Kingdom, having left the European Union, is developing a parallel but in many ways even more ambitious system. The Online Safety Act, passed in October 2023, imposes fines of up to £18 million or 10 per cent of a violating company’s global turnover—whichever is higher.
The regulator, Ofcom, has been implementing the law in several phases: the first, concerning illegal content, came into force in March 2025; the second, focusing on child protection, came into force on July 25, 2025. The Codes of Practice on protecting children, published by Ofcom in April 2025, cover a wide range of measures—from mandatory age verification for access to pornographic content to moderation of materials related to self-harm, suicide, eating disorders, bullying, and content encouraging dangerous behaviour.
Ofcom has already launched investigations into several major pornography websites that failed to implement timely age verification. In December 2025, it imposed a £1 million fine on one company. The platform 4chan was fined for non-compliance and was ordered to pay an additional £100 for each day the violation continued.
In the United States, the situation is developing less straightforwardly, which is understandable given the unique American political and legal landscape, where any restrictions on online content inevitably collide with the First Amendment. Nevertheless, a movement is underway.
The Kids Online Safety Act (KOSA), first introduced in 2022 by Senators Richard Blumenthal and Marsha Blackburn, was approved by the Senate in July 2024 by an overwhelming majority—91 votes to 3. The bill requires platforms to exercise “reasonable care” in designing features that affect the engagement of underage users and prohibits the algorithmic promotion of content related to suicide, eating disorders, substance abuse, and sexual exploitation.
However, the bill stalled in the House of Representatives, where Republicans raised concerns about potential censorship and excessive government intervention. In May 2025, KOSA was reintroduced to Congress, and in January 2026, the House Judiciary Subcommittee held hearings on nineteen new digital safety bills, including an updated version of KOSA.
At the same time, several U.S. states—including California, Texas, and Utah—passed their own laws limiting children’s access to certain categories of online content.
The Chinese approach deserves particular attention, as it differs radically from Western models in the scale of state intervention. Beijing set the tone as early as 2021, when the National Press and Publication Administration limited online gaming for minors to one hour per day—allowed only on Fridays, Saturdays, Sundays, and public holidays, within a strictly defined time slot from 8:00 PM to 9:00 PM. All gaming companies operating in China were required to integrate the government’s identity verification system.
In April 2025, the Cyberspace Administration of China launched the so-called “minor mode”—a comprehensive set of restrictions for mobile devices, including a daily screen-time limit of two hours, a block on all applications from 10:00 PM to 6:00 AM, content filtering by age category, and control over private messages from strangers. The system requires coordination among three parties: device manufacturers, app developers, and app store operators.
By the end of 2023, the number of Internet users under eighteen in China had reached 196 million, making digital safety an integral part of the state’s ideological agenda.
Against this global backdrop, President Ilham Aliyev’s decree appears as a carefully considered, timely, and strategically calculated step. The document instructs the Cabinet of Ministers, within a three-month period and with the involvement of state bodies, scientific institutions, and civil society organisations, to draft regulatory acts covering the protection of children from harmful content and exposure on social media.
A key element of the forthcoming legislation will be the introduction of age restrictions for social media registration, a measure directly aligned with the experiences of Australia and France. In addition, the decree provides for measures regulating the use of mobile and electronic devices in preschool groups and general educational institutions, as well as the integration of digital literacy, cybersecurity, and responsible online behaviour into preschool and school curricula. It also encompasses content standards and the development and implementation of media-awareness programmes targeting parents, educators, and children.
As we can see, while the Australian model primarily relies on prohibitive measures and the European approach focuses on regulating platform behaviour, Azerbaijan’s strategy lays the foundation for developing critical thinking and digital competence among children themselves. This aligns with the recommendations of UNICEF and several leading research organisations, which emphasise that prohibitions alone cannot ensure long-term safety if the younger generation is not equipped with the skills to interact consciously and responsibly with the digital environment.
Moreover, Azerbaijan is not approaching this issue from scratch. In December 2025, Baku hosted the international conference “Protecting Children in the Digital Environment: Modern Tools and International Cooperation”, where Minister of Digital Development and Transport Rashad Nabiyev stated that turning digital opportunities into real benefits depends directly on the joint efforts of the state and parents. At the same time, the Presidential decree establishing the Council for Digital Development of the Republic of Azerbaijan, signed almost simultaneously with the decree under discussion, demonstrates a comprehensive approach to digital transformation, in which child protection is an integral part of a broader strategy for digital sovereignty and technological development.

Critics of such initiatives—present in every country—traditionally point to several vulnerabilities.
First, the technical feasibility of age verification: Australia’s experience has shown that existing technologies are far from perfect, and mandatory identification raises serious privacy and data protection concerns.
Second, the risk of “pushing” teenagers into less regulated and potentially more dangerous corners of the Internet.
Third, the question of balancing protection with access to information. The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) have repeatedly argued that broadly drafted child-protection laws inevitably lead to the censorship of legal content and infringe upon rights guaranteed by freedom of speech. Several child-rights organisations in France have urged lawmakers to hold platforms accountable rather than restrict children’s presence online.
However, these arguments, valid as they may be, do not negate a fundamental reality: states can no longer afford inaction in the face of the systemic harm that algorithmically optimised platforms inflict on the mental health of the younger generation.
The book The Anxious Generation by American social psychologist Jonathan Haidt, published in 2024, has become an intellectual foundation for policy decisions from Canberra to Baku. Haidt’s central thesis—that we overprotect children in the physical world while leaving them catastrophically underprotected in the digital world—resonated with politicians across the ideological spectrum. Australian Prime Minister Anthony Albanese explicitly stated that the age-restriction law represents “the day when Australian families are taking back power from these big tech companies.”
President Aliyev’s decree fits within this global trend but does so with Azerbaijan’s characteristic pragmatism. The document contains no radical prohibitions that could create legal conflicts or provoke social polarisation. Instead, it initiates a systemic process involving in-depth expert analysis, study of international experience, and engagement of a broad range of stakeholders—from the scientific community to civil society. The three-month timeframe allocated for drafting the legislation ensures sufficient time for thorough legal and technical review.
Attention should also be paid to the decree’s formulation, which emphasises the need for a “balanced, evidence-based, and systemic approach.” This is a fundamentally important methodological principle. In global practice, there are frequent examples of online child-protection legislation being adopted amid political hype—for instance, Australia’s law passed through parliament in less than two weeks from introduction to vote, drawing justified criticism from some experts. Azerbaijan’s approach, grounded in an evidence base and involving scientific institutions, appears more measured and promising in terms of creating a sustainable and effective regulatory framework.
The international context shows that the trend toward stricter regulation of children’s presence in the digital environment is irreversible. Beyond the countries already mentioned, similar measures are being considered or implemented in Denmark, Spain, Italy, Greece, Germany, Malaysia, Indonesia, New Zealand, and Singapore.
In September 2025, European Commission President Ursula von der Leyen stated that she was closely monitoring the Australian experiment and was inspired by Canberra’s example. Spanish Prime Minister Pedro Sánchez, speaking at the World Government Summit in Dubai, promised that his government would adopt laws to protect children from “spaces of addiction, abuse, pornography, manipulation, and violence.”
In this way, a global coalition of states is taking shape, recognising that children’s digital safety is both a public health issue and a matter of national security.