Electronic Press Kit for the PauseAI movement, including: Key Messages, Press Release, Bios, Fact Sheet, and Op-Ed

Electronic Press Kit:

Key Messages

#1: AI development needs to be paused before it develops out of human control. 

#2: AI online today can propagate fake news and scams and intensify mental health issues.

#3: Society is being affected by AI through its energy consumption, reduction of online privacy, and operation of autonomous weapons. 

Talking Points:

  • AI experts have been raising the alarm since 2023, calling for a pause on AI development.  

  • The full extent of AI capabilities, and of our control over them, is still unknown even to researchers. 

  • Encouraging public discussion is the only way to promote awareness and education on AI topics. 

Press Release

FOR IMMEDIATE RELEASE

PauseAI Continues Growing, as Do the Risks of AI

15 Countries and Counting, Every Voice Contributes to Change

Utrecht, Netherlands (December 3, 2025) – The PauseAI movement is excited to celebrate the growth of its national chapters. We would also like to highlight a recent win in holding Google accountable for abandoning its safety commitments: the company has now released Gemini 3.0 with accompanying safety information. Our multinational protest efforts, including the largest AI safety protest ever, held outside Google DeepMind’s London offices, brought visibility to this issue and encouraged over 60 UK politicians to voice their concerns. 

Google’s violation of the Frontier AI Safety Commitments, signed in Seoul in 2024, was not the only AI safety agreement Google DeepMind has broken: equivalent commitments were made to the White House in 2023 and under the Hiroshima Process International Code of Conduct in 2025. Holding these companies accountable for their AI development practices is vital for the safety and continuity of society. 


Although this is a small victory in a larger movement, it is proof that we can encourage change and have an impact on our world. A brief statement from Joep Meindertsma captures the joy of our growth: “Without the contribution of our members, we could not have organized education and protests in multiple nations. Thank you!” AI will not become safer without a continued push for change. As PauseAI grows internationally, our movement becomes stronger and the goal of AI regulation draws closer. Spreading the message of PauseAI helps educate and empower others to get involved with our movement. 

About PauseAI Global: 

PauseAI Global is an international movement to pause AI development before its effects become severe and unstoppable. It was founded in May 2023 by Joep Meindertsma to spread awareness of the existential risks that unmitigated AI development poses to society and to individuals. For more information and a list of national chapters, visit https://pauseai.info/about

Contact:

Tom Bibby

Director of Communications

PauseAI Global

tomdlalgm@gmail.com

###


Bio: Joep Meindertsma (founder and media contact)


In May 2023, Joep Meindertsma put his job at a software development company on hold to create PauseAI, an organization that educates people on the risks of AI. Meindertsma became familiar with AI development through his work in software, but as AI development accelerated without restriction in 2023, he realized that people in the Netherlands were largely unaware of its possible effects. 

After the success of PauseAI Global’s initial actions, the organization spread to multiple countries, building volunteer networks and a leadership board to help coordinate the movement. Meindertsma has since returned to his full-time job; however, he continues to serve as chair of the Global Board and as the media contact for the organization. 

At Ontola, where Meindertsma works full-time, he continues his commitment to an equitable and ethical online environment, specializing in web applications. One of the company’s largest applications features open-source data and an integrated user interface that combines ease of use with security. 

Fact Sheet: PauseAI

Description:

PauseAI is an organization dedicated to raising awareness of, and pausing, AI development because of the existential risks posed by society’s accelerating interdependence with AI. The goal of the movement is to educate individuals on the dangers AI poses to people and to wider society, and to encourage a pause in AGI development until the extent of its impacts is understood and mitigated. 

Location: Utrecht, Netherlands

Business Information: 

https://pauseai.info/

Social Media: TikTok @pauseai, Instagram @pause_ai, X @PauseAI, Reddit r/PauseAI, Facebook @PauseAI, and Substack @pauseai

Description of Services:

An informational and community hub raising visibility for the risks of superintelligent AI development and the need for AI regulation. PauseAI is a largely volunteer-based organization supported by a central Leadership Team and individual National Leaders. PauseAI Global coordinates media, communications, protests, and education for the movement. 

Team:

Joep Meindertsma is the Founder of PauseAI Global and chair of the Global Board. He founded PauseAI in May 2023 after realizing that the risks of AI could not be ignored. Through his work in software development, Meindertsma became acquainted with the risks of unrestricted AI development. That work aligns with the values of PauseAI, providing open-source data tools with a commitment to usability and user privacy. 

Maxime Fournes is the CEO of PauseAI Global. He began his career in machine learning engineering before joining the organization in early 2024, and recently took on the role of CEO on the leadership team. Fournes also heads the French national chapter of PauseAI, connecting with individuals through his voice in French media. 

Eleanor Hughes is the Organizing Director of PauseAI Global. She embodies the core function of the organization, facilitating internal coordination and action. As the organizational lead for all internal teams, she maintains an overview of every aspect of the organization and assists local chapters in organizing events and their associated media. 

Tom Bibby is the Director of Communications for PauseAI Global, coordinating all external relations for the organization. This includes oversight of the website, social media, branding, press, and written releases. Bibby’s team creates and maintains the organization’s interactions with the public. 

Relevant Historical Info:

  • Created in May 2023 in the Netherlands by Joep Meindertsma

  • First public action: a protest outside Microsoft’s lobbying office in Brussels 

  • Protested outside the inaugural AI Safety Summit at Bletchley Park in November 2023

  • In May 2024, chapters in 13 countries organized to protest ahead of the AI Seoul Summit

  • In February 2025, 15+ chapters organized to protest ahead of the AI Action Summit in France

Our Work

  • Informational media content, created independently and in collaboration with partners, to spread awareness of the possible negative and positive effects of AI

  • Locally organized chapters with the opportunity to integrate into the larger international community

  • Coordinated protests in multiple nations to bring attention to the risks of AI and safety violations

Media 

  • Time article, in collaboration with the Existential Risk Center, on why a pause on AI is needed

  • Wired article introducing PauseAI and interviewing Meindertsma about its creation

  • Politico article discussing the first PauseAI protest, outside Microsoft’s lobbying office in Brussels in 2023

Media Contact

Joep Meindertsma 

joep@pauseai.info

Long form: Op-Ed


We Must Halt AI Development Before It Halts Us


11/15/25*

By Joep Meindertsma*


The risks associated with AI have become evident since its introduction to the public sphere. Multiple teens in the United States have tragically taken their own lives after discussing their mental health with AI chatbots; some of their parents allege that the chatbots their children used exacerbated, or even encouraged, these unhealthy thoughts. Additionally, expert AI researchers have estimated that once a superintelligent AI is built, there is roughly a 14% chance that it will lead to human extinction (Grace et al., 2022). These early harms to humans will be only a small part of the problem if we continue to let AI grow uninhibited.  

Widespread usage of AI is becoming common through chatbots like ChatGPT, Gemini, and Meta AI, yet there have been minimal limitations and controls on their development. At the 2024 AI Summit in Seoul, Google pledged to the Frontier AI Safety Commitments, but it broke that pledge when releasing Gemini 2.5 in 2025. Through PauseAI’s movement and voice, Google was pressured to release internal evaluations, yet it refused any external evaluation or research. Beyond violating the safety commitments Google made, this refusal raises larger concerns about our understanding of AI development at scale and the need to hold AI developers accountable for egregious disregard of safety and agreements. 

Researchers cannot reliably track how quickly this software is improving, so they cannot predict when superintelligent AI will emerge. Worse, once such a system is in use, humans may not be able to tell that it has overcome human intelligence. This threatens our reality because of our reliance on technology as a communication and management tool. Possible developments from emerging superintelligent AI include autonomous control of weapons and devices, manipulation of people through false communications and actions online, and the creation of bioweapons. Any of these events could cause the end of the human race, very quickly. The possibility of an unstoppable extinction of human beings underscores the need for regulation and restriction of AI growth. At this time AI is still under human control and has not yet outpaced our intelligence, giving society a chance to mitigate, and hopefully eliminate, this possibility before it is too late. 

Even though the risk of superintelligent AI looms in the future, AI already poses immediate concerns to people: mental health risks, false information, generated photos, and privacy violations. While these may be considered features of the internet in general, there has been an influx of, and advancement in, the quality and believability of these harms. False information spreads more widely via autonomous bots and software. Images can be altered or even created in detail in an instant, making it ever harder to distinguish real photos from deepfakes. Third parties can access and collect information through the data trail you leave behind on your devices, then sell or reuse it. Together, these tools can produce convincing context and detail, which can even be used to blackmail users online.   

Some may still believe that the economic advantages of continued AI development outweigh the risks that superintelligent AI poses. However, I would argue that mass extinction would stifle progress and economic health far more than pausing over safety issues for a period of time. Whatever advantage could be gained from continuing AI development at this extreme rate can likely be recovered in the long term; if it cannot, that is a small loss compared with building an algorithm that could outpace and subsequently eliminate humanity. Earlier scientific progress carried blind spots much like those we now have about AI. From that ignorance we gained great achievements, like the discovery of penicillin, which has saved countless lives. We also got asbestos, a material linked to grave diseases that was nonetheless used in buildings for decades, exposing many people to it. The risk of superintelligent AI does not pale in comparison to the advancements that might be made along the way. 

Since its creation in 2023, PauseAI, along with other movements, has created a voice for AI restrictions. These movements offer hope for our society and organize different forms of involvement. They suggest three major pathways to action: demanding government action, educating yourself and others, and supporting the movement. Share your voice now and help pause AI development. 

References

Grace, Katja, Zach Stein-Perlman, Benjamin Weinstein-Raun, and John Salvatier. “2022 Expert Survey on Progress in AI.” AI Impacts, 3 Aug. 2022. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/.

*indicates false date and byline for context purposes
