We Quit ChatGPT – Here’s Why

Following the revelations that co-founder and president of OpenAI (the company behind ChatGPT) Greg Brockman and his wife, Anna, donated $25 million to Donald Trump (via MAGA Inc.), we at Go Well Consulting have decided to cancel our ChatGPT subscriptions. This blog explains our reasoning, what AI tool we’ve changed to, and some food for thought for other businesses using AI.  

Before we explain our reasoning, we want to acknowledge that understanding the ethics and governance of AI is a minefield. There are no perfect solutions, and we do not believe abstaining from using AI is the answer. 

The power and capability of AI is remarkable, and there is no doubt in our minds that it will fundamentally change life on Earth – and it’s not going anywhere. For these reasons, we believe a complete rejection of AI is not the right choice. Instead, we believe in understanding its capabilities, how it works, who is behind it, and the breadth of its impacts so we are able to make well-informed choices on how we utilise it and well-informed recommendations on how it is regulated. We 100% believe it needs to be regulated – more on that later.  

Much like climate change, biodiversity loss, inequality, plastic pollution and any other of the big issues we are all facing, businesses need to step up when it comes to AI. We need businesses – and the people who lead them – to take the time to understand the issues, identify their contributions to them, and make well-informed decisions for all their stakeholders.  

This is not what OpenAI are doing. Despite starting out as a non-profit organisation with what looked like good intentions – including a commitment to dedicate 20% of their computing resources to AI safety research and a mission statement that included the phrases “AI that safely benefits humanity, unconstrained by a need to generate financial returns” and “develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible” – those leading the organisation have transformed OpenAI into a business increasingly focused on profits and winning the AI race at the cost of safety and human-centred design.  

In OpenAI’s defence, they are in the midst of a global AI arms race. There are enormous geopolitical forces at play, each focused on its own national interests and on staying ahead of its rivals. However, that is not an excuse to disregard the safety and wellbeing of humanity, and we believe that consumers of AI cannot just sit by and let this technology happen to us. On the contrary, we need to stand up and speak out to ensure that AI does make the world a better place.  

On learning about the donation made by Mr and Mrs Brockman, we discussed the implications for us as a team and decided unanimously that we couldn’t continue to use ChatGPT – or, more specifically, spend any money with OpenAI. We are a sustainability consultancy that dedicates our time, knowledge, and skills to helping businesses navigate their sustainability journeys and the transition to a circular, regenerative and inclusive economy. Given that Donald Trump has done arguably more than any other world leader in history to block and repeal the progress that has been made in this global economic transformation, this donation was not one we could overlook. 

Why did OpenAI make the donation? From our research, it all seems to come down to regulation. OpenAI do not want governments to regulate AI: they want complete freedom to develop the technology in any way they can, or in any way AI can develop itself (AI is now self-coding). Considering the power of this technology, it cannot be allowed to proceed without any checks and balances, or the social and environmental implications could be disastrous – we are already seeing extremely concerning AI behaviour in interactions with young people, along with the huge amounts of energy and water required to run AI data centres (and the associated emissions).  

Hopefully, we have learned some lessons from our experience with Facebook. A tech company that started out with seemingly such good intentions of connecting people has since harvested data without consent, proliferated the spread of mis- and disinformation, and allowed its platform to be used to promote violence. Yet so many individuals and businesses feel stuck in their reliance on Facebook, while governments around the world try to reverse engineer regulations to keep their people safe (particularly their children) and ensure Facebook pays its fair share of tax. We don’t want to repeat the same mistakes with AI companies. (For the record, Go Well has a policy of not paying for any advertising on Facebook or Instagram, but we do use the platforms.)  

So, what AI tool will we use now? This is where we went down the proverbial rabbit hole. We are very mindful of the length of this blog but wanted to share what we learnt to assist other businesses as best we can.  

Being well aware of the #quitchatgpt campaign that is being pushed online, we wanted to make sure that we did our own research and didn’t simply swap one poisoned chalice for another. The rate and scale of change around AI is hard to fathom. From the proliferation of data centres and their resulting demands on water and energy, to the capabilities of these computers and the numbers of organisations trying to get in on the boom, there is a lot to consider. 

In a great example of why we still believe in the need for using AI tools, we utilised their capability to help our research. Following some initial guidance from our AI partners, Ten Past Tomorrow, and then some hours of human research, we narrowed it down to two options: Claude, by Anthropic, or Le Chat, by Mistral. We then asked both tools to rank the top 10 AI providers by performance and then research their actions across the following criteria: 

  • Governance structure  
  • Transparency 
  • Political donations  
  • Safety commitments and actions taken, and 
  • Environmental commitments and actions taken 

The results were not exactly optimistic: “The environmental picture across all 10 companies is deeply concerning” (Claude). However, it did help us understand what was being done or committed to by each platform, as well as the people behind the platforms.  

While we could find little in the way of environmental data or commitments from Anthropic, Mistral, in comparison, are a world leader in transparency around their environmental impacts, having been the first to share a third-party verified Life Cycle Assessment (LCA) of their tool.  

It should be noted that while transparency is a critical part of AI development, it does not equal better impacts. Details on the environmental impacts of AI are for another blog (it’s another rabbit warren!), but the main considerations relate to AI’s significant electricity demands and the source of that electricity generation (noting that fossil fuels made up nearly 60% of global electricity generation in 2024), and its significant water use for cooling (noting that one in four people in the world do not have access to safe drinking water). 

When looking at governance and ethics, we were impressed with Anthropic’s commitments to the safety, security and transparency of their models. We are also encouraged by the fact that Anthropic are a Public Benefit Corporation, governed by their Long-Term Benefit Trust.  

In comparison, Mistral is a private company based in France and therefore subject to European Union law, including the EU Artificial Intelligence Act – the world’s first binding law on AI. This Act demonstrates the human-centred approach to regulating AI that we believe in.  

Now for the politics and militarisation of AI… 

Being an American company, Anthropic are inevitably caught up in the polarised politics and foreign policy of that country. It should be noted that they and their backers have made significant political donations to the Democratic Party, and most recently $20m to Public First Action, a political group that opposes federal efforts to quash state AI regulations. 

As touched on earlier, there is an AI arms race underway amongst global militaries, and until recently Anthropic were working with the US Department of War (formerly Defence). That relationship has ended, and Anthropic have now filed legal proceedings against the Department after it labelled the company a “supply-chain risk”, apparently because Anthropic refused to let its tools be used for mass surveillance and autonomous weapons. Major tech companies have publicly supported Anthropic’s legal action to overturn War Secretary Pete Hegseth’s unprecedented decision. Meanwhile, OpenAI have swooped in and signed the contract with the Department.   
 
Mistral, for their part, have signed a deal to provide AI technology to France’s military. 

Back to the political donations: the key difference for us at Go Well is that the Democrats, under Biden, had brought in regulation around AI requiring new safety assessments, equity and civil rights guidance, and research on AI’s impact on the labour market. Compare this to Donald Trump, who signed an executive order that seeks to halt any laws limiting artificial intelligence and to block states from regulating the rapidly emerging technology. President Biden also brought in The Biden Plan for a Clean Energy Revolution & Environmental Justice, which, while we don’t claim to understand it in detail, we know aligns with our work far more than the policies of President Trump. 

Lastly, when comparing the capabilities of the two tools against our use cases for AI, we found Claude to be significantly better. 

Following this research, we collectively decided to move forward with Claude (Anthropic). 

However, this is not a closed case. The reality is that AI is very much in its infancy, and there are no doubt many more developments, controversies, and considerations to come. We will continue to monitor these and ensure that our use of AI aligns with our values. Although we have gone with Claude for now, we will keep watching their corporate behaviour and that of other tools and their creators.  

Swapping tools does not eliminate the environmental impacts of AI, so this decision has not changed our AI Policy or our requirement for our team to use AI purposefully – never for tasks that cannot be shown to add value to the business or to our clients. It also hasn’t changed our commitment to always disclose our use of AI (except for social media posts).  

Ultimately, decisions around the use of AI are for each independent business to make, but we do urge everyone to do their own research and to be very careful not to become overly reliant on any one tool. Unlike the Facebook example, in the world of AI there are multiple alternatives to ChatGPT, so businesses can change with minimal disruption. For those of you who do decide to quit ChatGPT, here are some great tips on steps to take before you press cancel. 

This blog doesn’t even scratch the surface of all the developments in AI and its wide-ranging impacts, but we hope it has laid out our reasons for leaving ChatGPT and moving to Claude, as well as offering businesses insights to help with their own decision-making around the use of this technology. We want to finish on these parting reflections: 

We are at the most critical juncture in human history. We are the first generation to deeply understand the impacts of our actions on our planet and on other people, and we are the last generation that can turn it around before the planet is set on a path of change so significant that only a few will be able to inhabit it.  

AI will rapidly increase our speed down whichever path we choose, but whether that pathway leads toward a decarbonised future with a stable climate and global peace, or toward climate breakdown and global chaos, depends on how we let AI be used. The decisions we make now will have impacts for many generations, and Go Well is a business committed to facing that responsibility head-on. 

——– 

AI Disclosure: 

No AI tools were used in the writing or editing of this article (other than autocorrect). Claude and Le Chat were used to assist in researching for this article.  

Written by Nick Morrison, Founding Director at Go Well Consulting.