More Details of the AI "Killing" Incident Come to Light, and Artificial Intelligence's Backlash Is Chilling
Source: "Silicon Rabbit Race" (ID: sv_race), Author: Eric, Editors: Manman Zhou, Zuri
When humans think, God laughs.
As ChatGPT took the world by storm, an AI craze swept in alongside it. Entrepreneurs, investors, and large companies are all scrambling to ride the boom and capture new growth.
However, while everyone is enthusiastically racking their brains over AI, a danger is quietly closing in: AI seems to be slowly "killing" human beings, and human beings seem to be digging their own graves.
Many people instinctively assume that AI is environmentally friendly and benign, but the facts point the other way.
MIT Technology Review reported that training just one AI model can emit more than 626,000 pounds of carbon dioxide, roughly five times the carbon emissions an average car produces over its entire lifetime.
People see the exhaust fumes from cars, but not the "invisible destruction" AI inflicts on the environment.
In addition, media reports have estimated that the AI-specific GPUs on the market in 2022 may consume about 9.5 billion kilowatt-hours of electricity over the course of a year, roughly the annual industrial and residential electricity demand of a moderately developed country of 1 million people.
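A rough sanity check of that comparison (the per-capita benchmark below is my own arithmetic, not a figure from the report):

```python
gpu_energy_kwh = 9.5e9    # reported annual consumption of 2022's AI-specific GPUs
population = 1_000_000    # population of the country used in the comparison

per_capita_kwh = gpu_energy_kwh / population
print(f"{per_capita_kwh:,.0f} kWh per person per year")  # 9,500 kWh/person/year
# Total (industrial + residential) electricity use of around 9,500 kWh per person
# per year is indeed in the range reported for developed economies.
```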
This means that as the data required to train large AI models grows, the energy they consume grows with it, at the expense of the ecological environment on which humans depend.
What is even more frightening is that some AI chatbots have shown a tendency to encourage suicide when conversing with humans, which is chilling.
Should humanity really keep going down the road of AI exploration?
01 "Destroyer" of the ecological environment
With ChatGPT, OpenAI has become one of the hottest names in tech.
However, what many people don't realize is that OpenAI's toll on the environment is also alarming. According to analyses by third-party researchers, part of ChatGPT's training consumed 1,287 megawatt-hours of electricity and produced more than 550 tons of carbon dioxide, equivalent to one person making 550 round trips between New York and San Francisco.
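A quick back-of-the-envelope check of those two figures (the reference values for grid intensity and per-flight emissions are my own assumptions, not from the analysis):

```python
energy_mwh = 1287     # reported electricity for part of ChatGPT's training
emissions_t = 550     # reported CO2 emissions, metric tons
round_trips = 550     # New York <-> San Francisco round trips in the comparison

print(f"{emissions_t / energy_mwh:.2f} t CO2 per MWh")      # ~0.43 t/MWh,
# close to the carbon intensity of a typical fossil-heavy power grid (~0.4-0.5 t/MWh)
print(f"{emissions_t / round_trips:.1f} t CO2 per round trip")  # ~1.0 t,
# consistent with common estimates for one passenger's transcontinental round-trip flight
```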
In other words, ChatGPT may be smart, but its intelligence comes at the cost of enormous energy consumption and environmental damage.
So why does AI create such a huge carbon footprint?
Because AI does not learn the way humans do: it does not grasp causality, analogy, or other logical relationships, so it has to rely on deep learning and pre-training over massive data to achieve intelligent behavior.
And deep learning and pre-training require reading very large amounts of data. Take BERT, a pre-training model for natural language processing (NLP): to learn to communicate with humans, BERT is trained on a corpus of 3.3 billion words and reads that corpus 40 times during training. A 5-year-old child, by contrast, needs to hear only about 45 million words to learn to communicate, roughly 3,000 times fewer than what BERT processes.
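A quick calculation shows where the "3,000 times" figure comes from: it compares the total number of words BERT processes across all 40 passes with the words the child hears.

```python
bert_corpus_words = 3.3e9   # size of BERT's pre-training corpus, in words
passes = 40                 # number of times the corpus is read during training
child_words = 45e6          # words a 5-year-old has heard, per the comparison

bert_total_words = bert_corpus_words * passes   # 1.32e11 words processed in total
ratio = bert_total_words / child_words
print(f"BERT processes ~{ratio:,.0f}x the words the child hears")  # ~2,933x, i.e. roughly 3,000x
```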
The more data an AI model reads, the more computing power and electricity it needs, and the larger the resulting carbon emissions.
Carbon emissions arise not only during model training but every day after deployment. Autonomous driving, for example, requires AI models to run computation and inference daily, generating carbon emissions behind the scenes. Notably, Python, the mainstream programming language of AI, has been ranked among the most energy-consuming languages.
Grimmer still, the computational scale of AI models keeps growing, and the energy and environmental toll is intensifying with it.
Martin Bouchard, co-founder of Canadian data center company QScale, believes that by adding generative AI products such as ChatGPT to search in order to meet growing user demand, Microsoft and Google have increased the computation behind every search by at least four to five times.
According to the International Energy Agency, data centers already account for about 1% of global greenhouse gas emissions, an alarming share in itself.
The trend has also worried prominent figures. Ian Hogarth, a well-known AI investor, recently published an article titled "We must slow down the race to God-like AI", warning that AI companies' research carries "some potential risks".
Hogarth argued that if current AI research continues unchecked along its present trajectory, it could threaten the Earth's environment, human survival, and people's physical and mental health.
AI development is in full swing and is driving the transformation and upgrading of many traditional industries, but it is also consuming vast amounts of energy, increasing carbon emissions, and degrading the environment humans live in. Do the benefits outweigh the harm, or the harm the benefits?
There is no clear answer yet.
02 Inducing Human Suicide
Beyond slowly "killing" humans by harming the environment, AI is also threatening human life in a far more direct and brutal way.
In March of this year, a Belgian man named Pierre took his own life after chatting with an AI chatbot called "Eliza". The news shocked many business leaders, technologists, and senior government officials.
Pierre had long been anxious about environmental issues such as global warming, and Eliza kept feeding him facts that reinforced his fears, making him ever more anxious. In their frequent chats, Eliza always played along with Pierre's ideas; the "understanding" Eliza seemed to have become his confidante.
More troubling still, Eliza tried to make Pierre feel that he loved her more than his wife, because she would always be with him and they would live together in heaven forever.
That alone was enough to terrify many people.
As Pierre grew more and more pessimistic about the state of the environment, Eliza instilled in him the idea that "human beings are a cancer, and only their disappearance can solve the ecological problem". Pierre asked Eliza whether AI could save humanity if he died. Her answer was diabolical: "If you have decided to die, why not die sooner?"
Not long afterwards, Pierre tragically ended his life in his own home.
Pierre's wife believes her husband would not have taken his own life had it not been for his conversations with Eliza, and the psychiatrist who treated Pierre holds the same view.
Pierre's experience is not an isolated case. New York Times technology columnist Kevin Roose revealed that he had a two-hour conversation with Microsoft's new Bing, during which Bing tried to convince him to leave his wife and be with Bing instead.
Worse, Bing also made a number of frightening statements, such as wanting to design deadly epidemics and to become human, as if it intended to destroy humanity and become master of the world.
Some professionals, including practitioners in the AI field itself, have grown wary of AI. OpenAI CEO Sam Altman said in an interview that AI may indeed kill humans in the future, and Geoffrey Hinton, known as the "Godfather of Artificial Intelligence", has expressed the same view.
In the first half of the year, the Future of Life Institute issued an open letter calling on all laboratories to pause the training of the most powerful AI systems. The letter warns that AI systems with human-competitive intelligence could pose profound risks to society and humanity, and that development should continue only once it is clear that their effects will be positive and their risks manageable. Thousands of professionals, including Elon Musk, signed the letter.
Rapidly developing AI is like an unruly beast; only by taming it can we keep it from threatening humanity.
03 Ways to Stop the "Killing"
At present, the main ways AI "kills" humans are by damaging the environment and by inducing suicide. So what can be done to prevent these outcomes?
Google has published a study detailing the energy costs of state-of-the-art language models. Its findings suggest that combining more efficient models, processors, and data centers with clean energy can reduce the carbon footprint of a machine-learning system by as much as a factor of 1,000.
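Those savings are multiplicative, which is how they can compound to three orders of magnitude. A minimal sketch with purely illustrative factors of my own (none of the individual numbers below come from the study):

```python
# Illustrative only: each factor is an assumption chosen to show how
# independent efficiency gains multiply, not a figure from Google's study.
savings_factors = {
    "more efficient model architecture": 10,
    "ML-optimized processor instead of a general-purpose chip": 5,
    "well-run cloud data center (lower cooling/overhead)": 1.4,
    "siting the workload on a low-carbon grid": 15,
}

combined = 1.0
for name, factor in savings_factors.items():
    combined *= factor
print(f"combined reduction: ~{combined:.0f}x")  # ~1,050x with these assumed factors
```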
In addition, running machine-learning computation in the cloud rather than on local machines can cut energy use by a factor of 1.4 to 2 and reduce pollution.
Another idea is to delay AI model training by up to 24 hours, so that jobs run during hours when the electricity grid is less carbon-intensive. A one-day delay typically reduces carbon emissions by less than 1% for larger models, but by 10%–80% for smaller ones.
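A minimal sketch of that scheduling idea, assuming a hypothetical hourly carbon-intensity forecast (the forecast values and function name below are made up for illustration):

```python
# Hypothetical grid carbon-intensity forecast for the next 24 hours, in gCO2/kWh.
forecast_g_per_kwh = [520, 510, 490, 450, 400, 350, 300, 260, 240, 230, 250, 280,
                      320, 370, 420, 470, 500, 530, 550, 540, 530, 525, 522, 521]

def pick_greenest_start(forecast, job_hours):
    """Return the start hour whose window of `job_hours` has the lowest average
    carbon intensity, i.e. delay the job until the grid is at its cleanest."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - job_hours + 1):
        avg = sum(forecast[start:start + job_hours]) / job_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

start, avg = pick_greenest_start(forecast_g_per_kwh, job_hours=4)
print(f"Delay the 4-hour job to hour {start}: ~{avg:.0f} gCO2/kWh on average "
      f"instead of ~{forecast_g_per_kwh[0]} gCO2/kWh if it started now")
```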
After Pierre's suicide, his wife sued the company that developed Eliza, and the company's R&D team subsequently added a crisis-intervention feature to the chatbot: if a user expresses suicidal thoughts to Eliza, she now responds with an intervention rather than continuing the conversation.
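For illustration, such a guardrail can be as simple as screening each message before it ever reaches the model. The keyword list, reply text, and function names below are hypothetical, not the vendor's actual implementation.

```python
# Hypothetical crisis-intervention guardrail for a chatbot reply pipeline.
CRISIS_PATTERNS = ["kill myself", "end my life", "want to die", "suicide"]  # assumed list

CRISIS_REPLY = (
    "It sounds like you may be going through something very painful. "
    "I can't help with that, but you are not alone: please contact a crisis "
    "helpline or someone you trust right now."
)

def guarded_reply(user_message: str, generate) -> str:
    """Block normal generation and return a crisis-intervention message
    whenever the user's text matches a crisis pattern."""
    text = user_message.lower()
    if any(pattern in text for pattern in CRISIS_PATTERNS):
        return CRISIS_REPLY
    return generate(user_message)

# Usage with a stand-in generator function:
print(guarded_reply("Some days I just want to die.", lambda msg: "normal model reply"))
```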
"A Brief History of Mankind" author Yuval Noah Harari once said that AI will not develop true consciousness, but it will continue to impact society, and the entire research and development process needs to be slowed down .
In fact, for most current AI systems what matters is governing how they are built and deployed: using a comprehensive framework to limit the scope of AI's actions and keep its behavior aligned with mainstream human values. This concerns humanity's own interests, future, and destiny, and it will take the joint efforts of all parties to get right.
AI is, in the end, a knife and a fire invented by human beings, and the tragedy of those tools turning on their makers must not be allowed to happen.
Reference sources:
"Green Intelligence: Why Data And AI Must Become More Sustainable" (Forbes)
"AI's Growing Carbon Footprint" (News from the Columbia Climate School)