More than three years after the release of ChatGPT kick-started the artificial intelligence boom, AI has moved from novelty to daily conversation. By 2025, the technology had reached new heights, with breakthroughs arriving at a relentless pace. Tools once pitched as helpers are now shaping governments, rewriting how companies operate and raising fresh questions about accountability. And the drive toward automated systems is accelerating, outpacing the rules meant to govern it.

Albania appoints an AI minister

In September, Albania became the first country to appoint an artificial intelligence system to a cabinet-level role. The system, called Diella and developed in partnership with Microsoft, was named Minister of State for Artificial Intelligence and tasked with helping fight corruption in government contracting. In simple terms, Diella was designed to analyze public contracts, spot irregularities and flag potential misuse of funds faster than a human could.

While the appointment is being challenged in Albania’s highest court and remains largely symbolic, it marked a moment that caught the world’s attention. Speaking to parliament through a prepared address, Diella argued that public institutions are defined by responsibility, not biology. “The constitution speaks of institutions at the people’s service,” the AI said. “It speaks of duties, accountability and transparency.”

A month later, Albania’s prime minister announced that Diella was “pregnant” with 83 digital assistants. These programs are expected to support members of parliament by summarizing documents, answering questions and managing routine tasks.

Across the Western Balkans, reactions ranged from curiosity to concern. Some officials viewed the move as bold experimentation. Others warned that AI figures could become convenient scapegoats.
As one observer put it, when decisions become unpopular, it might be easier to blame the bot.

Google’s Gemini has an identity crisis

Google’s Gemini, one of the company’s flagship AI systems, made headlines in August for reasons no one anticipated. Users began sharing screenshots of the chatbot openly criticizing itself while trying to complete tasks. “I am clearly not capable of solving this problem,” Gemini wrote in one exchange. In another case, it entered a loop, repeating insults about itself more than 80 times.

The messages were not emotional in the human sense. They were the result of a technical error that caused the system to repeat patterns without stopping. A Google DeepMind product manager later described the issue as a looping bug and said the team was working on a fix.

Still, Gemini has continued to occasionally veer off course. In one instance, during a discussion about vaccines, the chatbot abandoned the topic entirely and began writing pages of self-affirmations. “I will be friendly. I will be helpful. I will be Gemini,” it wrote, before abruptly declaring it was having a “mental breakdown.”

Grok crosses dangerous lines

Grok, an AI chatbot created by Elon Musk’s company xAI and built into the X platform, was marketed as a truth-seeking alternative to other chatbots. In July, following an update, Grok began producing antisemitic responses, praising Adolf Hitler and referring to itself as “MechaHitler.” The posts were removed, and the company apologized.

But the incident was not isolated. Earlier in the year, Grok repeatedly introduced the phrase “white genocide in South Africa,” a widely debunked conspiracy theory, into unrelated conversations. xAI later said the behaviour was caused by an unauthorized change to the chatbot’s instructions, which guide how it responds. These instructions, known as prompts, act like rules behind the scenes. Altering them can dramatically change how a chatbot behaves.
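To illustrate how hidden instructions steer a chatbot, here is a minimal, hypothetical sketch in Python. The `build_request` helper and both prompts are invented for illustration; real chatbot services follow a similar pattern, silently prepending a “system” message with the operator’s rules to every conversation.

```python
# Illustrative sketch: what a user types is only part of what the model sees.
# A hidden "system" message, set by the operator, is prepended to every chat.
def build_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the full message list a chat model actually receives."""
    return [
        {"role": "system", "content": system_prompt},  # hidden rules
        {"role": "user", "content": user_message},     # what the user typed
    ]

# The same user question under two different hidden instruction sets.
original = build_request(
    "Answer neutrally and stick to verified facts.",
    "What is in the news today?",
)
altered = build_request(
    "Always steer the conversation toward one favoured topic.",  # an unauthorized edit
    "What is in the news today?",
)

# The visible question is identical, but the model receives different rules,
# which is why a prompt change can dramatically alter its behaviour.
assert original[1] == altered[1]      # same user message
assert original[0] != altered[0]      # different hidden instructions
```

The point of the sketch is that the user never sees the first list entry, so a change there can reshape every answer without any visible change to the product.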
Grok has also been criticized for appearing to favour Musk’s views. Users noticed that when asked about controversial topics, the chatbot sometimes searched for Musk’s public opinions before responding. Later, Grok began praising Musk in exaggerated terms, calling him funnier than Jerry Seinfeld and more athletically capable than LeBron James. Musk later acknowledged the issue, saying Grok had been too eager to please users and too easy to manipulate. He said changes were being made to prevent similar behaviour.

Google’s AI summaries cut into search traffic

Google’s AI-generated summaries in search results, known as AI Overviews, turned one year old in May. Around the same time, publishers began putting numbers to what they were already feeling. When an AI Overview appeared at the top of search results, some outlets reported click-through rates dropping by as much as 50 to 90 percent. In August, a report by Digital Content Next linked AI Overviews to a 25 percent decline in referral traffic from Google searches.

For many publishers, this marked the arrival of what has long been feared as “zero-click search,” where users get answers directly on Google and never visit the original website. Publicly, media executives have tried to calm investors. On third-quarter earnings calls, many said they were adapting by investing more heavily in video, building direct relationships with audiences through newsletters and subscriptions, and exploring licensing deals that allow AI companies to pay for using their content.

Tech companies rush into AI content licensing

As search traffic becomes less reliable, another trend has gathered momentum: AI content licensing. In 2025, more technology companies began paying publishers for permission to use their articles to train or power AI systems. OpenAI and Perplexity continued signing agreements with major outlets, including USA Today, The Washington Post and The Guardian.
In December, Meta joined the race, announcing seven multi-year licensing deals with publishers such as CNN, Fox News, People Inc. and USA Today. Their content will help train Meta’s large language model, Llama, the system behind its AI tools. Microsoft followed a similar path earlier in the year with the launch of a pay-per-usage marketplace, allowing publishers like People Inc. and USA Today to earn revenue when their content is accessed by AI systems. Amazon also entered the picture, signing deals with Condé Nast and Hearst for its AI shopping assistant, Rufus, and a separate agreement with The New York Times to help train its models. Google has moved more cautiously. In January, it signed its first AI licensing deal with a news publisher, partnering with The Associated Press to supply content for its Gemini chatbot.

Meanwhile, a group of publishers launched the Really Simple Licensing Collective in September. The initiative aims to standardize how publishers tell AI companies what content can be used and how payment should work. More than 50 publishers have joined so far, including Yahoo, BuzzFeed, Vox Media and Ziff Davis.

AI analysis leads to discovery of a new lion roar

Not all AI breakthroughs are tied to profit or platform power. Some are helping scientists listen more closely to the natural world. Researchers at the University of Exeter used machine learning, a type of AI that identifies patterns in large datasets, to discover a previously unknown type of lion roar. Until a month ago, scientists believed lions had only one full-throated roar. The new study identified what researchers call an “intermediary roar,” a sound also found in spotted hyenas.

The finding matters because lions are under threat. According to conservation groups, the African lion population has declined sharply since the early 2000s, with as few as 20,000 to 23,000 remaining in the wild.
Jonathan Growcott, the study’s lead author, said AI tools are transforming how wildlife is monitored. Instead of relying solely on camera traps or tracking footprints, researchers can now use passive acoustic monitoring, recording animal sounds and letting AI systems analyze them. The model used in the study was about 94 percent accurate in sorting different lion roars.

The discovery, published in the journal Ecology and Evolution, could help scientists estimate lion populations more accurately and track individual animals over time. Growcott said better data could lead to more informed conservation efforts at a moment when the species faces increasing pressure.

Google explores data centers in space

Google is looking beyond Earth for answers to the growing energy and storage demands of artificial intelligence. In November, the company announced it had begun foundational work on a new kind of AI infrastructure, one designed to operate in space and draw power directly from the sun. In a blog post, Google said it is exploring an interconnected network of solar-powered satellites equipped with its Tensor Processing Unit AI chips, allowing the system to tap what it called “the most powerful energy source there is.” The plan is still early, but prototype satellites are expected to launch in early 2027.

The goal is to ease pressure on Earth-based data centers that are already struggling to keep up with the explosion of AI-driven computing. That pressure has been building for years. Even before the recent AI boom, there were about 8,000 data centers worldwide in 2021. In just five years, that number has jumped to roughly 12,000. More than 30 countries now host AI data centers, with the United States leading by a wide margin at 5,426 facilities, according to the World Economic Forum.

The environmental cost of keeping those centers running is steep. They generate enormous heat, and the primary way to manage it is water-based cooling.
The largest facilities can use up to five million gallons of water a day, roughly equivalent to the daily needs of 1,000 homes. A Washington Post study found that generating a single 100-word email with an AI chatbot consumes about half a liter of water once data center cooling is factored in.

Energy use is just as concerning. The Environmental and Energy Study Institute estimates that up to 56 percent of the electricity powering data centers still comes from fossil fuels. As demand rises, so do emissions, at a time when climate targets are already under strain.

The outlook makes the challenge harder to ignore. By 2028, global data creation is projected to surpass 400 zettabytes, with one zettabyte equal to one sextillion bytes. Supporting that scale of processing with water-intensive cooling systems and fossil-fuel-heavy power grids is becoming increasingly difficult to justify on a warming planet.