Looking Back at 2025: How the AI Year Unfolded

By Ellen Björnberg
December 4, 2025


When we shared our outlook for 2025 at the end of last year, it was based on early signals we were already seeing across the organisations and technologies we work closest to. Now, as the year wraps up, it feels natural to revisit those themes, not to evaluate predictions, but to understand how those signals actually evolved as adoption scaled.

Over the past twelve months, several patterns became much clearer. Some moved faster than expected, others took shape in new directions, and a few surfaced challenges that weren’t yet visible a year ago. Together, they offer a useful lens on where momentum is building, and what might matter most as we move into the next cycle of AI development.

AI Foundations

1. Agents Beyond the Chat Interface

When we looked ahead at where AI was heading, one shift felt particularly important: the move from chatbots to real, operational agents. That turned out to be true, but in a much more grounded way than the hype suggested.

The biggest impact came from agents designed for specific, tightly scoped tasks, not “autonomous employees.” Teams used them to produce structured outputs, support documentation, or run predictable multi-step workflows. We saw this especially in:

• regulatory and compliance work

• document-heavy internal processes

• public-sector reviews and summarisation

• IT and DevOps automation, where quick wins were easiest to capture

The pattern was consistent: the most successful agents stayed small, focused and supervised. Not because ambition was lacking, but because reliability and traceability still matter. Meanwhile, the tooling around agents matured. Frameworks like LangGraph, access protocols such as MCP, and more robust orchestration via the Agent API made it much easier to turn prototypes into something stable enough for production.
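The "small, focused and supervised" pattern can be sketched in a few lines of plain Python: an agent that can only call explicitly whitelisted tools and records every step for traceability. The `ScopedAgent` class and tool names below are illustrative assumptions, not part of LangGraph, MCP or any real framework.

```python
from typing import Callable

class ScopedAgent:
    """A minimal scoped agent: explicit tool whitelist plus an audit trail."""

    def __init__(self, tools: dict[str, Callable[[str], str]]):
        self.tools = tools  # explicit whitelist, no dynamic lookup
        self.trace: list[tuple[str, str, str]] = []  # (tool, input, output)

    def call(self, tool: str, payload: str) -> str:
        if tool not in self.tools:
            raise PermissionError(f"tool {tool!r} is not whitelisted")
        out = self.tools[tool](payload)
        self.trace.append((tool, payload, out))  # every step is auditable
        return out

agent = ScopedAgent({
    "summarise": lambda text: text.split(".")[0] + ".",
    "word_count": lambda text: str(len(text.split())),
})
summary = agent.call("summarise", "Agents stayed small. They stayed focused.")
```

The point of the sketch is the constraint, not the tools: anything outside the whitelist fails loudly, and the trace gives reviewers the traceability that made these agents production-viable.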

2. AI Moves Into the Operating System

One of the clearest shifts this year was how AI quietly became part of the operating system rather than something users open in a browser tab.

Apple rolled out built-in summarisation, rewriting and more contextual search. Microsoft took a similar path with a more privacy-aware version of Recall and Copilot runtime integrations. Android and Chrome embedded Gemini Nano for secure, on-device assistance.

None of this arrived with huge fanfare, but it changed how people interacted with their devices. “AI inside the OS” became less of a concept, and more of a default expectation.

3. Blurring the Lines Between Agents and LLMs

The distinction between “agent” and “model” continued to soften as architectures became more modular and workflow-aware.

A few developments made this especially clear:

• OpenAI and Anthropic expanded tool use and multi-step orchestration directly inside their models.

• Mixture-of-experts models (including Mistral’s) showed how specialisation can be activated dynamically within one unified system.

• LangGraph adoption grew, letting LLMs manage state, call tools and coordinate sub-agents without custom glue code.

• Multi-model routing frameworks became more common, allowing workflows to mix small parsing models with larger reasoning models.
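The multi-model routing idea reduces to a simple rule. Here is a sketch with invented model names and an invented length threshold: short, structured tasks go to a cheap utility model, while reasoning-heavy or long prompts go to a larger one.

```python
def route_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Pick a model tier for a request (placeholder names, illustrative rule)."""
    if needs_reasoning or len(prompt.split()) > 200:
        return "large-reasoning-model"
    return "small-utility-model"

# A short extraction task stays on the cheap tier.
choice = route_model("Extract the dates from this paragraph.")
```

Real routers add latency budgets, fallbacks and per-provider health checks, but the core decision is usually this small.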

4. RAG Beyond the Vector Database

This was the year organisations started admitting what they already suspected: vector search alone isn’t enough for many real-world use cases. Graph-enhanced retrieval filled that gap. Approaches like GraphRAG and Lazy-GraphRAG saw real adoption, especially in environments with interconnected internal data, research archives, compliance repositories, product documentation and knowledge bases.

Major platforms picked up on the pattern too. Snowflake, Databricks and MongoDB expanded their graph and hybrid search capabilities, making relationship-aware retrieval far easier to build. Vectors still matter, but they now sit inside richer retrieval pipelines that better reflect how organisations actually store information.
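A minimal sketch of the hybrid idea, with word overlap standing in for real embedding similarity: retrieval starts from the top "vector" hits, then expands along explicit links between documents. The corpus, link structure and scoring are all illustrative assumptions.

```python
def retrieve(query: str, docs: dict[str, str],
             links: dict[str, list[str]], k: int = 2) -> list[str]:
    """Vector-style seeding followed by graph expansion (toy version)."""
    def score(text: str) -> int:
        # Stand-in for embedding similarity: shared-word count.
        return len(set(query.lower().split()) & set(text.lower().split()))

    seeds = sorted(docs, key=lambda d: score(docs[d]), reverse=True)[:k]
    expanded = list(seeds)
    for doc_id in seeds:  # graph expansion: pull in linked documents
        for neighbour in links.get(doc_id, []):
            if neighbour not in expanded:
                expanded.append(neighbour)
    return expanded
```

The graph step is what vector search alone misses: a compliance memo that never mentions the query terms still surfaces if it is linked to a document that does.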

5. Giving Voice to AI

Voice continued to improve this year, but adoption stayed measured rather than explosive. The technology took a clear step forward: OpenAI, ElevenLabs and Google all released more natural, responsive voice models. Yet most organisations treated voice as a complement instead of a primary interface.

A few pilots appeared in support flows, onboarding tools and internal assistants, but text remained the default for anything requiring precision, privacy or auditability. The result: voice is getting better, but it’s still finding its place.

6. Model Migrations Become Routine

Switching models used to be an occasional, high-effort event. This year, it became routine. Companies leaned into multi-model setups, using smaller models for utility tasks, larger ones for reasoning, and swapping providers when performance or pricing shifted. Tools like LiteLLM made routing trivial, while LangChain and LangSmith helped teams validate behaviour and catch regressions during migrations.

Deprecations and fast version cycles meant teams moved models more often than expected, and many now treat LLMs less like monolithic systems and more like interchangeable components.
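The "interchangeable components" mindset can be sketched as a registry behind one call signature. To be clear, this is not LiteLLM's actual API, just an illustration, with invented model names, of why swapping providers becomes a one-line change rather than a rewrite.

```python
from typing import Callable

# Registry mapping model names to callables with a shared signature.
REGISTRY: dict[str, Callable[[str], str]] = {}

def register(name: str, fn: Callable[[str], str]) -> None:
    REGISTRY[name] = fn

def complete(model: str, prompt: str) -> str:
    """Single call signature regardless of which provider backs the model."""
    return REGISTRY[model](prompt)

# Two providers, same behaviour: the second is a drop-in replacement.
register("provider-a/fast", lambda p: p.upper())
register("provider-b/fast", lambda p: p.upper())
```

Behind an interface like this, a deprecation becomes a config edit plus a regression run, which is what tools like LangSmith were used for during migrations.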

7. Transformer Architecture Finds Its Next Jobs

Transformers didn’t revolutionise entirely new categories this year, but they did expand meaningfully into areas where they create clear value. Time-series forecasting was one of the biggest examples. Models like PatchTST and Chronos found their way into energy, finance and logistics teams looking for more accurate predictions and anomaly detection.

Healthcare saw similar momentum, with transformer-based early-warning systems running in pilots across Europe and the US. Cybersecurity platforms (including Elastic’s ecosystem) increasingly turned to attention-driven approaches for log analysis and behavioural modelling.
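The patching idea behind time-series models like PatchTST can be shown in isolation: the series is sliced into fixed-length, possibly overlapping windows that the transformer then treats as tokens. A minimal sketch, with no model attached:

```python
def to_patches(series: list[float], patch_len: int,
               stride: int) -> list[list[float]]:
    """Slice a series into fixed-length patches (the transformer's 'tokens')."""
    return [series[i:i + patch_len]
            for i in range(0, len(series) - patch_len + 1, stride)]

# With stride < patch_len, each patch overlaps its neighbour.
patches = to_patches([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], patch_len=3, stride=2)
```

Patching is what makes attention tractable on long series: the model attends over a handful of patches instead of thousands of raw time steps.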

Business and Industry Impact

8. Advertising and AI Responses

We didn’t see explicit ads inside model outputs, but commerce still crept closer to the interface.

OpenAI introduced “Buy with ChatGPT”, and early partners like Shopify and Stripe tested conversational purchasing flows. This shifted AI from being a search-and-summarise tool to a transactional channel.

LLM-SEO also became more visible as companies started optimising how their content is interpreted by AI systems - something we saw clearly in our own analysis of Nordic and global websites earlier this year.

9. The SaaS Model Under Pressure

Across the year, more organisations questioned whether every workflow really requires a SaaS subscription. AI-assisted development changed the equation.

One of the clearest signals came from the rise of AI code-generation platforms. Lovable, in particular, became one of the most talked-about AI startups of the year, helping teams generate production-ready applications with minimal engineering overhead. Tools like Cursor and Replit followed the same momentum, lowering the barrier for internal teams to create single-purpose software or lightweight AI agents that solve very specific problems.

This didn’t replace SaaS, but it changed expectations. Companies started questioning whether they needed full-scale platforms when AI could generate the exact functionality they required, tightly integrated with their own workflows, data and infrastructure.

10. The Rise of New Foundational Model Providers

2025 continued to broaden the foundational model ecosystem, even as the United States maintained a clear lead through OpenAI, Anthropic, Google and Meta, who still drive the most capable frontier models.

Alongside these dominant players, several new entrants gained momentum. A recent example is Amazon’s release of its Nova and Nova 2 models, marking a noticeable step in Amazon’s move from infrastructure-heavy AI to developing its own foundation models. While not a defining shift for the industry, it’s a fresh signal that more major cloud providers are now entering the model race directly.

Beyond the U.S., competition expanded globally. Mistral AI continued to strengthen Europe’s role in the open-source landscape, while companies like Alibaba, ByteDance, and Tencent pushed forward in Asia with increasingly sophisticated model families. Smaller, specialised labs also carved out space in areas such as speech, security and multimodal understanding.

Interest in alternative architectures, including state-space models like Mamba, persisted as organisations explored more efficient ways to scale, even though transformers remained the dominant backbone for most production deployments.

11. Compute Costs and Cloud Competition

This was the year compute costs moved from an engineering challenge to an executive priority. Alternative providers like Modal, Together AI, Predibase and RunPod gained traction by offering flexible, lower-cost GPU access. This didn’t threaten the big clouds, but it changed the dynamic - for the first time in a while, organisations had realistic options.

Tooling also played a major role. Lighter-weight fine-tuning, LoRA adapters and more efficient inference stacks helped teams run workloads on smaller footprints. Some companies even brought targeted workloads in-house for cost reasons.

We also saw deeper optimisation efforts. Our own work on token tariffs and custom tokenizers reflected a broader shift: compute is no longer a fixed cost, it’s something that can be engineered, negotiated and improved.
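The shift is easy to illustrate: once per-token prices are explicit, cost becomes a quantity you can compute per workload and optimise. The model names and prices below are invented for the example and do not reflect any provider's actual rates.

```python
# Illustrative prices only (USD per 1,000 tokens), not real rates.
PRICE_PER_1K = {
    "small-utility-model": 0.0002,
    "large-reasoning-model": 0.003,
}

def estimate_cost(model: str, prompt_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one call from token counts and a price table."""
    rate = PRICE_PER_1K[model]
    return round((prompt_tokens + output_tokens) / 1000 * rate, 6)

cost = estimate_cost("large-reasoning-model", 1200, 300)
```

Put a function like this in front of a router and the choice between model tiers stops being a guess: every prompt carries a measurable price tag.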

12. Sovereign AI Clouds

Sovereign AI moved from regulatory concept to concrete infrastructure projects. As organisations faced stricter requirements for data locality and auditability, demand for region-bound AI deployments increased across both public and private sectors.

A few developments stood out:

• France expanded Bleu, its sovereign Microsoft-based cloud.

• Germany accelerated its T-Systems + Google Cloud sovereign region.

• The Nordics introduced sector-specific setups, particularly in healthcare.

• The UAE and Saudi Arabia invested heavily in domestic AI capacity to keep sensitive data inside national borders.

Society and AI

13. Proof of Personhood

The 2024 elections revealed the significant risks posed by AI-driven misinformation, such as deepfakes and synthetic political ads, which blurred the line between fact and fiction and undermined public trust. In response, 2025 saw increased awareness and the rollout of new countermeasures, including improved content labeling, watermarking pilots, and enhanced identity verification tools. Despite these advances, no single solution has proven fully effective, and the rapid evolution of AI technologies continues to present ongoing challenges for detection and prevention.

14. IP Battles and New Rules

Legislation and IP enforcement accelerated noticeably this year as questions about training data, licensing and creator rights moved from debate into courts and regulatory processes. Several cases and policy moves stood out:

• The New York Times vs. OpenAI revealed how copyrighted material had been used in training datasets, setting up a precedent-defining decision in the US.

• The music industry escalated its response to AI-generated songs, with major labels filing and settling cases involving platforms like Suno and Udio, raising new questions about derivative rights and compensation.

• Japan, India and Brazil began drafting lighter or sector-specific AI copyright rules.

15. A More Sophisticated Threat Landscape

AI made attackers faster, louder and harder to detect. Deepfake phone scams rose sharply, particularly targeting seniors, and banks responded with stronger authentication layers while several regions launched public awareness campaigns. Even EU institutions flagged the same trend, with a recent European Parliament brief highlighting the rapid rise of AI-enabled cybercrime.

Cybersecurity teams shifted toward model-aware defence, adding prompt-injection monitoring, model-manipulation detection and deepfake analysis as standard capabilities. Large financial and security players expanded through acquisitions to keep up, and regulatory divergence across regions made global risk management increasingly complex.

16. Energy Consumption in Focus

Energy use became one of the most visible pressure points as AI scaled. Growing model sizes and rapid enterprise adoption intensified scrutiny around the environmental impact of training, inference and expanding data-centre capacity.

A few developments stood out:

• Tech companies deepened their nuclear partnerships. Google expanded its work with Kairos Power, while Meta and Amazon signed new long-term power agreements tied to emerging nuclear projects - signalling a growing interest in cleaner, high-capacity energy sources as AI demand increases.

• Europe pushed for stricter transparency. New EU reporting rules required more detailed disclosure of energy use and emissions, and countries including France, the UK, the Czech Republic and Poland accelerated national nuclear investment plans.

• The US debate intensified. Rapid AI build-out continued under comparatively light regulatory oversight, speeding up deployment but drawing criticism over grid strain and carbon intensity.

• GPU demand continued to reshape infrastructure. Rising adoption of large models drove sustained investment in new data-centre capacity across the US, Europe and the Middle East.

Looking Ahead

As the year comes to a close, one thing is clear: AI has moved from experimentation to infrastructure. The biggest shifts of 2025 weren’t the loudest ones, but the ones that quietly reshaped how organisations build, operate and make decisions. Agents became part of daily workflows, operating systems absorbed AI by default, new regulatory questions took centre stage, and energy, compute and IP moved from technical detail to strategic priority.

These patterns don’t just explain the year behind us, they also offer clues about the forces that will shape the year ahead. We’ll explore those early signals, and what they might mean for the next wave of change, in our 2026 predictions coming next week.

Stay tuned.
