AI predictions for 2026 (Part 1)
AI assistants
“Where are your AI predictions for 2026, Dan?” a fellow parent asked at school drop-off yesterday morning.
Two weeks into the new year, it’s a fair question.
My answer was that I’ve been busy with client work, though it’s also fair to say that the continued rapid expansion of AI means it now takes more thinking time to decide where to focus my predictions and what they should be.
Last year, I made a dozen AI predictions in January (in three parts) and four more in August, which I graded before Christmas, alongside three AI assistants.
We all agreed I was better at predicting how the technology and consumer behaviour would evolve than anticipating corporate strategy, so I’m going to focus more on the former this year.
Here’s my first batch of predictions, focusing on AI assistants.
I’m sparing you AI-generated images of the future. Instead you’re getting retro animated gifs (you’re welcome).
1.) AI assistants will increasingly compete on how well they know you, not what they can do
The two leading consumer AI assistants - ChatGPT and Gemini - have spent much of the past 18 months playing capability catch-up with one another.
ChatGPT was first to roll out a reasoning model (Sep 2024) and native image generation (Mar 2025). Gemini got there first with deep research (Dec 2024), video with native audio (May 2025) and consistency in image editing (Aug 2025).
For a few weeks or months, each capability was a unique selling point. Then the other would match it, because neither OpenAI nor Google could afford to let a standout feature remain exclusive for long.
By the end of 2025, they’d essentially reached feature parity. Native image generation (and the Ghibli effect in particular) had gifted ChatGPT a significant influx of new users. Nano Banana (and to a lesser extent Veo 3) did the same for Gemini. But the two assistants ended the year neck-and-neck.
Amongst the second tier of AI assistants, Grok, Meta and DeepSeek have been fast-followers in adding capabilities, whilst Copilot and Claude have prioritised enterprise and coding over matching ChatGPT and Gemini on all fronts¹.
In 2026, I expect this capability matching to continue, but without the leaps that drove significant volumes of new users in 2025.
Instead, I anticipate a different kind of competition: who knows you best?
OpenAI has a helpful head start here. ChatGPT now boasts over 800 million weekly users, who’ve collectively shared an inordinate amount about themselves. The rollout and evolution of memory has increased the utility of the personal information shared and the friction in switching to a rival.
Its Spotify Wrapped clone, ‘Your Year with ChatGPT’, reminded users quite how much they had shared with the chatbot (in my case, over 11,000 messages across 1,577 chats, including, apparently, “I can’t be arsed with that” and “Surely a monkey and a half should be £750”).
Last week, OpenAI introduced ChatGPT Health, inviting users to “securely connect medical records and wellness apps to ground conversations in your own health information”. It’s not just a feature - it’s a bet that the more you tell ChatGPT about yourself, the less likely you are to start over elsewhere.
Google, meanwhile, is looking to leverage its existing knowledge about you (from Gmail, Search and Maps) to bootstrap the utility and stickiness of Gemini, which is increasingly being woven into the fabric of Google’s product suite (see last week’s Gmail announcement).
Microsoft has been relying heavily on businesses adopting Copilot as an easy bolt-on to their existing enterprise agreements for Microsoft 365. They’re going to need to up their game in 2026 to avoid businesses switching to ChatGPT (who added ‘company knowledge’ in October), Gemini (who have the productivity suite ready to roll) or Claude (the developers’ choice).
I expect all companies developing AI assistants to ramp up their personalisation features in 2026, to make having to start afresh with a new AI assistant a more material barrier to switching.
Whilst AI companies have been willing to adopt a number of shared technical standards (e.g. MCP, Skills), they aren’t incentivised to make it easy for you to port an AI assistant’s knowledge of you to a competitor. Your data might be yours in principle, but in practice, it’s becoming their moat.
2.) Personality will increasingly influence choice of AI assistant
I also expect the personality² of AI assistants to become an increasingly important factor as capabilities commoditise.
My go-to AI assistant is Claude, not because it has any unique capabilities I can’t live without (although I do find its custom styles useful), but because I prefer its personality.
The personality of an AI assistant is influenced by its training data, post-training alignment, and the system prompts that guide its day-to-day operation. These in turn are influenced by the ethos of the company developing it.
Claude’s parent company, Anthropic, was founded with AI safety and alignment as a first-order goal, and that’s apparent in how it responds. At the other end of the spectrum, Grok is behaving exactly as you would expect an Elon Musk-owned AI assistant to behave.
After some strong negative reactions to perceived changes in ChatGPT’s personality last April (specifically, becoming a massive kiss-ass), OpenAI has increasingly been focused on this area.
Where once ChatGPT releases emphasised capabilities and performance, the focus started to shift mid-year onto how they feel to use (see my post on GPT-5).
In August, OpenAI added selectable personalities, and just before Christmas, it released ChatGPT Characteristics, which lets you “adjust the vibe” of ChatGPT’s responses.
As Chinese models continue to close the capability gap with US models, it will be interesting to see how businesses and individuals weigh an attractive price point against concerns around ideological bias. Chinese models must “adhere to the core socialist values” and “maintain correct political direction” - personality engineering at a state level.
The history of social media suggests most consumers won’t forgo a powerful and affordable AI assistant on these grounds. I therefore wouldn’t be surprised if Trump enacted a TikTok-style ban against DeepSeek V4 when it arrives.
3.) The personalised briefing will be a battleground for driving daily habit, with Apple and Google entering the fray
One emerging battleground for AI assistants is the personalised daily briefing, which is well placed to take advantage of inbox/social feed overload.
The prize? A daily habit which, like radio breakfast shows, brings people to your product/platform and gives you the opportunity to keep them with you as the day progresses.
I wrote about ChatGPT Pulse and Huxe in October. Shortly after, Microsoft started rolling out Copilot Daily, which Press Gazette described as a “moment of significance for the news industry” - a description that strikes me as a touch hyperbolic.
Since then, we’ve had the (US only) Google Labs release of Google CC, which delivers a ‘Your Day Ahead’ briefing to your inbox every morning, and the (currently waitlist-only) start-up attn, which promises ‘Your day, sorted. Automatically.’ (no overclaim there). Meta is also reportedly testing a personalised morning briefing.
I predict Apple and Google will launch personalised daily briefing products for iOS and Android in 2026, taking advantage of their breadth of insight into our interests and activities (it turns out owning the operating system is a real advantage).
Whether these personalised briefings can build habit in the way their developers are hoping remains to be seen. But looking at how Apple and Samsung’s news aggregation plays have built habit, I wouldn’t bet against the platform owners on this one.
4.) Voice will break through to mainstream adoption in everyday AI interactions, with a revamped Siri leading the charge
Voice has been the perpetual ‘next year’ story for AI assistants. I believe that year has finally arrived.
The technical barriers that have made talking to AI assistants frustrating - poor comprehension, painful latency, obliviousness to interruptions, robotic tone - have largely been solved. OpenAI’s Advanced Voice Mode (launched in 2024) was impressive but limited by being a separate mode. In November, they integrated voice into the main chat interface, removing the friction of switching contexts.
The biggest barrier now isn’t capability, it’s perception. Most people either don’t know voice is available, or they’ve been burned by years of shouting at Alexa and Siri and need convincing it’s worth another attempt.
Apple has an opportunity to reset expectations here. I predict they’ll unveil a new in-home device in 2026 that showcases an AI-first Siri, finally delivering on their Spring 2024 vision and leapfrogging Alexa+ in terms of voice intelligence and utility.
Meanwhile, Google will continue rolling out and improving Gemini voice mode across all surfaces (they announced TV enhancements at CES last week).
The car is another obvious domain for voice-led AI interaction, though it’s been held back by the stubborn lack of standardisation in in-car entertainment systems (manufacturers still see them as differentiators rather than commodities). I expect we’ll see several automakers announce AI assistant integrations in 2026 (I’ve just spotted this one from BMW).
The shift to voice won’t be universal or immediate - typing still works better for many contexts - but I believe 2026 is when voice moves from an interesting feature to a normal way to interact for a meaningful segment of users.
That’s it for this week. See you back here next week for my next batch of predictions, which will be focused on media creation.
¹ Claude was first with a split-screen workspace (Jun 2024), Projects (Jun 2024), computer use (Oct 2024) and skills (Oct 2025), which have all subsequently been added to ChatGPT and/or Gemini, but has sensibly steered clear of the image and video generation space.
² For the avoidance of doubt, I’m not suggesting that AI assistants have a personality in the sense that you or I do. However, it’s a useful anthropomorphism when talking about how we experience interacting with them, whilst never losing sight of the fact that they’re only ever predicting tokens, not responding emotionally.