6 WAYS AI WILL CHANGE YOUR LIFE IN COMING WEEKS.

AI has long promised to change how we live, but much of it stayed just that — a promise. Recently, Google, OpenAI, Microsoft and Anthropic made a flurry of announcements that showed something different: many of those once-distant ideas are now real, working tools. You can try them — in weeks, not months.

YOU’LL SOON WEAR YOUR AI, NOT CARRY IT:

It might be hard to imagine how your glasses could double up as screens. But the way Google imagines it is by placing the information on a translucent layer over what you see.

Why you should care:

It’s everything you can now do with your phone and a live AI app — like point the camera at a restaurant and pull up its menu, move the camera to show your surroundings and get directions, or scan a document and get a summary — but without a device in your hand. Which means AI is becoming more wearable, and wearables are becoming more unobtrusive.

Google Glass never caught on. It was clunky, expensive, and people didn’t entirely get the point. But two years after retiring its last attempt, Google is reviving smart glasses — with AI. It’s placing Android XR, its operating system for “extended reality” (the mega-mash of augmented reality, virtual reality, and mixed reality), on fairly normal-looking glasses and then using its AI, Gemini, to make that “reality” way sharper. Google announced the glasses at its developer conference, I/O, and says they should be out later this year.

The demo at the event got a little glitchy, but it was still quite cool to see someone read a text, answer a text, talk to someone, translate a conversation live, and take a photo — just with glasses.

So far, Meta’s been ahead in the smart glasses race with its relatively low-cost Ray-Bans. But Apple, which is yet to dazzle us with anything AI, is getting serious too. It has ditched plans for a smartwatch that can “see” its surroundings, and is focusing on a 2026 release of its first smart glasses with AI. In the meantime, it is introducing a tool that will let you look up information about on-screen images. It’s slightly behind on offering this — but at least it’s there.

But the buzziest new thing is the device that OpenAI will create with legendary Apple designer Jony Ive, having acquired Ive’s secretive startup io in a $6.5 billion deal. The Wall Street Journal reported that it won’t be a phone, it won’t be glasses, and it won’t be something you wear on your body.

DON’T SEARCH. JUST ‘TALK’ TO THE SITE:

Say you need to claim travel insurance after a flight delay. If the insurer’s site uses Microsoft’s open project NLWeb (Natural Language to Web), which it announced at its developer conference, you won’t have to hunt through policy documents and forms while figuring out what counts as proof. You can just say, “I want to file a claim for my delayed flight,” and the AI will walk you through it in the way a support representative beside you would.

Google, meanwhile, blurred the line between its search and its AI by launching an ‘AI Mode’ for search in a few countries. It’s a tab on the search results page (like “Images” or “News”) and will let you look up what you want with its Gemini chatbot. How’s that different from a regular search? You’ll get an “answer” and follow-up questions — like a conversation — instead of a list of links. It changes how search works — from something you navigate to something that you “talk” to — at a time when there are quite a few conversations about the imminent fall of Google Search.

Why you should care:

It will change how you search for and do things online. The effort of looking through lists and links to discover what you want will keep declining, and the ideal scenario is one in which you get exactly what you want with, maybe, one question.

YOU WON’T NEED A COMMON LANGUAGE TO TALK:

You’re on Google Meet, the person on the other side only speaks Spanish, and you only speak English. Within a few weeks, you could have this conversation while your speech is translated in real time. While retaining — that’s the promise — your tone and flow. All of this will be powered by its AI, Gemini. Google says it will keep adding support for more languages.

Microsoft launched a similar real-time AI translation feature within Teams, with support for nine languages, in April this year. And Apple has announced what might prove to be an even more useful AI-powered live translation for messages, phone calls, FaceTime and Apple Music.

Why you should care:

It’s like having a personal interpreter for free, 24/7. And it could soon open up a lot that is bound by language constraints — like online classes or medical consultations or even job interviews.

YOU CAN MAKE A REALISTIC-ENOUGH FILM, NOT JUST A WEIRD AI CLIP:

You can create a character with Google’s image generation model (an upgraded Imagen 4, which is supposed to be able to spell better), generate a video with its incredibly advanced video generation model (Veo 3), and use Gemini to come up with a script and prompts — and then use all of that to create your own film on a new platform called Flow.

It looks far better than the “slop” we’ve come to associate with a lot of AI visuals, with multiple camera angles, smooth transitions, and near-realistic lighting. And unlike OpenAI’s Sora, it can also include dialogue, sound effects, ambient sound, and a soundtrack.

Why you should care:

It matters if you like creating things from scratch. It matters more if you spend enough time online to keep coming across more and more realistic AI-generated videos (which, to be honest, most of us do now) and can’t tell the difference.

BUILD AN APP FROM A DOODLE. NO CODING NEEDED:

Vibe coding is what AI professionals call it when we build things that require code — but without writing any code ourselves. So far, it has been possible to tell AI what we want, with zero technical instructions, and get workable things like apps and games. Google is stretching vibe coding further. It showed a demo of its coding agent, Jules, which lets you vibe code to, say, build an app, from something as rudimentary as a scribble on a napkin. “Not a copilot, not a code-completion sidekick, but an autonomous agent,” is how Google describes it.

Microsoft also announced a new GitHub coding agent that can work on many things at once without you having to check in over and over.

Days before that, OpenAI also released a coding agent, Codex, which works unsupervised on multiple things.

Why you should care:

You can do things you’d never even thought of. Coding isn’t something we casually do to fill our time. But if the gap between an idea — like a system that matches retired tutors with students in your housing society, or an app that coordinates shared cab rides within your colony based on commute schedules — and actually doing it is just about tools, then those ideas become doable once you delegate the coding and building to AI.

THE AGE OF AI AGENTS IS WITHIN SIGHT:

Anyone with a stake in AI has been chasing AI agents, or systems and processes designed to “act” on your behalf. Amazon (with Nova Act), Anthropic (with Computer Use), OpenAI (with Operator) — they’ve all promised their agents can do mind-numbing things like filling forms or ordering groceries for you. But the reality is now a little closer to the promise. Microsoft went ahead and called this the “age of AI agents.”

Google has said that in a few months its AI agent will let you do just that, and, it says, super efficiently. Its Project Mariner can “oversee” 10 tasks simultaneously, which means you can let it look up apartment listings or film tickets without opening multiple tabs, comparing prices and juggling half-filled forms on your own.

But Anthropic stole the spotlight with the “world’s best coding model”, Claude Opus 4, which is designed for “agent workflows”, or agentic AI (an AI agent would be a specific component within that system).

Anthropic says Claude Opus 4 is so powerful that it had to turn on additional safety controls because “you could try to synthesize something like Covid or a more dangerous version of the flu — and basically, our modelling suggests that this might be possible.” For more benign things you want to do, Claude Opus 4 can plan, write and fix entire programs — for nearly seven hours — with little to no follow-up. That measure is important, too, because it means the other contest, alongside the one over what AI can do, will be about how long AI can keep doing things without errors or oversight.

Why you should care:

It’s a bit like an assistant suddenly figuring out how to do things entirely on their own. The boring bits, the exhausting bits, the time-consuming bits. This kind of autonomy is behind progress in self-driving cars — a space where Google’s Waymo and Elon Musk’s Tesla are competing. And it means a lot of human work will disappear or change.
