AI Weekly Insights #89

Smarter Chats, Working Bots, and Power to Match

Happy Sunday,

Welcome all to ‘AI Weekly Insights’ #89, where the line between digital and physical life keeps getting thinner. OpenAI turned ChatGPT into an operating system, Google built its own AI workplace platform, humanoid robots took another step toward our living rooms, and the silicon war powering it all just got a major jolt.

Ready? Let’s dive in.

The Insights

For the Week of 10/05/25 - 10/11/25 (P.S. Click the story’s title for more information 😊):

  • What’s New: OpenAI’s DevDay 2025 introduced Apps inside ChatGPT, the new AgentKit framework, and fresh model updates including GPT-5 Pro and Sora 2 in the API.

  • Apps + Agents: OpenAI spent this DevDay showing how ChatGPT is becoming more than a chatbot. With the Apps SDK, developers can now build mini-apps that live directly inside a chat. You can pull up Canva to edit a design or check homes on Zillow without ever leaving the thread. A full app store and monetization system are coming later this year. AgentKit gives teams the tools to build autonomous assistants with far less setup. It includes a visual builder, connectors to data, and monitoring to keep things stable. On the model side, GPT-5 Pro aims at deeper reasoning and long-form accuracy, while Sora 2 brings cinematic-level video generation to the API. Companies like Mattel are already using Sora 2 to create product concepts and marketing clips. New lightweight “mini” models were also announced for faster real-time tasks.
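For the developer-minded, here is a rough sketch of what a request to a text-to-video model like Sora 2 might look like. The parameter names, the `"seconds"` duration field, and the `"sora-2"` model identifier are assumptions for illustration, not confirmed API details:

```python
import json

# Hypothetical request body for a text-to-video generation call.
# The model id and field names are assumptions, not documented parameters.
def build_video_request(prompt: str, seconds: int = 8) -> str:
    """Assemble a JSON body for a hypothetical video-generation endpoint."""
    payload = {
        "model": "sora-2",   # assumed model identifier
        "prompt": prompt,    # the text description to render as video
        "seconds": seconds,  # assumed clip-duration parameter
    }
    return json.dumps(payload)

body = build_video_request("A stop-motion toy car racing across a kitchen table")
print(body)
```

The point is less the exact fields than the shape of the workflow: a plain text prompt in, a generated clip out, callable from any backend that can send JSON.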

  • Why it Matters: DevDay 2025 felt like OpenAI revealing its hand. The company is not just building smarter models but creating the infrastructure for how people might use computers in the future. Apps in ChatGPT turn conversation into the new interface, a place where everything you do on your computer could eventually happen. The idea of opening separate tabs, juggling logins, and moving between disconnected tools starts to look outdated when everything from writing to data analysis to video editing can live in a single flowing chat. That vision is not just about convenience; it is about control, positioning ChatGPT as the center of your digital life. AgentKit reinforces that shift by making it easier for developers and companies to bring their own agents into this space, creating a network of specialized AIs that can think, plan, and act alongside you. Sora 2 completes the picture by showing that media creation is part of this same ecosystem, with videos generated from text now realistic enough to feel like film. OpenAI is not just expanding ChatGPT’s abilities; it is turning it into a full computing platform where text, logic, and creativity exist together in one place. This year’s DevDay was less about individual product updates and more about revealing a plan for how AI could become the new operating layer of everyday life.

  • What’s New: Google introduced Gemini Enterprise, a platform for companies to create and use AI agents across their data and everyday tools, and a new Google Skills program to train one million developers.

  • Inside Gemini Enterprise: Gemini Enterprise puts a conversational interface on top of company systems so employees can search, summarize, and automate tasks through chat. It also includes tools to build custom agents using internal data, APIs, and connected apps. Google calls it the new front door for AI in the workplace. Early versions come with a gallery of starter agents, built-in security and governance controls, and partnerships that tie into popular business apps. The platform has three tiers, with pricing reported at $21 per seat per month for Business and $30 for Standard and Plus, designed to scale from small teams to large enterprises. Alongside the launch, Google announced its Google Skills initiative to grow a new developer ecosystem around Gemini agents.
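To put the reported pricing in perspective, a quick back-of-envelope calculation. The per-seat monthly prices are the reported figures; the 50-person team is an arbitrary example of my own:

```python
# Reported per-seat monthly prices for the Gemini Enterprise tiers.
TIER_PRICE_PER_SEAT = {"Business": 21, "Standard": 30, "Plus": 30}

def annual_cost(tier: str, seats: int) -> int:
    """Yearly cost in dollars for `seats` users on a given tier."""
    return TIER_PRICE_PER_SEAT[tier] * seats * 12

print(annual_cost("Business", 50))  # 21 * 50 * 12 = 12600
```

Roughly $12,600 a year for a 50-person team on the entry tier, which is closer to the price of a shared SaaS subscription than to a traditional enterprise AI contract.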

  • Why it Matters: Gemini Enterprise is Google’s clearest answer yet to Microsoft Copilot and ChatGPT Enterprise, and it shows how fast the race for workplace AI platforms is heating up. Instead of sprinkling chatbots across products, Google is building a single workspace where agents handle research, data pulls, and repetitive workflows inside one conversation. That could save time and cut the constant switching between documents, dashboards, and emails that defines modern office life. The pricing also makes this more accessible than many expected, hinting that Google wants adoption before exclusivity. The real key, though, is what happens next: Gemini’s newest models, including Gemini 2.5 and its Computer Use variant, can now interact directly with software interfaces. That means future agents could literally click through spreadsheets, update systems, or pull files without human help. If that vision plays out, Gemini Enterprise could shift Google from being just a collection of apps into something closer to an intelligent operating layer for work.

  • What’s New: Figure unveiled Figure 03, its third-generation humanoid powered by the company’s Helix AI. The new model is lighter, faster, and built with safety and touch sensitivity in mind.

  • Humanoid Robots: The new robot is a ground-up redesign built to make Helix smarter in real spaces: faster cameras with a wider field of view, palm cameras in each hand, fingertip sensors down to a few grams of pressure, and a stronger audio system for natural voice interaction. The body is lighter than Figure 02 and wrapped in soft textiles, with wireless inductive charging through its feet and high-speed data offload for fleet learning. On paper, the unit stands about five foot eight, runs roughly five hours per charge, and moves at about 1.2 meters per second. Figure also introduced BotQ, a manufacturing line intended to produce up to twelve thousand robots per year initially and ramp toward one hundred thousand over four years, shifting parts to die-cast and injection-molded components to lower cost and speed assembly.
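Those stated specs imply a surprisingly usable range. A quick sanity check, assuming continuous walking at top speed (generous, so read it as an upper bound):

```python
# Rough range-per-charge for Figure 03 from the stated figures:
# ~1.2 m/s walking speed and ~5 hours of runtime. Assumes nonstop
# walking at top speed, so this is an upper bound, not a real-world number.
SPEED_M_PER_S = 1.2
RUNTIME_HOURS = 5

range_km = SPEED_M_PER_S * RUNTIME_HOURS * 3600 / 1000
print(f"{range_km:.1f} km per charge")  # prints "21.6 km per charge"
```

In practice a household robot spends most of its charge standing, gripping, and computing rather than walking, but the math shows the battery is sized for a full workday of movement, not a demo.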

  • Why it Matters: Figure 03 represents the point where humanoid robots stop being science experiments and start becoming potential household technology. It is not yet ready to live in homes full-time, but its design shows that the goal is now practical, not theoretical. The improved grip, balance, and vision systems give the robot a kind of physical awareness that was missing from earlier generations, and the ability to train across fleets means every task it learns could instantly spread to thousands of others. That kind of shared intelligence could accelerate progress faster than most people expect. The manufacturing piece matters just as much as the technology, because scale is what turns a futuristic concept into something normal people can buy. Figure is betting that a robot designed to do dishes or fold laundry is not a decade away, but something that could begin appearing within the next few years. If that happens, robots will move from the factory floor into the everyday home, changing how we think about labor, privacy, and what it means to live alongside machines built to help.

  • What’s New: OpenAI signed a multi-year partnership with AMD to secure up to six gigawatts of GPU power for its next generation of AI models.

  • The Partnership: This agreement gives OpenAI a second major supplier beyond Nvidia, which currently dominates the AI chip market. The first wave of AMD chips is expected to power new data centers coming online in the next few years, helping OpenAI keep pace with the demand created by ChatGPT, Sora, and its growing ecosystem of apps and agents. AMD, for its part, gains a major validation point as it fights for relevance in AI computing after years of Nvidia’s near-total lead. A stock warrant sweetens the deal: AMD granted OpenAI warrants for its shares, and that potential stock gain is tied to AMD actually delivering on performance and scale, keeping both sides invested in the long game.

  • Why it Matters: This deal might sound like background infrastructure news, but it will shape how fast AI actually reaches people. Every conversation with ChatGPT, every video generated by Sora, and every AI-powered app depends on access to enormous amounts of compute. Six gigawatts is enough to power millions of homes, and that energy translates directly into how quickly AI can think, respond, and learn. For users, it means faster chats, longer memories, and smoother creative tools. For AMD, it is a comeback moment that challenges Nvidia’s dominance and opens up real competition in the AI hardware space. And for OpenAI, it is a way to turn its ambitions into something sustainable, making sure its models can keep up as the world begins to depend on them. The hardware behind AI rarely makes headlines, but it is the foundation that will decide how intelligent, accessible, and affordable the next generation of tools will be.
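How far does six gigawatts actually go? A rough conversion, assuming an average US household draws about 1.2 kW continuously; that per-home figure is my assumption, not from the announcement:

```python
# How many homes could six gigawatts supply? Assumes an average US
# household draws ~1.2 kW continuously (an assumption, roughly
# 10,500 kWh/year). Real grids need headroom, so this is illustrative.
TOTAL_WATTS = 6e9        # 6 GW secured in the AMD deal
WATTS_PER_HOME = 1.2e3   # assumed average continuous household draw

homes = TOTAL_WATTS / WATTS_PER_HOME
print(f"{homes:,.0f} homes")  # prints "5,000,000 homes"
```

Five million homes' worth of electricity, redirected into GPU racks, is a useful mental image for the scale of compute modern AI now runs on.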

This week’s headlines had one message in common: AI is getting roots. From OpenAI’s new platforms to Google’s enterprise play, from Figure’s humanoid prototypes to the AMD deal fueling it all, the pieces are locking together fast. What was once abstract intelligence is now physical, visible, and increasingly personal.

We’re no longer just experimenting with AI tools; we’re building a world where the tools learn, act, and exist alongside us.

See you next Sunday.

Warm regards,

Kharee