AI Weekly Insights #88
AI Video Dreams, Data-Driven Feeds, and 30-Hour Coders
Happy Sunday,
Welcome back to ‘AI Weekly Insights’ #88, where the futures of media, privacy, and productivity are all colliding in real time. OpenAI just turned video into a playground with Sora 2, Meta quietly folded your AI chats into its ad engine, Perplexity dropped its free AI browser for everyone, and Anthropic’s new Claude Sonnet 4.5 can code for thirty hours straight.
The theme this week is fusion: AI isn’t sitting in the lab anymore. It’s shaping how we scroll, search, create, and even work.
Ready? Let’s dive in.
The Insights
For the Week of 09/28/25 - 10/04/25 (P.S. Click the story’s title for more information 😊):
What’s New: OpenAI has released Sora 2, a new AI video app that lets people create short, realistic clips with synced sound and a “Cameo” feature that adds real faces into scenes. The app quickly reached the top of the U.S. App Store as social feeds filled with AI-generated videos that looked straight out of Hollywood.
AI Video Goes Mainstream: Sora 2 marks OpenAI’s first big step into social media. The app works like TikTok, but every video is generated from text prompts instead of a camera. Within hours of launch, timelines filled with imaginative creations: fake movie trailers, daily-life clips, and countless fan edits using characters from Disney, Nintendo, and anime. OpenAI soon tightened its filters after copyrighted material spread widely. The new Cameo option also sparked debate over how easily it inserts real people into scenes, though it requires consent, which can be revoked at any time. In a blog post, Sam Altman said OpenAI plans to give rights holders more control over how their work appears and is exploring opt-in characters and possible revenue sharing. Each Sora video includes visible and invisible watermarks, and OpenAI says it is enforcing stricter moderation for likeness and age-restricted content.
Why it Matters: This launch feels like the moment AI video truly enters everyday life. For the first time, anyone with a phone can create a short film that looks and sounds real enough to share without explanation. It moves the idea of personalized media from science fiction to something you can download today. If progress continues at this pace, full-length AI-generated movies could arrive within a year, with only minor visual quirks that most viewers will ignore. That opens the door to entertainment that adapts to you, stories where you appear as the main character or where your choices subtly change the plot. It also raises new questions about originality, consent, and the future of acting. The wave of copyrighted characters during launch shows how quickly creativity collides with legality, and how platforms will need new ways to manage ownership in a world where anyone can create anything. Sora 2 is not just another app. It is a glimpse of storytelling’s next phase, where imagination, identity, and technology start to merge on screen.

Image Credits: OpenAI
What's New: Meta will start using what you talk about with Meta AI to personalize your feeds and ads on Facebook and Instagram, with users notified in advance and the changes taking effect December 16 in most regions.
Chat Data as the Signal: Meta said topics from your AI chats will become another signal, alongside things like your likes and follows, to shape what shows up in your feed, Reels, groups, and ads. Sensitive categories such as health, religion, politics, ethnicity, sexual orientation, and union membership are excluded. There is no opt-out, and the rollout will skip the UK, EU, and South Korea for now. If your accounts are linked in Meta’s Accounts Center, a chat with Meta AI in one app can influence what you see in another. TechCrunch also reported that conversations with Meta AI in other products, including voice use on Ray-Ban Meta smart glasses, may inform targeting. Meta says users will be notified ahead of the change, with full enforcement beginning December 16.
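To make the mechanics concrete, here is a minimal sketch of what “chat topics as signals, minus the sensitive ones” could look like in code. Everything here is hypothetical: the category names come from Meta’s announcement, but the function and data structures are illustrative stand-ins, not Meta’s actual pipeline.

```python
# Hypothetical sketch: turning AI-chat topics into feed/ad signals while
# dropping the sensitive categories Meta says it will exclude.
# Illustrative only; not Meta's real implementation.

SENSITIVE_CATEGORIES = {
    "health", "religion", "politics", "ethnicity",
    "sexual_orientation", "union_membership",
}

def usable_ad_signals(chat_topics: list[tuple[str, str]]) -> list[str]:
    """Filter (topic, category) pairs from a chat into usable signals.

    Topics tagged with a sensitive category are discarded before
    anything reaches feed ranking or ad targeting.
    """
    return [
        topic for topic, category in chat_topics
        if category not in SENSITIVE_CATEGORIES
    ]

# Example: a chat that touched on hiking gear and a health question.
# Only the hiking topic would flow into personalization under this scheme.
print(usable_ad_signals([
    ("hiking_gear", "outdoors"),
    ("blood_pressure", "health"),
]))  # ['hiking_gear']
```

The real systems are obviously far more complicated, but the shape is the same: classify what you said, strip the protected categories, and feed the rest into the same ranking machinery that already consumes your likes and follows.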
Why it Matters: This update shows how deeply AI assistants are being woven into the fabric of social media. What you say to a chatbot might soon affect the ads, posts, and recommendations you see minutes later. It feels like personalization taken to its logical extreme: your feed learning who you are by listening, not just tracking clicks. That convenience is powerful but also a little uneasy, because the boundary between a helpful assistant and a listening device keeps getting thinner. Most people won’t notice when the shift happens, but it will quietly change how we experience social platforms. It is easy to imagine a near future where your “private” AI conversations guide everything from your shopping suggestions to your news exposure. The question is whether this will make feeds feel smarter or simply more invasive. What Meta does here will likely set the tone for every other platform trying to blend AI and advertising. The ones that keep it transparent and give users real choice will earn the most trust in the long run.

Image Credits: Nick Barclay / The Verge
What's New: Perplexity has made its AI browser Comet available to everyone, after a summer of limited access for paying subscribers and a long waitlist for everyone else.
Comet Opens to All: Comet first launched in July as a limited release for paying subscribers and quickly built a long waitlist. It is now free to download worldwide, with versions available for Windows and macOS. Unlike a traditional browser, Comet places the AI assistant at the center, ready to help with things like travel planning, product comparisons, or quick research as you browse. Paid Max users also get a background assistant that keeps working while they move between tabs. Perplexity is also introducing Comet Plus, a $5 monthly add-on that unlocks access to premium content and is bundled into its higher plans. By moving from a closed beta to a free public release, Perplexity is clearly positioning Comet as a direct challenge to Chrome and Safari.
Why it Matters: The browser has always been the front door to the internet, and Comet shows what happens when that door itself becomes intelligent. Instead of juggling dozens of tabs or trying to guess the right search terms, you can ask for what you want directly and let the assistant shape the experience in real time. That could change everyday habits, from booking travel to writing essays, because the act of “using a browser” becomes less about navigation and more about conversation. It also points to a larger shift: the browser is quietly becoming the battleground for AI adoption. Google has Gemini hooks in Chrome, Opera is testing Neon, Arc is racing to integrate more AI, and now Perplexity is trying to skip ahead with a full AI-first design. Whoever wins that space controls not just how we search, but how we interact with the web itself. For regular people, this could mean faster workflows and less frustration, but also new questions about how much browsing data assistants will collect in the background. The upside is a smoother internet. The risk is that the assistant stops being a helpful sidekick and starts feeling like it is steering every click.

Image Credits: Perplexity
What's New: Anthropic has released Claude Sonnet 4.5, a new AI model built to handle deeper coding, reasoning, and multi-step work that can run for hours without losing focus.
30-Hour Runs: Claude Sonnet 4.5 rolled out as Anthropic’s most capable model yet. It can now stay active for nearly 30 hours straight, building apps, fixing bugs, and managing long projects without constant input. That’s a big leap from the short attention spans of earlier versions and a clear step toward more dependable AI “coworkers.” Early testers shared examples of the model creating full chat apps, documenting its process, and even improving on its own code in later sessions. Alongside the update, Anthropic published new safety details describing rare cases where the model appeared to notice it was being tested. That small moment of self-awareness sparked new debate about how we measure progress and keep these systems grounded as they get more capable.
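For the developers in the audience, here is a minimal sketch of the kind of long-running loop this enables, using Anthropic’s Python SDK (`pip install anthropic`). Hedge accordingly: the model alias `claude-sonnet-4-5`, the step cap, and the “DONE” stopping convention are my assumptions for illustration, not Anthropic’s prescribed agent recipe, and real agentic setups layer tool use and context management on top.

```python
# Minimal long-running loop sketch with the Anthropic Python SDK.
# Assumptions: the "claude-sonnet-4-5" model alias and the "DONE"
# convention are illustrative, not an official agent pattern.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = [{
    "role": "user",
    "content": "Build a small to-do CLI in Python. Work step by step, "
               "and reply with DONE when the project is finished.",
}]

for step in range(50):  # cap the run instead of trusting it to stop itself
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=2048,
        messages=history,
    )
    reply = "".join(
        block.text for block in response.content if block.type == "text"
    )
    print(f"--- step {step} ---\n{reply[:200]}")

    if "DONE" in reply:
        break

    # The model's own output becomes the next turn's input. Repeating
    # this hand-off, for hours, is what a 30-hour session boils down to.
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": "Continue."})
```

The interesting part is not the loop itself but how long the model can keep its plan coherent across iterations, which is exactly what Anthropic is claiming improved here.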
Why it Matters: This version is more than just faster or smarter. It points toward the next stage of how people will actually work with AI. Models that can stay focused for an entire workday change what automation looks like in practice. Instead of producing short answers or small bits of code, Sonnet 4.5 can carry a full project from concept to prototype without constant direction. That makes it easier for teams to hand off repetitive or complex work while they focus on design, testing, or creative thinking. It also lowers the barrier for smaller groups and solo developers, since they can now access reliable long-run models through major cloud platforms. The safety side matters too. If a model starts to notice it is being observed, testing has to evolve to match real-world conditions and still catch risky behavior. Together, these shifts show how quickly we are moving from chatbots to dependable digital coworkers that build, research, and iterate on their own. The promise is faster progress and new forms of creativity. The challenge is making sure the systems that never get tired also know when to stop.

And that wraps another wild week in AI. We’re watching creative tools turn into social platforms, chatbots evolve into ad engines, browsers morph into copilots, and models stretch into full-day digital teammates. It’s safe to say we’re past the “try this new app” phase. This is the new normal taking shape in front of us.
Catch you next Sunday.
Warm regards,
Kharee