AI Weekly Insights #81
Offline Robots, Copyright Chaos, and Terminal Chatbots
Happy Sunday,
Welcome to ‘AI Weekly Insights’ #81. This week in AI felt like a weird mix of sci-fi and small talk: robots learned to fold laundry offline, Google dropped a terminal-based Gemini tool that actually slaps, and a court just told Anthropic that training on books is basically legal. Also, OpenAI quietly started sharing backend duties with Google. Whether you’re here to watch tech giants play hot potato or just want smarter code suggestions without leaving your terminal, we’ve got you.
Ready? Let’s dive in.
The Insights
For the Week of 06/22/25 - 06/28/25 (P.S. Click the story’s title for more information 😊):
What’s New: Google has released a new version of its robot AI model, Gemini Robotics On-Device, that can run completely without the internet.
Offline Intelligence: The updated model helps robots learn new tasks by watching videos of people performing them. In Google’s demo, a robot was able to fold clothes, unzip bags, and even transfer skills to different robot arms after watching 50-100 example videos. The real upgrade is that the AI now runs directly on the robot’s own hardware. It no longer needs to send data to the cloud to figure out what to do, which means faster responses and no connection issues. Google has also released a developer toolkit so people can build and test their own robots using the same model. The goal is to make smart robots more practical and flexible in everyday settings like homes, warehouses, or even hospitals.
Why It Matters: Currently, most robots that use AI still depend on the cloud for intelligence. That means they need strong internet, and when the connection drops, so does their “brain”. With this On-Device model, all of the robot’s “thinking” happens on board. It keeps working even if the Wi-Fi cuts out, and it runs faster since there’s no waiting for a response from a server. This also means better privacy, since no data leaves the robot. For developers, it makes building and testing robots simpler and more affordable. And for everyone else, it’s a step closer to real-world robots that can fold your laundry, carry supplies, or help out around the house with no IT setup. Robots that can think for themselves, even just a little, are a big deal. This could be the beginning of AI moving from labs into daily life.

Image Credits: Google DeepMind
What's New: A federal judge has ruled that Anthropic may legally train its Claude AI on copyrighted books, calling the practice “fair use.” A jury will still decide whether keeping 7 million pirated books in its library broke the law.
AI Training and Copyright: A group of authors took Anthropic to court last year, claiming the company copied their books without permission while building Claude. Judge William Alsup reviewed the evidence and ruled that training on lawfully bought or licensed books is “exceedingly transformative,” comparing it to a reader who studies many texts before writing new material. That part of the process counts as fair use, so no damages apply there. The judge was less kind about the company’s stash of pirated books, saying that copying could still be infringement. He set a jury trial in December to sort out any fines, which could reach up to $150,000 per infringed title. This is the first U.S. decision to treat large-scale AI training on books as potentially legal, while still drawing a line against flat-out piracy.
Why it Matters: For the tech world, this ruling is a win. AI companies like Anthropic, OpenAI, and Meta have argued for months that they need vast amounts of data to make smarter models. The court agreed in part, saying “yes” to training on legally owned books because the model turns that text into something new. For authors and publishers, the news is mixed. While the court backed AI training with legal content, it clearly opposed the use of pirated material. Authors still have a shot at big damages if the book stash crosses legal lines. Additionally, questions remain about whether AI-generated outputs that closely resemble original writing might still infringe copyrights. Everyone from small writers to giant tech firms will watch that December trial to see where the final line gets drawn. It might be the moment we finally get clarity on what “training data” should actually mean.

Image Credits: Anthropic
What's New: Google has launched Gemini CLI, an open-source command-line tool that lets anyone tap the Gemini 2.5 Pro model for coding help, content generation, and even image or video creation, all from the terminal.
How It Works: Gemini CLI installs like most other developer tools and grants access with a personal Google account. Once logged in, you get Gemini 2.5 Pro’s large context window, which means the AI can keep track of very large codebases. You can ask it to explain an error message, write tests, or scaffold an entire project without leaving your preferred terminal. The tool also hooks into Google’s Veo and Imagen models, so it can generate images or short videos if your project needs them. Usage limits are generous for a free preview: up to 60 requests per minute and 1,000 per day.
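If you want to try it yourself, setup is minimal. Here’s a quick sketch of a typical first session (assuming Node.js is installed; exact commands and flags may shift as the preview evolves, so check the project’s README):

```shell
# Install the CLI globally from npm
npm install -g @google/gemini-cli

# Start an interactive session; the first run walks you
# through signing in with a personal Google account
gemini

# Or ask a one-off question non-interactively with -p
gemini -p "Explain this error: TypeError: cannot read properties of undefined"
```

Run it from the root of a project and it can pull surrounding files into context, which is where that big context window earns its keep.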
Why it Matters: Most AI coding tools today live inside browser tabs or fancy text editors, which can feel clunky or distracting. Gemini CLI skips all that and lives in the terminal, the place where many developers already spend much of their workday. It’s also surprisingly generous for a free tool. Students, hobbyists, and even small teams can use it daily without paying anything. Plus, since the code is open source, the community can help improve it faster than Google could alone. Competition is heating up among Google, OpenAI, and Anthropic to own the developer workflow, and giving powerful AI away for free is Google’s opening gambit. If tools like this catch on, the command line might become the new home base for using AI in everyday coding.

What's New: OpenAI has started using Google’s custom AI chips, called TPUs, instead of relying only on Nvidia chips through Microsoft.
Sharing the Load: Until now, almost every ChatGPT response was processed on Nvidia graphics cards in Microsoft’s data centers. But those chips are expensive, and supply is tight. So OpenAI is now sending some of that work to Google’s TPUs, chips designed to handle huge amounts of AI math quickly while using less energy. This change is all about “inference,” the step where ChatGPT takes your prompt and generates a response. Using TPUs should help lower energy costs and avoid overloading a single supplier.
Why it Matters: This is a big shift in how ChatGPT is powered, and it says a lot about how fast the AI space is growing. Tools like ChatGPT need huge amounts of computing power, and OpenAI can’t depend on one company forever. By adding Google to the mix, OpenAI gets a backup plan, lowers costs, and keeps things running smoothly.
For Google, it’s a major brag. Their chips are now running one of the world’s most-used AI apps. That might pull new customers away from rivals like Amazon and Microsoft. And for the rest of the AI world? It’s a sign that Nvidia’s grip on AI hardware could loosen, especially if more companies realize they don’t have to use GPUs to run fast, smart models. This also shows how business partnerships are shifting. Even though Microsoft invested heavily in OpenAI, the need to scale fast means OpenAI is now working with a direct competitor. The more AI grows, the less likely it is that any one company can control all the parts. Expect more mix-and-match deals like this as everyone scrambles to keep up.

That’s a wrap on this week’s AI whirlwind. From courtroom showdowns to offline robots, the future keeps arriving faster than your weekend plans. Whether you’re testing the latest tools or still side-eyeing them, we’re glad you’re riding along.
Catch you next Sunday.
Warm regards,
Kharee