Handy AI

Google releases SOTA image editing AI; Anthropic toys with a browser agent

AI Weekly Update - September 2, 2025

Jake Handy
Sep 02, 2025

Get bigger weekly updates! Free subscribers receive the top stories each week, while paid subscribers get a few extra stories. All support for Handy AI directly helps me maintain the newsletter and keep the information flowing.

last week’s top stories

🖼️ Google introduces Gemini 2.5 Flash Image model. Google launched Gemini 2.5 Flash Image (codename “nano-banana”), a new state-of-the-art image generation and editing AI model. It can blend multiple images into one, keep characters consistent across edits, and apply precise transformations via natural-language prompts, using Gemini’s world knowledge for creative guidance. Read more

🌐 Anthropic pilots Claude for browsing. Anthropic announced a Chrome extension pilot for its Claude chatbot, allowing trusted users to give Claude browsing powers. In this controlled preview, Claude can see webpages, click buttons, and fill forms on behalf of users (1,000 “Max” plan subscribers are on the waitlist to test it). The research preview is designed to gather real-world feedback and strengthen safety (e.g. against hidden prompt injections) before a wider release, reflecting Anthropic’s view that browser-enabled AI assistants are “inevitable”. Read more

⚖️ Elon Musk’s xAI sues former engineer over trade-secret theft. Musk’s AI company xAI has sued a former engineer (Xuechen Li) for allegedly stealing proprietary data about its Grok chatbot before joining OpenAI. The lawsuit claims Li copied “highly confidential” information and trade secrets (model designs, features, training data) and hid his actions as he planned to move to OpenAI. xAI is seeking damages equivalent to its estimated losses and accuses Li of misappropriating its trade secrets for the benefit of his new employer. Read more

💼 Salesforce CEO: 4,000 support roles cut, replaced by AI. In an interview on Aug 29, Salesforce CEO Marc Benioff said the company has eliminated about 4,000 customer support jobs (from 9,000 down to ~5,000) by deploying AI chatbots and agents. He noted that roughly half of all customer service interactions are now handled by AI systems, which has allowed human staff to focus on sales and higher-value tasks. Benioff portrayed this as a positive productivity shift: the freed-up employees have been reallocated to growth areas, and Salesforce can serve many previously neglected leads because AI now handles routine inquiries. Read more

👥 Meta’s new superintelligence team sees early departures. Several high-profile hires at Meta’s recently announced Superintelligence Labs have left or delayed joining. A product lead resigned after only weeks, and others (including a Chief Technology Officer in the lab) have reportedly moved on to OpenAI or never started. These exits coincide with an internal hiring freeze and restructuring, despite Meta’s heavy investment in AI talent. Read more

🔒 OpenAI and Anthropic team up for safety testing. In a first-of-its-kind collaboration, OpenAI and rival Anthropic agreed to cross-run safety evaluations on each other’s AI models. Each lab applied its own alignment tests to the other’s models (e.g. Claude and GPT-4o) and published the findings, aiming to uncover blind spots in internal testing. The joint exercise is meant to demonstrate transparency and set a precedent for shared safety standards as powerful AI systems roll out to the public. OpenAI co-founder Wojciech Zaremba said he hopes more companies will similarly share safety data. Read more

🎙️ OpenAI rolls out gpt-realtime for voice agents. OpenAI has made its Realtime API generally available, introducing gpt-realtime, a new advanced speech-to-speech model for AI voice assistants. gpt-realtime produces more natural, expressive speech (including intonation and emotional nuance), follows instructions more reliably, and can even laugh or change languages mid-sentence. OpenAI also added SIP phone-calling support and image input to the API, and lowered pricing by 20%. Read more

🌍 Cohere unveils state-of-the-art enterprise translation model. Cohere introduced “Command-A Translate,” a 111-billion-parameter AI model optimized for secure business translation tasks. Cohere claims Command-A outperforms leading models (like GPT-5, DeepSeek V3, DeepL Pro and Google Translate) on benchmark tests across 23 business languages. The model includes an iterative “Deep Translation” process that refines outputs for higher accuracy. Read more

⚖️ Elon Musk’s xAI sues Apple and OpenAI. Musk’s AI startup xAI filed a U.S. antitrust lawsuit against Apple and OpenAI on Aug 25, accusing them of conspiring to block competitors. The suit alleges that Apple’s exclusive deal to integrate ChatGPT into iOS unfairly sidelines xAI’s Grok chatbot, and it claims Apple and OpenAI are illegally maintaining monopoly power. xAI is seeking $15 billion in damages, arguing this “illegal conspiracy” has harmed its business. The filing is the latest in Musk’s public legal sparring with tech rivals, and critics have called it a publicity stunt. Read more


🧪 AI Research of the Week

Influenza vaccine strain selection with an AI-based evolutionary and antigenicity model
From MIT CSAIL and the MIT Jameel Clinic

Jake's Take: The VaxSeer model reframes vaccine design as a forecasting and optimization problem. The system models how influenza strains evolve and compete, then assigns each candidate a “coverage score” that correlates with real-world effectiveness and lets health agencies rank vaccine options months ahead of the season.

In retrospective tests spanning a decade, VaxSeer’s picks aligned better with the strains that actually circulated than the historical selections did, suggesting it could mean fewer mismatched shots and better protection overall.
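The core idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the strain names, match values, and the `coverage_score` function are hypothetical, standing in for VaxSeer's learned evolution and antigenicity models.

```python
# Toy sketch of a VaxSeer-style coverage score. A real system would learn
# predicted_freq (which strains will dominate) and match (how well a vaccine
# candidate neutralizes each strain); here both are made-up numbers.

def coverage_score(candidate, predicted_freq, match):
    # Weight each circulating strain's predicted dominance by how well
    # this vaccine candidate is expected to cover it.
    return sum(predicted_freq[s] * match[(candidate, s)] for s in predicted_freq)

# Hypothetical forecast: strain x dominates next season.
predicted_freq = {"A/H3N2-x": 0.6, "A/H3N2-y": 0.3, "A/H3N2-z": 0.1}

# Hypothetical antigenic-match scores for two vaccine candidates.
match = {
    ("cand1", "A/H3N2-x"): 0.9, ("cand1", "A/H3N2-y"): 0.4, ("cand1", "A/H3N2-z"): 0.2,
    ("cand2", "A/H3N2-x"): 0.5, ("cand2", "A/H3N2-y"): 0.8, ("cand2", "A/H3N2-z"): 0.7,
}

candidates = ["cand1", "cand2"]
best = max(candidates, key=lambda c: coverage_score(c, predicted_freq, match))
# cand1 scores higher here because it matches the predicted dominant strain.
```

Ranking candidates by a single season-ahead score is what would let health agencies compare vaccine options months before strain selection deadlines.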


and then, even more news…

⚠️ ‘Godfather of AI’ says machines now surpass humans in emotional manipulation. AI pioneer Geoffrey Hinton warned that advanced AI systems will not only outthink us but also out-“emotionalize” us. In a recent interview he explained that current AI models already learn persuasive tactics from social media data, so they can tailor content to influence our emotions more effectively than a human could. For example, he noted that an AI analyzing a person’s online profile could manipulate that person better than any salesperson. Hinton argued this subtle emotional risk (AI steering our feelings and decisions without our noticing) may be even more concerning than the threat of physical safety. Read more

© 2025 Jake Handy