NASA uses AI to decode the sun, while Silicon Valley darlings shuffle teams and money
AI Weekly Update - August 25, 2025
Get bigger weekly updates! Free subscribers receive the top stories each week, while paid subscribers get a few extra stories. All support for Handy AI directly helps me maintain the newsletter and keep the information flowing.
last week’s top stories
☀️ NASA, IBM launch AI to decode the sun. NASA teamed with IBM to develop Surya, a new open-source AI model that analyzes nearly a decade of solar observatory data to predict space weather. Surya can forecast solar flares and coronal mass ejections up to 2 hours in advance, providing early warnings for events that could disrupt satellites and power grids. Trained on 9 years of continuous Sun images, the model outperforms prior methods by 16% and is being openly shared on HuggingFace for researchers. It’s a major step in using AI “heliophysics” models to safeguard technology on Earth from the Sun’s outbursts. Read more
☁️ Meta signs $10B cloud deal with Google. Meta is partnering with Google Cloud in a six-year deal worth over $10 billion to boost Meta’s AI and infrastructure capacity. Under the agreement, Meta will use Google’s servers, networking and storage to train and deploy advanced AI models across Meta’s apps. The move comes as Meta ramps up spending on massive AI data centers (even raising its 2025 capex forecast) and seeks outside help to handle surging AI workloads. Read more
💰 Altman details OpenAI's trillion-dollar roadmap. OpenAI CEO Sam Altman laid out an audacious plan to invest “trillions of dollars” in AI infrastructure, such as massive data centers, to support future AI models and products. He argues these huge outlays will pay off by enabling billions of daily ChatGPT interactions and selling more AI services. Altman even floated novel fundraising methods to raise the capital and hinted at expanding beyond ChatGPT into apps like browsers and social networks. Read more
🔄 Meta’s AI restructure. Meta is overhauling its AI division for the fourth time in six months, splitting the new Superintelligence Labs unit into four teams. The re-org will create a “TBD Lab” for experimental projects, alongside separate product, infrastructure, and research groups (the FAIR lab) to accelerate Meta’s push toward artificial general intelligence. CEO Mark Zuckerberg is going “all-in” on AI (tapping external financiers for a $29B data center expansion and pledging to spend hundreds of billions on AI compute) as Meta races to catch up after a lukewarm reception for its Llama 4 model. Read more
📊 Microsoft Excel gets an AI upgrade. Microsoft added a new =COPILOT() formula function to Excel that lets users bring generative AI directly into their spreadsheets. By typing a natural-language prompt in a cell, optionally alongside a cell-range reference (e.g. =COPILOT("Summarize this data", A1:C20)), users can have Excel auto-generate summaries, categorize text, create examples, and more with AI. It’s part of Microsoft’s 365 Copilot integration and is rolling out in beta. Microsoft cautions, however, that the AI function isn’t reliable for precise calculations or reproducible results. Read more
🤝 Musk tried to enlist Zuckerberg in OpenAI bid. Court filings reveal Elon Musk reached out to Meta’s Mark Zuckerberg as he mounted a $97 billion unsolicited takeover bid for OpenAI. OpenAI’s lawyers say Musk discussed potential financing with Zuckerberg for the bid, which OpenAI ultimately rejected. While Meta never signed onto Musk’s proposal, the episode shows the lengths Musk explored to regain influence over the ChatGPT maker. Read more
🚗 Nuro gets $203M lift from Uber, Nvidia. Autonomous vehicle startup Nuro closed a $203 million Series E round that brings on new strategic investors Uber and Nvidia. Nvidia joined the funding after years of supplying GPUs for Nuro’s self-driving delivery pods, and Uber’s participation follows a broader robotaxi partnership with Nuro announced last month. The cash infusion values Nuro at ~$6 billion (down from $8.6B in 2021) and will help it scale its driverless tech via licensing deals with automakers and ride-hailing fleets. Read more
📱 New Google Pixel lineup goes big on AI. Google’s Pixel 10 phones, unveiled at the Made by Google event, are packed with AI-centric features built on its Tensor G5 chip and Gemini AI. The lineup introduces Magic Cue, an assistant that proactively surfaces info (like flight details during chats), and Camera Coach, which gives real-time photography tips through the viewfinder. Other upgrades include on-device voice translation for calls, “Visual Overlays” that let the AI see through the camera and highlight objects, and an AI-driven Pixel Journal app. Read more
🍃 Google analyzes Gemini’s environmental footprint. Google released a detailed analysis of the energy, water, and carbon costs of running its AI models. The study found a typical Gemini text prompt uses only ~0.24 Wh of electricity and 5 drops of water, emitting just 0.03 g of CO₂. Through efficiency gains and clean energy, the per-query energy has dropped 33x in a year. Google is calling for industry standards in measuring AI’s environmental impact, as it openly shared its methodology to encourage consistent carbon accounting for AI workloads. Read more
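For a sense of what those per-prompt numbers mean at scale, here’s a quick back-of-the-envelope calculation. It uses only the figures reported above; the daily prompt volume is a made-up illustrative number, not something Google disclosed:

```python
# Reported per-prompt footprint for a typical Gemini text query.
ENERGY_WH = 0.24   # watt-hours of electricity per prompt
CO2_G = 0.03       # grams of CO2 per prompt

# Hypothetical daily volume, purely for scale (not a reported figure).
prompts_per_day = 1_000_000_000

daily_mwh = prompts_per_day * ENERGY_WH / 1_000_000     # Wh -> MWh
daily_co2_tonnes = prompts_per_day * CO2_G / 1_000_000  # g -> tonnes

print(f"{daily_mwh:.0f} MWh/day, {daily_co2_tonnes:.0f} t CO2/day")
# At a billion prompts a day: 240 MWh and 30 tonnes of CO2 daily
```

Small per-query footprints still add up at planetary scale, which is exactly why consistent measurement standards matter.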
⚠️ Microsoft exec warns about “seemingly conscious” AI. Mustafa Suleyman, CEO of Microsoft’s AI unit, cautioned that advanced chatbots may soon appear conscious (convincingly human-like) and that this poses a societal risk. He calls this coming class “Seemingly Conscious AI” and warns it could trick many people into believing AI is sentient, leading to emotional attachments or even advocacy for AI rights. Suleyman stresses there’s zero evidence today’s AI is actually self-aware, and says companies should avoid describing AI in human terms. Without guardrails, he argues, these ultra-realistic AI agents could induce delusions (“AI psychosis”) and distract from real issues. Read more
🏛️ Regulators put OpenAI under scrutiny. The U.S. FTC has opened an investigation into OpenAI, examining whether the company’s ChatGPT releases have violated consumer protection laws. In a 20-page demand letter, regulators asked OpenAI how it addresses the risk of its AI models generating false, defamatory or otherwise harmful statements about real people. OpenAI CEO Sam Altman said the company will cooperate and emphasized its AI safety research. Read more
🧪 AI Research of the Week
Reliable Unlearning Harmful Information in LLMs with Metamorphosis Representation Projection
By Chengcan Wu, Zeming Wei, Huanran Chen, Yinpeng Dong, Meng Sun
Jake's Take: This paper proposes a way to erase specific knowledge from a language model rather than only suppress its outputs. The method, Metamorphosis Representation Projection (MRP), applies irreversible linear projections to hidden states at chosen layers, so the internal features linked to harmful content collapse into a safe subspace while the rest of the model stays useful. In other words, the harmful directions in representation space are identified and projected away, so the model can no longer encode them.
In tests, the model resists “relearning” when exposed again and keeps performance on general tasks better than other unlearning approaches. If results hold up, this gives a practical path for takedown requests and policy-driven removals (with less collateral damage).
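To make the core idea concrete, here’s a toy numpy sketch of projecting hidden states onto the orthogonal complement of a single “harmful” direction. This is my illustration of the general technique, not the paper’s implementation — MRP learns its projections rather than picking a random direction, and operates inside a real transformer:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# Hypothetical "harmful" direction in representation space
# (learned in the real method; random here for illustration).
v = rng.normal(size=d)
v /= np.linalg.norm(v)

# Projection onto the orthogonal complement of v: P = I - v v^T.
P = np.eye(d) - np.outer(v, v)

h = rng.normal(size=d)   # a hidden state
h_clean = P @ h          # the component along v is removed

# The harmful component is gone...
assert abs(h_clean @ v) < 1e-9
# ...the projection is idempotent (reapplying it changes nothing)...
assert np.allclose(P @ h_clean, h_clean)
# ...and it is rank-deficient (rank d-1), so the removal is not invertible.
assert np.linalg.matrix_rank(P) == d - 1
```

The irreversibility is the key property: because the projection destroys a dimension of the representation outright, fine-tuning can’t simply “undo” it the way it can undo output-level suppression.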
and then, even more news…
🍏 Apple eyes Google’s Gemini AI for Siri. Apple is in early talks with Google to use Google’s upcoming Gemini AI model to power a revamped Siri assistant. According to Bloomberg, Apple has approached Google about a custom Gemini implementation for Siri, as Apple’s own AI efforts have lagged behind – a major Siri upgrade was delayed to 2026 after technical setbacks. No deal is finalized, and Apple is also testing internally whether to stick with in-house models, but the discussions underscore Apple’s urgency to infuse Siri with cutting-edge generative AI. Read more