GT Protocol AI Digest №57: Global AI Battles — Billionaires, Brain Drain & Big Bets

7 min read · Sep 21, 2025
Intro

Welcome to GT Protocol Digest #57 — your weekly dive into the fast-moving world of artificial intelligence. This edition captures a turbulent week where billionaires shifted places on the wealth charts, Google faced lawsuits and ethical backlash, regulators tightened their grip on chatbot safety, and researchers unveiled breakthroughs in healthcare and agriculture. From the global AI brain drain to Microsoft’s shifting allegiances and NVIDIA’s big bets on the UK, the race for AI dominance is intensifying. Let’s explore the highlights that are shaping the industry, policy, and society.

Image credits: ChatGPT

1. Industry Shifts and Partnerships

  • Top AI Scientist Leaves U.S. for China: Song-Chun Zhu, a pioneering figure in computer vision and cognitive AI, has left UCLA to lead major AI research efforts in Beijing. In an in-depth Guardian profile, Zhu explains his decision was driven by U.S. political restrictions on collaboration and what he views as China’s stronger long-term commitment to AI research. His move highlights the intensifying “AI brain drain” as top scientists weigh national policies, funding, and research freedom when choosing where to work — raising concerns about how geopolitical tensions shape global AI leadership. Read more here
  • NVIDIA Bets on UK as AI Superpower: Jensen Huang, CEO of NVIDIA, declared that the UK is on its way to becoming a global “AI superpower” as tech giants pour billions into data centers, model training hubs, and semiconductor research. Speaking in London, Huang emphasized the UK’s academic base and regulatory openness, noting that NVIDIA is expanding its local presence. The announcement reflects a wider trend of countries competing to attract AI infrastructure investment as they position themselves in the next wave of technological dominance. Read more here
  • Microsoft Favors Anthropic Over OpenAI: In a surprising move, Microsoft is reportedly integrating Anthropic’s Claude 4 into Visual Studio Code, rather than relying solely on OpenAI’s models. While Microsoft remains OpenAI’s largest investor, insiders say the company wants to diversify its AI partnerships to avoid over-dependence. Developers using VS Code could soon see Claude-powered code completions and natural language assistance, raising questions about OpenAI’s role within Microsoft’s broader ecosystem. Read more here
  • Larry Ellison Briefly Tops Rich List: Oracle co-founder Larry Ellison briefly overtook Elon Musk as the world’s richest person, according to the Bloomberg Billionaires Index. Ellison’s fortune surged thanks to Oracle’s aggressive push into AI-driven cloud services, particularly the company’s $500 billion “Stargate” data center project, which analysts believe could reshape the global AI infrastructure market. Although Musk quickly reclaimed the top spot due to Tesla’s stock rebound, Ellison’s rise reflects how rapidly AI infrastructure investments are creating new centers of wealth and power in the tech industry. Read more here
Image credits: www.theverge.com

2. Google Under Fire

  • Rolling Stone’s Parent Company Sues Google: Penske Media Corporation, owner of Rolling Stone, Variety, and The Hollywood Reporter, has filed a lawsuit against Google alleging that AI Overviews divert readers away from original publishers. The suit claims this practice has cost media companies billions in lost ad revenue and undermined their ability to sustain journalism. Legal experts say the outcome could set a precedent for how AI platforms compensate content creators, similar to past battles over music and video piracy. Read more here
  • Exploited AI Trainers: A Guardian investigation revealed that behind Google’s Gemini AI lies an army of outsourced workers, mostly in Kenya, India, and the Philippines, who label data, detect biases, and filter harmful content. Paid as little as $1.44 an hour, these workers report facing grueling shifts and exposure to graphic material, while Google presents Gemini as an advanced “self-learning” system. Critics argue this hidden human backbone of AI represents a new form of digital exploitation, raising questions about fairness, transparency, and workers’ rights in the AI supply chain. Read more here
  • Google Declares the ‘Open Web’ in Decline: Internal Google communications, surfaced in a report, described the “open web” as being in “rapid decline,” which sparked outrage among publishers. Media companies accused Google of undermining independent journalism by redirecting users toward AI-generated summaries in its “AI Overviews.” Publishers argue this constitutes “AI content theft,” with Google profiting from their reporting while reducing referral traffic that keeps outlets alive. This clash highlights the tension between the survival of traditional media and the growing dominance of generative AI platforms. Read more here
Image credits: www.theverge.com

3. Policy, Regulation & Business Tools

  • FTC Probes AI’s Impact on Kids: The U.S. Federal Trade Commission has launched a wide-ranging inquiry into how chatbots and AI companions affect children and teenagers. Companies including OpenAI, Google, Anthropic, and Character.AI must provide detailed data on user engagement, privacy safeguards, and internal research on risks such as addiction, emotional harm, or manipulation. The move signals that U.S. regulators are increasingly worried about AI’s psychological and developmental impact on young users and could lead to tighter rules on how AI systems interact with minors. Read more here
  • Yext Scout Tackles AI Search Challenges: Yext has introduced “Scout,” an analytics platform designed to help businesses navigate the fast-changing world of AI-driven search. As tools like ChatGPT, Gemini, and Perplexity provide direct answers instead of linking to websites, brands are losing visibility and customer traffic. Scout monitors how generative AI tools present brand content, identifies distortions or omissions, and offers strategies to maintain presence across AI-powered search environments. This represents one of the first attempts to systematize brand management in the new AI search economy. Read more here
  • NVIDIA Powers UK-LLM for Local Languages: A partnership between NVIDIA and UK researchers has produced the UK-LLM, a large language model trained to support regional languages like Welsh, Scots Gaelic, and Cornish. Built using NVIDIA’s Nemotron framework, the model aims to preserve linguistic diversity and ensure AI tools serve speakers of less dominant languages. The initiative is also framed as a sovereignty effort, allowing the UK to maintain local AI capacity instead of relying solely on global U.S.-based platforms. Read more here
Image credits: ChatGPT

4. Research & Innovation

  • OpenAI Upgrades Codex: OpenAI has unveiled significant improvements to Codex, its code-generation model that underpins GitHub Copilot. The upgrade expands support for modern programming languages like Rust and Swift, improves performance on large and complex codebases, and reduces latency in generating usable code. OpenAI also highlighted advancements in handling ambiguous queries and multi-file reasoning, making Codex more reliable as a full-scale developer assistant. This move strengthens its positioning against rivals like Anthropic’s Claude and Google’s Gemini for coding tasks. Read more here
  • AI Helps 38 Million Farmers Predict Monsoons: Google Research has deployed an AI-powered forecasting system that delivers highly localized monsoon predictions to 38 million farmers across India. By integrating satellite imagery, historical weather data, and machine learning models, the system provides village-level forecasts with up to two weeks of lead time. Farmers use the tool to decide when to sow, irrigate, or harvest, reducing crop losses caused by erratic monsoon rains. This is one of the largest real-world applications of AI for climate resilience in agriculture. Read more here
Image credits: news.mit.edu

5. AI in Healthcare

  • AI for Fetal Health Imaging: MIT scientists have developed a machine learning system that can assemble highly detailed 3D models of fetuses from ordinary ultrasound scans. The technology enables doctors to detect abnormalities in the heart, brain, and other organs earlier in pregnancy, without invasive procedures like amniocentesis. Researchers hope this breakthrough will become a vital tool for prenatal care, particularly in regions with limited access to advanced imaging technologies. Read more here
  • AI Predicts Risk of 1,000+ Diseases: Researchers have unveiled a new AI tool that can analyze patient health data to predict the risk of more than 1,000 diseases, from cardiovascular conditions to rare genetic disorders. The model uses a combination of genomic information, electronic health records, and lifestyle data to deliver personalized forecasts. Medical experts believe such predictive tools could transform preventive medicine, allowing doctors to intervene years before symptoms emerge, though they also raise concerns about data privacy and over-reliance on algorithmic diagnoses. Read more here

Outro

That wraps up Digest #57. This week reminded us that AI is not just a technology — it’s a battlefield of wealth, ethics, geopolitics, and health innovation. From the hidden labor behind chatbots to life-saving predictive tools, AI continues to expose risks while offering transformative opportunities. As always, GT Protocol will keep you updated on the forces driving the AI revolution. Stay tuned for next week’s digest.

Written by GT PROTOCOL

BLOCKCHAIN AI EXECUTION PROTOCOL