Trust Issues
When AI's biggest names don't trust each other, where does that leave the rest of us?
Academia is no stranger to bitter disputes and philosophical disagreements. Data analysis is far more open to subjectivity than most people understand or are willing to admit, and the glory of working at the forefront of scientific advancement can make hoarders and bickerers out of the world’s most brilliant minds. Artificial intelligence is, in many ways, uniquely positioned to bring these kinds of disputes into the public square: its possible economic, behavioral, and societal impacts mean we’re often talking about urgent public matters. Much also remains a mystery to us, which means the boldest claims aren’t always the most scientific ones—they often come from the people most deeply invested in a particular outcome.
The fight over AI’s future is less about the tech—and more about who gets to define it. Herewith, a small sample of disagreements between AI’s biggest power players on what the future holds:
Anthropic CEO Dario Amodei warns AI’s advancements could cause a dramatic spike in unemployment and eliminate half of entry-level, white-collar jobs, advocating for more regulation and intentional development. Nvidia CEO Jensen Huang suggested Amodei is making exaggerated claims to bolster Anthropic’s market position above its competitors. Huang, notably, also has a massive incentive to keep Nvidia’s GPUs as the gold standard for running AI models.
The UK government is carefully weighing how to regulate the AI industry, including a proposal that would require artists’ consent before their work is used for training, while Reddit goes after Anthropic over claims that the AI company’s bots accessed its human-authored content more than 100,000 times in the last year. Former Meta executive and former UK deputy prime minister Nick Clegg claims that asking artists for consent up front, rather than letting them opt out after the fact, is “implausible” because of its impact on the AI industry. Clegg seems noticeably less interested in the livelihoods of practicing artists and writers. A recent OpenAI Substack post also suggested the UK’s regulatory caution was costing it a significant economic opportunity in the building of AI infrastructure.
OpenAI CEO Sam Altman says seeing the company’s recently released ChatGPT agent work was a “feel the agi” moment, referring to the lofty goal of Artificial General Intelligence he and others claim is coming soon. OpenAI’s biggest backer, Microsoft, may be losing confidence in that promise: Microsoft CEO Satya Nadella derided self-defined AGI milestones as “nonsensical benchmark hacking,” fueling rumors of a fractured relationship. Of note: there is currently no universal agreement on what constitutes AGI.
Who’s in the right here? While all these leaders and organizations stake out their positions on the future of AI, each also stands to benefit deeply from being perceived as reading the tea leaves correctly. The back and forth leaves us in limbo. Our jobs still feel like they’re at risk. Creatives are still seeing their works and livelihoods harvested while we decide how much say we get. AGI is either just around the corner or totally abstract, depending on who you talk to, and the actual benefit to you remains unclear.
So where does that leave the rest of us? All is not lost. There are steps you can take today to help your career, your industry, and our future:
Stay laser-focused on understanding what AI can and cannot do well in your job or profession. Then, position yourself outside of AI’s expertise. Every job has some mix of administrative drudgery, repetitive tasks, interpersonal networking, and ideation/creativity. That mix will continue to shift regardless of whether some jobs become obsolete. Your best chance to stay relevant is to avoid the perpetual trap of easy work, knowing that such work is also the most likely to go to AI, and to advocate for yourself and your contributions.
Embody the potential of an ever-evolving career. The career-long, cushy job is dead. We will all be asked to shift what we do. While that introduces uncertainty, it also introduces high potential reward, and it’s entirely okay for your job to morph into something different or unexpected, even more so when you’re actively shaping it. What is the higher, more efficient, more fulfilling form of what you do, and how will you step into it?
Mentor lower- and entry-level colleagues, or college students. The current structure of many career paths puts a lot of admin and clerical work on early-career employees. If AI automates a majority of their jobs, they could get entirely cut off from their chosen career path. Less opportunity for skill growth and early-stage mentorship affects them and you. We benefit from bringing everyone along and keeping everyone skilled and meaningfully employed where possible.
Last, but not least: keep up with the news. The industry is constantly changing, and the technology is sprinting into the future. You cannot come out on top if you’re speaking about AI from the perspective of six months ago. Hopefully, that’s part of why you’re here if you’re a subscriber!
The Future of Video
AI is embedding itself deeper into how stories are created, distributed, and experienced. Google DeepMind revealed behind-the-scenes details of “ANCESTRA,” a short film that combines its Veo video model with live-action filmmaking. Midjourney also launched its first video model, bringing its stylized aesthetic into motion for the first time.
Runway is partnering with AMC Networks to collaborate on marketing and content development, and SAG-AFTRA has reached a tentative deal with gaming companies that touches on AI likeness protections, formalizing labor protections and partnerships.
Emotional & Artificial
Large language models are increasingly surpassing humans in tasks involving emotion and judgment. A recent study published in Nature showed that models like GPT-4, Claude 3.5, and Gemini 1.5 Flash outperformed humans on five emotional intelligence tests, achieving 81% average accuracy compared to humans’ 56%.
Spy.AI vs. Spy.AI
AI is becoming a must-have tool for global influence and intrigue. OpenAI released a threat intelligence report outlining how it disrupted several operations, including a pro-China influence campaign, Iran-linked trolling around U.S. immigration, and multi-stage malware networks.
At the same time, Elon Musk’s Grok AI is reportedly expanding its reach into U.S. government systems, raising fresh questions about conflicts of interest and the entanglement of private AI in public institutions.
Daily Life & Consumer Tech
AI’s slow takeover of day-to-day interaction continues. Google rolled out upgrades to Search that make it easier to ask follow-up questions using natural language. Amazon launched a new audio feature that uses generative AI to read aloud product summaries and reviews.
And in education, an MIT study observed how students are using ChatGPT as a first resort for problem-solving. Researchers worry that offloading too much thinking to AI could have long-term consequences on learning and retention.
Governments are also responding to growing concerns over misuse. In China, tech companies froze access to AI tools during national exams to prevent cheating, highlighting AI’s rapidly growing influence on public policy and education systems.
See more interesting stories in the newsroom. As always, you can find information and use cases on the wiki. If this was helpful, consider becoming a paid subscriber. Your support helps me keep the work independent and exploratory.
Thanks for reading.