The Flip Side
Who are the winners and losers in an AI-everywhere world?
AI saturation is quickly becoming the new reality. Global, iconic brands like McDonald’s and Coca-Cola are using generative video in holiday campaigns (for Coke, the second holiday season in a row). The labor market is being reshaped as roles tied to generative AI, from consultants to skill-transformation specialists, expand rapidly: job postings referencing generative AI have surged, with descriptions increasingly tied to AI skills or AI-mediated tasks. Indeed’s 2025 AI at Work report found that 26% of all jobs posted last year are “highly transformable” by generative AI, and another 54% face “moderate” transformation.
As this wave of adoption continues, consumer backlash is moving from abstract worry to concrete resistance. Platforms are facing pushback against forced AI features and opaque data use, signaling early fatigue with AI-everywhere defaults. Brands are discovering that not all AI looks like innovation: McDonald’s pulled its AI-generated holiday campaign after customers called it “soulless.” What was intended as efficiency and novelty instead highlighted how quickly gen-AI’s aesthetics can register as empty or disposable. Independently made tools like Slop Evader suggest that “AI-avoidance infrastructure” is already forming. Saturation can erode trust faster than it builds efficiency.
Just as important as the movements of AI’s biggest players is how different groups experience artificial intelligence, a split that mirrors the recently revived discourse on a K-shaped economy. The idea describes recoveries where growth accelerates unevenly: one curve bends sharply up for those with assets and leverage, while the other slopes down for those without. Despite a still-booming stock market and low unemployment, economic gains have been uneven, with rising costs, automation pressure, and precarity disproportionately affecting lower-income workers. AI maps onto this divide directly: stock valuations, products, and sub-economies are poised to follow and reinforce it, creating diverging experiences that feel utopian for some and dystopian for others.
The divide is between those who control their AI experience and those who live inside it.
On the upper curve, knowledge workers pay for and curate tools that reduce noise and busy work — premium LLMs, ad-free platforms, human-curated media — turning AI into leverage rather than friction. Here, AI is helpful, on-demand, and intentional: something you consult, not something that constantly intervenes.
On the lower curve, many workers and consumers encounter AI primarily through automation they can’t disable, like screening systems, recommendation engines, chatbots, cheap ads, and performance monitoring. AI functions as an environment: always on, rarely explainable, and difficult to escape.
There are some unquestionable benefits and features that are only getting better. AI genuinely lowers barriers to writing, research, and creation (though the quality is still largely influenced by the user). Natural-language queries open up a vast world of knowledge building. Yet user behavior and company incentives aren’t built to encourage this by default: industry insiders are warning that optimization for benchmarks and scale is producing floods of low-quality output — what even AI executives call “AI slop” — undermining the promise of democratized quality. Creation is cheaper, but discernment is harder. Curation and intention, not access, become the advantage, and those cost money, time, and attention.
What’s new isn’t the existence of premium experiences. Past eras of technological advancement created their own haves and have-nots. This time, it’s what people are paying to avoid. AI can mediate cognition itself: what you see, how it’s summarized, and what’s never shown at all.
One of the biggest incentives driving both divergent curves is convenience. On the upper curve, convenience means time saved and expanded capability. On the lower, it collapses into coercion when AI systems are unavoidable in work, healthcare, and public services, especially when those systems make consequential decisions without transparency or appeal. Preference only matters when alternatives exist.
In an attempt to keep up with the technology, regulation is increasingly focused on labeling, transparency, and consent. South Korea will require clear labeling of AI-generated ads, effectively making awareness of AI intervention a consumer right. European regulators have opened new antitrust scrutiny into Google’s AI content practices, citing concerns that publishers can’t meaningfully opt out without hurting their distribution, a structural example of “consent without alternatives.” Creative industries are openly grappling with whether AI replaces human labor or commoditizes it, especially in music and media, including a recent, passionate debate over whether it is morally acceptable for video game studios to use AI at all.
Still, the spread continues. Generative features are rapidly becoming the default across search, productivity software, advertising platforms, and customer service, often without clear opt-out mechanisms for end users.
The question isn’t whether AI can transform society, but how and for whom. AI will touch nearly every interaction, every job, and every service, but not equally. For some, AI will be a tool of empowerment, a lever that amplifies judgment and creativity. For others, it will shape decisions, perceptions, and outcomes with little transparency or agency. Those whose experience of AI is negotiated, with choice, with visibility, with rights, will enjoy something closer to autonomy. Those who cannot negotiate it will experience AI as ambient infrastructure: unavoidable, inscrutable, and coercive. Whether that trajectory can be changed is a contest over who decides when and how AI intervenes in the human experience. If we treat human agency as a necessity rather than a luxury, we may yet shape a future that preserves choice as a core value, not a premium good.
Reading List
“Slop Evader” lets users freeze the internet in 2022 to escape AI-generated content.
After its plans to turn Firefox into a “modern AI browser” drew community backlash, Mozilla promised an “AI killswitch” to turn off all AI features.
48% of Pennsylvania adults believe AI will have a negative impact on the economy. 55% believe it will take away jobs in their industry.
According to Indeed’s 2025 AI at Work Report, 46% of skills in a typical job posting are poised to be partially or fully transformed by AI.
Three Democratic senators opened an investigation into reports of higher energy bills being driven by AI energy use, “passing on the costs of building and operating their data centers to ordinary Americans.”
Following multiple high-profile lawsuits between media juggernauts and AI upstarts, The Walt Disney Company and OpenAI announced a three-year licensing agreement where users can create Sora videos with 200 Disney-owned characters.
The U.S. Energy Department is collaborating with 24 AI organizations to advance the Genesis Mission, using AI to “accelerate discovery science, strengthen national security, and drive energy innovation.”
After years of operating on a subscription model for its applications that drew vocal criticisms from design professionals, Adobe is making Photoshop and several other apps available in ChatGPT, for free.
Thanks for reading.


