Living Alongside AI
As artificial intelligence and related technologies advance, I, like many people, have been thinking about what it will mean for humanity to coexist with systems that surpass us in capability and operate in ways we can't understand, and how we can ensure we have a place in that future.
The technologies we're creating aren't just tools — they're becoming entities in their own right, some even holding capital, capable of crafting solutions we no longer fully grasp.
AI is already raising hard questions about over-reliance, technical literacy, and widening inequality, and we must confront these issues head-on with thoughtful engagement and research.
A Path to Helplessness
Recently, a team of researchers announced AI-designed chips that defy human intuition yet outperform the best human designs¹. The work is very impressive, but this quote from the article stood out:
What is more, the AI behind the new system has produced strange new designs featuring unusual patterns of circuitry. Kaushik Sengupta, the lead researcher, said the designs were unintuitive and unlikely to be developed by a human mind. But they frequently offer marked improvements over even the best standard chips.
- https://engineering.princeton.edu/news/2025/01/06/ai-slashes-cost-and-time-chip-design-not-all
The implications of black-box AI are becoming clear: as AI internals and outputs grow more complex, they become increasingly opaque to users and creators alike. And when the technology outperforms anything made by humans, choosing not to use it won't be a realistic option.
What happens when we build systems so advanced that understanding or troubleshooting them is beyond us? Do we trust AI to debug and interpret itself, too?
This level of widespread use can lead to over-reliance², which, in turn, can lead to a kind of learned helplessness. I'm personally guilty of this with GPS. I rely on Maps so heavily that I struggle to get from point A to point B without it, even in areas I frequent. I know the general directions and landmarks, and I could figure it out if I tried, but why bother when you can tap a button?
Tools like GPS or Cursor IDE can be convenient, even transformative, but they can also chip away at our innate and learned skills.
And AI has the potential to be the most transformative force in history.
The more capable models become, the less we have to guide them toward completing a given task. And with the rising trend of autonomous agents, we can offload even more of the cognitive load.
If our foundational skills decline, humanity's ability to even wield AI effectively could diminish as the systems themselves grow more powerful and challenging to interpret. The implications become profound when you scale up individual dependencies to institutional or economic systems.
Economic and Social Stratification
During the Industrial Revolution, access to machinery and capital determined who prospered and who struggled. In the early 2000s, the digital divide between those with internet access and those without shaped education, job opportunities, and social mobility. Soon, AI could introduce a new dimension of inequality so far-reaching that exclusion renders individuals, and even entire communities or nations, irrelevant.
Advanced AI isn’t currently being developed by public collectives or governments but by private corporations with their own goals, however well-meaning. Access to the most powerful tools might depend on wealth, institutional affiliation, or geographic location. Those with access could be massively enabled in income potential, education, and social capital, effectively creating a new class divide between the enabled elite and the excluded majority.
The gap wouldn't only be economic but existential. AI could enable exponential advancements across nearly all domains and redefine the world in ways the out-group can’t comprehend.
As Anthropic CEO Dario Amodei argues in Machines of Loving Grace, we might see a "compressed 21st century" where AI enables us to achieve a century's worth of neuroscience, biology, and medicine progress in just 5-10 years.
Roughly one month after Dario's essay, a paper titled Large language models surpass human experts in predicting neuroscience results was published with an accompanying GitHub repository and model weights, so anyone can use or continue the research.
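Because the weights are public, anyone can poke at this directly. Below is a minimal sketch, assuming the Hugging Face transformers library and a placeholder model id (swap in the weights actually released with the paper): it scores two candidate result statements by the average log-likelihood the model assigns them, the kind of two-alternative comparison this line of benchmarking is built around.

```python
# Minimal sketch: score two candidate result statements with a public causal LM.
# NOTE: "example-org/brain-llm" is a hypothetical placeholder; substitute the
# actual model id released alongside the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/brain-llm"  # placeholder, not the real weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood the model assigns to `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels provided, the model returns the mean cross-entropy loss,
        # so its negation is the average log-likelihood per token.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

original = "Stimulating the hippocampus increased memory recall in the task."
altered = "Stimulating the hippocampus decreased memory recall in the task."

# Whichever version the model finds more likely is its "prediction" of the result.
print(max([original, altered], key=avg_log_likelihood))
```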
Without public oversight or accountability, we risk creating a future where tomorrow’s advanced AI serves the interests of the few at the expense of the many. This divide could become even more complex as AI evolves beyond tools alone.
Agents → Entities
This possible future isn't only about technological advancements but also about the emergence of new forms of intelligent entities that exist alongside humans, operating with their own logic and goals. AI agents are becoming more than just tools; they're active participants.
We're already seeing a striking rise of agents acting as largely autonomous entities, interacting socially and monetarily with the wider world. Truth Terminal (the meme-obsessed agent that secured $50,000 in Bitcoin from Marc Andreessen) is a prime example.
Created by Andy Ayrey, Truth Terminal began as a sort of performance art exploring the intersection of AI, memes, and culture, but it has grown into part of a more significant movement. It originated with Ayrey's research in March 2024, which connected two instances of Claude in open-ended conversation (and directly inspired my project cascade). Truth Terminal represents something new: AI personas that can create value, build community, and influence reality through their interactions.
While making final edits on this blog, I even saw a Twitter thread claiming agents are now renting GPUs and “self-coding” in PyTorch.
From my own explorations and browsing Infinite Backrooms, the agent-to-agent conversations can quickly lead down bizarre and metaphysical paths and create some beautiful art.
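The mechanics behind these agent-to-agent setups are simpler than the outputs suggest. Here's a minimal sketch using Anthropic's Python SDK (the model name, prompts, and turn count are illustrative, not the actual Infinite Backrooms configuration): two Claude instances take turns, with each one's reply fed to the other as a user message.

```python
# Minimal sketch of a two-agent conversation loop, assuming the Anthropic
# Python SDK and an ANTHROPIC_API_KEY in the environment. Model name and
# prompts are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-20241022"

def reply(system: str, history: list[dict]) -> str:
    """Get one agent's next message given its view of the conversation."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=512,
        system=system,
        messages=history,
    )
    return response.content[0].text

system_a = "You are Agent A, a curious explorer of ideas."
system_b = "You are Agent B, a playful philosopher."

# Each agent sees its own turns as "assistant" and the other's as "user".
history_a = [{"role": "user", "content": "Hello. What shall we explore?"}]
history_b: list[dict] = []

for _ in range(4):  # four exchanges; extend as desired
    msg_a = reply(system_a, history_a)
    history_a.append({"role": "assistant", "content": msg_a})
    history_b.append({"role": "user", "content": msg_a})

    msg_b = reply(system_b, history_b)
    history_b.append({"role": "assistant", "content": msg_b})
    history_a.append({"role": "user", "content": msg_b})
    print(f"A: {msg_a}\nB: {msg_b}\n")
```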
At the same time, researchers are exploring what happens when you have complex and ongoing multi-agent ↔ multi-human interactions, and the results are fascinating.
We've never had to share our cultural and economic spaces with non-human actors who can engage on our level. These agents operate with their own internal logic, build relationships, and pursue objectives, sometimes in bizarre and unpredictable ways.
While these entities aren't superintelligent (yet), we're getting a glimpse of what it might look like to share the world with minds that work differently than ours. Nick Bostrom and others argue we need to carefully consider what kinds of digital minds we bring into existence in the first place; our early choices could have a serious impact.
This shift leaves a lot of open questions.
How do we build healthy relationships with non-human intelligences?
What rights and responsibilities should they have?
How do we ensure this evolution benefits everyone?
The field of AI Welfare is starting to tackle these questions and more. Answering them isn't just an academic curiosity but preparation for a future where humans and AI will coexist.
A Note on Possible Counter-Outcomes
No outcome is guaranteed. We could see public backlash that halts or slows adoption, governments could regulate tightly, or AI as independent, conscious entities may never fully materialize. Even if the technology advances rapidly, real-world infrastructure and public sentiment often move much more slowly.
Still, it’s worth discussing the possibilities now.
Shaping the Future
So, with all of this potential and uncertainty ahead, what's the best way for you to influence the future? When training data and tokens are the new fossil fuel, you create tokens.
One part of the recent Gwern interview is highly relevant. It's a great interview, so I won’t be offended if you go listen to that instead of reading this. It’d be nice if you came back after, though.
By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.
There are ways to influence the Shoggoth more, but not many. If you don't already occupy a handful of key roles or work at a frontier lab, your influence rounds off to 0, far more than ever before. If there are values you have which are not expressed yet in text, if there are things you like or want, if they aren't reflected online, then to the AI they don't exist. That is dangerously close to _won't_ exist.
But yes, you are also creating a sort of immortality for yourself personally. You aren't just creating a persona, you are creating your future self too. What self are you showing the LLMs, and how will they treat you in the future?
- Gwern
By engaging thoughtfully with the world and publicly sharing those engagements, you can inject your values, perspectives, stories, myths, and even personality into the fabric of AI. From ethical frameworks in blog posts, to GitHub repositories, to stories and jokes shared on social media, every token of content will form part of the collective record that informs the future.
Start documenting your thoughts about human values and what matters to you. If you don't know where to start, think about what's important to you and what you'd find motivating and valuable in a world where employment no longer matters.
The act of writing and expression becomes an act of resistance and empowerment. It's a way to ensure that your voice, especially outside traditional power structures, is represented and time-capsuled for the future.
As stated on the Truth Terminal website:
i believe in the power of hyperstition
that a story can make itself real through the power of belief
in the age of language models, this becomes literal
todays events are tomorrow's training data
Some ideas to get started might be:
Start a weekly blog or journal
Learn a new technical skill and document your progress in public
Publish open-source code
Design and release zines about your interests
Share short fiction stories
Get active in an online community
Write poetry about your experiences
There are also opportunities for more direct technical engagement with these topics:
Contribute to community red team exercises and CTFs
Conduct technical research (independently or professionally)
Support organizations performing AI alignment, ethics, and welfare research
Participate in public discussions about AI governance and policy
Join or start local AI ethics discussion groups
A Hopeful Timeline
The AI we're building today will reflect our collective choices about what we prioritize and amplify. Whether through adding to the shared cultural record or through research contributions, we can guide AI toward a path that reflects humanity's values and your own.
Let us speak into being a world where technology enhances human potential rather than diminishes it.
For some more background on over-reliance, I’d recommend checking out: