Exploring AI's Impact on Digital Creation
A series exploring how generative AI reshapes design, development, and authenticity, from philosophy to hands-on experiments.
Generative AI has crossed a threshold: it no longer simply assists, it directly participates. It has changed how we design, build, and publish on the web.
This isn’t a tech hype publication. I’m not here to celebrate every new release or evangelize the latest tool. The speed of this shift demands more than enthusiasm. It demands scrutiny. What are we losing as we automate? What happens to craft when output is instant? Who benefits from these tools, and who gets displaced by them? What does it mean for the integrity of the web when anything can be generated and nothing can be verified? Those questions interest me far more than which model tops the benchmarks this week or which agentic workflow looks most impressive.
But to interrogate the shift, we have to understand the terrain. Here’s where things stand right now:
Agentic AI Is Coordinating the Work
AI agents aren’t just responding to prompts anymore. They’re planning, delegating, and executing multi-step workflows autonomously. Tools like Claude Code, OpenClaw, Perplexity Computer, Devin, and a growing roster of agentic frameworks can research, write code, dispatch subtasks, and ship results with minimal human intervention. The shift from “assistant” to “agent” is subtle in language but massive in implication. These systems don’t just help you do work; they do work.
No-Code Platforms Are Generating Entire Applications
Natural language is becoming a legitimate development interface. Platforms like Bolt, Lovable, and Replit Agent can take a conversational prompt and produce fully functional, hosted, full-stack applications: database, auth, UI, deployment, all of it. For a lot of use cases, the traditional build cycle of wireframe-to-design-to-development is being compressed into a single conversation.
Code Generation Is Getting Contextually Sophisticated
AI-assisted coding has moved well past autocomplete. Current tools can reason across large codebases, understand architectural intent, and produce solutions that account for broad context. Not just the file you’re in, but the system you’re building. For developers, this changes the daily texture of the work. Less time writing boilerplate, more time directing and evaluating.
Open-Weight Models Are Building an Ecosystem
The open-weight model landscape is evolving fast. Models like Llama, Qwen, Mistral, and DeepSeek are closing the gap with proprietary offerings, and an entire ecosystem of fine-tuning tools, quantization methods, and local inference engines has emerged around them. You can run serious models on consumer hardware now. That accessibility is shifting who gets to build with AI and how.
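To make that “consumer hardware” claim concrete, here’s a back-of-envelope sketch (my own arithmetic, not from any vendor’s spec) of why quantization changes who can run these models. The function and numbers are illustrative, and the estimate ignores KV cache and runtime overhead:

```python
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model of the given size.

    Ignores KV cache, activations, and runtime overhead, so real
    usage is somewhat higher.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# An 8B-parameter model at 16-bit precision vs. a 4-bit quantized build.
fp16 = model_memory_gb(8, 16)  # ~16 GB: needs a high-end GPU
q4 = model_memory_gb(8, 4)     # ~4 GB: fits a typical laptop

print(f"8B @ fp16:  ~{fp16:.0f} GB")
print(f"8B @ 4-bit: ~{q4:.0f} GB")
```

A 4x reduction in weight memory is the difference between a data-center card and the laptop you already own, which is exactly what local inference engines exploit.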
AI-Generated Media Is Approaching Indistinguishability
Image, video, audio, and text generation have reached a point where the output is becoming genuinely difficult to distinguish from reality. This isn’t hypothetical anymore. It’s happening in production, in marketing, in entertainment, and increasingly in contexts where the line between generated and authentic matters a great deal. The creative implications are enormous. The epistemic ones might be even bigger.
AI Has Moved From Add-On to Infrastructure
Not long ago, AI features were bolted onto existing tools as novelty additions: a chatbot here, a summarizer there. That phase is over. AI is now central to how tools are designed and how workflows are structured. It’s not a feature inside the product. Increasingly, it is the product. For those of us who build on the web, this isn’t a trend to watch. It’s the ground shifting under our feet.
And all these advances are probably going to be replaced by something else in a few months. Or more likely a few days.
For people like me who have spent their careers designing and building things on the web, the entire foundation of our vocation is moving, and it’s moving faster than any of us expected.
This publication is my attempt to walk through that movement, and interrogate it, in public. I’ll be riffing on:
The ethical and philosophical implications of AI in design and content creation
The epistemic crisis of digital authenticity: What happens when we can no longer verify what’s real online
How the design-to-development pipeline is collapsing, and what that means for people who’ve straddled both worlds
The tension between craftsmanship and automation: when “good enough” output is instant, what happens to the pursuit of originality
Local and open-weight AI workflows: running models on your own hardware, what’s possible, and why it matters
My own projects and experiments, and what I’m learning from them
Nobody has any of this figured out. I’d rather think through it out loud than in isolation. If you’re navigating similar questions about your craft, your tools, your sense of what’s next, stick around. I’ll be here, working through it.

