The "Vibes" Era: The most important shift in technology
I’m not being hyperbolic with the title of this article. We are, unquestionably, going through the most radical shift in technology, particularly in the user interface. The nature of our relationship with software, and how we interact with it, is right on the cusp of being completely rewritten. We’re entering the “vibes” era, where there is no formal relationship with any one piece of software but rather a “vibe” relationship.
Let’s take a little walk Beyond the Yellow Woods.
What Are Vibes
In the most general sense, vibes are simply how humans feel about a thing. Do you feel good or bad? Are the vibes good or bad? The thing to note is that vibes imply a generality to the feeling. Something is “generally” positive or negative. It’s an “overall” statement.
The other part of vibes comes down to a relationship/intimacy level. If you’ve got “general” good vibes towards something, it often means the relationship with the thing is good but shallow. However, if you really “fuck with that vibe,” as Gen-Z would say, it’s a much deeper relationship because you’ve spent more time with it. Like, you really like the vibe.
The nuance is dumb, I know. Explaining this makes me feel like a boomer but hopefully, there’s a general understanding. Anyways, moving on.
The Big Shift
The meme of “vibe coding” has penetrated the internet lately. It went totally viral, and now everyone is building out their own “vibe” for their domain.
What’s happening is that LLMs are getting so good at interacting with programs, tools, and utilities that humans can much more naturally converse with them without requiring any deep expertise. Sure, if you knew the software at an expert level, you could probably do stuff that LLMs couldn’t (or at least couldn’t do very easily). However, with LLMs, someone who knows nothing about the software other than its general intended use can build incredible things by simply talking to the chat interface (LLM). They “vibe” in conversation with the interface, where the vibe-feedback comes from the real-time visualization of whatever they’re building. Images, code, animations, 3d models, etc.
There’s a new degree of experimentation that LLMs can provide because they warp time. There’s no clicking or thinking through how to accomplish a task with the software interface. It just does it because it knows everything about the API endpoints, how they function, and how they work relative to what the user is asking for. Humans get to play with the software so fluidly. Knowledge of a domain and tools for that domain are completely flattened.
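Mechanically, that flattening is pretty simple. The model translates what the user says into a structured call against an API it already knows, and a thin layer executes it. Here’s a minimal sketch of that dispatch step; the tool names and registry are invented for illustration, not any real product’s API:

```python
# Hypothetical sketch: the chat layer flattens tool knowledge.
# The LLM emits a structured "tool call"; a thin dispatcher runs it.
# TOOLS, render_scene, and export_model are all invented names.

TOOLS = {
    # Each entry is one capability the model can invoke on the user's behalf.
    "render_scene": lambda description: f"rendered: {description}",
    "export_model": lambda fmt: f"exported as {fmt}",
}

def dispatch(tool_call: dict) -> str:
    """Execute one structured call the model emitted."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])

# The user never learns the API; the model turns intent into this call.
result = dispatch({"name": "render_scene",
                   "args": {"description": "a castle with a moat"}})
```

The user only ever supplies the sentence; everything below the dispatcher is the software’s problem.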
The clearest example I have is Claude talking to Blender. I can type up some sort of scenery I want to see in Claude, and it will drive Blender (the 3D modeler) to create that scenery without me needing to know a fucking thing about Blender. 3D modeling software is notoriously hard to learn because there are so many elements to it. Even video games are notoriously hard to build. Well, here’s an example of a dude who “vibe coded” an entire game that now clears $1M ARR.
The vibes era is really about the ability to request tasks (formally and informally) from tools. You ask it to create a landscape for you and it will do a first pass. You get a “vibe” from the image and say “it needs a castle”. It does an iteration and builds an example of a castle. You get a “vibe” and feel like the “castle needs a moat”. It does another iteration and so on.
The vibes era allows you to paint with these systems by providing broad strokes and outlines and then iterating with the system on the details. Like tennis, each volley back and forth is a vibe check. The better these LLMs get in both intelligence and their connectivity to tooling, the longer the volley goes on.
It’s good vibes, dude.
The Shift in User Interface
So now that we know about the big shift, let’s home in on the thing that is monumental about this shift: the user interface.
I touched on it in the last section but let’s call it out again. One of the biggest changes is that I don’t need to know how the software works to wield it as a weapon. I don’t need to know the navigation. I don’t need to know the sub-tabs. I don’t need to know the fields, processes, quirks, or tips-n-tricks to get my job done. I just write out what I need in a chat interface.
A great example of a big shift that will happen is CRMs. We will see AI-native CRMs where the predominant way of interacting with the system is through natural language.
“Jerry is ready to sign on the contract so let’s get a deal created and send over the contract PDF to sign”.
Instead of navigating to the “Contacts” tab, creating a deal underneath this contact, then changing the status to ready to sign, then downloading the correct contract PDF template, then emailing Jerry with the contract… I simply write the sentence out.
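Under the hood, that one sentence has to decompose into the same sequence of actions the clicks used to perform. A hedged sketch of what an AI-native CRM might do with it; the planner is a stub standing in for the LLM, and the action names (`create_deal`, `set_status`, `send_contract`) are hypothetical:

```python
# Hypothetical AI-native CRM sketch: one sentence becomes the same
# ordered actions a user would otherwise click through. The action
# vocabulary here is invented for illustration.

def plan_from_sentence(sentence: str) -> list[dict]:
    """Stub for the LLM step: map intent to structured CRM actions."""
    # A real system would have the model emit this plan from the sentence.
    return [
        {"action": "create_deal", "contact": "Jerry"},
        {"action": "set_status", "status": "ready_to_sign"},
        {"action": "send_contract", "to": "Jerry", "template": "standard.pdf"},
    ]

def execute(plan: list[dict]) -> list[str]:
    """Run each planned action against the (fake) CRM backend."""
    return [f"{step['action']} ok" for step in plan]

log = execute(plan_from_sentence(
    "Jerry is ready to sign so let's get a deal created and send the contract"))
```

The interesting part is that the UI steps don’t disappear; they just stop being the user’s job.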
Now, there’s a separate key shift here that I don’t think most people are seeing. Historically, the UI has been the view into the software and its functionality. What now matters is how the software publishes the results of the natural language query. What does the user get back? Is it just text? Is it an image? Is it a mini-app in the browser alongside the chat interface?
This is the open question right now that we are starting to figure out. In the Claude→Blender example given earlier, we’re doing the prompting in Claude and then the display in Blender. This is a 1:1 relationship. But what happens when there’s a 1:N relationship? What happens if I need Claude→Blender→Runway? (Runway is a text-to-video creator).
Here’s an example of vibing with 3 tools.
What is clear to me is that LLM chat interfaces are the entry portal, but they have fractured what it means to receive published content. These chat interfaces will have to advance into displaying data back from different systems so that I can vibe with them in harmony, as opposed to making a change here, then another there, then another. I want to vibe with many applications in harmony.
Prompt once, vibe with many systems.
A possible solution to this is a chat interface with “tabs” that map to specific tools. The tools become headless in this sense, much like what happened when content management systems went headless. All software will become headless and we’ll just focus on building engines behind APIs with tightly defined visual outputs.
Why Is This Big
This is huge because anyone can build or do anything now (digitally, at least). Now, that doesn’t mean that the quality will be higher or whatever, but anyone will be able to create a movie, a 3D rendering of a home, a piece of software, etc. And it’s the worst it will ever be, right now.
I’ve talked about this before, but understanding exponential curves is super difficult for humans. We’re just bad at it. But on an exponential curve, this change will alter how the majority of humans work.
Specializing in particular software or coding languages becomes worthless. Instead, really knowing how to vibe with an industry will become paramount: knowing how to talk the lingo, understand the pain points, and shape things in your head such that you steer the LLMs + tools in specific ways.
Language and the ability to describe and explain things such that you shape and nudge the published responses from the tools you’re leveraging will be what separates the amateurs from the pros.
We already see this today where there are people (often Gen-Z) who just get prompting because they have no prior knowledge about most software interfaces. Their prior knowledge is smartphones where the interface is trimmed and more fluid than desktops, and they’re growing up in the LLM era. Millennials have a lot of priors because we grew up with the internet and not on it. Big difference.
What will ultimately happen is that LLMs’ depth of capability in understanding and interacting with APIs will be what matters most moving forward. Apart from that, software creators will eventually shift from having a UI to having specific ways of uniquely visualizing responses from their system.
My ability to vibe with the responses from these APIs/software creators in synthesis with other tools is what will ultimately win. Much like a Git Repo, there will be “vibe” repos - tools that the community votes as easy to vibe with (meaning, easy to get what you want out of it).
Superseding all of these tools will be a new type of “browser” that understands how to orchestrate these tools together. It will have the ability to take a prompt, analyze it, create a plan for accomplishing the prompt, execute the tool/software calls to build it, and then render a response. So there may be browsers specific to making movies, or ones specifically for software (e.g. Cursor/Replit), or even finance. The point is that there will be specialized IDEs/Browsers/whatever that are really good at orchestrating and interacting with a suite or marketplace of vibe repos (tools/utilities) for a specific intention.
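That prompt → plan → execute → render loop is the whole job of this hypothetical browser. A minimal sketch, with a stub planner where the LLM would sit and fake tools standing in for Blender and Runway:

```python
# Hypothetical "orchestrating browser" loop: take a prompt, plan which
# tools it needs, call them in order, return the rendered results.
# The planner is a stub (an LLM would do this) and the tools are fakes.

def plan(prompt: str) -> list[str]:
    """Stub planner: decide which tools the prompt needs, in order."""
    steps = ["blender"]          # always start with a 3D scene
    if "video" in prompt:
        steps.append("runway")   # chain to text-to-video when asked
    return steps

# Tool name -> fake tool call (names invented for illustration).
TOOLS = {
    "blender": lambda p: f"3d scene for '{p}'",
    "runway": lambda p: f"video from '{p}'",
}

def orchestrate(prompt: str) -> list[str]:
    """Execute the plan step by step, collecting each tool's output."""
    return [TOOLS[tool](prompt) for tool in plan(prompt)]

outputs = orchestrate("a castle video")
```

Swap the stub planner for a model and the lambdas for real APIs, and this loop is the specialized browser: same skeleton whether the marketplace behind it is movie tools or finance tools.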
That’s a big shift. That’s a vibe shift.