Aug 26, 2024
Help wanted: AI designers
LLMs feel like genuine magic. Yet somehow, we haven’t been able to use this amazing new wand to churn out equally amazing new products. This is puzzling.
The products are stalled because LLMs blew up our software production line
Over decades, our industry has gotten pretty good at turning a napkin sketch into a full-fledged product. That isn’t because we just learned to type out code faster! We got better at figuring out what software to make.
UXRs report on user studies, PMs prioritize roadmaps, designers make user flows, and engineers design architectures. Together, these people take the huge number of potential things we could do and turn it into a clear picture of a single thing worth doing.
Now, maybe you’re at a startup, and you’re saying, “my team doesn’t have 5 people, let alone 5 different roles.” First, I’m jealous, and second, even if you don’t have a full-time person figuring out each of these components, you almost certainly have someone wearing each of these hats for part of their time.
All of these roles have emerged because each helps us resolve a key kind of ambiguity:
Who’s our customer?
What do they need?
How do they want to interact with our software?
What can we feasibly build given a complex matrix of constraints?
With answers to these questions in hand, we have a tidy user flow backed by a neat engineering architecture that we can bang out with some fast typing and ship.
… Until you throw an LLM in the mix. Then, all of a sudden, there’s an alien in the middle of your software stack that defies all attempts to be specified.
What makes the models so compelling — the ability to turn infinite input possibilities into infinite creative outputs — is exactly what makes them so hard to pin down into a shippable product. Infinite possibility is awesome, but it also takes our industry’s tidy little user flows with 6 states and a dozen edge cases and throws them right out the window.
We need… someone new?
In my time trying (and mostly failing) to make new LLM-based products, I’ve seen tons of teams get stuck in an AI swamp. It starts off well! A team will have a spark of inspiration: “wouldn’t it be awesome if, instead of having to read all my email, an agent could give me a single digest of all the important stuff in my inbox?” Hell yes it would! LFG!!! We’re going to invent the future of communication!
Then, we sprint straight into the swamp. No one has the skills to pin down the really hard questions that turn that sketch into a real thing.
How do we decide what “important” means? How should the digest read? Or should the user read it at all? Maybe it should be audio? Can a user interact with it? Chat with it? Teach it about their preferences? Can any of the models we have access to actually do any of those things reliably?
Everyone on the team has a bunch of great skills, but none of those skills really answer these questions.
Designers can help us visualize possible software, but they’re generally not equipped to help us experience how different AI behaviors will feel.
Researchers can tell you what state-of-the-art performance on a task is and how we might push beyond it, but not without a pretty concrete notion of what the eval should be (see the sketch after this list).
Similarly, engineers can help us optimize a system once we know what “good” is, but they’re generally not trained to explore the vast range of possible behaviors to define what good means.
UXRs can tell us about what problems users care about, but they’re powerless to tell us which of those problems an LLM can solve.
PMs can balance trade-offs across all these disciplines, but they critically rely on the expertise of all the other practitioners to do that.
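To make that concrete: below is a minimal, entirely hypothetical sketch of what “defining the eval” could look like for the inbox-digest idea above. The examples, labels, and names are invented; the point is that writing them down is a product judgment about what “important” means, not an engineering task.

```python
# A hypothetical eval for the inbox-digest idea. The labels encode the
# product decision no one else on the team can make: which emails matter?
labeled_emails = [
    {"subject": "Your flight tomorrow was canceled", "important": True},
    {"subject": "Weekly newsletter: 10 productivity hacks", "important": False},
    {"subject": "Contract needs your signature by Friday", "important": True},
    {"subject": "50% off everything this weekend only", "important": False},
]

def accuracy(classify) -> float:
    """Score any candidate classifier (a prompted model, a fine-tune,
    or a plain heuristic) against the labeled examples."""
    hits = sum(
        classify(email["subject"]) == email["important"]
        for email in labeled_emails
    )
    return hits / len(labeled_emails)

# Usage: any function mapping a subject line to True/False can compete.
print(accuracy(lambda subject: "signature" in subject.lower()))
```

Once something like this exists, researchers and engineers have a target to optimize against; until it does, they’re guessing.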
Since no one on the team is equipped to answer these critical questions, the team gets stuck, spins its wheels, and eventually one of a few things happens:
Instead of shipping something great, the team retreats to the obvious tiny step: let’s throw a sparkle icon somewhere that writes you a draft! Then no one uses it, and they spend months trying to make users care.
Instead of figuring out what problem is worth solving, the team semi-randomly picks a really hard technical problem and starts hill climbing, only to find out 6 months later that the problem doesn’t matter.
Instead of building anything at all, they just argue in circles about which abstract approach will work. With no way to evaluate these theories, the arguments continue until everyone flames out.
I think this is why new products have been so slow to emerge.
We know how to explore and nail down UI issues, technical issues, and gaps in user understanding, but we have no way to nail down the part that matters most in LLM-based products: what the hell do we want the model to do?
If you can’t quickly explore that question and eventually answer it clearly, it’s going to be really hard to find great applications for these models.
Enter the AI designer
I think the solution to the problem we’re observing is better design — in the Steve Jobs sense of the word:
“Most people make the mistake of thinking design is what it looks like. People think it's this veneer — that the designers are handed this box and told, 'Make it look good!' That's not what we think design is. It's not just what it looks like and feels like. Design is how it works.”
“What the hell do we want the model to do?” is the ultimate how-it-works question.
To answer it, you need user empathy and taste and the ability to imagine experiences that have never existed before — all things the tradition of design brings in spades.
However, this new person will need skills that are rare on most design teams:
Just like designers have to know their users, this new person needs to know the new alien they’re partnering with. That means they need to be just as obsessed with hanging out with models as they are with talking to users.
The only way to really understand how we want the model to behave in our application is to build a bunch of prototypes that demonstrate different model behaviors (see the sketch after this list). This — and the need for good intuition about what’s possible — means this person needs enough technical fluency to look kind of like an engineer.
Each behavior you’re trying to design has near-limitless possibility that you have to wrangle into a single, shippable product, and there’s little to no prior art to draft off of. That means this person needs experience facing the kind of “blank page” existential ambiguity that founders encounter.
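As an illustration, prototyping different behaviors can be as lightweight as rendering the same inbox through competing system prompts and comparing the results by feel. This rough sketch assumes the OpenAI Python SDK (pip install openai) and an API key in the environment; the model name, prompts, and input file are all stand-ins, not a recommendation.

```python
# A rough sketch of behavior prototyping: the same inbox, rendered through
# two candidate "personalities" for the digest. Everything named here is
# a stand-in for whatever model and prompts you're actually exploring.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BEHAVIORS = {
    "terse_triage": (
        "Summarize this inbox as three bullets: what needs action today, "
        "what can wait, and what to ignore."
    ),
    "narrative_brief": (
        "Write a two-paragraph morning briefing that reads like a chief "
        "of staff walking me through this inbox."
    ),
}

def digest(system_prompt: str, inbox_text: str) -> str:
    """Render the inbox through one candidate behavior."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in: use whatever model you have
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": inbox_text},
        ],
    )
    return response.choices[0].message.content

# Render the same inbox through every candidate and compare them by feel.
sample_inbox = open("sample_inbox.txt").read()  # any raw inbox dump
for name, prompt in BEHAVIORS.items():
    print(f"--- {name} ---\n{digest(prompt, sample_inbox)}\n")
```

The code is trivial on purpose. The design work is in reading the outputs side by side and deciding which one you’d actually want delivered every morning, then iterating until one feels right.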
With this new person and these new skills, design teams will be able to deeply shape the product — and improve users’ lives — in ways they never could before.
The behavior of algorithms has long been the domain of data scientists and engineers, a place where A/B testing, not taste and vision, reigns supreme. LLMs don’t just give us the opportunity to create more powerful algorithms; they give design the opportunity to finally mold that critical part of the software experience. In doing so, we’ll unlock whole new types of products. We’ll finally see this technology live up to its product potential.