Aug 15, 2024

We're in the "Discord bot" era of design

Creator of Dive

When it comes to designing with AI, I see two big issues and two opportunities 👇

Issue #1: We’re pointing AI at the wrong problem

How often do you stare at Figma’s #F5F5F5 canvas, unable to think of where to start?

Maybe it’s just me, but I hardly ever feel that way…

If I’m in my design tool it’s because something is in my brain (even if it’s just a simple sketch).

That’s why I don’t buy the so-called “blank canvas problem” as a real pain point for professional designers.

Pointing AI at this “problem” is really a way to expand the user base by lowering the bar for non-designers to participate.

When it comes to generative UI, we’re no longer the ICP (ideal customer profile).

Here’s how I’d like to see that change 👇

Opportunity #1: Focus on iteration

Rather than helping me go from 0 to 1…

I want AI to help me go from 1 to 100.

Professional designers have no shortage of ideas.

What separates the great designers, though, is their obsession with iterating past “good enough”.

It’s why I dedicate a whole lesson in Figma Academy to my workflow for exploring concepts as efficiently as possible. Because in order to reach “informed simplicity” you have to try a lot of ideas.

Today’s approach to generative UI makes the initial design easier. But I want a tool that magnifies my initial creative spark… something that allows me to explore far more concepts than I possibly could on my own.

How might we use AI to achieve a higher level of refinement and elegance in our work?
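
To make the “1 to 100” idea concrete, here’s a minimal sketch of what a fan-out loop could look like: one finished frame goes in, and a handful of labeled iteration directions come back to seed further exploration. This assumes the OpenAI Node SDK and a multimodal model (gpt-4o); the tool itself is imaginary, and the directions list is just an example.

```ts
// Hypothetical "1 → 100" fan-out: one finished design goes in, several
// labeled iteration directions come back. Assumes the OpenAI Node SDK
// (npm i openai); the directions and prompt are illustrative only.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Axes a designer might push along to get past "good enough"
const directions = [
  "denser, more editorial layout",
  "double the whitespace",
  "playful display typography",
  "stricter grid, brutalist accents",
];

async function fanOut(screenshotPath: string) {
  const dataUrl = `data:image/png;base64,${readFileSync(screenshotPath).toString("base64")}`;

  // One request per direction, fired in parallel
  return Promise.all(
    directions.map(async (direction) => {
      const res = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [
          {
            role: "user",
            content: [
              {
                type: "text",
                text:
                  `Here is a finished UI design. Propose one concrete variation ` +
                  `in this direction: "${direction}". Be specific about spacing, ` +
                  `type, color, and hierarchy changes.`,
              },
              { type: "image_url", image_url: { url: dataUrl } },
            ],
          },
        ],
      });
      return { direction, idea: res.choices[0].message.content };
    }),
  );
}
```

Text comes back rather than pixels, but the shape of the workflow is the point: the designer supplies the spark (the screenshot) and the machine multiplies it.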

Issue #2: Language is an awkward medium for visual ideas

When you focus on the 0→1 experience, the input has to be natural language.

But this is a pretty terrible way to design 😬

Generating high-level designs

When you describe full-page layouts or flows in prose, you skip over a lot of the creative problem-solving and UX thinking.

The result is safe, cookie-cutter UI.

And again… this might satisfy non-designers. But I don’t see it meaningfully impacting the workflow of professional designers because it caps what we can uniquely bring to the table.

Generating low-level designs

On the flip side, if you focus on generating more granular UI elements instead…

  1. Ideas become increasingly visual

  2. Natural language becomes increasingly cumbersome

Take this piece of UI from Daryl Ginn for example.

I want you to spend the next 30 seconds thinking in specifics about how you would articulate this design (even just the static version).

If you ignore the confetti, this CTA composition only has four elements.

I could’ve easily cherry-picked a more challenging example, but hopefully this still demonstrates how frustrating it can be to communicate more nuanced, visual ideas.
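
To make the friction concrete, here’s what a prompt for a generic composition of that kind might look like (invented for illustration; this is not Daryl’s actual design):

```ts
// A made-up prompt for a *generic* success-state CTA, just to show how
// fast natural language balloons, even at four elements.
const ctaPrompt = `
A centered card, 24px radius, soft diffuse shadow. Inside, stacked
vertically: a circular icon badge with a subtle radial gradient; a bold
one-line headline, tightly tracked; a muted one-line subhead; and a
full-width primary button with a faint inner highlight. On hover the
button lifts slightly and the shadow deepens. Vertical spacing roughly
20 / 8 / 24px...
`;
```

Four elements in, and the description is already long, ambiguous, and still missing the exact values a designer actually cares about.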

That’s why I’m interested in a different type of AI input 👇

Opportunity #2: Screenshots as inputs

What if I could feed an AI model screenshots from my taste library?

Now imagine if the model generated a sandbox of design elements based on these screenshots (textures, gradients, components, typography, shadows, etc.).

I’m not talking about a formal brand guide… picture more of a messy, interactive mood board.
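
Here’s a minimal sketch of what that screenshot-to-sandbox step could look like under the hood, again assuming the OpenAI Node SDK and gpt-4o. The TokenSandbox shape is my invention for illustration; a real tool would generate actual assets to play with, not just JSON.

```ts
// Hypothetical screenshot → "messy mood board" extraction. The
// TokenSandbox schema is invented; only the SDK calls are real.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

interface TokenSandbox {
  colors: string[];      // hex values seen in the screenshots
  gradients: string[];   // CSS gradient strings
  typography: { family: string; weight: number; role: string }[];
  shadows: string[];     // CSS box-shadow strings
  textures: string[];    // plain-language descriptions to riff on
}

const client = new OpenAI();

async function buildSandbox(paths: string[]): Promise<TokenSandbox> {
  const images = paths.map((p) => ({
    type: "image_url" as const,
    image_url: { url: `data:image/png;base64,${readFileSync(p).toString("base64")}` },
  }));

  const res = await client.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" }, // ask for parseable JSON
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text:
              "Extract a reusable visual language from these screenshots as " +
              "JSON with keys: colors, gradients, typography, shadows, textures.",
          },
          ...images,
        ],
      },
    ],
  });

  return JSON.parse(res.choices[0].message.content ?? "{}") as TokenSandbox;
}
```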

That is a 0→1 workflow that I can get excited about.

Because all of a sudden it would be 10x easier to create stunning visual languages. All you’d need is the right input.

Maybe you want cards in the style of Perry Wang’s portfolio…

Or maybe you’re curious what it would look like to build an aesthetic by extrapolating Amo’s App Store screenshots (I know I am)…

Or what if you could generate highly detailed chrome borders in a few clicks?

Perhaps the most interesting output of all would draw inspiration outside of software. What would a medieval visual language inspired by Teenage Engineering hardware look like?

The Discord bot era

Both of the opportunities listed above share the same requirement: visual inputs.

  • We need a starting point for AI to iterate on the product design

  • We need screenshots for AI to spin up compelling visual designs

Right now we’re still in the “Discord bot era” of designing with AI… powerful models bolted onto workflows that were never built for visual work, the same way AI image generation started out as Midjourney commands typed into Discord.

I’m ready for our Visual Electric moment… the point where a purpose-built, canvas-first tool finally treats visual input as a first-class citizen.
