Solves the blank canvas dilemma with clues for how to prompt


As powerful as these models are, they suffer from the common 'blank canvas' problem: they can do so much that users don't know where to start.

Sample suggestions help a user learn what they could ask the system to do, and keep the generative conversation moving forward. Generally these appear as a list of 3-5 nudges that pre-fill the chat input when selected.
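The mechanics are simple enough to sketch. The snippet below (all names are hypothetical, not from any product) models the key interaction detail: selecting a suggestion pre-fills the draft rather than sending immediately, so the user can edit before submitting.

```typescript
// Illustrative model of the icebreaker pre-fill pattern.
// Names are hypothetical; a real product wires this into its own UI state.

interface ChatState {
  draft: string;         // current contents of the chat input
  suggestions: string[]; // the 3-5 icebreakers shown above the input
}

// Selecting a suggestion pre-fills the input instead of sending it,
// leaving the user free to edit the prompt before submitting.
function selectSuggestion(state: ChatState, index: number): ChatState {
  const choice = state.suggestions[index];
  if (choice === undefined) return state; // ignore out-of-range clicks
  return { ...state, draft: choice };
}

const state: ChatState = {
  draft: "",
  suggestions: [
    "Help me study vocabulary for an exam",
    "Recommend a dish to bring to a potluck",
  ],
};

const next = selectSuggestion(state, 1);
// next.draft now holds the second suggestion, ready to edit or send
```

The pre-fill-don't-send choice is the whole pattern: it teaches by example while keeping the user in control of what is actually submitted.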

This pattern is very similar to templates and nudges, two other wayfinding devices. The difference is that sample suggestions are generally used in the open chat request type, while templates and nudges are more closely associated with writing and creative apps or more structured database tools.

Yet despite this being one of the most common patterns seen in the wild, I hesitate to list its status as "set." The affordance itself seems likely to stick around, but we should expect (at least, I hope we can expect) changes to how the suggestions are formed and their relevance to the user.

There is a natural cliff to the value of this pattern as it's currently implemented across most products. An internet joke is already forming around AI companies' insistence that their chatbot can plan your vacation.

I just started a new ChatGPT conversation. Despite months of use on the premium tier, the four options it offers are completely irrelevant to my interests and my work:

four suggested options in the ChatGPT interface: (1) Help me study - vocabulary for a college entrance exam (2) Create a content calendar - for a TikTok account (3) Suggest some codenames - for a project introducing flexible work arrangement (4) Recommend a dish - to bring to a potluck
Four suggestions from ChatGPT, none of which have any relevance to me despite months of use

Until these products introduce personalization and smart routes based on the user's existing behavior, or learn from the user's existing data in their systems, the pattern has diminishing returns.

That said, consider this pattern table stakes if you are designing a generative interface. Even if their usefulness has a cliff, suggestions get users interacting with the experience, which gives you the start of the data you need to improve it.

Lesser pattern

It's worth noting a related, emerging pattern that may prove more useful: the prompt improver nudge. It appears in writing tools like Jasper, which also includes the standard icebreakers. If the first cliff a user faces is figuring out what they CAN ask the model, a fast follow is the dilemma of learning HOW to ask these questions effectively.

Prompt improvers take a simple prompt written by a user (e.g. "what should I pack for my trip to Paris") and return a stronger prompt that can be submitted directly from the input box.
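Products like Jasper presumably call a model for the rewriting step; the rule-based stand-in below is only a sketch of the interaction shape — take the user's terse prompt, hand back a stronger draft they can edit or submit. The function name and rewrite rules are illustrative assumptions, not any product's actual logic.

```typescript
// Rule-based stand-in for a prompt improver. Real products likely use a
// model here; this sketch only shows the in/out contract of the pattern.
function improvePrompt(raw: string): string {
  const trimmed = raw.trim().replace(/\?+$/, ""); // normalize the user's draft
  // Append structure the terse prompt was missing (illustrative rules only).
  return [
    `${trimmed}.`,
    "Assume I want a practical, itemized answer.",
    "Ask me one clarifying question if key details are missing.",
  ].join(" ");
}

const improved = improvePrompt("what should I pack for my trip to Paris?");
// The improved prompt is offered back in the input box, not sent automatically.
```

As with icebreakers, the crucial design choice is that the improved prompt lands back in the input for review, so the nudge teaches prompting rather than hiding it.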

As products improve onboarding and learn users' preferences, perhaps we'll see icebreakers replaced with these types of learning affordances instead.



Irrelevant suggestions
The first interaction can get away with feeling random. After that, users will expect suggestions to learn their preferences, or they will ignore these elements completely. Once someone has seen useless suggestions a few times in a row, it cheapens the entire experience. Consider combining them with the ability for users to set and see their preferences, or surface fingerprints of the conversations from which you are drawing the suggestions.
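One lightweight way to avoid the random-feeling list is to rank candidate icebreakers against topics drawn from the user's recent conversations. The sketch below is a hypothetical scoring approach (simple word overlap), not how any product actually personalizes:

```typescript
// Hypothetical sketch: rank candidate icebreakers by overlap with topics
// extracted from the user's recent conversations, instead of a static list.
function scoreSuggestion(suggestion: string, recentTopics: string[]): number {
  const words = new Set(suggestion.toLowerCase().split(/\W+/).filter(Boolean));
  return recentTopics.filter((t) => words.has(t.toLowerCase())).length;
}

function personalize(candidates: string[], recentTopics: string[], n = 4): string[] {
  return [...candidates]
    .sort((a, b) => scoreSuggestion(b, recentTopics) - scoreSuggestion(a, recentTopics))
    .slice(0, n); // keep the usual 3-5 slots, now ordered by relevance
}

const personalized = personalize(
  [
    "Plan a vacation to Paris",
    "Review my React component",
    "Critique this design system audit",
  ],
  ["react", "design"], // topics from the user's recent conversations
  2,
);
```

A production system would use embeddings or the model itself rather than word overlap, but even a crude relevance score beats showing a TikTok content calendar to someone who has never mentioned TikTok.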

Jasper borrows from the OG icebreaker interface made popular by OpenAI to get people started
Perplexity offers suggestions in a similar way, but the tone is much more aligned to the purpose of the app (fueling curiosity, not just efficiency)
Sample prompts help you create your first bot
Icebreakers can also be found in the context of other parts of the interface, introducing AI at a time when it may make more sense
Writing tools like Notion combine the suggested actions for editing and remixing alongside icebreaker prompts (towards the bottom)
Adobe puts them in the context of demonstrating the product's capabilities
Musho's prompts are available from the image generation screen. They are very simple, perhaps with the goal of just getting people started. A template could help here
Umso, a website builder, also uses simple prompts on the first screen. It will take a very advanced prompt and model to get your perfect website, so it seems these tools are focusing on letting the user peek behind the curtain
Figjam combines an icebreaker with nudges in their canvas generation template
Github puts its suggestions in the code editor itself. They operate as nudges, but tell the user what they can say to Copilot directly
Interestingly, the best icebreakers that image and web generators have created exist outside the product: users can see the prompts that other people used to generate their sites
Midjourney also shows users the prompts used to create public images. The barrier to entry is dramatically decreased when a user can copy-paste a few sentences and create a picture that makes them feel like a magician
Figjam follows Github's lead and puts suggestions into the canvas itself. Miro, on the other hand, relies on the user to know what to say to its bot
Google waits until a user has made their first search query before they see suggestions for how to keep searching
That said, Google now appears to make suggestions below the search bar on the home screen

What examples have you seen?

Share links or reach out with your thoughts.
