Suggestions
Solves the blank canvas dilemma with clues for how to prompt
Overview:
For as powerful as these models are, they suffer from the common 'blank canvas' problem. They can do so much, but users don't know where to start.
Sample suggestions help a user learn what they could ask the system to do, and keep the generative conversation moving forward. Generally these appear as a list of 3-5 nudges that pre-fill the chat input when selected.
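As a minimal sketch of the interaction, selecting a chip pre-fills the input rather than submitting it, keeping the user in control. The names below (`Suggestion`, `ChatInputState`) are illustrative, not drawn from any specific product:

```typescript
// A suggestion chip pairs a short visible label with the full
// prompt it places in the input when selected.
interface Suggestion {
  label: string;   // short text shown on the chip
  prompt: string;  // full prompt placed in the chat input
}

interface ChatInputState {
  value: string;     // current contents of the input box
  submitted: boolean; // whether the prompt has been sent
}

// Example icebreakers, typically 3-5 per screen.
const suggestions: Suggestion[] = [
  { label: "Plan a trip", prompt: "Plan a 3-day itinerary for Paris." },
  { label: "Summarize", prompt: "Summarize this article in three bullet points." },
  { label: "Brainstorm", prompt: "Brainstorm five names for a coffee shop." },
];

// Selecting a chip pre-fills the input but does NOT submit,
// so the user can edit the prompt before sending it.
function selectSuggestion(s: Suggestion): ChatInputState {
  return { value: s.prompt, submitted: false };
}
```

The key design choice is the `submitted: false` step: firing the request immediately would skip the teaching moment, while pre-filling lets the user read and adapt the prompt they are about to send.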
This pattern is closely related to templates and nudges, two other wayfinding devices. The difference is that suggestions generally appear in open-ended chat, while templates and nudges are more closely associated with writing and creative apps or with more structured database tools.
However, despite this now being one of the most common patterns seen in the wild, I hesitate to list its status as "set." The affordance itself seems likely to stick around, but we should expect (at least, I hope we can expect) changes to how the suggestions are formed and how relevant they are to the user.
There is a natural cliff to the value of this pattern as it's currently implemented across most products. An internet joke is already forming around AI companies' insistence on having their chatbots plan your vacation.
I just started a new ChatGPT conversation. Despite months of use at the premium tier, the four options it provides to me are completely irrelevant to my interests and my work:
Until these products introduce personalization and smart routing based on the user's existing behavior, or learn from the user's existing data in their systems, the pattern delivers diminishing returns.
That said, consider this pattern table stakes if you are designing a generative interface. For now, even with that cliff of usefulness, suggestions get users interacting with the experience, which generates the first data you need to improve it.
Lesser pattern
It's worth noting a related pattern emerging that may prove to be more useful: the prompt improver nudge. This is present in writing tools like Jasper, which also includes the standard icebreakers. If the first cliff a user faces is figuring out what they CAN ask the model, a fast follow is the dilemma of learning HOW to ask these questions effectively.
Prompt improvers take a simple prompt written by a user (e.g. "What should I pack for my trip to Paris?") and return a stronger prompt that can be submitted directly from the input box.
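The interaction shape can be sketched as below. In a real product like Jasper the draft is sent to a model for rewriting; a simple template stands in here as an assumption, and all names are illustrative:

```typescript
// Hypothetical prompt-improver: expands a short draft into a
// stronger prompt by adding the dimensions novice prompts
// usually lack -- context, constraints, and output format.
function improvePrompt(draft: string): string {
  return [
    `Task: ${draft}`,
    "Before answering, state any assumptions you make about my context or constraints.",
    "Format: respond with a short, structured answer using headings or bullets.",
  ].join("\n");
}

// Like a suggestion chip, the improved prompt is placed back in
// the input box for the user to review and send -- teaching HOW
// to ask, not just WHAT to ask.
const improved = improvePrompt("What should I pack for my trip to Paris?");
```

Placing the result back in the input (rather than running it silently) is what makes this a learning affordance: the user sees, side by side, how a stronger prompt is structured.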
As products improve onboarding and learn users' preferences, perhaps we'll see icebreakers replaced with these types of learning affordances instead.
Benefits:
Anti-patterns:
Irrelevant suggestions
The first interaction can get away with feeling random. After that, users will expect suggestions to reflect their preferences, or they will ignore these elements completely. Once someone has seen useless suggestions a few times in a row, it cheapens the entire experience. Consider combining suggestions with the ability for users to set and see their preferences, or surface which prior conversations the suggestions are drawn from.