Filters / Parameters

Let users define constraints to improve the quality of their results


The cliff of adoption for generative AI is steep. Once users generally understand how to shape a request to the model, they begin to wonder how to get better results.

Filters and parameters give users that control, while helping to teach the user more advanced prompting methods through progressive disclosure.

  • Parameters are generally included ahead of time. For example, Midjourney allows users to include negative tokens through the --no parameter, which reduces the sample set for the generated image to exclude images matching those tokens
  • Filters operate as parameters in the background, but tend to take the form of more familiar UI elements, such as Perplexity's option to focus results on specific media, e.g. summaries of academic articles vs. videos to watch.
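As a rough sketch of how a negative-token parameter might be split out of a raw prompt before it reaches the model (the `--no` flag follows Midjourney's syntax, but the parsing logic here is an illustrative assumption, not their actual implementation):

```python
import re

def parse_prompt(raw: str) -> dict:
    """Split a raw prompt into its subject text and negative tokens.

    Assumes a Midjourney-style `--no` flag; the splitting logic is
    illustrative, not the real parser.
    """
    negatives = re.findall(r"--no\s+(\w+)", raw)
    subject = re.sub(r"--no\s+\w+", "", raw).strip()
    return {"subject": subject, "exclude": negatives}

parse_prompt("panda bear in a city --no china")
# {'subject': 'panda bear in a city', 'exclude': ['china']}
```

A filter UI could write into the same `exclude` list that power users fill by hand, which is what lets the two patterns share one backend.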

I've listed this pattern as emerging despite its reliance on familiar paradigms. How it will shift the way users interact with and command models remains to be seen.

On one hand, this opens up more agentive power for users: the ability to give specific, predictable commands to the model and improve the consistency of results. For example, if the model were built to understand the parameters --source [[journal name]] or --cc [[by-sa]], a user could explicitly query information aggregated from a single academic journal, or information shared under a specific type of Creative Commons license, respectively.
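A minimal sketch of how such parameters could translate into structured retrieval constraints. The `--source` and `--cc` flags are the hypothetical ones named above; the parser and flag set are assumptions, not any model's real API:

```python
import re

# Hypothetical flags from the example above; nothing here is a real API.
KNOWN_FLAGS = {"source", "cc"}

def extract_constraints(prompt: str) -> tuple[str, dict]:
    """Pull `--flag [[value]]` pairs out of a prompt string,
    returning the cleaned query and a dict of constraints."""
    constraints = {}
    for flag, value in re.findall(r"--(\w+)\s+\[\[([^\]]+)\]\]", prompt):
        if flag in KNOWN_FLAGS:
            constraints[flag] = value
    cleaned = re.sub(r"--\w+\s+\[\[[^\]]+\]\]", "", prompt).strip()
    return cleaned, constraints

query, constraints = extract_constraints(
    "summarize recent CRISPR findings --source [[Nature]] --cc [[by-sa]]"
)
# constraints == {'source': 'Nature', 'cc': 'by-sa'}
```

The point of the structured form is that a retrieval layer could enforce the constraints deterministically, rather than hoping the model honors them as prose.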

Commercial considerations

The commercial implications of this are massive, and it's something I don't see many companies exploring yet. Could authors assign metadata to their work that allows it to be licensed to models that follow specific commercial terms, dictated by the parameters they set?

Could this information be traced through the fingerprints of the sources aggregated in existing LLMs?

Token intelligence

I also find this pattern fascinating for its ability to help us better understand what is happening under the hood of models without being able to see the tubes and data themselves.

For example, we know that tokens carry inherent bias due to their statistical relationships to other tokens. If I prompt ChatGPT and include the spoken parameter [[Take a deep breath]] within my request, there is evidence to suggest the model will return a better result. The reasons for this may be unclear so far, but by playing with different parameters, we can isolate parts of the system to study that aren't visible on the surface.

A more direct example of this can be seen in image generators. Combining the tokens ::Panda bear:: and ::City:: will return a bear in an urban environment surrounded by red pagodas and other symbols of Chinese architecture and culture. The tokens ::Bear:: and ::City:: most likely return a bear in a more Western-appearing urban environment. Adding the parameter --no [[china]] to the first result brings it closer to the second.

While this is obviously beneficial for people using prompting to generate specific results, it also represents a great tool for teaching the ways these models propagate bias into their results. Students, social workers, and so on can benefit from seeing this in action so they can understand the nature of inherent bias in predictive models.

I'm bullish on the ways designers can use parameters to improve the ethics and the results of the models they are designing, and the interfaces they are designing to interact with those models.

Putting it into use…

Filters are a familiar pattern that help users get the results they expect. Rather than relying on users to know how to use parameters, and which to lean on, consider letting users build their prompt from a template with specific parameters, similar to Hypotenuse.

If parameters are injected into a user's prompt, surface them alongside the results so the user understands that they were used, how they were used, and how to use them in the future.
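One way to make injected parameters visible is to return them with the result rather than hiding them. The structure below is a sketch under assumed names (`tone`, `length`, and the `PromptResult` shape are all hypothetical), not any product's API:

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    """Pairs a model response with the parameters that shaped it,
    so the UI can display them alongside the output."""
    user_prompt: str
    injected_params: dict
    response: str

    def disclosure(self) -> str:
        # A human-readable line the interface can show under the result.
        params = " ".join(f"--{k} {v}" for k, v in self.injected_params.items())
        return f"Generated with: {params}" if params else "No parameters applied"

result = PromptResult(
    user_prompt="write a product description",
    injected_params={"tone": "friendly", "length": "short"},  # assumed names
    response="...",
)
result.disclosure()  # "Generated with: --tone friendly --length short"
```

Showing the disclosure string in `--flag value` form doubles as progressive disclosure: it teaches users the syntax they could type themselves next time.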



Blank canvas anxiety
Parameters, though helpful, are still a complicated feature of LLMs. Consider using Wayfinders to help people discover parameters to apply, or include them upfront as filters such as in the Perplexity interface. Don't rely on users understanding how this advanced feature works.

Users can add parameters including negative tokens directly into Midjourney's chat interface
Hypotenuse makes it easier to manage common parameters, without the broad flexibility of Midjourney. Presumably their audience is less technical.
Jasper allows users to specify parameters as inputs into their writing
Jasper makes parameters easy to find from the editor as well, so users can specify them before remixing the result
Some writing tools like Jasper give users the option to bias for speed or quality. This would be an interesting pattern to see explored by UI generators, given the different needs for wireframes vs. glossy comps
ReWord lets users add training documents to each prompt as well
Parameters are extended to AutoFill prompts in Coda's interface

What examples have you seen?

Share links or reach out with your thoughts.
