The prompt-serve schema
A schema to help you manage large language model prompts and associated settings
A Very Short History
Not long after I started to use and develop with Large Language Models (LLMs), I noticed a problem. I was creating or finding all of these useful prompts, but I had no standard approach to storing them.
I first started compiling them in local notes, which quickly got out of hand, so I moved to a Notion database where I could use tags and filter on different fields. The Notion database worked well, but I still couldn’t really interact with the prompts in the way I wanted. I spent a lot of time copying and pasting prompts from Notion into my terminal, browser, or IDE.
There’s no shortage of prompt engineering tools and IDEs, but most of them focus on creating the prompt itself or on proxying your prompts to an LLM for inspection. They do touch on some of what I’m looking for, though, with the ability to store settings alongside prompts.
Specific prompts work best with certain models or settings: temperature, penalties, maximum tokens, and so on. And once you start running LLMs locally, different base models expect their prompts in a specific format.
For example, the LLaMa/Alpaca models use the base prompt:
### Instruction:
Write a haiku about the moons of Jupiter
### Response:
While OpenAssistant uses:
<|prompter|> Write a haiku about the moons of Jupiter
<|assistant|>
Introducing the prompt-serve schema
To help manage LLM prompts in a sane manner, I’ve created a project I’m tentatively calling “prompt-serve”. prompt-serve is a new YAML schema for storing prompts and their associated metadata and settings.
You can check out the project over at the GitHub repository.
Right now, the schema is available for anyone to use, along with a collection of prompts I’ve collected for different base models and tasks. The plan is to add a small API server for uploading/validating new prompts and pushing them to a Git repository, and retrieving stored prompts for use in other applications or directly in code.
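To give a feel for the validation step, here’s a minimal sketch in Python of what checking a prompt entry for required fields might look like. The field names and rules here are my own assumptions for illustration, not the actual server’s logic:

```python
import uuid

# Hypothetical required fields; the real schema may differ.
REQUIRED_FIELDS = {"title", "uuid", "prompt"}

def validate_prompt(entry: dict) -> list[str]:
    """Return a list of validation errors (an empty list means the entry passes)."""
    errors = [f"missing field: {field}" for field in sorted(REQUIRED_FIELDS - entry.keys())]
    # If a uuid is present, make sure it parses as a real UUID.
    if "uuid" in entry:
        try:
            uuid.UUID(entry["uuid"])
        except (ValueError, TypeError):
            errors.append("uuid is not a valid UUID")
    return errors

# Example: an entry parsed from a YAML file (shown inline here to stay self-contained).
entry = {
    "title": "alpaca-haiku-example",
    "uuid": "0b51c0a4-6c9d-4b4a-9d3e-2f1a8c7e5b60",
    "prompt": "### Instruction:\nWrite a haiku about the moons of Jupiter\n### Response:",
}

print(validate_prompt(entry))  # an empty list means the entry is valid
```

In a real server you would load the entry from YAML and layer on more checks (types, model settings ranges), but the shape of the logic would be the same.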
I’ll continue to add new prompts there as well, but those will likely become their own repository in the near future.
Using the Alpaca prompt from above as an example, here’s how a prompt is stored in the prompt-serve schema.
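As a rough sketch (field names and values here are illustrative, not necessarily the exact schema), an entry might look something like this:

```yaml
title: alpaca-haiku-example
uuid: 0b51c0a4-6c9d-4b4a-9d3e-2f1a8c7e5b60
description: Write a haiku about the moons of Jupiter
category: creative-writing
model: alpaca-7b
model_settings:
  temperature: 0.7
  max_tokens: 256
prompt: |
  ### Instruction:
  Write a haiku about the moons of Jupiter
  ### Response:
associations: []
packs: []
tags:
  - haiku
  - example
```

The YAML block scalar (`|`) keeps the prompt’s line breaks intact, which matters for formats like Alpaca’s that are whitespace-sensitive.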
In addition to the standard descriptive fields, prompt, and model settings, the schema also includes two fields I hope will be as helpful to other people as I’m finding them:
associations
Associate prompts to one another to represent chains or workflows with a list of UUIDs for other prompts.
packs
Bundle multiple prompts together under one or more UUIDs. For example, if you had several prompts for different summarization tasks, you could create a new UUID for that category and apply it to each prompt.
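For instance (the UUIDs and field names below are made up for illustration), two summarization prompts could share a pack UUID like this:

```yaml
# prompt one
title: summarize-article
packs:
  - 9f8e7d6c-1a2b-4c3d-8e9f-0a1b2c3d4e5f   # "summarization" pack

# prompt two
title: summarize-meeting-notes
packs:
  - 9f8e7d6c-1a2b-4c3d-8e9f-0a1b2c3d4e5f   # same pack UUID
```

Retrieving every prompt in the pack then becomes a simple filter on that UUID.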
This project is still in the early stages, but I wanted to share the schema sooner rather than later in case anyone finds it useful or wants to contribute. After I finish the API server, I will probably add a web GUI on top for creating prompts and managing the back-end repo.
Check out the GitHub repository and feel free to open a pull request or issue directly on the repo. Any feedback would be great! 🙂