Learn how to create and define AI capabilities using structured prompts and typed arguments with Axiom.
The Create stage is about defining a new AI capability as a structured, versionable asset in your codebase. The goal is to move away from scattered, hard-coded prompt strings and toward a more disciplined, organized approach to prompt engineering.
## Defining a capability as a prompt object
In Rudder, every capability is represented by a `Prompt` object. This object serves as the single source of truth for the capability’s logic, including its messages, metadata, and the schema for its arguments.
For now, these `Prompt` objects can be defined and managed as TypeScript files within your own project repository.
A typical `Prompt` object looks like this:
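The sketch below illustrates the shape described above; the metadata fields (`name`, `slug`, `version`), the message format, and the `{{ }}` templating syntax are assumptions, not a definitive schema:

```typescript
import { Type, type Prompt } from '@axiomhq/ai';

// A sketch of a Prompt asset. The metadata fields (name, slug, version)
// and the message shape are illustrative assumptions.
export const summarizeTicketPrompt = {
  name: 'Summarize Support Ticket',
  slug: 'summarize-support-ticket',
  version: '1.0.0',
  messages: [
    {
      role: 'system',
      content: 'You are a support analyst. Summarize tickets concisely.',
    },
    {
      role: 'user',
      content: 'Summarize this ticket: {{ ticket.subject }}: {{ ticket.body }}',
    },
  ],
  // The arguments schema is explained in the next section.
  arguments: {
    ticket: Type.Object({
      subject: Type.String(),
      body: Type.String(),
    }),
  },
} satisfies Prompt;
```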
## Strongly-typed arguments with `Template`
To ensure that prompts are used correctly, the `@axiomhq/ai` package includes a `Template` type system (exported as `Type`) for defining the schema of a prompt’s arguments. This provides type safety, autocompletion, and a clear, self-documenting definition of what data the prompt expects.
The `arguments` object uses `Template` helpers to define the shape of the context:
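As a sketch, assuming TypeBox-style helper names such as `Type.Object`, `Type.String`, and `Type.Optional` (the exact helper set is an assumption):

```typescript
import { Type } from '@axiomhq/ai';

// A sketch of an arguments schema. The specific helper names shown
// here are illustrative assumptions.
export const ticketArguments = {
  ticket: Type.Object({
    subject: Type.String(),
    body: Type.String(),
    priority: Type.Optional(Type.String()),
  }),
};
```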
You can even infer the exact TypeScript type for a prompt’s context using the `InferContext` utility.
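For example, assuming `InferContext` is parameterized by the prompt’s type (and reusing the hypothetical prompt file from above):

```typescript
import type { InferContext } from '@axiomhq/ai';
import { summarizeTicketPrompt } from './prompts/summarize-ticket';

// Derive the context type from the prompt's arguments schema.
type TicketContext = InferContext<typeof summarizeTicketPrompt>;
// Roughly equivalent to:
// { ticket: { subject: string; body: string } }
```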
## Prototyping and local testing
Before using a prompt in your application, you can test it locally with the `parse` function. It takes a `Prompt` object and a `context` object and renders the templated messages so you can verify the output. This is a quick way to ensure your templating logic is correct.
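A minimal sketch, assuming `parse` accepts the prompt followed by an options object with a `context` key (the exact call signature and return shape are assumptions):

```typescript
import { parse } from '@axiomhq/ai';
import { summarizeTicketPrompt } from './prompts/summarize-ticket';

// Render the prompt's templated messages with a sample context.
const rendered = parse(summarizeTicketPrompt, {
  context: {
    ticket: {
      subject: 'Cannot log in',
      body: 'Password reset emails never arrive.',
    },
  },
});

// Inspect the rendered messages to confirm the templating is correct.
console.log(rendered.messages);
```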
## Managing prompts with Axiom
To enable more advanced workflows and collaboration, Axiom is building tools to manage your prompt assets centrally.
- Coming soon: The `axiom` CLI will allow you to `push`, `pull`, and `list` prompt versions directly from your terminal, synchronizing your local files with the Axiom platform.
- Coming soon: The SDK will include methods like `axiom.prompts.create()` and `axiom.prompts.load()` for programmatic access to your managed prompts (sketched after this list). This will be the foundation for A/B testing, version comparison, and deploying new prompts without changing your application code.
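As a purely hypothetical sketch of what that programmatic access might look like once released (the client shape, method signatures, and the `./prompts/summarize-ticket` path are all assumptions):

```typescript
import { summarizeTicketPrompt } from './prompts/summarize-ticket';

// Hypothetical sketch of the planned SDK surface; these methods are not
// available yet, and every name and shape below is an assumption.
declare const axiom: {
  prompts: {
    create(prompt: unknown): Promise<void>;
    load(slug: string, options?: { version?: string }): Promise<unknown>;
  };
};

// Push a locally defined prompt to the Axiom platform.
await axiom.prompts.create(summarizeTicketPrompt);

// Later, load a managed prompt at runtime without redeploying.
const deployed = await axiom.prompts.load('summarize-support-ticket', {
  version: '1.0.0',
});
```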
## What’s next?
Now that you’ve created and structured your capability, the next step is to measure its quality against a set of known good examples.
Learn more about this step of the Rudder workflow in the Measure docs.