
Prompt Module

openaivec.prompt

This module provides a builder for creating few‑shot prompts, which guide large language models (LLMs) by showing them examples of input/output pairs. The builder supports constructing a prompt in a structured way: setting the purpose, adding cautions, and providing examples.

from openaivec.prompt import FewShotPromptBuilder

prompt_str: str = (
    FewShotPromptBuilder()
    .purpose("some purpose")
    .caution("some caution")
    .caution("some other caution")
    .example("some input", "some output")
    .example("some other input", "some other output")
    .build()
)
print(prompt_str)
This produces an XML string like the following:
<Prompt>
    <Purpose>some purpose</Purpose>
    <Cautions>
        <Caution>some caution</Caution>
        <Caution>some other caution</Caution>
    </Cautions>
    <Examples>
        <Example>
            <Input>some input</Input>
            <Output>some output</Output>
        </Example>
        <Example>
            <Input>some other input</Input>
            <Output>some other output</Output>
        </Example>
    </Examples>
</Prompt>
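The XML layout above can be reproduced with only the standard library. The sketch below shows one way such a renderer might work; it is illustrative only, and the actual _render_prompt implementation in openaivec may differ:

```python
import xml.etree.ElementTree as ET


def render_prompt_xml(purpose, cautions, examples):
    """Render a few-shot prompt in the XML layout shown above (illustrative)."""
    root = ET.Element("Prompt")
    ET.SubElement(root, "Purpose").text = purpose

    cautions_el = ET.SubElement(root, "Cautions")
    for caution in cautions:
        ET.SubElement(cautions_el, "Caution").text = caution

    examples_el = ET.SubElement(root, "Examples")
    for input_text, output_text in examples:
        example_el = ET.SubElement(examples_el, "Example")
        ET.SubElement(example_el, "Input").text = input_text
        ET.SubElement(example_el, "Output").text = output_text

    ET.indent(root, space="    ")  # pretty-print with 4-space indent (Python 3.9+)
    return ET.tostring(root, encoding="unicode")


print(render_prompt_xml("some purpose", ["some caution"], [("some input", "some output")]))
```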

Example

Bases: BaseModel

Represents a single input/output example used in few‑shot prompts.

Attributes:

input (str): The input text that will be passed to the model.

output (str): The expected output corresponding to the given input.

Source code in src/openaivec/prompt.py
class Example(BaseModel):
    """Represents a single input/output example used in few‑shot prompts.

    Attributes:
        input (str): The input text that will be passed to the model.
        output (str): The expected output corresponding to the given input.
    """

    input: str
    output: str

FewShotPrompt

Bases: BaseModel

Represents a prompt definition used for few‑shot learning.

The data collected in this model is later rendered into XML and sent to a large‑language model as part of the system prompt.

Attributes:

purpose (str): A concise, human‑readable statement describing the goal of the prompt.

cautions (list[str]): A list of warnings, edge cases, or pitfalls that the model should be aware of when generating answers.

examples (list[Example]): Input/output pairs demonstrating the expected behaviour for a variety of scenarios.

Source code in src/openaivec/prompt.py
class FewShotPrompt(BaseModel):
    """Represents a prompt definition used for few‑shot learning.

    The data collected in this model is later rendered into XML and sent to a
    large‑language model as part of the system prompt.

    Attributes:
        purpose (str): A concise, human‑readable statement describing the goal
            of the prompt.
        cautions (list[str]): A list of warnings, edge cases, or pitfalls that
            the model should be aware of when generating answers.
        examples (list[Example]): Input/output pairs demonstrating the expected
            behaviour for a variety of scenarios.
    """

    purpose: str
    cautions: List[str]
    examples: List[Example]
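Since FewShotPrompt is a plain data container, its serialised JSON form (what build_json later emits) simply mirrors the three fields above. As a rough stand-alone sketch of that shape, using dataclasses instead of Pydantic purely for illustration:

```python
import json
from dataclasses import dataclass, field, asdict


@dataclass
class ExampleSketch:
    """Dataclass stand-in for the Pydantic Example model (illustration only)."""
    input: str
    output: str


@dataclass
class FewShotPromptSketch:
    """Dataclass stand-in for FewShotPrompt: purpose, cautions, examples."""
    purpose: str
    cautions: list[str] = field(default_factory=list)
    examples: list[ExampleSketch] = field(default_factory=list)


prompt = FewShotPromptSketch(
    purpose="some purpose",
    cautions=["some caution"],
    examples=[ExampleSketch(input="some input", output="some output")],
)
print(json.dumps(asdict(prompt), indent=2))
```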

Step

Bases: BaseModel

A single refinement iteration produced by the LLM.

Attributes:

id (int): Sequential identifier of the iteration (0 for the original, 1 for the first change, and so on).

analysis (str): Natural‑language explanation of the issue addressed in this iteration and why the change was necessary.

prompt (FewShotPrompt): The updated prompt after applying the described modification.

Source code in src/openaivec/prompt.py
class Step(BaseModel):
    """A single refinement iteration produced by the LLM.

    Attributes:
        id (int): Sequential identifier of the iteration (``0`` for the
            original, ``1`` for the first change, and so on).
        analysis (str): Natural‑language explanation of the issue addressed
            in this iteration and why the change was necessary.
        prompt (FewShotPrompt): The updated prompt after applying the
            described modification.
    """

    id: int
    analysis: str
    prompt: FewShotPrompt

FewShotPromptBuilder

Source code in src/openaivec/prompt.py
class FewShotPromptBuilder:
    _prompt: FewShotPrompt
    _steps: List[Step]

    def __init__(self):
        """Initialize an empty FewShotPromptBuilder."""
        self._prompt = FewShotPrompt(purpose="", cautions=[], examples=[])

    @classmethod
    def of(cls, prompt: FewShotPrompt) -> "FewShotPromptBuilder":
        """Create a builder pre‑populated with an existing prompt.

        Args:
            prompt (FewShotPrompt): The prompt to start from.

        Returns:
            FewShotPromptBuilder: A new builder instance.
        """
        builder = cls()
        builder._prompt = prompt
        return builder

    @classmethod
    def of_empty(cls) -> "FewShotPromptBuilder":
        """Create a builder.

        Returns:
            FewShotPromptBuilder: A new builder instance with an empty prompt.
        """
        return cls.of(FewShotPrompt(purpose="", cautions=[], examples=[]))

    def purpose(self, purpose: str) -> "FewShotPromptBuilder":
        """Set the purpose of the prompt.

        Args:
            purpose (str): A concise statement describing the prompt’s goal.

        Returns:
            FewShotPromptBuilder: The current builder instance (for chaining).
        """
        self._prompt.purpose = purpose
        return self

    def caution(self, caution: str) -> "FewShotPromptBuilder":
        """Append a cautionary note to the prompt.

        Args:
            caution (str): A caution or edge‑case description.

        Returns:
            FewShotPromptBuilder: The current builder instance.
        """
        if self._prompt.cautions is None:
            self._prompt.cautions = []
        self._prompt.cautions.append(caution)
        return self

    def example(
        self,
        input_value: str | BaseModel,
        output_value: str | BaseModel,
    ) -> "FewShotPromptBuilder":
        """Add a single input/output example.

        Args:
            input_value (str | BaseModel): Example input; if a Pydantic model is
                provided it is serialised to JSON.
            output_value (str | BaseModel): Expected output; serialised if needed.

        Returns:
            FewShotPromptBuilder: The current builder instance.
        """
        if self._prompt.examples is None:
            self._prompt.examples = []

        input_string = input_value if isinstance(input_value, str) else input_value.model_dump_json()
        output_string = output_value if isinstance(output_value, str) else output_value.model_dump_json()
        self._prompt.examples.append(Example(input=input_string, output=output_string))
        return self

    def improve(
        self,
        client: OpenAI,
        model_name: str,
        temperature: float = 0.0,
        top_p: float = 1.0,
    ) -> "FewShotPromptBuilder":
        """Iteratively refine the prompt using an LLM.

        The method calls a single LLM request that returns multiple
        editing steps and stores each step for inspection.

        Args:
            client (openai.OpenAI): Configured OpenAI client.
            model_name (str): Model identifier (e.g. ``gpt-4o-mini``).
            temperature (float, optional): Sampling temperature. Defaults to 0.0.
            top_p (float, optional): Nucleus sampling parameter. Defaults to 1.0.

        Returns:
            FewShotPromptBuilder: The current builder instance containing the refined prompt and iteration history.
        """

        response: ParsedResponse[Response] = client.responses.parse(
            model=model_name,
            instructions=_PROMPT,
            input=Request(prompt=self._prompt).model_dump_json(),
            temperature=temperature,
            top_p=top_p,
            text_format=Response,
        )

        # keep the original prompt
        self._steps = [Step(id=0, analysis="Original Prompt", prompt=self._prompt)]

        # add the histories
        for step in response.output_parsed.iterations:
            self._steps.append(step)

        # set the final prompt
        self._prompt = self._steps[-1].prompt

        return self

    def explain(self) -> "FewShotPromptBuilder":
        """Pretty‑print the diff of each improvement iteration.

        Returns:
            FewShotPromptBuilder: The current builder instance.
        """
        for previous, current in zip(self._steps, self._steps[1:]):
            print(f"=== Iteration {current.id} ===\n")
            print(f"Instruction: {current.analysis}")
            diff = difflib.unified_diff(
                _render_prompt(previous.prompt).splitlines(),
                _render_prompt(current.prompt).splitlines(),
                fromfile="before",
                tofile="after",
                lineterm="",
            )
            for line in diff:
                print(line)
        return self

    def _validate(self) -> None:
        """Validate the internal FewShotPrompt.

        Raises:
            ValueError: If required fields such as purpose or examples are
                missing.
        """
        # Validate that 'purpose' and 'examples' are not empty.
        if not self._prompt.purpose:
            raise ValueError("Purpose is required.")
        if not self._prompt.examples or len(self._prompt.examples) == 0:
            raise ValueError("At least one example is required.")

    def get_object(self) -> FewShotPrompt:
        """Return the underlying FewShotPrompt object.

        Returns:
            FewShotPrompt: The validated prompt object.
        """
        self._validate()
        return self._prompt

    def build(self) -> str:
        """Build and return the prompt as XML.

        Returns:
            str: XML representation of the prompt.
        """
        self._validate()
        return self.build_xml()

    def build_json(self, **kwargs: Any) -> str:
        """Build and return the prompt as a JSON string.

        Args:
            **kwargs: Keyword arguments forwarded to ``model_dump_json``.

        Returns:
            str: JSON representation of the prompt.
        """
        self._validate()
        return self._prompt.model_dump_json(**kwargs)

    def build_xml(self) -> str:
        """Alias for :py:meth:`build` for explicit XML generation.

        Returns:
            str: XML representation of the prompt.
        """
        self._validate()
        return _render_prompt(self._prompt)

__init__

__init__()

Initialize an empty FewShotPromptBuilder.

Source code in src/openaivec/prompt.py
def __init__(self):
    """Initialize an empty FewShotPromptBuilder."""
    self._prompt = FewShotPrompt(purpose="", cautions=[], examples=[])

of classmethod

of(prompt: FewShotPrompt) -> FewShotPromptBuilder

Create a builder pre‑populated with an existing prompt.

Parameters:

prompt (FewShotPrompt): The prompt to start from. Required.

Returns:

FewShotPromptBuilder: A new builder instance.

Source code in src/openaivec/prompt.py
@classmethod
def of(cls, prompt: FewShotPrompt) -> "FewShotPromptBuilder":
    """Create a builder pre‑populated with an existing prompt.

    Args:
        prompt (FewShotPrompt): The prompt to start from.

    Returns:
        FewShotPromptBuilder: A new builder instance.
    """
    builder = cls()
    builder._prompt = prompt
    return builder

of_empty classmethod

of_empty() -> FewShotPromptBuilder

Create a builder.

Returns:

FewShotPromptBuilder: A new builder instance with an empty prompt.

Source code in src/openaivec/prompt.py
@classmethod
def of_empty(cls) -> "FewShotPromptBuilder":
    """Create a builder.

    Returns:
        FewShotPromptBuilder: A new builder instance with an empty prompt.
    """
    return cls.of(FewShotPrompt(purpose="", cautions=[], examples=[]))

purpose

purpose(purpose: str) -> FewShotPromptBuilder

Set the purpose of the prompt.

Parameters:

purpose (str): A concise statement describing the prompt’s goal. Required.

Returns:

FewShotPromptBuilder: The current builder instance (for chaining).

Source code in src/openaivec/prompt.py
def purpose(self, purpose: str) -> "FewShotPromptBuilder":
    """Set the purpose of the prompt.

    Args:
        purpose (str): A concise statement describing the prompt’s goal.

    Returns:
        FewShotPromptBuilder: The current builder instance (for chaining).
    """
    self._prompt.purpose = purpose
    return self

caution

caution(caution: str) -> FewShotPromptBuilder

Append a cautionary note to the prompt.

Parameters:

caution (str): A caution or edge‑case description. Required.

Returns:

FewShotPromptBuilder: The current builder instance.

Source code in src/openaivec/prompt.py
def caution(self, caution: str) -> "FewShotPromptBuilder":
    """Append a cautionary note to the prompt.

    Args:
        caution (str): A caution or edge‑case description.

    Returns:
        FewShotPromptBuilder: The current builder instance.
    """
    if self._prompt.cautions is None:
        self._prompt.cautions = []
    self._prompt.cautions.append(caution)
    return self

example

example(
    input_value: str | BaseModel,
    output_value: str | BaseModel,
) -> FewShotPromptBuilder

Add a single input/output example.

Parameters:

input_value (str | BaseModel): Example input; if a Pydantic model is provided it is serialised to JSON. Required.

output_value (str | BaseModel): Expected output; serialised if needed. Required.

Returns:

FewShotPromptBuilder: The current builder instance.

Source code in src/openaivec/prompt.py
def example(
    self,
    input_value: str | BaseModel,
    output_value: str | BaseModel,
) -> "FewShotPromptBuilder":
    """Add a single input/output example.

    Args:
        input_value (str | BaseModel): Example input; if a Pydantic model is
            provided it is serialised to JSON.
        output_value (str | BaseModel): Expected output; serialised if needed.

    Returns:
        FewShotPromptBuilder: The current builder instance.
    """
    if self._prompt.examples is None:
        self._prompt.examples = []

    input_string = input_value if isinstance(input_value, str) else input_value.model_dump_json()
    output_string = output_value if isinstance(output_value, str) else output_value.model_dump_json()
    self._prompt.examples.append(Example(input=input_string, output=output_string))
    return self
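The str-vs-BaseModel dispatch above hinges on a single isinstance check: strings pass through untouched, and anything else is serialised via its model_dump_json method. A minimal stand-alone illustration of that dispatch, using a hand-rolled class in place of a real Pydantic model:

```python
import json


class FakeModel:
    """Stand-in for a Pydantic BaseModel; real models provide model_dump_json."""

    def __init__(self, **fields):
        self._fields = fields

    def model_dump_json(self) -> str:
        return json.dumps(self._fields)


def to_example_string(value) -> str:
    # Mirrors the dispatch in FewShotPromptBuilder.example: strings pass
    # through unchanged, model-like objects are serialised to JSON.
    return value if isinstance(value, str) else value.model_dump_json()


print(to_example_string("plain text"))
print(to_example_string(FakeModel(city="Paris", temp=21)))
```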

improve

improve(
    client: OpenAI,
    model_name: str,
    temperature: float = 0.0,
    top_p: float = 1.0,
) -> FewShotPromptBuilder

Iteratively refine the prompt using an LLM.

The method calls a single LLM request that returns multiple editing steps and stores each step for inspection.

Parameters:

client (OpenAI): Configured OpenAI client. Required.

model_name (str): Model identifier (e.g. gpt-4o-mini). Required.

temperature (float): Sampling temperature. Defaults to 0.0.

top_p (float): Nucleus sampling parameter. Defaults to 1.0.

Returns:

FewShotPromptBuilder: The current builder instance containing the refined prompt and iteration history.

Source code in src/openaivec/prompt.py
def improve(
    self,
    client: OpenAI,
    model_name: str,
    temperature: float = 0.0,
    top_p: float = 1.0,
) -> "FewShotPromptBuilder":
    """Iteratively refine the prompt using an LLM.

    The method calls a single LLM request that returns multiple
    editing steps and stores each step for inspection.

    Args:
        client (openai.OpenAI): Configured OpenAI client.
        model_name (str): Model identifier (e.g. ``gpt-4o-mini``).
        temperature (float, optional): Sampling temperature. Defaults to 0.0.
        top_p (float, optional): Nucleus sampling parameter. Defaults to 1.0.

    Returns:
        FewShotPromptBuilder: The current builder instance containing the refined prompt and iteration history.
    """

    response: ParsedResponse[Response] = client.responses.parse(
        model=model_name,
        instructions=_PROMPT,
        input=Request(prompt=self._prompt).model_dump_json(),
        temperature=temperature,
        top_p=top_p,
        text_format=Response,
    )

    # keep the original prompt
    self._steps = [Step(id=0, analysis="Original Prompt", prompt=self._prompt)]

    # add the histories
    for step in response.output_parsed.iterations:
        self._steps.append(step)

    # set the final prompt
    self._prompt = self._steps[-1].prompt

    return self

explain

explain() -> FewShotPromptBuilder

Pretty‑print the diff of each improvement iteration.

Returns:

FewShotPromptBuilder: The current builder instance.

Source code in src/openaivec/prompt.py
def explain(self) -> "FewShotPromptBuilder":
    """Pretty‑print the diff of each improvement iteration.

    Returns:
        FewShotPromptBuilder: The current builder instance.
    """
    for previous, current in zip(self._steps, self._steps[1:]):
        print(f"=== Iteration {current.id} ===\n")
        print(f"Instruction: {current.analysis}")
        diff = difflib.unified_diff(
            _render_prompt(previous.prompt).splitlines(),
            _render_prompt(current.prompt).splitlines(),
            fromfile="before",
            tofile="after",
            lineterm="",
        )
        for line in diff:
            print(line)
    return self
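explain relies on difflib.unified_diff from the standard library to show what changed between consecutive prompt renderings. A self-contained illustration of that diffing step, with hypothetical before/after text standing in for two rendered prompts:

```python
import difflib

# Hypothetical renderings of two consecutive refinement steps.
before = ["<Prompt>", "    <Purpose>classify text</Purpose>", "</Prompt>"]
after = ["<Prompt>", "    <Purpose>classify customer feedback</Purpose>", "</Prompt>"]

# lineterm="" stops unified_diff from appending "\n" to the header lines,
# so each diff line can be printed as-is -- the same settings explain() uses.
for line in difflib.unified_diff(before, after, fromfile="before", tofile="after", lineterm=""):
    print(line)
```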

get_object

get_object() -> FewShotPrompt

Return the underlying FewShotPrompt object.

Returns:

FewShotPrompt: The validated prompt object.

Source code in src/openaivec/prompt.py
def get_object(self) -> FewShotPrompt:
    """Return the underlying FewShotPrompt object.

    Returns:
        FewShotPrompt: The validated prompt object.
    """
    self._validate()
    return self._prompt

build

build() -> str

Build and return the prompt as XML.

Returns:

str: XML representation of the prompt.

Source code in src/openaivec/prompt.py
def build(self) -> str:
    """Build and return the prompt as XML.

    Returns:
        str: XML representation of the prompt.
    """
    self._validate()
    return self.build_xml()

build_json

build_json(**kwargs: Any) -> str

Build and return the prompt as a JSON string.

Parameters:

**kwargs (Any): Keyword arguments forwarded to model_dump_json.

Returns:

str: JSON representation of the prompt.

Source code in src/openaivec/prompt.py
def build_json(self, **kwargs: Any) -> str:
    """Build and return the prompt as a JSON string.

    Args:
        **kwargs: Keyword arguments forwarded to ``model_dump_json``.

    Returns:
        str: JSON representation of the prompt.
    """
    self._validate()
    return self._prompt.model_dump_json(**kwargs)

build_xml

build_xml() -> str

Alias for build() for explicit XML generation.

Returns:

str: XML representation of the prompt.

Source code in src/openaivec/prompt.py
def build_xml(self) -> str:
    """Alias for :py:meth:`build` for explicit XML generation.

    Returns:
        str: XML representation of the prompt.
    """
    self._validate()
    return _render_prompt(self._prompt)