moorellm package

Submodules

class moorellm.main.MooreFSM(initial_state: str, end_state: str = 'END')[source]

Bases: object

Moore Finite State Machine (FSM) based LLM agent class. It lets you define states and transitions, and uses the latest structured-output response support of the OpenAI/Azure OpenAI API.

Parameters:
  • initial_state (str) – Initial state of the FSM

  • end_state (str) – End state of the FSM, default is “END” (currently not used)

Returns:

MooreFSM object

Return type:

moorellm.main.MooreFSM
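As background for the class name: in a Moore machine the output depends only on the current state, while the input drives transitions. The sketch below illustrates that stepping logic in plain Python, independent of moorellm; it replaces the LLM's natural-language condition check with an exact input lookup, and the states and inputs are hypothetical.

```python
# Illustrative Moore-machine step, independent of moorellm.
# Output depends only on the (next) state; input selects the transition.
transitions = {
    ("START", "quit"): "END",  # hypothetical (state, input) -> next state
}
outputs = {"START": "How can I help you?", "END": "Goodbye."}

def step(state: str, user_input: str) -> tuple[str, str]:
    """Return (next_state, output emitted in the next state)."""
    next_state = transitions.get((state, user_input), state)
    return next_state, outputs[next_state]
```

In MooreFSM the lookup above is performed by the LLM itself, which matches the user's input against the natural-language conditions declared in each state's transitions.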

get_chat_history()[source]

Get the chat history.

get_context_data(key: str, default: Any = None)[source]

Get data from user defined context.

get_current_state()[source]

Get the current state.

get_next_state()[source]

Get the next state.

is_completed()[source]

Check if the FSM is completed.

reset()[source]

Reset the FSM to initial state.

async run(async_openai_instance: AsyncAzureOpenAI | AsyncOpenAI, user_input: str, model: str = 'gpt-4o-2024-08-06', *args, **kwargs) MooreRun[source]

Run the FSM with user input and get the response from the OpenAI API; only one iteration is performed per call.

Parameters:
  • async_openai_instance (Union[openai.AsyncAzureOpenAI, openai.AsyncOpenAI]) – OpenAI/AzureOpenAI instance to use for completion

  • user_input (str) – User input to the FSM

  • model (str) – Model to use for completion, default is “gpt-4o-2024-08-06”

Returns:

MooreRun object

Return type:

moorellm.models.MooreRun

while True:
    user_input = input("You: ")
    run: MooreRun = await fsm.run(async_openai_instance, user_input)
    print(f"AI: {run.response}")
    if run.state == "END":
        break

Note

The run function should be called in a loop to keep the FSM running; each call returns the details of that run.

Note

Only pass an AsyncOpenAI or AsyncAzureOpenAI instance, since completions are performed asynchronously.

set_context_data(key: str, value: Any)[source]

Set data into user defined context.
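Taken together, get_context_data and set_context_data act as a per-FSM key-value store that persists across run calls. A minimal sketch of that get/set contract, with a plain dict standing in for the FSM's internal context:

```python
# Plain-dict stand-in for the FSM's user-defined context.
context: dict = {}

def set_context_data(key, value):
    context[key] = value

def get_context_data(key, default=None):
    # Missing keys fall back to the supplied default, as in MooreFSM.
    return context.get(key, default)

set_context_data("user_name", "Ada")
```

State functions can use this store to carry data (user details, intermediate results) between states without re-asking the model.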

set_next_state(next_state: str)[source]

Set the next state.

state(state_key: str, system_prompt: str, temperature: float = 0.5, transitions: Dict[str, str] = {}, response_model: BaseModel | None = None, pre_process_chat: Callable | None = None, pre_process_system_prompt: Callable | None = None)[source]

Decorator to add a state to the FSM while defining transitions and other details.

Parameters:
  • state_key (str) – Key of this state

  • system_prompt (str) – System prompt for this state

  • temperature (float) – Temperature for the completion

  • transitions (Dict[str, str]) – Transitions to other states, defined by key-value pairs of next_state_key: condition

  • response_model (BaseModel, optional) – Pydantic model for response parsing; default is None (i.e., the response will be a plain string)

  • pre_process_chat (Callable, optional) – Pre-process chat history before sending to OpenAI API

  • pre_process_system_prompt (Callable, optional) – Pre-process system prompt before sending to OpenAI API

Returns:

Decorator function

Return type:

Callable

@fsm.state(
    state_key="START",
    system_prompt="Hello, how can I help you?",
    temperature=0.5,
    transitions={"END": "when user has said quit or exit"},
    response_model=DefaultResponse,
)
async def start_state(fsm: MooreFSM, response: DefaultResponse, will_transition: bool):
    # Act on the parsed response, e.g. stash it in the FSM context.
    fsm.set_context_data("last_reply", response.content)

Note

The state function should be an async function and should have the following signature:

async def state_function(fsm: MooreFSM, response: Any, will_transition: bool):
    pass

Note

The response model should be a Pydantic model; if none is defined, the response will be a string.

class moorellm.models.DefaultResponse(*, content: str)[source]

Bases: BaseModel

Default response model for AI output.

Parameters:

content (str) – The content of the response from the AI model.

Note

This model can be extended or replaced with custom response models as needed.
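For instance, a hypothetical custom model extending the same shape with an extra structured field (DefaultResponse is redefined locally here to keep the sketch self-contained; the field name order_id is illustrative):

```python
from typing import Optional

from pydantic import BaseModel

class DefaultResponse(BaseModel):
    # Local stand-in mirroring moorellm.models.DefaultResponse.
    content: str

class OrderResponse(DefaultResponse):
    # Extra structured field the model is asked to fill; optional so
    # plain conversational turns still validate.
    order_id: Optional[int] = None
```

Passing such a model as response_model to the state decorator would make run return a parsed OrderResponse instead of a plain string.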

content: str
model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class moorellm.models.MooreRun(*, state: str, chat_history: list[dict], context_data: dict[str, Any], response_raw: dict, response: Any)[source]

Bases: BaseModel

Represents the outcome of a single run/step in the FSM.

Parameters:
  • state (str) – The current state key of the FSM.

  • chat_history (list[dict]) – A list of dictionaries representing the history of the conversation.

  • context_data (dict[str, Any]) – Contextual data relevant to the FSM run.

  • response_raw (dict) – The raw response from the AI model.

  • response (Any) – The processed response, potentially modeled by response_model.

Note

The response attribute may be of any type, depending on the response model used.

chat_history: list[dict]
context_data: dict[str, Any]
model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'chat_history': FieldInfo(annotation=list[dict], required=True), 'context_data': FieldInfo(annotation=dict[str, Any], required=True), 'response': FieldInfo(annotation=Any, required=True), 'response_raw': FieldInfo(annotation=dict, required=True), 'state': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

response: Any
response_raw: dict
state: str

class moorellm.models.MooreState(*, key: str, func: Callable, system_prompt: str, temperature: float, transitions: dict[str, str], response_model: Type[BaseModel] | None, pre_process_chat: Callable | None, pre_process_system_prompt: Callable | None)[source]

Bases: BaseModel

Represents a state in the Finite State Machine (FSM) for managing the AI’s conversation flow.

Parameters:
  • key (str) – Unique identifier for the state.

  • func (Callable) – Callable function that defines the action to take in this state.

  • system_prompt (str) – The system prompt to be sent to the model.

  • temperature (float) – The temperature setting for the model’s response generation.

  • transitions (dict[str, str]) – A dictionary mapping next-state keys to the conditions under which the FSM should transition to them.

  • response_model (Type[BaseModel], optional) – The Pydantic model that will parse the AI’s response, if provided.

  • pre_process_chat (Callable, optional) – Optional callable for pre-processing the chat input before running the state function.

  • pre_process_system_prompt (Callable, optional) – Optional callable for pre-processing the system prompt before it is sent.

Note

The transitions dictionary should map next-state keys to transition conditions for proper FSM flow.

func: Callable
key: str
model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'func': FieldInfo(annotation=Callable, required=True), 'key': FieldInfo(annotation=str, required=True), 'pre_process_chat': FieldInfo(annotation=Union[Callable, NoneType], required=True), 'pre_process_system_prompt': FieldInfo(annotation=Union[Callable, NoneType], required=True), 'response_model': FieldInfo(annotation=Union[Type[BaseModel], NoneType], required=True), 'system_prompt': FieldInfo(annotation=str, required=True), 'temperature': FieldInfo(annotation=float, required=True), 'transitions': FieldInfo(annotation=dict[str, str], required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

pre_process_chat: Callable | None
pre_process_system_prompt: Callable | None
response_model: Type[BaseModel] | None
system_prompt: str
temperature: float
transitions: dict[str, str]

exception moorellm.models.StateMachineError[source]

Bases: Exception

Custom exception for errors within the FSM.

Note

Raise this exception to indicate errors specific to FSM operations.
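A minimal sketch of that pattern, with a local stand-in for the exception and a hypothetical guard function (the validation logic here is illustrative, not moorellm's own):

```python
class StateMachineError(Exception):
    """Local stand-in mirroring moorellm.models.StateMachineError."""

def require_known_state(state: str, known: set) -> None:
    # Hypothetical guard: reject transitions into undeclared states.
    if state not in known:
        raise StateMachineError(f"Unknown state: {state!r}")
```

Catching StateMachineError separately from generic exceptions lets callers distinguish FSM configuration problems from, say, API failures.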

moorellm.utils.wrap_into_json_response(data: BaseModel, next_state: str) BaseModel[source]

Module contents