LLMs
BaseRagasLLM
dataclass
BaseRagasLLM(run_config: RunConfig = RunConfig(), multiple_completion_supported: bool = False, cache: Optional[CacheInterface] = None)
Bases: ABC
get_temperature
is_finished
abstractmethod
generate
async
generate(prompt: PromptValue, n: int = 1, temperature: Optional[float] = 0.01, stop: Optional[List[str]] = None, callbacks: Callbacks = None) -> LLMResult
Generate text using the given event loop.
Source code in src/ragas/llms/base.py
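`generate` is a coroutine, so it must be awaited. A minimal call sketch, assuming `llm` is any concrete `BaseRagasLLM` (such as one of the wrappers below) and that `StringPromptValue` from `langchain_core` is used as the `PromptValue` (an assumption about the prompt type, not confirmed by this page):

```python
import asyncio

from langchain_core.prompt_values import StringPromptValue  # assumed PromptValue type

async def ask(llm, question: str) -> str:
    # llm is any concrete BaseRagasLLM, e.g. one of the wrappers documented below.
    result = await llm.generate(
        StringPromptValue(text=question),
        n=1,
        temperature=0.01,
    )
    # LLMResult holds one list of generations per prompt.
    return result.generations[0][0].text

# asyncio.run(ask(my_llm, "What is retrieval-augmented generation?"))
```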
InstructorBaseRagasLLM
Bases: ABC
Base class for LLMs using the Instructor library pattern.
generate
abstractmethod
Generate a response using the configured LLM.
For async clients, this will run the async method in the appropriate event loop.
Source code in src/ragas/llms/base.py
agenerate
abstractmethod
async
Asynchronously generate a response using the configured LLM.
InstructorLLM
InstructorLLM(client: Any, model: str, provider: str, model_args: Optional[InstructorModelArgs] = None, **kwargs)
Bases: InstructorBaseRagasLLM
LLM wrapper using the Instructor library for structured outputs.
Source code in src/ragas/llms/base.py
generate
Generate a response using the configured LLM.
For async clients, this will run the async method in the appropriate event loop.
Source code in src/ragas/llms/base.py
agenerate
async
Asynchronously generate a response using the configured LLM.
Source code in src/ragas/llms/base.py
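A hedged end-to-end sketch of structured output with an `InstructorLLM`. It uses `llm_factory` (documented below), which returns an `InstructorLLM` for the OpenAI provider; `Score` is a hypothetical Pydantic response model, and the prompt is assumed to be a plain string:

```python
from openai import OpenAI
from pydantic import BaseModel

from ragas.llms import llm_factory

class Score(BaseModel):
    # Hypothetical response schema used only for illustration.
    value: int
    reason: str

# llm_factory returns an InstructorLLM when given an OpenAI client.
llm = llm_factory("gpt-4o-mini", client=OpenAI())

# generate() asks the LLM to fill the Pydantic model via Instructor.
score = llm.generate("Rate the helpfulness of the answer 'Use RAG.' from 1 to 5.", Score)
print(score.value, score.reason)
```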
HaystackLLMWrapper
HaystackLLMWrapper(haystack_generator: Any, run_config: Optional[RunConfig] = None, cache: Optional[CacheInterface] = None)
Bases: BaseRagasLLM
A wrapper class for using Haystack LLM generators within the Ragas framework.
This class integrates Haystack's LLM components (e.g., OpenAIGenerator,
HuggingFaceAPIGenerator, etc.) into Ragas, enabling both synchronous and
asynchronous text generation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `haystack_generator` | `AzureOpenAIGenerator \| HuggingFaceAPIGenerator \| HuggingFaceLocalGenerator \| OpenAIGenerator` | An instance of a Haystack generator. | *required* |
| `run_config` | `RunConfig` | Configuration object to manage LLM execution settings, by default None. | `None` |
| `cache` | `CacheInterface` | A cache instance for storing results, by default None. | `None` |
Source code in src/ragas/llms/haystack_wrapper.py
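A hedged construction sketch, assuming Haystack 2.x's `OpenAIGenerator` and an `OPENAI_API_KEY` in the environment; parameter names follow the table above:

```python
from haystack.components.generators import OpenAIGenerator

from ragas.llms import HaystackLLMWrapper
from ragas.run_config import RunConfig

# The generator reads OPENAI_API_KEY from the environment by default.
generator = OpenAIGenerator(model="gpt-4o-mini")

# Wrap it so Ragas can drive it like any other BaseRagasLLM.
llm = HaystackLLMWrapper(
    haystack_generator=generator,
    run_config=RunConfig(timeout=60),
)
```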
OCIGenAIWrapper
OCIGenAIWrapper(model_id: str, compartment_id: str, config: Optional[Dict[str, Any]] = None, endpoint_id: Optional[str] = None, run_config: Optional[RunConfig] = None, cache: Optional[Any] = None, default_system_prompt: Optional[str] = None, client: Optional[Any] = None)
Bases: BaseRagasLLM
OCI Gen AI LLM wrapper for Ragas.
This wrapper provides direct integration with Oracle Cloud Infrastructure Generative AI services without requiring LangChain or LlamaIndex.
Args:

- `model_id`: The OCI model ID to use for generation
- `compartment_id`: The OCI compartment ID
- `config`: OCI configuration dictionary (optional, uses default if not provided)
- `endpoint_id`: Optional endpoint ID for the model
- `run_config`: Ragas run configuration
- `cache`: Optional cache backend
Source code in src/ragas/llms/oci_genai_wrapper.py
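A hedged construction sketch; the OCID below is a placeholder, and when `config` and `client` are omitted the wrapper falls back to the default OCI configuration as described above:

```python
from ragas.llms import OCIGenAIWrapper  # assumed export; source lives in oci_genai_wrapper.py
from ragas.run_config import RunConfig

llm = OCIGenAIWrapper(
    model_id="cohere.command",
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    run_config=RunConfig(timeout=120),
)
```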
generate_text
generate_text(prompt: PromptValue, n: int = 1, temperature: Optional[float] = 0.01, stop: Optional[List[str]] = None, callbacks: Optional[Any] = None) -> LLMResult
Generate text using OCI Gen AI.
Source code in src/ragas/llms/oci_genai_wrapper.py
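A hedged usage sketch for `generate_text`, assuming `llm` is an `OCIGenAIWrapper` instance (as constructed above) and that `StringPromptValue` serves as the `PromptValue`:

```python
from langchain_core.prompt_values import StringPromptValue  # assumed PromptValue type

# Synchronous generation; n and temperature mirror the signature above.
result = llm.generate_text(
    StringPromptValue(text="Summarize OCI Generative AI in one sentence."),
    n=1,
    temperature=0.01,
)
print(result.generations[0][0].text)
```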
agenerate_text
async
agenerate_text(prompt: PromptValue, n: int = 1, temperature: Optional[float] = 0.01, stop: Optional[List[str]] = None, callbacks: Optional[Any] = None) -> LLMResult
Generate text asynchronously using OCI Gen AI.
Source code in src/ragas/llms/oci_genai_wrapper.py
is_finished
Check if the LLM response is finished/complete.
Source code in src/ragas/llms/oci_genai_wrapper.py
llm_factory
llm_factory(model: str, provider: str = 'openai', client: Optional[Any] = None, **kwargs: Any) -> InstructorBaseRagasLLM
Create an LLM instance for structured output generation using Instructor.
Supports multiple LLM providers with a unified interface for both sync and async operations. Returns instances with `.generate()` and `.agenerate()` methods that accept Pydantic models for structured outputs.
Args:

- `model`: Model name (e.g., "gpt-4o", "gpt-4o-mini", "claude-3-sonnet").
- `provider`: LLM provider. Default: "openai". Supported: openai, anthropic, google, litellm.
- `client`: Pre-initialized client instance (required). For OpenAI, this can be `OpenAI(...)` or `AsyncOpenAI(...)`.
- `**kwargs`: Additional model arguments (temperature, max_tokens, top_p, etc.).

Returns:

- `InstructorBaseRagasLLM`: Instance with `generate()` and `agenerate()` methods.

Raises:

- `ValueError`: If client is missing, provider is unsupported, or model is invalid.

Examples:

```python
from openai import OpenAI

client = OpenAI(api_key="...")
llm = llm_factory("gpt-4o", client=client)
response = llm.generate(prompt, ResponseModel)

# Async
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key="...")
llm = llm_factory("gpt-4o", client=client)
response = await llm.agenerate(prompt, ResponseModel)
```
Source code in src/ragas/llms/base.py
oci_genai_factory
oci_genai_factory(model_id: str, compartment_id: str, config: Optional[Dict[str, Any]] = None, endpoint_id: Optional[str] = None, run_config: Optional[RunConfig] = None, cache: Optional[Any] = None, default_system_prompt: Optional[str] = None, client: Optional[Any] = None) -> OCIGenAIWrapper
Factory function to create an OCI Gen AI LLM instance.
Args:

- `model_id`: The OCI model ID to use for generation
- `compartment_id`: The OCI compartment ID
- `config`: OCI configuration dictionary (optional)
- `endpoint_id`: Optional endpoint ID for the model
- `run_config`: Ragas run configuration
- `cache`, `default_system_prompt`, `client`: Additional arguments passed through to `OCIGenAIWrapper`

Returns:

- `OCIGenAIWrapper`: An instance of the OCI Gen AI LLM wrapper.

Examples:

```python
# Basic usage with default config
llm = oci_genai_factory(
    model_id="cohere.command",
    compartment_id="ocid1.compartment.oc1..example",
)

# With custom config
llm = oci_genai_factory(
    model_id="cohere.command",
    compartment_id="ocid1.compartment.oc1..example",
    config={"user": "user_ocid", "key_file": "~/.oci/private_key.pem"},
)
```