This page documents the public runtime API and the behavior that matters in production.
```ts
function processText(text: string, configPath?: string): Promise<QirrelContext>
```

- Creates a fresh `Pipeline` per call.
- Loads config using `ConfigLoader` precedence (see Configuration).
- Returns one `QirrelContext`.

Use this for one-off parsing. For repeated calls, reuse a `Pipeline` instance to retain its cache and event handlers.
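The caching difference is the main reason to prefer a long-lived `Pipeline`. The sketch below uses a stand-in `Pipeline` class (invented here, not Qirrel's implementation) purely to show why a fresh-instance-per-call helper cannot retain cache across calls:

```typescript
// Stand-in types and Pipeline for illustration only; not Qirrel internals.
type QirrelContext = { data: { text: string } };

class Pipeline {
  private cache = new Map<string, QirrelContext>();

  async process(text: string): Promise<QirrelContext> {
    const hit = this.cache.get(text);
    if (hit) return hit;
    const ctx: QirrelContext = { data: { text } }; // pretend this is expensive
    this.cache.set(text, ctx);
    return ctx;
  }

  isCached(text: string): boolean {
    return this.cache.has(text);
  }
}

// processText-style helper: a fresh Pipeline per call, so its cache is
// discarded as soon as the call returns.
async function processText(text: string): Promise<QirrelContext> {
  return new Pipeline().process(text);
}

async function demo(): Promise<boolean> {
  await processText("hello"); // cache lives and dies inside this call
  const reused = new Pipeline(); // long-lived instance
  await reused.process("hello");
  return reused.isCached("hello"); // only the reused instance kept its cache
}
```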
```ts
function processTexts(
  texts: string[],
  configPath?: string,
  options?: { concurrency?: number },
): Promise<QirrelContext[]>
```

- Creates a fresh `Pipeline` per call.
- Preserves input order in the output.
- Uses bounded worker concurrency.
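Bounded worker concurrency with order-preserving output can be implemented as below. This is a generic sketch of the pattern, not Qirrel's actual scheduler; `mapWithConcurrency` is an invented name:

```typescript
// Order-preserving, bounded-concurrency mapper. Workers claim indices
// synchronously (JS is single-threaded between awaits), and results are
// written by index, so output order matches input order regardless of
// which item finishes first.
async function mapWithConcurrency<T, R>(
  items: T[],
  concurrency: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let next = 0; // next index to claim

  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```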
```ts
new Pipeline(configPath?: string)
```

Construction does the following:
- loads config,
- builds tokenizer,
- assembles built-in processors based on config flags,
- initializes caches,
- starts async LLM adapter initialization when enabled.
```ts
init(): Promise<void>
```

Waits for the async adapter initialization started by the constructor.
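The construct-then-`init()` lifecycle (synchronous constructor kicks off async work, `init()` is the await point) can be sketched as follows. `PipelineSketch` and `FakeAdapter` are stand-ins invented for illustration:

```typescript
// Stand-in adapter with async setup.
class FakeAdapter {
  ready = false;
  async start(): Promise<void> {
    await new Promise<void>((resolve) => setTimeout(resolve, 10)); // simulate async setup
    this.ready = true;
  }
}

class PipelineSketch {
  private adapter = new FakeAdapter();
  private adapterInit: Promise<void>;

  constructor() {
    // Started immediately but not awaited: construction stays synchronous.
    this.adapterInit = this.adapter.start();
  }

  // Await this before processing when the adapter must be ready.
  async init(): Promise<void> {
    await this.adapterInit;
  }

  isReady(): boolean {
    return this.adapter.ready;
  }
}
```

Typical usage mirrors the documented API: construct, then `await pipeline.init()` before the first `process` call that depends on the adapter.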
```ts
process(text: string): Promise<QirrelContext>
```

- Processes one input.
- Emits events (`RunStart`, processor events, `RunEnd`, `Error`).
- Caches the result when caching is enabled.
```ts
processBatch(texts: string[], options?: { concurrency?: number }): Promise<QirrelContext[]>
```

- Validates inputs.
- Parallelizes work up to `concurrency`.
- Throws on invalid input types or an invalid `concurrency` value.
```ts
use(component: PipelineComponent): this
addCustomProcessor(component: PipelineComponent): this
addLLMProcessor(component: PipelineComponent): this
```
```ts
on(event: PipelineEvent, handler: EventHandler): this
off(event: PipelineEvent, handler: EventHandler): this
```
See Events for payload contracts and error behavior.
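Chainable `on`/`off` registration is a conventional emitter pattern. A minimal sketch with simplified stand-in types (the real `PipelineEvent` and `EventHandler` shapes are defined by Qirrel, and payload contracts are covered in Events):

```typescript
// Simplified stand-ins; event names taken from the PipelineEvent list.
type PipelineEvent = "run.start" | "run.end" | "error";
type EventHandler = (payload: unknown) => void;

class EmitterSketch {
  private handlers = new Map<PipelineEvent, Set<EventHandler>>();

  on(event: PipelineEvent, handler: EventHandler): this {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event)!.add(handler);
    return this; // chainable, matching the documented `this` return
  }

  off(event: PipelineEvent, handler: EventHandler): this {
    this.handlers.get(event)?.delete(handler);
    return this;
  }

  emit(event: PipelineEvent, payload: unknown): void {
    this.handlers.get(event)?.forEach((h) => h(payload));
  }
}
```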
```ts
getConfig(): MiniparseConfig
getLLMAdapter(): LLMAdapter | undefined
getCacheManager(): LruCacheManager
isCached(text: string): boolean
getCached(text: string): QirrelContext | undefined
setCached(text: string, result: QirrelContext, ttl?: number): void
```
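Because cached contexts are cloned on read and write (see the notes at the end of this page), `getCached` never returns the stored object itself. A sketch of that clone-on-access behavior; this is not the real `LruCacheManager`, which also handles LRU eviction and TTLs:

```typescript
// Clone-on-read/write cache sketch. Eviction and TTL handling are omitted.
type QirrelContext = { data: { text: string } };

class CloningCache {
  private store = new Map<string, QirrelContext>();

  setCached(text: string, result: QirrelContext): void {
    // Clone on write: later mutation of `result` can't corrupt the cache.
    this.store.set(text, structuredClone(result));
  }

  getCached(text: string): QirrelContext | undefined {
    const hit = this.store.get(text);
    // Clone on read: each caller gets an independent copy.
    return hit && structuredClone(hit);
  }

  isCached(text: string): boolean {
    return this.store.has(text);
  }
}
```

Two `getCached` calls for the same key therefore return deeply equal but not identical objects, which is why identity comparisons across calls are unreliable.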
```ts
interface PipelineComponent {
  name: string;
  version?: string;
  cacheable?: boolean;
  run(input: QirrelContext): Promise<QirrelContext>;
}
```

Notes:

- Components are expected to mutate and return the `QirrelContext`.
- If `cacheable: true`, Qirrel may wrap the component with cache logic when caching is enabled.
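A custom component under this contract might look like the following. The component name and its length-recording behavior are invented for illustration; the local interface copies are trimmed to keep the sketch self-contained:

```typescript
// Trimmed local copies of the documented shapes.
interface QirrelContext {
  data?: { text: string; tokens: unknown[]; entities: unknown[] };
  memory?: { cache?: Record<string, unknown> };
}

interface PipelineComponent {
  name: string;
  version?: string;
  cacheable?: boolean;
  run(input: QirrelContext): Promise<QirrelContext>;
}

// Hypothetical component: records the input text length into memory.cache.
const lengthRecorder: PipelineComponent = {
  name: "length-recorder",
  version: "0.1.0",
  cacheable: true,
  async run(ctx) {
    ctx.memory ??= {};
    ctx.memory.cache ??= {};
    ctx.memory.cache["length-recorder"] = ctx.data?.text.length ?? 0;
    return ctx; // mutate and return the same context, per the contract
  },
};
```

It would be registered with `pipeline.use(lengthRecorder)` (or `addCustomProcessor`).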
```ts
interface QirrelContext {
  meta?: {
    requestId?: string;
    timestamp?: number;
    source?: 'http' | 'cli' | 'worker';
    trace?: Record<string, string>;
  };
  memory?: {
    shortTerm?: unknown;
    longTerm?: unknown;
    cache?: Record<string, unknown>;
  };
  llm?: {
    model?: string;
    temperature?: number;
    safety?: {
      allowTools: boolean;
      redactions?: string[];
    };
  };
  data?: {
    text: string;
    tokens: Token[];
    entities: Entity[];
    llmResponse?: LLMResponse;
  };
}

interface Entity {
  type: string;
  value: string;
  start: number;
  end: number;
}
```

`PipelineEvent` values:

- `RunStart` (`run.start`)
- `RunEnd` (`run.end`)
- `ProcessorStart` (`processor.start`)
- `ProcessorEnd` (`processor.end`)
- `LLMCall` (`llm.call`, reserved)
- `Error` (`error`)
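`Entity.start`/`Entity.end` are offsets into the source text. Assuming the usual half-open convention, slicing recovers the surface form; the interface itself does not pin the convention down, and the sample sentence and entity below are invented:

```typescript
interface Entity {
  type: string;
  value: string;
  start: number;
  end: number;
}

// Hypothetical extraction result. Assumes half-open offsets, i.e.
// value === text.slice(start, end).
const text = "Ada Lovelace wrote about the Analytical Engine.";
const entity: Entity = { type: "person", value: "Ada Lovelace", start: 0, end: 12 };

const surface = text.slice(entity.start, entity.end);
```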
Qirrel also exports agent-native APIs:
- `AgentBridge`
- `createQirrelAgentBridge`
- `createMcpRequestHandler`
- `startMcpStdioServer`
For full behavior and protocol notes, see Agent-Native Integration.
- `processText` and `processTexts` do not share cache between calls because they instantiate a new `Pipeline` each time.
- `PipelineEvent.LLMCall` exists in the enum but is currently reserved and not emitted by the default pipeline path.
- Cached contexts are cloned on read/write; do not rely on object identity across calls.