12 changes: 6 additions & 6 deletions src/examples/server/simpleStreamableHttp.ts
@@ -379,11 +379,11 @@ const getServer = () => {

return { task };
},
async getTask(_args, { taskId, taskStore: getTaskStore }) {
return await getTaskStore.getTask(taskId);
async getTask({ taskId, taskStore }) {
return await taskStore.getTask(taskId);
},
async getTaskResult(_args, { taskId, taskStore: getResultTaskStore }) {
const result = await getResultTaskStore.getTaskResult(taskId);
async getTaskResult({ taskId, taskStore }) {
const result = await taskStore.getTaskResult(taskId);
return result as CallToolResult;
}
}
@@ -605,10 +605,10 @@ const getServer = () => {
task
};
},
async getTask(_args, { taskId, taskStore }) {
async getTask({ taskId, taskStore }) {
return await taskStore.getTask(taskId);
},
async getTaskResult(_args, { taskId, taskStore }) {
async getTaskResult({ taskId, taskStore }) {
const result = await taskStore.getTaskResult(taskId);
return result as CallToolResult;
}
9 changes: 3 additions & 6 deletions src/experimental/tasks/interfaces.ts
@@ -70,19 +70,16 @@ export type CreateTaskRequestHandler<
* Handler for task operations (get, getResult).
* @experimental
*/
export type TaskRequestHandler<
SendResultT extends Result,
Args extends undefined | ZodRawShapeCompat | AnySchema = undefined
> = BaseToolCallback<SendResultT, TaskRequestHandlerExtra, Args>;
export type TaskRequestHandler<SendResultT extends Result> = (extra: TaskRequestHandlerExtra) => SendResultT | Promise<SendResultT>;

/**
* Interface for task-based tool handlers.
* @experimental
*/
export interface ToolTaskHandler<Args extends undefined | ZodRawShapeCompat | AnySchema = undefined> {
createTask: CreateTaskRequestHandler<CreateTaskResult, Args>;
getTask: TaskRequestHandler<GetTaskResult, Args>;
getTaskResult: TaskRequestHandler<CallToolResult, Args>;
getTask: TaskRequestHandler<GetTaskResult>;
getTaskResult: TaskRequestHandler<CallToolResult>;
}

/**
74 changes: 67 additions & 7 deletions src/server/mcp.ts
@@ -82,9 +82,43 @@ export class McpServer {
private _registeredTools: { [name: string]: RegisteredTool } = {};
private _registeredPrompts: { [name: string]: RegisteredPrompt } = {};
private _experimental?: { tasks: ExperimentalMcpServerTasks };
private _taskToolMap: Map<string, string> = new Map();
🟡 _taskToolMap entries (taskId → toolName) are added at line 248 but never removed — there is no .delete() or .clear() call anywhere. For long-running servers processing many tasks, this map grows unboundedly even after tasks reach terminal states or expire via TTL. Consider adding cleanup when a task completes/fails/is cancelled, or lazily when _getTaskHandler finds a task no longer exists in the store.

Extended reasoning...

What the bug is

The _taskToolMap field (declared at line 85 as Map<string, string>) stores a mapping from taskId to the tool name that created it. Entries are added at line 248 via this._taskToolMap.set(taskResult.task.taskId, request.params.name) whenever a task-augmented tool call returns a CreateTaskResult. However, there is no corresponding .delete() call anywhere in the codebase — entries persist for the lifetime of the McpServer instance.

How it manifests

Every task creation adds a string → string entry to the map. When a task completes, fails, is cancelled, or expires via its TTL and gets cleaned up from the TaskStore, the corresponding _taskToolMap entry remains. Over time, for a server that processes many tasks, this map grows monotonically.

Step-by-step proof

  1. A client calls tools/call with task: { ttl: 60000 } for a registered tool task.
  2. The CallToolRequestSchema handler at line 243-249 executes: this._taskToolMap.set(taskResult.task.taskId, request.params.name).
  3. The task completes — TaskStore.storeTaskResult() is called, the task enters a terminal state.
  4. The task's TTL expires and InMemoryTaskStore cleans it up internally.
  5. The _taskToolMap still holds the taskId → toolName entry. There is no code path that removes it.
  6. Repeat steps 1-5 thousands of times — the map now holds thousands of stale entries.

Why existing code doesn't prevent it

Searching for all references to _taskToolMap reveals exactly three: the declaration (line 85), a .get() call (line 111), and the .set() call (line 248). No .delete(), .clear(), or any other cleanup mechanism exists.

Impact

Each entry is two short strings (taskId + toolName), so individual entries are small. For typical short-lived MCP server instances or low task throughput, this is unlikely to cause issues. However, for long-running servers processing a high volume of tasks (e.g., a persistent production server), memory usage will grow linearly and unboundedly over time.

Suggested fix

The simplest approach would be to add a lazy cleanup in _getTaskHandler: if the taskId is found in _taskToolMap but the task no longer exists in the TaskStore, delete the entry. Alternatively, cleanup could be added when a task reaches a terminal state (in the taskHandlerHooks or after storeTaskResult calls).
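The terminal-state variant of that suggestion can be sketched in isolation (hedged: `onTaskStatusChange`, the `TaskStatus` union, and the free-standing `taskToolMap` are illustrative stand-ins, not SDK names):

```typescript
// Sketch: evict the taskId → toolName mapping as soon as a task reaches
// a terminal state, instead of waiting for a tasks/result call that may
// never come. The status values mirror the ones the polling loop checks.
type TaskStatus = 'working' | 'completed' | 'failed' | 'cancelled';

const taskToolMap = new Map<string, string>();

const TERMINAL: ReadonlySet<TaskStatus> = new Set(['completed', 'failed', 'cancelled']);

function onTaskStatusChange(taskId: string, status: TaskStatus): void {
    // Would be called wherever the store transitions task state,
    // e.g. after storeTaskResult or a cancellation.
    if (TERMINAL.has(status)) {
        taskToolMap.delete(taskId);
    }
}
```

This hooks cleanup to the state transition itself, so cancelled and TTL-expired tasks are covered without relying on the client's request pattern.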


The cleanup added in the getTaskResult hook (using finally { this._taskToolMap.delete(taskId) }) addresses the happy path, but leaves entries in _taskToolMap for tasks that are cancelled or fail and where the client never calls tasks/result.

The getTaskResult hook is only invoked from the GetTaskPayloadRequestSchema handler (the tasks/result request). If a client cancels a task via tasks/cancel and moves on without calling tasks/result, the taskId → toolName entry remains in the map permanently. Same for tasks that expire via TTL without tasks/result being called.

The original suggestion of lazy cleanup in _getTaskHandler would cover these cases — when _getTaskHandler is called for a taskId that no longer exists in the store, it could remove the stale entry. Alternatively, cleaning up in the tasks/cancel handler path would handle the cancellation case specifically.


The finally-based deletion issue from the previous follow-up is now fixed -- the delete is on the success path only, so a retry after a transient error in getTaskResult will correctly find the handler again.

However, the cancel-path leak is still present. The code comment says "Cleanup on tasks/cancel would require a protocol-level hook and is intentionally left out here", but the original comment suggested lazy cleanup in _getTaskHandler, which requires no protocol hook:

private _getTaskHandler(taskId: string): ToolTaskHandler<ZodRawShapeCompat | undefined> | null {
    const toolName = this._taskToolMap.get(taskId);
    if (!toolName) return null;
    const tool = this._registeredTools[toolName];
    if (!tool || !('createTask' in (tool.handler as AnyToolHandler<ZodRawShapeCompat>))) {
        this._taskToolMap.delete(taskId); // lazy-evict stale entry
        return null;
    }
    return tool.handler as ToolTaskHandler<ZodRawShapeCompat | undefined>;
}

This covers the typical cancel-then-poll-to-confirm pattern (the subsequent tasks/get call would trigger lazy eviction). Truly abandoned entries (cancelled with no further polling) would still linger, but those are the smaller concern.


constructor(serverInfo: Implementation, options?: ServerOptions) {
this.server = new Server(serverInfo, options);
const taskHandlerHooks = {
getTask: async (taskId: string, extra: RequestHandlerExtra<ServerRequest, ServerNotification>) => {
// taskStore is guaranteed to exist here because Protocol only calls hooks when taskStore is configured
const taskStore = extra.taskStore!;
const handler = this._getTaskHandler(taskId);
if (handler) {
return await handler.getTask({ ...extra, taskId, taskStore });
}
return await taskStore.getTask(taskId);
},
getTaskResult: async (taskId: string, extra: RequestHandlerExtra<ServerRequest, ServerNotification>) => {
const taskStore = extra.taskStore!;
const handler = this._getTaskHandler(taskId);
try {
if (handler) {
return await handler.getTaskResult({ ...extra, taskId, taskStore });
}
return await taskStore.getTaskResult(taskId);
} finally {
// Once the result has been retrieved the task is complete;
// drop the taskId → toolName mapping to avoid unbounded growth.
this._taskToolMap.delete(taskId);
}
}
};
this.server = new Server(serverInfo, { ...options, taskHandlerHooks });

🔴 The _taskToolMap lifecycle introduced by this PR has three related gaps: (1) the finally block in getTaskResult unconditionally deletes the entry even on transient errors, so a client retry bypasses the custom handler; (2) the cancel path (tasks/cancel) never notifies McpServer to clean up _taskToolMap, causing entries to leak for the lifetime of the server; (3) handleAutomaticTaskPolling creates a task but never populates _taskToolMap, so if a client learns the task ID via TaskStatusNotification and calls tasks/get directly, the custom handler is bypassed. Fix (1) by only deleting on success; fix (2) by adding a cancelTask hook or exposing a cleanup method; fix (3) by calling this._taskToolMap.set(taskId, toolName) after wrappedHandler.createTask.

Extended reasoning...

Gap 1 — Premature deletion on error (bug_003)

In the getTaskResult hook (lines 98–111 of src/server/mcp.ts), the finally block unconditionally calls this._taskToolMap.delete(taskId) regardless of whether the handler succeeded or threw. The developer comment reads "Once the result has been retrieved the task is complete", which implies the intent was to clean up only on success.

Step-by-step proof:

  1. Client calls tasks/result for a completed task.
  2. _getTaskHandler(taskId) finds the registered custom handler in _taskToolMap.
  3. handler.getTaskResult({ ...extra, taskId, taskStore }) throws a transient error (e.g., task store temporarily unavailable).
  4. The finally block runs and deletes the _taskToolMap entry.
  5. The error propagates to the client.
  6. Client retries tasks/result.
  7. _getTaskHandler(taskId) returns null — the entry was already deleted.
  8. Code falls back to taskStore.getTaskResult(taskId) directly, silently bypassing the custom handler.

The fix is straightforward: use a success flag (or move the delete call to after the return) so cleanup only happens when the result is successfully retrieved.
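The success-only shape can be sketched standalone (hedged: `TaskMapOwner` and its `fetchResult` callback are simplified stand-ins for McpServer's hook and the underlying store call):

```typescript
// Sketch: the delete runs only after the result is obtained, so a
// throwing fetch leaves the mapping intact for the client's retry.
class TaskMapOwner {
    readonly taskToolMap = new Map<string, string>();

    async getTaskResult<T>(taskId: string, fetchResult: () => Promise<T>): Promise<T> {
        const result = await fetchResult(); // a throw propagates before the delete
        this.taskToolMap.delete(taskId);    // cleanup on the success path only
        return result;
    }
}
```

Compared with a `finally` block, nothing here runs on the error path, which is exactly the retry-preserving behavior the comment asks for.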

Gap 2 — Cancel path never cleans up (bug_001)

The _taskToolMap is only cleaned up inside the getTaskResult hook. When a client calls tasks/cancel, the CancelTaskRequestSchema handler in protocol.ts (lines ~533–574) calls this._clearTaskQueue(request.params.taskId) on the Protocol level, but McpServer._taskToolMap is a private field with no corresponding hook or callback. Since cancelled tasks are terminal and clients have no reason to call tasks/result after cancellation, the _taskToolMap entry for that task persists for the lifetime of the McpServer instance.

Step-by-step proof:

  1. Tool registered with registerToolTask, taskSupport: "required".
  2. Client calls tools/call with task augmentation — _taskToolMap.set(taskId, toolName) is called at line ~252.
  3. Server creates the task and returns CreateTaskResult.
  4. Client calls tasks/cancel — Protocol handler runs, calls _clearTaskQueue, but McpServer._taskToolMap is not touched.
  5. Client does NOT call tasks/result (task is terminal).
  6. The entry taskId → toolName persists indefinitely.

Individual entries are small (two short strings), but on long-running servers handling many task cancellations, the map grows without bound. The fix requires either adding a cancelTask hook to taskHandlerHooks, or exposing a cleanup method that McpServer can wire up to the cancel path.
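One possible shape for the hook-based fix (hedged: `cancelTask` is a hypothetical addition to the PR's `taskHandlerHooks`, and the classes here are minimal stand-ins for Protocol and McpServer):

```typescript
// Sketch: a hypothetical cancelTask hook next to the PR's getTask /
// getTaskResult hooks, letting the protocol layer notify the owner of
// the private map when tasks/cancel arrives.
type TaskHandlerHooksSketch = {
    cancelTask?: (taskId: string) => void; // hypothetical addition
};

class ProtocolSketch {
    constructor(private readonly hooks: TaskHandlerHooksSketch) {}

    // Would be invoked from the CancelTaskRequestSchema handler,
    // alongside the existing _clearTaskQueue(taskId) call.
    handleCancel(taskId: string): void {
        this.hooks.cancelTask?.(taskId);
    }
}

// McpServer side: wire the hook to clean up its taskId → toolName map.
const taskToolMap = new Map<string, string>();
const protocol = new ProtocolSketch({
    cancelTask: taskId => void taskToolMap.delete(taskId)
});
```

The hook stays optional, so existing Protocol consumers that pass no hooks are unaffected.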

Gap 3 — Auto-polling path never populates the map (bug_004)

In handleAutomaticTaskPolling (lines ~401–437), a task is created via wrappedHandler.createTask(taskExtra) but this._taskToolMap.set(taskId, toolName) is never called. Compare with the isTaskRequest path (lines ~249–258) which does populate the map.

Step-by-step proof:

  1. Tool registered with taskSupport: "optional".
  2. Client calls tools/call without task augmentation — handleAutomaticTaskPolling is invoked.
  3. wrappedHandler.createTask(taskExtra) runs; the task is stored in taskStore.
  4. requestTaskStore.storeTaskResult is called internally, which sends a TaskStatusNotification (see protocol.ts lines ~1638–1641) containing the taskId.
  5. A client with an active SSE stream receives this notification and learns the taskId.
  6. Client calls tasks/get for that taskId.
  7. Protocol dispatches to the getTask hook; _getTaskHandler(taskId) returns null (no entry in _taskToolMap).
  8. Hook falls back to taskStore.getTask(taskId) directly, bypassing the registered custom handler.

Addressing the refutation: The refuter argues this scenario is implausible because clients using the optional-without-augmentation path are just waiting for a synchronous tools/call response and would not make parallel tasks/get calls. This is a fair observation for the common case. However, the server does emit TaskStatusNotification messages containing the taskId, and a well-behaved client with an open SSE stream can observe them. More importantly, the inconsistency is a design-level API contract violation: registerToolTask is supposed to wire up custom handlers for all task lifecycle requests, but for the optional-without-augmentation path, external calls to tasks/get silently bypass those handlers. The fix is simply to add this._taskToolMap.set(taskId, request.params.name) after wrappedHandler.createTask completes, matching the isTaskRequest path.


Gap 1 (premature deletion on error) is fixed in the latest commit (d044ace): the _taskToolMap.delete(taskId) at line 108 now executes only after result is successfully assigned, so a transient error causes the exception to propagate before the delete runs, preserving the entry for client retries.

However, Gaps 2 and 3 remain unaddressed:

Gap 2 (cancel path leaks entries): The code comment at lines 106–107 explicitly acknowledges this — "Cleanup on tasks/cancel would require a protocol-level hook and is intentionally left out here" — but leaves it unresolved. Any task that is cancelled without the client calling tasks/result afterwards will keep its _taskToolMap entry indefinitely.

Gap 3 (auto-polling path never populates the map): handleAutomaticTaskPolling (lines 419–438) calls wrappedHandler.createTask(taskExtra) and retrieves a taskId, but never calls this._taskToolMap.set(taskId, request.params.name). The internal polling loop calls wrappedHandler.getTask/wrappedHandler.getTaskResult directly and works correctly. However, if a client observes the taskId via a TaskStatusNotification on an open SSE stream and then independently issues a tasks/get request, _getTaskHandler(taskId) returns null (no map entry) and falls back to taskStore.getTask(taskId), silently bypassing the registered custom handler. The fix is a single line: add this._taskToolMap.set(taskId, request.params.name) after wrappedHandler.createTask(taskExtra) completes, mirroring what the isTaskRequest path does at line 252.

}

private _getTaskHandler(taskId: string): ToolTaskHandler<ZodRawShapeCompat | undefined> | null {
const toolName = this._taskToolMap.get(taskId);
if (!toolName) return null;
const tool = this._registeredTools[toolName];
if (!tool || !('createTask' in (tool.handler as AnyToolHandler<ZodRawShapeCompat>))) return null;
return tool.handler as ToolTaskHandler<ZodRawShapeCompat | undefined>;
}

/**
@@ -215,6 +249,10 @@ export class McpServer {

// Return CreateTaskResult immediately for task requests
if (isTaskRequest) {
const taskResult = result as CreateTaskResult;
if (taskResult.task?.taskId) {
this._taskToolMap.set(taskResult.task.taskId, request.params.name);
}
return result;
}

@@ -374,27 +412,28 @@ export class McpServer {
const handler = tool.handler as ToolTaskHandler<ZodRawShapeCompat | undefined>;
const taskExtra = { ...extra, taskStore: extra.taskStore };

const createTaskResult: CreateTaskResult = args // undefined only if tool.inputSchema is undefined
? await Promise.resolve((handler as ToolTaskHandler<ZodRawShapeCompat>).createTask(args, taskExtra))
: // eslint-disable-next-line @typescript-eslint/no-explicit-any
await Promise.resolve(((handler as ToolTaskHandler<undefined>).createTask as any)(taskExtra));
const wrappedHandler = toolTaskHandlerByArgs(handler, args);

const createTaskResult = await wrappedHandler.createTask(taskExtra);

// Poll until completion
const taskId = createTaskResult.task.taskId;
const taskExtraComplete = { ...extra, taskId, taskStore: extra.taskStore };
let task = createTaskResult.task;
const pollInterval = task.pollInterval ?? 5000;

while (task.status !== 'completed' && task.status !== 'failed' && task.status !== 'cancelled') {
await new Promise(resolve => setTimeout(resolve, pollInterval));
const updatedTask = await extra.taskStore.getTask(taskId);
const getTaskResult = await wrappedHandler.getTask(taskExtraComplete);
const updatedTask = getTaskResult;
if (!updatedTask) {
throw new McpError(ErrorCode.InternalError, `Task ${taskId} not found during polling`);
}
task = updatedTask;
}
Comment on lines 427 to 435

🟣 This is a pre-existing issue: the polling loop in handleAutomaticTaskPolling (src/server/mcp.ts:427-435) uses await new Promise(resolve => setTimeout(resolve, pollInterval)) without checking extra.signal. If a client disconnects or sends notifications/cancelled mid-poll, the server continues polling until the task naturally reaches a terminal state, wasting resources and holding the handler slot open. This PR directly refactors this function and would be an ideal place to add abort signal support.

Extended reasoning...

What the bug is and how it manifests

The polling loop in handleAutomaticTaskPolling uses a bare setTimeout without any signal check:

while (task.status !== 'completed' && task.status !== 'failed' && task.status !== 'cancelled') {
    await new Promise(resolve => setTimeout(resolve, pollInterval));
    const getTaskResult = await wrappedHandler.getTask(taskExtraComplete);
    ...
}

When extra.signal fires (client disconnect or notifications/cancelled), the server never observes the abort. The full pollInterval (default 5 seconds per iteration) elapses before each loop check, and since task.status is only updated by wrappedHandler.getTask, the loop runs until the underlying task naturally terminates. For long-running tasks (minutes or hours), this can hold a handler slot and consume server resources long after the originating client has gone away.

The specific code path that triggers it

  1. Tool registered with taskSupport: optional
  2. Client calls tools/call without task augmentation → handleAutomaticTaskPolling is invoked
  3. wrappedHandler.createTask(taskExtra) creates the task and enters the while loop
  4. Client disconnects or sends notifications/cancelled, which fires extra.signal.abort
  5. await new Promise(resolve => setTimeout(resolve, pollInterval)) does not observe the signal; it runs for the full poll interval
  6. wrappedHandler.getTask(taskExtraComplete) is called, loop continues indefinitely

Why existing code does not prevent it

The extra.signal abort signal is present in RequestHandlerExtra (protocol.ts:255) and is wired up by Protocol._onrequest via an AbortController that is aborted when notifications/cancelled arrives. However, handleAutomaticTaskPolling never reads extra.signal. No if (extra.signal.aborted) guard exists, and the setTimeout is not raced against the signal.

Pre-existing nature and why it matters here

This bug predates the PR — the old code also had await new Promise(resolve => setTimeout(resolve, pollInterval)) without a signal check. The PR changed the inner loop body (extra.taskStore.getTask → wrappedHandler.getTask, and added taskExtraComplete), but did not introduce the missing signal check. All verifiers confirmed this is pre-existing.

However, since the PR directly modifies handleAutomaticTaskPolling, it is the natural moment to add the fix. The _waitForTaskUpdate helper in protocol.ts already demonstrates the correct pattern — racing the setTimeout against the abort signal.

How to fix it

Replace the bare setTimeout with a signal-aware wait, matching the _waitForTaskUpdate pattern:

// Option A: check at loop top
if (extra.signal.aborted) throw extra.signal.reason;
await new Promise(resolve => setTimeout(resolve, pollInterval));

// Option B: race against signal (cleaner, immediate cancellation)
await Promise.race([
    new Promise(r => setTimeout(r, pollInterval)),
    new Promise((_, reject) =>
        extra.signal.addEventListener('abort', () => reject(extra.signal.reason), { once: true })
    )
]);

Step-by-step proof

  1. Register a tool with taskSupport: 'optional' and a 60-second task TTL, pollInterval: 5000.
  2. Client calls tools/call without task augmentation.
  3. Server enters handleAutomaticTaskPolling, createTask returns, task is 'working'.
  4. Client disconnects; extra.signal.abort() fires immediately.
  5. Server is stuck at await new Promise(resolve => setTimeout(resolve, 5000)) — signal ignored.
  6. After 5 seconds, wrappedHandler.getTask is called; task is still 'working'.
  7. Loop repeats indefinitely (up to 60 seconds) until task reaches a terminal state.
  8. Server resources held for up to 60 seconds after client has gone.
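Note that Option B as written attaches a fresh abort listener on every poll iteration and never removes it when the timer wins the race. A self-contained sketch of the same pattern that also detaches the listener (the helper name is illustrative):

```typescript
// Sketch: signal-aware delay for the polling loop. Resolves after ms,
// rejects immediately with signal.reason if the signal aborts first,
// and removes the abort listener when the timer path wins.
function abortableDelay(ms: number, signal: AbortSignal): Promise<void> {
    return new Promise((resolve, reject) => {
        if (signal.aborted) return reject(signal.reason);
        const timer = setTimeout(() => {
            signal.removeEventListener('abort', onAbort);
            resolve();
        }, ms);
        const onAbort = () => {
            clearTimeout(timer);
            reject(signal.reason);
        };
        signal.addEventListener('abort', onAbort, { once: true });
    });
}
```

In the loop this would replace the bare setTimeout: `await abortableDelay(pollInterval, extra.signal);`, so a client disconnect ends the poll on the next tick instead of after the task's natural completion.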


// Return the final result
return (await extra.taskStore.getTaskResult(taskId)) as CallToolResult;
return await wrappedHandler.getTaskResult(taskExtraComplete);
}

private _completionHandlerInitialized = false;
@@ -1545,3 +1584,24 @@ const EMPTY_COMPLETION_RESULT: CompleteResult = {
hasMore: false
}
};

/**
* Wraps a tool task handler's createTask to handle args uniformly.
* getTask and getTaskResult don't take args, so they're passed through directly.
* @param handler The task handler to wrap.
* @param args The tool arguments.
* @returns A wrapped task handler for a tool, which only exposes a no-args interface for createTask.
*/
function toolTaskHandlerByArgs<Args extends AnySchema | ZodRawShapeCompat | undefined>(
handler: ToolTaskHandler<Args>,
args: unknown
): ToolTaskHandler<undefined> {
return {
createTask: extra =>
args // undefined only if tool.inputSchema is undefined
? Promise.resolve((handler as ToolTaskHandler<ZodRawShapeCompat>).createTask(args, extra))
: Promise.resolve((handler as ToolTaskHandler<undefined>).createTask(extra)),
getTask: handler.getTask,
getTaskResult: handler.getTaskResult
};
}
32 changes: 31 additions & 1 deletion src/shared/protocol.ts
@@ -98,6 +98,20 @@ export type ProtocolOptions = {
* appropriately (e.g., by failing the task, dropping messages, etc.).
*/
maxTaskQueueSize?: number;
/**
* Optional hooks for customizing task request handling.
* If a hook is provided, it fully owns the behavior (no fallback to TaskStore).
*/
taskHandlerHooks?: {
/**
* Called when tasks/get is received. If provided, must return the task.
*/
getTask?: (taskId: string, extra: RequestHandlerExtra<Request, Notification>) => Promise<GetTaskResult>;
/**
* Called when tasks/payload needs to retrieve the final result. If provided, must return the result.
*/
getTaskResult?: (taskId: string, extra: RequestHandlerExtra<Request, Notification>) => Promise<Result>;
};
};

/**
@@ -383,6 +397,16 @@ export abstract class Protocol<SendRequestT extends Request, SendNotificationT e
this._taskMessageQueue = _options?.taskMessageQueue;
if (this._taskStore) {
this.setRequestHandler(GetTaskRequestSchema, async (request, extra) => {
// Use hook if provided, otherwise fall back to TaskStore
if (_options?.taskHandlerHooks?.getTask) {
const hookResult = await _options.taskHandlerHooks.getTask(
request.params.taskId,
extra as unknown as RequestHandlerExtra<Request, Notification>
);
// @ts-expect-error SendResultT cannot contain GetTaskResult
return hookResult as SendResultT;
}

const task = await this._taskStore!.getTask(request.params.taskId, extra.sessionId);
if (!task) {
throw new McpError(ErrorCode.InvalidParams, 'Failed to retrieve task: Task not found');
@@ -462,7 +486,13 @@ export abstract class Protocol<SendRequestT extends Request, SendNotificationT e

// If task is terminal, return the result
if (isTerminal(task.status)) {
const result = await this._taskStore!.getTaskResult(taskId, extra.sessionId);
// Use hook if provided, otherwise fall back to TaskStore
const result = this._options?.taskHandlerHooks?.getTaskResult
? await this._options.taskHandlerHooks.getTaskResult(
taskId,
extra as unknown as RequestHandlerExtra<Request, Notification>
)
: await this._taskStore!.getTaskResult(taskId, extra.sessionId);

this._clearTaskQueue(taskId);

24 changes: 12 additions & 12 deletions test/client/index.test.ts
@@ -2386,14 +2386,14 @@ describe('Task-based execution', () => {

return { task };
},
async getTask(_args, extra) {
async getTask(extra) {
const task = await extra.taskStore.getTask(extra.taskId);
if (!task) {
throw new Error(`Task ${extra.taskId} not found`);
}
return task;
},
async getTaskResult(_args, extra) {
async getTaskResult(extra) {
const result = await extra.taskStore.getTaskResult(extra.taskId);
return result as { content: Array<{ type: 'text'; text: string }> };
}
@@ -2462,14 +2462,14 @@ describe('Task-based execution', () => {

return { task };
},
async getTask(_args, extra) {
async getTask(extra) {
const task = await extra.taskStore.getTask(extra.taskId);
if (!task) {
throw new Error(`Task ${extra.taskId} not found`);
}
return task;
},
async getTaskResult(_args, extra) {
async getTaskResult(extra) {
const result = await extra.taskStore.getTaskResult(extra.taskId);
return result as { content: Array<{ type: 'text'; text: string }> };
}
@@ -2539,14 +2539,14 @@ describe('Task-based execution', () => {

return { task };
},
async getTask(_args, extra) {
async getTask(extra) {
const task = await extra.taskStore.getTask(extra.taskId);
if (!task) {
throw new Error(`Task ${extra.taskId} not found`);
}
return task;
},
async getTaskResult(_args, extra) {
async getTaskResult(extra) {
const result = await extra.taskStore.getTaskResult(extra.taskId);
return result as { content: Array<{ type: 'text'; text: string }> };
}
@@ -2620,14 +2620,14 @@ describe('Task-based execution', () => {

return { task };
},
async getTask(_args, extra) {
async getTask(extra) {
const task = await extra.taskStore.getTask(extra.taskId);
if (!task) {
throw new Error(`Task ${extra.taskId} not found`);
}
return task;
},
async getTaskResult(_args, extra) {
async getTaskResult(extra) {
const result = await extra.taskStore.getTaskResult(extra.taskId);
return result as { content: Array<{ type: 'text'; text: string }> };
}
@@ -3105,14 +3105,14 @@ describe('Task-based execution', () => {

return { task };
},
async getTask(_args, extra) {
async getTask(extra) {
const task = await extra.taskStore.getTask(extra.taskId);
if (!task) {
throw new Error(`Task ${extra.taskId} not found`);
}
return task;
},
async getTaskResult(_args, extra) {
async getTaskResult(extra) {
const result = await extra.taskStore.getTaskResult(extra.taskId);
return result as { content: Array<{ type: 'text'; text: string }> };
}
@@ -3373,14 +3373,14 @@ test('should respect server task capabilities', async () => {

return { task };
},
async getTask(_args, extra) {
async getTask(extra) {
const task = await extra.taskStore.getTask(extra.taskId);
if (!task) {
throw new Error(`Task ${extra.taskId} not found`);
}
return task;
},
async getTaskResult(_args, extra) {
async getTaskResult(extra) {
const result = await extra.taskStore.getTaskResult(extra.taskId);
return result as { content: Array<{ type: 'text'; text: string }> };
}