1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -151,6 +151,7 @@ Breaking changes in this release:
- 👷🏻 Added `npm run build-browser` script for building test harness package only, in PR [#5667](https://github.com/microsoft/BotFramework-WebChat/pull/5667), by [@compulim](https://github.com/compulim)
- Added pull-based capabilities system for dynamically discovering adapter capabilities at runtime, in PR [#5679](https://github.com/microsoft/BotFramework-WebChat/pull/5679), by [@pranavjoshi001](https://github.com/pranavjoshi001)
- Added Speech-to-Speech (S2S) support for real-time voice conversations, in PR [#5654](https://github.com/microsoft/BotFramework-WebChat/pull/5654), by [@pranavjoshi](https://github.com/pranavjoshi001)
- Added core mute/unmute functionality for speech-to-speech via `useRecorder` hook (silent chunks keep server connection alive), in PR [#5688](https://github.com/microsoft/BotFramework-WebChat/pull/5688), by [@pranavjoshi](https://github.com/pranavjoshi001)
Copilot AI commented on Mar 6, 2026:
Changelog entry says mute/unmute was added "via useRecorder hook", but useRecorder is an internal/private implementation detail under providers/SpeechToSpeech/private. For consumers, the new public surface appears to be useVoiceRecordingMuted (and/or the muteVoiceRecording/unmuteVoiceRecording actions). Consider rewording this entry to reference the public API to avoid confusing integrators.

Suggested change
- Added core mute/unmute functionality for speech-to-speech via `useRecorder` hook (silent chunks keep server connection alive), in PR [#5688](https://github.com/microsoft/BotFramework-WebChat/pull/5688), by [@pranavjoshi](https://github.com/pranavjoshi001)
- Added core mute/unmute functionality for speech-to-speech via `useVoiceRecordingMuted` hook and `muteVoiceRecording` / `unmuteVoiceRecording` actions (silent chunks keep server connection alive), in PR [#5688](https://github.com/microsoft/BotFramework-WebChat/pull/5688), by [@pranavjoshi](https://github.com/pranavjoshi001)

### Changed

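The action creators named in the suggestion above are wired into the Composer below. For store-level integrators, usage would look roughly like this sketch — `createStore` is standard Web Chat API, but the import path for the action creators is an assumption:

```ts
import { createStore } from 'botframework-webchat';
// Assumption: the new action creators are re-exported from the core package,
// mirroring how Composer.tsx imports them below.
import { muteVoiceRecording, unmuteVoiceRecording } from 'botframework-webchat-core';

const store = createStore();

// Mute: microphone input stops, but silent chunks keep the
// speech-to-speech connection alive.
store.dispatch(muteVoiceRecording());

// Later: resume listening on the same session.
store.dispatch(unmuteVoiceRecording());
```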
2 changes: 2 additions & 0 deletions packages/api/src/boot/hook.ts
@@ -39,6 +39,7 @@ export {
useMarkActivityAsSpoken,
useMarkActivityKeyAsRead,
useMarkAllAsAcknowledged,
useMuteVoice,
useNotifications,
usePerformCardAction,
usePonyfill,
@@ -74,6 +75,7 @@ export {
useTrackException,
useTrackTiming,
useUIState,
useUnmuteVoice,
useUserID,
useUsername,
useVoiceSelector,
14 changes: 14 additions & 0 deletions packages/api/src/hooks/Composer.tsx
@@ -13,6 +13,7 @@ import {
dismissNotification,
emitTypingIndicator,
markActivity,
muteVoiceRecording,
postActivity,
sendEvent,
sendFiles,
@@ -35,6 +36,7 @@ import {
stopSpeakingActivity,
stopVoiceRecording,
submitSendBox,
unmuteVoiceRecording,
type DirectLineJSBotConnection,
type GlobalScopePonyfill,
type OneOrMany,
@@ -381,6 +383,14 @@ const ComposerCore = ({
dispatch(stopVoiceRecording());
}, [dispatch, voiceHandlers]);

const muteVoice = useCallback(() => {
dispatch(muteVoiceRecording());
}, [dispatch]);

const unmuteVoice = useCallback(() => {
dispatch(unmuteVoiceRecording());
}, [dispatch]);

const patchedLocalizedStrings = useMemo(
() => mergeStringsOverrides(getAllLocalizedStrings()[normalizeLanguage(locale)], locale, overrideLocalizedStrings),
[locale, overrideLocalizedStrings]
@@ -563,6 +573,7 @@
language: locale,
localizedGlobalizeState: [localizedGlobalize],
localizedStrings: patchedLocalizedStrings,
muteVoice,
onTelemetry,
renderMarkdown,
scrollToEndButtonRenderer,
@@ -575,6 +586,7 @@
trackDimension,
typingIndicatorRenderer: patchedTypingIndicatorRenderer,
uiState,
unmuteVoice,
userID,
username
}),
@@ -585,6 +597,7 @@
hoistedDispatchers,
locale,
localizedGlobalize,
muteVoice,
onTelemetry,
patchedActivityStatusRenderer,
patchedAttachmentForScreenReaderRenderer,
@@ -604,6 +617,7 @@
telemetryDimensionsRef,
trackDimension,
uiState,
unmuteVoice,
userID,
username
]
4 changes: 4 additions & 0 deletions packages/api/src/hooks/index.ts
@@ -37,6 +37,7 @@ import useLocalizer from './useLocalizer';
import useMarkActivityAsSpoken from './useMarkActivityAsSpoken';
import useMarkActivityKeyAsRead from './useMarkActivityKeyAsRead';
import useMarkAllAsAcknowledged from './useMarkAllAsAcknowledged';
import useMuteVoice from './useMuteVoice';
import useNotifications from './useNotifications';
import usePerformCardAction from './usePerformCardAction';
import usePonyfill from './usePonyfill';
@@ -71,6 +72,7 @@ import useTrackEvent from './useTrackEvent';
import useTrackException from './useTrackException';
import useTrackTiming from './useTrackTiming';
import useUIState from './useUIState';
import useUnmuteVoice from './useUnmuteVoice';
import useUserID from './useUserID';
import useUsername from './useUsername';
import useVoiceSelector from './useVoiceSelector';
@@ -119,6 +121,7 @@ export {
useMarkActivityAsSpoken,
useMarkActivityKeyAsRead,
useMarkAllAsAcknowledged,
useMuteVoice,
useNotifications,
usePerformCardAction,
usePonyfill,
@@ -153,6 +156,7 @@ export {
useTrackException,
useTrackTiming,
useUIState,
useUnmuteVoice,
useUserID,
useUsername,
useVoiceSelector,
2 changes: 2 additions & 0 deletions packages/api/src/hooks/internal/WebChatAPIContext.ts
@@ -44,6 +44,7 @@ export type WebChatAPIContextType = {
localizedGlobalizeState?: PrecompiledGlobalize[];
localizedStrings?: { [language: string]: LocalizedStrings };
markActivity?: (activity: { id: string }, name: string, value?: any) => void;
muteVoice?: () => void;
onCardAction?: PerformCardAction;
onTelemetry?: (event: TelemetryMeasurementEvent) => void;
postActivity?: (activity: WebChatActivity) => Observable<string>;
@@ -81,6 +82,7 @@
trackDimension?: (name: string, data: any) => void;
typingIndicatorRenderer?: any; // TODO
uiState: 'blueprint' | 'disabled' | undefined;
unmuteVoice?: () => void;
userID?: string;
username?: string;
};
9 changes: 9 additions & 0 deletions packages/api/src/hooks/useMuteVoice.ts
@@ -0,0 +1,9 @@
import useWebChatAPIContext from './internal/useWebChatAPIContext';

/**
* Hook to mute voice mode (stops microphone input but keeps connection alive with silent chunks).
* The session remains active and can be unmuted to resume listening.
*/
export default function useMuteVoice(): () => void {
return useWebChatAPIContext().muteVoice;
}
9 changes: 9 additions & 0 deletions packages/api/src/hooks/useUnmuteVoice.ts
@@ -0,0 +1,9 @@
import useWebChatAPIContext from './internal/useWebChatAPIContext';

/**
* Hook to unmute voice mode (resumes microphone input after muting).
* This reactivates speech-to-speech listening.
*/
export default function useUnmuteVoice(): () => void {
return useWebChatAPIContext().unmuteVoice;
}
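Together, the two new hooks give integrators programmatic mute control. A minimal usage sketch — the `MuteButton` component, its local `muted` state, and the `hooks` entry point re-exporting the new hooks are illustrative assumptions, not part of this PR:

```tsx
import React, { useCallback, useState } from 'react';
import { hooks } from 'botframework-webchat'; // assumed to re-export the new hooks

const { useMuteVoice, useUnmuteVoice } = hooks;

// Hypothetical mute toggle; a real integration might read muted state from
// Web Chat (e.g. the useVoiceRecordingMuted hook mentioned in the review
// comment above) instead of tracking it locally.
function MuteButton() {
  const muteVoice = useMuteVoice();
  const unmuteVoice = useUnmuteVoice();
  const [muted, setMuted] = useState(false);

  const handleClick = useCallback(() => {
    (muted ? unmuteVoice : muteVoice)();
    setMuted(value => !value);
  }, [muted, muteVoice, unmuteVoice]);

  return (
    <button onClick={handleClick} type="button">
      {muted ? 'Unmute' : 'Mute'}
    </button>
  );
}
```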
VoiceRecorderBridge (file path not captured)
@@ -11,6 +11,7 @@ export function VoiceRecorderBridge(): null {
const [voiceState] = useVoiceState();
const postVoiceActivity = usePostVoiceActivity();

const muted = voiceState === 'muted';
// Derive recording state from voiceState - recording is active when not idle
const recording = voiceState !== 'idle';

@@ -29,7 +30,13 @@
[postVoiceActivity]
);

const { record } = useRecorder(handleAudioChunk);
const { record, mute } = useRecorder(handleAudioChunk);

useEffect(() => {
if (muted) {
// mute() returns an unmute function; React runs it as the effect cleanup,
// so recording resumes automatically when `muted` flips back to false.
return mute();
}
}, [mute, muted]);

useEffect(() => {
if (recording) {
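The `muted` effect above leans on `mute()` returning its own undo function, so React's effect cleanup unmutes when `muted` flips back to false. Judging from the tests below, the contract inside `useRecorder` is roughly the following sketch; the `MUTE`/`UNMUTE` commands, `track.stop()`, and the disconnect/`getUserMedia` sequence come from the tests, while `makeMute` and its parameters are hypothetical names:

```ts
// Sketch of the mute/unmute contract implied by VoiceRecorderBridge and the
// useRecorder tests. makeMute and all parameter names are hypothetical.
function makeMute(
  workletPort: MessagePort,
  stream: MediaStream,
  sourceNode: MediaStreamAudioSourceNode,
  restartStream: () => Promise<void> // hypothetical: re-runs getUserMedia and reconnects the graph
): () => () => void {
  return function mute() {
    workletPort.postMessage({ command: 'MUTE' }); // worklet switches to silent output
    stream.getTracks().forEach(track => track.stop()); // release the mic so the browser indicator turns off
    sourceNode.disconnect();

    // Returning unmute is what lets VoiceRecorderBridge use it as an effect cleanup.
    return function unmute() {
      workletPort.postMessage({ command: 'UNMUTE' });
      void restartStream();
    };
  };
}
```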
useRecorder tests (file path not captured)
@@ -36,13 +36,16 @@ const mockWorkletNode = {
port: mockWorkletPort
};

const mockSourceNode = {
connect: jest.fn(),
disconnect: jest.fn()
};

const mockAudioContext = {
audioWorklet: {
addModule: jest.fn().mockResolvedValue(undefined)
},
createMediaStreamSource: jest.fn(() => ({
connect: jest.fn()
})),
createMediaStreamSource: jest.fn(() => mockSourceNode),
destination: {},
resume: jest.fn().mockResolvedValue(undefined),
state: 'running'
@@ -218,4 +221,74 @@ describe('useRecorder', () => {
});
});
});

test('should return mute function', () => {
render(<HookApp onAudioChunk={onAudioChunk} />);
expect(typeof hookData?.mute).toBe('function');
});

test('should send MUTE command and stop media stream when mute is called', async () => {
render(<HookApp onAudioChunk={onAudioChunk} />);

// Start recording first
act(() => {
hookData?.record();
});

await waitFor(() => {
expect(mockWorkletPort.postMessage).toHaveBeenCalledWith({ command: 'START' });
});

// Clear mocks to isolate mute behavior
mockWorkletPort.postMessage.mockClear();
mockTrack.stop.mockClear();
mockSourceNode.disconnect.mockClear();

// Call mute
act(() => {
hookData?.mute();
});

// Should send MUTE command to worklet
expect(mockWorkletPort.postMessage).toHaveBeenCalledWith({ command: 'MUTE' });
// Should stop media stream tracks (mic indicator OFF)
expect(mockTrack.stop).toHaveBeenCalledTimes(1);
// Should disconnect source node
expect(mockSourceNode.disconnect).toHaveBeenCalledTimes(1);
});

test('should return unmute function from mute() that sends UNMUTE and restarts media stream', async () => {
render(<HookApp onAudioChunk={onAudioChunk} />);

// Start recording first
act(() => {
hookData?.record();
});

await waitFor(() => {
expect(mockWorkletPort.postMessage).toHaveBeenCalledWith({ command: 'START' });
});

// Call mute and get unmute function
let unmute: (() => void) | undefined;
act(() => {
unmute = hookData?.mute();
});

// Clear mocks to isolate unmute behavior
mockWorkletPort.postMessage.mockClear();
mockMediaDevices.getUserMedia.mockClear();

// Call unmute
act(() => {
unmute?.();
});

// Should send UNMUTE command to worklet
expect(mockWorkletPort.postMessage).toHaveBeenCalledWith({ command: 'UNMUTE' });
// Should restart media stream
await waitFor(() => {
expect(mockMediaDevices.getUserMedia).toHaveBeenCalledTimes(1);
});
});
});
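The "silent chunks keep server connection alive" behavior from the changelog lives in the audio worklet, which is outside this diff. A hypothetical processor consistent with the `START`/`MUTE`/`UNMUTE` commands exercised by these tests might look like the sketch below; the class name, registration name, and message shapes are all assumptions:

```ts
// Hypothetical AudioWorkletProcessor: only the START/MUTE/UNMUTE command
// names come from this PR's tests; everything else is illustrative.
class RecorderProcessor extends AudioWorkletProcessor {
  private muted = false;
  private started = false;

  constructor() {
    super();
    this.port.onmessage = ({ data }: MessageEvent<{ command: string }>) => {
      if (data.command === 'START') this.started = true;
      else if (data.command === 'MUTE') this.muted = true;
      else if (data.command === 'UNMUTE') this.muted = false;
    };
  }

  process(inputs: Float32Array[][]): boolean {
    if (this.started) {
      const input = inputs[0]?.[0];
      // While muted the mic tracks are stopped, so there is no input; emit
      // zero-filled buffers so the server-side stream never goes quiet.
      const chunk = this.muted || !input ? new Float32Array(128) : input.slice();
      this.port.postMessage({ chunk });
    }
    return true; // keep the processor alive across mute/unmute cycles
  }
}

registerProcessor('recorder-processor', RecorderProcessor);
```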