
Commit 1e1eda6

Authored by yungshinlin (yungshinlintw) and Copilot
[Python] Add agent-framework-azure-ai-contentunderstanding package (#4829)
* feat: add agent-framework-azure-contentunderstanding package

  Add Azure Content Understanding integration as a context provider for the Agent Framework. The package automatically analyzes file attachments (documents, images, audio, video) using Azure CU and injects structured results (markdown, fields) into the LLM context.

  Key features:
  - Multi-document session state with status tracking (pending/ready/failed)
  - Configurable timeout with async background fallback for large files
  - Output filtering via AnalysisSection enum
  - Auto-registered list_documents() and get_analyzed_document() tools
  - Supports all CU modalities: documents, images, audio, video
  - Content limits enforcement (pages, file size, duration)
  - Binary stripping of supported files from input messages

  Public API:
  - ContentUnderstandingContextProvider (main class)
  - AnalysisSection (output section selector enum)
  - ContentLimits (configurable limits dataclass)

  Tests: 46 unit tests, 91% coverage, all linting and type checks pass.
* fix: update CU fixtures with real API data, fix test assertions

  - Replace synthetic fixtures with real CU API responses (sanitized)
  - Update test assertions to match real data (Contoso vs CONTOSO, TotalAmount vs InvoiceTotal, field values from real analysis)
  - Add --pre install note in README (preview package)
  - Document unenforced ContentLimits fields (max_pages, duration)

* chore: add connector .gitignore, update uv.lock

* refactor: rename to azure-ai-contentunderstanding, fix CI issues

  Align naming with Azure SDK convention and AF pattern:
  - Directory: azure-contentunderstanding -> azure-ai-contentunderstanding
  - PyPI: agent-framework-azure-contentunderstanding -> agent-framework-azure-ai-contentunderstanding
  - Module: agent_framework_azure_contentunderstanding -> agent_framework_azure_ai_contentunderstanding

  CI fixes:
  - Inline conftest helpers to avoid cross-package import collision in xdist
  - Remove PyPI badge and dead API reference link from README (package not published yet)

* feat: add samples (document_qa, invoice_processing, multimodal_chat)

  - document_qa.py: Single PDF upload, CU context provider, follow-up Q&A
  - invoice_processing.py: Structured field extraction with prebuilt-invoice
  - multimodal_chat.py: Multi-file session with status tracking
  - Add ruff per-file-ignores for samples/ directory
  - Update README with samples section, env vars, and run instructions

* feat: add remaining samples (devui_multimodal_agent, large_doc_file_search)

  - S3: devui_multimodal_agent/ — DevUI web UI with CU-powered file analysis
  - S4: large_doc_file_search.py — CU extraction + OpenAI vector store RAG
  - Update README and samples/README.md with all 5 samples

* feat: add file_search integration for large document RAG

  Add FileSearchConfig — when provided, CU-extracted markdown is automatically uploaded to an OpenAI vector store and a file_search tool is registered on the context. This enables token-efficient RAG retrieval for large documents without users needing to manage vector stores manually.

  - FileSearchConfig dataclass (openai_client, vector_store_name)
  - Auto-create vector store, upload markdown, register file_search tool
  - Auto-cleanup on close()
  - When file_search is enabled, skip full content injection (use RAG instead)
  - Update large_doc_file_search sample to use the integration
  - 4 new tests (50 total, 90% coverage)

* fix: add key-based auth support to all samples

  Follow established AF pattern: check for API key env var first, fall back to AzureCliCredential. Supports AZURE_OPENAI_API_KEY and AZURE_CONTENTUNDERSTANDING_API_KEY environment variables.

* FEATURE(python): add analyzer auto-detection, file_search RAG, and lazy init

  _context_provider.py:
  - Make analyzer_id optional (default None) with auto-detection by media type prefix: audio->audioSearch, video->videoSearch, else documentSearch
  - Add _ensure_initialized() for lazy client creation in before_run()
  - Add FileSearchConfig-based vector store upload
  - Fix: background-completed docs in file_search mode now upload to vector store instead of injecting full markdown into context messages
  - Add _pending_uploads queue for deferred vector store uploads

  devui_file_search_agent/ (new sample):
  - DevUI agent combining CU extraction + OpenAI file_search RAG

  azure_responses_agent (existing sample fix):
  - Add AzureCliCredential support and AZURE_AI_PROJECT_ENDPOINT fallback

  Tests (19 new), Docs updated (AGENTS.md, README.md)

* feat(cu): MIME sniffing, media-aware formatting, unified timeout, vector store expiration

  - Add three-layer MIME detection (fast path → filetype binary sniff → filename fallback) to handle unreliable upstream MIME types (e.g. mp4 sent as application/octet-stream). Adds filetype>=1.2,<2 dependency.
  - Media-aware output formatting: video shows duration/resolution + all fields as JSON; audio promotes Summary as prose; document unchanged.
  - Unified timeout for all media types (removed file_search special-case that waited indefinitely for video/audio). All files use max_wait with background polling fallback.
  - Vector store created with expires_after=1 day as crash safety net.
  - Add 8 MIME sniffing tests (TestMimeSniffing class).

* fix: merge all CU content segments for video/audio analysis

  CU's prebuilt-videoSearch and prebuilt-audioSearch analyzers split long media files into multiple `contents[]` segments. Previously, `_extract_sections()` only read `contents[0]`, causing truncated duration, missing transcript, and incomplete fields for any video/audio longer than a single scene.

  Now iterates all segments and merges:
  - duration: global min(startTimeMs) → max(endTimeMs)
  - markdown: concatenated with `---` separators
  - fields: same-named fields collected into per-segment list
  - metadata (kind, resolution): taken from first segment

  Single-segment results (documents, short audio) are unaffected. Update test fixture to realistic 3-segment video structure and expand assertions to verify multi-segment merging. Add documentation for multi-segment processing and speaker diarization limitation.
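The segment-merge rules above (global duration span, `---`-joined markdown, metadata from the first segment) can be sketched as a small pure function. This is an illustrative sketch, not the package's `_extract_sections()` implementation; the `startTimeMs`/`endTimeMs`/`markdown`/`kind` keys come from the CU response shape quoted in the commit, and the `merge_segments` name is hypothetical.

```python
def merge_segments(segments: list[dict]) -> dict:
    """Merge multiple CU content segments into one result (sketch)."""
    starts = [s["startTimeMs"] for s in segments if s.get("startTimeMs") is not None]
    ends = [s["endTimeMs"] for s in segments if s.get("endTimeMs") is not None]
    return {
        # duration: global min(startTimeMs) -> max(endTimeMs)
        "duration_ms": (max(ends) - min(starts)) if starts and ends else None,
        # markdown: concatenated with '---' separators
        "markdown": "\n---\n".join(s.get("markdown", "") for s in segments),
        # metadata (kind) taken from the first segment
        "kind": segments[0].get("kind") if segments else None,
    }
```

A single-segment input passes through unchanged by construction, matching the "single-segment results are unaffected" note.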
* refactor: improve CU context provider docs and remove ContentLimits

  - Improve class docstring: clarify endpoint (Azure AI Foundry URL with example), credential (AzureKeyCredential vs Entra ID), and analyzer_id (prebuilt/custom with auto-selection behavior and reference links)
  - Add SUPPORTED_MEDIA_TYPES comments explaining MIME-based matching behavior and add missing file types per CU service docs
  - Use namespaced logger to align with other packages
  - Remove ContentLimits and related code/tests
  - Rename DEFAULT_MAX_WAIT to DEFAULT_MAX_WAIT_SECONDS for clarity

* feat: support user-provided vector store in FileSearchConfig

  - Add vector_store_id field to FileSearchConfig (None = auto-create)
  - Track _owns_vector_store to only delete auto-created stores on close()
  - Remove vector_store_name; use internal _DEFAULT_VECTOR_STORE_NAME
  - Add inline comments for private state fields
  - Document output_sections default in docstring
  - Update AGENTS.md, samples, and tests

* fix: remove ContentLimits from README code block

* refactor: create CU client in __init__ instead of __aenter__

  Follow Azure AI Search provider pattern: create the client eagerly in __init__, make __aenter__ a no-op. This ensures __aexit__/close() is always safe to call and eliminates the _ensure_initialized() workaround.

* docs: add file_search param to class docstring

* feat: introduce FileSearchBackend abstraction for cross-client support

  Replace direct OpenAI client usage with FileSearchBackend ABC:
  - OpenAIFileSearchBackend: for OpenAIChatClient (Responses API)
  - FoundryFileSearchBackend: for FoundryChatClient (Azure Foundry)
  - Shared base _OpenAICompatBackend for common vector store CRUD

  FileSearchConfig now takes a backend instead of openai_client. Factory methods from_openai() and from_foundry() for convenience.

  BREAKING: FileSearchConfig(openai_client=...) -> FileSearchConfig.from_openai(...)
* refactor: FileSearchBackend abstraction + caller-owned vector store

* fix: file_search reliability and sample improvements

  - Poll vector store indexing (create_and_poll) to ensure file_search returns results immediately after upload
  - Set status to failed when vector store upload fails
  - Skip get_analyzed_document tool in file_search mode to prevent LLM from bypassing RAG
  - Simplify sample auth: single credential, direct parameters
  - Use from_foundry backend for Foundry project endpoints

* perf: set max_num_results=10 for file_search to reduce token usage

* fix: move import to top of file (E402 lint)

* chore: remove unused imports

* fix: align azure-ai-contentunderstanding with MAF coding conventions

  - Add module-level docstrings to __init__.py and _context_provider.py
  - Use Self return type for __aenter__ (with typing_extensions fallback)
  - Use explicit typed params for __aexit__ signature
  - Add sync TokenCredential to AzureCredentialTypes union
  - Pass AGENT_FRAMEWORK_USER_AGENT to ContentUnderstandingClient
  - Remove unused ContentLimits from public API and tests
  - Fix FileSearchConfig tests to match refactored backend API
  - Fix lifecycle tests to match eager client initialization

* refactor: improve CU context provider API surface and fix CI

  - Refactor _analyze_file to return DocumentEntry instead of mutating dict
  - Remove TokenCredential from AzureCredentialTypes (fixes mypy/pyright CI)
  - Remove OpenAIFileSearchBackend/FoundryFileSearchBackend from public API (internal to FileSearchConfig factory methods)
  - Remove DocumentStatus from public exports (implementation detail)
  - Update file_search comments to reflect backend-agnostic design
  - Add DocumentStatus enum, analysis/upload duration tracking
  - Add combined timeout for CU analysis + vector store upload

* fix: improve file_search samples and move tool guidelines to context provider

  - Delete redundant devui_file_search_agent sample (duplicate of azure_openai variant)
  - Move tool usage guidelines from sample agent instructions into context provider (extend_instructions in step 6, applied automatically for all file_search users)
  - Fix file_search purpose: use from_foundry() for Azure OpenAI (purpose="assistants")
  - Add filename hint in upload instructions for targeted file_search queries
  - Reduce max_num_results from 10 to 3 in both devui samples
  - Simplify agent instructions in both samples (remove tool-specific guidance)

* feat: improve source_id, integration tests, and content assertions

  - Rename DEFAULT_SOURCE_ID to "azure_ai_contentunderstanding" (matches azure_ai_search convention)
  - Improve source_id docstring to describe default value
  - Clarify _detect_and_strip_files docstring (CU-supported files)
  - Add invoice.pdf test fixture from Azure CU samples repo
  - Refactor integration tests to use invoice.pdf directly (assert instead of skip when fixture missing)
  - Add URI content test (Content.from_uri with external URL)
  - Add "CONTOSO LTD." content assertion to all integration tests
  - Use max_wait=None in integration tests (wait until complete)

* feat: reject duplicate filenames, add integration tests and sample comments

  - Reject duplicate document keys in before_run (skip + warn LLM to rename)
  - Update _derive_doc_key docstring to document uniqueness constraint
  - Add unit tests for duplicate filename rejection (cross-turn and same-turn)
  - Add integration test for data URI content (from_uri with base64)
  - Add integration test for background analysis (max_wait timeout + resolve)
  - Add filename recommendation comments to all samples' Content.from_data()

* chore: improve doc key derivation, comments, and README

  - Replace hash-based doc key with uuid4 for anonymous uploads (O(1), no payload traversal)
  - Remove hashlib import (no longer needed)
  - Add File Naming section to README (filename importance, duplicate rejection)
  - Improve inline comments (_derive_doc_key, _extract_binary, URL parsing)

* test: strengthen _format_result assertions with exact expected strings

  - Replace loose 'in' checks with exact 'assert formatted == expected' for both multi-segment and single-segment format tests
  - Add object-type fields (ShippingAddress, Speakers) to test data to cover nested dict/list serialization
  - Add position-based ordering assertions to verify structural correctness (header -> markdown -> fields across segments)

* refactor: move invoice.pdf to shared sample_assets directory

  - Move invoice.pdf from tests/cu/test_data/ to python/samples/shared/sample_assets/ as single source of truth
  - Add INVOICE_PDF_PATH constant in test_integration.py pointing to the shared location
  - Update document_qa.py, invoice_processing.py, large_doc_file_search.py to use invoice.pdf instead of sample.pdf

* refactor: reorganize samples into numbered dirs and simplify auth

  - Move script samples into 01-get-started/ with numbered prefixes (01_document_qa, 02_multimodal_chat, 03_invoice_processing, 04_large_doc_file_search)
  - Move devui samples into 02-devui/ with 01-multimodal_agent and 02-file_search_agent/{azure_openai_backend,foundry_backend}
  - Move invoice.pdf to CU package-local samples/shared/sample_assets/
  - Replace kwargs dicts with direct constructor calls; support both API key (AZURE_OPENAI_API_KEY) and AzureCliCredential
  - Update README sample table with new paths

* fix: resolve CI lint errors (D205, RUF001, E501)

  - Fix D205: single-line docstring summary for _detect_and_strip_files
  - Fix RUF001: replace EN DASH with HYPHEN-MINUS in segment headers
  - Fix E501: wrap long assertion lines in tests
  - Also includes samples reorg and auth simplification

* refactor: overhaul samples — FoundryChatClient, sessions, remove get_analyzed_document

  Samples:
  - Switch all samples from deprecated AzureOpenAIResponsesClient to FoundryChatClient
  - Add 02_multi_turn_session.py showing AgentSession persistence across turns
  - Rewrite 03_multimodal_chat.py with real PDF + audio + video (parallel analysis), per-modality follow-ups, cross-document question, elapsed time, user prompts, and input token counts
  - Renumber: 02->03 multimodal, 03->04 invoice, 04->05 file_search

  Context provider:
  - Remove get_analyzed_document tool -- full content is in conversation history via InMemoryHistoryProvider, no retrieval tool needed
  - Remove follow-up turn instructions about tools
  - Only list_documents tool remains (for status queries)
  - Update README to reflect tool removal

* feat: add 05_background_analysis sample and fix 04 session/max_wait

  - Add 05_background_analysis.py demonstrating non-blocking CU analysis with max_wait=1s, status tracking via list_documents(), and automatic background task resolution on subsequent turns
  - Fix 04_invoice_processing.py: add max_wait=None and AgentSession
  - Rename 05→06 large_doc_file_search
  - Update README sample table

* docs: update README and fix sample 06

  README:
  - Switch Quick Start from AzureOpenAIResponsesClient to FoundryChatClient
  - Add AgentSession to Quick Start example
  - Fix status values: pending -> analyzing/uploading/ready/failed
  - Fix env var: AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME -> AZURE_OPENAI_DEPLOYMENT_NAME
  - Update samples section with new paths, link to samples/README.md
  - Update multi-segment description to reflect per-segment fields

  Sample 06:
  - Fix from_openai -> from_foundry for Azure endpoints
  - Add AgentSession and max_wait=None

* docs: rewrite README — concise format, prerequisites, CU link

* fix: resolve pyright errors in _format_result segment cast

* docs: add numbered section comments and fresh sample output to all samples

  - Add numbered section comments (# 1. ..., # 2. ...) per SAMPLE_GUIDELINES
  - Re-run all 6 samples and update expected output with real results
  - Fix duplicate sample output blocks in 04 and 05
  - Update README code example to use public invoice URL

* feat: add load_settings support for env var configuration

  - Make endpoint optional in constructor — auto-loads from AZURE_CONTENTUNDERSTANDING_ENDPOINT env var via load_settings()
  - Add ContentUnderstandingSettings TypedDict
  - Add env_file_path/env_file_encoding params for .env file support
  - Add 4 unit tests: env var loading, explicit override, missing endpoint error, missing credential error
  - Update README with env var auto-resolution docs
  - Follows framework convention used by all other packages

* docs: polish README — fix duplicate env var, add Next steps, service limits link

* chore: trim invoice fixture from 199K to 33 lines

  Keep only VendorName, InvoiceTotal, DueDate, InvoiceDate, InvoiceId fields and first 500 chars of markdown. Strip spans/source/coordinates. Reduces fixture from 6.6MB to 1.2KB.

* feat: per-file analyzer_id override via additional_properties

  - Read analyzer_id from Content.additional_properties for per-file override
  - Resolution order: per-file > provider-level > auto-detect by media type
  - Update class docstring documenting filename and analyzer_id properties
  - Update sample 04 to demonstrate per-file override (prebuilt-invoice)
  - Add unit test for per-file analyzer override

* Trim PDF test fixture and clarify unique filename requirement

  - Trim analyze_pdf_result.json from 4427 to 23 lines by removing pages, words, lines, paragraphs, sections, spans, and source fields that are not used by any unit test.
  - Add docstring note that filename must be unique within a session; duplicate filenames are rejected and the file will not be analyzed.
* Update python/packages/azure-ai-contentunderstanding/agent_framework_azure_ai_contentunderstanding/_context_provider.py

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update python/packages/azure-ai-contentunderstanding/agent_framework_azure_ai_contentunderstanding/_context_provider.py

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update python/packages/azure-ai-contentunderstanding/samples/02-devui/02-file_search_agent/azure_openai_backend/agent.py

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update python/packages/azure-ai-contentunderstanding/samples/02-devui/01-multimodal_agent/agent.py

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update python/packages/azure-ai-contentunderstanding/samples/01-get-started/06_large_doc_file_search.py

  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Fix AGENTS.md to match implementation; remove unused variable in test helper

  AGENTS.md:
  - Remove _ensure_initialized() reference (client is created in __init__)
  - Fix multi-segment docs: segments kept as list, not merged into fields
  - Remove get_analyzed_document() reference (only list_documents registered)
  - Update sample names to match current directory structure

  test_context_provider.py:
  - Simplify _make_data_uri() — remove unused 'encoded' variable

* Fix premature file_search instruction for background-completed docs

  - Change _resolve_pending_tasks() instruction from 'Use file_search' to 'being indexed' since the upload hasn't completed yet at that point.
  - Add LLM instruction on upload failure in step 1b so the agent can inform the user the document isn't searchable.
* fix: wrap long line in devui agent instructions (E501)

* Fix Copilot review: unused logger, stray code in README, await cancelled tasks

  - _file_search.py: Remove unused logger and logging import
  - 01-multimodal_agent/README.md: Remove accidentally pasted Python script
  - _context_provider.py close(): Await cancelled tasks before closing client to prevent 'Task destroyed but pending' warnings

* Sanitize doc keys and fix duplicate filename re-injection

  - Add _sanitize_doc_key() to strip control characters, collapse whitespace, and cap length at 255 chars — prevents prompt injection via crafted filenames in extend_instructions() calls.
  - Track accepted doc_keys in step 3 so step 5 only injects content for files actually analyzed this turn, not pre-existing duplicates.
  - Soften duplicate upload instruction wording (remove IMPORTANT/caps).

* fix: add type annotation to tasks_to_cancel for pyright

* Move per-session mutable state to state dict for session isolation

  Previously _pending_tasks, _pending_uploads, and _uploaded_file_ids were stored on self, shared across all sessions. This caused cross-session leakage: Session A's background task results could be injected into Session B's context.

  Now these are stored in the per-session state dict. Global copies (_all_pending_tasks, _all_uploaded_file_ids) are kept on self only for best-effort cleanup in close().

  Add 2 new TestSessionIsolation tests verifying that background tasks and resolved content stay within their originating session.

* Remove unused AnalysisSection enum values

  Only MARKDOWN and FIELDS are handled by _extract_sections(). Remove FIELD_GROUNDING, TABLES, PARAGRAPHS, SECTIONS to avoid exposing dead options to users.
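The filename sanitization described in the doc-key commit (strip control characters, collapse whitespace, cap at 255 chars) can be sketched with two regex passes. This is a minimal sketch, not the package's `_sanitize_doc_key()`; the exact ordering and character classes are assumptions.

```python
import re


def sanitize_doc_key(filename: str, max_len: int = 255) -> str:
    """Sanitize a filename used as a document key (sketch).

    Guards extend_instructions() against prompt injection via crafted
    filenames: strip control characters, collapse whitespace, cap length.
    """
    # Strip ASCII control characters (including newlines, which could
    # otherwise inject new instruction lines into the prompt).
    cleaned = re.sub(r"[\x00-\x1f\x7f]", "", filename)
    # Collapse runs of whitespace to a single space and trim the ends.
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    # Cap length so a pathological filename cannot bloat the context.
    return cleaned[:max_len]
```

The length cap matters even after stripping: a megabyte-long filename with no control characters would still survive the first two passes.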
* Recursively flatten object/array field values for cleaner LLM output

  - Use SDK .value property with recursive extraction for object/array fields
  - Object: AmountDue -> {Amount: 610, CurrencyCode: USD} (was raw SDK dict)
  - Array: LineItems -> list of flattened items (was raw SDK list)
  - Update invoice fixture with object/array fields from prebuilt-invoice
  - Add 3 unit tests for object, array, and nested object field extraction

* Preserve sub-field confidence; compare full expected JSON in tests

* Remove incorrect MIME aliases (audio/mp4, video/x-matroska)

* feat: add AnalysisInput, content_range, warnings, and category support

  - Use SDK AnalysisInput model instead of raw body dict for begin_analyze
  - Forward content_range from additional_properties to CU (page/time ranges)
  - Extract CU warnings with code/message/target (ODataV4Format) into output
  - Include content-level category from classifier analyzers
  - Add 5 new tests: warnings, category, content_range forwarding
  - Fix pyright with explicit casts; fix en-dash lint (RUF002)

* fix: falsy-0 bug in duration calc; improve test coverage

  - Fix start_time_ms=0 treated as falsy by 'or' short-circuit, use 'is None' checks instead for duration and segment time extraction
  - Update warnings test to use RAI ContentFiltered codes
  - Enrich warnings extraction to include code/message/target (ODataV4Format)
  - Add multi-segment video category test with per-segment assertions

* refactor: split _context_provider.py into focused modules

  - Extract _constants.py: SUPPORTED_MEDIA_TYPES, MIME_ALIASES, analyzer maps
  - Extract _detection.py: file detection, MIME sniffing, doc key derivation
  - Extract _extraction.py: result extraction, field flattening, LLM formatting
  - _context_provider.py delegates via thin wrappers (793 lines, was 1255)
  - Update test imports to use _constants.py for SUPPORTED_MEDIA_TYPES

* docs: update AGENTS.md with DocumentStatus, FileSearchBackend, and _file_search.py

* refactor: replace AnalysisSection enum with Literal type for simpler DX

  - Remove AnalysisSection(str, Enum) class, replace with Literal["markdown", "fields"] type alias
  - Users can now pass plain strings: output_sections=["markdown"] — no extra import needed
  - AnalysisSection type alias still exported for type annotation use
  - Update all samples, tests, and internal code to use string literals
  - Address PR review feedback (eavanvalkenburg)

* refactor: replace asyncio.Task with continuation tokens for serializable state

  - Replace state["_pending_tasks"] (asyncio.Task — not serializable) with state["_pending_tokens"] (dict of continuation token strings) so the framework can persist session state to disk/storage
  - Resume pending analyses via Azure SDK continuation_token mechanism
  - Fix: resumed pollers have stale cached status (done() always False), use asyncio.wait_for(poller.result()) with 10s min timeout instead
  - Remove _background_poll(), _all_pending_tasks, and task cancellation
  - Address PR review feedback (eavanvalkenburg): state must be serializable

* fix: resolve CI lint (RUF052) and mypy (call-overload) errors

* feat: add structured output (Pydantic model) to invoice processing sample

  - Use response_format=InvoiceResult for schema-constrained LLM output
  - Use output_sections=["fields"] only (no markdown needed for structured output)
  - Add LowConfidenceField model with confidence values
  - Add comments about prebuilt-invoice extensive schema vs simplified model
  - Address PR review feedback (eavanvalkenburg): use structured response

* fix: use FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL env vars in all samples

  Replace AZURE_AI_PROJECT_ENDPOINT → FOUNDRY_PROJECT_ENDPOINT and AZURE_OPENAI_DEPLOYMENT_NAME → FOUNDRY_MODEL across all sample .py and README.md files. Address PR review feedback (eavanvalkenburg).
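The falsy-0 bug fixed above is a classic Python pitfall worth seeing side by side: `or`-based defaulting drops a legitimate `startTimeMs == 0`, while an explicit `is None` check keeps it. The function names below are illustrative, not the package's actual helpers.

```python
def duration_seconds_buggy(start_ms, end_ms):
    # Before the fix: `x or None` collapses a real 0 into None, so a
    # segment starting at t=0 loses its duration entirely.
    if (start_ms or None) is None or (end_ms or None) is None:
        return None
    return (end_ms - start_ms) / 1000


def duration_seconds_fixed(start_ms, end_ms):
    # After the fix: only treat a value as missing when it is actually None,
    # so 0 remains a valid timestamp.
    if start_ms is None or end_ms is None:
        return None
    return (end_ms - start_ms) / 1000
```

Any media file whose first segment starts at 0 ms (i.e. almost all of them) hits the buggy branch, which is why the fix applies to both duration and per-segment time extraction.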
* refactor: remove background_analysis sample, use FoundryChatClient in DevUI

  - Remove 05_background_analysis.py (per reviewer feedback — discuss max_wait design separately from samples)
  - Renumber 06_large_doc_file_search.py → 05_large_doc_file_search.py
  - Replace AzureOpenAIResponsesClient with FoundryChatClient in all DevUI samples
  - Replace client.as_agent() with Agent(client=client, ...) everywhere
  - Add max_wait comments explaining interactive vs batch usage
  - Update README.md and AGENTS.md
  - Address PR review feedback (eavanvalkenburg)

* fix: vector_stores API moved from beta namespace in OpenAI SDK

* docs: add comments about multi-file support and CU service limits in file_search sample

* fix: broken markdown links after sample removal and renumbering

* fix: migrate BaseContextProvider to ContextProvider (non-deprecated)

* fix: Message(text=) -> Message(contents=[]) for API compatibility

* Inline _constants.py into consuming modules

  Remove _constants.py and move constants to where they are used:
  - SUPPORTED_MEDIA_TYPES, MIME_ALIASES → _detection.py
  - MEDIA_TYPE_ANALYZER_MAP, DEFAULT_ANALYZER → _context_provider.py

  Addresses review feedback to reduce file count.

* Mark package as alpha per package management skill

  - Version: 1.0.0b260401 → 1.0.0a260401
  - Classifier: Development Status 4 - Beta → 3 - Alpha
  - Add to PACKAGE_STATUS.md as alpha

  Follows the alpha package checklist from python-package-management skill.

* Replace extend_instructions with extend_messages for status notifications

  Status/error/result notifications now use extend_messages (conversation context) instead of extend_instructions (system prompt). This avoids system prompt bloat and keeps behavioral directives separate from event notifications.

  - 11 extend_instructions calls → extend_messages (role='user')
  - 1 extend_instructions retained: tool usage guidelines (behavioral)
  - 6 test assertions updated to check context_messages

  All 84 unit tests + 5 live integration tests pass.
* Fix lint: E402 import order, ISC004 implicit string concatenation

  - Move constants after all imports to fix E402
  - Wrap multi-line strings in parentheses inside contents=[] to fix ISC004

* Fix lint: remove unused json import in invoice sample

* Fix CI: apply ruff format + fix E501 line length after reformatting

  ruff format expands Message() calls to multi-line, pushing string indentation deeper. Break long strings to fit within 120 char limit after formatting. Also removes unused json import in sample.

* Address review feedback: keyword-only args, accept pre-built client, remove wrappers

  - All __init__ args now keyword-only (matches FoundryChatClient pattern)
  - New 'client' param accepts pre-built ContentUnderstandingClient
  - core dep bound: >=1.0.0rc5 → >=1.0.0,<2
  - Self import moved after local imports
  - Removed 9 static method wrappers; callsites use module functions directly
  - Tests updated to import derive_doc_key and format_result directly

* fix: remove duplicate ContentUnderstandingClient instantiation

  The client was being created twice — once inside the if/else block and again unconditionally after it. The second instantiation overwrote the pre-built client path and failed type checking when credential was None.

* rename: azure-ai-contentunderstanding → azure-contentunderstanding

  - Package: agent-framework-azure-ai-contentunderstanding → agent-framework-azure-contentunderstanding
  - Module: agent_framework_azure_ai_contentunderstanding → agent_framework_azure_contentunderstanding
  - Directory: packages/azure-ai-contentunderstanding → packages/azure-contentunderstanding

  Per agreement with PM and MAF team to drop 'AI' from the package name.
* feat: add ContentUnderstanding re-export to agent_framework.foundry namespace

  Enables: from agent_framework.foundry import ContentUnderstandingContextProvider
  Exports: ContentUnderstandingContextProvider, FileSearchConfig, FileSearchBackend, AnalysisSection, DocumentStatus
  Updates all samples and README to use the foundry namespace import.

* fix: add missing copyright headers to standalone sample scripts

* chore: remove .vscode/settings.json and add to .gitignore

* refactor: reuse FoundryChatClient.client for vector store ops in file_search sample

  Address review feedback from TaoChenOSU:
  - 05_large_doc_file_search.py: use client.client instead of manually constructing AsyncAzureOpenAI; remove openai dependency
  - azure_openai_backend/agent.py: import reorder only (AIProjectClient kept — required for sync vector store creation in DevUI)

* fix: skip closing client when caller passes pre-built client

  When a ContentUnderstandingClient is passed via client=, the caller owns its lifecycle. Added _owns_client flag so close() only closes the client when we created it internally.

---------

Co-authored-by: yungshinlin <yungshin@msn.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Parent: 3a463b8 · Commit: 1e1eda6

42 files changed

Lines changed: 6878 additions & 0 deletions


python/AGENTS.md

Lines changed: 1 addition & 0 deletions
@@ -69,6 +69,7 @@ python/

 ### Azure Integrations
 - [foundry](packages/foundry/README.md) - Microsoft Foundry chat, agent, memory, and embedding integrations
+- [azure-contentunderstanding](packages/azure-contentunderstanding/AGENTS.md) - Azure Content Understanding context provider
 - [azure-ai-search](packages/azure-ai-search/AGENTS.md) - Azure AI Search RAG
 - [azure-cosmos](packages/azure-cosmos/AGENTS.md) - Azure Cosmos DB-backed history provider
 - [azurefunctions](packages/azurefunctions/AGENTS.md) - Azure Functions hosting

python/PACKAGE_STATUS.md

Lines changed: 1 addition & 0 deletions
@@ -18,6 +18,7 @@ Status is grouped into these buckets:
 | `agent-framework-a2a` | `python/packages/a2a` | `beta` |
 | `agent-framework-ag-ui` | `python/packages/ag-ui` | `beta` |
 | `agent-framework-anthropic` | `python/packages/anthropic` | `beta` |
+| `agent-framework-azure-contentunderstanding` | `python/packages/azure-contentunderstanding` | `alpha` |
 | `agent-framework-azure-ai-search` | `python/packages/azure-ai-search` | `beta` |
 | `agent-framework-azure-cosmos` | `python/packages/azure-cosmos` | `beta` |
 | `agent-framework-azurefunctions` | `python/packages/azurefunctions` | `beta` |
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Local-only files (not committed)
+_local_only/
+*_local_only*
# AGENTS.md — azure-contentunderstanding

## Package Overview

`agent-framework-azure-contentunderstanding` integrates Azure Content Understanding (CU)
into the Agent Framework as a context provider. It automatically analyzes file attachments
(documents, images, audio, video) and injects structured results into the LLM context.

## Public API

| Symbol | Type | Description |
|--------|------|-------------|
| `ContentUnderstandingContextProvider` | class | Main context provider — extends `ContextProvider` |
| `AnalysisSection` | enum | Output section selector (MARKDOWN, FIELDS, etc.) |
| `DocumentStatus` | enum | Document lifecycle state (ANALYZING, UPLOADING, READY, FAILED) |
| `FileSearchBackend` | ABC | Abstract interface for vector store file operations |
| `FileSearchConfig` | dataclass | Configuration for CU + vector store RAG mode |
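The two enums in the table can be pictured with a minimal sketch. The member names come from the table above; the string values and the `str` mixin base are illustrative assumptions, not the package's actual definitions:

```python
from enum import Enum


class AnalysisSection(str, Enum):
    """Selects which CU output sections are injected into context (values assumed)."""
    MARKDOWN = "markdown"
    FIELDS = "fields"


class DocumentStatus(str, Enum):
    """Lifecycle state of a tracked document (values assumed)."""
    UPLOADING = "uploading"
    ANALYZING = "analyzing"
    READY = "ready"
    FAILED = "failed"


# A caller might filter session state to only the documents that are safe to cite:
documents = [{"name": "a.pdf", "status": DocumentStatus.READY}]
ready = [d for d in documents if d["status"] is DocumentStatus.READY]
```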

## Architecture

- **`_context_provider.py`** — Main provider implementation. Overrides `before_run()` to detect
  file attachments, call the CU API, manage session state with multi-document tracking,
  and auto-register retrieval tools for follow-up turns.
- **Analyzer auto-detection** — When `analyzer_id=None` (default), `_resolve_analyzer_id()`
  selects the CU analyzer based on the media type prefix: `audio/` → `prebuilt-audioSearch`,
  `video/` → `prebuilt-videoSearch`, everything else → `prebuilt-documentSearch`.
- **Multi-segment output** — CU splits long video/audio into multiple scene segments
  (each a separate `contents[]` entry with its own `startTimeMs`, `endTimeMs`, `markdown`,
  and `fields`). `_extract_sections()` produces:
  - `segments`: a list of per-segment dicts, each with `markdown`, `fields`, `start_time_s`, `end_time_s`
  - `markdown`: concatenated at the top level with `---` separators (for file_search uploads)
  - `duration_seconds`: computed from the global `min(startTimeMs)` → `max(endTimeMs)` span
  - Metadata (`kind`, `resolution`): taken from the first segment
- **Speaker diarization (not identification)** — CU transcripts label speakers as
  `<Speaker 1>`, `<Speaker 2>`, etc. CU does **not** identify speakers by name.
- **file_search RAG** — When `FileSearchConfig` is provided, CU-extracted markdown is
  uploaded to an OpenAI vector store and a `file_search` tool is registered on the context
  instead of injecting the full document content. This enables token-efficient retrieval
  for large documents.
- **`_models.py`**`AnalysisSection` enum, `DocumentStatus` enum, `DocumentEntry` TypedDict,
  `FileSearchConfig` dataclass.
- **`_file_search.py`**`FileSearchBackend` ABC, `OpenAIFileSearchBackend`,
  `FoundryFileSearchBackend`.
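The multi-segment merge described above can be sketched as a standalone function. This is an illustrative approximation of `_extract_sections()`: the CU field names (`startTimeMs`, `endTimeMs`, `markdown`, `fields`, `kind`, `resolution`) follow the description, but the exact helper signature and defaults are assumptions:

```python
def extract_sections(contents: list[dict]) -> dict:
    """Merge CU per-segment results into one document entry (illustrative sketch)."""
    segments = [
        {
            "markdown": c.get("markdown", ""),
            "fields": c.get("fields", {}),
            "start_time_s": c.get("startTimeMs", 0) / 1000,
            "end_time_s": c.get("endTimeMs", 0) / 1000,
        }
        for c in contents
    ]
    first = contents[0] if contents else {}
    return {
        "segments": segments,
        # Top-level markdown is concatenated with --- separators (for file_search uploads).
        "markdown": "\n\n---\n\n".join(s["markdown"] for s in segments),
        # Duration spans the global min(startTimeMs) → max(endTimeMs).
        "duration_seconds": (
            max(c.get("endTimeMs", 0) for c in contents)
            - min(c.get("startTimeMs", 0) for c in contents)
        ) / 1000
        if contents
        else 0.0,
        # kind/resolution metadata comes from the first segment.
        "kind": first.get("kind"),
        "resolution": first.get("resolution"),
    }
```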

## Key Patterns

- Follows the Azure AI Search context provider pattern (same lifecycle, config style).
- Uses a provider-scoped `state` dict for multi-document tracking across turns.
- Auto-registers a `list_documents()` tool via `context.extend_tools()`.
- Configurable timeout (`max_wait`) with an `asyncio.create_task()` background fallback.
- Strips supported binary attachments from `input_messages` to prevent LLM API errors.
- An explicit `analyzer_id` always overrides auto-detection (user preference wins).
- Vector store resources are cleaned up in `close()` / `__aexit__`.

## Samples

| Sample | Description |
|--------|-------------|
| `01_document_qa.py` | Upload a PDF via URL, ask questions about it |
| `02_multi_turn_session.py` | AgentSession persistence across turns |
| `03_multimodal_chat.py` | PDF + audio + video parallel analysis |
| `04_invoice_processing.py` | Structured field extraction with the `prebuilt-invoice` analyzer |
| `05_large_doc_file_search.py` | CU extraction + OpenAI vector store RAG |
| `02-devui/01-multimodal_agent/` | DevUI web UI for CU-powered chat |
| `02-devui/02-file_search_agent/` | DevUI web UI combining CU + file_search RAG |

## Running Tests

```bash
uv run poe test -P azure-contentunderstanding
```
MIT License

Copyright (c) Microsoft Corporation.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Get Started with Azure Content Understanding in Microsoft Agent Framework

Install this package via pip (the `--pre` flag is required while the package is in preview):

```bash
pip install agent-framework-azure-contentunderstanding --pre
```

## Azure Content Understanding Integration

### Prerequisites

Before using this package, you need an Azure Content Understanding resource:

1. An active **Azure subscription** ([create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account))
2. A **Microsoft Foundry resource** created in a [supported region](https://learn.microsoft.com/azure/ai-services/content-understanding/language-region-support)
3. **Default model deployments** configured for your resource (GPT-4.1, GPT-4.1-mini, text-embedding-3-large)

Follow the [prerequisites section](https://learn.microsoft.com/azure/ai-services/content-understanding/quickstart/use-rest-api?tabs=portal%2Cdocument&pivots=programming-language-rest#prerequisites) of the Azure Content Understanding quickstart for setup instructions.

### Introduction

The Azure Content Understanding integration provides a context provider that automatically analyzes file attachments (documents, images, audio, video) using [Azure Content Understanding](https://learn.microsoft.com/azure/ai-services/content-understanding/) and injects structured results into the LLM context.

- **Document & image analysis**: State-of-the-art OCR with markdown extraction, table preservation, and structured field extraction — handles scanned PDFs, handwritten content, and complex layouts
- **Audio & video analysis**: Transcription, speaker diarization, and per-segment summaries
- **Background processing**: Configurable timeout with an async background fallback for large files
- **file_search integration**: Optional vector store upload for token-efficient RAG on large documents

> Learn more about Azure Content Understanding capabilities at [https://learn.microsoft.com/azure/ai-services/content-understanding/](https://learn.microsoft.com/azure/ai-services/content-understanding/)

### Basic Usage Example

See the [samples directory](samples/), which demonstrates:

- Single PDF upload and Q&A ([01_document_qa](samples/01-get-started/01_document_qa.py))
- Multi-turn sessions with cached results ([02_multi_turn_session](samples/01-get-started/02_multi_turn_session.py))
- PDF + audio + video parallel analysis ([03_multimodal_chat](samples/01-get-started/03_multimodal_chat.py))
- Structured field extraction with prebuilt-invoice ([04_invoice_processing](samples/01-get-started/04_invoice_processing.py))
- CU extraction + OpenAI vector store RAG ([05_large_doc_file_search](samples/01-get-started/05_large_doc_file_search.py))
- Interactive web UI with DevUI ([02-devui](samples/02-devui/))
```python
import asyncio

from agent_framework import Agent, AgentSession, Content, Message
from agent_framework.foundry import FoundryChatClient
from agent_framework_azure_contentunderstanding import ContentUnderstandingContextProvider
from azure.identity import AzureCliCredential

credential = AzureCliCredential()

cu = ContentUnderstandingContextProvider(
    endpoint="https://my-resource.cognitiveservices.azure.com/",
    credential=credential,
    max_wait=None,  # block until CU extraction completes before sending to the LLM
)

client = FoundryChatClient(
    project_endpoint="https://your-project.services.ai.azure.com",
    model="gpt-4.1",
    credential=credential,
)


async def main():
    async with cu:
        agent = Agent(
            client=client,
            name="DocumentQA",
            instructions="You are a helpful document analyst.",
            context_providers=[cu],
        )
        session = AgentSession()

        response = await agent.run(
            Message(role="user", contents=[
                Content.from_text("What's on this invoice?"),
                Content.from_uri(
                    "https://raw.githubusercontent.com/Azure-Samples/"
                    "azure-ai-content-understanding-assets/main/document/invoice.pdf",
                    media_type="application/pdf",
                    additional_properties={"filename": "invoice.pdf"},
                ),
            ]),
            session=session,
        )
        print(response.text)


asyncio.run(main())
```

### Supported File Types

| Category | Types |
|----------|-------|
| Documents | PDF, DOCX, XLSX, PPTX, HTML, TXT, Markdown |
| Images | JPEG, PNG, TIFF, BMP |
| Audio | WAV, MP3, M4A, FLAC, OGG |
| Video | MP4, MOV, AVI, WebM |

For the complete list of supported file types and size limits, see [Azure Content Understanding service limits](https://learn.microsoft.com/azure/ai-services/content-understanding/service-limits#input-file-limits).

### Environment Variables

The provider supports automatic endpoint resolution from environment variables.
When `endpoint` is not passed to the constructor, it is loaded from
`AZURE_CONTENTUNDERSTANDING_ENDPOINT`:

```python
# Endpoint auto-loaded from AZURE_CONTENTUNDERSTANDING_ENDPOINT env var
cu = ContentUnderstandingContextProvider(credential=credential)
```

Set these in your shell or in a `.env` file:

```bash
AZURE_CONTENTUNDERSTANDING_ENDPOINT=https://your-cu-resource.cognitiveservices.azure.com/
AZURE_AI_PROJECT_ENDPOINT=https://your-project.services.ai.azure.com
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4.1
```

You also need to be logged in via `az login` (required for `AzureCliCredential`).

### Next steps

- Explore the [samples directory](samples/) for complete code examples
- Read the [Azure Content Understanding documentation](https://learn.microsoft.com/azure/ai-services/content-understanding/) for detailed service information
- Learn more about the [Microsoft Agent Framework](https://aka.ms/agent-framework)
# Copyright (c) Microsoft. All rights reserved.

"""Azure Content Understanding integration for Microsoft Agent Framework.

Provides a context provider that analyzes file attachments (documents, images,
audio, video) using Azure Content Understanding and injects structured results
into the LLM context.
"""

import importlib.metadata

from ._context_provider import ContentUnderstandingContextProvider
from ._file_search import FileSearchBackend
from ._models import AnalysisSection, DocumentStatus, FileSearchConfig

try:
    __version__ = importlib.metadata.version(__name__)
except importlib.metadata.PackageNotFoundError:
    __version__ = "0.0.0"

__all__ = [
    "AnalysisSection",
    "ContentUnderstandingContextProvider",
    "DocumentStatus",
    "FileSearchBackend",
    "FileSearchConfig",
    "__version__",
]
