Commit 1e1eda6
[Python] Add agent-framework-azure-ai-contentunderstanding package (#4829)
* feat: add agent-framework-azure-contentunderstanding package
Add Azure Content Understanding integration as a context provider for the
Agent Framework. The package automatically analyzes file attachments
(documents, images, audio, video) using Azure CU and injects structured
results (markdown, fields) into the LLM context.
Key features:
- Multi-document session state with status tracking (pending/ready/failed)
- Configurable timeout with async background fallback for large files
- Output filtering via AnalysisSection enum
- Auto-registered list_documents() and get_analyzed_document() tools
- Supports all CU modalities: documents, images, audio, video
- Content limits enforcement (pages, file size, duration)
- Binary stripping of supported files from input messages
Public API:
- ContentUnderstandingContextProvider (main class)
- AnalysisSection (output section selector enum)
- ContentLimits (configurable limits dataclass)
Tests: 46 unit tests, 91% coverage, all linting and type checks pass.
* fix: update CU fixtures with real API data, fix test assertions
- Replace synthetic fixtures with real CU API responses (sanitized)
- Update test assertions to match real data (Contoso vs CONTOSO,
TotalAmount vs InvoiceTotal, field values from real analysis)
- Add --pre install note in README (preview package)
- Document unenforced ContentLimits fields (max_pages, duration)
* chore: add connector .gitignore, update uv.lock
* refactor: rename to azure-ai-contentunderstanding, fix CI issues
Align naming with Azure SDK convention and AF pattern:
- Directory: azure-contentunderstanding -> azure-ai-contentunderstanding
- PyPI: agent-framework-azure-contentunderstanding -> agent-framework-azure-ai-contentunderstanding
- Module: agent_framework_azure_contentunderstanding -> agent_framework_azure_ai_contentunderstanding
CI fixes:
- Inline conftest helpers to avoid cross-package import collision in xdist
- Remove PyPI badge and dead API reference link from README (package not published yet)
* feat: add samples (document_qa, invoice_processing, multimodal_chat)
- document_qa.py: Single PDF upload, CU context provider, follow-up Q&A
- invoice_processing.py: Structured field extraction with prebuilt-invoice
- multimodal_chat.py: Multi-file session with status tracking
- Add ruff per-file-ignores for samples/ directory
- Update README with samples section, env vars, and run instructions
* feat: add remaining samples (devui_multimodal_agent, large_doc_file_search)
- S3: devui_multimodal_agent/ — DevUI web UI with CU-powered file analysis
- S4: large_doc_file_search.py — CU extraction + OpenAI vector store RAG
- Update README and samples/README.md with all 5 samples
* feat: add file_search integration for large document RAG
Add FileSearchConfig — when provided, CU-extracted markdown is automatically
uploaded to an OpenAI vector store and a file_search tool is registered on
the context. This enables token-efficient RAG retrieval for large documents
without users needing to manage vector stores manually.
- FileSearchConfig dataclass (openai_client, vector_store_name)
- Auto-create vector store, upload markdown, register file_search tool
- Auto-cleanup on close()
- When file_search is enabled, skip full content injection (use RAG instead)
- Update large_doc_file_search sample to use the integration
- 4 new tests (50 total, 90% coverage)
* fix: add key-based auth support to all samples
Follow established AF pattern: check for API key env var first,
fall back to AzureCliCredential. Supports AZURE_OPENAI_API_KEY and
AZURE_CONTENTUNDERSTANDING_API_KEY environment variables.
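The env-var-first auth pattern described above can be sketched as follows. This is an illustrative helper, not the actual sample code; only the env var name comes from this log, and the credential classes named in the comments are the Azure SDK ones the samples would construct.

```python
import os


def pick_auth_mode(env_var: str = "AZURE_CONTENTUNDERSTANDING_API_KEY") -> str:
    """Sketch of the AF auth pattern: prefer an API key from the environment,
    otherwise fall back to Azure CLI credentials."""
    if os.environ.get(env_var):
        return "api_key"  # sample would build AzureKeyCredential(os.environ[env_var])
    return "azure_cli"  # sample would build AzureCliCredential()
```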
* FEATURE(python): add analyzer auto-detection, file_search RAG, and lazy init
_context_provider.py:
- Make analyzer_id optional (default None) with auto-detection by media
type prefix: audio->audioSearch, video->videoSearch, else documentSearch
- Add _ensure_initialized() for lazy client creation in before_run()
- Add FileSearchConfig-based vector store upload
- Fix: background-completed docs in file_search mode now upload to vector
store instead of injecting full markdown into context messages
- Add _pending_uploads queue for deferred vector store uploads
devui_file_search_agent/ (new sample):
- DevUI agent combining CU extraction + OpenAI file_search RAG
azure_responses_agent (existing sample fix):
- Add AzureCliCredential support and AZURE_AI_PROJECT_ENDPOINT fallback
Tests (19 new), Docs updated (AGENTS.md, README.md)
* feat(cu): MIME sniffing, media-aware formatting, unified timeout, vector store expiration
- Add three-layer MIME detection (fast path → filetype binary sniff → filename
fallback) to handle unreliable upstream MIME types (e.g. mp4 sent as
application/octet-stream). Adds filetype>=1.2,<2 dependency.
- Media-aware output formatting: video shows duration/resolution + all fields
as JSON; audio promotes Summary as prose; document unchanged.
- Unified timeout for all media types (removed file_search special-case that
waited indefinitely for video/audio). All files use max_wait with background
polling fallback.
- Vector store created with expires_after=1 day as crash safety net.
- Add 8 MIME sniffing tests (TestMimeSniffing class).
* fix: merge all CU content segments for video/audio analysis
CU's prebuilt-videoSearch and prebuilt-audioSearch analyzers split long
media files into multiple `contents[]` segments. Previously,
`_extract_sections()` only read `contents[0]`, causing truncated
duration, missing transcript, and incomplete fields for any video/audio
longer than a single scene.
Now iterates all segments and merges:
- duration: global min(startTimeMs) → max(endTimeMs)
- markdown: concatenated with `---` separators
- fields: same-named fields collected into per-segment list
- metadata (kind, resolution): taken from first segment
Single-segment results (documents, short audio) are unaffected.
Update test fixture to realistic 3-segment video structure and expand
assertions to verify multi-segment merging. Add documentation for
multi-segment processing and speaker diarization limitation.
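The merge rules above can be sketched as a pure function. The segment shape (`startTimeMs`/`endTimeMs`/`markdown`/`fields`) follows the field names used in this log; it is an illustration, not the package's `_extract_sections()`.

```python
def merge_segments(segments: list[dict]) -> dict:
    """Merge multi-segment CU results: global duration, concatenated markdown,
    same-named fields collected into per-segment lists."""
    starts = [s["startTimeMs"] for s in segments if s.get("startTimeMs") is not None]
    ends = [s["endTimeMs"] for s in segments if s.get("endTimeMs") is not None]
    merged_fields: dict[str, list] = {}
    for seg in segments:
        for name, value in seg.get("fields", {}).items():
            merged_fields.setdefault(name, []).append(value)  # per-segment list
    return {
        "startTimeMs": min(starts) if starts else None,  # global min
        "endTimeMs": max(ends) if ends else None,  # global max
        "markdown": "\n---\n".join(s.get("markdown", "") for s in segments),
        "fields": merged_fields,
    }
```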
* refactor: improve CU context provider docs and remove ContentLimits
- Improve class docstring: clarify endpoint (Azure AI Foundry URL with
example), credential (AzureKeyCredential vs Entra ID), and analyzer_id
(prebuilt/custom with auto-selection behavior and reference links)
- Add SUPPORTED_MEDIA_TYPES comments explaining MIME-based matching
behavior and add missing file types per CU service docs
- Use namespaced logger to align with other packages
- Remove ContentLimits and related code/tests
- Rename DEFAULT_MAX_WAIT to DEFAULT_MAX_WAIT_SECONDS for clarity
* feat: support user-provided vector store in FileSearchConfig
- Add vector_store_id field to FileSearchConfig (None = auto-create)
- Track _owns_vector_store to only delete auto-created stores on close()
- Remove vector_store_name; use internal _DEFAULT_VECTOR_STORE_NAME
- Add inline comments for private state fields
- Document output_sections default in docstring
- Update AGENTS.md, samples, and tests
* fix: remove ContentLimits from README code block
* refactor: create CU client in __init__ instead of __aenter__
Follow Azure AI Search provider pattern: create the client eagerly in
__init__, make __aenter__ a no-op. This ensures __aexit__/close() is
always safe to call and eliminates the _ensure_initialized() workaround.
* docs: add file_search param to class docstring
* feat: introduce FileSearchBackend abstraction for cross-client support
Replace direct OpenAI client usage with FileSearchBackend ABC:
- OpenAIFileSearchBackend: for OpenAIChatClient (Responses API)
- FoundryFileSearchBackend: for FoundryChatClient (Azure Foundry)
- Shared base _OpenAICompatBackend for common vector store CRUD
FileSearchConfig now takes a backend instead of openai_client.
Factory methods from_openai() and from_foundry() for convenience.
BREAKING: FileSearchConfig(openai_client=...) -> FileSearchConfig.from_openai(...)
* refactor: FileSearchBackend abstraction + caller-owned vector store
* fix: file_search reliability and sample improvements
- Poll vector store indexing (create_and_poll) to ensure file_search
returns results immediately after upload
- Set status to failed when vector store upload fails
- Skip get_analyzed_document tool in file_search mode to prevent
LLM from bypassing RAG
- Simplify sample auth: single credential, direct parameters
- Use from_foundry backend for Foundry project endpoints
* perf: set max_num_results=10 for file_search to reduce token usage
* fix: move import to top of file (E402 lint)
* chore: remove unused imports
* fix: align azure-ai-contentunderstanding with MAF coding conventions
- Add module-level docstrings to __init__.py and _context_provider.py
- Use Self return type for __aenter__ (with typing_extensions fallback)
- Use explicit typed params for __aexit__ signature
- Add sync TokenCredential to AzureCredentialTypes union
- Pass AGENT_FRAMEWORK_USER_AGENT to ContentUnderstandingClient
- Remove unused ContentLimits from public API and tests
- Fix FileSearchConfig tests to match refactored backend API
- Fix lifecycle tests to match eager client initialization
* refactor: improve CU context provider API surface and fix CI
- Refactor _analyze_file to return DocumentEntry instead of mutating dict
- Remove TokenCredential from AzureCredentialTypes (fixes mypy/pyright CI)
- Remove OpenAIFileSearchBackend/FoundryFileSearchBackend from public API
(internal to FileSearchConfig factory methods)
- Remove DocumentStatus from public exports (implementation detail)
- Update file_search comments to reflect backend-agnostic design
- Add DocumentStatus enum, analysis/upload duration tracking
- Add combined timeout for CU analysis + vector store upload
* fix: improve file_search samples and move tool guidelines to context provider
- Delete redundant devui_file_search_agent sample (duplicate of azure_openai variant)
- Move tool usage guidelines from sample agent instructions into context provider
(extend_instructions in step 6, applied automatically for all file_search users)
- Fix file_search purpose: use from_foundry() for Azure OpenAI (purpose="assistants")
- Add filename hint in upload instructions for targeted file_search queries
- Reduce max_num_results from 10 to 3 in both devui samples
- Simplify agent instructions in both samples (remove tool-specific guidance)
* feat: improve source_id, integration tests, and content assertions
- Rename DEFAULT_SOURCE_ID to "azure_ai_contentunderstanding" (matches
azure_ai_search convention)
- Improve source_id docstring to describe default value
- Clarify _detect_and_strip_files docstring (CU-supported files)
- Add invoice.pdf test fixture from Azure CU samples repo
- Refactor integration tests to use invoice.pdf directly (assert instead
of skip when fixture missing)
- Add URI content test (Content.from_uri with external URL)
- Add "CONTOSO LTD." content assertion to all integration tests
- Use max_wait=None in integration tests (wait until complete)
* feat: reject duplicate filenames, add integration tests and sample comments
- Reject duplicate document keys in before_run (skip + warn LLM to rename)
- Update _derive_doc_key docstring to document uniqueness constraint
- Add unit tests for duplicate filename rejection (cross-turn and same-turn)
- Add integration test for data URI content (from_uri with base64)
- Add integration test for background analysis (max_wait timeout + resolve)
- Add filename recommendation comments to all samples' Content.from_data()
* chore: improve doc key derivation, comments, and README
- Replace hash-based doc key with uuid4 for anonymous uploads (O(1), no payload traversal)
- Remove hashlib import (no longer needed)
- Add File Naming section to README (filename importance, duplicate rejection)
- Improve inline comments (_derive_doc_key, _extract_binary, URL parsing)
* test: strengthen _format_result assertions with exact expected strings
- Replace loose 'in' checks with exact 'assert formatted == expected'
for both multi-segment and single-segment format tests
- Add object-type fields (ShippingAddress, Speakers) to test data
to cover nested dict/list serialization
- Add position-based ordering assertions to verify structural
correctness (header -> markdown -> fields across segments)
* refactor: move invoice.pdf to shared sample_assets directory
- Move invoice.pdf from tests/cu/test_data/ to
python/samples/shared/sample_assets/ as single source of truth
- Add INVOICE_PDF_PATH constant in test_integration.py pointing
to the shared location
- Update document_qa.py, invoice_processing.py, large_doc_file_search.py
to use invoice.pdf instead of sample.pdf
* refactor: reorganize samples into numbered dirs and simplify auth
- Move script samples into 01-get-started/ with numbered prefixes
(01_document_qa, 02_multimodal_chat, 03_invoice_processing,
04_large_doc_file_search)
- Move devui samples into 02-devui/ with 01-multimodal_agent and
02-file_search_agent/{azure_openai_backend,foundry_backend}
- Move invoice.pdf to CU package-local samples/shared/sample_assets/
- Replace kwargs dicts with direct constructor calls; support both
API key (AZURE_OPENAI_API_KEY) and AzureCliCredential
- Update README sample table with new paths
* fix: resolve CI lint errors (D205, RUF001, E501)
- Fix D205: single-line docstring summary for _detect_and_strip_files
- Fix RUF001: replace EN DASH with HYPHEN-MINUS in segment headers
- Fix E501: wrap long assertion lines in tests
- Also includes samples reorg and auth simplification
* refactor: overhaul samples — FoundryChatClient, sessions, remove get_analyzed_document
Samples:
- Switch all samples from deprecated AzureOpenAIResponsesClient to FoundryChatClient
- Add 02_multi_turn_session.py showing AgentSession persistence across turns
- Rewrite 03_multimodal_chat.py with real PDF + audio + video (parallel
analysis), per-modality follow-ups, cross-document question, elapsed
time, user prompts, and input token counts
- Renumber: 02->03 multimodal, 03->04 invoice, 04->05 file_search
Context provider:
- Remove get_analyzed_document tool -- full content is in conversation
history via InMemoryHistoryProvider, no retrieval tool needed
- Remove follow-up turn instructions about tools
- Only list_documents tool remains (for status queries)
- Update README to reflect tool removal
* feat: add 05_background_analysis sample and fix 04 session/max_wait
- Add 05_background_analysis.py demonstrating non-blocking CU analysis
with max_wait=1s, status tracking via list_documents(), and automatic
background task resolution on subsequent turns
- Fix 04_invoice_processing.py: add max_wait=None and AgentSession
- Rename 05→06 large_doc_file_search
- Update README sample table
* docs: update README and fix sample 06
README:
- Switch Quick Start from AzureOpenAIResponsesClient to FoundryChatClient
- Add AgentSession to Quick Start example
- Fix status values: pending -> analyzing/uploading/ready/failed
- Fix env var: AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME -> AZURE_OPENAI_DEPLOYMENT_NAME
- Update samples section with new paths, link to samples/README.md
- Update multi-segment description to reflect per-segment fields
Sample 06:
- Fix from_openai -> from_foundry for Azure endpoints
- Add AgentSession and max_wait=None
* docs: rewrite README — concise format, prerequisites, CU link
* fix: resolve pyright errors in _format_result segment cast
* docs: add numbered section comments and fresh sample output to all samples
- Add numbered section comments (# 1. ..., # 2. ...) per SAMPLE_GUIDELINES
- Re-run all 6 samples and update expected output with real results
- Fix duplicate sample output blocks in 04 and 05
- Update README code example to use public invoice URL
* feat: add load_settings support for env var configuration
- Make endpoint optional in constructor — auto-loads from
AZURE_CONTENTUNDERSTANDING_ENDPOINT env var via load_settings()
- Add ContentUnderstandingSettings TypedDict
- Add env_file_path/env_file_encoding params for .env file support
- Add 4 unit tests: env var loading, explicit override, missing
endpoint error, missing credential error
- Update README with env var auto-resolution docs
- Follows framework convention used by all other packages
* docs: polish README — fix duplicate env var, add Next steps, service limits link
* chore: trim invoice fixture from 199K to 33 lines
Keep only VendorName, InvoiceTotal, DueDate, InvoiceDate, InvoiceId
fields and first 500 chars of markdown. Strip spans/source/coordinates.
Reduces fixture from 6.6MB to 1.2KB.
* feat: per-file analyzer_id override via additional_properties
- Read analyzer_id from Content.additional_properties for per-file override
- Resolution order: per-file > provider-level > auto-detect by media type
- Update class docstring documenting filename and analyzer_id properties
- Update sample 04 to demonstrate per-file override (prebuilt-invoice)
- Add unit test for per-file analyzer override
* Trim PDF test fixture and clarify unique filename requirement
- Trim analyze_pdf_result.json from 4427 to 23 lines by removing
pages, words, lines, paragraphs, sections, spans, and source
fields that are not used by any unit test.
- Add docstring note that filename must be unique within a session;
duplicate filenames are rejected and the file will not be analyzed.
* Update python/packages/azure-ai-contentunderstanding/agent_framework_azure_ai_contentunderstanding/_context_provider.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update python/packages/azure-ai-contentunderstanding/agent_framework_azure_ai_contentunderstanding/_context_provider.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update python/packages/azure-ai-contentunderstanding/samples/02-devui/02-file_search_agent/azure_openai_backend/agent.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update python/packages/azure-ai-contentunderstanding/samples/02-devui/01-multimodal_agent/agent.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update python/packages/azure-ai-contentunderstanding/samples/01-get-started/06_large_doc_file_search.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fix AGENTS.md to match implementation; remove unused variable in test helper
AGENTS.md:
- Remove _ensure_initialized() reference (client is created in __init__)
- Fix multi-segment docs: segments kept as list, not merged into fields
- Remove get_analyzed_document() reference (only list_documents registered)
- Update sample names to match current directory structure
test_context_provider.py:
- Simplify _make_data_uri() — remove unused 'encoded' variable
* Fix premature file_search instruction for background-completed docs
- Change _resolve_pending_tasks() instruction from 'Use file_search'
to 'being indexed' since the upload hasn't completed yet at that point.
- Add LLM instruction on upload failure in step 1b so the agent can
inform the user the document isn't searchable.
* fix: wrap long line in devui agent instructions (E501)
* Fix Copilot review: unused logger, stray code in README, await cancelled tasks
- _file_search.py: Remove unused logger and logging import
- 01-multimodal_agent/README.md: Remove accidentally pasted Python script
- _context_provider.py close(): Await cancelled tasks before closing
client to prevent 'Task destroyed but pending' warnings
* Sanitize doc keys and fix duplicate filename re-injection
- Add _sanitize_doc_key() to strip control characters, collapse
whitespace, and cap length at 255 chars — prevents prompt injection
via crafted filenames in extend_instructions() calls.
- Track accepted doc_keys in step 3 so step 5 only injects content
for files actually analyzed this turn, not pre-existing duplicates.
- Soften duplicate upload instruction wording (remove IMPORTANT/caps).
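The sanitizer described above can be sketched as follows (illustrative, assuming the three steps named in the commit: collapse whitespace runs, drop control characters, cap at 255 chars):

```python
import re


def sanitize_doc_key(name: str, max_len: int = 255) -> str:
    """Blunt prompt injection via crafted filenames: collapse whitespace runs
    (including newlines) to single spaces, strip remaining control characters,
    and cap the length."""
    cleaned = re.sub(r"\s+", " ", name)
    cleaned = re.sub(r"[\x00-\x1f\x7f]", "", cleaned)
    return cleaned.strip()[:max_len]
```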
* fix: add type annotation to tasks_to_cancel for pyright
* Move per-session mutable state to state dict for session isolation
Previously _pending_tasks, _pending_uploads, and _uploaded_file_ids
were stored on self, shared across all sessions. This caused
cross-session leakage: Session A's background task results could be
injected into Session B's context.
Now these are stored in the per-session state dict. Global copies
(_all_pending_tasks, _all_uploaded_file_ids) are kept on self only
for best-effort cleanup in close().
Add 2 new TestSessionIsolation tests verifying that background tasks
and resolved content stay within their originating session.
* Remove unused AnalysisSection enum values
Only MARKDOWN and FIELDS are handled by _extract_sections().
Remove FIELD_GROUNDING, TABLES, PARAGRAPHS, SECTIONS to avoid
exposing dead options to users.
* Recursively flatten object/array field values for cleaner LLM output
- Use SDK .value property with recursive extraction for object/array fields
- Object: AmountDue -> {Amount: 610, CurrencyCode: USD} (was raw SDK dict)
- Array: LineItems -> list of flattened items (was raw SDK list)
- Update invoice fixture with object/array fields from prebuilt-invoice
- Add 3 unit tests for object, array, and nested object field extraction
* Preserve sub-field confidence; compare full expected JSON in tests
* Remove incorrect MIME aliases (audio/mp4, video/x-matroska)
* feat: add AnalysisInput, content_range, warnings, and category support
- Use SDK AnalysisInput model instead of raw body dict for begin_analyze
- Forward content_range from additional_properties to CU (page/time ranges)
- Extract CU warnings with code/message/target (ODataV4Format) into output
- Include content-level category from classifier analyzers
- Add 5 new tests: warnings, category, content_range forwarding
- Fix pyright with explicit casts; fix en-dash lint (RUF002)
* fix: falsy-0 bug in duration calc; improve test coverage
- Fix start_time_ms=0 treated as falsy by 'or' short-circuit, use
'is None' checks instead for duration and segment time extraction
- Update warnings test to use RAI ContentFiltered codes
- Enrich warnings extraction to include code/message/target (ODataV4Format)
- Add multi-segment video category test with per-segment assertions
* refactor: split _context_provider.py into focused modules
- Extract _constants.py: SUPPORTED_MEDIA_TYPES, MIME_ALIASES, analyzer maps
- Extract _detection.py: file detection, MIME sniffing, doc key derivation
- Extract _extraction.py: result extraction, field flattening, LLM formatting
- _context_provider.py delegates via thin wrappers (793 lines, was 1255)
- Update test imports to use _constants.py for SUPPORTED_MEDIA_TYPES
* docs: update AGENTS.md with DocumentStatus, FileSearchBackend, and _file_search.py
* refactor: replace AnalysisSection enum with Literal type for simpler DX
- Remove AnalysisSection(str, Enum) class, replace with Literal["markdown", "fields"] type alias
- Users can now pass plain strings: output_sections=["markdown"] — no extra import needed
- AnalysisSection type alias still exported for type annotation use
- Update all samples, tests, and internal code to use string literals
- Address PR review feedback (eavanvalkenburg)
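The enum-to-Literal change looks roughly like this; the type alias still supports annotations while callers pass plain strings, and `get_args` can recover the allowed values for runtime validation (validator function is a hypothetical illustration):

```python
from typing import Literal, get_args

# Plain-string replacement for the removed AnalysisSection(str, Enum)
AnalysisSection = Literal["markdown", "fields"]


def validate_sections(sections: list[str]) -> list[str]:
    """Runtime check against the Literal's allowed values."""
    allowed = set(get_args(AnalysisSection))
    unknown = [s for s in sections if s not in allowed]
    if unknown:
        raise ValueError(f"unknown output sections: {unknown}")
    return sections
```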
* refactor: replace asyncio.Task with continuation tokens for serializable state
- Replace state["_pending_tasks"] (asyncio.Task — not serializable) with
state["_pending_tokens"] (dict of continuation token strings) so the
framework can persist session state to disk/storage
- Resume pending analyses via Azure SDK continuation_token mechanism
- Fix: resumed pollers have stale cached status (done() always False),
use asyncio.wait_for(poller.result()) with 10s min timeout instead
- Remove _background_poll(), _all_pending_tasks, and task cancellation
- Address PR review feedback (eavanvalkenburg): state must be serializable
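The motivation for this swap can be demonstrated directly: continuation tokens are plain strings and round-trip through JSON, while an `asyncio.Task` cannot be serialized at all (token value below is a made-up placeholder):

```python
import asyncio
import json

# Continuation tokens are plain strings, so session state persists cleanly
token_state = {"_pending_tokens": {"invoice.pdf": "example-continuation-token"}}
restored = json.loads(json.dumps(token_state))
assert restored == token_state


async def main() -> None:
    # An asyncio.Task is not JSON-serializable, which is why it could not
    # live in persisted session state
    task = asyncio.create_task(asyncio.sleep(0))
    try:
        json.dumps({"_pending_tasks": {"invoice.pdf": task}})
        raise AssertionError("expected TypeError")
    except TypeError:
        pass
    await task


asyncio.run(main())
```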
* fix: resolve CI lint (RUF052) and mypy (call-overload) errors
* feat: add structured output (Pydantic model) to invoice processing sample
- Use response_format=InvoiceResult for schema-constrained LLM output
- Use output_sections=["fields"] only (no markdown needed for structured output)
- Add LowConfidenceField model with confidence values
- Add comments about prebuilt-invoice extensive schema vs simplified model
- Address PR review feedback (eavanvalkenburg): use structured response
* fix: use FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL env vars in all samples
Replace AZURE_AI_PROJECT_ENDPOINT → FOUNDRY_PROJECT_ENDPOINT and
AZURE_OPENAI_DEPLOYMENT_NAME → FOUNDRY_MODEL across all sample .py and
README.md files. Address PR review feedback (eavanvalkenburg).
* refactor: remove background_analysis sample, use FoundryChatClient in DevUI
- Remove 05_background_analysis.py (per reviewer feedback — discuss max_wait
design separately from samples)
- Renumber 06_large_doc_file_search.py → 05_large_doc_file_search.py
- Replace AzureOpenAIResponsesClient with FoundryChatClient in all DevUI samples
- Replace client.as_agent() with Agent(client=client, ...) everywhere
- Add max_wait comments explaining interactive vs batch usage
- Update README.md and AGENTS.md
- Address PR review feedback (eavanvalkenburg)
* fix: vector_stores API moved from beta namespace in OpenAI SDK
* docs: add comments about multi-file support and CU service limits in file_search sample
* fix: broken markdown links after sample removal and renumbering
* fix: migrate BaseContextProvider to ContextProvider (non-deprecated)
* fix: Message(text=) -> Message(contents=[]) for API compatibility
* Inline _constants.py into consuming modules
Remove _constants.py and move constants to where they are used:
- SUPPORTED_MEDIA_TYPES, MIME_ALIASES → _detection.py
- MEDIA_TYPE_ANALYZER_MAP, DEFAULT_ANALYZER → _context_provider.py
Addresses review feedback to reduce file count.
* Mark package as alpha per package management skill
- Version: 1.0.0b260401 → 1.0.0a260401
- Classifier: Development Status 4 - Beta → 3 - Alpha
- Add to PACKAGE_STATUS.md as alpha
Follows the alpha package checklist from python-package-management skill.
* Replace extend_instructions with extend_messages for status notifications
Status/error/result notifications now use extend_messages (conversation
context) instead of extend_instructions (system prompt). This avoids
system prompt bloat and keeps behavioral directives separate from
event notifications.
- 11 extend_instructions calls → extend_messages (role='user')
- 1 extend_instructions retained: tool usage guidelines (behavioral)
- 6 test assertions updated to check context_messages
All 84 unit tests + 5 live integration tests pass.
* Fix lint: E402 import order, ISC004 implicit string concatenation
- Move constants after all imports to fix E402
- Wrap multi-line strings in parentheses inside contents=[] to fix ISC004
* Fix lint: remove unused json import in invoice sample
* Fix CI: apply ruff format + fix E501 line length after reformatting
ruff format expands Message() calls to multi-line, pushing string
indentation deeper. Break long strings to fit within 120 char limit
after formatting. Also removes unused json import in sample.
* Address review feedback: keyword-only args, accept pre-built client, remove wrappers
- All __init__ args now keyword-only (matches FoundryChatClient pattern)
- New 'client' param accepts pre-built ContentUnderstandingClient
- core dep bound: >=1.0.0rc5 → >=1.0.0,<2
- Self import moved after local imports
- Removed 9 static method wrappers; callsites use module functions directly
- Tests updated to import derive_doc_key and format_result directly
* fix: remove duplicate ContentUnderstandingClient instantiation
The client was being created twice — once inside the if/else block and
again unconditionally after it. The second instantiation overwrote the
pre-built client path and failed type checking when credential was None.
* rename: azure-ai-contentunderstanding → azure-contentunderstanding
Package: agent-framework-azure-ai-contentunderstanding → agent-framework-azure-contentunderstanding
Module: agent_framework_azure_ai_contentunderstanding → agent_framework_azure_contentunderstanding
Directory: packages/azure-ai-contentunderstanding → packages/azure-contentunderstanding
Per agreement with PM and MAF team to drop 'AI' from the package name.
* feat: add ContentUnderstanding re-export to agent_framework.foundry namespace
Enables: from agent_framework.foundry import ContentUnderstandingContextProvider
Exports: ContentUnderstandingContextProvider, FileSearchConfig,
FileSearchBackend, AnalysisSection, DocumentStatus
Updates all samples and README to use the foundry namespace import.
* fix: add missing copyright headers to standalone sample scripts
* chore: remove .vscode/settings.json and add to .gitignore
* refactor: reuse FoundryChatClient.client for vector store ops in file_search sample
Address review feedback from TaoChenOSU:
- 05_large_doc_file_search.py: use client.client instead of manually
constructing AsyncAzureOpenAI; remove openai dependency
- azure_openai_backend/agent.py: import reorder only (AIProjectClient
kept — required for sync vector store creation in DevUI)
* fix: skip closing client when caller passes pre-built client
When a ContentUnderstandingClient is passed via client=, the caller
owns its lifecycle. Added _owns_client flag so close() only closes
the client when we created it internally.
---------
Co-authored-by: yungshinlin <yungshin@msn.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
1 parent 3a463b8
commit 1e1eda6
42 files changed
Lines changed: 6878 additions & 0 deletions
File tree
- python
- packages
- azure-contentunderstanding
- agent_framework_azure_contentunderstanding
- samples
- 01-get-started
- 02-devui
- 01-multimodal_agent
- 02-file_search_agent
- azure_openai_backend
- foundry_backend
- tests/cu
- fixtures
- core/agent_framework/foundry