- Prompt for pipeline template parameters with placeholder values (#1032)
- Renamed pipeline `generate` command to `compile` (#1028)
- Removed overly aggressive client-side log filtering (#1030)
- Bumped python-dependencies group across 1 directory with 28 updates (#1031)
- Added ruff to test dependencies (#1033)
- Added pip package ecosystem to dependabot.yml (#1025)
- Fixed per-character alias bug in pipeline command (#1028)
- Closed 9 open CodeQL/Dependabot alerts: workflow permissions and torch in test fixtures (#1035)
- Code-first Pipeline DSL with CLI generate/upload support (#1017)
- `clarifai pipeline run --dev` for local pipeline development (#1012)
- Report cached prompt tokens in model responses (#1026)
- Improved `clarifai pipeline init` help text and post-init next-steps message (#1023)
- Disabled `deploy_latest_version` for `clarifai model serve` deployments (#1022)
- Re-pin deployment `desired_worker` to current model version on `clarifai model serve` (#1024)
- Loosen pinned requirements and fix Clarifai package detection (#1020)
- Validate Hugging Face access for private repos that report `not_found` to anonymous requests (#1018)
- Fix CI compute orchestration tests (#1021)
- `clarifai pipeline local-run` command to run pipeline steps locally in Docker (#1013)
- Auto-create compute cluster/nodepool for `clarifai pipeline run` via `--instance` flag (#1011)
- Improved `clarifai pipeline init` UX (#1008)
- Local runner defaults set to PRIVATE; `--public` flag now patches all resource visibilities (#1014)
- Updated additional requirements for `model init --streaming-video` (#987)
- `User.app()` now returns actual server data instead of empty values (#954)
- Skip flaky `test_model_templates` and `test_model_params` (#1015)
- Smart resource reuse and private-by-default for `clarifai model serve` (#1004)
- Optimize model runner memory and latency (#994)
- Auto-detect and clamp max_tokens to backend's max_seq_len (#1005)
- Added `VLLMOpenAIModelClass` parent class with cancellation support and health probes (#998)
- Added clarifai Skills installation (#1003)
- Streamlined overhead in SSE stream (#988)
- Fixed minor local-runner issue (#999)
- Added `--keep` flag to `clarifai model serve` to preserve build directory (#990)
- Local Runner is now public by default (#981)
- Fixed reasoning model token tracking, event-loop safety, streaming and tool call passthrough in agentic class (#989)
- Fixed `HasField` usage on scalar primitives in DataConverter (#985)
- CLI & Deploy Improvements (#977)
- Fixed user/app conflicts with context in CLI (#979)
- Fixed completion token reporting to use `total_tokens - prompt_tokens` to include reasoning tokens (#978)
- Fixed user_id conflicts (#973)
- Added support for reading pipeline templates git repo URL from `CLARIFAI_PIPELINE_TEMPLATES_GIT_REPO_URL` environment variable (#975)
- Fixed `clarifai model init` creating a subdirectory instead of updating the existing model directory (#972)
- Added `clarifai model deploy` command and `clarifai model init` simplification with multi-cloud GPU discovery, zero-prompt deployment flow, and simplified `config.yaml` (#960)
- Added `developer` and `tool` to valid message roles for LLM interactions (#970)
- Added support for pipeline upload without `step_directories` when `templateRefs` have versions (#961)
- Fixed `List Artifacts` command to use correct `latest-version-id` and visibility (#968)
- Fixed dataset download bug (#945)
- Fixed user input override bypass when a PostScript version exists (#969)
- Added App CRUD commands and `clarifai whoami` to the CLI (#958)
- Added health check configuration for `clarifai model local-runner` via `--health-check-port`, `--disable-health-check`, and `--auto-find-health-check-port` flags (#957)
- Improved CLI performance by lazily loading modules and reducing startup overhead (#958)
- Converted GitHub Copilot contributor instructions into modular agent skills with a symlink-based structure (#935)
- Fixed env var clobbering in tests by using `@patch.dict` and corrected test patch paths after the CLI lazy-loading refactor (#959)
- Fixed `ModelRunner` health server startup being invoked twice, which could cause “Address already in use” errors; added support to disable the health server and optionally auto-select an available port (#957)
- Reduced overhead in the admission control poll loop in `ModelRunner` to improve runner efficiency (#956)
- Added interactive `clarifai logout` command support with programmatic flags for non-interactive usage (#933)
- Added `--context` CLI flag to override active context on a per-command basis (#919)
- Added visibility and `user_id` filtering support for Pipeline and Pipeline Step resources in CLI and builders (#951)
- Pipeline and Pipeline Step templates now initialize with `PRIVATE` visibility by default (#951)
- Dropped support for Modules and removed associated module components (#949)
- Fixed PAT creation URL in CLI login from `/settings/security` to `/settings/secrets` (#950)
- Fixed a serializer regression affecting runner serialization utilities (#952)
- Admission Control: Added admission control support for model runners (#941)
- OpenAI Dependency: Added openai as a core dependency (#938)
- Local Runner: Removed inference_compute_info requirement for local model runners (#911)
- CLI Login: Improved CLI login experience with better UX and security (#928)
- Relaxed clarifai-protocol version constraint from ==0.0.35 to >=0.0.35,<0.1.0 (#932)
- Thread Configuration Management: Added functionality to pass num_threads from config.yaml to the model version protobuf.
- Docker Entrypoint: Switched to tini as the default entrypoint in Dockerfile templates to improve signal handling and zombie process reaping within runner containers.
- Stdio MCP Server: Refactored the Model Context Protocol (MCP) server to improve logging clarity and remove unused legacy code.
- Add support for concept IDs from config.yaml in visual detector/classifier (#913)
- Added `load_concepts_from_config()` method to `VisualDetectorClass` and `VisualClassifierClass` to load concepts from config.yaml
- Added optional `concepts_map` parameter to `process_detections()` and `process_concepts()` methods
- When `concepts_map` is provided, concept IDs are taken from config.yaml instead of being auto-generated from names
- Fixes mismatch between concept IDs in model output_info and actual prediction output
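As an illustration of the `concepts_map` behavior described above (a simplified sketch with hypothetical helper names, not the SDK's actual implementation):

```python
# Sketch: how a concepts_map built from config.yaml can override
# auto-generated concept IDs. Names here are illustrative only.

def build_concepts_map(config):
    """Map concept names to the IDs declared in config.yaml."""
    return {c["name"]: c["id"] for c in config.get("concepts", [])}

def concept_id_for(name, concepts_map=None):
    # With a concepts_map, use the configured ID; otherwise fall back
    # to an ID auto-generated from the name (the old behavior).
    if concepts_map and name in concepts_map:
        return concepts_map[name]
    return name.lower().replace(" ", "-")

config = {"concepts": [{"id": "id-cat", "name": "Cat"}]}
cmap = build_concepts_map(config)
print(concept_id_for("Cat", cmap))  # configured ID: id-cat
print(concept_id_for("Dog", cmap))  # auto-generated: dog
```

With the configured IDs in play, the IDs reported in the model's output_info match the IDs attached to predictions, which is the mismatch the change fixes.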
- Added a dockerfile template that conditionally adds packages for video streaming (#902)
- Fixed the deployment cleaning logic to only target failed model deployments (#895)
- [EAGLE-7083]: Add retry logic to OpenAI API calls (#878)
- Implements an automatic retry mechanism for OpenAI API calls to handle transient httpx.ConnectError exceptions
- Adds tenacity as a dependency
- Wraps all OpenAI API calls in OpenAIModelClass with a @retry decorator
- Configures the retry to happen up to 3 times with exponential backoff on httpx.ConnectError
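The retry pattern described above can be sketched in plain Python. The SDK uses the tenacity library's `@retry` decorator; this stand-alone equivalent uses a local `ConnectError` as a stand-in for `httpx.ConnectError`:

```python
import time
from functools import wraps

class ConnectError(Exception):
    """Stand-in for httpx.ConnectError in this sketch."""

def retry_on_connect(max_attempts=3, base_delay=0.01):
    """Retry with exponential backoff on ConnectError, re-raising
    once the attempt budget is exhausted."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except ConnectError:
                    if attempt == max_attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

calls = {"n": 0}

@retry_on_connect()
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectError("transient failure")
    return "ok"

print(flaky_call())  # "ok" after two retried transient failures
```

tenacity expresses the same policy declaratively (`stop_after_attempt`, `wait_exponential`, `retry_if_exception_type`), which is why it was added as a dependency.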
- Fix agentic OpenAI transport (#900)
- Fixed attribute access for OpenAI response objects in agentic transport to use hasattr() checks instead of dictionary .get() methods
- Added "none" mode to the --mode CLI option for local-runner command and changed the default from "env" to "none"
- Fix top_k when playground hits openai_transport_* methods (#791)
- [PR-1090] Agentic Class (#869)
- Introduced new `AgenticModelClass` that extends `OpenAIModelClass` to enable agentic behavior by integrating LLMs with MCP (Model Context Protocol) servers
- Added tool discovery, execution, and iterative tool calling capabilities for both chat completions and responses endpoints
- Supports both streaming and non-streaming modes
- [PR-1092][PR-1093] Optimised MCPModelClass and supports for Stdio MCP servers (#872)
- Refactored `MCPModelClass` with persistent session management using a background thread with a long-lived event loop
- Added persistent FastMCP client session that opens once during `load_model()` and is reused for all subsequent requests
- Introduced new `StdioMCPModelClass` for stdio MCP servers with automatic tool discovery
- Added support for a single long-lived Node.js process for stdio servers
- Added configuration via YAML with support for environment variables and secrets
- Validate requirements.txt for Agentic Models (#897)
- Added validation for requirements.txt in agentic models
- Add CLI support for pause, cancel, resume, and monitor Pipeline Runs (#881)
- `clarifai pipeline run` (alias `pr`) with subcommands: `pause`, `cancel`, `resume`, `monitor`
- Accepts pipeline_version_run_id as positional arg or explicit flag
- Auto-loads user_id, app_id, pipeline_id, pipeline_version_id from config-lock.yaml when present
- Helper functions extract shared logic for config loading, validation, and pipeline instantiation
- `monitor` command polls status and logs with configurable `--timeout` and `--monitor_interval` options
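The config-lock.yaml fallback described above can be sketched roughly as follows (function and field names are illustrative, not the CLI's actual internals): explicit CLI arguments win, and missing values fall back to the lock file when present:

```python
# Hypothetical sketch of resolving run context: CLI args take
# precedence; missing fields fall back to config-lock.yaml values.

def resolve_run_context(cli_args, lock_config):
    required = ("user_id", "app_id", "pipeline_id", "pipeline_version_id")
    resolved = {}
    for key in required:
        value = cli_args.get(key) or (lock_config or {}).get(key)
        if value is None:
            raise ValueError(f"missing required field: {key}")
        resolved[key] = value
    return resolved

lock = {"user_id": "me", "app_id": "app", "pipeline_id": "p1",
        "pipeline_version_id": "v1"}
ctx = resolve_run_context({"pipeline_id": "override"}, lock)
print(ctx["pipeline_id"])  # "override" - explicit flag wins
print(ctx["user_id"])      # "me" - filled from config-lock.yaml
```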
- Fixed Artifacts Download and Improved Output Formatting (#893)
- Fix Artifact download authentication issue.
- Standardize table formatting by using the existing display_co_resources function.
- Artifacts list table now has more details such as version, created_at, etc.
- Artifact version list displayed integers in the visibility column; fixed to show human-readable strings.
- Fixed local model runner issues (#886)
- Re-enabled copying from the working directory to the container, which was previously disabled
- Corrected incorrect argument configuration for uploaded models from earlier work
- Fixed checkpoint downloads failing when hf_transfer wasn't installed (#888)
- Added compatibility check that temporarily disables HF_HUB_ENABLE_HF_TRANSFER environment variable during downloads when hf_transfer package is unavailable
- Prevents download failures from Hugging Face when environment variable is set but package is not installed
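A minimal sketch of the compatibility-check idea (a hypothetical helper, not the SDK's actual code): temporarily clear the env var when the package can't be imported, and restore it afterwards:

```python
import importlib.util
import os
from contextlib import contextmanager

@contextmanager
def hf_transfer_compat():
    """Temporarily unset HF_HUB_ENABLE_HF_TRANSFER during a download
    when the hf_transfer package is not importable, restoring it after."""
    needs_disable = (
        os.environ.get("HF_HUB_ENABLE_HF_TRANSFER") == "1"
        and importlib.util.find_spec("hf_transfer") is None
    )
    saved = os.environ.pop("HF_HUB_ENABLE_HF_TRANSFER", None) if needs_disable else None
    try:
        yield
    finally:
        if saved is not None:
            os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = saved

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
with hf_transfer_compat():
    pass  # a download here won't fail on the missing accelerator package
print(os.environ.get("HF_HUB_ENABLE_HF_TRANSFER"))  # restored: "1"
```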
- Fix conflicts with latest vLLM (#887)
- Fixed vLLM model upload failures caused by hardcoded dependencies in SDK
- PIPE-1120: Artifact CLI/SDK implementation (#860)
- Added comprehensive artifact management system for SDK and CLI
- Added Artifact and ArtifactVersion client classes for metadata and file operations
- Added CLI commands for artifact operations (list, get, cp, delete) with alias support
- Added file upload/download with streaming, progress tracking, and retry logic
- Added 80+ test methods across 4 test files for comprehensive coverage
- PR-1014: Interactive config.yaml creation during model upload process (#843)
- Added interactive CLI prompts for creating config.yaml when missing during model upload
- Added helper functions for prompting required, optional, integer, and yes/no fields
- Added context selection during upload process
- Added container and Env model for Local runners (#856)
- Added CLI options (--mode, --keep_image) for local_runner command
- Added ModelRunLocally class for environment setup and Docker operations
- Added support for running models in virtual environment or Docker container
- Add comprehensive test coverage for MCPConnectionPool connection lifecycle (#875)
- Added 22 unit tests for connection lifecycle operations
- Added tests for singleton behavior, connection cleanup, and parallel operations
- Update status code and description for model runner failure case (#870)
- Updated status code to RUNNER_PROCESSING_FAILED for model runner failures
- [EAGLE-7007]: Prevent TypeError during model version creation (#858)
- Fixed TypeError by filtering None values from method signatures before protobuf constructor
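The None-filtering fix can be illustrated with a small sketch (the protobuf message here is faked and all names are hypothetical):

```python
# Protobuf constructors raise TypeError on None field values, so the
# fix is to drop None entries before constructing the message.

def filter_none_kwargs(**kwargs):
    return {k: v for k, v in kwargs.items() if v is not None}

class FakeMessage:
    """Stand-in for a protobuf message that rejects None values."""
    def __init__(self, **fields):
        for name, value in fields.items():
            if value is None:
                raise TypeError(f"{name} has type NoneType")
            setattr(self, name, value)

params = {"description": None, "num_threads": 16}
msg = FakeMessage(**filter_none_kwargs(**params))
print(msg.num_threads)              # 16
print(hasattr(msg, "description"))  # False - the None field was dropped
```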
- Fixed runner-id bug for local-runners (#867)
- Fixed runner selection and error handling logic to reuse existing runners
- Fixed runner ID missing error when local-runner is initiated from fresh login
- Add fix for user verification in dev (#868)
- Fixed CONN_INSUFFICIENT_SCOPES error during model upload in dev environment
- Added graceful handling of insufficient scopes for Clarifai employee check
- [SVMB-1361]: Upgrade urllib3>2.6.2 (#877)
- Upgraded requests dependency to ensure urllib3>2.6.2 for security fix
- [EAGLE-7083]: Add retry logic to OpenAI API calls and fix test mocks (#879)
- Added retry mechanism with exponential backoff for OpenAI API calls
- Added tenacity dependency for retry logic
- Fixed test mocks with missing OpenAI client methods
- Fix TypeError when accelerator_type is None in config.yaml (#864)
- Added null check before iterating over accelerator_type
- Prevents crash during model upload for CPU-only models
- Fixed local-runner to handle duplicate runner id errors (#850)
- Added CLARIFAI_HF_TOKEN to CLI Context (#851)
- Add comprehensive test coverage for cli.pipeline_step module (#795)
- Add tests for local-runner CLI command (#853)
- Remove auto-generating file (#854)
- Add platform specification support to config.yaml for model versions (#855)
- Add --platform CLI option for model upload (#857)
- Add support for new struct_value field in runner data utils (#847)
- Add support for including deployment user ID (#848)
- Remove model proto caching from ModelRunner, ModelServicer, and server (#838)
- Add Pipeline Step Secrets support in SDK and CLI (#830)
- Bump dockerfile base image git hash (#844)
- Add input argument overrides for pipeline runs via CLI and SDK (#841)
- Refactored the `Dockerfile.template` used for building Clarifai model runner images by introducing a multi-stage build that separates model asset downloading from final image creation, resulting in a cleaner and more efficient build process (#839)
- Fixed an issue by ensuring the model proto with secrets is loaded once during server initialization and is available for all predict requests (#837)
- Added comprehensive support for the OpenAI `responses` API (both streaming and non-streaming) to the dummy model implementation, improved token usage accounting for both `chat.completions` and `responses` endpoints, and introduced thorough tests for the new functionality (#836)
- Added a validation mechanism to the model loading process in the `Model` class, improving reliability during model initialization (#835)
- Improved how package names and versions are parsed from requirement lines, specifically adding support for dependencies specified with the `@` symbol and ensuring consistent whitespace handling (#834)
- Centralized and streamlined the logic for reading environment variables and passing them to the `ClarifaiAuthHelper`, making the codebase more maintainable and flexible (#833)
- Added `visual-keypointer` to concepts-required model types list (#824)
- Improved the robustness of the `clarifai model local-runner` command by ensuring that model configuration is loaded and validated earlier in the process, and by adding stricter checks for model type consistency (#823)
- Optimized model runner performance by loading the model proto once at initialization instead of expecting it with every predict request from the API (#822)
- Improved the `clarifai pipeline init` command by updating the Argo workflow template generation to include input arguments and remove unnecessary metadata fields (#819)
- Added comprehensive environment validation to provide immediate feedback when users attempt to run model tests on unsupported environments, helping them understand limitations and avoid confusion when tests fail (#658)
- Fixed missing `user_id` parameter issue in CLI `local-runner` command (#816)
- Added sglang toolkit to CLI init command (#815)
- Added Model Deployment Workflow after Model Upload in CLI (#802)
- Added optional protobuf response information in pythonic models with parameter validation (#810)
- Added python toolkit to CLI init command (#807)
- Added USER_ID to config of CLI Model Init (#808)
- Add user input prompt for OpenAI local runner (#801)
- Fixed async_client initialisation (#806)
- Disabled `async_stub` in `ModelClient` initialization (#804)
- Fixed `UnboundLocalError` in model init when using `--model-type-id` without toolkit (#799)
- Added config-lock.yaml to clarifai pipeline upload (#754)
- Added support for initializing models using the vLLM toolkit for local-runners (#789)
- Modified the secret injection mechanism to support pulling secrets directly from the current environment when no secret files are available (#788)
- Updated type hints and docstring descriptions across all major files in the clarifai/client folder to improve code quality, maintainability, and developer experience (#781)
- Added comprehensive secrets management functionality to Clarifai's client, including CRUD operations for secrets and integration with model upload workflows (#779)
- Added support for initializing models using the LMStudio toolkit for local-runners (#760)
- Added support for initializing models using the Hugging Face toolkit for local-runners (#740)
- A new `patch_version` method is added to the Model class and method signatures are integrated into the local runner workflow (#718)
- Highlights the example code script printed in the logs of the local runner workflow (#707)
- Changed the default local development model type from "text-to-text" to "any-to-any" (#680)
- Reduced friction while still leveraging a single prebuilt AMD base image (#645)
- Bump setuptools from 70.0.0 to 78.1.1 in /.github/workflows (#600)
- Fixes an issue with the conversion of gRPC response enums to integers for the runner creation process (#576)
- Minor internal improvements and bug fixes.
- Health probe support allowing `ModelClass` implementations to define liveness/readiness checks (#783)
- Interactive `pipeline init` user prompts replacing placeholder TODO values (#768)
- Git registry metadata capture during model upload with model-scoped change detection (#762)
- Comprehensive internal GitHub Copilot contributor instructions document (#748)
- Local runner now uses latest local-dev model version automatically (#777)
- Improved overall Model CLI UX (consolidated flags, clearer help, better error surfacing) (#738)
- Updated `clarifai model predict` CLI to align with pythonic model changes (#654)
- Updated local-runner default API base URL (#770)
- Refined logging in model & pipeline step builders for clearer diagnostics (#773)
- Correct TypeError when parsing checkpoint size from environment variable (#775)
- Secrets handling for request type secrets in runners / builders (#774)
- Pipeline log monitoring pagination now returns all entries beyond first 50 (#772)
- Added structured maintainer + contributor guidance for AI assistance workflows (#748)
- This release focuses on developer ergonomics (CLI UX, logging clarity), operational robustness (health probes, pagination fix), and improved reproducibility (git registry metadata & latest local-dev model resolution).
- Fix Local Runner CLI command [(#765)] (#765)
- update protocol and grpc versions [(#763)] (#763)
- fix num_threads setting fix pip checks [(#752)] (#752)
- fix pip checks when cache is broken [(#751)] (#751)
- fix usage setting on openai responses [(#750)] (#750)
- add stream_options validation for internal streaming model upload [(#742)] (#742)
- add packaging dependency that was missing [(#743)] (#743)
- always return JSON errors on openAI calls [(#744)] (#744)
- use 32 threads by default [(#735)] (#735)
- [PR-754] Fix ruff and dependencies-related issues [(#737)] (#737)
- [PR-768]: Fix Model Upload Deployment [(#739)] (#739)
- [PR-765] Fix wrong url for python SDK in README [(#734)] (#733)
- [PR-734] Use Method signature for local-runner [(#718)] (#718)
- Prevent Dockerfile overwrite during model upload with user confirmation [(#715)] (#715)
- quickfix for local runner signatures [(#732)] (#732)
- skip code generation when context is None [(#730)] (#730)
- pipeline_steps should be used in templates [(#728)] (#728)
- Fix nodepool creation [(#729)] (#729)
- Fix pipeline status code checks [(#727)] (#727)
- various fixes for pipelines [(#726)] (#726)
- Add list / ls CLI command for pipeline and pipelinestep [(#667)] (#667)
- Fix PAT account settings link [(#724)] (#724)
- Added support for verbose logging of Ollama [(#717)] (#717)
- Improve error messages with pythonic models [(#721)] (#721)
- Improve login logging experience [(#719)] (#719)
- Improve Local Runner Logging [(#720)] (#720)
- Add CLI config context support to BaseClient authentication [(#704)] (#704)
- live logging functionality for model runner [(#711)] (#711)
- Unify Context Management Under a Single config Command [(#709)] (#709)
- Add func to return both stub and channel [(#713)] (#713)
- Added local-runner requirements validation step [(#712)] (#712)
- Improve URL Download error handling [(#710)] (#710)
- Added Playground URL to Local-Runner Logs [(#708)] (#708)
- Unit tests for toolkits [(#639)] (#639)
- Improve Local-Runner CLI Logging [(#706)] (#706)
- Improve client script formatting (black linter formatting) [(#705)] (#705)
- Add github folder download support and toolkit option in model init [(#699)] (#699)
- Improve Handling for PAT and USER_ID [(#702)] (#702)
- Fixed flag for local runner threads, add user validation error [(#698)] (#698)
- Added PAT token validation during clarifai login command [(#697)] (#697)
- Fixed Local Runners Name across SDK [(#695)] (#695)
- Added default template for ollama models in the local-runner using `model init` command [(#693)] (#693)
- Fixed `pipelinestep upload` command to parse all compute-info params and preserve user Dockerfile
- Fixed base model template import & return issues [(#690)] (#690)
- Add `pool_size` flag defaulting to 1 for local dev runner threads [(#689)] (#689)
- Updated local-runner constants [(#684)] (#684)
- Added `--version` flag support to the Clarifai CLI [(#678)] (#678)
- Ensured better handling of `model_type_id` and improved configuration management [(#676)] (#676)
- Added support for specifying a `deployment_user_id` in the Model class to enhance runner selection functionality [(#675)] (#675)
- Added functionality to initialize a model directory from a GitHub repository, enhancing flexibility and usability in `model init` command [(#674)] (#674)
- Fixed CLI PATH for Windows [(#672)] (#672)
- Fixed code generation script [(#671)] (#671)
- Added an alias for the pipelinestep CLI command and significantly improved test coverage for the `clarifai.runners.pipeline_steps` module [(#665)] (#665)
- Improved CLI documentation and added descriptive help messages for various model-related commands [(#663)] (#663)
- Number of threads used for GRPC Server default to CLARIFAI_NUM_THREADS and 32 otherwise [(#661)] (#661)
- Use Configuration contexts in Model Upload CLI [(#649)] (#649)
- Add pipeline run CLI similar to model predict [(#644)] (#644)
- Update requirements.txt for protocol version [(#668)] (#668)
- Per-output token context tracking for batch operations
- New `set_output_context()` method for models to specify token usage per output
- Improved token usage tracking in ModelClass with thread-local storage
- Enhanced batch processing support with ordered token context queue
- Token context ordering in batch operations using FIFO queue approach
- Temporarily disabled `test_client_batch_generate` while implementing token tracking features
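A rough sketch of the FIFO token-context idea (simplified, hypothetical internals, not the actual `ModelClass` implementation): each thread queues one context per output it produces, and contexts are attached to outputs in the same order:

```python
# Per-output token tracking with thread-local storage and a FIFO queue.
import threading
from collections import deque

_local = threading.local()

def set_output_context(prompt_tokens, completion_tokens):
    """Record token usage for the next output in this thread."""
    if not hasattr(_local, "queue"):
        _local.queue = deque()
    _local.queue.append(
        {"prompt_tokens": prompt_tokens, "completion_tokens": completion_tokens}
    )

def pop_output_context():
    """Attach the oldest recorded context to the next emitted output."""
    if getattr(_local, "queue", None):
        return _local.queue.popleft()
    return None

set_output_context(10, 5)   # context for output 1
set_output_context(12, 7)   # context for output 2
print(pop_output_context()["prompt_tokens"])      # 10 - FIFO order
print(pop_output_context()["completion_tokens"])  # 7
```

Thread-local storage keeps concurrent requests from interleaving their contexts, while the queue preserves per-output ordering within a batch.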
- fix legacy proto support [(#636)] (#636)
- Added authentication support to URL fetcher for SDH-protected URLs [(#647)] (#647)
- Fixes AMD-related configuration by updating image versioning, introducing an AMD-specific Torch image [(#641)] (#641)
- Fix code snippets and Added code snippet test [(#638)] (#638)
- Add CLI command for pipeline upload with orchestration and validation [(#634)] (#634)
- Add list models information in CLI and method [(#640)] (#640)
- Show a terminal prompt asking users if they want to create a new app when the specified app does not exist [(#637)] (#637)
- Asyncify predict endpoints v2 [(#588)] (#588)
- Added Model Utils in SDK [(#631)] (#631)
- Use model auth to set runner [(#632)] (#632)
- Add support for Clarifai Pipeline Steps Upload similar to Model Upload [(#621)] (#621)
- improve local dev and url helper (#630)
- use uv in the build process (#626)
- Removed an unused parameter in VisualClassifier class [(#622)] (#622)
- Add support for `/responses`, `/embeddings`, and `/images/generations` endpoints to the OpenAI class [(#619)] (#619)
- Fixed data display issue and updated openai params [(#618)] (#618)
- Add back in pretrained model config [(#616)] (#616)
- Updated Model Upload section in Readme [(#613)] (#613)
- Add clarifai model init to CLI to create default files for model upload [(#611)] (#611)
- Fix issue with model upload [(#612)] (#612)
- Improve usage of clarifai config in urls [(#608)] (#608)
- Update code snippets for MCP / OpenAI [(#607)] (#607)
- Fixed Model Upload [(#606)] (#606)
- Fixed `MCPModelClass` notifications bug [(#602)] (#602)
- Improved the `OpenAIModelClass` to streamline request processing, add modularity, and simplify parameter extraction and validation [(#601)] (#601)
- Fixed a bug in the `OpenAIModelClass` to return the full json responses [(#597)] (#597)
- Cleanup fastmcp [(#596)] (#596)
- Added `OpenAIModelClass` to allow developers to create models that interact with OpenAI-compatible API endpoints [(#594)] (#594)
- Fixed openai messages Utils function and code-snippet function [(#595)] (#595)
- Simplified openai client wrapper functions (#562)
- MCP integration, CLI commands and improved environment variable handling (#592)
- Fix Pythonic bugs (#586)
- Addition of Base Class for Visual Classifier Models (#585)
- Print script after model upload (#583)
- Add AMD changes (#581)
- Removed duplicate model downloads and improved error logging for gated HF repo. (#564)
- Addition of Base Class for Visual Detector Models (#563)
- remove rich from req (#560)
- Param for Inference params in model.py and FE (#567)
- Fixed Streamlit Query Parameters retrieval issue in ClarifaiAuthHelper. (#577)
- Fixed pyproject.toml. (#575)
- Fixed local dev runners. (#574)
- Fixed issue of runner ID of local dev runners. (#573)
- Switched to `uv` and `ruff` to speed up tests and formatting & linting. (#572)
- Changed some `==` to `is`. (#570)
- Local dev runner setup using CLI is easier now. (#568)
- Fixed indirect inheritance from ModelClass. (#566)
- We support pythonic models now. See runners-examples (#525)
- Fixed failing tests. (#559)
- CLI is now about 20x faster for most operations (#555)
- CLI now has config contexts, more to come there... (#552)
- Improve error messages with missing PAT (#548)
- Fix model builder return args (#547)
- Removed HF loader `config.json` validation for all clarifai Model type ids [(#543)] (#543)
- Added Regex Patterns to Filter Checkpoint Files to Download [(#542)] (#542)
- Added validation for CLI config [(#540)] (#540)
- Fixed docker image name and added `skip_dockerfile` option to `test-locally` subcommand of model CLI [(#526)] (#526)
- Improved CLI login module [(#535)] (#535)
- Updated the CLI to test out model locally independent of remote access [(#534)] (#534)
- Modified the default value of `num_threads` field [(#533)] (#533)
- Dropped testing of python 3.8, 3.9, 3.10 [(#532)] (#532)
- Updated the deployment testing config [(#531)] (#531)
- Removed the model_path argument to CLI [(#529)] (#529)
- Added configuration for multi-threaded runners [(#524)] (#524)
- Adds support for local dev runners from CLI [(#521)] (#521)
- Use the non-runtime path for tests [(#520)] (#520)
- Fix local tests [(#518)] (#518)
- Catch additional codes that models have at startup [(#517)] (#517)
- Introduce 3 times when you can download checkpoints [(#515)] (#515)
- Fix dependency parsing [(#514)] (#514)
- Use new base images and fix clarifai version [(#513)] (#513)
- Don't validate API in server.py [(#509)] (#509)
- Fixed Docker test locally [(#505)] (#505)
- Fixed HF checkpoints error [(#504)] (#504)
- Fixed Deployment Tests [(#502)] (#502)
- Fixed Issue with Filename as Invalid Input ID [(#501)] (#501)
- Update Model Predict CLI [(#500)] (#500)
- Tests Health Port to None [(#499)] (#499)
- Refactor model class and runners to be more independent [(#494)] (#494)
- Add storage request inferred from tar and checkpoint size [(#479)] (#479)
- Updated model upload experience [(#498)] (#498)
- Added Model Upload Tests [(#495)] (#495)
- Updated Torch version Images and Delete tar file for every upload [(#493)] (#493)
- Added Tests for Model run locally [(#492)] (#492)
- Added CLARIFAI_API_BASE in the test container [(#491)] (#491)
- remove triton requirements [(#490)] (#490)
- Added tests for downloads and various improvements [(#489)] (#489)
- Added tests for downloads and various improvements [(#488)] (#488)
- Take user_id from Env variable [(#477)] (#477)
- Added HF token Validation [(#476)] (#476)
- Fix Model prediction methods when configured with a dedicated compute_cluster_id and nodepool_id [(#475)] (#475)
- Fix model upload issues [(#474)] (#474)
- Improved error logging [(#473)] (#473)
- Changed labels to optional in Dataloaders to support Data Ingestion pipelines in clarifai-datautils library [(#471)] (#471)
- Added model building logs [(#467)] (#467)
- Added user_id to RAG class [(#466)] (#466)
- Added Compute Orchestration to README.md [(#461)] (#461)
- Added Testing and Running a model locally within a container [(#460)] (#460)
- Added CLI support for Model Predict [(#459)] (#459)
- Updated Dockerfile for Sglang [(#468)] (#468)
- Updated available torch images and some refactoring [(#465)] (#465)
- Fixed issue for Model local testing [(#469)] (#469)
- Removed protobuf from requirements to resolve conflicts with clarifai-grpc [(#464)] (#464)
- Fixed issue of bounding box info edge cases [(#457)] (#457)
- Supports downloading data.parts as bytes [(#456)] (#456)
- Changed default env to prod for Model upload [(#455)] (#455)
- Added tests for all stream and generate methods [(#452)] (#452)
- Added Codecoverage test report in PRs [(#450)] (#450)
- Fixed code bug in runners selection using Deployment [(#446)] (#446)
- Fixed id bug in multimodal loader during deletion of failed inputs [(#445)] (#445)
- Added list inputs functionality to Dataset Class [(#443)] (#443)
- Added delete annotations functionality to Input Class [(#442)] (#442)
- Added Dockerfile template based on new base images by parsing requirements [(#439)] (#439)
- Added a check for base url parameter [(#438)] (#438)
- Added CLI support for Compute Orchestration resources (Compute cluster, Nodepool, Deployment) [(#436)] (#436)
- Added tests for CRUD Operations of CO Resource - Deployment [(#431)] (#431)
- Added request-id-prefix header to SDK requests to improve SDK monitoring [(#430)] (#430)
- Added CLI for Model upload [(#429)] (#429)
- Fixed model servicer for Model Upload [(#428)] (#428)
- Added python versions badge to README.md [(#427)] (#427)
- Removed stream tests till stream API is fixed [(#426)] (#426)
- Removed unnecessary prefixes to concept ID added from SDK [(#424)] (#424)
- Upgraded llama-index-core lib version as a security update [(#423)] (#423)
- Added metadata in exported dataset annotations files
- Upgrade to clarifai-grpc 10.9.11
- Improve UX for model upload and fix runners tests [(#420)] (#420)
- Added functionality to Merge Datasets [(#419)] (#419)
- Fix bugs for model upload [(#417)] (#417)
- Fix download_checkpoints and fix run model locally [(#415)] (#415)
- Improve handling missing huggingface_hub package [(#412)] (#412)
- Implement script that allows users to test and run a runner's model locally [(#411)] (#411)
- Improve Model upload experience for cv models [(#408)] (#408)
- Improved the Test Coverage for Dataloaders & Evaluations modules of SDK [(#409)] (#409)
- New streaming predict endpoints (#407)
- New dockerfile for model upload and improvements to upload flow (#406)
- Bug fixes for logger (#405)
- Added CRUD operations for Compute Orchestration resources (Compute cluster, Nodepool, Deployment) [(#402)] (#402)
- Improved logging and fixed issues with downloading checkpoints (#403)
- Refactored Model upload and download checkpoints at build time during model upload (#400)
- Added fsspec dependency which would be required in runners for model upload (#398)
- Added MultiModalLoader support (#384)
- Deleted model_serving in this SDK, after the Runners PR has been merged (#391)
- Added validation check in HF loader to verify the checkpoints really exist at the checkpoint path (#396)
- Remove pydantic dependency from runners in clarifai-python (#395)
- Always use the JSON logger in k8s (#393)
- Added a JSON logger so it is convenient to get logs into logging stacks (#392)
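As an aside, the JSON-logging idea above can be sketched with the standard library alone. The `JsonFormatter` class and its field names below are illustrative assumptions, not the SDK's actual implementation:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line (illustrative sketch)."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


def make_json_logger(name: str = "demo") -> logging.Logger:
    """Return a logger whose records are emitted as JSON lines."""
    logger = logging.getLogger(name)
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger.handlers = [handler]
    logger.setLevel(logging.INFO)
    return logger
```

One-record-per-line JSON is what log aggregators (Loki, CloudWatch, ELK) parse most easily, which is the point of shipping such a formatter by default in k8s.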
- Added HuggingFaceLoader and added methods in model_upload for download_checkpoints and handling concepts (#390)
- Integrate clarifai-protocol, which is used to upload models to the platform (#389)
- Tests Addition for App, Dataset, Input, Model Classes (#386)
- Upgrade to clarifai-grpc 10.8.7
- Upgrade to clarifai-grpc 10.8.6
- Improved Model Export functionality by adding a `Ranges` header (#385)
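The chunked, resumable download that byte-range headers enable can be sketched with a small helper that computes one `Range` header per chunk. This is a hypothetical helper for illustration, not the SDK's code:

```python
def range_headers(total_size: int, chunk_size: int):
    """Yield one HTTP Range header per chunk of a `total_size`-byte
    file, downloaded `chunk_size` bytes at a time (illustrative)."""
    for start in range(0, total_size, chunk_size):
        # HTTP byte ranges are inclusive on both ends.
        end = min(start + chunk_size, total_size) - 1
        yield {"Range": f"bytes={start}-{end}"}
```

Each header can then be passed to an ordinary HTTP GET; a server that honors `Range` returns only the requested slice, so a failed export download can resume from the last completed chunk instead of restarting.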
- Python SDK usage issue on Windows OS due to upgrade in Protobuf library (#380)
- Dataset Annotations bug that returns None if class annotation is not present during export (#382)
- Patch operations for Models and Workflows (#370)
- Addition of Concept Relations operations (#371)
- Addition of App's Input Count functionality (#372)
- Dataset Annotations bug that returns either class annotation or detection annotation during export (#375)
- Model Export Bug by adding authentication headers (#373)
- Patch operations for Apps and Datasets (#364)
- RAG class to support env variable for the `user_id` param (#357)
- Search query bug that returned duplicated triplets, by removing `PostAnnotationsSearches` and replacing it with `PostInputsSearches` (#366)
- Search request potentially blocked users from combining different types of filters; fixed by supporting annotation and input proto filters (#366)
- Patch operations for input annotations and concepts (#354)
- Getting user id from ENV variables for RAG class (#358)
- Improved rich logging by adding a width setting (#359)
- Dataset export functionality - Added authentication headers to download requests, better exception formatting (#356)
- Moved some convenience features to CLI only to avoid writes to disk (#353)
- Text Features to add random ID as input if input ID is not provided in Dataloader (#351)
- Added BaseClient.from_env() and some new endpoints (#346)
- Upgrade to clarifai-grpc 10.5.0 (#345)
- Upgrade to clarifai-grpc 10.3.4 (#343)
- RAG apps, workflows, and other automatically set up resources now use UUIDs in their IDs instead of timestamps to avoid races (#343)
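The race the UUID change avoids is easy to see: two processes creating a resource in the same second derive the same timestamp-based ID. A random UUID suffix makes collisions vanishingly unlikely. A minimal sketch, where the helper name is an assumption:

```python
import uuid


def unique_resource_id(prefix: str) -> str:
    """Build a resource ID with a random UUID suffix instead of a
    timestamp, so concurrent creations cannot collide (illustrative)."""
    return f"{prefix}-{uuid.uuid4().hex[:12]}"
```

Two calls at the same instant yield different IDs, whereas a `time.time()`-based suffix would not.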
- Fixed issue with `get_upload_status` overriding the `log_warnings` table in the log file (#342)
- Use UUIDs in tests to avoid race conditions with timestamps (#343)
- Pinned the `schema` package to 0.7.5, as newer versions introduced breaking changes (#343)
- Flag to download model: if the `export_dir` param in `Model().export()` is provided, the exported model is saved in the specified directory; otherwise, the export status is shown (#337)
- Label ID support in Dataloaders (`label_ids` param) and `get_proto` functions in the `Inputs` class (#338)
- Logger for `Inputs().upload_annotations` to show full details of failed annotations (#339)
- RAG upload bug by changing llama-index-core version to 0.10.24 in ImportError message (#336)
- Pagination feature in Search: added a pagination param in the `Search()` class and included `per_page` and `page_no` params in `Search().query()` (#331)
- `algorithm` param in `Search()` (#331)
- Model Upload CLI Doc (#329)
- `RAG.setup()` bug where deleting a specific workflow and creating another with the same ID failed; fixed by adding a timestamp when creating a new prompter model (#332)
- `RAG.upload()` to support a folder of text files (#332)
- Root certificate support to establish secure gRPC connections, by adding a `root_certificates_path` param to all the classes and the auth helper, and updating gRPC to the latest version (#319)
- Missing VERSION and requirements.txt files added to setup.py (#320)
- Limit on the max upload batch size for the `Inputs().upload_inputs()` function; also fixed the model version ID parameter inconsistency between `App.model()` and `Model()` (#317)
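Capping an upload batch size typically amounts to splitting the input list before issuing requests. A minimal sketch, where the function name and the 128-item default cap are assumptions, not the SDK's real limit:

```python
def chunked(items: list, batch_size: int = 128) -> list:
    """Split `items` into sublists of at most `batch_size` elements,
    as an upload path might before sending each batch (illustrative)."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
```

Sending bounded batches keeps each request under server payload limits and lets a failure be retried per batch rather than for the whole upload.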
- Training status bug, by removing the constraint that the user specify `model_type_id` for training logs and using `load_info()` to get model version details (#321)
- Create workflow bug which occurred due to the model version ID parameter change in #317 (#322)
- Unnecessary infra alerts by adding wait time before deleting a model in model training tests (#326)
- Runners from the SDK (#325)
- Dataset version ID support in `app.dataset()` and `Dataset()` (#315)
- Dataset Export function to internally download the dataset archive zip via `Dataset.archive_zip()` (#303)
- The backoff iterator to support a custom starting count, so different processes can have different starting wait times (#313)
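A backoff iterator with a configurable starting count could look like the sketch below; the class name and parameters are assumptions for illustration, not the SDK's actual API:

```python
class BackoffIterator:
    """Exponential backoff whose starting count is configurable, so
    different processes can begin at different wait times (illustrative)."""

    def __init__(self, start_count: int = 0, base: float = 0.1, cap: float = 10.0):
        self.count = start_count
        self.base = base
        self.cap = cap

    def __iter__(self):
        return self

    def __next__(self) -> float:
        # Wait doubles on every step, clamped to `cap` seconds.
        wait = min(self.base * (2 ** self.count), self.cap)
        self.count += 1
        return wait
```

Starting one process at `start_count=0` and another at `start_count=3` staggers their first waits (0.1 s vs 0.8 s), reducing contention when both retry against the same resource.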
- Removed the `base_embed_model` key from the params.yaml file, since model training by default uses the base embed model set for the app, so there is no need to define it again in the params file (#314)
- File not found error in model serving CLI (#305)
- Workflow YAML schema bug (#308)
- Base URL passing bug (#308)
- Eval Endpoints (#290)
- Eval Utils (#296)
- Eval Tests (#297)
- Support session token (#300)
- Dataset upload Enhancements (#292)
- Concept ID check before model training (#295)
- RAG setup debug (#298)
- Requirements Update (#299)
- Model Upload v2 CLI (#269)
- Support Existing App in RAG (#275)
- Support RAG Prompter kwargs (#280)
- Custom Workflow id support in RAG (#291)
- Model Template Change in Model Train Test (#273)
- Dataset Upload summary fix (#282)
- Update Model Serving Docs (#287)
- Modified `process_response_keys` functions to fetch metadata info (#270)
- Assert user_id condition for RAG (#268)
- Changed demo link in README (#260)
- Fixed Multimodal input bug (#261)
- Workflow predict retry time to 10 minutes (#266)
- Update clarifai-grpc to 10.0.1 (#267)
- Test Cases for Model Upload (#256)
- Download Inputs functionality (#263)
- Added RAG base class (#262)
- RAG Chat Method (#264)
- RAG Upload Method (#265)
- Model upload examples moved to examples repo (#258)
- Use specific URL method for apps (#257)
- Loosen requirement constraints (#243)
- Update clarifai-grpc to 9.11.5
- Support Rank for PostInputsSearch (#255)
- CocoDetectionDataloader bug (#241)
- Codeql Change (#241)
- Separate tests requiring secrets (#233)
- SDK Pending tasks (#232)
- Add retry for workflow predict
- Add constants for max inputs count in predict
- Change annotation proto to bbox
- Add search to README.md
- Add CHANGELOG.md
- Updated runner logic with parallel and error catching (#238)
- Removing internal_only Training Params (#231)
- Remove pytest requirement (#225)
- Remove omegaconf requirement (#235)
- Update clarifai-grpc to 9.11.0
- Support multimodal inputs for inference (#239)
- Ensure support for Python 3.10-3.12 (#226)
- Add MANIFEST.in back to include .css files
- Support Dataset Upload Status
- Support PAT as arg
- SDK cleanup (docs, examples, symlink to clarifai_utils, clarifai.auth)
- Refactor dataset upload process (loaders, dataloader)
- Fix Search top_k bug
- Model Training in SDK.
- Tests for Model Training.
- Fix base_url bug in passing while chained
- Moving Pycocotools requirement to extras (`clarifai[all]`).
- Support for model inference params
- PostInputsSearch Support
- Bump clarifai_grpc==9.10.0
- Pagination in listing
- Support list_annotations
- Supports custom metadata in dataloader, upload_from_csv
- Set clarifai_grpc to 9.8.1
- Reuse requirements.txt in setup.py
- Support Annotation Download
- Fix critical Version file not found bug in 9.9.1
- Reuse Version number from Version file
- Support Vector Search
- Workflow Create Bugs
- Support Workflow Create, Export
- Bump clarifai_grpc to 9.8.1
- Bump clarifai_grpc to 9.8.0
- Bump clarifai_grpc to 9.7.4
- Model Serving Support
- PyPi build issues