AI Dev Kit: Unified Interface Architecture
Executive Summary
The Problem
Today, the AI Dev Kit offers only MCP tools as an interface. MCP is powerful for AI agents, but it comes with friction:
- Context loading: MCP tools must be loaded into the AI's context, consuming tokens
- Stability: MCP server connections can be unstable (stdio transport, process lifecycle)
- Deployment complexity: Requires MCP-compatible clients (Claude Code, etc.)
- Not human-friendly: Users can't directly test or script operations
The Solution
We're adding a CLI interface (`aidevkit`) that provides the exact same tools as MCP:
aidevkit lakebase create my-db --capacity CU_2
aidevkit sql execute "SELECT * FROM ..."
aidevkit jobs run 12345 --wait
How It Works
| Layer | Role |
|---|---|
| `databricks-tools-core` | All business logic (shared) |
| `databricks-mcp-server` | Thin wrapper exposing tools via MCP |
| `aidevkit-cli` | Thin wrapper exposing tools via CLI |
Both interfaces call the same core logic - just different entry points.
Installation Choice
At install time, users choose:
- MCP only - Current behavior, for AI-first users
- CLI only - For terminal users, scripts, CI/CD
- Both - Full flexibility
Skills Strategy
- Skills are written in CLI syntax (human-readable, testable)
- If user chose MCP, a simple search/replace transforms CLI commands to MCP tool references
- Low skill impact - just syntax, same operations
- MCP tool names unchanged - no breaking changes for existing MCP users
Why This Is Non-Disruptive
| Aspect | Impact |
|---|---|
| MCP users | Zero change - same tools, same names, same behavior |
| Backend code | Reorganization only - logic moves to core where it belongs |
| Skills | Syntax update, same content |
| Migration | Optional - users can stay on MCP indefinitely |
What We're Really Doing
The MCP project accumulated business logic that should have been in databricks-tools-core. We're:
- Moving logic back to core (where it belongs)
- Making MCP a thin wrapper (as it should be)
- Adding CLI as another thin wrapper (same pattern)
Result: Cleaner architecture, user choice, same functionality.
1. Current State
Architecture Today
ai-dev-kit/
├── databricks-tools-core/ # Low-level SDK wrappers
│ └── databricks_tools_core/
│ └── lakebase/
│ └── lakebase.py # Granular: create_lakebase_instance(), etc.
│
├── databricks-mcp-server/ # MCP server (business logic embedded)
│ └── databricks_mcp_server/
│ └── tools/
│ └── lakebase.py # @mcp.tool with create_or_update logic INSIDE
│
├── databricks-skills/ # Knowledge/patterns for AI
│ └── databricks-lakebase-*/
│ └── skill.md # References MCP tools directly
Problems with Current State
| Problem | Impact |
|---|---|
| Business logic locked in MCP | Can't reuse create-or-update patterns outside the AI context |
| No CLI option | Humans must use AI or the raw SDK for operations |
| Skills are MCP-specific | Skills can't double as documentation for CLI users |
| Tight coupling | Adding new interfaces requires duplicating logic |
2. Target Architecture
New Structure
ai-dev-kit/
├── databricks-tools-core/ # Core library (ALL business logic)
│ └── databricks_tools_core/
│ ├── lakebase/
│ │ ├── lakebase.py # Low-level SDK wrappers (unchanged)
│ │ └── workflows.py # NEW: High-level business logic
│ └── skill_transformer.py # NEW: Simple CLI → MCP text replacement
│
├── databricks-mcp-server/ # THIN MCP wrapper
│ └── databricks_mcp_server/
│ └── tools/
│ └── lakebase.py # @mcp.tool → calls workflows.py
│ # + CLI_MAPPING dict at bottom
│
├── aidevkit-cli/ # NEW: THIN CLI wrapper
│ └── aidevkit/
│ └── commands/
│ └── lakebase.py # @app.command → calls workflows.py
│
├── databricks-skills/ # Skills (CLI syntax - universal)
│ └── databricks-lakebase-*/
│ └── skill.md # Uses CLI commands (transformed for MCP)
Layer Diagram
┌─────────────────────────────────────────────────────────────────┐
│ User Interfaces │
├────────────────────────────┬────────────────────────────────────┤
│ databricks-mcp-server │ aidevkit-cli │
│ @mcp.tool wrappers │ @app.command wrappers │
│ (Claude, AI agents) │ (humans, scripts, CI) │
│ │ │
│ ┌──────────────────────┐ │ ┌──────────────────────────────┐ │
│ │ 10-15 lines per tool │ │ │ 15-20 lines per command │ │
│ │ Docstrings for AI │ │ │ Typer decorators + output │ │
│ │ Calls workflows │ │ │ Calls workflows │ │
│ │ + CLI_MAPPING dict │ │ │ │ │
│ └──────────────────────┘ │ └──────────────────────────────┘ │
└────────────────────────────┴────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ databricks-tools-core │
│ │
│ workflows.py ───────────────────────────────────────────────── │
│ • create_or_update_database() ← Business logic lives HERE │
│ • get_database() │
│ • delete_database() │
│ │
│ lakebase.py (existing) ─────────────────────────────────────── │
│ • Low-level SDK wrappers (unchanged) │
└─────────────────────────────────────────────────────────────────┘
3. CLI ↔ MCP Mapping (Simple Approach)
The Problem
Skills use CLI syntax. MCP users need tool names. How do we map them?
The Solution: Co-located Dict
Each MCP tool file contains a simple mapping dict at the bottom:
# databricks-mcp-server/tools/lakebase.py

@mcp.tool(timeout=120)
def create_or_update_lakebase_database(...):
    return create_or_update_database(...)

@mcp.tool(timeout=30)
def get_lakebase_database(...):
    return get_database(...)

# ... more tools ...

# CLI mapping - lives with the tools it maps
CLI_MAPPING = {
    "lakebase create": "create_or_update_lakebase_database",
    "lakebase get": "get_lakebase_database",
    "lakebase delete": "delete_lakebase_database",
    "lakebase create-branch": "create_or_update_lakebase_branch",
    "lakebase delete-branch": "delete_lakebase_branch",
    "lakebase credential": "generate_lakebase_credential",
}
Aggregated Mapping
# databricks-mcp-server/cli_mapping.py
from .tools import lakebase, sql, jobs, apps, pipelines, serving

COMMAND_TO_TOOL = {
    **lakebase.CLI_MAPPING,
    **sql.CLI_MAPPING,
    **jobs.CLI_MAPPING,
    **apps.CLI_MAPPING,
    **pipelines.CLI_MAPPING,
    **serving.CLI_MAPPING,
}
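One caveat of `**` merging: if two modules define the same CLI command, the later one silently wins. A small sanity check (hypothetical, not part of the source tree) can turn a silent override into an explicit error:

```python
from collections import Counter

def check_no_duplicate_commands(*mappings: dict) -> None:
    """Fail loudly if any CLI command appears in more than one CLI_MAPPING."""
    counts = Counter(cmd for mapping in mappings for cmd in mapping)
    duplicates = sorted(cmd for cmd, n in counts.items() if n > 1)
    if duplicates:
        raise ValueError(f"CLI commands mapped more than once: {duplicates}")
```

Calling it with each module's `CLI_MAPPING` before building `COMMAND_TO_TOOL` catches collisions at import time or in a unit test.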
Skill Transformation
Simple search/replace - no param parsing needed:
# skill_transformer.py
import re

from databricks_mcp_server.cli_mapping import COMMAND_TO_TOOL

def transform_for_mcp(content: str) -> str:
    """Replace 'aidevkit <command>' with 'MCP tool `<name>`', keeping arguments."""
    # Longest commands first, so "lakebase create-branch" is never
    # shadowed by the shorter "lakebase create".
    for cli_cmd in sorted(COMMAND_TO_TOOL, key=len, reverse=True):
        # (?=\s|$) stops the match at a token boundary; arguments
        # after the command are left in place untouched.
        pattern = rf'aidevkit {re.escape(cli_cmd)}(?=\s|$)'
        content = re.sub(pattern, f'MCP tool `{COMMAND_TO_TOOL[cli_cmd]}`', content)
    return content
Before (skill source):
aidevkit lakebase create my-db --capacity CU_2
After (MCP mode):
MCP tool `create_or_update_lakebase_database` my-db --capacity CU_2
The AI figures out param mapping from the arguments. No complex registry needed.
Why This Works
- Co-located: Mapping lives next to the MCP tools (easy to keep in sync)
- Simple: Just a dict, no introspection or decorators
- Maintainable: When you add/change a tool, update the mapping in the same file
- MCP names unchanged: Existing MCP tool names preserved
4. The CLI-First Skill Strategy
Core Principle
CLI is the universal language. Skills are always written with CLI commands because:
- Human-readable (works as documentation)
- Testable (users can copy-paste and run)
- Concrete (no abstract placeholders)
MCP instructions are derived via simple text replacement when needed.
Example Skill
# Lakebase Database Management
## Create a Database
```bash
aidevkit lakebase create my-analytics --capacity CU_2
```

## Check Status

```bash
aidevkit lakebase get my-analytics
```

## Clean Up

```bash
aidevkit lakebase delete my-analytics --force
```
When Loaded in MCP Mode
# Lakebase Database Management
## Create a Database
MCP tool `create_or_update_lakebase_database` my-analytics --capacity CU_2
## Check Status
MCP tool `get_lakebase_database` my-analytics
## Clean Up
MCP tool `delete_lakebase_database` my-analytics --force
5. Implementation Phases
Phase 1: Extract Business Logic to Core
Goal: Move logic from MCP tools to databricks-tools-core workflows.
Files to create:
- `databricks-tools-core/databricks_tools_core/lakebase/workflows.py`
- `databricks-tools-core/databricks_tools_core/apps/workflows.py`
- (Similar for each domain)
Pattern:
# workflows.py - contains all business logic
from typing import Any, Callable, Dict, Optional

def create_or_update_database(
    name: str,
    type: str = "provisioned",
    capacity: str = "CU_1",
    on_resource_created: Optional[Callable] = None,
) -> Dict[str, Any]:
    """Create the database if it doesn't exist, otherwise update it."""
    existing = _find_by_name(name)
    if existing:
        return update_database(...)
    result = create_database(...)
    if on_resource_created:
        on_resource_created("lakebase_instance", name, name)
    return result
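`_find_by_name` is referenced but not shown. A minimal sketch (hypothetical; the real helper would call a low-level listing wrapper from `lakebase.py`, injected here as a callable to keep the snippet self-contained):

```python
from typing import Any, Callable, Dict, Iterable, Optional

def find_by_name(
    name: str,
    list_databases: Callable[[], Iterable[Dict[str, Any]]],
) -> Optional[Dict[str, Any]]:
    """Return the first database whose 'name' field matches, or None."""
    for db in list_databases():
        if db.get("name") == name:
            return db
    return None
```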
MCP tool becomes thin wrapper:
# MCP tools/lakebase.py
from databricks_tools_core.lakebase import create_or_update_database

@mcp.tool(timeout=120)
def create_or_update_lakebase_database(name: str, type: str = "provisioned", ...):
    """[Full docstring for Claude]"""
    return create_or_update_database(
        name=name,
        type=type,
        on_resource_created=_track_resource,
    )

# Add CLI mapping
CLI_MAPPING = {
    "lakebase create": "create_or_update_lakebase_database",
    # ...
}
Phase 2: Create CLI Package
Goal: New aidevkit-cli package using Typer.
Structure:
aidevkit-cli/
├── pyproject.toml
├── aidevkit/
│ ├── __init__.py
│ ├── __main__.py
│ ├── cli.py
│ ├── output.py
│ └── commands/
│ ├── lakebase.py
│ ├── sql.py
│ ├── jobs.py
│ └── apps.py
Example command:
# commands/lakebase.py
import typer

from aidevkit.output import output_result
from databricks_tools_core.lakebase import create_or_update_database

app = typer.Typer()

# DatabaseType and Capacity are str-valued Enums defined alongside the commands.

@app.command("create")
def create_cmd(
    name: str,
    type: DatabaseType = DatabaseType.provisioned,
    capacity: Capacity = Capacity.CU_1,
    json_output: bool = typer.Option(False, "--json"),
):
    """Create or update a Lakebase database."""
    result = create_or_update_database(name=name, type=type.value, capacity=capacity.value)
    output_result(result, json_output)
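The `output_result` helper lives in `output.py` in the tree above; the document doesn't show it, but a plausible sketch (hypothetical implementation) that honors the `--json` flag:

```python
# aidevkit/output.py (hypothetical implementation)
import json
from typing import Any, Dict

def output_result(result: Dict[str, Any], as_json: bool) -> None:
    """Print a result dict as JSON, or as aligned key/value lines for humans."""
    if as_json:
        print(json.dumps(result, indent=2, default=str))
        return
    width = max((len(key) for key in result), default=0)
    for key, value in result.items():
        print(f"{key:<{width}}  {value}")
```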
Phase 3: Add Skill Transformer
Goal: Simple CLI → MCP text replacement for skills.
Files:
- `databricks-mcp-server/cli_mapping.py` (aggregates all CLI_MAPPING dicts)
- `databricks-tools-core/skill_transformer.py` (search/replace logic)
Phase 4: Update Installation
Goal: User chooses interface at install time.
echo "Select installation mode:"
echo " 1) MCP Server only (for Claude Code, AI agents)"
echo " 2) CLI only (for terminal, scripts, CI/CD)"
echo " 3) Both (recommended)"
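A possible continuation of the installer (package names come from this doc; the menu handling itself is illustrative, not the actual install.sh):

```shell
#!/bin/sh
# Map the installer's menu choice to the packages it should install.
packages_for_choice() {
  case "$1" in
    1) echo "databricks-mcp-server" ;;
    2) echo "aidevkit-cli" ;;
    3) echo "databricks-mcp-server aidevkit-cli" ;;
    *) echo "invalid" ;;
  esac
}

# install.sh would then do something like:
#   read -r choice
#   pip install $(packages_for_choice "$choice")
```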
Phase 5: Migrate Skills to CLI Syntax
Goal: Update existing skills to use CLI commands.
6. Domains to Refactor
| Domain | MCP Tools | CLI Commands | Priority |
|---|---|---|---|
| lakebase | 8 tools | `aidevkit lakebase *` | High |
| apps | 5 tools | `aidevkit apps *` | High |
| jobs | 4 tools | `aidevkit jobs *` | Medium |
| pipelines | 3 tools | `aidevkit pipelines *` | Medium |
| sql | 2 tools | `aidevkit sql *` | Medium |
| serving | 4 tools | `aidevkit serving *` | Low |
7. CLI Usage Examples
# Installation
pip install aidevkit-cli
# Database management
aidevkit lakebase create my-db --capacity CU_2
aidevkit lakebase get my-db
aidevkit lakebase get # List all
aidevkit lakebase delete my-db --force
# Autoscale branches
aidevkit lakebase create-branch myproj dev --min-cu 0.5 --max-cu 4
# SQL execution
aidevkit sql execute "SELECT * FROM catalog.schema.table"
# Jobs
aidevkit jobs run 12345 --wait
aidevkit jobs list --json
# Apps
aidevkit apps deploy ./my-app --name my-app
8. File Changes Summary
New Files
- `databricks-tools-core/databricks_tools_core/*/workflows.py` (per domain)
- `databricks-tools-core/databricks_tools_core/skill_transformer.py`
- `databricks-mcp-server/databricks_mcp_server/cli_mapping.py`
- `aidevkit-cli/` (entire new package)
Modified Files
- `databricks-mcp-server/databricks_mcp_server/tools/*.py` (thin wrappers + CLI_MAPPING)
- `databricks-skills/*/skill.md` (convert to CLI syntax)
- `install.sh` (add interface choice)
Unchanged
- `databricks-tools-core/databricks_tools_core/*/[domain].py` (low-level SDK wrappers)
- MCP tool names (API surface unchanged)
- `.mcp.json` configuration