The Test Local AI Script is a straightforward PowerShell utility designed for quickly verifying the functionality and connectivity of a local AI model hosted by LM Studio. This script sends a predefined conversational prompt to your LM Studio API endpoint and then displays the AI's response directly in the PowerShell console. It serves as an excellent basic health check for your local AI setup, allowing you to confirm that your LM Studio server is running, reachable, and successfully processing requests from a specific model.
Built by: Gemini
This script is a simple utility to test your local AI setup.
To run this script, you will need:
- Windows Operating System: Windows 7 or later.
- PowerShell 5.1 or newer: This script uses PowerShell's capabilities for web requests and JSON handling.
- LM Studio: The LM Studio application must be installed and running on your local machine, with a model loaded and its local server API enabled. The script is configured to use `http://192.168.10.100:1234` and the `google/gemma-3-12b` model. You may need to edit these values within the script to match your specific LM Studio setup.
- Internet Connection: Not strictly required if LM Studio is running entirely locally and not fetching remote resources, but good practice.
- Download: Download the `Test-LocalAI.ps1` script file.
- Unblock: Right-click the file, go to Properties, and click `Unblock` if the file was downloaded from the internet.
- Configure API URL and Model: Open the `Test-LocalAI.ps1` script in a text editor and adjust the `$apiUrl` variable and the `model` parameter within the `$body` payload to match your LM Studio server address and the specific model you wish to test.

```powershell
# 1. Define the server address from your LM Studio setup
$apiUrl = "http://YOUR_LM_STUDIO_IP_OR_HOSTNAME:YOUR_PORT/v1/chat/completions" # e.g., "http://localhost:1234/v1/chat/completions"

# ... (inside $body payload)
model = "YOUR_MODEL_IDENTIFIER" # e.g., "google/gemma-3-12b"
```
- Run: Execute the script from a PowerShell console.

```powershell
.\Test-LocalAI.ps1
```
When you run the script, it will perform the following actions:
- Send Prompt: It constructs a JSON payload with a predefined system prompt ("You are a helpful assistant for an 'Elite' software utility developer.") and a user prompt ("In three sentences, what makes PowerShell a powerful tool for Windows administration?").
- Invoke API: It sends this payload as an HTTP POST request to the specified LM Studio API URL.
- Display Response: Upon receiving a successful response from LM Studio, it extracts the AI's generated content and prints it to your PowerShell console.
- Error Reporting: If there's an issue connecting to LM Studio or receiving a response, it will display an error message with details.
This script is ideal for quickly confirming the operational status of your local AI server and specific models without needing a full chat client.
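The request flow described above can be sketched as a minimal PowerShell snippet. This is an illustrative reconstruction, not the script's exact source: the endpoint (`http://localhost:1234`) and model name are assumptions based on the examples in this README, and the prompts mirror the predefined ones described earlier.

```powershell
# Minimal sketch of the script's request flow (endpoint and model are assumptions).
$apiUrl = "http://localhost:1234/v1/chat/completions"

# Build the chat payload with the predefined system and user prompts.
$body = @{
    model    = "google/gemma-3-12b"
    messages = @(
        @{ role = "system"; content = "You are a helpful assistant for an 'Elite' software utility developer." },
        @{ role = "user";   content = "In three sentences, what makes PowerShell a powerful tool for Windows administration?" }
    )
} | ConvertTo-Json -Depth 5

try {
    # POST the payload and print the first completion's content.
    $response = Invoke-RestMethod -Uri $apiUrl -Method Post -ContentType "application/json" -Body $body
    Write-Host $response.choices[0].message.content
}
catch {
    # Surface connection or API errors with details.
    Write-Error "Failed to reach LM Studio: $($_.Exception.Message)"
}
```

Running this requires a live LM Studio server with the named model loaded; against an unreachable endpoint, the `catch` block reports the error instead.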
- LM Studio API Interaction: Directly communicates with your local LM Studio server via its HTTP API.
- Quick Connectivity Test: Provides a fast way to verify that your LM Studio instance is running and responding to API calls.
- Predefined Conversational Prompt: Comes with a ready-to-use system and user prompt for immediate testing.
- Targeted Model Testing: Allows specifying a particular AI model (e.g., `google/gemma-3-12b`) for interaction.
- Console Output: Displays the AI's response clearly in the PowerShell console.
- Basic Error Reporting: Catches common network and API errors, providing informative messages to the user.
- Customizable API Endpoint: Easy to modify the target LM Studio API URL and model identifier within the script.
The script is developed entirely in PowerShell, leveraging its web and data handling capabilities:
- Scripting Language: PowerShell
- Web Requests: `Invoke-RestMethod` for making HTTP POST requests to the LM Studio API endpoint.
- JSON Processing: `ConvertTo-Json` for serializing PowerShell objects into JSON payloads required by the API.
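One `ConvertTo-Json` detail worth noting when adapting the payload: its default `-Depth` is 2, so a nested `messages` array can be flattened to a string if serialized without an explicit depth. A short sketch (the payload shape here is an assumption mirroring the script's):

```powershell
# ConvertTo-Json truncates nesting beyond its default -Depth of 2,
# so pass an explicit -Depth for the nested messages array.
$payload = @{
    model    = "google/gemma-3-12b"
    messages = @(@{ role = "user"; content = "Hello" })
}
$payload | ConvertTo-Json -Depth 5
```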
The Test Local AI Script is a simple client-side utility interacting with a local AI server.
- Local Interaction: The script is designed to communicate with a locally hosted AI model via LM Studio. No external servers or cloud services are involved.
- LM Studio API: Communication with LM Studio occurs over its local HTTP API. Users should ensure their LM Studio setup is properly secured if they expose it beyond `localhost`.
- Hardcoded Configuration: The API URL and specific model identifier are hardcoded in the script. Users must manually update these values to match their LM Studio environment.
- No Data Persistence: The script does not store any chat history or configuration settings.
- No Telemetry: The script does not collect or transmit any user data or telemetry.
Distributed under the MIT License. See LICENSE.txt for more information.
Zach Whiteman - elitesoftwarecolimited@gmail.com
HuggingFace - https://huggingface.co/EliteSoftware
HuggingFace (Personal) - https://huggingface.co/TheShadyRainbow
LinkTree - https://linktr.ee/zachrainbow