In the era of generative AI, developers are continually seeking ways to quickly deploy and share their AI models without relying on complex cloud infrastructures. LM Studio offers a seamless experience to download, install, and run AI models locally, while tools like Pinggy enable you to expose your local endpoints to the internet securely. This guide provides a step-by-step tutorial on sharing your LM Studio API online, making your AI models accessible and shareable in minutes.
The workflow takes three steps:
1. Download & Install LM Studio
2. Enable the Model API (served at http://localhost:1234)
3. Expose Your API with Pinggy: ssh -p 443 -R0:localhost:1234 a.pinggy.io
By sharing your LM Studio API online, you can test, demo, and integrate your models from any device without deploying them to the cloud. Pinggy provides a hassle-free way to expose your local API to the public internet: a single SSH command creates a secure tunnel and a shareable public URL, with no complex infrastructure to set up or maintain.
Visit the Website:
Go to the LM Studio website and download the installer appropriate for your operating system (Windows, macOS, or Linux).
Install LM Studio:
Follow the installation prompts to set up LM Studio on your machine.
Launch and Download a Model:
Open LM Studio and use the built-in model search to download a model; the examples below use qwen2-0.5b-instruct, a small model that loads quickly on modest hardware.
Open the Developer Tab:
Click on the Developer tab located in LM Studio.
Start Your API Server:
Toggle the server switch in the Developer tab to start the local API server, which listens at http://localhost:1234 by default.
Test the API Endpoint:
Below the status button, you will see a list of supported endpoints. Copy the displayed curl command and test it in your terminal or with a tool like Postman.
Example Curl Command:
curl http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen2-0.5b-instruct",
"messages": [
{ "role": "system", "content": "Always answer in rhymes. Today is Thursday" },
{ "role": "user", "content": "What day is it today?" }
],
"temperature": 0.7,
"max_tokens": -1,
"stream": false
}'
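For a quicker sanity check than a full chat completion, you can list the models the server has loaded; /v1/models is among the OpenAI-compatible endpoints LM Studio displays in the Developer tab:
curl http://localhost:1234/v1/models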
Set Up a Secure Tunnel:
While your LM Studio API is running on http://localhost:1234, open your terminal and execute the following command to create a secure tunnel:
ssh -p 443 -R0:localhost:1234 a.pinggy.io
Here, -p 443 connects to Pinggy over the HTTPS port (which passes through most firewalls), and -R0:localhost:1234 forwards a Pinggy-assigned public port to your local API.
Share the Public URL:
Once connected, Pinggy will generate a public URL (e.g., https://xyz123.pinggy.link). Share this URL to allow remote access to your API.
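To verify the tunnel end to end, assuming the example URL above, the same API is now reachable from anywhere:
curl https://xyz123.pinggy.link/v1/models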
Enable Basic Authentication:
To secure your tunnel, modify your SSH command to include a username and password:
ssh -p 443 -R0:localhost:1234 -t a.pinggy.io b:username:password
This ensures that only authorized users can access your public URL.
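Clients must then supply those credentials with each request; for example, using curl's standard basic-auth flag and the placeholder credentials above:
curl -u username:password https://xyz123.pinggy.link/v1/models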
Regular Monitoring:
Use Pinggy’s web debugger to keep an eye on incoming requests and identify any potential issues quickly.
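A sketch of how to open the debugger, assuming Pinggy's default debugger port of 4300: add a local port forward to your tunnel command, then browse to http://localhost:4300 while the tunnel is running:
ssh -p 443 -R0:localhost:1234 -L4300:localhost:4300 a.pinggy.io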
Custom Domain Setup:
With Pinggy Pro, you can map a custom domain to your tunnel, enhancing your branding and credibility.
Performance Considerations:
For high-traffic applications, consider optimizing your LM Studio configuration and ensuring your local machine has sufficient resources to handle the load.
Model Fails to Start:
Confirm the model has finished downloading and that your machine has enough free RAM (or VRAM) to load it; if it still fails, try a smaller model.
Connection Timeouts:
If the tunnel drops after a period of inactivity, wrap the SSH command in a loop so it reconnects automatically (note the forwarded port must match LM Studio's 1234):
while true; do
ssh -p 443 -o StrictHostKeyChecking=no -R0:localhost:1234 a.pinggy.io;
sleep 10; done
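Alternatively, if you have autossh installed, it manages reconnection for you; a minimal sketch with the same tunnel arguments (the keepalive options are a common pairing, not a Pinggy requirement):
autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -p 443 -R0:localhost:1234 a.pinggy.io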
Combining LM Studio's intuitive model deployment with Pinggy's secure tunneling offers a streamlined approach to sharing your AI models online. This solution empowers developers to test, demo, and integrate AI capabilities without the overhead of cloud infrastructure, while keeping full control over data and performance.