Expose Ollama API

Share your local Ollama LLM inference endpoint with teammates or integrate it into remote apps.

Why Expose Ollama?

Ollama runs LLMs locally and serves its HTTP API on port 11434. If you want teammates to query your models, or to wire the API into a remote frontend, the endpoint needs a public URL.

Setup

Start Ollama: ollama serve. It listens on localhost:11434.

Run the Skytunnel command with port 11434.

Remote clients can now hit https://your-id.free.skytunnel.dev/api/generate with standard Ollama API calls.
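As a minimal sketch of such a call from a remote client, using only Python's standard library (the tunnel hostname and model name below are placeholders, not values from your deployment):

```python
import json
import urllib.request

# Placeholder tunnel URL -- substitute your actual Skytunnel hostname.
BASE_URL = "https://your-id.free.skytunnel.dev"

def build_generate_request(base_url, model, prompt):
    """Build a POST request for Ollama's /api/generate endpoint."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON response instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        base_url + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request(BASE_URL, "llama3", "Why is the sky blue?")
# Actually sending it requires the tunnel to be up:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The request body is identical to what you would send to localhost:11434 directly; only the base URL changes.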

CORS & Environment

Set OLLAMA_ORIGINS=* before starting Ollama to allow cross-origin requests from web frontends.

For security, restrict origins to your specific frontend domain in production.
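One way to sketch that restriction is to launch Ollama with the environment variable scoped to the launch, rather than exported globally; the frontend domain here is a placeholder:

```python
import os
import subprocess

# Placeholder frontend origin -- replace with your real domain.
env = dict(os.environ, OLLAMA_ORIGINS="https://app.example.com")

# Uncomment to launch (requires Ollama installed locally):
# subprocess.Popen(["ollama", "serve"], env=env)
```

Because OLLAMA_ORIGINS is read at startup, restart Ollama after changing it.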

FAQ

Does streaming work through the tunnel?
Yes. SSH tunnels support streaming responses natively.

Can I point Open WebUI at the tunneled endpoint?
Yes. Point Open WebUI's Ollama URL to the Skytunnel public URL.
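To illustrate what streaming support means in practice: when "stream" is left at its default of true, /api/generate returns newline-delimited JSON chunks, each carrying the next piece of text in its response field, with done marking the final line. A client can reassemble them like this (the sample chunks are illustrative, not real model output):

```python
import json

def collect_stream(lines):
    """Reassemble a streamed Ollama /api/generate response.

    Each line is a JSON object whose "response" field holds the next
    chunk of generated text; "done": true marks the final line.
    """
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Illustrative sample chunks (not real output):
sample = [
    '{"response": "The sky ", "done": false}',
    '{"response": "is blue.", "done": false}',
    '{"response": "", "done": true}',
]
text = collect_stream(sample)
```

In a real client you would iterate over the HTTP response body line by line instead of a list, but the parsing logic is the same whether the bytes arrive via localhost or the tunnel.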