Running large language models locally with Ollama is becoming increasingly popular for privacy, cost control, and offline capabilities. In this guide, I'll show you how to set up Ollama on one Windows PC and connect to it from another computer on your network.
At a Glance
- Install Ollama on the server PC
- Configure it to listen on your LAN interface (OLLAMA_HOST)
- Open Windows Firewall for TCP port 11434
- On the client, set OLLAMA_HOST to the server's address
- Test with ollama list and API calls
Example Network Setup
[Client PC] ────────────► [Server PC: 192.168.1.74:11434]
(uses OLLAMA_HOST)        (runs the Ollama listener)
What is Ollama?
Ollama is a local runtime that provides a simple HTTP/CLI interface for running large language models on your own computer. You pull models (similar to Docker images), run them locally, and access them through a consistent REST API.
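For example, the same REST API the CLI uses is available over plain HTTP. The snippet below is a minimal sketch, assuming Ollama is already installed and listening on its default address of localhost:11434; it asks the /api/tags endpoint for the models stored locally.
# List locally available models via the REST API (same data as "ollama list")
$models = Invoke-RestMethod -Uri "http://localhost:11434/api/tags"
$models.models | Select-Object name, size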
Why Use Ollama?
- Privacy & Data Residency: No prompts or outputs leave your machine unless you send them elsewhere
- Cost Control: No per-token cloud bills—pay once for hardware and power
- Low Latency: Responses generated locally over your LAN
- Offline Capability: Works without internet access
- Simple Model Management: ollama pull, ollama run, ollama list
⚠️ Important Security Note: Ollama has no built-in authentication. Secure it with firewall rules, VPN, or reverse proxy authentication.
Prerequisites
- Two Windows PCs on the same network (e.g., Server: 192.168.1.74, Client: 192.168.1.75)
- Administrator access on the server PC for firewall configuration
- At least 8GB RAM (16GB+ recommended for larger models)
- Network connectivity between the machines (a quick check is sketched below)
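Before going further, it helps to confirm the two machines can actually reach each other. This is only a sanity-check sketch using the example addresses from this guide; substitute your own IPs, and note that a failed ping is not conclusive if ICMP is blocked.
# From the client PC, ping the server PC (example address from this guide)
Test-Connection -ComputerName 192.168.1.74 -Count 2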
Step 1: Server Setup
Install Ollama
- Download Ollama for Windows from ollama.com/download
- Run the installer and follow the setup wizard
- Verify installation by opening Command Prompt and running ollama --version
Configure Network Binding
By default, Ollama only listens on localhost. To accept connections from other machines:
Option 1: Environment Variable (Recommended)
- Open System Properties → Advanced → Environment Variables
- Add a new system variable with the name OLLAMA_HOST and the value 0.0.0.0:11434
- Restart your computer or the Ollama service
Option 2: PowerShell (Session-based)
# Set for current session
$env:OLLAMA_HOST = "0.0.0.0:11434"
ollama serve
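If you want the setting to survive reboots without clicking through the GUI, the same machine-level variable can be written from an elevated PowerShell prompt. This is a sketch of that alternative; since only one process can bind port 11434, quit the Ollama app before running ollama serve manually, and restart Ollama after changing the variable so it picks up the new value.
# Persistent alternative to Option 1 (run PowerShell as Administrator)
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0:11434", "Machine")
# Restart the Ollama app or service afterwards so the change takes effect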
Step 2: Configure Windows Firewall
Create a firewall rule to allow connections to Ollama:
Using Windows Firewall GUI
- Open Windows Defender Firewall with Advanced Security
- Click Inbound Rules → New Rule
- Choose Port → TCP → Specific local ports: 11434
- Choose Allow the connection
- Apply to Private profile only (for security)
- Name the rule "Ollama LLM Server"
Using PowerShell (Administrator)
# Create firewall rule for Ollama
New-NetFirewallRule -DisplayName "Ollama LLM Server" `
-Direction Inbound -Protocol TCP -LocalPort 11434 `
-Action Allow -Profile Private
# Optional: Restrict to specific client IP
New-NetFirewallRule -DisplayName "Ollama LLM Server (Restricted)" `
-Direction Inbound -Protocol TCP -LocalPort 11434 `
-Action Allow -Profile Private -RemoteAddress "192.168.1.75"
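To confirm the rule was created and is enabled, you can read it back. A quick verification sketch:
# Verify the firewall rule exists and is enabled
Get-NetFirewallRule -DisplayName "Ollama LLM Server" |
    Select-Object DisplayName, Enabled, Direction, Action, Profile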
Step 3: Download and Test Models
On the server PC, pull and test a model:
# Pull a lightweight model
ollama pull llama3:8b
# Test local functionality
ollama run llama3:8b "What is machine learning?"
# List available models
ollama list
# Check if service is listening
netstat -an | findstr :11434
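Before moving to the client, it is worth confirming that the HTTP listener itself responds on the server. A minimal local smoke test, assuming the default port:
# Should return the server's version string as JSON
Invoke-RestMethod -Uri "http://localhost:11434/api/version"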
Step 4: Client Setup
Configure the client PC to use the remote Ollama server:
Install Ollama Client
- Download and install Ollama on the client PC
- Set the environment variable to point to your server
Configure Connection
Environment Variable Method
- Set OLLAMA_HOST to http://192.168.1.74:11434 (a PowerShell one-liner for this is sketched below)
- Restart Command Prompt or PowerShell
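The same setting can be made persistent from PowerShell instead of the GUI. A sketch at user scope, using the example server address:
# Persistent, user-scope OLLAMA_HOST on the client (open a new terminal afterwards)
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "http://192.168.1.74:11434", "User")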
PowerShell Session Method
# Set for current session
$env:OLLAMA_HOST = "http://192.168.1.74:11434"
# Test connection
ollama list
Step 5: Testing the Setup
CLI Testing
# From client PC, test connection
ollama list
# Run a chat session
ollama run llama3:8b "Explain DNS in one paragraph"
# Check model status
ollama ps
API Testing
Test the HTTP API directly using PowerShell:
# PowerShell API test
$body = @{
model = "llama3:8b"
prompt = "What is the capital of France?"
stream = $false
} | ConvertTo-Json
$response = Invoke-RestMethod -Uri "http://192.168.1.74:11434/api/generate" `
-Method POST -Body $body -ContentType "application/json"
Write-Output $response.response
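The chat endpoint works the same way from PowerShell. This sketch mirrors the C# example below and assumes the same model is available on the server; the reply text is in the response's message.content field.
# Chat-style request against /api/chat
$chatBody = @{
    model    = "llama3:8b"
    messages = @(@{ role = "user"; content = "Explain DNS concisely." })
    stream   = $false
} | ConvertTo-Json -Depth 5
$chat = Invoke-RestMethod -Uri "http://192.168.1.74:11434/api/chat" `
    -Method POST -Body $chatBody -ContentType "application/json"
Write-Output $chat.message.content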
C# Example
// Minimal C# example for calling Ollama API
using System.Net.Http;
using System.Text;
using System.Text.Json;
var http = new HttpClient { BaseAddress = new Uri("http://192.168.1.74:11434") };
var payload = new
{
model = "llama3:8b",
messages = new[] {
new { role = "user", content = "Explain DNS concisely." }
},
stream = false
};
var content = new StringContent(
JsonSerializer.Serialize(payload),
Encoding.UTF8,
"application/json"
);
var response = await http.PostAsync("/api/chat", content);
response.EnsureSuccessStatusCode();
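// Non-streaming /api/chat responses are JSON; the assistant's reply is in the "message.content" field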
var result = await response.Content.ReadAsStringAsync();
Console.WriteLine(result);
Troubleshooting
Connection Issues
Cannot Connect from Client
- Verify firewall rule is active and applies to Private profile
- Check if Ollama is listening: netstat -an | findstr :11434
- Test with telnet 192.168.1.74 11434 from the client (or use Test-NetConnection, sketched below)
- Ensure OLLAMA_HOST is set correctly on the server
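The Telnet client is not installed on Windows by default, so Test-NetConnection is often the easier check from the client PC. A sketch using the example server address:
# TcpTestSucceeded should be True if the port is reachable
Test-NetConnection -ComputerName 192.168.1.74 -Port 11434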
Models Not Available
- Models are stored on the server only
- Use ollama pull on the server to download models
- The client will show the models available on the connected server
Performance Issues
- Ensure sufficient RAM/VRAM on server
- Try smaller model variants (e.g., llama3:8b vs llama3:70b)
- Check network bandwidth between client and server
Security Considerations
Important Security Notes
- No Authentication: Ollama has no built-in auth—anyone with network access can use your models
- Firewall Scope: Restrict to Private network profile only
- IP Restrictions: Consider limiting access to specific client IPs
- VPN Access: For remote access, use VPN rather than exposing to internet
- Resource Monitoring: Monitor CPU/GPU usage to prevent resource exhaustion
Advanced Configuration
Multiple Clients
To support multiple client machines, simply:
- Install Ollama on each client
- Set OLLAMA_HOST to point to the same server
- Update firewall rules if restricting by IP (see the sketch below)
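If you created the restricted rule from Step 2, its allowed address list can be widened rather than adding new rules. A sketch, assuming a second client at 192.168.1.76 (a hypothetical address; adjust to your network):
# Allow an additional client IP on the existing restricted rule
Set-NetFirewallRule -DisplayName "Ollama LLM Server (Restricted)" `
    -RemoteAddress "192.168.1.75", "192.168.1.76"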
CORS for Web Applications
If building web applications that call Ollama directly:
# Set CORS origins (server)
$env:OLLAMA_ORIGINS = "http://localhost:3000,https://myapp.com"
Performance Tuning
# Adjust how many requests each loaded model serves in parallel
$env:OLLAMA_NUM_PARALLEL = "2"
# Limit how many models are kept loaded in memory at once
$env:OLLAMA_MAX_LOADED_MODELS = "1"
Useful Commands Reference
Server Management
# List running models and their memory use
ollama ps
# Stop a running model (unloads it from memory)
ollama stop llama3:8b
# Remove a model
ollama rm llama3:8b
# Show model information
ollama show llama3:8b
Client Commands
# List available models on server
ollama list
# Pull model to server
ollama pull codellama:7b
# Run interactive session
ollama run codellama:7b
Conclusion
Setting up Ollama across multiple Windows machines provides a flexible, private solution for running large language models in your local environment. The key steps are:
- Configure the server to listen on the network interface
- Secure access with appropriate firewall rules
- Point clients to the server using environment variables
- Test thoroughly with both CLI and API methods
This setup is ideal for development teams, research environments, or anyone wanting to leverage LLMs while maintaining full control over their data and infrastructure.
Quick Reference
- Server Port: 11434 (TCP)
- Environment Variable: OLLAMA_HOST
- API Endpoint: http://server-ip:11434/api/
- Firewall Profile: Private only
Note: Adjust the IP addresses (192.168.1.74 for the server, 192.168.1.75 for the client) to match your network configuration. For security, keep the firewall rule scoped to the Private profile only.
Useful Links: Ollama for Windows · Official Documentation · Model Library