Ollama Setup
Ollama is an open-source tool for running large language models locally. WindowSill supports Ollama as an AI provider, giving organizations full control over their AI infrastructure without relying on cloud services.
Configuration Paths
WindowSill offers three ways to configure Ollama, each with different network requirements and scenarios:
Scenario A: Administrators restrict AI providers and models to a defined list
| Method | Request Flow | Network Requirement | Best For |
|---|---|---|---|
| Dashboard | App → WindowSill server → Ollama | Ollama must be publicly accessible | Centralized management, cloud-hosted Ollama |
| WindowSill App | App → Ollama directly | Ollama reachable from user's device | Centralized management while hosting Ollama on a private network or local installation |
Scenario B: Administrators let users configure AI providers and models
| Method | Request Flow | Network Requirement | Best For |
|---|---|---|---|
| Registry | App → Ollama directly | Ollama reachable from user's device | Per-member configuration while using Ollama on a private network or local installation |
Dashboard Configuration
When you configure Ollama through the WindowSill Dashboard, all LLM requests are proxied through the WindowSill web server:
WindowSill App → WindowSill Server (getwindowsill.app) → Your public Ollama Server
Requirements:
- Your Ollama server must be publicly accessible from the internet (with or without restrictions).
- The WindowSill server IP (`51.77.212.201`) must be able to reach your Ollama endpoint.
- HTTPS is strongly recommended.
Advantages:
- Centralized configuration for all organization members.
- No client-side setup required.
- The Ollama endpoint is never stored on the client's machine.
Disadvantages:
- You cannot use an Ollama server deployed on a private enterprise network or on a user's local machine; the server must be publicly reachable on the internet.
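Before adding the endpoint in the Dashboard, you can verify that it is publicly reachable by requesting Ollama's model-list endpoint (`/api/tags`). A minimal sketch in Python, where `ollama.example.com` is a placeholder for your server:

```python
import json
import urllib.request

def model_names(tags_json: str) -> list:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def check_ollama(endpoint: str) -> list:
    """Fetch the model list from an Ollama endpoint to confirm it is reachable."""
    url = endpoint.rstrip("/") + "/api/tags"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return model_names(resp.read().decode())

# Example (placeholder URL):
# check_ollama("https://ollama.example.com")
```

If this call fails from outside your network, the Dashboard-proxied setup will fail as well, since the WindowSill server must reach the same endpoint.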
How to configure (step-by-step)
- Navigate to WindowSill Dashboard.
- Select your organization and navigate to the WindowSill tab, then AI Providers & Models.
- Change Configuration Mode to `Restrict AI providers and models to a defined list`.
- Click Add an AI Provider.
- Select `Ollama (public server)`.
- Enter the URL or IP address of your public Ollama server, then click Add.
- Click Add Model, then `Ollama (public server)`, and select the model(s) you want to allow your clients to use.
WindowSill App Configuration
When you configure Ollama through the WindowSill App, the WindowSill app connects directly to your Ollama server:
WindowSill App → Your public, private or local Ollama Server
Requirements:
- Your Ollama server must be reachable directly from the client's machine, whether it is hosted on the user's machine, on a private enterprise server, or on a public server.
Advantages:
- Centralized configuration for all organization members.
- You can use an Ollama server deployed on a private enterprise network or on user's local machine.
- No internet exposure required.
Disadvantages:
- If the list of models changes on the Ollama server, or differs from one local machine to another, the WindowSill Dashboard won't reflect it until an Administrator updates the configuration.
- The Ollama endpoint is always stored on the client's machine.
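Before walking through the app configuration, you can confirm that the Ollama port (11434 by default) is reachable from the client's machine. A minimal sketch, where the host addresses in the comments are placeholders:

```python
import socket

def ollama_reachable(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the Ollama port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hosts):
# ollama_reachable("127.0.0.1")  # local install
# ollama_reachable("10.0.0.5")   # private-network server
```

A `False` result means the WindowSill app, which connects directly, will not be able to reach the server either.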
How to configure (step-by-step)
- Navigate to WindowSill Dashboard.
- Select your organization and navigate to the WindowSill tab, then AI Providers & Models.
- Change Configuration Mode to `Restrict AI providers and models to a defined list`.
- On your Windows desktop, install and run the `WindowSill` app.
- Sign in to the app using an organization Administrator account.
- Right-click the WindowSill bar, then select Settings.
- Navigate to Account and make sure the selected Organization is the one whose Ollama settings you want to edit.
- Click Sync now to make sure your settings are up to date.
- Go to AI Writing & Analysis, AI Providers, WindowSill AI Pro.
- If you are an Administrator of the selected organization, an `Administrator Zone` appears at the bottom.
- Click Configure & Test Ollama.
- Ensure the displayed organization name corresponds to the one you wish to edit.
- Enter your public, private-network, or local Ollama server's endpoint, then click Connect.
- If the connection is established successfully, enter a test LLM prompt, select a model, then click Test. An LLM request will be sent to the server, confirming that the WindowSill app can interact with your (potentially private or local) Ollama server.
- If the test was successful, click Upload Configuration to finish.
- Navigate back to the WindowSill Dashboard. Refresh the page and navigate to the WindowSill tab, then AI Providers & Models.
- Confirm that `Ollama (private network or local)` is listed under Providers and that all of the Ollama models appear.
- Delete the model(s) you do not want to allow your clients to use.
Registry Configuration
When you configure Ollama via Windows Registry, the WindowSill app connects directly to your Ollama server:
WindowSill App → Your public, private or local Ollama Server
Requirements:
- Your Ollama server must be reachable directly from the client's machine, whether it is hosted on the user's machine, on a private enterprise server, or on a public server.
- Registry keys must be deployed via Group Policy, Intune, PowerShell, or similar tooling.
Advantages:
- You can use an Ollama server deployed on a private enterprise network or on user's local machine.
- No internet exposure required.
- The configuration can be customized per organization member.
Disadvantages:
- No centralized configuration.
- The Ollama endpoint is always stored on the client's machine.
How to configure
See Registry Keys for configuration details.
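Purely as an illustration of what such a deployment might look like, the fragment below uses a hypothetical hive path and value name; the actual key names and values are documented on the Registry Keys page:

```reg
Windows Registry Editor Version 5.00

; Hypothetical path and value name - see the Registry Keys page for the real ones
[HKEY_LOCAL_MACHINE\SOFTWARE\WindowSill]
"OllamaEndpoint"="http://127.0.0.1:11434"
```

A fragment like this can be imported with `reg import`, pushed as a Group Policy registry preference, or set from PowerShell, matching the deployment options listed above.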