Install
Install it your way.
Pick a package manager, install `onde`, done. If you want to see the full local workflow first, read Fine-Tune a Language Model on Your Mac.

npm install -g @ondeinference/cli
Install from npm.

pip install onde-cli
# or
uv tool install onde-cli
# or
uvx --from onde-cli onde
Install through pip, uv, or uvx.
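Whichever manager you use, a quick way to confirm the executable landed on your PATH is the POSIX `command -v` lookup (the error hint here is an illustrative message, not Onde output):

```shell
# Check that the install put `onde` on the PATH.
# Works in any sh-compatible shell.
if command -v onde >/dev/null 2>&1; then
  echo "found: $(command -v onde)"
else
  echo "onde not found on PATH -- check your package manager's bin directory"
fi
```

If the lookup fails after a `pip install --user` or `uv tool install`, the usual fix is adding the manager's bin directory (for example `~/.local/bin`) to PATH.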

dotnet tool install --global Onde.Cli
Install as a .NET global tool.
dart pub global activate onde_cli
Install from Dart pub.
cargo install onde-cli
Install direct from crates.io.

brew tap ondeinference/homebrew-tap
brew install onde
Install with Homebrew on macOS.

# macOS Apple Silicon
curl -Lo onde https://github.com/ondeinference/onde-cli/releases/latest/download/onde-macos-arm64
chmod +x onde
mv onde /usr/local/bin/onde  # may require sudo
Download a prebuilt binary.
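The commands above are pinned to Apple Silicon. A small helper can pick the asset name for other platforms; note that only `onde-macos-arm64` is confirmed above, and the other asset names here are assumptions following the same pattern:

```shell
# Map an OS/CPU pair (as reported by `uname -s` / `uname -m`)
# to a release asset name. Only macos-arm64 is confirmed by the
# docs; the x64 names are assumed from the naming pattern.
asset_for() {
  case "$1-$2" in
    Darwin-arm64)  echo onde-macos-arm64 ;;
    Darwin-x86_64) echo onde-macos-x64 ;;   # assumed name
    Linux-x86_64)  echo onde-linux-x64 ;;   # assumed name
    *) echo "unsupported platform: $1-$2" >&2; return 1 ;;
  esac
}

asset_for Darwin arm64   # → onde-macos-arm64
```

Pass `$(uname -s)` and `$(uname -m)` to `asset_for`, then substitute the result into the download URL shown above.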
Workflow
From sign-in to GGUF.
Open the app, run the tune, merge, export. If the model is headed into app code next, the Swift SDK, Dart SDK, and React Native SDK pages cover the shipping side.
1. Open the terminal UI
   Run `onde` to open the terminal UI.
   onde
2. Start a fine-tune
   Pick a safetensors model and press `f` to start a local LoRA run.
   Models tab → select a safetensors model → press f
3. Merge the adapter
   Merge the adapter back into the base model.
   Fine-tune complete → press m
4. Export GGUF
   Export GGUF for local inference or packaging.
   Merged model → press g

Details
What you get.
Terminal UI, local fine-tuning, GGUF export, broad install support. If you are evaluating Onde as a product direction, start with What Onde Is and Privacy as a Feature, Not a Compliance Check.
Account
Sign in and manage your Onde account from the terminal.
Terminal-first
Open the app and work from the terminal. No browser detour.
Local fine-tuning
Keep training data, model files, and logs in one place.
GGUF export
Merge the adapter, export GGUF, and test the result.
Install anywhere
Use the package manager you already have.
Native
Fast startup. Direct install. One `onde` command.