Ship Onde into React Native apps.

The @ondeinference/react-native package brings Onde to React Native with a native inference engine underneath. You write JavaScript or TypeScript. The model still runs on-device.

npm package

Published as @ondeinference/react-native. Install with npm, pnpm, bun, or yarn and wire it into an existing React Native app without standing up a backend.

Native bridge

The JavaScript layer calls into the native Onde engine. You keep the ergonomics of React Native while inference stays on-device.

TypeScript-friendly

The API is shaped for modern React Native apps. Load a model, send a message, stream results, and keep the app code readable.

Current scope

Chat-focused: OndeChatEngine, model loading, history management, and streaming callbacks. More APIs will ship as the engine evolves.
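
A minimal sketch of what a streaming call can look like. The onToken option and the resetHistory method are assumptions for illustration, not confirmed API; check the package's TypeScript definitions for the exact names.

import { OndeChatEngine } from '@ondeinference/react-native';

const engine = new OndeChatEngine();
await engine.loadDefaultModel({
  systemPrompt: 'You are a helpful assistant.',
});

// Hypothetical streaming callback; the option name is an assumption.
const result = await engine.sendMessage({
  message: 'Draft a two-line release note.',
  onToken: (token: string) => {
    // Append each token to UI state as it arrives.
    console.log(token);
  },
});

// Hypothetical history reset between conversations; name is an assumption.
// engine.resetHistory();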

Quick Start

Install.

Add the package from npm. Use whichever package manager your app already uses. If you want to test model downloads or GGUF export before you ship the app build, use Onde CLI. If you want the broader case for local inference, read The Latency War.

# npm
npm install @ondeinference/react-native

# pnpm
pnpm add @ondeinference/react-native

# bun
bun add @ondeinference/react-native

# yarn
yarn add @ondeinference/react-native

Usage

Load. Prompt. Render.

The API is intentionally small. Load the default model, send a message, and render the result in your app.

import { OndeChatEngine } from '@ondeinference/react-native';

const engine = new OndeChatEngine();

// loads Qwen 2.5 locally, no server
await engine.loadDefaultModel({
  systemPrompt: 'You are a helpful assistant.',
});

const result = await engine.sendMessage({
  message: 'Hello!',
});

console.log(result.text);
// completed in 85ms, fully on device
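
Wiring that into a screen is straightforward. A minimal sketch, assuming the calls above; the ChatScreen component and the lazy-load guard are illustrative, not part of the SDK:

import { useState } from 'react';
import { Button, Text, View } from 'react-native';
import { OndeChatEngine } from '@ondeinference/react-native';

const engine = new OndeChatEngine();
let ready: Promise<void> | null = null;

// Load the model once and reuse it across calls.
function ensureLoaded(): Promise<void> {
  ready ??= engine.loadDefaultModel({
    systemPrompt: 'You are a helpful assistant.',
  });
  return ready;
}

export function ChatScreen() {
  const [reply, setReply] = useState('');

  const ask = async () => {
    await ensureLoaded();
    const result = await engine.sendMessage({ message: 'Hello!' });
    setReply(result.text);
  };

  return (
    <View>
      <Button title="Ask" onPress={ask} />
      <Text>{reply}</Text>
    </View>
  );
}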

Coverage

Platform Matrix.

Mobile first. iOS and Android are the main targets for the React Native SDK. If you want to see the live telemetry side, open Onde Inference Pulse.

Platform   ABI / slice                      Status
iOS        arm64 device, arm64 simulator    Ready
Android    arm64                            Ready