Run AI Web Apps Without a Backend

Access LLMs directly from your browser. Simple, secure, and scalable.

import { getProfile } from "https://aipipe.org/aipipe.js";

const { token, email } = getProfile();
if (!token) window.location = `https://aipipe.org/login?redirect=${window.location.href}`;

const response = await fetch("https://aipipe.org/openrouter/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    "model": "openai/gpt-4.1-nano",
    "messages": [{ "role": "user", "content": "What is 2 + 2?" }]
  })
}).then(r => r.json());
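The snippet above returns a standard Chat Completions JSON object, so the model's reply can be read from the first choice. A small helper (the `chatReply` name is ours, and the shape assumed is the OpenAI-compatible schema, not anything AI Pipe-specific):

```javascript
// Pull the assistant's reply out of a Chat Completions response.
// Assumes the standard OpenAI-compatible shape: { choices: [{ message: { content } }] }.
function chatReply(response) {
  return response?.choices?.[0]?.message?.content ?? "";
}

// Example with a response like the one above:
// const answer = chatReply(response);
```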

Features

Secure Authentication

Built-in user authentication and token management for secure API access.

Usage Monitoring

Track API usage and costs with detailed analytics and budget controls.

Multiple LLM Support

Connect to various LLM providers through a single unified interface.

Documentation

OpenRouter

import { getProfile } from "https://aipipe.org/aipipe.js";

const { token, email } = getProfile();
if (!token) window.location = `https://aipipe.org/login?redirect=${window.location.href}`;

const response = await fetch("https://aipipe.org/openrouter/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    "model": "openai/gpt-4.1-nano",
    "messages": [{ "role": "user", "content": "What is 2 + 2?" }]
  })
}).then(r => r.json());

This will:

  1. Redirect the user to AI Pipe's login.
    • getProfile() returns a null token since it doesn't know the user.
    • window.location redirects the user to https://aipipe.org/login with ?redirect= set to your app's URL.
  2. Redirect them back to your app once they log in.
    • Your app URL will include ?aipipe_token=...&aipipe_email=... with the user's token and email.
    • getProfile() reads these, stores them for future use, and returns token and email.
  3. Make an LLM API call to OpenRouter and log the response.
    • Replace any call to https://openrouter.ai/api/v1 with https://aipipe.org/openrouter/v1.
    • Add Authorization: Bearer ${TOKEN} as a header.
    • AI Pipe replaces the token and proxies the request via OpenRouter.
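The token hand-off in step 2 can be sketched as a small parser. This is a hypothetical illustration, not the actual aipipe.js source; the function name is invented, and the real helper also persists the values for later visits:

```javascript
// Hypothetical sketch of step 2: after login, AI Pipe redirects back to
// your app with ?aipipe_token=...&aipipe_email=... in the URL.
// Returns { token, email } when present, or null when the user hasn't
// logged in yet (which is when you'd redirect to https://aipipe.org/login).
function parseProfileParams(href) {
  const params = new URL(href).searchParams;
  const token = params.get("aipipe_token");
  const email = params.get("aipipe_email");
  return token ? { token, email } : null;
}
```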

OpenAI

You can also call an OpenAI model directly with the Chat Completions or Responses API:

import { getProfile } from "https://aipipe.org/aipipe.js";

const { token, email } = getProfile();
if (!token) window.location = `https://aipipe.org/login?redirect=${window.location.href}`;

const response = await fetch("https://aipipe.org/openai/v1/responses", {
  method: "POST",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({ "model": "gpt-4.1-nano", "input": "What is 2 + 2?" })
}).then(r => r.json());
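Unlike Chat Completions, a Responses API result carries its text inside an output array of message items. A helper to collect it (the `extractOutputText` name is ours; the shape assumed is the documented OpenAI Responses schema):

```javascript
// Collect the text parts from a Responses API result.
// Assumes the documented shape: { output: [{ content: [{ type: "output_text", text }] }] }.
function extractOutputText(response) {
  return (response.output ?? [])
    .flatMap(item => item.content ?? [])
    .filter(part => part.type === "output_text")
    .map(part => part.text)
    .join("");
}

// Example with a response like the one above:
// const answer = extractOutputText(response);
```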

Pricing

Self hosted

Deploy on your own Cloudflare account.

It's open source. Use your own API keys.

Get the code

AIPipe.org

Free while it lasts.

Everyone gets $0.10 / week free, for now.

Get Started