CogniSpend

Integrations

Wherever your AI runs, CogniSpend tracks it

First-class support for the providers your team already uses, with a transparent proxy fallback for anything else.

OpenAI

SDK + Proxy

Track GPT-4o, o1, and every other OpenAI model. Input/output/cached token costs, streaming support, per-request attribution.

Learn more

Anthropic

SDK + Proxy

Full cost tracking for Claude 3.5 Sonnet and Haiku, plus Claude 3 Opus. Extended thinking tokens tracked separately.

Learn more

Google Gemini

SDK + Proxy

Gemini 1.5 Pro and Flash. Track context caching savings alongside standard call costs.

Learn more

GitHub Copilot

Manual + CSV

Track per-seat license costs with team attribution. Know if your $200/mo Copilot licenses are actually saving engineering time.

Learn more
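One way to make "actually saving engineering time" concrete: a seat pays for itself once it saves more than its monthly price in loaded engineering hours. A back-of-envelope sketch (the rate and seat price below are illustrative assumptions, not CogniSpend output):

```typescript
// Hours per month a seat must save to break even, given its
// monthly price and a loaded engineering hourly rate.
function breakEvenHours(seatUsdPerMonth: number, loadedHourlyUsd: number): number {
  return seatUsdPerMonth / loadedHourlyUsd;
}

// A $200/mo seat at a $120/hr loaded rate breaks even
// at under two saved hours per month.
const hours = breakEvenHours(200, 120);
```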

Cursor

Manual + CSV

Log Cursor subscription costs and attribute them to the engineering teams that use it. Part of your complete AI spend picture.

Learn more

Self-hosted models

Infrastructure cost

Running Llama 3, Mistral, or a fine-tuned model on your own hardware? Track GPU and cloud infrastructure costs attributed to specific workflows.

Learn more
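For self-hosted models, the unit cost is amortized infrastructure rather than per-token pricing. A minimal sketch of that attribution (the GPU rate and throughput are placeholder assumptions):

```typescript
// Amortized infrastructure cost per request: hourly GPU or
// cloud spend divided by sustained request throughput.
function costPerRequest(gpuHourlyUsd: number, requestsPerHour: number): number {
  return gpuHourlyUsd / requestsPerHour;
}

// e.g. a $2.50/hr GPU instance serving 1,000 requests/hour
const unitCost = costPerRequest(2.5, 1000);
```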

Any OpenAI-compatible API

Proxy

Point any OpenAI SDK at the CogniSpend proxy endpoint. Full tracking for Groq, Together, Fireworks, Azure OpenAI, and every other compatible provider.

Learn more
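Concretely, the proxy path is just a baseURL swap: the same client options work whether the key belongs to Groq, Together, or any other compatible provider. A sketch, assuming the proxy endpoint shown in the Proxy Mode example below (the helper name is illustrative, not part of the SDK):

```typescript
// Illustrative helper: client options that route any
// OpenAI-compatible provider through the CogniSpend proxy.
const COGNISPEND_PROXY = "https://proxy.cognispend.com/v1";

function proxiedOptions(providerApiKey: string) {
  // The real provider key is passed through as-is; the proxy
  // records usage before forwarding the request upstream.
  return { baseURL: COGNISPEND_PROXY, apiKey: providerApiKey };
}

// e.g. new OpenAI(proxiedOptions(process.env.GROQ_API_KEY!))
```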

Integration paths

Which integration is right for you?

Three ways to get CogniSpend tracking your AI spend. Most teams use all three.

SDK Wrapper

Best for

You own the application code and want zero-overhead tracking with full TypeScript types.

  • Zero request latency added
  • Fully typed TypeScript API
  • Streaming supported
  • Works in any Node.js runtime

import { wrapOpenAI } from "@cognispend/sdk";
import OpenAI from "openai";

const openai = wrapOpenAI(new OpenAI(), {
  workflowId: "code-review",
  teamId:     "platform",
});

// All existing calls tracked — no other changes.

Proxy Mode

Best for

You want full tracking without touching a single line of application code.

  • Zero application code changes
  • Works with any language or framework
  • Full OpenAI-compatible API
  • Under 5ms round-trip overhead

const openai = new OpenAI({
  baseURL: "https://proxy.cognispend.com/v1",
  // Pass your real provider key as-is.
  apiKey:  process.env.OPENAI_API_KEY,
});

// Every call is now tracked.

Manual Entry

Best for

You need to track SaaS seats, subscriptions, or historical spend without an SDK.

  • Track Copilot & Cursor seats
  • Import historical API CSVs
  • Add manual cost entries via UI
  • Blends with API spend in one view

// No code required.
// Add fixed costs in:
//   Settings → Costs → Add entry
// or drag-and-drop a provider
// CSV for instant backfill.
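A backfill CSV typically reduces to rows of date, description, team, and amount. A minimal sketch of turning one row into a manual cost entry (the column order and field names here are illustrative, not CogniSpend's actual import schema):

```typescript
// Hypothetical shape for an imported manual cost entry.
interface ManualCostEntry {
  date: string;        // ISO date, e.g. "2024-06-01"
  description: string;
  teamId: string;
  amountUsd: number;
}

// Parse one CSV row: date,description,teamId,amount
function parseCsvLine(line: string): ManualCostEntry {
  const [date, description, teamId, amount] = line.split(",");
  return { date, description, teamId, amountUsd: Number(amount) };
}

// Example: one Copilot seat attributed to the platform team.
const entry = parseCsvLine("2024-06-01,Copilot seat,platform,200");
```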

Compatibility

Works with your stack

CogniSpend's SDK is zero-dependency TypeScript. Works in any Node.js 18+ environment — Express, Next.js, Fastify, AWS Lambda, Cloudflare Workers, or plain scripts.

Node.js 18+ · TypeScript · Express · Next.js · Fastify · AWS Lambda · Cloudflare Workers · Bun · Docker · Plain scripts

Know what your AI is costing. Know if it's worth it.

Set up in under ten minutes. No infrastructure changes required. Start with your existing OpenAI or Anthropic client.

Free plan available. No credit card required to start.