Overview
Server Actions execute only on the server, which makes them ideal for LLM calls: your API keys never reach the browser, and FluxGate tracking happens server-side, adding nothing to your client bundle.
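The code below reads both keys from environment variables. In Next.js these typically go in .env.local; leave them unprefixed (no NEXT_PUBLIC_) so they are never bundled for the browser. Placeholder values:

# .env.local (server-only; never prefix these with NEXT_PUBLIC_)
FLUXGATE_API_KEY=...
OPENAI_API_KEY=sk-...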
Project structure
lib/
  fluxgate.ts        ← singleton FluxGate client
  openai.ts          ← singleton tracked OpenAI client
actions/
  ai/
    summarize.ts     ← server action: summarize text
    chat.ts          ← server action: single-turn chat reply
1. Singleton clients
Create these once. Every server action imports from here.
// lib/fluxgate.ts
import { FluxGate } from "@fluxgate/sdk";

export const fg = new FluxGate({
  apiKey: process.env.FLUXGATE_API_KEY!,
});
// lib/openai.ts
import OpenAI from "openai";
import { createOpenAICostTracker } from "@fluxgate/openai";
import { fg } from "./fluxgate";

const _client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// Every call made through this export is metered by FluxGate.
export const openai = createOpenAICostTracker(_client, fg);
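One caveat: during development, Next.js hot reloading can re-evaluate these modules and construct fresh clients. If that becomes a problem, the globalThis caching pattern commonly used for Prisma clients applies here too. A minimal sketch, assuming nothing about FluxGate's internals:

// lib/openai.ts (optional dev-safe variant)
import OpenAI from "openai";
import { createOpenAICostTracker } from "@fluxgate/openai";
import { fg } from "./fluxgate";

// Cache the tracked client on globalThis so hot reloads reuse it.
const globalForOpenAI = globalThis as unknown as {
  openai?: ReturnType<typeof createOpenAICostTracker>;
};

export const openai =
  globalForOpenAI.openai ??
  createOpenAICostTracker(
    new OpenAI({ apiKey: process.env.OPENAI_API_KEY! }),
    fg,
  );

if (process.env.NODE_ENV !== "production") globalForOpenAI.openai = openai;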
2. Summarize action
// actions/ai/summarize.ts
"use server";

import { auth } from "@/lib/auth";
import { openai } from "@/lib/openai";

export async function summarizeAction(text: string) {
  // Server Actions are callable from the client, so gate every call.
  const session = await auth();
  if (!session?.user) throw new Error("Unauthenticated");

  const completion = await openai
    // Attach feature and user context so FluxGate attributes this request.
    .withContext({
      feature: "summarizer",
      user: {
        id: session.user.id,
        email: session.user.email ?? undefined,
        name: session.user.name ?? undefined,
      },
    })
    .chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: "You are a concise summarizer. Return a 3-sentence summary.",
        },
        { role: "user", content: text },
      ],
      max_tokens: 256,
    });

  return completion.choices[0].message.content;
}
Calling it from a Client Component:
// app/dashboard/summarize/_components/summarize-form.tsx
"use client";

import { useServerAction } from "@/hooks/use-server-action";
import { summarizeAction } from "@/actions/ai/summarize";

export function SummarizeForm() {
  const { call, loading } = useServerAction(summarizeAction);

  const handleSubmit = async (text: string) => {
    const summary = await call(text);
    // summary is the returned string on success, null on error
  };

  // ...
}
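useServerAction is assumed here rather than provided by FluxGate. If your project doesn't already have one, a minimal sketch that matches the usage above (resolves to the action's result on success, null on error) might look like this:

// hooks/use-server-action.ts (hypothetical minimal implementation)
"use client";

import { useCallback, useState } from "react";

export function useServerAction<Args extends unknown[], Result>(
  action: (...args: Args) => Promise<Result>,
) {
  const [loading, setLoading] = useState(false);

  const call = useCallback(
    async (...args: Args): Promise<Result | null> => {
      setLoading(true);
      try {
        return await action(...args);
      } catch {
        // Swallow the error and signal failure with null, per the contract above.
        return null;
      } finally {
        setLoading(false);
      }
    },
    [action],
  );

  return { call, loading };
}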
3. Chat reply action
// actions/ai/chat.ts
"use server";

import { auth } from "@/lib/auth";
import { openai } from "@/lib/openai";

type Message = { role: "user" | "assistant"; content: string };

export async function chatReplyAction(
  messages: Message[],
  conversationId: string,
) {
  const session = await auth();
  if (!session?.user) throw new Error("Unauthenticated");

  const completion = await openai
    .withContext({
      feature: "chat-assistant",
      user: session.user.id,
      // Extra fields like conversationId travel with the tracking context.
      conversationId,
    })
    .chat.completions.create({
      model: "gpt-4o",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        ...messages,
      ],
    });

  return completion.choices[0].message.content;
}
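Calling it from the client follows the same pattern as the summarizer. A sketch, reusing the assumed useServerAction hook and keeping the message history in component state:

// Hypothetical client usage; history lives in component state.
"use client";

import { useState } from "react";
import { useServerAction } from "@/hooks/use-server-action";
import { chatReplyAction } from "@/actions/ai/chat";

// Mirrors the Message type in actions/ai/chat.ts.
type Message = { role: "user" | "assistant"; content: string };

export function Chat({ conversationId }: { conversationId: string }) {
  const [messages, setMessages] = useState<Message[]>([]);
  const { call, loading } = useServerAction(chatReplyAction);

  const send = async (text: string) => {
    const next: Message[] = [...messages, { role: "user", content: text }];
    setMessages(next);
    const reply = await call(next, conversationId);
    if (reply) setMessages([...next, { role: "assistant", content: reply }]);
  };

  // ...
}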
4. Image analysis action (multimodal)
// actions/ai/analyze-image.ts
"use server";

import { auth } from "@/lib/auth";
import { openai } from "@/lib/openai";

export async function analyzeImageAction(imageBase64: string) {
  const session = await auth();
  if (!session?.user) throw new Error("Unauthenticated");

  const completion = await openai
    .withContext({ feature: "image-analysis", user: session.user.id })
    .chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Describe this image in detail." },
            {
              type: "image_url",
              image_url: { url: `data:image/jpeg;base64,${imageBase64}` },
            },
          ],
        },
      ],
      max_tokens: 512,
    });

  return completion.choices[0].message.content;
}
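The action expects a bare base64 string, so the client has to encode the selected file first. One way to do that with the standard FileReader API (the helper name is ours, not part of any SDK):

// Hypothetical client-side helper: File → base64, without the data: prefix.
async function fileToBase64(file: File): Promise<string> {
  const dataUrl = await new Promise<string>((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file); // yields "data:image/jpeg;base64,...."
  });
  // The action builds its own data: URL, so strip the prefix here.
  return dataUrl.slice(dataUrl.indexOf(",") + 1);
}

// Usage: const description = await analyzeImageAction(await fileToBase64(file));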
What you see in FluxGate
After calling any of these actions, open the Requests Explorer on the FluxGate dashboard. Each call appears as a row with:
- Feature tag (summarizer, chat-assistant, or image-analysis)
- User identity (name, email if provided)
- Model, tokens, cost, and latency
- A link to the full event detail
The Cost Breakdown → By Feature view accumulates spend per feature over time.