I’ve been building agents with Mastra lately, and one recurring intrusive thought I keep having is: should I just cut out Mastra and use AI SDK directly? Their basic APIs look very similar, and Mastra is built on top of several AI SDK features and concepts, after all. Mastra’s developer experience, however, is simply great from the start. You can start tinkering with an agent in Mastra Studio in minutes, and Mastra really starts to shine as your agent’s features get more complicated. It provides a full backend framework for you to plug into.
Let’s dive into some differences and neat features Mastra gives you over just AI SDK.
tl;dr
- AI SDK is a core toolkit. It gives you a set of unified APIs for calling different LLMs, streaming text, making tool calls, and building UIs. It’s very flexible, but that means you have to do some of the architecture yourself.
- Mastra is a full framework. It provides ready-to-go building blocks for quickly adding agent features like workflows, memory, RAG, and more on top of a model layer. It’s designed to work with AI SDK UI, among other frontend frameworks.
If I were starting an agent project, I’d use Mastra since it provides excellent DevEx + it’s compatible with practically any frontend you choose.
Quick comparison
| Capability | AI SDK | Mastra |
|---|---|---|
| Provider-agnostic API | ✅ | ✅ |
| Tools | ✅ | ✅ |
| Streaming | ✅ | ✅ |
| Workflows | ❌ (you orchestrate with your own code) | ✅ |
| Memory / RAG | ❌ (e.g. pass message history, call your own RAG) | ✅ |
| Human-in-the-loop | ❌ (e.g. pause and resume in your route) | ✅ |
| Local dev / playground | ❌ | ✅ |
| UI | AI SDK UI | Works with AI SDK UI, assistant-ui, and CopilotKit |
What AI SDK gives you
AI SDK is a set of core primitives that span an agent’s entire stack: AI SDK Core, AI SDK UI, and AI SDK RSC. They are intended to work together, but you can also use them independently. Many frontend agent frameworks like assistant-ui build compatibility with AI SDK out of the box.
AI SDK Core is a unified API for generating text, structured output, tool calls, and agent-style loops. Think of it as the “backend” of an agent. You get a single interface across many providers, so you can swap models without rewriting your integration code. With how fast the LLM landscape is evolving, a provider-agnostic layer is important for staying nimble.
Like many of Vercel’s projects, it’s built to work best with Vercel’s products. In this case, Vercel’s AI Gateway.
// With Gateway
import { generateText } from "ai";
const { text } = await generateText({
model: "anthropic/claude-sonnet-4.5",
prompt: "What is love?",
});
// Without Gateway
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
const { text } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: "What is love?",
});
You stay in control of how you orchestrate logic: you call generateText, streamText, or agent helpers and plug the results into your own routes and workflows.
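For instance, here’s the kind of route you’d write yourself. This is a minimal sketch assuming AI SDK v5 and a Next.js App Router endpoint; the /api/chat path lines up with the useChat example below.
// app/api/chat/route.ts
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // You own the orchestration: choose the model, transform messages,
  // and add tools or guards here however you like.
  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  // Return the stream in the shape useChat expects on the client.
  return result.toUIMessageStreamResponse();
}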
AI SDK UI is focused on the frontend side of things. It receives a standardized response from AI SDK Core and provides hooks like useChat, useCompletion, and useObject for building chat and generative UIs in React, Vue, Svelte, Angular, and SolidJS. Streaming, state, and wire-up are handled for you.
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';
export default function Page() {
const { messages, sendMessage, status } = useChat({
transport: new DefaultChatTransport({
api: '/api/chat',
}),
});
const [input, setInput] = useState('');
return (
<>
{messages.map(message => (
<div key={message.id}>
{message.role === 'user' ? 'User: ' : 'AI: '}
{message.parts.map((part, index) =>
part.type === 'text' ? <span key={index}>{part.text}</span> : null,
)}
</div>
))}
<form
onSubmit={e => {
e.preventDefault();
if (input.trim()) {
sendMessage({ text: input });
setInput('');
}
}}
>
<input
value={input}
onChange={e => setInput(e.target.value)}
disabled={status !== 'ready'}
placeholder="Say something..."
/>
<button type="submit" disabled={status !== 'ready'}>
Submit
</button>
</form>
</>
);
}
What Mastra adds on top
Mastra is a framework to help you quickly build agents, from prompts and tools all the way to memory infrastructure. The basic agent-building experience is similar to AI SDK: structured output, model routing, and tool calling. On top of that, Mastra adds:
- Memory — A Memory API so agents get the right context: conversation history, semantic search over past interactions, storage backend abstraction, and thread sharing. You scope context with resourceId (user/entity) and threadId (conversation), and tune retrieval with options like lastMessages and semanticRecall. With AI SDK you’d implement this yourself.
- Workflows — You define steps with createStep (input/output schemas and business logic), compose them with createWorkflow to define the execution flow, then run the workflow to execute the full sequence, with built-in support for suspension, resumption, and streaming results (see the sketch after this list).
- Streaming — An AI SDK v5+ message-compatible streaming format so users can see output as it’s being generated.
- Human-in-the-loop — Suspend an agent or workflow, wait for user input or approval, then resume. State is persisted so you can pause and resume later.
- RAG pipeline primitives — Abstracted verbs (.chunk(), .embed(), .upsert(), .query(), .rerank()) across document types (text, HTML, Markdown, JSON) and vector stores (e.g. pgvector, MongoDB, Astra, Cloudflare).
- Evals — Scorers to evaluate agent and workflow performance.
- Studio — A local dev environment to chat with agents, visualize state and memory, debug tool calls, and iterate on prompts. This may be one of the most compelling reasons to use Mastra since it offers everything you need to iterate on your agents.
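To make the workflow bullet concrete, here’s a rough sketch of defining and composing a step. The fetch-weather step and wttr.in call are borrowed from the tool example later in this post, and exact API details may differ across Mastra versions.
import { createStep, createWorkflow } from '@mastra/core/workflows';
import { z } from 'zod';

// One step: typed input/output plus the business logic
const fetchWeather = createStep({
  id: 'fetch-weather',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ weather: z.string() }),
  execute: async ({ inputData }) => {
    const response = await fetch(`https://wttr.in/${inputData.city}?format=3`);
    return { weather: await response.text() };
  },
});

// Compose the execution flow from steps, then commit it
export const weatherWorkflow = createWorkflow({
  id: 'weather-workflow',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ weather: z.string() }),
})
  .then(fetchWeather)
  .commit();
A committed workflow executes as a run, and runs are what you suspend and resume, which is where the human-in-the-loop support hooks in.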
Mastra treats AI SDK as a first-class integration, so you can keep using useChat() while Mastra handles the agent and workflow layer on the server. Any frontend UI framework that’s AI SDK compatible can also work with Mastra, since Mastra can stream messages in AI SDK-compatible formats.
Mastra even allows you to wrap your existing AI SDK setup with Mastra features.
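As a sketch of what that wiring can look like (the format option and response helper are my assumptions based on Mastra’s AI SDK v5 compatibility layer, and the exact names have shifted between versions, so verify against the current docs):
// app/api/chat/route.ts — hypothetical wiring
import { mastra } from '@/src/mastra';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const agent = mastra.getAgent('weatherAgent');

  // Ask the agent for an AI SDK-compatible stream that useChat can
  // consume directly. (The format option is an assumption; check the docs.)
  const stream = await agent.stream(messages, { format: 'aisdk' });
  return stream.toUIMessageStreamResponse();
}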
Comparisons
Defining a tool
Both use Zod for schemas (as does the rest of the AI ecosystem these days).
AI SDK uses a single tool() helper. Mastra uses createTool().
AI SDK
import { tool } from 'ai';
import { z } from 'zod';
export const weatherTool = tool({
description: 'Get the weather in a location',
inputSchema: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const response = await fetch(`https://wttr.in/${location}?format=3`);
const weather = await response.text();
return { weather };
},
});
Mastra
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
export const weatherTool = createTool({
id: 'weather-tool',
description: 'Fetches weather for a location',
inputSchema: z.object({
location: z.string().describe('The location to get the weather for'),
}),
outputSchema: z.object({
weather: z.string(),
}),
execute: async (inputData) => {
const { location } = inputData;
const response = await fetch(`https://wttr.in/${location}?format=3`);
const weather = await response.text();
return { weather };
},
});
Agent with tools
Here’s where the mental model diverges. With the AI SDK you run the loop yourself; with Mastra you create an agent and it runs the loop for you.
AI SDK
You call generateText with tools and a stopWhen condition. The SDK can run multiple steps until stopWhen is hit. You get back the final text and any toolCalls and results from the last step.
import { generateText, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { weatherTool } from '../tools/weather-tool';
const { text } = await generateText({
model: openai('gpt-4o'),
prompt: 'What is the weather in San Francisco?',
tools: { weatherTool },
stopWhen: stepCountIs(5),
});
return text;
Mastra
import { Agent } from '@mastra/core/agent';
import { weatherTool } from '../tools/weather-tool';
const weatherAgent = new Agent({
id: 'weather-agent',
name: 'Weather Agent',
instructions: 'You are a helpful weather assistant. Use the weather tool to fetch current weather.',
model: 'openai/gpt-5.1',
tools: { weatherTool },
});
// Run until the agent is done
const { text } = await weatherAgent.generate([...]);
// Or stream the same agent
const stream = await weatherAgent.stream([...]);
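To consume that stream on the server, something like this should work, assuming the result exposes a textStream async iterable as it does in recent Mastra versions:
// Print tokens as they arrive
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}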
Agent with memory
AI SDK doesn’t have a memory layer out of the box.
Mastra gives you a built-in Memory system.
You have your choice of storage adapter (e.g. libSQL, PostgreSQL, MongoDB, DynamoDB, Cloudflare D1), along with several different memory concepts:
- Message history - Does what it says on the box
- Working memory - Info about a user or task
- Semantic recall - Context across longer interactions
- Observational memory - Replacement for raw message history that selectively remembers only relevant details across conversations
import { Agent } from '@mastra/core/agent';
import { Memory } from '@mastra/memory';
import { LibSQLStore } from '@mastra/libsql';
const agent = new Agent({
id: 'chat-agent',
instructions: 'You are a helpful assistant.',
model: 'openai/gpt-4o-mini',
memory: new Memory({
storage: new LibSQLStore({ id: 'mastra-storage', url: 'file:./mastra.db' }),
}),
});
const { text } = await agent.generate('What did we discuss last time?', {
memory: { thread: 'conversation-abc-123', resource: 'user_123' },
});
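To tune retrieval with the lastMessages and semanticRecall options mentioned earlier, the knobs live on the Memory config. A sketch, noting that semantic recall also needs a vector store and embedder configured, which I’ve omitted here:
const memory = new Memory({
  storage: new LibSQLStore({ id: 'mastra-storage', url: 'file:./mastra.db' }),
  options: {
    lastMessages: 10, // include the 10 most recent messages
    semanticRecall: {
      topK: 3, // pull in the 3 most similar past messages
      messageRange: 2, // plus neighboring messages around each match
    },
  },
});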
When to use which
You can most definitely start off with AI SDK and add Mastra when you outgrow it, though you may want to just start with Mastra for the developer experience.
I kind of think of AI SDK as a low-level library like Hono, while Mastra brings more features and convention, like Adonis.
You may want to reach for just AI SDK when…
- You just want a clean, provider-agnostic way to call LLMs and stream responses.
- You want minimal abstraction and maximum flexibility. You may want to reduce dependencies or you have strong opinions about the architecture and infrastructure.
- You’re fully engaged with the Vercel ecosystem and use Next.js.
Consider Mastra when…
- You intend to build complex agents that need tools, multi-step reasoning, or human-in-the-loop.
- You want workflows with explicit steps, branching, or parallelism.
- You care about memory, RAG, evals, and observability in one stack, and you need to add those capabilities quickly.
- You want an easy to use agent development environment.
- You view Vercel lock-in as a risk.
Either way, both will help accelerate your agent development. It’s wild how incredibly fast you can get a production-ready POC going.