
Show HN: Mastra – Open-source JS agent framework, by the developers of Gatsby

Hi HN, we’re Sam, Shane, and Abhi, and we’re building Mastra (https://mastra.ai), an open-source JavaScript SDK for building agents on top of Vercel’s AI SDK.

You can start a Mastra project with `npm create mastra` and create workflow graphs that can suspend/resume, build a RAG pipeline and write evals, give agents memory, create multi-agent workflows, and view it all in a local playground.

Previously, we built Gatsby, the open-source React web framework. Later, we worked on an AI-powered CRM but it felt like we were having to roll all the AI bits (agentic workflows, evals, RAG) ourselves. We also noticed our friends building AI applications suffering from long iteration cycles: they were getting stuck debugging prompts, figuring out why their agents called (or didn’t call) tools, and writing lots of custom memory retrieval logic.

At some point we just looked at each other and were like, why aren't we trying to make this part easier, and decided to work on Mastra.

Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8

One thing we heard from folks is that seeing the input/output of every step, of every run of every workflow, is very useful. So we took XState and built a workflow graph primitive on top with OTel tracing. We wrote the APIs to make control flow explicit: `.step()` for branching, `.then()` for chaining, and `.after()` for merging. We also added `.suspend()`/`.resume()` for human-in-the-loop.
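
As a rough sketch of how those verbs compose (the `Workflow` constructor and step names here are placeholders, not the exact API):

    const wf = new Workflow({ name: "triage" }); // hypothetical setup

    wf.step(classify)      // entry step; opens the first branch
        .then(respond)     // chained: runs after classify
      .after(classify)     // pick up after classify again (branch/merge point)
        .then(escalate)
      .commit();           // finalize the graph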

We abstracted the main RAG verbs like `.chunk()`, `embed()`, `.upsert()`, `.query()`, and `rerank()` across document types and vector DBs. We shipped an eval runner with evals like completeness and relevance, plus the ability to write your own.
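
A minimal sketch of those verbs end to end (`doc`, `store`, the option names, and `queryVector` are illustrative placeholders, not the documented signatures):

    const chunks = await doc.chunk({ size: 512 });                    // split the document
    const embeddings = await embed(chunks, { model });                // embed each chunk
    await store.upsert({ index: "docs", embeddings });                // write to the vector DB
    const hits = await store.query({ vector: queryVector, topK: 5 }); // nearest-neighbor search
    const ranked = await rerank(hits, question);                      // reorder by relevance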

Then we read the MemGPT paper and implemented agent memory on top of AI SDK with a `lastMessages` key, `topK` retrieval, and a `messageRange` for surrounding context (think `grep -C`).
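
Roughly, the shape of that config (the key names are from above; the exact nesting is approximate):

    import { Agent } from "@mastra/core/agent";

    const agent = new Agent({
      name: "support-agent",
      instructions: "...",
      memory: {
        lastMessages: 20, // always include the 20 most recent turns
        topK: 3,          // retrieve the 3 most similar older messages
        messageRange: 2,  // plus 2 neighbors around each hit, like grep -C 2
      },
    });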

But we still weren’t sure whether our agents were behaving as expected, so we built a local dev playground that lets you curl agents/workflows, chat with agents, view evals and traces across runs, and iterate on prompts with an assistant. The playground uses a local storage layer powered by libsql (thanks Turso team!) and runs on localhost with `npm run dev` (no Docker).

Mastra agents originally ran inside a Next.js app. But we noticed that AI teams’ development was increasingly decoupled from the rest of their organization, so we built Mastra so that you can also run it as a standalone endpoint or service.

Some things people have been building so far: one user automates support for an iOS app he owns with tens of thousands of paying users. Another bundled Mastra inside an Electron app that ingests aerospace PDFs and outputs CAD diagrams. Another is building WhatsApp bots that let you chat with objects like your house.

We did (for now) adopt an Elastic v2 license. The agent space is pretty new, and we wanted to let users do whatever they want with Mastra but prevent, eg, AWS from grabbing it.

If you want to get started:

- On npm: `npm create mastra@latest`
- GitHub repo: https://github.com/mastra-ai/mastra
- Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8
- Our website homepage: https://mastra.ai (includes some nice diagrams and code samples on agents, RAG, and links to examples)
- And our docs: https://mastra.ai/docs

Excited to share Mastra with everyone here – let us know what you think!

The example from the landing page does not exactly spark joy:

    testWorkflow
     .step(llm)
       .then(decider)
       .then(agentOne)
       .then(workflow)
     .after(decider)
       .then(agentTwo)
       .then(workflow)
      .commit();

At first glance, this looks like a very awkward way to represent the graph from the picture. And this is just a simple "workflow" (the structure of the graph does not depend on the results of the execution), not an agent.

a day ago · Palmik

I got the same feeling when I first looked at the LangChain documentation, back when I wanted to start tinkering with LLM apps.

I built my own TypeScript AI platform https://typedai.dev with an extensive feature list where I've kept iterating on what I find the most ergonomic way to develop, using standard constructs as much as possible. I've coded enough Java streams, RxJS chains, and JavaScript callbacks and Promise chains to know what kind of code I like to read and debug.

I was having a peek at xstate, but after I came across https://docs.dbos.dev/ here recently, I'm pretty sure that's the path I'll go down for durable execution, to keep building everything with a simple programming model.

a day ago · campers

I'm in a similar camp. I checked LangChain and others and ultimately I was like, well, it's not really doing much, is it? Just adding abstraction on top of what is essentially basic loops and conditional statements. And tbh it feels like in nearly every case I'll never be using them the same way, such that some abstraction will help over just making some function helpers myself.

I don't think from first principles there's any broad framework that makes sense to be honest. I'll reach for a specific vector DB, or logging library, but beyond that you'll never convince me your "query-builder" API is going to make me build a better thing when I have the full power of TypeScript already.

Especially when these products start throwing in proprietary features and add-ons with fancy names on top.

a day ago · nwienert

TypedAI looks solid, was not aware of it! Bookmarked for further research.

Personally I am not fond of the decorator approach and decided to not use it in pgflow (my soon-to-be-released workflow orchestration engine on top of Postgres).

1. I wanted it to be simple to reason about and explicit (being more verbose as a trade-off)

2. There are some issues with supporting decorators (Svelte https://github.com/sveltejs/svelte/issues/11502, and a lot of others).

3. I decided to only support directed acyclic graphs (no loops!) in order to promote simplicity. Will be supporting conditional recursive sub-workflows to provide a way to repeat some steps and be able to branch.

Cheers!

a day ago · jumski

Can dbos work with CF durable objects?

a day ago · CMCDragonkai

Thanks! The conditional `when` clauses live on the steps, rather than being represented in the workflow, and in fact when we built this for an example, the last step being called depended on the results of the previous two steps.

How would you simplify this?

a day ago · calcsam

I think the problem is that a 'fluent' chain of calls already expresses a sequence, so the way that 'after' resets the context to start a new branch feels very awkward ... like a GOTO or something

It's telling that the example relies on arbitrary indentation (which a linter will get rid of) to have some hope of comprehending it

Possibly this was all motivated by a desire to avoid nested structures above all?

But for a branching graph a nested structure is more natural. It'd also probably be nicer if the methods were on the task nodes instead of on the workflow, then you could avoid the 'step'/'then' distinction and have something like:

e.g.

    testWorkflow(
        llm
        .then(decider)
        .then(
            agentOne.then(workflow),
            agentTwo.then(workflow),
        )
    )

a day ago · anentropic

You’re right that the syntax was inspired by the desire to avoid nested structures. But the syntax here is interesting as well and fairly readable. Worth thinking about!

a day ago · calcsam

that example syntax is loosely based on CDK code for AWS Step Functions, since I had to write some recently

essentially you're building a DAG so it could be worth checking some other APIs which do a similar thing for inspiration

e.g. it looks like in Airflow you could write it as:

    chain(llm, decider, [agentOne, agentTwo], workflow)

https://airflow.apache.org/docs/apache-airflow/stable/core-c...

7 hours ago · anentropic

I think it is just easier to comprehend if the edges/dependencies are explicit (as an array for example).

a day ago · jumski

We have a ticket to allow this actually!

a day ago · calcsam

Yeah, I also found this a bit unintuitive at first. I’m building a workflow engine myself (https://pgflow.dev/pgflow, not released yet), and I’ve been thinking a lot about how to model the DSL for the graph and decided to make dependencies explicit and use method chaining for expansion with other step types.

Here's how it would look in my system:

  new Flow<string>()  
    .step("llm", llmStepHandler)  
    .step("decider", ["llm"], deciderStepHandler)  
    .step("agentOne", ["decider"], agentOneStepHandler)  
    .step("agentTwo", ["decider"], agentTwoStepHandler)  
    .step("workflow", ["agentOne", "agentTwo"], workflowStepHandler);  

Mine is a DAG, so more constrained than the cyclic graph Mastra supports (if I understand correctly).

a day ago · jumski

I knew it would be bad when I saw "by the developers of Gatsby", but this is pure comedy.

jQuery plugin for LLMs.

a day ago · zeroq

Very excited about Mastra! We have a number of Agent-ic things we'll be building at ElectricSQL and Mastra looks like a breath of fresh air.

Also the team is top-notch — Sam was my co-founder at Gatsby, I worked closely with Shane and Abhi, and I have a ton of confidence in their product & engineering abilities.

2 days ago · kylemathews

Why not use Elixir for agents as Electric is already heavily invested? It’s a much better fit than JS.

2 days ago · cpursley

Gretchen, stop trying to make Elixir happen.

2 days ago · mvf4z7

I think it is actually a solid choice given the startup ecosystem and generally easy async nature.

2 days ago · funerr

Abhi is one of the best engineers I know. I’m excited that he and his colleagues are tackling this problem.

a day ago · doctorpangloss

This looks awesome! Quick question, are there plans to support SSE MCP servers? I see Stdio [0] is supported and I can always run a proxy, but SSE would be awesome.

[0] https://mastra.ai/docs/reference/tools/client

2 days ago · joshstrange

we have a tutorial that covers this!

https://docs.mcp.run/tutorials/mcpx-mastra-ts

you don't even need to use SSE, as mcp.run brings the tools directly to your agent, in-process, as secure wasm modules.

mcp.run does have SSE support for all its servlet tools in the registry though too.

2 days ago · nilslice

Hey! Glad to hear you're excited about it! Yes, we're currently working on improving our MCP support in general - we'll have more to share soon, but part of that is supporting SSE servers directly

2 days ago · tybaa

Very cool. Like I said, I can make it work with Stdio, but I have an SSE MCP proxy I wrote to combine multiple MCP servers (just to make plugging all my tools into a new client easier to test). That said, after looking at the docs I think I'll be tempted to move my tools in directly, but I'll probably keep them behind MCP for portability.

2 days ago · joshstrange

Oh nice, did you write your own proxy or are you using something like https://www.npmjs.com/package/mcp-proxy ?

2 days ago · tybaa

I have used `mcp-proxy` but (afaik) you can only use it 1-to-1 and I wanted an N-to-1 proxy so that instead of configuring all my MCP servers in the multiple clients I've tested out I could just add 1 server and pull in everything.

I found `mcp-proxy-server` [0] which seemed like it would do what I want but I ran into multiple problems. I added some minor debug logging to it and the ball sort of rolled downhill from there. Now it's more my code than what was there originally but I have tool proxying working for multiple clients (respecting sessionIds, etc) and I think I've solved most all the issues I've run into and added features like optional tool prefixing so there isn't overlap between MCP servers.

Given what I know now, I don't think N-to-1 is quite as useful as I thought. Or rather, it really depends on your "client". If you can toggle on/off tools in your client then it's not a big problem, but sometimes you don't want "all" the tools, and if your client only allows toggling per MCP server then you will have an issue.

I love the ideas of workflows and how you have defined agents. I think my current issue is almost too many tools, and the LLM sometimes gets confused over which ones to use. I'm especially thrilled with the HTTP endpoints you expose for the agents. My main MCP server (my custom tools, vs the third-party ones) exposes an HTTP GUI for calling the tools (faster iteration vs trying it through LLMs), and I've been using that and 3rd-party chat clients (LibreChat and OpenWebUI) as my "LLM testing" platform (because I wasn't aware of better options), but neither of those tools lets you "re-expose" the agents via an API.

All in all I'm coming to the conclusion that 90% of MCP servers out there are really cool for seeing what's possible, but it's probably best to write your own tools/MCP, since most MCP servers are just thin wrappers around an API. Also, it's so easy to create an MCP server that they are popping up all over the place, often of low quality (don't fully implement the API, take shortcuts for the author's use-case, etc). Using LLMs to write the "glue" code from API->Tool is fairly minor and I think is worth "owning". To sum that all up: I think my usage of 3rd-party MCP servers is going to trend towards 0 as I "assimilate" MCP servers into my own codebase for more control, but I really like MCP as a way to vend tools to various different LLM clients/tools.

[0] https://github.com/adamwattis/mcp-proxy-server

2 days ago · joshstrange

Thanks for sharing! It's so helpful to hear real world experiences like this. Would you be interested in meeting up on a call sometime? I'd love to chat about how you're using MCP to help inform how we can make all of this easier for folks. We're actively thinking about our APIs for tool use and MCP right now.

2 days ago · tybaa

I appreciate the offer but I think you'll probably find someone better to talk to here in the comments.

MCP is super cool and I've loved playing with it, but playing with it is all I'm doing. I'm working on some tools to use in my $dayJob and also just using it as an excuse to learn about LLMs and play with new tech. Most of my work is writing tools that connect to our distributed fleet of servers to collect data, run commands, etc. My goal is to build a SlackOps-type bot that can provide extra context about errors we get in Slack (pull the latest commits/PRs around that code, link to the currently deployed version, provide all the logs for the request that threw an error, check system stats, etc). And while I have tools written to do all of that, I'm still working on bringing it all together in something more than a bot I can invoke from Slack to make MCP calls.

All that to say, I'm not a professional user of MCP/Mastra and my opinion is probably not one you want shaping your framework.

2 days ago · joshstrange

No worries! But I am definitely interested in chatting still - that you've tried it in multiple ways, ran into pain points, and overcame those in your own ways is super interesting and valuable. Playing around is how everyone starts and this "agents with tool use in prod" game is still very new. These APIs should work well and make sense for folks who are just getting into it as well folks who have been around the block. If you change your mind let me know! Would love to chat

2 days ago · tybaa

Happy Mastra user here! It strikes the right balance between letting me build with higher-level abstractions and providing lower-level controls when needed. I looked at a handful of other frameworks before getting started, and the clarity & ease of use of Mastra stood out. Nice work.

2 days ago · alanwells

thank you!

2 days ago · calcsam

I don’t really understand agents. I just don’t get why we need to pretend we have multiple personalities, especially when they’re all using the same model.

Can anyone please give me a use case that couldn't be solved with a single API call to a modern LLM (capable of multi-step planning/reasoning) and a proper prompt?

Or is this really just about building the prompt, and giving the LLM closer guidance by splitting into multiple calls?

I’m specifically not asking about function calling.

2 days ago · brap

If you ignore the word "agent" and autocomplete it in your mind to "step", things will make more sense.

Here is an example-- I highlight physical books as I read them with a red pen. Sometimes my highlights are underlines, sometimes I bracket relevant text. I also write some comments in the margins.

I want to photograph relevant pages and get the highlights and my comments into plain text. If I send an image of a highlighted/commented page to ChatGPT and ask to get everything into plain text, it doesn't work. It's just not smart enough to do it in one prompt. So, you have to do it in steps. First you ask for the comments. Then for underlined highlights. Then for bracketed highlights. Then you merge the output. Empirically, this produces much better results. (This is a really simple example; but imagine you add summarization or something, then the steps feed into each other)
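
In code, the "steps" are nothing exotic. A sketch with a hypothetical `llm(prompt, image?)` helper:

    // Each focused prompt does one job; the merge step sees all the outputs.
    const comments   = await llm("Transcribe the margin comments.", pageImage);
    const underlines = await llm("Transcribe the underlined passages.", pageImage);
    const brackets   = await llm("Transcribe the bracketed passages.", pageImage);
    const notes = await llm(
      `Merge these into one set of notes:\n${comments}\n${underlines}\n${brackets}`
    );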

As these things get complicated, you start bumping into repeated problems (like understanding what's happening between each step, tweaking prompts, etc.) Having a library with some nice tooling can help with those. It's not especially magical and nothing you couldn't do yourself. But you also could write Datadog or Splunk yourself. It's just convenient not to.

The internet decided to call these types of programs agents, which confuses engineers like you (and me) who tend to think concretely. But if you get past that word, and maybe write an example app or something, I promise these things will make sense.

2 days ago · coffeemug

To add some color to this:

Anthropic does a good job of breaking down some common architecture around using these components [1] (good outline of this if you prefer video [2]).

"Agent" is definitely an overloaded term - the best framing of this I've seen is aligns more closely with the Anthropic definition. Specifically, an "agent" is a GenAI system that dynamically identifies the tasks ("steps" from the parent comment) without having to be instructed that those are the steps. There are obvious parallels to the reasoning capabilities that we've seen released in the latest cut of the foundation models.

So for example, the "Agent" would first build a plan for how to address the query, dynamically farm out the steps in that plan to other LLM calls, and then evaluate execution for correctness/success.

[1] https://www.anthropic.com/research/building-effective-agents [2] https://www.youtube.com/watch?v=pGdZ2SnrKFU

2 days ago · fryz

This sums up the range: from multiple LLM calls to build smart features, to letting the LLM decide what to do next. I think you can go very far with the former, but the latter is more autonomous in unconstrained environments (like chatting with a human, etc.).

2 days ago · eric-burel

https://aider.chat/2024/09/26/architect.html

"Aider now has experimental support for using two models to complete each coding task:

An Architect model is asked to describe how to solve the coding problem.

An Editor model is given the Architect’s solution and asked to produce specific code editing instructions to apply those changes to existing source files.

Splitting up “code reasoning” and “code editing” in this manner has produced SOTA results on aider’s code editing benchmark. Using o1-preview as the Architect with either DeepSeek or o1-mini as the Editor produced the SOTA score of 85%. Using the Architect/Editor approach also significantly improved the benchmark scores of many models, compared to their previous “solo” baseline scores (striped bars)."

In particular, recent discord chat suggests that o3m is the most effective architect and Claude Sonnet is the most effective code editor.

2 days ago · bravura

Next up is to have a Senior Editor and Editor pair :)

a day ago · hassleblad23

I don't get it either. Watching implementations on YouTube etc., it primarily feels like a load of verbiage trying to carve out a sub-industry, but the meat on the bone just seems to be defining discrete units of AI actions that can be chained into workflows that interact with non-AI services.

2 days ago · weego

> defining discrete units of AI actions that can be chained into workflows that interact with non-AI services.

You got it. But that is the interesting part! To make AI useful beyond basic content generation in a chat context, you need interaction with the outside world. And you may need iterative workflows that can spawn more work based on the output of those interactions. The focus on Agents as personas is a tangent to the core use case. We could just call this stuff "AI Workflow Orchestration" or something ... and it would remain pretty useful!

2 days ago · jacobr1

I won't trust an agent with anything by itself in its current state, though.

2 days ago · karn97

AI seems to forget more things as the context window grows. Agents keep scope local and focused, so you can get better/faster results, or use models trained on specific tasks.

Just like in real life, there are generalists and experts. Depending on your task you might prefer an expert over a generalist; think e.g. brain surgery versus "summarize this text".

a day ago · ToJans

I don't work in prompt engineering, but my partner does, and she tells me there's plenty of need for agents in cases where you want some technology that goes and seeks things on the live web, then comes back so you can make sense of the found data with the LLM and pre-written prompts that use that data as variables, and then possibly goes back out to the web if the task remains unsolved.

2 days ago · 2pointsomone

Can't that be solved with regular workflow tools and prompts? Is that what an agent is, essentially?

Or is an agent a collection of prompts with a limited set of available tools?

2 days ago · dimgl

I think the agent part is deciding how to navigate the web on its own, and, once it is convinced it found what it wanted (without you having specified that deterministically), coming back to work with your prompts. You can't really hard-code this logic into a workflow.

a day ago · 2pointsomone

One of the key limitations of even state-of-the-art LLMs is that their coherence and usefulness tend to degrade as the context window grows. When tackling complex workflows, such as customer support automation or code review pipelines, breaking the process into smaller, well-defined tasks allows the model to operate with more relevant and focused context at each step, improving reliability.

Additionally, in self-hosted environments, using an agent-based approach can be more cost-effective. Simpler or less computationally intensive tasks can be offloaded to smaller models, which not only reduces costs but also improves response times.
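
For instance, a trivial router might keep the big model for reasoning-heavy steps (the model names here are placeholders):

    // Illustrative only: route each step to a model by rough complexity.
    type Step = { task: string; complexity: "low" | "high" };

    function pickModel(step: Step): string {
      return step.complexity === "low"
        ? "llama-3.1-8b" // small, cheap model for extraction/formatting
        : "gpt-4o";      // larger model for planning and reasoning
    }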

That being said, this approach is most effective when dealing with structured workflows that can be logically decomposed. In more open-ended tasks, such as "build me an app," the results can be inconsistent unless the task is well-scoped or has extensive precedent (e.g., generating a simple Pong clone). In such cases, additional oversight and iterative refinement are often necessary.

2 days ago · blainm

One way to think about it is job orchestration. You end up with some kind of DAG of work to execute. If all the work you are doing is based on context from the initiation of the workflow, then theoretically you could do everything in a single prompt. But it's more interesting when there is some kind of real-world interaction, potentially multiple: a web search, executing code, calling an API. Then you take action based on the result of them, which in turn might trigger another decision to take some other action, iteratively, and potentially branching.

2 days ago · jacobr1

Modularity. We could put all the code in a single function, it is possible, but we prefer to organize it differently to make it easier to develop and reason about. Agents are similar.

2 days ago · andrewmutz

Without checking out this particular framework: the word is sometimes overloaded with that meaning (LLM personality), but in software engineering in general, "agent" generally means something with its own inner loop and branching logic (agent as in autonomy). It's a necessary abstraction when you compose multiple workflows together under the same LLM interface: things like which flow to run next, edge-case handling for each of them, etc.

2 days ago · nsonha

"By the developers of Gatsby" is a minus, not a plus; it makes me think this is going to be the next abandonware.

2 days ago · epolanski

Surprised this comment is not higher. Gatsby was one of the worst technologies I have worked with in my long career of working with various JS libraries and frameworks. I'm sure the team is smart and capable, but I would not be advertising their work on Gatsby.

2 days ago · paultannenbaum

Same experience, I had the exact same thought. I was new to React and had to make a website… big mistake. I wasted so many hours untangling the regex and hacks holding Gatsby together over the next few years, until that website was retired.

2 days ago · christina97

Gatsby never made sense to me. Weird design decisions I couldn’t find any plausible reason for. As soon as Next.js became capable of doing SSG I convinced my team to abandon Gatsby. Definitely a minus, sorry.

2 days ago · squillion

Gatsby was one of the first static React frameworks; now you have things like Next.js, Remix, Astro, etc... I don't think abandonware is fair, that's just the way software goes.

2 days ago · user9999999999

The Gatsby team made a lot of promises upon which they didn't follow through. Not a great way to build confidence in your next big project.

2 days ago · mplewis

… such as?

2 days ago · DSchau

The character Gatsby didn't function very well either (as far as being a successful person goes, I quite liked the book and he functioned well as a character) :)

However, the Gatsby CMS had a couple of things that were really interesting about it - especially runtime type safety through GraphQL and doing headless WordPress.

2 days ago · benatkin

Interesting, because GQL was the most divisive thing about Gatsby.

a day ago · epolanski

I don't want to be that person, but there are hundreds of other similar frameworks doing more or less the same thing. Do you know why? Because writing a framework that orchestrates a number of tools with a model is the easy part. In fact, most of the time you don't even need a framework. All of these frameworks focus on the trivial, and you can tell that simply by browsing the examples section.

This is like 5% of the work. The developer needs to fill the other 95% which involves a lot more things that are strictly outside of scope of the framework.

2 days ago · _pdp_

Some people don't like frameworks. Some people do. We have a little bit of experience building frameworks, so we figured we'd build a good one.

2 days ago · calcsam

I love frameworks :)

2 days ago · santa_boy

Couldn't agree more. This also looks mostly like a TypeScript "port" of LangGraph, and I say "port" because LangGraph has a TS framework already.

2 days ago · incanspyder

True. That's the reason I see a lot of people dropping similar frameworks like LangChain recently: https://medium.com/thoughts-on-machine-learning/drop-langcha...

2 days ago · fsndz

I was using the Vercel AI SDK for my production app and it was such a bad experience that I eventually went with a native implementation, and tbh it was not much work thanks to Cursor. Problems I faced: too many bugs (just browse their GitHub repo to get an idea), and the UI side also had suboptimal performance based on how they implemented hooks.

2 days ago · jerrygoyal

vercel's whole shtick is to make money off of dumb js devs who do not know better. i think they pay far too much attention to how things look compared to how things work. but hey, they made millions, possibly billions off of those js devs so who is to blame them.

2 days ago · ilrwbwrkhv

I agree, and it feels like JS is just the wrong runtime for agents. Really languages that can model state in sane ways and have a good concurrency story like Elixir make much more sense.

And here’s a fun exercise: ask Claude via Cursor or Perplexity with R1 to create a basic agentic framework for you in your language of choice on top of Instructor.

2 days ago · cpursley

> good concurrency story like Elixir make much more sense

Agree, that's why I've been building this: https://github.com/agentjido/jido

2 days ago · mikehostetler

I was an Elixir virgin until 5 minutes ago. From a quick glance, this language seems perfect for agent orchestration.

Project looks great, will follow & learn.

2 days ago · MattDaEskimo

It's less about the language syntax and more about the capabilities of the underlying Erlang runtime. There's also Gleam on top of Erlang if you like stronger typing (gleam.run).

a day ago · cpursley

    > Really languages that can model state in sane ways and have a good concurrency story like Elixir make much more sense.

Can you expand on this? Curious why JS state modelling falls short here and what's wrong with the concurrency model in JS for agents.

2 days ago · CharlieDigital

For one, NodeJS doesn't have concurrency. It's a single-threaded event loop.

2 days ago · dartos

It has concurrency with Promise; it doesn't have parallelism.
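
For example (with a made-up `callModel` helper), three I/O-bound requests can be in flight at once on the single thread:

    // Concurrency without parallelism: the calls interleave on one event loop
    // while each awaits network I/O.
    const [a, b, c] = await Promise.all([
      callModel("gpt-4o", prompt),
      callModel("claude-3-5-sonnet", prompt),
      callModel("gemini-1.5-pro", prompt),
    ]);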

2 days ago · CharlieDigital

And these agents are all network I/O bound by the model services so a lot of use cases don't need threading.

I would argue that Python is the overrated language when it comes to building agents. Just because it's the language of choice for training models doesn't mean it should be for building apps against them.

The DX TypeScript brings to these types of applications is nice.

a day ago · cjonas

    > The DX TypeScript brings to these types of applications is nice.

Ironically, it only gets halfway there.

What I've found is that teams that want TS probably should just move up to C#; they are close enough [0]. The main thing is that once you start to get serious with your backend API, then data integrity matters. TS types disappear at runtime and it's just JS. So you need a Zod or Valibot to validate the incoming data. Then your API starts getting bigger and you want to generate OpenAPI for your frontend. Now your fast and easy Node/Express app is looking a lot like Spring or .NET...without the headroom and perf...the irony.

[0] https://github.com/CharlieDigital/js-ts-csharp
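
To make the runtime-validation point concrete, a minimal sketch with Zod (the schema and Express-style route are invented):

    import { z } from "zod";

    // TS types vanish at compile time; this schema actually checks the wire data.
    const GenerateBody = z.object({
      prompt: z.string().min(1),
      maxTokens: z.number().int().positive().optional(),
    });

    app.post("/generate", (req, res) => {
      const parsed = GenerateBody.safeParse(req.body);
      if (!parsed.success) return res.status(400).json(parsed.error.flatten());
      // parsed.data is now both statically typed and runtime-validated
    });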

a day ago · CharlieDigital

No real concurrency. No scheduling. If you are not working with a lot of IO then JS would be a poor choice. But in this case we're talking about network calls, so definitely IO. The setTimeout, Promise, and request methods will do their job.

a day ago · holoduke

You could describe all frontend JS frameworks the same way: you spend 95% of time on content and mechanics of your webapp, while the framework provides the easy 5%.

2 days ago · fullstackwife

I think most JS frameworks save more than 5% of the effort for developers compared to writing raw JS. Especially when you include the ecosystem around those frameworks

2 days ago · chipgap98

Congrats on launching. I've noticed that moving prompts between different LLM providers without edits degrades performance. I'm wondering if you guys have noticed how developers do these "translations"; maybe your eval framework has data on best practices.

2 days ago · Gakho

Yeah, this is something we've heard as well. No particular feature right now but we did ship an agent in local dev to help people improve their prompts.

2 days ago · calcsam

I'm wondering since there seem to be a lot of frameworks/websites that support evals; even OpenAI has evals.

Do you think that a lot of these components like observability and evals will eventually be consumed by either providers (like OpenAI) or an orchestration framework like Mastra (when using multiple providers, though even if you're using just one provider for many tasks I can see it belonging to the orchestration framework)?

2 days ago · Gakho

I could be wrong but don't think OpenAI wants to be opinionated about that, except maybe the OpenAI solutions engineers :)

2 days ago · calcsam

Congrats! Side question - is the website open source as well? I'd like to "borrow" the nav bar code. I looked on GitHub and couldn't find it in the repos and 300+ branches. Cheers!

a day ago · dstroot

I basically learned everything about how agents work by using Mastra's framework and going through their documentation. The founders are also super hands-on and love to help!

2 days ago · eliotthehacker

This looks really great! How do you make money? Do you charge for deploying these to your platform? I couldn't find anything on pricing.

2 days ago · netcraft

If you watch the demo video you will see the cloud platform we are building at the end. Right now it’s in beta.

2 days ago · calcsam

Congrats on launching! Curious how early the Mastra team thinks people should be thinking about evals and setting up a pipeline for them.

2 days ago · aranibatta

We tend to recommend folks spend a few hours writing evals after they spend a couple weeks prototyping. Then they get a sense of how valuable evals are for their use-case.

We think about evals a bit like perf monitoring -- it's good to have RUM but also good to have some synthetic stuff in your CI. So if you do find them valuable, useful to do both.

2 days ago · calcsam

Congrats, looks promising! 1. Is it possible to create custom endpoints? I see that several endpoints are created when running “mastra dev”.

2. Related to previous question, since this is node based, is it possible to support websockets?

a day ago · lmrl

1. what are you wanting to create?

currently workflow and agent endpoints are created at `/api/workflows/workflow-id` and `/api/agents/agent-id/` respectively (see the sketch below)

2. we are thinking about it -- curious what you'd use it for?
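
For illustration, a call against the agent endpoint might look like this (the port, `/generate` suffix, and payload shape are guesses, not the documented API):

    const res = await fetch("http://localhost:4111/api/agents/my-agent/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: [{ role: "user", content: "Hello" }] }),
    });
    console.log(await res.json());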

a day ago · calcsam

Are there any plans to add automatic retries for individual steps (with configurable max attempts and backoff strategy)?

2 days ago · monideas

Impressive. Have you seen any success with Mastra being used to build voice agents? Our company has been experimenting with VAPI, which just launched a workflow builder into open beta (https://docs.vapi.ai/workflows), but it has a lot of rough edges.

2 days ago · harliem

We're just starting to do that and have a few TTS providers: ElevenLabs, OpenAI, PlayAI.

We hear a lot from people who are outgrowing the voice agent platforms and moving to something like pipecat (in Python), and we'd love to be the JS option.

2 days ago · calcsam

Is any of the voice stuff in any way 'natural' sounding? I'd love to be able to recreate the ChatGPT app voice experience in my own app with a custom agent, but it all just sounds robotic and crap.

a day ago · jmkni

If you'd like, feel free to reach out to me via email with your requirements and we can get a conversation going. I've built a few voice agent systems in both python and JavaScript and would love to hear about what issues you're running into. Might be able to build what you need.

2 days ago · soulofmischief

Does Mastra support libraries of tools for agents like toolhouse.ai or https://github.com/transitive-bullshit/agentic ?

2 days ago · orliesaurus

Agentic's tool library _should_ also work for Mastra via its AI SDK adapter.

(We haven't tested this, so if you do try let us know if you see quirks!)

a day ago · calcsam

What about Toolhouse and/or composeio?

15 hours ago · orliesaurus

Why is it on top of Vercel’s platform?

2 days ago · davedx

It looks like they're using the Vercel AI SDK, which really isn't the Vercel platform and doesn't have anything to do with the rest of Vercel. It's actually quite nice and full-featured.

2 days ago · netcraft

It’s not. It’s on top of AI SDK, which is a popular open source library maintained by Vercel.

2 days ago · calcsam

So the Vercel AI SDK is not tied to the Vercel platform and is "just" a JS library that points to various LLMs? Is there any promise/plan/etc to keep it that way?

2 days ago · tomhallett

(I work at Vercel) Yes, it will continue to be an MIT-licensed open-source library to simplify building AI apps.

2 days ago · leerob

i am very long on TS as the future of agent applications. nice work team

2 days ago · dhorthy

thanks!!

2 days ago · calcsam

This looks really nice. We've been considering developing something very similar in-house. Are you guys looking at supporting MLC Web LLM, or some other local models?

2 days ago · _1

Yup! We rely on the AI SDK for model routing, and they have an Ollama provider, which will handle pretty much any local model.

2 days ago · calcsam

Interested to learn more about the PDF -> CAD project built on Mastra. Can you share a link?

a day ago · cshimmin

Got excited, was hoping to see a repository of Go Agents.

a day ago · dikaio

Do the workflows support voice-to-voice models like OpenAI's Realtime? If something like that exists, I'd be curious.

2 days ago · fnikacevic

Congrats! This is exactly what the AI world needs. I'm thinking about using Mastra for a class I'm working on with AI Agents.

2 days ago · realmikebernico

So an AI Mastra Class?

2 days ago · ash_091

that's awesome!

2 days ago · calcsam

I thought Kyle Mathews was the creator of Gatsby

2 days ago · 5Qn8mNbc2FNCiVV

Kyle started the project, I started helping pretty shortly thereafter, then he and I cofounded the company together. Kyle's working on ElectricSQL now but is using us, we're doing a meetup together next month, etc.

2 days ago · calcsam

I put the "creators" bit in the title because I thought readers would find it interesting. Sorry if that was not-quite-right! I've turned them into developers now.

2 days ago · dang

Neat, I’m going to use this

a day ago · tobyhinloopen

awesome! let us know how it goes

a day ago · calcsam

Congrats guys! really excited to try this out!

2 days ago · asati

Super excited to try out the new agent memory features

2 days ago · levensti

interesting to contrast the recent memory releases

- https://mastra.ai/docs/agents/01-agent-memory

- https://blog.langchain.dev/langmem-sdk-launch/

- https://help.getzep.com/concepts#adding-memory

not sure where all this is leading yet but glad people are exploring.

2 days ago · swyx

100%, agreed; we saw the langmem stuff last night.

imho getting some sort of hierarchical memory is conceptually fairly straightforward, the tricky part is having the storage and vector db pieces well integrated so that the apis are clean

2 days ago · calcsam

let us know what you think!

2 days ago · calcsam

Any timeline for python?

2 days ago · gregpr07

Not planning on it — we think frameworks should be single-language

2 days ago · calcsam

"You may not provide the software to third parties as a hosted or managed service" - The Elastic v2 license isn't actually open source like your title mentions: "Open-source JS agent framework"

https://github.com/mastra-ai/mastra/blob/main/LICENSE

2 days ago · fuddle

I mentioned that in the comment. We’re using Elastic v2 for now because we want users to be able to do anything with us, but protect from eg AWS

2 days ago · calcsam

If the license isn't open source, then the SDK shouldn't be labeled as open source.

2 days ago · fuddle

So it's a lie.

2 days ago · Tomte

> Mastra uses the Vercel AI SDK

It started off wrong.

2 days ago · delduca

Care to elaborate? I’ve never used it but have heard good things from colleagues who have.

2 days ago · jcheng

Lock in.

2 days ago · delduca

What lock in?

I use their AI SDK, but never touch vercel servers. It's just a unified interface.

2 days ago · SparkyMcUnicorn

The SDK is the lock in.

a day ago · delduca

Same as any other open source framework or library.

Calling that "lock in" is a stretch, but you're free to write everything from scratch if that's the way you roll.

11 hours ago · SparkyMcUnicorn

Bamfs

2 days ago · jobryan

lol thanks

2 days ago · calcsam

Very interesting set of abstractions that address lots of the pain points when building agents. Also, the team is super eager to help out!

2 days ago · pablodecm

thank you!

2 days ago · calcsam

You're awesome guys! I had so many problems with LangChain and am very happy since switching to Mastra.

2 days ago · yovboy

that sus account with no activity until now

2 days ago · ge96

that's great to hear!!

2 days ago · calcsam

A TypeScript-first AI framework is something that has been missing. How do you work with the AI SDK?

2 days ago · bobremeika

We originally were wrapping AI SDK, but that confused people who wanted to use both, so we decided to make the API more explicit, eg:

    import { Agent } from "@mastra/core/agent";
    import { openai } from "@ai-sdk/openai";

    export const myAgent = new Agent({
      name: "My Agent",
      instructions: "You are a helpful assistant.",
      model: openai("gpt-4o-mini"),
    });

2 days ago · calcsam

https://typedai.dev is another full-featured one I've built, with a web UI, multi-user support, code-editing agents, and a CodeAct autonomous agent.

a day ago · campers

Mine is written in TypeScript and I still think it's more ergonomic than anything else I'm seeing in the wild. Maybe there's finally an appetite for this stuff and I should release it. The Mastra dashboard looks pretty nice, might take some notes from it.