Hi y'all. Love the idea and congratulations on your launch. I've used [n8n](https://github.com/n8n-io/n8n) for similar use cases in the past. Any differences in Sim Studio that you'd like to call out?
ekarabeg 1 day ago [-]
Thank you! n8n has done really well over the last few years to simplify the workflow building process. I responded to this in a previous comment, but we believe the agent building process should be more open, meaning fewer abstractions between the interface and the model provider. We want our platform to be as lightweight as possible.
How this translates in the application is through features like allowing for custom tool calling with code execution, JSON schema input for response format, etc. I'd love to hear your thoughts using Sim Studio - let us know how we compare to the other workflow builders!
skeeter2020 1 day ago [-]
> Building reliable, multi-step agent systems with current frameworks often gets complicated fast.
In my experience so far it's not just complicated, but effectively impossible. I struggle to get a single agent to reliably & consistently use tools, and adding n+1 agents is an error multiplier.
waleedlatif1 1 day ago [-]
on our platform (for the providers that allow granular tool use control), you can actually 'force' certain tool calls and have the agent dynamically select others. this was a pain point we faced ourselves, and we were confused why other frameworks didn't allow granular tool use control when the provider supports it. try it out and let us know what you think
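Concretely, for OpenAI-style providers this maps to the `tool_choice` field on a chat-completion request. A minimal sketch of the payload (the tool name, schema, and model here are illustrative, not Sim Studio internals):

```typescript
// Sketch of an OpenAI-style request that forces one specific tool call.
// Passing "auto" as tool_choice lets the model decide; naming a function
// forces that call. Tool and model names are illustrative.
type Tool = {
  type: "function";
  function: { name: string; description: string; parameters: object };
};

function buildForcedToolRequest(tools: Tool[], forcedTool: string) {
  return {
    model: "gpt-4o",
    messages: [{ role: "user", content: "What's the weather in Paris?" }],
    tools,
    tool_choice: { type: "function" as const, function: { name: forcedTool } },
  };
}

const weatherTool: Tool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

const request = buildForcedToolRequest([weatherTool], "get_weather");
```

With `tool_choice: "auto"` instead, the model selects dynamically among the listed tools, which is the other half of the granular control described above.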
simple10 1 day ago [-]
Congrats on the launch! Looking forward to playing with it.
Do you mind elaborating on what differentiates Sim Studio from n8n, Flowise, RAGFlow, and other open-source, flow-based AI automation platforms?
ekarabeg 1 day ago [-]
Thanks! The main difference between Sim Studio and other open-source AI agent workflow builders is the level of abstraction used when creating agents.
For instance, n8n has a "memory" parameter, which is not an inherent parameter of LLMs. You can inject your agent's memories into the agent message history (or system prompt) - which is the most common scenario - but we give you control over that. We want to provide visibility, so everything that's exposed on the workflow canvas is exactly what's being executed in the background. Also, we think it's faster and more intuitive to get your workflow up and running in Sim Studio. I'd love your feedback, though! What do you think?
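As a sketch of what "injecting memories yourself" looks like without a memory abstraction (the helper and message shapes below are illustrative, not Sim Studio's actual API):

```typescript
// Sketch: explicit memory injection into the system prompt — the pattern
// described above, with no hidden "memory" parameter in between.
// Helper name and shapes are illustrative, not Sim Studio's API.
type Message = { role: "system" | "user" | "assistant"; content: string };

function withMemories(
  systemPrompt: string,
  memories: string[],
  history: Message[]
): Message[] {
  const memoryBlock = memories.length
    ? "\n\nKnown facts about the user:\n- " + memories.join("\n- ")
    : "";
  // Everything the model sees is assembled right here — no hidden state.
  return [{ role: "system", content: systemPrompt + memoryBlock }, ...history];
}

const messages = withMemories(
  "You are a helpful assistant.",
  ["prefers metric units", "lives in Lisbon"],
  [{ role: "user", content: "How hot is it outside?" }]
);
```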
all2 22 hours ago [-]
This sounds like execution/variable resolution scopes in programming languages. I wonder if there are ideas from programming languages you could pick up and use?
ekarabeg 22 hours ago [-]
Yes exactly! A lot of our platform was inspired by programming languages - for loops, for each loops, custom variables, and environment variables in settings. If you have any more concepts, we'd love to hear them!
ddon 1 day ago [-]
Would be amazing to be able to design the workflow with your builder, then export it to code (choosing the language) and copy-paste it into the project... just an idea.
waleedlatif1 1 day ago [-]
this is something we've been looking into. curious, would you rather export the code into an existing agentic framework like crewai/langgraph, or have it exported as raw code? also, would you prefer if the code was exported block-by-block or the entire workflow altogether?
handfuloflight 20 hours ago [-]
Raw code, block by block.
ekarabeg 18 hours ago [-]
Exactly
rahimnathwani 18 hours ago [-]
The UI looks lovely.
If I run Sim Studio with docker compose, how do I point it to the existing `ollama serve` instance running on the host?
I looked in settings (in the workspace UI) but don't see anywhere to configure the ollama endpoint.
waleedlatif1 18 hours ago [-]
thank you!
for ollama running on your host machine, you'll need to modify the docker configuration, since by default the app looks at http://localhost:11434, which points to localhost inside the container, not your host. you can either add an `extra_hosts` entry (`host.docker.internal:host-gateway`) to your docker compose service and set the env var `OLLAMA_HOST=http://host.docker.internal:11434`, or use host networking by setting `network_mode: host` on the service and bringing it up with `docker compose --profile local-cpu up -d --build`.
will add this to the readme and add some UI so it's easily configurable! let me know if you have any issues
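In compose terms, the first option might look roughly like this (the service name `simstudio` is a placeholder; match it to the actual compose file in the repo):

```yaml
# docker-compose.override.yml — let the container reach ollama on the host.
# "simstudio" is a placeholder service name; use the one from the repo.
services:
  simstudio:
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
    extra_hosts:
      # Maps host.docker.internal to the host gateway. Docker Desktop on
      # macOS/Windows provides this automatically; Linux needs the entry.
      - "host.docker.internal:host-gateway"
```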
Then I went to localhost:3000/w/ and added an Agent block. I expected ollama (or my ollama models) to show up in the drop-down, but I only see the hosted models. I even tried editing `sim/providers/ollama/index.ts`. Any ideas? (BTW I did NOT run `--profile local-cpu` because I didn't want to run ollama in a docker container, as it's already running on the host.)
waleedlatif1 17 hours ago [-]
i think it could be one of a few things:
- first, even though your ollama is running on the host, you still need to use the local profile flag to enable ollama models in the UI. you can do this by running the docker compose command with `local-cpu`
- also, make sure your host ollama is actually running and responding (`curl http://localhost:11434/api/tags` should show your models)
- if neither of the above work, you may need to restart the app container after changing the OLLAMA_HOST value
rahimnathwani 17 hours ago [-]
I see this in the docker compose logs, but those models don't show up in the model drop-down on the Agent block:
Whether or not I include `--profile local-cpu` in the docker compose command:
- the models from my local ollama show up in the logs, and
- the models don't show up in the model drop-down in the Agent block.
AFAICT the only impact of `--profile local-cpu` is starting a docker container with ollama running.
waleedlatif1 14 hours ago [-]
just pushed a hotfix that should resolve this for you! let me know if you are still having issues. we recently updated the csp and needed to explicitly add the ollama endpoint to the connect-src directive
rahimnathwani 4 hours ago [-]
I updated, deleted docker volumes, and retried, and I still see the same issue :(
artem_zin 1 day ago [-]
The YouTube demo looks intriguing. I'm self-hosting n8n for exactly this purpose, with a home LLM machine in a local k8s cluster (lol), but even at a glance I can tell your tool surpasses n8n's AI integrations and workflows.
A quick glance at GitHub suggests the GitHub package for the Docker image is missing, let me know if you need help with that — happy to contribute!
vseplet 1 day ago [-]
Funny enough, I solved a similar problem myself, but instead of using n8n I ended up writing my own solution. I even noticed this post thanks to automation and an LLM. Likewise, I'd be glad to help!
waleedlatif1 1 day ago [-]
thanks! that would be awesome to have, always welcome contributors :)
joshcsimmons 1 day ago [-]
Congrats on the launch, the tool looks phenomenal.
I’m conflicted because n8n does feel like the right level of abstraction, but the UI and dated JS runtime environment are horrible. I don’t really want to write my own memory functionality for my AI agents, but I’m wondering if it’s worth it just to have a nicer UI and a more modern JS env.
waleedlatif1 1 day ago [-]
for ease of use, we are exploring a way to add short-term and medium-term memory out of the box, in a way that doesn't require us to inject anything into the agent's context unless the user explicitly wants it. for longer-term memory, we support popular vector DBs like pinecone and integrations with mem0
gavmor 1 day ago [-]
I'm sure the complex logic and state management were not trivial to implement, but the link said GUI, so I wanted to see some screenshots; all I saw were two very dim, simple forms.
This space is REALLY struggling to graduate from Gradio-like design sensibilities.
That being said, I'm looking forward to playing with this, congrats on the launch!
rancar2 1 day ago [-]
The screenshots on the website are helpful, and it would be good to add them to the GitHub documentation as the OP mentioned.
waleedlatif1 1 day ago [-]
thanks for the feedback! adding some screenshots to the docs showcasing more complex workflows we've created so far
dr_kiszonka [-]
It looks very nice. What happens when a flow that takes 1 minute to complete is triggered by three different Slack messages from different users one second apart. Are flow executions queued or executed in parallel? Is it configurable?
waleedlatif1 22 hours ago [-]
thanks! this would actually trigger three separate workflow executions, and the messages would be processed in parallel, one per execution. by default, blocks within a workflow execute as soon as their dependencies are resolved, and the workflow executions themselves also run in parallel. if three messages are sent by the same user, they still run as separate workflow executions. curious to hear about your use case though, are you looking to process the messages in batches?
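The scheduling described above (each block starts as soon as its dependencies finish, and independent branches run concurrently) can be sketched with plain promises; this is just the idea, not Sim Studio's actual executor:

```typescript
// Sketch: dependency-driven parallel execution of workflow blocks.
// Each block starts the moment all of its dependencies resolve, so
// independent branches run concurrently. Illustrative only.
type Block = { id: string; deps: string[]; run: () => Promise<string> };

async function executeWorkflow(
  blocks: Block[]
): Promise<Record<string, string>> {
  const done = new Map<string, Promise<string>>();
  // Blocks must be listed after their dependencies in this simple version.
  for (const b of blocks) {
    done.set(
      b.id,
      Promise.all(b.deps.map((d) => done.get(d)!)).then(() => b.run())
    );
  }
  const results: Record<string, string> = {};
  for (const [id, p] of done) results[id] = await p;
  return results;
}

// Example: "b" and "c" both depend on "a" and run in parallel after it.
const order: string[] = [];
const block = (id: string, deps: string[]): Block => ({
  id,
  deps,
  run: async () => {
    order.push(id);
    return `result of ${id}`;
  },
});
```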
dr_kiszonka 18 hours ago [-]
Thanks for replying so quickly and in detail. I wanted to understand issues around scalability and integration into a larger system.
I have been looking for a good solution in this increasingly crowded space, and if I could offer a word of unsolicited advice, it would be to ensure the documentation is top-notch, truthful (some competitors mention non-existent features in their docs), and includes a relatively detailed roadmap.
Good luck with Sim Studio. I may try it out in a few weeks!
waleedlatif1 17 hours ago [-]
of course! we will be sure to keep the documentation up to date and accurate, document all of the intricacies like this one, and create a roadmap as well. excited for you to try it out, let us know what you think
brene 23 hours ago [-]
I checked it out and it’s quite polished for a workflow builder, but I struggled to get it to handle lists of content well. I saw that’s already an ongoing feature request, though.
ekarabeg 22 hours ago [-]
Thank you! Yes - we are adding variable resolution to the for each input. Let us know if there's anything else you'd like to see!
dmos62 13 hours ago [-]
What are your main competitors for agent orchestration with observability?
ekarabeg 13 hours ago [-]
The big names out there are n8n and Flowise. I'd encourage you to try them out and let us know what you think compared to Sim Studio! We have a logs page that shows execution duration, tool call duration, tokens used, cost based on the model you selected, etc.
dmos62 13 hours ago [-]
Thanks. It would be nice to find your expert comparison of Sim Studio to these other tools in your docs. The comparisons you already made in this thread are a great resource too, ofc.
gitroom 1 day ago [-]
This is pretty sick, I love having that much control without having to hack around a bunch of stuff.
waleedlatif1 1 day ago [-]
thank you and exactly! that's why we built it. trigger it many different ways, swap out tools, models, etc. Just focus on the things that matter for agent performance and ignore everything else
neil_s 1 day ago [-]
Much simpler UI than n8n, congrats on the launch!
waleedlatif1 1 day ago [-]
thanks!
djjose 17 hours ago [-]
Nearly 20 years later, it's fascinating to see Yahoo! Pipes for AI Agents.
pjpr 22 hours ago [-]
congrats on the launch!!
waleedlatif1 20 hours ago [-]
thanks!
deshraj 1 day ago [-]
Congratulations, Emir and Waleed! This is exactly the kind of OSS tooling I’ve been waiting for. I’ve spent countless hours wrestling with multi-step agent workflows hidden inside monolithic prompts, and every iteration felt like shooting in the dark. Having a drag-and-drop, executable graph with built-in branching, loops, and observability is a game-changer.
waleedlatif1 1 day ago [-]
thank you! check out the mem0 integration and let us know if you like the form factor. excited for you to check out the platform and let us know if it helps you wrangle those multi-step agent workflows.
frabonacci 1 day ago [-]
This could play very well with building a managed agentic system around computer-use for RPA. Great stuff!
waleedlatif1 1 day ago [-]
we have support for browser agents (browser-use and stagehand), but running this locally with computer use agents would change the game. will explore :)
nico 1 day ago [-]
Can the browser agents use my already running browser? It would be nice to automate some light workflows in platforms that require login, especially the ones that make it hard to use headless browsers
Right now my solution is to build extensions that I can manually start on my browser. But using extensions to gather and export data + maintaining them is a bit of a pain
waleedlatif1 1 day ago [-]
yeah! for stagehand, I actually stitched together a way for you to log in and authenticate on platforms without the LLMs ever seeing your login credentials. in the prompt, you can specify the username as %username% and provide the credentials right below, and then we use selectors to enter that value into the DOM and hand control back to the agent once the login is completed. you can also get structured output. afaik, stagehand themselves don't offer these three in their SDK, and there's no other way to log in without giving the LLM your credentials. it isn't the best, but it's the only place I've seen that combines secure login + agents + structured output
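The placeholder trick described here (the model only ever sees `%username%`; the real value is substituted locally just before it's typed into the DOM) can be sketched like this (names and implementation details are illustrative, not the actual code):

```typescript
// Sketch: credential placeholders. The LLM-facing prompt contains only
// %username% / %password%; the real values are resolved locally right
// before being typed into the DOM, so the model never sees them.
// The placeholder syntax mirrors the description above; everything else
// is illustrative.
const PLACEHOLDER = /%([a-zA-Z0-9_]+)%/g;

function resolveForDom(
  value: string,
  secrets: Record<string, string>
): string {
  return value.replace(PLACEHOLDER, (match, key: string) =>
    key in secrets ? secrets[key] : match
  );
}

// The agent asks to type "%username%" into the login field; we resolve
// it locally and only the resolved value touches the page.
const typed = resolveForDom("%username%", { username: "alice@example.com" });
const untouched = resolveForDom("%token%", { username: "alice@example.com" });
```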
nico 1 day ago [-]
Amazing. Can it also use multimodal local LLMs? For example, can it pass images to gemma3 running via ollama?
waleedlatif1 1 day ago [-]
although I haven't experimented with gemma 3 locally, we have instructions for ollama in the README: all you need to do is initialize the model and pull it when running sim studio. let me know how it goes!