At Buster, we are building AI agents for data & analytics. You can read more about the problems we’re solving (and how we’re solving them) here.

If you join us, you’ll be an early team member, helping shape:

  1. Our company culture
  2. Our engineering practices
  3. The people we hire
  4. The direction & focus of our products

Open Roles

To apply, fill out this form & we’ll be in touch!

Frequently Asked Questions

  1. How do I get in touch/apply?

    To apply, fill out this form & we’ll be in touch!

  2. Who else works at Buster?

    Blake, Dallin, Nate, Jacob

  3. Where is Buster located?

    We work in-person at our office in Pleasant Grove, UT.

  4. Are you okay with remote employment?

    No, we only work in-person. If you don’t live in Utah, we will cover relocation costs.

  5. Have you raised any money?

    Yes. We recently raised $2.4m from investors like General Catalyst, Y Combinator, General Advance, Kulveer Taggar, and others.

  6. Do you provide any healthcare benefits?

    Yes - we cover 100% of medical, dental, and vision for employees and dependents.

  7. I saw that you are open-source, how do you make money?

    Buster is completely open-source. We love open-source and see it as an advantage for our business. Our code is public, readable, and free to pull down, fork, host, etc. That said, it’s not easy to host and run a full-stack agent platform with hundreds of running agents. It takes a significant amount of resources to do so, and most data teams don’t want to.

    Currently, all of our customers use Buster as a fully managed (cloud-hosted) solution. We charge them on a subscription basis like any other SaaS platform. We typically sell to companies with 100-1000 employees and an established data team.

  8. Do you plan to train your own models?

    Currently, we only use SOTA models from Anthropic, Google, and OpenAI. We have a robust suite of proprietary evals (not open-sourced) that we constantly use to optimize Buster and identify which models are best for each part of the platform.

    In the future, we plan on training our own models to optimize for cost-savings and reduced latency.