cschlaepfer's comments | Hacker News

Thanks for the question! We only use the hosted browser for running the automations remotely (via API). In the IDE, we use a local Chrome browser, where we spin up an anonymous profile for isolation.
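To give a rough picture of what that isolation looks like (a sketch using Playwright's chromium launcher, not our exact implementation):

    // Sketch only: launch the locally installed Chrome with a throwaway
    // profile so an automation never touches your personal browser profile.
    import { chromium } from 'playwright';
    import { mkdtemp } from 'node:fs/promises';
    import { tmpdir } from 'node:os';
    import { join } from 'node:path';

    async function launchIsolatedChrome() {
      // Fresh, anonymous profile directory: no cookies, history, or extensions.
      const userDataDir = await mkdtemp(join(tmpdir(), 'ide-profile-'));
      return chromium.launchPersistentContext(userDataDir, {
        channel: 'chrome', // use the local Chrome install, not bundled Chromium
        headless: false,
      });
    }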


Thanks!

We're constantly thinking about ways we can improve the dev experience and integrations story around deploying these scripts. Right now we support API executions, and we are adding webhooks soon. We think this will unblock the earliest adopters, and as we learn more about popular use cases/workflows, we'll look to prioritize first-class integrations where it makes sense.
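To give a sense of the shape (the endpoint, payload, and auth header here are illustrative placeholders, not our documented API), an API execution is basically a single authenticated POST:

    // Illustrative placeholder: endpoint, payload shape, and auth header
    // are hypothetical, not the documented API.
    async function runNotebook(notebookId: string, apiKey: string) {
      const res = await fetch(`https://api.example.com/v1/notebooks/${notebookId}/runs`, {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ inputs: {} }),
      });
      if (!res.ok) throw new Error(`Run failed with status ${res.status}`);
      return res.json(); // e.g. a run id you can poll for status
    }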


BrowserBook allows users to create 'auth profiles' which can be used in notebooks for authentication. These profiles currently support username/password and 2FA via TOTP (and we recommend provisioning a service account for your automations).
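For the TOTP piece, what happens at run time is conceptually the standard RFC 6238 flow (sketch using the otplib package; our actual auth profile handling differs in the details):

    // Sketch of standard TOTP code generation (otplib package); the auth
    // profile feature wraps this kind of flow, details differ.
    import { authenticator } from 'otplib';

    function currentTotpCode(sharedSecret: string): string {
      // sharedSecret is the base32 secret issued when you enroll the
      // service account in 2FA.
      return authenticator.generate(sharedSecret);
    }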

For captchas, we use Kernel's stealth mode, which includes a captcha solver.

Re: CI integration, today we support API-based execution, but if you have a specific CI pipeline or set of tools you'd like to see support for, let us know and we can look into it!


Thanks, we appreciate it!

Yes, you can use BrowserBook to write e2e test automations, but we don't currently include Playwright assertions in the runtime - we excluded these since they're geared toward a specific use case, and we wanted to build more generally. Let us know if you think we should include them though; we're always looking for feedback.
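You can still make a script fail the way a test would by doing the checks yourself (rough sketch; `page` is a standard Playwright Page, and the selectors/URLs are made up):

    // Sketch: an e2e-style check without Playwright's expect() helpers.
    // Selectors and URLs are made up for illustration.
    import type { Page } from 'playwright';

    async function checkLoginSucceeded(page: Page) {
      await page.goto('https://example.com/login');
      await page.fill('#email', 'service-account@example.com');
      await page.fill('#password', process.env.PASSWORD ?? '');
      await page.click('button[type="submit"]');

      // waitForSelector throws if the element never appears, which fails the run.
      await page.waitForSelector('[data-testid="welcome"]', { timeout: 10_000 });
      if (!page.url().includes('/dashboard')) {
        throw new Error(`Unexpected post-login URL: ${page.url()}`);
      }
    }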

> For scraping, how do you handle Cloudflare and Captchas?

Cloudflare turnstiles/captchas tend to be less of an issue in the inline browser because it’s just a local Chrome instance, avoiding the usual bot-detection flags from headless or cloud browsers (datacenter IPs, user-agent quirks, etc.). For hosted browsers, we use Kernel's stealth mode to similar effect.

> Do you respect robots.txt instructions of websites?

We leave this up to the developer creating the automations.
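If you do want to honor it, it's easy to bolt a pre-flight check onto a script yourself (sketch using the robots-parser npm package; the runtime doesn't do this for you):

    // Sketch: opt-in robots.txt check an automation author can add themselves
    // (robots-parser npm package; nothing in the runtime enforces this).
    import robotsParser from 'robots-parser';

    async function isAllowedByRobots(targetUrl: string, userAgent = 'MyAutomation/1.0') {
      const robotsUrl = new URL('/robots.txt', targetUrl).toString();
      const res = await fetch(robotsUrl);
      if (!res.ok) return true; // no robots.txt to honor
      const robots = robotsParser(robotsUrl, await res.text());
      return robots.isAllowed(targetUrl, userAgent) ?? true;
    }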


Thank you!

> Why the subscription model though?

The subscription model is primarily to cover the costs of creating and running automations at scale (i.e., LLM code gen and browser uptime) and to build a sustainable business around those features. We included the free tier to give users access to the IDE, but we're committed to adding value beyond just the IDE, and subscriptions support that.

> Is data being sent back to your servers to enable some of the functionality?

Yes - we save all notebooks in our database. Since we're working to build a lot of value-add features for hosted executions, having notebooks saved online works in service of that.

That said, we're now thinking about the local-only / no sign-up use case as well. We've gotten a lot of feedback about this, so it's something we're taking seriously now that the core functionality is in place.

I'll also add that we're HIPAA-compliant for healthcare use cases.

Really appreciate the questions!


Thanks! Yep, Linux is coming soon - now that we have the first version of the IDE out the door, we're going to get cross-platform going shortly.


What UI framework are you using to build the app?


It's a TypeScript/React application, bundled with Electron.
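The shell itself is pretty thin; a minimal version of that setup looks something like this (illustrative sketch, not our actual main process - file names and ports are placeholders):

    // Minimal sketch of an Electron main process loading a React bundle
    // (illustrative only; file names and ports are placeholders).
    import { app, BrowserWindow } from 'electron';
    import { join } from 'node:path';

    app.whenReady().then(() => {
      const win = new BrowserWindow({ width: 1280, height: 800 });
      if (process.env.NODE_ENV === 'development') {
        win.loadURL('http://localhost:5173'); // dev server
      } else {
        win.loadFile(join(__dirname, 'index.html')); // production bundle
      }
    });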


Is Windows support going to be included too, since it's Electron?


thank you!


Thanks, and great question - we think about this a lot and think there are a couple of things here.

First, as models get better, our agent's ability to navigate a website and generate accurate automation scripts will improve, giving us the ability to more confidently perform multi-step generations and get better at one-shotting automations.

We expect browser agents will improve as well, which I think is more along the lines of what you're asking. At scale, we still think scripts will be better for their cost, performance, and debuggability aspects - but there are places where we think browser agents could potentially fit as an add-on to deterministic workflows (e.g., handling inconsistent elements like pop-ups or modals). That said, if we do end up introducing a browser agent in the execution runtime, we want to be very opinionated about how it can be used, since our product is primarily focused on deterministic scripting.
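To make the pop-up example concrete, the deterministic version of that today is basically a list of known dismissals you run before each critical step (sketch; selectors are made up):

    // Sketch: deterministic handling of flaky pop-ups/modals -- try a list of
    // known dismiss selectors before a critical step. Selectors are made up.
    import type { Page } from 'playwright';

    const DISMISS_SELECTORS = [
      '[aria-label="Close"]',
      'button:has-text("No thanks")',
      '#cookie-banner button.accept',
    ];

    async function dismissKnownPopups(page: Page) {
      for (const selector of DISMISS_SELECTORS) {
        const el = page.locator(selector).first();
        if (await el.isVisible().catch(() => false)) {
          await el.click();
        }
      }
    }

A browser agent add-on would essentially replace that hardcoded list with something that can recognize and dismiss modals it hasn't seen before.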


> At scale, we still think scripts will be better for their cost, performance, and debuggability aspects

This actually makes a ton of sense to me in a lot of LLM contexts (e.g., we're starting to prefer having LLMs write one-off scripts for API calls rather than just pointing them at problems and having them try directly).

Thanks!


Appreciate it!

Yes exactly - today we send it a simplified version of the DOM, but we're currently building an agent which will be able to discover the relevant DOM elements.
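Conceptually, the simplification step boils the page down to the interactive elements and the attributes that are useful for generating selectors (rough sketch; the real pipeline does more than this):

    // Conceptual sketch of DOM simplification: keep only interactive elements
    // plus the attributes useful for selector generation. The real pipeline
    // does more than this.
    import type { Page } from 'playwright';

    async function simplifiedDom(page: Page) {
      return page.evaluate(() =>
        Array.from(
          document.querySelectorAll('a, button, input, select, textarea, [role]')
        ).map((el) => ({
          tag: el.tagName.toLowerCase(),
          id: el.id || undefined,
          role: el.getAttribute('role') || undefined,
          name: el.getAttribute('name') || undefined,
          text: (el.textContent || '').trim().slice(0, 80),
        }))
      );
    }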

