Hacker News | dougbarrett's comments

Hi all - this is the culmination of a lot of work I've done over the past few years trying to build different code generators and DSLs, and it's something I'm pretty proud of (though yes, I did lean heavily on Claude Code to see this vision through).

Some of the key features here are:

- API generation - transparent calling of APIs from server or client side. It's all Go code, and `gux gen` generates the HTTP middleware for communicating with the backend
- Model scaffolding - generate admin screens with complex data relationships from a JSON configuration file
- Claude Code ready - comes baked in with skills and an initial CLAUDE.md file to get started
- Tailwind baked in - keeps your CSS file small
- Docker ready - build the entire stack into a single binary that contains the wasm file and all dependencies
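For a rough sense of what model scaffolding via a JSON configuration could look like, here's a purely illustrative sketch - the field names and structure below are hypothetical, not taken from the gux docs:

```json
{
  "model": "Post",
  "fields": {
    "title": "string",
    "body": "text",
    "published_at": "time"
  },
  "relations": {
    "author": { "type": "belongs_to", "model": "User" }
  },
  "admin": {
    "list": ["title", "author", "published_at"]
  }
}
```

See https://guxcore.dev for the actual schema.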

There's some more info on https://guxcore.dev as well.

I'd appreciate any feedback, good or bad. I know this is a lot of "magic" for Go, which is generally frowned upon, but it's heavily inspired by tools like nicegui and streamlit - I wanted to build something like that for Go.


I think they meant the 'Plan with Opus' model. shift+tab still works for me, and the VS Code extension still lets you plan too, but the UI is _so_ slow with updates.


I wonder if it's a difference between SSE and HTTP streaming support? I've been working on a tool for devs to create their own MCP tools and built out support for both protocols, because supporting both was easier than explaining why it wasn't working for one LLM client or another.


Oh, that might be it!

Ours doesn’t support SSE.


Mine does support SSE (https://github.com/mickael-kerjean/filestash), but it fails before getting there, with the log looking like this:

    2025/09/11 01:16:13 HTTP 200 GET    0.1ms /.well-known/oauth-authorization-server
    2025/09/11 01:16:13 HTTP 200 GET    2.5ms /
    2025/09/11 01:16:14 HTTP 404 GET    0.2ms /favicon.svg
    2025/09/11 01:16:14 HTTP 404 GET    0.2ms /favicon.png
    2025/09/11 01:16:14 HTTP 200 GET    0.2ms /favicon.ico
    2025/09/11 01:16:14 HTTP 200 GET    0.1ms /.well-known/oauth-authorization-server
    2025/09/11 01:16:15 HTTP 201 POST    0.3ms /mcp/register
    2025/09/11 01:16:27 HTTP 200 GET    1.4ms /
with the frontend showing "Error creating connector" and the network call returning:

    { "detail": "1 validation error for RegisterOAuthClientResponse\n Input should be a valid dictionary or instance of RegisterOAuthClientResponse [type=model_type, input_value='{\"client_id\":\"ChatGPT.Dd...client_secret_basic\"}\\n', input_type=str]\n For further information visit https://errors.pydantic.dev/2.11/v/model_type" }


Batching is a pattern I’ve had to build manually in the past to push large amounts of analytics data to a database. I’d push individual events to be logged, map-reduce them into batches, and then run insert-on-duplicate-update queries against the database; otherwise the volume of incoming events was enough to saturate the connection pool and make the app inoperable.

Even optimizing so that, once an app instance knew it had already run the insert-on-duplicate-update for a specific unique index (by tracking that in a hash map), it would only run updates from there on out to increment that event’s occurrence count - that alone yielded significant performance gains.
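The core of that pattern can be sketched in Go: accumulate counts per unique key in memory, then flush them as one upsert per key instead of one write per event. This is a minimal illustration, not the original implementation - the type names and the SQL shape are assumptions (MySQL-style `ON DUPLICATE KEY UPDATE`):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// eventBatcher collapses many incoming events into one row per
// unique key, so a flood of events becomes a handful of upserts.
type eventBatcher struct {
	mu     sync.Mutex
	counts map[string]int // unique index -> occurrence count
}

func newEventBatcher() *eventBatcher {
	return &eventBatcher{counts: make(map[string]int)}
}

// Record logs one event occurrence: a cheap map increment
// instead of a per-event database write.
func (b *eventBatcher) Record(key string) {
	b.mu.Lock()
	b.counts[key]++
	b.mu.Unlock()
}

// Flush drains the accumulated counts and returns one
// insert-on-duplicate-update statement per unique key, which a
// background worker would execute against the database in a batch.
func (b *eventBatcher) Flush() []string {
	b.mu.Lock()
	drained := b.counts
	b.counts = make(map[string]int)
	b.mu.Unlock()

	stmts := make([]string, 0, len(drained))
	for k, n := range drained {
		stmts = append(stmts, fmt.Sprintf(
			"INSERT INTO events (key, count) VALUES (%q, %d) "+
				"ON DUPLICATE KEY UPDATE count = count + %d", k, n, n))
	}
	sort.Strings(stmts) // deterministic order for the batch
	return stmts
}

func main() {
	b := newEventBatcher()
	for i := 0; i < 1000; i++ {
		b.Record("page_view")
	}
	b.Record("signup")
	for _, s := range b.Flush() {
		fmt.Println(s)
	}
}
```

A background goroutine calling `Flush` on a ticker would complete the picture; the hash map absorbs the event volume so the connection pool only ever sees the batched upserts.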


I wonder why they didn’t just continue this under the Alexa product?



I've seen this argument come up frequently and I find this is just a weird hill to die on. I can see where the author is coming from as far as taking you out of the experience, but QR codes and NFC absolutely have a place in the restaurant ordering experience and I'm here for it.


Plus, if you’re writing the front end in React or something else, that’s essentially the “other endpoint”. You’re just moving it back closer to the origin.


This is definitely possible, and it's definitely already happening. It may be hard for some people to even tell the difference between an AI-generated script/audio and one that's 100% human curated.


AWS Elastic Beanstalk allows you to run on very cheap instances - even cheaper if you commit to a plan with a term.

It’s not a 1:1 experience, but I’ve enjoyed it as an alternative to Heroku for sure. Alternatively, you could spin up a server and install dokku, which gets pretty close to the same shipping experience but still requires some maintenance and hand-holding.


I switched from heroku to dokku (and DigitalOcean) last month. Overall: easy to adapt from heroku since so many of the concepts (and commands) are the same.

I tried to get too fancy and ran two web services on the same app (since the DO droplet was giving me more CPU and 4x the RAM for half the price), but they seemed to battle each other for control of the database and/or exceed the droplet's resources. So I chilled out, went back to one web service, and set CPU and RAM resource limits. And... it's been smooth since then! Much faster than Heroku, too.

Price-wise: we were on the $50/mo dyno plus $9/mo postgresql, and with DO we beefed up the managed database specs, and now get 4x the RAM on the droplet, and the total cost is the same as heroku.

We do still have a free tier staging server on heroku that we only use a couple times a year.

Oh shoot, I just remembered that I use staticman for processing comments on a couple jekyll blogs, and those use free heroku tiers. Argh!


I've been using echo for years. It strikes a great balance of adding helpful utilities while not imposing bad patterns.

