Thanks for surfacing this. If you click the "tools" button to the left of "compile", you'll see a list of comments, and you can resolve them from there. We'll keep improving and fixing things that might be rough around the edges.
Eh. This is yet another "I tried getting AI to do a thing, it didn't do it the way I wanted, therefore I'm convinced that's just how it is... here's a blog post about it" article.
"Claude tries to write React, and fails"... how many times? what's the rate of failure? What have you tried to guide it to perform better.
These articles are similar to HN 15 years ago, when people wrote "Node.js is slow and bad" posts.
It's crazy how the etymology of "Kentucky" cannot be traced with certainty. It goes to show how much of Native American culture and language is now untraceable, and how fragile our record-keeping is, even in "modern times".
The etymology I’ve heard isn’t even listed in the article.
One theory traces “Kentucky” to early forms like Cantucky or Cane-tucky, referring to the region’s vast brakes of Kentucky River cane (North America’s only native bamboo), which early inhabitants associated with fertile, game-rich land.
I always wondered how something like the AWS or GCP Cloud Console admin UIs gets shipped. How could someone deliver products like these and be satisfied, rewarded, promoted, etc.? How can Google leadership look at this stuff and be like... "yup, people love this"?
In defense of the AWS console, it is derivative of the AWS APIs. As such, it's really just a convenience layer that only occasionally strings two or more AWS APIs together into something that can be considered a distinct console feature.
That is wholly unlike the problem here, where the console and the API somehow behave completely differently.
Along with the public APIs, an AWS service can also have console APIs that exist specifically for the console. These do not have the same constraints as the public APIs.
Object.defineProperty on every request to set params / query / body is probably slower than regular property assignment.
Also, parsing the body on every request, with no way to opt out, could hurt performance (if performance is the primary goal, that is).
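To make the trade-off concrete, here's a rough sketch (made-up names, not this framework's actual internals) contrasting eager Object.defineProperty decoration plus up-front body parsing with plain assignment plus a lazy, memoized body parse:

    // Hypothetical sketch only; illustrative names, not the framework's real code.
    type Ctx = { req: Request; params?: Record<string, string>; body?: unknown };

    // Eager: defineProperty per request, body parsed up front.
    async function decorateEager(req: Request): Promise<Ctx> {
      const ctx = { req } as Ctx;
      Object.defineProperty(ctx, "params", { value: {}, writable: true });
      // This parse costs time even for handlers that never read the body.
      Object.defineProperty(ctx, "body", {
        value: await req.json().catch(() => null),
      });
      return ctx;
    }

    // Lazy: plain assignment, body parsed only when a handler asks for it.
    function decorateLazy(req: Request): Ctx & { json(): Promise<unknown> } {
      let cached: Promise<unknown> | undefined;
      return {
        req,
        params: {},
        // Handler opts in to the parse cost; the result is memoized.
        json() {
          return (cached ??= req.json().catch(() => null));
        },
      };
    }

The plain-object version also tends to be friendlier to engine optimizations, since every request context keeps the same object shape.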
I wonder if the trie-based routing is actually faster than Elysia with precompile mode enabled?
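For context, a trie router keys each path segment into nested maps, so a lookup costs one map hit per segment rather than a scan over every registered route. A minimal static-path sketch (not Elysia's or this project's actual implementation):

    // Illustrative trie router; static segments only, no params or wildcards.
    type Handler = () => Response;

    interface TrieNode {
      children: Map<string, TrieNode>;
      handler?: Handler;
    }

    function createNode(): TrieNode {
      return { children: new Map() };
    }

    function insert(root: TrieNode, path: string, handler: Handler): void {
      let node = root;
      for (const segment of path.split("/").filter(Boolean)) {
        let next = node.children.get(segment);
        if (!next) {
          next = createNode();
          node.children.set(segment, next);
        }
        node = next;
      }
      node.handler = handler;
    }

    function lookup(root: TrieNode, path: string): Handler | undefined {
      let node: TrieNode | undefined = root;
      for (const segment of path.split("/").filter(Boolean)) {
        node = node.children.get(segment);
        if (!node) return undefined;
      }
      return node.handler;
    }

    // Usage: one map lookup per path segment instead of scanning a route list.
    const root = createNode();
    insert(root, "/users/profile", () => new Response("profile"));
    lookup(root, "/users/profile")?.();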
Overall, this is a nice wrapper on top of bun.serve, structured really well. Code is easy to read and understand. All the necessary little things taken care of.
The dev experience of maintaining this is probably a better selling point than performance.
In my opinion, attempting to perform live dictation is a solution that is looking for a problem. For example, the way I'm writing this comment is: I hold down a keyboard shortcut on my keyboard, and then I just say stuff. And I can say a really long thing. I don't need to see what it's typing out. I don't need to stream the speech-to-text transcription. When the full thing is ingested, I can then release my keys, and within a second it's going to just paste the entire thing into this comment box. And also, technical terms are going to be just fine with Whisper. For example, Here's a JSON file.
(this was transcribed using whisper.cpp with no edits. took less than a second on a 5090)
Yeah, Whisper has more features and is awesome if you have the hardware to run the big models that are accurate enough. The constraint here is the best CPU-only implementation. By no means am I wedded to or affiliated with Parakeet; it's just the best/fastest option within the CPU-only hardware space.
I've done something similar for Linux and Mac. I originally used Whisper and then switched to Parakeet. I much prefer Whisper after playing with both. Maybe I'm not configuring Parakeet correctly, but the transcription that comes out of Whisper is usually pretty much spot on. It automatically removes all the "ums" and all the "ahs" and it's just way more natural, in my opinion. I'm using whisper.cpp with CUDA acceleration. This whole comment is just written with me dictating to Whisper, and it's probably going to automatically add quotes correctly, there's going to be no ums, there's going to be no ahs, and everything's just going to be great.
If you don't mind a closed-source paid app, I can recommend MacWhisper. You can select different Whisper and Parakeet models for dictation and transcription. My favorite feature is that it allows sending the transcription output to an LLM for clean-up, or basically anything you want, e.g. professional polish, translation, writing poems, etc.
I have enough RAM on my Mac that I can run smaller LLMs locally, so for me the whole thing stays local.
Honestly, I think if you track the performance of each over time (since these get regenerated once in a while), you end up with a very useful and cohesive benchmark.
EDIT: Fixed :)