Hacker News | arach's comments

> My personal favorite hooks though are these:

  "Stop": [
    {
      "hooks": [
        {
          "type": "command",
          "command": "afplay -v 0.40 /System/Library/Sounds/Morse.aiff"
        }
      ]
    }
  ],
  "Notification": [
    {
      "hooks": [
        {
          "type": "command",
          "command": "afplay -v 0.35 /System/Library/Sounds/Ping.aiff"
        }
      ]
    }
  ]
These are nice, but it's even nicer when Claude actually talks to you when it needs your attention

Easy to implement -> the hook can call out to ElevenLabs or OpenAI for speech, and it's a pretty delightful experience
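A sketch of how that wiring can look (the `~/bin/claude-speak.sh` path is hypothetical -- it would be a small wrapper that posts its argument to an ElevenLabs or OpenAI text-to-speech endpoint and plays the audio that comes back):

```json
"Stop": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "~/bin/claude-speak.sh 'Claude finished and needs your attention'"
      }
    ]
  }
]
```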


If you're on a Mac, it can speak the notification instead for more detail!

```
"hooks": {
  "Notification": [
    {
      "hooks": [
        {
          "type": "command",
          "command": "jq -r '.message' | say"
        }
      ]
    }
  ]
}
```


thanks man! I've actually gone a bit further with hooks and speech synthesis

https://hooked.arach.dev/ and https://speakeasy.arach.dev/ - it's a fun setup


Hah! I’ve done something similar. I created a bunch of pre-defined voice messages with Eleven Labs and then have a script that randomly calls them via the same hooks.

https://github.com/daveschumaker/homebrew-claude-sounds


nice, next step is to write a custom function that derives a useful but repeatable message from the hooks and put a little cache in front of the Eleven Labs message gen. For example: "In {Project}, Claude needs your permission to run {command}"

That way you can stay comfortably in the free plan with awesome voice messages

I have a different voice for my laptop compared to my main computer and can also pick per project
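The derive-and-cache idea can be sketched like this (the payload keys, the message template, and the `synthesize` callable are assumptions -- any TTS client can be plugged in where it's called):

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path.home() / ".claude" / "tts-cache"

def derive_message(hook_payload: dict) -> str:
    """Turn a hook payload into a short, repeatable message."""
    project = hook_payload.get("cwd", "a project").rsplit("/", 1)[-1]
    command = hook_payload.get("tool_input", {}).get("command", "a command")
    return f"In {project}, Claude needs your permission to run {command}"

def cached_audio_path(message: str) -> Path:
    """Stable cache key: the same message always maps to the same file."""
    key = hashlib.sha256(message.encode()).hexdigest()[:16]
    return CACHE_DIR / f"{key}.mp3"

def speak(hook_payload: dict, synthesize) -> Path:
    """synthesize(text, path) is any TTS call, e.g. an ElevenLabs client."""
    message = derive_message(hook_payload)
    path = cached_audio_path(message)
    if not path.exists():
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        synthesize(message, path)  # only hit the API on a cache miss
    return path
```

Because the message is templated rather than free-form, repeated permission prompts for the same command hit the cache instead of the API.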


Omg, this is so great, thank you so much! Gonna set up a bunch of A320 warning sounds to scare me in case I am idle for too long :)

Same as you, and I was glad to see the status page - I hit subscribe on updates

Claude user base believes in Sunday PM work sessions


As a solo bootstrapped founder, I take my sabbath sundown on Saturday to sundown on Sunday. Sunday evening therefore is generally the start of my work week.


Sunday PM builder, reporting in.


hah I ran out of tokens a bit before it hit I reckon.


same here, and I had just got started. Hm..


Sunday? What is that?


this probably levels the playing field for a while, and then dramatically raises the bar over a longer period of time

as better engineers and better designers get more leverage with lower nuisance in the form of meetings and other people, they will be able to build better software with a level of taste and sophistication that wouldn't make sense if you had to hand type everything


this feels like language from another era


I’m afraid I don’t follow. Would you mind elaborating a bit?


is yo taken?


Yo definitely occupies an existing part of my mindspace. https://en.wikipedia.org/wiki/Yo_(app)


But imagine a modernized version of this where you can leverage AI to say "yo" to people?



Yo MTV Raps beat them to the trademark office.


About a decade ago


that's a nice idea. i wonder if applying a bit of ai summarization / grouping logic could help present changes in logical sequences regardless of time or file proximity

would probably also make sense to add quick review actions in place - like ask a question to the gitlogue tool or the author during the playback


maybe we could just record the screen and work backwards from there. I don't even know why we still use git when we have near unlimited data at our disposal for things like this to create newer, better tools.


thanks for sharing, very interesting example


if it affects only a minority of accounts, why not figure out how to special case them without affecting everyone else is the primary question I would ask myself if I worked on this

the principle: let's protect against outliers without rocking the behavior of the majority, not at this stage of PMF and market discovery

i'd also project out just how much the compute would cost for the outlier cohort - are we talking $5M, $100M, $1B per year? And then what behaviors will simply be missed by putting these caps in now - is it worth missing out on success stories coming from elite and creative users?

I'm sure this debate was held internally but still...


Because the goal is to extract more money from the people who have significant usage. These users are the actual targets of the product. The idea that it’s a few bad actors is misdirection of blame to distract “power users”.

They undercharged for this product to collect usage data to build better coding agents in the future. It was a ploy for data.

Anecdotally, I use Claude Code with the $20/mo subscription. I just use it for personal projects, so I figured $20 was my limit on what I’d be willing to spend to play around with it. I historically hit my limits just a few times, after ~4hrs of usage (resets every 5hrs). They recently updated the system and I hit my limits consistently within an hour or two. I’m guessing this weekly limit will affect me.

I found a CLI tool (which I found in this thread today) that estimates I’m using ~$150/mo in usage if I paid through the API. Obviously this is very different from my payments. If this was a professional tool, maybe I’d pay, but not as a hobbyist.


What was the name of that CLI tool?


It’s called “ccusage”. Search for other comments on this story for more details.


> why not figure out how to special case them without affecting everyone else

I’m guessing that they did, and that that’s what this policy is.

If you’re talking about detecting account sharing/reselling, I’m guessing they have some heuristics, but they really don’t want the bad press from falsely accusing people of that stuff.


fair enough - DS probably ran through data and came up with 5% and some weekly cutoff as a good starting point until they have better measures in place

my point is that 5% is still a large cohort, and they happen to be your most excited/creative cohort. they might not all want to pay a surcharge yet while everyone is discovering the use cases / patterns / etc

having said that, entirely possible burn rate math and urgency requires this approach


They did have several outages last week, it would be good to find better plans for those huge users but I can also see them wanting to just stop the bleeding.


I've noticed the frequent perf issues and I'm on the 20x plan myself - good point that you'd want to stop the bleeding or bad actors to make sure the majority have a better experience


> why not figure out how to special case them without affecting everyone else is the primary question I would ask myself if I worked on this

The announcement says that using historical data less than 5% of users would even be impacted.

That seems kind of clear: The majority of users will never notice.


5% of a large number is a large number - this is why it's both a significant problem for them and why I'm thinking out loud about the downsides of discouraging good actors who happen to be power users.

that 5% is probably the most creative and excited cohort. obviously it's critical to not make the experience terrible for the 95% core, but i'd hate to lose even a minority of the power users who want to build incredible things on the platform

having said that, the team is elite, sure they are thinking about all angles of this issue


5% seems like a huge number of previously ecstatic customers who may suddenly be angry. Especially when it is trivial to identify the top 0.1% of users who are doing something insane.


> if it affects only a minority of accounts, why not figure out how to special case them without affecting everyone else

that's exactly what they have done - the minority of accounts that consume many standard deviations above the mean of resources will be limited, everyone else will be unaffected.


"You're absolutely right!" I misread the announcement - I thought everyone was moving primarily to a weekly window, but it seems the 5hr window is still in place and they're adding another level of granularity that DS teams will adjust to cut off mostly bad actors.

correct me if I'm wrong, it's not like we have visibility into the token limit logic, even on the 5hr window?


What do you think they should have done instead?


At a bare minimum there needs to be some way to understand how close you are to these limits. People shouldn’t be wondering if this is going to impact them or not.


It’s tricky without seeing the actual data. 5% of a massive user base can still be a huge number so I get that it’s hard to be surgical.

But those power users are often your most creative, most productive, and most likely to generate standout use cases or case studies. Unless they’re outright abusing the system, I’d lean toward designing for them, not against them.

if the concern is genuine abuse, that feels like something you handle with escalation protocols: flag unusual usage, notify users, and apply adaptive caps if needed. Blanket restrictions risk penalizing your most valuable contributors before you’ve even discovered what they might build
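That kind of escalation ladder could be sketched as follows (the thresholds and action names are made up for illustration; the "standard deviations above the mean" framing comes up elsewhere in this thread):

```python
from dataclasses import dataclass

@dataclass
class UsageStats:
    mean: float    # mean weekly token usage across the plan's users
    stddev: float  # standard deviation of weekly usage

def escalation_step(weekly_tokens: float, stats: UsageStats) -> str:
    """Map a user's weekly usage to a graduated action instead of a blanket cap.

    Illustrative thresholds: flag at +3 sigma, notify at +5 sigma,
    apply an adaptive cap only at +8 sigma.
    """
    z = (weekly_tokens - stats.mean) / stats.stddev
    if z >= 8:
        return "adaptive-cap"  # throttle, don't ban: likely abuse or resale
    if z >= 5:
        return "notify"        # tell the user they're an extreme outlier
    if z >= 3:
        return "flag"          # internal review, no user-facing change
    return "ok"
```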


5% of a massive user base could also be huge if 50% of users are on an enterprise plan and barely using it.


in other words, these limits will help introduce Enterprise (premium) plans?


I've been wondering the same and started exploring building a startup around this idea. My analysis led me to the conclusion that if AI gets even just 2 orders of magnitude better over the next two years, this will be "easy" and considered table stakes. Like connecting to the internet, syncing with cloud or using printer drivers

I don't think there will be a very big place for standalone next gen RPA pure plays. it makes sense that companies that are trying to deliver value would implement capabilities like this. Over time, I expect some conventions/specs will emerge. Either Apple/Google or Anthropic/OpenAI are likely to come up with an implementation that everyone aligns on

In other words, I agree


> if AI gets even just 2 orders of magnitude better over the next two years

You realize this means '100 times better', right?


yes, thanks for pointing out the assumption here. I'm not sure how to quantify AI improvements and tbh not really up to speed on quantifiable rate of improvement from 4 to 4o to o1

100 times better seems to me in line with the bet that's justifying $250B per annum in Cap Ex (just among hyperscalers) but curious how you might project a few years out?

Having said that, my use of 100x better here applies to 100x more effective at navigating use cases not in training set, for example, as opposed to doing things that are 100x more awesome or doing them 100x more efficiently (though seemingly costs, context window and token per unit of electricity seem to continue to improve quickly)


I would think that such an increase in AI capability would basically be AGI...

Just to give a few comparisons, the following things are two orders of magnitude apart:

1. The force felt by a mosquito landing on your arm and getting punched by Mike Tyson in his prime

2. A firecracker exploding and a stick of dynamite exploding

3. The heat from a candle and the heat from a blowtorch

4. The sound from a whisper and the sound from a jet engine


Microsoft will be making autonomous AI agents available next month for Copilot Studio and Dynamics 365 users, with workflow authoring capabilities branded as Copilot Studio

Does anyone know if this is repackaging prior tech like Power Automate or is this brand new?

I'm assuming it's using AutoGen under the hood but would be interested to learn more about architecture

