
That’s a fair point, but a combination of “fake it ‘til you make it” together with extracting massive “compensation” before you actually make it amounts to pretty much the same thing.

How is it the 'same thing'? Especially if he gets his comp largely in the same supposedly overvalued stock?

How is that different from any other CEO, especially of publicly traded companies?

CEOs are constantly making claims and promises that are aspirational at best; their compensation isn't withheld until all of those promises are fulfilled.


How about "the documents were clarified" or "their contents were revealed"? Maybe "formatted for reading on your device"?

Talk of how it might be interpreted is rather beside the point when the administration appears to be implementing a particular interpretation and SCOTUS appears to be fine with that, whether or not it is a selective one. Those are the concrete concerns of which you speak.

It is helpful to have the document publicly available, but only if enough people heed its implicit warning.


I would argue the concrete concerns we should have is the fact that we seem to be committing economic suicide, which will have decades of economic and sociopolitical fallout. If you think people have an appetite for fascism today, wait until you see what decades of deflating economies will do.

Personally, I do not see the distinction here between the two sentences, but your last paragraph got me thinking: should we be using parenthetical, self-interruptive clauses? When we are speaking extemporaneously, we may need them, but when writing, could we rearrange things so they are not needed?

One reason I came up with for doing so is to acknowledge a caveat or answer a question that the author anticipates will enter a typical reader's mind at that point in the narrative.

If that is the case, then it seems to me that when an author does this, they are making use of their theory of mind, anticipating what the reader may be thinking as they read, and acknowledging that it will likely differ from what they, as the author, are thinking of (and know about the topic) at that point.

If this makes any sense, then we might ask whether at least a rudimentary theory of mind is needed to use parenthetical clauses effectively, or whether it can be faked through the rote application of empirically learned style rules. LLMs have shown they can do the latter, but excessive use might be signalling a lack of understanding.


I will just float this idea for consideration, as I cannot judge how plausible it is: Is it possible that LLMs or their successors will soon be able to make use of formal methods more effectively than humans? I don't think I am the only person surprised by how well they do at informal programming. (On the other hand, there is a dearth of training material for formal methods. Maybe a GAN approach would help here?)


LLMs, at least as they exist today, could be trained to be very good at producing text that looks like formal specifications, but there would be no way to guarantee that what they produced was a) correctly formed, or b) correctly specifying what was actually asked of them.

You might be able to do better on (a) by requiring the LLM to feed its output through a solver and forcing it to redo anything that fails (this is where my knowledge of current LLMs kinda falls down), but (b) is still a fundamentally hard problem.
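
Roughly this kind of loop, sketched in Python (llm_propose and solver_check are stand-ins for whatever LLM API and solver you actually have, not real library calls):

    # Sketch of the generate-and-check loop described above. The two helpers are
    # placeholders; nothing here is a real API.

    def llm_propose(requirements: str, feedback: str) -> str:
        # Stand-in: call your LLM with the requirements plus any error
        # feedback from the previous failed attempt.
        raise NotImplementedError

    def solver_check(candidate: str) -> tuple[bool, str]:
        # Stand-in: run the candidate through a solver/checker (Lean, Z3, TLC, ...)
        # and return (passed, error_messages).
        raise NotImplementedError

    def synthesize(requirements: str, max_attempts: int = 10):
        feedback = ""
        for _ in range(max_attempts):
            candidate = llm_propose(requirements, feedback)  # LLM drafts a spec/proof/program
            ok, errors = solver_check(candidate)             # solver tries to verify it
            if ok:
                return candidate                             # well-formed and machine-checked
            feedback = errors                                # feed failures back and retry
        return None                                          # give up; a human takes over

Even if that loop converges, it only addresses (a); whether the result specifies what you actually wanted (b) still needs a human.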


I don't see why two LLMs together (or one, alternating between tasks) could not separately develop a spec and an implementation. The human input could be a set of abstract requirements, and both systems interact and cross-check each other to meet the abstract requirements, perhaps with some code/spec reviews by humans. I really don't see it ever working without one or more humans in the loop, if only to confirm that what is being done is actually what the human(s) intended. The humans would ideally be able to say as little as possible to get what they want. Unless/until we get powerful AGI, we will need to have technical human reviewers.


> I really don't see it ever working without one or more humans in the loop, if only to confirm that what is being done is actually what the human(s) intended.

That is precisely my point.


Some already use probabilistic methods to automatically produce proofs for specifications and code they wrote manually. It’s one of the few actually useful potential use cases for LLMs.


Use them?

Absolutely. See DeepSeek-Prover, for instance. As far as I understand, it's basically a neurosymbolic system, which uses an LLM to write down proofs/code, then Lean to verify them, looping until it finds a proof/implementation that matches the specification, or it gives up.

Create them? Much harder.


I like the way you made me think! I had not thought about it until now, but I take your point.

While thinking about it, this phrase occurred to me: “silver bullets are a defense against zombies.” It is not the same phrase structure as the original, but it also has the double-negative vibe, yet it feels more reasonable to me than “…are a defense for zombies”, which to me suggests that zombies would employ them against their enemies.

I think the resolution here is that defense is inherently against something, so these phrases are not unequivocally double negatives - though I also agree with nine_k’s point about a better way to say it.

EDIT: Duh! The fact that defense is inherently against something is precisely what makes these phrases look like double negatives! The resolution must be something else - maybe agreement in mood or sentiment…


Fair enough, and I agree that regulation is often needed, but we cannot, in general, expect it to have only the consequences explicitly sought.


Sometimes the cost is that there is no viable product.


And that is perfectly fine!


Yes, it's perfectly fine, that's my point. They aren't spiting the EU, they are just responding to the legislation by not entering that market. If EU voters are unhappy, they can take it up with their government.


Yes, indeed! But I don’t think Daenney gets it.


That's not what a lot of proponents of these laws argue. They often state that if a company makes something unavailable in the EU because of one of these laws, the company is throwing a fit or being spiteful.


And it's also probably true, especially for $MEGACORP. But in general the point of this kind of law, as others have mentioned, is to make companies internalize the full cost of their product's impact on the environment. It is GOOD if it drives the price up. At some point people will find it too expensive and they will simply not buy it, because it's not worth the cost.


There seems to be a tacit premise here that anything an LLM can do is meaningless as an exercise for a student. That is simply not true, and if it were, we would likely soon run out of pedagogically ‘meaningful’ (by this standard) tasks. (The author offers no practical suggestions for how we could avoid this situation.)


Exactly, I just explained this here: https://news.ycombinator.com/item?id=46226251


There is a time-honored, straightforward way to deal with the last two percent problem, which is to overbuild by a couple of percent or so.


That’s not how the maths works unfortunately.

Basically, you end up having to overbuild to crazy levels, or build insane amounts of battery storage, which only gets used a few days a year.
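
A toy calculation shows why (all numbers invented purely for illustration, not taken from any real grid):

    # A grid with 1 GW average demand, 8760 GWh per year.
    avg_demand_gw = 1.0
    annual_energy_gwh = avg_demand_gw * 8760

    # Suppose the "last 2%" is not spread evenly but concentrated in a single
    # 10-day lull, during which renewable output falls to 25% of demand.
    lull_hours = 10 * 24
    lull_output_fraction = 0.25

    shortfall_gwh = avg_demand_gw * (1 - lull_output_fraction) * lull_hours
    print(shortfall_gwh)                             # 180 GWh
    print(shortfall_gwh / annual_energy_gwh)         # ~2% of annual energy

    # Option A: overbuild the fleet until it still covers demand during the lull.
    print(1 / lull_output_fraction)                  # 4x the fleet, not 1.02x

    # Option B: enough storage to ride through the lull, used a few days a year.
    print(shortfall_gwh)                             # 180 GWh of storage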


That is right (if rather exaggerated, and I will note that it was you who originally picked the figure of two percent), and in practice, we accept a certain risk that we will not always have all the capacity we want, even though (or because) we cannot precisely predict how big or often these events will be. There is no particular reason to think this specific case is any different.


Why can't we predict how big or how often those events would be? We have clear understandings of the probability distributions for all kinds of weather scenarios - see, for example, 1-in-50/100/1000-year floods and droughts.


I'm not saying we cannot do it, just that we cannot always get it right, and there is plenty of empirical evidence for that.

The second point is that the distribution has a long tail, especially when we consider the possibility of multiple independent incidents overlapping in time, to the point where it becomes infeasible to suppose that we could be prepared to continue operating as if nothing had happened in all conceivable scenarios, regardless of how accurately we could predict their likelihood.


I do not understand your argument. We also cannot correctly predict the failures of fossil fuel generation. Sometimes multiple plants have outages that coincide, and we have blackouts. Shit happens, and will continue to happen. Meanwhile, we can make statistically rational plans.

We have coal fired plants in Australia with <90% uptime (often unscheduled), but somehow they're considered baseload rather than intermittent.


And I cannot figure out why you are saying this, as nothing I have said previously either contradicts what you say here, or is contradicted by it. If you could say what you think I am saying in my posts in this thread, we can sort it out.

EDIT: I see the problem starts with the first sentence of your first post here: “Why can't we predict how big or how often those events would be?” - which is completely beside the point in my response to rgmerk, who wrote “It's not clear (yet) what a 100% clean energy powered world would use to cover the last couple of percent of demand when loads peak and/or variable generation troughs for extended periods.” My response to this and the follow-up is this: a) if we are talking about two percent, we can overbuild the renewable capacity, and b) if we are considering all eventualities, there inevitably comes a point where we say that we are not going to prepare for uninterrupted service in this event.


> a) if we are talking about two percent, we can overbuild the renewable capacity,

We've pointed out why this is a poor argument.


No you didn't; you pointed out why it is not, in itself, a significant issue in the first place (which rgmerk tacitly seems to recognize in his first response, through pivoting away from the 2% claim.) My position on this has been that if the issue really is over ~2%, there is a simple solution.


You even admitted it was a poor argument.

I'll state it plainly: to get to the same level of reliability as the existing grid with just wind, solar, and batteries requires unacceptable amounts of overprovisioning of these at high latitude (or unacceptably high transmission cost).

Fortunately, use of different long duration storage (not batteries) can solve the problem more economically.


> You even admitted it was a poor argument.

"Creative" re/misinterpretation is becoming quite a thing here - what I actually did was agree that rgmerk had a more defensible position after he pivoted away from his original ~2% claim to a more reasonable one.

I'll state it plainly: rgmerk's subsequent pivot in his stated claims does not retroactively make my response to his original claim wrong! (Not even if the subsequent claim more accurately reflects what he really meant to say.) I am having trouble figuring out why anyone would think otherwise.


We can and do, and there are detailed plans based on those weather scenarios (eg for the Australian east coast grid; there is AEMO’s Integrated System Plan).

Things in the US are a bit more of a mixed bag, for better or worse, but there have been studies done that suggest that you can get very high renewables levels cost effectively, but not to 100% without new technology (eg “clean firm” power like geothermal, new nuclear being something other than a clusterfumble, long-term storage like iron-air batteries, etc etc etc).


The best technologies there are (IMO) e-fuels and extremely low capex thermal.

There are interesting engineering problems for sources that are intended to operate very infrequently and at very low capacity factor, as might be needed for covering Dunkelflauten. E-fuels burned with liquid oxygen (and water to reduce temperature) in rocket-like combustors might be better than conventional gas turbines for that.


Curious - any references for those “rocket turbine” motors, particularly for this application? I’ve not seen that idea before.


It's mostly something I thought about myself. The prompting idea was how to massively reduce the capex of a turbine system, even if that increases the marginal cost per kWh when the system is in use, together with the observation of the incredibly high power density of rockets (they're the highest power density heat engines humanity makes). So: get rid of the compressor stage of the turbine, run open cycle so there's no need to condense steam back to water, and operate at higher pressure (at least an order of magnitude higher than combustion turbines) so the entire thing can be smaller.

You'd have to pay for storage of water and LOX (and for making the LOX), so this wouldn't make sense for prolonged usage. On the plus side, using pure LOX means no NOx formation, so you also lose the catalytic NOx destruction system a stationary gas turbine would need to treat its exhaust.
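
Back-of-the-envelope, the LOX requirement per kWh of electricity is roughly this (assuming the e-fuel is methane and a ~40% fuel-to-electric conversion efficiency; both assumptions are mine, purely illustrative):

    # How much LOX per kWh of electricity from an e-methane + LOX combustor?
    # Assumed: methane LHV ~50 MJ/kg, stoichiometric burn
    # (CH4 + 2 O2 -> CO2 + 2 H2O, i.e. 4 kg O2 per kg CH4), ~40% efficiency.
    lhv_ch4_mj_per_kg = 50.0
    o2_per_ch4_kg = 64.0 / 16.0          # stoichiometric mass ratio, 4:1
    efficiency = 0.40

    fuel_mj_per_kwh_e = 3.6 / efficiency                       # ~9 MJ fuel per kWh_e
    ch4_kg_per_kwh_e = fuel_mj_per_kwh_e / lhv_ch4_mj_per_kg   # ~0.18 kg CH4
    lox_kg_per_kwh_e = ch4_kg_per_kwh_e * o2_per_ch4_kg        # ~0.72 kg LOX

    print(ch4_kg_per_kwh_e, lox_kg_per_kwh_e)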

I vaguely recall some people in Germany were looking at something like this but I don't remember any details.


The problem is that the last two percent isn't evenly distributed in time; rather, it occurs rarely, but in large chunks. On average it's 2%, but not at each point in time.

Also, if solar ends up much cheaper than wind there's going to be need for seasonal energy storage, which could be considerably more than 2% at high latitude. Batteries are unsuitable for this.

