AdieuToLogic's comments

>> A comment "this CANNOT happen" has no value on itself.

> I think it does have some value: it makes clear an assumption the programmer made.

To me, a comment such as the above marks about the only acceptable time to either throw an exception (in languages which support that construct) or otherwise terminate execution (such as exiting the process). If further understanding of the problem domain reveals that what was thought impossible is actually rare or unlikely, then introducing a disjoint union type capable of producing either an error or the expected result is in order.

Most of the time, "this CANNOT happen" falls into the category of "it happens, but rarely" and is best addressed with types and verified by the compiler.
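
To illustrate, here is a minimal sketch (all names are hypothetical, and I'm assuming C++17's std::variant for the disjoint union) of how a "rare but possible" failure can be surfaced in the type system so the compiler verifies it is handled:

  #include <string>
  #include <variant>

  // Hypothetical placeholder types for the error and success cases.
  struct ParseError { std::string reason; };
  struct Config     { int retries; };

  // A disjoint union: callers receive either an error or the expected result.
  using ParseResult = std::variant<ParseError, Config>;

  ParseResult parseConfig(const std::string& text) {
      if (text.empty())
          return ParseError{"empty input"};  // the former "cannot happen" case, made explicit
      return Config{3};
  }

Because std::visit requires every alternative to be handled, ignoring the error branch becomes a compile-time concern rather than a runtime surprise.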


> Does anyone have any good resources on how to get better at doing "functional core imperative shell" style design?

Hexagonal architecture[0] is a good place to start. The domain model core can be defined with functional concepts while also defining abstract contracts (abstractly "ports", concretely interface/trait types) implemented in "adapters" (usually technology-specific, such as HTTP and/or SMTP in your example).
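
As a rough sketch (all names here are hypothetical, not taken from any particular framework), a port can be as small as an abstract interface the functional core depends on, with an adapter supplying the technology-specific implementation:

  #include <iostream>
  #include <string>

  // Port: an abstract contract defined by the domain core.
  struct Notifier {
      virtual ~Notifier() = default;
      virtual void notify(const std::string& message) = 0;
  };

  // Adapter: a technology-specific implementation (SMTP is just an example).
  struct SmtpNotifier : Notifier {
      void notify(const std::string& message) override {
          // hand the message to an SMTP client here; logging stands in for it
          std::cout << "sending mail: " << message << '\n';
      }
  };

  // The domain core is written against the port, never against an adapter.
  void onOrderPlaced(Notifier& notifier) {
      notifier.notify("order placed");
  }

Swapping SmtpNotifier for an HTTP-based or in-memory test adapter requires no change to the core.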

0 - https://en.wikipedia.org/wiki/Hexagonal_architecture_(softwa...


> AI should, from the core be intrinsically and unquestionably on our side, as a tool to assist us.

"Should" is a form of judgement, implying an understanding of right and wrong. "AI" are algorithms, which do not possess this understanding, and therefore cannot be on any "side." Just like a hammer or Excel.

> If it's not, then it feels like it's designed wrong from the start.

Perhaps it is not a question of design, but instead one of expectation.


I think that is where people disagree about the definition of AI.

An algorithm isn't really AI then. Something worthy of being called AI should be capable of this understanding and judgement.


> An algorithm isn't really AI then.

But they are though. For a seminal book discussing why and detailing many algorithms categorized under the AI umbrella, I recommend:

  Artificial Intelligence: A Modern Approach[0]

And for LLMs specifically:

  Foundations of Large Language Models[1]

0 - https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...

1 - https://arxiv.org/pdf/2501.09223


>> The whole idea of putting "agentic" LLMs inside a sandbox sounds like rubbing two pieces of sandpaper together in the hopes a house will magically build itself.

> What is the alternative?

Don't expect to get a house from rubbing two pieces of sandpaper together?


Fitting username, if nothing else.

>>> What is the alternative?

>> Don't expect to get a house from rubbing two pieces of sandpaper together?

> Fitting username, if nothing else.

Such is my lot in life I suppose...

Now for a reasoned position while acknowledging the flippant nature of my previous post.

The original metaphor centered around expectations. If best practice when using a s/w dev tool is to sandbox it so that potential damage can be limited, then there already exists knowledge that its use can go awry at any time. Hence the need for damage mitigation. The implication is an erosion of trust in whether the tool will perform as desired, or merely as allowed, each time it is used.

As for the "house" part of the metaphor, use of tools to build desired solutions assumes trust in said tools to achieve project goals. Much like using building construction tools is expected to result in a house. But if all the construction workers have is sandpaper, then there's no way there's going to be a house at the end of construction.

It takes more than sandpaper to get (build) a house - people, hammers, saws, etc. along with the skills of all involved. And it takes more than an LLM to deliver an acceptable s/w solution, even if its per-invocation deleterious effects are mitigated via sandboxing.


> I don't know if this sentence was written by LLM or not but people will definitely use LLMs to revise and refine posts. No amount of complaining will stop this. It is the new reality. It's a trend that will only continue to grow.

Using an LLM to generate a post with the implication it is the author's own thoughts is the quintessential definition of intellectual laziness.

One might as well argue that plagiarism is perfectly fine when writing a paper in school.


> Using an LLM to generate a post

You are talking about an entirely different situation that I purposely avoided in my comment.


>>> people will definitely use LLMs to revise and refine posts

>> Using an LLM to generate a post

> You are talking about an entirely different situation that I purposely avoided in my comment.

By that logic, if I hand a s/w engineering team a PostIt note saying "add feature X", then all they are doing is "revise and refine" the solution I made, not generating a solution.

Gotcha.


> By that logic, if I hand a s/w engineering team a PostIt note saying "add feature X", then all they are doing is "revise and refine" the solution I made, not generating a solution.

Way to stretch my comment and make it mean something I didn't mean! You have gone from me talking about just "revising and refining a post" to someone generating whole software features using LLM.

First, I wasn't talking about generating whole software features. Second, pretending that I implied anything like that, even remotely, is disingenuous and frankly a bad-faith style of debating.

You are looking for some sort of disagreement when there is none. I detest LLM-based plagiarism too. So I'm really confused why you had to come here looking for disagreements where there are none, and be combative, no less. If this is your style of debating, I refuse to engage further.

Next time, you might want to review the HN guidelines and be less combative: https://news.ycombinator.com/newsguidelines.html

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize.


>> By that logic, if I hand a s/w engineering team a PostIt note saying "add feature X", then all they are doing is "revise and refine" the solution I made, not generating a solution.

> You have gone from me talking about just "revising and refining a post" to someone generating whole software features using LLM.

I simply extrapolated your stated position by applying it to another, relatable, situation. The use of "add feature X" was to keep the response succinct and served as a placeholder.

> Next time, you might want to review the HN guidelines and be less combative

And you might want to review same after originally authoring:

  These incessant complaints about LLM-written text don't 
  help and they make the comment threads really boring. HN 
  should really introduce a rule to ban such complaints just 
  like it bans complaints about tangential annoyances like 
  article or website formats, name collisions, or back-button 
  breakage

> I simply extrapolated your stated position by applying it to another, relatable, situation.

The extrapolation led to something I didn't imply. If you're making the extrapolation to add a point beyond what I said, I'm sure it would have been very welcome if you hadn't posed it in a combative manner that comes across as a 'take down' of my comment.

Going back to where it all began:

> Using an LLM to generate a post with the implication it is the author's own thoughts is the quintessential definition of intellectual laziness.

Extrapolation, yes. But a non sequitur, because my comment did not even remotely imply generating a whole post using an LLM. So your extrapolation stands well on its own. I just don't see the need to pose it as a sort of "take down" of my comment.

What I find really funny is that, in reality, like you I detest LLM-based plagiarism too. So we must be in agreement? Yet you manage to find disagreements where there are none and be combative about it. Well done, sir!

> And you might want to review same after originally authoring

I have. I've found nothing in the guidelines that forbids me from expressing my frustration over the abundant supply of trite comments. Nothing there forbids me from begging the HN overlords to discourage trite comments about LLM-written text. They already discourage comments about tangential issues like website formats, name collisions, and back-button issues; they might as well discourage comments about LLM-written text. That was my request. The HN overlords may not pay heed to my request and that's fine. But after reading the guidelines, I don't see why I cannot make the request I have in mind.


>> I simply extrapolated your stated position by applying it to another, relatable, situation.

> The extrapolation led to something I didn't imply.

I extrapolated my interpretation of your position to make the point that to "revise and refine" is equivalent to "generate", in that the latter is simply the effect of the former, without obscuring the source of the work.

> ... I'm sure that'd have been very welcome if you hadn't posed it in a combative manner that comes across as a 'take down' of my comment.

This is your interpretation. Mine is that I have not made ad hominem responses nor anything similar.

> So your extrapolation stands well on its own. I just don't see the need to pose it as a sort of "take down" on my comment.

This is the second time you've used the phrase "take down." Having a differing opinion and expressing such is not a "take down."

> What I find really funny is that in reality like you, I detest LLM-based plagiarism too. So we must be in agreement?

In that we most certainly are. In addition, I believe that using LLMs to produce content presented as if it were one's own work is unacceptable. This might be different for someone else, depending on one's definition of plagiarism.

>> And you might want to review same after originally authoring

> I have. I've found nothing in the guidelines that forbid me from ...

Guidelines do not forbid; they suggest, for the betterment of everyone's experience.

> ... expressing my frustrations over the abundant supply of trite comments.

See:

  Don't be curmudgeonly. Thoughtful criticism is fine, but 
  please don't be rigidly or generically negative.
  
  Please don't fulminate. Please don't sneer, including at 
  the rest of the community. 

  Please don't post shallow dismissals, especially of other 
  people's work.

> The HN overlords may not pay heed to my request and that's fine.

There are no "HN overlords", only Zuul[0].

(that last one was a joke)

0 - https://ghostbusters.fandom.com/wiki/Zuul


Whenever someone brings up washing machines and software, I am always reminded of Forth[0]:

  As an example, imagine a microprocessor-controlled washing 
  machine programmed in Forth. The ultimate command in your 
  example is named WASHER. Here is the definition of WASHER, 
  as written in Forth:

    : WASHER  WASH SPIN RINSE SPIN ;

0 - https://www.forth.com/starting-forth/1-forth-stacks-dictiona...

> ... if the LLM hits a wall it’s first inkling is not to step back and understand why the wall exists and then change course, its first inkling is ...

LLMs do not "understand why." They do not have an "inkling."

Claiming they do is anthropomorphizing a statistical token (text) document generator algorithm.


The more concerning algorithms at play are how the models are post-trained, and then the concern of reward hacking, which is what he was getting at. https://en.wikipedia.org/wiki/Reward_hacking

100% - we really shouldn't anthropomorphize. But the current models are capable of being trained in a way to steer agentic behavior from reasoned token generation.


> But the current models are capable of being trained in a way to steer agentic behavior from reasoned token generation.

This does not appear to be sufficient in the current state, as described in the project's README.md:

  Why This Exists

  We learned the hard way that instructions aren't enough to 
  keep AI agents in check. After Claude Code silently wiped 
  out hours of progress with a single rm -rf ~/ or git 
  checkout --, it became evident that "soft" rules in an 
  CLAUDE.md or AGENTS.md file cannot replace hard technical 
  constraints. The current approach is to use a dedicated 
  hook to programmatically prevent agents from running 
  destructive commands.

Perhaps one day this category of plugin will not be needed. Until then, I would be hard-pressed to employ an LLM-based product having destructive filesystem capabilities based solely on the hope of them "being trained in a way to steer agentic behavior from reasoned token generation."
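
To make the "hard technical constraints" idea concrete, here is a deliberately simplified sketch (not the plugin's actual implementation, and the pattern list is illustrative only) of a guard that refuses to pass a proposed shell command through if it matches a known-destructive pattern:

  #include <iostream>
  #include <string>

  // Illustrative only: a real hook needs far more robust command parsing.
  bool looksDestructive(const std::string& command) {
      return command.find("rm -rf") != std::string::npos ||
             command.find("git checkout --") != std::string::npos;
  }

  int main(int argc, char** argv) {
      const std::string command = (argc > 1) ? argv[1] : "";
      if (looksDestructive(command)) {
          std::cerr << "blocked: " << command << '\n';
          return 1;  // non-zero exit tells the caller to reject the command
      }
      return 0;      // allow everything else
  }

The point is that the decision is enforced outside the model, so no amount of "reasoned token generation" can talk its way past it.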

I wasn’t able to get my point across. But I completely agree.

> What is "skill entropy"

Skill entropy is a result of reliance on tools to perform tasks which otherwise would contribute to and/or reinforce a person's ability to master same. Without exercising one's acquired learning, skills can quickly fade.

For example, an argument can be made that spellcheckers commonly available in programs degrade people's ability to spell correctly without this assistance (such as when using pen and paper).


There are a few problems with this post:

  1 - In C++, a struct is no different from a class
      other than default member access of public
      instead of private.
  2 - The use of braces for property initialization
      in a constructor is malformed C++.
  3 - C++ is not C, as the author eventually concedes:

  At this point, my C developer spider senses are tingling: 
  is Response response; the culprit? It has to be, right? In 
  C, that's clear undefined behavior to read fields from 
  response: The C struct is not initialized.

In short, if the author employed C++ instead of trying to use C techniques, all they would have needed is a zero-cost constructor definition such as:

  inline Response () : error (false), succeeded (false)
  {
    ;
  }

inline and ; are redundant

> inline and ; are redundant

One of my s/w engineering axioms is:

  Better to express intent than assume a future
  reader of a solution, including myself, will
  intrinsically understand the decisions made.

If this costs a few extra keystrokes when authoring an implementation, so be it.

In the banking subdomain of credit/debit/fleet/stored-value card processing, when considering regulation and format evolution over time, services provided by banks/ISOs/VARs will effectively exhibit FP traits regardless of the language(s) used to implement them.

Savvy processors recognize the immutability of each API version published to Merchants, along with how long each must be supported, and employ FP techniques both in design and implementation of their Merchant gateways.

Of course, each bank's mainframes "on the rails" do not change unless absolutely necessary (and many times not even then).

