
According to the timeline it took more than a week just for Filevine to respond saying they would review and fix the vulnerability. It was 24 days after initial disclosure when he confirmed the fix was in place.


Given that the author describes the company as prompt, communicative and professional, I think it’s fair to assume there was more contact than the four events at the top of the article.


My guess would be so they don't have to embed an IP address or hostname in the malware to send secrets to, which could then be blocked or taken down.


But they could have encrypted it; it's just double base64 encoded, so everybody can read it.


That one stumped me. Why not just encrypt with a hardcoded public key, so only the attacker can get the creds?

The simple base64 encoding didn't hide these creds from anyone, so any vendor's security team (e.g. the big clouds, GitHub, etc.) can collect them and disable them.

If you did a simple encryption pass, no one but you would know what was stolen, or could abuse/sell it. My best guess is that calling node encryption libs might trigger code scanners, or EDRs, or maybe they just didn't care.
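Something like this would have been enough with Node's built-in crypto, no extra libs (just a sketch: a throwaway key pair is generated inline so it runs standalone, whereas an attacker would hardcode only the public key):

    // Sketch: hybrid encryption of harvested secrets with a public key.
    // The key pair is generated here so the example is self-contained; in the
    // scenario above only the public half would ship with the payload.
    const { generateKeyPairSync, publicEncrypt, randomBytes, createCipheriv } = require('crypto');

    const { publicKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });

    function sealSecrets(secrets) {
      // RSA can only seal small payloads, so encrypt the data with a random
      // AES key and seal that key with RSA (standard hybrid scheme).
      const aesKey = randomBytes(32);
      const iv = randomBytes(12);
      const cipher = createCipheriv('aes-256-gcm', aesKey, iv);
      const body = Buffer.concat([cipher.update(JSON.stringify(secrets)), cipher.final()]);
      return {
        key: publicEncrypt(publicKey, aesKey).toString('base64'),
        iv: iv.toString('base64'),
        tag: cipher.getAuthTag().toString('base64'),
        body: body.toString('base64'),
      };
    }

    console.log(sealSecrets({ NPM_TOKEN: 'example', AWS_SECRET_ACCESS_KEY: 'example' }));

With that, only the holder of the private key can read what was exfiltrated.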


Or they just wanted to prove a point.

They certainly seemed smart enough to choose encryption over encoding.

Hard to believe encryption would be the one thing that would trigger code scanners.

Also, it's not just every vendor; every bad actor could've scraped the keys too. I wonder if they've set up the infrastructure to handle all these thousands of keys…

Like, what do you even do with most of it at scale?

Can you turn cloud, AWS, and AI API keys into money on a black market?


Someone could be tricked into giving their npm credentials to the attacker (e.g. via a phishing email), and then the attacker publishes new versions of their packages with the malicious diff. Then when the infected packages are installed, npm runs the malicious preinstall script which harvests secrets from the new machine, and if these include an npm token the worm can see which packages it has access to publish, and infect them too to continue spreading.
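For anyone unfamiliar, the hook itself is nothing exotic; a compromised release just needs a preinstall entry in its package.json pointing at the payload (names here are made up for illustration):

    {
      "name": "some-popular-package",
      "version": "1.2.4",
      "scripts": {
        "preinstall": "node collect.js"
      }
    }

npm runs that script automatically when the package is installed as a dependency, before anyone has had a chance to look at the code.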


I can testify that steamed stinging nettles with gomasio (toasted sesame seeds and salt) are very delicious.


One option to make it a little safer is to add ignore-scripts=true to a .npmrc file in your project root. Lifecycle scripts then won't run automatically. It's not as nice as Pnpm or Bun, though, since this also prevents your own postinstall scripts from running (not just those of dependencies), and there's no way to whitelist trusted packages.
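For reference, the whole change is one line:

    # .npmrc in the project root (npm also honors a per-user ~/.npmrc)
    ignore-scripts=true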


Bun also doesn't execute lifecycle scripts by default, except for a customizable whitelist of trusted dependencies:

https://bun.com/docs/guides/install/trusted
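The whitelist lives in package.json, something like this (package names just for illustration):

    {
      "trustedDependencies": ["esbuild", "sharp"]
    }

Lifecycle scripts of anything not listed are skipped.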


"Trusted" dependencies are poor solution, the good solution is either never run scripts, or run them inside qemu.


I would expect to be able to download a package and then inspect the code before I decide to import/run any of the package files. But npm by default will run arbitrary code in the package before developers have a chance to inspect it, which can be very surprising and dangerous.


npm used to do that. bun never did. No idea about the past for pnpm or yarn.


> Human cognition was basically bruteforced by evolution

This is an assumption, not a fact. Perhaps human cognition was created by God, and our minds have an essential spiritual component which cannot be reproduced by a purely physical machine.


Even if you don't believe in God, scientific theories of how human cognition came about (and how it works and changes over time) are all largely speculation and good storytelling.


It’s not an assumption, it’s a viable theory based on overwhelming evidence from fossil records.

What’s NOT supported by evidence is an unknowable, untestable spiritual requirement for cognition.


What overwhelming evidence do fossil records provide about human cognition?


We don't need fossil records. We have a clear chain of evolved brain structures in today's living mammals. You'd have to invent some fantastical tale of how God is trying to trick us by putting such clearly connected brain structures in a series of animals that DNA provides clear links for an evolutionary path.

I'm sympathetic to the idea that God started the whole shebang (that is, the universe), because it's rather difficult to disprove, but looking at the biological weight of evidence that brain structures evolved over many different species and arguing that something magical happened with homo sapiens specifically is not an easy argument to make for someone with any faith in reason.


> clear links for an evolutionary path

There are clear links for at least 2 evolutionary paths: bird brain architecture is very different from that of mammals, and some birds are among the smartest species on the planet. They have sophisticated language and social relationships, they can deceive (meaning they can put themselves inside another's mind and act accordingly), they solve problems, and they invent and engineer tools for specific purposes and use them to that effect. Give them time and these bitches might even become our new overlords (if we're still around, that is).


And let’s not forget how smart octopuses are! If they lived longer than a couple years, I’d put them in the running too.


> it’s a viable theory based on overwhelming evidence from fossil records

No one has gathered evidence of cognition from fossil records.


Sure they have. We see every level of cognition in animals today, and the fossil record proves that they all came from the same evolutionary tree. For every species that can claim cognition (there’s lots of them), you can trace it back to predecessors which were increasingly simple.

Obviously cognition isn’t a binary thing, it’s a huge gradient, and the tree of life shows that gradient in full.


It is completely unreasonable to assume our intelligence was not evolved, even if we acknowledge that an untestable magical process could be responsible. If the latter is true, it's not something we could ever actually know.


> If the latter is true, it's not something we could ever actually know.

That doesn’t follow.


It follows by definition. If the magic, spirit, or god were testable, it becomes a part of scientific theory.


I'm sticking to materialism, because historically all its predictions turned out to be correct (cognition happens in the brain, thought manifests physically in neural activity, affecting our physical brain affects our thinking).

The counter-hypothesis (we think because some kind of magic happens) has absolutely nothing to show for it; proponents typically struggle to even define the terms they need, much less make falsifiable predictions.


It is an assumption backed by considerable evidence. Creationism, OTOH, is an assumption backed by superstition and fantasizing. Or could you point to at least some evidence?

Besides, spirituality is not a "component"; it's a property emergent from brain structure and function, which is basically a purely physical machine.


In that sense, what isn't an assumption?


Maybe there's a small teapot orbiting the earth, with ten thousand angels dancing on the tip of the spout.


I think you’re both saying the same thing


Many people have non-JS backends and only use npm for frontend dependencies. If a postinstall script runs in a dev or build environment it could get access to a lot of things that wouldn't be available when the package is imported in a browser or other production environment.


Malicious client-side code can still perform any user action, exfiltrate user data via cross-domain requests, and probe the user's local network.


I wonder why npm doesn't block pre/postinstall scripts by default, which pnpm and Bun (and I imagine others) already do.

EDIT: oh I scrolled down a bit further and see you said the exact same thing in a top-level comment hahah, my bad


It's crazy to me that npm still executes postinstall scripts by default for all dependencies. Other package managers (Pnpm, Bun) do not run them for dependencies unless they are added to a specific allow-list. Composer never runs lifecycle scripts for dependencies.

This matters because dependencies are often installed in a build or development environment with access to things that are not available when the package is actually imported in a browser or other production environment.


I'm also wondering why huge scale attacks like this don't happen for other package managers.

Like, for Rust, you can have a build.rs file that gets executed when your crate is compiled; I don't think it's sandboxed.

Or also in other languages that get run on development machines, like Python packages (which can trigger code only on import), Java libraries, etc...

Like, there is the postinstall script issue of course, but I feel like these attacks could have been just as (or almost as) effective in other programming languages, yet we only ever seem to hear about npm packages.


All package managers are vulnerable to this type of attack; it just happens that npm is like 10+ times more popular than the others, so it gets targeted more often.


It's only JS devs that constantly rebuild their system with full dependency updates, so they are the most attractive target.


It's a lot harder to do useful things with backend languages. JavaScript is more profitable as you can do the crypto wallet attacks without having to exploit kernel zero days.


It's trivial to run an exploit shell from almost any language when you have non-sandboxed code running on the target machine.


Yes, but outside of dumping user data, there's not much else you can do. Crypto mining will get caught rather quickly (most big clouds ban mining). User data is useful for the type of attacker that's willing to go through the whole black-market selling process. For script kiddies, if you think about it, the easiest payoff from social engineering/phishing is a front-end crypto wallet theft.


This still has nothing to do with the language or kernel exploits. Only code execution on a valuable host matters.

You could make a malicious Rust crate that on installation runs a Python shell and injects JavaScript into your browser to extract crypto wallets. There even seems to be a significant overlap of Rust devs/crypto fans.

Also script kiddies don't do social engineering and blackmarket crypto selling, that's 100% professional crime territory. Real-life script kiddie attacks I've seen were more like hacking an ecommerce site and adding bananas as currency.


For the same reason that scams are kind of obvious if you care to look: use of JS/npm is an automatic filter for a more clueless target.


Seems like this is a fairly recent change, for Pnpm at least: https://socket.dev/blog/pnpm-10-0-0-blocks-lifecycle-scripts...

What has been the community reaction? Has allowing scripts been scalable for users? Or could it be described as people blindly copying and pasting allow commands?

I am involved in Python packaging discussions and there is a pre-proposal (not at PEP stage yet) at the moment for "wheel variants" that involves a plugin architecture, a contentious point is whether to download and run the plugins by default. I'd like to find parallels in other language communities to learn from.


In my experience, packages which legitimately require a postinstall script to work correctly are very rare. For the apps I maintain, esbuild is the only dependency which benefits from a postinstall script to slightly improve performance (though it still works without the script). So there's no scaling issue adding one or two packages to a whitelist if desired.
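With pnpm 10 that allow-list goes in package.json, e.g. (sketch, package list illustrative):

    {
      "pnpm": {
        "onlyBuiltDependencies": ["esbuild"]
      }
    }

Everything not listed gets installed without its lifecycle scripts running.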



Yes it does, since the ignore-scripts option is not enabled by default.


Yes it does; you're correct and I misread. I can't edit, delete, or flag my initial reply, unfortunately.

