I don't work in a shop where performance/speed is important, but I am looking for other ways to do things I would do in Make but...not in Make.
For example, my use-case is similar to what Mike Bostock described in "Why Use Make" [0] when explaining how he uses Make to build out his data transformation process. Most of my work is data transformation/small-scale ETL, but I just haven't been able to get into Make beyond trivial work, and I often end up writing things in Rake (Ruby).
So I was wondering if other devs had tried using Buck/Bazel for everyday hobbies and projects, and whether you stuck with the new tool or went back to Make? The portability of Makefiles isn't a high priority for me, and I like experimenting with different systems for my own projects.
I tried Buck and found it to be a poor clone of the Google build tool it's based on. If you're interested, go straight to Bazel -- it's the real thing.
Edit to add: here's a specific complaint. To run arbitrary commands and shell scripts, you use genrule(), but in Buck a genrule can only have a single output. I used Buck to preprocess and organize the assets for a game, and that restriction made it very awkward.
It's been a while since I used Buck, and it looks like this has improved a bit -- you can now output a folder of files: https://buckbuild.com/rule/genrule.html
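For anyone curious, the directory-output style looks roughly like this in a Buck BUILD file (the target, glob pattern, and script names here are purely illustrative):

```python
# Sketch of a Buck genrule whose single "out" is a directory.
genrule(
    name = "game_assets",
    srcs = glob(["raw_assets/**"]),
    # If the command creates $OUT as a directory, everything written
    # into it becomes the rule's output tree.
    cmd = "mkdir -p $OUT && python process_assets.py $SRCS $OUT",
    out = "assets",
)
```

It still funnels everything through one logical output, but downstream rules can at least consume the whole folder.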
These build systems become super handy once you're working on a large tree of many different software projects. For small projects not so much. It's probably easier to use the default build system of your programming language.
When make fails me for personal stuff, I switch to ninja [1]. The Chromium people use it in their Bazel-like build system. A hidden gem I recently discovered is doit [2]. I found this one incredibly helpful when I had a build with tons of 1:N, N:1, N:M dependencies.
Thanks for the suggestions! That "doit" looks simple enough for my fairly pedestrian needs. I wonder if my aversion to Make is that I don't spend enough time thinking about and writing Makefiles, but having something a little friendlier and simpler -- at the cost of ubiquity/portability -- might be the kind of training wheels I need.
Thanks for this! I need a build tool for a Python based data analysis project ("micro ETL"?). Doit looks like it will work nicely, with the bonus of keeping everything in Python.
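Since a couple of people are considering doit for small ETL jobs, here's a minimal sketch of what a dodo.py can look like -- the file names and scripts are made up, but the task-dict shape (file_dep/targets/actions) is the core of doit's model:

```python
# dodo.py -- doit discovers functions named task_* and re-runs a task
# only when the files listed in "file_dep" have changed.

def task_clean_csv():
    """Strip bad rows from the raw dump (illustrative command)."""
    return {
        "file_dep": ["raw.csv"],
        "targets": ["clean.csv"],
        "actions": ["python clean.py raw.csv clean.csv"],
    }

def task_summarize():
    """Aggregate the cleaned data; chains off the previous target."""
    return {
        "file_dep": ["clean.csv"],
        "targets": ["summary.csv"],
        "actions": ["python summarize.py clean.csv summary.csv"],
    }
```

Running `doit` then executes only the tasks whose dependencies changed -- much like make, but with the graph written in plain Python.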
So far I've also always found `make` to be the best option for what I need, especially because it's already available everywhere (albeit sadly in very subtly incompatible versions).
Anyway, one thing that has always bothered me about make is how much it depends on file modification dates. Imagine if it instead used a very optimised hashing algorithm over the input content. Content could be a file or any URI, so it would use wget/curl and ssh as dependencies. Hashing would be optimized to fail early -- e.g. hash in n kB increments, and mark the input as new and return as soon as a chunk changes.
Imagine how well this would now suddenly integrate into our modern web service based landscape. You could hook together services quite easily:
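The early-exit hashing core of that idea can be sketched in a few lines (local files only -- the wget/curl/ssh part is omitted, and the chunk size is arbitrary):

```python
import hashlib

CHUNK = 64 * 1024  # hash in 64 kB increments so we can bail out early

def chunk_digests(path, chunk=CHUNK):
    """Record a per-chunk digest for a file, to be stored between runs."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            digests.append(hashlib.blake2b(block).hexdigest())
    return digests

def changed(path, old_digests, chunk=CHUNK):
    """Return True at the first chunk whose digest differs (early exit)."""
    with open(path, "rb") as f:
        for old in old_digests:
            block = f.read(chunk)
            if hashlib.blake2b(block).hexdigest() != old:
                return True
        return bool(f.read(1))  # trailing data means the file grew
```

A build tool using this would store the digests between runs and only re-run a recipe when `changed()` fires for one of its inputs.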
It is a bit unpolished and still a work in progress, but has some features that are useful for data science workflows that most software build tools do not cover:
* Steps can produce multiple named outputs (for example full.csv and summary.csv), dependency graph can branch in both ways,
* Outputs that are directories with many files,
* Built-in input/output file sharding (for example by date or month) and HDFS support,
* Lots of control over which steps to execute: not just "rebuild target X and any required upstream dependencies". Can select a target and all that depend on it downstream (if you know that a remote file changed), skip rebuilding some steps, etc,
* Very short (text-processing) scripts can go directly in the Drakefile, in Python or Ruby.
AFAIK, you can't specify several outputs per rule in GNU Make without relying on horribly complex workarounds. It's actually the feature I miss the most in it.
That's the kind of workaround I had in mind and is probably sufficient, but yes, ~25 lines of cryptic and error-prone interleaved GNU make and shell script just to express multiple outputs correctly is way above my personal threshold for “too complex” or even “sane”.
That doesn't really look like "handling it" by any sensible definition. That looks like a huge kludgy workaround, almost like the user implementing a build system inside the build system. (I mean, wtf, asking the user to implement a lock-and-set operation inside their build?)
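For what it's worth, GNU Make does have two narrower escape hatches (file names below are illustrative): multiple targets of a *pattern* rule are treated as a group and run the recipe once, and GNU Make 4.3 later added the same one-invocation semantics for explicit rules via "&:":

```make
# Pattern rules: one recipe invocation produces both files.
%.tab.c %.tab.h: %.y
	bison -d $<

# GNU Make 4.3+ grouped explicit targets:
full.csv summary.csv &: input.csv
	python transform.py input.csv
```

Anything older than 4.3 is stuck with the sentinel-file kludge being complained about here for explicit multi-output rules.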
It even has built-in implicit rules, enabled by default, which check for the existence of files that haven't appeared in any new project for maybe ten or twenty years (such as RCS and SCCS version-control files).
But I love the built-in implicit rules. I use these frequently.
Often, I write a little test program in C or C++. I name the file "foo.c". Then, to build it I simply type `make foo`.
Sure, the rules may be a bit stale, but there isn't much software I use today that I could have used in a near-identical manner 40 years ago. Make has truly stood the test of time.
Closest in spirit to make, and doing essentially all you want, is "redo" -- a concept drafted by djb and implemented independently by apenwarr[0] and by others. apenwarr's version includes a super short "do" script (a couple of hundred lines of portable sh, IIRC) which uses the same configuration but does a full rebuild each time.
The two weak points are: (1) multiple outputs from a compilation step (yytab.c yytab.h from yacc) is not properly supported, and (2) no windows support. Other than that it's the perfect minimalistic make replacement -- what make should have been.
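For flavor, an apenwarr-redo build rule is just a shell script named after its target; a minimal sketch (compiler and flags illustrative):

```sh
# default.o.do -- builds any foo.o from foo.c.
# redo passes: $1 = target, $2 = target without extension, $3 = temp output.
redo-ifchange "$2.c"
gcc -c -o "$3" "$2.c"
```

The `redo-ifchange` call both declares the dependency and rebuilds it if needed, which is how the dependency graph gets discovered at run time rather than declared up front.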
Additionally, there's tup[1]. With some assumptions about the build process that usually hold for non-distributed builds, it is the fastest, simplest make replacement: you just write a list of commands that build your final outputs, and by tracing the processes it figures out exactly what needs to be done next time -- nothing more, and nothing less.
I have used Bazel (on Mac OS) to build a small C++ project. The nice thing about it is that it can be easily configured to download and build third-party dependencies from github. I have used this feature to add gflags, protobuf, and googletest dependencies to my project. That being said, it is still not perfect. For instance, I was not able to build GzipInputStream and GzipOutputStream (see e.g. https://github.com/google/protobuf/issues/2365).
I've often found myself in a similar situation, where I just can't seem to enjoy using Make and never make it past trivial things. For that reason, I'll shamelessly plug what I think is a viable alternative: the Taskfile.
Funny, I read that article and got excited that I finally know how to use Make. Then I looked at one of the examples he pointed to, and discovered he abandoned it in November last year :P
Buck has been open source for a while! It's especially popular among larger companies who have a ton of mobile developers contributing to their apps, but I'm not sure if FB has ever publicized who exactly is using Buck. Other companies have been contributing to the Buck ecosystem, too, though, like https://github.com/uber/okbuck
Blaze predates Buck but Buck was open-sourced before Bazel. The two tools also have different origins (Blaze for server applications and Buck for mobile when I was at each respective company).
Buck was started after former google employees at facebook wanted to use something like blaze (ended up being called bazel). Kinda like a lot of other things they copied from google at facebook. Dremel -> presto, etc.
> "Buck is a build system developed and used by Facebook."
I have really, really grown to resent this culture of proud and unabashed cargo-culting that we've arrived at in the open source world. Why is this the first sentence describing a new project? Why do we need a Facebook™-approved build system? Does that somehow make it better than the others? And why does Facebook need their own build system? Was the existing ecosystem technically insufficient for them, or was the issue a legal one?
Whenever I make this point in dev circles, someone will reply, "They serve X amount of visitors a day, so they must know something!" Well, they also have a firehose of ad money pointed at them 24/7.
> "x is a build system developed by me as a side-project that I might drop at any time, and its only production use is to compile itself and a hello world app"
Not to say people's side projects aren't useful, but there's only so many hours in a day, and there are thousands and thousands of open source projects, so you need some way to evaluate what's worth looking at. Knowing it has more than one person behind it, some time in field (so it's not 0.0.1 alpha), and is actively being used (you're not writing the first production code using it) goes a long way to separating it from the pack.
This does not logically follow. Why does facebook know best what kind of build system is suitable for my needs, given that I'm probably nothing like facebook? Why does George Foreman know how I want my grill?
I understand the human need to optimize attention, use basal heuristics to weed out unattractive options, and nurture a need to belong, but this is not logic. It's a rationalization.
It's also why this works, and why facebook (and every other major tech player) does it. It's a brilliant marketing and recruiting tool and a way to insert influence under the guise of open benevolence.
Hidden out there is probably a better tool for your needs. But now you'll never find it.
> I understand the human need to optimize attention, use basal heuristics to weed out unattractive options, and nurture a need to belong, but this is not logic. It's a rationalization.
There is an underlying logic though when you frame it in as a time vs reward problem. I could spend a few months evaluating all of the various build tools out there and find the perfect fit for my team. Or I could spend a few days evaluating the projects with the most support (be that large companies, large communities, whatever). I'll concede that I can't guarantee it's the _perfect_ fit, but it's likely good enough when you compare the opportunity costs.
> Why does facebook know best what kind of build system is suitable for my needs
well, by definition your needs are yours, and only you know them, right? perhaps that is your point... but then this question would seem to be malformed, or perhaps disingenuous. why does facebook need to know "your needs" to develop a good build system? maven developers don't know "your needs" any more than make, cargo and all the rest.
> I'm probably nothing like facebook?
it's not clear what it exactly means to be "like" or "nothing like" facebook, without further context. differences with your understanding of facebook's operational requirements do not necessarily translate to same or similar differences with their software development and build needs and practices.
> Why does George Foreman know how I want my grill?
similarly, the structuring of "my grill" bit seems problematic. also, this seems outright unrelated, unless foreman is in fact a very intensive user of grills and has optimized the grill design over time based on his experiences... and you are also, in some capacity, a professional user of grills.
> It's a brilliant marketing and recruiting tool and a way to insert influence under the guise of open benevolence.
even if we accept the premise of it as a "marketing tool", this doesn't imply that it's not a solid tool proven in real world projects. no obvious mutual exclusivity here. it also doesn't mean it's not-benevolent.
it would seem impossible for any company to avoid this pointed finger of yours if they release a project, because any engineer is going to ask "where did this come from" at one point or another. then, presumably, they will have been victims of villainous marketing.
> Hidden out there is probably a better tool for your needs. But now you'll never find it.
maybe, maybe not. how much better? at best, unknown or unresearched tools would seem to have indeterminate benefit (or lack thereof) relative to what is known and researched.
As a complete outsider, the way I see it a lot of the systems at Facebook actually originated within Google, but were never published. Engineers from Google then went to work at Facebook and wanted to replicate the systems they were already familiar with. The only difference is that Facebook actually releases most of these things as open source. So I guess the good news is that when engineers leave Facebook and go to work at "next big thing" they can bring the same tools with them without having to rewrite them again.
I think you are overreacting. The first sentence has to introduce you to what you are reading about, and this seems like a fairly minimal description of what Buck is. There are more detailed points below the first paragraph, and a talk available on YouTube: https://www.youtube.com/watch?v=uvNI_E0ZgZU.
Yeah, sorry. I've just been seeing this more and more in open source, and the first paragraph just jumped out at me. Why is it so important that the things we use be developed by major companies?
It's an unpopular opinion, but I feel that React et al are just a way for Facebook to gain developer mindshare, just as Angular is a play for Google to gain developer mindshare. The big guys want developers locked into their ecosystems. It's a play out of Microsoft's book.
When you realize that none of these frameworks are even an improvement over jQuery, it becomes clear what the true motivation is.
I once thought that too. But after working under a jQuery ninja I learned that it's far more powerful than most people give it credit for. More importantly, it's significantly easier to reason about DOM changes in jQuery vs other frameworks because there's no magic at all.
For example, here is a SQLite playground that I built with a friend https://sql.glitch.me/.
The entire frontend is only 100 lines of Javascript thanks to jQuery. I'd love to see what the React implementation looks like.
I guess we just disagree over whether or not all that is made easier with React. I have worked on large jQuery codebases that do everything you mentioned: they are maintainable, they manage state in a comprehensive fashion, and they are modularized.
JavaScript-Fatigue-inducing media tries to taint jQuery, calling it all sorts of things, but that hasn't stopped me. Sure, you need to learn and use a handful of useful patterns to build with it, but it's the same situation if you want to make proper use of any framework.
I don't find that important, but I find that it means
1) This is probably a project I want to check out, if it pertains to my stack, and
2) This is probably going to stick around and not be unsupported very soon.
I would be just as happy if FB hadn't developed it, and instead it was just used by them. (Or any other of maybe a dozen big camps) Or if it was just used by a whole lot of people.
Widespread use is a pretty good metric for what library to choose, when you aren't familiar with the landscape.
It's important because a lot more gets done when multiple people are being paid to work on bugs & features full-time than when it's, say, one person trying to find some spare time in their evenings.
I think the argument is that Facebook is spending an absurd amount of money on engineering. So, anything that emerges as something that huge pool of engineers find useful and cool is probably pretty well made.
On the other hand. I do think you are right. React suffers from this a lot. React is good. It has a lot of strong competition from smaller groups. But, the attitude seems to be "FB will rub so much money against React! That much money and fame will polish anything. Soon it'll be popular because it's popular. Don't fight it."
Continuing the trend of Facebook creating competing tools more often than persisting with and improving existing ones, which I feel dilutes effort and has fragmented a number of ecosystems.
We've got a couple of Facebook fans at work, which has left us with a number of our systems using different tools to accomplish essentially the same tasks, and no strong case on either side for us to standardise on one of them.
Maybe I'm just being bitter because I've had a bad experience, but it's been a maintenance nightmare for us in the office.
When Buck was created (my team created it!) there wasn't anything that supported what Facebook needed in a build tool. There still really isn't as all the Blaze work-a-likes are focusing on various different needs.
I would argue Buck actually helps to solve what you are concerned about. At Facebook, everything builds with Buck (and Google with Bazel I hear)...which means you learn it once and you know it for your Objective-C library, your Android app, your Cxx service, your python scripts, etc. It really helps to standardize on a build system, and at many companies you can only do that if it supports windows/mac/linux, it supports many languages and platforms, is fast, and is easy to pick up. Buck is all of those things.
It is very simple. Instead of improving <insert make edition>, they started yet another brand new one.
From my years and years at large companies, these things start because they don't want to share anything in the first place, and then it's just pricey to maintain.
To be fair, a majority of the big technology companies write their own tooling for things that already exist, because the existing tools didn't quite fit the way they needed them to, and the companies had the resources to re-invent whatever they wanted.
Sure when they open source them it further fragments the market and you always get the rush of "Facebook has almost 2 billion users therefore their tools must be the best!" which further exacerbates the issue but I don't blame Facebook for these issues.
It sounds like, in your situation, perhaps there are too many tools being introduced into your projects. Too often I see developers abuse the hell out of npm and the like to just include whatever they need with zero regard for the newly introduced dependency tree and the new tooling that needs to now be maintained.
When Facebook wrote this, the tooling that already existed wasn't available to them -- ex-Google engineers at Facebook wanted something like Bazel/Blaze, but Google had yet to release that publicly. So this is a Hadoop situation where Google had this internal tool that someone else really wanted and because Google didn't open-source it they wrote their own version.
It's almost certain that it is a legal violation and if a lawsuit was brought, I'd bet a million dollars in favour of Google; however, the tool itself isn't the final product and it's silly, counterproductive and just outright malicious to sabotage efforts at building good tooling in a competitor's company. Plus, it will set precedent and Google would have to vet every single new internal tool against what the previous employers of their employees were doing.
> To be fair a majority of the big technology companies all write their own tooling for many things that already exist because they didn't quite fit the way they needed them
What tool would you have suggested Facebook improve?
Saying that Facebook should have improved Make or Maven is like saying Linus should have improved CVS or SVN instead of competing with Git. They're in the same space, but they differ at a very fundamental level.
Some of the tools seem more like improvements on existing ones, such as yarn (instead of npm) and jest over jasmine. However, I still applaud Facebook for releasing these tools, since they often offer massive improvements over their predecessors in many regards. It might be more apparent if you're exposed to web development, since the toolsets there are changing constantly and quickly; some stability would be great in this space.
This doesn't mean we should discourage creating new tools that fundamentally change the ways we work. You mention Git, and I see it as one great example -- I remember with anguish some of the SVN trunks, working with their pseudo tags and the mess you'd often be introduced to.
Facebook and Google have a problem I am familiar with: Not Invented Yet Syndrome.
Typically they face a problem that few people have ever faced. There may be an existing solution, but not in the open. So they have to roll their own.
Others in this thread say Buck was inspired by Blaze. That seems reasonable and I hardly think Facebook can be blamed for rolling their own when that was the only available option.
You might as well blame Google for fragmenting the Hadoop ecosystem by creating their MapReduce framework.
Nope. I totally agree.
I don't trust a single thing from facebook. I mean, good on them that they open source their tools, but most everything that comes out is just a huge pain in the ass with very little benefit to adopt, other than we can say we're using open-source facebook technology.
The funny thing, though, is that keeping up with other web tech, you always have to deal with facebooks API's and sharing and whatnot, and its also, usually a huge pain in the ass.
So, like, what's the benefit of this, switching build systems, other than a tick on the resume for someone who wants to get a job at facebook?
I'm all for using existing tools, but not for the sake of itself. If there is a good reason (and everyone in the team agrees), all now obsolete tools should simply be replaced by the new one.
If this did not work in your office, it is most likely a problem of people in your office not communicating properly about their choice of tools. Did you have people give a short presentation when they introduced a new tool? Did nobody ever point out that you already have various tools that can (easily/elegantly) accomplish what the next new thing(tm) is being introduced for?
I can't speak to the rest of Facebook's stuff, but I think the build tool problem is a special case. Per their docs:
> Buck is designed for building multiple deliverables from a single repository (a monorepo) rather than across multiple repositories. It has been Facebook's experience that maintaining dependencies in the same repository makes it easier to ensure that all developers have the correct version of all of the code, and simplifies the process of making atomic commits.
Having been on the "other side" of the monorepo argument where we tried to make do with improving/extending existing build tools etc. in a rapidly growing engineering org, let me say that Facebook (with Buck), Twitter (Pants? I think?) and Google (with Bazel/Blaze) almost certainly built these to deal with the problem of scaling build management with an ever-growing organization.
The popular model of a dozen or so small repositories in GitHub + Jenkins with Maven/NPM/Rake+Bundler/whatever works fine for maybe a few dozen engineers or more, but one day you wake up and realize there are hundreds of repositories spread across dozens of _teams_ and hundreds of developers. Obviously you've then got a big ol' dependency graph between repos to deal with, so if you need to fix something near the root suddenly you need to run off bumping version numbers and/or fixing intermediate libraries all the way down the graph. Plus version incompatibilities between the dependencies of different libraries ... it's a total mess, and it doesn't make for an org that can easily "move fast and break things", so to speak.
So then to avoid paralysis your options are basically either to silo up (this team owns their stuff, that team owns their stuff, don't bother with shared dependencies) or you go the monorepo route. If you do, then maybe you go and pull all your hundreds of smaller repos into a monorepo. Having everything in one place makes it easier to police the dependency issues within the org & makes it easier for a single engineer to deal with those sort of "cascading changes" instead of shunting that problem onto the entire organization. But in exchange for this "agility" you've then got the problem that builds take multiple hours & the associated tools are often highly language-centric (Maven+Java, NPM+Node, Ruby+Rake, etc.). They don't typically make any reproducibility guarantees either.
Anyway, to make a short story really long: at the time FB, Google and Twitter were hitting these organizational scaling walls, making these decisions and building these tools internally, there really weren't any great tools out there for the monorepo use case. I think that's why all these tools have appeared as side-by-side alternatives rather than improvements on one another or to tools like Maven et al.
Whether or not consolidation is warranted, for the folks who have the problems that Buck/Bazel/Pants solve, it's likely to save 'em a hell of a lot of time, effort and money IMO. It's a good thing that they have been published, even if the value's maybe not immediately obvious.
This. Also, I think that the build system itself is just the tip of the iceberg. At least in Google's case it has recently been very nicely documented [1] that Blaze is "just" one piece of how Google keeps velocity high.
I'm veering a bit towards the opposite: Some of the Facebook tools look quite nice, but there's a certain amount of taint that goes with their corporate heritage, so if there's a good alternative...
When I started the project, Buck had one specific goal: to make Android builds faster (https://youtu.be/CdNw6mRpsDI). At the time, the recommended way of building Android (from Google) was to use Ant. So when someone points to Buck as an example of "creating competing tools more often than persisting with and improving existing ones," I'd like to point out that you can't fix Ant if these are your issues with Ant:
* It is unsound.
* Because it is unsound, it is irreparably slow.
* It uses XML as a build language.
Yes, in July 2012, there were a number of build systems on the market (though Bazel was not one of them, but Pants was), and none of them focused on building Android. And even if they did, few (if any) software companies were building an Android app as large as Facebook's, so it was unlikely that anyone else was going to design for our scale.
It also wasn't just about build times, but about how I wanted to see us organize code in our repository. At the time, there was a flat list of folders in the Android repo, each called lib-something. This drives me insane because you inevitably end up with two (or more!) people creating com.facebook.common.StringUtils, each in their own lib-something. (It's also annoying to `ls` this "lib-" directory over time.)
In contrast, Buck/Bazel encourage the use of a unified tree, but still encourage fine-grained modularization (which is key as your build graph gets very large). This has been shown to scale to extremely large monorepos at both Facebook and Google.
Finally, by having total control of the build system, we were able to build in all sorts of cool tricks to build Android very fast, both in the large and in the small: https://youtu.be/Y9MfGS3qfoM. I don't think there is any other build system we could have decided to work with at the time to achieve these gains.
Buck has since evolved to build everything else at Facebook. This is not because the Buck team set out to conquer the world, but because people internally wanted the benefits of Buck for their builds. Building an alternative toolchain to xcodebuild was a mammoth effort (and one for which I take no credit). Having one build language for a heterogeneous collection of programming languages in a monorepo is no small feat.
Finally, to the people who believe "The big guys want developers locked into their ecosystems," I have news for you: the Buck team is not offended if you use Bazel, Gradle, Make, or anything else. Buck is open source because we wanted to share it with the community, not dominate it. Like many of you, we are excited to show our work and learn from others.
I think when you say "make Android builds faster", you mean "make Android application builds faster" -- as opposed to making Android operating system builds faster. Those are two very different things, and for the uninitiated the casual use of language here is confusing. The Android operating system has never been built with ant, but historically was built with make until around Android N when that team started migrating to ninja-based builds instead.
I got confused by the exact same language 5 years ago when one of the Apache Groovy project managers (before it joined the ASF) started repeatedly saying "Google have now chosen Groovy and Gradle for building Android." I didn't know if they meant building Android at Google, or as the default build system shipping with their (then) new Android Studio tool.
The only thing that I'm aware of these days that doesn't work on Windows is C++ code (but it works if you are building for Android on Windows). It's even covered in the getting started guide: https://buckbuild.com/setup/getting_started.html
Wow, so many build systems got mentioned here -- when do people have the time to check them all out? I stick with GNU make because that's the evil I know, not because it's the best tool imaginable...
Not everyone here is an expert in a particular system yet. If a build system runs 20% faster than what you're currently using, and you plan on using it for a good few years, the overall time saved is not insubstantial.
Surprised no one else has mentioned nix, because this seems very much inspired by it. As someone who uses nix extensively this is interesting but doesn't seem as powerful or general.
I would recommend gn and ninja -- it's the Chromium build system. gn generates the ninja build files in less than a minute, and ninja does a good job with incremental builds. It's also been around for a few years, so it's proven.
Buck has a daemon that does it automatically. Often when you go to build it takes 0 seconds because the daemon has already done it before you get to it:
Tools like Make are also useful for simple data analysis workflows, and I'm curious to hear any thoughts from any Buck users as to whether it would be useful in those cases too.
What about it doesn't support Windows? Not disagreeing since I haven't used it much and I'm on Linux, but they have "Quick Start" instructions for Windows.
They made Buck for themselves, and their infrastructure runs on *nix. Google's internal build tool (before Bazel) was the same.
Chromium's GN build tool, on the other hand, was made for a product targeted to run everywhere, which is why its Windows support is good.
So much complexity to put some pictures on a screen and have people click 'like'.
It's dizzying.
I wonder if build-system complexity is an artifact of reducible complexity in other areas.
Perhaps the next time we develop a language, it should come with its own build system that requires no configuration, or only minimal configuration -- to the point where we don't need to think much beyond the obvious.
I think a few people had that same thought, and then invented Golang. Love it or hate it, building a typical go app, or even a suite of apps, is pretty darn simple.
If you can get by using only go for everything, you'll have a great time. But complexity starts to become unavoidable when requirements move beyond single-language ecosystems. At some point simplicity can end up costing more.
I agree with the comments in this thread, and I add that Facebook knows fanboys are stupid and there are a lot of them, so they try to take advantage of it.
> Buck: A high-performance build tool
And in the title you read "a fast build tool" -- just like yarn, which as soon as it was released was supposedly the fastest one.
F: let's use yarn/buck
G: why?
F: cause it is faster
G: Did you already try it? Did you measure or benchmark it?
F: No, but Facebook claims it is fast, come on!
By the way, sometimes yarn does not work and you need to add a file to manage it. Furthermore facebook is using the npm registry, do they pay for it or support it?
Other than that, thanks to Facebook to bring awesome tools to the public, like React.
For my needs yarn was a drop-in replacement for npm, and it is faster -- 5s vs 17s for a fresh install -- but critically it's reproducible: I get the exact same output in node_modules every time I run it, and that alone was worth the switch.
As for using the npm registry (by default), so what? Why would the npm folks care? MS uses it as well with vscode and its automatic resolution.
Agreed that yarn is faster and reproducibility is critical, but in case you aren't aware, you can have reproducibility with npm too by using the "npm shrinkwrap" command.
Updating a single package version in a yarn.lock file is much easier than updating a single package version in an npm shrinkwrap file, in my experience. With yarn it's just a single command. With npm shrinkwrap, you have to install everything from the current snapshot, then install the package you want to update, then run npm prune, then regenerate the shrinkwrap file, then look through hundreds of lines of mostly irrelevant diff to make sure it did what you wanted.
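Concretely, the two workflows look something like this (package name and version are illustrative):

```sh
# yarn: one command updates the package and yarn.lock together.
yarn upgrade left-pad@1.2.0

# npm shrinkwrap: the multi-step dance described above.
npm install                       # restore the current snapshot
npm install left-pad@1.2.0 --save
npm prune                         # drop now-unused transitive deps
npm shrinkwrap                    # regenerate npm-shrinkwrap.json
# ...then review the (large) npm-shrinkwrap.json diff
```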
[0] https://bost.ocks.org/mike/make/