Hacker News | WorldMaker's comments

Personally, I'm still using JSX/TSX to template my Web Components. (I'm not using React, I'm using the non-Virtual DOM approach with Butterfloat [1], but there are other small JSX template language options like Preact and snabbdom out there as well and also other non-Virtual DOM approaches.)

I like the type safety of TSX as well as the syntax highlighting. (As may be obvious with Butterfloat, I also prefer the power of RxJS over signals, but that's a longer conversation.)

[1] https://worldmaker.net/butterfloat/


I'm certainly feeling like Shadow DOM is the new iframe and mostly useful for ad networks and "embeds" which are not things I'm generally building with Web Components. It's interesting how many developers seem enamored with Shadow DOM, but for me the sweet spot is keeping all of a Web Component in the "Light" DOM, let CSS do its cascading job, and let my Web Component adapt to the page hosting it rather than be a lonely island of its own style.

Accordions: just use `<details name="accordion-name">` (the shared `name` makes the group mutually exclusive, so opening one closes the others) and style it however you like. No need for JS or Web Components anymore for an accordion.

Combo Boxes and Date Pickers: CSS Form Control Styling Level 1 [1] will be a massive game changer. `appearance: base` will make it easier to style every part of a browser's form input with just CSS as they start with fewer opinions on how it should be styled (less trying to be platform-specific, more web platform generic) and have more CSS selectors for their component parts. Yet they will still have all the accessibility of native form controls. Really hoping that draft moves forward this year.

[1] https://www.w3.org/TR/css-forms-1/


I didn’t know about the `name` attribute on `<details>`, thanks for pointing that out!

Stylable form controls are definitely a step in the right direction. It really should not be taking this long though. In the meantime, developers have been building broken, half-assed, inaccessible inputs just to satisfy aesthetic requirements.


I use this PowerShell variant:

    function Remove-GitBranches {
        git branch --merged | Out-GridView -Title "Branches to Remove?" -OutputMode Multiple | % { git branch -d $_.Trim() }
    }
`Out-GridView` gives you a quick popup dialog with all the branch names that supports easy multi-select. That way you get a quick preview of what you are cleaning up and can skip work in progress branch names that you haven't committed anything to yet.
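For anyone not on PowerShell, the same idea can be sketched in plain shell. This demo runs in a throwaway repo; in real use you'd swap the non-interactive grep/while for an interactive picker like `fzf --multi` (assumed available) to get the Out-GridView-style multi-select:

```shell
#!/bin/sh
# Demo in a throwaway repo: create a merged branch, then clean up.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m base
git checkout -q -b done-feature
git commit -q --allow-empty -m feature
git checkout -q main
git merge -q --no-ff -m "merge done-feature" done-feature

# Delete every fully merged branch except the current one.
git branch --merged | grep -v '^\*' | while read -r b; do
  git branch -d "$b"
done
git branch --list done-feature   # prints nothing: branch is gone
```

The `grep -v '^\*'` skips the current branch, and `git branch -d` (lowercase) still refuses to delete anything unmerged, so this keeps the same safety as the PowerShell version.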

This is my PowerShell variant for squash merge repos:

    function Rename-GitBranches {
        git branch --list "my-branch-prefix/*" | Out-GridView -Title "Branches to Zoo?" -OutputMode Multiple | % { git branch -m $_.Trim() "zoo/$($_.Trim())" }
    }
`Out-GridView` gives a very simple dialog box to (multi) select branch names I want to mark finished.

I'm a branch hoarder in a squash merge repo and just prepend a `zoo/` prefix. `zoo/` generally sorts to the bottom of branch lists and I can collapse it as a folder in many UIs. I have found this useful in several ways:

1) It makes `git rebase --interactive` much easier when working with stacked branches, by taking advantage of `--update-refs`. Merge commits do that work for you by finding the common base/ancestor; with squash merging you have to remember which commits have already been merged and drop them from your branch. With `--update-refs`, if I see the rebase trying to update a `zoo/` branch, I know I can drop/delete every commit up to that update-ref line and also delete the update-ref line itself.

2) I sometimes do want to find code in intermediate commits that never made it into the squashed version. Maybe I tried an experiment in a commit in a branch, then deleted that experiment in switching directions in a later commit. Squashing removes all evidence of that deleted experiment, but I can still find it if I remember the `zoo/` branch name.

All this extra work for things that merge commits give you for free just makes me dislike squash-merge repos more.


The syntax of a language is the poetry form, it defines things like meter, scansion, rhyming scheme. Of course people are going to have strong aesthetic opinions on it, just as there are centuries of arguments in poetry over what form is best. You can make great programs in any language, just like you make beautiful poetry in almost every form. (Leaving an almost there for people that dislike Limericks, I suppose.) Language choice is one of the (sometimes too few) creative choices we can make in any project.

> Another option is to do something like automatic semicolon insertion (ASI) based on a set of rules. Unfortunately, a lot of people’s first experience with this kind of approach is JavaScript and its really poor implementation of it, which means people usually just write semicolons regardless to remove the possible mistakes.

Though the joke is that the biggest ASI-related mistakes in JavaScript aren't solved by adding more semicolons; it's the places where the language adds semicolons you didn't expect that trip you up the worst. The single biggest mistake is putting a newline after the `return` keyword and before the return value, which accidentally makes it a `return undefined` rather than returning the value.

In general JS is actually a lot closer to the Lua example than a lot of people want to believe. There's really only one ASI-related rule that needs to be remembered when dropping semicolons in JS (and it is a lot like that Lua rule of thumb), the Winky Frown rule: if a line starts with a frown it must wink. ;( ;[ ;`

(It has a silly name because it keeps it easy to remember.)
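Both pitfalls can be demonstrated directly; a minimal sketch, assuming `node` is on your PATH:

```shell
#!/bin/sh
# Demonstrates the return-newline trap and the Winky Frown rule.
out=$(node -e '
function broken() {
  return            // ASI inserts a semicolon right here...
    { answer: 42 }  // ...so this object literal is unreachable
}
function fixed() {
  return {          // opening brace on the return line: no semicolon inserted
    answer: 42
  }
}
console.log(broken())        // undefined
console.log(fixed().answer)  // 42

// Winky Frown: a line starting with ( [ or backtick needs a leading ;
const total = 1
;[2, 3].forEach(function (n) { console.log(total + n) })  // 3 then 4
')
echo "$out"
```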


ISBN reuse is quite a large problem for some especially small publishers. The intended use of an ISBN is as a product code for retail point-of-sale systems. If book prices were expected to be basically the same, reusing the same ISBN for an entire shelf of books was sometimes fine, if the retailers didn't mind the inventory management problem of knowing which specific book titles were left. For "pulp paperbacks" intended for a spindle at a grocery store, they probably didn't care to manage the inventory by exact title; they managed it by the spindle-full.

> if archive.org scans in a hardback with its ISBN, what do I use for the scanned pdf?

Archive.org would recommend using the OpenLibrary IDs instead of ISBNs. (OpenLibrary is an Archive.org project.)

> The number of oddball publishers and pamphlets and so forth that have never been cataloged anywhere is enormous.

I think it's more that the number of catalogs is too large. At least with LibraryThing it always seems like somebody has cataloged everything, but we have such a hodgepodge of ID systems and catalog numbers in part because so rarely have all the catalogs been connected or even tried to be connected. It's only a relatively recent library phenomenon that so many small library catalogs can talk to each other on the same protocol, much less coexist in the same broader search tool.

> Cataloging my own library, I've had to use a hodgepodge of unique ids. ASINs, ISBNs, Worldcat's OCLC numbers, Open Library's, and a few others besides.

In part because most of my personal catalog is in LibraryThing, I've been impressed with LibraryThing's Works ID as a generally trustworthy unique ID for a book. LibraryThing benefits from an interesting mix of volunteer and professional librarian work (especially the work of a lot of tiny and interesting niche libraries across the world) in deduping and merging editions together into the same Work ID. StoryGraph and OpenLibrary are also doing interesting things in this space, but LibraryThing has the momentum of time (it's as old as GoodReads and not an Amazon side project) and the benefit of extra (nerdy) labor.

I also like the LibraryThing IDs because they are generally short, opaque (which is a weird feature sometimes), and don't look anything like an ISBN because they aren't intended for that. StoryGraph's IDs are GUIDs, which I will forever find ugly in their usual dash-delimited hexadecimal rendering. Open Library's look like ISBNs for reasons that I don't understand, but I do appreciate that you can use the last letter of the ID to distinguish between an edition ID (ends in M, for reasons I don't know) and a work ID (ends in W), and the OL prefix does help them stand out next to other catalogs' IDs.

I built a voting website for my current favorite book club and I thought I could do everything with just the LibraryThing Works ID, but then I keep adding other IDs to the "database" (YAML frontmatter) as time goes on. LibraryThing doesn't have a Covers API because most of their edition covers come from Amazon and Amazon is restrictive on that.

If I add the OpenLibrary Edition ID, I can use the OpenLibrary Covers API, as Archive.org has very nice terms on that today. (Not the OpenLibrary Works ID, because covers are associated at the Edition level, which does make some sense, but the website UI shows a default cover from a random edition, so I'm not sure why the API couldn't return that cover from the Works ID. Still, it is nice to pick and choose Edition covers anyway, and I can't complain too much about having a working cover image API from someone.)

I started adding StoryGraph IDs because members of the club love StoryGraph right now, and also because while StoryGraph doesn't have an Official API yet (it is on the Roadmap), I discovered StoryGraph's CWs section was amenable to easy scraping. I figured since an API for it is on the Roadmap, a bit of light scraping (with attribution!) was fair. (My club wanted CW information to help decide on book voting. LibraryThing intentionally doesn't track CWs as too hot-button and subjective, but StoryGraph has a rather nice "voting" experience for CWs, and before I started to scrape StoryGraph's CWs we were already copying and pasting them by hand into the Markdown documents. The scraping provides better attribution and a unified display.)


Yes, which is why there are so many questions about whether we are solving the right problems with such tools.

Even in a monorepo you can tag releases independently in git. Git doesn't prescribe any particular version tag naming scheme, and it stores tags, like other refs, in a folder-like structure that many (but not all) UIs pay attention to. You can tag `project-a/v1.2.0` and `project-b/v1.2.0` as different commits at different points in the repo, with each project independently versioned.

It makes using `git describe` a little more complicated, but not much. You just need `--match "project-a/*"` or `--match "project-b/*"` (the option takes a glob) when you want `git describe` for a specific project.
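As a concrete sketch in a throwaway repo (project names made up):

```shell
#!/bin/sh
# Per-project release tags in one repo, with scoped git describe.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

git commit -q --allow-empty -m "work on project-a"
git tag -a project-a/v1.2.0 -m "project-a release"

git commit -q --allow-empty -m "work on project-b"
git tag -a project-b/v1.2.0 -m "project-b release"

# --match takes a glob, scoping describe to one project's tags:
desc_a=$(git describe --match "project-a/*")
desc_b=$(git describe --match "project-b/*")
echo "$desc_a"   # e.g. project-a/v1.2.0-1-g<hash>
echo "$desc_b"   # project-b/v1.2.0
```

Note that `git describe` only considers annotated tags by default, hence `tag -a`; add `--tags` if your release tags are lightweight.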


That's true, but git also doesn't have tags that apply to a subset of the repository tree. You can easily check out `project-b/v1.2.0` and build project-a from that tree. Of course, the answer to that is "don't do that", but you still have the weird situation that the source control implementation doesn't match the release workflow; your `git describe` example is but one of the issues you will face fighting the source control system -- the same applies to `git log` and `git diff`, which will also happily give you information from all other projects that you're not interested in.

For me, the scope of a tag should match the scope of the release. That means that a monorepo is only useful if the entire source tree is built and released at the same time. If you're using a monorepo but then do partial releases from a subtree, you're using the wrong solution: different repos with a common core dependency would better match that workflow. The common core can either be built separately and imported as a library, or imported as a git submodule. But that's still miles ahead of any solution that muddles the developers' daily git operations.


I understand the low level details of why tags don't work that way and why git leaves that "partial release" or "subtree release" as a higher level concept for whoever is making the tags in how they want to name them.

I know there are monorepo tools out there that automate things like partial releases, including building the git tag names and helping you get release trees, logs, and diffs when you need them.

I think a lot of monorepo work is using more domain specific release management tools on top of just git.

Also, yeah, my personal preference is to avoid monorepos, but I know a lot of teams like them, so I try my best to at least know the tools for getting what I can out of monorepos.


Do you have any examples of tooling like that, providing the monorepo tooling on top of git's porcelain, so to speak? I had assumed that most such tooling is bespoke, internal to each company. But if there's generic tooling out there, then I agree, it's useful to know about it.

That's absolutely an issue: a lot of it is bespoke and proprietary.

I found someone else's list of well known open source tools (in the middle of a big marketing page advertising monorepos as an ideal): https://monorepo.tools/#monorepo-tools

That list includes several I was aware of and several I'd not yet heard of. It's interesting how much cross-over there is between monorepo management tools and build tools. It's also interesting how many of the open source stacks are purely for, or at least heavily specialized for, TypeScript monorepos.

I don't have any recommendations on which tools work well, just vaguely trying to keep up on the big names in case I need to learn one for a job, or choose one to better organize an existing repo.

