A similar approach was taken by Pixar when making the beach environment for the Piper short that was previewed with Finding Dory. It was absolutely mind-blowing.
I was on the RenderMan team at the time and remember thinking it really neat that our system could stand up to that.
I remember finding it mind blowing when I learned that in Brave, the artists weren't just using a texture/displacement mapped surface for the clothing and armor. They were using tori primitives for the chain mail, and curve primitives for the clothing. (I.e., the clothing was actually woven out of curve primitives for the threads.)
> I was on the RenderMan team at the time and remember thinking it really neat that our system could stand up to that.
> I remember finding it mind blowing when I learned that in Brave, the artists weren't just using a texture/displacement mapped surface for the clothing and armor. They were using tori primitives for the chain mail, and curve primitives for the clothing. (I.e., the clothing was actually woven out of curve primitives for the threads.)
> They were using tori primitives for the chain mail, and curve primitives for the clothing. (I.e., the clothing was actually woven out of curve primitives for the threads.)
That sounds mind-blowing. Is this documented anywhere?
I got your autograph in my notebook at EGSR (I think?) 2019, still a little sad I didn't have my PBRT full of autographs at that particular dinner! Next time ;)
Would you mind elaborating on this a bit more, describing the differences and how tools like Zuul introduce friction in the right places so that pipelines end up operating smoothly?
I know my phrasing may come off wrong; I apologize for that. But I'm asking genuinely: I've only ever seen Zuul in the wild in the Red Hat and OpenStack ecosystems.
Right, so Zuul is properly interesting if you're dealing with multi-repo setups and want to test changes across them before they merge; that's the key bit that something like GitLab CI doesn't really do.
The main thing with Zuul is speculative execution. Say you've got a queue of patches waiting to merge across different repos. Zuul will optimistically test each patch as if all the patches ahead of it in the queue have already merged.
So if you've got patches A, B, and C queued up, Zuul tests:
* A on its own
* B with A already applied
* C with both A and B applied
If something fails, Zuul rewinds and retests without the failing patch. This means you're not waiting for A to fully merge before you can even start testing B - massive time saver when you've got lots of changes flowing through.
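If it helps to picture the rewind behaviour, here's a toy Python sketch of the idea - purely illustrative, nothing like Zuul's actual implementation, and run_tests just stands in for whatever jobs the gate would run:

```python
# Toy model of a speculative gate queue (illustration only, not Zuul's code).
# Each change is tested as if every change still ahead of it in the queue has
# already merged; when one fails, it's ejected and the rest are retested.

def run_tests(changes):
    """Stand-in for real CI: pretend to build and test the combined changes."""
    print("testing " + " + ".join(c["id"] for c in changes))
    return all(not c.get("broken") for c in changes)

def speculative_gate(queue):
    merged = []
    pending = list(queue)
    while pending:
        failed_index = None
        for i in range(len(pending)):
            # Test change i on top of everything ahead of it in the queue.
            if not run_tests(merged + pending[: i + 1]):
                failed_index = i
                break
        if failed_index is None:
            merged.extend(pending)   # whole queue is good, everything merges
            break
        print("ejecting " + pending[failed_index]["id"])
        del pending[failed_index]    # rewind: drop the failure, retest the rest
    return merged

if __name__ == "__main__":
    queue = [{"id": "A"}, {"id": "B", "broken": True}, {"id": "C"}]
    print("merged:", [c["id"] for c in speculative_gate(queue)])
```

Running that tests A, then A+B (which fails), ejects B, and retests A+C - the rewind-and-retest behaviour described above.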
With GitLab CI, you're basically testing each MR in isolation against the current state of the target branch. If you've got interdependent changes across repos, you end up with this annoying pattern:
* Merge change in repo A
* Wait for it to land
* Now test change in repo B that depends on it
* Merge that
* Now test change in repo C...
It's serial and slow, and you find out about problems late. If change C reveals an issue with change A, you've already merged A ages ago.
Zuul also has this concept of cross-repo dependencies built in. You can explicitly say "this patch in repo A depends on that patch in repo B" and Zuul will test them together. GitLab CI can sort of hack this together with trigger pipelines and artifacts, but it's not the same thing... you're still not getting that speculative testing across the dependency tree.
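For what it's worth, that dependency is usually declared as a Depends-On: footer in the commit message (or PR description) pointing at the other change. Here's a rough Python sketch of how a gate could walk those footers to find the set of changes to test together - toy parsing only, Zuul resolves this through the code-review system itself, and the URLs below are made up:

```python
import re

# Toy sketch: collect "Depends-On:" footers from commit messages and walk them
# to find every unmerged change that has to be tested together. fetch_message
# is a stand-in for asking the review system for a change's commit message.

DEPENDS_ON = re.compile(r"^Depends-On:\s*(\S+)\s*$", re.MULTILINE | re.IGNORECASE)

def dependency_closure(change_url, fetch_message, seen=None):
    seen = set() if seen is None else seen
    if change_url in seen:
        return seen
    seen.add(change_url)
    for dep in DEPENDS_ON.findall(fetch_message(change_url)):
        dependency_closure(dep, fetch_message, seen)
    return seen

if __name__ == "__main__":
    messages = {
        "https://review.example.org/repo-a/+/101":
            "Add new API\n\nDepends-On: https://review.example.org/repo-b/+/202\n",
        "https://review.example.org/repo-b/+/202":
            "Add library support\n",
    }
    print(dependency_closure("https://review.example.org/repo-a/+/101",
                             lambda url: messages.get(url, "")))
```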
The trade-off is that Zuul is significantly more complex to set up and run. It's designed for the OpenStack-style workflow where you've got dozens of repos and hundreds of patches in flight. For a single repo or even a handful of loosely-coupled repos, GitLab CI (and its ilk) is probably fine and much simpler. But once you hit that multi-repo, high-velocity scenario, Zuul starts to make proper sense. Yet nobody's using it except hardcore foundational infrastructure providers.
> Right, so Zuul is properly interesting if you're dealing with multi-repo setups and want to test changes across them before they merge; that's the key bit that something like GitLab CI doesn't really do.
I'm not sure about that. Even when we ignore plain old commits pushed by pipeline jobs, GitLab does support multi-project pipelines.
GitLab's multi-project pipelines trigger downstream jobs, but you're still testing each MR against the current merged state of dependencies.
Zuul's whole thing is testing unmerged changes together.
You've got MR A in repo 1, MR B in repo 2 that needs A, and MR C in repo 3 that needs B... all unmerged. Zuul lets you declare these dependencies and tests A+B+C as a unit before anything merges. Plus it speculatively applies queued changes so you're not serialising the whole lot.
GitLab has the mechanism to connect repos, but not the workflow for testing a DAG of unmerged interdependent changes. You'd need to manually coordinate checking out specific MR branches together, which is exactly the faff Zuul sorts out.
Luckily, commonly used forges for collaboration have the ability to make tags immutable. Any repository where multiple people collaborate on a project should have that feature enabled by default. I'm still waiting for the day when tags are immutable by default, with no option exposed to change it.
I'm sure that would cause problems for some, but mutable labels already exist in Git: branches.
I don't find the idea of an immutable "descriptive" tag or branch to be that useful (I also don't find the differentiation of tags and branches to be useful either). I've seen plenty of repositories where tags end up being pretty ambiguous compared to each other, or where "release-20xx" does not actually point to the official 20xx release. Immutable references are more typically handled by builders and lockfiles, for which Git already has a superior immutable reference: the commit hash.
I 100% agree on the latter (the tag != release is more of a project management issue), and the same concept applies to containers and their digest hashes. The main issue at the end of the day is the human one: most people don't like looking at hashes, nor do they provide context of progression. I would say "give both" and make sure they match on the end user side of things, but tags are the most common way (open source) software releases are denoted.
The purpose of the forge is to be able to prevent this. Protected tags are usually a feature that lets you mark tags as untouchable, so removal would require a minimum trust level on the repository within the platform. Otherwise, attempts to push tag deletions or changes for tags matching the protected pattern would be rejected/ignored.
Of course, the repository owner has unlimited privilege here, hence the last part of my prior comment.
A tag is just a text file with a name and the sha of the tag object (with the commit and some metadata/signatures as its contents), last I checked. It's deliberately simple, and thus almost impossible to actually lock down in concrete terms.
Packed refs are a little more complicated but all of the storage formats in git are trivial to manually edit or write a tool to handle, in extremis.
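To see just how simple that storage is, here's a rough Python sketch that resolves a tag by reading the loose ref file or packed-refs directly - fine for poking around a plain on-disk repo, though real tooling should obviously go through git itself:

```python
from pathlib import Path

# Resolve a tag the way git stores it on disk: either a loose file under
# .git/refs/tags/<name> containing a sha, or a line in .git/packed-refs.
# Illustrative only; assumes a normal non-bare repository layout.

def resolve_tag(repo, name):
    loose = Path(repo) / ".git" / "refs" / "tags" / name
    if loose.exists():
        return loose.read_text().strip()
    packed = Path(repo) / ".git" / "packed-refs"
    if packed.exists():
        for line in packed.read_text().splitlines():
            # Skip the header comment and "^" peeled-object lines.
            if line.startswith("#") or line.startswith("^"):
                continue
            sha, _, ref = line.partition(" ")
            if ref == f"refs/tags/{name}":
                return sha
    return None

if __name__ == "__main__":
    print(resolve_tag(".", "v1.0.0"))
```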
That's the purpose of the forge platform: to provide a way to prevent changes to these files from being accepted into the source repository. For example, a protected-tag rule rejects any push that deletes or rewrites a matching tag.
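As a rough illustration (Python, not any particular forge's implementation), the server-side check amounts to a pre-receive filter along these lines:

```python
import fnmatch
import sys

# Sketch of a server-side pre-receive check that refuses to delete or rewrite
# tags matching a protected pattern. Forges implement this internally; the
# pattern "v*" here is just an example.

PROTECTED_PATTERNS = ["v*"]
ZERO_SHA = "0" * 40  # what git sends as the old/new sha for creations/deletions

def is_protected(refname):
    if not refname.startswith("refs/tags/"):
        return False
    tag = refname[len("refs/tags/"):]
    return any(fnmatch.fnmatch(tag, pattern) for pattern in PROTECTED_PATTERNS)

def main():
    # pre-receive hooks get one "<old-sha> <new-sha> <refname>" line per ref.
    for line in sys.stdin:
        old_sha, new_sha, refname = line.split()
        if not is_protected(refname):
            continue
        if new_sha == ZERO_SHA:
            sys.exit(f"rejected: deleting protected tag {refname}")
        if old_sha != ZERO_SHA:
            sys.exit(f"rejected: rewriting protected tag {refname}")

if __name__ == "__main__":
    main()
```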
This is obviously kicking the can down the road, but I "solve" this problem by storing passkeys in a third-party credential manager that supports them. That way I can use them on any device that I've installed the client app or browser extension on. I have this working on Fedora, macOS, Windows, and iOS.
I gave Notes/Plume a try a year or so ago; it was an interesting experience. I ended up falling back to Joplin, as I could use it on macOS, iOS, and Fedora with synchronization via Dropbox.
I've always been curious about productizing apps like these. From a financial/business perspective, have you found Daino worthwhile or enough of a success (by your standards) to continue developing it as a proprietary application?
Hi! That put a smile on my face (: I'm working now on a mobile version with real-time sync, so maybe give it another try when it comes out.
Not really, not yet. When my FOSS app was popular, I used to earn a livable amount of money from ads on the website, but after an SEO crash that all went down the drain. The money I'm getting now from subscriptions to Daino Notes is nice, but not livable. For the last year I've been working (at a really awesome place) doing React programming (my first salaried job, actually), and at nights and on weekends I work on Daino.
I actually got many requests to license Daino Notes' block editor, so I figured there's a business there. I'm working on something I'm calling Daino Qt, a collection of components to accelerate Qt app development (so I'm also its client). It will include my block editor and components for mobile - the current Qt components on mobile are extremely shitty - so I'm planning on changing that with things like a native-feeling swipeable stack view, native-feeling text editing, etc. And maybe a Qt C++ client SDK for InstantDB (and more stuff).
Hope I can sell this as well while also consuming these components for Daino Notes and other apps I will develop.
I found Gitea's interface to be so unusably bad that I switched to full-fat GitLab.
Was this Gitea pre-UI redesign or after? 1.23 introduced some major UI overhauls, with additional changes in the following releases. Forgejo currently represents the Gitea 1.22 UI, reminiscent of earlier GitHub design.
eBPF is restricted when booted in a Secure Boot environment, but it's not nonfunctional. The default config puts the kernel into the "integrity" mode of Kernel Lockdown, which reduces the scope of access and enforces read-only usage.
Whether or not the specific functions needed to replicate this tool are impacted is beyond my knowledge.
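If you want to check what a given box is actually in, the active lockdown mode is readable from securityfs - quick sketch, assuming the usual mount point:

```python
from pathlib import Path

# Read the active Kernel Lockdown mode. The file lists all modes with the
# active one in brackets, e.g. "none [integrity] confidentiality".
# Assumes securityfs is mounted at /sys/kernel/security.

def lockdown_mode(path="/sys/kernel/security/lockdown"):
    try:
        modes = Path(path).read_text().split()
    except FileNotFoundError:
        return None  # kernel built without lockdown support
    for mode in modes:
        if mode.startswith("[") and mode.endswith("]"):
            return mode.strip("[]")
    return None

if __name__ == "__main__":
    print(lockdown_mode())
```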
> on the most demanding real-world production workloads (think Pixar/Weta), which for now it hasn't been.
Super small nit (or info tidbit), but it doesn't take away from your overall message regarding production and scene scale.
Pixar does not, and has not, used Maya as the primary studio application; it's really only used for asset modeling and some minor shading tasks like UV generation and some Ptex painting. The actual studio app is Presto, an in-house tool Pixar has developed over the years since its earliest productions. All other DCCs are team/task specific.
DreamWorks is similar with their in-house tool, Premo, IIRC. Walt Disney Animation Studios (WDAS) does use Maya as the core app, last I saw, but I don't know if they've made any headway with evaluating Presto since 2019...
https://www.fxguide.com/fxfeatured/the-tech-of-pixar-part-1-...
Related but unlinked Part 2 on other aspects of Finding Dory:
https://www.fxguide.com/fxfeatured/the-tech-of-pixar-part-2-...