The GitHub website experience is already messed up by forcing Copilot into everything. And then they ask for user feedback about new setting options for issues, but deny any request for a user-level default.
This surely isn't going in any good direction. What's next, ads in commits?
I wish they would share more about how it works. Maybe a research paper for once? We didn't even get a technical report.
My best guess: it's a video generation model like the ones we already have, but conditioned on inputs (movement direction, view angle). Perhaps the inputs aren't relative but absolute, and there is a bit of state simulation going on? [Although some demo videos show physics interactions like bumping against objects, so that might be unlikely; or maybe it's 2D and the up axis is generated??]
It's clearly trained on game-engine footage, as I can see screen-space reflection artefacts being learned. They also train on photoscans/splats... some non-realistic elements look significantly lower fidelity too.
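To make that guess concrete, here's a minimal sketch of the action-conditioned autoregressive loop I have in mind, with a trivial stub in place of the real network. Everything here (the `Action` fields, the `StubWorldModel` class, the `rollout` helper) is my own invention to illustrate the structure, not anything that has been published about the actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    # Hypothetical conditioning inputs; these could be relative deltas
    # or absolute state, which is exactly the open question above.
    move: tuple   # (dx, dy) movement direction
    view: float   # view angle in degrees

@dataclass
class StubWorldModel:
    """Stand-in for the generative model: next frame = f(history, action)."""
    history: list = field(default_factory=list)

    def step(self, action: Action) -> str:
        # A real model would sample a frame conditioned on past frames plus
        # the action; here we only record the conditioning to show the loop.
        frame = f"frame{len(self.history)}:move={action.move},view={action.view}"
        self.history.append(frame)
        return frame

def rollout(model: StubWorldModel, actions: list) -> list:
    # Autoregressive loop: each generated frame feeds back as context
    # (via model.history) for the next step.
    return [model.step(a) for a in actions]

frames = rollout(StubWorldModel(), [Action((1, 0), 90.0), Action((0, 1), 45.0)])
print(frames)
```

The point of the sketch is just that the only per-step inputs are the action and the model's own previous outputs, which would explain both the "made up when first looked at" geometry and the repeating-pattern failure modes in the demos.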
some inconsistencies I have noticed in the demo videos:
- wingsuit disocclusions are lower fidelity (maybe initialized from a high-resolution image?)
- the garden demo has different "geometry" for each variation; look at the 2nd hose, which only exists in one version (new "geometry" is made up when first looked at, not beforehand).
- the school demo has half a car outside the window? And a suspiciously repeating pattern (infinite loop patterns are common in transformer models that lack parameters, so they can scale this even more! It might also be greedy sampling for stability.)
- the museum scene has an odd reflection in the amethyst box: the rear mammoth has no reflection on the rightmost side of the box before it's shown through the box, and the tusk reflection just pops in. This isn't a Fresnel effect.
I feel that after the 2017 transformer paper, and its impact on the current state of AI and on Google's stock, Google is much more hesitant to publish and keeps things under its wing for now. Sadly so.
I have been using wgpu for my main projects for nearly two years now. Let's hope this rollout means more maintainers, so the issues I opened 18 months ago bug more people and eventually get resolved. I've never touched Rust myself, but maybe I'll find the motivation and time to do it myself.
As I also depend on the wgpu-native bindings, updates are slow to reach me. We only got to v25 last week, and v26 dropped a couple of days before that.
I have watched recordings of your recent presentation and decided to finally give it a try last week. My goal is to create some interactive network visualizations - letting you click/box-select nodes and edges to highlight subgraphs, which sounds possible with the callbacks and selectors.
Haven't had the time to get very far yet, but I'll gladly contribute an example once I figure something out. Some of the ideas I want to eventually get to: render shadertoys (interactively?) into an fpl subplot (haven't looked at the code at all, but it might be doable), eventually run those interactively in the browser, and do the network layout on the GPU with compute shaders (out of scope for fpl).
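For the subgraph-highlight idea, the selection logic itself is library-agnostic, so here's a rough plain-Python sketch of what I mean. The fpl-side selector callback would just feed the selected rectangle into something like this; the graph data, `box_select`, and `induced_edges` are all made-up names for illustration, not fastplotlib API.

```python
# Toy graph: 2D node positions (as a layout would produce) and an edge list.
positions = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (5.0, 5.0)}
edges = [("a", "b"), ("b", "c")]

def box_select(pos, xmin, ymin, xmax, ymax):
    """Nodes whose positions fall inside the selection rectangle."""
    return {n for n, (x, y) in pos.items()
            if xmin <= x <= xmax and ymin <= y <= ymax}

def induced_edges(edges, nodes):
    """Edges with both endpoints selected -- the subgraph to highlight."""
    return [(u, v) for u, v in edges if u in nodes and v in nodes]

selected = box_select(positions, -1, -1, 2, 2)  # catches a and b, not c
print(sorted(selected), induced_edges(edges, selected))
```

Highlighting would then just mean restyling those nodes/edges in the plot; moving this filtering to the GPU is exactly the compute-shader part that's out of scope for fpl.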
Hi! I've seen some of your work on wgpu-py! Definitely let us know if you need help or have ideas. If you're on the main branch, we recently merged a PR that allows events to be bidirectional.