Hacker News | jayd16's comments

On the one hand, better GC is better, but on the other, it doesn't matter all that much.

You tend to want zero per-frame allocation as it is, and that would probably not change.

As long as your less frequent garbage doesn't overtake the incremental GC, that's not really an issue either. If it's working incrementally as intended, stutter shouldn't be an issue.

In a game there's no endless benefit from raw GC throughput like you might see on a server instance that could always push more requests per second.


The entire point of the incremental GC is to preserve frame latency budget at the expense of raw throughput. If you can guarantee <16ms frames, I'll work with whatever you can give me.

If your game is allocating so quickly that the incremental GC can't keep up, I would argue that solving this with a "faster" GC is just taking you further into hell.


> On the one hand, better GC is better but on the other, it doesn't matter all that much.

It shouldn't, but it does. Boehm is a conservative GC, so when it triggers it needs to scan far more memory for pointers than .NET's precise GC, because it has to assume anything in memory could be a pointer.


I think the major problem with Unity is they're just rudderless. They just continue to buy plugins and slap in random features, but it's really just in service of more stickers on the box and not a holistic plan.

They've tried and failed to make their own games and they just can't do it. That means they don't have the internal drive to push a new design forward. They don't know what it takes to make a game. They just listen to what people ask for in a vacuum and ship that.

A lot of talented people at Unity but I don't expect a big change any time soon.


I've seen it happen time and time again in similar companies, and it's a symptom of a problem at the upper levels, which means it won't change.

C-level goals are abstract and generic, or sometimes plain naive, and often come from equally generic requests from the board or VCs.

"Hire as many developers as you can, even if there's no work right now", a SoftBank request.

"Don't build, just acquire similar products", from a Brazilian capital management firm; that one ended up killing the company.

"Kill this team, their product doesn't sell. I don't care if all our other products depend on theirs", from Francisco Partners.

Employees who stay can't really rock the boat, so it self-selects for non-boat-rocking people. Rockstars who stay must adapt or suffer. Eventually you get so many bad people that you do layoffs.


The common thread in all of them is that the CEO listened to other people's advice instead of leading themselves. When a ship loses its captain…

That's a good point.

If the CEO is just a parrot repeating what the board says, you get a company full of parrots too. No pirate to guide the ship.


The best CEOs I’ve seen balance board requests with what they themselves want to do and where they see their market going. Standing on the shoreline when the armada of prospects comes sailing in for provisions.

When there’s a gold rush, sell pickaxes and shovels.


The talent left ship years ago. The core engine’s graphics team is all that’s really left.

They also hired Jim Whitehurst as CEO after the previous CEO crapped the bed. Then Jim left as he just didn’t understand the business (he’s probably the one responsible for the “just grab it from the store” attitude). Now they have this stinking pile of legacy they can’t get rid of.


Has the talent moved to anywhere in particular?

Nicholas Francis manages a fund for AgTech after a decade making games with Unity (the engine he made). He left in 2013 so I don't associate him with Unity today but it was his product.

In 2018 we got the new HDRP and Shader Graph.

In 2019 there were sexual harassment lawsuits.

The other co-founders left after they announced runtime fees in 2023 and the community fled.

In 2024 the URP team basically imploded, leaving everything basically flat.


It's not hard, but it's one of those tools where the user has to think about how the tool is implemented.

Even if the abstractions get leaky, people yearn for goal/workflow-oriented UX.


This doesn't even make sense even if you believe it. Why wouldn't both sides of any argument use "a superhuman persuasion engine"?

You're confusing the final stage of grief with actually liking it.

Even with git, it should be possible to grab the single file needed without the rest of the repo, but it's still trying to fit a square peg into a round hole.

Honestly I think the article is a bit ahistorical on this one. ‘go get’ pulls the source code into a local cache so it can build it, not just to fetch the go.mod file. If they were having slow CI builds because they didn’t or couldn’t maintain a filesystem cache, that’s annoying, but not really a fault in the design. Anyway, Go improved the design and added an easy way to do faster, local proxies. Not sure what the critique is here. The Go community hit a pain point and the Go team created an elegant solution for it.

I was thinking this too. I think it might be talking about operations like “go mod tidy” or update operations where it updates your go.mod/sum but doesn’t actually build the code. I would guess enterprise tools do a lot of checking whether there are updates without actually doing any building.

What about Radeon cards, or consoles?

Consoles and their install base set the target performance envelope. If your machine can't keep up with a 5 year old console then you should lower expectations.

And like, when have onboard GPUs ever been good? The fact that they're even feasible these days should be praised but you're imagining some past where devs left them behind.


It's just easier to write off the full value as e-waste than to try to turn a profit selling a meager amount of used hardware.

You don't "write off" the full value, you fiddle the amortization so they go to zero accounting value exactly when you want them to. They're playing this exact game with datacentre GPUs right now.

You can still have a tax implication when you sell the fully depreciated item but in theory it should only be a benefit unless your company has a 100% marginal tax rate somehow.

Of course, it can cost more to store the goods and administer the sale than you recoup. And the manufacturer may offer or even require a buyback to prevent the second-hand market undercutting their sales. Or you may be disinclined to provide cheap hardware to your competitors.


So they replaced a TCP connection with no congestion control with a synchronous poll of an endpoint, which is inherently congestion controlled.

I wonder if they just tried restarting the stream at a lower bitrate once it got too delayed.

The talk about how the images look more crisp at a lower FPS is just tuning that I guess they didn't bother with.

