
This is kind of a ridiculous take.

You can use wgpu or dawn in a native app and use native tools for GPU debugging if that's what you want

You can then take that same code and run it in the browser, and you can debug the browser with the same tools. Google it for instructions.

The positive things about WebGPU are that it's actually portable, unlike Vulkan, and that it's easy to use, unlike Vulkan.


WebGPU is JavaScript + WebGPU; it has nothing to do with the wgpu Rust library for native coding, other than sharing a similar API surface.

The very fact that you are suggesting this as a workaround proves how bad the tools are.


I have plenty of friends with young kids that are super social. They invite people over, take their kids to restaurants and invite people to come, go to picnics, festivals, and other things. We love their kids (or at least all the people that show up do).

You have the agency to make it happen.


You and your love for and acceptance of their kids and willingness to show up to family-friendly activities are very likely the key to their ability to do this.

Lack of sleep when they are young makes you into a zombie

My take is that modern culture just doesn't want kids. It doesn't matter how cheap you make having a family; for many, it's just not remotely the same culture as it was 50-70 years ago.

Then, for most, it was, at 20-ish, find a partner ASAP and have a family. That was "the culture".

Today it's "have a great career, travel, party, netflix, game, ... and maybe someday think about kids"

There are other stats too. In the USA in the 50s, being single was seen as just a transition until you met someone: 78% of adults were married, 22% single. Today, being single is way more common (> 50%), and while many of those might want a partner, tons don't see it as a priority.


Yes, some countries like India are still like that. Parents from the smaller villages literally pick a partner for their kids if they don't find one.

I think that's really unfair, people deserve to enjoy their lives. If they actually enjoy it that's fine but nobody should be pressured into having kids.


I agree with this take. A lot of old boomers tell me how there was a lot of pressure to get married and have kids because it was the thing to do. Nowadays, there is less pressure for people to do that.

It’s literally the only purpose of life to pass on our genetics to our offspring in a Darwinian sense.

Turns out that's mediated by the sexual impulse, and can be short-circuited via contraception.

It's not that easy to beat evolution; some will still have kids, while those who only care about the fun will die out.

No need to wait: they've already fried themselves out of the evolution game with STDs. Any child they have will likely be impaired or diseased in some way.

Don't forget to include alcohol as a drug - "fetal alcohol spectrum disorders", FASDs, are a real thing.


This is the third such comment I've seen in this thread. And... so what? What does the Darwinian purpose have to do with anything mentioned here?

Choosing to not have children appears to "swim against the current" of the dominant biological process/context by which one came to be and in which one exists.

Certainly not having children allows one more time to pursue other matters. Mankind in general might gain (or lose) from such behavior, depending on whether one is an Einstein or a Stalin, for example. Most anyone who participates in society has some set of interests; pursuit of those interests is nonetheless very real, and the results may dominate our perspective.

I see no clear way to judge whether a person contributes more through his/her work or through his/her children. Nor do I think "contributing" (whatever that means) is a known evaluation anyway. And what one man considers useful another might judge detrimental. All the more b/c history is "unfinished business". IMO in summary we simply cannot know.

Aside: there's a T-shirt that shows the sinking bow of a shipwreck through a telescope lens. It's labeled thusly: "MISTAKES - It could be that the purpose of your life is only to serve as a warning to others." Yet another viewpoint.


The single thing I find most addicting in my life for the last 5-6 years is HN. I feel like all the same criticisms can be applied here. HN chooses their algo, how quickly upvotes degrade, and how much they are worth for keeping something on the front page. It works to keep me checking multiple times a day. As an example, they could instead pick to only update the front page once every 24 hours, and my addiction would disappear because I'd know "no updates until tomorrow". As it is, I get that random-reward addiction of "maybe there's something interesting now". I guess HN is an evil company engineering addiction.

You can get an RSS feed here: https://hnrss.github.io/

Seems that might solve your problem

You can do a "best" feed of top submissions with a minimum number of upvotes, which ends up not being that many per day... and you still stay up to date.


Vulkan takes like 600+ lines to do what Metal does in 50.

I'm sure the comments will be all excuses and whys but they're all nonsense. It's just a poorly thought out API.


My understanding of API standards that need to be implemented by multiple vendors is that there's a tradeoff between having something that's easy for the programmer to use and something that's easy for vendors to implement.

A big complaint I hear about OpenGL is that it has inconsistent behavior across drivers, which you could argue is because of the amount of driver code that needs to be written to support its high-level nature. A lower-level API can require less driver code to implement, effectively moving all of that complexity into the open source libraries that eventually get written to wrap it. As a graphics programmer you can then just vendor one of those libraries and win better cross-platform support for free.

For example: I've never used Vulkan personally, but I still benefit from it in my OpenGL programs thanks to ANGLE.


Agreed. It has way too much completely unnecessary verbosity. Like, why the hell does it take 30 lines to allocate memory rather than a single malloc?

just use the vma library. the low level memory allocation interface is for those who care to have precise control over allocations. vma has shipped in production software and is a safe choice for those who want to "just allocate memory".

Nah, I know about VMA and it's a poor bandaid. I want a single-line malloc with zero care about usage flags and which only produces one single pointer value, because that's all that's needed in pretty much all of my use cases. VMA does not provide that.

And Vulkan's unnecessary complexity doesn't stop at that issue; there are plenty of follow-up issues that I also have no intention of dealing with. Instead, I'll just use Cuda, which doesn't bother me with useless complexity until I actually opt in to it when it's time to optimize. Cuda lets you easily get stuff done first, then check the more complex stuff to optimize, unlike Vulkan, which unloads the entire complexity on you right from the start, before you have any chance to figure out what to do.


> I want a single-line malloc with zero care about usage flags and which only produces one single pointer value

That's not realistic on non-UMA systems. I doubt you want to go over PCIe every time you sample a texture, so the allocator has to know what you're allocating memory _for_. Even with CUDA you have to do that.

And even with unified memory, only the implementation knows exactly how much space is needed for a texture with a given format and configuration (e.g. due to different alignment requirements and such). "just" malloc-ing gpu memory sounds nice and would be nice, but given many vendors and many devices the complexity becomes irreducible. If your only use case is compute on nvidia chips, you shouldn't be using vulkan in the first place.


> Even with CUDA you have to do that.

No you don't, cuMemAlloc(&ptr, size) will just give you device memory, and cuMemAllocHost will give you pinned host memory. The usage flags are entirely pointless. Why would UMA be necessary for this? There is a clear separation between device and host memory. And of course you'd use device memory for the texture data. Not sure why you're constructing a case where I'd fetch them from host over PCI, that's absurd.

> only the implementation knows exactly how much space is needed for a texture with a given format and configuration

OpenGL handles this trivially, and there is also no reason for a device malloc to not also work trivially with that. Let me create a texture handle, and give me a function that queries the size that I can feed to malloc. That's it. No heap types, no usage flags. You're making things more complicated than they need to be.


> No you don't, cuMemAlloc(&ptr, size) will just give you device memory, and cuMemAllocHost will give you pinned host memory.

That's exactly what I said. You have to explicitly allocate one or the other type of memory, i.e. you have to think about what you need this memory _for_. It's literally just usage flags with extra steps.

> Why would UMA be necessary for this?

UMA is necessary if you want to be able to "just allocate some memory without caring about usage flags". Which is something you're not doing with CUDA.

> OpenGL handles this trivially,

OpenGL also doesn't allow you to explicitly manage memory. But you were asking for an explicit malloc. So which one do you want, "just make me a texture" or "just give me a chunk of memory"?

> Let me create a texture handle, and give me a function that queries the size that I can feed to malloc. That's it. No heap types, no usage flags.

Sure, that's what VMA gives you (modulo usage flags, which as we had established you can't get rid of). Excerpt from some code:

```
VmaAllocationCreateInfo vma_alloc_info = {
    .usage = VMA_MEMORY_USAGE_GPU_ONLY,
    .requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT};

VkImage img;
VmaAllocation allocn;
const VkResult create_alloc_vkerr = vmaCreateImage(
    vma_allocator,
    &vk_image_info, // <-- populated earlier with format, dimensions, etc.
    &vma_alloc_info,
    &img,
    &allocn,
    NULL);
```

Since I don't care about resource aliasing, that's the extent of "memory management" that I do in my RHI. The last time I had to think about different heap types or how to bind memory was approximately never.


No, it's not usage flags with extra steps, it's fewer steps. It's explicitly saying you want device memory, without any kind of magical guesswork over what your numerous potential combinations of usage flags may end up giving you. Just one simple device malloc.

Likewise, your claim about UMA makes zero sense. Device malloc gets you a pointer or handle to device memory, UMA has zero relation to that. The result can be unified, but there is no need for it to be.

Yeah, OpenGL does not do malloc. I'm flexible, I don't necessarily need malloc. What I want is a trivial way to allocate device memory, and Vulkan and VMA don't do that. OpenGL is also not the best example since it also uses usage flags in some cases, it's just a little less terrible than Vulkan when it comes to texture memory.

I find it fascinating how you're giving a bad VMA example and passing it off as exemplary. Like, why is there both gpu-only and device-local? That vma alloc info as a whole is completely pointless, because a theoretical vkMalloc should always give me device memory. I'm not going to allocate host memory for my 3d models.


> It's explicitly saying you want device memory

You are also explicitly saying that you want device memory by specifying DEVICE_LOCAL_BIT. There's no difference.

> Likewise, your claim about UMA makes zero sense. Device malloc gets you a pointer or handle to device memory,

It makes zero sense to you because we're talking past each other. I am saying that on systems without UMA you _have_ to care where your resources live. You _have_ to be able to allocate both on host and device.

> Like, why is there gpu-only and device-local.

Because there's such a thing as accessing GPU memory from the host. Hence, you _have_ to specify explicitly that no, only the GPU will try to access this GPU-local memory. And if you request host-visible GPU-local memory, you might not get more than around 256 megs unless your target system has ReBAR.

> a theoretical vkMalloc should always give me device memory.

No, because if that's the only way to allocate memory, how are you going to allocate staging buffers for the CPU to write to? In general, you can't give the copy engine a random host pointer and have it go to town. So, okay now we're back to vkDeviceMalloc and vkHostMalloc. But wait, there's this whole thing about device-local and host visible, so should we add another function? What about write-combined memory? Cache coherency? This is how you end up with a zillion flags.

This is the reason I keep bringing UMA up but you keep brushing it off.


> You are also explicitly saying that you want device memory by specifying DEVICE_LOCAL_BIT. There's no difference.

There is. One is a simple malloc call; the other uses arguments with numerous combinations of usage flags which all end up doing exactly the same thing, so why do they even exist?

> You _have_ to be able to allocate both on host and device.

cuMemAlloc and cuMemAllocHost, as mentioned before.

> Because there's such a thing as accessing GPU memory from the host

Never had the need for that, just cuMemcpyHtoD and DtoH the data. Of course host-mapped device memory can continue to exist as a separate, more cumbersome API. The 256MB limit is cute but apparently not relevant in Cuda, where I've been memcpying buffers gigabytes in size between host and device for years.

> No, because if that's the only way to allocate memory, how are you going to allocate staging buffers for the CPU to write to?

With the mallocHost counterpart.

cuMemAllocHost, so a theoretic vkMallocHost, gives you pinned host memory where you can prep data before sending it to device with cuMemcpyHtoD.

> This is how you end up with a zillion flags.

Apparently only if you insist on mapped/host-visible memory. This and usage flags never ever come up in Cuda where you just write to the host buffer and memcpy when done.

> This is the reason I keep bringing UMA up but you keep brushing it off.

Yes, I think I now get why you keep bringing up UMA - because you want to directly access buffers between host and device via pointers. That's great, but I don't have the need for that and I wouldn't trust the performance behaviour of that approach. I'll stick with memcpy, which is fast, simple, has fairly clear performance behaviours, and requires none of the nonsense you insist is necessary. But what I want isn't either this or that approach; I want the simple approach in addition to what exists now, so we can both have our cakes.


What exactly is the difference between these?

cuMemAlloc -> vmaAllocate + VMA_MEMORY_USAGE_GPU_ONLY

cuMemAllocHost -> vmaAllocate + VMA_MEMORY_USAGE_CPU_ONLY

It seems like the functionality is the same, just the memory usage is implicit in cuMemAlloc instead of being typed out? If it's that big of a deal write a wrapper function and be done with it?

Usage flags never come up in CUDA because everything is just a bag-of-bytes buffer. Vulkan needs to deal with render targets and textures too which historically had to be placed in special memory regions, and are still accessed through big blocks of fixed function hardware that are very much still relevant. And each of the ~6 different GPU vendors across 10+ years of generational iterations does this all differently and has different memory architectures and performance cliffs.

It's cumbersome, but can also be wrapped (i.e. VMA). Who cares if the "easy mode" comes in vulkan.h or vma.h, someone's got to implement it anyway. At least if it's in vma.h I can fix issues, unlike if we trusted all the vendors to do it right (they wont).


> and are still accessed through big blocks of fixed function hardware that are very much still relevant

But is it relevant for malloc? Everything is put into the same physical device memory, so what difference would the usage flag make? Specialized texture fetching and caching hardware comes into play anyway once you start fetching texels via samplers.

> It seems like the functionality is the same, just the memory usage is implicit in cuMemAlloc instead of being typed out? If it's that big of a deal write a wrapper function and be done with it?

The main reason I did not even give VMA a chance is the github example that does in 7 lines what Cuda would do in 2. You now say it's not too bad, but that's not reflected in the very first VMA examples.


> I want the simple approach in addition what exists now, so we can both have our cakes.

The simple approach can be implemented on top of what Vulkan exposes currently.

In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!

But Vulkan the API can't afford to be "like CUDA" because Vulkan is not a compute API for Nvidia GPUs. It has to balance a lot of things, that's the main reason it's so un-ergonomic (that's not to say there were no bad decisions made. Renderpasses were always a bad idea.)


> In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!

If it were just this issue, perhaps. But there are so many more unnecessary issues that I have no desire to deal with, so I just started software-rasterizing everything in Cuda instead. Which is way easier because Cuda always provides the simple API and makes complexity opt-in.


But what if you want both on a shared memory system?

No problem: Then you provide an optional more complex API that gives you additional control. That's the beautiful thing about Cuda, it has an easy API for the common case that suffices 99% of the time, and additional APIs for the complex case if you really need that. Instead of making you go through the complex API all the time.

Same with DirectX, if only COM actually had better tooling, instead of pick your adventure C++ framework, or first class support for .NET.

DXGI+D3D11 via C is actually fine and is close to or even lower than Metal v1 when it comes to "lines of code needed to get a triangle on screen". D3D12 is more boilerplate-heavy, but still not as bad as Vulkan.

I guess at least that way it's easier to have bindings.

I like COM as an idea, but the tooling execution could be so much better.


> From a pure performance standard across economy and quality of life, its pretty clear that Democratic policies always end up as net positive,

All one has to do is point at San Francisco, as this is provably false. Dems have been in charge there for decades and it's arguably not working.


I can cherry-pick a worse Republican-run city easily, despite you picking pretty much the worst example of a Democratic-run city - one that, despite its problems, is also home to many tech companies with a strong economy.

I dunno why you guys even try to argue against Dems at this point tbh. Even if I am wrong on that point, there are a thousand others that demonstrably show that Republican policies and politicians, especially during this administration, are many times worse.


Which part is not working? Do you live here? I’ve been living in the Mission since 2023 and despite some problems, the city, overall, works… pretty well. Really.

“Super Bowl Visitors Find San Francisco Better Than Its Apocalyptic Image. Problems with homelessness and open-air drug use have been widely broadcast, but many visitors this week said they found the city surprisingly pleasant.”

https://www.nytimes.com/2026/02/06/us/san-francisco-super-bo...

Incidentally, reading some books on the history of SF illuminates that homelessness/poverty and drug use have plagued the city for almost a century, across all manner of governments. There is no easy solution here.


It’s "not working" so badly that people pay millions just to live there.

People do because of the economic activity, not the pristine management of the city.

I think I'm misunderstanding.

How can 1 adult + 3 children be $107.95 while 2 adults + 3 children is $63.97?

5 people should require more money than 4. You could say in the 2nd case it's $63.97x2, but that doesn't make sense either, because the table also has 1 adult + 0 children at $29.31 and 2 adults + 0 children at $41.81, and clearly they are not doubling a per-adult figure there, since 2x the $29.31 would be more than the $41.81.

Was this AI generated?


There are separate columns for 2 ADULTS (1 WORKING) and 2 ADULTS (BOTH WORKING). I think you are mixing up the two.

And the non-working adult is taking care of children, so reducing childcare expenses.


I am not mixing up the 2

First row, for https://livingwage.mit.edu/counties/06075

    | 1 adult                                        | 2 adults (1 working)                           |
    | 0 Children | 1 Child | 2 Children | 3 Children | 0 Children | 1 Child | 2 Children | 3 Children |
    | $29.31     | $61.37  | $83.72     | $107.95    | $41.83     | $50.47  | $54.77     | $63.97     |

    1 adult + 0 children  = $29.31
    2 adults + 0 children = $41.83
The only way these numbers make sense is if you assume one income. Then

    1 adult + 3 kids = $107.95
    2 adults + 3 kids = $63.97
Given the first example was one income, this 2nd one makes no sense. 5 people should cost more than 4. These numbers are wrong.

Look at the childcare number in the breakdown table. 1 adult and 3 children has an estimated $71k/year childcare cost, while 2 adults and 3 children (1 working) has a $0/year childcare cost. So some things go up (transportation, healthcare, food), but others go down. Childcare going down by $71k pretty much entirely accounts for the difference you're questioning (~$34/hour difference just on that entry).

Also, two adults (assuming married) will pay lower taxes than one adult for the same income. That's another ~30k difference per year in the breakdown table for the 3 children case. If your tax burden is lower, you can afford a lower wage while bringing in the same net.

EDIT: Tax rates in the US are roughly half when you're married versus single (except for high-income earners, way beyond what these living wage estimates relate to).

https://www.irs.gov/filing/federal-income-tax-rates-and-brac...

Check out the 22% bracket on that page: the range is doubled for married people filing jointly versus single. That's a huge savings each year. The tax savings of two married people with any number of kids is a major contributor to why the living wage drops when someone gets married versus staying single with the same number of kids.


1. This is not ai generated.

2. Did you look in the costs breakdown? You'll probs find your answers there.

3. I am guessing having a spare adult to take care of 3 children instead of paying for childcare is probably the difference.


Child care.

That's not it either.

See the first row in this table: https://livingwage.mit.edu/counties/06075

Compare 2 adults (1 working) 3 kids to 2 adults (both working) 3 kids

First off, you'd expect it to be

     1 adult = X
     2 adults = X + X(0.?) 
Where 0.? is something less than 1 because 2 adults need less than 2x the money

Similarly for kids

     1 kids = Y
     2 kids = Y + Y(0.?)
     3 kids = Y + Y(0.?) + Y(0.?)
You'd expect 2 kids to cost less than 2x 1 kid, and you'd expect the 3rd kid to add less than the 2nd did. Each additional kid is cheaper for various reasons like hand-me-downs, etc.

But instead, under 2 adults 1 working we see

     1 adult  = $29.31 (from one adult)
     2 adults = $41.83 (so X + X * 0.42)

     2 adults 1 kid  = 50.47
     2 adults 2 kids = 54.77 (so + $4.30)
     2 adults 3 kids = 63.97 (so + $9.20)
Why does the 3rd kid cost more than the 2nd?

Then you can also compare 1 adult 3 kids with 2 adults both working + 3 kids

     1 adult + 3 kids                 = $107.95
     2 adults (both working + 3 kids) = $55.67
Assuming that $55.67 is the wage for each, that means we're comparing

     1 adult + 3 kids                 = $107.95
     2 adults (both working + 3 kids) = $55.67x2 ($111.34)
We already established above that adding one adult is only $12.52 an hour, yet here, suddenly, that adult only costs $3.39 an hour.

Again, these are nonsense numbers.


I'm always surprised at the number of people that follow too closely.

This always stuck with me

https://www.youtube.com/watch?v=iJFOTSYJrtw&t=466s


Japanese films are notoriously bad. You found a few gems. They are rare. This topic comes up often in Japanese learning groups.

There's also just personal takes. I had to shut off Memories of Matsuko. Maybe the end saves it but it was way too over the top and not in a good way.

Some good older Japanese though

Kurosawa: Ikiru (1952)

Teshigahara: Woman in the Dunes (1964)

These are 2 movies you won't forget.

Conversely, even though I enjoyed Shoplifters I remember nothing about it except the guy celebrating he had sex and the girl burping. Similarly with After Life. I just watched it 2 months ago and had to go look it up to remember what it was about. It was interesting because of the premise but not because of the movie itself.


This type of thing happens quite often. I can only guess that part of the page is automated or farmed out, because I see it all the time: main actors not credited when the list is 4-6 people.
