I have plenty of friends with young kids who are super social. They invite people over, take their kids to restaurants and invite people to come along, go to picnics, festivals, and other things. We love their kids (or at least all the people who show up do).
Your love for and acceptance of their kids, and your willingness to show up to family-friendly activities, are very likely the key to their ability to do this.
My take is that modern culture just doesn't want kids. It doesn't matter how cheap you make having a family, for many it's just not remotely the same culture as it was 50-70 years ago.
Then, for most, it was, at 20-ish, find a partner ASAP and have a family. That was "the culture".
Today it's "have a great career, travel, party, netflix, game, ... and maybe someday think about kids"
There are other stats too: in the USA in the '50s, being single was seen as just a transition until you met someone. 78% of adults were married, 22% single. Today, being single is way more common, > 50%, and while many of those might want a partner, tons don't see it as a priority.
Yes some countries like India are still like that. Parents from the smaller villages literally pick a partner for their kids if they don't find one.
I think that's really unfair, people deserve to enjoy their lives. If they actually enjoy it that's fine but nobody should be pressured into having kids.
I agree with this take. A lot of old boomers tell me how there was a lot of pressure to get married and have kids because it was the thing to do. Nowadays, there is less pressure for people to do that.
No need to wait: they've already fried themselves out of the evolution game with STDs. Any child they have will likely be retarded or diseased in some way.
Don't forget to include alcohol as a drug - "fetal alcohol spectrum disorders", FASDs, are a real thing.
Choosing to not have children appears to "swim against the current" of the dominant biological process/context by which one came to be and in which one exists.
Certainly not having children allows one more time to pursue other matters. Mankind in general might gain (or lose) from such behavior, depending on whether one is an Einstein or a Stalin, for example. Most anyone who participates in society has some set of interests; the pursuit of those interests is nonetheless very real, and the results may dominate our perspective.
I see no clear way to judge whether a person contributes more through his/her work or through his/her children. Nor do I think "contributing" (whatever that means) is a known evaluation anyway. And what one man considers useful another might judge detrimental. All the more b/c history is "unfinished business". IMO in summary we simply cannot know.
Aside: there's a T-shirt that shows the sinking bow of a shipwreck through a telescope lens. It's labeled thusly:
"MISTAKES - It could be that the purpose of your life is only to serve as a warning to others." Yet another viewpoint.
The single thing I've found most addicting in my life for the last 5-6 years is HN. I feel like all the same criticisms can be applied here. HN chooses its algo, how quickly upvotes decay, and how much they're worth for keeping something on the front page. It works to keep me checking multiple times a day. As an example, they could instead choose to only update the front page once every 24 hours, and my addiction would disappear because I'd know "no updates until tomorrow". As it is, I get that random-reward addiction of "maybe there's something interesting now". I guess HN is an evil company engineering addiction.
My understanding of API standards that need to be implemented by multiple vendors is that there's a tradeoff between having something that's easy for the programmer to use and something that's easy for vendors to implement.
A big complaint I hear about OpenGL is that it has inconsistent behavior across drivers, which you could argue is because of the amount of driver code that needs to be written to support its high-level nature. A lower-level API can require less driver code to implement, effectively moving all of that complexity into the open source libraries that eventually get written to wrap it. As a graphics programmer you can then just vendor one of those libraries and win better cross-platform support for free.
For example: I've never used Vulkan personally, but I still benefit from it in my OpenGL programs thanks to ANGLE.
Agreed. It has way too much completely unnecessary verbosity. Like, why the hell does it take 30 lines to allocate memory rather than one single malloc.
Just use the VMA library. The low-level memory allocation interface is for those who care to have precise control over allocations. VMA has shipped in production software and is a safe choice for those who want to "just allocate memory".
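For reference, a minimal sketch of that "just allocate memory" path with VMA v3 (assumes a VmaAllocator named allocator was created earlier; error checks omitted):

```
/* Sketch: one buffer allocation with VMA picking the memory type.
 * Assumes `allocator` (VmaAllocator) already exists. */
VkBufferCreateInfo buf_info = {
    .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
    .size  = 64 * 1024,
    .usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT,
};
VmaAllocationCreateInfo alloc_info = {
    .usage = VMA_MEMORY_USAGE_AUTO,  /* let VMA choose the memory type */
};
VkBuffer buf;
VmaAllocation alloc;
vmaCreateBuffer(allocator, &buf_info, &alloc_info, &buf, &alloc, NULL);
```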
Nah, I know about VMA and it's a poor bandaid. I want a single-line malloc with zero care about usage flags and which only produces one single pointer value, because that's all that's needed in pretty much all of my use cases. VMA does not provide that.
And Vulkan's unnecessary complexity doesn't stop at that issue; there are plenty of follow-up issues that I also have no intention of dealing with. Instead, I'll just use Cuda, which doesn't bother me with useless complexity until I actually opt in to it when it's time to optimize. Cuda lets you easily get stuff done first and then look at the more complex stuff to optimize, unlike Vulkan, which unloads the entire complexity on you right from the start, before you have any chance to figure out what to do.
> I want a single-line malloc with zero care about usage flags and which only produces one single pointer value
That's not realistic on non-UMA systems. I doubt you want to go over PCIe every time you sample a texture, so the allocator has to know what you're allocating memory _for_. Even with CUDA you have to do that.
And even with unified memory, only the implementation knows exactly how much space is needed for a texture with a given format and configuration (e.g. due to different alignment requirements and such). "just" malloc-ing gpu memory sounds nice and would be nice, but given many vendors and many devices the complexity becomes irreducible. If your only use case is compute on nvidia chips, you shouldn't be using vulkan in the first place.
No you don't, cuMemAlloc(&ptr, size) will just give you device memory, and cuMemAllocHost will give you pinned host memory. The usage flags are entirely pointless. Why would UMA be necessary for this? There is a clear separation between device and host memory. And of course you'd use device memory for the texture data. Not sure why you're constructing a case where I'd fetch texels from host over PCIe; that's absurd.
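Concretely, a sketch of those two driver-API calls (assumes cuInit(0) and a current context already exist; error checks omitted):

```
CUdeviceptr d_buf;   /* device memory, addressed via a handle value */
void *h_buf;         /* pinned (page-locked) host memory            */
size_t size = 64 * 1024 * 1024;

cuMemAlloc(&d_buf, size);      /* device memory: no heaps, no usage flags */
cuMemAllocHost(&h_buf, size);  /* pinned host memory, also one call       */
```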
> only the implementation knows exactly how much space is needed for a texture with a given format and configuration
OpenGL handles this trivially, and there's no reason a device malloc couldn't work just as trivially with it. Let me create a texture handle, and give me a function that queries the size that I can feed to malloc. That's it. No heap types, no usage flags. You're making things more complicated than they need to be.
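To make that concrete, a sketch of the shape I'm asking for; vkCreateImage, vkGetImageMemoryRequirements, and vkBindImageMemory are real Vulkan, while vkDeviceMalloc is a hypothetical stand-in for the one-call allocator:

```
/* HYPOTHETICAL sketch. vkDeviceMalloc is NOT a real Vulkan entry point;
 * it stands in for the proposed one-call device allocator with no heap
 * types or usage flags. The other three calls are real. */
VkImage img;
vkCreateImage(device, &image_info, NULL, &img);

VkMemoryRequirements req;
vkGetImageMemoryRequirements(device, img, &req);  /* query the size */

VkDeviceMemory mem = vkDeviceMalloc(device, req.size);  /* hypothetical */
vkBindImageMemory(device, img, mem, 0);
```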
> No you don't, cuMemAlloc(&ptr, size) will just give you device memory, and cuMemAllocHost will give you pinned host memory.
That's exactly what I said. You have to explicitly allocate one or the other type of memory, i.e. you have to think about what you need this memory _for_. It's literally just usage flags with extra steps.
> Why would UMA be necessary for this?
UMA is necessary if you want to be able to "just allocate some memory without caring about usage flags". Which is something you're not doing with CUDA.
> OpenGL handles this trivially,
OpenGL also doesn't allow you to explicitly manage memory. But you were asking for an explicit malloc. So which one do you want, "just make me a texture" or "just give me a chunk of memory"?
> Let me create a texture handle, and give me a function that queries the size that I can feed to malloc. That's it. No heap types, no usage flags.
Sure, that's what VMA gives you (modulo usage flags, which, as we established, you can't get rid of). Excerpt from some code:
```
VkImage img;
VmaAllocation allocn;
const VkResult create_alloc_vkerr = vmaCreateImage(
    vma_allocator,
    &vk_image_info, // <-- populated earlier with format, dimensions, etc.
    &vma_alloc_info,
    &img,
    &allocn,
    NULL);
```
Since I don't care about resource aliasing, that's the extent of "memory management" that I do in my RHI. The last time I had to think about different heap types or how to bind memory was approximately never.
No, it's not usage flags with extra steps, it's fewer steps. It's explicitly saying you want device memory, without any kind of magical guesswork about what your numerous potential combinations of usage flags may end up giving you. Just one simple device malloc.
Likewise, your claim about UMA makes zero sense. Device malloc gets you a pointer or handle to device memory, UMA has zero relation to that. The result can be unified, but there is no need for it to be.
Yeah, OpenGL does not do malloc. I'm flexible; I don't necessarily need malloc. What I want is a trivial way to allocate device memory, and Vulkan and VMA don't offer that. OpenGL is also not the best example, since it too uses usage flags in some cases; it's just a little less terrible than Vulkan when it comes to texture memory.
I find it fascinating how you're giving a bad VMA example and passing it off as exemplary. Like, why are there both gpu-only and device-local? That vma alloc info as a whole is completely pointless, because a theoretical vkMalloc should always give me device memory. I'm not going to allocate host memory for my 3d models.
You are also explicitly saying that you want device memory by specifying DEVICE_LOCAL_BIT. There's no difference.
> Likewise, your claim about UMA makes zero sense. Device malloc gets you a pointer or handle to device memory,
It makes zero sense to you because we're talking past each other. I am saying that on systems without UMA you _have_ to care where your resources live. You _have_ to be able to allocate both on host and device.
> Like, why is there gpu-only and device-local.
Because there's such a thing as accessing GPU memory from the host. Hence, you _have_ to specify explicitly that no, only the GPU will try to access this GPU-local memory. And if you request host-visible GPU-local memory, you might not get more than around 256 megs unless your target system has ReBAR.
> a theoretical vkMalloc should always give me device memory.
No, because if that's the only way to allocate memory, how are you going to allocate staging buffers for the CPU to write to? In general, you can't give the copy engine a random host pointer and have it go to town. So, okay now we're back to vkDeviceMalloc and vkHostMalloc. But wait, there's this whole thing about device-local and host visible, so should we add another function? What about write-combined memory? Cache coherency? This is how you end up with a zillion flags.
This is the reason I keep bringing UMA up but you keep brushing it off.
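For a concrete sketch of where those flags surface in raw Vulkan, this is roughly the memory-type scan every allocator ends up doing (device-local plus host-visible as the example, a combination that may not exist at all without ReBAR; assumes a VkPhysicalDevice named physical_device):

```
VkPhysicalDeviceMemoryProperties props;
vkGetPhysicalDeviceMemoryProperties(physical_device, &props);

VkMemoryPropertyFlags wanted = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
                             | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
uint32_t type_index = UINT32_MAX;
for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
    /* pick the first advertised type carrying all the bits we need */
    if ((props.memoryTypes[i].propertyFlags & wanted) == wanted) {
        type_index = i;
        break;
    }
}
```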
> You are also explicitly saying that you want device memory by specifying DEVICE_LOCAL_BIT. There's no difference.
There is. One is a simple malloc call; the other uses arguments with numerous combinations of usage flags which all end up doing exactly the same thing, so why do they even exist?
> You _have_ to be able to allocate both on host and device.
cuMemAlloc and cuMemAllocHost, as mentioned before.
> Because there's such a thing as accessing GPU memory from the host
Never had the need for that, just cuMemcpyHtoD and DtoH the data. Of course host-mapped device memory can continue to exist as a separate, more cumbersome API. The 256MB limit is cute but apparently not relevant in Cuda, where I've been memcpying buffers gigabytes in size between host and device for years.
> No, because if that's the only way to allocate memory, how are you going to allocate staging buffers for the CPU to write to?
With the mallocHost counterpart.
cuMemAllocHost, so a theoretic vkMallocHost, gives you pinned host memory where you can prep data before sending it to device with cuMemcpyHtoD.
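A sketch of that staging pattern (fill_vertices is a made-up application function; context setup and error checks omitted):

```
const size_t n_verts = 10000;
size_t bytes = n_verts * 3 * sizeof(float);
float *staging;
CUdeviceptr d_verts;

cuMemAllocHost((void **)&staging, bytes);  /* pinned, CPU-writable        */
fill_vertices(staging, n_verts);           /* hypothetical: CPU preps data */
cuMemAlloc(&d_verts, bytes);
cuMemcpyHtoD(d_verts, staging, bytes);     /* explicit host->device copy  */
```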
> This is how you end up with a zillion flags.
Apparently only if you insist on mapped/host-visible memory. This and usage flags never ever come up in Cuda where you just write to the host buffer and memcpy when done.
> This is the reason I keep bringing UMA up but you keep brushing it off.
Yes, I think I now get why you keep bringing up UMA: because you want to directly access buffers from host or device via pointers. That's great, but I don't have the need for that and I wouldn't trust the performance behaviour of that approach. I'll stick with memcpy, which is fast, simple, has fairly clear performance behaviour, and requires none of the nonsense you insist is necessary. But what I want isn't either one approach or the other; I want the simple approach in addition to what exists now, so we can both have our cakes.
It seems like the functionality is the same, just the memory usage is implicit in cuMemAlloc instead of being typed out? If it's that big of a deal, write a wrapper function and be done with it?
Usage flags never come up in CUDA because everything is just a bag-of-bytes buffer. Vulkan needs to deal with render targets and textures too which historically had to be placed in special memory regions, and are still accessed through big blocks of fixed function hardware that are very much still relevant. And each of the ~6 different GPU vendors across 10+ years of generational iterations does this all differently and has different memory architectures and performance cliffs.
It's cumbersome, but can also be wrapped (i.e. VMA). Who cares if the "easy mode" comes in vulkan.h or vma.h, someone's got to implement it anyway. At least if it's in vma.h I can fix issues, unlike if we trusted all the vendors to do it right (they won't).
> and are still accessed through big blocks of fixed function hardware that are very much still relevant
But is it relevant for malloc? Everything is put into the same physical device memory, so what difference would the usage flag make? Specialized texture fetching and caching hardware would come into play anyway when you start fetching texels via samplers.
> It seems like the functionality is the same, just the memory usage is implicit in cuMemAlloc instead of being typed out? If it's that big of a deal write a wrapper function and be done with it?
The main reason I did not even give VMA a chance is the GitHub example that does in 7 lines what Cuda would do in 2. You now say it's not too bad, but that's not reflected in the very first VMA examples.
> I want the simple approach in addition what exists now, so we can both have our cakes.
The simple approach can be implemented on top of what Vulkan exposes currently.
In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!
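For example, a minimal sketch of such a wrapper (rhi_create_image is a made-up name; it hides the VMA structs behind a single call):

```
VkImage rhi_create_image(VmaAllocator allocator,
                         const VkImageCreateInfo *image_info,
                         VmaAllocation *out_alloc)
{
    /* let VMA pick the memory type; caller only supplies the image info */
    VmaAllocationCreateInfo alloc_info = { .usage = VMA_MEMORY_USAGE_AUTO };
    VkImage img = VK_NULL_HANDLE;
    if (vmaCreateImage(allocator, image_info, &alloc_info,
                       &img, out_alloc, NULL) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    return img;
}
```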
But Vulkan the API can't afford to be "like CUDA" because Vulkan is not a compute API for Nvidia GPUs. It has to balance a lot of things; that's the main reason it's so un-ergonomic (that's not to say no bad decisions were made; render passes were always a bad idea).
> In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!
If it were just this issue, perhaps. But there are so many more unnecessary issues that I have no desire to deal with, so I just started software-rasterizing everything in Cuda instead. Which is way easier because Cuda always provides the simple API and makes complexity opt-in.
No problem: Then you provide an optional more complex API that gives you additional control. That's the beautiful thing about Cuda, it has an easy API for the common case that suffices 99% of the time, and additional APIs for the complex case if you really need that. Instead of making you go through the complex API all the time.
DXGI+D3D11 via C is actually fine and is close to or even lower than Metal v1 when it comes to "lines of code needed to get a triangle on screen". D3D12 is more boilerplate-heavy, but still not as bad as Vulkan.
I can easily cherry-pick a worse Republican-run city, despite you picking pretty much the worst example of a Democratic-run city, one that despite its problems is also home to many tech companies with a strong economy.
I dunno why you guys even try to argue against Dems at this point tbh. Even if I am wrong on that point, there are a thousand others that demonstrably show that Republican policies and politicians, especially during this administration, are many times worse.
Which part is not working? Do you live here? I’ve been living in the Mission since 2023 and despite some problems, the city, overall, works… pretty well. Really.
“Super Bowl Visitors Find San Francisco Better Than Its Apocalyptic Image. Problems with homelessness and open-air drug use have been widely broadcast, but many visitors this week said they found the city surprisingly pleasant.”
Incidentally, reading some books on the history of SF illuminates that homelessness/poverty and drug use have plagued the city for almost a century, across all manner of governments. There is no easy solution here.
How is 1 adult + 3 children at $107.95 while 2 adults + 3 children is at $63.97?
5 people should require more money than 4. You could say in the 2nd case it's $63.97 x 2, but that doesn't make any sense either, because the table also has 1 adult + 0 children at $29.31 and 2 adults + 0 children at $41.81. Clearly they are not doubling that $41.81, since 2 x $41.81 = $83.62 would be way more than two singles at 2 x $29.31 = $58.62.
Look at the childcare number in the breakdown table. 1 adult and 3 children has an estimated $71k/year childcare cost, while 2 adults and 3 children (1 working) has a $0/year childcare cost. So some things go up (transportation, healthcare, food), but others go down. Childcare going down by $71k pretty much entirely accounts for the difference you're questioning (~$34/hour difference just on that entry).
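(That ~$34 is just the $71k spread over a standard 2,080-hour work year: 71,000 / 2,080 ≈ $34/hour.)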
Also, two adults (assuming married) will pay lower taxes than one adult for the same income. That's another ~30k difference per year in the breakdown table for the 3 children case. If your tax burden is lower, you can afford a lower wage while bringing in the same net.
EDIT: Tax brackets in the US are roughly twice as wide when you're married filing jointly versus single (except for high income earners, way beyond what these living wage estimates relate to).
Check out the 22% bracket on that page: the range is doubled for married people filing jointly versus single. That's a huge savings each year. The tax savings of two married people with any number of kids is a major contributor to why the living wage drops when someone gets married versus staying single with the same number of kids.
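For example, if I remember the 2023 numbers right, the 22% bracket started at $44,725 of taxable income for single filers but at $89,450 for married filing jointly: exactly double.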
1 adult = X
2 adults = X + X(0.?)
Where 0.? is something less than 1 because 2 adults need less than 2x the money
Similarly for kids
1 kid = Y
2 kids = Y + Y(0.?)
3 kids = Y + Y(0.?) + Y(0.?)
You'd expect 2 kids to cost less than 2x 1 kid, and you'd expect 3 kids to cost less than the 1st kid plus 2x the 2nd kid. Each additional kid is cheaper for various reasons, like hand-me-downs etc...
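To put rough numbers on it from the table above: the second adult adds $41.81 - $29.31 = $12.50, so that 0.? works out to about 12.50 / 29.31 ≈ 0.43.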
Japanese films are notoriously bad. You found a few gems. They are rare. This topic comes up often in Japanese learning groups.
There's also just personal takes. I had to shut off Memories of Matsuko. Maybe the end saves it but it was way too over the top and not in a good way.
Some good older Japanese films, though:
Kurosawa: Ikiru (1952)
Teshigahara: Woman in the Dunes (1964)
These are 2 movies you won't forget.
Conversely, even though I enjoyed Shoplifters I remember nothing about it except the guy celebrating he had sex and the girl burping. Similarly with After Life. I just watched it 2 months ago and had to go look it up to remember what it was about. It was interesting because of the premise but not because of the movie itself.
This type of thing happens quite often. I can only guess that part of the page is automated or farmed out, because I see it all the time: main actors not credited when the list is 4-6 people.
You can use wgpu or Dawn in a native app and use native tools for GPU debugging, if that's what you want.
You can then take that and also run it in the browser, and you can debug the browser version with the same tools. Google it for instructions.
The positive thing about WebGPU is that it's actually portable, unlike Vulkan. And it's easy to use, unlike Vulkan.