twic's comments | Hacker News

  SyntaxError: not a chance

Part of what's going on is that all of this stuff is developed by volunteers. If only one person, or three people or whatever, are actually willing to give up hundreds or thousands of hours of their own time to work on something, then those people get to make the decisions about what gets done. There's very little supervision from any kind of "upper management" to catch bad decisions.

On top of that, there's adverse selection here. Who gives up thousands of hours to work on some obscure corner of the Linux desktop? People with quite unusual thought processes.


I prefer diatoms myself: https://diatoms.org/morphology


And aren't diatoms largely responsible for silica deposition in deep ocean sediments, not radiolarians? The latter are zooplankton, and being higher on the food web, they do not appear in such large numbers.


I always associated these battle-jacketed MacBooks with Ruby developers. But Ruby developers were the Rust developers of their day.


I think the assertion might even have been true for non-pathological cases 20 to 40 years ago. A company that chose Visual Basic or Perl would have had a much harder road than one which chose C# or Python. But I think the languages which have survived to the present day are all pretty close in productivity. Except C.


I've worked for a company with a large code base in Visual Basic .NET. The product has been in development since the '90s, with a rich customer that only cares about the software doing its job. It's a surprisingly productive language combined with Visual Studio, even though, as a language enthusiast, I barfed a bit now and then. The dev team would like to switch to C#, but it would be a multi-year effort taking away from lucrative feature requests.


That comment is probably referring to Visual Basic 6.0, which was not a dialect of C# the way VB.NET is. But then again, your product from the '90s likely started off as that.


> A company that chose Visual Basic or Perl would have had a much harder road than one which chose C# or Python.

Based on what? Economics defines where you go, not language. If a company using Perl or VB has steady cash flow and its bottleneck is the language, they'll just rewrite when it makes sense. No amount of writing in C# or Python from scratch will save you if your product is garbage.


The majority of popular PvP shooters use anti-cheat which does not work on Proton, so "almost all modern games" seems like overselling it to me.

But the stuff that does work, works well. I play Helldivers 2 via Proton on Fedora, and I experience far fewer crashes and instances of weird behaviour than friends on Windows or Xbox.


ARC Raiders is currently all working perfectly for me. Such a blessing. I hope it stays this way.


> Debian using Fil-C (Filian?)

DJB SMACKER CONFIRMED?!


I couldn't run Delta Force [1], due to anti-cheat as far as I can tell.

Shame about Battlefield 6, some of my friends are playing that and it would be fun to join them. Oh well. Fortunately they're mostly still playing Helldivers 2 as well, and that works fine.

[1] https://www.protondb.com/app/2507950


To put a name on it, I believe you are talking about SAGE, the Semi-Automatic Ground Environment:

https://en.wikipedia.org/wiki/Semi-Automatic_Ground_Environm...

https://sage.mitre.org/ (see also links at the end)

SAGE also pioneered user interface technology, for example:

The first pointing device: https://historyofinformation.com/detail.php?id=727

The first naked lady on a computer: https://www.theatlantic.com/technology/archive/2013/01/the-n...

And bits of the machinery went on to a long and varied career in film and TV: https://www.starringthecomputer.com/computer.php?c=73


There isn't one today, but there was in 1991, in the form of scheduler activations:

https://dl.acm.org/doi/10.1145/121132.121151

The rough idea is that if the kernel blocks a thread on something like a page cache miss, then it notifies the program through something a bit like a signal handler; if the program is doing user-level scheduling, it can then take account of that thread being blocked. The actual mechanism in the paper is more refined than that.
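
As a rough illustration of that idea only (the names below are invented for this sketch; this is not the paper's interface, and nothing like it exists in mainline Linux), a user-level scheduler driven by such upcalls might look something like this:

  /* Toy model of the scheduler-activations upcall idea. All names are
     made up for illustration; the real design hands the upcall a fresh
     "activation" (a kernel-provided execution context) rather than a
     plain function call like this. */
  #include <stdio.h>

  enum state { RUNNABLE, BLOCKED };

  struct green_thread {
      int id;
      enum state state;
  };

  static struct green_thread threads[] = {
      {0, RUNNABLE}, {1, RUNNABLE}, {2, RUNNABLE},
  };
  static int current = 0;

  /* Pick the next runnable green thread, round-robin. */
  static int pick_next(void)
  {
      int n = sizeof threads / sizeof threads[0];
      for (int i = 1; i <= n; i++) {
          int candidate = (current + i) % n;
          if (threads[candidate].state == RUNNABLE)
              return candidate;
      }
      return -1; /* nothing runnable */
  }

  /* The "upcall": the kernel would invoke something like this when the
     running thread blocks (e.g. on a page fault), so the user-level
     scheduler can run something else instead of losing the CPU. */
  static void upcall_thread_blocked(int tid)
  {
      threads[tid].state = BLOCKED;
      int next = pick_next();
      if (next >= 0) {
          current = next;
          printf("thread %d blocked in kernel, switching to thread %d\n", tid, next);
      } else {
          printf("thread %d blocked and nothing else is runnable\n", tid);
      }
  }

  /* A matching upcall for when the blocked thread's I/O completes. */
  static void upcall_thread_unblocked(int tid)
  {
      threads[tid].state = RUNNABLE;
      printf("thread %d is runnable again\n", tid);
  }

  int main(void)
  {
      /* Simulate the kernel notifying us about thread 0 faulting. */
      upcall_thread_blocked(0);
      upcall_thread_unblocked(0);
      return 0;
  }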


Nice find. That going nowhere seems like a classic consequence of the cyclical nature of these things: user-managed concurrency was cool, then it wasn't, then Go (and others) brought it back.

I think the more recent UMCG [1] (kind of a hybrid approach, with threads visible to the kernel but mostly scheduled by userspace) handles this well. Assuming it ever actually lands upstream, it seems reasonable to guess Go would adopt it, given that both originate within Google.

It's worth pointing out that the slow major page fault problem is not unique to programs using mmap(..., fd, ...). The program binary is implicitly mmapped, and if swap is enabled, even anonymous memory can be paged out. I prefer to lock ~everything [2] into RAM to avoid this, but most programs don't do this, and default ulimits prevent programs running within login shells from locking much, if anything.

[1] https://lwn.net/Articles/879398/

[2] particularly in (mostly non-Go) programs with many threads, it's good to avoid locking into RAM the guard pages or stack beyond what is likely to be used, so unfortunately it's better not to just use mlockall(MCL_CURRENT | MCL_FUTURE); a rough sketch of the alternative is below.
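
A minimal sketch of that selective-locking approach, assuming C on Linux; which regions are worth locking and how to handle failure are application-specific choices, not part of any standard recipe:

  /* Lock only chosen regions instead of mlockall(MCL_CURRENT | MCL_FUTURE),
     so thread stacks and guard pages stay pageable. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/resource.h>

  int main(void)
  {
      /* The default RLIMIT_MEMLOCK is often only 64 KiB for unprivileged
         users, so check it before assuming mlock() will succeed. */
      struct rlimit rl;
      if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
          fprintf(stderr, "RLIMIT_MEMLOCK soft limit: %llu bytes\n",
                  (unsigned long long)rl.rlim_cur);

      /* Lock just the data we know is hot, not every mapping. */
      size_t len = 1 << 20;               /* 1 MiB working buffer */
      void *buf = malloc(len);
      if (buf == NULL)
          return 1;
      memset(buf, 0, len);                /* fault the pages in first */

      if (mlock(buf, len) != 0) {
          perror("mlock");                /* likely EPERM/ENOMEM if over the ulimit */
          free(buf);
          return 1;
      }

      /* ... latency-sensitive work on buf, with no major faults ... */

      munlock(buf, len);
      free(buf);
      return 0;
  }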

