
That just describes using conversion routines for data, after loading and before storing to some standardized on-disk / network format.

It's still one more thing you need to keep in mind when writing code, at least in languages that don't have a separate data type for different-endian fields.
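For example, such a conversion routine in C might look like this (a sketch; the function name and the 32-bit little-endian field are just assumptions for illustration):

    #include <stdint.h>

    /* Decode a 32-bit little-endian field from a byte buffer.
       Produces the same result on any host, whatever its native order. */
    static uint32_t load_le32(const uint8_t *p)
    {
        return (uint32_t)p[0]
             | (uint32_t)p[1] << 8
             | (uint32_t)p[2] << 16
             | (uint32_t)p[3] << 24;
    }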

!tfel-ot-thgir eb dluow ,redro dleif sa llew sa ,sgnirts lla neht tuB

It could be argued that little endian is the more natural way to write numbers anyway, for both humans and computers. The positional numbering system came to the West via Arabic, after all.

Most of the confusion when reading hex dumps seems to arise from the two nibbles of each byte being in the familiar left-to-right order, which clashes with the order of bytes in a larger number. Swap the nibbles, and you get "43 21", which would be almost as easy to read as "12 34".
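A toy sketch of that in C (the function name is made up), printing the low nibble of each byte first:

    #include <stdint.h>
    #include <stdio.h>

    /* hex dump with the two nibbles of each byte swapped */
    static void dump_nibble_swapped(const uint8_t *p, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            printf("%x%x ", p[i] & 0xF, p[i] >> 4);
        putchar('\n');
    }

    int main(void)
    {
        uint16_t v = 0x1234;   /* stored as bytes 34 12 on a little-endian host */
        dump_nibble_swapped((const uint8_t *)&v, sizeof v);   /* prints "43 21" */
        return 0;
    }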


Yep. We even have a free bit when writing hex numbers like 0x1234. Just flip that 0x to a 1x to indicate you are writing in little-endian and you get nice numbers like 1x4321 that are totally unambiguous little-endian hex representations.

You can apply that same formatting to little-endian bit representations by using 1b instead of 0b and you could even do decimal representations by prefixing with 1d.


For me, I think the issue is the way you think of memory.

You can think of memory as a store of register-sized values. Big endian sort of makes sense when you think of it that way.

Or you can think of it as arbitrarily sized data. If it's arbitrary data, then big endian is just a pain in the ass. And code written to handle both big and little endian is obnoxious.
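For example, the kind of boilerplate this forces in C (a sketch relying on GCC/Clang's predefined byte-order macros):

    #include <stdint.h>

    /* Convert a big-endian on-disk value to host order: a no-op on
       big-endian hosts, a four-way byte shuffle everywhere else. */
    static uint32_t from_be32(uint32_t x)
    {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        return x;
    #else
        return  (x >> 24)
             | ((x >>  8) & 0x0000FF00u)
             | ((x <<  8) & 0x00FF0000u)
             |  (x << 24);
    #endif
    }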


> tree-like file systems, multiple users, access privileges,

Why should everything pretend to be a 1970s minicomputer shared by multiple users connected via teletypes?

If there's one good idea in Unix-like systems that should be preserved, IMHO it's independent processes, possibly written in different languages, communicating with each other through file handles. These processes should be isolated from each other, and from access to arbitrary files and devices. But there should be a single privileged process, the "shell" (whether command line, TUI, or GUI), that is responsible for coordinating it all, by launching and passing handles to files/pipes to any other process, under control of the user.
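A rough POSIX sketch of that model (the file name and child program are placeholders): the shell opens the file on the user's behalf, and the child only ever receives an already-open descriptor, never a path.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("bar.txt", O_RDONLY);   /* picked by the user in the shell */
        if (fd < 0) { perror("open"); return 1; }

        if (fork() == 0) {
            dup2(fd, STDIN_FILENO);           /* pass the handle... */
            close(fd);
            execlp("wc", "wc", "-l", (char *)NULL);  /* ...the child never sees the path */
            _exit(127);
        }
        close(fd);
        wait(NULL);
        return 0;
    }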

Could be done by typing file names, or selecting from a drop-down list, or by drag-and-drop. Other program arguments should be defined in some standard format so that e.g. a text based shell could auto-complete them like in VMS, and a graphical one could build a dialog box from the definition.

I don't want to fiddle with permissions or user accounts, ever. It's my computer, and it should do what I tell it to, whether that's opening a text document in my home directory, or writing a disk image to the USB stick I just plugged in. Or even passing full control of some device to a VM running another operating system that has the appropriate drivers installed.

But it should all be controlled by the user. Normal programs of course shouldn't be able to open "/dev/sdb", but neither should they be able to open "/home/foo/bar.txt". Outside of the program's own private directory, the only way to access anything should be via handles passed from the launching process, or some other standard protocol.

And get rid of "everything is text". For a computer, parsing text is like for a human to read a book over the phone, with an illiterate person on the other end who can only describe the shape of each letter one by one. Every system-level language should support structs, and those are like telepathy in comparison. But no, that's scaaaary, hackers will overflow your buffers to turn your computer into a bomb and blow you to kingdom come! Yeah, not like there's ever been any vulnerability in text parsers, right? Making sure every special shell character is properly escaped is so easy! Sed and awk are the ideal way to manipulate structured data!


Indeed.

AmigaOS was the pinnacle of personal computing OS design. Everything since has been a regression. Fite me.


What about BeOS?

Not very likely as such, but what if the BeOS API emerged as "the standard" on Linux?

https://cosmoe.org/

It would not solve the ABI problem, but it would at least give an opinionated end-to-end API that was at some point the official API of an OS. Its design has received some praise, too.


It was more about everything since the Amiga being a regression. BeOS was sometimes called a successor (in spirit) to the Amiga: a fun, snappy, single-user OS.

I regularly install HaikuOS in a VM to test it and I think I could probably use it as a daily driver, but ported software often does not feel completely right.


OK, point.

So every Linux distribution should compile and distribute packages for every single piece of open source software in existence, both the very newest stuff that was only released last week, and also everything from 30+ years ago, no matter how obscure.

Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.


Those users will either check the source code and compile it themselves, with all the proper options to match their system, or rely on a software distribution to do it for them.

People who are complaining would prefer a world of isolated apps downloaded from signed stores, but Linux was born at an optimistic time when the goal was software that cooperates and forms a system, and whose distribution does not depend on a central trusted platform.

I do not believe that there is any real technical issue discussed here, just drastically different goals.


No. People would prefer the equivalent of double-clicking `setup.exe`. Were you being serious?

I am not an expert on this, but my question is: how does Windows manage to achieve it? Why can't Linux do the same?

Because they care about ABI/API stability.

And they have an ever-decreasing market share in desktop, hypervisor, and server space. API/ABI stability is probably the only thing stemming the customer leakage at all. It's not the be-all and end-all.

Decreasing market share in the desktop?

Yes. Windows is so bad that even its desktop market share has been decreasing.

Not sure if it's the right solution, but yes, it's a description of what happens in practice right now.

It also makes support more or less impossible.

Even if we ship as source, even if the user has the skills to build it, even if the makefile supports every version of the kernel, plus all the other material variety, plus who knows how many dependencies, what exactly am I supposed to do when a user reports:

"I followed your instructions and it doesn't run".

Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.

Distribution as source fails because there are too many unknown and interdependent parts.

Distribution as binary containers (Docker et al.) is popular because it gives the app a fighting chance, while at the same time being a really ugly hack.


Yep. But docker doesn’t help you with desktop apps. And everything becomes so big!

I think Rob Pike has the right idea with Go: just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.

People don’t seem to mind downloading a 30 MB executable, so long as it actually works.
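The equivalent works in C land too; for illustration (hello.c is a stand-in):

    $ gcc -static -o hello hello.c
    $ ldd hello
            not a dynamic executable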


What do you mean docker doesn’t help you with desktop apps? I run complicated desktop apps like Firefox inside containers all the time. There are also apps like Citrix Workspace that need such specific dependency versions that I’ve given up on running them outside containers.

If you don’t want to configure this manually, use distrobox, which is a nice shell script wrapper that helps you set things up so graphical desktop apps just work.
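Something like this, for instance (the container name and image are placeholders):

    $ distrobox create --name legacy-apps --image ubuntu:20.04
    $ distrobox enter legacy-apps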


And being 100 things is completely unavoidable when freedom is involved. You can force everyone to use the same 1 thing, if you make it proprietary. If people have freedom to customize it, of course another 99 people will come along and do that. We should probably just accept this is the price of freedom. It's not as bad as it seems because you also have the freedom to make your system present like some other system in order to run programs made for that system.

Then you only support 1 distro. If anyone wants to use your software on an unsupported distro they can figure out the rest themselves.

No, they come online and whine that you didn't package your software for <obscure distro>, that your software is shit and you're incompetent.

Your tone makes it sound like this is a bad thing. But from a user’s perspective, I do want a distro to package as much software as possible. And it has nothing to do with user freedom. It’s all about being entitled as a user to have the world’s software conveniently packaged.

What if you want to use a newer or older version of just one package without having to update or downgrade the entire goddamn universe? What if you need to use proprietary software?

I've had so much trouble with package managers that I'm not even sure they are a good idea to begin with.


That is the point of Flatpak or AppImage, but even before that you could do it by shipping the libraries with your software and using LD_LIBRARY_PATH to link your software against them.

That was what most well-packaged proprietary software used to do when installing into /opt.
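A minimal sketch of that pattern (all the paths and names here are hypothetical):

    #!/bin/sh
    # launcher installed as /opt/myapp/myapp; bundled libraries in /opt/myapp/lib
    HERE="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp-bin" "$@"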


I know you are trying to make a point about complexity, but that is literally what NixOS allows for.

Software installed from your package manager is almost certainly provided as a binary already. You could package a .exe file and that should work everywhere WINE is installed.

That's not my point. My point is that if executable A depends on library B, and library B does not provide a stable ABI, then the package manager will take care of updating A whenever it updates B. Windows has a fanatical commitment to ABI stability, so the situation above does not even occur. As a user, all the hard work of dealing with ABI breakages on Linux is done by the people managing the software repos, not by the user or the developer. I'm personally very appreciative of this fact.

Sure, it's better than nothing, but it's certainly not ideal. How much time and energy is being wasted by libraries like that? Wouldn't it be better if library B had a stable ABI or was versioned? Is there any reason it needs to work like this?

And you can also argue how much time and energy is being wasted by committing to a stable ABI such that the library cannot meaningfully improve. Remember that even struct sizes are part of the ABI, so you either cannot add new fields to a struct, or you expose pointers only and have to resort to dynamic allocation rather than stack allocation most of the time.
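A small sketch of the usual workaround (the library and its names are made up): keep the struct opaque so its size stays out of the ABI, at the cost of forcing heap allocation.

    /* foo.h -- public header: callers only ever see a pointer */
    typedef struct foo foo;
    foo *foo_new(void);
    void foo_free(foo *f);

    /* foo.c -- the library is now free to grow the struct in any release */
    #include <stdlib.h>
    #include "foo.h"
    struct foo { int a; int b; };
    foo *foo_new(void)    { return calloc(1, sizeof(struct foo)); }
    void foo_free(foo *f) { free(f); }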

Opinions could differ, but personally I think a stable ABI wastes more time and energy than an unstable one, because it forces code to be inefficient. Code is run more often than it is compiled. It’s better to let code run faster than to avoid extra compilations.


Actually, the solution to this on Windows is for programs to package all their dependencies except for Windows itself. When you install a game, it includes a copy of every library it uses, except for Windows.

That's Guix.

"In 1193 (1981.), I submitted my first article [...] and in 1197 (1987.), I became a member"

Seems obviously wrong, or is that yet another dozenal notation, where what looks like the digit three is really a one? Because it should have been really easy to avoid mistakes like that for an entire decade, just by remembering that 1190 = 1980 decimal (the next time the decades and dozen-years align like that will be in 2040).
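Checking the arithmetic:

    1190 (dozenal) = 1·12³ + 1·12² + 9·12 + 0 = 1728 + 144 + 108 = 1980 (decimal)
    1193 (dozenal) = 1980 + 3 = 1983 (decimal), not 1981
    1981 (decimal) = 1728 + 144 + 108 + 1  ->  1191 (dozenal)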


The dozenal movement seems based (no pun intended) mostly on opposition to the metric system.

The article on page 38 is really funny to anyone not in the US:

    Fahrenheit temperature usually ranges from about 0° (cold) to about 100°
    (hot). On the other hand, those who use the awkward Celsius scale usually range from
    about 18° to about 38°! Interesting.
(18-22 °C is room temperature, 38 °C = 100 °F = hot summer day. 0 °F is way below freezing, a lot colder than it gets in most places!).

And apparently only the metric system was imposed by tyrannical governments. Maybe someone could ask the people in metric countries today if they would like to go back to the "natural" measurements that were in use before that happened? And maybe also switch to counting everything in dozen and gross at the same time.

Even if that really were objectively a better system, I think few would make that change if it wasn't forced on them.


There's nothing "natural" about the Fahrenheit scale either. Fahrenheit took the Rømer scale, multiplied it by 4 and rounded it off a bit.

The "dividing things by two" argument makes a lot of sense! And if you need ⅓ and ⅕, they aren't too bad either: .5555 and .3333 repeating.

Yes, with that name it really should have an emulation mode similar to the NEC V20/V30 (although those only did 8080, not Z80).


Only the Z80 refetched the entire instruction; x86 never did it this way. Each bus transfer (read or write) takes multiple clocks:

    CPU                        Cycles  per              theoretical minimum per byte for block move
    Z80 instruction fetch      4       byte
    Z80 data read/write        3       byte             6
    80(1)88, V20               4       byte             8
    80(1)86, V30               4       byte/word        4
    80286, 80386 SX            2       byte/word        1
    80386 DX                   2       byte/word/dword  0.5
LDIR (etc.) are 2 bytes long, so that's 8 extra clocks per iteration. Updating the address and count registers also had some overhead.
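Per the standard Z80 timing tables, that works out to (assuming BC ≠ 0, no wait states):

    LDIR iteration:  4 + 4  (refetch ED B0)
                   + 3      (memory read)
                   + 5      (memory write + pointer update)
                   + 5      (BC decrement + conditional repeat)
                   = 21 T-states per byte, vs. the theoretical 6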

The microcode loop used by the 8086/8088 also had overhead, this was improved in the following generations. Then it became somewhat neglected since compilers / runtime libraries preferred to use sequences of vector instructions instead.

And with modern processors there are a lot of complications due to cache lines and paging, so there's always some unavoidable overhead at the start to align everything properly, even if then the transfer rate is close to optimal.


This is correct, but it should be noted that the 2-cycle transfers of the 286/386SX/386DX could normally be achieved only from cache memory (if the motherboard had cache), while DRAM accesses needed at least 1 or 2 wait states, lengthening the access cycles to 3 or 4 clock cycles.

Moreover, the cache memories used with the 286/386SX/386DX were normally write-through, which means they shortened only the read cycles, not the write cycles. Such caches were very effective at diminishing the impact of instruction fetching on performance, but they brought little or no improvement to block transfers. The caches were also very small, so any sizable block transfer would flush the entire cache, after which all transfers would be done at DRAM speed.


0 wait state 286 was pretty standard fare for 8-10 MHz and some 12 MHz gray boxes. Example: https://theretroweb.com/motherboard/manual/g2-12mhz-zero-wai...

"12MHz/0 wait state with 100ns DRAM."

Another: https://theretroweb.com/chip/documentation/neat-6210302843ed...

"The processor can operate at 16MHz with 0.5-0.7 wait state memory accesses, using 100 nsec DRAMs. This is possible through the Page Interleaved memory scheme."

