This reminds me that at some point I should write up my exploration of the x86 encoding scheme, because a lot of the traditional explanations tend to be overly focused on how the 8086 would have decoded instructions, which isn't exactly the same way you look at them for a modern processor.

I actually have a tool I wrote to automatically derive an x86 decoder from observing hardware execution (based in part on sandsifter, so please don't ask me if I've heard of it), and it turns out to be a lot simpler than people make it out to be... if you take a step back and ignore some of what people have said about the role of various instruction prefixes (they're not prefixes, they're extra opcode bits).

(FWIW, this is fairly dated in that it doesn't cover the three-byte opcodes, or the 64-bit prefixes that were added, like the REX and VEX prefixes).
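
To give a flavor of the "extra opcode bits" point, here's a toy TypeScript sketch (my own illustration, not output from the tool) using the classic SSE example: the same 0F 58 opcode decodes to ADDPS, ADDPD, ADDSS or ADDSD depending on which of the 66/F3/F2 "prefixes" is present, so the decoder is really just indexing a wider opcode.

    // Toy lookup showing "prefixes as extra opcode bits": the same 0F 58
    // opcode maps to four different instructions depending on which
    // mandatory prefix (none, 66, F3, F2) precedes it.
    const OPCODES: Record<string, string> = {
      "--:0f58": "addps xmm, xmm/m128",
      "66:0f58": "addpd xmm, xmm/m128",
      "f3:0f58": "addss xmm, xmm/m32",
      "f2:0f58": "addsd xmm, xmm/m64",
    };

    function decode(bytes: number[]): string {
      // Treat a leading 66/F3/F2 as part of the opcode, not as a modifier.
      let prefix = "--";
      let i = 0;
      if ([0x66, 0xf3, 0xf2].includes(bytes[0])) {
        prefix = bytes[0].toString(16);
        i = 1;
      }
      const opcode = bytes
        .slice(i, i + 2)
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
      return OPCODES[`${prefix}:${opcode}`] ?? "unknown";
    }

    console.log(decode([0x0f, 0x58]));       // addps
    console.log(decode([0x66, 0x0f, 0x58])); // addpd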


Well that sucks. It's exactly the site that comes to mind when I think "most popular alternative to HN".

I've generally found conversation there to be more respectful than HN, rather than less, when discussions get heated - so I had high hopes it would be a different site, but alas.

This leaves a really bad taste in my mouth.

Edit: you know what, screw it. In the spirit of "no more self censorship", here's the link: https://lobste.rs/~7u026ne9se


I used to spend a lot of time in Jakarta for work, and it's an underrated city. Yes, it's hot, congested, polluted and largely poor, but so is Bangkok.

Public transport remains not great, but it's improved a lot with the airport link, the metro, LRT, Transjakarta BRT. SE Asia's only legit high speed train now connects to Bandung in minutes. Grab/Gojek (Uber equivalents) make getting around cheap and bypass the language barrier. Hotels are incredible value, you can get top tier branded five stars for $100. Shopping for locally produced clothes etc is stupidly cheap. Indonesian food is amazing, there's so much more to it than nasi goreng, and you can find great Japanese, Italian, etc too; these are comparatively expensive but lunch at the Italian place in the Ritz-Carlton was under $10. The nightlife scene is wild, although you need to make local friends to really get into it. And it's reasonably safe, violent crime is basically unknown and I never had problems with pickpockets (although they do exist) or scammers.

I think Jakarta's biggest problems are lack of marketing and top tier obvious attractions. Bangkok has royal palaces and temples galore plus a wild reputation for go-go bars etc, Jakarta does not, so nobody even considers it as a vacation destination.


> There’s an old electronics joke that if you want to build an oscillator, you should try building an amplifier

It's funny, I was just thinking this morning about an old article in (I think) Television magazine that I read in the 80s when I was getting into electronics. The author was talking about some service notes he'd received for a particular model of Philips radio, which had just come out, and it was when shops tended to have their own service department that would repair things right there in the shop - and also, apply any "factory fixes".

One such fix was described as "Fix VIUPS", and involved changing a couple of resistors and adding a couple of capacitors. Not really any difference, but the author did think it seemed to make the amp a bit more stable and less inclined to make squealy ploppy noises at high volume when the battery was low. But, curiosity got the better of him, so he rang the Philips rep - what's this "VIUPS"?

No idea. But I'll get hold of someone at head office you can ring. Okay, what's this "VIUPS" thing? No idea, said the head office guy, but I can put you in touch with one of the factory engineers in Eindhoven.

So, a call came in, an international call! Quite a big deal in the 80s. "What's this VIUPS Fix thing in the service notes?" he asked the guy.

"Aha yes", he said in a heavy Dutch accent, "the VIUPS is the noise the set makes when the fault is present."

VIUPS VIUPS VIUPS. Yup.


The most comprehensive “Singal does bad journalism” montages come from the left-wing media outlets and leftist bloggers that he’s targeted over the years. The typical HN commenter is going to immediately gloss those accounts as partisan hyperbole. And why not? It’s purely academic for some of them, and internally worldview-challenging for others.

But if you really are honestly curious and unbiased, M. K. Anderson wrote a well-researched article for Protean in 2022.


This bit about it not being difficult to implement is false. The single most damaging vulnerability class of the last 25 years came from the inability of programmers to reliably count bytes. It's simple to come up with something that works reliably without the presence of an adversary. But as soon as you add an adversary who will manipulate inputs and environments to put you into corner cases, counting becomes quite difficult indeed, no matter how simple you think it is to understand counting.

If you create the opportunity to make a mistake remembering to freshen a nonce, even if that opportunity is remote, such that you'd never trip over it accidentally, you've given attackers a window to elaborately synthesize that accident for you. That's what a vulnerability is.

There is a whole subfield of cryptography right now dedicated to "nonce misuse resistance", motivated entirely by this one problem. This is what I love about cryptography. You could go your entire career in the rest of software security and not come up with a single new bug class (just instances of bug patterns that people have been finding for years). But cryptography has them growing on trees, and it is early days for figuring out how to weaponize them.

That's why people pay so much attention to stuff like nonce widths.
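
To make that concrete, here's a minimal Node.js/TypeScript sketch (an illustration, not anyone's production code) of why a single repeated AES-GCM nonce is a real vulnerability rather than a theoretical nit: GCM is CTR mode underneath, so two ciphertexts produced under the same key and nonce XOR to the XOR of their plaintexts.

    import { createCipheriv, randomBytes } from "node:crypto";

    const key = randomBytes(32);
    const nonce = randomBytes(12); // reused below -- exactly the bug in question

    function seal(plaintext: Buffer): Buffer {
      const c = createCipheriv("aes-256-gcm", key, nonce);
      return Buffer.concat([c.update(plaintext), c.final()]); // auth tag omitted for brevity
    }

    const a = seal(Buffer.from("attack at dawn!!"));
    const b = seal(Buffer.from("retreat at noon!"));

    // Same key + same nonce => same keystream, so a XOR b equals
    // plaintext_a XOR plaintext_b. Knowing (or guessing) one plaintext
    // recovers the other without ever touching the key.
    const xor = Buffer.from(a.map((x, i) => x ^ b[i]));
    const known = Buffer.from("attack at dawn!!");
    console.log(Buffer.from(xor.map((x, i) => x ^ known[i])).toString()); // "retreat at noon!"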


It is not only that ASN.1 was there before SSL, but even the certificate format was there before SSL. The certificate format comes from X.500, which is the "DAP" part of "LDAP", L as in "Lightweight" in "LDAP" refers mostly to LDAP not using public key certificates for client authentication in contrast to X.500 [1]. Bunch of other related stuff comes from RSA's PKCS series specifications, which also mostly use ASN.1.

[1] the somewhat ironic part is that when it was discovered that using just passwords for authentication is not enough, the so-called "lightweight" LDAP arguably got more complex than X.500. The same thing happened, for similar reasons, to the "Simple" in SNMP (another IETF protocol using ASN.1).


The magic is +LEVEL and -LEVEL (instead of FIG-FORTH's traditional STATE): when you start a top-level loop or conditional, it transitions from level 0 to 1 and switches the outer interpreter into a temporary compilation mode that compiles headerless code into a compile buffer; then when you -LEVEL back to zero, it executes the code you just compiled, without it actually being part of a permanent word.

So your temporary top level code is out of the way of HERE and can compile permanent stuff into the dictionary or whatever it needs to do. Then you can do stuff like "10 0 do i , loop" and the numbers you're ,'ing won't get mixed up with the code of the loop that's ,'ing them.
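
For the non-Forth readers, here's a very rough TypeScript analogy of the idea (a sketch of the concept, not Mitch's actual implementation): words are interpreted immediately at level 0, but a top-level control word bumps the level and buffers everything up to the matching terminator, and dropping back to level 0 runs that scratch buffer once and throws it away instead of adding it to the dictionary.

    // Extremely simplified outer interpreter: numbers, "." (print), and a
    // single do ... loop, just to show the temporary compile buffer.
    const dictionary: Record<string, (st: number[]) => void> = {
      ".": (st) => console.log(st.pop()),
    };

    function execute(tok: string, st: number[]) {
      if (dictionary[tok]) dictionary[tok](st);
      else st.push(Number(tok));
    }

    function runScratch(code: string[], st: number[]) {
      // do ... loop takes (limit start) off the stack, like Forth's DO.
      const body = code.slice(1, -1);
      const start = st.pop()!, limit = st.pop()!;
      for (let i = start; i < limit; i++) {
        for (const tok of body) {
          if (tok === "i") st.push(i);
          else execute(tok, st);
        }
      }
    }

    function interpret(src: string) {
      const stack: number[] = [];
      let level = 0;
      let scratch: string[] = [];
      for (const tok of src.trim().split(/\s+/)) {
        if (tok === "do") { level++; scratch.push(tok); continue; }      // +LEVEL
        if (tok === "loop") {
          level--; scratch.push(tok);
          if (level === 0) { runScratch(scratch, stack); scratch = []; } // -LEVEL: run, then discard
          continue;
        }
        if (level > 0) scratch.push(tok);  // compiling into the scratch buffer
        else execute(tok, stack);          // level 0: interpret immediately
      }
    }

    interpret("10 0 do i . loop"); // prints 0..9; nothing is added to the dictionary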

This post has a bunch of links to the OpenFirmware metacompiler's implementation and also the CForth implementation:

https://news.ycombinator.com/item?id=38689282

Here are the control structures in kernel.fth (this is some beautiful FORTH code to read for pleasure):

kernel.fth: https://github.com/MitchBradley/openfirmware/blob/master/for...

Here is the same approach in CForth, the low level C kernel code (necessarily ugly C macrology) and the higher level FORTH control structure definitions (more beautiful FORTH code):

forth.c: https://github.com/MitchBradley/cforth/blob/master/src/cfort...

control.fth: https://github.com/MitchBradley/cforth/blob/master/src/cfort...

Here is another paper about refactoring the FORTH compiler/interpreter with deferred words that Mitch wrote called "Yet Another Interpreter Organization":

https://groups.google.com/g/comp.lang.forth/c/lKQjcJL_o54/m/...

>There has been a mild controversy in the Forth community about how to implement the text interpreter. The particular problem is how the distinction between compiling and interpreting should be coded. At least three distinct solutions have been advocated over the years. I propose a fourth one, and claim that it is the best solution yet.

[describes FIG-FORTH's solution with STATE, PolyForth's solution with two separate loops for compiling and interpreting, Bob Berkey's coroutines approach]

>What is Wrong with all this

>These different schemes do not at all address what I consider to be the fundamental problems with the interpreter/compiler.

>Fundamental Problem #1:

>The compiler/interpreter has a built-in infinite loop. This means that you can't tell it to just compile one word; once you start it, off it goes, and it won't stop until it gets to the end of the line or screen.

>Fundamental Problem #2:

>The reading of the next word from the input stream is buried inside this loop. This means that you can't hand a string representing a word to the interpreter/compiler and have it interpret or compile it for you.

>Fundamental Problem #3:

>The behavior of the interpreter/compiler is hard to change because all the behavior is hard-wired into one or two relatively large words. Changing this behavior can be extremely useful for a number of applications, for example meta-compiling.

>Fundamental Problem #4:

>If the interpreter/compiler can't figure out what to do with a word (it's not defined and it's not a number), it aborts. Worse yet, the aborting is not done directly from within the loop, but inside NUMBER. This severely limits the usefulness of NUMBER because if the string that NUMBER gets is not recognizable as a number, it will abort on you. (The 83 standard punts this issue by not specifying NUMBER, except as an uncontrolled reference word).

[describes Mitch's solution of making DO-DEFINED, DO-LITERAL, and DO-UNDEFINED a deferred word]

>So what?

>This may seem to be more complicated than the schemes it replaces. It certainly does have more words. On the other hand, each word is individually easy to understand, and each word does a very specific job, in contrast to the old style, which bundles up a lot of different things in one big word. The more explicit factoring gives you a great deal of control over the interpreter.

[describes cool examples of what you can do with it]

>Finally, a really neat way to write keyword-driven translators. Suppose you have some kind of a file that contains a bunch of text. Interspersed throughout the text are keywords that you would like to recognize, and the program should do something special when it sees a keyword. For things that aren't keywords, it just writes them out unchanged. Suppose that the keywords are ".PARAGRAPH", ".SECTION", and ".END". [...]

>I have used this technique very successfully to extract specific information from data base files produced by a CAD system. Instead of outputting unrecognized words, I actually just ignored them in this application, but the technique is the same in either case.

  Mitch Bradley
  Bradley Forthware
  P.O. Box 4444
  Mountain View, CA 94040
  wmb@forthware.com
Mitch had the coolest P.O. Box address for his Forthware company in Mountain View!

This deferred word approach is actually what I used for the HyperTIES markup language interpreter/formatter for NeWS I wrote in Forth and C and PostScript, using Mitch's Sun Forth / Forthmacs (predecessor to OpenFirmware that ran on the Sun):

https://donhopkins.com/home/ties/doc/formatter.st0

https://donhopkins.com/home/ties/fmt.f

https://donhopkins.com/home/ties/fmt.c

https://donhopkins.com/home/ties/fmt.cps

https://donhopkins.com/home/ties/fmt.ps


Why would anyone want to use a complex kludge like QUIC and be at the mercy of broken TLS libraries, when Wireguard implementations are ~ 5k LOC and easily auditable?

Have all the bugs in OpenSSL over the years taught us nothing?


Mind you, a lot of their designs are cheap knock-offs of contemporary designs.

* The POÄNG chair is a copy of Alvar Aalto's 406.

* Nakamura's earlier POEM copied both the 406 and a chair by Bruno Mathsson.

* FROSTA (now discontinued) is a copy of Aalto's Stool 60.

* KROMVIK copied Bruno Mathsson's Ulla bed frame.

* BORE copied Mathsson's Karin chair.

And so on. Ironically, some of these also have become classics of their own, or at least sought-after vintage objects.

IKEA does sometimes come up with original, even novel designs, but more often they copy better designs and build them with worse manufacturing quality.

And they are genuinely worse in terms of construction. For example, if you compare the wood quality of a FROSTA with Aalto's stool, it's night and day. FROSTA is just plywood cut to size. The Aalto stool is solid birch, with a plywood top and an elegant solid birch veneer for the edge band, and the legs use a unique plywood-like join that is a thing of beauty [1].

[1] https://www.alvaraalto.fi/wp-content/uploads/2017/10/l-jalka...


This movie was great. If you liked thoughtful almost-action movies like No Country for Old Men or even something like The Fugitive, you’ll enjoy this. There are cinematic set pieces as beautiful as anything I’ve seen in years (the rooftop sequence!), the acting is stellar throughout, and the finale gives an original take on one of the most common film plot devices of all time.

The film’s politics are very progressive/liberal so I can imagine that deterring some viewers but PTA adds a lot of nuance and subversion throughout that make it more of an examination of radicalism than a straight trumpeting of it. As mentioned in another comment the radical characters often disagree and are shown taking very different strategies that then produce very different outcomes.


Talking of cheap and powerful devices, one can also look at Chinese UZ801 4G LTE (Qualcomm MSM8916) dongles. They cost only like $4-5 and pack quite impressive HW: 4GB eMMC, 512MB RAM, and an actual 4G modem, sometimes with 2-SIM switching support. Since it's actually an old Android SoC, there's even a GPU and GPS in there. And a lot of work has already been done on supporting them:

https://wiki.postmarketos.org/wiki/Zhihe_series_LTE_dongles_...

https://github.com/OpenStick/OpenStick

So yeah, if you're looking for a hardware platform for weird homelab projects, that can be it.


The "AVX2 (generic)" approach is roughly what ripgrep uses (via Rust's `regex` crate) to accelerate most searches. Even something like `\w+\s+Sherlock\s+\w+` will benefit since ripgrep will pluck `Sherlock` out of the regex and search that.

The actual implementation is here: https://github.com/BurntSushi/memchr?tab=readme-ov-file#algo...

The main difference with the algorithm presented here is that instead of always using the first and last bytes of the needle, a heuristic is used to try to pick two bytes that occur less frequently according to a background frequency distribution.

It ends up being quite a bit faster than just plain Two-Way or even GNU libc's implementation of `memmem`. From the root of the `memchr` repository:

    $ rebar rank benchmarks/record/x86_64/2023-12-29.csv -e '^rust/memchr/memmem/(oneshot|prebuilt|twoway)' -e '^libc/memmem/oneshot'
    Engine                       Version  Geometric mean of speed ratios  Benchmark count
    ------                       -------  ------------------------------  ---------------
    rust/memchr/memmem/prebuilt  2.7.1    1.07                            57
    rust/memchr/memmem/oneshot   2.7.1    1.39                            54
    libc/memmem/oneshot          unknown  3.15                            54
    rust/memchr/memmem/twoway    2.7.1    5.26                            54
The "prebuilt" benchmarks also demonstrate the API deficiencies of routines like `memmem`. Because of their API, they need to rebuild the state necessary to execute a search for the needle given. But it is often the case that the needle is invariant across repeated calls with different haystacks. So you end up doing that repeated work for no gain.

What I said is even true in the US - employers have payroll and other compliance obligations such as their portion of Social Security and Medicare taxes, unemployment insurance, worker’s compensation insurance, sometimes a state-specific requirement for short-term disability insurance, and often more. Plus health insurance and paid time off are usually part of the package for employees at least in the tech world, and COBRA rights exist after losing the job. Plus unionization rights too.

For independent contractors, all of those things are either fully the responsibility of the contractor (such as the Social Security and Medicare taxes) or absent entirely (such as the unionization rights). Whether that’s legal even in the US depends on whether the relationship is misclassified employment or true independent contracting. (These are among several reasons why true independent contractors charge much higher rates than people who just acquiesce to employer misclassification.) The IRS has a many-factor test based on the common law and is absolutely willing to hear reports of alleged misclassification. So are many state and local government agencies.


Is there any better model you can point at? I would be interested in having a listen.

There are people – and it does not matter what it's about – that will overstate the progress made (and others will understate it, case in point). Neither should put a damper on progress. This is the best I personally have heard so far, but I certainly might have missed something.


Hey, I heard about how utility-pole-inspecting helicopters are able to tell the good/rotten state of wooden telephone poles by the reverb pattern of sound waves coming off the poles from the rotors -- it seems to me the whole field of non-invasive sensing (and using existing/ambient emission sources) is getting pretty impressive.

All eleven thousand words were hand-typed. No AIs are used or abused in the making of any of my blog posts. (Because I write to think, because writing is nature's way of showing me how sloppy my thinking is. So it goes...)

https://www.ti.com/lit/gpn/bq77908a

https://www.diodes.com/assets/Datasheets/products_inactive_d...

Look at the reference circuits, it's a pair of antiserial NMOS on the negative pole.

(Those 2 protection circuits are at the opposite ends of complexity & features)

To be clear, using 2 PMOS on the positive pole is also quite common, my choice of words with "standard best practice" might be a bit misleading.

> use bus bars to minimize wiring resistance.

Those come after the protection circuit, there should always be 2 MOSFETs in series with the individual Li-Ion cell in a design like this (specifically: user swappable cell).

(Protecting paralleled cells together is kinda nonsensical because you also want to protect them from each other, I don't think I've ever seen a 2P combined protection circuit.)


Ads and many online features can be removed before installation of Adobe Reader by customizing the installer using the Adobe Reader Customization Wizard for Windows [1], where there is an option labeled "Disable Upsell" [2]. There might also be a version for macOS [3]. It might also be possible to just directly set the appropriate "FeatureLockDown" options in the registry/preferences on your system [4].

[1]: https://www.adobe.com/devnet-docs/acrobatetk/tools/Wizard/in...

[2]: https://www.adobe.com/devnet-docs/acrobatetk/tools/Wizard/on...

[3]: https://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuid...

[4]: https://www.adobe.com/devnet-docs/acrobatetk/tools/PrefRef/W...


> Nickel metal rechargeables are a good AA/AAA substitute for devices designed to tolerate their lower voltage.

Any device that can't is arguably broken as designed. Much of the energy (the majority, in a higher current application) in an alkaline battery is found under 1.2V.

See discharge curves: https://lygte-info.dk/review/batteries2012/Duracell%20Ultra%...

NiMH actually stays above 1.2V longer for all but the lightest loads: https://lygte-info.dk/review/batteries2012/Eneloop%20AA%20BK...


NATS is very good. It's important to distinguish between core NATS and Jetstream, however.

Core NATS is an ephemeral message broker. Clients tell the server what subjects they want messages about, producers publish. NATS handles the routing. If nobody is listening, messages go nowhere. It's very nice for situations where lots of clients come and go. It's not reliable; it sheds messages when consumers get slow. No durability, so when a consumer disconnects, it will miss messages sent in its absence. But this means it's very lightweight. Subjects are just wildcard paths, so you can have billions of them, which means RPC is trivial: Send out a message and tell the receiver to post a reply to a randomly generated subject, then listen to that subject for the answer.

NATS organizes brokers into clusters, and clusters can form hub/spoke topologies where messages are routed between clusters by interest, so it's very scalable; if your cluster doesn't scale to the number of consumers, you can add another cluster that consumes the first cluster, and now you have two hubs/spokes. In short, NATS is a great "message router". You can build all sorts of semantics on top of it: RPC, cache invalidation channels, "actor" style processes, traditional pub/sub, logging, the sky is the limit.
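
As a concrete example, the request/reply pattern described above is only a few lines with the official nats.js client (the server address and subject name here are placeholders):

    import { connect, StringCodec } from "nats";

    const sc = StringCodec();
    const nc = await connect({ servers: "127.0.0.1:4222" });

    // "Service" side: subscribe and respond to whoever asked.
    const sub = nc.subscribe("time.now");
    (async () => {
      for await (const msg of sub) {
        msg.respond(sc.encode(new Date().toISOString()));
      }
    })();

    // "Client" side: request() publishes with an auto-generated reply inbox
    // and waits for a single message on it -- the RPC pattern described above.
    const reply = await nc.request("time.now", sc.encode(""), { timeout: 1000 });
    console.log(sc.decode(reply.data));

    await nc.drain();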

Jetstream is a different technology that is built on NATS. With Jetstream, you can create streams, which are ordered sequences of messages. A stream is durable and can have settings like maximum retention by age and size. Streams are replicated, with each stream being a Raft group. Consumers follow from a position. In many ways it's like Kafka and Redpanda, but "on steroids", superficially similar but just a lot richer.

For example, Kafka is very strict about the topic being a sequence of messages that must be consumed exactly sequentially. If the client wants to subscribe to a subset of events, it must either filter client-side, or you have some intermediary that filters and writes to a topic that the consumer then consumes. With NATS, you can ask the server to filter.

Unlike Kafka, you can also nack messages; the server keeps track of what consumers have seen. Nacking means you lose ordering, as the nacked messages come back later. Jetstream also supports a Kafka-like strictly ordered mode. Unlike Kafka, clients can choose the routing behaviour, including worker style routing and deterministic partitioning.

Unlike Kafka's rigid networking model (consumers are assigned partitions, they consume the topic, and that's it), Jetstream, like core NATS, lets you set up complex topologies where streams get gatewayed and replicated. For example, you can have streams in multiple regions, with replication, so that consumers only need to connect to the local region's hub.

While NATS/Jetstream has a lot of flexibility, I feel like they've compromised a bit on performance and scalability. Jetstream clusters don't scale to many servers (they recommend max 3, I think) and large numbers of consumers can make the server run really hot. I would also say that they made a mistake adopting nacking into the consuming model. The big simplification Kafka makes is that topics are strictly sequential, both for producing and consuming. This keeps the server simpler and forces the client to deal with unprocessable messages. Jetstream doesn't allow durable consumers to be strictly ordered; what the SDK calls an "ordered consumer" is just an ephemeral consumer. Furthermore, ephemeral consumers don't really exist. Every consumer will create server-side state. In our testing, we found that having more than a few thousand consumers is a really bad idea. (The newest SDK now offers a "direct fetch" API where you can consume a stream by position without registering a server-side consumer, but I've not yet tried it.)

Lastly, the mechanics of the server replication and connectivity is rather mysterious, and it's hard to understand when something goes wrong. And with all the different concepts — leaf nodes, leaf clusters, replicas, mirrors, clusters, gateways, accounts, domains, and so on — it's not easy to understand the best way to design a topology. The Kafka network model, by comparison, is very simple and straightforward, even if it's a lot less flexible. With Kafka, you can still build hub/spoke topologies yourself by reading from topics and writing to other topics, and while it's something you need to set up yourself, it's less magical, and easier to control and understand.

Where I work, we have used NATS extensively with great success. We also adopted Jetstream for some applications, but we've soured on it a bit, for the above reasons, and now use Redpanda (which is Kafka-compatible) instead. I still think JS is a great fit for certain types of apps, but I would definitely evaluate the requirements carefully first. Jetstream is different enough that it's definitely not just a "better Kafka".


Kafka isn’t a queue, it’s a distributed log. A partitioned topic can take very large volumes of message writes, persist them indefinitely, deliver them to any subscriber in-order and at-least-once (even for subscribers added after the message was published), and do all of that distributed and HA.

If you need all those things, there just are not a lot of options.
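
For example, with kafkajs (the broker address, topic, and group id here are placeholders), a consumer group created long after the messages were published can still replay the whole retained log, in order per partition:

    import { Kafka } from "kafkajs";

    const kafka = new Kafka({ clientId: "late-reader", brokers: ["localhost:9092"] });
    const consumer = kafka.consumer({ groupId: "new-subscriber" });

    await consumer.connect();
    // fromBeginning: a brand-new group still replays everything the topic
    // has retained, per partition and in order.
    await consumer.subscribe({ topic: "orders", fromBeginning: true });

    await consumer.run({
      eachMessage: async ({ partition, message }) => {
        console.log(partition, message.offset, message.value?.toString());
      },
    });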


We have a Volkswagen e-Up, it's basically that. Analog cluster, a very small radio screen that also displays the world's smallest reverse camera view, and a dashboard mount for your phone. It's a fantastic little car, I honestly like it more than our 400bhp Volvo XC60.

> With the limited roll out, it wouldn't take much capacity for individual sites to schedule their available doses.

You'd think so, right. You'd think that the state of California was certainly able to successfully inject more than 25% of doses delivered in January 2021, right. You'd think that simply calling around for places that had nobody coming in could not possibly work, right.

A thing that continues to blow the minds of many: there is literally no one whose job it is to generate demand for most doses at most locations which were allocated doses. This was entirely on a pull model. If there was no pull, then they would have sat in the freezer until discarded.

This didn't just strand doses in the freezer at places like Rural Clinic For Low-Income Farm Workers Who Accidentally Got A Supersized Allocation Due To Political Considerations. It stranded doses in the freezer at e.g. the third largest pharmacy chain in most well-populated cities because people called the first largest, heard a No, and then assumed "Well if they don't have it clearly no one has it."


I mean, yeah, but also Simon and Speck aren't as good as the new generation of low-footprint designs like Ascon and Xoodyak. We know more about how to do these things now than we did 15 years ago.

I think events are a bit unsung and underutilized in a lot of web projects. Events are really powerful and you can build systems with them that can replace proprietary framework features with interoperable protocols.

Context: Components that need a context value can fire an event to request it. Any other element or listener higher in the tree can handle the event and provide a value via the event object. Events are synchronous, so you can get values synchronously (there's a sketch of this pattern below). The Web Components Community Group maintains an interoperable context community protocol: https://github.com/webcomponents-cg/community-protocols/blob...

Suspense: Components that have some pending work, like awaiting data to render, can fire an event to signal that they have pending work. The event can carry a promise, and then a suspense-boundary-like component can handle the event and display a spinner until all the pending work in the tree below it is finished. Another protocol: https://github.com/webcomponents-cg/community-protocols/blob...

Error boundaries: A component can fire an ErrorEvent if it fails to render, and an error boundary component can display the error or some other user affordance.

Child-parent registration: A child component can fire an event to tell some parent that it's available. This is useful for using components as plugins. A <code-mirror> element could have children that provide language support, syntax highlight themes, etc.

Actions: Redux-like actions can be done with events instead. You can build a nice data-down, events-up system this way with very little code and very loose coupling.

Event buses: components can listen for events on a top-level node like document, and they'll receive every event of that type from every other dispatcher.
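
Here's roughly what the context pattern looks like in practice. The real community protocol defines a dedicated ContextRequestEvent class; this sketch approximates it with a plain CustomEvent, so the event name and detail shape here are simplifications, not the exact protocol:

    // Provider: any ancestor can answer a context request and stop it from
    // bubbling further. dispatchEvent is synchronous, so the requester has
    // its value before dispatchEvent even returns.
    document.body.addEventListener("context-request", (e: Event) => {
      const { context, callback } = (e as CustomEvent).detail;
      if (context === "theme") {
        callback("dark");
        e.stopPropagation();
      }
    });

    // Consumer: fire a bubbling, composed event carrying a callback.
    class ThemedButton extends HTMLElement {
      theme = "light";
      connectedCallback() {
        this.dispatchEvent(
          new CustomEvent("context-request", {
            bubbles: true,
            composed: true,
            detail: { context: "theme", callback: (v: string) => (this.theme = v) },
          }),
        );
        this.textContent = `theme: ${this.theme}`; // "theme: dark"
      }
    }
    customElements.define("themed-button", ThemedButton);
    document.body.append(document.createElement("themed-button"));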


There's actually a very good reason to implement a delay in switching submenus.

Recent versions of Apple's human interface guidelines don't make any mention of it, because those decisions are baked into the toolkit and not under control of application designers, but the earlier editions of Apple's guidelines went into some detail about why and how pop-up submenus were delayed.

1995 edition of Macintosh Human Interface Guidelines:

http://interface.free.fr/Archives/Apple_HIGuidelines.pdf

>pp. 79: Hierarchical menus are menus that include a menu item from which a submenu descends. You can offer additional menu item choices without taking up more space in the menu bar by including a submenu in a main menu. When the user drags the pointer through a menu and rests it on a hierarchical menu item, a submenu appears after a brief delay. To indicate that a submenu exists, use a triangle facing right, as shown in Figure 4-36.

The original 1987 version of the Apple Human Interface Guidelines can be checked out from the Internet Archive, and should be required reading for serious user interface designers, the same way that serious art students should contemplate the Mona Lisa, and serious music students should listen to Mozart. Even though it's quite dated, it's a piece of classic historic literature that explicitly explains the important details of the design and the rationale behind it, in a way that modern UI guidelines just gloss over because so much is taken for granted and not under the control of the intended audience (macOS app designers using off-the-shelf menus -vs- people rolling their own menus in HTML, who do need to know about those issues):

Apple Human Interface Guidelines (1987): https://archive.org/details/applehumaninterf00appl

>pp. 87: The delay values enable submenus to function smoothly, without jarring distractions to the user. The submenu delay is the length of time before a submenu appears as the user drags the pointer through a hierarchical menu item. It prevents flashing caused by rapid appearance-disappearance of submenus. The drag delay allows the user to drag diagonally from the submenu title into the submenu, briefly crossing part of the main menu, without the submenu disappearing (which would ordinarily happen when the pointer was dragged into another menu item). This is illustrated in Figure 3-42.

pp. 87: Hierarchical Menus: https://i.imgur.com/RrEDo3m.png

pp. 88: Figure 3-42: Dragging diagonally to a submenu item: https://i.imgur.com/a0gNWHh.png

Others have written about this issue in the context of the web:

Why is there a menu show delay, anyway? https://blogs.msdn.microsoft.com/oldnewthing/20080619-00/?p=...

>I run into this problem all the time on the Web. Web site designers forget to incorporate a menu show delay, resulting in frustration when trying to navigate around them. For example, let's look at the navigation bar on the home page of The Discovery Channel. Hover over TV Shows, and the menu appears. Suppose you want to go to Koppel on Discovery, but instead of moving the mouse straight downward, the way you hold your arm on the desk moves the mouse in an arc that happens to swing to the right before it arcs downward. You touch TV Schedules and your navigation is screwed up. You have to start over and make sure to move the mouse exactly straight down.

You can even solve the problem with CSS and without JavaScript, by using ":hover":

Dropdown Menus with More Forgiving Mouse Movement Paths: https://css-tricks.com/dropdown-menus-with-more-forgiving-mo...

>This is a fairly old UX concept that I haven't heard talked about in a while, but is still relevant in the case of multi-level dropdown menus. A fine-grained pointer like a mouse sometimes has to travel through pretty narrow corridors to accurately get where it needs to in a dropdown menu. It's easy to screw up (have the mouse pointer leave the path) and be penalized by having it close up on you. Perhaps we can make that less frustrating.


You can also use _netdev in the mount options; then the systemd mount generator will add the dependency on the network automatically.

I'll tell you how not to do it.

Require me to give you my contact information just to download something. Have sales people blow up my phone and/or email and ignore polite brush-offs. Keep reaching out to me periodically with requests to have a meeting about how your product can help me.

I don't have buying power, but I do have bitching power and your product will wind up getting bad-mouthed by the whole team eventually. And when the engineer asks us for recommendations, guess what we tell him?

Lookin' at you, Veeam, AWS, and Keyence.


For folks that like old Italian movies: I suggest watching everything with Gian Maria Volontè in it.
