
IMHO skeuomorphic design had a few wins, but also plenty of losses. Sometimes the real world interface is just not as intuitive as it should have been.

But I'm 100% behind you on "make buttons look like buttons" and "don't hide functionality behind arbitrary gestures that you never tell the user". UI designers may hate menus these days, but they were so good for letting a user browse through looking for the thing they want. Search boxes are a good speed improvement, but should never be the only interface object because many times the user doesn't know exactly what they're looking for.

This is also why most voice assistants don't get used very much, there's no easily accessible list of phrases they know and they aren't smart enough to really understand what the person wants, so people end up using the one or two phrases they know the assistant can handle and forget about it otherwise.




> This is also why most voice assistants don't get used very much, there's no easily accessible list of phrases they know and they aren't smart enough to really understand what the person wants, so people end up using the one or two phrases they know the assistant can handle and forget about it otherwise.

Thank you for saying this, you've just made me realise they share all the problems of text adventures while having none of the excitement.


I was actually complaining about this the other day: there is no manual (or even a searchable database) of recognized commands/features. I often discover that something was possible with Google Assistant when the announcement comes that it's being removed.


When you start a timer with Siri, it often announces that you can also tell it to stop the timer by saying stop. This tells me that even the most rudimentary functions of starting and stopping timers have not yet been learned by users. Every time I hear that message I think of how much of a failure this whole thing has been.


Oh timers, you mean the one thing I use daily for cooking, where they changed the recognized phrase between iOS 17 and iOS 18? It used to understand that "notify me in 15 minutes" meant setting a timer. Now it asks what I want to be reminded about so it can add it to the calendar. I have to explicitly say "set a 15-minute timer".

So much for muscle memory (oh, and for conciseness, it's worse in French).

Anyways, that's the prime reason there's no list: either they want to change the commands willy-nilly, or they don't know them because that's whatever the model's learned.


This is why I call these voice "commands" spells. They feel very much like a spell. You have to remember them, and if you don't remember 'em exactly, they don't do what you expect. Siri (and Alexa for that matter) is a big failure. After 12 years of having a voice "assistant" in my pocket, I still don't use it for anything important and/or useful.


>I have to explicitly say "set a 15-minute timer".

Just saying "15 minutes" initiates a timer for that long.


OMG thanks, it also works in French!

This goes even deeper into the "undiscoverable commands" issue at hand.

"notify me in 15 minutes" feels natural and casual, and is how I'd expect to interact with modern voice assistants. "set a 15-minute timer" feels overly formal and redundant (it does not help that in French, a timer is "minuteur", so you repeat the "minutes" sound twice), and is how I'd expect to interact with old voice assistants. This new one is just some hidden trial-and-error discovery deep in Siri, likely added as a shortcut by an engineer who likes cooking.


>This new one is just some hidden trial-and-error discovery deep in Siri, likely added as a shortcut by an engineer who likes cooking.

Among the many shortcomings of Siri is that it seems as if it's not good with verbs. I've learned to avoid them as much as possible. Put another way, it's better with nouns, so I focus on them. I guess that's why

"15-minute timer" and

"15 minutes"

work well. But similarly, I wanted to use the stopwatch the other day. Not something I ever really do. Just saying, "Stopwatch" got it open. And testing now on some non-Apple apps also worked (in case Siri has some built-in pro-Apple bias). One was WhatsApp. The other was an app for an insurance and banking company. That one, just saying its name opened the contact card I have for the company. That's fair. Trying again, and saying "company name app" opened the app.

Of course, sometimes the verbs are necessary. But I've had more success when I could avoid them. Do note that I say all this using a Siri-only phone that is too old for any of Apple Intelligence that may get mixed in with Siri.


It's not a huge deal, but on Google devices, setting a timer is different from setting an alarm. The end result is more or less the same, but it uses different underlying functionality, and I have to remember to say "timer" instead of "alarm" when I'm cooking.


Saying “set an alarm in 15 minutes” vs “set a timer for 15 minutes” to Siri also does different things.


Really? For me both commands set a timer for 15 minutes.


Yep this gets me all the time. The biggest difference is that a timer will be displayed whereas an alarm is in the background. The display is very handy when cooking.


iOS also differentiates between a timer and an alarm.


It’s a disconnect between the vision and the reality. Users shouldn’t have to learn Siri, it should just work every time no matter how you ask as long as it’s understandable to a person.

But the reality is it doesn’t work and users have to specifically learn the few things it can do.


It's a disconnect because we have this vision that language (as commonly spoken, not legalese) is perfectly clear and precise. But the reality is that even two live people who seem to speak the same language will misunderstand each other, including for "basic" things. So how should a computer be able to read your mind, when it most likely doesn't even have the context of where you're from?

Regarding the "notify" vs "timer" thing, I had a very similar experience with a friend. I went to a bakery, and she asked me to get her some kind of pastry. To me, she meant some kind of bread. Cue confused faces on both sides when she asked where her stuff was. Sure, it's still in the broad "baked goods" category, just like a reminder and a timer. This was in France, both of us living in major cities 200 km apart. It's not like some extreme variation of English from the other side of the world.


Theoretically, large language models have enough common sense to understand all variations of natural language commands, and to ask for clarification if they think a request is ambiguous. It's probably not yet feasible to run Siri through an LLM, or at least not through a properly large one (one with the necessary intelligence).


I think we need a word for “buttons look like buttons”, as opposed to “the Contacts app looks like a real-world leather-cladded address book” skeuomorphism. I’m seeing “skeuomorphism” increasingly used for the former, where people mostly mean “not flat design”, whereas originally it meant only the latter.


Ideomorphic seems like it would work for that.

Turns out it's actually already a word: having the proper form or shape —used of minerals whose crystalline growth has not been interfered with

https://www.merriam-webster.com/dictionary/idiomorphic

That seems to fit amazingly well here too.


> I think we need a word for “buttons look like buttons”, as opposed to “the Contacts app looks like a real-world leather-cladded address book” skeuomorphism.

Likely related to https://en.wikipedia.org/wiki/Affordance#As_perceived_action..., but it's a jargon word most tech people and others don't know, and it creates debates about what it means among those that do know it.

I usually say something like it should be obvious it's clickable, or obvious what it does, when it comes up.


"Affordance" is a more general term, not necessarily purely visual, or even visual at all (it can be tactile, or auditory, etc.). It doesn’t denote a particular visual design, and full-blown skeuomorphic elements would also exhibit affordances. But yes, it approaches the heart of the problem.


Signifiers? https://ux.stackexchange.com/questions/94265/whats-the-diffe...

> Affordances are what an object can do (truth). Perceived affordances are what one thinks an object can do (perception). Signifiers make affordances clearer (closing the gap between truth and perception). Signifiers often reduce number of possible interpretations and/or make intended way of using an object more explicit.

> A grey link on the screen might afford clicking (truth). But you might perceive it just as a non-interactive label (perception). Styling it as a button (background, shadow etc.) is a signifier that makes it clearer that the link can be clicked.

I don't think there's any more widely known terms here, and not any used within general tech audiences. I'd like it if there was a useful shorthand too but devs/users/clients are probably going to stick with e.g. "I couldn't tell that was a button" because the above have failed to catch on.

"Visual cues" feels accurate enough. I immediately understand "Buttons should look like buttons".


Thanks. "Signifiers" looks like a perfect fit here, since they are elements that signify their affordances. It should ideally become more mainstream instead of someone inventing a new word.


Yeah, I find it interesting these words haven't become more mainstream though when they've been around for a while, and maybe that ship has sailed. They don't resonate? The definitions are too complex? (they often cause debates) They're not guessable? They don't shorten what you mean enough? ("it should look more like a button" isn't much longer than "it's lacking signifiers" to be worth the jargon) I see people drop "afford/affordance" into replies occasionally but most people don't know what it means and it rarely adds anything.

"Skeuomorphism" has caught on. It's not guessable but then it saves quite a few words so helps with communication. It probably got picked up by some tech news/blog sites and reached critical mass because skeuomorphism vs flat design resonates with people.


This is exactly the problem with Siri - if it were nothing but a vocal command line where I had to memorize exactly how to talk to it, and I could find a list of commands to learn, it'd be 1000x better.
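A minimal sketch of what that "vocal command line" could look like: every phrase the assistant understands lives in an explicit table, and a "help" command lists them all, which is exactly the discoverability real assistants lack. The phrases, function names, and responses here are all invented for illustration; this is not how Siri actually works.

```python
# Hypothetical sketch of a discoverable voice-command registry.
# Every recognized phrase pattern is listed explicitly, so "help"
# can enumerate them - the discoverability real assistants lack.
import re

COMMANDS = {
    r"set a (\d+)[- ]minute timer": lambda m: f"timer set for {m.group(1)} minutes",
    r"(\d+) minutes?": lambda m: f"timer set for {m.group(1)} minutes",
    r"stopwatch": lambda m: "stopwatch opened",
}

def handle(utterance: str) -> str:
    utterance = utterance.strip().lower()
    # "help" lists every known phrase pattern, like a man page.
    if utterance == "help":
        return "\n".join(COMMANDS)
    # Try each registered pattern against the whole utterance.
    for pattern, action in COMMANDS.items():
        m = re.fullmatch(pattern, utterance)
        if m:
            return action(m)
    return "unrecognized command (say 'help' to list known phrases)"
```

With this, `handle("set a 15-minute timer")` and `handle("15 minutes")` both start a timer, while `handle("notify me in 15 minutes")` fails loudly instead of silently doing the wrong thing, and at least tells you how to find the phrases that do work.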


This is similar to WolframAlpha. Theoretically, it can do countless different things, but you wouldn't know about them just from looking at the empty text box. The difference from something like ChatGPT is that ChatGPT can interpret arbitrary commands, even if it can't properly execute them.


I think one thing involved in this is conventions: once you've learned one set of rules for communicating with one form of interface, it transfers to other applications on that interface. If there are established ways to use graphical elements, gestures, console keywords/option flags, or spoken keywords, then even though other applications have the freedom to do their own thing, it should be seen as better not to diverge and reinvent the wheel too much without good reason (otherwise each one needs its own rules to be learned).




