
I keep being tempted to write the same post but titled "Does all software work like shit now?", because I swear, this is not just Apple. Software in general feels buggier, as if that's the new norm.

Most websites have an element that won't load on the first try, or a button that sometimes needs to be clicked twice because the first click did nothing.

The Amazon shopping app needs two taps every now and then, because the first one didn't do what it was supposed to do. It has been like that for at least three years.

Spotify randomly stops syncing play status with its TV app. Been true for at least a year.

The HBO app has had the subtitles for one of my shows out of sync for more than a year.

Games, including AAA titles, need a few months of post-release fixes before they stabilize and stop having things jerk themselves into the sky or something.

My robot vacuum app just hangs forever once in a while and needs to be killed to work again, takes 10+ seconds after launch to begin responding to taps, and it has been like that for the two-plus years I've owned the device.

Safari has had a bug where, if you open a new tab and type "search term" too quickly, it opens the URL http://search%20term instead of doing a Google search. I opened a bug for that eight years ago, it was closed as a duplicate, and I just recently hit it again.
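My guess at the mechanism (pure speculation on my part, not Safari's actual code) is a race where the typed text gets percent-encoded before the "URL or search?" heuristic runs, so the space disappears and the input suddenly looks like a hostname. A minimal TypeScript sketch of that failure mode, with a made-up resolveInput function:

    // Hypothetical address-bar heuristic, for illustration only.
    // A naive "no whitespace means it's navigable" check handles
    // "search term" correctly, but if the space has already been turned
    // into %20 by an earlier step, the same input slips through as a URL.
    function resolveInput(raw: string): string {
      const input = raw.trim();
      if (!/\s/.test(input)) {
        // Treated as something navigable.
        return input.includes("://") ? input : "http://" + input;
      }
      // Treated as a search query.
      return "https://www.google.com/search?q=" + encodeURIComponent(input);
    }

    console.log(resolveInput("search term"));   // a Google search, as expected
    console.log(resolveInput("search%20term")); // http://search%20term, the bug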

It really seems that the bar for "ready for production" is way lower now. At my first job 13+ years ago, if any QA noticed any of the above, the next version wouldn't go out until it was fixed. Today, if a "Refresh" button or restarting the app fixes it: approved, green light, release it.




Something I found annoying at a previous big-tech job was how the focus on top-level metrics (read, revenue-linked metrics) meant we couldn't fix things.

There were a lot of smart people, very interested in fixing things— not only because engineers tend to like fixing things, but also because we, and everyone around us, were users too.

For example, many things related to text input were broken on the site. Korean was apparently quite unusable. I wanted to fix it. A Korean manager in a core web team wanted to fix it. But we couldn't because the incentive structures dictated we should focus on other things.

It was only after a couple years, and developing a metric that linked text-input work with top-level (read, revenue-linked) metrics, that we were able to work on fixing these issues.

I find a lot of value in the effort to make incentives objective, but at a company that was already worth half a trillion dollars at the time, I just always felt there could be more room for caring about users and the product beyond the effects on the bottom-line.


This is exactly the problem. Hyper efficient (or at least trying to be) businesses have no room for craftsmanship. If you take the time to make quality software, you’ll be left behind by someone who doesn’t. Unfortunately the market doesn’t care, and therefore efficient businesses don’t either.

The only solution I know of is to have a business that’s small enough and controlled by internal forces (e.g. a founder who cares) to pay attention to craftsmanship.


You're implying that buggy software has no impact on the bottom line. I'm not so sure. Users weigh the availability of features against the quality of features. Getting bugs fixed is not necessarily the highest priority for users either. It's a trade-off.

Our use of Microsoft 365 is a pretty good example of that. I moved our company to Microsoft 365 because it had some features we wanted. Then I moved the company off Microsoft 365 because it turned out to be too buggy to be useful.

I realise that the actual users of software are not necessarily the same people making the purchasing decisions. But if productivity suffers and support costs rise, the consequences of choosing low-quality software eventually filter through to purchasing decisions.


Even if buggy software has an impact on the bottom line, managers can keep pretending it doesn't and not allocate any budget to fixing it. They assume bug fixes will somehow get squeezed in between the work they really value: new features, or better yet, completely new projects. Creating something new (asking developers to create it) is the easiest way for a manager to get a promotion. It has been many years since I last saw a manager (one with the power to set priorities, not just translate them from above) who pays more than lip service to quality and cares about maintenance.


> You're implying that buggy software has no impact on the bottom line. I'm not so sure. Users weigh the availability of features against the quality of features.

The problem is that managers / those that determine priorities don't get the numbers, they don't see a measurable impact of buggy software. There are only two signals for that: one is automated error reporting, which depends on an error actually being generated by the bug, and the other is user reports, but only a small fraction of users will actually bother to file them.

I think this is a benefit of open source software, as developers are more likely to provide feedback. But even then you have some software packages that are so complex and convoluted that bugs emerge as combinations of many different factors (I'm thinking of VS Code with its plugins as an example), and the bug report itself becomes a huge effort.


>The problem is that managers / those that determine priorities don't get the numbers, they don't see a measurable impact of buggy software.

I don't believe that. IT departments have to support users. Users complain and request support. It costs money and it affects productivity and everybody knows it.

But that's not enough. You would also have to believe that there are significantly less buggy alternatives and that the difference justifies the cost of switching. For big companies that is an incredibly high bar.

But small companies do dump software providers like my company dumped Microsoft.

[Edit] Ah, I think I misunderstood. You're looking at it from the software provider's perspective rather than the user organisation's. Got it.


> You're implying that buggy software has no impact on the bottom line. I'm not so sure.

The problem is that very little competition exists for computer operating systems. Apple, Google, and Microsoft collectively control nearly all of the consumer OS market share on both desktop and mobile. Thus, macOS just needs to be "better than Windows", and iOS just needs to be "better than Android".

> Then I moved the company off Microsoft 365 because it turned out to be too buggy to be useful.

What did you move to?

In general, Microsoft 365 is extremely successful, despite any bugs. There doesn't appear to be any imminent danger of financial failure.

Software vendors also face tradeoffs, engineering hours spent on fixing bugs vs. writing new features. From a bean counter's perspective, they can often live with the bugs.


> In general, Microsoft 365 is extremely successful, despite any bugs.

That's because of some very heavy-handed, monopolistic, anti-consumer behavior from Microsoft within their ecosystem.


> You're implying that buggy software has no impact on the bottom line.

I'm not implying that, and I don't think my manager was implying that either. I think rather there were 2 things going on:

1. It's often hard to connect bug-fixing to metrics.

A specific feature change can easily be linked with an increase in sales, or an increase in usage. It's much harder to measure the impact of a bugfix. How can you measure how many people are _not_ churning thanks to a change you pushed? How can you claim an increase in sales is due to a bugfix?

In your case, I'm sure some team at Microsoft has a dashboard that was updated the minute you used one of these features you bought Microsoft 365 for. How could you build something similar for a bugfix?

Bugfixes don't tend to make the line go up quickly. If they make the line go up, it's often a slow trickle of regained users that's hard to attribute to the bugfixes alone. Usually you're trying to measure not an increase but a "not decrease", which, when it's possible at all, is tricky at best. The impact is intuitively clear to anyone who uses the software, but hard to show on a graph. (One rough way to try anyway is sketched at the end of this comment.)

2. A ruthless prioritization of the most clearly impactful work.

I wouldn't have minded working on something less clearly measurable that I nonetheless thought was important. But my manager cares, because their performance is an aggregate of all the measurable things the team has worked on. And their manager cares, and so on and so forth.

So at the end of the day, in broad strokes, unless the very top (which tends to be much more disconnected from triage and edge-cases) "doesn't mind" spending time on less measurable things like bugfixing, said bugfixing will be incentivized against.

I think we all know this impacts the bottom line. Everyone knows people prefer to use software that is not buggy. But a combination of "knowing is not enough, you have to show it" and "don't work on what you know, you have to prioritize work on what is shown" makes for active disincentivizing of bug-fixing work.
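To make the "not decrease" point concrete: the only way I know of to get a number at all is to treat the bugfix like any other experiment and hold it back from a small control group, then compare retention between the two arms. That has real costs (you are deliberately leaving some users on the buggy build), and to be clear this is my own sketch rather than something my team actually did. Made-up numbers, in TypeScript:

    // Hypothetical holdback analysis for a bugfix: ship the fix to most
    // users, keep a small random control group on the old (buggy) build,
    // then compare 30-day retention. All names and numbers are invented.
    interface Cohort {
      users: number;     // users assigned to this arm
      retained: number;  // users still active after 30 days
    }

    const retention = (c: Cohort): number => c.retained / c.users;

    const withFix: Cohort  = { users: 950_000, retained: 788_500 }; // 83%
    const holdback: Cohort = { users: 50_000, retained: 40_500 };   // 81%

    const lift = retention(withFix) - retention(holdback); // ~0.02
    console.log(`Retention lift attributable to the fix: ${(lift * 100).toFixed(1)} points`);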


> first job 13+ years ago any QA...

Such QA jobs no longer exist. Ever since the software dev world moved to doing one's own QA during development, software has been consistently worse in quality. Maybe there's a correlation there!


The problem is Agile. Not the way it was intended at some point, but the way it has become through Agile consultants and SAFe. Also the fact that it's become the default for any project and that Waterfall has become a bad word.

Companies abuse Agile so they don't have to plan or think about stuff anymore. In the past decade, I haven't worked in (or seen) a single team that had more than two weeks of work prepared and designed. This leads to something built four weeks ago needing a massive refactor, because we only just realized we would be building something conflicting.

That refactor never happens though, because it takes too much time, so we just find a way to slap the new feature on top of the old one. That then leads to a spaghetti mess and every small change introduces a ton of (un)expected issues.

Sometimes I wish we could just think about stuff for a couple of months with a team of designers before actually starting a multi-year project.

Of course, this way of working is great when you don't know what you'll be building, in an innovative start-up that might pivot 8 times before finding product-market fit. But that's not what many of us in big corp and gov are doing, yet we're using the same process.


This, 100%. Agile (properly done, for whatever value of “proper“ you choose) is fine for websites, apps, consumer facing stuff. For things that must work, in predictable fashion, for years, it’s often inappropriate.

OS work is somewhere in between, but definitely more towards the latter category.


I couldn’t agree more. I’ve had literal conversations with tech leads who say “no, we aren’t going to talk about database design, we’re agile”.

Not even architecture is discussed properly, under the guise of being agile; supposedly it'll come about by itself.

Absolute insanity.


The underlying cause of this is online software updates. Knowing you can fix bugs any time removes the release date as _the_ deadline for fixing all egregious bugs. And so the backlog of bugs keeps growing.


The backlog is down to management and priorities, not testing per se.


Depends where you look. There's been a QA process in all the (agile, some very forward-thinking) teams I've worked with for the last decade. That QA might be being done by other devs, but it's always been there.


You’re not wrong. I’ve assumed it’s a side effect of the way the industry deals with career advancement. If you’re an engineer or middle manager, you aren’t going to get a promotion or bonus if you say “we took feature X and made it more stable without introducing any new functionality”. The industry seems to favor adding new features regardless of quality so the teams that do it can stand out and make it look like they’re innovating. This isn’t how it has to be: if companies would recognize that better doesn’t necessarily mean “more stuff” or “change”, then people could get rewarded for improving quality of what already exists.


I remember software working really badly in the early 2000s, when Microsoft had an unassailable monopoly over everything. Then there were a bunch of changes: Windows started getting better with Windows 7, Firefox and then Chrome started being usable instead of IE, and Google and Apple products were generally a huge breath of fresh air.

Since then, Google and Apple products have become just as bad as Microsoft's. I think this is because the industry has moved towards an oligopoly where no one is really challenging the big players anymore, just like Microsoft in the late 1990s. The big companies compete with each other, but in oblique ways that go after revenue not users.


Few things manage to make me as angry as a link (even if shown in the form of a button) which does not open in a new background tab when clicked with the MMB.

Preloading selected results in background tabs and then closing the main tab, so that I can iterate through the results of each clicked item per tab, is simply so much more efficient than entering a page, hitting back, entering the next, hitting back, ...

Like the items in Twitter's Explore page.
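For what it's worth, the usual culprit (speaking generally; I haven't looked at Twitter's code) is a "link" built as a plain element with a JavaScript click handler instead of a real anchor, so there is no href for the browser to open in a background tab on middle click. A rough TypeScript sketch of the difference, with a placeholder URL:

    // Anti-pattern: a "link" that navigates from a click handler. A middle
    // click fires "auxclick" rather than "click", so this handler never runs,
    // and there is no href the browser could open in a background tab anyway.
    const fakeLink = document.createElement("div");
    fakeLink.textContent = "Open item";
    fakeLink.addEventListener("click", () => {
      window.location.href = "/item?id=123"; // placeholder target
    });

    // Browser-friendly version: a real anchor with an href. Middle click,
    // ctrl/cmd-click and "Open Link in New Tab" all work with no extra code,
    // and a handler can still intercept plain left clicks if needed.
    const realLink = document.createElement("a");
    realLink.href = "/item?id=123"; // placeholder target
    realLink.textContent = "Open item";
    document.body.append(fakeLink, realLink);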


Frequently, getting the desired result requires ‹‹right-click -> New tab››. Trying to remember which sites require me to do that, and which ones work properly, is difficult. And every once in a while, some other git makes the link respond to right-click too, so you get the new tab _and_ lose the original page. )-:


>which does not open in a new background tab when clicked with the MMB.

Which you notice because your page scrolls up wildly as you move to click on what should be the new tab


I think the financial cost of these bugs is pretty low and the cost to employ people to fix all of them is pretty high. Everywhere I’ve worked, there is a huge backlog of known issues that are agreed upon that we probably just won’t ever get to them. And we certainly aren’t going to hire new people to solve them. It’s probably because the systems we build are getting way overcomplex due to feature piling and promotion seeking complex projects to show off. If these bugs were trivial to solve, they wouldn’t exist. The fact is, these are pernicious bugs because of how complicated everything is.

I actually got penalized in my last performance review because something I shipped “wasn’t that technically complicated”. I was flabbergasted because I consider it my job to make things simpler, not harder to reason about. But you don’t get promotions for simple.


It's true. One example I can give is how Gmail used to automatically recognise flights and hotel bookings and add them to calendar.

It was suddenly completely broken and stopped working a few years ago. I tried every setting to try to get it working but couldn't.

I feel like a stone age caveman having to manually type everything into my Google calendar.

There are a lot of people raising the same issue in Google forums, but it's not fixed yet.

Ironically, they are adding new Gemini AI features to Gmail, but those can't do this either.


With regards to Google Flights, I seem to recall that there was some European Digital Markets Act occurrence. Google decided to comply with it in a malicious fashion.


Ironically, Linux desktop environments have never been so robust.

As much as I dislike systemd, if this is the reason, then I retract everything negative I ever said.


It's hard to argue that systemd isn't a part of modern Linux robustness! It's not the only way it could have been done, but the more declarative model is absolutely better than shell script exit codes. Daemons don't have to worry about double-fork. User-level services are incredibly valuable.


Seconded about the desktops: currently loving KDE Plasma over here. Less sure about systemd.


Every once in a while I think „There is no public bugtracker for closed source software — wouldn’t it be great to have something like Github issues, but for all the software that is developed behind closed doors?“

Like, at least we'd have a central place to vent about the exact same stuff you just listed, and who knows, in the best case at least some companies might feel shamed into picking up the issues with the most upvotes, or see it as a chance to engage with their user base more directly.

Or I'm naïve and the most likely outcome is getting sued?

What do you think?


I think the risk is that unless people think that reporting a bug there might actually cause it to be fixed, few will bother to report bugs and you'll end up with mostly people just venting, thus perpetuating the cycle.


>Safari has had a bug where, if you open a new tab and type "search term" too quickly, it opens the URL http://search%20term instead of doing a Google search. I opened a bug for that eight years ago, it was closed as a duplicate, and I just recently hit it again.

While WebKit might have had some much-needed improvements in the past few years, it is still behind Blink and Gecko. Safari, the browser itself, has been awful for the past 10 years, at least on desktop. And some of these are not WebKit issues, because other WebKit browsers handle them better.

The address bar is by far the worst compared to Chrome's (the Omnibox) and Firefox's (I believe it used to be called the Awesome Bar). I have experienced the same bug you mentioned, and I believe I filed it even earlier.

Opening Bookmarks with too many items has continued to pause and jank for 11 years now.

Tab Overview continues to re-render all the tabs, causing paging and kernel_task CPU spikes. My kernel_task is currently at 80TB written after 240 days of uptime. That is 333GB of writes per day, simply killing the SSD.

And no Tab Sleeping.

Apple just doesn't give a fuck any more about their software.


My gripe is that iCloud Tabs haven’t worked right for years. Everything else that syncs in Safari works perfectly fine: tab groups, bookmarks, reading list. But iCloud Tabs, the feature that shows what you have open on other devices, is always either empty or showing things I had open literally months ago.


It works for me, but randomly stops working. And I have seen that iCloud Tabs issue before. I think logging out and logging back in would fix it, but that caused another issue that I can't remember now.

Basically the whole thing with Sync is very fickle.

On another note, Safari somehow doesn't work well when you have over 128 Tabs.


> "Does all software work like shit now?", because I swear, this is not just Apple. Software in general feels more bugged as a new norm.

I think this is just the result of an optimizing game that places profit above all else (including quality and user satisfaction), which is indeed the norm in this late stage of capitalism. You want to opt out of that? Good thing the GPL opened the way by placing human freedoms front and center, and not-for-profit software stacks like KDE (for instance) keep getting better and better over time.

I use commercial OSes at work by obligation, and the turning point from which my experience as a user became better served by free software happened many years ago.


Maybe we should introduce Mean Time Between Annoyance (MTBA).

Many of my appliances (dishwasher, coffee maker, …) work just fine for weeks before an annoyance pops up („deep clean“, for example). Many of my applications do not. For most I could measure MTBA in minutes. Definitely with Spotlight.


Don't get me started on Google Home. It was working good-ish for years. Lately it has started to respond with "sorry, I didn't understand" no matter what I ask, then happily doing it the second time I ask. It became unreliable, which is ironic because I could build this tool myself now in a 24h hackathon using basic OpenAI/Anthropic APIs.



I mean, just consider TVs alone (thankfully I do not use one): it takes a while for a modern, new TV to start up, whereas old TVs started immediately. I told my grandma to press the button and wait a bit before trying to press the button again.


Am I the only one who is satisfied with Mac OS X? I use Windows from time to time and as far as I can tell it is much worse when it comes to random updates and UI quirkiness.


Mac OS X is fine; that would be Snow Leopard, for example =)

macOS, on the other hand, is getting worse. I can definitely concur that Spotlight is getting more and more useless. Time Machine as well: it mostly doesn't work for me, always breaking, hanging…


You can be happy until you're hitting a bug that severely impedes your workflow. And then you might feel annoyed when they refuse to fix it for years, and there's no recourse because it's closed software.


Generally I am pretty happy with macOS and I still believe it to be the best option for a desktop. Where I'm getting frustrated is the increasingly locked-down nature of the OS. I get that it's for security, and that's fine for my dad, but it's starting to get in the way of me doing my work.

So when you already start feeling like the operating system is preventing you from doing the things you need to do, all the small cosmetic flaws seem much more in your face.



