
> Apple’s always done their best work when they’re the second mover.

People say Apple does its best work as a “second mover,” but that misses the actual pattern: Apple builds great products when leadership is solving their own problems.

The Mac, iPod, iPhone, and iPad weren’t just refinements of existing products. They were devices Steve Jobs personally wanted to use and couldn’t find elsewhere. The man saw the GUI at Xerox and realized that anyone could use a computer without memorizing arcane commands, so he drove the development of the Mac. He was using a shitty mobile phone, saw the opportunity, and had the iPhone developed. Same with the early Apple Watch (the first post-Jobs new product line), which reflected Jony Ive’s fashion ambitions; once he left, it evolved into what current leadership actually uses: a high-end fitness tracker.

The stagnation we’re seeing now isn’t about Apple losing its “second-mover magic.” It’s that leadership doesn’t feel an unmet need that demands a new device. Neither Vision Pro, Siri, Apple Intelligence, nor even macOS itself appears to be a product the execs themselves rely on deeply anymore, and it shows. Apple excels when it scratches its own itch, and right now it doesn’t seem to have one.


I think this is an interesting take that really reflects how saturated the wider problem space of society has become. Much of the stuff that we could potentially need, we already have. It will be interesting to see which new products released to the market in the next ten or so years substantially change the way we use technology.


I understand the feeling. There is a huge asymmetry between individual contributors and huge profitable companies.

But I think a frame shift that might help is that you're not actually donating your time to LMAX (or whoever). You're instead contributing to make software that you've already benefited from become better. Any open source library represents many developer-years of work that you've benefited from and are using for free. When you contribute back, you're participating in an exchange that started when you first used their library, not making a one-way donation.

> They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.

This can easily be flipped: you wouldn't have contributed if their software didn't add value to your life first and so you should pay them to use Disruptor.

Neither framing quite captures what's happening. You're not in an exchange with LMAX; you're maintaining a commons you're already part of. You wouldn't feel taken advantage of when you reshelve a book properly at a public library, so why feel bad about this?


Like Alan Kay said about software: Simple things should be simple, complex things should be possible.

The thing is, this takes a lot of resources to get right. FOSS developers simply don't have the wherewithal - money, inclination, or taste - to do this. So, by default, there are no simple things. Everything's complex, everything needs training. And this is okay, because the main users of FOSS software are others of a similar bent to the developers themselves.


For complex things there's CLI. For even more complex things there are programming languages.


I think we're returning to CLIs mostly because typing remains one of the fastest ways we can communicate with our computers. The traditional limitation was that CLIs required users to know exactly what they wanted the computer to do, which meant learning all the commands, flags, etc.

GUIs emerged to make it easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.

Now that technology has gotten to a point where our computers can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for the future where we evolve interfaces into things we previously only dreamt of.


It’s less rigid than a command line but much less predictable than either a CLI or a GUI, with the slightest variation in phrasing sometimes producing very different results even on the same model.

Particularly when you throw in agentic capabilities, where it can feel like a roll of the dice whether the LLM decides to use a special-purpose tool or just wings it and spits out its probabilistic best guess.


True, the unpredictability sucks right now. We're in a transition stage where models can understand intent but cannot reliably constrain their output to some executable space.

The bridge would come from layering natural language interfaces on top of deterministic backends that actually do the tool calling. We already have models fine-tuned to generate output conforming to JSON schemas. MCP is a good example of this kind of thing: it discovers tools and how to use them.
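
Something like this is what I mean (all names below are made up for illustration; this isn't MCP's actual protocol, just the general pattern of a model that only emits JSON tool calls while deterministic code does the work):

```typescript
// Illustrative sketch only: the model's entire job is to emit JSON naming a
// declared tool; a deterministic dispatcher validates it and runs real code.
type ToolCall = { tool: string; args: Record<string, unknown> };

// The deterministic backend: tested code the model can only invoke by name.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  list_files: (args) => `<ls output for ${args.dir}>`,
  read_file: (args) => `<contents of ${args.path}>`,
};

function dispatch(modelOutput: string): string {
  let call: ToolCall;
  try {
    call = JSON.parse(modelOutput); // model fine-tuned to emit schema-conforming JSON
  } catch {
    throw new Error("Model output wasn't valid JSON; refuse to act on it");
  }
  const impl = tools[call.tool];
  if (!impl) throw new Error(`Unknown tool: ${call.tool}`); // only declared tools run
  return impl(call.args);
}

// "what's in my home directory?" -> the model emits:
console.log(dispatch('{"tool": "list_files", "args": {"dir": "~"}}'));
```

The point is that the probabilistic part never touches the filesystem; it can only pick from a menu of deterministic operations.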

Of course, the real bottleneck is running a model capable of this locally. I can't run any of the models actually capable of this on a typical machine. Till then, we're effectively digital serfs.


I think this piece makes a fair and important point about LLM hype and the need to treat it as a normal technology rather than a cult movement. The over-the-top marketing and constant “AI will change everything” drumbeat can definitely obscure the more grounded, practical ways it can be used day-to-day.

That said, every major technology wave has needed a similar level of push, hype, and momentum to reach mass adoption. The Internet existed for decades before the public knew what to do with it. AOL gave it a huge push with “You’ve got mail”, endless free trial CDs, and an almost manic drive to bring it into homes before it became the foundation of modern life. The same was true of personal computers: early machines like the Apple II or IBM PC were expensive, clunky, and had little practical software. But without the evangelism, marketing, and cultural hype that surrounded them, the entire ecosystem might never have matured. So while the AI frenzy can feel excessive, some level of over-excitement may be what turns the technology from niche tools into something broadly accessible and transformative, just as it did for the web and the PC before it.


> every major technology wave has needed a similar level of push, hype, and momentum to reach mass adoption

People were standing in line for the first iPhone. Gmail had a waiting list. Tesla sold EVs far faster than they could make them.

On the other hand, I now literally have AI icons blinking in several apps, begging to be used. This isn't the regular marketing push of a brand-new product; it's companies desperately trying to justify their billions of dollars of sunk costs by bolting AI onto their existing products.


Part of the reason for the blinking icon begging to be used is that AI chat is a new interface being grafted onto existing products. It's difficult to get people into the habit of using a different interface to a familiar product. That's why Google can get people to use AI by injecting the response above the search results: it silently hijacks some existing muscle memory. Facebook Messenger, on the other hand, throws a Meta AI icon in the lower-right corner because they can't figure out a natural way to create discovery that isn't intrusive.


That mass adoption has brought in the normalisation of automated surveillance and attention farming, and has arguably lowered people’s tolerance of “the other”. I’m currently not convinced it’s a net positive. Perhaps things would have gone better if the adoption had been slower.


I think this is exactly the right intuition. People hopelessly underestimate the human tendency to do nothing. We have this idea that if an innovation is good enough it should “sell itself”, and that’s almost never true because, across all organizations, it’s almost always safer to do nothing, adopt nothing, and keep doing what you’re doing.

No one gets fired for suggesting no change.

It takes a special level of hype where “doing nothing” is no longer the sensible choice.

Do I wish this hype were spread around to other technologies that are also awesome? Of course. I’d love to help someone figure out a way to do that, but as of now we don’t know how. Humans are very bad at holding two different ideas in their heads.


But we don't need to do anything. We don't need AI and so we don't need a push for it. If AI is just a "normal" technology that has some legitimate uses, it doesn't need a huge boost, it doesn't need any hype at all. It can just be slowly discovered and used by the people who have a legitimate use for it. Doing nothing is often a good move.


> technology that has some legitimate uses, it doesn't need a huge boost

That’s what I’m disagreeing with. “Legitimate uses” isn’t something just hanging out in the ether waiting to attach itself to useful technology; it happens via a grinding sales process and big, industry-wide cultural changes.

People don’t like change.

I think AI and its knock-on effects in robotics will bring massive productivity boosts in industries where productivity has been lagging for years. It will take decades and multiple boom-bust cycles to drag the population into change, but it’ll happen.


I guess what I'd say is that if that grinding sales process and those industry-wide cultural changes have all the negative effects we're seeing with AI, then we shouldn't make that trade. There is simply no urgent need to adopt AI, and the frenzied push to adopt it anyway is actively harmful.


“If people keep stabbing each other with knives then we shouldn’t make that trade just to be able to continue to cut vegetables at home”

Tale as old as time itself.

“But on balance it’s a negative!!”

By what measurement? That’s simply a measurement of your own personal information bubble.


What makes you think that it's only CEOs who look like they're part of a hive mind? It's got nothing to do with capitalism or its "creepy agents". It's simply the human condition. It's literally company/human see, company/human do. One company/management loudly trumpets whatever its position is and others simply follow because they think it's either "industry standard" or somehow convenient to them. That's all it is - people trying to be safe.

Let me give you a technical parallel. A couple of engineers/architects from Big Co., which is hugely successful, leave and go to Hot Startup. There they proselytize their One True Way because, honestly, that's all they know. Everybody in Hot Startup goes along with it because they are Senior Engineers from Big Co. who are now plotting the course, and Big Co. is HUGE, so they must know what they're talking about. Now, because Hot Startup is suddenly using the One True Way, everybody else in the market tries to copy them, because that's obviously why Hot Startup is Hot. This leads to a job market where people optimize for things used by Hot Startup. This tilts the skill set of the general tech market towards the One True Way, making it gospel to a lot of people. So hiring managers who don't know the first thing about anything suddenly start optimizing for One True techs and asking for 20 years of experience with React. They think they're doing the safe thing by using the same tech stack used by everybody else - the industry standard. Never mind that the "industry standard" changes every time it's convenient.

This is the same thing for CEOs. Oh you're having a slightly down quarter and have to answer to investors? Say you're using AI. That's the in-thing and will give you that bump to ride out the quarter. You screwed up in 2021-22 and hired a fuckton of people who are just sitting on their hands costing the company money? Say AI and get rid of them because they're not productive. It's got nothing to do with collusion or anything like that. It's just that people have mismatched expectations and things happen downstream of these unmanaged expectations.


> This tilts the skill set of the general tech market towards the One True Way making it gospel to a lot of people

That lot of people literally cannot get hired anywhere but startups because everyone else isn't so naive

> things happen downstream of these unmanaged expectations

It sounds like a metric fuckton of people need to retire or get out of the way already if they can't set expectations despite being in the exact position where they should be able to talk to multiple audiences

None of these excuses appeases the investors or the heads-down employees. Shit will have to change sooner rather than later. Many factors will make it so. This is exactly what defines a tech bubble.


Sure, plenty of incompetence out there, but nobody wakes up thinking they’re doing wrong. They overpromise, they play it safe. Sometimes it works, sometimes it doesn’t.

As Picard says, "It is possible to commit no mistakes and still lose. That is not a weakness; that is life."


We have a very different understanding of what it means to make a mistake


> Governments have no right or authority to stop us from being idiots if we want to be idiots.

This sounds fine in theory but it ignores the fact that gambling today isn’t just about individuals making free choices in a vacuum. There’s an active, systemic push to get people hooked. Millions (billions?) are spent on ads, algorithms, and dark patterns designed to keep them that way. That's not freedom - that's exploitation.

With modern tech like gambling apps on your phone, 24/7 internet access, and social media tie-ins, the problem multiplies. You don't have to go to a casino when you have one in your pocket. The same tricks that make people lose hours on TikTok are being weaponized to make them lose their money.

Freedom matters. But if the entire system is engineered to trap people in endless dopamine hits, then society has to step in. Not to ban choice, but to create a framework that tilts people away from predatory addiction loops and toward things that actually build resilience and meaning. Otherwise “freedom” just becomes another word for “you’re on your own while other people drain you dry.”


This article https://www.rosloto.net/en/how-vs-and-big-tech-investments-a... is very much on point — it shows how VC funds and global tech giants like Google, Amazon, Meta, and Apple are directly or indirectly investing in the online casino market


Those same traps and dark patterns you mentioned are used well beyond gambling. Those things themselves should be regulated, not gambling. You're arguing against something unrelated to the topic here. It's fine to trap and addict people into gambling, and then punish them for getting addicted and trapped? How does that make sense? Let people gamble if they want, but ban malicious and hostile practices of capitalism. Whether it is gambling, shopping addiction, social media addiction, porn, or political influence campaigns, the practices and trappings are the same and should be heavily regulated. No argument there.


Honestly, it would be great if it were "Gemma in Chrome" instead.

A local model capable enough to do the things that this is designed to do? Yes please.

Gemini in Chrome is a way to increase adoption. Gemma in Chrome is an innovation - a platform that allows developers to build stuff leveraging the local model. A step closer to a world where we can talk to our computers and have them do what we mean instead of what we say.
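
To make that concrete, here's the kind of thing I'm imagining (every name below is hypothetical; this is a wish, not a real Chrome API):

```typescript
// Imagined "Gemma in Chrome" platform API: a web app prompting a local,
// built-in model with no network round trip and no data leaving the machine.
// None of these names exist today; they are purely illustrative.
async function summarizeSelection(): Promise<string> {
  const text = window.getSelection()?.toString() ?? "";
  const session = await (window as any).localModel.createSession({
    model: "gemma-nano", // hypothetical identifier for the bundled local model
    temperature: 0.2,
  });
  return session.prompt(`Summarize this in one sentence:\n${text}`);
}

summarizeSelection().then(console.log);
```

A capable local model behind an API like that would let every web app offer private, offline language features for free.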


There are really two separate issues here:

A) It should be harder for non-technical users to accidentally install apps designed to harm them.

B) It should also be possible for anyone to run whatever code they want on hardware they own.

Both can be true, and platforms should support both. Ultimately, it is up to each platform to decide what it wants to allow and how it protects its users.

I get why Android is tightening controls: plenty of people install shady APKs they get from random websites or Telegram/WhatsApp groups and get burned. But forcing developers to register with Google isn’t the answer. If I want to run a hobby project on my own phone, I for sure shouldn't have to jump through bureaucratic hoops.

The thing is that Google already has the mechanism to protect users: the Play Store. The real problem is that its review process is weak and flooded with low-quality and malicious apps. Fixing that would do far more good than punishing independent developers. They also don't want to invite antitrust scrutiny by actually prioritizing the Play Store and saying that you shouldn't trust an app from a random Chinese app store.

If Google wants to make Android safer, step one should be cleaning up the Play Store. Step two is making that the obvious, prioritized channel. Only after that should they even think about playing Big Brother.


Just as in the linked article, these two statements make it pretty clear:

> A) It should be harder for non-technical users to accidentally install apps designed to harm them.

> B) It should also be possible for anyone to run whatever code they want on hardware they own

Require something in the neighborhood of:

C) It should be possible to prevent people who can run whatever they want from wanting* to intentionally or accidentally install apps designed to harm them; or, where these harms are either not harmful or are reversible.

If you consider things that help with (C), and apply this principle — “Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." — then a lot of the iOS/iPadOS developer and app ecosystem can be understood as positive intentionality around flavors of C.

* By being scammed, persuaded, misled, confused, coerced, etc.


I often come across tips/pointers/exhortations about how to write good design documents. I generally agree that it's an important step: not just for clarifying your own thinking, but also for communicating effectively with others.

However, what these types of posts often lack are concrete examples of what a good design document actually looks like. I understand that many of these documents are proprietary and intended for internal use. Still, are there any examples of well-written design documents available publicly that learners can study to get a clearer idea of what one should look like?


> However, what these types of posts often lack are concrete examples of what a good design document actually looks like

The entire post is an application of his document-design philosophy. That becomes obvious with his first header being "Goal" and his advice to set a goal early on.


This should grow from within yourself. Read a lot of design docs and blog posts and articles and books. Which ones did you like? Which were confusing? And why was that? Was it the language complexity? Length of explanations? Diagrams? Too much / too little reasoning? Did you feel that the writer did a good job of picking you up where you were and bringing you to a new state, now enriched with their idea?

Constantly ask yourself whether you liked a particular piece of writing, and that will over time shape your understanding of what's good and what's not. Note that this isn't entirely objectively quantifiable, and people will have different tastes. That's also why it's hard to have a "good examples" archive: just like with code, it would immediately get people debating. But there is a certain core of properties most people can agree on.


Somewhere in some comment antirez said he writes design documents for his projects before he writes a single line of code. You can browse his GitHub projects or google “antirez design documents” or “antirez specification”.

https://github.com/antirez


Yes, this is also a question that comes to my mind. Where are the design docs from the past 50 years of software development? There must be something concrete for people to study and learn from.


I've been doing this stuff for 40 years and spent the first 20 or so looking for design documents (having been asked to write many). Eventually I realized there are none. At least almost none, and very few that existed prior to the related software being written.


The AWS Lambda PR/FAQ was released late last year - worth a read

https://www.allthingsdistributed.com/2024/11/aws-lambda-turn...


Also not a design document?


The problem with design documents is that they require maintenance, and that maintenance costs more effort than it saves.


What kind of maintenance do you mean?

A design can evolve over time, but a design document's objective is to document what was going to be built at that time. If something changes, make a new design document. (Similar to blog posts or news articles, they also don't evolve over the years. You write a new one.)

It sounds like what you mean is system documentation, a handbook of sorts, and that's what needs maintenance. But that's different from a design doc.


> What kind of maintenance do you mean?

> If something changes, make a new design document.

This one.


Without fail, people who say this really mean "I am unable or unwilling to put in the hard work at the design stage to resolve uncertainty and will instead push these problems downstream to the development process where hopefully no-one will remember I'm responsible for the ensuing mess".


Yes, that's correct. In other words, my cost-benefit analysis concluded that shipping fast and iterating later is a much better strategy than spending countless man-hours on plans and meetings in order to deliver a product that is perfect from a technical perspective but misses both the timing and the market needs. I don't understand this fetishization of "perfect code" when experience shows again and again that for most use cases, the correct approach is to ship fast and fix later. We're not sending a spaceship to Mars, we're making an AI-based social media app; either we ship this week or next week Facebook will launch its own version, which will make our product completely irrelevant, and being first to release a feature is the only way to capture a statistically significant part of the market. As long as the most common use case works we're grand; if 80% of the app doesn't work that's fine, because 80% of the users only use the main 20% of the app, and the focus is on making sure that this works correctly.


> I don't understand this fetishization of "perfect code" when experience shows again and again that for most use cases, the correct approach is to ship fast and fix later.

You're conflating two separate issues, because design documents help you ship faster and fix later. They do this by making you think about what you're building, thereby allowing you and downstream consumers of the plan to focus on a smaller and more valuable set of goals.

Your approach is the opposite - building by gut feel and ignoring the available data. That leads to wasted effort and elongated development cycles.

> my cost-benefit analysis

You don't plan, so you don't know your costs or benefits.

