
If you're involved in the hiring process at your org at all, and they ask these types of questions, I'd encourage you to evaluate, as objectively as possible, how much signal they actually provide.

In my experience the signal is pretty strong.

Are you able to share how you evaluated this? Is this based on gut-feeling or is it data-driven?

Gut feeling based on the generally very high competence of my colleagues when I was at Google.

I am glad that you mentioned Google. At this point, their interview process is legendary. It is so good that many other companies have tried to copy it.

> DNS auth would be okish if it was simply tied to a txt entry in the DNS and valid as long as the txt entry is there. Why does LetsEncrypt expire the cert while the acme DNS entry is still there? Which attack vector does this prevent?

An attacker should not gain the ability to persistently issue certificates because they have one-time access to DNS. A non-technical user may not notice that the record has been added.

> Also, why not support file based auth in .well-known/acme-challenge/... for domain wide certs? Which attack vector does that prevent?

Control over a subdomain (or even control over the root-level domain) does not and should not allow certificate issuance for arbitrary subdomains. Consider the case where the root level domain is hosted with a marketing agency that may not follow security best practices. If their web server is compromised, the attacker should not be able to issue certificates for the secure internal web applications hosted on subdomains.


> An attacker should not gain the ability to persistently issue certificates because they have one-time access to DNS.

They wouldn't. As soon as the owner of the domain removes the TXT entry that ability would be gone.


Of course - but that requires the owner to know they were attacked, know the attacker added a TXT verification, potentially overcome fear of deleting it breaking something unexpected, etc.

If the owner does not find out that someone got control of their DNS server, the attacker can do anything with the domain anyhow. Including issuing certs.

Yes, but once that access is revoked, that is enough to be certain that the attacker can no longer issue certs. With your proposal, I would then have to audit my TXT records and delete only attacker-created records.

(Which in general would be a good practice anyway, because many services do use domain validation processes similar to what you propose)


I think the parent commenter would be satisfied if they could authorize their DNS by creating a DNS challenge entry one time, and then continue to renew their certificate as long as that entry still existed.

And I'm sympathetic to the concerns that automating this type of thing is hard - many of the simpler DNS tools, which otherwise more than cover the needs of 90% of users, do not support API control, or come with other compromises when they do.

That said, I do think LE's requirements here are reasonable given how dangerous wildcard certs can be.


> many of the simpler DNS tools -...- do not support API control

That's on the DNS provider in my opinion. They can, if they want to, make things easy and automatic for their customers, but they choose not to. There's a whole list of provider-specific plugins (https://eff-certbot.readthedocs.io/en/stable/using.html#dns-...) with many more unofficial ones available (https://pypi.org/search/?q=certbot-dns-*). Generic ones, like the DirectAdmin one, will work for many web hosts that don't have their own APIs.

If you'd like to stick with whatever domain provider you picked and still want to use Let's Encrypt DNS validation, you can create a CNAME to another domain on a provider that does have API control. For instance, you could grab one of those free webhosting domains (rjst01.weirdfreewebhostthatputsadsinyourhtml.biz) with DirectAdmin access, create a TXT record there, and CNAME the real domain's challenge record to that free web host. Janky, but it'll let you keep using the bespoke, API-less registrar.
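
For example (a minimal sketch; the host names below are made up), the delegation amounts to one static CNAME on the API-less provider pointing the ACME challenge label at a zone you can automate:

    ; zone at the browser-only registrar (set once, by hand)
    _acme-challenge.example.com.  IN  CNAME  _acme-challenge.example.apihost.net.

    ; zone at the API-capable provider (rewritten automatically at each renewal)
    _acme-challenge.example.apihost.net.  IN  TXT  "<token written by the ACME client>"

Let's Encrypt follows the CNAME when validating a DNS-01 challenge, so only the second zone ever needs API access.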

I imagine you could set up a small DNS service offering this kind of DNS access for a modest fee ($1 per year?) just to host API-controllable CNAME DNS validation records. Then again, most of the time the people picking weird, browser-only domain registrars do so because it allows them to save a buck, so unless it's free the service will probably not see much use.


I had to completely turn off notifications for Instagram because none of the provided settings appear to disable the almost-daily "for you" and "trending" notifications. Now I don't get notified when someone DMs me there, which has led to me missing important messages.

Same. And I used to work there, and I raised it with them. They have all their career incentives aligned to getting people to see spammy notifications. I was powerless.

The problem with the user hostility is that, in the long term, people don't use it.

As a web dev I see so many things that are lights-on-nobody-home about Meta. The Meta app on my phone generates numerous notifications; when I get one that says a game that looks really cool is 50% off, clicking on it doesn't send me to the landing page in their app store, it sends me to the senseless home page of the app, which seems to have the message "move on folks, nothing to see here".

The Instagram web application fails to load the first time I load it on my computer and I have to always reload. On either Facebook or Instagram I am always getting harassed by OnlyFans models that want me to engage with them... on the same platform where I engage with my sister-in-law.

When they say they are "careless people" I wonder if they are not just careless about sexual harassment and genocide but careless about making money, because we're in a postcapitalist hell where Zuck couldn't care less about making money for his shareholders but rather gets a squee from sitting behind Trump at his inauguration, and hires people with $100M packages not because he wants them to work with him but because he doesn't want them to work with someone else.


I went through a couple rounds of trying to raise specifically this issue with support before simply uninstalling the app out of principle. They had their chance and burned it.


On Android:

1. your profile icon (bottom right) > hamburger menu (top right) > Notifications > Posts, stories, and comments > turn off ‘Posts suggested for you’ and ‘Notes’

2. on the same screen, set ‘First posts and stories’ to ‘From people I follow’

3. back out to Notifications > Live and reels > turn off ‘Recently uploaded reels’ and ‘Reels suggested for you’

This works for me, but if you’re still getting notifications you don’t want, you’ll have to figure out what category/type they fall under and turn that off.


Thanks for the suggestion. I'm on iOS but the notification settings look the same.

I already had all but one of the settings you mentioned disabled, along with most of the others. I'll report back in a day or two.


> Encryption for 30 years ago? Trivially breakable with quantum

I wouldn't be so sure - quantum computers aren't nearly as effective for symmetric algorithms as they are for pre-quantum asymmetric algorithms.
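
For a rough back-of-the-envelope sketch of why (assuming the usual Grover-vs-Shor framing): quantum search only square-roots the work against a symmetric key, whereas Shor's algorithm breaks RSA/ECC outright.

    # Grover's algorithm searches an n-bit keyspace in roughly 2^(n/2) quantum
    # operations, versus ~2^n classically - a big speedup, but nothing like
    # Shor's polynomial-time factoring of RSA moduli.
    for bits in (128, 192, 256):
        print(f"{bits}-bit key: classical ~2^{bits} ops, quantum search ~2^{bits // 2} ops")

So AES-256 retains roughly 128-bit security even against an idealised quantum adversary.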


I would go as far as saying anyone who mentions quantum computers breaking block encryption doesn’t know what they’re talking about.


Regardless of the parent's statement, just normal compute in 30 years, plus general vulnerabilities and weaknesses discovered, will ensure that anything encrypted today is easily readable in the future.

I can't think of anything from 30 years ago that isn't just a joke today. The same will likely be true by 2050, quantum computing or not. I wonder how many people realise this?

Even if one disagrees with my certainty, I think people should still plan for the concept that there's a strong probability it will be so. Encryption is really not about preventing data exposure, but about delaying it.

Any other view regarding encryption means disappointment.


> I can't think of anything from 30 years ago that isn't just a joke today.

AES is only 3 years shy of 30.

If you used MD5 as a keystream generator I believe that would still be secure and that's 33 years old.

3DES is still pretty secure, isn't it? That's 44 years old.

As for today's data, there's always risk into the future but we've gotten better as making secure algorithms over time and avoiding quantum attacks seems to mostly be a matter of doubling key length. I'd worry more about plain old leaks.


I'll concede your point re: the current status of some encryption. However, there are loads that were compromised.

How do you tell which will fall, and which will succeed in 30 years?

All this said, I just think proper mental framing helps. Considering the value of encrypted data, in 30 years, if it is broken.

In many cases... who cares. In others, it could be unpleasant.


> However, there are loads that were compromised.

There are a lot of interactive systems that have attacks on their key exchange or authentication. And there are hashes that have collision attacks.

But compromises that let you figure out a key that's no longer in use have not been common for a while. And even md5 can't be reversed.

I agree with you about being wary, but I think encryption itself can be one of the stronger links in the chain, even going out 30 years.


30 years ago we already had a good idea of what was secure. Anything considered good 30 years ago (3DES) still is. Anything not considered good has turned out not to be. We don't know what the future will hold, so it is always possible someone will find a major flaw in AES, but as I write this nobody has indicated they are even close.


> normal compute

You are underestimating the exponential possibilities of keys.

> plus general vulnerabilities and weaknesses discovered, will ensure that anything encrypted today is easily readable in the future.

You can't just assume that there are always going to be new vulnerabilities that cause it to be broken. That ignores that people have improved at designing secure cryptography over time.


From a security perspective, I argue you must assume precisely that.

An example being the destruction of sensitive backup media upon retirement, regardless of data encryption.


> I can't think of anything from 30 years ago that isn't just a joke today

The gold standard 30 years ago was PGP. RSA 1024 or 2048 for key exchange. IDEA symmetric cipher.

This combination is, as far as I am aware, still practically cryptographically secure. Though maybe not in another 10 or 20 years. (RSA-1024 is not that far from being factored with classical machines.)


I was wondering exactly how hard factoring RSA-1024 would be today and found this stackexchange answer: https://crypto.stackexchange.com/a/111828

In summary, it estimates the cost at $3.5 billion using commodity hardware, and I'd expect a purpose-built system could bring that cost down by an order of magnitude.


The headline here makes it sound (to me) like Salesforce did the study.


It sure sounds like it in the article:

A team led by Kung-Hsiang Huang, a Salesforce AI researcher, showed that using a new benchmark relying on synthetic data, LLM agents achieve around a 58 percent success rate on tasks that can be completed in a single step without needing follow-up actions or more information.

and

The Salesforce AI Research team argued that existing benchmarks failed to rigorously measure the capabilities or limitations of AI agents, and largely ignored an assessment of their ability to recognize sensitive information and adhere to appropriate data handling protocols.


The article also makes it sound like that. Are you saying they didn't? I don't see any reference in the article to any other organization that could have done the research.

Edit: Unless "Salesforce AI Research" is not a part of Salesforce, I think Salesforce did do the research.


Judging from the comments, most people read it as Salesforce having done the study.


In practice, whether or not this actually works can be very hit-or-miss. We've found several UEFI implementations will not consider a disk bootable if the pMBR doesn't exactly match the spec, which specifies that the 'protective' partition shouldn't be marked as bootable in the MBR partition table.

Meanwhile, other implementations will not consider the disk bootable in BIOS mode if the partition in the pMBR is not marked bootable.
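
As a minimal sketch of where the disagreement lives (a hypothetical helper, not part of any spec tooling): the MBR partition table starts at offset 446 of the first sector, each 16-byte entry begins with a boot flag (0x80 = active), and byte 4 of the entry is the type (0xEE = GPT protective).

    import sys

    # Read the first sector of the disk image named on the command line.
    with open(sys.argv[1], "rb") as f:
        sector0 = f.read(512)

    entry = sector0[446:462]  # first entry of the MBR partition table
    boot_flag, part_type = entry[0], entry[4]
    print(f"boot flag {boot_flag:#04x}, type {part_type:#04x}"
          f" ({'GPT protective' if part_type == 0xEE else 'other'})")

Some firmware insists that flag be clear, as the spec says; some legacy BIOSes refuse to treat the disk as bootable unless it is set.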


Indeed! Which is why the only portable solution is to not do this.

For stuff that needs to be bootable by both BIOS and UEFI the only portable solution is to use MBR, not GPT. That means all legacy BIOS systems will boot it, and so will all UEFI systems since UEFI must support MBR.

For ISOs that need to additionally be booted off of optical media (aka ISOHYBRIDs) the story gets more complicated, but ultimately what you need to take away from that is the same: avoid GPT at all cost.


I wonder… does the UEFI spec mandate any particular behaviour in these cases? Does the UEFI SCT test for it?

Seems like an inconsistency which could be addressed by adding it to the spec and/or test suite.

I guess the other thing I don’t know, is whether there is any actual real world pressure on firmware vendors to pass the test suite.


Let me give you an alternative perspective.

My startup pays Docker for their registry hosting services, for our private registry. However, some of our production machines are not set up to authenticate towards our account, because they are only running public containers.

Because of this change, we now need to either make sure that every machine is authenticated, or take the risk of a production outage in case we do too many pulls at once.

If we had instead simply mirrored everything into a registry at a big cloud provider, we would never have paid docker a cent for the privilege of having unplanned work foisted upon us.


I get why this is annoying.

However, if you are using docker's registry without authentication and you don't want to go through the effort of adding the credentials you already have, you are essentially relying on a free service for production already, which may be pulled any time without prior notice. You are already taking the risk of a production outage. Now it's just formalized that your limit is 10 pulls per IP per hour. I don't really get how this can shift your evaluation from using (and paying for) docker's registry to paying for your own registry. It seems orthogonal to the evaluation itself.


The big problem is that the docker client makes it nearly impossible to audit a large deployment to make sure it’s not accidentally talking to docker hub.

This is by design, according to docker.

I’ve never encountered anyone at any of my employers that wanted to use docker hub for anything other than a one-time download of a base image like Ubuntu or Alpine.

I’ve also never seen a CD deployment that doesn’t repeatedly accidentally pull in a docker hub dependency, and then occasionally have outages because of it.

It’s also a massive security hole.

Fork it.


> This is by design, according to docker.

I have a vague memory of reading something to that effect on their bug tracker, but I always thought the reasoning was ok. IIRC it was something to the effect that the goal was to keep things simple for first-time users. I think that's a disservice to users, because you end up with many refusing to learn how things actually work, but I get the sentiment.

> I’ve also never seen a CD deployment that doesn’t repeatedly accidentally pull in a docker hub dependency, and then occasionally have outages because of it.

There's a point where developers need to take responsibility for some of those issues. The core systems don't prevent anyone from setting up durable build pipelines. Structure the build like this [1]. Set up a local container registry for any images that are required by the build and pull/push those images into a hosted repo. Use a pull through cache so you aren't pulling the same image over the internet 1000 times.

Basically, gate all registry access through something like Nexus. Don't set up the pull through cache as a mirror on local clients. Use a dedicated host name. I use 'xxcr.io' for my local Nexus and set up subdomains for different pull-through upstreams; 'hub.xxcr.io/ubuntu', 'ghcr.xxcr.io/group/project', etc..
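
As a hedged sketch of the pull-through cache piece (paths and port are placeholders): the open-source registry Docker publishes (docker.io/distribution) proxies an upstream via a proxy section in its config, and Nexus/Artifactory expose the same idea through their proxy-repository settings.

    # config.yml for a registry instance acting as a Docker Hub pull-through cache
    version: 0.1
    storage:
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
    proxy:
      remoteurl: https://registry-1.docker.io

Point something like hub.xxcr.io at that instance and the clients never talk to Docker Hub directly.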

Beyond having control over all the build infrastructure, it's also something that would have been considered good netiquette, at least 15-20 years ago. I'm always surprised to see people shocked that free services disappear when the status quo seems to be to ignore efficiency as long as the cost of inefficiency is externalized to a free service somewhere.

1. https://phauer.com/2019/no-fat-jar-in-docker-image/


> I'm always surprised to see people shocked that free services disappear when the status quo seems to be to ignore efficiency as long as the cost of inefficiency is externalized to a free service somewhere.

Same. The “I don’t pay for it, why do I care” attitude is abundant, and it drives me nuts. Don’t bite the hand that feeds you, and make sure, regularly, that you’re not doing that by mistake. Else, you might find the hand biting you back.


Block the DNS if you don’t want dockerhub images. Rewrite it to your artifactory.

This is really not complicated, and you're not entitled to unlimited anonymous usage of any service.


That will most likely fail, since the daemon tries to connect to the registry with SSL and your registry will not have the same SSL certificate as Docker Hub. I don't know if a proxy could solve this.


This is supported in the client/daemon. You configure your client to use a self-hosted registry mirror (e.g. docker.io/distribution or zot) with your own TLS cert (or, insecurely, without one if you must) as a pull-through cache (that's your search keyword). This way it works "automagically", with existing docker.io/ image references now being proxied and cached via your mirror.

You would put this as a separate registry and storage from your actual self-hosted registry of explicitly pushed example.com/ images.

It's an extremely common use-case and well-documented if you try to RTFM instead of just throwing your hands in the air before speculating and posting about how hard or impossible this supposedly is.

You could fall back to DNS rewrite and front with your own trusted CA but I don't think that particular approach is generally advisable given how straightforward a pull-through cache is to set up and operate.
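
Concretely, the client-side piece is small (a sketch; the mirror host name is a placeholder, and note that the daemon's registry-mirrors setting only applies to Docker Hub images):

    {
      "registry-mirrors": ["https://mirror.example.internal"]
    }

That goes in /etc/docker/daemon.json, followed by a daemon restart; docker.io/ references are then resolved through the mirror and fall back to Docker Hub if the mirror is down.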


This is ridiculous.

All the large objects in the OCI world are identified by their cryptographic hash. When you’re pulling things when building a Dockerfile or preparing to run a container, you are doing one of two things:

a) resolving a name (like ubuntu:latest or whatever)

b) downloading an object, possibly a quite large object, by hash

Part b may recurse in the sense that an object can reference other objects by hash.

In a sensible universe, we would describe the things we want to pull by name, pin hashes via a lock file, and download the objects. And the only part that requires any sort of authentication of the server is the resolution of a name that is not in the lockfile to the corresponding hash.

Of course, the tooling doesn’t work like this, there usually aren’t lockfiles, and there is no effort made AFAICT to allow pulling an object with a known hash without dealing with the almost entirely pointless authentication of the source server.
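
Pinning by digest is the closest thing to a lockfile the current tooling offers (a sketch; the digest is a placeholder you would resolve and record yourself), though the pull still goes through the registry's normal auth and rate limits:

    # Dockerfile/compose references can carry the content hash directly; the tag
    # then serves only as documentation and the pull is by digest.
    FROM docker.io/library/ubuntu:24.04@sha256:<digest-you-recorded>

It covers the "download by hash" half; the name-to-hash resolution step is still unsolved without something lockfile-shaped on top.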


Right but then you notice the failing CI job and fix it to correctly pull from your artifact repository. It's definitely doable. We require using an internal repo at my work where we run things like vulnerability scanners.


> since the daemon tries to connect to the registry with SSL

If you rewrite DNS, you should of course also have a custom CA trusted by your container engine as well as appropriate certificates and host configurations for your registry.

You'll always need to take these steps if you want to go the rewrite-DNS path for isolation from external services because some proprietary tool forces you to use those services.


You don't have to run docker. Containerd is available.


It's trivial to audit a large deployment: you look at DNS logs.


This is Infamous Dropbox Comment https://news.ycombinator.com/item?id=9224 energy


They didn't say it's easy to fix, just detect.


Is there no way to operate a caching proxy for docker hub?!


There are quite a few docker registries you can self-host. A lot of them also have a pull-through cache.

Artifactory and Nexus are the two I've used for work. Harbor is also popular.

I can't think of the name right now, but there are some cool projects doing a p2p/distributed type of cache on the nodes directly too.


> I don't really get how this can shift your evaluation from using (and paying for) docker's registry to paying for your own registry

Announcing a new limitation that requires rolling out changes to prod with 1 week notice should absolutely shift your evaluation of whether you should pay for this company's services.


Here's an announcement from September 2024.

https://www.docker.com/blog/november-2024-updated-plans-anno...


You're right, that is "an announcement":

At Docker, our mission is to empower development teams by providing the tools they need to ship secure, high-quality apps — FAST. Over the past few years, we’ve continually added value for our customers, responding to the evolving needs of individual developers and organizations alike. Today, we’re excited to announce significant updates to our Docker subscription plans that will deliver even more value, flexibility, and power to your development workflows.

We’ve listened closely to our community, and the message is clear: Developers want tools that meet their current needs and evolve with new capabilities to meet their future needs.

That’s why we’ve revamped our plans to include access to ALL the tools our most successful customers are leveraging — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud. Our new unified suite makes it easier for development teams to access everything they need under one subscription with included consumption for each new product and the ability to add more as they need it. This gives every paid user full access, including consumption-based options, allowing developers to scale resources as their needs evolve. Whether customers are individual developers, members of small teams, or work in large enterprises, the refreshed Docker Personal, Docker Pro, Docker Team, and Docker Business plans ensure developers have the right tools at their fingertips.

These changes increase access to Docker Hub across the board, bring more value into Docker Desktop, and grant access to the additional value and new capabilities we’ve delivered to development teams over the past few years. From Docker Scout’s advanced security and software supply chain insights to Docker Build Cloud’s productivity-generating cloud build capabilities, Docker provides developers with the tools to build, deploy, and verify applications faster and more efficiently.

Sorry, where in this hyped-up marketing-speak wall of text does it say "WARNING: we are rugging your pulls per IPv4"?


That's some cherry-picking right there. That is a small part of the announcement.

Right at the top of the page it says:

> consumption limits are coming March 1st, 2025.

Then further in the article it says:

> We’re introducing image pull and storage limits for Docker Hub.

Then at the bottom in the summary it says again:

> The Docker Hub plan limits will take effect on March 1, 2025

I think like everyone else is saying here, if you rely on a service for your production environments it is your responsibility to stay up to date on upcoming changes and plan for them appropriately.

If I were using a critical service, paid or otherwise, that said "limits are coming on this date" and it wasn't clear to me what those limits were, I certainly would not sit around waiting to find out. I would proactively investigate and plan for it.


The whole article is PR bs that makes it sound like they are introducing new features in the commercial plans and hiking up their prices accordingly to make up for the additional value of the plans.

I mean just starting with the title:

> Announcing Upgraded Docker Plans: Simpler, More Value, Better Development and Productivity

Wow great it's simpler, more value, better development and productivity!

Then somewhere in the middle of the 1500-word (!) PR fluff there is a paragraph with bullet points:

> With the rollout of our unified suites, we’re also updating our pricing to reflect the additional value. Here’s what’s changing at a high level:

> • Docker Business pricing stays the same but gains the additional value and features announced today.

> • Docker Personal remains — and will always remain — free. This plan will continue to be improved upon as we work to grant access to a container-first approach to software development for all developers.

> • Docker Pro will increase from $5/month to $9/month and Docker Team prices will increase from $9/user/month to $15/user/mo (annual discounts). Docker Business pricing remains the same.

And at that point if you're still reading this bullet point is coming:

> We’re introducing image pull and storage limits for Docker Hub. This will impact less than 3% of accounts, the highest commercial consumers.

Ah cool I guess we'll need to be careful how much storage we use for images pushed to our private registry on Docker Hub and how much we pull them.

Well it's an utter and complete lie because even non-commercial users are affected.

————

This super long article (1500 words) intentionally buries the lede because they are afraid of a backlash. But you can't reasonably say “I told u so” when you only mentioned in a bullet point somewhere in a PR article that there will be limits that impact the top 3% of commercial users, then 4 months later give one week's notice that image pulls will be capped at 10 per hour LOL.

The least they could do is to introduce random pull failures with an increasing probability rate over time until it finally entirely fails. That's what everyone does with deprecated APIs. Some people are in for a big surprise when a production incident will cause all their images to be pulled again which will cascade in an even bigger failure.


None of this takes away from my point that the facts are in the article, if you read it.

If the PR stuff isn't for you, fine, ignore that. Take notes on the parts that do matter to you, and then validate those in whatever way you need to in order to assure the continuity of your business based on how you rely on Docker Hub.

Simply the phrase "consumption limits" should be a pretty clear indicator that you need to dig into that and find out more, if you rely on Docker in production.

I don't get everyone's refusal here to be responsible for their own shit, like Docker owes you some bespoke explanation or solution, when you are using their free tier.

How you chose to interpret the facts they shared, and what assumptions you made, and if you just sat around waiting for these additional details to come out, is on you.

They also link to an FAQ (to be fair we don't know when that was published or updated) with more of a Q&A format and the same information.


It's intentionally buried. The FAQ is significantly different in November; it does say that unauthenticated pulls will experience rate limits, but the documentation for the rate limits given doesn't offer the limit of 10/hour but instead talks about how to authenticate, how to read limits using API, etc.

The snippets about rate limiting give the impression that they're going to be at rates that don't affect most normal use. Lots of docker images have 15 layers; doesn't this mean you can't even pull one of these? In effect, there's not really an unauthenticated service at all anymore.

> “But the plans were on display…”

> “On display? I eventually had to go down to the cellar to find them.”

> “That’s the display department.”

> “With a flashlight.”

> “Ah, well, the lights had probably gone.”

> “So had the stairs.”

> “But look, you found the notice, didn’t you?”

> “Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.”


I'm certainly not trying to argue or challenge anyone's interpretations of motive or assumptions of intent (no matter how silly I find them - we're all entitled to our opinions).

I am saying that when change is coming, particularly ambiguous or unclear change like many people feel this is, it's no one's responsibility but yours to make sure your production systems are not negatively affected by the change.

That can mean everything from confirming data with the platform vendor, to changing platforms if you can't get the assurances you need.

Y'all seem to be fixated on complaining about Docker's motives and behaviour, but none of that fixes a production system that's built on the assumption that these changes aren't happening.


> but none of that fixes a production system that's built on the assumption that these changes aren't happening.

Somebody's going to have the same excuse when Google graveyards GCP. Till this change, was it obvious to anyone that you had to audit every PR fluff piece for major changes to the way Docker does business?


> was it obvious to anyone that you had to audit every PR fluff piece for major changes to the way Docker does business?

You seem(?) to be assuming this PR piece, that first announced the change back in Sept 2024, is the only communication they put out until this latest one?

That's not an assumption I would make, but to each their own.


Sure, but at least those of us reading this thread have learned this lesson and will be prepared. Right?


Oh definitely.

This isn't exactly the same lesson, but I swore off Docker and friends ages ago, and I'm a bit allergic to all not-in-house dependencies for reasons like this. They always cost more than you think, so I like to think carefully before adopting them.


“But Mr Dent, the plans have been available in the local planning office for the last nine months.”

“Oh yes, well as soon as I heard I went straight round to see them, yesterday afternoon. You hadn’t exactly gone out of your way to call attention to them, had you? I mean, like actually telling anybody or anything.”

“But the plans were on display …”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a flashlight.”

“Ah, well the lights had probably gone.”

“So had the stairs.”

“But look, you found the notice didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.”


> I don't get everyone's refusal here to be responsible for their own shit

No kidding. Clashes with the “gotta hustle always” culture, I guess.

Or it means that they can’t hide their four full-time jobs from each of the four employers as easily while they fix this at all four places at the same time.

The “I am owed free services” mentality needs to be shot in the face at close range.


Documentation on usage and limits from December 2024.

https://web.archive.org/web/20241213195423/https://docs.dock...

Here's the January 21st 2025 copy that includes the 10/HR limit.

https://web.archive.org/web/20250122190034/https://docs.dock...

The Pricing FAQ goes back further to December 12th 2024 and includes the 10/HR limit.

https://web.archive.org/web/20241212102929/https://www.docke...

I haven't gone through my emails, but I assume there was email communication somewhere along the way. It's safe to assume there's been a good 2-3 months of communication, though it may not have been as granular or targeted as some would have liked.


Hey, can I have your services for free because I also feel entitled?


I mean, there has never not been some issue with Docker Desktop that I have to remember to work around. We're all just collectively cargo culting that Docker containers are "the way" and putting up with these troubles is the price to pay.


If you offer a service, you have some responsibility towards your users. One of those responsibilities is to give enough notice about changes. IMO, this change doesn't provide enough notice. Why not make it a year, or at least a couple of months? Probably because they don't want people to have enough notice to force their hand.


> Why not making it a year, or at least a couple of months?

Announced in September 2024: https://www.docker.com/blog/november-2024-updated-plans-anno...

At least 6 months of notice.


I use docker a few times a week and didn’t see that. Nor would I have seen this if I hadn’t opened HN today. Not exactly great notice.


If you had an account you'd have received an email back in September 2024. I received one…


This does not announce the 10 (or 40) pulls/hr/IP limit.


Didn't they institute more modest limits some time ago? Doesn't really seem like this is out of nowhere.


Yes they have. They are reducing the quota further.


They altered the deal. Pray they don't alter it any further.


s/Pray/Pay/


What principle are you basing that responsibility on?

I have a blog, do I have to give my readers notice before I turn off the service because I can't afford the next hosting charge?

Isn't this almost exclusively going to affect engineers? Isn't it more the engineer's responsibility not to allow their mission-critical software to have such a fragile single point of failure?

> Probably because they don't want people to have enough notice to force their hand.

He says without evidence, assuming bad faith.


You don't. You have responsibility towards your owners/shareholders. You only have to worry about your customers if they are going to leave. Non-paying users, not so much - you're just cutting costs now that ZIRP isn't a thing.


If this were a public company I would put on my tin foil hat and believe that it's a quick-buck scheme to boost CEO pay. A short-sighted action that is not in the shareholders' interest. But I guess that's not the case? Who knows...


At this stage of the product lifecycle, free users are unlikely to ever give you money without some further "incentives". This shouldn't be news by now, especially on HN.

If your production service is relying on a free tier someone else provides, you must have some business continuity built in. These are not philanthropic organisations.


It's a bait and switch with the stakes of "adopt our new policy, that makes us money, that you never signed up for, or your business fails." That's a gun to the head.

Not an acceptable interaction. This will be the end of Docker Hub if they don't walk back.


To think of malice is mistaken. It's incompetence.

Docker doesn't know how to monetize.


Yes. But they are paying for this bandwidth, authenticated or not. This is just busy work, and I highly doubt it will make much of a difference. They should probably just charge more.


They charge more.


> take the risk of a production outage in case we do too many pulls at once.

And the exact time you have some production emergency is probably the exact time you have a lot of containers being pulled as every node rolls forward/back rapidly...

And then docker.io rate limits you and suddenly your 10 minute outage becomes a 1 hour outage whilst someone plays a wild goose chase trying to track down every docker hub reference and point it at some local mirror/cache.


I mean, don’t build your production environment to rely on some other company’s free tier, and then act surprised when they throttle high usage.

And yes, you’re still using the free tier even if you pay them, if your usage doesn’t have any connection to your paid account.


Companies that change their free tiers also change their paid tiers.

I just don’t build my environment to rely on unstable companies.

That's sort of the comedy of second-order effects: by reducing the amount of free stuff, I think Docker will end up reducing their paid customers.


> If we had instead simply mirrored everything into a registry at a big cloud provider, we would never have paid docker a cent for the privilege of having unplanned work foisted upon us.

Indeed, you’d be paying the big cloud provider instead, most likely more than you pay today. Go figure.


If you're using popular images, they're probably free.

https://gallery.ecr.aws/docker/?page=1


Please reread the comment I replied to.


They should have provided more notice. Your case is simply prioritizing work that you would have wanted to complete anyway. As a paying customer you could check if your unauthenticated requests can go via specific outbound IP addresses that they can then whitelist? I’m not sure but they may be inclined to provide exceptions for paying customers - hopefully.


> Your case is simply prioritizing work that you would have wanted to complete anyway

It's busy-work that provides no business benefit, but-for our supplier's problems.

> specific outbound IP addresses that they can then whitelist

And then we have an on-going burden of making sure the list is kept up to date. Too risky, IMO.


> It's busy-work that provides no business benefit, but-for our supplier's problems.

I dunno, if I were paying for a particular quality-of-service I'd want my requests authenticated so I can make claims if that QoS is breached. Relying on public pulls negates that.

Making sure you can hold your suppliers to contract terms is basic due diligence.


It is a trade-off. For many services I would absolutely agree with you, but for hosting public open-source binaries, well, that really should just work, and there's value in keeping our infrastructure simpler.


This was announced last year.


This sounds like its only talking about authenticated pulls:

> We’re introducing image pull and storage limits for Docker Hub. This will impact less than 3% of accounts, the highest commercial consumers. For many of our Docker Team and Docker Business customers with Service Accounts, the new higher image pull limits will eliminate previously incurred fees.


Go look at the wayback for the exact page the OP is linking to.


So it goes. You're a business, pay to make the changes. It's a business expense. Docker ain't doing anything that their agreements/licenses say they can't do.

It's not fair, people shout. Neither are second homes when people don't even have their first but that doesn't seem to be a popular opinion on here.


Devsec/ops guy here, the fact that you were pulling public images at all ever is the thing that is insane to me.


Why? We are running the exact same images that we would be mirroring into and pulling from our private registry if we were doing that, pinned to the sha256sum.


You can set up your own registry. You're complaining about now having to do your own IT.

This isn't a counterpoint, it's rewrapping the same point: free services for commercial enterprises are a counterproductive business plan.


How can you make Docker pull debian:latest from your own registry instead of the official Docker registry, without explicitly specifying <my_registry>/debian:latest?



> If we had instead simply mirrored everything into a registry at a big cloud provider

You would have had to authenticate to access that repo as well.


Amazon ECR for instance provides the option to host a public registry.


> Data transferred out from public repositories is limited by source IP when an AWS account is not used.

https://aws.amazon.com/ecr/pricing/?nc1=h_ls

> For unauthenticated customers, Amazon ECR Public supports up to 500GB of data per month.

https://docs.aws.amazon.com/AmazonECR/latest/public/public-s...

I don't see how it's better.


`mirror.gcr.io` works fine for many popular images on Docker Hub.


Wouldn't they get a choice as to what type of authentication they want to use then? I'd assume they could limit access in multiple ways, vs just the dockerhub way.


I just cannot imagine going out in public and saying roughly the equivalent of "I want free unlimited bandwidth because I'm too lazy to do the very basics of managing my own infra."

> If we had instead simply mirrored everything into a registry at a big cloud provider, we would never have paid docker a cent for the privilege of having unplanned work foisted upon us.

I mean, if one is unwilling to bother to login to docker on their boxes, is this really even an actual option? Hm.


> mirrored everything into a registry at a big cloud provider

https://cloud.google.com/artifact-registry/docs/pull-cached-...


You might try complaining and see if they give you an extension.


> When I first ran into this issue back in 2017, I posted in the React issue tracker that I had ”fixed” my app by blocking translation entirely.

Please do not do this! In almost every instance I've encountered severe Translate-related broken-ness, it's still worked well enough to get me a snapshot of the current page translated. Fighting through this is still less cumbersome than the alternatives.

> The only alternative solution that I can think of, is to implement your own localization within your app (i.e. internationalization)

I will add, please make sure that language is an independent setting, and not derived from locale! I sometimes have to use translate on sites that have my preferred language available, but won't show it to me because it's tied to locale and that changes other things that I don't want, like currency.

On one such site I used a browser extension to rewrite the request for the language strings file.


> I will add, please make sure that language is an independent setting, and not derived from locale!

Websites already have exactly what they need to provide you with the language you want via the Accept-Language header your browser sends. In your browser's settings you can configure a list of languages (country-specific if desired) which gets sent with every request.

E.g.,:

    en-GB
    en
    nl
(Prefer British English, fall back to any English, and if neither is available use Dutch.)

This is already entirely separate from your OS locale! Although it will default to filling it in with that locale's language if you don't configure it yourself of course.

This should be the primary way to decide upon a language, but in addition to that offering a way to switch languages for a specific site on that site itself is a user-friendly gesture appreciated by many.
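
For what it's worth, honouring the header is also cheap to do; a minimal sketch (ignoring wildcards and malformed input) that orders the tags by their q-values:

    # Order the tags in an Accept-Language value by q (default 1.0), e.g.
    # "en-GB,en;q=0.9,nl;q=0.8" -> ["en-GB", "en", "nl"].
    def parse_accept_language(header: str) -> list[str]:
        entries = []
        for part in header.split(","):
            tag, _, params = part.strip().partition(";")
            params = params.strip()
            q = float(params[2:]) if params.startswith("q=") else 1.0
            entries.append((q, tag))
        return [tag for q, tag in sorted(entries, key=lambda e: -e[0])]

The negotiation step is then just picking the first tag your site actually has translations for.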


This is not true. E.g., Safari is tied to the OS settings, Firefox has some dependencies regarding the locale of the first install, etc…

Moreover, probably most people speak or can read more than a single language. There may be reasons for accessing a site in a particular language other than the standard locale.

Please empower users to make their own choice! Do not assume to know better.


For example, when the translation is shit and you prefer to use the English one because the one in your language is impossible to understand.


This does not help in many many situations.

I am a Hongkonger: I natively speak Cantonese, am fluent in English, and am learning Japanese.

If I go to Google I want English UI and prioritise traditional Chinese result then English then simplified Chinese.

On the other hand if I go to a Japanese website, I don’t want them to translate for me, just display the original Japanese will be fine. Unless I toggle.

This kind of complex setup can never be achieved if we don't have a per-site locale policy. And seriously, a toggle per site is easier than navigating three levels deep into the browser settings page.


> A toggle per site is easier than navigate three level deep into browser setting page.

I don't disagree with your overall point, that flexibility is useful for website visitors, but your statements requires asking the question: "easier for whom?"

Certainly relying on Accept-Language is significantly easier for the website maintainer. And overall it would be a lot easier if the small handful of web browser maintainers added saner settings (even a per-website Accept-Language toggle), than if we were to require the thousands (tens of thousands? millions?) of multi-language website developers to provide their own toggle. Not to mention having a standardized way to do manage this would be better for users than having to discover each website's language toggle UI.

But sure, we don't have those easy-to-use browser settings, so it's (unfortunately) up to every website developer to solve this problem, over and over and over.

(As an aside, it would be cool if websites could return a hypothetically-standardized Available-Languages header in their responses so browsers could display the appropriate per-website UI, with only the supported languages.)


The problem is when you understand multiple languages.

If a website is made by an English speaking team, as I understand English I'd like it to be English first and not a possibly broken French version. If a website is developed with French language first I'd like to have it in French and not a second-rate English translation.


> In your browser's settings you can configure a list of languages (country-specific if desired) which get send with every request.

Customising this list at all makes your browser fingerprint thousands of times less common than it was before you did this, and many websites you visit could then probably uniquely identify you as the same user across all of your sessions.


That and a thousand other things. A highly privacy focussed browser could offer to enable this setting only on whitelisted websites (and send 'en' plus a bunch of random language codes on others).


> 'en' plus a bunch of random language codes

This is an easier fingerprint signal than just not sending anything.


Not if the random part of the list changes with each request.

There are two ways to defeat fingerprinting: 1) make everyone look exactly the same (pretty difficult to do), or 2) introduce enough noise and randomness into the fingerprinting signals of each request so that each person looks like many different people (much easier to do).


> This should be the primary way to decide upon a language

Google developers are very intelligent, but not intelligent enough to understand this.


They probably understand it just fine. Someone higher-up has just over-ruled them. There may even be a good reason for it, but because of the way companies work, we will probably never find out what it is.


Almost no website uses this, even big ones like Google who insist on showing me pages in German rather than English or French.


On the other hand, sometimes the ads that are shown are in German. Easier to mentally filter out.


> that changes other things that I don't want, like currency.

Oh god Google is so bad at this. They don't even let me change the currency in many cases when looking at hotels (yes on the website; not in the Google Maps app)


Google is so ridiculously bad at this, when an account that only ever uses English is explicitly asking for English search results, but happens to be located in Thailand, it will give you English results, but use the Thai calendar to display years, which is 543 years ahead of the Gregorian calendar. Are there any people at all who expect to read English text but expect to see the year 2568 instead of the year 2025, when no part of their system or account is configured for Thai?


I'd have no issue leaving translation enabled, if the translator was an optional feature that the user must opt-in to, and that's clearly communicated as something not controlled by the developer.

But I've received reports from Edge-users that didn't even know translation was enabled.


Yeah, I agree that's problematic. And I would have no objection to implementing a UI feature that displayed a warning banner of some kind if it detected that the page had been translated.


When you say locale, you mean your current location, e.g. as detected by geoip?


A locale is a combination of several things, including a language, but not only.

E.g. I'm from portugal. I'm visiting an american site, which does not have professional portuguese translations, but does have auto-generated ones.

I don't like the auto-generated ones and can read english just fine, so I want to have the language set to english (en-US in this case).

However, I still want it to apply some locale-specific things from Portugal, e.g.:

- Units (Metric vs. Imperial vs. Whatever mess countries like the UK do)

- Date formatting (DD/MM/YYYY vs MM/DD/YYYY)

- Time formatting (AM/PM vs. 24hr clock)

- Currency formatting (10€ vs. 10 € vs. 10 EUR vs. €10)

- Number formatting (10,000.00 vs 10.000,00)

- When the week starts (Monday vs. Sunday)

If you take a look at the windows locale options, it mostly lets you mix-and-match whatever you want (which is great! Now if only the MS apps let me stop using the localized keyboard shortcuts...): https://learn.microsoft.com/en-us/globalization/locale/langu...
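
As an illustration of that mix-and-match (a sketch assuming the third-party Babel library; exact output depends on its CLDR data), you can keep English UI text while applying pt_PT formatting conventions:

    from datetime import date
    from babel.dates import format_date
    from babel.numbers import format_currency, format_decimal

    # English-language UI text, Portuguese (pt_PT) formatting conventions.
    print(format_decimal(10000.5, locale="pt_PT"))        # pt_PT grouping/decimal separators
    print(format_currency(10, "EUR", locale="pt_PT"))     # euro amount formatted for pt_PT
    print(format_date(date(2025, 3, 1), locale="pt_PT"))  # day-first date ordering

On the web, Intl.NumberFormat and Intl.DateTimeFormat give you the same split client-side.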


Locale I'm using as a shorthand for "the bundle of variables that your service or business needs to tweak between customers in different markets". It may determine things like currency, date/time formatting, or the relevant regulatory framework. My argument is that language should always be settable independently of the other variables locale controls.

For an example of a site that almost gets it right, see https://www.finnair.com/ . You are first prompted to set location, and then language. I say "almost" because although they will allow you to select English in any market, they won't allow you to select any offered language in any market.

In comparison, at https://www.flysas.com/ you get one dropdown which sets market, currency, and language in one go.


Sometimes, but not always. Sometimes it is also based on the locales in your browser.


It means system/browser settings like the one available in navigator.language.


I was recently looking for an article I remember reading a bit over a year ago. I could even remember some exact phrases that appeared. I tried to find it on Google for more than 10 minutes, ultimately to no avail. I then went looking through chat histories and was able to find where I had shared it to someone.

I relayed this story to a friend who suggested I try Kagi. It was on the first page on my first attempt. I was also able to use it to find a different article I was sure I read even longer ago, that I didn't have as clear memory of.


I'm not sure what your positive experience has to do with GPs negative experience. Is the implication that it negates it? Because it doesn't.

