Hacker News | arghwhat's comments

With the disclaimer that I'm comparing against my memory of some entry-level cameras, I would still say that it's way too noisy.

Even on old, entry-level APS-C cameras, ISO1600 is normally very usable. What is rendered here at ISO1600 feels more like the "get the picture at any cost" levels of ISO, which on those limited cameras would be something like ISO6400+.

Heck, the original pictures (there is one for each aperture setting) are taken at ISO640 (Canon EOS 5D Mark II at 67mm)!

(Granted, many are too allergic to noise and end up missing a picture instead of just taking the noisy one which is a shame, but that's another story entirely.)


Noise depends a lot on the actual amount of light hitting the sensor per unit of time, which is not really a part of the simulation here. ISO 1600 has been quite usable in daylight for a very long time; at night it's a somewhat different story.

The amount and appearance of noise also heavily depends on whether you're looking at a RAW image before noise processing or a cooked JPEG. Noise reduction is really good these days but you might be surprised by what files from even a modern camera look like before any processing.

That said, I do think the simulation here exaggerates the effect of noise for clarity. (It also appears to be about six years old.)


The kind of noise also makes a huge difference. Chroma noise looks like ugly splotches of colour, whereas luma noise can add positively to the character of the image. Fortunately humans are less sensitive to chroma resolution so denoising can be done more aggressively in the ab channels of Lab space.
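
As a minimal sketch of that idea in Go (assuming the image has already been converted to Lab and is stored as img[y][x][c] with c = 0 for L, 1 for a, 2 for b; the box blur and the radii are illustrative, not tuned values):

```go
package denoise

// boxBlurChannel applies a simple box blur of the given radius to one
// channel of a Lab image stored as img[y][x][c].
func boxBlurChannel(img [][][3]float64, c, radius int) {
	h := len(img)
	if h == 0 {
		return
	}
	w := len(img[0])
	out := make([][]float64, h)
	for y := 0; y < h; y++ {
		out[y] = make([]float64, w)
		for x := 0; x < w; x++ {
			sum, n := 0.0, 0
			for dy := -radius; dy <= radius; dy++ {
				for dx := -radius; dx <= radius; dx++ {
					yy, xx := y+dy, x+dx
					if yy >= 0 && yy < h && xx >= 0 && xx < w {
						sum += img[yy][xx][c]
						n++
					}
				}
			}
			out[y][x] = sum / float64(n)
		}
	}
	for y := 0; y < h; y++ {
		for x := 0; x < w; x++ {
			img[y][x][c] = out[y][x]
		}
	}
}

// DenoiseLab blurs the chroma channels (a, b) much harder than luma (L),
// trading chroma resolution - which we perceive poorly - for smoother
// colour, while keeping most of the detail in L.
func DenoiseLab(img [][][3]float64) {
	boxBlurChannel(img, 0, 1) // light touch on luma
	boxBlurChannel(img, 1, 3) // aggressive on a
	boxBlurChannel(img, 2, 3) // aggressive on b
}
```

Real denoisers use edge-aware filters rather than a box blur, but the asymmetry between the channels is the point.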

Yes, this simulation exaggerates a lot. Either that, or contains a tiny crop of a larger image.


Yeah, I don't think that it's easy to reproduce noise (if it was, noise reduction would be even better). Also, bokeh/depth of field. That's not so easy to reproduce (although AI may change that).

Rather than a moat of details, it's first-mover advantage. Anyone can run a credit card network, but merchants and banks need to support them. Many others exist, but the issue is that they don't have widespread adoption. Solutions that work already exist, which means the lesser-supported alternative is not widely used, which again reduces the incentive for wider adoption...

Regulation changes "why bother" to "oh crap".


Yup. Once this is built, if adoption is lacking, it's not hard to imagine the EU making it the standard payment option.

The real outcome is mostly a change in workflow and a reasonable increase in throughput. There might be a 10x or even 100x increase in creation of tiny tools or apps (yay to another 1000 budget assistant/egg timer/etc. apps on the app/play store), but hardly something one would notice.

To be honest, I think the surrounding paragraph lumps together all anti-AI sentiments.

For example, there is a big difference between "all AI output is slop" (which is objectively false) and "AI enables sloppy people to do sloppy work" (which is objectively true), and there's a whole spectrum.

What bugs me personally is not at all my own usage of these tools, but the increase in workload caused by other people using these tools to drown me in nonsensical garbage. In recent months, the extra workload has far exceeded my own productivity gains.

For the non-technical, imagine a hypochondriac using ChatGPT to generate hundreds of pages of "health analysis" that they then hand to their doctor, expecting a thorough read and a considered opinion, vs. the doctor using ChatGPT as a sparring partner on a particular issue.


>people using these tools to drown me in nonsensical garbage

https://en.wikipedia.org/wiki/Brandolini%27s_law

>The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.


> Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.

For signed payloads there is no difference, you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key. For package managers that usually only means trusting gpg - at the very least no less trustworthy than the many TLS and HTTP libraries out there.
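
As a rough sketch of that flow (using Go's standard crypto/ed25519 purely for illustration rather than gpg; a real package manager's key handling and signature format are more involved):

```go
package fetchverify

import (
	"crypto/ed25519"
	"errors"
	"io"
	"net/http"
)

// fetch downloads a URL and returns the raw bytes without interpreting them.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

// fetchVerified only hands the payload to the caller after the detached
// signature validates against a publisher key shipped with the client.
func fetchVerified(payloadURL, sigURL string, publisherKey ed25519.PublicKey) ([]byte, error) {
	payload, err := fetch(payloadURL)
	if err != nil {
		return nil, err
	}
	sig, err := fetch(sigURL)
	if err != nil {
		return nil, err
	}
	if !ed25519.Verify(publisherKey, payload, sig) {
		return nil, errors.New("signature mismatch: refusing to parse payload")
	}
	return payload, nil // only now is it safe to unpack/parse the blob
}
```

The point is simply ordering: the payload is fetched as opaque bytes, and nothing attempts to interpret it until the signature verifies against a key that shipped with the client rather than with the download.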


> For signed payloads there is no difference, you're trusting <client>'s authentication code to read a blob, a signature and validate it according to a public key.

Assuming this all came through unencrypted HTTP:

- you're also trusting that the client's HTTP stack is parsing HTTP content correctly

- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.


> you're also trusting that the client's HTTP stack is parsing HTTP content correctly

This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack on top of its HTTP stack.

For technical reasons, unencrypted HTTP in practice also always means the simpler (and, for bulk transfers, more performant) HTTP/1.1, as standard HTTP/2 effectively requires TLS and the special cleartext variant ("h2c") is not as commonly supported.

> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.

> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

You don't. Authentication 101 (which also applies to how TLS itself works): authenticity is always validated before inspecting or interacting with content. These are the same rules TLS has to follow when it authenticates its own messages.

Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).

> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

You don't, as the signature must come from a trusted author (the specific maintainer of the specific package, for example). The server or attacker is unable to craft valid signatures, so anything "tacked on" just gets rejected as invalid - just as if you messed with a TLS message.

> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.

The basis of your trust is invalid and misplaced: not only does TLS provide no additional security here, it is also the more complex, fragile and historically vulnerable beast.

The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you an old version of everything your mirrors serve (which is still valid and signed by the maintainers), withholding an update without you knowing. But such a MITM can also just fail your connection to a TLS mirror, and then you also can't update, so no: it's just privacy.


> HTTP/1.1 alone is a trivial protocol

Eh? CWE-444 would beg to differ: https://cwe.mitre.org/data/definitions/444.html

https://http1mustdie.com/

> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.

An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.


You seem to have forgotten all the critical TLS bugs we had. Heartbleed ring a bell?

> An attacker doesn't get to attack client's HTTP stack without first piercing protection offered by TLS.

You misunderstand: this means more attack surface.

The attacker can mess with the far more complex and fragile TLS stack, and any attacker controlling a server or server payload can also attack the HTTP stack.

Have you recently inspected who owns and operates every single mirror in the mirror list? None of these are trusted by you or by the distro, they're just random third parties - the trust is solely in the package and index signatures of the content they're mirroring.

I'm not suggesting not using HTTPS, but it is just objectively wrong to consider it to have reduced your attack surface. At the same time, most of its security guarantees are insufficient and useless for this particular task, so in this case the trade-off is solely privacy in exchange for complexity.


That was a long time ago and it was specific to one implementation. In comparison GnuPG has had so many critical vulnerabilities even recently. That's why Apt switched to Sequoia.

Modern TLS stacks are far from fragile, especially in comparison to PGP implementations. It's a significant reduction in attack surface when it's a MITM we're talking about.

Malicious mirrors remain a problem, but having TLS in the mix doesn't make it more dangerous. Potential issues with PGP, HTTP and Apt's own logic are just so much more likely.


If you believe TLS is more fragile than PGP and plain HTTP, then I have reason to believe you have never looked at any of those wire protocols/file formats and the logic required.

Adding TLS in front of HTTP when talking to an untrusted third-party server (and yes, any standard HTTPS server is untrusted in this context) can only ever increase your attack surface. The only scenario where it reduces the attack surface is if you are connected with certificate pinning to a trusted server implementation serving only trusted payloads, and neither is the case for a package repo - that's why we have file signatures in the first place.


I have implemented parts of all three. I doubt you have.

> Adding TLS in front of HTTP when talking to an untrusted third-party server, can only ever increase your attack surface.

No, against a MITM it instantly subtracts the surface inside the TLS from the equation. Which is the entire point.

> [...] that's why we have file signatures in the first place.

You still don't understand that even before the cryptographic operations done to verify the signatures, you have all those other layers. Layers that are complex to implement, easy to misinterpret and repeatedly, to this day, found flawed. PGP is so terrible no serious cryptographer even bothers looking at it in this day and age.

I start getting the feeling that you're involved in keeping the package repositories stuck in the past. I can't wait for yet another Apt bug where some MITM causes problems yet again.


> I start getting the feeling that you're involved in keeping the package repositories stuck in the past.

I start getting the feeling that you have no actual experience in threat modelling.


If you don't trust the HTTP client not to do something stupid, this all applies to HTTPS too. Plus, they can also bork the SSL verification phase, or skip it altogether.

TLS stacks are generally significantly harder targets than HTTP ones. It's absolutely possible to use one incorrectly, but then we should also count all the ways you can misuse an HTTP stack - there are a lot more of those.

This statement makes no sense: TLS is a complicated protocol whose implementations have had massive and quite public security issues, while HTTPS means you have both protocols and need to deal with a TLS server feeding you malicious HTTP responses.

Having to harden two protocol implementations, vs. hardening just one of those.

(Having set up letsencrypt to get a valid certificate does not mean that the server is not malicious.)


TLS may be complicated for some people, but unlike HTTP it even has formally verified implementations. You can't say the same about HTTP, PGP or Apt.

> Having to harden two protocol implementations, vs. hardening just one of those.

We're speaking of a MITM here. In that case no, you don't have to harden both. (Even if you did have to, ain't nobody taking on OpenSSL before all the rest, it's not worth the effort.)

I find it kind-of weird that you can't understand that if all a MITM can tamper with is the TLS then it's irrefutably a significantly smaller surface than HTTP+PGP+Apt.


> We're speaking of a MITM here

We are speaking of the total attack surface.

1. When it comes to injecting invalid packets to break a parser, a MITM can do that to TLS without problem. This is identical to the type of attack you claimed was relevant to HTTP-only: feeding invalid data that would otherwise be rejected once signature authentication fails.

2. Any server owning a domain name can have a valid TLS certificate, creating "trusted" connections, no MITM necessary. Any server in your existing mirrorlist can go rogue, any website you randomly visit might be evil. They can send you both signed but evil TLS packets, and malicious HTTP payloads.

3. Even if the server is good, it's feeding you externally obtained data that too could be evil.

There is no threat model here where you do not rely 100% on the validity of the HTTP stack and file signature checking. TLS only adds another attack surface, by running more exploitable code on your machine, without taking away any vulnerabilities in what it protects.


No, you want to move goalposts, but we're not speaking of some arbitrary "total attack surface". The article itself is also about a potential MITM. Then you list three cherry-picked cases, none of which actually touch upon the concerns that a plaintext connection introduces or exposes. Please stop, it's silly.

There is fundamentally no reasonable threat model where a plaintext connection (involving all these previously listed protocols) is safer against a MITM than an encrypted and authenticated one.


You don't call it "cherry-picking" when a person lists fundamental flaws in your argument.

Constantly ignoring all the flaws outlined and just reiterating your initial opinion with no basis whatsoever is at best ignorance, at worst trolling.

HTTP with signed packages is by definition a protocol with authenticated payloads, and encryption exclusively provides privacy. And no, we're not singling out the least likely attack vector for the convenience of your argument - we're looking at the whole stack.


I do call it cherry-picking, because you chose scenarios that either apply equally without TLS or are just (intentionally) extremely narrow in scope.

You have repeatedly ignored that we're speaking about protections against a MITM, not malicious endpoints. Because of that, your desperate attempt at "whole stack" talk is also nonsense. Even if you include it, a modern TLS stack is a very difficult target. The additional surface it adds that hasn't been inspected with a fine-toothed comb is microscopic.

As such you've excluded the core of the problem - how an unprotected connection means that you have to simultaneously ensure that your HTTP, PGP and Apt code is bulletproof. This is an unavoidable result: signatures or no signatures, all that surface is exposed.

You've provided no proof or proper arguments that all three of those can achieve the same level of protection against a MITM. You've not addressed how the minuscule surface added by the TLS stack is not worth it considering the enormous surface of HTTP+PGP+Apt that gets protected against a MITM.

TLS also provides more than just privacy; I recommend you familiarize yourself with the Wikipedia page on TLS.


There's a massive difference. The entire HTTP stack comes into play before whatever blob is processed. GPG is notoriously shitty at verifying signatures correctly. Only with the latest Apt is there some hope that Sequoia isn't as vulnerable.

In comparison, even OpenSSL is a really difficult target - it'd be massive news if you succeeded. Not so much for GPG. There are even verified TLS implementations if you want to go that far. PGP implementations barely compare.

Fundamentally, TLS is also tremendously more trustworthy (formally!) than anything PGP. There is no good reason to keep exposing all of it to potential middlemen rather than just the TLS layer. There have been real bugs with captive portals unintentionally causing issues for Apt. It's such an _unnecessary_ risk.

TLS leaves any MITM very little to play with in comparison.


They usually support both, but it's important to note that HTTPS is only used for privacy.

Package managers generally enforce authenticity through signed indexes and (directly or indirectly) signed packages, although be skeptical when dealing with new/minor package managers as they could have gotten this wrong.


Reducing the benefit of HTTPS to only privacy is dishonest. The difference in attack surface exposed to a MITM is drastic, TLS leaves so little available for any attacker to play with.

A MITM usually will not work in the case of package managers, since packages are signed. But still, an attacker can learn what kind of software is installed on the target. So I believe that HTTPS for privacy in the case of Linux package managers is fair enough.

The attacker can meddle with every step taken before the signature verification. The way you handle the HTTP responses, the way you handle the signature format, all that. Captive portals have already caused corruption issues for Apt, signed packages be damned.

Saying it's "fair" is like saying engine maintenance does not matter because the tires are inflated. There are more components to it.

Ensuring the correctness of your entire stack against an active MITM is significantly more difficult than ensuring the correctness of just a TLS stack against an active MITM.


> Chemistry trumps psychology

To nitpick: the mind is applied biochemistry. Psychology intervenes in the chemistry, like many other activities do. The goal of that is to solve the root cause so that the right levels are maintained on their own in the future, instead of just forcing them by sourcing the respective chemicals externally.

A good rule of thumb in biology and in particular any kind of hormone production and balance is "use it or lose it" - if you start regularly receiving something externally, internal production will scale back and atrophy in response, in many cases permanently.


Psychology can change neurochemistry but only in certain limited ways. Many people are on antidepressants long term because that's the only thing that works for them. Taking antidepressants is already stigmatized enough. People should just do what makes them feel best over the long run. Your rule of thumb does not trump hard-won personal experiences.

We don't really know how SSRIs work, but there's some evidence that it's through desensitizing serotonin receptors, not directly addressing a lack of serotonin. If so, "use it or lose it" doesn't apply; long-term adaptation is the point, and it sometimes does persist after quitting.


>A good rule of thumb in biology and in particular any kind of hormone production and balance is "use it or lose it" - if you start regularly receiving something externally, internal production will scale back and atrophy in response, in many cases permanently.

There are ways to "hack it".

For example, ~6 months ago I started TRT (testosterone replacement). It was the best decision health-wise ever. I feel way better psychologically, and for the first time in my life I managed to stick with cardio training for this long (before, 3 months was the most). There are other benefits too.

So what about the "lose it" part? Well, there is a hormone called HCG one can take twice a week to trick one's balls into producing some natural testosterone. Its use prevents atrophy and infertility.


> Some cancerous tumors produce this hormone; therefore, elevated levels measured when the patient is not pregnant may lead to a diagnosis of cancer and, if high enough, of paraneoplastic syndromes. It is unknown however whether this production is a contributing cause or an effect of carcinogenesis.

Interesting.

Well, I don't think you'll be able to avoid testicle atrophy even if it minimizes it, but the important part is understanding the tradeoff. Particularly, that adding testosterone will cause changes throughout your entire body (including, for example, shortening life expectancy a bit), and that adding other hormones to the mix will likewise cause changes around the entire body and not just one single process or organ.

But it's your body, your life, your priorities and decision. I also wouldn't consider it a good decision health-wise to take steroids to get huge, but I have no problem with someone deciding that absurd bulk is their main goal in life and worth the tradeoff.


>Well, I don't think you'll be able to avoid testicle atrophy even if it minimizes it

I only take 250 IU twice a week. Six months in I can't see any atrophy. Some people take it every other day.

This hormone HCG is very interesting. In fact it is what many urine pregnancy tests test for. I wonder if, were I to do a pregnancy test, it would come out positive :-)

But the reason I'm talking about it is to explain that it works the same as another hormone called LH, produced by the pituitary gland, which regulates natural testosterone production by the testes.

So when one takes external testosterone, the mechanism through which the body shuts its own production down is by shutting down this LH hormone.

By taking HCG, one's testes are told to make testosterone. Not a lot of it, but it seems to be enough to prevent atrophy so far.

> Particularly, that adding testosterone will cause changes throughout your entire body (including, for example, shortening life expectancy a bit), and that adding other hormones to the mix will likewise cause changes around the entire body and not just one single process or organ.

For sure. Many of these effects are very positive. For example, I was a little pre-diabetic, and this has improved a lot (though one may argue that's via exercise - then again, I manage to be consistent with the exercise thanks to the psychological changes caused by the testosterone).

Another is these psychological changes. I feel more "stable". I never was really unstable, at least I never noticed, but it feels better. A "whatever happens, I can most likely solve it" kind of feeling, even when things go wrong.

Finally, about the shortening of lifespan: I think I offset this by exercising, but another factor is getting regular blood tests. The doc monitors things like hematocrit etc. I'm told to drink a lot more water and to measure blood pressure (even though I never had high pressure) to keep it in check.

I think most problems happen with people who do not realise their blood gets "too thick": they get high blood pressure, miss it for years and end up with circulatory system issues.

It is a tradeoff. Not everything is ideal. One negative side effect I did get is worse skin until I found a procedure to exfoliate/moisturise and so on. It's a bit of a pain in the arse as before I'd just have quick showers. Now it takes longer. But so far it is worth it.


>A good rule of thumb in biology and in particular any kind of hormone production and balance is "use it or lose it" -

A very basic and very often wrong rule, so take it with a grain of salt.

Insulin for example is the opposite. "Lose it then use it" would be the general rule for type 2 diabetics, where insulin resistance, commonly due to weight gain, is the primary problem: losing the weight leads to better uptake and usage. For type 1, "lose it then use it" means you typically lose the ability to produce insulin due to an autoimmune disorder, then are stuck using insulin for the rest of your life.

The body itself typically attempts to maintain homeostasis, but at population scales this is something that is going to show up in a massive range of ways. Evolution, at grand scales, doesn't care if you survive as long as enough of your population survives and breeds. At the end of the day you might just be one of those people who was born broken and, to work properly, needs replacement parts/chemicals. A working medical system should be there to figure out which case is which.


> Insulin for example is the opposite.

You're describing entirely orthogonal issues. In the case of insulin resistance, your natural production is running at full blast with demand exceeding supply, because the consumer stopped caring about the hormone. In the case of autoimmune disease, the natural production was killed - you can neither use nor lose what is already dead, and even if some capacity was left it will either soon be killed or atrophy under external insulin, but it will not be mourned.

So no I would say it is exactly the same - "use it or lose it" - but that does not mean that there is never a reason to manually overrule your body's attempt at homeostasis through direct manipulation. It just means that there is a very significant consequence to the process.

> The body itself typically attempts to maintain homeostasis, but at population scales this is something that is going to show up in a massive range of ways.

As somewhat of a sidenote, this is also why I dislike the idea of trying to classify people into "normal" and "divergent/atypical". In my eyes we're all normal people, and an entirely normal aspect of being human is that we all differ and have individually specific needs, by virtue of being built by a trillion micrometer-sized workers, each with their own hand-copied version of the blueprint, only caring about the millimeter of you in their immediate vicinity and not really talking to any of the others.


I believe that is indeed what they meant. The perception of being given a remedy is very powerful, especially for issues ultimately linked to the mind.

That placebos can work should not be seen as undermining the severity or pain of the depression, but rather as underlining the power of tricking the mind into improvement.


For reference, golang's mutex also spins up to 4 times before parking the goroutine on a semaphore. That's a lot less than the 40 times in the WebKit blog post, but I would definitely consider spinning an appropriate amount before sleeping to be common practice for a generic lock. Granted, as Go has a userspace scheduler things do differ a bit there, but most concepts still apply.

https://github.com/golang/go/blob/2bd7f15dd7423b6817939b199c...

https://github.com/golang/go/blob/2bd7f15dd7423b6817939b199c...
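
For illustration only, here's a loose sketch of the general spin-then-park shape (not Go's actual runtime implementation - the real one uses runtime semaphores and active spinning; the spin count of 4 and the channel-as-semaphore here are simplifications):

```go
package spinlock

import (
	"runtime"
	"sync/atomic"
)

// SpinThenParkMutex tries a short bounded spin on a CAS before falling back
// to blocking on a buffered channel that acts as a crude parking semaphore.
// It is unfair and allows spurious wakeups, which is fine for a sketch.
type SpinThenParkMutex struct {
	state int32         // 0 = unlocked, 1 = locked
	sem   chan struct{} // capacity-1 channel used as a wakeup token
}

func New() *SpinThenParkMutex {
	return &SpinThenParkMutex{sem: make(chan struct{}, 1)}
}

func (m *SpinThenParkMutex) Lock() {
	for i := 0; i < 4; i++ { // brief spin phase before giving up and parking
		if atomic.CompareAndSwapInt32(&m.state, 0, 1) {
			return
		}
		runtime.Gosched() // yield instead of burning the whole time slice
	}
	for {
		if atomic.CompareAndSwapInt32(&m.state, 0, 1) {
			return
		}
		<-m.sem // park until an unlocker hands out a wakeup token, then retry
	}
}

func (m *SpinThenParkMutex) Unlock() {
	atomic.StoreInt32(&m.state, 0)
	select {
	case m.sem <- struct{}{}: // wake one parked waiter, if any
	default: // token already pending; nothing to do
	}
}
```

The trade-off is the usual one: spinning briefly avoids the cost of parking when the critical section is short, while parking avoids wasting CPU when it isn't.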


There are third-party tags out there, compatible with both Google's and Apple's networks, that are roughly the same size and use the same battery, yet have a giant lanyard opening in the design that fits anything.

Apple could trivially have fit a usable hole if they wanted to. They just don't want to, because they get to sell accessories with that now. Also, looking cleaner on its own helps sell, even if that is an entirely useless quality for a tag that needs to go into a bloody case.


Do the third-party tags have all the same features, size, capabilities, range, durability, etc.? Or have they made other tradeoffs instead of eliding the attachment point?


Nothing related to the attachment point.

I don't know of any third-party AirTag-compatible trackers that have UWB right now, but this applies equally to tags that are much larger than the AirTag. The rest is identical - good battery life, range, loud speaker, ...

I have a few theories on the lacking UWB:

1. Given that UWB is also super slow to roll out to Google Find, with only the Moto Tag available, there might be a technical/regulatory hurdle that manufacturers don't think is worth it

2. Apple/Google might make it a pain to be allowed to integrate with their UWB stuff

3. Cost - maybe the UWB stack is comparatively expensive, with third-party tags aiming for price brackets as low as 1/10th the cost of an AirTag

As a note, I don't know if this is because of regional differences in spectrum limits, but at least with AirTag and Moto Tag v1 EU versions, I could never get UWB to give any meaningful directions until I was already staring at the thing. Once you were in range to even consider UWB, playing a sound would be way more effective.


I'm pleasantly surprised Apple allows third-party manufacturers to make trackers that work with Find My. I've bought a bunch for as low as $2 per tracker. The only missing feature, like you mentioned, is UWB.


I do appreciate the visual of driving a forklift into the gym.

The activity would train something, but it sure wouldn't be your ability to lift.


A version of this does happen with regard to fitness.

There are enthusiasts who will spend an absolute fortune to get a bike that is a few grams lighter and then use it to ride up hills for the exercise.

Presumably a much cheaper bike would mean you could use a smaller hill for the same effect.
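
As a rough back-of-the-envelope illustration (assuming, purely for the sake of argument, that the climbing effort is dominated by lifting rider plus bike against gravity, with made-up masses):

$W = (m_\mathrm{rider} + m_\mathrm{bike})\, g\, h$

With a 75 kg rider on an 8 kg bike, shaving 0.5 kg off the bike changes the total mass - and therefore the hill height needed for the same amount of work - by only about 0.6%.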


From an exercise standpoint, sure, but with sports there is more to it than just maximizing exercise.

If you practice judo you're definitely exercising but the goal is defeating your opponent. When biking or running you're definitely exercising but the goal is going faster or further.

From an exercise optimization perspective you should be sitting on a spinner with a customized profile, or maybe doing some entirely different motion.

If sitting on a carbon fiber bike, shaving half a second off your multi-hour time, is what brings you joy and motivation, then I say screw any further justification. You do you. Just be mindful of others, as the path you ride isn't your property.

