> How many of the core devs don't work for Google? How much of the overall project direction is decided by people who have nothing to do with Google?
This is why some developers are starting to hate open source. There's no technical excitement; it's all about community and governance and who has power and corporations and codes of conduct, etc. If you don't play that game, you're going to fail. You get an army of experts scrutinizing your project, and it's unexciting.
hey! let me illustrate with an (oversimplified) example:
- we tweak eyes to be larger than real and observe the reaction, by letting users pick from several options or just observing usage frequency. that way we find the optimal point and can then slice it down by age, sex, ethnicity, etc., as well as by personal preference.
- do that with a large amount of data + genetic algos that do much less-obvious things than eye size.
- when a user wants a World of Warcraft orc avatar, we can use the data to make the orc look both like an orc and like he wants to see himself.
you can't do this with generic emoji – you actually have to keep tweaking and watching user reactions to understand how people truly want to see themselves and others. and then you can apply this in other contexts.
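to make the first bullet concrete, here's a toy sketch in python. everything in it is made up for illustration – the "users prefer ~1.3x eyes" model stands in for real user picks, and a production system would evolve many traits at once, not one number:

```python
import random

def simulated_pick_rate(eye_scale: float) -> float:
    """Stand-in for real user reactions: preference peaks somewhere
    above 1.0 (larger than real) and falls off on both sides."""
    return max(0.0, 1.0 - (eye_scale - 1.3) ** 2)

def evolve(generations: int = 50, pop_size: int = 20) -> float:
    """Tiny genetic algorithm: keep the avatars users pick most,
    mutate them, repeat."""
    population = [random.uniform(0.8, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulated_pick_rate, reverse=True)
        survivors = population[: pop_size // 4]
        # refill the population with mutated copies of the winners
        population = survivors + [
            s + random.gauss(0, 0.05) for s in survivors for _ in range(3)
        ]
    return max(population, key=simulated_pick_rate)

print(f"optimal eye scale: {evolve():.2f}")  # converges near the (made-up) 1.3 optimum
```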
thanks for the feedback, i agree this is not explained clearly enough in the post.
Thanks for explaining it in more detail. This is very interesting (and if it doesn't pan out, you got the hottest skills in the market right now). All the best.
You're complaining that a company is framing a certain situation in a given way, and yet that's exactly what you're doing here: framing it in a different way to try to convince us of your point. I don't see the difference.
It seems to me that at this point in time, the only substantive actions they can take are
- cancel this change
- consult with the community about how to solve the perceived problem
Both of which they appear to be doing. I get that the proof of sincerity is in what solution they eventually move forward with, but I think what people are pushing back at you on is your apparent argument that what they've done so far to fix this isn't good enough. I think what they've done so far to fix this is literally all they can do at this moment.
More seriously: my takeaway is not "I wish Patreon hadn't issued an apology", but to beware of promises without binding contracts or strong signals of fidelity.
Work at a Startup serves primarily as a matchmaker for potential employees and founders and isn't directly involved in facilitating visa sponsorships or other details of employment. While YC advises our companies on how to best navigate and obtain visas, they are still ultimately the responsibility of the founders/company.
This is the first time I've read this blog post (thanks for sharing), but it's not surprising to me.
I always cringe when I see people quoting Bryan, because what the post describes is exactly my experience interacting with him on mailing lists or watching him give talks.
At this point I don't have the energy to deal with people like him. I just accept him as a natural occurrence in our field. I certainly praise those who do have the energy to keep fighting that.
I'm also quite combative in discussions. And then someone told me: be careful not to win the battle (current conversation) and lose the war (attention and possibly the respect or friendship of the person you're talking to).
I think Brendan must have worked with quite a lot of brilliant jerks in the ZFS appliance days at Sun/Oracle. The Fishworks team was excellent but some of the top heads were quite toxic.
This is why I think projects like caddy/traefik shouldn't get too comfortable thinking Let's Encrypt / HTTPS support by default alone is going to differentiate them too much. They're one PR away from having their major selling point become irrelevant in the face of the competition.
(I don't use caddy, but I always saw the "HTTPS by default" thing as more of a nice-to-have than hugely important, given that you can get the same with external scripts in apache or nginx. Being memsafe is the real distinguisher, and one that certainly isn't reachable with a pull req in apache or nginx.)
Now, you’ll notice that https://caddyserver.com/ works, but https://caddyserver.com./ doesn’t. Caddy, the server, doesn’t support it unless you enter every domain twice manually. And caddy, the website, doesn’t support it either.
This was closed as a WONTFIX, despite every webserver implementation except traefik and caddy handling it the same way.
> Same with every major site, and every major webserver.
I last tried this a few years back (probably around 2011). I found that a substantial fraction of major sites did not support it, and a substantial fraction of those that seemed to support it produced web pages that were at least partially broken.
IIS might support it, but Microsoft doesn't (universally): social.technet.microsoft.com, live.com, bing.com, office.com, skype.com all fail to properly load or redirect. As do instagram.com and linkedin.com.
It sounds like the situation has improved (if you consider it an improvement!) since then.
But did all of them function correctly? Assertions about the host are very common. Many things operate on domain whitelists, and so things like font loaders and analytics will commonly not work. Cross-origin resource loading will often break if `*` is not used.
(Most of the things that I expect to break are unimportant, but there will still be a non-trivial number of important breakages.)
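To make the failure mode concrete, here's a minimal sketch (with a hypothetical whitelist, not any particular site's code) of why those host assertions break:

```python
# "example.com." and "example.com" name the same thing in DNS, but a
# naive string comparison treats them as different hosts.
ALLOWED_HOSTS = {"example.com", "cdn.example.com"}  # hypothetical whitelist

def host_allowed(host: str) -> bool:
    return host in ALLOWED_HOSTS  # naive check

def host_allowed_normalized(host: str) -> bool:
    # strip the root dot (and normalize case) before comparing
    return host.rstrip(".").lower() in ALLOWED_HOSTS

print(host_allowed("example.com."))             # False -> fonts/analytics break
print(host_allowed_normalized("example.com."))  # True
```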
So, the trailing dot is there to indicate that we wrote an FQDN; otherwise the DNS client, if it has a local search path, will first check whether it's a relative domain.
I have to admit it's technically a benefit, but if you have a search path that resolves FQDNs as relative domains, isn't half of your software broken anyway? I can't say I've ever seen an FQDN with a dot at the end in any hardcoded or default value.
> but if you have a search path that resolves FQDNs as relative domains, isn't half of your software broken anyway
That’s correct, but it shouldn’t be that way.
I should be able to have google.com resolve to google.com.local.kuschku.de in my resolver, without issue, and the actual website should use google.com.
The fact that we don’t do that today breaks many parts of the original DNS and URL RFCs.
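A rough sketch of the resolution logic in question, in the spirit of glibc's "search" + "ndots" behavior (simplified; real resolvers differ in details, and the search domain here is just the example from above):

```python
SEARCH_LIST = ["local.kuschku.de"]  # example search path from the comment above

def candidates(name: str, ndots: int = 1) -> list[str]:
    """Return lookup attempts in order (simplified glibc-style logic)."""
    if name.endswith("."):
        return [name]  # trailing dot = absolute name, search list skipped
    as_is = name + "."
    searched = [f"{name}.{d}." for d in SEARCH_LIST]
    if name.count(".") >= ndots:
        return [as_is] + searched  # looks qualified: try as-is first
    return searched + [as_is]      # looks relative: try the search list first

print(candidates("google.com."))  # ['google.com.'] -- the FQDN, nothing else
print(candidates("google.com"))   # ['google.com.', 'google.com.local.kuschku.de.']
print(candidates("intranet"))     # ['intranet.local.kuschku.de.', 'intranet.']
```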
DNS software has absolute domain names in config files. In BIND zone files you have entries like "IN NS ns1.example.com." specifying the nameserver for the domain.
I bet some software implicitly uses absolute domains. URLs are just specified not to work like that.
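For illustration, that zone-file convention works roughly like this (a simplified sketch; real zone parsing has more rules):

```python
def expand(name: str, origin: str = "example.com.") -> str:
    """Expand a zone-file name against $ORIGIN, BIND-style (simplified)."""
    if name == "@":
        return origin          # "@" stands for the origin itself
    if name.endswith("."):
        return name            # trailing dot: already absolute
    return f"{name}.{origin}"  # relative: qualified with the origin

print(expand("ns1.example.com."))  # unchanged -- the final dot made it absolute
print(expand("ns1"))               # 'ns1.example.com.'
```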
What? They're not even comparable. Here are distinct advantages of all three as I see them:
- Traefik has cross-platform, highly dynamic proxying
- Apache has widespread use and market saturation
- Caddy is the only server, even in the face of mod_md, to have fully automatic HTTPS by default
The thread you linked to has nothing to do with any of this, except that it links to this comment of mine, which preempts your claim: https://news.ycombinator.com/item?id=15433788
They are absolutely comparable, and the advantages each one has don't exclude the others from attaining the same features.
Traefik is cross-platform? So are all the others. "Highly dynamic"? What does that even mean? They're all "dynamic".
Apache has widespread use and market saturation... how is that the single advantage it has? It's been evolving a lot.
Caddy is the only server to have fully automatic HTTPS? How much longer until mod_md gets that?
I think you've missed my single point entirely, and you've kind of confirmed my fears.
The link which I posted has everything to do with this discussion. It's about Caddy thinking a bad business plan will work because "caddy is the only server to have fully automatic HTTPS by default".
Last question: is Caddy thinking of hiring a CEO or a sales person? I think it should.
> mod_md builds are not yet available for all platforms.
Do you have reason to believe they won't be? Are you betting your business on the failure of Apache to do basic release engineering?
> Where did I say it was the "single advantage"?
That's fair. But since you listed it, I take it that's the "major" advantage. Is that right?
> You forgot "by default" -- and probably never, not on Apache's main release tree. Or at least not for a long time.
Why? Let's Encrypt and HTTPS by default are something a lot of people want, so why do you think Apache will ignore that and not include mod_md "for a long time"?
Competition is good. I don't have major reasons to be afraid but I would like Caddy/Traefik and others to succeed. From the very basic mistakes they're making in coming up with a business plan, I don't think they will. And no, being open source alone is not reason enough to ensure project survival.
If you re-read your own comment, I think you're the one spreading FUD about those other projects (and their implied inability to outpace Caddy).
Because those projects are very conservative about making things the default. Apache famously has (or had, by now?) bad defaults that no one should use, kept purely for compatibility reasons.
Keep in mind that caddy is not only HTTPS by default, it's HTTP/2 by default as well. How long until that's the default in Apache?
And I don't think those are even the killer features of Caddy. They are the things that drive people in, but the real killer feature is how easy it is to configure.
Where do you see evidence of projects like Caddy getting too comfortable thinking Let's Encrypt/HTTPS support by default alone is going to differentiate them?
The author (mholt) replied above, and the "distinct advantage" he identified for Caddy, his own product, was:
> Caddy is the only server, even in the face of mod_md, to have fully automatic HTTPS by default
In every discussion about Caddy I've seen, the same argument is made. Even when caddy would refuse to start (with valid certificates cached!) during the LE outage, the response was "but we do LE + TLS automatically".
I still don't understand the concept of Caddy. The project seems aimed at hobbyists at best, based on the idea that "it's too hard to enable TLS in $Competition", but at the same time they provide literally zero support for actually running Caddy: no sysvinit script, no systemd unit file, NOTHING.
So tell me again who their target market is? People who can't enable TLS in <Apache/HAProxy/Hitch/Nginx> but can write a fucking unit file for systemd?
Don't know where you got that idea from. The reference implementation for letsencrypt has always been a Python-based collection of scripts (with auto-config, auto-update, etc.) for Apache httpd. A native Apache module for ACME has been proposed for some time now, and is great because the reference implementation is quite a bit too rich to run as root (and is Python 2 only, I believe).
certbot, the reference ACME implementation, should work with Python 2 and 3 (it definitely works with 3; I haven't verified 2 with recent versions), and it does not require root (though the default configuration will want it).
IIRC, the last time I set it up, I stuck HAProxy in front so I could still send ACME requests to certbot, but didn't have to have it running as root. If you put its user in the HAProxy group, it can write the certs as 640. If you want to be really secure, you create SELinux or Apparmor policies as well.
I use HAProxy + Certbot too (with a certbot "hook" script that builds the .pem for HAProxy AND downloads the OCSP staples from LE).
As a bonus, you can have zero-downtime renewals and use the TLS-SNI challenge, rather than relying on the "it's probably safe but it still feels wrong" http challenge.
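For anyone curious, the pem-building half of such a hook can be as small as this (a sketch, not the parent's actual script; the target path is illustrative, and the OCSP-staple download is omitted). Certbot exposes the renewed lineage directory to deploy hooks via $RENEWED_LINEAGE:

```python
#!/usr/bin/env python3
"""Sketch of a certbot deploy hook that builds the combined .pem HAProxy
expects (full chain + private key in one file). Run it via:
    certbot renew --deploy-hook /path/to/this_script.py
"""
import os
from pathlib import Path

lineage = Path(os.environ["RENEWED_LINEAGE"])  # e.g. /etc/letsencrypt/live/example.com
out = Path("/etc/haproxy/certs") / f"{lineage.name}.pem"  # hypothetical target dir

# HAProxy wants the cert chain and the key concatenated in a single file.
out.write_bytes((lineage / "fullchain.pem").read_bytes()
                + (lineage / "privkey.pem").read_bytes())
out.chmod(0o640)  # readable by the haproxy group, as suggested upthread
# then reload haproxy, e.g.: systemctl reload haproxy
```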
Just wait a few minutes. I can already see one.