The situation is additionally confused by the fact that the version numbers do not give a good clue to how different the protocols were. Specifically:
SSLv2 was the first widely deployed version of SSL, but as this post indicates, had a number of issues.
SSLv3 is a more or less completely new protocol.
TLS 1.0 is much like SSLv3 but with some small revisions made during the IETF standardization process.
TLS 1.1 is a really minor revision to TLS 1.0 to address some issues with the way block ciphers were used.
TLS 1.2 is a moderately sized revision to TLS 1.1 to adjust to advances in cryptography, specifically adding support for newer hashes in response to weaknesses in MD5 and SHA-1 and adding support for AEAD cipher suites such as AES-GCM.
TLS 1.3 is mostly a new protocol though it reuses some pieces of TLS 1.2 and before.
Each of these protocols has been designed so that you could automatically negotiate versions, thus allowing for clients and servers to independently upgrade without loss of connectivity.
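Conceptually, the negotiation is just this (a minimal Java sketch, not the actual wire format; the Version enum and method names here are made up for illustration):

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Toy model of TLS version negotiation: the client offers everything it
// supports and the server picks the highest version both sides share.
enum Version { SSL3, TLS1_0, TLS1_1, TLS1_2, TLS1_3 }

class Negotiation {
    static Optional<Version> negotiate(List<Version> clientOffered,
                                       List<Version> serverSupported) {
        return clientOffered.stream()
                .filter(serverSupported::contains)
                .max(Comparator.naturalOrder());
    }
}

Because the server simply ignores versions it doesn't know, an old server and a new client (or vice versa) still land on something they both speak.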
TLS 1.0 introduced modularity via the concept of "extensions". It's anything but a minor evolution of the protocol.
One of the many things it brought is session tickets, enabling server-side session resumption without requiring servers to keep synced-up state. Another is Server Name Indication, enabling servers to use more than one certificate.
> Each of these protocols has been designed so that you could automatically negotiate versions, thus allowing for clients and servers to independently upgrade without loss of connectivity.
The downgrade attacks on TLS only really arise from client behaviour: on failing to negotiate one version, clients retry a new connection without it.
This was necessary to bypass various broken server-side implementations and broken middleboxes, but wasn't necessarily a flaw in TLS itself.
But having learned from the way this issue held back TLS 1.2 deployment, TLS 1.3 goes out of its way to look very similar to 1.2 on the wire.
This isn't really accurate historically. TLS has both ciphersuite and version negotiation. Logjam (2015) [1] was a downgrade attack on the former that's now fixed, but is an extension of an attack that was first noticed way back in 1996 [2]. Similar problems occurred with the FREAK attack, though that was actually a client vulnerability. TLS 1.3 goes out of its way to fix all of this using a better negotiation mechanism, and by reducing agility.
Moreover, there's not really much in the way of choices here. If you don't have this kind of automatic version negotiation then it's essentially impossible to deploy a new version.
Well, you can, but that would require a higher level of political skill than normally exists for such things. What would have to happen is that almost everyone would have to agree on the new version and then implement it. Once implementation was sufficiently widespread, you would have a switchover day.
The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.
The big downside of negotiation is that no one ever has to commit to anything, so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.
This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.
Moreover, even in the best case scenario this means that you don't get the benefits of deployment for years if not decades. Even 7 years out, TLS 1.3 is well below 100% deployment. To take a specific example here: we want to deploy PQ ciphers ASAP to prevent harvest-and-decrypt attacks. Why should this wait for 100% deployment?
> The big downside of negotiation is that no one ever has to commit to anything, so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.
I don't think this is really that accurate, especially on the Web. The options actually in wide use are fairly narrow.
TLS is used in a lot of different settings, so it's unsurprising that there are a lot of options to cover those settings. TLS 1.3 did manage to reduce those quite a bit, however.
> This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.
Case in point: IPv6 adoption. There's no interoperability or negotiation between it and IPv4 (at least, not in any way that matters), which has led to the mess we're in today.
That’s not negotiation: I can’t connect to a server over v4 and have it tell me to switch to v6 or vice versa. That’s just supporting two completely different protocols.
Right. The closest thing we have to IPv6 "negotiation" is the Happy Eyeballs algorithm[0], which is literally just "connect to both at the same time and pick the one that connects first". The name serves to legitimise it and make it sound fancy but it's basically just brute force + a bit of caching.
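It really is about that simple. A toy Java sketch of the idea (real Happy Eyeballs per RFC 8305 sorts the address list and staggers the attempts by a few hundred milliseconds rather than firing them all at once):

import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Race connection attempts (e.g. one v6, one v4) and keep the winner.
// invokeAny returns the first task to succeed and cancels the rest;
// a real implementation would also close the losing sockets.
class HappyEyeballsSketch {
    static Socket raceConnect(List<InetSocketAddress> candidates, int timeoutMs)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(candidates.size());
        try {
            List<Callable<Socket>> attempts = candidates.stream()
                    .map(addr -> (Callable<Socket>) () -> {
                        Socket s = new Socket();
                        s.connect(addr, timeoutMs);
                        return s;
                    })
                    .toList();
            return pool.invokeAny(attempts);
        } finally {
            pool.shutdownNow();
        }
    }
}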
That’s a great theory but in practice such a “flag day” almost never happens. The last time the internet went through such a change was January 1, 1983, when the ARPANET switched from NCP to the newly designed TCP/IP. People want to do something similar on February 1, 2030, to remove IPv4 and switch totally to IPv6, but I give it a 50/50 chance of success, and IPv6 is already about 30 years old. See https://ipv4flagday.net/
You don't have to have everyone switch over on the same day as with your example. Once it is decreed that implementations are widespread enough, then everyone can switch over to the introduced thing gradually. The "flag day" is when it is decreed that implementations no longer have to support some previously widely used method. Support for that method would then gradually disappear unless there was some associated cryptographic emergency that could not be dealt with without changing the standard.
Well, this is basically what we do, except that we try to negotiate to the highest version during the period before the flag day. This is far more practical for three reasons:
1. You actually get benefit during the transition period because you get to use the new version.
2. You get to test the new version at scale, which often reveals issues, as it did with TLS 1.3. It also makes it much easier to measure deployment because you can see what is actually negotiated.
3. Generally, implementations are very risk averse and so aren't willing to disable older versions until there is basically universal deployment, so it takes the pressure off of this decision.
You could deploy a new version, you'd just have older clients unable to connect to servers implementing the newer versions.
It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility (yes I realize the 's' stands for secure, not 'ssl', but httpt would have still worked as "HTTP with TLS")
> It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility
That would have been at least a little bit insane, since then web links would be embedding the protocol version number. As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work.
I wish we could go the other way - and make http:// implicitly use TLS when TLS is available. Having http://.../x and https://.../x be able to resolve to different resources was a huge mistake.
Regarding your last paragraph: Isn’t that pretty much solved thanks to HSTS preload? A non-technical author of a small recipe blog might not know how to set it up, but a bank ought to have staff (and auditors) who take care of stuff like that.
Are there any real world online resources where, modulo redirect, a different resource is presented on the HTTP and the HTTPS protocols? Or alternatively, on ports 80 and 443?
There used to be, though it's less true now. However, the reason to treat them distinctly (as different origins, technically) is that HTTPS provides integrity whereas HTTP does not. So, consider the case where the client enters an HTTP URL and is redirected, just as you say above. If the attacker injects their own JS and it is cached in an origin that is just `example.com`, then they control the user's experience of the site, even if later the user securely goes to the site with HTTPS.
You’re thinking about it from the perspective of a site operator. Yes, individual websites could do that. But not all websites would use such a redirect.
But think about it from the perspective of a web browser or curl. You can’t rely on all web servers having such a redirect for their URLs. Web browsers would need to support old versions of TLS to make old URLs work. They’d need to support old versions of TLS indefinitely so as to not break old URLs.
Using an old version of TLS isn’t like using an old version of the C compiler. Old versions of TLS have well documented problems with security implications. That’s why we made new versions. Maintaining lots of versions of TLS multiplies the security surface area for bugs, and makes you vulnerable to downgrade attacks.
Like, you're right that some, perhaps many, sites would continue using https, just as, in the current situation, many sites continue supporting http (instead of just setting up a redirect).
No site needs to do this though, and I can't recall seeing a site with sensitive user info that supports http in recent years. And in the current situation, many sites are still supporting old versions of https (SSL2). A protocol name upgrade would give you more certainty that you're connecting over a secure connection, and perhaps a better indication if you've accidentally used a less-secure connection than intended.
I mean, actually, your exact argument could be made about http vs https: that http+SSL should have become the default (without changing the protocol name of http://), and that by changing the protocol name it made it so that some websites still accept http. I guess in practice there's a slight difference, since http -> https involved a default port change and SSLv2 -> TLS did not, so in the former case the name change was important to let clients know to use a different default port; but ignoring that, the same argument could be made, and I would have disagreed with it there too.
Specifying the protocol... in the protocol portion of the URL... can be useful for users.
First, recall that links are very often inter-site, so the consequence would be that even when a server upgraded to TLS 1.2, clients would still try to connect with TLS 1.1 because they were using the wrong kind of link. This would really delay deployment. By contrast, today when the server upgrades, new clients upgrade as well.
Second, in the Web security model, the Origin of a resource (e.g., the context in which the JS runs) is based on scheme/host/port. So httpt would be a different origin from https. Consider what happens if the incoming link is https and internal links are httpt: now different pages are different origins for the same site.
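To make the grouping concrete, here is a minimal Java sketch of that origin rule (the default-port handling per scheme is assumed here; real browsers also special-case things like file: and opaque origins):

import java.net.URI;

// Two URLs are same-origin only if scheme, host, and port all match.
class OriginSketch {
    static String origin(URI u) {
        int port = u.getPort() != -1 ? u.getPort()
                : u.getScheme().equals("https") ? 443 : 80;
        return u.getScheme() + "://" + u.getHost() + ":" + port;
    }

    public static void main(String[] args) {
        // Same host and path, but different scheme => different origin.
        System.out.println(origin(URI.create("https://example.com/x"))); // https://example.com:443
        System.out.println(origin(URI.create("http://example.com/x")));  // http://example.com:80
    }
}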
These considerations are so important that when QUIC was developed, the IETF decided that QUIC would also be an https URL (it helps that IETF QUIC's cryptographic handshake is TLS 1.3).
If a protocol is widely used wrongly, I consider it a flaw in the protocol. But overall, SSL standardization has gone decently well. I always bring it up as a good example to contrast with XMPP as a bad example.
Well, my only real point is that it’s not the version negotiation in TLS that’s broken. It’s the workaround for intolerance of newer versions that had downgrade attacks.
Fortunately that’s all behind us now, and transitioning from 1.2 to 1.3 is going much smoother than 1.0 to 1.2 went.
One of the big differences was in attitude. The TLS 1.3 anti-downgrade feature was not compatible with some popular middlebox products. Google told people too bad, either your vendor fixes it (most shipped free bug fixes for this issue, presumably "encouraged" by the resulting customer anger) or you can't run Chrome once this temporary fudge goes away in a year's time.
Previously (in earlier protocol versions) nobody stood up to the crap middleboxes even though it's bad for all normal users.
The service providers were the worst offenders here because they wanted to be the man-in-the-middle, to be able to look at the data and “add value” to their networks somehow. Moving to TLS 1.3 took a lot of that away from them, and it was only Google’s market power that could break them.
XMPP is too loose. Easiest comparison is security alone. XMPP auth and encryption are complicated, and they're optional for each of c2s, s2c, s2s (setting aside e2e). Clients and servers will quietly do the wrong thing if not configured exactly right. Email has similar problems, so bad that entire companies exist just to help set up stuff like DMARC, but that's a simpler app than instant messaging. The rest of the XMPP feature set is also super loose. Clients and servers never agree on what extensions to implement, even for very basic things like chat rooms. I really tried to like it before giving up.
SSL is appropriately strict. Auth and encryption, both c2s and s2c, go together. They were a bit lax on upgrades in the past, but as another comment said, Google just said you fix your stuff or else Chrome will show a very scary banner on your website. Yes you can skip it or force special things like auth without encryption, but it's impossible to do by accident.
Man in the middle interfering with TLS handshakes?
The handshake is unencrypted so you can modify the messages to make it look like the server only supports broken ciphers. Then the man in the middle can read all of the encrypted data because it was badly encrypted.
A surprising number of servers still support broken ciphers due to legacy uses or incompetence.
Yes, this is a seriously difficult problem with only partial solutions.
The basic math of any kind of negotiation is that you need the minimum set of cryptographic parameters supported by both sides to be secure enough to resist downgrade. This is too small a space to support a complete accounting of the situation, but roughly:
- In pre-TLS 1.3 versions of TLS, the Finished message was intended to provide secure negotiation as long as the weakest joint key exchange was secure, even if the weakest joint record protection algorithm was insecure, because the Finished provides integrity for the handshake outside of the record layer.
- In TLS 1.3, the negotiation messages are also signed by the server, which is intended to protect negotiation as long as the weakest joint signature algorithm is secure. This is (I believe) the best you can do with a client and server which have never talked to each other, because if the signature algorithm is insecure, the attacker can just impersonate the server directly.
- TLS 1.3 also includes a mechanism intended to protect against TLS 1.3 -> TLS 1.2 downgrade as long as the TLS 1.2 cipher suite involves server signing (as a practical matter, this means ECDHE). Briefly, the idea is to use a sentinel value in the random nonces, which are signed even in TLS 1.2 (https://www.rfc-editor.org/rfc/rfc8446#section-4.1.3).
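The sentinel check itself is tiny. A sketch of the client-side test from RFC 8446 section 4.1.3, where the last 8 bytes of the server's 32-byte random are "DOWNGRD" plus 0x01 (TLS 1.2) or 0x00 (TLS 1.1 and below):

import java.util.Arrays;

// A client that offered TLS 1.3 but got a lower version back checks the
// tail of the ServerHello random. Since the random is covered by the
// server's signature even in TLS 1.2, an attacker can't strip the marker.
class DowngradeSentinel {
    static final byte[] TLS12_MARKER =
            {0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x01}; // "DOWNGRD" 0x01
    static final byte[] TLS11_MARKER =
            {0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x00}; // "DOWNGRD" 0x00

    static boolean downgradeSignalled(byte[] serverRandom) {
        byte[] tail = Arrays.copyOfRange(serverRandom, 24, 32);
        return Arrays.equals(tail, TLS12_MARKER) || Arrays.equals(tail, TLS11_MARKER);
    }
}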
No: while the handshake is unencrypted, it is authenticated. An attacker can’t modify it.
What an attacker can do is block handshakes with parameters they don’t like. Some clients would retry a new handshake with an older TLS version, because they’d take the silence to mean that the server has broken negotiation.
You could encrypt the handshake that you received with the server's certificate and send it back. Then if it doesn't match what the server thought it sent, it aborts the handshake. As long as the server's cert isn't broken, this would detect a munged handshake, and if the server's cert is broken you have no root of trust to start the connection in the first place.
Bob's list of supported protocols is an input into the (authenticated) final handshake message, and that authentication failing will prevent the connection from being considered successfully established.
If the "negotiated" cipher suite is weak enough to allow real-time impersonation of Bob, though, pre-1.3 versions are still vulnerable; that's another reason not to keep insecure cipher suites around in a TLS config.
It also enabled cipher strength "step up". Back during the '90s and early 2000s (I'm not sure when it stopped, tbh), the US government restricted the export of strong cryptography, with certain exceptions (e.g. for financial services).
If you fell under one of those exceptions, you could get a special certificate for your website (from, e.g. Verisign) that allowed the webserver to "step up" the encryption negotiation with the browser to stronger algorithms and/or key lengths.
They still should have just called it TLS v4.0 instead of v1.0.
I'm halfway convinced that they have made subsequent versions v1.1, v1.2, and v1.3 in an outrageously stubborn refusal to admit that they were objectively incorrect to reset the version number.
Considering that Microsoft was a completely different beast at that time, I'm not surprised it does not seem that silly.
M$ (an appropriate name for that time) was doing its best to own everything, and they did not let up on trying to hold back open source internet technologies until the early 2010's, I believe. It's my opinion that they were successful in killing Java applets, which were never able to improve past the first versions, and JavaScript and CSS in general were held back many years.
I still recall my corporate overlords trying to push me to support IE's latest 'technologies', but I resisted and instead started supporting Mozilla 3.0 as soon as they fixed some core JS bugs for our custom-built enterprise JavaScript SPA tools in the early 2000's. It turned out to be a great decision, as the Fortune 500 company started using Mozilla / Firefox in other internal apps in later years, long before it became commonplace.
I don't think it was Microsoft that killed Java applets. I mean, for one thing, they always worked in IE, which was really the only avenue through which MS could have affected them.
No, Java applets failed because they became the poster child for the "Java is slow" take. Even though it wasn't exactly true in general, it was certainly true of applets, what with waiting for them to download and then waiting for the JVM to spin up.
What killed them was 1) HTML/JS itself getting better at dynamic stuff that previously required something like applets, and 2) Flash taking over the remaining niche for which HTML wasn't good enough.
Another reason Java applets ultimately failed was the never-ending stream of sandbox escapes, which is inherent to their design of running trusted and untrusted code in the same VM and trying to keep track of which is which. It turns out it's much more straightforward to sandbox the whole VM.
A representative vulnerability is "trusted method chaining". You (the attacker) construct a chain of standard library objects that call each other in unexpected ways. You can make use of the fact that you can subclass a standard library class and implement a standard library interface, in order to implement the interface methods with the base class's implementations, to construct more unusual pathways. Then you get some standard library entry point to call the first method in the chain. Since your code doesn't appear on the call stack at any point (it's just the standard library calling the standard library) whatever is at the bottom of the call stack, at the end of the chain, infers a trusted context and can access files or whatever. Of course, finding a method chain that's possible to construct and does something malicious is non-trivial.
Even prior to HTML5 stuff, Flash was just a better UX than applets, which always felt like your browser was loading an application, vs being an element in a page.
Java Applets also froze the entire browser when loading. Even more so than the Windows Media / QuickTime / Real Player plug-ins. Only the Flash plug-in didn't noticeably freeze the browser. It was heavily CPU optimized and even used AVX for rendering, as far as I remember.
> > No, Java applets failed because they became the poster child for "Java is slow" take.
> Java Applets also froze the entire browser when loading.
More than just "poster child", I believe Java applets are the origin of the "Java is slow" meme. The first time many people heard of Java would be when it locked up their browser for a whole minute while loading an applet, with a status bar message pointing to Java as the culprit.
Applets died because of many reasons, like absurd startup time for the JRE (often just for silly animations), absurd memory requirements (for the time) and associated crashes, weird compatibility issues in the initial releases of the Java platform, a silly security model based on the assumption that only good actors will be able to get a CA certificate in order to do whatever they want in your PC, an immature sandboxing technology in browsers (not only IE), etc.
They've adopted a different flavour of devilishness. See VSCode versus Visual Studio, or their approach to AI.
Bill Gates would've bought OpenAI. Satya shares their mission of developing AI for the good of humanity. He charitably donated billions of dollars in Azure credits in exchange for nothing besides a voice at the table and a license to help enable other organisations use AI through MS services.
In a way it's a PR difference, but I feel that understates the change.
People who make a strong distinction between TLS and SSL are indicating that they know the difference and think you should too, but at a practical level it's the difference between .doc and .docx (fundamentally different but interchangeable to the layman). The boots on the ground mostly care about getting https to work and have minimal consideration for its inner workings.
The main issue was explaining to the layman that TLSv1.0 was in fact newer and better than SSLv2 and SSLv3. I remember having quite a few discussions about this with people who assumed that the bigger number must be better.
It has been ages since SSL was obsoleted, but people still use the name to mean encrypted network traffic.
It would be much easier if everyone just talked about TLS to mean modern encrypted network traffic, and mentioned SSL only when really using it because of a legacy system still running.
People still say Twitter instead of X. Of course people are going to continue using the name something had when it was introduced and ingrained into their day to day, versus the rebrand. It would be funny if ssl.com just redirected to tls.com and got upset when people still referred to it as SSL. The only successful rebrand attempts have been company names, like when Comcast became Xfinity or MCI became Worldcom.
I agree with all except your example. TLS and SSL are about the same memorability wise, Twitter and X are not. If we were talking about a porn website it would be the inverse.
Oh wow, I just discovered that my brain unconsciously had a hard time differentiating between SSL and TLS. And now, after two friggin decades, I find out why!
Back closer to the time, there were some people around who insisted that SSL specifically meant the old versions and it was all TLS now. I recall a couple of occasions where people were talking about UCSPI-SSL and someone stepped in to explain that We Don't Do SSL Now. As the headlined article says, that contrived distinction seems silly with the hindsight of decades.
The nomenclature was complicated in people's minds by SMTP. Because there was SMTP over a largely transparent encrypted connection, and SMTP where it started unencrypted and negotiated a switch, as well as plain old cleartext. It didn't help that RFC 2487 explained that STARTTLS negotiated "TLS more commonly known as SSL". RFC 8314 explains some of the historical mess that SMTP got into with two types of SMTP (relay and submission) and three types of transport.
And the "S" for "submission" could be confused with the "S"s in both "SSL" and "TLS". It's not just TLAs that are ambiguous, indeed. There was confusion over "SMTPS" and "SSMTP", not helped at all by the people who named programs things like "sSMTP".
I'm still calling it SSL in 2025. (-: And so is Erwin Hoffmann.
- "SSL" is a set of protocols so ridiculously old, busted and insecure that nobody should ever use them. It's like talking about Sanskrit; ancient and dead.
- "TLS" is way better than "SSL", but still there are insecure versions. Any version before 1.2 is no longer supported due to security holes.
- Technically an "ssl certificate" is neither "SSL" nor "TLS", it's really an "X.509 Certificate with Extended Key Usage: Server Authentication". But that doesn't roll off the tongue. You could use a cert from 1996 in a modern TLS server; the problem would be its expiration date, and the hash/signature functions used back then are deprecated. (some servers still support insecure methods to support older clients, which is bad)
The point is more that SSL 3.0 and TLS 1.0 were nearly identical. That is, the breaks in similarity were at SSL 2.0 -> SSL 3.0 (and TLS 1.2 -> TLS 1.3, to a lesser extent), as opposed to the common misconception that TLS 1.0 is what changed everything.
But yes, it's all a bit irrelevant now that anything below TLS 1.2 is sketchy to use.
Right, but they accomplish the same thing and people move monotonically from SSL to TLS. It’s not like choosing between React and Angular, but like choosing between React version 5 and React version 10 for a new project. SSL and TLS are the same in all meaningful respects from this perspective.
A Chicago dog is literally a hamburger with a different surface area. Same obscure ground beef, same vegetables, same bread. Just different dimensions. Who cares about the details, right?
I learned about that around 2010, but before that was also clueless about it. What triggered me was that in Java you still use an SSLSocket to start encrypted connections, even when using TLSv1.3 today!
The problem is that TLS was already in widespread use for "thread local storage".
Transport Layer Security is widely documented as beginning in 1999.
I can find references to "Thread Local Storage" going back to at least 1996. That particular term seems more common in the Microsoft (and maybe IBM, does anyone have an OS/2 programming manual?) world at the time; Pthreads (1995) and Unix in general tended to call it "thread-specific data".
It's possible that the highly influential 2001 Itanium ABI document (which directly led to Drepper's TLS paper) brought the term to (widespread) use in the broader Unix world, though Sun (for both Solaris and Java?) was using the term previously. But it's also possible that I'm just missing the reference material.
I don’t doubt that, but I never heard Thread Local Storage until much later than that. While it might well’ve been common within its ecosystem, I don’t think it was widely known outside it.
I might have an OS/2 programming manual. But I don't need it. (-: This was not an OS/2 thing. We had to make map data structures using thread IDs. Or our language runtimes did.
Look to Windows NT rather than to OS/2 for thread-local storage. TlsAlloc() et al. were in the Win32 API right from NT 3.1, I think.
I think SSL is a better fit, actually. In theory TLS could be a transport-layer security mechanism that would let arbitrary protocols run on top of it (like IPSec does), but in practice it's pretty much tied up to TCP sockets. The UDP variant (DTLS, and I suppose QUIC) isn't part of the TLS spec for instance. Of course we have kernel TLS on Linux now, and Windows also has infrastructure like that, but it isn't as easy as setting a flag on a socket to turn TLS on.
Plus, who doesn't like to sound like a snake sometimes? Snakes are badass.
Speaking of IPsec, IPsec was supposed to be "the" encrypted interchange on the internet, basically used for random secure connections like we use TLS today.
I like to imagine an alternate past where IPsec "won" and how that would affect our expectations of secure connections. One difference is that security would be handled at the OS level instead of the application level; on the one hand this is nice, as all applications get a secure connection whether they want one or not, but on the other hand the application has no idea it is using a secure transport and has no way of indicating this to the user.
Anyhow, the opportunistic-connection employment of IPsec never really got implemented, and all we use it for anymore is as a dedicated tunnel. One that is fiendishly difficult to employ.
I think the primary problem with IPsec is that it tried to be too flexible. This made setting up an IPsec link a non-trivial exercise in communication, and the process never got streamlined enough to just be invisible and work.
No? The "transport" layer is layer 4 in the 7-layer OSI model (physical/datalink/network/transport/session/presentation/application) and 5-layer IP model (physical/network/internetwork/transport/application). That is: the "transport" provides reliable continuous data-stream abstraction over the lower-layers' discreet and unreliable packets; e.g. TCP.
And that data-stream is the interface that TLS provides; to the higher layers it looks like a transport layer.
Just on a technical note, TLS 1.3 only uses AEAD ciphers where the nonce is determined by the record numbers, so it actually is in principle possible to decrypt the packets even if they are received out of order by trial decrypting with different record numbers. You don't do this in TLS (as opposed to DTLS) because it runs over TCP and therefore you are guaranteed in-order delivery.
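Concretely, the per-record nonce in TLS 1.3 is just the record sequence number folded into the static write IV (RFC 8446 section 5.3), which is what makes that trial decryption possible. A sketch:

// nonce = write_iv XOR (sequence number, left-padded to the IV length).
// Given a record that arrived out of order, a receiver could in principle
// try successive candidate sequence numbers until the AEAD tag verifies.
static byte[] perRecordNonce(byte[] writeIv, long seq) {
    byte[] nonce = writeIv.clone();
    for (int i = 0; i < 8; i++) {
        nonce[nonce.length - 1 - i] ^= (byte) (seq >>> (8 * i));
    }
    return nonce;
}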
I would agree with you that DTLS is a misnomer; that it does not provide the layer-4/transport-layer -like interface that regular TLS provides.
(It isn't quite a layer-3/internetwork-layer -like interface; from the UDP that it sits on, it has a multiplexing component that is "half" of a layer 4 interface.)
1. SSL. For a long time I didn't even know TLS was the "same thing", but even now that I know it is, I still say SSL 9 times out of 10.
2. 38 - Started working in 2011, but my first forays into network programming was in something like 2004-2005.
Looked over at my other screen, and sure enough, the function I'd added an if statement to literally minutes before went
public Builder sslCertNotBefore(Instant sslCertNotBefore) {
    // Clamp values past the largest representable Unix timestamp.
    if (sslCertNotBefore.isAfter(MAX_UNIX_TIMESTAMP)) {
        sslCertNotBefore = MAX_UNIX_TIMESTAMP;
    }
    this.sslCertNotBefore = sslCertNotBefore;
    return this;
}
I think possibly part of the problem is that we as programmers typically don't deal with TLS directly. The code above is part of a system I wrote that extracts detailed certificate information from HTTPS connections, and man, was it ever a hassle to wrestle all the information I was interested in out of the Java standard library.
Sure on the one hand it's easier to not mess up if it's all automatic and out of sight, but at the same time, it's not exactly beneficial to the spread of deeper awareness of how TLS actually works when it's always such a black box.
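For the basics at least, the standard library will hand you the chain directly (a minimal sketch; the URL is illustrative, and the "detailed" fields beyond the validity window are where the wrestling starts):

import java.net.URL;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;

// Open an HTTPS connection and read the leaf certificate's validity dates.
class CertInfoSketch {
    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://example.com/").openConnection();
        conn.connect();
        X509Certificate leaf = (X509Certificate) conn.getServerCertificates()[0];
        System.out.println("notBefore = " + leaf.getNotBefore().toInstant());
        System.out.println("notAfter  = " + leaf.getNotAfter().toInstant());
        conn.disconnect();
    }
}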
I think most people call it SSL because the libraries they use to deal with secure communication have SSL in their names (OpenSSL being the most dominant one). Other libraries are BoringSSL, LibreSSL, wolfSSL, etc.
Libraries with TLS in their names are less frequently used.
I usually say SSL, because it has a greater chance of being understood than the more correct TLS (nobody uses SSL 3.0 anymore). It's also in the name of many SSL (I mean, TLS) libraries, like the classic OpenSSL.
But yeah, I learned about SSL back in the crypto wars days of the 1990s, back when you had to pirate the so-called "US only" version of Netscape if you wanted decent SSL encryption, so I might be just using the old term out of habit.
1. I say both somewhat 50/50. I say SSL instinctively, and TLS when I think about it and remember we don't say SSL anymore. It's been like that for around 10 years now, before that I'd only say SSL.
2. I started programming professionally in 1998 and I'm in my early 50s.
If I need to specifically say SSL or TLS, it's SSL (as in OpenSSL, LibreSSL, BoringSSL, SSL certificates, Qualys SSL Labs, SSL Server Test). TLS is a made up name for SSL.
I do say e.g. "TLSv1.2" if I need to name the specific protocol, that's about it.
I always say HTTPS because in the context of my area of speciality, the details of how HTTPS works don't matter and neither do secure communication protocols besides HTTPS.
I work in cybersecurity and all the tools in the firewall/cert world still say "SSL decryption" and "SSL certificate". TLS is just a "major version" of SSL in my mind.
Aside: I think this shared preference for efficiency/comfort/laziness is big part of why master -> main spread quickly while JavaScript -> ECMAScript never had a chance.
I guess it follows that Twitter/X might never be able to pull off a rebrand again.
SSL. Working as a sysadmin since 2010. It just feels more right to me, and honestly, it hasn’t been until recently that I’ve noticed more of a concerted effort to rebrand it to TLS — not sure if that’s just my perception or not.
Nobody ever says "TLS Certificate". It's only an "SSL Certificate". On that alone, it's just easier to stick to "SSL" for consistency and everyone knows what you mean.
It’s probably a good thing I didn’t have the knowledge/skills I have now; it might have saved me from trouble. Back in those days I was more interested in getting Back Orifice to remotely open a CD-ROM tray on a friend’s computer. I remember when broadband was first being rolled out, it seemed like everyone was hooking up their cable/DSL modems directly to their PC and having a public IP with no firewall. Good times.
My mom bought me Applied Cryptography when I was thirteen, and I was really into trying to learn how to find exploits with IDA Pro and learning to code in general. It wasn't really the other kind of freeform studying, lol; I was terrified of the thought of prison.
Even today, people and marketing pages promote the "SSL" term. Unless you specifically google "What is the difference between SSL and TLS?", most people would have no idea what TLS is.
Wait, but didn't TLS 1.0 have significant improvements over SSL 3.0? The article makes it seem that just a couple of things were tweaked, just to make it different for the sake of being different.
The main difference is in the padding. When the POODLE attack was pre-announced as only affecting SSL3 and not TLS1.0, that was enough to predict it was going to be a padding oracle.
I think it’s fair to say they’re very similar, with a few “bug fixes”. It’s been a while since I’ve thought about either though, and might be forgetting a few things. I’ve only ever implemented SSL3 and TLS1.0 together, so there may be some details I’m forgetting.
The tweaks were minor (smaller than for any other version revision), and mostly just the IETF marking its territory and doing something other than blessing the SSL 3.0 protocol as-is.
> This was written in 1996. The language used feels already much different from today's publications. God I feel old.
That depends on which publications you're looking at, just as it did in 1996. An article from LWN [1] today, for example, reads in a fairly similar style. Maybe slightly less stuffy, because it's targeted at a slightly more general audience.
I remember "SSL and TLS: Designing and Building Secure Systems" by Eric Rescorla being really useful to understand the history behind TLS and how we got here. The book was written in 2001 and warned about some issues which turned into CVEs a bit later. You might find used copies for a couple bucks.
> And of course, now, in retrospect, the whole thing looks silly.
Private enterprise should be the last people on earth to be allowed to label themselves. I have many marketer friends I love, but I truly think the practice of trying to pimp businesses to rich individuals has been probably the biggest waste of human effort in history (outside of maybe carbon-capture efforts). We're just stuck with shitty brands, broken products, and stupid consumers who think they're getting the best.
This brought me back. I was a member of the UC Berkeley Computer Science Undergraduate Association (https://www.csua.berkeley.edu) in the early aughts. Through the CSUA I came across a job posting for a sysadmin job at Skotos Tech (https://www.skotos.net/), the multiplayer text games company Christopher Allen founded after his work at Consensus Development/Certicom to develop the SSL/TLS implementation for Netscape. It's been a long and strange road.
I seem to remember that Microsoft's initial implementation used a field in the protocol in an incompatible way to encode that it was a different implementation. I remember people being annoyed at them for deliberately screwing up future compatibility. Does anyone remember the details of this?
IIRC they were using a cipher suite to signal the new version. Cipher suites were basically the only signaling mechanism in SSLv2 (and SSLv3/TLS 1.0 before extensions were introduced).
When TLS 1.3 was finally standardized, there was quite a bit of debate about whether, in light of how different it was from TLS 1.2, we should continue to use the 1.3 version number. ISTR that TLS 2 and TLS 4.0 were both floated--though I don't recall SSL 3.4--but eventually the WG decided to stick with the 1.3 version number we had been using throughout the rest of the process.
> As a part of the cutthroat competition, Microsoft decided to revise the SSL 2 protocol with some additions of their own, and specified a protocol called "PCT" that was derived from SSL 2. It was only supported in IE and IIS.
> Netscape also wanted to address SSL 2 issues, but wasn't going to let Microsoft take leadership/ownership in the standard, so they developed SSL 3.0, which was a more significant departure.
I remember this moment, and this is where I realized that Microsoft wasn't always the bad guy here. They had the better implementation and were willing to share it. But Netscape in this instance acted like kids and wouldn't cooperate at all. Which is why this meeting had to occur, and by that point it was clear Netscape had lost the browser war and it wasn't going to be close.
Hence the quick about face by Netscape to accept what was pretty much Microsoft's proposed solution.
I can't speak to the rest of Microsoft's browser decisions, and given the court ruling it's clear they weren't the good guys either, but this opened my eyes to the fact that all companies are the bad guys sometimes :)
Microsoft was the bad guy in a movie where you have a war right before aliens invade and you figure out that there's bigger enemies.
FSF hated Microsoft because they released binaries without source code; they were THE enemy. Nowadays, you are lucky if you get a binary to study and modify! The standard for any competitive developer is to hide the binary and source behind a server. Try to study and modify that!
For the FSF, Microsoft releasing binaries without source was reason enough to hate them, but it was not the only reason why people, including those in the FSF, hated them. Microsoft was very much a company that used their dominant market position to lock customers in and the competition out. (Remember embrace, extend, extinguish?) The Microsoft of today looks like a cuddly teddy bear in comparison.
> FSF hated Microsoft because they released binaries without source code
I think that's a bit of an oversimplification - FOSS-leaning people had a pretty large set of reasons to dislike and distrust MS back then. "Embrace, Extend, Extinguish" was a big one, calling linux/FOSS a cancer, their money and influence being used to fund the whole SCO debacle amongst other things. They were pretty actively evil, not just "closed source".
There was very good reason not to let MS gain de-facto control of an open protocol, because 90s and 00s microsoft would not have hesitated to find ways to use that dominance to screw the competition.
The "velvet sweatshop" one is sufficient, but plenty of others to choose from. Don't have a source at hand but I remember it was known for its "work 3 years there and then you need to retire early from burnout" culture. There's also a really good (and highly depressing) 2001 German documentary around that "feature" called "Leben nach Microsoft" (Life after Microsoft).
When a company repeatedly demonstrates a pattern of embracing, extending, and extinguishing standards (see: Java, Kerberos, HTML), it’s fair to view any technical move with suspicion - even if the particular proposal seems technically sound. It’s easy to retroactively view Netscape’s resistance as petty, but the power imbalance was real, and the fear of Microsoft co-opting the standard wasn’t paranoia.
Some companies make abuse a business model. I don't see how anyone can defend a position where they only look at isolated actions of a company and not their overall strategic positioning. There are boundaries. Ethical boundaries. If you never experience the consequences of your actions, if nobody ever objects to your behavior, you will not stop. Especially not a distributed organism of a company, which has no inherent ethical boundaries; its boundaries are those that affect business, so you need to teach them in business. If your business model is based on treating your own employees like slaves, it is you who is cancer, not the other.
Calling that “kid-like behavior” is misguided on two levels. First, as noted, Netscape’s actions were arguably rational in context - pushing back against a powerful incumbent trying to steer an open standard toward a proprietary implementation.
Second, the phrase itself leans on a dismissive and inaccurate stereotype. Kids aren’t inherently irrational or overly emotional; in fact, there’s substantial research showing that young people behave quite logically given their environment. Framing behavior this way isn’t just lazy; it reinforces the kind of condescension that later gets labeled as “adverse childhood experiences” in therapy, assuming someone even gets the chance to unpack and not replicate it.