> If Corporations are people, and they commit felonies, maybe the US can assume control of the corp?
"Corporate death penalty" was a term people used to a use a lot.
[It turns out it's already a real thing](https://en.wikipedia.org/wiki/Judicial_dissolution), but push-back from shareholders in response to attempts at accountability predictably led to nothing happening.
I think the response is to make shareholders more legally, and especially inescapably, responsible for the companies they jointly own. Neither ignorance nor the fact that _no-droplet-blames-itself-for-a-flood_ should be an excuse[1].
[1] Though I do think about how this might discourage _activist investors_ who willingly buy shares of immorally/amorally-acting publicly traded companies and then attempt to steer them in the right direction - so perhaps voting shareholders who can demonstrate a clear and consistent record of voting for the public good, to the extent their shares permitted, could be exempt from prosecution?
“public good” is a highly variable term, though. While one person might argue that (example only) Palantir is the grossest public evil imaginable, another person could argue that they are a public good, on the basis of helping law enforcement.
How do you derive a legally acceptable definition of public good in investment? An evil person could invest just as much as a supposedly good person, intending only to make money and remaining amoral about everything else related to the investment.
Not only that, but I do not know of an open source phone that is commercially available and has been audited to not do nasty things behind your back. I am not sure about the progress of Librem, and it is a sad state of affairs that we only have that, or will have that. Those phones are spy devices; I look at them the way I look at Windows 10. I am never, ever, going to trust it. When I take a photo, I expect at least the thumbnail of it to end up on some server. Am I too paranoid? With Google, or Chinese crap, I do not think so.
Of course you could take a similar argument or stance when it comes to desktop, and indeed, just look at all those CPU vulnerabilities. I think technology is in a really sad state. Most things are closed source (what does your keyboard do?![1]), and we have no goddamn way of knowing what is happening behind our backs unless someone reverse engineers it, and the fact that it has to be done at all is an issue to me.
Hi Jacob. Paul Graham's comments are only accurate if you apply reflective practices and gain the learning. There might be some learnings, but they are inconsistent, weak and easily lost. If you only have one conversation a day, then you'll probably think about it later that day. Taking notes allows you to reflect on it later, especially if you have a busy schedule. Jarvis talks about this, but you'll see reflection being promoted as a necessity in many learning theories.
Need to bring back some of those brownbag lunch & learn events, starting with "SPICE up your clocks"! Except people might not be comfortable sitting in the same room these days
Comfort won't be much of a factor since most NASA centers are completely work from home for non-mission-critical work for the indefinite future. Nonetheless, we do have a ton of these kinds of things going on all the time. The bigger problem is that NASA's culture is so meeting heavy that it's hard to justify these kinds of low importance events on my calendar.
> The bigger problem is that NASA's culture is so meeting heavy that it's hard to justify these kinds of low importance events on my calendar.
Not sure if you are speaking from experience, but if you are, and this is true, then NASA as an organization seems to be a poor judge of what's important.
Yes, from experience. It's not that we don't know what's important, it's that it's all important and I have to find time to actually write code every now and then too. The real problem is just that we're all very busy and it's not really possible for everyone in the agency to have even a cursory awareness of everything that's going on.
Even now with everything being virtual, I get invites to brown-bag type events probably 3-4 times per week. I can go to one or two, but there's no way I can fit every single one into my schedule.
There are a couple of issues at play. One is that there's so much interesting stuff going on that I (and most of my colleagues) tend to volunteer ourselves for lots of projects, which results in us going "deep" on a couple of projects at the expense of not having enough time to go "wide" by attending lots of talks that aren't directly related to our own work. That part's on me and I really could make more time in my schedule if I just said "no" more often (but again, I don't say "yes" because I'm pressured to do so, I say "yes" because the work is really fun and interesting!).
The other question is why we have so many meetings per project. That one's tougher to tackle and I don't really have an answer. The one thing that I have identified is that we're very into making sure that everyone has their say on things before taking action. That's a good thing overall, but it also leads to tremendous meeting bloat. A meeting that would only take 5 minutes if everyone restricted themselves to discussing critical issues before making a decision regularly takes a full hour in which we examine every pitfall, no matter how minor.
Interesting, it sounds like in this case it's mostly a good problem to have - even ensuring all voices are heard is good, from a sound engineering perspective, which I'd imagine is important for NASA's missions!
I wonder if there would be a benefit in your case if folks had to follow a similar process to the one Amazon requires for its meetings (this may be something NASA already does, but I will admit I'm projecting a bit here and am wishing my organization required some writing before calling for a meeting):
> We don’t do PowerPoint (or any other slide-oriented) presentations at Amazon. Instead, we write narratively structured six-page memos. We silently read one at the beginning of each meeting in a kind of “study hall.” Not surprisingly, the quality of these memos varies widely. Some have the clarity of angels singing. They are brilliant and thoughtful and set up the meeting for high-quality discussion. Sometimes they come in at the other end of the spectrum.
> ...
> Here’s what we’ve figured out. Often, when a memo isn’t great, it’s not the writer’s inability to recognize the high standard, but instead a wrong expectation on scope: they mistakenly believe a high-standards, six-page memo can be written in one or two days or even a few hours, when really it might take a week or more! They’re trying to perfect a handstand in just two weeks, and we’re not coaching them right. The great memos are written and re-written, shared with colleagues who are asked to improve the work, set aside for a couple of days, and then edited again with a fresh mind. They simply can’t be done in a day or two. The key point here is that you can improve results through the simple act of teaching scope – that a great memo probably should take a week or more.
EDIT: Looks like I was wrong when I thought IPv6 was backwards compatible
IPv6 is backwards compatible, so why would we "finally roll off IPv4"? IPv6 has been "here" in production for over a decade, and has seen slow adoption. I suspect it may be another 8-10 years until IPv6 gets close to a majority, but wouldn't IPv4 still be in use?
What would force a roll off of IPv4?
Either way, I too would like to know more about how IPv4 space has appreciated/depreciated in the past decade
In what way is it backwards compatible? It's not as if all allocations are kept and people just add 96 bits to their existing addresses. They are completely disjoint address spaces and protocols, as compatible as Tor is with the general internet: to communicate between the two, you need a proxy.
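To make the split concrete, here's a minimal sketch in Python (the port number is arbitrary) of why a server has to opt in to each protocol separately; nothing about an IPv6 socket automatically reaches IPv4 clients:

    import socket

    # An AF_INET6 socket speaks IPv6. Some OSes will map IPv4 clients into
    # it (as ::ffff:a.b.c.d addresses), but that's an OS translation shim,
    # not protocol compatibility. With IPV6_V6ONLY the split is explicit:
    srv6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    srv6.bind(("::", 8080))

    # IPv4 clients now need an entirely separate AF_INET socket.
    srv4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv4.bind(("0.0.0.0", 8080))

    for s in (srv6, srv4):
        s.listen()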
The only realistic reason to get rid of IPv4 is unreliability, if it ever becomes significant.
15-20 years ago IPv6 routing was often tunneled over v4, and it was noticeably less reliable than v4. Nowadays v6 is a touch faster/more reliable than v4, not very much. If that difference widens much and most users have v6 anyway, then v4 might get the axe. Otherwise... I don't think so.
FWIW I already operate some v6-only services, though not for the general audience: an SSH bastion host, a backup service, that kind of thing, not web sites. I'm not in the least surprised if some warez topsite is v6-only already.
IPv6 is not backwards compatible - it is a completely separate networking stack for all devices. And routers that speak only IPv4 have no idea what to do with IPv6 traffic. Similarly, IPv6-only endpoints cannot communicate with IPv4 endpoints without some sort of 6-to-4 gateway between them.
There shouldn't be any IPv4-only routers in the wild anymore. That isn't to say they aren't out there, only that anyone who has one has been negligent in not updating/replacing it within the last 15 years or so. IPv6 is not a new thing; it was already live in 1996 (my memory might be a little off, but not much). Maybe you have turned off IPv6, but it should be supported, and any competent IT department should have a plan to turn it on (it might be a 5-year plan without any budget, but the plan should exist). Likewise you should have a plan to turn off IPv4 (if this plan has a budget, it should be paid for entirely by the sale of your existing IPv4 block - the cost is mostly the business cost of IPv4-only customers being unable to reach you, NOT the technical cost of updating all your servers).
How would it have possibly been backwards compatible? Plenty of routers and IP-aware switches have specified, in hardware, that IP addresses are 32 bits, so anything that added more bits would necessarily break existing hardware, and thus not be backwards compatible.
That's not to mention all the software that has similarly hardcoded the number of bytes in an IP address.
How could we possibly have made IPv6 backwards compatible?
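For a flavor of what "hardcoded" means in practice, here's a minimal sketch (a hypothetical parser in Python; the addresses are documentation examples) of the assumption baked into software and silicon alike: addresses live at fixed offsets and are exactly 4 bytes wide:

    import struct

    def parse_ipv4_addrs(header: bytes):
        # IPv4 header: source address at bytes 12-15, destination at 16-19.
        # The fixed "4s" widths and offsets are exactly the kind of thing
        # that can't stretch to 128-bit addresses without breaking.
        src, dst = struct.unpack_from("!4s4s", header, 12)
        return ".".join(map(str, src)), ".".join(map(str, dst))

    # 20-byte header, zeroed except for the two addresses:
    hdr = bytes(12) + bytes([192, 0, 2, 1]) + bytes([198, 51, 100, 7])
    print(parse_ipv4_addrs(hdr))  # ('192.0.2.1', '198.51.100.7')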
I worked on the first switch ever sold, in 1995 - it didn't support IPv6, but I assure you all the engineers working on it were aware of IPv6 and assumed IPv4 would be long gone by today. For reference, back then we were more concerned about how the switch handled IPX (Novell Netware) than IP. Everything since then was designed in an era where IPv6 was coming soon enough that, if you didn't support it natively, you made sure you could add support with a software update.
That isn't to say IPv6 couldn't have been done in a backward compatible way. I can think of ways to do that, and dozens of pros and cons - even though I haven't been in networking for 20 years and so I've forgotten a lot.
> ensure you can support it with a software update if you don't support it
For hardware, that doesn't sound that different from not supporting it. Convincing end-users of internet hardware to update their switch's firmware is hard. Ubiquiti has done a pretty good job of making updates actually doable, but for most hardware I doubt it'd happen more quickly than the hardware itself would fail.
> That isn't to say IPv6 couldn't have been done in a backward compatible way. I can think of ways to do that
6to4 (2002::/16) was basically backwards-compatible IPv6. If the entire Internet had been allocated from that space, IPv6 would've been easier to deploy.
But it's technically quite ugly, and tunnelling/MTU-related problems would be more prevalent.
6to4 provides a standard way to tunnel IPv6 traffic at the perimeter over an IPv4 backbone, essentially assigning a range of IPv6 subnets to every public IPv4 address, but it doesn't provide full backward compatibility, and it still depends on proxies at the border of the IPv4 network. The applications at the endpoints need to support IPv6—so all the existing software still needs to be updated—and you still need new protocols for address assignment (SLAAC or DHCPv6) and domain resolution (AAAA records), along with any other protocols that embed IP addresses. Most of this is an inevitable consequence of extending the address space; no amount of clever tricks can make IPv4-only applications, hosts, or protocols automatically compatible with larger IP addresses.
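To make the embedding concrete, here's a minimal sketch using Python's `ipaddress` module (192.0.2.1 is just a documentation address): the mapping from public IPv4 address to 6to4 prefix is purely mechanical.

    import ipaddress

    def sixtofour_prefix(ipv4: str) -> ipaddress.IPv6Network:
        """Derive the /48 6to4 prefix for a public IPv4 address."""
        v4 = ipaddress.IPv4Address(ipv4)
        # 2002::/16, then the 32-bit IPv4 address, leaving 16 bits of
        # subnet ID and a 64-bit interface ID for the site to use.
        return ipaddress.IPv6Network(((0x2002 << 112) | (int(v4) << 80), 48))

    print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48

    # The stdlib can recover the embedded IPv4 address from a 6to4 address:
    print(ipaddress.IPv6Address("2002:c000:201::1").sixtofour)  # 192.0.2.1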
6rd works on similar principles, but in a way that plays more nicely with ISP routing and traffic management policies. The main user-visible change is that a variable-length, ISP-specific prefix is used instead of 2002::/16.
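And a corresponding sketch of the 6rd flavor, assuming a made-up ISP prefix of 2602:100::/32 and the simple case where the full 32-bit customer address is embedded (real deployments can embed fewer bits to save space):

    import ipaddress

    def sixrd_prefix(isp_prefix: str, customer_v4: str) -> ipaddress.IPv6Network:
        """6rd: the customer's IPv4 address appended to the ISP's own prefix."""
        net = ipaddress.IPv6Network(isp_prefix)
        v4 = ipaddress.IPv4Address(customer_v4)
        shift = 128 - net.prefixlen - 32  # position of the embedded v4 bits
        return ipaddress.IPv6Network(
            (int(net.network_address) | (int(v4) << shift), net.prefixlen + 32)
        )

    print(sixrd_prefix("2602:100::/32", "192.0.2.1"))  # 2602:100:c000:201::/64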
There are a few other transition mechanisms and embeddings of the IPv4 address space, for example the ::a.b.c.d address format. They all suffer from the fact that two-way communication without a proxy requires each endpoint to be capable of representing the full address of its peer. For a node which does not have a public IPv4 address to communicate with a node which does not understand larger addresses, some third node must exist in between to perform protocol translation.
6to4 could've been released to billions of users, with no cooperation from the ISPs, just by home router manufacturers adding a few lines of code (maybe a few dozen, for health checking).
Applications would still need to change, but apps change a lot faster than ISPs.
6to4 died because it was never the primary mode of addressing. Performance between 2002::/16 and the rest of IPv6 relied on anycast, which was unpredictable and therefore useless for real traffic.
Placing the 6to4 endpoint at the home router assumes that the home router has a public, routable IPv4 address, which doesn't really do anything to solve the problem of address exhaustion. At best it might have been a slightly superior form of NAT, but not enough of an improvement to offset the reliability and performance issues or the lack of economic incentive for ISPs (or anyone, really) to operate 6to4 anycast gateways and potentially invite non-customer, and thus non-paying, traffic to route through their network. Considered as a transition mechanism, the fact that 6to4 works better for two 6to4 LANs communicating over IPv4 (where packets can be routed directly) than between 6to4 and native IPv6 (which requires anycast proxies) was a real deal-breaker. It encourages a dependency on tunnelling via IPv4 rather than a transition to a native IPv6 internet.
6rd, fortunately, has none of these drawbacks, though it does require support from the ISPs.
> How would it have possibly been backwards compatible? Plenty of routers and IP-aware switches have specified, in hardware, that IP addresses are 32 bits, so anything that added more bits would necessarily break existing hardware, and thus not be backwards compatible.
Easy, just declare that the entire IPv4 address space is ::x.x.x.x in IPv6 and Bob’s your uncle. No idea why they didn’t do this other than sheer solipsism.
> Easy, just declare that the entire IPv4 address space is ::x.x.x.x in IPv6 and Bob’s your uncle. No idea why they didn’t do this other than sheer solipsism.
They did [0,1]:
The "IPv4-Compatible IPv6 address" was defined to assist in the IPv6
transition. The format of the "IPv4-Compatible IPv6 address" is as
follows:
| 80 bits | 16 | 32 bits |
+--------------------------------------+--------------------------+
|0000..............................0000|0000| IPv4 address |
+--------------------------------------+----+---------------------+
Note: The IPv4 address used in the "IPv4-Compatible IPv6 address"
must be a globally-unique IPv4 unicast address.
The "IPv4-Compatible IPv6 address" is now deprecated because the
current IPv6 transition mechanisms no longer use these addresses.
New or updated implementations are not required to support this
address type.
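A quick check of that format with Python's `ipaddress` module (a sketch; 192.0.2.1 is a documentation address) shows the embedding, plus the "IPv4-mapped" ::ffff:0:0/96 variant that survived deprecation:

    import ipaddress

    # IPv4-compatible: the v4 address sits in the low 32 bits, all else zero.
    compat = ipaddress.IPv6Address("::192.0.2.1")
    print(compat.exploded)  # 0000:0000:0000:0000:0000:0000:c000:0201

    # The IPv4-mapped form (::ffff:a.b.c.d) is what stuck around, and the
    # stdlib can unwrap it back to the original IPv4 address:
    mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
    print(mapped.ipv4_mapped)  # 192.0.2.1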
The problem isn't supporting IPv6, the problem is often having to support both 4 and 6 simultaneously. There's just a ton of work to get a single IP stack working on infrastructure, and it's more than twice the work to get 2 working in tandem.
Is it really? In 2020? What examples are there? I worked on network switches at a major switch manufacturer and our IPv6 setup mostly just came along for the ride (or at least I thought it did - maybe our customers thought differently!)
(Genuinely interested in hearing war stories from the front on this)
That's just for IPv6 support, which is beta, and took years to get there. Dual Stack is still in alpha, and that didn't appear until a really recent release (1.16).
I think routers and switches are probably the easier pieces of the puzzle, at least as far as managing them goes. The vendors have worked out all the kinks (hopefully?) before the equipment gets to you.
That's just to get the stuff to work. Then you have corporate/government policies and validation. Then you have to solve problems like "My VM resolves things to IPv6 by default, but I have no IPv6 gateway so everything times out". And then make sure that logic makes it up through your entire stack. Multicast? Not allowed on many networks.
Is it the end of the world? No. It's just a lot of extra work for everyone.
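The "resolves to IPv6 but can't route it" failure mode is easy to sketch; this hypothetical helper (Python, sequential rather than the parallel races that Happy Eyeballs actually specifies) shows the extra logic every dual-stack client ends up needing:

    import socket

    def connect_with_fallback(host, port, timeout=2.0):
        """Try each resolved address in order (AAAA records typically come
        first), falling back to the next family instead of hanging forever."""
        last_err = None
        for family, type_, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM
        ):
            s = socket.socket(family, type_, proto)
            s.settimeout(timeout)
            try:
                s.connect(addr)  # a broken v6 path fails here; next entry is tried
                s.settimeout(None)
                return s
            except OSError as err:
                last_err = err
                s.close()
        if last_err is not None:
            raise last_err
        raise OSError("no usable address for %s" % host)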
Yeah - that’s been my intuition, too. First we got software support in switches. Then OS network stacks started to support it. Then we got hardware switching chips that supported it. But the application and deployment layers just seem to be super hard. Just look at how hard it’s been to get IPv6 on AWS.
This also reminds me of university costs. They're allowed to increase prices non-stop, but then the politicians want to just cancel student debt. They probably should have just capped the prices instead.
In an ideal future, of course.