The US is the security guarantor of the world's shipping lanes. This is a cornerstone of US-led globalism. Read any geopolitical commentator or book and it will be there.
Could be interesting to see performance distribution for random strategies on that stock universe as a comparison. The reverse could also be interesting: how do the models perform on data that is random?
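A minimal sketch of what that random baseline could look like, using synthetic returns (the universe size, horizon, and return parameters here are all made up); in practice you'd feed in the same historical data the models were evaluated on:

```python
# Sketch of a random-strategy baseline: draw many random equal-weight
# portfolios from the stock universe and look at the distribution of
# their returns, to see where a model's performance actually falls.
import numpy as np

rng = np.random.default_rng(0)

n_stocks, n_days = 500, 252
# Placeholder returns; replace with the real historical data the models saw.
daily_returns = rng.normal(0.0004, 0.02, size=(n_days, n_stocks))

def random_portfolio_return(k: int = 20) -> float:
    """Total return of an equal-weight portfolio of k randomly chosen stocks."""
    picks = rng.choice(n_stocks, size=k, replace=False)
    return float(np.prod(1 + daily_returns[:, picks].mean(axis=1)) - 1)

samples = np.array([random_portfolio_return() for _ in range(10_000)])
print(f"median {np.median(samples):.2%}, "
      f"5th pct {np.percentile(samples, 5):.2%}, "
      f"95th pct {np.percentile(samples, 95):.2%}")
```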
Because if I don't intend to sell right now, and the company is otherwise a healthy going concern that can pay sustainable dividends, the actual share price is irrelevant to me. If anything, given my belief in the company, a lower share price is better. I can buy more shares!
But you now own a larger percentage of the company, because you hold the same number of shares out of a smaller total outstanding, so you benefit whether you are a seller or a holder. If you intend to buy more, it is neutral: the price per share goes up, but each share represents proportionally more.
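A toy illustration of that mechanic, with made-up numbers, under the simplifying assumption that the market value of the business is unchanged by the buyback (in reality the cash spent leaves the company):

```python
# Toy numbers (made up) for the buyback mechanics described above.
equity_value = 50_000_000
shares_outstanding = 1_000_000
my_shares = 10_000

stake_before = my_shares / shares_outstanding      # 1.00% ownership
price_before = equity_value / shares_outstanding   # $50.00 per share

bought_back = 100_000                              # company retires 10% of shares
shares_after = shares_outstanding - bought_back    # 900,000 remain

stake_after = my_shares / shares_after             # ~1.11%: a larger slice
price_after = equity_value / shares_after          # ~$55.56: higher price per share

print(f"stake {stake_before:.2%} -> {stake_after:.2%}, "
      f"price ${price_before:.2f} -> ${price_after:.2f}")
```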
If you ever want to sell, then getting (in the limit) nothing for the shares might matter, no? There are other things too: for example, share-based M&A, stock compensation, or other investors with different preferences. Do those have no relevance or interaction?
All fair points. Share-based M&A can be good for investors. But if the stock price is going up because the company spent money on buybacks, then the company could also just pay cash for M&A and skip the buybacks.
Higher compensation is good for employees who get paid in stock, and for upper management, who are nearly always paid largely in stock. There's an argument that this is good for shareholders because of better retention. But if that were the case, why not just pay employees more cash?
Are there many investors who are never sellers (which is different from just not selling soon-ish)?
Paying cash could be quite different from paying in shares for M&A.
If using shares makes no difference compared to cash (whether for employees or in M&A situations), then why not do buybacks, since there is no difference between cash and shares anyway?
Not always, or only in a limited way in some areas (astrophysics comes to mind, where we, e.g., cannot (yet) create a star under controlled conditions). Testing hypotheses or predictions against observations is also a valid method.
Science is not about where you test hypotheses, but how. Astrophysics builds falsifiable models and checks them against reality, such as spectra and gravitational waves. When predictions fail, theories change.
If economists worked like that, they would check whether their equations match the economic universe they live in. :-) Instead, they conclude the agents just “did not behave rationally enough”.
That's just not true. The models have real predictive power; they just have limitations. Behavioral economics, which tackles this frontier, is still a growing field. Thaler won the prize in 2017, and Kahneman won it in 2002, for building the bridge between economic theory and individual decision-making (much of Kahneman's foundational work was done with Tversky, who died before the prize was awarded).
At the risk of being inflammatory: these arguments are the equivalent of saying that Newton didn't really do physics because his models of mechanics break down at high enough speeds and small enough scales.
By way of example: if there's a backdoor key, it can be stolen or misused. Witness the many examples of companies that collect too much data and have that data stolen, and the many examples of police departments whose staff abuse their databases for personal stalking and similar misuses.
Any other backdoor mechanism can similarly be breached or misused. There is no such thing as a backdoor that can only be used for what it is "supposed" to be used for.
Depending on the level of security you need, there are any number of steps people could take or the industry as a whole could take:
- Don't allow remotely installing things on a device; only allow installation with physical presence at the device.
- Have "binary transparency" mechanisms to make sure that you're seeing the same binary everyone else is, and you're not getting served a special backdoored version nobody else sees. (This doesn't prevent global backdoors, of course, but those are more likely to get caught.)
- Relatedly, have multiple independent app stores in different jurisdictions, and make sure they are serving identical binaries (see the sketch after this list). That ensures no one jurisdiction can surreptitiously demand and enforce a backdoor.
- Have signatures from the original app author that can be verified, and ensure that intermediaries (e.g. "app stores") can add signatures but can't add anything to the package that's not covered by the original signature. That reduces the number of parties you have to trust.
- In an ideal world, only install Open Source software that's reviewed and subject to multiple independent reproducible builds.
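A rough sketch of the cross-jurisdiction transparency check from the list above. The mirror URLs and package name are hypothetical, and a real system would also verify publisher signatures, not just compare hashes:

```python
# Verify that independent mirrors all serve a bit-identical binary.
import hashlib
import urllib.request

# Hypothetical app stores in different jurisdictions serving the same package.
MIRRORS = [
    "https://store-us.example.com/app-1.2.3.pkg",
    "https://store-eu.example.com/app-1.2.3.pkg",
    "https://store-asia.example.com/app-1.2.3.pkg",
]

def fetch_digest(url: str) -> str:
    """Download a package and return its SHA-256 hex digest."""
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def binaries_match(urls: list[str]) -> bool:
    """True only if every mirror serves a bit-identical binary."""
    digests = {url: fetch_digest(url) for url in urls}
    if len(set(digests.values())) != 1:
        for url, digest in digests.items():
            print(f"{digest}  {url}")  # flag the mirror serving a different build
        return False
    return True

if __name__ == "__main__":
    print("consistent" if binaries_match(MIRRORS) else "MISMATCH: investigate")
```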
> In high security settings, just don't allow devices in.
That's appropriate for a SCIF, not for someone's day-to-day life.
> Also, democratically authorized state actors have a valid role to play in liberal democracies.
They still don't get to have backdoors into everyone's device.
Also, many, many events throughout history demonstrate that "democratically authorized" is in fact laughably bad at curtailing abuses of power, and is not a substitute for a sacrosanct right to privacy that's systematically enforced through both legal and technical means.
Make devices secure. When people tell you to make them insecure, refuse.
Not sure why we are talking about everyone's device now, or even about a backdoor as such, if it might even need access to the device to interfere with it? (My initial post wasn't about mass surveillance.)
If you look at history, though, I'm not sure why technical measures would offer much protection against violence-based attacks on privacy.
When you said "If you can install stuff on the device, how could you protect against it?", that sounded like you were saying that a device which can have new software installed can also have a backdoor installed for later use, and that led into a discussion of how to protect against that.
Were you instead asking "on a device you have control over, how can you protect yourself against that?" Or something else?
> If you look at history, though, I'm not sure why technical measures would offer much protection against violence-based attacks on privacy.
They can at a minimal level (e.g. steganography, duress passwords), but yes, ultimately there is little you can do against someone threatening you personally with harm.
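For instance, a duress password might work roughly like this. A minimal sketch with placeholder secrets and a stubbed-out wipe step, since real key destruction is platform-specific:

```python
# Sketch of a duress-password check: the real password unlocks normally,
# the duress password silently destroys key material before "unlocking".
import hashlib
import hmac
import os

SALT = os.urandom(16)  # in a real system, stored alongside the hashes

def pw_hash(pw: str) -> bytes:
    # Slow KDF so offline guessing is expensive.
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), SALT, 200_000)

REAL_HASH = pw_hash("correct-horse")     # placeholder secret
DURESS_HASH = pw_hash("battery-staple")  # placeholder secret

def wipe_keys() -> None:
    """Destroy disk-encryption key material (platform-specific in practice)."""
    pass

def unlock(entered: str) -> bool:
    h = pw_hash(entered)
    if hmac.compare_digest(h, DURESS_HASH):
        wipe_keys()  # silently destroy secrets, then appear to unlock
        return True
    return hmac.compare_digest(h, REAL_HASH)
```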
Sorry, I meant it as in someone else installing surveillance on a device / bugging it, not as in installing something with a backdoor for some possible later use. That is, targeted surveillance of a specific device under a warrant etc.: something that can involve a lot more effort per device, potentially physical access to it, and not some broad-based thing.
And, yes, there can be a debate about whether such warrants are desirable, but I think it's quite different from broad-based backdoors for potential mass surveillance, and liberal democracies can decide to have such tools available.
Ah, got it! Sorry for us talking past each other there, then.
I think part of what I was suggesting still applies there: if you can't install things remotely, and you have oversight to ensure that any backdoor applied to you has to have been sent to everyone (which provides a measure of security), then what's left is physical control of the device. For instance, keeping the device in your possession, locking it down against peripherals...
That can't prevent you from being physically coerced, but it provides a lot of security in every other case. And if things have devolved to the point of you being physically coerced then you have much worse problems.
Too vague. More like surveillance of telecommunications at the source (by means of trojaning the source/device, or using existing backdoors at whichever layer of the stack).