We wouldn't even need these services if instant payments were the norm. I guess we have to thank the Visa and MasterCard lobbies for making this happen at least 10 years too late.
Yeah, that's pretty much the only reason people use Wero: transferring money between people faster than a snail. That niche was filled by the likes of Lydia before, but their shenanigans trying to become a bank pushed people to Wero (which is indeed a rename of something else whose name escapes me, and which I used for less than a year).
The real deal is the card payment networks that your plastic thing can use at a merchant's point of sale. All the rest is moot, as we already have SEPA for e.g. online payments (it does have its issues for sure, but it's something).
Is it backed with a large amount of money to hedge against the risk that the privacy or security could be flawed in some unexpected way? Are there people lobbying for its adoption at scale who can make cast-iron assurances about liability? Business people do not rely on technical proofs because they don't want to become cryptography/software engineering experts.
I've been using Firefox on Android for over a decade, including for YouTube, and maybe once a year I encounter a problem where I need to use Chrome for a specific website.
We're only discussing one particular website, though, aren't we?
Except on the TV (where I use SmartTube), all of my YouTube activity is done with web browsers.
Otherwise: on big computers and my pocket supercomputer alike, that means Firefox and uBlock Origin. It works quite well for navigating YouTube's website and watching videos.
An old iPad that I have suffers from Apple's deliberately baked-in lack of choice, but it does handle YouTube's website very well with Safari and AdBlock.
It has been a very long time since I've used YouTube's app on any device at all.
I used to have dermatophagia, up until 2021. Every time my skin grew back I would rip it off, in a never-ending cycle.
During the Covid-19 pandemic, I was working from home and wore cotton gloves while working. I would catch myself gnawing at the gloves, and after a few days I lost the habit and my fingers were fully healed.
I had always suspected I would lose the habit if my fingers were healed.
It certainly wasn't the worst case of dermatophagia compared to some on the Internet, but it was noticeable and made me feel bad.
> but basing your social interactions and how you see others and yourself on this stuff might not be the best thing to do!
Why not? Just knowing that there are such different ways of thinking is useful, especially when interacting with people.
I was a guesser until maybe 2 or 3 years ago, when I talked about it with friends and family, and I only learned today that these are called "asker" and "guesser".
If you spend time with people from different cultures, there clearly is a stark divide in behavior. Even within a given culture there might be situations in which someone becomes an asker.
Therefore this framework is useful for understanding how people think socially and for building a better understanding of one another. Some people may think you are rude to ask, or an idiot not to, and you will probably lose relationships if you don't realize it.
> Just knowing that there is such different ways of thinking is useful
We agree. People have different ways of thinking and interacting. Maybe that asker-vs-guesser thing made you or others realize that (and that's good! Possibly it made me realize that too, although having a flatmate years before had already done the trick, tbh), but we didn't need it to know this.
> there clearly is a stark divide in behavior
How are you sure it's not confirmation bias [1]? When you have a hammer, everything looks like a nail. When you have an asker-guesser theory, everybody looks like an asker or a guesser, including yourself.
Odds are it is, in fact, confirmation bias, since that theory was found to be unsubstantiated and underdeveloped, and since this is a sexy topic, it's hard to believe nobody tried to validate it rigorously (and the way scientific publishing is currently organized sadly doesn't encourage publishing negative results).
> Why not?
Because apparently, from what we actually know (robust, established knowledge), there's no good reason to think the following is actually true, even if it strongly feels like it for a host of reasons, which is my whole concern:
> this framework is good to understand how people think socially and have a better understanding towards one another
It's too easy to pick two half-convincing categories that feel somewhat opposite and get the feeling that these two categories provide insight into how people work. Such theories are sugar for the brain.
I'd be most happy to be proven wrong in the future though! In the meantime, I'll pick cautiousness.
I agree with what you say regarding confirmation bias, but then how do you separate that from what is considered scientific consensus? What I mean is that Newton's law is not scientifically accurate anymore (it's good enough, though), but the fact that it validated what we observed (i.e. gravity) is also confirmation bias.
What I'm getting at is that there is a fine line between confirmation bias and scientific theory. I hope I made sense, lol
Ooh that's a good question, how do you control for confirmation bias in studies?
I'm a bit embarrassed to have to admit that this goes beyond my knowledge. I'm sure there are answers to this; it must be well known in these areas of research. We also know that research itself can be biased. I'll have to ask friends working on these topics! Thanks for the interesting discussion; I'll probably keep thinking about this in the future.
On this topic specifically though, that meta-analysis concluded there was a lack of evidence despite the potential confirmation bias (unless the authors of the meta-analysis were already suspicious about the theory… oh well… one can hope that their following the scientific method provides strong enough guarantees. It's not completely bulletproof, but it's the most reliable thing we have. I'll ask for sure!).
> but the fact that it validated what we observed (i.e. gravity) is also confirmation bias
Pretty sure that's wrong. The way it works is: we have this equation. It predicts where we expect such stuff to be in X seconds. In X seconds, we check it's indeed there. It's there: actual confirmation, not confirmation bias. That's how you check your hypothesis. Of course the initial hypothesis comes from intuition… formed by observing the world. Enough confirmations make your model more reliable, and it's the thing that will be used until a counterexample shows up and a better model is found. Even then, the model can still be used for cases where we know it does the job; Newton's model is simpler to use than Einstein's, so we keep using it.
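To make that concrete, here's a toy prediction-then-check example (the scenario and numbers are mine, purely for illustration): free fall from rest under Newtonian gravity.

```latex
% Toy "predict, then check" example with made-up numbers (illustration only).
% Newtonian free fall from rest: position after t seconds.
\[
  x(t) = \tfrac{1}{2} g t^2, \qquad g \approx 9.81~\mathrm{m/s^2}
\]
% Prediction: after t = 2 s the object should have fallen
\[
  x(2) = \tfrac{1}{2} \times 9.81 \times 2^2 \approx 19.6~\mathrm{m}.
\]
```

If repeated, independent measurements keep landing close to 19.6 m, that's genuine confirmation of the model; confirmation bias would be quietly ignoring the runs that don't.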
I guess if you have a solid enough hypothesis, it also works like this in the human sciences.
> Pretty sure that's wrong. The way it works is: we have this equation. It predicts where we expect such stuff to be in X seconds. In X seconds, we check it's indeed there. It's there: actual confirmation, not confirmation bias.
Exactly. My point is that since Einstein's theory, we know that Newton's law is incomplete, which proves that it was confirmation bias (i.e. that our equations just confirmed what we observed). Once we observed black holes, we knew that Newton's model was incomplete, as it couldn't fully explain their behavior.
> i.e. that our equations just confirmed what we observed
No, no, it's the opposite, and it's key! What we had been observing kept matching what the equations gave us "so far". Without cherry-picking, or refusing to see the cases where the model doesn't apply (consciously or not), which would have been confirmation bias.
We did, in fact, question the model as soon as we noticed it didn't apply.
Confirmation bias implies "cognitive blinkers"; I don't think that happened in this Newton-vs-Einstein story.
But I agree the confirmation bias risk is not very far away. It's an issue in the general population, and it's also likely a big issue in research.
Don't we come up with the equations after observing a phenomenon? It wouldn't make sense to try to explain something before observing it.
For example, after observing black holes we understood that Newton's model was not enough to explain them. Thus we had to find another theory that explained our observations. Now with quantum mechanics we know that Einstein's theory is insufficient, too (I'm not very knowledgeable on quantum physics myself, though).
There's definitely a "seeing an apple fall and intuiting a hypothesis" step early in the research process, which somehow leads to formulating the equation as a hypothesis.
So you observe stuff, intuit, and formulate a hypothesis. The hypothesis is a model that you hope matches how the world works well enough. Developing a scientific hypothesis takes scientific rigor. Among other things:
- it needs to be testable (it needs to be possible to design some scientific protocol to check the hypothesis)
- it needs to be formulated before you start experimenting and collecting data (that doesn't mean you can't observe your world before, you just can't use these observations in the data that backs your thesis)
- it needs to be rooted in existing science and knowledge; it's not a simple "naive" guess. It certainly takes being deeply familiar with the research area.
Then you test your hypothesis with experiments. You design a significant number of them. You must not cherry-pick here, that would be confirmation bias (but you can encode the limitations of the model in the previous step). You predict your expected results with the model you have in your hypothesis. You run your experiments, make your measurements, compute the deltas. Here too, you must not discard or tweak the results to your liking. That would be cherry-picking, or even outright falsification. If the deltas are small enough, and people review your work, and ideally reproduce it (same experiments, or other experiments), eventually a consensus starts forming around your model. Congrats, the model is validated.
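A rough sketch of that predict / measure / compare-the-deltas loop, with made-up measurements and an arbitrary tolerance, just to show the shape of the check:

```python
# Toy sketch of the "predict, measure, compute the deltas" loop described above.
# The measurements and the tolerance are made up for illustration.

G = 9.81  # m/s^2, gravitational acceleration

def predicted_position(t: float) -> float:
    """Position (metres) predicted by the model (free fall from rest) after t seconds."""
    return 0.5 * G * t**2

# Pretend these come from independently designed experiments: (t in s, observed x in m).
measurements = [(1.0, 4.8), (2.0, 19.7), (3.0, 44.0)]

TOLERANCE = 0.5  # acceptable delta in metres, fixed *before* looking at the data

for t, observed in measurements:
    predicted = predicted_position(t)
    delta = abs(observed - predicted)
    verdict = "ok" if delta <= TOLERANCE else "model questioned"
    print(f"t={t:.1f}s predicted={predicted:.2f}m observed={observed:.2f}m "
          f"delta={delta:.2f}m -> {verdict}")
```

The important part is that the predictions and the tolerance are fixed up front; dropping or tweaking the points that come out as "model questioned" is exactly the cherry-picking described above.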
So, people start to use your model. They do exactly the opposite of what you did when you formulated your hypothesis: they don't try to come up with a model from preliminary observations, they assume the model works, and they use it to predict the future.
Until Einstein comes along :-). And stumbles upon a black hole: an observation that doesn't match. Then your model gets refined (with limits and restrictions) or even "deprecated".
But yeah, physicists model the world after the observations they make, not the other way around. Otherwise they are doing something else. Math, maybe, or philosophy, or whatever. It's just that designing the model is only the beginning; you have to check that it works… with carefully selected observations… before it can be validated.
With this asker-vs-guesser thing, we don't have convincing work that provides the validation step. This means asker-vs-guesser is a hypothesis, at best (at best, because we don't know whether the things required to formulate a scientific hypothesis have been respected).
I was born a guesser but evolved into an asker. However, it depended on whether I was the requester or not; if I wanted to invite someone, I would try to avoid putting them in a position where they had to say "no". That said, I didn't mind saying "no" myself.
I would argue with other people that it's impolite to put someone in such a position, as they may not like to decline.
After discussing it openly with friends and family, I realized that it was okay to say no and people wouldn't mind. This changed me into an asker.
What's funny is that my parents were askers. I guess being introverted made me more of a guesser initially.
Seems like the researchers are not interested in telling you that, however.
Children spend most of their time indoors; would you want something to happen to them outdoors? Surely not, right?! But surely it must be those pesky screens.
If that's not clickbait, then that's a hugely sensationalist title for something that is at best ignorant and at worst misinformation; I don't usually flag posts, but I'm flagging this one.
Can we have a lens covering the entire display that collimates the light so you're actually focusing 1-2 m away or at infinity, like in a VR headset?