Hacker News | _bkyr's comments

For a story like that how do you know "no one" went there? Do you read every newspaper and watch every news report?

It's been debunked to death; it was a rumor posted to Facebook and then parroted by right-wingers looking to gain votes (it worked). It's theater of the absurd that we're talking about journalists not doing their job rather than the president, who either lied about it or fell for it and amplified it.

https://en.wikipedia.org/wiki/Springfield_pet-eating_hoax

https://www.cnn.com/2024/09/10/politics/jd-vance-haitian-imm...


My guess is you can trace a lot of those publications to a handful of conglomerates that are there for ROI and nothing else.


It's also directed by one of the best directors in history.


> Most people have terrible eyes for distinguishing content.

But also, in the case of the fluffy train, there's nothing to compare it against. The reason CGI humans look the most fake is that we're trained from birth to read a human face. Someone who looks at trains on a regular basis will probably spot this as fake quicker than most.


I'm wondering that as well, but I also wonder if it's a bit like CGI, where it has somewhat hit a limit on realness. I'm not saying CGI doesn't get better, but is a 2024 Gollum that much more realistic than a 2004 Gollum? Maybe I'm wrong, but I wonder if that plastic feel to AI lessens yet still sticks around.


> But debanking happens, or has happened, to _almost_ everybody who has some assets and cash, probably from the higher middle class until the ~1%

Can you back up this claim? The wealthier you are, the less likely you are to be debanked. Marc Andreessen and all the crypto bros will never have an issue with debanking. Banks are rolling out the red carpet for him and everyone else with his net worth.

What they (1%) want is to own the bank and own (and create) the currency without ever having to be an actual bank. They invest in crypto to make a massive profit, and it's foolish to play along with these things as some sort of benefit to society. Together these guys could end world hunger and still have more money than they would ever need, but no, they've decided VBucks are a pressing issue.


It's funny how humans and companies have mined natural resources for theoretical progress, but we've reached the point where natural resources are being used to mine a fake resource that really serves zero purpose. What an absurd direction technology has gone.


Have you seen YouTube or Reddit comments on anything? They don't reflect real life. YouTube probably has the worst, most trollish comments of the "normal" internet, and Reddit is a sort of progressive propaganda machine that will also call for the death penalty for a teenager who did something stupid (which lots of teenagers do) or a Karen who was an asshole in a restaurant and isn't happy with her life.

I haven't visited Twitter in ten years or more, so I can't comment on that.


I've had conversations with real people in real life who also overwhelmingly share the sentiment. It's not an internet bubble.


Those people are probably reading the same websites as you.

I swear, users of a site have no kind of memory. Literally a month ago, every redditor's jaw was agape as Trump was declared the winner, after months of redditors telling each other that Kamala would win in a landslide, that Trump was floundering, that no one was coming to his rallies, etc.

And now they're doing the same thing, telling each other that everyone in America loves that a dude was murdered in the street in NY (with a gun, literally 3 feet away from a random individual).


Nah, this is just moving the goalposts from "it's an Internet bubble" to "it's your personal bubble". I don't buy it.

Neo-*ism ideologies share a common myth: the myth of necessary order. This forms a contradiction that, if enforced by more order, will self-correct with stochastic violence. Sorry, but that's simply reality.


"internet bubbles" don't exist solely on the Internet, they exist in the minds of people spending a lot of time on the internet on certain sites. They manifest on the Internet commonly. But they also manifest in the "real world" when those people communicate with others.

Go ahead and deceive yourself that you're not in an echo chamber.

I'm sure that will help you understand reality better.


You imply that you have access to some unvarnished truth, unlike the other commenters who are trapped in their bubbles. What is your justification for this belief? Is it possible for other people to have access to this higher truth, or is it something innate to you?


My justification, I guess, is repeatedly seeing various commenters on certain sites espousing the same things in almost the same language. Then some event happens in the real world and they all start asking themselves "how could this happen", again, commonly in similar language. Meanwhile, I'm shocked that those people didn't see X.

Sure, it's possible to acquire my magnificent skills, by just going out and interacting with a wide swath of people in the real world.

Or, instead of thinking that you're getting any kind of signal about society when you read the same meme comment on Reddit/Twitter or whatever, just assume you actually have no idea, even if you've read the 5000th tweet expressing the same idea.


But you could say that about anything including the opposite.

I see a bunch of people who say "CEO-killing is wrong" and conclude that therefore the others must be in an internet bubble. I think we should both admit that the consistency of a message isn't actually a good signal for "bubbleness", and that something like randomized polling on personal beliefs and perceptions, or a similar study, actually would be.


Bubbleness isn't about the viewpoint, it is about the difference between bubble perception vs global perception.

If you read reddit/twitter, a common statement was something like "The police will never be able to find him - no one will cooperate". "Must be hard to find someone when there are 150million suspects". Basically, treating him like a modern day robin hood.

When, back in real life, the news of the day seems to be that he was caught at a McDonalds after two random employees noticed him and called the police on him.


If you want to make a statement about bubbles, then you have to ground it in global perception which is operationalizable and empirically verifiable - speculation isn't epistemologically responsible.


I'm not particularly worried about verification or epistemological responsibility when something is manifestly obvious. For the same reason, I'm not jumping out of an airplane without a parachute even if I haven't read a study on the relative effectiveness of parachutes vs. no parachutes when jumping out of an airplane.


Sorry, I hate to burst your bubble, but social facts ESPECIALLY need to be tested before being assumed because humans are particularly susceptible to typification, legitimization, and reification.


It's easy for other people to have access to this higher truth, and most people do. Basically by definition, the sentiments which are truly "overwhelmingly shared" are the sentiments you can talk openly about with strangers without fear that you might leave a bad impression. I think the vast majority of people supporting the CEO's murder would immediately understand that they should not bring it up when meeting their in-laws for the first time or something.


I look at Brave as another business looking to take its cut as a middleman between users and creators. It's an ad network that takes a 30% cut, like Apple does from apps making over $1M.

I would much rather that creators who want to make money decide what they want to sell, and how they want to sell it. The web doesn't need a crypto tip jar layer.


Brave does not take 30% from creators who receive BAT in the rewards program.


I don't have an answer to this, but if the algorithm can manipulate people, regardless of who directs it, why are we ok with the algorithmic content of all the platforms? I'm not much of a social media user, but a lot of the argument here is that the algorithm can feed people propaganda they will succumb to.

Why is it ok then for Youtube to feed violence and awful behavior to people (probably to lots of kids) in the US if it's able to influence people? Is the thought that Meta and Google (both without ethics or morals) are just trying to get us to buy shit we don't need, but Tiktok is trying to get us to agree (or not agree) with their stance on x?


> why are we ok with the algorithmic content of all the platforms?

What's the alternative? Even HN has "algorithmic content"; its algorithm is based on voting and time.
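For context, the commonly cited approximation of HN-style ranking (not Y Combinator's actual production code; the gravity constant below is the oft-quoted guess) divides votes by a power of the item's age:

```python
def rank_score(votes: int, age_hours: float, gravity: float = 1.8) -> float:
    """Commonly cited HN-style ranking approximation: an item's score
    decays as it ages, so a newer item with fewer votes can outrank an
    older item with more votes."""
    return (votes - 1) / (age_hours + 2) ** gravity

# A 2-hour-old story with 50 votes outranks a 10-hour-old one with 100.
assert rank_score(50, 2) > rank_score(100, 10)
```

So even a plain "voting and time" scheme is very much an algorithm with tunable knobs: raising `gravity` buries older stories faster.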


Tangent: There's a study that I can't access, but the abstract claims that during the 2020 election, chronological feeds exposed Facebook users to more "untrustworthy content", without necessarily having an effect on people's political knowledge or polarization [1]:

> We investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.

[1] https://www.science.org/doi/10.1126/science.abp9364


Regulation forcing the unbundling of client software from content hosting. Then people can choose different client software depending on which algorithms they'd like to sort things by.


While still algorithmic, a strictly chronological timeline is not really manipulable, at least not in the same way.


Spamming is the trivial way to manipulate chronological feeds. It is so bad that they need active moderation or ranking to prevent a board from going into a state of absolute uselessness.
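As a toy illustration (hypothetical data, not any platform's real feed code), one prolific account can own the top of a purely chronological page, and even the crudest mitigation, capping each author per page, is itself a ranking decision:

```python
from datetime import datetime, timedelta

now = datetime(2024, 12, 10, 12, 0)
posts = []
# One spammer posts every minute; two normal users post once each.
for i in range(10):
    posts.append(("spammer", now - timedelta(minutes=i)))
posts.append(("alice", now - timedelta(minutes=5, seconds=30)))
posts.append(("bob", now - timedelta(minutes=30)))

# A pure chronological feed: the spammer owns the top of the page.
chrono = sorted(posts, key=lambda p: p[1], reverse=True)
assert [a for a, _ in chrono[:5]] == ["spammer"] * 5

def cap_one_per_author(feed):
    """Crude mitigation: keep only each author's newest post per page."""
    seen, page = set(), []
    for author, ts in feed:
        if author not in seen:
            seen.add(author)
            page.append((author, ts))
    return page

assert [a for a, _ in cap_one_per_author(chrono)] == ["spammer", "alice", "bob"]
```

The cap restores variety, but deciding which of the spammer's posts to drop is already a ranking choice, which is the point above: "chronological" never stays purely chronological at scale.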


I haven't used Facebook in 15 years, but I don't recall spam being a problem. I saw my friends' posts and replies by people who are friends with my friends.

That model may not work with all social media, but as far as I remember it worked for Facebook.


Friends' posts are a tiny fraction of what you see on FB nowadays.


Because making influence illegal means "speech which causes an impact" is illegal. You don't want a law that outlaws propaganda, because whoever decides what is propaganda has massive power.

There isn't such a thing as a non-manipulable feed. Even a chronological feed can be manipulated by spam, so it is a bit like asking why it is okay for tools to be made of mass when it means they can be used to bludgeon someone to death.


>why are we ok with the algorithmic content of all the platforms?

I don't think we are okay with it, but I don't think there's an across-the-board equivalence. We prioritize the egregious examples while simultaneously searching for a systematic redress that's less heavy-handed.

I don't think Facebook marketing Blue Apron to its users is the same kind of issue as undermining democracy in Romania.

