Yeah, they are probably a disturbance at best. Like pets, large signs, beach balls, and alcoholic beverages, which are other things on the list.
"security" is a lot more broad than just "preventing terrorist attacks"
You don't need to be a super l33t h4x0r to disrupt an event -- you could knock around a beach ball or turn off a display with the IR blaster on a flipper zero. Not everything is life or death.
What’s more likely? That they were banned due to misunderstandings of what these devices are, or that they were banned because they were “causing a disturbance”? Can you find an example of such a case? I’m not sure why this feels so important to defend.
There are several definitions of security, but the most relevant (in this context) are:
1. the state of being protected against or safe from danger or threat.
2. the safety of a state or organization against criminal activity such as terrorism, theft, or espionage.
3. procedures followed or measures taken to ensure the safety of a state or organization.
I fail to see how these devices fall into those definitions. I also don’t see how beach balls do either.
So if your argument is changing to: it isn’t security, but rather preventing people from getting in each other’s way (large signs, strollers, beach balls) I once again don’t see how that applies.
I agree those items have nothing to do with security either.
I'm not changing anything -- my root comment in this thread specifically mentioned "disturbances". Mitigating disturbances to the proceedings of an event is plainly a part of event security. The "threats" evaluated in event security are not solely to life and limb but also to the proceedings of the event itself. I didn't think this required elaboration; I thought most people would be familiar with this function of event security.
It is solving for the wrong problem. An idiot with a Pi or Flipper Zero isn’t the actual threat any more than Star Simpson was.
And if you don’t agree the stupidity that Star was put through was absurd, then we simply won’t agree on the matter.
There’s a difference between security/intelligence and theater. Too many people mistake the two, because they’ve been trained by folks like the TSA to mistake theater for security/intelligence.
You are all spot on in terms of a technical information security evaluation here.
Unfortunately, the reality is there are not enough information security specialists in the world to hire them as event security for every large public event. And even if there were, the logistics of such an event would not allow for enough time for a proper information security screening.
What you're asking for is not theoretically wrong, it's just impossible to implement.
My argument isn’t that every event can and should build an intelligence apparatus. That would be impossible, though it would actually provide security. I agree.
My argument is that banning flipper zeros does not do anything to improve security, even if they wish it did. If they actually cared about security, it would cost them a lot more time and money. Instead, they’ve chosen theater. I don’t even have a problem with this necessarily, if it makes some people feel safer; I have a problem with anyone pretending it is security, and not theater.
When someone is given a placebo during a clinical trial, they are informed and unblinded after the trial so that they do not think they were on the actual medication; this is because otherwise, they would draw the wrong conclusion from the trial for themselves.
This is theater; that’s okay, maybe, but let’s not pretend it’s something it isn’t.
Anyway, I think we’re repeating ourselves, and I’m happy to agree to disagree.
> Presumably, the cops are aware of previous disruption with these specific devices, or threats thereof. And it's not like they're going to say exactly what, nor should they, lest it give people ideas...
I don’t think this is necessarily true. The TSA bans all sorts of crap solely because they feel like it, and not in the name of any kind of actual security. It is entirely possible that the NYPD “heard” these terms in media and got spooked, so here we are.
I don't think the NYPD thinks "beach balls" or "large items that could obstruct views" or "flipper zeros" are spooky or scary. I think they think they're potential annoyances in a large crowd.
Crazy idea but: what if we built an AI pair programmer that actually pair programmed? That is, sometimes it was the driver and you navigated, pretty much as it is today, but sometimes you drive and it navigates.
I surmise that would help people learn to code better.
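As a thought experiment, the alternating driver/navigator idea could be sketched as a simple loop. Everything here is hypothetical: `ask_model` is a stub standing in for a real LLM call, and the function names are made up for illustration.

```python
# Hypothetical sketch: a pair-programming session where the AI and the
# human swap driver/navigator roles each turn. On AI-driver turns the
# model would write code; on AI-navigator turns it would only comment
# on what the human wrote. `ask_model` is a stub, not a real API.
from itertools import cycle

def ask_model(role: str, context: str) -> str:
    """Stub for an LLM call; a real implementation would hit an API."""
    if role == "driver":
        return f"# code written by AI for: {context}"
    return f"Suggestion for your code on: {context}"

def pair_session(tasks, ai_starts_as="driver"):
    """Run one turn per task, alternating which role the AI plays."""
    roles = cycle(["driver", "navigator"] if ai_starts_as == "driver"
                  else ["navigator", "driver"])
    transcript = []
    for task, ai_role in zip(tasks, roles):
        transcript.append((ai_role, ask_model(ai_role, task)))
    return transcript

log = pair_session(["parse input", "write tests", "refactor"])
```

The interesting design choice is the forced role swap: unlike today's assistants, the AI periodically has to sit back and critique rather than produce, which is what might drive the learning benefit.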
> Keep in mind that a lot of the benefits go away once patients come off GLP-1 and we have not seen any studies yet on what happens to people who come off it for long term effects.
Not if they increase muscle mass and change their lifestyle, like every physician (and the FDA/pharma companies) recommends.
> It may in fact make things even worse and for a lot of people, they may have to stay on it for the rest of their lives.
It does not. And some people may.
You know what’s worse than taking a GLP-1 forever? Obesity or metabolic syndrome killing you before you get to “forever.”
It seems weird because we hate finding “bugs” in our bodies, but it happens all the time.
Another example: low dose metformin is largely considered beneficial for most people, at least in a small way. But very few people who aren’t diabetic take it, as the drawback of possible side effects outweighs the potential benefit for someone who doesn’t have symptoms in the first place.
Same thing here. Would it benefit you? Possibly. Do the risks of side effects outweigh that benefit for someone without symptoms? Also possibly.
I agree with you. My point was simply that most physicians only prescribe if the potential benefits are obvious and outweigh the potential risks / side effects. Doing nothing is sometimes better than doing something without an obvious benefit.
If you’re obese, have metabolic syndrome, have T2D, or any other number of issues that we’ve seen GLP-1s (or metformin) help with - then the medications can be a godsend.
All of which is great theory without any kind of evidence? Whereas the evidence pretty clearly shows OpenAI is losing tons of money and the revenue is not on track to recover it?
Well, for one, the model doesn't take into account various factors, assumes a fixed cost per token, and doesn't allow for the people in charge of buying and selling the compute to make decisions that make financial sense. Some of OpenAI's commitments and compute are going toward research, with no contracted need for profit or even revenue.
If you account for the current trajectory of model capabilities, plus bare-minimum competence and good faith on the part of OpenAI and the cloud compute providers, then it's nowhere near a money pit or a shenanigan; it's a typical medium-to-high-risk VC investment play.
At some point they'll pull back the free stuff and the compute they're burning to attract and retain free users; they'll also dial in costs and tweak their profit per token figure. A whole lot of money is being spent right now as marketing by providing free or subsidized access to ChatGPT.
If they maximize exposure now and then dial in costs, they could be profitable with no funding shortfalls by 2030: pivot, dial back free access, and aggressively promote paid tiers and product integrations.
This doesn't even take into account the shopping assistant/adtech deals, just ongoing research trajectories, assumed improved efficiencies, and some pegged performance level presumed to be "good enough" at the baseline.
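To make the "profit per token" framing concrete, here's a toy back-of-envelope. Every number below is invented for illustration; none comes from OpenAI's actual financials.

```python
# Toy back-of-envelope for a "profit per token" model of an inference
# business. All inputs are hypothetical placeholders, not real figures.

def annual_margin(tokens_per_year, price_per_mtok, cost_per_mtok, fixed_costs):
    """Gross margin on paid inference minus fixed spend
    (research, free-tier subsidies, etc.)."""
    gross = tokens_per_year / 1e6 * (price_per_mtok - cost_per_mtok)
    return gross - fixed_costs

# Hypothetical: 1 quadrillion paid tokens/year, $2.00 price vs $1.50
# cost per million tokens, $400M/year of unrecovered fixed spend.
margin = annual_margin(1e15, 2.00, 1.50, 400e6)
```

The point of the sketch is just that the bottom line is extremely sensitive to the per-token spread and to how much fixed spend (free users, research) is being carried, which is why "dial in costs" can swing the picture so much.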
They're in maximum overdrive expansion mode, staying relatively nimble, and they've got the overall lead in AI, for now. I don't much care for Sam Altman on a personal level, but he is a very savvy and ruthless player of the VC game, with some of the best ever players of those games as his mentors and allies. I have a default presumption of competence and skillful maneuvering when it comes to OpenAI.
When an article like this FT piece comes out and makes assumptions of negligence and incompetence and projects the current state of affairs out 5 years in order to paint a negative picture, then I have to take FT and their biases and motivations into account.
The FT article is painting a worst case scenario based on the premise "what if everyone involved behaved like irresponsible morons and didn't do anything well or correctly!" Turns out, things would go very badly in that case.
ChatGPT was released less than 3 years ago. I think predicting what's going to happen in even 1 year is way beyond the capabilities of FT prognosticators, let alone 5 years. We're not in a regime where Bryce Elder, finance and markets journalist, is capable or qualified to make predictions that will be sensible over any significant period of time. Even the CEOs of the big labs aren't in a position to say where we'll be in 5 years. I'd start getting really skeptical when people start going past 2 years, across the board, for almost anything at this point.
Things are going to get weird, and the rate at which things get weird will increase even faster than our ability to notice the weirdness.
All of which is more theory. Of course nobody can predict the future. Your argument is essentially “they have enough money and enough ability to attract more that they’ll figure it out,” just like Amazon did, who were also famously unprofitable but could “turn it on at any time.”
FT’s argument is, essentially, “we’re in a bubble and OpenAI raised too much and may not make it out.”
Neither of us knows which is more correct. But it is certainly at least a very real possibility that the FT is more correct. Just like the Internet was a great “game changer” and “bubble maker,” so are LLMs/AI.
I think it’s quite obvious we’re in a bubble right now. At some point, those pop.
The question becomes: is OpenAI AOL? Or Yahoo? Or is it Google?
That's a fabulous tale you've told (the notion that there's a bunch of Anthropic-leaning sites is my personal favourite) but alas, the article is reporting on a GSBC report, of which they are justifiably sceptical, and it does not in any way, shape, or form represent the FT's beliefs.
AI can both be a transformative technology and the economics may also not make sense.
I presume that email address is for when you want to ask something of Hacker News, not to ask something about Hacker News.
For example, they probably didn't want posts like "Hey Hacker News, why don't you call for the revival of emacs and the elimination of all vi users?" and would rather you email them so they can ignore it, but they also don't want email messages asking "How do I italicize text in a Hacker News comment, seriously I can't remember and I would have done so earlier in this comment if I could?" and would rather you ask the community, who could answer it without bothering anyone working at Y Combinator.
Are you saying this based on experience or are you projecting? In my experience (tho not asking how to italicize text using * characters) Dang and tomhow are happy to answer all sorts of questions. Sometimes they do get bogged down by the reality of running a site of this size manually, as it were, but I can't remember a question that didn't eventually get answered. I'll even tell them I vouched for this bunch of dead comments, was that the right thing to do? And one of them will write back saying mostly, but just fyi comment xyz was more flamebait than idea, but thank you for asking and working on calibrating your vouch-o-meter.
What's the problem? Someone submitted it for people to read but it didn't catch on, now it's resubmitted and people can read it after all. Everyone happy. Don't be so attached to imaginary internet points.
Nobody is arguing we should ban phones. The argument is that banning flipper zeros doesn’t accomplish anything meaningful toward security.