Hacker News | arrowleaf's comments

Interesting. All the Flock cameras around me are stationed around the entrances to Lowe's parking lots.


> All the Flock cameras around me are stationed around the entrances to Lowe's parking lots.

Most of the ones in my neighborhood are pointed at parks, playgrounds, and the big transit center, which makes no sense to me since there are a ton of government buildings around that you'd think would be under Flock surveillance for "safety."


All of the ones I've noticed have been pointed directly at streets, mostly for license plate recognition, but it's notable that they record whatever objects a typical real-world AI image model could detect. In my area we have Flock, Shotspotter, Stingray devices, and free Ring camera programs from law enforcement departments.

Our Lowe's has the mobile parking lot camera/light units. I wasn't aware whether these were Flock, but I wouldn't be surprised if they were, had access, or had plans to buy in.


Lowe's and Home Depot both seem to be hubs for their cameras. I only know of one in my rural area and it's at the Lowe's entrance.


Home Depot and Lowe's Share Data From Hundreds of AI Cameras [Flock] With Cops - https://news.ycombinator.com/item?id=44819750 - August 2025


You missed the touch of sarcasm. It's a joke, recent AWS announcements have been heavily AI-focused.


I don't really see how this is a productive comment on the article. Most of big tech focuses on AI, and those announcements typically get traction in the news. AWS specifically has plenty of non-AI announcements: https://aws.amazon.com/new/

The parent comment made a low-quality joke that lacked substance.


I think that joke reflects pretty well the feeling of many people (me included) who miss the AWS of ten years ago and its ability to amaze us with solutions to practical problems, instead of marketing claims on PowerPoint slides.


Kevin was CTO / head of product engineering at Windsurf; Anshul was a founding engineer.


If they can fund a fork, they can continue business as usual until the need arises.


A fork is more expensive to maintain than funding/contributing to the original project. You have to duplicate all future work yourselves, third-party code starts expecting their version instead of yours, etc.


Nobody said the fork cannot diverge from the original project.


Abundance of natural beauty and recreation opportunities


The physics of heat pumps disagrees with you. The freezing point of water has no bearing on the point at which they become less effective.


Not exactly true: one of the main issues with heat pumps in cold weather is the outside coil freezing up, with ice blocking airflow, because the coil is below the freezing point of water.

This is actually why older heat pumps became less effective around 40°F: the coils would start to hit 32°F, since they are attempting to pull heat from the warmer outside air and are therefore colder than the outside air.

There are various solutions to this problem. The standard one is to run the unit in reverse, as an air conditioner, for a short period when it detects frost, which defrosts the coils; if the system has resistive heat strips, it uses those to warm the air that is being cooled. This obviously reduces the efficiency of the system the more it has to defrost, and it may not be very comfortable for the users.

Cold weather heat pumps also work better in drier climates for this reason: the lower the outside humidity, the slower frost forms on the outside coils.

Some cold weather heat pumps have two compressor units and fans and alternate between them, with one defrosting while the other runs. There are many other tricks in use to prevent frost buildup and keep working above COP 1 far below freezing.
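A minimal sketch of that defrost logic, with hypothetical names and heavily simplified conditions (real controllers use demand-defrost sensors, timers, and hysteresis):

```typescript
// Hypothetical, heavily simplified defrost controller sketch.
// Real units use demand-defrost sensors, timers, and hysteresis.
interface HeatPumpState {
  outdoorCoilTempF: number;  // the coil runs colder than the outside air
  frostDetected: boolean;    // e.g. inferred from airflow or pressure drop
  hasResistiveStrips: boolean;
}

function controlTick(state: HeatPumpState): string {
  if (state.frostDetected && state.outdoorCoilTempF <= 32) {
    // Run in reverse (cooling mode) to melt frost off the outdoor coil;
    // resistive strips, if present, temper the now-cooled indoor air.
    return state.hasResistiveStrips
      ? "defrost: reverse cycle + resistive strips"
      : "defrost: reverse cycle (supply air will feel cool)";
  }
  return "heating: normal operation";
}

// Example: a coil at 28°F with frost detected triggers a defrost cycle.
console.log(controlTick({ outdoorCoilTempF: 28, frostDetected: true, hasResistiveStrips: true }));
```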


Heat pumps lose efficiency as it gets colder. There are no laws of physics which contradict this.


The relevant laws of physics operate in Kelvin. 60°F is 288 K. -20°F is 244 K. These are not that far apart.
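To make the numbers concrete: the ideal (Carnot) heating COP is T_hot / (T_hot − T_cold) with temperatures in Kelvin, and real heat pumps achieve some fraction of that ideal. A quick sketch:

```typescript
// Ideal (Carnot) heating COP: T_hot / (T_hot - T_cold), in Kelvin.
// Real units reach only a fraction of this; the point is the trend, not the values.
const fahrenheitToKelvin = (f: number): number => (f - 32) * 5 / 9 + 273.15;

function carnotHeatingCOP(indoorF: number, outdoorF: number): number {
  const tHot = fahrenheitToKelvin(indoorF);
  const tCold = fahrenheitToKelvin(outdoorF);
  return tHot / (tHot - tCold);
}

console.log(carnotHeatingCOP(70, 60).toFixed(1));  // ~53.0: tiny lift, huge ideal COP
console.log(carnotHeatingCOP(70, -20).toFixed(1)); // ~5.9: much colder, still well above 1
```

So efficiency does fall steeply with the temperature lift, yet even at -20°F the ideal figure remains far above the COP 1 of resistive heat.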


For the past ten years I've run a side project that estimates the word count of books and how long they take to read. Maintaining and improving it takes tens of hours a month, plus a few hundred dollars in RDS, ECS, etc. costs. Two years ago I was at least breaking even on affiliate income, so the only cost was my own time and effort, which I enjoy. These days my total traffic numbers are about 10x, but human traffic is down 50-70%.
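A back-of-the-envelope sketch of the core estimate, assuming a typical adult reading speed (the real site's methodology is presumably more involved than a flat rate):

```typescript
// Back-of-the-envelope reading-time estimate (hypothetical constants;
// a flat words-per-minute rate is the simplest possible model).
const WORDS_PER_MINUTE = 250; // rough average adult reading speed

function readingTimeHours(wordCount: number, wpm: number = WORDS_PER_MINUTE): number {
  return wordCount / wpm / 60;
}

// Example: a 90,000-word novel at 250 wpm is about 6 hours of reading.
console.log(readingTimeHours(90_000).toFixed(1)); // "6.0"
```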

I'm basically paying to host content for AI crawlers to scrape, and I don't know how much longer I can do this. I'm adding Goodreads-esque features currently, but if they don't bring in the sign-ups, I'll be forced to archive the code and take the site down.


Why are you not using a CDN or edge worker? It boggles my mind that you don't just have something that can scale to billions of requests for pennies.
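For what it's worth, a minimal sketch of that approach in the style of a Cloudflare Workers fetch handler (the cache API follows Workers' documented shape, but treat the names and TTL as illustrative, not a drop-in solution):

```typescript
// Illustrative edge-caching handler (Cloudflare Workers style).
// Mostly-static book pages can be served from the edge cache,
// so crawler traffic rarely reaches the origin (RDS/ECS).
export default {
  async fetch(request: Request): Promise<Response> {
    const cache = caches.default;               // Workers' shared edge cache
    const cached = await cache.match(request);
    if (cached) return cached;                  // edge hit: origin untouched

    const response = await fetch(request);      // edge miss: fall through to origin
    const cacheable = new Response(response.body, response);
    cacheable.headers.set("Cache-Control", "public, max-age=86400"); // cache for a day
    await cache.put(request, cacheable.clone());
    return cacheable;
  },
};
```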


Frame or shape a leather hide to hold water, add water, and hold it over a flame or add hot rocks.


Are you using the same tools as everyone else here? You absolutely can ask "why" and it does a better job of explaining with the appropriate context than most developers I know. If you realize it's using a design pattern that doesn't fit, add it to your rules file.
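For example, a rules-file entry along these lines (wording hypothetical; the file name varies by tool, e.g. .cursorrules or CLAUDE.md):

```
# Hypothetical rules-file entry
- Avoid the singleton pattern in this codebase; prefer explicit dependency injection.
- When explaining a change, cite the specific rule or constraint that motivated it.
```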


You can ask it "why", and it gives a probable English string that could reasonably explain why, had a developer written that code, they made certain choices; but there's no causal link between that and the actual code generation process that was previously used, is there? As a corollary, if Model A generates code, Model A is no better able to explain it than Model B.


I think that's right, and not a problem in practice. It's like asking a human why: "because it avoids an allocation" is a more useful response than "because Bob told me I should", even if the latter is the actual cause.


> I think that's right, and not a problem in practice. It's like asking a human why: "because it avoids an allocation" is a more useful response than "because Bob told me I should", even if the latter is the actual cause.

Maybe this is the source of the confusion between us? If I see someone writing overly convoluted code to avoid an allocation, and I ask why, I will take different actions based on those two answers! If I get the answer "because it avoids an allocation," then my role as a reviewer is to educate the code author about the trade-off space, make sure that the trade-offs they're choosing are aligned with the team's value assessments, and help them make more-aligned choices in the future. If I get the answer "because Bob told me I should," then I need to both address the command chain issues here, and educate /Bob/. An answer is "useful" in that it allows me to take the correct action to get the PR to the point that it can be submitted, and prevents me from having to make the same repeated effort on future PRs... and truth actually /matters/ for that.

Similarly, if an LLM gives an answer about "why" it made a decision that I don't want in my code base that has no causal link to the actual process of generating the code, it doesn't give me anything to work with to prevent it happening next time. I can spend as much effort as I want explaining (and adding to future prompts) the amount of code complexity we're willing to trade off to avoid an allocation in different cases (on the main event loop, etc)... but if that's not part of what fed into actually making that trade-off, it's a waste of my time, no?


Right. I don't treat the LLM like a colleague at all, it's just a text generator, so I partially agree with your earlier statement:

> it's like reviewing a PR with no trust possible, no opportunity to learn or to teach, and no possibility for insight that will lead to a better code base in the future

The first part is 100% true. There is no trust. I treat any LLM code as toxic waste and its explanations as lies until proven otherwise.

With the second part I somewhat disagree. I've learned plenty of things from AI output and analysis. You can't teach it to analyze allocations or code complexity, but you can feed it guidelines or samples of code in a certain style, and that can be quite effective at nudging it towards similar output. Sometimes that doesn't work, and that's fine; it can still be a big time saver to have the LLM output as a starting point and tweak it (manually, or by giving the agent additional instructions).


Although it cannot understand the rhetorical why, as in a frustrated “Why on earth would you possibly do it that brain-dead way?”

Instead of the downcast, chastened look of a junior developer, it responds with a bulleted list of the reasons why it did it that way.


Oh, it can infer quite a bit. I've seen many times in reasoning traces "The user is frustrated, understandably, and I should explain what I have done" after an exasperated "why???"


Companies don't do these acquisitions with cash on hand. It's OpenAI and the whole pool of their creditors and investors.

