All the Flock cameras near me are stationed at the entrances to Lowe's parking lots.
Most of the ones in my neighborhood are pointed at parks, playgrounds, and the big transit center, which makes no sense to me since there are a ton of government buildings around that you'd think would be under Flock surveillance for "safety."
All of the ones I've noticed have been pointed directly at streets, mostly for license plate recognition, but it's notable that they can record whatever objects a typical real-world AI image model could. In my area we have Flock, Shotspotter, Stingray devices, and free Ring camera programs from law enforcement departments.
Our Lowe's has the mobile parking lot camera/light units. I wasn't aware these were Flock, but I wouldn't be surprised if they were, or if Lowe's had access or plans to buy in.
I don't really see how this is a productive comment for the article. Most of big tech focuses on AI, and AI announcements typically get traction in the news. AWS specifically has plenty of non-AI announcements: https://aws.amazon.com/new/
The parent comment made a low-quality joke that lacked substance.
I think that joke reflects pretty well the feeling of many people (me included) who miss the AWS of ten years ago and its ability to amaze us with solutions to practical problems instead of marketing claims on PowerPoints.
A fork is more expensive to maintain than funding/contributing to the original project. You have to duplicate all future work yourself, third-party code starts expecting the upstream version instead of your version, etc.
Not exactly true. One of the main issues with heat pumps in cold weather is the outside coil icing up and blocking airflow, because the coil runs below the freezing point of water.
This is actually why older heat pumps became less effective around 40F: to pull heat from the outside air the coils have to be colder than that air, so they start to hit 32F and frost over.
There are various solutions to this problem. The standard one is to run the system in reverse, as an air conditioner, for a short period when it detects frost, which warms the outdoor coil and melts the ice; if the system has resistive heat strips, it uses those to warm the indoor air that is being cooled in the meantime (a rough sketch of this logic is below). The more often the system has to defrost, the lower its efficiency, and the defrost cycles may not be very comfortable for the users.
Cold-weather heat pumps also work better in drier climates because of this: the lower the outside humidity, the more slowly frost forms on the outside coils.
Some cold-weather heat pumps have two compressor units and fans and alternate between them, with one defrosting while the other runs. There are many other tricks manufacturers use to prevent frost buildup and keep working above a COP of 1 far below freezing.
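For anyone curious what the defrost cycle looks like as control logic, here is a minimal sketch of a time-and-temperature defrost controller in Python. Everything in it (the sensor inputs, the 60-minute interval, the heat-strip flag) is an illustrative assumption, not how any particular manufacturer actually does it; real units use proprietary demand-defrost logic based on coil/ambient deltas, refrigerant pressure, and run timers.

```python
FREEZING_F = 32.0

def defrost_needed(coil_temp_f: float,
                   minutes_coil_below_freezing: float,
                   interval_minutes: float = 60.0) -> bool:
    """Trigger a defrost once the outdoor coil has spent long enough below
    freezing for frost to plausibly have built up (assumed 60 min here)."""
    return (coil_temp_f <= FREEZING_F
            and minutes_coil_below_freezing >= interval_minutes)

def select_mode(coil_temp_f: float,
                minutes_coil_below_freezing: float,
                has_heat_strips: bool) -> str:
    if defrost_needed(coil_temp_f, minutes_coil_below_freezing):
        # Reverse the refrigerant cycle (run as an air conditioner) so hot gas
        # warms the outdoor coil and melts the frost. Indoor supply air is
        # being cooled during this period, so energize resistive strips if
        # available to keep it comfortable -- at the cost of efficiency.
        return "defrost_with_strips" if has_heat_strips else "defrost"
    return "heating"

# Example: coil at 28F that has been below freezing for 75 minutes.
print(select_mode(28.0, 75.0, has_heat_strips=True))  # -> defrost_with_strips
```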
For the past ten years I've run a side project that estimates the word count of books and how long it takes to read them. Maintaining and improving it requires tens of hours a month, plus a few hundred dollars in RDS, ECS, and other costs. Two years ago I was at least breaking even on affiliate income, so the only real cost was my own time and effort, which I enjoy. These days my total traffic numbers are about 10x, but human traffic is down 50-70%.
I'm basically paying to host content for AI crawlers to scrape, and I don't know how much longer I can do this. I'm currently adding Goodreads-esque features, but if they don't get the sign-ups I'll be forced to archive the code and take the site down.
Are you using the same tools as everyone else here? You absolutely can ask "why" and it does a better job of explaining with the appropriate context than most developers I know. If you realize it's using a design pattern that doesn't fit, add it to your rules file.
You can ask it "why", and it gives a probable English string that could reasonably explain why, had a developer written that code, they made certain choices; but there's no causal link between that and the actual code generation process that was previously used, is there? As a corollary, if Model A generates code, Model A is no better able to explain it than Model B.
I think that's right, and not a problem in practice. It's like asking a human why: "because it avoids an allocation" is a more useful response than "because Bob told me I should", even if the latter is the actual cause.
> I think that's right, and not a problem in practice. It's like asking a human why: "because it avoids an allocation" is a more useful response than "because Bob told me I should", even if the latter is the actual cause.
Maybe this is the source of the confusion between us? If I see someone writing overly convoluted code to avoid an allocation, and I ask why, I will take different actions based on those two answers! If I get the answer "because it avoids an allocation," then my role as a reviewer is to educate the code author about the trade-off space, make sure that the trade-offs they're choosing are aligned with the team's value assessments, and help them make more-aligned choices in the future. If I get the answer "because Bob told me I should," then I need to both address the command chain issues here, and educate /Bob/. An answer is "useful" in that it allows me to take the correct action to get the PR to the point that it can be submitted, and prevents me from having to make the same repeated effort on future PRs... and truth actually /matters/ for that.
Similarly, if an LLM gives an answer about "why" it made a decision that I don't want in my code base that has no causal link to the actual process of generating the code, it doesn't give me anything to work with to prevent it happening next time. I can spend as much effort as I want explaining (and adding to future prompts) the amount of code complexity we're willing to trade off to avoid an allocation in different cases (on the main event loop, etc)... but if that's not part of what fed in to actually making that trade-off, it's a waste of my time, no?
Right. I don't treat the LLM like a colleague at all, it's just a text generator, so I partially agree with your earlier statement:
> it's like reviewing a PR with no trust possible, no opportunity to learn or to teach, and no possibility for insight that will lead to a better code base in the future
The first part is 100% true. There is no trust. I treat any LLM code as toxic waste and its explanations as lies until proven otherwise.
The second part I somewhat disagree with. I've learned plenty of things from AI output and analysis. You can't teach it to analyze allocations or code complexity, but you can feed it guidelines or samples of code in a certain style, and that can be quite effective at nudging it towards similar output. Sometimes that doesn't work, and that's fine; it can still be a big time saver to have the LLM output as a starting point and tweak it (manually, or by giving the agent additional instructions).
Oh, it can infer quite a bit. I've seen many times in reasoning traces "The user is frustrated, understandably, and I should explain what I have done" after an exasperated "why???"