Hacker News | jcutrell's comments

Garner Health | Data Engineer II | Full-time | NYC Onsite / Hybrid (4 days/week) | https://job-boards.greenhouse.io/garnerhealth/jobs/552040000... Garner Health (https://getgarner.com) is revolutionizing healthcare economics through advanced doctor performance analytics and innovative incentive models. Our platform is reshaping how organizations access high-quality, affordable care, powering decisions at leading healthcare systems and enterprise clients. We’ve doubled revenue annually for 5 years running, making us the fastest-growing company in our space. We’re hiring a Data Engineer II to play a pivotal role in building our enterprise-grade data platform from the ground up, ensuring secure data access across our rapidly scaling organization.

Stack: AWS, Snowflake, Argo, dbt, Terraform, Airbyte, JetStream

Work style: Hybrid in NYC. In-office up to 3 days per week.

Target compensation: $120,000 - $160,000 + equity

Apply here: https://job-boards.greenhouse.io/garnerhealth/jobs/552040000... or email Jonathan.Cutrell [at] getgarner [dot] com


I really like this idea, I think it represents an interesting intentional step to get out in front of what hiring managers might do anyway.

I am working on building profiles for people I work with, and really my goal is to end at something like this for them.


Sadly it will be praised as "tough" by many.


Look at the comments on Fox News; it was America's finest hour.


I'll add to the conversation another interesting technique from Chris Voss, which is to use no-oriented questions.

People like to say no. (I'm not sure what this cognitive bias is, but anecdotally I agree.)

So, if you can frame your requests in a way that "no is permission", you will often get that "no" a bit more easily.

Example: replace "Is this a good idea?" with "Is this a bad idea?"

Now, of course, "not a bad idea" is not the same thing as "a good idea", but it's a lot more likely to get agreement. Even reading that, I imagine most people would respond more intuitively, because it helps us avoid a commitment we don't necessarily want to make.


I think there is some potential upside for Apple in letting this slide. The broader this network is, the more adoption it receives. P2P as a super-structure has always been a bigger-than-any-single-vendor problem; adoption by any means is likely an acceptable tradeoff, especially since Apple doesn't have to do the work here.

Eventually they will capitalize more on the mesh density than they would by crushing adoption now.


Except that custom tags like these do not require an Apple device in order to use them, so the size of the network is not increased. They only increase the load on the network. FindMy is not a P2P/mesh network; all these tags do is broadcast keys which are picked up by iDevices, which then upload those reports to Apple.


Are the keys not tied to known Apple products? Or do you make them up when you first register a device?

Trying to understand why Apple doesn't (or can't?) already reject broadcast data from keys that aren't tied to Apple products.


Two master secrets are randomly generated when pairing the AirTag for the first time, which are then saved to the iCloud keychain. Those secrets are then used to generate a new keypair every 15 minutes (at most), and the public key is broadcast by the tag. Not only does Apple not know what the master secrets are in the first place (because they're stored in the keychain), but that's also an insane number of keys to compare against, with no real possibility of precomputing them. And that's a big win in terms of privacy.
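To make the rotation idea concrete, here's a deliberately simplified Python sketch. It is not the real Find My derivation (which uses an ANSI X9.63-style KDF and P-224 elliptic-curve keys); it just shows why rolling keys derived from a secret only the owner holds are cheap to reproduce for the owner but impractical for anyone else to enumerate. The function name and the hash-plus-counter construction are illustrative assumptions.

```python
import hashlib

def derive_broadcast_key(master_secret: bytes, interval: int) -> bytes:
    # Simplified stand-in for the real KDF: mix the master secret
    # with a rotation counter so each 15-minute window yields a
    # fresh, unlinkable-looking key.
    return hashlib.sha256(master_secret + interval.to_bytes(4, "big")).digest()

secret = b"example-master-secret"  # made-up value for illustration
k0 = derive_broadcast_key(secret, 0)
k1 = derive_broadcast_key(secret, 1)

# The owner's device can re-derive any interval's key on demand,
# but successive broadcasts look unrelated to an observer.
assert k0 != k1
assert k0 == derive_broadcast_key(secret, 0)
```

Since Apple never sees the master secret, checking whether a broadcast key "belongs" to a genuine product would require comparing against every possible derived key for every paired device and interval, which is exactly what this construction makes infeasible.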


I would guess because they don't care. The marginal cost is near zero, and I think they would only bother if someone DDoSes the service or it otherwise becomes an issue.

Until then, more devices are probably positive for reducing potential pitchforking.


I suspect that, given a reasonable prompt, it would absolutely discard certain phrases or concepts for others. I think it may find it difficult to cross-check and synthesize, but "term families" are sort of a core idea of multi-dimensional embeddings. Related terms have low squared distances in embedding space. I'm not super well versed on LLMs, but I do believe this would be represented in the models.
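The "related terms are close together" point can be illustrated with a tiny Python sketch. The three vectors below are made up for demonstration; real embeddings come from a trained model and have hundreds or thousands of dimensions, but the distance comparison works the same way.

```python
def sq_dist(a, b):
    # Squared Euclidean distance between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Toy 3-dimensional "embeddings" (invented values): terms from the
# same family land near each other, unrelated terms land far away.
emb = {
    "car":    [0.90, 0.10, 0.00],
    "auto":   [0.85, 0.15, 0.05],
    "banana": [0.00, 0.20, 0.95],
}

assert sq_dist(emb["car"], emb["auto"]) < sq_dist(emb["car"], emb["banana"])
```

In practice cosine similarity is used at least as often as squared distance, but either metric captures the same intuition about term families.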


I've been in the industry for something like 15 years. I've been using LLMs to help me create the stuff I always wanted but never had time to make myself. This is how LLMs can be used by seniors to great effect - not just to cut time off tasks.


Same here (not in the industry, though). I recently got a personal project done with the help of LLMs that I otherwise wouldn't have had the time or energy to research properly if it weren't for the time savings.


I’ve done so many tiny hobby projects lately that scratch 10+ year itches, where I’ve said so many times “I wish there was an application for this, but I’m too lazy to sit down and learn some Python library and actually do it.” Little utilities that might have taken me a day to bring up a bunch of boilerplate, study a few docs, write the code switching back and forth from the docs, and then debugging. Today that utility takes me 30 minutes tops to write just using Copilot and it’s done.


Remember - the vast majority of candidates who take the time to do right by your process get zero reward for their effort. You get a reward in the end, so it feels imbalanced. This is true for VERY good candidates, as well.


Precisely my problem. I only apply if I know I'm a good fit and have the required experience. I spent countless hours manually adjusting my resume and writing cover letters from the heart. Just got the usual cold rejection from a no-reply address. I now do the same with ChatGPT. I also get rejections, but at least I waste little time and can therefore submit many more applications - so my odds are higher.


Not downplaying the amazing progress, but even the video showcases have some weird uncanny valley effects. The winged horse one in particular - the wings and legs morph, the wing on the left disappears and reappears through the tail.

This stuff is still a little ways off, but there are some amazing effects here. I think it will be a while before it's sufficient for production use in any real commercial situation. There's something unsettling about all of the videos generated here.


Use ChatGPT / Claude differently. Instead of prompting it to help you code, prompt it to coach you as a junior engineer and to stay broad: focus on principles, remind you of different pros and cons, discuss things thoroughly with you, and explore alternatives.

This kind of prompting changes the conversation entirely.

