
Because Passkeys are considered more secure. They had a rough, confusing start but seem to be taking off.

I don't know your level of technical knowledge, but Passkeys replace passwords by using public key cryptography instead of shared secrets (passwords).

They consist of a key pair (public and private) and are based on the FIDO2/WebAuthn standard. Like other public key systems (PGP, SSH keys, or SSL certificates), the public key is shared while the private key remains secret.

There are two main types of Passkeys: device-bound and synced. Both types are significantly more secure than passwords and resistant to phishing, but they have different trade-offs between security and convenience.

Device-Bound Passkeys:

With device-bound Passkeys, the private key is stored in a Secure Enclave or a Trusted Platform Module (TPM) on your device. The Secure Enclave and TPM are hardware-isolated, preventing even your operating system from directly accessing them. Instead, you use a special authentication API to make calls. There is no direct memory access to these keys unless an exploit is discovered.

Think of a Secure Enclave or Trusted Platform Module as a little, isolated computer inside your device -- because that's what they are! They have their own processor and operating system, and they are completely isolated from the rest of the device. They only release signatures and never release secrets (and aren't even capable of doing so, in normal operation). The device can only talk to the Secure Enclave/TPM via special authentication APIs.

Synced Passkeys:

Synced Passkeys store the private key in encrypted form within a password manager or platform keychain (like iCloud Keychain, Google Password Manager, or 1Password). These passkeys are designed to be backed up and synchronized across your devices for convenience. While the keys are encrypted during storage and transit, they're not permanently bound to a single hardware chip.

This makes them more flexible and user-friendly, though they rely on the security of your account and the encryption used by the syncing service rather than hardware isolation.

Here's how the flow works as I understand it:

1. You visit a website and try to login.

2. The server sends a randomized challenge string.

3. Your device's authenticator signs that challenge using the private key.

4. That signature gets sent back to the server.

5. The server verifies the signature using the public key it has on file.
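The challenge-response steps above can be sketched with a toy signature scheme. This is a hypothetical illustration using textbook RSA with deliberately tiny primes -- real passkeys use vetted algorithms (e.g. ES256) inside the full WebAuthn ceremony, never anything like this:

```python
import hashlib
import secrets

# Toy RSA keypair with tiny primes -- purely to illustrate the flow, NOT secure.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (stays on the device)

def sign(challenge: bytes) -> int:
    # Step 3: the authenticator signs the challenge with the private key.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    # Step 5: the server verifies with the public key stored at registration.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(16)   # Step 2: server sends a fresh random challenge
signature = sign(challenge)           # Steps 3-4: device signs, sends signature back
assert verify(challenge, signature)   # Step 5: server accepts
```

Because the server only ever sees the public key and a signature over a one-time challenge, there is no reusable secret for an attacker to steal or replay.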

Why Passkeys are cool:

- No shared secrets, so there's nothing on the server that's useful to steal.

- They're phishing-resistant: the browser (or platform authenticator) verifies that the origin matches before allowing authentication.

- No replay attacks because the server issues a new randomized challenge string every time.

- No cred stuffing because each passkey is unique to the service it's generated for.

This should all be correct to the best of my non-expert knowledge.

---

Edit: Corrected some serious errors and forgot to explain device-bound vs. synced passkeys -- a major oversight!

Disclaimer: Yes, I'm a human and I use --. This comment is self-written but I did use AI for grammar correction.


You don't have to buy them all at once! :)

And then, at least for the compiler books, there is: http://t3x.org/files/whichbook.pdf


Many universities now offer a course using the Computer Systems: A Programmer's Perspective book (http://csapp.cs.cmu.edu/index.html). This is a fantastic book, focusing on nit-picky assembly and C-level work -- all the foundational stuff for an operating systems or compilers class, even if you never go on to one. Having TA'd this class the last few years, I find most students really enjoy it.

In fact, I think most of the HN crowd would greatly enjoy the "bomb lab" (http://csapp.cs.cmu.edu/public/1e/labs.html -- you can read the writeup but not get the source). The idea is that you have a binary and have to use gdb and some nice dumping tools to "defuse" a bunch of stages of the program, each with increasing difficulty. It's a fabulous exercise, and really makes students pick up a deep appreciation for stepping through assembly and data structures that are just lying around in memory.


I strongly suggest “How to Read a Book” by Adler. It’s an excellent discussion of types and levels of reading.

When faced with a large, imposing text, my method is:

1. Read the title page, table of contents, index, and introduction several times. This should make it clear what the book is about and how it's structured.

2. Read it straight through from start to end. No notes. Don’t worry about not understanding. Intellectually, I’m very passive. Just familiarizing myself with the terrain. Keep in mind the knowledge gained from the previous step.

3. Read through again, paying attention to key concepts and terms. Make a list of key concepts and terms. Try to understand and resolve the concepts and terms in relation to each other and the structure of the text. This is still very passive reading. You shouldn’t be forming any opinions at this point.

4. Create your own outline of the text. If the table of contents is well thought out, this might be redundant. Still very passive. No opinions yet.

5. Go through again, summarizing all the key ideas. Still very passive, you’re just trying to map out the text. Not forming opinions.

6. Rewrite the outline, now including your summaries.

7 and above. Now, if necessary, start actively critiquing. That’s a whole different matter. And usually involves playing the author off other authors. You usually don’t get to this part. I’ll refer you back to “How to Read a Book.”

The key thing is to focus on passively understanding, rather than opinion formation.


Gemini is amazing for taking a merge file of your whole repo, dropping it in there, and chatting about stuff. The level of whole codebase understanding is unreal, and it can do some amazing architectural planning assistance. Claude is nowhere near able to do that.

My tactic is to work with Gemini to build a dense summary of the project and create a high level plan of action, then take that to gpt5 and have it try to improve the plan, and convert it to a hyper detailed workflow xml document laying out all the steps to implement the plan, which I then hand to claude.

This avoids pretty much all of Claude's unplanned bumbling.


In undergrad I took an abstract algebra class. It was very difficult and one of the things the teacher did was have us memorize proofs. In fact, all of his tests were the same format: reproduce a well-known proof from memory, and then complete a novel proof. At first I was aghast at this rote memorization - I maybe even found it offensive. But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it! Moreover, producing the novel proofs required the same kinds of "components" and now because they were "installed" in my brain I could use them more intuitively. (Looking back I'd say it enabled an efficient search of a tree of sequences of steps).

Memorization is not a panacea. I never found memorizing l33t code problems to be edifying. I think it's because those kinds of tight, self-referential, clever programs are far removed from the activity of writing applications. Most working programmers do not run into a novel algorithm problem but once or twice a career. Application programming has more the flavor of a human-mediated graph-traversal, where the human has access to a node's local state and they improvise movement and mutation using only that local state plus some rapidly decaying stack. That is, there is no well-defined sequence for any given real-world problem, only heuristics.


This author [1] has a fundamental misunderstanding of cons cells and what their purpose is. Cons cells are a primitive. They are best understood in contrast to the C model where memory is an array, i.e. an associative map from integers (or at least some abstract data type with a total ordering) onto values which are also integers (or at least sets of objects that can be mapped into finite sets of integers).

The key difference between the C primitive and the Lisp primitive is that the Lisp primitive does not require anything to be mapped onto integers whereas C does. You have to be able to increment and decrement pointers in C. You can have a fully functional Lisp with no numbers at all. In fact, this was the whole point of Lisp when it was invented: providing a model for computing on symbolic expressions rather than numbers. There is a reason that the original Lisp paper was called, "Recursive functions of symbolic expressions and their computation by machine" rather than "Cons cells: a new computational primitive".

All of the "problems" with cons cells are not problems with cons cells at all, they are problems with punning cons cells, i.e. using cons cells to implement higher-level abstractions without properly hiding the fact that the underlying implementation is a cons cell. The exact same thing happens in C when you run off the end of an array or cast a pointer to (void *). These are not problems with pointers per se, they are problems with using pointers to implement higher level abstractions (like finite arrays) without properly hiding the underlying implementation.

Punning cons cells and abusing pointers have similar consequences: segfaults in the case of C, and (typically) "NIL is not a valid argument to..." errors in Lisp. Both of these are the result of bad abstraction discipline, not a problem with the primitives.
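As a rough sketch of the punning problem (in Python rather than Lisp, with hypothetical helper names), a cons cell is just a pair, and a "proper list" is merely a convention built on top of it:

```python
# A cons cell is just a pair of two slots (car, cdr); nothing more.
def cons(a, d): return (a, d)
def car(c): return c[0]
def cdr(c): return c[1]

nil = None  # the empty list

# A "proper list" is a convention layered on cons cells: (1 2 3)
lst = cons(1, cons(2, cons(3, nil)))

def second(c):
    # A list-level operation that assumes its argument really is a proper list.
    return car(cdr(c))

assert second(lst) == 2

# Punning: a dotted pair is a perfectly valid cons cell, but it is NOT a list.
pair = cons(1, 2)
# second(pair) would raise TypeError here -- the analog of Lisp's
# "NIL is not a valid argument to..." -- because the list abstraction leaked.
```

The failure isn't in `cons` itself; it's in handing a raw cons cell to code that expects the higher-level list abstraction.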

---

[1] The author is Xah Lee, who is pretty well known in the Lisp community. He is widely considered a troll, an opinion which I personally share. I've tried to make my assessment of this post as dispassionate as I can, but I thought I should disclose the fact that I have a pretty strong bias.


As an oldie who has lived through multiple hype cycles - Don't take the "retire early" crowd seriously.

Over the decades, I have seen too many friends who have fallen for this story, 'hit their numbers', and lost their minds after 'retiring'. This is because the mind just unravels (unable to find meaning) if it doesn't have daily structure and purpose.

You are ready to retire, not just when you are financially independent but when you have settled into a pattern that is sustainable long term, of daily 'meaning formation'.

AI will create problems too. Anything that grows in complexity also generates more issues, never fewer. Don't believe the hype. Tune it out. Focus on problems. Not tools.

Another thing I learnt is - Balance the work "has to feel good" story with whatever responsibilities you have or will have in life. So if people depend on you or will depend on you in future, don't make the decision purely on I want work to feel like play. It's easier to keep work and play separate than to try and merge them. Both are required though for 'meaning formation' but balance it out. So look for both in separate areas.

Try looking for work with teams that are multi-disciplinary. If you head back to uni, don't just roam the comp sci dept. Go look at other depts and ask them if they need your skills. This is where the most "fun" happens imho. But people head where the money is and end up doing a lot more boring stuff. Software is everywhere, so you can get involved with whatever you want - chemistry, biology, astrophysics etc etc - if you just go look. All the best!


It was put best by Steve Jobs:

“John [Sculley] came from PepsiCo, and they, at most, would change their product once every 10 years. To them, a new product was, like, a new-size bottle, right? So if you were a product person, you couldn’t change the course of that company very much. So who influenced the success of PepsiCo? The sales and marketing people. Therefore, they were the ones that got promoted, and therefore, they were the ones that ran the company. Well, for PepsiCo, that might have been okay. But it turns out, the same thing can happen in technology companies that get monopolies. Like, oh, IBM and Xerox. If you were a product person at IBM or Xerox…So you make a better copier or a better computer. So what? When you have a monopoly market share, the company is not any more successful. So the people that can make the company more successful are sales and marketing people, and they end up running the companies. And the product people get driven out of the decision-making forums. And the companies forget what it means to make great products. The product sensibility and the product genius that brought them to that monopolistic position gets rotted out by people running these companies who have no conception of a good product versus a bad product. They have no conception of the craftsmanship that’s required to take a good idea and turn it into a good product. And they really have no feeling in their hearts usually about wanting to really help the customers.”

I’m watching this happen at my current company. It’s tragic, and so obvious.


I agree. This is what I tell my students:

1. Stop living other people's experiences. Start having your own.

2. Stop reading blogs. Start building apps.

3. Everyone's experience depends on their use case or limitations. Don't follow someone's opinion or ideology without understanding why.

4. Don't waste time chasing employees or researchers on Twitter or Substack. Most of them are just promoting themselves or their company.

5. Don't let anxiety or FOMO take over your time. Focus on learning by doing. If something important comes out, you'll find out eventually.

6. Being informed matters, but being obsessed with information doesn't. Be smart about how you manage your time.

That's what I tell them.


One of the motivators behind GOAL (Lisp dialect used on Jak & Daxter games) is to allow programming the PS2's various processors very close to the metal in a single language.

The Jaguar might benefit from a GOAL-like language of its own.


No amount of reading or watching will make any meaningful difference in terms of skills.

Only work, i.e. producing, creating something, getting your hands dirty.

Beware also the simple fact that most so-called YouTube experts are only experts in YouTube. By design.

Real experts are busy engineering things. You can’t be both because each area is extremely time and energy consuming.

> I’m not learning any new skills and maybe falling behind the industry

Welcome to reality. You are not falling behind. The majority of work "in the industry" is like this - boring, repetitive, not challenging.

Only a challenging goal will push you to raise the skill bar.

Challenging software engineering goal == complex problem that you think is _almost_ beyond your current abilities.


This is going to sound really strange to you and totally glib. Sorry for that, but it'll make sense after.

Read Plato's "Apology," "Crito," and "Phaedo." Yes, in that order.

https://classics.mit.edu/Plato/apology.html

https://classics.mit.edu/Plato/crito.html

https://classics.mit.edu/Plato/phaedo.html

If you know nothing of these dialogs, that's almost better.

I know that some internet rando telling you to read a bunch of Plato is, like, never going to happen.

But on the off chance that you do, the less I tell you, the better. And if you do decide to do this, you can cheat and read along with the sparknotes and use an LLM to help read along too. But do try your best to read it first, then use the other resources to guide you. It'll make more sense after you read them. Again sorry, about this being really strange.

But it is worth your time and effort, I promise you.


Surprised that "controlling cost" isn't a section in this post. Here's my attempt.

---

If you get a hang of controlling costs, it's much cheaper. If you're exhausting the context window, I would not be surprised if you're seeing high cost.

Be aware of the "cache".

Tell it to read specific files (and only those!), if you don't, it'll read unnecessary files, or repeatedly read sections of files or even search through files.

Avoid letting it search - even halt it. find / rg can produce thousands of tokens of output depending on the search.

Never edit files manually during a session (that'll bust cache). THIS INCLUDES LINT.

The cache also goes away after 5-15 minutes or so (not sure) - so avoid leaving sessions open and coming back later.

Never use /compact (that'll bust cache, if you need to, you're going back and forth too much or using too many files at once).

Don't let files get too big (it's good hygiene too) to keep the context window sizes smaller.

Have a clear goal in mind and keep sessions to as few messages as possible.

Write / generate markdown files with needed documentation using claude.ai, and save those as files in the repo and tell it to read that file as part of a question. I'm at about ~$0.5-0.75 for most "tasks" I give it. I'm not a super heavy user, but it definitely helps me (it's like having a super focused smart intern that makes dumb mistakes).

If i need to feed it a ton of docs etc. for some task, it'll be more in the few $, rather than < $1. But I really only do this to try some prototype with a library claude doesn't know about (or is outdated). For hobby stuff, it adds up - totally.

For a company, massively worth it. Insanely cheap productivity boost (if developers are responsible / don't get lazy / don't misuse it).


day jobs are underrated:

scientists and inventors: https://bigthink.com/hard-science/inventors-day-jobs/

see also: writers, musicians, actors ...


Sure! I recommend sitting with the book, a pen, and a notebook at a cafe or wherever you like and write solutions to the practice problems you see sprinkled in each chapter as you read every single word. Then choose a few of the homework problems and do those, some will require a computer. Most of all, work through the labs and don't cheat yourself by looking at other (probably not very good) solutions posted online! Solving the labs with the textbook and TLPI[0] as a reference is how I got the most out of the course. A list of the assignments, as they're done at CMU, is posted below[1]. Good luck!

[0]: http://www.man7.org/tlpi/

[1]: https://www.cs.cmu.edu/~213/assignments.html


It would appear you can watch Chris Curry and Hermann Hauser watch Micro Men: https://www.youtube.com/watch?v=yaonVYOTSsk -- and then have a post-viewing chat: https://www.youtube.com/watch?v=l4I2ktcWdJM

"I saw the first five minutes and had to run away, because I couldn't bear to see myself portrayed by Martin Freeman" -- Chris Curry


Someone will be along in a minute to tell you to watch Micro Men, an amusing and fairly accurate BBC dramatization of the Sinclair/Acorn rivalry :) but I'm here to recommend that you watch the Computer History Museum's interview with Hermann Hauser the erstwhile director of Acorn - he's very charming: https://m.youtube.com/watch?v=Y0sC3lT313Q

I'm fairly sure they've got one with Chris Curry too, but I can't spot it just now.


Reminds me of people in abusive relationships. From outside everyone wonders why they are not getting away from their abusers. But inside, the victims believe there is nowhere else to go.

Windsurf <- try this first

Cursor <- then this

Aider <- then add this to your workflow

The name of the game now is to avoid getting fired at all costs and weather the storm until the dust settles...

> 99% of the code in this PR [for llama.cpp] is written by DeepSeek-R1

It's definitely possible for AI to do a large fraction of your coding, and for it to contribute significantly to "improving itself". As an example, aider currently writes about 70% of the new code in each of its releases.

I automatically track and share this stat as graph [0] with aider's release notes.

Before Sonnet, most releases were less than 20% AI generated code. With Sonnet, that jumped to >50%. For the last few months, about 70% of the new code in each release is written by aider. The record is 82%.

Folks often ask which models I use to code aider, so I automatically publish those stats too [1]. I've been shifting more and more of my coding from Sonnet to DeepSeek V3 in recent weeks. I've been experimenting with R1, but the recent API outages have made that difficult.

[0] https://aider.chat/HISTORY.html

[1] https://aider.chat/docs/faq.html#what-llms-do-you-use-to-bui...


Assuming it's a real trend rather than hype/excuses... It'll be interesting to see all the bugs and maintenance headaches that occur on a delay. That said, the job market can probably stay irrational longer than I can stay solvent.

I expect to lean heavily on softer skills like "being able to collaborate with product owners help them figure out what they actually want to build", or "able to usefully simplify concepts in a way that helps non-developers make decisions."


The layoffs and/or pay cuts are coming this year.

I don't see companies keeping large engineering head counts when they can get working software in minutes from specs.

So, I suggest you become better in writing software specs.

Writing specs is safe for the foreseeable future because:

1. There is still no AI architecture that is comparable to the human brain, other than rote memory.

2. There is not enough data to train a model with such an architecture if and when it's invented.


At that age, it's rather simple. Do whatever it takes to get the foot in the door at a FAANG company. Take whatever RSUs they award you and put them in the bank for 10 years. Get promoted, play the game and after ten or fifteen years, collect all the RSU's you've built up, cash them in and then retire somewhere cheap building out personal projects for the rest of your days to keep yourself engaged and learning (on your own dime).

My list:

- Goodtape (audio transcription made right)

- Wondershare Dynamics (AI MoCap from single video, using for sports analysis)

- On1 Pro Raw / Davinci Resolve AI features (game changers for my photo/video editing)

- ComfyUI/Comflowy (for image generation pipelines)

- Windsurf Editor (brand new, but gosh this is magic)


I could see Broadcom picking up x86.

The Machinery of Life by David S. Goodsell - it's a short microbiology book with paintings and renders of molecules and molecular processes. It's easy to read and imagining biology is straightforward when you can just see the molecules. I heard about it from https://news.ycombinator.com/item?id=40103590

Change is hard, but to succeed in a political organization with political currents in the mix, you have to embrace it, so if you choose to stay, I recommend adopting a fresh perspective -- Your old job is over. This is a new job. Forget all past attachments to your old boss and walk into this new team with an open mind. Maybe the new boss is a decent leader, maybe not, but go in without any preconceived notions, do the work and see what happens.

Realize that even though it may be political, the leadership chose your new boss, so he is doing something they like. You are tanking your own role if you go in fighting. So go in and see what is going on that is working. There may be completely different measures of success vs. what you were striving for, which is why there is a discrepancy in how people view performance. Learn what the desired outcomes and expectations are, and why.

And if you spend some time in that mode of learning and acceptance and find they are all idiots, then leave. It is never too late to walk out. But give them a chance - there is a possibility that teams other than your own are different, but still decent teams.


My 2 cents as someone with a related master's. IMO a master's is a quick way for someone without experience, or no viable way to get that experience, to quickly gain rough-and-dirty exposure in the hope that it will open doors to gaining said experience. That is a sententious way of saying I think it is inferior to actual experience in the field building products, learning from that, and maybe doing some self-learning on the side to cover the gaps in your knowledge. A master's will not provide you with very deep theoretical knowledge - if what you want is deep expertise, you should be looking at a PhD.

That said, there are benefits to doing one at a school like Stanford or MIT, as they tend to be quite rigorous, and you might pick up intangible benefits such as new study habits or whatever one picks up from being around a lot of smart, motivated people, if you didn't already attend a prestigious school for college. Also, a lot of employers value a master's degree, so independently of whether it actually improves your skill or knowledge, the paper might be worth having.

I do think you can self-teach, though. These days it's easy to find an online course or MOOC that will give you enough to be reasonably well versed if you're not looking for deep PhD-level knowledge.
