
Just the beginning. What happens when it becomes impossible to parse fact from fiction?

I mean RSA 2048 can currently be broken in 104 days, and we'll be at quantum levels sooner than we realize.

How are we validating anything after that? Biometrics?



> I mean RSA 2048 can currently be broken in 104 days

Only if one has a sufficiently large fault-tolerant quantum computer with 10,000 qubits and 2.23 trillion quantum gates [0]. Which is currently an unachievium.

[0] https://www.itnews.com.au/news/quantum-computers-wont-break-...
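
As a rough sanity check on those numbers (assuming, and this is my reading rather than the article's stated model, that the 104 days come from executing the 2.23 trillion gates one after another):

    # Back-of-the-envelope only: what sequential gate rate would
    # 2.23 trillion gates in 104 days imply? A real fault-tolerant
    # machine wouldn't run purely sequentially, so this is just a
    # sanity check on the scale of the numbers, not a design.
    gates = 2.23e12
    seconds = 104 * 24 * 60 * 60                   # 104 days
    print(f"{gates / seconds:,.0f} gates/second")  # ~248,000, i.e. ~250 kHz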


With this whole AI push, there have been pretty serious advancements in technology. I wouldn't doubt that in 5 to 10 years we'll have a vastly different threat landscape. Nation states can afford supercomputers, and so can some private corporations.


How is AI related to this? Did LLMs figure out how to crack encryption?


That doesn't seem too unbelievable. Maybe a sufficiently large LLM will be able to find a flaw in the algorithms or logic behind some encryption schemes which even our greatest monkey brains overlooked.


No, they cannot. LLMs are LANGUAGE models, not KNOWLEDGE models. They do not "know" what algorithms mean any more than they know what anything else they "talk about" means.

What they are more than capable of is to produce really convincing hype, which claims that LLMs can do anything and everything.


Some skepticism is healthy, but I think you have a misconception about what an LLM is. Neural networks use machine learning to automagically find connections between data. Handwriting recognition is an example of this: it's trivial to build a model that can recognize the differences between characters/digits, whereas writing an algorithm to do that effectively would be very difficult. You can train a model on sample data, and it will "learn" the patterns that make up a '7', 'G', 'O', etc., with the same accuracy as a person, or better.
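
As a rough illustration of that (scikit-learn and its bundled 8x8 digits dataset are my choices here, not something from the thread), a minimal sketch:

    # A small neural network learns to classify handwritten digits from
    # labeled examples, rather than from hand-written rules.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)   # 8x8 grayscale digit images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)             # "learns" the pixel patterns per digit
    print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # typically ~0.95+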

Language models are more or less the same thing, except they're trained on human language. Since human language is an encoding of human thought processes, the model learns the patterns of human thought that are embedded in text, and can both generate and recognize them.

This is why LLMs can "understand" and answer complicated questions that don't exist in their training set, and do things like debug code. Even though the model never saw that exact question, it recognizes the high-level/abstract concepts that make up the question (since those abstract concepts exist in various different forms in the training set).

So giving an LLM a textual description of an algorithm, and having it detect logical errors in it, is not that unbelievable. Whether current models are good enough to do it well is the question to ask.
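
Concretely, I'm imagining something like the sketch below. The client library, the model name, and the deliberately broken toy cipher are all placeholder assumptions on my part; any chat-style LLM API would do:

    # Ask a chat LLM to review a textual description of a (toy) cipher
    # for logical flaws. Client and model name are assumptions.
    from openai import OpenAI

    DESCRIPTION = (
        "Cipher: every byte of the plaintext is XORed with one fixed "
        "key byte, and the result is sent over the wire."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "List any logical or cryptographic flaws in this "
                       "scheme:\n" + DESCRIPTION,
        }],
    )
    # A competent model should flag single-byte XOR as trivially breakable.
    print(resp.choices[0].message.content)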


I think you're accidentally making a jump in meta-levels:

The equivalent of an existing handwriting recognizer for e.g. RSA wouldn't be a model that tells you how to crack RSA, it'd be a model that _does_ crack it (maybe by returning a probability distribution over the plaintext that's better than uniform). That feels pretty unlikely to me personally, but maybe that's doable, who knows.

For the standard of "the LLM tells us how to crack RSA," it feels like the equivalent for handwriting recognition would be the LLM outputting a description of a novel algorithm for handwriting recognition (and probably one that isn't just an existing one with some hyperparameters tweaked).

My understanding of the weaknesses of current LLMs is that they're actually pretty bad at that, since they tend to regurgitate existing content when they're able to.


> The equivalent of an existing handwriting recognizer for e.g. RSA wouldn't be a model that tells you how to crack RSA, it'd be a model that _does_ crack it (maybe by returning a probability distribution over the plaintext that's better than uniform). That feels pretty unlikely to me personally, but maybe that's doable, who knows.

Well, my point was based on the assumption that there is a flaw/backdoor in RSA, but nobody has discovered it yet. So you'd just need a model that can take a textual description of the algorithm as input, and spit out a list of logical errors/flaws.

I'm not saying there is a flaw or backdoor, but if there were, then an LLM would potentially be able to find it, while a team of human experts could miss it.

A model that receives an RSA encrypted payload and outputs the decrypted version would of course be impossible unless the above assumption is true... or you give it access to some compute power so it can either try to brute-force it, or try to track down and hack the servers with the keys :P
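
(For a sense of why the brute-force branch of that joke is hopeless, a quick back-of-the-envelope, with the 10^18 guesses per second being an absurdly generous assumption on my part:)

    # Even naive factoring only has to search up to sqrt(N), but for a
    # 2048-bit modulus that still leaves on the order of 2^1024 candidates.
    candidates = 2 ** 1024
    rate = 10 ** 18                  # guesses/second (very generous assumption)
    seconds_per_year = 365 * 24 * 3600
    years = candidates // rate // seconds_per_year
    print(f"~10^{len(str(years)) - 1} years")  # ~10^282 years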


> So you'd just need a model that can take a textual description of the algorithm as input, and spit out a list of logical errors/flaws.

Right, I'm saying that this is the thing that's dissimilar from handwriting recognition -- the nearer equivalent would be an LLM creating a textual description of a novel approach to handwriting recognition, which, as far as I'm aware, is beyond current models.


Most people think information is transmitted from source to receiver. This is actually wrong.

What happens is the information is creatively formed in the mind of the receiver, by the receiver, in all cases.

In other words, people form an explanation for the observation in their minds and judge it compared to other competing explanations.

So instead of the post-truth, doom-and-gloom world that you fear, another possible outcome is that people get better at forming or identifying good explanatory knowledge. In other words, the bar for what you call truth actually goes up and people get smarter.

An example of this happening would be door-to-door salespeople. They basically no longer exist, because people learned not to answer the door when a random person knocks.

The old explanation “someone I want to talk to is here to see me” got replaced with a new, better explanation: “some time-wasting stranger is probably at the door.”


That is an optimistic view, which I appreciate. The only issue I have is that I think we will need to mobilize as a culture to embrace it, in order for it to succeed.

I don't think it is as simple as replacing one explanation with another (like your door-to-door salesman example) because it is such an open-ended problem. I think it is more like completing the Enlightenment as conceived by Kant and others.

I think we could do it (e.g., philosophy classes beginning in elementary school, etc.), but it would be tough to convince people to participate. It is hard, mind-bending, solitary work that is inherently alienating, which most people avoid.

Additionally, there is also the inherent contradiction in developing your own independent mind by order of the government or teachers, as well as a host of other paradoxes.

Maybe the best route is to try to make philosophy popular? Use social behavior so that people are attracted to it for the wrong reasons, but hope that a sizeable part of the population ends up taking it seriously? And that sizeable chunk is enough to influence the group?

I have no idea, but I like your comment and hope there is a silver lining: that we become deeper and more thoughtful people.


Need a source on RSA 2048 being breakable. QC like that is still quite far off, and it can't be broken conventionally unless there has been some kind of mathematical breakthrough.



We're good! It's a risk assessment - nothing has been broken, but they propose that a computer with these specs could do it.

I wouldn't worry too much about encryption being easily broken soon.


For those who worry about it, it isn't that it will be broken soon, but that it will be broken within a timeframe in which the data is still useful.

Veritasium : How Quantum Computers Break The Internet... Starting Now - https://youtu.be/-UrdExQW0cs

https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later


I'm not sure what encrypted data produced today would be relevant in 40 years' time? Thinking about how some cases can't be prosecuted since it's been too long since the offence.


> I mean RSA 2048 can currently be broken in 104 days

With what hardware? I've been CONSTANTLY trying to break Hermes2 ransomware (which uses RSA 2048) for almost a solid decade.


I wonder if we'll see analogue methods as the only reliable ones left.



