GTP's comments | Hacker News

There has to be some balance though, as requiring changes too frequently encourages the use of insecure but easy-to-remember passwords, or passwords that are very similar to the previous one, thus defeating the purpose of the change (e.g. a password containing the year, where the employee only changes the year each time). Best would be pushing for the use of a password manager or auth tokens like Yubikeys.

I think the point here is that, while such a translation wouldn't be admissible in court, many of us have already used machine translation to read some legal agreement in a language we don't know.


> many of us have already used machine translation to read some legal agreement in a language we don't know.

Have we? Most of us? Really? When?


Most people don't have to deal with documents in foreign languages in the first place.

But for those that do, yes, machine translation use is widespread if only as a first pass.


I know I did for rental contracts, and I know other people who did the same. And I said many, not most.


In my opinion, programming languages with a loose type system or no explicit type system only appear to foster productivity, because it is way easier to end up with undetected mistakes that can bite later, sometimes much later. Some people may argue that by then it is someone else's problem, but even in that case we can agree that the overall quality suffers.


With the added issue that now the expert is working with code they didn't write, and that could in general be harder to understand than human-written code. So they could find it easier to just throw it away and start from scratch.


I doubt some of this article's claims.

> At present, a human with specialist expertise still has to guide the process, but it’s not hard to extrapolate and imagine that process becoming fully automated in the next few years.

We already had some software engineers here on HN explain that they don't make heavy use of LLMs because the hard part of their job isn't actually writing the code, but understanding the requirements behind it. And formal verification is all about requirements.

> Reading and writing such formal specifications still requires expertise and careful thought. But writing the spec is vastly easier and quicker than writing the proof by hand, so this is progress.

Writing the spec is easier once you are confident you have fully understood the requirements, and here we get back to the above issue. Plus, it is already the case that you don't write the proof by hand: that is what the prover either assists you with or does entirely.

> I find it exciting to think that we could just specify in a high-level, declarative way the properties that we want some piece of code to have, and then to vibe code the implementation along with a proof that it satisfies the specification.

And here is where I think problems will arise: moving from the high-level specification to the formal one, which is what actually gets verified.

Of course, this would still be better than having no verification at all. But it is important to keep in mind that, with these additional levels of abstraction, you will likely end up with a weaker form of formal verification, so to speak. Maybe it is worth it to still verify some high-assurance software "the old way" and leave this only for the cases where additional verification is nice to have but not a matter of life or death.
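To make the spec-vs-proof split concrete, here is a minimal toy sketch in Lean 4 (my own hypothetical example, not from the article): the theorem statement plays the role of the formal specification, and the tactic block is the proof that the tool assists with or, in the article's vision, an LLM would produce.

    -- Implementation and specification are written separately.
    def myMax (a b : Nat) : Nat :=
      if a ≤ b then b else a

    -- The theorem statement is the "spec": myMax never returns something
    -- smaller than its first argument. The proof below is the part a
    -- prover helps with (or that would be generated automatically).
    theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
      unfold myMax
      split <;> omega

The gap the comment is pointing at sits between the informal requirement ("give me the larger of the two") and the theorem statements you choose to write down: the proof can be machine-checked, but the adequacy of the spec cannot.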


Could you please explain the difference to me? Since UDP is the "User Datagram Protocol", when I read about datagrams I always think of UDP, and I thought it was just a different way of saying the same thing. Maybe "datagram" is supposed to be the packet itself, but you're still sending it via UDP, right?


There are actually a lot of combinations of (domain, type, protocol) available. It is not always the case that the protocol implies the type.

In IP land (domains AF_INET and AF_INET6), we have the well known UDP and TCP protocols, of course. UDP is always datagram (SOCK_DGRAM) and TCP is always stream (SOCK_STREAM). Besides datagram-only ICMP, there's also SCTP, which lets you choose between stream and sequential-packet (SOCK_SEQPACKET) types. A sequential-packet socket provides in-order delivery of packet-sized messages, so it sits somewhere between datagram and stream in terms of features.

In AF_UNIX land, there are no protocols (the protocol field is always 0), but all 3 of the aforementioned types are available. You just have to pick the same type on both sides.

Footnotes: SCTP is not widely usable because Windows doesn't natively support it and many routers will drop or reject it instead of forwarding it. Also, AF_UNIX is now supported on Windows, but only with SOCK_STREAM type.
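A rough sketch of what those combinations look like at the socket() level (assuming Linux headers; the SCTP call will fail if the kernel has no SCTP support):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>   /* IPPROTO_UDP, IPPROTO_TCP, IPPROTO_SCTP */

    int main(void) {
        /* (domain, type, protocol) triples from the comment above */
        int udp  = socket(AF_INET, SOCK_DGRAM,     IPPROTO_UDP);
        int tcp  = socket(AF_INET, SOCK_STREAM,    IPPROTO_TCP);
        int sctp = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP); /* may fail: no kernel support */
        int unx  = socket(AF_UNIX, SOCK_SEQPACKET, 0);            /* protocol is always 0 here */

        printf("udp=%d tcp=%d sctp=%d unix=%d\n", udp, tcp, sctp, unx);
        return 0;
    }

An unsupported combination shows up as -1 with errno set to EPROTONOSUPPORT or ESOCKTNOSUPPORT, which is an easy way to probe what your system actually offers.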


UDP, TCP, and ICMP all run directly on top of IP: they are identified by the same protocol number field in the IP header, so they fill the same bits within network packets. Sending an ICMP packet (protocol 1) is therefore not the same as sending a UDP packet (protocol 17).

You can see a list of network protocols in /etc/protocols actually, or here: https://www.iana.org/assignments/protocol-numbers/protocol-n...
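Those numbers are exactly what getprotobyname() looks up in /etc/protocols; a quick sketch:

    #include <stdio.h>
    #include <netdb.h>   /* getprotobyname(), struct protoent */

    int main(void) {
        const char *names[] = { "icmp", "tcp", "udp", "sctp" };
        for (int i = 0; i < 4; i++) {
            struct protoent *p = getprotobyname(names[i]);
            if (p)   /* entry found in /etc/protocols */
                printf("%-5s -> %d\n", p->p_name, p->p_proto);
        }
        return 0;   /* expected: icmp -> 1, tcp -> 6, udp -> 17, sctp -> 132 */
    }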


Think of it as a protocol for sending custom ("user") datagrams. There are other datagram protocols too.

The kernel API defaults to UDP when you pass 0 as the protocol for a datagram socket, which may be the source of the confusion.
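For example, on Linux (where the SO_PROTOCOL socket option is available) you can check that passing 0 and passing IPPROTO_UDP give you the same thing; a small sketch:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void) {
        /* Protocol 0 lets the kernel pick the default for the type:
           UDP for SOCK_DGRAM, TCP for SOCK_STREAM. */
        int a = socket(AF_INET, SOCK_DGRAM, 0);
        int b = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        int pa = 0, pb = 0;
        socklen_t len = sizeof(pa);
        getsockopt(a, SOL_SOCKET, SO_PROTOCOL, &pa, &len);
        len = sizeof(pb);
        getsockopt(b, SOL_SOCKET, SO_PROTOCOL, &pb, &len);
        printf("protocols: %d and %d\n", pa, pb);  /* both 17, i.e. IPPROTO_UDP */
        return 0;
    }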


When I read about animals I always think about elephants and thought it was just a different way of saying the same thing.


As the other commenter pointed out, UDP is a transport protocol, not a packet-level protocol.

Think of it like this:

Ethernet sends data frames to a MAC address. It has a sender and a receiver address. See here for structure: https://en.wikipedia.org/wiki/Ethernet_frame

Internet Protocols (v6 and v4) send packets via Ethernet (or WiFi or Bluetooth or anything else) from an IP address to an IP address. For structure see https://en.wikipedia.org/wiki/IPv6_packet or, if for some reason you still need the legacy version, https://en.wikipedia.org/wiki/IPv4#Packet_structure (as an aside, notice how much complexity was dropped relative to the legacy version). Notably, IP does not have any mechanism for reliability. It is essentially writing your address and a destination address on a brick, tossing it over your fence into the neighbor's yard, and asking them to pass it along. If your neighbor isn't home, your brick is not moving along.

TCP and UDP send streams and datagrams respectively, and both use the concept of application ports. A TCP stream is what it sounds like: a continuous stream of bytes with no length or predefined stopping point. TCP takes your stream and chunks it into IP packets, whose size is bounded by the smallest data frame size of the underlying link (Ethernet or whatever data link protocol is in use). Typically this is 1500 bytes, but don't forget to account for header sizes, so the useful payload is smaller. TCP is complex because it guarantees that your stream will eventually be delivered in the exact order in which it was sent. Eventually here could mean at t = infinity.

UDP simply has source and destination port numbers in its header (which follows the IP header in the data frame), and guarantees nothing: no ordering, no guaranteed delivery, etc. If an IP packet is a brick with two house addresses, a UDP datagram is a brick with two house addresses and an "ATTN: application X" added. An address represents a computer (this is very approximate in a world where any machine can have N addresses and run M VMs or containers which themselves can have O addresses), and a port represents a specific process on that computer.
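To make the "brick with two addresses and an ATTN: application X" concrete, this is roughly all it takes to hand one datagram to the kernel (a minimal sketch; the address 192.0.2.1 and port 9999 are made up for illustration):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>    /* inet_pton(), htons() */

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* a UDP socket */

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(9999);                    /* "ATTN: application X" */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* destination "house address" */

        const char *msg = "one self-contained brick of data";
        /* One sendto() call == one datagram on the wire: no stream,
           no ordering, no delivery guarantee. */
        ssize_t n = sendto(fd, msg, strlen(msg), 0,
                           (struct sockaddr *)&dst, sizeof(dst));
        printf("queued %zd bytes\n", n);
        return 0;
    }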

ICMP does not use ports. ICMP is meant for essentially network and host telemetry, so you are still sending and receiving only at the IP address level. But it has a number of message types; you can see them here: https://en.wikipedia.org/wiki/ICMPv6. Note that echo request and echo reply (ping and pong) are just two of the types. Others are responsible for communicating what raw IP cannot. For example, the Packet Too Big type tells you that an IP packet you tried to send hit a part of its path where it did not fit, and it's used for path MTU discovery (you keep sending different-sized packets to find the largest one that will go through).

There are other protocols that run directly on top of IP (6in4 for example, or SCTP). Most are far less popular than the three mentioned above. Some use datagrams (discrete "bricks" of data), some use streams (endless "tapes" of data), which is the datagram-vs-stream distinction in socket types. You can also go a level deeper and craft raw IP packets directly, but for that you typically must be the root user, since you could, for example, send a packet with the source port set to 22 even though you are not the ssh daemon.

Since ICMP has no concept of a port, when you send a ping to a remote host and it returns an echo reply to you, how does your kernel know to hand the response to your process and not some other one? In the ICMP header there is an ICMP identifier (often derived from the process PID), and when the reply comes back it carries the same identifier (but with source and destination IPs swapped and the type updated to echo reply). This is what the kernel uses to find the process to which it will deliver the ICMP packet.
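A minimal sketch of the header a userspace ping fills in before handing it to a raw socket (assuming Linux's <netinet/ip_icmp.h> layout; it only builds the header and doesn't send anything, since actually sending would need root or CAP_NET_RAW plus the ICMP checksum):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>          /* getpid() */
    #include <arpa/inet.h>       /* htons(), ntohs() */
    #include <netinet/ip_icmp.h> /* struct icmphdr, ICMP_ECHO */

    int main(void) {
        struct icmphdr hdr;
        memset(&hdr, 0, sizeof(hdr));
        hdr.type             = ICMP_ECHO;                 /* echo request */
        hdr.code             = 0;
        hdr.un.echo.id       = htons(getpid() & 0xffff);  /* identifier, often the PID */
        hdr.un.echo.sequence = htons(1);                  /* incremented per ping */

        printf("icmp echo: type=%d id=%d seq=%d\n",
               hdr.type, ntohs(hdr.un.echo.id), ntohs(hdr.un.echo.sequence));
        return 0;
    }

The reply that comes back carries the same id field, which is what lets the kernel (or a raw-socket ping implementation) match it to the right process.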

I hope this clears up some of this.


Not really in the loop either, but when Deepseek R1 was released, I stumbled upon this YouTube channel [1] that made local AI PC builds in the $1000-2000 range. But he doesn't always use GPUs; maybe the cheaper builds were CPU plus a lot of RAM, I don't remember.

[1] https://youtube.com/@digitalspaceport?si=NrZL7MNu80vvAshx


Digital Spaceport is a really good channel, I second that - the author is not sparing any detail. The cheaper options always use CPU only, or sharding between different cheap GPUs (without SLI/switching) - which is not good for all use cases (he also highlights this). But some of his prices are one-off bargains for used stuff. And RAM prices doubled this year, so you won't buy 2x256 GB DDR4 for $336, no matter what: https://digitalspaceport.com/500-deepseek-r1-671b-local-ai-s...


'lots of RAM' got expensive lately -_-


Sure, but I think that the underlying assumption is that, after the public release of ChatGPT, the amount of auto-generated content on the web grew significantly. Plus, auto-generated content was easier to spot before.


They don't boot without it, but you can make them think that there's a battery by connecting power directly to the battery pins [1].

[1] https://www.youtube.com/watch?v=7f8SliNGeDM&pp=ygUYZ3JlYXRzY...


It means the year when it will gain a significant share of the desktop market, but it is mostly used as a meme.

