
Not true. It's getting a constant stream of bugfixes. It's also not "stuck" on Lua 5.1, but is deliberately not following Lua's path, except for some backports. There's also a recent post about how a LuaJIT 3 might work.

Where is that post?

https://www.freelists.org/post/luajit/Question-about-LuaJIT-...

Warning: Ridiculous cookie consent banner, needs dozens of clicks to opt out.


This cookie consent banner is handled in 0 clicks thanks to the Consent-O-Matic Firefox extension.

OK, then I got some wrong info. If it's staying there deliberately, then it's even worse. Maybe someone should fork it and bring it up to date with recent Lua versions. Why is this split needed?

My understanding is that there was a language fork after 5.1. One thing was a complete reworking of how math works. It used to be just floating point for everything, but the new idea was to make it like Python 3. So most operations are float/integer with some weird exceptions.
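A quick sketch of the new semantics (runs on Lua 5.3 or later; on 5.1/LuaJIT every number is just a C double, so there's nothing to show):

    print(3 / 2)           --> 1.5      ("/" always produces a float)
    print(7 // 2)          --> 3        ("//" is floor division, new in 5.3)
    print(math.type(1))    --> integer
    print(math.type(1.0))  --> float    (1 == 1.0 still holds; the types differ)
    print(3 & 2)           --> 2        (bitwise operators want integer operands)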

As with any language fork there will be some who stay and others who switch to the new thing. Often a fork will drive people away from a particular language as in my case.


Lua's nature as a primarily embedded language means backwards compatibility is not guaranteed between any two versions. If 5.2 was a language fork, then so were 5.3, 5.4, 5.5, etc. (5.2 did have some more significant changes, though.)

For that reason LuaJIT staying at ~5.1 actually works in its favor. Rather than trying to follow the moving target of the newest version, it gives a robust focal point for the Lua ecosystem, while modern versions can be developed and continue to serve their purpose in embedded systems and whatnot where appropriate.


I still don't see a reason not to update LuaJIT. Changes in Lua aren't just version number bumps; they should be improving something, which means those improvements are missing from LuaJIT.

Isn't it a bit naive to assume that, just because Lua released a new minor version, it must somehow be better? The author of LuaJIT has often laid out his arguments, including why he disagrees with the changes to the language, why they should have been implemented as libraries instead, why in his view LuaJIT is still more performant and more widely used than PUC Lua, and more.

As for forking, you can try, but I would warn you that one does not simply fork LuaJIT. It requires deep expertise in tracing JIT compilers, in assembly, and in many different computer architectures. Nobody was really up to the task when Mike Pall announced that he was searching for a maintainer, before his eventual return.


LuaJIT does have some backported features from newer versions. But Mike Pall -- the mad genius behind LuaJIT -- has made it clear he doesn't agree with many of the changes made in newer versions, hence why it's staying where it's at.

the beauty of open source is there's nothing stopping you! this might be your calling. best of luck

A language fork is unfortunate. The Python situation isn't much of a fork, really; Python 2 is basically EOL.

There’s no “basically”. Stick a fork in it; it’s done: https://www.python.org/doc/sunset-python-2/

It might not be supported by the consortium, but python2 still lives, slowly, in one place or another:

> The RHEL 8 AppStream Lifecycle Page puts the end date of RHEL 8's Python 2.7 package at June 2024.

https://access.redhat.com/solutions/4455511

At this point in RHEL it is only "deprecated", not "obsolete".


In RHEL I would never touch the system Python at all; I'd install whatever version I needed in a venv and configure any software I installed to use it. I learned the hard way to never mess with the system Python.
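For example, something like this keeps the OS and the app fully separated (paths are made up, but the pattern is standard):

    # leave /usr/bin/python3 alone; give the application its own interpreter
    python3 -m venv /opt/myapp/venv
    /opt/myapp/venv/bin/pip install -r requirements.txt
    /opt/myapp/venv/bin/python app.py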

Which is better than this mess of a situation with Lua.

I strenuously disagree. Not every language needs to chase trends and pile on unnecessary complexity because developers want the latest shiny language toys to play with. It's good to have a simple, stable language that works and that you can depend on to remain sane for the foreseeable future.

C is a language like that, but I fear the feature creep is coming (auto? AUTO??). JS is a lost cause.


Languages are products as well, either they evolve to cater to new audiences, or they slowly die as the userbase shrinks away with the passing of each developer generation.

The language is different. The changes to environments in particular are a non-starter. Sandboxing is incredibly clunky in 5.2+, and we lost a lot of metaprogramming power and dynamic behavior.
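To make the sandboxing point concrete, a sketch of both styles (the first half needs 5.1/LuaJIT, the second 5.2+, so they won't run in the same interpreter):

    -- Lua 5.1 / LuaJIT: any loaded chunk can be re-homed after the fact
    local sandbox = { print = print }          -- whitelist what the code sees
    local untrusted = loadstring("print(os)")
    setfenv(untrusted, sandbox)
    untrusted()                                --> nil (os is invisible)

    -- Lua 5.2+: setfenv is gone; the environment can only be supplied up
    -- front via load(chunk, name, mode, env), and changing it on a running
    -- function means poking at its _ENV upvalue with the debug library
    local untrusted = load("print(os)", "chunk", "t", sandbox)
    untrusted()                                --> nil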

> Maybe someone should fork it and bring it up to date with recent Lua versions. Why is this split needed?

Good news, you're someone. If you care, you're welcome to go for it.


I remember you could trivially circumvent that with "/lib/ld-linux.so <executable>". Does that no longer work?


noexec now prevents mmaping files on that filesystem as executable.
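Sketch of what that looks like through LuaJIT's FFI, to stay on topic (Linux x86-64 constants; the mount point and file name are made up):

    local ffi = require("ffi")
    ffi.cdef[[
    int open(const char *pathname, int flags);
    void *mmap(void *addr, size_t length, int prot, int flags, int fd, long offset);
    ]]
    local fd = ffi.C.open("/mnt/noexec/libfoo.so", 0)  -- O_RDONLY
    local p = ffi.C.mmap(nil, 4096, 5, 2, fd, 0)       -- PROT_READ|PROT_EXEC, MAP_PRIVATE
    -- on a noexec mount this returns MAP_FAILED (-1) with errno EPERM; the
    -- loader can still read the file, but can no longer map it executable,
    -- which is what killed the ld-linux.so trick
    print(p == ffi.cast("void *", -1), ffi.errno())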


That seems like an implementation detail, not a fundamental design decision, as it should be easy to change how packfiles are implemented. I'm not sure it would be an improvement though: it already only stores deltas for similar objects.
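You can inspect the delta chains yourself if you're curious:

    # the depth and base-object columns show which objects are stored
    # as deltas against similar ones
    git verify-pack -v .git/objects/pack/pack-*.idx | head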


Would be surprised if that's not how basically all tools behave, as I expect them all to seek to the central directory and then to the referenced offset of individual files when extracting. Doesn't really make a difference whether that's across a network file system or on a local disk.
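Rough sketch of the lookup every tool does, in Lua for illustration (file name made up, error handling omitted):

    -- locate a ZIP's end-of-central-directory record by reading only the
    -- tail of the file; nothing scans the archive front to back
    local f = assert(io.open("archive.zip", "rb"))
    local size = f:seek("end")
    f:seek("set", math.max(0, size - 65557))  -- 22-byte EOCD + max comment
    local tail = f:read("*a")
    local pos = tail:find("PK\5\6", 1, true)  -- EOCD signature
    -- the EOCD points at the central directory, which stores each entry's
    -- local header offset, so extraction is one seek + read per file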


Nothing against scratching itches (we all do), but that's what literally thousands of existing digital signage solutions already offer out of the box.


I wrote a solver for a similar puzzle: IIRC it was 3x3x3 with differently shaped pieces that left some space unfilled. Some of the pieces had holes and you additionally had three 3-long metal rods to place inside somewhere. I ran my code and it didn’t find a solution. The best it could do was fit all the wooden pieces and two of those rods through holes inside the pieces. [Spoiler] Turns out, the solution was to ignore the rods first, leave a 3x1x1 “tunnel” and put all three rods in there, completely ignoring the holes in the wooden pieces. I remember being slightly annoyed :-)


Pretty sure you can rotate JPEG images losslessly. But it’s still simpler to just modify metadata.


A quick search suggests to me that it's only a lossless process if the image dimensions are a clean integer multiple of 8 or 16 (as the blocks can be 8x8, 8x16, 16x8, or 16x16), otherwise the edges must be reencoded. Never written a JPEG codec though, so happy to be proven wrong.
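jpegtran makes the distinction explicit, for what it's worth:

    # -perfect refuses the transform unless it is fully lossless;
    # -trim would instead drop the non-transformable edge blocks
    jpegtran -rotate 90 -perfect in.jpg > out.jpg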


This is true, and others have mentioned it, but I think people are underestimating just how universal sensor dimensions being a multiple of 16 is — I really can’t think of any exceptions.


Well, it already has, among a ton of other modules, a memcached and a JavaScript module (njs), so you’re actually not that far off. An optional ACME module sounds fitting.


restic is basically identical and you can choose where you store your data.


restic can supposedly be set up to prevent a corrupted / compromised client from destroying old data using S3 versioning policy, but this doesn’t appear to be a well-supported feature with clearly-described security properties.

Tarsnap, in contrast, has an explicit first-class ability to prevent a compromised client from damaging old backups.


That’s because restic is not opinionated about where and how you store your backups. Restic provides a nice interface to create the backups, and then lets you choose where you want to store them (and how access to them is managed), be it locally or via SFTP or S3 or many other backends. Any security properties related to S3 are not in the scope of what restic is meant to do.
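The repository location is just a prefix on the command line (bucket, host and paths made up; S3 credentials come from the usual environment variables):

    restic -r s3:s3.amazonaws.com/my-backups init
    restic -r s3:s3.amazonaws.com/my-backups backup ~/documents
    restic -r sftp:user@host:/srv/restic-repo snapshots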

It’s pretty simple to enable versioning and object lock on your S3 bucket, but it is another step if you’re using restic. Sure, if you just want all of that taken care of for you, you can use tarsnap, but you’re paying a 5x+ premium for it.
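For reference, that extra step is roughly the following (bucket name made up; note that AWS has historically required object lock to be enabled when the bucket is created):

    aws s3api put-bucket-versioning --bucket my-backups \
        --versioning-configuration Status=Enabled
    aws s3api put-object-lock-configuration --bucket my-backups \
        --object-lock-configuration '{"ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}}'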

The other nice thing about restic is that since it’s just the client-side interface, it allows others to provide managed storage. Borgbase.com is a storage backend supported by restic that offers append-only backups, and it’s cheaper than tarsnap.


I disagree, strongly. Here are the relevant docs:

https://restic.readthedocs.io/en/stable/030_preparing_a_new_...

I would like to see an explicit discussion of what permissions are needed for what operation. I would also like to see a clearly specified model in which backups can be created in a bucket with less than full permissions and, even after active attack by an agent with those same permissions, one can enumerate all valid backups in the bucket and be guaranteed to be able to correctly restore any backup as long as one can figure out which backup one wants to restore.

Instead there are random guides on medium.com describing a configuration that may or may not have the desired effect.


Again, this isn’t at all in the scope of restic’s docs. If you’re using S3 as the storage, it’s on you to understand how S3 works and what permissions are needed, just like it’s on you to understand how your local file system and its permissions work if you use the local file system as a backend.

If you don’t understand S3 or don’t want to learn, then that’s fine, and you can pay the premium to tarsnap for simplifying it for you. But that’s your choice, not an issue with restic.

If you think differently, have you submitted a PR to restic’s docs to add the information you think should be there?


Interesting play on the debate, but after the response to restic's original decision to upstream Object Store permissions and features... to the Object Store, along with my attempts to explain S3 to several otherwise reasonably technical people...

I think people are frequently trapped in some way of thinking (not sure exactly what) that doesn't allow them to think of storage as anything other than block based. They repeatedly try to reduce S3 to LBAs, or POSIX permissions (not even modern ACL-type permissions), or some other comparison that falls apart quickly.

Best I've come up with is "an object is a burned CD-R." Even that falls apart, though.


I still completely disagree. It’s on me to understand IAM. It should not be on me to understand the way that restic uses S3 such that I can determine whether I can credibly restore from an S3 bucket after a compromised client gets permission to create objects that didn’t previously exist. Or to create new corrupt versions of existing objects.

For that matter, suppose an attacker modifies an object and replaces it with corrupt or malicious contents, and I detect it, and the previous version still exists. Can the restic client, as written, actually manage the process of restoring it? I do not want to need to patch the client as part of my recovery plan.

(Compare to Tarsnap. By all accounts, if you back up, your data is there. But there are more than enough reports of people who are unable to usefully recover the data because the client is unbelievably slow. The restore tool needs to do what the user needs it to do in order for the backup to be genuinely useful.)


I think you two may be talking past each other a bit here. Bear in mind I am not a security expert, just a spirited hobbyist; I may be missing something. As stated in my digital resilience audit, I actually use both Tarsnap and restic for different use cases. That said:

Tarsnap's deduplication works on the archive level, not on the particular files etc. within the archive. Someone can set up a write-only Tarsnap key and trust the deduplication to work. A compromised machine with a write-only Tarsnap key can't delete Tarsnap archive blobs; it can only keep writing new archive blobs to try to bleed your account dry (which, ironically, the low sync rate helps protect against - not a defense for it, just a funny coincidence).
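Setting that up is a single key-management command plus using the restricted key on the client (paths made up):

    # derive a key that can write new archives but not read or delete old ones
    tarsnap-keymgmt --outkeyfile /root/write-only.key -w /root/tarsnap.key
    tarsnap --keyfile /root/write-only.key -cf backup-$(date +%F) /home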

restic, by contrast, does its dedupe at the file level, and what's more, it seems to handle its own locks within its own files. Upon starting a backup, I observe restic first creates a lock and uploads it to my S3-compatible backend (my general-purpose backups actually use Backblaze B2, not AWS S3 proper, caveat emptor). Then restic later attempts to delete that lock and syncs that change to my S3 backend, too. That would require a restic key to have both write access and some kind of delete access to the S3 backend, at a minimum, which is not ideal for ransomware protection.

Many S3 backends including B2 have some kind of bucket-level object lock which prevent the modification/deletion of objects within that bucket for, say, their first 30 days. But this doesn't save us from ransomware either, because restic's own synced lock gets that 30 day protection too.

I can see why one would think you can't get around this without restic itself having something to say about it. Gemini tells me that S3 proper does let you set delete permissions at a granular enough level that you can tell it to only allow delete on locks/, with something like

        # possible hallucination.
        # someone good at s3 please verify
        {
            "Version": "2012-10-17",
            "Statement": [{
                "Sid": "AllowDeleteLocksOnly",
                "Effect": "Allow",
                "Action": "s3:DeleteObject",
                "Resource": "arn:aws:s3:::backup-bucket/locks/*"
            }]
        }
But I have not tested this myself, and this isn't necessarily true across S3-compatible providers. I don't know how to get this level of granularity in Backblaze, for example, and that's unfortunate because B2 is about a quarter the cost of S3 for hot storage.

The cleanest solution would probably be to have some way for restic to handle locks locally, so that locks never need to hit the S3 backend in the first place. I imagine restic's developers are already aware of that, so this seems likely to be a much harder problem to solve than it first appears. Another option may be to use a dedicated, restic-aware provider like BorgBase. It sounds like they handle their own disks, so they probably already have some kind of workaround in place for this. Of course, as others have mentioned, you may not get as many nines out of BB as you would out of one of the more established general-purpose providers.

P.S.: Thank you both immensely for this debate, it's helped me advance the state of my own understanding a little further.


Fair enough. Personally I use an SSH target with a ZFS file system and its own automatic snapshots. The restic snapshots don’t directly correspond to the ZFS snapshots, but I can live with that.


Can't enjoy that without DK's wobbly affine texture mapping :-)

