> … support persistent client-side storage using available JS APIs. As of this writing, that includes the Origin-Private FileSystem (OPFS) [0]
This is really good news, and exactly what the OPFS was designed for.
You may have seen "Absurd SQL" [1], which was a proof of concept for building a SQLite Virtual FS backend using IndexedDB. It provided fully ACID-compliant transactions. Incredible work, but a hack at best.
The OPFS supersedes all that and makes it possible to have proper consistent and resilient transactions.
WASM SQLite with the OPFS is the future of offline first web app development. The concept of a single codebase web/mobile/desktop app with proper offline storage is here.
What I really want to see next is an eventually consistent sync system between browser and server (or truly distributed with WebRTC). The SQLite Session Extension [2] potentially has the building blocks needed for such a system.
That is awesome! I’m going to have to play with that.
I previously experimented with combining Yjs (CRDT toolkit) with Pouch/CouchDB to create an eventually consistent db, but decided that CouchDB was the wrong backend.
It looks like you have built exactly what I wanted to do if I had had time.
I happened across AntidoteDB just the other day, and my take-away was that it was still under development but not really ready for production use. Curious about your opinion / experience with it?
Yup, it’s a very rigorous system developed over ~10 years with solid testing, benchmarking and formal proofs. However it has some known issues, such as behaviour under high load and not currently handling node failure within a DC.
It’s not production ready and neither are we (we’re in developer preview mode, which is like a public alpha [0]).
There are also other aspects on the Antidote roadmap, such as efficiently materialising consistent secondary indexes, that are ongoing challenges but aren't so relevant to how we're using it as a replication layer.
We are working, alongside others, with the Antidote developers to help fix these issues and generally improve reliability / engineer correct behaviour under load. (Professor Annette Bieniusa, who leads the development, is our Chief Architect). We have a fork at electric-sql/vaxine [1].
We are also taking advantage of some simplifications which mitigate the known issues. We have an #antidote channel in the ElectricSQL discord [2] if you’d be interested in chatting more.
I’m just going to put the idea here… C# and Blazor, with LINQ to SQL, in the browser… automatically synced to the backend server… that just sounds like magic, but magic I would throw money at.
Yup, open source (https://github.com/electric-sql) + managed replication-as-service. Some of our service code (eg: infra, control plane) is proprietary.
Download size depends on the driver (we support different SQLite drivers for different environments). The web is quite heavy as it loads the SQL.js WASM (as a separate file, it’s not bundled). I want to say ~350-400kb total but I need to check and it depends a bit on how you build/bundle and serve it.
This looks good. I'm currently exploring Firebase for offline support in an Ionic/Angular app, but am much more familiar with SQL. Is Capacitor support on the cards?
Hey, yup, definitely. It should be very similar to the Cordova integration.
You can also follow the generic driver integration instructions [0]. Basically need to implement an adapter interface and compose the various utilities.
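To give a rough feel for what that involves — this is a hypothetical sketch, not the actual interface from the docs [0], and it assumes a @capacitor-community/sqlite-style connection exposing query()/run():

```js
// Hypothetical adapter shape -- NOT ElectricSQL's real interface; see the
// generic driver docs [0] for the actual one. The idea: wrap whatever SQLite
// driver you have behind a small uniform surface.
class CapacitorAdapter {
  constructor(conn) {
    this.conn = conn; // an opened SQLite connection from the underlying driver
  }
  async query(sql, params = []) {
    const res = await this.conn.query(sql, params);
    return res.values ?? []; // rows as plain objects
  }
  async run(sql, params = []) {
    await this.conn.run(sql, params);
  }
}
```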
Happy to help with this — shout on Discord [1] if you’d be interested in collaborating on it.
> WASM SQLite with the OPFS is the future of offline first web app development.
is that basically a desktop app that doesn't need to be installed? You still download it using a browser, but then you don't install it on your operating system; you just run it in your browser?
I find it kinda sad that browsers get to remove Web SQL on shaky grounds (which was just sqlite available to JS) yet here we are, back to that same spot, but with less performance and more complexity
> I find it kinda sad that browsers get to remove Web SQL on shaky grounds (which was just sqlite available to JS)
i recently had to work with WebSQL in the context of comparing it to sqlite's new WASM support (of which i'm the developer). WebSQL, quite frankly, is a toy. Its execution model is far too limited and excludes all sorts of functionality, not the least of which is that it's impossible to delete a WebSQL db.
> yet here we are, back to that same spot, but with less performance and more complexity
Wrong. We have benchmarked the two in apples-to-apples comparisons, taking into account WebSQL's limitations. The two approaches are roughly equivalent, with both winning out under certain loads, despite WebSQL being implemented in native code.
Who is "we", and where can I see those benchmarks?
Also, even at similar performance you still need to download a bunch of extra stuff just to run the WASM version. My whole webpage weighs less than that...
We is the sqlite team. i'm the "JS/WASM Guy" for the project.
> and where I can see those benchmarks ?
You can't currently because we don't have them in a user-consumable form. We've done a tremendous amount of benchmarking during the development because All The Speed was one of our design goals. However, all such records were in transient spreadsheets intended for one-shot note-taking use, not publication.
Once our documentation effort settles down, and responding to user feedback from the initial announcement slows down, i hope to implement a benchmarking application similar to:
Until then, however, you'll simply have to (A) take my word for it, (B) try it out yourself, or (C) none of the above, as you wish. Edit: or (D): we have a WASM port of sqlite's standard benchmarking tool, known as "speedtest1", in the sqlite source tree, but getting it up and running requires reading a good deal of documentation:
That tool is how we've benchmarked it so far, with the exception of comparing it to WebSQL, which required a custom application which is also in that directory (batch-runner.*). batch-runner, however, is in no way user friendly.
This is somewhat more complex for the developers of websites but dramatically less complex for the developers of web browsers, and that is where we really should be caring: the complexity in the browser both increases the attack surface and all but guarantees there will only ever be a single good implementation of the browser. Both standards and security-critical systems should be easy to build and easy to reason about.
I'm a bit confused as to why it supports OPFS, but not the much more widely supported regular FileSystem API. As far as I understand, this is also capable of reading and writing to files directly, and also allows seeking. All the user needs to do is select a file in a file picker first.
Is there a required feature that only OPFS provides, other than file locking (which seems like a pretty non-essential feature tbh)?
> And will this SQLite file be portable (import and export to somewhere else)?
sqlite's storage format is independent of the underlying device. OPFS is "just another backend" and the only tricky part of implementing it was that OPFS's API is largely asynchronous and sqlite3 requires synchronous I/O. The db that's stored in OPFS is exactly what would be stored on your hard drive if you were operating outside of the browser.
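The one synchronous escape hatch is FileSystemSyncAccessHandle (the "sync handles" feature) — that's what the OPFS backend builds on. Roughly like this (worker-only; a sketch of the raw primitives, not the VFS itself):

```js
// Must run inside a dedicated Worker: createSyncAccessHandle() is not
// available on the main thread.
const root = await navigator.storage.getDirectory();                 // OPFS root
const fileHandle = await root.getFileHandle('db.sqlite3', { create: true });
const access = await fileHandle.createSyncAccessHandle();

const page = new Uint8Array(4096);
const bytesRead = access.read(page, { at: 0 }); // synchronous read at offset 0
access.write(page, { at: 0 });                  // synchronous write
access.flush();                                 // persist to storage
access.close();
```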
Querying already-cached data without reimplementing the query logic in JavaScript. How… I mean, it is so obvious to have an RDBMS local to any non-basic app, why are you even asking? No offense, I just don't understand how you can create webapps and not be bothered by the lack of fundamental things.

Some e-stores and sites literally hit the server on each tick in a sidebar, when it could be a fragmentarily replicated local database with a background periodic or live sync. Imagine adjusting a filter and not waiting for a reload and rescroll in a "web 2.0" app, when the entire dataset is comparable in size to a bundle.min.js and a roundtrip to it is 1000x+ slower than the query itself. E.g. a page where I buy t-shirts is barely a megabyte of JSON (sans images) for all items, but it spends probably half an hour awaiting responses while I'm shopping around. An app like that shouldn't even make requests, apart from downloading new jpegs and one narrow replication stream. Even for open-in-new-tab, because resources are cached and the dataset is in sync.
If your app lets users sort, search, group, join or transform non-trivial data, then SQL requires far fewer lines of code than procedural JavaScript, especially if you don't know ahead of time all the different ways in which users might want to query the data. It's definitely not useful for every kind of app, though.
Think of something like personal finance, accounting, payroll, project management, CRM, zettelkasten style note taking, health and fitness tracking, etc, where users may want flexible analysis and reporting tools.
Much of this is now done on the backend, which makes sense if the data is accessed by many users in a transactional manner. But if it's a single user app or something used by tiny teams then cutting latency down to zero could greatly improve usability while still benefitting from everything a real query engine has to offer.
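To make that concrete with a toy example (sql.js-style `db.exec`; the `expenses` table is an assumption): one ad-hoc report is a single declarative statement, where the procedural version needs a custom key, accumulator and sort — and a rewrite for every new way users want to slice the data.

```js
// Monthly spend per category, largest first -- one query, no bespoke code.
const report = db.exec(`
  SELECT category,
         strftime('%Y-%m', spent_at) AS month,
         SUM(amount)                 AS total
  FROM expenses
  GROUP BY category, month
  ORDER BY total DESC
`);
```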
It has indeed been a pretty rare case for web apps, but it's a not completely insignificant niche for desktop and mobile apps. Some of those (e.g many Electron apps) could become pure web apps if more of their enabling technologies were available as WASM modules. I think this SQLite initiative as well as the underlying file system access API is part of that trend.
And how does it help in the case of frontend development? Half of those points don't apply to web dev.
Do you really need joins?
Do you really need atomic transactions?
Do you really need SQL?
Most of the APIs used by web apps operate on objects or arrays of objects. Why should we add the complexity of SQL if we can already store those objects in a plain array?
It helps because apps are often used to view or edit files. By “files” I mean things like word doc files, photoshop files, Apple Keynote files. Many people like working with files and saving them locally, rather than on a server. With SQLite running in the browser, a file that is an SQLite database can be very easily read into memory, worked with, then saved back to disk. The page I linked explains this in detail. There are some examples of companies taking this approach at https://www.sqlite.org/appfileformat.html. Hope this extra detail is helpful.
Edit: Additionally, I encourage you to experiment with this. If you haven't already, you may be surprised at how efficient/compact SQLite files are. They're significantly more compact than JSON or XML documents and can include binary files like images. It's much simpler to use SQLite as a file format than to create a custom binary file format.
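For the curious, the whole load-edit-save round trip with sql.js is only a few lines (a sketch; the file input and the `meta` table are assumptions for illustration):

```js
import initSqlJs from 'sql.js';

const SQL = await initSqlJs();

// Read a user-chosen SQLite file into an in-memory database.
const file = document.querySelector('input[type=file]').files[0];
const db = new SQL.Database(new Uint8Array(await file.arrayBuffer()));

// Work with it like any other database (the meta table is hypothetical).
db.run("UPDATE meta SET value = datetime('now') WHERE key = 'last_opened'");

// Serialize back to bytes and offer the result as a download.
const bytes = db.export();
const url = URL.createObjectURL(new Blob([bytes], { type: 'application/x-sqlite3' }));
```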
You haven't answered the questions.
I don't see a reason to use RDBMS in a browser, if it's not, for example, an SQL tutorial site.
You gave an example of a t-shirt store. A plain array of JS objects is enough to store items in a browser. Why do you need a full-blown RDBMS for it?
Sorry, but that's just your opinion. Do you have proof of how a t-shirt store would benefit from putting an RDBMS on the client side (how much space does it take, BTW) instead of using a simple list of objects?
Serving my htmx app as a WASM app, so user interaction happens on their computer. Then only the DB changes will have to be synced with the backend, saving a lot of traffic / server capacity. This was already possible, but then the user's changes would be lost if they didn't sync in time. If I understand things correctly this is the missing piece: persistence. So if the user goes offline and makes changes, those will persist so they can be synced when the user goes online again.
(All of this is as far as I understand things, I could be completely wrong.)
I haven't looked into those, but I'm happily using SQLite already, and this would mean I won't have to swap it for something else to achieve the same thing.
> If I understand things correctly this is the missing piece: persistence. So if the user goes offline and makes changes, those will persist so they can be synced when the user goes online again.
1. You can persist data with localStorage/IndexedDB, can't you?
2. There is nothing in the article about syncing. There is no out-of-the-box solution, as I can see.
> You can persist data with localStorage/IndexedDB, can't you?
I guess so, but given the situation that I'd already be using WASM SQLite in the client's browser, then I'd have to implement that part myself, or use something like Absurd SQL. I'd rather use the implementation made by the creator of SQLite.
> There is nothing in the article about syncing.
Correct, that's something I'll have to do myself. I don't see any issues there though, I'd just need to verify the user is authorised to make those changes, which in my specific use case seems pretty trivial.
They aren't working to standardise SQLite, they are working to standardise a low-level, block-based file system API that SQLite and other database engines can use. That's much more exciting than "standardising SQLite" for the web.
The Origin Private File System API is going to provide the opportunity for any DB engine to be used in offline-first PWAs.
There is no “one size fits all” database engine, that was proved by both WebSQL and IndexedDB. The OPFS in combination with WASM is the correct solution to in browser DBs.
WebSQL was killed off because the direction it was heading was certain to cause major compatibility problems down the road, and no one was willing to do the work that would be required to avert that (and no one was sold that it would be worth it even at that).
> There is no “one size fits all” database engine, that was proved by both WebSQL and IndexedDB.
It was not proven at all. WebSQL was deprecated based on "we don't want to standardize on a single project". IndexedDB happened because they wanted to standardize on a single API (lmao), but it was just too inept.
There is no one size fits all but WebSQL fit A LOT of use cases.
Low level storage that works for DBs is interesting idea but that also means you need to ship additional megabytes of code with every app.
Although I agree that modern web apps suffer from size bloat already and an additional 300kb doesn't hurt in that case, one of the most popular use cases for a client-side DB is offline capability for a small-footprint web app. You know, for when your connection is poor or even absent. There, 300kb hurts really badly.
> Also, couldn't the implementation of that storage spec. be shipped in the standard JS web APIs (i.e. by the browser)? Why would it be in every app?
Actually, that is exactly what happened with WebSQL and IndexedDB. WebSQL got deprecated, they were not able to integrate SQLite, and they created IndexedDB instead, which is hated by many developers. Just an example:
https://news.ycombinator.com/item?id=27511941
This is good, but it can never be as good as Web SQL could have been, because it can't be a truly shared library. Like all WASM modules, it has to be downloaded and JIT-compiled separately for every site that uses it. Not for technical reasons, but privacy reasons, so it's basically unfixable: https://developer.chrome.com/en/blog/http-cache-partitioning...
Maybe if half the sites on the web start using it, browsers can finally be convinced that it would be OK to add SQLite to the base platform.
I disagree, this is much better than WebSQL. WebSQL would have been tied to one specific version of SQLite, and developers would have to work to the lowest supported version. There would have been no extension mechanism.
WASM SQLite is the correct solution. It's extendable by the developer using it; they can use SQLite extension modules, and build their own.
But almost more so, it proves the idea of a WASM DB engine backed by a low-level block FS API. We will see other DB engines use this architecture. DuckDB has already done it. I'm sure MongoDB's Realm and Couchbase Mobile will do the same soon too.
We are in for an exciting time in the next few years.
> WebSQL would have been tied to one specific version of SQLite, and developers would have to work to the lowest supported version. There would have been no extension mechanism.
Not only that, but WebSQL disables a lot of mundane functionality, like the instr() SQL function, and it's impossible to VACUUM because WebSQL requires explicit transactions and VACUUM cannot run in a transaction. i had the "pleasure" of having to work with WebSQL over the past couple of months for purposes of comparing its performance to the new sqlite features, and IMO, WebSQL is, as the kids say today, _weak sauce_. Its API is far too limited.
> WebSQL would have been tied to one specific version of SQLite, and developers would have to work to the lowest supported version. There would have been no extension mechanism.
Why not? Web SQL would not be different from WebGL/GLSL or even JavaScript itself in that respect. Developers use feature detection and work to the lowest supported version. APIs and languages can evolve, but in (mostly) backwards-compatible ways. Extension and versioning mechanisms can be made. That's how the web works, and Web SQL could have worked that way too.
More broadly, you could apply arguments like this against everything in the entire web platform. Maybe everything should be a WASM module that developers could choose themselves! Image loading, video decoding, font rendering, DOM, JS engine, why not? It's actually a beautiful vision and I'd be all for it if not for cache partitioning. Every site would have to re-download and re-JIT an entire browser engine before it could do anything.
The base platform needs to include a diverse set of commonly used features so that apps don't have to download the world, and on a list of ubiquitous libraries SQLite is right up there with other libraries backing the web platform, like zlib.
WebGL/GLSL is a low level API the equivalent of the OPFS api. Standardising WebSQL would be like browsers standardising on Three.JS rather than WebGL/GLSL.
WebGL/GLSL give the developer a low level api to the graphics hardware.
Video decoding apis talk to the hardware video decoding hardware.
OPFS gives developers a low level API to the persistent file system / HDD / SSD.
All of these APIs are going to be very reliable fingerprinting opportunities, especially in combination. Keep that in mind when you think it’s going to be great for your web browser to also be a full featured application runtime.
Don't hold back the one open platform there is (the web) with fingerprinting concerns when its competition (mobile platforms) require an identity to use them.
I’m supposing that the problem is that the web browser is the universal platform for all applications. There’s a benefit for information consumption (web pages) being separate from functionality rich and infinitely fingerprintable “native capabilities”
I've been using the web since 1994. It's always been an application platform and anyone who says otherwise is misremembering.
> I’m supposing that the problem is that the web browser is the universal platform for all applications
This is a feature not a problem.
> There’s a benefit for information consumption (web pages) being separate from functionality rich and infinitely fingerprintable “native capabilities”
What exactly is that benefit supposed to be? If you want a read-only publishing platform, put PDFs on a FTP site.
The web platform has tons of very high level stuff in it, much higher level than SQLite, and more is being added. Even specifically on the topic of 3D, they're trying to add a <model> tag and standard 3D model file format to HTML right now. It doesn't make sense to reject Web SQL, which is much more foundational, on the grounds of being too high level. Not now, but even less so back when the decision was made in 2010, when the web itself was all higher level and the lower level APIs you mentioned didn't even exist.
This is wrong. It moved to the immersive web CG but that doesn't mean it changed form to a JS API. It's just a venue change. The WebXR repo you linked is a different project entirely. The new location for <model> is here: https://immersive-web.github.io/model-element/
The issue with flash and applets was not that you had to download stuff.
It was that the security was non-existent (leading to the embedded runtime routinely crashing your browser), the interactivity was divergent from its surroundings, and the accessibility model was MIA.
Also downloading 500K over 56k and over fiber or 5G are rather different propositions.
I'd still rather have basic SQL (which is enough for most) just be there, for the apps where including an SQL engine would be vastly more code than the app itself.
If we had a block API and SQL (it could be just a concrete version running from WASM itself, so as not to bloat the browser) it could save a lot on app size. The app could then look at the browser version and decide to use that, or to download the newest one to run.
> It has to be downloaded and JIT compiled separately for every site that uses it
I don't really see the problem with that. Looking at the sql.js demo[1] the WASM binary is 610K (305K compressed/transferred), and it seems to run pretty fast even on my slow laptop.
> Maybe if half the sites on the web start using it
Most websites have no reason to use it; simple key/value localStorage is enough for many sites or apps that need some sort of storage. It's kind of a niche thing. Many regular desktop applications have no need for SQLite, either.
It's a ubiquitous need for any app that has a dynamic collection view. You can't take all sites, see that 99% of them are static documents developed as degenerate apps, and then conclude that "most apps" have no reason to use it. It's like saying that most cats are in jpegs, so cat food is kind of a niche thing for a cat owner.
> I don't really see the problem with that. Looking at the sql.js demo[1] the WASM binary is 610K (305K compressed/transferred), and it seems to run pretty fast even on my slow laptop.
> and you then visited a second website, and it also included the same resource, then the resource would be loaded from the shared cache rather than being downloaded from the internet a second time. Cookies set by these resources would also be shared.
The privacy problem is not a result of the shared dependency, it's a result of the shared cookies.
Yes, if you share the execution space between multiple programs running the same lib, there's a privacy concern.
> The privacy problem is not a result of the shared dependency, it's a result of the shared cookies.
No, it's not the result of the shared cookies. You just ignored all of the timing attacks and fingerprinting which a shared cache allows, as those articles discuss.
The cookie thing is honestly completely irrelevant to the topic of privacy, if you understand how the shared cache used to work. If you loaded the same library from separate CDNs on different websites, the shared cache didn't come into play at all. The library was loaded twice anyways. There was no chance for cookies from different CDNs to accidentally cross the streams. Browsers weren't attempting to heuristically determine if you were trying to load the same asset from different hosts.
The shared cache only came into play if you loaded the same asset from the same third-party CDN on multiple websites. The host serving an asset is the one responsible for setting the cookies, and sharing the cookies is helpful in that case, since the same CDN is the host serving the asset to the browser for both websites. These aren't cookies controlled by separate websites, they're cookies supplied by the CDN, and they should be the same for both requests anyways, outside of maybe some remote possibility of a theoretical attack involving a malicious CDN intentionally setting some weird request-dependent cookies, but I can't see how that would even do anything harmful anyways. So, the cookies get shared because the browser is serving a cached response for the same asset from the same host to both sites, which makes sense.
So, cookies aren't the problem here, and they're not the reason the shared cache was partitioned. If you want browsers to undo that in any form, you would have to solve the actual privacy problems here.
Fingerprinting is possible if you know what's in the cache before you use it.
If I make a request for a given dependency, and am allowed to so much as time how long it takes to resolve, I can detect if it was already there and there's an information leak.
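The probe really is that trivial — a sketch (the URL is a stand-in for any popular CDN asset):

```js
// With a shared cache, a near-instant fetch would mean "this user has been
// on some other site that loads this asset": one bit of history per probe.
async function probablyCached(url) {
  const t0 = performance.now();
  await fetch(url, { mode: 'no-cors', cache: 'force-cache' });
  return performance.now() - t0 < 10; // ms threshold; tune per network
}
```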
Sure. At some point though, a malicious site's gonna end up making some very weird requests - obviously polling the cache.
You could specify the dependency set in a static context and limit the ability of a site to measure how long dependency resolution takes.
Is there still an information leak? Yes.
Do I think it should stand in the way of a functional internet? Not really.
Google, Apple, and Mozilla apparently all failed to find a solution to this problem, even though lots of people wanted there to be a solution. Certainly, any browser that shipped this feature in a privacy-respecting manner would have bragging rights for awhile, so there is an incentive. Given that, I don't think I'm exaggerating the difficulty of the problem. This problem is also very similar to the challenges brought by Spectre-class vulnerabilities, and that one has been enormously costly for the whole industry.
I think it is completely fair to say that privacy-respecting shared caches are not simple.
Some solutions can be imagined, but they come with weird trade-offs or they do nothing for majority of the web. A new manifest format like you describe falls into the latter, since it would only apply to new websites using the new feature, and that’s without digging into the other problems it would pose.
In practice, people often visit the same websites repeatedly; they aren't constantly visiting new websites only once. A partitioned cache works just fine for the normal scenario. It's slightly less efficient for the first day someone uses their browser, but then things are honestly fine after that. It's unfortunate that we can't eke out the last tiny bit of performance for this, but I think the difference would be hard to measure in practice.
In my opinion, if websites would more commonly use brotli, that would make a far larger difference in efficiency than returning to a shared cache, and if browsers could have a standardized means of downloading only the bytes that changed in an asset like a javascript library instead of downloading the new version from scratch, that would make a much bigger difference too.
> I think it is completely fair to say that privacy-respecting shared caches are not simple.
Couldn't they make an exception for some domains and create a registry of really popular or fundamental links to packages like jquery et al? I have read on this topic before, but it sounded like all-or-nothing, no-shades-of-grey maximalism. Fine, partition those memes from imgur CDNs, but let common libraries with known hashes be shared at least. The potential attack is based on leaving a cdn-pixel and dl-time-testing it on other sites. But there is no big data in who has the 10 most popular releases of wasm-sqlite, dayjs or bootstrap.min.css in their cache. These could be warmed up from literally anywhere, or even synced in the background by an idle browser thread.
I feel like Google Chrome shipped an experiment at one point that was going to include some of the most popular libraries with the browser, so they would be equally cached for all Chrome users, for all sites. I'm having trouble finding any announcements about this, so maybe I dreamed this up.
Interesting — so basically an overhead until it’s fully native? I was reading the Chrome initiative as a pathway to native. Would that mean it can’t be WASM?
That tweet is not implying that it will ever be native. Quite the opposite; Web SQL is native today and they are going to remove it since it was rejected by other browsers.
Hi! I'm the main maintainer of sql.js. Is there somewhere I can get in touch with you? Would you be interested in publishing this as a new major version of sql.js itself ?
The thing I don't get here is that literally every browser already uses Sqlite3 for a million things, why are we not just finally acknowledging that, yes, this very specific library is a universal good, just like gzip compression, or the jpeg image format, and here's the JS API for directly working with it?
Why do we need a separate WASM version of something that's already built right into every single browser? Why is there no all-browser-vendor-blessed `Sqlite3` global?
W3C tried this but it failed due to lack of independent implementations[1]:
> This document was on the W3C Recommendation track but specification work has stopped. The specification reached an impasse: all interested implementors have used the same SQL backend (Sqlite), but we need multiple independent implementations to proceed along a standardisation path.
See, that's the insanity: why would we need an independent implementation? sqlite has authoritative libraries, that everyone uses, and sqlite3 has been stable for literally over a decade. For all that time, the world could have benefited from "just bloody expose it" as opposed to "but there's no API-equivalent alternative implementations!".
"SQLite is open-source, meaning that you can make as many copies of it as you want and do whatever you want with those copies, without limitation. But SQLite is not open-contribution."
That's an exceptional bit of editing-based lying you're doing here. Here's the end of the paragraph whose start you quoted:
> the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public domain.
So your statement that SQLite "does NOT accept contributions" is plainly wrong. SQLite is not "open contribution" because it requires an affidavit in order to ensure the project remains public domain.
>> > the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public domain.
> So your statement that SQLite "does NOT accept contributions" is plainly wrong. SQLite is not "open contribution" because it requires an affidavit in order to ensure the project remains public domain.
You're both right. It does require an affidavit, but the project does not simply accept affidavits from drive-by folks. It's effectively "by invitation." (Citation/disclaimer: i'm a member of the sqlite project team.)
Given that your browser history, localStorage, etc. are all stored in sqlite databases: no, why would that even remotely be a problem? Clearly sqlite works for everyone's needs, just expose it via an API. Nothing about that would even remotely require changing sqlite itself.
We all know that 99.99% sites that need a client-side database will use $subj or its derivative regardless of these kicks and screams. Because there is no decent 300kb RDBMS which could be ported into a browser. All that was achieved by that decision was to put the whole industry on hold for more than ten years.
The irony is that nobody wanted to make a new production-quality implementation because, well, why bother if SQLite already exists and is that good? It's an issue that's always going to arise when some problem has one solution that is already adopted as a de facto standard by the industry - which is exactly the kind of stuff that ought to be stable and battle-tested enough to build formal standards on.
They tried that at one point, I think it was called WebSQL? But it was decided that they didn’t want to make the quirks of sqlite3 a web standard since it’d be hard to do a clean room implementation.
Except I don't want WebSQL, I literally want Sqlite3 as a JS API, backed by the unadulterated authoritative Sqlite3 library. I don't wonder why there's no good in-browser database solution that uses SQL, I literally don't get why sqlite3, only, and specifically, still isn't usable by JS in 2022.
> I literally don't get why sqlite3, only, and specifically, still isn't usable by JS in 2022.
It is now :). We provide 4 separate APIs, from the lowest-level 1-to-1 C-via-WASM bindings to one quite similar to sql.js, plus all of the low-level pieces necessary to create your own.
Historical note: those of us within the sqlite project had never paid any attention to the unfortunately-named WebAssembly (which has been dubbed "neither web nor assembly") because it didn't seem to hold any relevance for us. It wasn't until April-ish 2022 that we took a look at it, and have been working on "officially" bringing it to the browser world ever since. Even so, folks have been producing WASM bindings of it for a number of years now, and the relative ease of doing so (sqlite3.c requires no changes whatsoever to compile to WASM) is possibly (i opine) why we didn't get requests from users to do this sooner.
Sidebar: someone is going to ask "if it's so easy, why did you need 6 months to get it out the door?" Fair question: we had some very specific technical goals which i'm not at liberty to elaborate on, and only a single developer to put on it. Plus, WASM was a completely new tech for us, so there was much learning and experimentation involved.
No, WASM applications aren't a built-in universal web API, they are a huge payload that needs to be sent anew (thanks to per-site caching in modern browsers), solving a problem that shouldn't exist in the first place.
It's great that you went "what if we compile to WASM?" but this is something that's not on you: this is something that should have been on browser vendors to just _expose_ because every browser ships with sqlite baked in already. Every user already has it, they just can't use it.
> ... but this is something that's not on you: this is something that should have been on browser vendors to just _expose_ because every browser ships with sqlite baked in already.
Richard (the sqlite lead) has often described his definition of "freedom" as "being able to take care of yourself," a philosophy he lives and breathes with his software. The sqlite project providing wasm builds for folks, and the materials they need to make their own custom builds, is directly in line with that. Depending on every browser vendor to play along in sync is, quite frankly, a lost cause.
> ... they are a huge payload ...
If you truly believe that 500kb of content is "huge" nowadays, i challenge you to go watch (via the browser dev tools) how much stuff your favorite websites are serving. (HN, of course, is a spartan exception to the rule.) Hit any given news or social media site and you'll get at least a meg of content, most of which is constantly replaced (so caching it is of little use). Hit IMDB and you'll get 2+mb (compressed). Hit GDrive and you'll get 4.75mb compressed (nearly 15mb uncompressed!). i just hit www.google.com, which has a long history of minimalism, and it transferred 881kb (2.21mb uncompressed).
By comparison, 500kb-1mb (uncompressed) isn't even worthy of honorable mention.
> Depending on every browser vendor to play along in sync is, quite frankly, a lost cause.
Very true, but that is not mutually exclusive with not understanding why it's a lost cause. There's literally three browser vendors, and of those, really only one of them needs to go "fuck it, sqlite now has an API" and the other two kind of don't have a lot of choice but to follow suit. It's a Chrome world right now.
We could have easily done this, we chose not to. Why? (and you probably don't have the answer to that. I don't know if anyone does)
> There's literally three browser vendors, and of those, really only one of them needs to go "fuck it, sqlite now has an API" and the other two kind of don't have a lot of choice but to follow suit. It's a Chrome world right now.
In the general case, admittedly true. Note, however, that the Chrome folks have no say-so in the shape of this particular sqlite API, with the tiny exception that they are not willing to create a 100% synchronous API for OPFS, which complicates the sqlite-side development of that one small part of the API considerably [1]. The shape/flavor/whatever we'd like to call it of the sqlite JS/WASM APIs is left 100% to the project members' discretion. If Chrome dies tomorrow, this API is still a thing, it will just have severely reduced client-side persistence options until the next hypothetical vendor creates one, at which point we'd latch onto that one.
[1] = When we consider that async APIs exist largely to account for network latency, and OPFS _has no latency_ beyond the local storage device, OPFS's interface being async is, IMHO, a design flaw stemming from the misguided assumption of "it's on the web ergo it must be async" (and i've told them that in emails and meetings). Based on our testing metrics, the performance of the OPFS sqlite layer would increase by at least 30% if it had a 100% synchronous API to work with, as it currently wastes anywhere from the low-30s to mid-40s percentage points of its time waiting at cross-thread communication boundaries.
As a little pro-tip, the demos appear to be serving a gzip compressed WASM file, but offering a brotli compressed version for clients that say they accept it would be even better:
665K sqlite3.wasm
306K sqlite3.wasm.gz (-54% compared to uncompressed)
266K sqlite3.wasm.br (-13% compared to gz, -60% compared to uncompressed)
Oooff, 266 KB is a hefty price to pay. I really wish WebSQL would have won out over IndexedDB, because I would have loved to have been able to build libraries on top of that instead of the headache IndexedDB is.
"Hefty"... for an asset that can easily be cached long term, so it is effectively a one time penalty for a given website, and many web apps will load images larger than this. Certainly, just about any marketing/product page will load more bytes than this in the form of images, let alone videos.
So, "hefty" seems like a bit of an exaggeration to me. All other things equal, lighter is always better, but 266KB for all the functionality SQLite offers isn't that bad, IMO.
On the topic of caching, since SQLite is (intentionally or not) going to be setting an example with their docs and demos[0], I would suggest that SQLite should demonstrate the industry best practices with regards to caching as well. Static assets like WASM and JavaScript should be served with a very high cache duration, and the name of the asset should include a hash of the asset. This way, the site maintainer can update the HTML to reference the new SQLite bundles by the new hash whenever they upload new versions, and browsers will immediately request the new versions, but they will otherwise instantly load the cached version after the first visit whenever there isn't a new version.
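For anyone wanting to apply this, the gist is a content hash in the filename plus a long, immutable cache lifetime — e.g. with a hypothetical Express static server (the framework and paths are my assumptions, not anything the SQLite project uses):

```js
// Hypothetical setup: assets are named like sqlite3-3f9c2a.wasm, so they can
// be cached forever; a new version gets a new name, referenced from the HTML.
const express = require('express');
const app = express();

app.use('/assets', express.static('dist/assets', {
  immutable: true,   // adds "immutable" to the Cache-Control header
  maxAge: '365d',    // one-year cache lifetime
}));

app.listen(8080);
```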
> On the topic of caching, since SQLite is (intentionally or not) going to be setting an example with their docs and demos[0], I would suggest that SQLite should demonstrate the industry best practices with regards to caching as well. Static assets like WASM and JavaScript should be served with a very high cache duration
That can't work on the documentation site because the wasm file is being served from the Fossil SCM and fossil cannot cache resources which are fetched by name because a new version may be checked in at any given moment (they're currently updated very often: https://sqlite.org/wasm/finfo/jswasm/sqlite3.wasm). Fossil can hypothetically cache resources which are fetched by hash, but that's not a feasible way for us to maintain the documentation on that site.
> ... although these topics could still potentially be mentioned in the documentation.
They're not relevant to the wasm/js deliverables. They're an implementation detail of the server which happens to be serving the related documentation. If you have specific verbiage which you feel would improve the docs in this regard, please feel free to send it. i'm likely to miss most responses in HN, so please use email (stephan at sqlite dot org) or the sqlite forum: https://sqlite.org/forum.
> so it is effectively a one time penalty for a given website
which shows that I already acknowledged the demise of cross-origin caching.
Each website the user visits that uses this library has to pay a 266KB penalty once, which isn't that bad. (Well, once, unless the site developer decides to upgrade the library, then the penalty applies once more, but that's expected behavior.)
i think you'll find that many high-end websites often download more than a megabyte of CSS and JS code. imdb.com home page: 2.13mb transferred for 6.odd mb data. drive.google.com: 4.75mb transferred for nearly 15mb of data(!!!).
266kb doesn't even register nowadays for app-centric pages.
> As a little pro-tip, the demos appear to be serving a gzip compressed WASM file, but offering a brotli compressed version for clients that say they accept it would be even better:
That is interesting, but the repository you're looking at is a Fossil SCM repo and the wasm file is being served directly from it. Fossil does the compression transparently and doesn't support brotli. (Edit: i'll investigate whether brotli compression would be interesting for us to add to fossil.)
However, the sqlite project will not be hosting shared copies of the wasm/js files for use by arbitrary 3rd-party sites. It's up to each site to host their own (and even build their own if they need to customize the build), so they're free to use whatever compression they like.
The downside of that is that application developers would have to use the minimum supported version of SQLite.
If every site embeds their own version, the application developers can choose their own version that runs the same across every browser. Or even compile it themselves if they want, so they can use extensions. The download size is unfortunate but remember - unlike javascript, wasm bytecode loads almost instantly. Having a 250k wasm module is more like having a 250k image on your site than 250k of javascript. (Though it might still affect time-to-interactive depending on how the site is built)
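A big part of that fast load path is streaming compilation — the engine compiles while bytes are still arriving. The generic pattern looks like this (a sketch with a placeholder URL; in practice the sqlite3 JS glue handles loading and supplies the imports for you):

```js
// Compilation overlaps the download; by the time the last byte lands,
// most of the compile work is already done.
const imports = {}; // real modules (like sqlite3.wasm) need their env
                    // functions here -- the accompanying JS glue provides them
const { instance } = await WebAssembly.instantiateStreaming(
  fetch('/assets/module.wasm'), // placeholder URL
  imports
);
```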
> Sqlite updates are solid and as far as I know, do not break your code.
Almost every major SQLite release has a few minor releases after it which fix bugs and regressions, including things like queries returning the wrong result.
Browser engines have regressions too. SQLite has a better test suite than any browser. It's not going to be a major source of regressions compared to the rest of the platform.
That's correct. SQLite has an impressive test suite that makes it very stable. That's why it's used in millions of Android devices or will be used in Cloudflare's edge with D1, for example.
> That's why it's used in millions of Android devices
s/millions/billions/g. It is widely believed to be either the single most widely-deployed piece of software in the world, or maybe second behind zlib (we have no way of being sure).
This would provide more ways to fingerprint users. It's also not possible to guarantee no sqlite update will ever change observable behavior, even if it happens to mostly be true now.
Once you start giving special privileges to specific wasm binaries, the floodgates are open for a hundred different vendors to demand the same special treatment because 10% of websites use their .wasm blob. And then you've created a barrier to entry for competition. Nobody wins in the long run except a couple of your friends.
> This would provide more ways to fingerprint users. It's also not possible to guarantee no sqlite update will ever change observable behavior, even if it happens to mostly be true now.
Browser behaviour changes (and even breaks) way more often than SQLite does.
> Sqlite updates are solid and as far as I know, do not break your code.
That applies to the C code. The wasm code is still in beta, and won't see a public beta release until 3.40 is released in November. See the notes about API stability here: <https://sqlite.org/wasm/doc/trunk/api-index.md>
"Folks have been building sqlite3 for WASM since at least as far back as 2019, but this subproject is the first effort "officially" associated with the SQLite project, created with the goal of making WASM builds of the library first-class members of the family of supported SQLite deliverables."
Awesome news. SQLite is already a defacto cross-platform file format. An official WASM release will be widely welcomed and extremely useful.
SQLite was one of the first large C code bases to be ported to the web. Alon Zakai (@kripken, the creator of emscripten), made the first commit to sql.js at the start of 2012 [0]. I got involved two years later, and have been maintaining sql.js since. A lot was added since 2012, but the initial core api with 3 functions 'open', 'exec', and 'close' still works today.
> Folks have been building sqlite3 for WASM since as far back as 2012
for *the web*, not for WASM. SQLite was initially compiled to simple javascript, then to asm.js when it appeared, then to WASM when it replaced asm.js :)
The sql.js project is older than the idea of WASM itself. And I like to think it contributed to showing the potential of compiling native code to the browser, and in the creation of the WASM standard.
Docs have been updated. Thank you for the feedback! Edit: for future feedback (from anyone reading this) i can be reached via stephan at sqlite org. i'm unlikely to catch most doc feedback posted to HN.
> Browser support is currently limited to Chrome and Edge. Firefox and Safari don't support this yet.
Within the sqlite project we're fairly convinced that FF and Safari will catch up as soon as their larger customers start targeting Chromium-based browsers simply for the OPFS support. My estimate is mid- to late- 2023 at the latest. My (mis?)understanding is that Safari has most of this support but not the latest changes from Google (namely "sync handles"), and sqlite needs those latest features in order to use OPFS.
> Very excited to try this once support is added to Firefox and Safari!
That's up to the browser vendors, but it seems very likely that they'll jump on board once large apps start making use of OPFS (independently of whether or not those apps use sqlite). If their impls are API-compatible with Chrome's (which Google is certainly pushing for), sqlite will "just work". It is likely that the OPFS APIs will be tweaked somewhat in the mean time (e.g. changes in the locking-related support are under discussion), and sqlite's support will/would need to be adjusted accordingly, but "one of these days" it will "just work" across the 3 major browsers.
I love SQLite but this is a terrible take, in my opinion. The Web is not meant to be a property of Google. Browsers don't have to implement the Chrome's API in a compatible way. There is a standardization process in place that should govern the progress of the Web Platform. The "do whatever Chrome does, then everybody else will be forced to follow suit" is doing it wrong.
> The Web is not meant to be a property of Google.
i'm not sure where you read in that that Google is owning this whole thing. They're just the first out the gate with working OPFS, so that's the implementation we worked against to get this up and running with sqlite. Google has worked with the other browser vendors from the start on the API.
Very nice. I built sqlite3 for WASI/a-Shell to use on my iPad (https://github.com/rcarmo/wasi-sqlite) and it still has a few issues; I hope this will help (although right now the biggest issue seems to be that the REPL has some sort of memory leak when run inside WASI).
I think there may be a space for super-large multi-GB files served from static storage being accessible from SQLite as well. Another one would be this full-text search over a 43GB SQLite database of Wikipedia: http://static.wiki/ . Hearing there's official support for this is awesome, and I hope they also might add some nice stuff for those sticking with POSIX/Emscripten as well; maybe some optimizations to access patterns or other stuff like indexing or split DBs?
There are some amazing things for SQLite in the browser, especially if you're looking for ways to host queryable data for cheap. The example I have below costs $0.42 a month to host. For 28GB. Insane.
I have a hacked up POC experimental version of the datasette-lite Python UI which runs in Emscripten to be able to look at multi-GB databases at https://github.com/simonw/datasette-lite/pull/49. It uses a hacked up chunk'd lazyFile implementation from emscripten and others to grab pages from Cloudflare R2.
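The underlying trick is plain HTTP range requests: SQLite asks the VFS for page N, and the shim fetches just those bytes. A minimal sketch (the URL and page size are assumptions):

```js
// Fetch a single database page from a statically hosted SQLite file.
async function fetchPage(url, pageNumber, pageSize = 4096) {
  const start = pageNumber * pageSize;
  const res = await fetch(url, {
    headers: { Range: `bytes=${start}-${start + pageSize - 1}` },
  });
  if (res.status !== 206) throw new Error('host must support range requests');
  return new Uint8Array(await res.arrayBuffer());
}
```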
That demo link is so impressive: I just watched my browser DevTools and it loaded 23MB of data in order to run that query against that 28GB database.
I really need to dig in and figure out how Datasette (and Datasette Lite) can work better for this. I think this issue might help - the ability to turn off row counts entirely: https://github.com/simonw/datasette/issues/1818
(I noticed that trying to access the table directly seems to suck in a LOT of data, presumably because it's trying to calculate a count across the whole table?)
> I think there may be a space for super-large multi-GB files served from static storage being accessible from SQlite as well. Another one would be this full-text search of a 43GB SQLite database of Wikipedia's full text search
To the best of my knowledge, OPFS's current quota is about 256mb (per origin).
The browsers currently have no way of viewing/managing the content of OPFS, so it's sort of a storage black hole. i can't even tell you how many sqlite3 database files have been orphaned in my local OPFS since development of the new sqlite wasm support started, with no reasonable way of me being able to find them without using the OPFS-specific JS API to fish through the storage (which i haven't yet been willing to do).
OPFS storage cannot sensibly be exposed at the system filesystem level (i.e. browseable with a file manager) because that would open not only security holes (the ability to "side load" data into any origin) but also huge file locking headaches, especially on platforms which use virus scanners.
> Feels like WASM is hitting an inflection point this year. After nearly a decade of hearing "someday", that day is finally here.
At the start of this development effort (April 2022, IIRC) we (in the sqlite project) had only ever heard of wasm but hadn't paid any attention to it. Within just a few days of starting this project, we were fully convinced that the combination of sqlite and OPFS will be one of the Next Big Things for web app development.
As the project's "JS/WASM Guy" i'm exceedingly excited to see what people do with this and what improvements we'll make based on user feedback. (It's long been my experience that the most interesting feature suggestions come from users.)
Compound interest. Tools that make making other tools easier. Once programming languages started having a WASM compile target, a great many things became easier.
> I wonder what the catalyst finally is/was to cause such an acceleration
IMO, all that was missing was a truly compelling use case. The combination of the ubiquitous sqlite with non-trivially-sized persistent storage gives us that use case.
> very cool to look at the unpacked module, in all its wasm glory
Very nice :). Be aware that the copy of sqlite3.wasm on the sqlite.org/wasm site gets updated fairly frequently, so it may change at any moment. That particular site is only for documentation purposes, not for hosting the canonical wasm file release (which is pending along with the release of sqlite 3.40), and its sqlite3.wasm/js copies get updated hand in hand with development of the canonical copies.
Good to know! FWIW, this is a very beta demo version of a product I'm working on (Modsurfer). It's made to capture snapshot/moment-in-time details about a module (maybe you're running untrusted code in your system and want to track what you're actually running). However, in the future, it will likely be able to automatically import and track modules from cloud storage or URLs.
I am preparing for this! I am building Kikko [0], which supports all of the platforms that have SQLite, and allows building reactive interfaces on top of React/Vue/AngularJS (WIP)/whatever.
Here is the code example for react: https://kikko-doc.netlify.app/react-integration/installation.
Some technical details: with the special sql`INSERT INTO ${sql.table`some-table`}` syntax, it tracks which tables changes happened in, and notifies other tabs to refetch the subscribed tables.
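The cross-tab part boils down to roughly this (a simplified sketch using BroadcastChannel, not the actual Kikko code; `refetchSubscribers` is an assumed helper):

```js
// After a write, broadcast which tables changed; other tabs re-run the
// queries subscribed to those tables.
const channel = new BroadcastChannel('db-changes');

function notifyWrite(tables) {
  channel.postMessage({ tables }); // e.g. ['some-table']
}

channel.onmessage = ({ data }) => {
  for (const table of data.tables) refetchSubscribers(table);
};
```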
The project is in alpha, and already has support for absurd-sql and wa-sqlite on the web, plus Expo, Tauri, Electron, Ionic, and React Native.
I am super excited to add support for the official SQLite WASM implementation.
Btw, I still see one problem with the official SQLite solution — it requires COOP to use (https://web.dev/i18n/en/coop-coep/). It is a pretty annoying restriction. For example, you will not be able to use an iframe to other resources (like embedding a YouTube video). COOP is needed due to SharedArrayBuffer usage.
wa-sqlite can work without COOP, but it is under the GPL, unfortunately, which doesn't allow using it in a private codebase without open-sourcing the project.
> Btw, I still see one problem with the official SQLite solution — it requires COOP to use
Only for the OPFS support, because any solution involving hiding an asynchronous API (OPFS) behind a synchronous one (sqlite) requires it because the "await" keyword in JS is "viral": it can only be used from global-scope code or from functions which are themselves flagged as "async", and flagging a function as "async" changes its return semantics in ways which are fundamentally incompatible with C code. Any solution to that problem in JS requires SharedArrayBuffer and the Atomics APIs. WASMFS's OPFS implementation has the same limitation, despite being implemented in native code.
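For the curious, the shape of that SharedArrayBuffer/Atomics relay is roughly this (a heavily simplified sketch, not the actual VFS code; `asyncWorker` is an assumed handle to the second worker):

```js
// Shared flag both workers can see. Atomics.wait() may only block inside
// workers, which is one reason all of this lives off the main thread.
const sab = new SharedArrayBuffer(4);
const flag = new Int32Array(sab);

// Worker A (running the synchronous C/WASM code): block until B answers.
function syncRead() {
  Atomics.store(flag, 0, 0);
  asyncWorker.postMessage({ op: 'read', sab }); // ask B to do the async I/O
  Atomics.wait(flag, 0, 0); // sleeps this thread; needs SharedArrayBuffer,
                            // which is why COOP/COEP headers are required
}

// Worker B: perform the async OPFS operation, then wake A:
//   await accessHandle.read(...); Atomics.store(flag, 0, 1); Atomics.notify(flag, 0);
```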
(That said: there is some talk among those who know better than i of modifying WASM to be able to accept Promises as return values, and returning the resolved promise value to C. i don't think it's possible because promise _rejection_ cannot pass through C code, but folks who know better than i seem to think it can be done.)
If you don't need OPFS support you don't need COOP/COEP. (Citation: i'm the sqlite js/wasm developer and have had this discussion with Google's OPFS folks.)
Yep, yep, I understand the reasons. That's why where I work we decided to use Asyncify, because COOP is very strict for us. It has a penalty on cold start, but once many blocks are cached it works perfectly, without event loop interruptions.
You're not the only one :/. We suspect that the COOP/COEP requirement for OPFS will be outright untenable for many folks, in particular those using hosting which does not offer them the option of modifying outbound headers (in fact, we had to extend sqlite.org's http server, althttpd, to add that capability for this purpose!). We've brought this pain point to the OPFS folks' attention but there is currently simply no way around it.
I entirely missed the era where SQLite was a candidate for W3C inclusion. It was always a mental bookmark in case I found a need to have an app with any substantial offline mode, but by the time I found a perfect candidate (cataloging plants in situ), it was already deprecated, and a key-value store is no replacement for SQLite.
Key-value store would be OK, we (developers) can handle that. And in fact, we do it already. But IndexedDB is broken by design. But I agree, handling relational data is what we want. And you can easily emulate non-relational data with it, if you need to.
> Key-value store would be OK, we (developers) can handle that. And in fact, we do it already. ... But I agree, handling relational data is what we want.
You can now have both: a relational database in your key-value localStorage:
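If I'm reading the docs right, usage looks something like this (a sketch based on the `oo1` API; treat the names and the module path as subject to change):

```js
// A SQLite db whose pages live in window.localStorage, via the kvvfs.
import sqlite3InitModule from './jswasm/sqlite3.mjs';

const sqlite3 = await sqlite3InitModule();
const db = new sqlite3.oo1.JsStorageDb('local'); // or 'session'

db.exec('CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT)');
db.exec({ sql: 'INSERT INTO notes(body) VALUES (?)', bind: ['hello, localStorage'] });
```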
> localStorage suffers from issues like size limits and performance
Absolutely, but _it works_ and nothing trumps working code ;). A slow and space-limited database is better than none at all!
We implemented that functionality primarily to make peoples' eyes bug out ;), but also so that folks who don't yet have OPFS can have some form of persistent sqlite databases.
The lack of guarantees about longevity put it in the realm of local cache, not buffering. If I can't count on the data to stick around then its use case is predominantly for consumption of data, not creation. We ought to pay more attention to how often we make systems whose sole purpose is for consumption.
Sqlite is amazing! At yazz.com we have built a tool to build UI apps which use Sqlite as a back end, and the apps then compile to an HTML page which includes the Sqlite engine and the data, which works offline too. +100 for sqlite, which makes this possible!
It depends on your use case but, for example, if your db is not super large you could download it and use it locally instead of having to connect to an app server for every query or transaction. You could synchronize with the app server in the background. The app's speed and responsiveness would be way better.
Or maybe you could use a sqlite replication tool in the browser for near-realtime changes from the master node's db.
And for single-user apps developed with tools like Electron, having your db engine in the browser makes the app faster and life easier for the dev.
Node as app server and db backend for a single-user app is inefficient, slow and resource-hogging.
Using Sqlite you could get rid of Node and therefore reduce memory usage, speed up app loading and execution, have a single code base in the browser that makes debugging far easier, reduce code complexity and app size, reduce CPU cycles, energy usage and CO2 emissions, save precious time, etc... :-)
Offline-capable apps on phones are "a thing". There are plenty of times when I have poor (or nonexistent) cellular coverage and would love web-based apps to work.
Currently, it is a browser app intended to run on mobile devices. Not exactly a PWA, but very similar. Later it will probably be packaged with Capacitor.
And in all seriousness, WASM is exactly the way this should be done. It doesn't tie a db engine to a specific standardised version. It's far more important to develop low-level block storage APIs like the OPFS that WASM SQLite is using.
> I guess they are trying to do it slowly so they don't break stuff, but they're already doing things like removing FTS support.
FWIW, i can say with some authority that they have been waiting on this new sqlite/wasm stuff so that they can offer a replacement to WebSQL for their folks who still use it. Despite Google's long history of pulling the plug on products (G+, how you are missed!), they're not willing to outright drop WebSQL without a viable replacement.
It would not be difficult to write a drop-in workalike WebSQL wrapper on top of the new wasm/js APIs, and we (in the sqlite project) may even get around to doing so as time allows. The difficulty would be making it "quirk for quirk compatible," as such quirks are not documented anywhere. OTOH, as far as we're aware, the only extant WebSQL implementation is the one in Chrome, so that's the only one which counts for purposes of quirks.
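To illustrate the shape (not the quirks) of such a wrapper, here is a heavily simplified synchronous sketch over the sqlite3 oo1 API; real WebSQL transactions are asynchronous and callback-driven, so an actual shim would be considerably more involved:

    // Hypothetical WebSQL-style facade over sqlite3.oo1; mimics only the shape.
    function openDatabaseShim(sqlite3: any, name: string) {
      const db = new sqlite3.oo1.DB(name);
      return {
        transaction(fn: (tx: { executeSql(sql: string, args?: unknown[]): unknown }) => void) {
          db.exec("BEGIN");
          try {
            fn({
              executeSql: (sql, args = []) =>
                db.exec({ sql, bind: args, resultRows: [], returnValue: "resultRows" }),
            });
            db.exec("COMMIT");
          } catch (err) {
            db.exec("ROLLBACK"); // WebSQL rolls back on any unhandled statement error
            throw err;
          }
        },
      };
    }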
There are some tucked away in spreadsheets, but we don't currently have any benchmarks in publishable form. In broad strokes, i can assure you (as the one who performed the benchmarks) that the new sqlite wasm is competitive with WebSQL in terms of performance, with either one winning out in certain benchmarks. Given that WebSQL is implemented in native code and sqlite in wasmified C, we're quite happy with those results.
Note, however, that benchmarks are very browser-dependent. Firefox's wasm engine, for example, is significantly slower than Chrome's, but it's also more consistent. If you run a given test 10 times in FF, the difference in runtimes across them will be small (maybe 10%), whereas there will be a +/-50% difference in runtimes for the same tests in Chrome. In Chrome, if the dev tools are open when wasm is running, wasm's performance can (for unknown reasons) drop by as much as half or more even if the wasm code produces no output.
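(If you want to eyeball that variance yourself, a trivial harness is enough; the workload callback is whatever you're measuring:)

    // Run a workload N times and report the min/max spread across runs.
    async function spread(run: () => Promise<void>, n = 10): Promise<void> {
      const times: number[] = [];
      for (let i = 0; i < n; i++) {
        const t0 = performance.now();
        await run();
        times.push(performance.now() - t0);
      }
      const min = Math.min(...times);
      const max = Math.max(...times);
      console.log(`min=${min.toFixed(1)}ms max=${max.toFixed(1)}ms ` +
                  `spread=${(100 * (max - min) / min).toFixed(0)}%`);
    }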
> I hope the author(s) of IndexedDB would finally understand how stupid the idea of IDB was
While not one of said authors... You mind giving me an actual rundown on issues you've got with it instead of just slinging mud?
I've used IDB in a few projects, and apart from:
- The god awful callback crap born of being pre-promises.
- The lack of partial indexes (e.g. indexing a subset of documents based on some parameters)
- The iOS WebKit team's continued habit of adding weird, app-breaking bugs every other release to an API/system that should be stable?
It's been fairly useful. Hell, it supports storing JS types like CryptoKey or Blob/File with no real issues [1]
[1] Sans iOS WebKit, which a version or 2 back would just eat Blobs at random.
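To illustrate that structured-clone support, a minimal sketch (complete with the callback style complained about above):

    // Storing a Blob and a CryptoKey directly in IndexedDB: the structured-clone
    // algorithm handles both, with no manual serialization.
    const open = indexedDB.open("demo", 1);
    open.onupgradeneeded = () => open.result.createObjectStore("stuff");
    open.onsuccess = async () => {
      const db = open.result;
      const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        false, // even non-extractable keys can be stored
        ["encrypt", "decrypt"],
      );
      const tx = db.transaction("stuff", "readwrite");
      tx.objectStore("stuff").put(new Blob(["hello"]), "blob");
      tx.objectStore("stuff").put(key, "key");
      tx.oncomplete = () => console.log("stored");
    };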
You already gave a great overview of the issues. But inventing something totally different from what people already know, without any reason, doesn't make sense. Look: YugabyteDB, CockroachDB, and some others use a Postgres-compatible protocol/dialect, so you can use most standard tooling/query-builders/ORMs to speak to these databases. Why not use the Redis or Cassandra protocol? Damn, leaving WebSQL as-is would have been a much better choice. And please, don't say that "the Web is different".
> But inventing something totally different from what people already know, without any reason, doesn't make sense.
See, this argument falls apart because:
It's a key-value store... I'm fairly certain people are aware of that concept, or can grok it fairly quickly once introduced to it.
Also, as to "without any reason"? It handles JS data natively. You're not having to immediately throw an ORM or a bunch of custom code at marshalling data into and out of the db. People build more than just todo lists. There was also some intent of it being "low level", something others could build niceties like a rich query syntax atop.
Now, that didn't really happen much, so the debate about it being "terrible" should maybe focus there: on what got missed for that goal.
No, it isn't. While kv is easy on its own, the IDB API was never the right answer to the demand. This is exactly why we (devs) are so hot on persistent client-side storage: we want to just use something like SQLite (or WebSQL, or Postgres) and forget about the IndexedDB nightmare. I'm pretty sure we will see a huge boost in libraries and tooling once things eventually stabilise.
Much of that log is just issues with one implementation, which would indicate that the Safari team is either underfunded (time/talent/money-wise) or incompetent, to break a working/"feature complete" system multiple times. Not really an issue with the spec as written.
As for transactions... yeah, I'll admit they suck hard; it can be a pain to remember their quirks, as well as dealing with the callbacks.
Quotas and private mode... those same issues would apply to WebSQL, and WebSQL didn't have a way to delete the database to clear it up.