Poor Man's Back End-as-a-Service (BaaS), Similar to Firebase/Supabase/Pocketbase
dcu | 205 points | 4 days ago | github.com
CharlesW|4 days ago
Pocketbase is already the poor man's BaaS, and is minimalist compared to the two others mentioned.
> Data stored in human-readable CSVs
The choice to not use a database when two near-perfect tiny candidates exist, and furthermore to choose the notorious CSV format for storing data, is absolutely mystifying. One can use their Wasm builds if platform-specific binaries offend.
SOLAR_FIELDS|4 days ago
I just deployed a Wasm-built SQLite with FTS5 enabled and it's insane what it's capable of. It's basically Elasticsearch entirely on the client. It's not quite as robust as ES, but it's like 80% of the way there, and, I repeat, it runs client-side on your phone or any other SQLite-supported device.
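For the curious, a minimal sketch of the shape of this (following wa-sqlite's README API; the docs table and query are invented, and it assumes a build with FTS5 compiled in):

    import SQLiteESMFactory from 'wa-sqlite/dist/wa-sqlite.mjs';
    import * as SQLite from 'wa-sqlite';

    // Open an in-browser database and build a full-text index with FTS5.
    const mod = await SQLiteESMFactory();
    const sqlite3 = SQLite.Factory(mod);
    const db = await sqlite3.open_v2('docs');

    await sqlite3.exec(db, `CREATE VIRTUAL TABLE docs USING fts5(title, body)`);
    await sqlite3.exec(db, `INSERT INTO docs VALUES ('intro', 'hello full text search')`);

    // Ranked full-text query, entirely client-side.
    await sqlite3.exec(
      db,
      `SELECT title FROM docs WHERE docs MATCH 'search' ORDER BY rank`,
      (row) => console.log(row),
    );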
tommica|4 days ago
How large a bundle is it? And are we talking Wikipedia stuffed into SQLite, or only a few hundred pages of internal docs?
SOLAR_FIELDS|3 days ago
I'm using wa-sqlite, and the standalone Wasm package is 714 KB. The use case is a few hundred pages of internal docs.
bsaul|3 days ago
How large is the Wasm package for an empty SQLite, together with the client library to access it?
SOLAR_FIELDS|3 days ago
The standalone Wasm package is 714 KB.
loeber|4 days ago
In 2025, pretending that a CSV can be a reasonable alternative to a database because it is "smaller" is just wild. Totally unconscionable.
r0fl|4 days ago
I use CSV files to run multiple sites with 40,000+ pages each, close to 1 million pages total.
Super fast.
Can't hack me, because those CSV files are stored elsewhere and only pulled in at build time.
Free, ultra fast, no latency. Every alternative I've tried is slower and eventually costs money.
CSV files stored on GitHub/Vercel/Netlify/Cloudflare Pages can scale to millions of rows for free if divided properly.
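(Roughly the build-time pattern being described, as a sketch; 'pages.csv' and its columns are invented, and the naive split assumes fields never contain commas or quotes:)

    import { mkdirSync, readFileSync, writeFileSync } from 'node:fs';

    // Read the whole CSV once at build time and emit one static page per row.
    const [, ...rows] = readFileSync('pages.csv', 'utf8').trim().split('\n');
    mkdirSync('dist', { recursive: true });
    for (const row of rows) {
      const [slug, title, body] = row.split(','); // naive: no quoted fields
      writeFileSync(`dist/${slug}.html`, `<h1>${title}</h1>\n<p>${body}</p>`);
    }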
vineyardmike|4 days ago
Can't argue with what works, but...
All these benefits also apply to SQLite, but SQLite is also typed, indexed, and works with tons of tools and libraries.
It can even be stored as a static file on the various serving options mentioned above. Even better, it can be fetched page-by-page: the client downloads just the index, then queries for the specific chunks of the database it needs, further reducing the bandwidth required to serve it.
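The trick underneath is plain HTTP range requests against the static file; a rough sketch (the URL is made up; in practice tools like phiresky's sql.js-httpvfs wrap this in a SQLite VFS):

    // Fetch a single 4 KB page of a remote SQLite file instead of the whole database.
    const resp = await fetch('https://example.com/data.sqlite', {
      headers: { Range: 'bytes=0-4095' }, // page 1: file header + schema root
    });
    const page = new Uint8Array(await resp.arrayBuffer());
    console.log(page.byteLength); // 4096 if the host honors Range requests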
deepsun|4 days ago
Just to be pedantic, SQLite is not really typed. I'd call them type-hints, like in Python. Their arguments for it (bad ones, IMHO): https://www.sqlite.org/flextypegood.html
newlisp|4 days ago
fer|3 days ago
> Just to be pedantic, SQLite is not really typed. I'd call them type-hints, like in Python
Someone already chimed in for SQLite, so it's worth mentioning that Python is strongly typed, just dynamically so. Everyone has seen a TypeError; you'll get one even without hints. It becomes particularly obvious when using Cython: the dynamic part is gone and you have to type your stuff manually. Type hints are indeed just hints, for your IDE, or mypy, or you (for clarity).
It's a bit like saying C++ isn't typed because you can use "auto".
edmundsauto|4 days ago
Don't you think it's better in this dimension than CSV, though? It seems to me strictly better than the other option discussed.
dragonwriter|4 days ago
A sibling comment posted a blind link whose contents address this, but (for the benefit of people who aren't likely to follow such links): recent versions of SQLite support STRICT tables, which are rigidly typed, if you have a need for that instead of the default loose type-affinity system.
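A quick illustration of the difference (a sketch using Node's better-sqlite3; both tables are invented):

    import Database from 'better-sqlite3';

    const db = new Database(':memory:');

    // Default type affinity: a TEXT value happily lands in an INTEGER column.
    db.exec(`CREATE TABLE loose (n INTEGER)`);
    db.prepare(`INSERT INTO loose VALUES (?)`).run('not a number'); // accepted

    // STRICT tables (SQLite 3.37+): the same insert is now a datatype error.
    db.exec(`CREATE TABLE rigid (n INTEGER) STRICT`);
    try {
      db.prepare(`INSERT INTO rigid VALUES (?)`).run('not a number');
    } catch (err) {
      console.log('rejected:', (err as Error).message);
    }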
ezekiel68|4 days ago
TBH this is why I've never messed with SQLite.
If I want to bother with a SQL database, I at least want the benefit of the physical layer compressing data down to the declared types, and PostgreSQL scales down surprisingly well to lower-resource (by 2025 standards) environments.
MobiusHorizons|4 days ago
How exactly do you anticipate using Postgres on the client? Or are you ignoring the problem statement and saying it's better to run a backend?
throwaway032023|4 days ago
felipeccastro|3 days ago
Not sure why this was downvoted, but I'd be very interested in learning how well pglite compares to SQLite (pros and cons of each, maturity, etc.).
MobiusHorizons|3 days ago
Interesting. TIL
Drew_|4 days ago
It sounds like you use CSVs to build static websites, not store or update any dynamic data. That's not even remotely comparable.
tansan|3 days ago
The way you write this makes it sound like your websites pull from the CSV on every request. In fact, you're building static websites and uploading them to a CDN. I don't think SQL is needed here, and CSV makes life way easier, but you could swap the CSV for any other storage format in this strategy and it would work the same.
AbraKdabra|4 days ago
So... SQLite with fewer features, basically.
Spivak|4 days ago
Every file format is SQLite with fewer features.
deepsun|4 days ago
Unless it's Apache Arrow or Parquet.
Moto7451|4 days ago
For both fun and profit I’ve used the Parquet extension for SQLite to have the “Yes” answer to the question of “SQLite or Parquet?”
akudha|4 days ago
Is this a static website? If yes, what do you use to build it?
ncruces|4 days ago
In 2020 Tailscale used a JSON file.
https://tailscale.com/blog/an-unlikely-database-migration
CharlesW|4 days ago
If you continue reading, you'll see that they were forced to ditch JSON for a proper key-value database.
ncruces|4 days ago
I know. Now see how far JSON got them.
So why wouldn't you just use a text format to persist a personal website a handful of people might use?
I created one of the SQLite drivers, but why would you bring in a dependency that might not be available in a decade unless you really need it? (SQLite will be there in 2035, but maybe not the current Go drivers)
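For scale: the whole "text file as database" approach is about a page of code, something like this sketch (file name invented; write-temp-then-rename is the usual crash-safety trick):

    import { readFileSync, renameSync, writeFileSync } from 'node:fs';

    type State = Record<string, unknown>;

    // Load the entire state into memory; a missing file is just an empty state.
    function load(path: string): State {
      try { return JSON.parse(readFileSync(path, 'utf8')); }
      catch { return {}; }
    }

    // Persist by writing a temp file and renaming it over the old one,
    // so a crash mid-write never leaves a torn file behind.
    function save(path: string, state: State): void {
      writeFileSync(path + '.tmp', JSON.stringify(state, null, 2));
      renameSync(path + '.tmp', path);
    }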
deepsun|4 days ago
It's self-restriction, like driving a car without using the rear-view mirror, or always using "while" loops instead of "for" loops.
It's great for an extra challenge. Or for writing good literature.
moritzwarhier|4 days ago
You didn't really answer the dependency argument though.
Until the data for a static website becomes large enough to make JSON parsing a bottleneck, where is the problem?
I know it's not generally suitable for quick access to arbitrary pieces of data without parsing the whole file.
But if you use it at build time anyway (that's how I read the argument), it's pretty likely you'll never hit the bottleneck that makes you require a DBMS. Your site is static; you don't need to serve any database requests.
There is also huge overhead in powering static websites with a full-blown DBMS, in the worst case serving predictable requests without caching.
So many websites are powered by MySQL while essentially being static... and there are often unnecessarily complicated layers of caching to allow that.
But I'm not arguing against these layers per se (the end result is the same); it's just that, if your ecosystem is already built on JSON as data storage, pulling in another dependency might be completely unneeded.
Not the same as restricting syntax within one programming language.
jpc0|3 days ago
> SQLite will be there in 2035, but maybe not the current Go drivers
Go binaries are statically linked; unless you expect the ELF/PE format to not exist in 2035, your binary will still run just the same.
And if not, well, there will be an SQLite driver in 2035, and other than 5 lines of init code I don't interact with the SQLite driver but rather the SQL abstraction in Go.
And if it's such an issue, then directly target the SQLite C API, which will also still be there in 2035.
nickjj|3 days ago
If you ignore size as a benefit, CSV files still have a lot of value:
- It's plain text
- It's super easy to diff
- It's a natural fit for saving it in a git repo
- It's searchable using standard tools (grep, etc.)
- It's easy to backup and restore
- You don't need to worry about it getting corrupted
- There are many tools designed to read it and produce all sorts of output
A few months ago I wrote my own CLI-driven, CSV-based income and expense tracker at
https://github.com/nickjj/plutus. It helps me do quarterly taxes in a few minutes, and I can get an in-depth look at my finances on demand with one command. My computer, built in 2014, can parse 100,000 CSV rows in 560 ms, which is already 10x more items than I really have. I also spent close to zero effort trying to optimize the script for speed. It's a zero-dependency, single-file Python script using "human idiomatic" code.
Overall I'm very pleased with the decision to use a single CSV file instead of a database.
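(That kind of parse really is a few lines; a rough TypeScript equivalent of the idea, with 'ledger.csv' and its columns invented, and no quoted fields assumed:)

    import { readFileSync } from 'node:fs';

    // Sum one column of a headered CSV.
    const rows = readFileSync('ledger.csv', 'utf8').trim().split('\n').slice(1);
    let total = 0;
    for (const row of rows) {
      const amount = row.split(',')[2]; // third column holds the amount
      total += Number(amount);
    }
    console.log(`${rows.length} rows, total ${total.toFixed(2)}`);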
nullishdomain|4 days ago
Not so sure about this. At scale, sure, but how many apps are out there that perform basic CRUD for a few thousand records max and don't need the various benefits and guarantees a DB provides?
makeitdouble|4 days ago
I assume the parent's despair is about CSV's pile of traps and parsing quirks.
I'd also be hard-pressed to find any real reason to choose CSV over JSONL, for instance. Parsing is fast and utterly standard, it's predictable, and if your data is really simple, JSONL files will be super simple too.
At its simplest, the difference between a CSV line and a JSON array is 4 characters.
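Concretely (the same record in both encodings; JSONL gets a standard parser and unambiguous quoting for free):

    const csvLine = 'Ada,Lovelace,1815';
    const jsonLine = '["Ada","Lovelace",1815]';

    // No hand-rolled quoting rules, even when a field contains a comma:
    const fields = JSON.parse(jsonLine); // ["Ada", "Lovelace", 1815]
    console.log(csvLine.split(',').length === fields.length); // true here, but only because no field contains a comma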
sieabahlpark|4 days ago
[dead]
skeeter2020|4 days ago
I agree on both your main points. It's not like PB has a bunch of cruft and fat to trim; the BDFL of the project is very aggressive in constraining scope, which is one of the reasons it's so good. The CSV thing feels like an academic exercise. The fact that I can't open an SQLite database in my text editor is a thin argument, considering many DB tools are lighter weight than text editors, and "reading" a database (in any format) is seldom the goal. You probably want to query it, so the first thing you'd need to do here is import the CSV into DuckDB and write a bunch of queries with "WHERE active=1".
waldrews|4 days ago
An append-only text CSV format that you can concatenate to from a script, edit or query in a spreadsheet, and that's still fast because of the in-memory pointer cache seems like a big win (assuming you're in the target scaling category).
bravesoul2|4 days ago
For local use cases this could be useful. Run locally. Do your thing. Edit with Excel or tool of choice.
Also one less dependency.
zffr|4 days ago
What's the other candidate besides Pocketbase?
CharlesW|4 days ago
Apologies to anyone who found this unclear — the two near-perfect tiny candidate databases are SQLite and DuckDB.
zffr|4 days ago
My understanding is that SQLite is OLTP and DuckDB is OLAP. DuckDB is column-based, so it's not a great fit for a traditional backend DB.
fmbb|3 days ago
The data files are not human-readable though, right?
jonny_eh|4 days ago
Firebase, Supabase, Pocketbase
TekMol|4 days ago
Do we still need a back-end, now that Chrome supports the File System Access API on both desktop and mobile?
I have started writing web apps that simply store the user data as a file, and I am very pleased with this approach.
It works perfectly for Desktop and Android.
iOS does not allow real Chrome everywhere (only in Europe, I think), so I also offer to store the data in the "origin private file system", which all browsers support. Fortunately it has the same API, so implementing it was no additional work. The only downside is that it cannot put files in a user-selected directory, so in that mode I support backup via an old-fashioned download link.
This way, users do not have to put their data into the cloud. It all stays on their own device.
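A minimal sketch of the approach (names invented; in TypeScript the File System Access types come from @types/wicg-file-system-access):

    // Ask the user where to keep their data, then write it as a plain local file.
    async function saveUserData(data: object): Promise<void> {
      const handle = await window.showSaveFilePicker({
        suggestedName: 'mydata.json',
        types: [{ description: 'JSON', accept: { 'application/json': ['.json'] } }],
      });
      const writable = await handle.createWritable();
      await writable.write(JSON.stringify(data));
      await writable.close();
    }

    // The origin private file system fallback uses the same handle API.
    async function saveToOpfs(data: object): Promise<void> {
      const root = await navigator.storage.getDirectory();
      const handle = await root.getFileHandle('mydata.json', { create: true });
      const writable = await handle.createWritable();
      await writable.write(JSON.stringify(data));
      await writable.close();
    }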
gavmor|4 days ago
What about those of us who use multiple devices, or multiple browsers? I've been using local storage for years and it's definitely hampering adoption, especially for multiplayer.
TekMol|4 days ago
One approach might be to save the file to a shared drive like Google Drive?
gavmor|4 days ago
Not sure I trust Dropbox to merge data. What happens when I want to migrate my data structures to a new schema?
TekMol|3 days ago
As far as I know, Dropbox does not merge data.
I never tried it, but from the descriptions I have read, Dropbox detects conflicting file saves (if you save on two devices while they are offline) and stores them as "conflicting copies". So the user can handle the conflict.
As a developer, you would handle this in the application: "Hey, you are trying to save your data, but the data on disk is newer than when you loaded it... Here are the differences and your options for how to merge."
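Roughly, the application-level check would look like this (a sketch; the actual merge UI is left out):

    let loadedAt = 0; // lastModified of the file when we read it

    async function loadData(handle: FileSystemFileHandle): Promise<string> {
      const file = await handle.getFile();
      loadedAt = file.lastModified;
      return file.text();
    }

    async function saveData(handle: FileSystemFileHandle, data: string): Promise<void> {
      // Another device (or the sync client) may have written the file since we loaded it.
      const current = await handle.getFile();
      if (current.lastModified > loadedAt) {
        throw new Error('data on disk is newer than when you loaded it; merge first');
      }
      const writable = await handle.createWritable();
      await writable.write(data);
      await writable.close();
      loadedAt = (await handle.getFile()).lastModified;
    }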
gavmor|3 days ago
> Hey, you are trying to save your data but the data on disk is newer than when you loaded it
You're suggesting an actual API-facilitated data sync via Dropbox? Sure, but at that point why? Unless the data also needs to be read by 3rd party applications, might as well host it myself.
TekMol|3 days ago
Sure. You brought up Dropbox. Not me.
gavmor|3 days ago
s/Dropbox/Google Drive/
keysdev|4 days ago
Syncthing, please. Please try to use the open-source alternative whenever possible; even if they are not as developed as the closed-source ones, it works out better for the public.
thomascountz|4 days ago
TIL! I enjoy building cloudless apps and have been relying on localStorage for persistence, with an "export" button. This is exactly what I've been looking for.
A lot of what I've read about local-first apps included solving for data syncing for collaborative features. I had no idea it could be this simple if all you need is local persistence.
gausswho|4 days ago
At least on the Android front, I'd prefer that the app let me write to my own storage target. The reason is that I already use Syncthing-Fork to monitor a parent sync directory of stuff (Obsidian, OpenTracks, etc.) and send it to my backup system. In effect it lets apps be local-first, potentially even without network access, while still giving me automatic backups.
If there were something that formalized this a little more, developers could even make their apps in a... Bring Your Own Network... kinda way. Maybe there's already someone doing this?
TekMol|4 days ago
What do you mean by "storage target"?
Since the File Access API lets web apps simply use the file system, I guess you could just write the file to a shared drive.
gausswho|4 days ago
I may have misunderstood. Does that mean that with this API, on both desktop and phone, I can point to an arbitrary drive on the system without restriction? If so, it does indeed do what I'd like.
TekMol|3 days ago
That's basically how the File System Access API works, yes.
Technically probably not completely "without restriction". But for all practical purposes, it works just fine for me.
tehbeard|3 days ago
The closest I'm aware of is https://remotestorage.io/ ; the protocol has been relatively static for a while, but it's not widely adopted.
nico|4 days ago
> Do we still need a back-end, now that Chrome supports the File System Access API on both desktop and mobile?
Could this allow accessing a local DB as well? I would love something that lets an app talk directly to a DB that lives locally on my devices, with the DB syncing across them. That way I still get my data on all of my devices, but it only ever stays on my devices.
Of course this would be relatively straightforward to do with native applications, but it would be great to be able to do it with web applications that run on the browser
Btw, does Chrome sync local storage across devices when logged in?
stephenlf|4 days ago
> Could this allow accessing a local db as well?
Like IndexedDB? It's a browser API for an internal key-value database.
> Btw, does Chrome sync local storage across devices when logged in?
Syncing across devices still requires some amount of traffic through Google’s servers, if I’m not mistaken. Maybe you could cook something up with WebRTC, but I can’t imagine you could make something seamless.
porridgeraisin|4 days ago
> Btw, does Chrome sync local storage across devices when logged in?
No, but extensions have access to a storage area that syncs itself across logged-in devices. So potentially you could have a setup where you create a website and an extension, and the extension reads the website's localStorage and copies it to `chrome.storage.sync`.
Sounds like an interesting idea actually.
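Sketched out, it would be something like this ('myapp-state' is a made-up key; the first half runs as a content script on the website, the second on another logged-in device):

    // Content script: mirror the website's localStorage into synced extension storage.
    const state = localStorage.getItem('myapp-state');
    if (state !== null) {
      chrome.storage.sync.set({ 'myapp-state': state });
    }

    // Elsewhere: write the synced copy back into the page's localStorage.
    chrome.storage.sync.get('myapp-state', (items) => {
      const synced = items['myapp-state'];
      if (typeof synced === 'string') localStorage.setItem('myapp-state', synced);
    });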
nico|4 days ago
That's a clever solution
I've been playing with Chrome extensions recently, and have made them talk directly to a local server with a DB. So using extensions, it's relatively easy to store data locally and potentially sync it across devices.
I like the idea of leveraging chrome.storage.sync though, I wonder what the limitations are
porridgeraisin|4 days ago
> I wonder what the limitations are
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
says that there is a 100 KB total limit, and a 512 KV-pair limit per extension.
Quite limiting, but if this pattern becomes popular I don't see why it couldn't be expanded to the same limit as localStorage (5 MB).
thekingshorses|4 days ago
Any examples?