tl;dr - Slonik is my go-to Postgres driver for recent projects (mostly SvelteKit-based) – it is amazingly lightweight, simple, and powerful. I prefer it even though it lacks time-saving ORM methods (ex. Model.save()).
As much as I loved using Slonik due to how lightweight it is, it just isn't a reasonable choice when compared with something like TypeORM.
With issues like this one in the ecosystem, Slonik seems to constantly introduce breaking changes without bringing the ecosystem along. It doesn't make sense to use a DB framework without built-in migrations in 2023, or any year after.
Funnily enough, TypeORM has had its issues too (the original developer had a really hard time maintaining it), but it's a much more solid tool (and one I know well) which, although heavier, has decent ways to be used relatively lightly (sticking to the Repository pattern and building queries rather than using Hibernate-style APIs).
I’ve used Slonik on a few projects now, and I’m pretty excited about it – it’s been the perfect lightweight NodeJS Postgres driver for me. While I’m not a full-blown DBA (DataBase Administrator) type, I know enough to be dangerous, and it makes my favorite database even easier to use.
These days I mostly just want a tool that gets out of the way, lets me write queries, and does the translation to JS objects in a reasonable, predictable way. Slonik is that tool.
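Before getting into the alternatives, here's roughly what day-to-day Slonik usage looks like (a minimal sketch – the table is made up, and I'm using the same synchronous createPool style as the scripts later in this post):
import { createPool, sql } from "slonik";

const pool = createPool(process.env["DB_URL"]);

// Plain SQL in, plain JS objects out
const leads = await pool.any(sql`
  SELECT uuid, email
  FROM leads
  WHERE ref = ${"newsletter"}
`);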
TypeORM is probably the most robust/complete ORM for Postgres (and other DBs) on NodeJS that I know of. It can do things like letting you use a table as a tree or access a NoSQL store like MongoDB, and it has great ergonomics that still allow access to lower-level features.
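For a sense of the shape of it, typical TypeORM usage looks something like this (a sketch using the newer DataSource API – the entity and connection details are illustrative, not from this post):
import { Entity, PrimaryGeneratedColumn, Column, DataSource } from "typeorm";

@Entity()
class User {
  @PrimaryGeneratedColumn()
  id!: number;

  @Column()
  username!: string;
}

// Connection details here are made up
const ds = new DataSource({ type: "postgres", url: process.env["DB_URL"], entities: [User] });
await ds.initialize();
const users = await ds.getRepository(User).findBy({ username: "janedoe" });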
This tech is good, but I’m not using it – maybe it’s to my own detriment but I don’t think so.
Sequelize is one of the best known and widely used projects for interacting with databases in NodeJS – it’s downloaded over 1MM times a week, and even that seems low (given that TypeORM is at 1MM too). It looks something like this:
import { Sequelize, Model, DataTypes } from 'sequelize';
const sequelize = new Sequelize('sqlite::memory:');
const User = sequelize.define('User', {
  username: DataTypes.STRING,
  birthday: DataTypes.DATE,
});
const jane = await User.create({
  username: 'janedoe',
  birthday: new Date(1980, 6, 20),
});
const users = await User.findAll();
Sequelize is many developers’ first introduction to DB drivers and it deserves to be that, but I personally stopped using it years ago for reasons I’ve long since forgotten. My distaste for overly prescriptive ORMs (or ORMs in general, really) is probably part of it.
It looks like newer versions of Sequelize are very well put together, but I’m personally all-in on TypeScript, and something without deep integration – or with only a hand-rolled type/schema system – just isn’t for me.
I know that’s not very useful to someone who might be new and considering what to try, but if that’s you, feel free to disregard this entire article! Start with whichever tool feels most comfortable to you.
Knex is another entrant in the space – with a distinct focus on query building. This is more my speed – most of the time I just want to build queries and feed those to the database. It looks something like this:
const knex = require('knex')({
  client: 'sqlite3',
  connection: {
    filename: './data.db',
  },
});
// Create a table
await knex.schema.createTable('users', table => {
  table.increments('id');
  table.string('user_name');
});
// Insert a row
const insertedRows = await knex('users').insert({ user_name: 'Tim' })
// Query (with a join)
const selectedRows = await knex('users')
  .join('accounts', 'users.id', 'accounts.user_id')
  .select('users.user_name as user', 'accounts.account_name as account')
// Enrich results/do your own transformations
const enrichedRows = selectedRows.map(row => ({ ...row, active: true }))
Knex is the direction I want to go in, but not quite the implementation I want. I’m not fully against it, but in general: while schema building via a DSL (e.g. knex.schema.createTable()) is convenient, it’s always underpowered compared to raw SQL and is rarely better than writing out migrations by hand – you still have to know what a table is to createTable() (and if you don’t, you probably shouldn’t be writing the migrations!).
In general though, it’s less that Knex (or Sequelize, for that matter) is bad, and more that I think TypeORM is very good.
Knex has also had one or two vulnerabilities and attracted some critiques (written by @gajus, the creator of Slonik), which I think are somewhat reasonable. People get pretty intense about Knex.
There is a LOT of stuff I haven’t tried or gotten used to. Up until now I have been very very happy with TypeORM and often write simple code to abstract over it:
// @see EntityStore
public deleteEntitiesByPartial<T>(cls: any, partial: Partial<T>, opts?: DeleteOpts): Promise<void> {
let repo: any;
let softDelete: boolean = false;
return this.getConnection()
.then(c => {
// Check if the entity is for soft deleting
if (isSoftDeletableEntity(c, cls) && !opts?.forceHardDelete) { softDelete = true; }
return c.getRepository(cls);
})
.then(r => repo = r)
.then(() => {
if (softDelete) {
return this.updateEntitiesWithPartial(cls, { deleted: true, deletedAt: new Date() }, partial);
}
return repo.delete(partial);
})
.then(() => undefined);
}
(forgive the lack of async/await, this code is quite old!)
This may not be the greatest code you’ve ever seen, but it’s quite high-leverage for me, without requiring too much overhead. That said, there are tools out there that promise to do a lot more for you, come with UIs and fancy GUIs and are generally much more hyped than TypeORM.
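For context, a call site for that helper looks something like this (the store instance and the Lead entity here are just illustrative):
// Soft-deletes every matching Lead (or hard-deletes it, for entities
// that don't support soft deletion)
await store.deleteEntitiesByPartial(Lead, { email: "jane@example.com" });

// Hard-delete regardless of soft-delete support
await store.deleteEntitiesByPartial(Lead, { email: "jane@example.com" }, { forceHardDelete: true });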
A lot of people like Prisma, it seems by all accounts to be a great tool which is really accelerating projects and developers that use it.
I personally don’t use Prisma because I think it does a little too much hand-holding. I’m not a fan of heavy-handed/guided ORMs, and that’s kind of the point of Prisma – you give over control of the experience so that Prisma can do what it does: make your life easier.
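For reference, the Prisma experience looks roughly like this (a sketch – it assumes a client already generated from a schema with a User model):
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Everything is generated from the schema DSL, so queries come back fully typed
const jane = await prisma.user.create({ data: { username: "janedoe" } });
const users = await prisma.user.findMany({ where: { username: { contains: "doe" } } });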
One slightly more concrete complaint I have against Prisma is the “hassle-free migrations” powered by a DSL. I’m generally against DSLs because I find they end up being leaky, eventually either mirroring the underlying language too closely (for problems with high essential complexity) or being superseded by fully powered programming languages.
Anyway, people seem to be very productive with Prisma and I wish them the best.
MikroORM seems to attempt to be “TypeORM: the good parts”. Despite that, I feel it’s a little heavy-handed on the ORM bit (which makes sense – it’s in the name!).
I’ve never been a fan of the unit-of-work pattern – I’m convinced it introduces an impedance mismatch that is almost entirely unnecessary.
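To illustrate what I mean, here's a sketch of MikroORM's unit-of-work flow (the entity and connection config are made up for this example):
import { MikroORM } from "@mikro-orm/postgresql";
import { Entity, PrimaryKey, Property } from "@mikro-orm/core";

@Entity()
class User {
  @PrimaryKey()
  id!: number;

  @Property()
  username: string;

  constructor(username: string) {
    this.username = username;
  }
}

const orm = await MikroORM.init({ entities: [User], dbName: "app" });
const em = orm.em.fork();

// Nothing hits the database here -- the entity is just registered with the unit of work
const user = new User("janedoe");
em.persist(user);

// ...until flush(), where all tracked changes are computed and written in one batch
await em.flush();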
PgTyped is a new entrant to the space – I love that they’ve focused solely on Postgres.
What I don’t like so much (and the reason I haven’t kicked the tires yet) is the generated approach – I’m just not convinced that I want to write my queries in the way PgTyped recommends. It could absolutely be a game changer, but I don’t think the time spent writing out classes is wasted – I find that time critical because it provides the opportunity to reflect on my design just a bit more.
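For reference, PgTyped's flow looks roughly like this (the file and query names here are hypothetical):
// First you write an annotated query in, say, leads.sql:
//
//   /* @name GetLeadByEmail */
//   SELECT * FROM leads WHERE email = :email;
//
// ...then PgTyped generates a typed module you call from TypeScript:
import { Client } from "pg";
import { getLeadByEmail } from "./leads.queries"; // generated by pgtyped

const client = new Client({ connectionString: process.env["DB_URL"] });
await client.connect();
const rows = await getLeadByEmail.run({ email: "jane@example.com" }, client);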
The lack of a blessed migration tool is also basically a show stopper for me.
I’m excited to see where PgTyped goes in the future – there’s a lot of excitement around it and it’s quite popular. I guess I just need to meet the right fan who can make its case.
Mammoth is an even newer entrant to the space. It gets points in my book for being Postgres focused, but it does lose some for being so focused on building queries from functions rather than enabling just regular query writing.
Databases as robust as Postgres have a lot of corner cases to cover when building a heavily query-builder focused tool. It looks like Mammoth does take some of these cases into account:
const foo = defineTable({
  id: uuid().primaryKey().default(`gen_random_uuid()`),
  createDate: timestampWithTimeZone().notNull().default(`now()`),
  name: text().notNull(),
  value: integer(),
});
Again, I’m not a huge fan of defining tables from code rather than writing out CREATE TABLE statements, but the number of Postgres-specific features and corner cases handled here makes it feel like it could work.
Another large plus is their integration with TypeScript.
Alright, let’s get back to why I like Slonik. I’m not one to stand on tradition/name recognition, but Slonik’s heritage is quite good – it’s built by proper Postgres and NodeJS nerds.
I like Slonik because it’s lightweight, it’s Postgres-only (so Postgres-isms like ->>, ::jsonb, etc. are first-class), and it’s strict about values (for example, passing undefined as a query parameter is an error rather than a silent NULL).
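To illustrate a couple of those points, a sketch using the same demo_leads table that shows up in a migration later in this post:
import { createPool, sql } from "slonik";

const pool = createPool(process.env["DB_URL"]);

// Postgres-isms pass straight through, because you're writing real SQL
const newsletterCounts = await pool.anyFirst(sql`
  SELECT ref_counts->>'newsletter'
  FROM demo_leads
  WHERE ref_counts <> '{}'::jsonb
`);

// ...and undefined is rejected loudly rather than coerced to NULL
const bad: string | undefined = undefined;
await pool.query(sql`SELECT ${bad}`); // throws at runtime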
Given that the local DB is already running, here’s what it looks like when I create a new migration:
$ make db-migration NAME=db-migration-name-here
pnpm run "db:migration:create" --name db-migration-name-here.sql
> app@0.1.0 db:migration:create /path/to/the/app
> node src/lib/services/db/migrator.cli.js create "--name" "db-migration-name-here.sql"
{
event: 'created',
path: '/path/to/the/app/src/lib/services/db/migrations/2022.07.17T04.31.59.db-migration-name-here.sql'
}
{
event: 'created',
path: '/path/to/the/app/src/lib/services/db/migrations/down/2022.07.17T04.31.59.db-migration-name-here.sql'
}
And here’s the Makefile target that powers that:
db-migration:
ifeq (,$(NAME))
$(error "NAME not set")
else
$(NPM) run "db:migration:create" --name $(NAME).sql
endif
And here’s how the script is hooked up in package.json:
"name": "app",
"version": "0.1.0",
"scripts": {
...
"db:migration:create": "node src/lib/services/db/migrator.cli.js create"
},
Aaaaaand the script that command runs:
/* global process */
import { createPool } from "slonik";
import { createMigrator } from "./migrator.js";
// Run the migrator as a script
if (!process.env['DB_URL']) { throw new Error("DB_URL not set!"); }
const migrator = createMigrator(createPool(process.env['DB_URL']));
migrator.runAsCLI();
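The migrator.js module itself isn’t shown here, but a minimal sketch of what it might look like, assuming the @slonik/migrator package (which is what provides runAsCLI()):
import { SlonikMigrator } from "@slonik/migrator";

// Wrap a slonik pool in a migrator pointed at the migrations directory
export function createMigrator(pool) {
  return new SlonikMigrator({
    slonik: pool,
    migrationsPath: new URL("./migrations", import.meta.url).pathname,
    migrationTableName: "migration",
    logger: console,
  });
}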
It’s quite a bit of setup (mostly for my own sense of ergonomics), but it means I can write migrations that look like this:
-- Slonik migrator doesn't wrap migrations in a transaction by default, which is actually a good thing,
-- depending on the commands you want to run (some cannot be part of transactions)
BEGIN;
ALTER TABLE demo_leads ALTER COLUMN share_uuid DROP NOT NULL;
ALTER TABLE demo_leads ALTER COLUMN upload_uuid DROP NOT NULL;
ALTER TABLE demo_leads ADD COLUMN ref_counts jsonb NOT NULL DEFAULT '{}'::jsonb;
COMMENT ON COLUMN demo_leads.ref_counts IS 'references with counts for where the lead was obtained';
COMMIT;
CREATE UNIQUE INDEX CONCURRENTLY demo_leads_email_idx ON demo_leads USING BTREE (email);
You might have noticed that I actually have both a transaction and a non-transaction statement in there. I filed an issue about the transaction-by-default thing, and these days I consider it somewhat of a feature. The way slonik-tools solved it (the singleTransaction setting) is a great compromise as well.
Simple models are simple classes – here’s one that represents a sales lead:
import isUUID from "validator/lib/isUUID.js";
import isEmail from "validator/lib/isEmail.js";
export class Lead {
readonly uuid: string;
readonly email: string;
readonly ref: string;
// Manual object validation is yucky, but unbeatable for ease of understanding/iteration speed.
static parseObj(obj: any): obj is Lead {
return obj && typeof obj === "object"
&& "uuid" in obj && isUUID(obj.uuid)
&& "email" in obj && isEmail(obj.uuid)
&& ("ref" in obj ? typeof obj.ref === "string" : true);
}
static async tryParse(obj: any): Promise<Lead> {
if (!Lead.parseObj(obj)) {
throw new TypeError(`Object is not a Lead: ${JSON.stringify(obj, null, ' ')}`);
}
return new Lead(obj);
}
// Create a Lead from a database row
static fromRow(row: any): Lead {
const resource = new Lead({
...row,
})
return resource;
}
constructor(props?: Partial<Lead>) {
if (props) {
if (props.uuid) { this.uuid = props.uuid; }
if (props.email) { this.email = props.email; }
if (props.ref) { this.ref = props.ref; }
}
}
}
export default Lead;
Here’s what writing data looks like in my setup:
import { v1 as uuidv1 } from 'uuid';
import { Pool, sql } from "slonik";
import isUUID from "validator/lib/isUUID.js";
import type { Logger } from "pino"; // (the Logger type comes from whatever logging library you use; pino is an assumption here)
import { Lead } from "$lib/models";
import { isUniqueDuplicateSlonikError } from "$lib/util";
export interface CreateLeadArgs {
db: Pool;
logger?: Logger;
uuid?: string;
data: Partial<Lead>;
}
/**
* Create a Lead object
*
* @param {CreateLeadArgs} args
* @param {Partial<Lead>} args.data
* @param {string} [args.uuid]
* @param {Logger} [args.logger]
* @param {Pool} [args.db]
* @returns {Promise<Lead>} A promise that resolves to the created Lead
*/
export async function createLead(args: CreateLeadArgs): Promise<Lead> {
const { db, logger, data } = args;
// Generate a UUID beforehand
const uuid = args.uuid ?? uuidv1();
if (!isUUID(uuid)) {
logger?.error(`Missing/invalid UUID [${uuid}]`);
throw new Error("Missing/invalid UUID");
}
if (!data) {
logger?.error(`Missing/invalid Lead data`);
throw new Error("Missing/invalid data");
}
// Properly set all relevant properties
const email = args.data.email ?? null;
const ref = args.data.ref ?? null;
let row;
try {
// Create the Lead (db.one returns the single inserted row, thanks to RETURNING *)
row = await db.one(sql`
INSERT INTO
leads
(
uuid,
email,
ref
)
VALUES
(
${uuid},
${email},
${ref}
)
RETURNING *
`);
} catch (err: any) {
// Rethrow if not a unique slonik error
if (!isUniqueDuplicateSlonikError(err)) {
throw err;
}
// If we did get a slonik error we can just retrieve the lead
row = await db.maybeOne(sql`SELECT * FROM leads WHERE email = ${email}`);
}
if (!row) {
logger?.error({ data: args.data, uuid, entity: Lead.name }, "failed to create entity");
throw new Error(`Failed to create [${Lead.name ?? 'entity'}]`);
}
return Lead.fromRow(row);
}
I’m sure the people at home are shaking their heads furiously right now – this is exactly the kind of query that ORMs save you from writing! A simple Lead.create or new Lead({ ... }).save() would have saved so much time!
The ORM devotees are right – it would have been easier to have an ORM write this code, but I personally put a premium on knowing exactly what is happening, and this code is so easy to write (and generalizable, should I so choose) that it strikes a balance that works for me.
This is obviously more involved than an ORM might be, but what I love is that it is simple – the code in here is obvious, and it will never surprise you by not committing or by leaving an object in an unexpected state. If I want to take advantage of a native Postgres feature (I’m already using RETURNING *), I can do so with ease.
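As a small example of that ease: the try/catch duplicate handling in createLead above could instead lean on another native Postgres feature, ON CONFLICT (a sketch, assuming the unique index on email):
const row = await db.one(sql`
  INSERT INTO leads (uuid, email, ref)
  VALUES (${uuid}, ${email}, ${ref})
  ON CONFLICT (email) DO UPDATE SET email = EXCLUDED.email
  RETURNING *
`);
The no-op DO UPDATE is a common idiom to make RETURNING * yield the existing row on conflict, which collapses the insert-or-fetch dance into a single statement.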
I love Slonik for small-to-midsize projects, but one of the big unknowns for me is how it operates under heavier usage. There’s no reason to think it won’t hold up, but there aren’t enough companies who have written about their successes and failures with Slonik yet.
Since Slonik is such a great low-level tool, it feels like another yet-to-be-built project (AFAIK) is a CRUD layer on top of Slonik. Leveraging reflect-metadata to convert a Class to a suite of CRUD functions would be an amazing addition that could surface the best parts of ORMs on top of the raw query power of Slonik. This would remove the obvious deficiency compared to the usual ORM Model.save.
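A very rough sketch of what that idea might look like (entirely hypothetical – the Table decorator and buildCrud don't exist anywhere, this is just the shape of it):
import "reflect-metadata";
import { createPool, sql } from "slonik";

// A hypothetical decorator that records which table a class maps to
function Table(name: string): ClassDecorator {
  return (target) => Reflect.defineMetadata("tableName", name, target);
}

@Table("leads")
class Lead {
  uuid?: string;
  email?: string;
}

// A hypothetical factory that reads that metadata and builds CRUD helpers
function buildCrud<T>(cls: new () => T) {
  const table = Reflect.getMetadata("tableName", cls) as string;
  return {
    // pool is a slonik pool; insert/update/delete helpers would be generated similarly
    findAll: (pool: any) => pool.any(sql`SELECT * FROM ${sql.identifier([table])}`),
  };
}

const leadCrud = buildCrud(Lead);
const allLeads = await leadCrud.findAll(createPool(process.env["DB_URL"]));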
Well hopefully you’ve learned a little bit about why I like Slonik and you’ll consider using it for your next project. I’ve personally been quite pleased with the balance of raw query writing power and ease of use that Slonik offers me and hopefully you will be too.
Keep calm and write SQL.