Zapatos: a Postgres Library for TypeScript


What does it do?

Postgres and TypeScript
are independently awesome. Zapatos is a library that aims to make them awesome together.

To achieve that, it does these five things:

  • TypeScript schema   A command-line tool speaks to your Postgres database and writes up
    a detailed TypeScript schema for every table. This enables the next three things in this list.

  • Arbitrary SQL   Simple building blocks help you write arbitrary SQL using tagged
    templates, and manually apply the right types to what goes in and what comes back.

  • Everyday CRUD   Shortcut functions produce everyday CRUD queries with no fuss and no
    surprises, fully and automatically typed.

  • JOINs as nested JSON   Nested shortcut calls generate LATERAL JOIN
    queries, resulting in arbitrarily complex nested JSON structures, still fully and automatically typed.

  • Transactions   A transaction function helps with managing and retrying
    transactions.

How does that look?

TypeScript schema

A command-line tool speaks to your Postgres database and writes up a detailed TypeScript schema for every
table.

Take this ultra-simple SQL schema for a single table, authors:

CREATE TABLE "authors" 
( "id" SERIAL PRIMARY KEY
, "name" TEXT NOT NULL
, "isLiving" BOOLEAN );

We run npx zapatos to generate a file named schema.ts, including table definitions like
this one:

export namespace authors {
  
  export interface Selectable {
    id: number;
    name: string;
    isLiving: boolean | null;
  };
  export interface Insertable {
    id?: number | DefaultType | SQLFragment;
    name: string | SQLFragment;
    isLiving?: boolean | null | DefaultType | SQLFragment;
  };
  export interface Updatable extends Partial<Insertable> { };
  export type Whereable = { [K in keyof Insertable]?: 
    Exclude<Insertable[K], null | DefaultType> };
  
}

The types are, I hope, pretty self-explanatory. authors.Selectable is what I’ll get back from a
SELECT query on this table. authors.Insertable is what I can INSERT:
similar to the Selectable, but any fields that are NULLable and/or have
DEFAULT values are allowed to be missing, NULL or DEFAULT.
authors.Updatable is what I can UPDATE the table with: like what I can
INSERT, but all columns are optional: it’s a simple Partial.
authors.Whereable, finally, is what I can use in a WHERE condition.

schema.ts includes a few other types that get used internally, including some handy type mappings,
such as this one:

export type SelectableForTable<T extends Table> = {
  authors: authors.Selectable,
  books: books.Selectable,
  tags: tags.Selectable,
  
}[T];
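
These type mappings make it possible to write code that’s generic over tables. As a minimal sketch (assuming the generated Table union type, which these mappings are defined in terms of), we could write a helper typed via SelectableForTable:

import * as s from './zapatos/schema';

// a hypothetical generic helper: return the first row of any table's rows
function firstRow<T extends s.Table>(rows: s.SelectableForTable<T>[]): s.SelectableForTable<T> | undefined {
  return rows[0];
}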

Tell me more about the command line tool »

Arbitrary SQL

Simple building blocks help you write arbitrary SQL using tagged templates, and manually apply
the right types to what goes in and what comes back.

Let’s insert something into that authors table for which we just generated the types. We’ll write
the SQL query ourselves, to show how that works (though we’ll see an easier way in the
next section
):

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const
  author: s.authors.Insertable = {
    name: 'Gabriel Garcia Marquez',
    isLiving: false,
  },
  [insertedAuthor] = await db.sql<s.authors.SQL, s.authors.Selectable[]>`
      INSERT INTO ${"authors"} (${db.cols(author)})
      VALUES (${db.vals(author)}) RETURNING *`
    .run(pool);
INSERT INTO "authors" ("isLiving", "name")
  VALUES ($1, $2)
RETURNING *
[false, "Gabriel Garcia Marquez"]
[
  {
    "id": 1,
    "name": "Gabriel Garcia Marquez",
    "isLiving": false
  }
]

We’ve applied the appropriate type to the object we’re trying to insert (s.authors.Insertable),
giving us type-checking and autocompletion on that object. And we’ve specified both which types are allowed as
interpolated values in the template string (s.authors.SQL) and what type is going to be returned
(s.authors.Selectable[]) when the query runs.


Tell me more about writing arbitrary SQL »

Everyday CRUD

Shortcut functions produce everyday CRUD queries with no fuss and no
surprises, fully and automatically typed.

So — writing SQL with Zapatos is nicer than constructing a query and all its input and output types from
scratch. But for a totally bog-standard CRUD query like the INSERT above, it still involves quite a
lot of boilerplate.

To eliminate the boilerplate, Zapatos supplies some simple functions to generate these sorts of queries, fully
and automatically typed.

Let’s use one of them — insert — to add two more authors:

import * as db from './zapatos/src';
import pool from './pgPool';

const [doug, janey] = await db.insert('authors', [
  { name: 'Douglas Adams', isLiving: false },
  { name: 'Jane Austen', isLiving: false },
]).run(pool);
INSERT INTO "authors" ("isLiving", "name")
  VALUES ($1, $2), ($3, $4)
RETURNING to_jsonb ("authors".*) AS result
[false, "Douglas Adams", false, "Jane Austen"]
[
  {
    "id": 2,
    "name": "Douglas Adams",
    "isLiving": false
  },
  {
    "id": 3,
    "name": "Jane Austen",
    "isLiving": false
  }
]

The insert shortcut accepts a single Insertable or an Insertable[] array,
and correspondingly returns a single Selectable or a Selectable[] array. Since we
specified 'authors' as the first argument here, and an array as the second, input and output will be
checked and auto-completed as authors.Insertable[] and authors.Selectable[]
respectively.


In addition to insert, there are shortcuts for select, selectOne and
count, and for update, upsert, deletes and
truncate.

Tell me more about the shortcut functions »

JOINs as nested JSON

Nested shortcut calls generate LATERAL JOIN
queries, resulting in arbitrarily complex nested JSON structures, still fully and automatically typed.

CRUD is our bread and butter, but the power of SQL is that it’s relational — it’s in the
JOINs. And Postgres has some powerful JSON features that can deliver us sensibly-structured
JOIN results with minimal post-processing (that’s json_agg,
json_build_object, and so on).

To demonstrate, let’s say that authors have books and books have
tags, adding two new tables to our simple schema:

CREATE TABLE "books" 
( "id" SERIAL PRIMARY KEY
, "authorId" INTEGER NOT NULL REFERENCES "authors"("id")
, "title" TEXT
, "createdAt" TIMESTAMPTZ NOT NULL DEFAULT now() );

CREATE TABLE "tags"
( "tag" TEXT NOT NULL
, "bookId" INTEGER NOT NULL REFERENCES "books"("id") ON DELETE CASCADE );

CREATE UNIQUE INDEX "tagsUniqueIdx" ON "tags"("tag", "bookId");

Now, let’s say I want to show a list of books, each with its (one) author and (many) associated tags. We could
knock up a manual query for this, of course, but it gets quite hairy. The select shortcut has an
option called lateral that can nest other select queries and do it for us.

Let’s try it:

import * as db from './zapatos/src';
import pool from './pgPool';

const bookAuthorTags = await db.select('books', db.all, {
  lateral: {
    author: db.selectOne('authors', { id: db.parent('authorId') }),
    tags: db.select('tags', { bookId: db.parent('id') }),
  }
}).run(pool);
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("books".*) || jsonb_build_object($1::text, "ljoin_0".result, $2::text, "ljoin_1".result) AS result
  FROM "books"
  LEFT JOIN LATERAL (
    SELECT to_jsonb ("authors".*) AS result
    FROM "authors"
    WHERE ("id" = "books"."authorId")
  LIMIT $3) AS "ljoin_0" ON true
  LEFT JOIN LATERAL (
    SELECT coalesce(jsonb_agg(result), '[]') AS result
    FROM (
      SELECT to_jsonb ("tags".*) AS result
      FROM "tags"
      WHERE ("bookId" = "books"."id")) AS "sq_tags") AS "ljoin_1" ON true) AS "sq_books"
["author", "tags", 1]
[
  {
    "id": 1000,
    "tags": [
      {
        "tag": "His Dark Materials",
        "bookId": 1000
      },
      {
        "tag": "1/3",
        "bookId": 1000
      }
    ],
    "title": "Northern Lights",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    },
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.424295+01:00"
  },
  {
    "id": 1001,
    "tags": [
      {
        "tag": "His Dark Materials",
        "bookId": 1001
      },
      {
        "tag": "2/3",
        "bookId": 1001
      }
    ],
    "title": "The Subtle Knife",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    },
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425429+01:00"
  },
  {
    "id": 1002,
    "tags": [
      {
        "tag": "His Dark Materials",
        "bookId": 1002
      },
      {
        "tag": "3/3",
        "bookId": 1002
      }
    ],
    "title": "The Amber Spyglass",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    },
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425819+01:00"
  },
  {
    "id": 1003,
    "tags": [
      {
        "tag": "mystery",
        "bookId": 1003
      }
    ],
    "title": "The Curious Incident of the Dog in the Night-Time",
    "author": {
      "id": 1001,
      "name": "Mark Haddon",
      "isLiving": true
    },
    "authorId": 1001,
    "createdAt": "2020-05-12T12:36:00.429545+01:00"
  },
  {
    "id": 1004,
    "tags": [
      {
        "tag": "adventure",
        "bookId": 1004
      }
    ],
    "title": "Holes",
    "author": {
      "id": 1002,
      "name": "Louis Sachar",
      "isLiving": true
    },
    "authorId": 1002,
    "createdAt": "2020-05-12T12:36:00.430543+01:00"
  }
]

This generates an efficient three-table LATERAL JOIN that returns a nested JSON structure directly
from the database. Every nested element is again fully and automatically typed.


We can of course extend this to deeper nesting (e.g. query each author, with their books, with their tags); to
self-joins (of a table with itself, e.g. employees to their managers in the same employees table);
and to joins on relationships other than foreign keys (e.g. joining the nearest N somethings using the
PostGIS <-> distance operator).

Tell me more about nested select queries »

Transactions

A transaction function helps with managing and retrying transactions.

Transactions are where I’ve found traditional ORMs like TypeORM and Sequelize probably most footgun-prone.
Zapatos is always explicit about what client or pool is running your query — hence the pool
argument in all our examples so far.

Zapatos also offers a simple transaction helper function that handles issuing a SQL
ROLLBACK on error, releasing the database client in a TypeScript finally clause (i.e.
whether or not an error was thrown), and automatically retrying queries in case of serialization failures. It
looks like this:

import * as db from './zapatos/src';
import pool from './pgPool';

const result = await db.transaction(pool, db.Isolation.Serializable, async txnClient => {
  /* transaction queries go here, using txnClient instead of pool */
});

For example, take this bankAccounts table:

CREATE TABLE "bankAccounts" 
( "id" SERIAL PRIMARY KEY
, "balance" INTEGER NOT NULL DEFAULT 0 CHECK ("balance" > 0) );

We can use the transaction helper like so:

import * as db from './zapatos/src';
import pool from './pgPool';

const [accountA, accountB] = await db.insert('bankAccounts', 
  [{ balance: 50 }, { balance: 50 }]).run(pool);

const transferMoney = (sendingAccountId: number, receivingAccountId: number, amount: number) =>
  db.transaction(pool, db.Isolation.Serializable, txnClient => Promise.all([
    db.update('bankAccounts',
      { balance: db.sql`${db.self} - ${db.param(amount)}` },
      { id: sendingAccountId }).run(txnClient),
    db.update('bankAccounts',
      { balance: db.sql`${db.self} + ${db.param(amount)}` },
      { id: receivingAccountId }).run(txnClient),
  ]));

try {
  const [[updatedAccountA], [updatedAccountB]] = await transferMoney(accountA.id, accountB.id, 60);
} catch(err) {
  console.log(err.message, '/', err.detail);
}
INSERT INTO "bankAccounts" ("balance")
  VALUES ($1), ($2)
RETURNING to_jsonb ("bankAccounts".*) AS result
[50, 50]
[
  {
    "id": 1,
    "balance": 50
  },
  {
    "id": 2,
    "balance": 50
  }
]
START TRANSACTION ISOLATION LEVEL SERIALIZABLE
UPDATE
  "bankAccounts"
SET ("balance") = ROW ("balance" - $1)
WHERE ("id" = $2)
RETURNING to_jsonb ("bankAccounts".*) AS result
[60, 1]
UPDATE
  "bankAccounts"
SET ("balance") = ROW ("balance" + $1)
WHERE ("id" = $2)
RETURNING to_jsonb ("bankAccounts".*) AS result
[60, 2]
ROLLBACK
new row for relation "bankAccounts" violates check constraint "bankAccounts_balance_check" / Failing row contains (1, -10).

Finally, it provides a set of hierarchical isolation types so that, for example, if you type a
txnClient argument to a function as TxnSatisfying.RepeatableRead, you can call it with
Isolation.Serializable or Isolation.RepeatableRead but not
Isolation.ReadCommitted.
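
For example, here’s a minimal sketch (assuming the TxnClient type that the transaction helper passes to its callback):

import * as db from './zapatos/src';

// callable with a client from a REPEATABLE READ or SERIALIZABLE transaction,
// but not from a READ COMMITTED one
async function auditBalances(txnClient: db.TxnClient<db.TxnSatisfying.RepeatableRead>) {
  return db.select('bankAccounts', db.all).run(txnClient);
}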

Why does it do those things?

It is a truth universally acknowledged that ORMs aren’t very good.

I like SQL, and Postgres especially. In my experience, abstractions that obscure the underlying SQL, or that
prioritise ease of switching to another database tomorrow over effective use of this database
today, are a source of misery.

I’ve also come to love strongly typed languages, and TypeScript in particular. VS Code’s type checking and
autocomplete speed development, prevent bugs, and simplify refactoring. Especially when they just happen,
they bring joy.

Zapatos aims to minimise the misery of abstraction, intensify the joy of type inference, and represent a credible
alternative to traditional ORMs.

What doesn’t it do?

Zapatos doesn’t handle schema migrations. Other tools can help you with this: check out dbmate, for instance.

It also doesn’t manage the connection pool for you, as some ORMs do — mainly because the pg module
makes this so easy. For example, my pgPool.ts looks something like this:

import pg from 'pg';
export default new pg.Pool({ connectionString: process.env.DATABASE_URL });

Finally, it won’t tell you how to structure your code: Zapatos doesn’t deal in the ‘model’ classes beloved of
traditional ORMs, just (fully-typed) POJOs.

How do I use it?

Zapatos provides a command line tool, which is run like so:

npx zapatos

This generates the TypeScript schema for your database as zapatos/schema.ts, and
copies (or symlinks) the Zapatos source files into zapatos/src.

You must import the Zapatos source files from this copied/symlinked zapatos/src
directory, not from 'zapatos' in the usual way (which would find them in
node_modules).

That’s because the source files depend on importing your custom, Zapatos-generated schema.ts, which
they cannot do if they’re imported in the usual way.
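
In practice, that means every file begins with imports like the ones seen throughout this documentation:

import * as db from './zapatos/src';    // the Zapatos library
import * as s from './zapatos/schema';  // your generated schema types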

Of course, before you can run npx zapatos, you need to install and configure it.

Installation

Install it with npm:

npm install --save-dev zapatos

If you are copying the source files, which is the recommended default, you can make the library a
devDependency with --save-dev (conversely, if you are symlinking them, which is not
recommended, you will need the library as a standard dependency with plain old --save).

Configuration

Add a top-level file zapatosconfig.json to your project. Here’s an example:

{
  "db": {
    "connectionString": "postgresql://localhost/example_db"
  },
  "outDir": "./src",
  "schemas": {
    "public": {
      "include": "*",
      "exclude": ["excluded_table_1", "excluded_table_2"]
    }
  }
}

This file has up to four top-level keys:

  • "db" gives Postgres connection details. You can provide anything that you’d pass to
    new pg.Pool(/* ... */) here. This key is required.

  • "outDir" defines where your zapatos folder will be created, relative to the project
    root. If not specified, it defaults to the project root, i.e. ".".

  • "srcMode" can take the values "copy" (the default) or "symlink",
    determining whether zapatos/src will be a copy of the folder
    node_modules/zapatos/src or just a symlink to it. The symlink option can cause enormous headaches
    with tools like ts-node and ts-jest, which refuse to compile anything inside
    node_modules, and is not recommended.

  • "schemas" is an object that lets you define schemas and tables to include and exclude. Each key
    is a schema name, and each value is an object with keys "include" and "exclude".
    Those keys can take the values "*" (for all tables in schema) or an array of table names. The
    "exclude" list takes precedence over the "include" list.

    Note that schemas are not fully supported by Zapatos, since they are not included in the output types, but
    they will work by using Postgres’s search path if none of your table names is duplicated across different
    schemas.

    If not specified, the default value for "schemas" includes all tables in the public
    schema, i.e.:

    "schemas": {
      "public": {
        "include": "*",
        "exclude: []
      }
    }

    One more example: if you use PostGIS, you’ll likely want to exclude its system tables:

    "schemas": {
      "public": {
        "include": "*",
        "exclude": [
          "geography_columns", 
          "geometry_columns", 
          "raster_columns", 
          "raster_overviews", 
          "spatial_ref_sys"
        ]
      }
    }

Environment variables

All values in zapatosconfig.json can have environment variables (Node’s
process.env.SOMETHING) interpolated via handlebars-style
doubly-curly-brackets {{variables}}.

This is likely most useful for the database connection details. For example, on Heroku you’d probably configure
your database as:

"db": {
  "connectionString": "{{DATABASE_URL}}"
}

tslint

One general configuration suggestion: set up tslint with the no-floating-promises and
await-promise rules to avoid
Promise-related pitfalls.

User guide

sql tagged template strings

Arbitrary queries are written using the tagged template function sql, which returns SQLFragment class instances.

The sql function is generic, having two type variables. For
example:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const authors = await db.sql<s.authors.SQL, s.authors.Selectable[]>`
  SELECT * FROM ${"authors"}`.run(pool);
SELECT *
FROM "authors"
[
  {
    "id": 1000,
    "name": "Philip Pullman",
    "isLiving": true
  },
  {
    "id": 1001,
    "name": "Mark Haddon",
    "isLiving": true
  },
  {
    "id": 1002,
    "name": "Louis Sachar",
    "isLiving": true
  },
  {
    "id": 1,
    "name": "Gabriel Garcia Marquez",
    "isLiving": false
  },
  {
    "id": 2,
    "name": "Douglas Adams",
    "isLiving": false
  },
  {
    "id": 3,
    "name": "Jane Austen",
    "isLiving": false
  }
]

The first type variable, Interpolations (above: s.authors.SQL), defines allowable
interpolation values. If we were joining the authors and books tables, say, then we
could specify s.authors.SQL | s.books.SQL here.

The Interpolations type variable defaults to db.SQL if not specified. This is the union
of all per-table SQL types, and thus allows all table and column names present in the database as
string interpolations. However, TypeScript will infer a more specific type from the first interpolated value,
and if you have multiple interpolated values of different types then you may need to specify a value
explicitly (either db.SQL or something more precise).
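
For example, if we interpolate table and column names from both authors and books, we might widen Interpolations explicitly, as in this sketch (anticipating the manual join queries shown later):

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

// names from two tables are interpolated, so we specify the union of their SQL types
const bookAuthorRows = await db.sql<s.authors.SQL | s.books.SQL, any[]>`
  SELECT * FROM ${"books"} JOIN ${"authors"} 
  ON ${"books"}.${"authorId"} = ${"authors"}.${"id"}`.run(pool);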

The second type variable, RunResult (above: s.authors.Selectable[]), describes what
will be returned if we call run() on the query (after any transformations performed in runResultTransform()), or if we embed it
within the extras or lateral
query options. Its default value if not specified is any[].

Take another example of these type variables:

import * as db from './zapatos/src';
import pool from './pgPool';

const [{ random }] = await db.sql<never, [{ random: number }]>`
  SELECT random()`.run(pool);

console.log(random);
SELECT random()
[
  {
    "random": 0.931065785232931
  }
]
0.931065785232931

Interpolations is never because nothing needs to be interpolated in this query, and the
RunResult type says that the query will return one row comprising one numeric column, named
random. The random TypeScript variable we initialize will of course be typed as a
number.

If you’re happy to have your types tied down a little less tightly, it also works to wholly omit the type
variables in this particular query, falling back on their defaults:

import * as db from './zapatos/src';
import pool from './pgPool';

const [{ random }] = await db.sql`SELECT random()`.run(pool);

In this case, the random variable is of course still a number, but it is typed as
any.

sql template interpolation types

Strings

The strings that can be directly interpolated into a sql template string are defined by its
Interpolations type variable, as noted above. Typically,
this will limit them to the names of tables and columns.

Interpolated strings are passed through to the raw SQL query double-quoted, to preserve capitalisation and
neutralise SQL keywords, but otherwise unchanged.

It’s highly preferable to use interpolated string literals for table and column names rather than just writing
those values in the query itself, in order to benefit from auto-completion and (ongoing) type-checking.

So, for example, do write:

import * as db from './zapatos/src';
import pool from './pgPool';

const title = await db.sql`
  SELECT ${"title"} FROM ${"books"} LIMIT 1`.run(pool);

But don’t write

import * as db from './zapatos/src';
import pool from './pgPool';

const title = await db.sql`
  SELECT "title" FROM "books" LIMIT 1`.run(pool);

— even if the two produce the same result right now.

More critically, never override the type-checking so as to write:

import * as db from './zapatos/src';
import pool from './pgPool';

const 
  nameSubmittedByUser = 'books"; DROP TABLE "authors"; --',
  title = await db.sql<any>`
    SELECT * FROM ${nameSubmittedByUser} LIMIT 1`.run(pool);  
SELECT *
FROM "books";

DROP TABLE "authors";

If you override type-checking to pass untrusted data to Zapatos in unexpected places, such as the above use of
any, you can expect successful SQL injection attacks. (It is safe to pass untrusted data as
values in Whereable, Insertable, and Updatable objects, manually by using
param, and in certain other places. If in doubt, check
whether the generated SQL is using $1, $2, … parameters).

cols() and vals()

The cols and vals wrapper functions (which return ColumnNames and
ColumnValues class instances respectively) are intended to help with INSERT queries.

Pass them each the same Insertable object: cols is compiled to a comma-separated list
of the object’s keys, which are the column names, and vals is compiled to a comma-separated list of
SQL placeholders ($1, $2, …) associated with the corresponding values, in matching
order. To return to (approximately) an earlier example:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const
  author: s.authors.Insertable = {
    name: 'Joseph Conrad',
    isLiving: false,
  },
  [insertedAuthor] = await db.sql<s.authors.SQL, s.authors.Selectable[]>`
    INSERT INTO ${"authors"} (${db.cols(author)})
    VALUES (${db.vals(author)}) RETURNING *`.run(pool);
INSERT INTO "authors" ("isLiving", "name")
  VALUES ($1, $2)
RETURNING *
[false, "Joseph Conrad"]
[
  {
    "id": 4,
    "name": "Joseph Conrad",
    "isLiving": false
  }
]

A second use for the cols function is in selecting only a subset of columns, in conjunction with the
OnlyCols type. Pass an array of column names to cols, and they’re compiled
appropriately, as seen in this example:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';
const bookCols = <const>['id', 'title'];
type BookDatum = s.books.OnlyCols<typeof bookCols>;

const
  bookData = await db.sql<s.books.SQL, BookDatum[]>`
    SELECT ${db.cols(bookCols)} FROM ${"books"}`.run(pool);
SELECT "id", "title"
FROM "books"
[
  {
    "id": 1000,
    "title": "Northern Lights"
  },
  {
    "id": 1001,
    "title": "The Subtle Knife"
  },
  {
    "id": 1002,
    "title": "The Amber Spyglass"
  },
  {
    "id": 1003,
    "title": "The Curious Incident of the Dog in the Night-Time"
  },
  {
    "id": 1004,
    "title": "Holes"
  }
]

Whereable

Any plain JavaScript object interpolated into a sql template string is type-checked as a
Whereable, and compiled into one or more conditions joined with AND (but, for
flexibility, no WHERE). The object’s keys represent column names, and the corresponding values are
compiled as (injection-safe) parameters.

For example:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const 
  title = 'Northern Lights',
  books = await db.sql`
    SELECT * FROM ${"books"} WHERE ${{ title }}`.run(pool);
SELECT *
FROM "books"
WHERE ("title" = $1)
["Northern Lights"]
[
  {
    "id": 1000,
    "authorId": 1000,
    "title": "Northern Lights",
    "createdAt": "2020-05-12T11:36:00.424Z"
  }
]

A Whereable’s values can also be SQLFragments, however, and this makes them extremely
flexible. In a SQLFragment inside a Whereable, the special symbol self can
be used to refer to the column name. This arrangement enables us to use any operator or function we want — not
just =.

For example:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const 
  titleLike = `Northern%`,
  books = await db.sql`
    SELECT * FROM ${"books"} WHERE ${{ 
      title: db.sql`${db.self} LIKE ${db.param(titleLike)}`,
      createdAt: db.sql`${db.self} > now() - INTERVAL '7 days'`,
    }}`.run(pool);
SELECT *
FROM "books"
WHERE (("createdAt" > now() - INTERVAL '7 days')
  AND ("title" LIKE $1))
["Northern%"]
[
  {
    "id": 1000,
    "authorId": 1000,
    "title": "Northern Lights",
    "createdAt": "2020-05-12T11:36:00.424Z"
  }
]

self

The use of the self symbol is explained in the section on Whereables.

param(value: any): Parameter

In general, Zapatos’ type-checking won’t let us pass user-supplied data unsafely
into a query
by accident. The param wrapper function exists to enable the safe passing of
user-supplied data into a query using numbered query parameters ($1, $2, …).

For example:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const 
  title = 'Pride and Prejudice',
  books = await db.sql`
    SELECT * FROM ${"books"} WHERE ${"title"} = ${db.param(title)}`.run(pool);
SELECT *
FROM "books"
WHERE "title" = $1
["Pride and Prejudice"]

This same mechanism is applied automatically when we use a Whereable object
(and in this example, using a Whereable would be more readable and more concise). It’s also applied
when we use the vals function to create a ColumnValues
wrapper object.

default

The default symbol simply compiles to the SQL DEFAULT keyword. This may be useful in
INSERT and UPDATE queries where no value is supplied for one or more of the affected
columns.
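
For instance, here’s a sketch using the books table, whose createdAt column has a DEFAULT (and assuming the symbol is exported as Default, since default itself is a reserved word for JavaScript bindings):

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

// ask Postgres to fill in createdAt from its column DEFAULT (now())
const book: s.books.Insertable = {
  authorId: 1000,
  title: 'The Secret Commonwealth',
  createdAt: db.Default,
};
await db.sql`
  INSERT INTO ${"books"} (${db.cols(book)}) VALUES (${db.vals(book)})`.run(pool);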

sql template strings

sql template strings (resulting in SQLFragments) can be interpolated within other
sql template strings (SQLFragments). This provides flexibility in building queries
programmatically.

For example, the select shortcut makes extensive use of
nested sql templates to build its queries:

const
  rowsQuery = sql<SQL, any>`
    SELECT ${allColsSQL} AS result 
    FROM ${table}${tableAliasSQL}
    ${lateralSQL}${whereSQL}${orderSQL}${limitSQL}${offsetSQL}`,

  // if we're selecting many rows, wrap the basic query in an outer query
  // that aggregates them into a single JSON array
  query = mode !== SelectResultMode.Many ? rowsQuery :
    sql<SQL, any>`
      SELECT coalesce(jsonb_agg(result), '[]') AS result 
      FROM (${rowsQuery}) AS ${raw(`"sq_${aliasedTable}"`)}`;

Arrays

Items in an interpolated array are treated just the same as if they had been interpolated directly. This, again,
can be useful for building queries programmatically.

To take the select shortcut as our example again, an
interpolated array is used to generate LATERAL JOIN query elements from the lateral
option, like so:

const
  lateralOpt = allOptions.lateral,
  lateralSQL = lateralOpt === undefined ? [] :
    Object.keys(lateralOpt).map(k => {
      const subQ = lateralOpt[k];
      subQ.parentTable = aliasedTable;  // enables the parent() wrapper in the subquery
      return sql` LEFT JOIN LATERAL (${subQ}) AS ${raw(`"cj_${k}"`)} ON true`;
    });

The lateralSQL variable — a SQLFragment[] — is subsequently interpolated into the
final query (some additional SQL using jsonb_build_object() is interpolated earlier in that query, to
return the result of the lateral subquery alongside the main query columns).

Note that a useful idiom also seen here is the use of the empty array ([]) to conditionally
interpolate nothing at all.

raw(value: string): DangerousRawString

The raw function returns DangerousRawString wrapper instances. This represents an
escape hatch, enabling us to interpolate arbitrary strings into queries in contexts where the param
wrapper is unsuitable (such as when we’re interpolating basic SQL syntax elements). If you pass
user-controlled data to this function you will open yourself up to SQL injection attacks.
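
As a sketch of a legitimate use: interpolating a sort direction, which is SQL syntax rather than a value, and so can’t be passed via param (the direction here is hard-coded, not user-controlled):

import * as db from './zapatos/src';
import pool from './pgPool';

const direction: 'ASC' | 'DESC' = 'DESC';  // trusted value: never take this from user input
const books = await db.sql`
  SELECT * FROM ${"books"} ORDER BY ${"createdAt"} ${db.raw(direction)}`.run(pool);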

parent(columnName: string): ParentColumn

Within select, selectOne or count queries passed as subqueries to the
lateral option of select or selectOne, the parent() wrapper
can be used to refer to a column of the table that’s the subject of the immediately containing query. For details,
see the documentation for the lateral option.
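
As a quick taste, here’s a sketch inverting the earlier books example, so that each author comes with their books:

import * as db from './zapatos/src';
import pool from './pgPool';

// parent('id') refers to the id column of the enclosing authors query
const authorsWithBooks = await db.select('authors', db.all, {
  lateral: { books: db.select('books', { authorId: db.parent('id') }) },
}).run(pool);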

SQLFragment

SQLFragment class instances are what is returned by the sql tagged
template function — you’re unlikely ever to construct them directly with new. They take on the
RunResult type variable from the sql template function that constructs them.

You can interpolate them into other sql tagged template strings,
or call/access the following properties on them:

async run(queryable: Queryable): Promise<RunResult>

The run function compiles, executes, and returns the transformed result of the query represented by
this SQLFragment. The awaited return value is typed according to the
SQLFragment‘s RunResult type variable.

Taking that one step at a time:

  1. First, the compile function is called, recursively compiling
    this SQLFragment and its interpolated values into a { text: '', values: [] } query
    that can be passed straight to the pg module. If a queryListener function has been configured, it is called with the query as its argument now.

  2. Next, the compiled SQL query is executed against the supplied Queryable, which is defined as
    either a pg.Pool instance or a subtype of pg.PoolClient (TxnClient) as
    provided by the transaction helper function.

  3. Finally, the result returned from pg is fed through this SQLFragment‘s runResultTransform() function, whose
    default implementation simply returns the rows property of the result. If a
    resultListener function has been configured, it is called
    with the transformed result as its argument now.

Examples of the run function are scattered throughout this documentation.

compile(): SQLQuery

The compile function recursively transforms this SQLFragment and its interpolated
values into a SQLQuery object ({ text: string; values: any[]; }) that can be passed
straight to the pg module. It is called without arguments (the arguments it can take are for internal
use).

For example:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';

const 
  authorId = 12,  
  query = db.sql`
    SELECT * FROM ${"books"} WHERE ${{authorId}}`,
  compiled = query.compile();

console.log(compiled);
{
  text: '\n    SELECT * FROM "books" WHERE ("authorId" = $1)',
  values: [ 12 ]
}

You may never need this function. Use it if and when you want to see the SQL that would be executed by the
run function, without in fact executing it.

runResultTransform: (qr: pg.QueryResult) => any

When you call run, the function stored in this property is applied to the QueryResult
object returned by pg, in order to produce the result that the run function ultimately
returns.

By default, the QueryResult’s rows property (which is an array) is returned: that is,
the default implementation is just qr => qr.rows. However, the shortcut functions supply their own
runResultTransform implementations in order to match their declared RunResult types.

Generally you will not need to call this function directly, but there may be cases where you want to assign a new
function to replace the default implementation.

For example, imagine we wanted to create a function returning a query that, when run, returns the current
database timestamp directly as a Date. We could do so like this:

import * as db from './zapatos/src';
import pool from './pgPool';

function dbNowQuery() {
  const query = db.sql<never, Date>`SELECT now()`;
  query.runResultTransform = qr => qr.rows[0].now;
  return query;
}

const dbNow = await dbNowQuery().run(pool);
SELECT now()
"2020-05-12T11:36:09.431Z"

Note that the RunResult type variable on the sql template function (in this case,
Date) must reflect the type of the transformed result, not what comes straight back from
pg (which in this case is roughly { rows: [{ now: Date }] }).

If a SQLFragment does not have run called on it directly — for example, if it is
instead interpolated into another SQLFragment, or given as the value of the lateral
option to the select shortcut — then the runResultTransform function is never applied.

Manual joins using Postgres’ JSON features

We can make use of Postgres’ excellent JSON support to achieve a variety of JOIN queries. That’s not
unique to Zapatos, of course, but it may be helpful to consider a few example queries in this context.

Take this example, retrieving each book with its (single) author:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

type bookAuthorSQL = s.books.SQL | s.authors.SQL | "author";
type bookAuthorSelectable = s.books.Selectable & { author: s.authors.Selectable };

const query = db.sql<bookAuthorSQL, bookAuthorSelectable[]>`
  SELECT ${"books"}.*, to_jsonb(${"authors"}.*) as ${"author"}
  FROM ${"books"} JOIN ${"authors"} 
  ON ${"books"}.${"authorId"} = ${"authors"}.${"id"}`;

const bookAuthors = await query.run(pool);
SELECT "books".*, to_jsonb ("authors".*) as "author"
FROM "books"
  JOIN "authors" ON "books"."authorId" = "authors"."id"
[
  {
    "id": 1000,
    "authorId": 1000,
    "title": "Northern Lights",
    "createdAt": "2020-05-12T11:36:00.424Z",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    }
  },
  {
    "id": 1001,
    "authorId": 1000,
    "title": "The Subtle Knife",
    "createdAt": "2020-05-12T11:36:00.425Z",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    }
  },
  {
    "id": 1002,
    "authorId": 1000,
    "title": "The Amber Spyglass",
    "createdAt": "2020-05-12T11:36:00.426Z",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    }
  },
  {
    "id": 1003,
    "authorId": 1001,
    "title": "The Curious Incident of the Dog in the Night-Time",
    "createdAt": "2020-05-12T11:36:00.430Z",
    "author": {
      "id": 1001,
      "name": "Mark Haddon",
      "isLiving": true
    }
  },
  {
    "id": 1004,
    "authorId": 1002,
    "title": "Holes",
    "createdAt": "2020-05-12T11:36:00.431Z",
    "author": {
      "id": 1002,
      "name": "Louis Sachar",
      "isLiving": true
    }
  }
]

Of course, we might also want the converse query, retrieving each author with their (many) books. This is also
easy enough to arrange:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

type authorBooksSQL = s.authors.SQL | s.books.SQL;
type authorBooksSelectable = s.authors.Selectable & { books: s.books.Selectable[] };

const query = db.sql<authorBooksSQL, authorBooksSelectable[]>`
  SELECT ${"authors"}.*, jsonb_agg(${"books"}.*) AS ${"books"}
  FROM ${"authors"} JOIN ${"books"} 
  ON ${"authors"}.${"id"} = ${"books"}.${"authorId"}
  GROUP BY ${"authors"}.${"id"}`;

const authorBooks = await query.run(pool);
SELECT "authors".*, jsonb_agg("books".*) AS "books"
FROM "authors"
  JOIN "books" ON "authors"."id" = "books"."authorId"
GROUP BY "authors"."id"
[
  {
    "id": 1001,
    "name": "Mark Haddon",
    "isLiving": true,
    "books": [
      {
        "id": 1003,
        "title": "The Curious Incident of the Dog in the Night-Time",
        "authorId": 1001,
        "createdAt": "2020-05-12T12:36:00.429545+01:00"
      }
    ]
  },
  {
    "id": 1002,
    "name": "Louis Sachar",
    "isLiving": true,
    "books": [
      {
        "id": 1004,
        "title": "Holes",
        "authorId": 1002,
        "createdAt": "2020-05-12T12:36:00.430543+01:00"
      }
    ]
  },
  {
    "id": 1000,
    "name": "Philip Pullman",
    "isLiving": true,
    "books": [
      {
        "id": 1000,
        "title": "Northern Lights",
        "authorId": 1000,
        "createdAt": "2020-05-12T12:36:00.424295+01:00"
      },
      {
        "id": 1001,
        "title": "The Subtle Knife",
        "authorId": 1000,
        "createdAt": "2020-05-12T12:36:00.425429+01:00"
      },
      {
        "id": 1002,
        "title": "The Amber Spyglass",
        "authorId": 1000,
        "createdAt": "2020-05-12T12:36:00.425819+01:00"
      }
    ]
  }
]

Note that if you want to include authors with no books, you need a LEFT JOIN in this query, and then
you’ll also want to fix the annoying [null] array results jsonb_agg will return for those authors.

Rather than do it that way, though, we can achieve the same result using a LATERAL JOIN instead:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

type authorBooksSQL = s.authors.SQL | s.books.SQL;
type authorBooksSelectable = s.authors.Selectable & { books: s.books.Selectable[] };

const query = db.sql<authorBooksSQL, authorBooksSelectable[]>`
  SELECT ${"authors"}.*, bq.* 
  FROM ${"authors"} LEFT JOIN LATERAL (
    SELECT coalesce(json_agg(${"books"}.*), '[]') AS ${"books"}
    FROM ${"books"}
    WHERE ${"books"}.${"authorId"} = ${"authors"}.${"id"}
  ) bq ON true`;

const authorBooks = await query.run(pool);
SELECT "authors".*, bq.*
FROM "authors"
  LEFT JOIN LATERAL (
    SELECT coalesce(json_agg("books".*), '[]') AS "books"
    FROM "books"
    WHERE "books"."authorId" = "authors"."id") bq ON true
[
  {
    "id": 1000,
    "name": "Philip Pullman",
    "isLiving": true,
    "books": [
      {
        "id": 1000,
        "authorId": 1000,
        "title": "Northern Lights",
        "createdAt": "2020-05-12T12:36:00.424295+01:00"
      },
      {
        "id": 1001,
        "authorId": 1000,
        "title": "The Subtle Knife",
        "createdAt": "2020-05-12T12:36:00.425429+01:00"
      },
      {
        "id": 1002,
        "authorId": 1000,
        "title": "The Amber Spyglass",
        "createdAt": "2020-05-12T12:36:00.425819+01:00"
      }
    ]
  },
  {
    "id": 1001,
    "name": "Mark Haddon",
    "isLiving": true,
    "books": [
      {
        "id": 1003,
        "authorId": 1001,
        "title": "The Curious Incident of the Dog in the Night-Time",
        "createdAt": "2020-05-12T12:36:00.429545+01:00"
      }
    ]
  },
  {
    "id": 1002,
    "name": "Louis Sachar",
    "isLiving": true,
    "books": [
      {
        "id": 1004,
        "authorId": 1002,
        "title": "Holes",
        "createdAt": "2020-05-12T12:36:00.430543+01:00"
      }
    ]
  },
  {
    "id": 1,
    "name": "Gabriel Garcia Marquez",
    "isLiving": false,
    "books": []
  },
  {
    "id": 2,
    "name": "Douglas Adams",
    "isLiving": false,
    "books": []
  },
  {
    "id": 3,
    "name": "Jane Austen",
    "isLiving": false,
    "books": []
  },
  {
    "id": 4,
    "name": "Joseph Conrad",
    "isLiving": false,
    "books": []
  }
]

Lateral joins of this sort are very flexible, and can be nested multiple levels deep — but can quickly become
quite hairy in that case. The select shortcut function and
its lateral option can make this much less painful.

Shortcut functions and lateral joins

A key contribution of Zapatos is a set of simple shortcut functions that make everyday CRUD queries extremely easy to work
with. Furthermore, the select shortcut can be nested in order to generate LATERAL JOIN
queries, resulting in arbitrarily complex nested JSON structures with inputs and outputs that are still fully and
automatically typed.

Because the shortcuts make heavy use of Postgres’s JSON support, their return values are generally
JSONSelectables rather than plain Selectables. The only difference between these types
is that, because JSON has no native Date representation, columns that would have been returned as
Date values in a Selectable are instead returned as ISO 8601 strings (the result of
calling toJSON() on them) in a JSONSelectable.

Since you’re using Node, it’s safe to convert this string straight back to a Date by passing it to
new Date() (web browsers’ date parsing may vary). But since JavaScript’s built-in date/time support
is terrible, you’re probably better off using a library such as Luxon (where you would instead use DateTime.fromISO()).
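
For example, a sketch of both conversions on a book’s createdAt value (assuming Luxon is installed):

import { DateTime } from 'luxon';
import * as db from './zapatos/src';
import pool from './pgPool';

const book = await db.selectOne('books', { id: 1000 }).run(pool);
if (book !== undefined) {
  const asDate = new Date(book.createdAt);              // built-in Date
  const asDateTime = DateTime.fromISO(book.createdAt);  // Luxon DateTime
}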

insert

interface InsertSignatures {
  <T extends Table>(table: T, values: InsertableForTable<T>): SQLFragment<JSONSelectableForTable<T>>;
  <T extends Table>(table: T, values: InsertableForTable<T>[]): SQLFragment<JSONSelectableForTable<T>[]>;
}

The insert shortcut inserts one or more rows in a table, and returns them with any
DEFAULT values filled in. It takes a Table name and the corresponding
Insertable or Insertable[], and returns the corresponding JSONSelectable or
JSONSelectable[].

For example:

import * as db from './zapatos/src';
import pool from './pgPool';

const 
  // insert a single author
  steve = await db.insert('authors', { 
    name: 'Steven Hawking', 
    isLiving: false,
  }).run(pool),

  // insert multiple books, supplying createdAt as a SQLFragment
  [time, me] = await db.insert('books', [{ 
    authorId: steve.id, 
    title: 'A Brief History of Time',
    createdAt: db.sql`now()`,
  }, { 
    authorId: steve.id, 
    title: 'My Brief History',
    createdAt: db.sql`now()`,
  }]).run(pool),

  // insert multiple tags for those books
  tags = await db.insert('tags', [
    { bookId: time.id, tag: 'physics' },
    { bookId: me.id, tag: 'physicist' },
    { bookId: me.id, tag: 'autobiography' },
  ]).run(pool);
INSERT INTO "authors" ("isLiving", "name")
  VALUES ($1, $2)
RETURNING to_jsonb ("authors".*) AS result
[false, "Steven Hawking"]
{
  "id": 5,
  "name": "Steven Hawking",
  "isLiving": false
}
INSERT INTO "books" ("authorId", "createdAt", "title")
  VALUES ($1, now(), $2), ($3, now(), $4)
RETURNING to_jsonb ("books".*) AS result
[5, "A Brief History of Time", 5, "My Brief History"]
[
  {
    "id": 1,
    "title": "A Brief History of Time",
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  },
  {
    "id": 2,
    "title": "My Brief History",
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  }
]
INSERT INTO "tags" ("bookId", "tag")
  VALUES ($1, $2), ($3, $4), ($5, $6)
RETURNING to_jsonb ("tags".*) AS result
[1, "physics", 2, "physicist", 2, "autobiography"]
[
  {
    "tag": "physics",
    "bookId": 1
  },
  {
    "tag": "physicist",
    "bookId": 2
  },
  {
    "tag": "autobiography",
    "bookId": 2
  }
]

You’ll note that Insertables can take SQLFragment values (from the sql
tagged template function) as well as direct values (strings, numbers, and so on).

Postgres can accept up to 65,536 parameters per query (since an Int16 is used to convey the number of parameters in
the Bind message of the wire protocol). If there’s a
risk that a multiple-row INSERT could have more inserted values than that, you’ll need a mechanism to
batch them up into separate calls.
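
Zapatos doesn’t do this batching for you, but a minimal sketch might look like the following (the helper name and batch size are hypothetical, and the generated s.tags.JSONSelectable type is assumed; each tags row binds two parameters, so these batches stay well below the limit):

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

// hypothetical helper: insert rows in chunks to stay below the 65,536-parameter limit
async function insertTagsInBatches(rows: s.tags.Insertable[], batchSize = 10000) {
  const inserted: s.tags.JSONSelectable[] = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    inserted.push(...await db.insert('tags', rows.slice(i, i + batchSize)).run(pool));
  }
  return inserted;
}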

update

interface UpdateSignatures {
  <T extends Table>(table: T, values: UpdatableForTable<T>, where: WhereableForTable<T> | SQLFragment): SQLFragment<JSONSelectableForTable<T>[]>;
}

The update shortcut updates rows in the database. It takes a Table name and a
corresponding Updatable and Whereable — in that order, matching the order in a raw
SQL query. It returns a corresponding JSONSelectable[], listing every row affected.

For example, when we discover that we’ve mis-spelled a famous physicist’s name, we can do this:

import * as db from './zapatos/src';
import pool from './pgPool';

await db.update('authors', 
  { name: 'Stephen Hawking' },
  { name: 'Steven Hawking' }
).run(pool);
UPDATE
  "authors"
SET ("name") = ROW ($1)
WHERE ("name" = $2)
RETURNING to_jsonb ("authors".*) AS result
["Stephen Hawking", "Steven Hawking"]
[
  {
    "id": 5,
    "name": "Stephen Hawking",
    "isLiving": false
  }
]

Like Insertable values, Updatable values can also be SQLFragments. For
instance, take a table such as the following:

CREATE TABLE "emailAuthentication" 
( "email" citext PRIMARY KEY
, "consecutiveFailedLogins" INTEGER NOT NULL DEFAULT 0
, "lastFailedLogin" TIMESTAMPTZ );

To atomically increment the consecutiveFailedLogins value, we can do something like this:

import * as db from './zapatos/src';
import pool from './pgPool';

await db.update("emailAuthentication", { 
  consecutiveFailedLogins: db.sql`${db.self} + 1`,
  lastFailedLogin: db.sql`now()`,
}, { email: 'me@privacy.net' }).run(pool);
UPDATE
  "emailAuthentication"
SET ("consecutiveFailedLogins", "lastFailedLogin") = ROW ("consecutiveFailedLogins" + 1, now())
WHERE ("email" = $1)
RETURNING to_jsonb ("emailAuthentication".*) AS result
["me@privacy.net"]
[
  {
    "email": "me@privacy.net",
    "lastFailedLogin": "2020-05-12T12:36:11.269109+01:00",
    "consecutiveFailedLogins": 1
  }
]

upsert

interface UpsertAction { $action: 'INSERT' | 'UPDATE'; }
type UpsertReturnableForTable<T extends Table> = JSONSelectableForTable<T> & UpsertAction;

interface UpsertSignatures {
  <T extends Table>(table: T, values: InsertableForTable<T>, uniqueCols: ColumnForTable<T> | ColumnForTable<T>[], noNullUpdateCols?: ColumnForTable<T> | ColumnForTable<T>[]): SQLFragment<UpsertReturnableForTable<T>>;
  <T extends Table>(table: T, values: InsertableForTable<T>[], uniqueCols: ColumnForTable<T> | ColumnForTable<T>[], noNullUpdateCols?: ColumnForTable<T> | ColumnForTable<T>[]): SQLFragment<UpsertReturnableForTable<T>[]>;
}

The upsert shortcut issues an INSERT ... ON CONFLICT ... DO UPDATE
query. Like insert, it takes a Table name and a corresponding Insertable or
Insertable[].

It then takes, in addition, an appropriate column name or array of column names as the ‘arbiter index(es)’ on
which a conflict is to be detected. Optionally, it can also take a column name or array of column names which are
not to be overwritten with NULL in the case that the UPDATE branch is taken.

It returns an UpsertReturnable or UpsertReturnable[]. An UpsertReturnable
is the same as a JSONSelectable except that it includes one additional property,
$action, taking the string 'INSERT' or 'UPDATE' so as to indicate which
eventuality occurred for each row.

Let’s say we have a table of app subscription transactions:

CREATE TABLE "appleTransactions" 
( "environment" "appleEnvironment" NOT NULL  
, "originalTransactionId" TEXT NOT NULL
, "accountId" INTEGER REFERENCES "accounts"("id") NOT NULL
, "latestReceiptData" TEXT );

ALTER TABLE "appleTransactions" ADD CONSTRAINT "appleTransPKey" 
  PRIMARY KEY ("environment", "originalTransactionId");

When we receive a purchase receipt, we need to either store a new record or update an existing record for each
distinct (environment, originalTransactionId) it contains.

We can map the transaction data in the receipt into an appleTransactions.Insertable[],
and do what’s needed with a single upsert call. In this example, though, we hard-code the
Insertable[] for ease of exposition:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const 
  newTransactions: s.appleTransactions.Insertable[] = [{
    environment: 'PROD',
    originalTransactionId: '123456',
    accountId: 123,
    latestReceiptData: "TWFuIGlzIGRpc3Rp",
  }, {
    environment: 'PROD',
    originalTransactionId: '234567',
    accountId: 234,
    latestReceiptData: "bmd1aXNoZWQsIG5v",
  }],
  result = await db.upsert("appleTransactions", newTransactions, 
    ["environment", "originalTransactionId"]).run(pool);
INSERT INTO "appleTransactions" ("accountId", "environment", "latestReceiptData", "originalTransactionId")
  VALUES ($1, $2, $3, $4), ($5, $6, $7, $8)
ON CONFLICT ("environment", "originalTransactionId")
  DO UPDATE SET
    ("accountId", "latestReceiptData") = ROW (EXCLUDED. "accountId", EXCLUDED. "latestReceiptData")
  RETURNING to_jsonb ("appleTransactions".*) || jsonb_build_object('$action', CASE xmax
      WHEN 0 THEN
        'INSERT'
      ELSE
        'UPDATE'
      END) AS result
[123, "PROD", "TWFuIGlzIGRpc3Rp", "123456", 234, "PROD", "bmd1aXNoZWQsIG5v", "234567"]
[
  {
    "$action": "UPDATE",
    "accountId": 123,
    "environment": "PROD",
    "latestReceiptData": "TWFuIGlzIGRpc3Rp",
    "originalTransactionId": "123456"
  },
  {
    "$action": "INSERT",
    "accountId": 234,
    "environment": "PROD",
    "latestReceiptData": "bmd1aXNoZWQsIG5v",
    "originalTransactionId": "234567"
  }
]

deletes

export interface DeleteSignatures {
  <T extends Table>(table: T, where: WhereableForTable<T> | SQLFragment): SQLFragment<JSONSelectableForTable<T>[]>;
}

The deletes shortcut, unsurprisingly, deletes rows from a table (delete, unfortunately,
is a JavaScript reserved word). It takes the table name and an appropriate Whereable or
SQLFragment, and returns the deleted rows as a JSONSelectable[].

For example:

import * as db from './zapatos/src';
import pool from './pgPool';

await db.deletes('books', { title: 'Holes' }).run(pool);
DELETE FROM "books"
WHERE ("title" = $1)
RETURNING to_jsonb ("books".*) AS result
["Holes"]
[
  {
    "id": 1004,
    "title": "Holes",
    "authorId": 1002,
    "createdAt": "2020-05-12T12:36:00.430543+01:00"
  }
]

truncate

type TruncateIdentityOpts = 'CONTINUE IDENTITY' | 'RESTART IDENTITY';
type TruncateForeignKeyOpts = 'RESTRICT' | 'CASCADE';

interface TruncateSignatures {
  (table: Table | Table[], optId: TruncateIdentityOpts): SQLFragment<undefined>;
  (table: Table | Table[], optFK: TruncateForeignKeyOpts): SQLFragment<undefined>;
  (table: Table | Table[], optId: TruncateIdentityOpts, optFK: TruncateForeignKeyOpts): SQLFragment<undefined>;
}

The truncate shortcut truncates one or more tables. It takes a Table name or a
Table[] name array, and (optionally) the options
'CONTINUE IDENTITY'/'RESTART IDENTITY' and/or
'RESTRICT'/'CASCADE'.

For instance:

import * as db from './zapatos/src';
import pool from './pgPool';

await db.truncate('bankAccounts').run(pool);

One context in which this may be useful is in emptying a testing database at the start of each test run. Zapatos
provides an AllTables type to help you ensure that you’ve listed all your tables:

import * as s from './zapatos/schema';

export const allTables: s.AllTables = [
  'appleTransactions', 
  'authors', 
  'bankAccounts', 
  'books', 
  'doctors',
  'emailAuthentication', 
  'employees', 
  'shifts',
  'stores',
  'tags',
];

You can then empty the database like so:


await db.truncate(allTables, 'CASCADE').run(pool);

select, selectOne and count

export interface SelectSignatures {
  <T extends Table, C extends ColumnForTable<T>[], L extends SQLFragmentsMap, E extends SQLFragmentsMap, M extends SelectResultMode = SelectResultMode.Many>(
    table: T,
    where: WhereableForTable<T> | SQLFragment | AllType,
    options?: SelectOptionsForTable<T, C, L, E>,
    mode?: M,
  ): SQLFragment<FullSelectReturnTypeForTable<T, C, L, E, M>>;
}
export interface SelectOneSignatures {
  <T extends Table, C extends ColumnForTable<T>[], L extends SQLFragmentsMap, E extends SQLFragmentsMap>(
    table: T,
    where: WhereableForTable<T> | SQLFragment | AllType,
    options?: SelectOptionsForTable<T, C, L, E>,
  ): SQLFragment<FullSelectReturnTypeForTable<T, C, L, E, SelectResultMode.One>>;
}
export interface CountSignatures {
  <T extends Table>(
    table: T, 
    where: WhereableForTable<T> | SQLFragment | AllType, 
    options?: { columns?: ColumnForTable<T>[], alias?: string },
  ): SQLFragment<number>;
}

Yes, the signatures are beastly — and that’s leaving out the horrors behind
FullSelectReturnTypeForTable<...> — but don’t panic!

The select shortcut function, in its basic form, takes a Table name and some
WHERE conditions, and returns a SQLFragment<JSONSelectable[]>. Those
WHERE conditions can be the symbol all (meaning: no conditions), the appropriate
Whereable for the target table, or a SQLFragment from a sql template
string. Recall that a Whereable can itself contain SQLFragment values,
which means the SQLFragment variant is rarely required.

The selectOne function does the same except it gives us a
SQLFragment<JSONSelectable | undefined>, promising only a single object when run. The count
function, finally, generates a query to count matching rows, and thus returns a
SQLFragment<number>.

In use, they look like this:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';

const 
  // all books
  allBooks = await db.select('books', db.all).run(pool),

  // books by a particular author, matched with a Whereable
  authorBooks = await db.select('books', { authorId: 1000 }).run(pool),

  // a single author
  oneAuthor = await db.selectOne('authors', { id: 1000 }).run(pool),

  // a count of all authors
  numberOfAuthors = await db.count('authors', db.all).run(pool),

  // recent books by a particular author: the Whereable value is a SQLFragment using self
  recentAuthorBooks = await db.select('books', { 
    authorId: 1001,
    createdAt: db.sql`
      ${db.self} > now() - INTERVAL '7 days'` 
  }).run(pool),

  // all recent books: the WHERE condition given as a plain SQLFragment
  allRecentBooks = await db.select('books', db.sql`
    ${"createdAt"} > now() - INTERVAL '7 days'`).run(pool);
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("books".*) AS result
  FROM "books") AS "sq_books"
[
  {
    "id": 1000,
    "title": "Northern Lights",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.424295+01:00"
  },
  {
    "id": 1001,
    "title": "The Subtle Knife",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425429+01:00"
  },
  {
    "id": 1002,
    "title": "The Amber Spyglass",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425819+01:00"
  },
  {
    "id": 1003,
    "title": "The Curious Incident of the Dog in the Night-Time",
    "authorId": 1001,
    "createdAt": "2020-05-12T12:36:00.429545+01:00"
  },
  {
    "id": 1,
    "title": "A Brief History of Time",
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  },
  {
    "id": 2,
    "title": "My Brief History",
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  }
]
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("books".*) AS result
  FROM "books"
  WHERE ("authorId" = $1)) AS "sq_books"
[1000]
[
  {
    "id": 1000,
    "title": "Northern Lights",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.424295+01:00"
  },
  {
    "id": 1001,
    "title": "The Subtle Knife",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425429+01:00"
  },
  {
    "id": 1002,
    "title": "The Amber Spyglass",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425819+01:00"
  }
]
SELECT to_jsonb ("authors".*) AS result
FROM "authors"
WHERE ("id" = $1)
LIMIT $2
[1000, 1]
{
  "id": 1000,
  "name": "Philip Pullman",
  "isLiving": true
}
SELECT count("authors".*) AS result
FROM "authors"
8
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("books".*) AS result
  FROM "books"
  WHERE ("authorId" = $1
    AND ("createdAt" > now() - INTERVAL '7 days'))) AS "sq_books"
[1001]
[
  {
    "id": 1003,
    "title": "The Curious Incident of the Dog in the Night-Time",
    "authorId": 1001,
    "createdAt": "2020-05-12T12:36:00.429545+01:00"
  }
]
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("books".*) AS result
  FROM "books"
  WHERE "createdAt" > now() - INTERVAL '7 days') AS "sq_books"
[
  {
    "id": 1000,
    "title": "Northern Lights",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.424295+01:00"
  },
  {
    "id": 1001,
    "title": "The Subtle Knife",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425429+01:00"
  },
  {
    "id": 1002,
    "title": "The Amber Spyglass",
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425819+01:00"
  },
  {
    "id": 1003,
    "title": "The Curious Incident of the Dog in the Night-Time",
    "authorId": 1001,
    "createdAt": "2020-05-12T12:36:00.429545+01:00"
  },
  {
    "id": 1,
    "title": "A Brief History of Time",
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  },
  {
    "id": 2,
    "title": "My Brief History",
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  }
]

Similar to our earlier shortcut examples, once I’ve typed in 'books' or 'authors' as
the first argument to the function, TypeScript and VS Code know how to type-check and auto-complete both the
WHERE argument and the type that will be returned by run.

The select and selectOne shortcuts can also take an options object as
their third argument, which has these possible keys: columns, order, limit,
offset, extras, lateral and alias.
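
To give a flavour before we look at each key in turn, several of these options can be combined in a single
call. This is a minimal sketch against the same books schema as above:

import * as db from './zapatos/src';
import pool from './pgPool';

// the ten most recently added books, titles and timestamps only
const recentTitles = await db.select('books', db.all, {
  columns: ['title', 'createdAt'],
  order: [{ by: 'createdAt', direction: 'DESC' }],
  limit: 10,
}).run(pool);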

columns

The columns key specifies that we want to return only a subset of columns, which we might do for
reasons of efficiency. It takes an array of Column names for the appropriate table. For example:

import * as db from './zapatos/src';
import pool from './pgPool';const bookTitles = await db.select('books', db.all, 
  { columns: ['title'] }).run(pool);
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT jsonb_build_object($1::text, "title") AS result
  FROM "books") AS "sq_books"
["title"]
[
  {
    "title": "Northern Lights"
  },
  {
    "title": "The Subtle Knife"
  },
  {
    "title": "The Amber Spyglass"
  },
  {
    "title": "The Curious Incident of the Dog in the Night-Time"
  },
  {
    "title": "A Brief History of Time"
  },
  {
    "title": "My Brief History"
  }
]

The return type is appropriately narrowed to the requested columns only, so VS Code will complain if we now try
to access bookTitles[0].authorId, for example.
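
To make that concrete, here’s a minimal sketch of the compile-time behaviour (the offending line is
commented out so the snippet type-checks):

import * as db from './zapatos/src';
import pool from './pgPool';

const bookTitles = await db.select('books', db.all,
  { columns: ['title'] }).run(pool);

bookTitles[0].title;  // OK: typed as string
// bookTitles[0].authorId;  // type error: 'authorId' is not in { title: string }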

order, limit and offset

The limit and offset options each take a number and pass it directly through to SQL
LIMIT and OFFSET clauses. The order option takes an
OrderSpecForTable<T>[], which has this shape:

interface OrderSpecForTable<T extends Table> {
  by: SQLForTable<T>;
  direction: 'ASC' | 'DESC';
  nulls?: 'FIRST' | 'LAST';
}

Putting them together gives us queries like this:

import * as db from './zapatos/src';
import pool from './pgPool';const [lastButOneBook] = await db.select('books', db.all, { 
  order: [{ by: 'createdAt', direction: 'DESC' }], 
  limit: 1, 
  offset: 1,
}).run(pool);
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("books".*) AS result
  FROM "books"
  ORDER BY "createdAt" DESC
  LIMIT $1 OFFSET $2) AS "sq_books"
[1, 1]
[
  {
    "id": 2,
    "title": "My Brief History",
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  }
]

I used destructuring assignment here (const [lastButOneBook] = /* ... */;) to account for the fact
that I know this query is only going to return one response. Unfortunately, destructuring is just syntactic sugar
for indexing, and indexing in TypeScript doesn’t reflect that the result may be undefined. That means
that lastButOneBook is now typed as a JSONSelectable, but it could actually be
undefined, and that could lead to errors down the line.

To work around this, we can use the selectOne function instead, which turns the example above into
the following:

import * as db from './zapatos/src';
import pool from './pgPool';const lastButOneBook = await db.selectOne('books', db.all, {
  order: [{ by: 'createdAt', direction: 'DESC' }], 
  offset: 1 
}).run(pool);
SELECT to_jsonb ("books".*) AS result
FROM "books"
ORDER BY "createdAt" DESC
LIMIT $1 OFFSET $2
[1, 1]
{
  "id": 2,
  "title": "My Brief History",
  "authorId": 5,
  "createdAt": "2020-05-12T12:36:10.498713+01:00"
}

The { limit: 1 } option is now applied automatically. And the return type following
await needs no destructuring and is now, correctly, JSONSelectable | undefined.
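
Since undefined is now part of the type, the usual TypeScript narrowing applies. A quick sketch:

if (lastButOneBook === undefined) throw new Error('book not found');

// lastButOneBook is narrowed to JSONSelectable from here on
console.log(lastButOneBook.title);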

lateral and alias

Earlier we put together some big LATERAL joins of authors and books. This was a powerful
and satisfying application of Postgres’ JSON support … but also a bit of an eyesore, heavy on both punctuation
and manually constructed and applied types.

We can improve on this. Since SQLFragments are already designed to contain other
SQLFragments, it’s a pretty small leap to enable
select/selectOne/count calls to be nested inside other
select/selectOne calls in order to significantly simplify this kind of
LATERAL join query.

We achieve this with an additional options key, lateral, which takes a mapping of
property names to nested query shortcuts. It allows us to write an even bigger join (of books, each with their
author and tags) like so:

import * as db from './zapatos/src';
import pool from './pgPool';const booksAuthorTags = await db.select('books', db.all, {
  lateral: {
    author: db.selectOne('authors', { id: db.parent('authorId') }),
    tags: db.select('tags', { bookId: db.parent('id') }),
  }
}).run(pool);
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("books".*) || jsonb_build_object($1::text, "ljoin_0".result, $2::text, "ljoin_1".result) AS result
  FROM "books"
  LEFT JOIN LATERAL (
    SELECT to_jsonb ("authors".*) AS result
    FROM "authors"
    WHERE ("id" = "books"."authorId")
  LIMIT $3) AS "ljoin_0" ON true
  LEFT JOIN LATERAL (
    SELECT coalesce(jsonb_agg(result), '[]') AS result
    FROM (
      SELECT to_jsonb ("tags".*) AS result
      FROM "tags"
      WHERE ("bookId" = "books"."id")) AS "sq_tags") AS "ljoin_1" ON true) AS "sq_books"
["author", "tags", 1]
[
  {
    "id": 1000,
    "tags": [
      {
        "tag": "His Dark Materials",
        "bookId": 1000
      },
      {
        "tag": "1/3",
        "bookId": 1000
      }
    ],
    "title": "Northern Lights",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    },
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.424295+01:00"
  },
  {
    "id": 1001,
    "tags": [
      {
        "tag": "His Dark Materials",
        "bookId": 1001
      },
      {
        "tag": "2/3",
        "bookId": 1001
      }
    ],
    "title": "The Subtle Knife",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    },
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425429+01:00"
  },
  {
    "id": 1002,
    "tags": [
      {
        "tag": "His Dark Materials",
        "bookId": 1002
      },
      {
        "tag": "3/3",
        "bookId": 1002
      }
    ],
    "title": "The Amber Spyglass",
    "author": {
      "id": 1000,
      "name": "Philip Pullman",
      "isLiving": true
    },
    "authorId": 1000,
    "createdAt": "2020-05-12T12:36:00.425819+01:00"
  },
  {
    "id": 1003,
    "tags": [
      {
        "tag": "mystery",
        "bookId": 1003
      }
    ],
    "title": "The Curious Incident of the Dog in the Night-Time",
    "author": {
      "id": 1001,
      "name": "Mark Haddon",
      "isLiving": true
    },
    "authorId": 1001,
    "createdAt": "2020-05-12T12:36:00.429545+01:00"
  },
  {
    "id": 1,
    "tags": [
      {
        "tag": "physics",
        "bookId": 1
      }
    ],
    "title": "A Brief History of Time",
    "author": {
      "id": 5,
      "name": "Stephen Hawking",
      "isLiving": false
    },
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  },
  {
    "id": 2,
    "tags": [
      {
        "tag": "physicist",
        "bookId": 2
      },
      {
        "tag": "autobiography",
        "bookId": 2
      }
    ],
    "title": "My Brief History",
    "author": {
      "id": 5,
      "name": "Stephen Hawking",
      "isLiving": false
    },
    "authorId": 5,
    "createdAt": "2020-05-12T12:36:10.498713+01:00"
  }
]

Or we can turn this around, nesting more deeply to retrieve authors, each with their books, each with their tags:

import * as db from './zapatos/src';
import pool from './pgPool';const authorsBooksTags = await db.select('authors', db.all, {
  lateral: {
    books: db.select('books', { authorId: db.parent('id') }, {
      lateral: {
        tags: db.select('tags', { bookId: db.parent('id') }, { columns: ['tag'] })
      }
    })
  }
}).run(pool);
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT to_jsonb ("authors".*) || jsonb_build_object($1::text, "ljoin_0".result) AS result
  FROM "authors"
  LEFT JOIN LATERAL (
    SELECT coalesce(jsonb_agg(result), '[]') AS result
    FROM (
      SELECT to_jsonb ("books".*) || jsonb_build_object($2::text, "ljoin_0".result) AS result
      FROM "books"
        LEFT JOIN LATERAL (
          SELECT coalesce(jsonb_agg(result), '[]') AS result
          FROM (
            SELECT jsonb_build_object($3::text, "tag") AS result
            FROM "tags"
            WHERE ("bookId" = "books"."id")) AS "sq_tags") AS "ljoin_0" ON true
      WHERE ("authorId" = "authors"."id")) AS "sq_books") AS "ljoin_0" ON true) AS "sq_authors"
["books", "tags", "tag"]
[
  {
    "id": 1000,
    "name": "Philip Pullman",
    "books": [
      {
        "id": 1000,
        "tags": [
          {
            "tag": "His Dark Materials"
          },
          {
            "tag": "1/3"
          }
        ],
        "title": "Northern Lights",
        "authorId": 1000,
        "createdAt": "2020-05-12T12:36:00.424295+01:00"
      },
      {
        "id": 1001,
        "tags": [
          {
            "tag": "His Dark Materials"
          },
          {
            "tag": "2/3"
          }
        ],
        "title": "The Subtle Knife",
        "authorId": 1000,
        "createdAt": "2020-05-12T12:36:00.425429+01:00"
      },
      {
        "id": 1002,
        "tags": [
          {
            "tag": "His Dark Materials"
          },
          {
            "tag": "3/3"
          }
        ],
        "title": "The Amber Spyglass",
        "authorId": 1000,
        "createdAt": "2020-05-12T12:36:00.425819+01:00"
      }
    ],
    "isLiving": true
  },
  {
    "id": 1001,
    "name": "Mark Haddon",
    "books": [
      {
        "id": 1003,
        "tags": [
          {
            "tag": "mystery"
          }
        ],
        "title": "The Curious Incident of the Dog in the Night-Time",
        "authorId": 1001,
        "createdAt": "2020-05-12T12:36:00.429545+01:00"
      }
    ],
    "isLiving": true
  },
  {
    "id": 1002,
    "name": "Louis Sachar",
    "books": [],
    "isLiving": true
  },
  {
    "id": 1,
    "name": "Gabriel Garcia Marquez",
    "books": [],
    "isLiving": false
  },
  {
    "id": 2,
    "name": "Douglas Adams",
    "books": [],
    "isLiving": false
  },
  {
    "id": 3,
    "name": "Jane Austen",
    "books": [],
    "isLiving": false
  },
  {
    "id": 4,
    "name": "Joseph Conrad",
    "books": [],
    "isLiving": false
  },
  {
    "id": 5,
    "name": "Stephen Hawking",
    "books": [
      {
        "id": 1,
        "tags": [
          {
            "tag": "physics"
          }
        ],
        "title": "A Brief History of Time",
        "authorId": 5,
        "createdAt": "2020-05-12T12:36:10.498713+01:00"
      },
      {
        "id": 2,
        "tags": [
          {
            "tag": "physicist"
          },
          {
            "tag": "autobiography"
          }
        ],
        "title": "My Brief History",
        "authorId": 5,
        "createdAt": "2020-05-12T12:36:10.498713+01:00"
      }
    ],
    "isLiving": false
  }
]

You’ll note the use of the parent function to refer to a join column in the table of the containing
query. This is simply a convenience: in the join of books to authors above, we could just as well formulate the
Whereable as:

{ authorId: db.sql`${"authors"}.${"id"}` }
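
In context, that means the nested books query in the authors example above could equally have been
written like this (a sketch; db.parent is just the tidier spelling):

const authorsBooks = await db.select('authors', db.all, {
  lateral: {
    books: db.select('books',
      { authorId: db.sql`${"authors"}.${"id"}` }),
  }
}).run(pool);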

We can also nest count and selectOne calls, as you might expect. And we can join a
table to itself, though in this case we must remember to use the alias option to define an
alternative table name, resolving ambiguity.

Take this new, self-referencing table:

CREATE TABLE "employees"
( "id" SERIAL PRIMARY KEY
, "name" TEXT NOT NULL
, "managerId" INTEGER REFERENCES "employees"("id") );

Add some employees:

import * as db from './zapatos/src';
import pool from './pgPool';const
  anna = await db.insert('employees', 
    { name: 'Anna' }).run(pool),
  [beth, charlie] = await db.insert('employees', [
    { name: 'Beth', managerId: anna.id },
    { name: 'Charlie', managerId: anna.id },
  ]).run(pool),
  dougal = await db.insert('employees', 
    { name: 'Dougal', managerId: beth.id }).run(pool);
INSERT INTO "employees" ("name")
  VALUES ($1)
RETURNING to_jsonb ("employees".*) AS result
["Anna"]
{
  "id": 1,
  "name": "Anna",
  "managerId": null
}
INSERT INTO "employees" ("managerId", "name")
  VALUES ($1, $2), ($3, $4)
RETURNING to_jsonb ("employees".*) AS result
[1, "Beth", 1, "Charlie"]
[
  {
    "id": 2,
    "name": "Beth",
    "managerId": 1
  },
  {
    "id": 3,
    "name": "Charlie",
    "managerId": 1
  }
]
INSERT INTO "employees" ("managerId", "name")
  VALUES ($1, $2)
RETURNING to_jsonb ("employees".*) AS result
[2, "Dougal"]
{
  "id": 4,
  "name": "Dougal",
  "managerId": 2
}

Then query for a summary (joining the table to itself twice, with appropriate aliasing):

import * as db from './zapatos/src';
import pool from './pgPool';const people = await db.select('employees', db.all, {
  columns: ['name'], 
  lateral: {
    lineManager: db.selectOne('employees', { id: db.parent('managerId') },
      { alias: 'managers', columns: ['name'] }),
    directReports: db.count('employees', { managerId: db.parent('id') },
      { alias: 'reports' }),
  },
}).run(pool);
SELECT coalesce(jsonb_agg(result), '[]') AS result
FROM (
  SELECT jsonb_build_object($1::text, "name") || jsonb_build_object($2::text, "ljoin_0".result, $3::text, "ljoin_1".result) AS result
  FROM "employees"
  LEFT JOIN LATERAL (
    SELECT jsonb_build_object($4::text, "name") AS result
    FROM "employees" AS "managers"
    WHERE ("id" = "employees"."managerId")
  LIMIT $5) AS "ljoin_0" ON true
  LEFT JOIN LATERAL (
    SELECT count("reports".*) AS result
    FROM "employees" AS "reports"
    WHERE ("managerId" = "employees"."id")) AS "ljoin_1" ON true) AS "sq_employees"
["name", "lineManager", "directReports", "name", 1]
[
  {
    "name": "Anna",
    "lineManager": null,
    "directReports": 2
  },
  {
    "name": "Beth",
    "lineManager": {
      "name": "Anna"
    },
    "directReports": 1
  },
  {
    "name": "Charlie",
    "lineManager": {
      "name": "Anna"
    },
    "directReports": 0
  },
  {
    "name": "Dougal",
    "lineManager": {
      "name": "Beth"
    },
    "directReports": 0
  }
]

As usual, this is fully typed. If, for example, you were to forget that directReports is a count
rather than an array of employees, VS Code would soon disabuse you.

There are still a couple of limitations to type inference for nested queries. First, there’s no check that your
join makes sense (column types and REFERENCES relationships are not exploited in the
Whereable term). Second, the result type of a nested selectOne always includes
undefined even if the relevant foreign key is NOT NULL and has a REFERENCES
constraint (in which case we know that Postgres will have enforced the existence of a record).
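
To illustrate that second point, a sketch: books.authorId is NOT NULL and
REFERENCES authors(id), so a matching author row must exist, but the inferred type still
makes us guard against undefined:

const bookWithAuthor = await db.selectOne('books', { id: 1000 }, {
  lateral: {
    author: db.selectOne('authors', { id: db.parent('authorId') }),
  }
}).run(pool);

// author is typed as authors.JSONSelectable | undefined, hence the ?.
const authorName = bookWithAuthor?.author?.name;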

Nevertheless, this is a handy, flexible — but still transparent and zero-abstraction — way to generate and run
complex join queries.

extras

The extras option allows us to include additional result keys that don’t represent columns of our
tables. That could be a computed quantity, such as a geographical distance via PostGIS.

The option takes a mapping of property names to sql template strings (i.e.
SQLFragments). The RunResult type variables of those template strings are significant,
as they are passed through to the result type.

Let’s see extras in use, with an example that also shows how the lateral option can go
well beyond simply matching a foreign key to a primary key.

Take this new table:

CREATE EXTENSION postgis;
CREATE TABLE "stores"
( "id" SERIAL PRIMARY KEY
, "name" TEXT NOT NULL
, "geom" GEOMETRY NOT NULL );
CREATE INDEX "storesGeomIdx" ON "stores" USING gist("geom");

Insert some new stores:

import * as db from './zapatos/src';
import pool from './pgPool';const gbPoint = (mEast: number, mNorth: number) =>
  db.sql`ST_SetSRID(ST_Point(${db.param(mEast)}, ${db.param(mNorth)}), 27700)`;

const [brighton] = await db.insert('stores', [
  { name: 'Brighton', geom: gbPoint(530590, 104190) },
  { name: 'London', geom: gbPoint(534930, 179380) },
  { name: 'Edinburgh', geom: gbPoint(323430, 676130) },
  { name: 'Newcastle', geom: gbPoint(421430, 563130) },
  { name: 'Exeter', geom: gbPoint(288430, 92130) },
]).run(pool);
INSERT INTO "stores" ("geom", "name")
  VALUES (ST_SetSRID (ST_Point ($1, $2), 27700), $3), (ST_SetSRID (ST_Point ($4, $5), 27700), $6), (ST_SetSRID (ST_Point ($7, $8), 27700), $9), (ST_SetSRID (ST_Point ($10, $11), 27700), $12), (ST_SetSRID (ST_Point ($13, $14), 27700), $15)
RETURNING to_jsonb ("stores".*) AS result
[530590, 104190, "Brighton", 534930, 179380, "London", 323430, 676130, "Edinburgh", 421430, 563130, "Newcastle", 288430, 92130, "Exeter"]
[
  {
    "id": 1,
    "geom": "0101000020346C0000000000003C31204100000000E06FF940",
    "name": "Brighton"
  },
  {
    "id": 2,
    "geom": "0101000020346C0000000000002453204100000000A0E50541",
    "name": "London"
  },
  {
    "id": 3,
    "geom": "0101000020346C00000000000098BD13410000000044A22441",
    "name": "Edinburgh"
  },
  {
    "id": 4,
    "geom": "0101000020346C000000000000D8B8194100000000742F2141",
    "name": "Newcastle"
  },
  {
    "id": 5,
    "geom": "0101000020346C000000000000B89A114100000000207EF640",
    "name": "Exeter"
  }
]

And now query my local store (Brighton) plus its three nearest alternatives, with their distances in metres,
using PostGIS’s index-aware <-> operator:

import * as db from './zapatos/src';
import * as s from './zapatos/schema';
import pool from './pgPool';const localStore = await db.selectOne('stores', { id: 1 }, {
  columns: ['name'],
  lateral: {
    alternatives: db.select('stores', db.sql`${"id"} <> ${db.parent("id")}`, {
      alias: 'nearby',
      columns: ['name'],
      extras: {  
        distance: db.sql<s.stores.SQL, number>`
          ${"geom"} <-> ${db.parent("geom")}`,
      },
      order: [{ 
        by: db.sql`
          ${"geom"} <-> ${db.parent("geom")}`, 
        direction: 'ASC' 
      }],
      limit: 3,
    })
  }
}).run(pool);
SELECT jsonb_build_object($1::text, "name") || jsonb_build_object($2::text, "ljoin_0".result) AS result
FROM "stores"
  LEFT JOIN LATERAL (
    SELECT coalesce(jsonb_agg(result), '[]') AS result
    FROM (
      SELECT jsonb_build_object($3::text, "name") || jsonb_build_object($4::text, "geom" <-> "stores"."geom") AS result
      FROM "stores" AS "nearby"
      WHERE "id" <> "stores"."id"
      ORDER BY "geom" <-> "stores"."geom" ASC
      LIMIT $5) AS "sq_nearby") AS "ljoin_0" ON true
WHERE ("id" = $6)
LIMIT $7
["name", "alternatives", "name", "distance", 3, 1, 1]
{
  "name": "Brighton",
  "alternatives": [
    {
      "name": "London",
      "distance": 75315.1492065175
    },
    {
      "name": "Exeter",
      "distance": 242460.11878245
    },
    {
      "name": "Newcastle",
      "distance": 471743.393382462
    }
  ]
}

transaction

export enum Isolation {
  Serializable = "SERIALIZABLE",
  RepeatableRead = "REPEATABLE READ",
  ReadCommitted = "READ COMMITTED",
  SerializableRO = "SERIALIZABLE, READ ONLY",
  RepeatableReadRO = "REPEATABLE READ, READ ONLY",
  ReadCommittedRO = "READ COMMITTED, READ ONLY",
  SerializableRODeferrable = "SERIALIZABLE, READ ONLY, DEFERRABLE"
}
export async function transaction<T, M extends Isolation>(
  pool: pg.Pool,
  isolationMode: M,
  callback: (client: TxnClient<M>) => Promise<T>
): Promise<T>

The transaction helper takes a pg.Pool instance, an isolation mode, and an
async callback function. It then proceeds as follows:

  • Issue a BEGIN TRANSACTION.
  • Call the callback, passing to it a database client to use in place of a pg.Pool.
  • If a serialization error is thrown, try again after a configurable
    random delay, a configurable number of times.
  • If any other error is thrown, issue a ROLLBACK, release the database client, and re-throw the
    error.
  • Otherwise COMMIT the transaction, release the database client, and return the callback’s result.

As is implied above, for REPEATABLE READ or SERIALIZABLE isolation modes the callback
could be called several times. It’s therefore important that it doesn’t have any non-database-related side-effects
(i.e. don’t, say, bill your customer’s credit card from this function).
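
One pattern that keeps the callback safe to retry is to do only database work inside it, returning whatever
the outside world needs, and to perform side-effects once the transaction has committed. A sketch
(sendReceipt is a hypothetical helper):

import * as db from './zapatos/src';
import pool from './pgPool';

const authorCount = await db.transaction(pool, db.Isolation.Serializable,
  async txnClient => {
    // database work only: this callback may run more than once
    return db.count('authors', db.all).run(txnClient);
  });

// side-effects out here run exactly once, after COMMIT
await sendReceipt(authorCount);  // hypothetical: e.g. send a summary email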

We already saw one transaction example. Here’s another, adapted from CockroachDB’s write-up on
SERIALIZABLE.

We have a table of doctors, and one of their assigned shifts.

CREATE TABLE "doctors"
( "id" SERIAL PRIMARY KEY
, "name" TEXT NOT NULL );

CREATE TABLE "shifts" 
( "day" DATE NOT NULL
, "doctorId" INTEGER NOT NULL REFERENCES "doctors"("id")
, PRIMARY KEY ("day", "doctorId") );

We populate those tables with two doctors and two days’ shifts:

import * as db from './zapatos/src';
import pool from './pgPool';await db.insert('doctors', [
  { id: 1, name: 'Annabel' }, 
  { id: 2, name: 'Brian' },
]).run(pool);

await db.insert('shifts', [
  { day: '2020-12-24', doctorId: 1 },
  { day: '2020-12-24', doctorId: 2 },
  { day: '2020-12-25', doctorId: 1 },
  { day: '2020-12-25', doctorId: 2 },
]).run(pool);
INSERT INTO "doctors" ("id", "name")
  VALUES ($1, $2), ($3, $4)
RETURNING to_jsonb ("doctors".*) AS result
[1, "Annabel", 2, "Brian"]
[
  {
    "id": 1,
    "name": "Annabel"
  },
  {
    "id": 2,
    "name": "Brian"
  }
]
INSERT INTO "shifts" ("day", "doctorId")
  VALUES ($1, $2), ($3, $4), ($5, $6), ($7, $8)
RETURNING to_jsonb ("shifts".*) AS result
["2020-12-24", 1, "2020-12-24", 2, "2020-12-25", 1, "2020-12-25", 2]
[
  {
    "day": "2020-12-24",
    "doctorId": 1
  },
  {
    "day": "2020-12-24",
    "doctorId": 2
  },
  {
    "day": "2020-12-25",
    "doctorId": 1
  },
  {
    "day": "2020-12-25",
    "doctorId": 2
  }
]

The important business logic is that there must always be at least one doctor on shift. Now let’s say
both doctors happen at the same moment to request leave for 25 December.

import * as db from './zapatos/src';
import pool from './pgPool';const requestLeaveForDoctorOnDay = async (doctorId: number, day: string) =>
  db.transaction(pool, db.Isolation.Serializable, async txnClient => {
    const otherDoctorsOnShift = await db.count('shifts', {
      doctorId: db.sql`${db.self} != ${db.param(doctorId)}`,
      day,
    }).run(txnClient);
    if (otherDoctorsOnShift === 0) return false;

    await db.deletes('shifts', { day, doctorId }).run(txnClient);
    return true;
  });

const [leaveBookedForAnnabel, leaveBookedForBrian] = await Promise.all([
  
  requestLeaveForDoctorOnDay(1, '2020-12-25'),
  requestLeaveForDoctorOnDay(2, '2020-12-25'),
]);

console.log(`Leave booked for:
  Annabel – ${leaveBookedForAnnabel}
  Brian – ${leaveBookedForBrian}`);
START TRANSACTION ISOLATION LEVEL SERIALIZABLE
START TRANSACTION ISOLATION LEVEL SERIALIZABLE
SELECT count("shifts".*) AS result
FROM "shifts"
WHERE ("day" = $1
  AND ("doctorId" != $2))
["2020-12-25", 1]
SELECT count("shifts".*) AS result
FROM "shifts"
WHERE ("day" = $1
  AND ("doctorId" != $2))
["2020-12-25", 2]
1
DELETE FROM "shifts"
WHERE ("day" = $1
    AND "doctorId" = $2)
RETURNING to_jsonb ("shifts".*) AS result
["2020-12-25", 1]
1
DELETE FROM "shifts"
WHERE ("day" = $1
    AND "doctorId" = $2)
RETURNING to_jsonb ("shifts".*) AS result
["2020-12-25", 2]
[
  {
    "day": "2020-12-25",
    "doctorId": 1
  }
]
COMMIT
[
  {
    "day": "2020-12-25",
    "doctorId": 2
  }
]
COMMIT
ROLLBACK
Transaction #1 rollback (code 40001) on attempt 1 of 5, retrying in 66ms
Retrying transaction #1, attempt 2 of 5
START TRANSACTION ISOLATION LEVEL SERIALIZABLE
SELECT count("shifts".*) AS result
FROM "shifts"
WHERE ("day" = $1
  AND ("doctorId" != $2))
["2020-12-25", 2]
0
COMMIT
Leave booked for:
  Annabel – true
  Brian – false

Expanding the results, we see that one of the requests is retried and then fails — as it must to retain one
doctor on shift — thanks to the SERIALIZABLE isolation (REPEATABLE READ, which is one
isolation level weaker, wouldn’t help).

TxnSatisfying types

export namespace TxnSatisfying {
  export type Serializable = Isolation.Serializable;
  export type RepeatableRead = Serializable | Isolation.RepeatableRead;
  export type ReadCommitted = RepeatableRead | Isolation.ReadCommitted;
  export type SerializableRO = Serializable | Isolation.SerializableRO;
  export type RepeatableReadRO = SerializableRO | RepeatableRead | Isolation.RepeatableReadRO;
  export type ReadCommittedRO = RepeatableReadRO | ReadCommitted | Isolation.ReadCommittedRO;
  export type SerializableRODeferrable = SerializableRO | Isolation.SerializableRODeferrable;
}

If you find yourself passing transaction clients around, you may find the TxnSatisfying types
useful. For example, if you type a txnClient argument to a function as
TxnSatisfying.RepeatableRead, you can call it with Isolation.Serializable or
Isolation.RepeatableRead but not Isolation.ReadCommitted.
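
A sketch of that in practice (assuming TxnClient and TxnSatisfying are exported
from the library, as the signatures above suggest):

import * as db from './zapatos/src';
import pool from './pgPool';

// this function requires REPEATABLE READ or anything stronger
async function countAuthors(txnClient: db.TxnClient<db.TxnSatisfying.RepeatableRead>) {
  return db.count('authors', db.all).run(txnClient);
}

// OK: SERIALIZABLE satisfies TxnSatisfying.RepeatableRead
await db.transaction(pool, db.Isolation.Serializable,
  txnClient => countAuthors(txnClient));

// type error if uncommented: READ COMMITTED is too weak
// await db.transaction(pool, db.Isolation.ReadCommitted,
//   txnClient => countAuthors(txnClient));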

Run-time configuration

There are a few configuration options you can set at runtime:

export interface Config {
  transactionAttemptsMax: number;
  transactionRetryDelay: { minMs: number, maxMs: number };
  queryListener?(str: any): void;
  resultListener?(str: any): void;
  transactionListener?(str: any): void;
};

Read the current values with getConfig() and set new values with
setConfig(newConfig: Partial<Config>).

  • transactionAttemptsMax determines how many times the transaction helper will try to
    execute a query in the face of serialization errors before giving up. It defaults to 5.

  • transactionRetryDelay determines the range within which the transaction helper will
    pick a random delay before each retry. It’s expressed in milliseconds and defaults to
    { minMs: 25, maxMs: 250 }.

  • queryListener and resultListener, if set, are called from the run
    function, and receive the results of (respectively) compiling and then executing and transforming each query.

  • transactionListener, similarly, is called with messages about transaction retries.

You might use one or more of the three listener functions to implement logging. They’re also used in generating
the “Show generated SQL, results” elements of this documentation.
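
For instance, a minimal logging setup might look like this (a sketch):

import * as db from './zapatos/src';

db.setConfig({
  transactionAttemptsMax: 10,
  transactionRetryDelay: { minMs: 50, maxMs: 500 },
  queryListener: (query: any) => console.log('query >', query),
  resultListener: (result: any) => console.log('result >', result),
  transactionListener: (message: any) => console.log('txn >', message),
});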

This documentation

This document is generated from a separate repository. All
generated SQL has been funnelled through pgFormatter for
easier reading.

Fixes, features and contributions

If you’re asking for or contributing new work, my response is likely to reflect these principles:

Correct, consistent, comprehensible. I’m pretty likely to accept pull requests that fix bugs or
improve readability or consistency without any major trade-offs. I’ll also do my best to act on clear, minimal
test cases that demonstrate unambiguous bugs.

Small is beautiful. I’m less likely to accept pull requests for features that significantly
complicate the code base either to address niche use-cases or to eke out minor performance gains that are almost
certainly swamped by network and database latencies.

Scratching my own itch. I’m unlikely to put a lot of my own effort into features I don’t
currently need … unless we’re talking about paid consultancy, which I’m more than happy to discuss.

What’s next

Some nice-to-haves would include:

  • More complete typing of lateral queries. It would be great to make use of
    foreign key relationships and suchlike in generated types and the shortcut functions that make use of them.

  • Tests. The proprietary server API that’s the original consumer of this library, over at Psychological Technologies, has a test suite that exercises most of the
    code base at least a little. Nevertheless, a proper test suite is still kind of indispensable. It should test
    not just returned values but also inferred types — which is a little fiddly.

Licence

This software is released under the MIT licence.

Copyright © 2020 George MacKerron

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the “Software”), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of
the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.

Zapatos = shoes
