Even though it is not covered in the official documentation, it is possible to avoid storing WAL archives in pgBackRest setups. In certain scenarios, you may want to skip storing archives to reduce costs, especially if you use cloud storage for your backups.
Those situations could be:

- Test environments, where you run intensive tests but don't need or care about PITR (Point-in-Time Recovery).
- Highly updated Postgres databases, where you can recover changes by other means, such as scraping or restoring from external sources.
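As a sketch of what such a setup could look like, the repository configuration below relaxes the backup-time WAL check so backups don't depend on archived segments. This is illustrative only: the stanza name and paths are made up, and you should verify the `archive-check` option against your pgBackRest version's configuration reference before relying on it.

```ini
# /etc/pgbackrest/pgbackrest.conf -- minimal sketch, not a verified recipe
[global]
repo1-path=/var/lib/pgbackrest
# Skip the check that the WAL needed by the backup has been archived.
archive-check=n

# "demo" is a hypothetical stanza name
[demo]
pg1-path=/var/lib/postgresql/16/main
```

On the Postgres side, you would then leave `archive_command` unset (or pointing elsewhere), since nothing is expected to land in the repository's archive.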
ℹ️ PGCat's sharding feature is currently experimental. The code for the laboratory can be found at lab_pgcat.

PGCat sharding mechanisms

As of today, PGCat supports two mechanisms of sharding through extended syntax:
- By setting the shard explicitly: SET SHARD TO '<index>';, which allows you to do deterministic sharding, where you choose your shard according to a rule, such as a lookup table, region, customer group, etc. This is great if you have well-delimited groups or an even distribution.
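To make the "deterministic rule" concrete, here is a small sketch that derives a shard index from a customer identifier by hashing it and reducing it modulo the shard count, then emits the corresponding SET SHARD statement. The customer id, shard count, and hashing choice (`cksum`) are all hypothetical; any stable hash works as long as every client applies the same rule.

```shell
#!/bin/sh
# Hypothetical deterministic sharding rule: hash the customer id,
# then reduce it modulo the number of shards.
CUSTOMER_ID="customer_42"
SHARD_COUNT=4

# cksum gives a stable CRC for the same input on any POSIX system.
HASH=$(printf '%s' "$CUSTOMER_ID" | cksum | cut -d' ' -f1)
SHARD=$((HASH % SHARD_COUNT))

# The statement you would send to PGCat before your queries:
echo "SET SHARD TO '$SHARD';"
```

The same id always maps to the same shard, which is what makes routing by customer group predictable.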
Introduction

ℹ️ The laboratory for this post can be found at babelfishpg-lab.

⚠️ In this post, we cover version 4.2.0. There is a slight difference in the logging settings and format from 4.1.1.

Even though BabelfishPG is a Postgres flavor, its configuration is done through its extensions (babelfish_tds and babelfish_tsql). In this post, we focus on how to log query timings and how to enable showing query plans.
For vanilla Postgres, you can log slow queries by setting log_min_duration_statement, but this won't log T-SQL statements.
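For the vanilla side, the threshold goes in postgresql.conf; the value below is illustrative and should be tuned to your workload:

```ini
# postgresql.conf -- log any statement running longer than 250 ms
# (threshold is illustrative)
log_min_duration_statement = 250
```

The setting can be applied without a restart via `SELECT pg_reload_conf();`. Again, this only captures statements arriving through the Postgres wire protocol, not T-SQL coming in over the TDS port.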
Combining query_timeout and statement_timeout

pgbouncer's documentation states that query_timeout should be set slightly higher than Postgres' statement_timeout. Although this applies in most cases, it ultimately depends on the business requirements.
Generally, statement_timeout should be set to the 99th percentile of your statements' duration. However, there are cases where certain statements require larger timeouts, due to particularities like a large customer set or larger fields, as in TOAST compression cases.
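A minimal sketch of the pairing, with illustrative values (a 30-second server-side limit and a slightly higher pgbouncer limit so the server, not the pooler, is the one that normally cancels the query):

```ini
; pgbouncer.ini -- values are illustrative
[pgbouncer]
; Queries running longer than this (seconds) are cancelled by pgbouncer.
; Kept slightly above statement_timeout so Postgres cancels first.
query_timeout = 35
```

On the Postgres side you would set the matching limit, for example with `ALTER DATABASE app SET statement_timeout = '30s';` (the database name `app` is hypothetical). Statements that legitimately need more time can override it per role or per session.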
It supports TDS, right?

Some things happen once in a lifetime, and the story behind this one is quite particular. A customer required something that at first sounded counterintuitive: migrating an existing Postgres database to BabelfishPG.
The thing was that the application was a critical part of the business core, with a large amount of code that would require years to migrate fully to another storage strategy. But the real reason was that their customer didn't want to be tied to any proprietary licensing model, and required Open Source solutions.
The next post will cover performance tests using tdspool.
BabelfishPG connection architecture

Inherited from Postgres' connection architecture, each connection through the TDS port instantiates a Postgres backend. As in Postgres, BabelfishPG needs a middleware to funnel connections through the TDS port, to avoid running out of connections and processing capacity on the database server.
For Postgres, we have plenty of options: PGBouncer, Odyssey, PGCat, you name it. For T-SQL (read: MSSQL-compatible language), there aren't many open source solutions.
NOTE: This has not yet been updated in the Earthly documentation, but I'm pretty sure it will be soon (and I'll update this post accordingly).
What do labels do to reproducible builds?

By default, Earthly adds dev.earthly.* labels to the built image. You can find these by running docker inspect <imageID> | jq -r '.[].Config.Labels'.
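To show only what that jq filter extracts, here is the same filter run against a stripped-down sample of `docker inspect` output; the label names and values below are made up for illustration:

```shell
#!/bin/sh
# Minimal stand-in for `docker inspect <imageID>` output: a JSON array
# of objects, each with Config.Labels. Labels here are hypothetical.
cat > inspect-sample.json <<'EOF'
[{"Config": {"Labels": {"dev.earthly.version": "v0.8.0", "dev.earthly.git-sha": "abc123"}}}]
EOF

# Same filter as against the real daemon: pull out just the labels map.
jq -r '.[].Config.Labels' inspect-sample.json
```

Since labels are baked into the image configuration, any run-to-run variation in their values changes the image digest, which is exactly why they matter for reproducible builds.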
Here’s an example of the image configuration including the dev.earthly.* labels:
"Config": { .