pg_sample 0.01
When you have a relatively large database (tables with, say, millions or billions of rows), it can be difficult to generate smaller datasets to work with, especially if foreign keys are heavily used.
That's where this script comes in. It creates a smaller instance of each table, along with any additional rows needed to satisfy foreign key constraints (circular dependencies are supported).
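The core idea can be sketched in miniature. The snippet below is an illustrative Python toy, not pg_sample's actual Perl implementation; the tables and names are made up:

```python
# Toy illustration of foreign-key-closed sampling: copy a limited subset
# of each table, then pull in any parent rows the subset references so
# that every foreign key in the sample still resolves.
# (These tables and names are hypothetical, not from pg_sample itself.)

users = {1: "alice", 2: "bob", 3: "carol"}   # users(id, name)
orders = {10: 1, 11: 3, 12: 2}               # orders(id, user_id -> users.id)

def sample_with_fk_closure(limit):
    # Step 1: copy at most `limit` rows from the child table.
    sampled_orders = dict(list(orders.items())[:limit])
    # Step 2: add every referenced parent row, even one that wasn't in
    # the first `limit` rows of users, so foreign keys remain valid.
    sampled_users = {uid: users[uid] for uid in sampled_orders.values()}
    return sampled_users, sampled_orders

sampled_users, sampled_orders = sample_with_fk_closure(2)
print(sorted(sampled_users))   # [1, 3] -- user 3 pulled in for order 11
```

This is also why sample tables can end up larger than the initial row limit: the closure step keeps adding rows until every reference is satisfied.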
The script's operation closely resembles that of pg_dump. For example, assuming we have a large database named largedb, a smaller version could be produced with:

    pg_sample largedb | psql smalldb

The smalldb database would then contain a subset of largedb's data.
Here are the command-line options (many of which mirror pg_dump):
See also: the pg_sample GitHub source repository
-a, --data-only
    Output only the data, not the schema (data definitions).
-E ENCODING, --encoding=ENCODING
    Use the specified character set encoding. If not specified, uses the
    PGCLIENTENCODING environment variable, if defined; otherwise, uses
    the encoding of the database.
-f FILENAME, --file=FILENAME
    Send output to the specified file. If omitted, standard output is used.
--force
    Drop the sample schema if it already exists.
--keep
    Don't delete the sample schema when the script finishes.
--limit=LIMIT
    The maximum number of rows to initially copy from each table
    (defaults to 100). Note that sample tables may end up with
    significantly more rows in order to satisfy foreign key constraints.
--random
    Randomize the rows initially selected from each table. May
    significantly increase the running time of the script.
--sample-schema=SCHEMA
    The schema name to use for the sample database (defaults to
--trace
    Turn on Perl DBI tracing. See the DBI module documentation for details.
--verbose
    Output status information to standard error.
The following options control the database connection parameters.
-h HOSTNAME, --host=HOSTNAME
    The host name to connect to. Defaults to the PGHOST environment
    variable if not specified.
-p PORT, --port=PORT
    The database port to connect to. Defaults to the PGPORT environment
    variable, if set; otherwise, the default port is used.
-U USERNAME, --username=USERNAME
    User name to connect as.
-W PASSWORD, --password=PASSWORD
    Password to connect with.
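Several of these options can be combined in one invocation. A hedged sketch, assuming the long-option spellings above (the host, user, database, and file names here are placeholders):

```shell
# Sample up to 1,000 random rows per table from a remote largedb,
# writing the result to a file rather than piping straight into psql.
# (db.example.com, app, and sample.sql are placeholder values.)
pg_sample --limit=1000 --random --verbose \
  --host=db.example.com --username=app \
  --file=sample.sql largedb

# Later, load the sample into a fresh database:
createdb smalldb
psql smalldb < sample.sql
```

Writing to a file first makes it easy to inspect the dump, or to load the same sample into several scratch databases.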