This simplifies some of the error handling blocks: we can now use this
macro unconditionally, without worrying about multiple PQclear() calls
on the same result causing a double free.
Per discussion with Daniele.
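As a rough illustration of the idea (the macro name CLEARPGRES and its
exact shape here are assumptions for this sketch, not necessarily the
code as committed), a "clear and reset" macro makes repeated calls on
the same result harmless:

    /*
     * Sketch only: clear a PGresult and NULL the pointer so that a
     * later, unconditional invocation in an error path is a no-op
     * rather than a double free.
     */
    #define CLEARPGRES(res)          \
        do {                         \
            if ((res) != NULL)       \
            {                        \
                PQclear(res);        \
                (res) = NULL;        \
            }                        \
        } while (0)

With something like this in place, every error path can simply invoke
the macro before returning, whether or not the result was already freed.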
Adds a new --jobs command-line argument to specify how many worker
connections to use. These worker connections stick around while
processing the table(s) in a single database. For each table, the
indexes to be built are parceled out among these worker connections,
and each CREATE INDEX ... request is submitted with PQsendQuery(),
i.e. in a non-blocking fashion.
Most of this is still rather crude, in particular the
while (num_active_workers) ... loop in rebuild_indexes(), but
it seems to be working, so I'm committing here.
Per Issue #18. SimpleStringList code borrowed from pg_dump and a
pending patch to add similar functionality to pg_restore,
clusterdb, vacuumdb, and reindexdb.
The error handling in reorg_one_table() could still be much improved,
so that an error while processing a single table doesn't necessarily
cause pg_reorg to bail out and skip further tables, but I'll leave that
for another day.
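For illustration only, the dispatch-and-poll pattern described above
might look roughly like the following. This is not the actual
rebuild_indexes() code; the function name, the worker_conns and
create_index_sql arrays, and the busy-wait polling are all assumptions
of the sketch.

    /*
     * Sketch of parceling CREATE INDEX statements out to worker
     * connections with PQsendQuery() and polling them until every
     * statement has finished.  Illustration only: names and structure
     * are invented here, and a real loop would wait on the connection
     * sockets (select()/poll()) instead of spinning.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void
    run_index_builds(PGconn **worker_conns, int num_workers,
                     const char **create_index_sql, int num_indexes)
    {
        int *busy = calloc(num_workers, sizeof(int));
        int  next = 0;          /* next statement to hand out */
        int  num_active = 0;    /* workers currently running a statement */

        /* Hand the first statement to each worker. */
        for (int i = 0; i < num_workers && next < num_indexes; i++)
        {
            if (PQsendQuery(worker_conns[i], create_index_sql[next]))
            {
                busy[i] = 1;
                next++;
                num_active++;
            }
            else
                fprintf(stderr, "send failed: %s",
                        PQerrorMessage(worker_conns[i]));
        }

        /* Poll the busy workers until everything has been processed. */
        while (num_active > 0)
        {
            for (int i = 0; i < num_workers; i++)
            {
                PGconn   *conn = worker_conns[i];
                PGresult *res;

                if (!busy[i])
                    continue;
                if (!PQconsumeInput(conn))
                {
                    fprintf(stderr, "%s", PQerrorMessage(conn));
                    busy[i] = 0;        /* give up on this worker */
                    num_active--;
                    continue;
                }
                if (PQisBusy(conn))
                    continue;           /* still running, check again later */

                /* Drain the results of the finished statement. */
                while ((res = PQgetResult(conn)) != NULL)
                {
                    if (PQresultStatus(res) != PGRES_COMMAND_OK)
                        fprintf(stderr, "CREATE INDEX failed: %s",
                                PQerrorMessage(conn));
                    PQclear(res);
                }
                busy[i] = 0;
                num_active--;

                /* Give this worker the next pending statement, if any. */
                if (next < num_indexes &&
                    PQsendQuery(conn, create_index_sql[next]))
                {
                    busy[i] = 1;
                    next++;
                    num_active++;
                }
            }
        }
        free(busy);
    }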
This is a first pass at Daniele's suggestion in Issue #8, although it is
definitely still buggy -- it is still possible for another transaction
to sneak in an AccessExclusive lock and perform DDL either before the
ACCESS SHARE lock is acquired or immediately after it is released.
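For context, the locking pattern at issue is roughly the following
simplified sketch (not the actual pg_reorg code; the table name and
function are invented here). Holding an ACCESS SHARE lock conflicts
with the ACCESS EXCLUSIVE lock that DDL needs, but nothing prevents DDL
from slipping in before the LOCK statement runs or right after the
transaction ends, which is exactly the window described above.

    /*
     * Simplified sketch of guarding against concurrent DDL by holding
     * an ACCESS SHARE lock for the duration of a transaction.
     * Illustration only; not the actual pg_reorg code.
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    run_guarded_by_access_share(PGconn *conn)
    {
        const char *steps[] = {
            "BEGIN",
            "LOCK TABLE target_table IN ACCESS SHARE MODE",
            /* ... the statements that must not race with DDL ... */
            "COMMIT",
        };
        int nsteps = (int) (sizeof(steps) / sizeof(steps[0]));

        for (int i = 0; i < nsteps; i++)
        {
            PGresult *res = PQexec(conn, steps[i]);

            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                fprintf(stderr, "%s failed: %s",
                        steps[i], PQerrorMessage(conn));
                PQclear(res);
                PQclear(PQexec(conn, "ROLLBACK"));
                return 0;
            }
            PQclear(res);
        }
        return 1;
    }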
Fix an issue that occurs when there are views or functions depending on
columns after dropped ones. The issue was reported by depesz, and the
original patch is by Denish Patel.
Improved documentation on how to build binaries from source.
COPYRIGHT updated.
- Add wait-timeout option and use SET statement_timeout instead of NOWAIT.
This avoids infinite NOWAIT retry loops when reorganizing heavily accessed
tables (a sketch of this pattern follows below).
- Support native build with MSVC on Windows.
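As an illustration of the statement_timeout-based locking mentioned
above, here is a sketch under assumptions: the table name, the timeout
value, the retry count, and the function name are all invented, and
this is not the actual pg_reorg code.

    /*
     * Sketch: acquire a table lock by waiting up to statement_timeout
     * rather than failing immediately with NOWAIT.  With NOWAIT, a
     * heavily accessed table can make the lock attempt fail (and be
     * retried) indefinitely; with a timeout, each attempt waits in the
     * lock queue for a bounded time.
     */
    #include <libpq-fe.h>

    static int
    lock_table_with_timeout(PGconn *conn, int retries)
    {
        for (int attempt = 0; attempt < retries; attempt++)
        {
            PGresult *res;

            PQclear(PQexec(conn, "BEGIN"));
            PQclear(PQexec(conn, "SET LOCAL statement_timeout = '1s'"));

            res = PQexec(conn,
                         "LOCK TABLE target_table IN ACCESS EXCLUSIVE MODE");
            if (PQresultStatus(res) == PGRES_COMMAND_OK)
            {
                PQclear(res);
                return 1;   /* lock held; the caller does its work and COMMITs */
            }

            /* Timed out (or failed): abort this attempt and retry. */
            PQclear(res);
            PQclear(PQexec(conn, "ROLLBACK"));
        }
        return 0;
    }

On success the caller would typically reset the timeout (for example
SET LOCAL statement_timeout = 0) before running the long statements,
since the SET LOCAL stays in effect until the transaction ends.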