An alternative way to fix #168 that is not as invasive as the changes
in #171.
This currently breaks existing behaviour of the program, as tables
specified on the command line are not found.
* Use poll() if it is available, or select() otherwise, to wait
efficiently for index builds on the worker connections to finish
(see the sketch after this list).
* fix off-by-one error when initially assigning workers
* move PQsetnonblocking() calls to setup_workers()
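A minimal sketch of that wait, assuming hypothetical names
(wait_for_workers, workers, num_workers) and a small fixed worker limit;
this is not pg_reorg's actual code:

    /*
     * Sketch: wait for in-flight queries on several libpq worker
     * connections using poll().  The limit of 64 workers is an
     * illustrative assumption.
     */
    #include <poll.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    wait_for_workers(PGconn **workers, int num_workers)
    {
        bool finished[64] = {false};
        int  remaining = num_workers;

        while (remaining > 0)
        {
            struct pollfd fds[64];
            int i;

            for (i = 0; i < num_workers; i++)
            {
                /* poll() ignores negative fds, so finished workers drop out */
                fds[i].fd = finished[i] ? -1 : PQsocket(workers[i]);
                fds[i].events = POLLIN;
                fds[i].revents = 0;
            }

            /* block until at least one worker socket becomes readable */
            if (poll(fds, num_workers, -1) < 0)
            {
                perror("poll");
                return;
            }

            for (i = 0; i < num_workers; i++)
            {
                if (finished[i] || !(fds[i].revents & POLLIN))
                    continue;

                /* Absorb the data that arrived; PQisBusy() tells us whether
                 * a complete result can be fetched without blocking. */
                PQconsumeInput(workers[i]);
                while (!PQisBusy(workers[i]))
                {
                    PGresult *res = PQgetResult(workers[i]);

                    if (res == NULL)        /* this worker's query is done */
                    {
                        finished[i] = true;
                        remaining--;
                        break;
                    }
                    if (PQresultStatus(res) != PGRES_COMMAND_OK)
                        fprintf(stderr, "index build failed: %s",
                                PQresultErrorMessage(res));
                    PQclear(res);
                }
            }
        }
    }

A select() fallback has the same shape, building an fd_set from the
PQsocket() values instead of a pollfd array.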
Adds a new --jobs command-line argument to specify how many worker
connections you want. These worker connections should stick around
while processing table(s) in a single database. For each table,
parcel out the indexes to be built among these worker connections,
submitting each CREATE INDEX ... request using PQsendQuery(), i.e.
in non-blocking fashion.
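A hedged sketch of the hand-out step; dispatch_index_builds and
index_ddl are illustrative names, not pg_reorg's actual API:

    /*
     * Sketch: start one CREATE INDEX per idle worker connection.
     * "index_ddl" holds the generated CREATE INDEX statements; any
     * remaining statements would be submitted as workers free up.
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    dispatch_index_builds(PGconn **workers, int num_workers,
                          const char **index_ddl, int num_indexes)
    {
        int i;

        for (i = 0; i < num_indexes && i < num_workers; i++)
        {
            /* PQsendQuery() returns at once; the result is collected later
             * via PQconsumeInput()/PQgetResult() on the worker's socket. */
            if (!PQsendQuery(workers[i], index_ddl[i]))
            {
                fprintf(stderr, "could not submit \"%s\": %s",
                        index_ddl[i], PQerrorMessage(workers[i]));
                return -1;
            }
        }
        return i;       /* number of builds started */
    }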
Most of this is still rather crude, in particular the
while (num_active_workers) ... loop in rebuild_indexes(), but
it seems to be working, so I'm committing here.
Per Issue #18. SimpleStringList code borrowed from pg_dump and a
pending patch to add similar functionality to pg_restore,
clusterdb, vacuumdb, and reindexdb.
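For readers unfamiliar with it, SimpleStringList is essentially a singly
linked list of strings with cheap tail appends; the sketch below only
approximates the pg_dump version and is not a verbatim copy:

    /* Approximation of pg_dump's SimpleStringList (not a verbatim copy). */
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct SimpleStringListCell
    {
        struct SimpleStringListCell *next;
        char        val[1];             /* actually variable length */
    } SimpleStringListCell;

    typedef struct SimpleStringList
    {
        SimpleStringListCell *head;
        SimpleStringListCell *tail;
    } SimpleStringList;

    static void
    simple_string_list_append(SimpleStringList *list, const char *val)
    {
        SimpleStringListCell *cell;

        /* allocate the cell header and the string in one chunk */
        cell = malloc(offsetof(SimpleStringListCell, val) + strlen(val) + 1);
        cell->next = NULL;
        strcpy(cell->val, val);

        if (list->tail)
            list->tail->next = cell;
        else
            list->head = cell;
        list->tail = cell;
    }

In pg_reorg this presumably holds the table names collected from the
command line.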
The error handling in reorg_one_table() could still be much improved,
so that an error while processing a single table doesn't necessarily
cause pg_reorg to bail out and skip further tables, but I'll leave
that for another day.
This is a first pass at Daniele's suggestion in Issue #8, although it is
definitely still buggy -- it is still possible for another transaction
to acquire an ACCESS EXCLUSIVE lock and perform DDL either before the
ACCESS SHARE lock is acquired or immediately after it is released.
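To make the race concrete, a hypothetical libpq fragment (table name and
structure are illustrative, not pg_reorg's code) with the two windows
marked:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    hold_access_share_lock(PGconn *conn)
    {
        PGresult *res;

        PQclear(PQexec(conn, "BEGIN"));

        /* window 1: concurrent DDL can still slip in before this runs */
        res = PQexec(conn,
                "LOCK TABLE public.target_table IN ACCESS SHARE MODE");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "lock failed: %s", PQresultErrorMessage(res));
        PQclear(res);

        /* while this transaction holds the lock, concurrent DDL is blocked */

        PQclear(PQexec(conn, "COMMIT"));
        /* window 2: the lock is released, DDL can again slip in here */
    }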
Fix a failure when there are views or functions depending on columns
after dropped ones.
The issue was reported by depesz, and original patch by Denish Patel.
Improved documentation on how to build binaries from source.
COPYRIGHT updated.
- Add wait-timeout option and use SET statement_timeout instead of NOWAIT.
This avoids infinite NOWAIT retry loops when reorganizing heavily accessed
tables (see the sketch after this list).
- Support native build with MSVC on Windows.
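A sketch of the statement_timeout-based wait, assuming a 1-second
per-attempt timeout, a bounded retry count, and a hypothetical table name;
the actual option handling may differ:

    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    lock_with_timeout(PGconn *conn)
    {
        int attempt;

        /* Wait up to 1 second per attempt instead of failing immediately
         * with NOWAIT, then retry a bounded number of times. */
        PQclear(PQexec(conn, "SET statement_timeout = 1000"));

        for (attempt = 0; attempt < 10; attempt++)
        {
            PGresult *res;

            PQclear(PQexec(conn, "BEGIN"));
            res = PQexec(conn,
                    "LOCK TABLE public.some_table IN ACCESS EXCLUSIVE MODE");

            if (PQresultStatus(res) == PGRES_COMMAND_OK)
            {
                PQclear(res);
                /* Keep the transaction open; the caller does its work and
                 * commits.  Clear the timeout so later statements can run. */
                PQclear(PQexec(conn, "RESET statement_timeout"));
                return 0;
            }

            /* timed out (or failed); abort this attempt and try again */
            fprintf(stderr, "lock attempt %d failed: %s",
                    attempt + 1, PQresultErrorMessage(res));
            PQclear(res);
            PQclear(PQexec(conn, "ROLLBACK"));
        }

        PQclear(PQexec(conn, "RESET statement_timeout"));
        return -1;      /* gave up after the configured wait */
    }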