An alternative way to fix #168 which is not as invasive as the changes
in #171.
This currently breaks the existing behaviour of the program, as the tables
specified on the command line are not found.
The __attribute__ macro is not defined by MSVC, and it is not essential to
the implementation. All it does is tell the compiler that this function
is similar to printf and expects a printf-like format string.
So for MSVC we define __attribute__ as a macro that does nothing.
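A minimal sketch of the approach (the guard and the example declaration
below are illustrative, not the exact pgut code):

    /* Under MSVC, make __attribute__ expand to nothing. */
    #ifdef _MSC_VER
    #define __attribute__(x)
    #endif

    /* With GCC/Clang the annotation still enables printf-style format
     * checking; the function name here is only an example. */
    extern void log_fmt(const char *fmt, ...)
        __attribute__((format(printf, 1, 2)));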
This patch introduces one global counter (temp_obj_num) which counts the number of temporary objects created by pg_repack. Deleting the temporary objects in the correct order, according to this count, avoids unintentional error messages.
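A rough sketch of the idea (only temp_obj_num comes from the patch; the
helper below is hypothetical):

    #include <libpq-fe.h>

    /* Global count of temporary objects created so far by pg_repack. */
    static int temp_obj_num = 0;

    /* Hypothetical helper: create one temporary object and remember it,
     * so cleanup can later drop exactly temp_obj_num objects in order,
     * instead of issuing DROPs for objects that were never created. */
    static void
    create_temp_object(PGconn *conn, const char *create_sql)
    {
        PGresult *res = PQexec(conn, create_sql);

        if (PQresultStatus(res) == PGRES_COMMAND_OK)
            temp_obj_num++;
        PQclear(res);
    }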
pgut version renamed to avoid confusion with the server version.
(I wonder why there is such a duplication of interfaces and
implementations there though...)
This simplifies some of the error handling blocks, as we can now use this
macro unconditionally without worrying about multiple PQclear() calls
causing a double free.
Per discussion with Daniele.
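The macro in question is along these lines (a sketch; the exact name and
definition in the source may differ):

    #include <libpq-fe.h>

    /* Clear a PGresult and reset the pointer. Because PQclear() ignores
     * a NULL argument, invoking the macro twice on the same variable is
     * harmless, which removes the double-free hazard. */
    #define CLEARPGRES(pgres)  do { PQclear(pgres); (pgres) = NULL; } while (0)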
* Use poll() if it is available, or select() otherwise, to
  efficiently wait for index builds in the worker connections to finish
  (a sketch follows this list).
* Fix an off-by-one error when initially assigning workers.
* Move the PQsetnonblocking() calls to setup_workers().
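Roughly, the waiting step works like this (a select()-based sketch; the
real code also has a poll() path and error handling):

    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Block until at least one worker connection has data to read.
     * conns[] and num_workers are assumed to be set up elsewhere. */
    static void
    wait_for_workers(PGconn **conns, int num_workers)
    {
        fd_set  input_mask;
        int     max_fd = -1;
        int     i;

        FD_ZERO(&input_mask);
        for (i = 0; i < num_workers; i++)
        {
            int sock = PQsocket(conns[i]);

            if (sock < 0)
                continue;
            FD_SET(sock, &input_mask);
            if (sock > max_fd)
                max_fd = sock;
        }

        if (max_fd >= 0)
            select(max_fd + 1, &input_mask, NULL, NULL, NULL);
    }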
Adds a new --jobs command-line argument to specify how many worker
connections you want. These worker connections should stick around
while processing table(s) in a single database. For each table,
parcel out the indexes to be built among these worker connections,
submitting each CREATE INDEX ... request using PQsendQuery(), i.e.
in a non-blocking fashion.
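The submission side is roughly this (a sketch; the function names are
illustrative):

    #include <libpq-fe.h>

    /* Hand a CREATE INDEX statement to an idle worker without blocking,
     * so the main loop can keep feeding the other workers. */
    static int
    submit_index_build(PGconn *worker_conn, const char *create_index_sql)
    {
        return PQsendQuery(worker_conn, create_index_sql);  /* 1 = submitted */
    }

    /* Once select()/poll() reports activity on the worker's socket,
     * drain the result so the connection becomes idle again. */
    static void
    reap_worker(PGconn *worker_conn)
    {
        PGresult *res;

        if (!PQconsumeInput(worker_conn))
            return;                     /* connection trouble */
        if (PQisBusy(worker_conn))
            return;                     /* result not complete yet */
        while ((res = PQgetResult(worker_conn)) != NULL)
            PQclear(res);
    }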
Most of this is still rather crude, in particular the
while (num_active_workers) ... loop in rebuild_indexes(), but
it seems to be working, so I'm committing here.
Per Issue #18. SimpleStringList code borrowed from pg_dump and a
pending patch to add similar functionality to pg_restore,
clusterdb, vacuumdb, and reindexdb.
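For reference, the borrowed list is used roughly like this (a sketch; the
table names are made up, and the declarations come with the code copied
from pg_dump):

    /* SimpleStringList and its helpers are part of the code borrowed
     * from pg_dump. */
    static void
    example_table_filter(void)
    {
        SimpleStringList table_list = {NULL, NULL};

        simple_string_list_append(&table_list, "public.foo");
        simple_string_list_append(&table_list, "public.bar");

        if (simple_string_list_member(&table_list, "public.foo"))
        {
            /* e.g. reorganize only the tables named on the command line */
        }
    }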
The error handling in reorg_one_table() could still be much improved,
so that an error while processing a single table doesn't necessarily
cause pg_reorg to bail out and skip further tables, but I'll leave that
for another day.
This is a first pass at Daniele's suggestion in Issue #8, although it is
definitely still buggy -- it is still possible for another transaction
to sneak in, take an ACCESS EXCLUSIVE lock, and perform DDL either before
the ACCESS SHARE lock is acquired or immediately after it is released.
there are views or functions depending on columns after dropped ones.
The issue was reported by depesz, and original patch by Denish Patel.
Improved documentation on how to build binaries from source.
COPYRIGHT updated.
- Add a wait-timeout option and use SET statement_timeout instead of NOWAIT.
  This avoids infinite NOWAIT retry loops when reorganizing heavily accessed
  tables (see the sketch after this list).
- Support native builds with MSVC on Windows.
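A sketch of the idea behind the first item (the SQL strings, retry policy,
and function name are illustrative, not the actual pg_reorg code):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Try to take the exclusive lock with a bounded wait instead of
     * spinning on NOWAIT: each attempt is limited by statement_timeout
     * and rolled back on failure before retrying. */
    static int
    lock_with_timeout(PGconn *conn, const char *table,
                      int timeout_ms, int retries)
    {
        int i;

        for (i = 0; i < retries; i++)
        {
            char      sql[256];
            PGresult *res;
            int       ok;

            PQclear(PQexec(conn, "BEGIN"));
            snprintf(sql, sizeof(sql),
                     "SET LOCAL statement_timeout = %d", timeout_ms);
            PQclear(PQexec(conn, sql));

            snprintf(sql, sizeof(sql),
                     "LOCK TABLE %s IN ACCESS EXCLUSIVE MODE", table);
            res = PQexec(conn, sql);
            ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
            PQclear(res);

            if (ok)
            {
                /* keep the transaction (and the lock) open for the caller */
                PQclear(PQexec(conn, "SET LOCAL statement_timeout = 0"));
                return 1;
            }
            PQclear(PQexec(conn, "ROLLBACK"));  /* timed out; try again */
        }
        return 0;                               /* give up after retries */
    }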
pg_reorg broke the catalog definition if the target table had any dropped columns.
Now pg_reorg removes dropped columns and renumbers the valid columns.
You can use pg_reorg to shrink the column definitions if you have many dropped
columns. (Without pg_reorg, dropped columns are filled with zero forever.)
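A sketch of the kind of catalog query involved (illustrative only; the
actual query in pg_reorg is more elaborate):

    /* Fetch only the live (non-dropped) columns of the target table, in
     * attnum order, so the rebuilt table gets a compact definition with
     * the columns renumbered from 1. Intended for a parameterized query
     * where $1 is the table name. */
    static const char *live_columns_sql =
        "SELECT attname, format_type(atttypid, atttypmod)"
        "  FROM pg_attribute"
        " WHERE attrelid = $1::regclass"
        "   AND attnum > 0 AND NOT attisdropped"
        " ORDER BY attnum";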