Previously, pg_repack showed "ERROR: ERROR: relation foo does not
exist" when a non-existent table was specified. The first ERROR comes
from pg_repack and the second from the PostgreSQL server, but users
might think that pg_repack wrongly prints the error level twice.
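
For illustration only (this is not pg_repack's code, and the actual
fix is not shown here): the doubled prefix arises because the server's
error text already carries its own severity, and the client-side log
level is then prepended on top of it.

    #include <stdio.h>

    int
    main(void)
    {
        /* The server message already starts with its own severity prefix. */
        const char *server_msg = "ERROR:  relation \"foo\" does not exist";

        /* Prepending the client-side log level yields the confusing output. */
        fprintf(stderr, "ERROR: %s\n", server_msg);  /* "ERROR: ERROR:  relation ..." */
        return 0;
    }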
"Waiting for %d transactions to finish. First PID: %s"
message. Display it on every loop through the SQL_XID_ALIVE check
(i.e. every second), instead of only when the number of transactions
we're waiting on changes -- previously, it was too easy for that
important message to get lost in other messages.
And don't display the message at all when running under pg_regress,
i.e. as part of `make installcheck`. We had been getting occasional
errors from pg_regress when autovacuum was running and that message
got logged.
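
A rough sketch of the intended behavior (the helper stubs and the
pg_regress detection are placeholders, not pg_repack's actual code):
the notice is printed on every one-second pass over the SQL_XID_ALIVE
result, and suppressed entirely when running under pg_regress.

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Placeholder stubs standing in for pg_repack internals. */
    static bool in_pg_regress(void) { return false; /* real detection omitted */ }
    static int  count_blocking_xids(char *first_pid, size_t len)
    { (void) first_pid; (void) len; return 0; /* would run SQL_XID_ALIVE */ }

    static void
    wait_for_old_transactions(void)
    {
        char first_pid[32] = "";
        int  num_xids;

        while ((num_xids = count_blocking_xids(first_pid, sizeof(first_pid))) > 0)
        {
            /*
             * Print on every iteration, not only when num_xids changes, so
             * the message cannot get lost among other output; skip it under
             * pg_regress to keep regression test output stable.
             */
            if (!in_pg_regress())
                fprintf(stderr,
                        "NOTICE: Waiting for %d transactions to finish. First PID: %s\n",
                        num_xids, first_pid);

            sleep(1);        /* re-check SQL_XID_ALIVE once per second */
        }
    }

    int
    main(void)
    {
        wait_for_old_transactions();
        return 0;
    }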
These calls can require an access share lock on the table, which might
conflict with an existing or later-acquired lock. So perform these
calls while we already have an exclusive lock on the table. This
unfortunately means that we have to remove the constness of the table
parameter to repack_one_table, as it now modifies the table object to
set up the indexes.
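
A sketch of the reordering being described; the struct and helper
names are illustrative, not pg_repack's actual API. The point is that
catalog lookups needing an ACCESS SHARE lock on the table run only
after this session already holds the exclusive lock, and that their
results are stored into the (no longer const) table object.

    /* Illustrative declarations only. */
    typedef struct repack_table_sketch
    {
        const char *target_name;      /* "schema.table" being repacked */
        char       *create_index_sql; /* filled in below, hence the struct is mutated */
    } repack_table_sketch;

    extern void  exec_sql(const char *fmt, ...);
    extern char *fetch_index_definition(const char *table_name);

    static void
    repack_one_table_sketch(repack_table_sketch *table)  /* parameter no longer const */
    {
        exec_sql("BEGIN");
        exec_sql("LOCK TABLE %s IN ACCESS EXCLUSIVE MODE", table->target_name);

        /*
         * Safe now: we already hold the strongest lock, so this catalog
         * lookup's ACCESS SHARE request cannot conflict with a lock taken
         * by another session in between.
         */
        table->create_index_sql = fetch_index_definition(table->target_name);

        /* ... remaining repack steps, then COMMIT ... */
    }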
The problem was really about the OID being interpreted as an integer
literal upon input, and overflowing its integer space before even
making it into pg_try_advisory_lock(). (We do still need to add
-2147483648 to make the result fit into an integer, as 4b3347 does.)
Hopefully fixes issue #30, for real this time.
Make the table's OID fit into the 4-byte int accepted by the
two-argument form of pg_try_advisory_lock() we are using.
Fixes #30. Thanks to Mark Steben and Greg Sabino Mullane for the report
and diagnosis.
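
The arithmetic is easy to get wrong, so here is a small standalone C
example (the key prefix 12345 is arbitrary, not pg_repack's): an OID
is an unsigned 32-bit value, and adding -2147483648 maps the full OID
range onto the signed 4-byte range that the two-argument form of
pg_try_advisory_lock(int4, int4) accepts.

    #include <stdint.h>
    #include <stdio.h>

    /* Map an unsigned 32-bit OID onto the signed int4 range without overflow:
     * 0 -> -2147483648, 4294967295 -> 2147483647. */
    static int32_t
    oid_to_advisory_key(uint32_t oid)
    {
        return (int32_t) ((int64_t) oid - 2147483648LL);
    }

    int
    main(void)
    {
        uint32_t oid = 3000000000u;  /* larger than INT32_MAX, would overflow int4 */

        /* The result would then be passed as the second key, e.g.:
         *   SELECT pg_try_advisory_lock(12345, <key>);  -- 12345 is an arbitrary prefix
         */
        printf("OID %u -> advisory lock key %d\n", oid, oid_to_advisory_key(oid));
        return 0;
    }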
Prior to 506104686b these DELETEs had been done in large batches (of
DEFAULT_PEEK_COUNT size), but that naive method of choosing rows to
delete was unsafe. Here we continue to keep track of which rows must
be deleted as we process them.
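
A simplified sketch of the idea (names are placeholders, and a real
implementation would batch the ids rather than issue one statement per
row): delete exactly the log rows that were applied in this pass,
instead of a whole id range.

    #include <stdint.h>

    /* Placeholder for pg_repack's SQL execution helper. */
    extern void exec_sql(const char *fmt, ...);

    static void
    trim_log_batch(const char *log_table, const int64_t *applied_ids, int n_applied)
    {
        /*
         * The old, unsafe form deleted a whole range in one go:
         *   DELETE FROM <log_table> WHERE id <= <largest id seen>;
         * which can also remove concurrently inserted rows that were never
         * applied.  Instead, delete only the ids we actually processed.
         */
        for (int i = 0; i < n_applied; i++)
            exec_sql("DELETE FROM %s WHERE id = %lld",
                     log_table, (long long) applied_ids[i]);
    }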
Adds support for repacking only the tables in a specified schema. This
doesn't support --only-indexes mode, but that seems alright for now.
Fix merge conflicts, and make a few tweaks along the way:
* bump version to 1.3-dev0
* add Beena to list of maintainers
* documentation wordsmithing
* fix up the INFO message printed for each index in --index or
--only-indexes mode, so that it is only printed once per index, and
prints the name of the original index, not that of the transient
index_%u name.
It is not safe to assume that we can bulk-delete all entries from the
log table based on their "id" being less than the largest "id" we have
processed so far, as we may unintentionally delete some tuples from the
log which we haven't actually processed yet in the case of concurrent
transactions adding tuples into the log table.
It was possible to specify both --schema and --table, which probably
-should- be legal but would require some code to be rewritten. This
patch adds a check so that both cannot be specified together, and
returns an error telling the user to use schema.table notation
instead. A regression test checking this behaviour was added.
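
A minimal sketch of such a check; the variable names and the exact
wording of the error message are placeholders, not necessarily what
pg_repack prints.

    #include <stdio.h>
    #include <stdlib.h>

    static void
    reject_schema_plus_table(const char *schema_opt, const char *table_opt)
    {
        if (schema_opt != NULL && table_opt != NULL)
        {
            fprintf(stderr,
                    "ERROR: cannot specify --schema and --table together; "
                    "use --table=schema.table notation instead\n");
            exit(1);
        }
    }

    int
    main(void)
    {
        reject_schema_plus_table("myschema", "mytable");  /* prints the error and exits */
        return 0;
    }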
problem: if there are open transactions on databases other than the
one pg_repack is working on, and pg_locks contains no information
about the database OID of the locked relation (e.g. there is no
locked relation, only an open transaction), pg_repack will wait for
that connection to release its lock (even if the relation being
reorganized lives in a different database).
solution: join pg_database (via pg_stat_activity's datid), check
whether the connection of the conflicting transaction is established
on a different database than the relation treated by pg_repack, and
skip it if so.
Furthermore, don't exclude transactions from other databases when
shared objects are locked.
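
A sketch of the resulting query shape, written against current catalog
columns and not copied from pg_repack's actual SQL_XID_ALIVE query:
each blocking backend's datid is joined to pg_database so that
backends connected to other databases are skipped, except when the
lock in question is on a shared object (pg_locks.database = 0).

    /* Sketch only -- not pg_repack's actual SQL_XID_ALIVE query. */
    static const char *SQL_XID_ALIVE_SKETCH =
        "SELECT l.pid"
        "  FROM pg_locks l"
        "  JOIN pg_stat_activity a ON a.pid = l.pid"
        "  LEFT JOIN pg_database d ON d.oid = a.datid"
        " WHERE l.pid <> pg_backend_pid()"
        "   AND (d.datname = current_database()"  /* backend is in the same database */
        "        OR l.database = 0)";             /* or it locks a shared object */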