Previously, if the --no-kill-backend option was specified, we returned from
lock_exclusive() while the transaction opened at the beginning was still in
progress. This caused DROP INDEX CONCURRENTLY to fail, because it cannot be
executed inside a user transaction block.
Fixed issue #129.
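For reference, the failure is a standard PostgreSQL restriction, reproducible
in psql roughly as follows (the index name is only an example):

    BEGIN;
    DROP INDEX CONCURRENTLY some_index;
    -- ERROR:  DROP INDEX CONCURRENTLY cannot run inside a transaction block
    ROLLBACK;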
Commit 5adff6ff0b88d6f162719eff7176069730537c2a separated the data copy from
the table creation. This introduced a bug where pg_repack no longer actually
sorts the table during reorganization. This commit fixes the issue by adding
the ORDER BY clause to the COPY SQL rather than to the CREATE TABLE SQL.
Reported by acmzero in issue #138.
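A rough sketch of the difference (illustrative names, not the exact SQL
pg_repack generates):

    -- an ORDER BY on the table-creation statement no longer orders anything
    -- once the data copy is a separate step:
    CREATE TABLE tmp_12345 AS SELECT * FROM t ORDER BY id WITH NO DATA;

    -- so the ORDER BY must be attached to the statement that copies the rows:
    INSERT INTO tmp_12345 SELECT * FROM t ORDER BY id;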
Previously, even if a table's column storage type had been changed, pg_repack
first copied the data into the new table without changing the column storage
parameter. As a result, the existing data was pushed out to the TOAST table
even when the actual column storage type was "main".
Issue #94.
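A rough illustration of the required ordering (hypothetical names, not
pg_repack's actual SQL):

    -- apply the changed column storage first ...
    ALTER TABLE tmp_12345 ALTER COLUMN payload SET STORAGE MAIN;
    -- ... and only then copy the data, so wide values stay inline where
    -- possible instead of being pushed out to the TOAST table
    INSERT INTO tmp_12345 SELECT * FROM t;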
The current client checks for superuser before attempting to
execute pg_repack commands. In Amazon RDS, customers are given
access to a pseudo-superuser role called rds_superuser, but they
are not given access to superuser. However, rds_superuser roles
otherwise have the ability to execute pg_repack commands in RDS.
This change introduces the --no-superuser-check option in the
client code so that users can disable the client-side superuser
check.
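For context, a client-side superuser check of this kind typically boils down
to a catalog query along these lines (a sketch, not necessarily the exact
query the client issues), which returns false for rds_superuser:

    SELECT rolsuper FROM pg_roles WHERE rolname = current_user;

With --no-superuser-check, the client skips the check and proceeds.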
pg_repack needs to take an exclusive lock at the end of the
reorganization. If the lock cannot be taken within the duration
specified by the --wait-timeout option and this option is true,
pg_repack gives up repacking the target table instead of
cancelling the conflicting backends. False by default.
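Cancelling conflicting backends normally comes down to the standard server
functions, roughly as below (a sketch; the PID is arbitrary). With this option
enabled, such calls are skipped once --wait-timeout expires:

    SELECT pg_cancel_backend(12345);     -- cancel the backend's current query
    SELECT pg_terminate_backend(12345);  -- terminate the backend entirely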
While repacking a table, if a transaction executes INSERT ... ON CONFLICT
DO UPDATE/DO NOTHING, the contents of the operation log table easily become
inconsistent because we define a BEFORE trigger on the target table. As a
result, pg_repack fails with high probability.
To resolve this issue, this change switches the trigger type from BEFORE
to AFTER. We define an AFTER trigger that is the first of the AFTER
triggers to fire on the table.
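For reference, PostgreSQL fires triggers for the same event on a table in
alphabetical order of trigger name, so an AFTER trigger can be made to fire
first by giving it a name that sorts before the others. A sketch (the names
are illustrative, not the actual trigger pg_repack installs):

    -- aaa_repack_log and repack_log_changes() are hypothetical names
    CREATE TRIGGER aaa_repack_log
        AFTER INSERT OR UPDATE OR DELETE ON t
        FOR EACH ROW EXECUTE PROCEDURE repack_log_changes();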
From inspecting the call sites of utoa, it appears that some of them
(especially the recent cleanup patch which added the strdup there) wanted
to prevent a local variable from being overwritten by repeated calls to utoa
using the same variable as argument. This commit instead makes such call
sites strdup the variable themselves before passing it to utoa. That seems
cleaner, considering that doing so (strdup'ing its parameter, that is) does
not appear to be part of utoa's contract.
If two concurrent pg_repack commands are run on the same table, the one
that starts later fails with the message:

    Another pg_repack command may be running on the table. Please try again.

The documentation says this is shown as an ERROR, but it is actually a WARNING.
This patch adds a global counter (temp_obj_num) that counts the number of
temporary objects created by pg_repack. Deleting the temporary objects in
the correct order, according to that count, avoids unintentional error
messages.
Previously, pg_repack showed "ERROR: ERROR: relation foo does not
exist" when a non-existent table was specified. Although the first ERROR
comes from pg_repack and the second ERROR comes from the PostgreSQL server,
some users might think that pg_repack wrongly prints the error level twice.
"Waiting for %d transactions to finish. First PID: %s"
message. Display it on every loop through the SQL_XID_ALIVE check
(i.e. every second), instead of only when the number of transactions
we're waiting on changes -- previously, it was too easy for that
important message to get lost in other messages.
And don't display the message at all when running under pg_regress,
i.e. as part of `make installcheck`. We had been getting occasional
errors from pg_regress when autovacuum was running and that message
got logged.
These calls can require an ACCESS SHARE lock on the table, which might
conflict with an existing or later-acquired lock. So perform these calls
while we already hold an exclusive lock on the table. This unfortunately
means that we have to remove the constness of the table parameter to
repack_one_table, as it now modifies the table object to set up the
indexes.
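For context, an ACCESS SHARE lock conflicts only with ACCESS EXCLUSIVE, so a
call that takes it can still end up waiting behind another session's exclusive
lock. A minimal two-session illustration (the table name is arbitrary):

    -- session 1
    BEGIN;
    LOCK TABLE t IN ACCESS EXCLUSIVE MODE;

    -- session 2: any statement needing ACCESS SHARE on t now waits,
    -- e.g. a plain SELECT
    SELECT count(*) FROM t;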