First pass at implementing concurrent index builds using multiple connections.

Adds a new --jobs command-line argument to specify how many worker
connections to use. These worker connections persist while the
table(s) of a single database are processed. For each table, the
indexes to be built are parceled out among the workers, and each
CREATE INDEX ... request is submitted with PQsendQuery(), i.e. in a
non-blocking fashion.
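The parceling step described above can be sketched as a plain round-robin assignment. `assign_worker()` is a hypothetical helper, not code from this commit; each assigned statement would then be handed to its worker connection with PQsendQuery().

```c
/* Round-robin parceling: index 0 goes to worker 0, index 1 to
 * worker 1, ..., wrapping around once every worker has one
 * CREATE INDEX build in flight. Hypothetical helper for
 * illustration only. */
static int
assign_worker(int index_no, int num_workers)
{
    return index_no % num_workers;
}
```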

Most of this is still rather crude, in particular the
while (num_active_workers) ... loop in rebuild_indexes(), but
it seems to be working, so I'm committing it here.
Josh Kupershmidt
2012-12-10 21:08:01 -07:00
parent b4d8a90437
commit 509e568c52
3 changed files with 331 additions and 40 deletions


@@ -48,6 +48,14 @@ typedef struct pgut_option
typedef void (*pgut_optfn) (pgut_option *opt, const char *arg);
typedef struct worker_conns
{
	int			max_num_workers;
	int			num_workers;
	PGconn	  **conns;
} worker_conns;
extern char *dbname;
extern char *host;
@@ -58,12 +66,15 @@ extern YesNo prompt_password;
extern PGconn *connection;
extern PGconn *conn2;
extern worker_conns workers;
extern void pgut_help(bool details);
extern void help(bool details);
extern void disconnect(void);
extern void reconnect(int elevel);
extern void setup_workers(int num_workers);
extern void disconnect_workers(void);
extern PGresult *execute(const char *query, int nParams, const char **params);
extern PGresult *execute_elevel(const char *query, int nParams, const char **params, int elevel);
extern ExecStatusType command(const char *query, int nParams, const char **params);