Price table replicated to website

Replicating GP price table to website

To provide our website with bang-up-to-date product prices, exactly as they stand in our ERP system, we replicate the price table from the ERP system to the website's SQL Server database. The price table holds nearly two million price rows, covering many combinations of currency, item, price quantity, unit of measure, discount break and customer-specific price list.

The replicated table works great until a big price update is required. If most of the prices are updated, say in line with inflation, the change touches a large proportion of the rows in the database. That produces one BIG transaction that has to make its way through the relatively thin wire to our website. From opening the transaction (and thus locking the subscriber table) to committing it takes a long time: the update runs locally, then crawls across the slow connection to the website, and is finally committed there. While all this is happening, the lock held on the price table in the website database blocks every read of that table until everything has passed through, bringing the website to a halt for tens of minutes.
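
To make the blocking concrete, here is a minimal two-session sketch; the table and column names are illustrative, not our real schema. Under the default READ COMMITTED isolation level, the website query waits on the update's row locks.

    -- Session 1: the bulk price update, applied as one long transaction
    BEGIN TRANSACTION;
    UPDATE [PriceTable] SET [UnitPrice] = [UnitPrice] * 1.05;
    -- ...the transaction stays open while the changes cross the wire...

    -- Session 2: a website query issued while session 1 is still open
    SELECT [UnitPrice] FROM [PriceTable] WHERE [ItemNumber] = 'ABC123';
    -- blocked here until session 1 issues COMMIT or ROLLBACK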

To avoid this, queries at the website that interrogate the replicated subscriber table could be run at the READ UNCOMMITTED isolation level. However, this could potentially lead to problems with “dirty reads”, returning records that are not ready for public consumption. That is significant when you consider these are price tables, and serving prices that are in an unknown state is a no-no.
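
For reference, the dirty-read behaviour can be requested per session or per query; both forms below are standard T-SQL, again with an illustrative table name.

    -- Per session: every subsequent read may see uncommitted data
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT [UnitPrice] FROM [PriceTable] WHERE [ItemNumber] = 'ABC123';

    -- Per query: the NOLOCK table hint has the same effect for one statement
    SELECT [UnitPrice]
    FROM [PriceTable] WITH (NOLOCK)
    WHERE [ItemNumber] = 'ABC123';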

Solutions

First Idea

The first was to take a database snapshot before the bulk price update, switch all the views on the table to use the snapshot, and let replication take its time updating the records in the underlying table. Once finished, the views could be switched back to point at the original table. This could perhaps be controlled by a replicated signalling table from the ERP system, so that we don’t have to issue SQL between the databases. It should work well in that, once the all-clear signal is set in the publication database, it will not propagate through to the subscriber until all the other changes in the log have been replayed and committed to the subscriber.
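
A sketch of what that switch might look like, assuming a view named [vwPrices] over an illustrative [PriceTable]; note that the snapshot's NAME must match the logical data file name of the source database, and the file path here is made up.

    -- Before the bulk update: freeze the current state in a snapshot
    CREATE DATABASE [ERPReplicationdatabase_Snap]
    ON (NAME = 'ERPReplicationdatabase_Data',
        FILENAME = 'D:\Snapshots\ERPReplicationdatabase_Snap.ss')
    AS SNAPSHOT OF [ERPReplicationdatabase];
    GO

    -- Point the website's view at the frozen copy while replication churns
    ALTER VIEW [dbo].[vwPrices] AS
        SELECT * FROM [ERPReplicationdatabase_Snap].[dbo].[PriceTable];
    GO

    -- After the all-clear signal arrives: switch back and tidy up
    ALTER VIEW [dbo].[vwPrices] AS
        SELECT * FROM [ERPReplicationdatabase].[dbo].[PriceTable];
    GO
    DROP DATABASE [ERPReplicationdatabase_Snap];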

Second Idea

The second idea was to switch the subscriber database into snapshot isolation mode, simply by executing the following commands:

    -- Permit transactions to use SNAPSHOT isolation explicitly
    ALTER DATABASE [ERPReplicationdatabase] SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- Make ordinary READ COMMITTED reads use row versioning by default
    ALTER DATABASE [ERPReplicationdatabase] SET READ_COMMITTED_SNAPSHOT ON;
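
One caveat from the documented behaviour of this option: SET READ_COMMITTED_SNAPSHOT ON cannot complete while other connections are active in the database, so on a busy subscriber it may need a quiet moment, or the WITH ROLLBACK IMMEDIATE clause, which rolls back and disconnects the other sessions.

    ALTER DATABASE [ERPReplicationdatabase] SET READ_COMMITTED_SNAPSHOT ON
        WITH ROLLBACK IMMEDIATE;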

Once the database is in this mode, reads use versioned rows held in the version store and are never blocked: a read simply sees the last version of each row that was committed before it started. So reading at the website is not blocked while a transaction is underway, and what is read is the state before the transaction started, which for our application is perfect.
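
The same two-session scenario from earlier now behaves differently (names are illustrative):

    -- Session 1: the bulk price update, still uncommitted
    BEGIN TRANSACTION;
    UPDATE [PriceTable] SET [UnitPrice] = [UnitPrice] * 1.05;

    -- Session 2: with READ_COMMITTED_SNAPSHOT ON this is not blocked;
    -- it returns the last committed version of each row, i.e. the
    -- prices as they stood before session 1 began
    SELECT [UnitPrice] FROM [PriceTable] WHERE [ItemNumber] = 'ABC123';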

No DML commands need issuing and no signalling between the databases is required, making this the cleanest solution for what we need. The next task is to ensure that all the changes made to the price tables happen inside one transaction, to keep reads off any of the changed data until it is fully committed to the database.
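
Since transactional replication preserves transaction boundaries when applying commands at the subscriber, wrapping the whole uplift in one transaction at the publisher should be enough. A minimal sketch, with illustrative table names and uplift factor:

    BEGIN TRANSACTION;
        -- Apply the whole price uplift as a single atomic unit of work,
        -- so readers see either all old prices or all new ones
        UPDATE [PriceTable]    SET [UnitPrice]  = [UnitPrice]  * 1.05;
        UPDATE [DiscountBreak] SET [BreakPrice] = [BreakPrice] * 1.05;
    COMMIT TRANSACTION;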