Update all XML element nodes in an XML field of a SQL table

I wrote an electronic invoicing solution for Dynamics GP a long time ago. The XML invoices, in various formats, are stored against the invoice number in the database. The column was upgraded to the XML data type when SQL Server started supporting it in SQL Server 2005. Although it is possible to cast a varchar into the XML data type, storing it as native XML makes it simpler to work with.

Recently an issue surfaced where the tax amount on the invoice lines needed to be zeroed for a set of invoices, as the XML values had been incorrectly generated. Once the invoices (only a dozen) had been corrected, they could be marked for retransmission. But how to quickly and easily update the invoices?

We have been using the XML functionality in SQL Server for another project recently, so it seemed natural to address the issue with the built-in XML functions of SQL Server.

Shown below are snippets of the "modify replace value of" SQL used for the fix.


The nodes in the document all look like this:
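Based on the xCBL core namespace and the XPath used later, the tax nodes presumably take a form like this (the amount value is illustrative):

```xml
<core:TotalTaxAmount>
  <core:MonetaryAmount>12.34</core:MonetaryAmount>
</core:TotalTaxAmount>
```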


Whatever the value is, it needs to be zero after the update.


Modify() function

The following is a good guide to using the XML modify() statement in T-SQL

Using this as a guide, I end up with the XPath looking like this:

UPDATE dbo.CA_SOP_EINVOICE_Output
SET XMLToSend.modify('
    replace value of (//core:TotalTaxAmount/core:MonetaryAmount)[1]
    with ("0")')
WHERE SOPNUMBE=@SOPNUMBE

This gives us a namespace error due to the core: namespace prefix:

XQuery [dbo.CA_SOP_EINVOICE_Output.XMLToSend.modify()]:
The name "core" does not denote a namespace.

This is easily resolved by specifying the namespace with
;WITH XMLNAMESPACES('rrn:org.xcbl:schemas/xcbl/v4_0/core/core.xsd' AS core)

Next we get the error:

The target of 'replace value of' must be a non-metadata attribute or an
element with simple typed content

This is because the schema does not define a simple type for our node, so we update it as text instead, by appending text() to the node path.
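Putting the namespace declaration and the text() step together, a single-node update then runs cleanly. The table and column names come from the error message above; the WHERE clause is an illustrative sketch:

```sql
-- Declare the xCBL core namespace, then replace the first matching node as text
;WITH XMLNAMESPACES('rrn:org.xcbl:schemas/xcbl/v4_0/core/core.xsd' AS core)
UPDATE dbo.CA_SOP_EINVOICE_Output
SET XMLToSend.modify('
    replace value of (//core:TotalTaxAmount/core:MonetaryAmount/text())[1]
    with ("0")')
WHERE SOPNUMBE=@SOPNUMBE;
```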

Only one replace is allowed at a time, which is a real shame, as it turns an otherwise trivial statement into a script. To make it work for this problem, a loop keeps updating the first non-zero node it finds until all the nodes have been set to zero. This gives us the final script to do the task, shown below:

DECLARE @KeepLooping bit
DECLARE @SOPTYPE int, @SOPNUMBE char(21)

SELECT @SOPTYPE=3, @SOPNUMBE='[Enter Doc Number]'

-- Declare the namespace we are using before each select/update
;WITH XMLNAMESPACES('rrn:org.xcbl:schemas/xcbl/v4_0/core/core.xsd' AS core)
SELECT @KeepLooping=XMLToSend.exist('//core:TotalTaxAmount/core:MonetaryAmount[.!=0]')
FROM dbo.CA_SOP_EINVOICE_Output
WHERE SOPTYPE=@SOPTYPE AND SOPNUMBE=@SOPNUMBE

WHILE @KeepLooping=1
BEGIN
    -- Do the update on the first non-zero node each time until they are all done
    ;WITH XMLNAMESPACES('rrn:org.xcbl:schemas/xcbl/v4_0/core/core.xsd' AS core)
    UPDATE dbo.CA_SOP_EINVOICE_Output
    SET XMLToSend.modify('
        replace value of (//core:TotalTaxAmount/core:MonetaryAmount[.!=0]/text())[1]
        with ("0")')
    WHERE SOPTYPE=@SOPTYPE AND SOPNUMBE=@SOPNUMBE

    -- Check whether any non-zero nodes remain
    ;WITH XMLNAMESPACES('rrn:org.xcbl:schemas/xcbl/v4_0/core/core.xsd' AS core)
    SELECT @KeepLooping=XMLToSend.exist('//core:TotalTaxAmount/core:MonetaryAmount[.!=0]')
    FROM dbo.CA_SOP_EINVOICE_Output
    WHERE SOPTYPE=@SOPTYPE AND SOPNUMBE=@SOPNUMBE
END

That was another interesting dip into SQL Server support for XML!

SQL Replication duplicate log reader in SQL replication monitor

It is upgrade time. Dynamics GP does not like schema objects being bound by replication during upgrades. During version updates some tables are copied, dropped and recreated, depending on what changes have been made to the schema. If replication is left running on a Dynamics GP company during the upgrade, the upgrade will fail if an object being replicated is changed during the upgrade.

As a result, SQL replication has to be removed during the upgrade, after first scripting it out as a SQL create script so that it can be rebuilt post upgrade. During a recent upgrade, after rebuilding replication, there were two log readers running for each publication. Oddly, looking at the properties of the SQL log readers, they both referenced the same source SQL jobs. This was causing issues trying to start and stop them, or trying to change credentials, as it all got a bit confusing.

This has happened to me a few times now. The following commands remove the extra log reader registration.

Select all the readers from the replication distribution database, where they are registered, and simply choose the higher numbered duplicates for deletion. Then remove them with a DELETE command, using the ID from the table.

SELECT * FROM distribution.dbo.MSlogreader_agents

DELETE FROM distribution.dbo.MSlogreader_agents
WHERE id={duplicate ID #}
Now the table should be left with one reader per publication (unless you have a more complicated replication arrangement going on). 
replication monitor showing reader agents

SSMS locked / frozen after network disconnect or computer sleep

When both the following are true:

  • the machine is woken from sleep, or the network connection to SQL Server has been interrupted
  • SQL Server Management Studio's Activity Monitor was open on a tab somewhere in the editor deck in SSMS

-then it will no longer be possible to interact with SSMS. It will feel like SQL Server Management Studio has crashed or frozen.

The cause

SSMS appearing to crash after a lost connection or sleep is caused by the modal dialog indicating the loss of connection for the Activity Monitor popping up somewhere behind the application surface. You cannot see this window by right-clicking on the application in the task bar, but it IS there. Modal mode prevents interaction with any other windows in the application until that modal window is closed.


The Fix

To get up and running again, press ALT+Tab until you find the pesky dialog window, then click OK on that window to dismiss it. SSMS will spring back to life. This is a regular occurrence for me after the computer sleeps while I've been monitoring the SQL server.

SQL Server 2012 SP2 Replication Log Reader error v.11.00.5058

After moving Dynamics GP to a newly built server, using a backup and restore strategy, the Dynamics GP SQL replication kept causing problems. The following message drowned the error logs of the server. I ended up rebuilding the replication configuration entirely to solve the issue, but there may be some clues in the steps I went through that might help someone else out there. There was lots more digging and prodding than the summary shown below!

The last step did not log any message! (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)


It was possible to add text logging by going to the “Run Agent” step in the Log Reader SQL job and adding the following to the list of parameters:

-Publisher [xxxxx] -PublisherDB [xxxxx] -Distributor [xxxxx] -DistributorSecurityMode 1  -Continuous -Output f:\SQLBackups\Reports\ReplicationOutputLog.txt


The ReplicationOutputLog.txt then contained the following:

Validating publisher

            DBMS: Microsoft SQL Server
            Version: 11.00.5058
            catalog name:
            user name: dbo
            API conformance: 0
            SQL conformance: 0
            transaction capable: 1
            read only: F
            identifier quote char: "
            non_nullable_columns: 0
            owner usage: 15
            max table name len: 128
            max column name len: 128
            need long data len:
            max columns in table: 1000
            max columns in index: 16
            max char literal len: 131072
            max statement len: 131072
            max row size: 131072
2015-02-06 08:54:59.278 OLE DB xxxxxx 'xxxxxxx': {call sp_MSensure_single_instance(N'Repl-LogReader-0-xxxx-19', 2)}
2015-02-06 08:54:59.278 Parameter values obtained from agent profile:
            -pollinginterval 5000
            -historyverboselevel 1
            -logintimeout 15
            -querytimeout 1800
            -readbatchsize 500
            -readbatchsize 500000
2015-02-06 08:54:59.278 Status: 0, code: 22037, text: 'The last step did not log any message!'.
2015-02-06 08:54:59.278 The last step did not log any message!


We were also getting lots of errors relating to “Only one Log Reader Agent or log-related procedure”…”can connect to a database at a time” (see below for screenshot)


Google revealed some posts about SP1/SP2 hotfixes and cumulative updates for SQL 2012 fixing issues around multiple log reader errors like this. Other posts talked about the database not being upgraded correctly, with new schema differences between versions. My conclusion on reading these posts was that the SQL replication database (distribution) may have been brought over from the previous server (we don't know the full history of whether it was restored or/and replication rebuilt by scripts). The restored database may not have been correctly upgraded by whatever is in SP1/SP2; both were applied prior to the restore of our data, so any operations included in SP1/SP2 would not have been applied against it, due to the timeline.

After a few evenings spent removing all I could find of relevance in the replication distribution database tables, and clearing out and recreating the log reader agent and snapshot agent jobs, problems were still persisting. After countless recreations of the publications and subscriptions, it felt like remnants of the previous replication set-up were clinging on deep in the bowels of replication (amazing how gut feeling develops with experience).

Failing in patience, and for lack of a silver bullet, the solution was to remove the subscriptions and the publications and disable replication on the server. I then ensured the replication distribution database and its data files were gone, and that no SQL jobs relating to replication were left behind (there were, so I removed them too). I also checked all the nodes under SSMS that relate to replication, to ensure nothing was left at all, including on the subscriber (there was a shadow of the subscription left on the subscriber).
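The teardown steps described above can be sketched with the standard replication stored procedures; the publication and database names here are hypothetical placeholders:

```sql
-- On the publisher database: drop subscriptions, then the publication
EXEC sp_dropsubscription @publication = N'MyPublication', @article = N'all', @subscriber = N'all';
EXEC sp_droppublication @publication = N'MyPublication';

-- Disable publishing on the company database
EXEC sp_replicationdboption @dbname = N'MyCompanyDB', @optname = N'publish', @value = N'false';

-- On the distributor: remove the distribution database, then the distributor itself
EXEC sp_dropdistributiondb @database = N'distribution';
EXEC sp_dropdistributor @no_checks = 1;
```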

Then I built replication anew: created a new distribution database (with a new name, for peace of mind), created a new publication, added the articles, and created new subscribers.

After reinitializing all subscriptions and getting snapshots, everything started working how it should.

I feel lucky that I don't have a totally mission-critical replication set-up. It is good that we have two copies of our replicated tables in different schemas, with views fronting them. This allows us to toggle between live and cached data modes while “messing on with replication”.
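That toggle can be as simple as repointing the fronting view between the two schemas (all object names here are hypothetical):

```sql
-- Point consumers at the live, replicated copy of the table
ALTER VIEW dbo.Customer AS SELECT * FROM repl.Customer;

-- While replication is being rebuilt, switch to the cached copy instead:
-- ALTER VIEW dbo.Customer AS SELECT * FROM cache.Customer;
```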

Hey, it's working!


The only thing left to figure out is whether the “one log reader” error is something to worry about, or perhaps it will simply go away with the next service pack, whenever that is…