Tasks to think about when migrating SQL Server for Dynamics GP to a new server

After performing a SQL migration I had this top-level list of things I had to think about, so this is just a private list, put here, of things that you might need to consider when migrating from one SQL server to another. This is for my environment; if you use clustering or other features of SQL Server then you will have more to add to the list. However, hopefully this will help me next time and may provoke some thoughts for others working on a similar project.

In this case Reporting Services is co-hosted on the server, as are the eConnect service and some IIS API endpoints.
The items are in no particular order; there are many interdependencies.

  • Linked servers: script out any linked SQL servers and re-create them on the new server.
  • SQL jobs need scripting out and re-creating on the new server. Any Reporting Services jobs need removing; let Reporting Services rebuild the schedules when the service starts. References to server names need checking within the scripts.
  • Maintenance plans need moving to the new server.
  • SQL logins need scripting out and moving to the new server.
  • TempDB files need creating with the appropriate number of data files for the server and placing on appropriate disks.
  • Replication needs scripting, removing, then rebuilding on the new server. Change server names in replication as it has to use the actual server names, not aliases.
  • The user databases need backing up from the old server and restoring to the new server.
  • Reporting Services, if installed, needs configuring with the new host names in the .config file and the encryption keys restoring from the old server.
  • Backup devices need scripting out and recreating on the new server.
  • eConnect Service needs installing on the new server.
  • eConnect Config for service needs updating for changes to port numbers and transport protocol used (if not using defaults).
  • Any IIS web services need migrating to the new server (if any are installed).
  • Script out extended events and recreate them on the new server.
  • SSIS packages need exporting and importing to the new server.
  • SQL operators need scripting out and moving to the new server.
  • After migration, switch DNS aliases to point at the new server from the old (including Reporting Services and any IIS site aliases if applicable).
  • Check database collations are set correctly on the new server vs the old.
  • Compare side by side the new and old SQL Server settings pages for all settings.
  • Start SQL Server jobs after migration, once the old server is out.
  • Check virtual machine settings after migration to ensure full resources have been restored to the virtual machine.
  • Ensure Full text index is installed (if required).
  • Ensure SSIS is installed (if required).
  • Ensure Reporting Services installed (if required).
  • Install VMware SCSI drivers for performance reasons.
  • Check firewall settings on machine are the same between old and new, including application settings. 
  • Duplicate any windows file shares on the new server to be the same as the old server, including share and NTFS permissions.
  • Double check any local users and user permissions.
  • Install Dynamics GP workflow server CLR objects on the server using stored procedure DYNAMICS..wfDeployClrAssemblies (see the example after this list).
  • Migrate mail profiles from old to new server.
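
For the workflow CLR objects item above, deployment is a stored procedure call against the DYNAMICS system database. A minimal sketch, assuming the procedure takes no parameters (run it as a sysadmin once the databases have been restored on the new server):

-- Redeploy the Dynamics GP workflow CLR assemblies on the new server
USE DYNAMICS;
GO
EXEC DYNAMICS..wfDeployClrAssemblies;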

SSMS Registered Servers – Key not valid for use in specified state

After exporting the registered servers from SQL Server Management Studio’s “Registered Servers” window and then importing the exported list on a new machine, you may encounter the following error if passwords were stored against the servers…

Registered Servers - Key not valid for use in specified state. (System.Security)

This is due to the encryption key used to encrypt the passwords being specific to the machine the server details were exported from, and that key does not exist on the new machine.

Even more annoying, this message will pop up once for each server with an issue. Those servers will be absent from the Registered Servers window in the new machine's SSMS.

When I got around to fixing this, the original machine was long gone, so re-exporting the server list without passwords and re-importing it was no longer an option (that would also have worked).

To solve this problem I needed to remove the password from the affected registered servers; once the registered server list is properly populated again, I can simply edit the server properties to put the password back in for each affected server.

This registered server list is populated from an xml file found in the following location on your machine:

C:\Users\{username}\AppData\Roaming\Microsoft\Microsoft SQL Server\{sql version}\Tools\Shell\RegSrvr.xml

On opening this file and examining the XML, the snag was that some servers had been added since the server import. Some of those servers were named the same as the ones causing the error. Scrolling through the XML in Notepad, it was not easy to tell which was which.

To help, I went back into the registered server list in SSMS, added a distinct comment to all the servers I could see, and exited again to update the XML file. On opening the file again, I searched for “Password=” and removed the section Password=”jdsalkfjdalksjd”; – where the random text is the encrypted password and the XML node is of type ConnectionStringWithEncryptedPassword. I removed the whole password parameter, as in the fragment below.
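
For illustration, an affected node might look something like the hypothetical fragment below (the server name, user and other connection string parameters are made up and will vary by SSMS version); the Password=”…”; section is the part to delete:

<RegisteredServers:ConnectionStringWithEncryptedPassword type="string">Data Source=MYSERVER;Initial Catalog=;User ID=gpadmin;Password="jdsalkfjdalksjd";Pooling=False</RegisteredServers:ConnectionStringWithEncryptedPassword>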

Note that I only removed the password property of the connection string if the Description node shown below did not appear, or was present but had a value not containing the unique text we entered earlier in the description fields (in this case the unique text was “**MyMarkerText**”).

<RegisteredServers:Description type="string">**MyMarkerText**</RegisteredServers:Description>

Save the file and launch SSMS. The error will not appear and the missing registered servers will be available again. You must now right click and edit the properties of each to manually restore the passwords.

SQL indexed views are incompatible with Dynamics GP

Trying to use indexed views with Dynamics GP? Well, I’ve been there and I have seen others attempt the same. The Dynamics GP database and application are not really compatible with SQL indexed views. To help out those searching around this subject, I thought I should write up our experience.

What is an indexed view?

An indexed view is a SQL database view that has had an index applied to it. This sounds obvious, but the important thing to realise is that the index will be materialised as an index on disk. Think of it as another table created on disk representing the data held in the view; this is why it is quick to retrieve the data if it is summary data. This index "table" is maintained whenever the data that it covers is altered. So it is easy to imagine that indexed views are great for creating summaries and filtered views of data that cross multiple tables, as they will keep track of changes automatically and update the index "table", keeping its data in sync with the data in the various tables it derives from. As the data that represents the SQL view has been pushed to disk and persisted there, querying that summary data is lightweight and quick, because the work has already been done to process the data when it was stored (pre-joined between tables and/or summarised), rather than the database engine having to do all that work on the fly each time the query is run.

However, there is a downside. When any index is created, you pay a price for the benefit of fast data recall when reading, by suffering slightly longer write times; those longer writes lead to longer-lived locks on the data, which in turn can lead to blocking. This may cause performance issues in some circumstances. The extra writing is because the data held in the index "table" that represents the view needs to be maintained (updated) whenever any of the data underpinning that view changes. Depending on the workload, the size of the tables, I/O performance and so on, this may be significant work if large numbers of records are updated at once. As with any SQL index, it takes space on disk too. For large, wide views over large tables this may be a consideration as well, bloating storage with the knock-on consequences that brings.

One of the most frustrating matters when working with indexed views is that there is a whole heap of constraints and restrictions around what is permitted in the query that forms them. For example, when using GROUP BY, the select list must contain the function COUNT_BIG(*) as a column, and various database settings restrictions can apply too (there are many, many more). This means that when designing even the simplest views, SQL compile errors warning that it is not possible to do "this and that" frequently cause annoyance. It is very obvious why this is the case when you think about it. Because the data is persisted to disk, it needs to be unchanging to store it, thus any SQL function used by the view has to be deterministic; aha, no “getdate()” function! It is surprising how often this catches people out. I could go on and on about the restrictions and requirements of indexed views, but just go and try making one and you’ll discover the pain yourself, then go and read the documentation to realise how much there is to it!

Another example is the set of SQL SET options that are required by SQL indexed views, shown below…
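
As documented in Microsoft’s “Creating Indexed Views” article (linked at the end of this post), any session that creates the view, or modifies data in the tables underneath it, must run with these option settings:

SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;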


Using indexed view with Dynamics GP

Professionally we use indexed views a lot in our applications for the speed of access and real-time integrity of summary data they offer, so sooner or later a GP admin or GP developer decides they would quite like to use an indexed view... then it all ends in tears. Let us see why…


This is the first problem. To build an indexed view, the view must use SCHEMABINDING. This locks the database schema and the view together. Unfortunately this may then cause issues with application updates. When Dynamics GP is updated for a service pack or upgrade, sometimes the tables or other database objects may need to be dropped and recreated. This happens during the upgrade process; however, if the object to be dropped is schema bound, then the upgrade script will not be allowed to do what it wants, causing the upgrade to fall over. Hence if indexed views have been bound to the GP tables, this is a real risk when a site comes to do an upgrade. Obviously if the person performing the upgrade were aware, they could drop the view and recreate it after the upgrade, but in real life that knowledge is lost as employees leave or contractors move on, causing upgrade pain.

The second problem is more terminal. Let's try it and see what happens if we create a view over the sales order header and lines, to speed up counting the number of sales lines by country that are not voided in historical transactions. Create the view on a non-production database like this…


-- We must schema bind
IF OBJECT_ID ('SalesVoidedLinesSummary', 'view') IS NOT NULL
    DROP VIEW SalesVoidedLinesSummary;
GO
CREATE VIEW SalesVoidedLinesSummary
WITH SCHEMABINDING AS
SELECT  COUNT_BIG(*) AS Cnt,
        SOP30200.SOPTYPE,
        SOP30300.CCODE AS Country
FROM    dbo.SOP30200 JOIN dbo.SOP30300 -- Historical SOP headers and lines
        ON SOP30200.SOPNUMBE = SOP30300.SOPNUMBE AND SOP30200.SOPTYPE = SOP30300.SOPTYPE
WHERE   SOP30200.VOIDSTTS = 0 --Not Voided
        AND SOP30200.SOPTYPE IN (1,2)
GROUP BY SOP30200.SOPTYPE, SOP30300.CCODE;


Then create the indexed view by adding the index. - REMEMBER I SAID NO PRODUCTION DATABASES WITH THIS LITTLE EXPERIMENT!

--We materialise the view by creating a unique clustered index on it
CREATE UNIQUE CLUSTERED INDEX IX_SalesVoidedLinesSummary
    ON SalesVoidedLinesSummary (SOPTYPE, Country);


We can now query the view data like so..

SELECT  * FROM SalesVoidedLinesSummary where SOPTYPE=1 AND Country='PT'

and compare it with querying the base tables directly, as we had to before we created the view…

SELECT  COUNT_BIG(*) AS Cnt,
        SOP30300.CCODE AS Country
FROM    SOP30200 JOIN SOP30300
        ON SOP30200.SOPNUMBE = SOP30300.SOPNUMBE AND SOP30200.SOPTYPE = SOP30300.SOPTYPE
WHERE   SOP30200.VOIDSTTS = 0 --Not Voided
        AND SOP30200.SOPTYPE = 1
        AND SOP30300.CCODE = 'PT'
GROUP BY SOP30300.CCODE


...and we find that, from not having the view to having the view, we have dramatically reduced the query time as shown below. I’m not sure about the integrity of those figures, but this isn’t a discussion around query optimisations; just accept that indexed views can solve performance problems.

    Total execution time without view:  17390
    Total execution time with view:       505


In production

So the developer or IT pro, after testing that out on the test company database, says “great, now let’s create it on the production database as that worked like a charm”, then goes to log in to GP and create a sales order having created the view…

...to then get the dismay of the error that follows...

“An edit operation on table ‘SOP_Master_Number_SETP’ failed. A record was already locked.”



“INSERT failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER, CONCAT_NULL_YIELDS_NULL, ANSI_WARNINGS’. Verify that SET options are correct for use with indexed views and”…


… then the support calls start flowing in from the users!

So why can’t we have nice things?

Go back up in this post: as you saw, there are certain SQL SET options required to make indexed views work. Sadly these are not compatible with the SET options required to make Dynamics GP work (the GP ODBC DSN turns off ANSI options that indexed views need ON; see Mariano Gomez’s post linked below). Thus the application breaks, simple as that! If those phones are still ringing from your users right now, then simply drop the view and your users will be happy again.

We did play around a bit but never found a solution to this issue; if you have a workaround, do let me know.


Mariano Gomez has a nice post about why this is required by Dexterity to make GP run correctly:

Microsoft SQL Server DSN Configuration

Microsoft - Creating Indexed Views

Gavin wrote this which inspired me to document the issue before others fall into it...

Brief overview and comparison of how summary values are stored and calculated in Dynamics GP, Dynamics NAV and Dynamics 365 Business Central


Yes you can - SQL Server Table Compression and Dynamics GP


Today I attended SQLBits in Manchester, UK, where there was a session “Performance tuning SQL server on crappy hardware” by Monica Rathbun.

Monica has the fast and punchy presentation style I enjoy. Although I had already experienced or knew most of what was covered, it was still a good presentation. There was one takeaway I noted in my notebook to come back to later. Now, back at the hotel, I’m having a look.

Row/Page compression - More Data in MEMORY

Monica was promoting the use of COMPRESSION – not just backup compression but ROW/PAGE database compression in the database engine itself.

By compressing the data in the database, the theory goes that you reduce I/O required to move the data around and allow much more relevant data to be held in SQL server’s caches and perhaps the underlying storage system’s caches too. Having more data in memory leads to a more performant system.

For some reason the existence of compression in the database engine was something that had slipped under my radar, perhaps because it used to be an Enterprise-only feature, but now it’s available to me in our SQL Server 2016 Standard Edition.

This is particularly interesting to Dynamics GP users as our database is full of padded CHAR data types, has very wide tables full of only partially used data (depending on modules used) or repeating data in the case of settings flags. Dynamics GP also has many tables full of decimal columns that are all zero, again due to configuration or options in how GP is set up or what modules are active. So from the outset it feels like Dynamics GP would benefit.

“Enabling compression only changes the physical storage format of the data that is associated with a data type but not its syntax or semantics”. This means the compression occurs inside the SQL engine but is transparent to the application interacting with SQL Server. There are two levels of compression of interest and available to us. ROW compression works on each data row in the table:

  • It uses variable-length storage format for numeric types (for example integer, decimal, and float) and the types that are based on numeric (for example datetime and money).

  • It stores fixed character strings by using variable-length format by not storing the blank characters.

So imagine how much room can be saved when you consider the fields in Dynamics GP are fixed length!

What is more, there is another option: PAGE compression, which looks at repeating data within the pages of data stored on the filesystem and compresses that data. As this works over an entire page it is heavier on CPU resources, but it is great where there is a lot of repeated data down the rows of a table. Wait, repeating data down the rows of a column? That is what we get lots of, due to status flags and little-used fields in the GP tables that vary little from top to bottom of the table.

Just look at something like Item Master table IV00101 or one of the pricing tables etc. There are distributions and settings that are the same, repeated for all items and are ripe for compression as this leads to repeated content in the pages.

Data repeated down from table IV00101 Item Master

So both the nature of the data in the tables and the use of compressible data types by Dynamics GP sure makes it look good for compression.

Compression does cause more CPU load, but unless you are pulling millions of rows it seems insignificant; see more here:

https://sqlperformance.com/2017/01/sql-performance/compression-effect-on-performance where testing showed it has little effect on performance.

We can run EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE';

This will show us, by sampling a subset of the table (much as statistics sampling does), how much space should be saved by compressing the table, without having to actually do it. Let's try it with the Item Master in Dynamics GP.

SELECT COUNT(*) from IV00101

EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE' ;

Item Master compression test

So we can see the Item Master table goes from 81,944KB to 16,304KB; that is only 20% of what it was!

Now trying it with IV00108, which has SELECT COUNT(*) FROM IV00108 = 6,107,169 rows, we get 779,816KB going down to 139,440KB; that is only 19% of what it was before.

Compression testing with IV00108

So you can see how much saving can be achieved this way, imagine the reduced I/O from having 20% of what used to be read.

Even going down to ROW compression gives you only slightly less compression but less overhead too:
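
The same estimating procedure accepts 'ROW' as the compression type, so the two can be compared directly:

EXEC sp_estimate_data_compression_savings 'dbo', 'IV00108', NULL, NULL, 'ROW';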


Only 46% of what it was with row level compression.


There are not many downsides. The first is technical: compressing the data takes up CPU. Most SQL servers are not CPU bound in terms of resources, so this should not be an issue; typically a 10-30% increase, so check your current CPU load. As this is a table-by-table selection, you could tackle only the most sizable tables in GP to get the majority of the benefit without having to apply compression to every table and index. When the data is written you take a hit on compressing it, to reap the rewards later, so tables and indexes with great numbers of inserts per second may cause issues (they would have to be big loads).
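
To pick those candidate tables, a query along these lines (a sketch using sys.dm_db_partition_stats) lists the biggest ones in the company database by reserved space:

-- Top 20 tables by reserved space, as compression candidates
SELECT TOP (20)
       t.name AS TableName,
       SUM(ps.reserved_page_count) * 8 AS ReservedKB
FROM   sys.dm_db_partition_stats AS ps
JOIN   sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY ReservedKB DESC;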

The article below has some good scripts to see what will work and what will not…



Much smaller data means less I/O and more data in cache, and more data in memory makes for more efficient queries.


I am going to gradually add tables to compression and see what happens to CPU usage. The benefits should be substantial in terms of reads so it seems well worth pursuing.


This article would indicate it's supported for Dynamics GP; although the tool referenced for choosing tables to compress is no longer available, it is possible to manually work with the database to turn on compression, as shown below.
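
Turning compression on manually is one rebuild per table and index. A minimal sketch using the Item Master table from earlier (try it on a test database first; the rebuild itself takes time and log space):

-- Rebuild the table (heap or clustered index) with PAGE compression
ALTER TABLE dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Nonclustered indexes are compressed separately
ALTER INDEX ALL ON dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);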


This is the reason going to conferences is so worthwhile; this is only one of many things I learnt or had reinforced today in the various sessions I attended.