SQL indexed views are incompatible with Dynamics GP

Trying to use indexed views with Dynamics GP? – well I’ve been there and I have seen others attempt the same. The Dynamics GP database and application are not really compatible with SQL indexed views. To help out those searching around this subject I thought I should write up our experience.

What is an indexed view?

An indexed view is a SQL database view that has had an index applied to it. This sounds obvious, but the important thing to realise is that the index is materialised on disk. Think of it as another table created on disk representing the data held in the view; this is why summary data is quick to retrieve. This index "table" is maintained whenever the data it covers is altered. So it is easy to imagine that indexed views are great for creating summaries and filtered views of data that cross multiple tables, as they keep track of changes automatically and update the index "table", keeping it in sync with the data in the various tables it derives from. As the data that represents the SQL view has been persisted to disk, querying that summary data is lightweight and quick: the work of processing the data, pre-joining between tables and/or summarising, was already done when it was stored, rather than the database engine having to do all that work on the fly each time the query is run.

However, there is a downside. When any index is created, you pay for the benefit of fast reads with slightly longer write times, and those longer writes lead to longer-lived locks on the data, which in turn can lead to blocking. This may cause performance issues in some circumstances. The extra writing is because the data held in the index "table" that represents the view needs to be maintained (updated) whenever any of the data underpinning that view changes. Depending on the workload, table sizes, I/O performance and so on, this may be significant work if large numbers of records are updated at once. As with any SQL index, it takes space on disk too; for large, wide views over large tables this can bloat storage, with the knock-on consequences that brings.

One of the most frustrating matters when working with indexed views is that there is a whole heap of constraints and restrictions around the query that forms them. For example, when using GROUP BY, the select list must contain COUNT_BIG(*) as a column, and various database settings restrictions apply too (there are many, many more). This means that when designing even the simplest views, SQL compile errors warning that it is not possible to do "this and that" frequently cause annoyance. It is obvious why this is the case when you think about it: because the data is persisted to disk, it needs to be unchanging to store it, thus any SQL function used by the view has to be deterministic. Aha, no GETDATE() function! It is surprising how often this catches people out. I could go on and on about the restrictions and requirements of indexed views, but just go try making one and you'll discover the pain yourself, then go read the documentation to realise how much there is to it!
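As a hypothetical sketch (the view name here is made up, not from GP), the definition below creates without complaint, but the moment you try to index it the engine objects, both to the non-deterministic GETDATE() and to the missing COUNT_BIG(*):

--The view itself creates, but it can never be indexed:
CREATE VIEW dbo.BadSummary
    WITH SCHEMABINDING
AS
    SELECT SOPTYPE,
           GETDATE() AS QueriedAt   --non-deterministic, blocks indexing
    FROM dbo.SOP30200
    GROUP BY SOPTYPE;               --and no COUNT_BIG(*) in the select list
GO
--This index creation is rejected by the engine:
CREATE UNIQUE CLUSTERED INDEX IX_BadSummary ON dbo.BadSummary (SOPTYPE);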

Another example: below are the SQL SET options that are required by SQL indexed views…

IndexViewSettings
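For reference, should the graphic above not render, the session settings documented as required for creating and using indexed views are:

SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;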

Using indexed views with Dynamics GP

Professionally we use indexed views a lot in our applications, for the speed of access and the real-time integrity of summary data they offer, so sooner or later a GP admin or GP developer decides they would quite like to use an indexed view… then it all ends in tears. Let us see why…

WITH SCHEMABINDING

This is the first problem. To build an indexed view, the view must use SCHEMABINDING, which locks the database schema and the view together. Unfortunately this may then cause issues with application updates. When Dynamics GP is updated for a service pack or upgrade, sometimes tables or other database objects need to be dropped and recreated. This happens during the upgrade process; however, if the object to be dropped is schema bound, the upgrade script will not be allowed to do what it wants, causing the upgrade to fall over. Hence, if indexed views have been bound to the GP tables, this is a real risk when a site comes to do an upgrade. Obviously if the person performing the upgrade were aware, they could drop the view and recreate it after the upgrade, but in real life that knowledge is lost as employees leave or contractors move on, causing upgrade pain.
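As a sketch of what an upgrade script runs into, once a schema-bound view references a table, attempts to drop that table fail with an error along these lines (using the example view created further down):

--With SalesVoidedLinesSummary schema bound to the SOP tables:
DROP TABLE dbo.SOP30200;
--Msg 3729, Level 16: Cannot DROP TABLE 'dbo.SOP30200' because it is
--being referenced by object 'SalesVoidedLinesSummary'.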

The second problem is more terminal. Let's try it and see what happens if we create a view over the sales order header and line tables, to speed up counting the number of sales lines by country that are not voided in historical transactions. Create the view on a non-production database like this…

 

SET NUMERIC_ROUNDABORT OFF;
SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT,
    QUOTED_IDENTIFIER, ANSI_NULLS ON;
-- We must schema bind
IF OBJECT_ID ('SalesVoidedLinesSummary', 'view') IS NOT NULL
    DROP VIEW SalesVoidedLinesSummary ;
GO
CREATE VIEW SalesVoidedLinesSummary
    WITH SCHEMABINDING
    AS 
       SELECT  COUNT_BIG(*) Cnt,
               SOP30200.SOPTYPE,
               SOP30300.CCODE as Country
       FROM 
       dbo.SOP30200
       JOIN
       dbo.SOP30300 ON SOP30200.SOPTYPE=SOP30300.SOPTYPE AND SOP30200.SOPNUMBE=SOP30300.SOPNUMBE
       WHERE
       SOP30200.VOIDSTTS=0 --Not Voided
       AND SOP30200.SOPTYPE IN (1,2)
       GROUP BY  SOP30200.SOPTYPE, SOP30300.CCODE
GO

 

Then create the indexed view by adding the index. - REMEMBER I SAID NO PRODUCTION DATABASES WITH THIS LITTLE EXPERIMENT!


--We materialise the view by creating an index on it
CREATE UNIQUE CLUSTERED INDEX IX_SOP30200SOP30300VoidSummary
    ON SalesVoidedLinesSummary (SOPTYPE, Country);
GO

 

We can now query the view data like so..

SELECT  * FROM SalesVoidedLinesSummary where SOPTYPE=1 AND Country='PT'

and compare with querying the underlying tables directly, as we had to before creating the view…

SELECT  COUNT_BIG(*) Cnt,
               SOP30200.SOPTYPE,
               SOP30300.CCODE as Country
       FROM 
       dbo.SOP30200
       JOIN
       dbo.SOP30300 ON SOP30200.SOPTYPE=SOP30300.SOPTYPE AND SOP30200.SOPNUMBE=SOP30300.SOPNUMBE
       WHERE
       SOP30200.VOIDSTTS=0 --Not Voided
       AND SOP30200.SOPTYPE = 1
       AND SOP30300.CCODE='PT'
       GROUP BY  SOP30200.SOPTYPE, SOP30300.CCODE

 

...and we find that going from not having the view to having it dramatically reduces the query time, as shown below. I'm not sure about the precision of those figures, and this isn't a discussion around query optimisation, but they illustrate the kind of read performance an indexed view can buy you.

    Total execution time without view:  17390
    Total execution time with view:       505
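For what it is worth, here is one way such timings can be captured; a sketch only, since I have not recorded how the figures above were produced:

SET STATISTICS TIME ON;
SELECT * FROM SalesVoidedLinesSummary WHERE SOPTYPE = 1 AND Country = 'PT';
SET STATISTICS TIME OFF;
--Elapsed and CPU times for each statement appear in the Messages tab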

 

In production

So the developer or IT pro, after testing that out on the test company database, says "great, now let's create it on the production database as that worked like a charm", creates the view, and then goes to log in to GP and create a sales order…

...only to be met with the dismaying errors that follow...

“An edit operation on table ‘SOP_Master_Number_SETP’ failed. A record was already locked.”

IndexViewGPError1

and

“INSERT failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER, CONCAT_NULL_YIELDS_NULL, ANSI_WARNINGS’. Verify that SET options are correct for use with indexed views and”…

GPIndexViewError2

… and then the support calls start flowing in from the users!

So why can’t we have nice things?

Go back up in this post: as you saw in the graphic, there are certain SQL SET options required to make indexed views work. Sadly these are not compatible with the SET options required to make Dynamics GP work (see the references below), thus the application breaks, simple as that! If those phones are still ringing from your users right now, then simply drop the view and your users will be happy again.
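Dropping the view removes its clustered index along with it, which is all that is needed to restore normal service:

--Removing the indexed view puts GP back to normal working
IF OBJECT_ID ('SalesVoidedLinesSummary', 'view') IS NOT NULL
    DROP VIEW SalesVoidedLinesSummary;
GO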

We did play around a bit but never found a solution to this issue; if you have a workaround do let me know.

References

Mariano Gomez has a nice post about why this is required by Dexterity to make GP run correctly in this post:

Microsoft SQL Server DSN Configuration

Microsoft - Creating Indexed Views

Gavin wrote this which inspired me to document the issue before others fall into it...

Brief overview and comparison of how summary values are stored and calculated in Dynamics GP, Dynamics NAV and Dynamics 365 Business Central

 

Find orphan note records in a Dynamics GP company database

About the Dynamics GP Notes table

Notes are held in the SY03900 company table in Dynamics GP. Every note in that notes table has a NOTEINDX, which is used by other tables in GP to reference that note. This makes the notes system extensible: when new modules/add-ins are created for GP, those modules can simply piggy-back off the existing notes table for their notes too, by inserting notes into the table and referring to them by index. However, this does have the disadvantage that, looking at a random note in the notes table, you have no idea what records may be referencing it.

Hence, if you delete records from tables that reference note records, you orphan any note belonging to the deleted record. The note record still exists, but the record that refers to it has been removed. An example of where this could happen is if someone were to delete a sales order line using SQL DELETE, without also deleting the note record in SY03900 that is referenced by that sales order line record.

You see, the notes table does not have a "source" field to identify the "owner" table, so there is nothing in the note record to indicate what table it originated from. Thus you must check the NOTEINDX fields in every table across the entire company database in order to find its owner.

Luckily for us, using dynamic SQL we can do this. We can query SQL Server for the definitions of all the tables in the database containing a field named NOTEINDX. We can then copy all the values from each of those tables into one big list of NOTEINDX values. Finally we can compare the notes table against that big list of valid values and find where a value in the notes table does not exist in any other table in the GP company database. These will (most likely) be orphan records. Note that you must exclude the SY03900 table itself when preparing this consolidated list!

Dynamic SQL to find all references to note records

The following script does this. It looks for any table that has a field named NOTEINDX and inserts all the note index numbers from those tables into a temp table, from where you may join back to the notes table to find the notes that no longer have a reference in the database (orphaned).

If you are then going to use the results to delete notes, beware: a third-party product might use notes but not name its reference column NOTEINDX, in which case those references will not be found by this script. So use this with care, especially if you start removing notes based on it; check what they are first and gain confidence that they are genuine orphans.

 

IF EXISTS (
    SELECT *
    FROM tempdb..sysobjects
    WHERE NAME = '##NotesConsolidated'
    )
    DROP TABLE ##NotesConsolidated

SET NOCOUNT ON

CREATE TABLE ##NotesConsolidated (
    NOTEINDX numeric(19,5)
    ,TableName VARCHAR(1000)
    )
GO
DECLARE @SqlStatement VARCHAR(1000)

-- Build one INSERT per table that has a NOTEINDX column,
-- excluding the notes table itself (dbo.SY03900)
DECLARE This_cursor CURSOR
FOR
SELECT 'INSERT INTO ##NotesConsolidated (NOTEINDX, TableName) select NOTEINDX, ''[' + SCHEMA_NAME(schema_id) + '].[' + t.NAME + ']'' from [' + SCHEMA_NAME(schema_id) + '].[' + t.NAME + ']'
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.NAME = 'NOTEINDX' AND NOT (t.name = 'SY03900' AND SCHEMA_NAME(schema_id) = 'dbo');

OPEN This_cursor

FETCH NEXT
FROM This_cursor
INTO @SqlStatement

WHILE (@@fetch_status = 0)
BEGIN
    EXEC (@SqlStatement)

    FETCH NEXT
    FROM This_cursor
    INTO @SqlStatement
END

CLOSE This_cursor
DEALLOCATE This_cursor

-- Notes that no other table references: candidate orphans
SELECT * FROM SY03900
WHERE NOTEINDX NOT IN (
    SELECT NOTEINDX FROM ##NotesConsolidated)
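If, after inspecting what the final SELECT returns, you are confident the rows really are orphans, deletion can follow the same shape; this is a cautious sketch only, take a backup first and leave the ROLLBACK in place until you are sure:

--Remove verified orphan notes (backup first!)
BEGIN TRANSACTION
DELETE FROM SY03900
WHERE NOTEINDX NOT IN (
    SELECT NOTEINDX FROM ##NotesConsolidated)
ROLLBACK TRANSACTION --change to COMMIT TRANSACTION once confident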

 

Results 

Examples from the four hundred and seven tables (your count will differ) containing references to notes in the notes table:
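To reproduce that count on your own company database, the same catalog views the script uses will give it directly:

--How many tables in this database carry a NOTEINDX column?
SELECT COUNT(DISTINCT t.object_id) AS TablesWithNoteIndex
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.object_id = c.object_id
WHERE c.name = 'NOTEINDX';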

Rebuild QTYONORDER of IV00102 Dynamics GP

I had to rebuild the quantity on purchase order in the inventory site quantities table today (On Order in Item Enquiry).

I wrote this SQL script, which works for our install with our settings, but you should verify your install does not have other modules or modifications that might make it break.

 

Comment out the UPDATEs and uncomment the SELECTs to see what is going to happen first. Running SQL against your production environment is dangerous: test, test, test first. This is on the HIGH side of risky as a script to run.

 

-- Rebuilds the qty on order in inventory from the purchase order tables
-- Don't know how the manufacturing module would play with this
BEGIN TRANSACTION
-- Part 1: correct the site/location qty on order from receipts and purchase orders
;
WITH CTE_ShippedQtys
AS (
    SELECT PONUMBER
        ,POLNENUM
        ,SUM(QTYSHPPD * UMQTYINB) TotQTYSHIPPED
        ,SUM(QTYREPLACED * UMQTYINB) TotQTYREPLACED
    FROM POP10500
    GROUP BY PONUMBER
        ,POLNENUM
    )
,CTE_OnOrderQtys
AS (
    SELECT SUM(CASE WHEN POTYPE IN (1, 3) THEN ((QTYORDER - QTYCANCE) * POP10110.UMQTYINB) - ISNULL(TotQTYSHIPPED, 0) + ISNULL(TotQTYREPLACED, 0) ELSE 0 END) AS QTYORDER
        ,ITEMNMBR
        ,LOCNCODE
    FROM POP10110
    LEFT JOIN CTE_ShippedQtys ON POP10110.PONUMBER = CTE_ShippedQtys.PONUMBER
        AND POP10110.ORD = CTE_ShippedQtys.POLNENUM
    WHERE POLNESTA IN (2, 3)
    GROUP BY ITEMNMBR
        ,LOCNCODE
    )
UPDATE IV00102
SET QTYONORD = ISNULL(QTYORDER, 0)
--SELECT QTYONORD, QTYORDER, * -- swap with the UPDATE above to preview changes
FROM IV00102
LEFT JOIN CTE_OnOrderQtys ON IV00102.ITEMNMBR = CTE_OnOrderQtys.ITEMNMBR
    AND IV00102.LOCNCODE = CTE_OnOrderQtys.LOCNCODE
    AND IV00102.LOCNCODE != ''
WHERE IV00102.LOCNCODE != ''
    AND (QTYONORD != QTYORDER
        OR (QTYONORD != 0 AND QTYORDER IS NULL))
ROLLBACK TRANSACTION -- change to COMMIT TRANSACTION once happy with the results

-- Sum up the location on-order qtys for the summary record
BEGIN TRANSACTION;
-- Part 2: correct the summary (blank location) record
WITH CTE_SumLocations
AS (
    SELECT SUM(QTYONORD) totqty
        ,ITEMNMBR
    FROM IV00102
    WHERE LOCNCODE != ''
    GROUP BY ITEMNMBR
    )
UPDATE IV00102
SET QTYONORD = totqty
--SELECT * -- swap with the UPDATE above to preview changes
FROM IV00102
JOIN CTE_SumLocations ON CTE_SumLocations.ITEMNMBR = IV00102.ITEMNMBR
    AND IV00102.LOCNCODE = ''
    AND IV00102.RCRDTYPE = 1
    AND QTYONORD != totqty

ROLLBACK TRANSACTION -- change to COMMIT TRANSACTION once happy with the results

Yes you can - SQL Server Table Compression and Dynamics GP



Today I attended SQLBits in Manchester UK, where there was a session, “Performance tuning SQL server on crappy hardware”, by Monica Rathbun.

Monica has the fast and punchy presentation style I enjoy. Although I had already experienced or knew most of what was covered, it was still a good presentation. There was one takeaway I noted in my notebook to come back to later. Now back at the hotel I’m having a look.

Row/Page compression - More Data in MEMORY

Monica was promoting the use of COMPRESSION – not just backup compression but ROW/PAGE database compression in the database engine itself.

By compressing the data in the database, the theory goes that you reduce I/O required to move the data around and allow much more relevant data to be held in SQL server’s caches and perhaps the underlying storage system’s caches too. Having more data in memory leads to a more performant system.

For some reason the existence of compression in the database engine was something that had slipped under my radar, perhaps because it used to be an Enterprise-only feature; now it’s available to me in our SQL 2016 Standard Edition.


This is particularly interesting to Dynamics GP users, as our database is full of padded CHAR data types, and has very wide tables full of only partially used columns (depending on modules used) or repeating data in the case of settings flags. Dynamics GP also has many tables full of decimal columns that are all zero, again due to configuration or options in how GP is set up or what modules are active. So from the outset it feels like Dynamics GP would benefit.

“Enabling compression only changes the physical storage format of the data that is associated with a data type but not its syntax or semantics.” This means the compression occurs inside the SQL engine but is transparent to the application interacting with SQL Server. There are two levels of compression of interest and available to us. ROW compression takes each data row in the table and:

  • It uses variable-length storage format for numeric types (for example integer, decimal, and float) and the types that are based on numeric (for example datetime and money).

  • It stores fixed character strings by using variable-length format by not storing the blank characters.

So imagine how much room can be saved when you consider that the fields in Dynamics GP are fixed length!
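As a quick illustration, assuming the usual CHAR(31) definition of ITEMNMBR, a query like this shows the padding that ROW compression avoids storing:

--DATALENGTH counts the full fixed width; LEN ignores trailing blanks
SELECT TOP 10
    ITEMNMBR
    ,DATALENGTH(ITEMNMBR) AS StoredBytes    --always 31 for CHAR(31)
    ,LEN(ITEMNMBR) AS UsefulCharacters      --often far fewer
FROM IV00101;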

What is more, there is another option: PAGE compression, which looks at repeating data within the pages stored on the filesystem and compresses that. As this works over an entire page it is heavier on CPU resources, but it is great where there is a lot of repeated data down the rows of a table. Wait, repeating data down the rows of a column? That is exactly what we get lots of, due to status flags and little-used fields in the GP tables that vary little from top to bottom of the table.

Just look at something like the Item Master table IV00101 or one of the pricing tables etc. There are distributions and settings that are the same, repeated for all items, ripe for compression as this leads to repeated content in the pages.

Data repeated down from table IV00101 Item Master


So both the nature of the data in the tables and the use of compressible data types by Dynamics GP sure makes it look good for compression.


Compression does cause more CPU load, but unless you are pulling millions of rows it seems insignificant; see more here:

https://sqlperformance.com/2017/01/sql-performance/compression-effect-on-performance where it is shown to have little effect.


We can run EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE';

This shows us, by sampling a subset of the table (much as statistics sampling does), how much space should be saved by compressing the table, without having to actually do it. Let's try it with Item Master in Dynamics GP.

SELECT COUNT(*) from IV00101


EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE' ;

Item Master compression test

So we can see the Item Master table goes from 81,944KB to 16,304KB; that is only 20% of what it was!

Now trying it with IV00108, which has SELECT COUNT(*) FROM IV00108 = 6,107,169 rows, we get 779,816KB going down to 139,440KB; that is only 19% of what it was before.

Compression testing with IV00108

So you can see how much saving can be achieved this way; imagine the reduced I/O when reads are only 20% of what they used to be.

Even going down to ROW compression gives you only slightly less compression but less overhead too:

Row compression estimate results

Only 46% of what it was with row level compression.


Downside

There are not many downsides. The first is technical: compressing the data takes CPU, but most SQL servers are not CPU bound, so this should not be an issue. Expect something like a 10-30% increase, so check your current CPU load. As compression is selected table by table, you could tackle only the most sizable tables in GP to get the majority of the benefit, without having to apply compression to every table and index. When data is written you take a hit compressing it, to reap the rewards later, so tables and indexes with great numbers of inserts per second may cause issues (they would have to be big loads).

The article below has some good scripts to see what will work and what will not…

https://thomaslarock.com/2018/01/when-to-use-row-or-page-compression-in-sql-server/


Upside

Much smaller data means less I/O and more in cache, and more data in memory makes for more efficient queries.


Summary

I am going to gradually add tables to compression and see what happens to CPU usage. The benefits should be substantial in terms of reads, so it seems well worth pursuing.
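For anyone wanting to do the same, this is the standard DATA_COMPRESSION syntax for turning it on per table, nothing GP specific about it; as ever, test on a non-production database first:

--Apply page compression to the table, then to all of its indexes
ALTER TABLE dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);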


Support

This article would indicate it’s supported for Dynamics GP, although the tool referenced for choosing tables to compress is no longer available; however, it is possible to work with the database manually to turn on compression.

https://blogs.msdn.microsoft.com/nav/2011/07/21/sql-server-data-compression-and-microsoft-dynamics/


This is the reason going to conferences is so worthwhile; this is only one of many things I learnt or had reinforced today in the various sessions I attended.