SSMS SQL Hazard

Let me point out a hazard when using SSMS to develop a SQL delete query, especially a more complex one that takes time to build up. I'll also cover some of the ways I work and how they minimise the chances of damaging data.
Let us say we are developing a simple query to remove orphan records from the prices table of Dynamics GP:
 
DELETE IV00107
SELECT *
FROM IV00107
LEFT JOIN IV00108
ON IV00107.ITEMNMBR=IV00108.ITEMNMBR
AND IV00107.CURNCYID=IV00108.CURNCYID
AND IV00107.PRCLEVEL=IV00108.PRCLEVEL
AND IV00107.UOFM=IV00108.UOFM
WHERE
IV00107.PRCLEVEL='LIST'
AND IV00108.ITEMNMBR IS NULL
 
…as we develop it, I highlight from SELECT down to NULL and hit F5 to run it. That runs only the SELECT, so we can see what will be deleted later. When it is time to delete, I put a double dash in front of the SELECT and run the whole statement, so the DELETE picks up the FROM, JOIN and WHERE clauses and executes.
 
Let us say we take a break, then come back having got approval to delete the records. Our eye catches the DELETE, so we highlight from DELETE down to NULL and hit F5. Oh no! We have just deleted everything in IV00107! Think about it: the DELETE and the SELECT are interpreted (correctly) as two different operations.
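Highlighted like that, SQL Server parses the selection as a batch of two independent statements. The first is simply:

DELETE IV00107
-- no FROM, JOIN or WHERE attached, so every row in the table is removed

The second is the full SELECT, whose join and WHERE clause only restrict what the SELECT returns, not what the DELETE removes.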
 
Working this way, developing the record set to remove as a SELECT and then adding the DELETE to the top ready to be run, is a common pattern for me. I usually put a double dash in front of the DELETE, as shown below, to prevent this mistake or the mistake of just hitting F5 during testing without remembering to highlight the SELECT first. The action of uncommenting the DELETE then prompts me to also comment out the SELECT.
 
--DELETE IV00107
SELECT *
FROM IV00107
LEFT JOIN IV00108
ON IV00107.ITEMNMBR=IV00108.ITEMNMBR
AND IV00107.CURNCYID=IV00108.CURNCYID
AND IV00107.PRCLEVEL=IV00108.PRCLEVEL
AND IV00107.UOFM=IV00108.UOFM
WHERE
IV00107.PRCLEVEL='LIST'
AND IV00108.ITEMNMBR IS NULL

I made this mistake against a test SQL database today, a reminder of why you should develop against a test environment. Luckily I use SSMSBoost, a productivity plug-in for SSMS. This add-in warns when you are executing a DELETE without a WHERE clause, throwing up a fatal action guard window, so it prevented me from actually causing any damage.
 
ssmsboost fatal action guard window
 
Immediately I could see what I had done and hit the No button.
 
Another factor that led to this potential mistake was not aliasing my tables like I normally would. If I had written the following, the DELETE would not have been able to find the table i7, so it would not have executed and would instead return:

Msg 208, Level 16, State 1, Line x
Invalid object name 'i7'

DELETE TOP(1000) i7
SELECT *
FROM IV00107 i7
LEFT JOIN IV00108 i8
ON i7.ITEMNMBR=i8.ITEMNMBR
AND i7.CURNCYID=i8.CURNCYID
AND i7.PRCLEVEL=i8.PRCLEVEL
AND i7.UOFM=i8.UOFM
WHERE
i7.PRCLEVEL='LIST'
AND i8.ITEMNMBR IS NULL
 
Yet another practice that would have helped is that I would normally also have a TOP clause in the query. I would then press F5 ten times or more, in this case until there are no more rows to process. I use TOP to make each batch run faster and to avoid lock escalation (we have around two million records in the price table). Other times, if there is too much to remove by repeatedly hitting F5, I'd put it in a loop with an @@ROWCOUNT check to see if anything is left to process, although that would not have stopped a disaster either. In reality a delete on this table would have taken so long that I would soon have seen my mistake and cancelled the query before it committed, but I'm illustrating a principle here. The TOP clause would at least have limited how much data was lost.
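A minimal sketch of that batched delete loop, using the same tables and filter as the query above (the batch size of 1000 is arbitrary):

DECLARE @rows INT = 1

WHILE @rows > 0
BEGIN
DELETE TOP (1000) IV00107
FROM IV00107
LEFT JOIN IV00108
ON IV00107.ITEMNMBR=IV00108.ITEMNMBR
AND IV00107.CURNCYID=IV00108.CURNCYID
AND IV00107.PRCLEVEL=IV00108.PRCLEVEL
AND IV00107.UOFM=IV00108.UOFM
WHERE
IV00107.PRCLEVEL='LIST'
AND IV00108.ITEMNMBR IS NULL

-- @@ROWCOUNT is zero once there is nothing left to delete
SET @rows = @@ROWCOUNT
END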
 
Luckily I didn't end up with any issues at all, but as is often the case, an accident only happens when a perfect storm of factors comes together. Keeping to my normal way of working protects me, but I thought others might learn from why I work the way I do.
 
If I had removed the records in production, it would not have been a big deal in this case, as we restore our production database into our test company regularly and automatically (see my post on how to do this). As these prices do not change much from day to day, I could have just squirted the missing records from that company into production, then restored the test company to a point in time and re-synced the prices from that restore, resulting in no impact on users. This is one of the reasons I'm a fan of having fresh copies of production available in test; it is also useful for patching up mistakes users make in a timely manner.
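A rough sketch of finding the rows to squirt back, using placeholder database names PRODDB and TESTDB and the same key columns as the join above (a real script would then insert these rows, listing the columns explicitly):

-- Price rows present in the restored test copy but missing from production
-- PRODDB and TESTDB are placeholder names for the production and restored test databases
SELECT t.*
FROM TESTDB.dbo.IV00107 t
WHERE NOT EXISTS (
SELECT 1
FROM PRODDB.dbo.IV00107 p
WHERE p.ITEMNMBR = t.ITEMNMBR
AND p.CURNCYID = t.CURNCYID
AND p.PRCLEVEL = t.PRCLEVEL
AND p.UOFM = t.UOFM
)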
 

Uppercase SQL script using SSMS

I just realised I use this functionality a lot and thought that others might not know it was possible.

Highlight a lower case piece of text in the SQL editor window of SSMS and press CTRL+SHIFT+U and the text will change to upper case (CTRL+SHIFT+L for lower case).

SQL Upper case

This is handy when someone else has written a SQL script with GP field names in lower case. I'm so used to seeing them in upper case that making them upper speeds up my reading.

This is just an example. You may also highlight the whole script, or individual lines, and change the case; the above GIF is just showing the principle. In this particular case it would have been quicker to make the whole lot upper case and then go back and lower case the table aliases.

Using SQL UNPIVOT operator to reconcile Dynamics GP inventory items

Working with the Dynamics GP inventory tables you will encounter the field named QTYTYPE a lot. This is usually seen with its partner LOCNCODE.

Inventory in GP can reside in different locations (for example the depots New York, London and Sydney). Stock at a location is then broken down further into five quantity types. A quantity type can be thought of as the condition/state of the item, indexed as follows:

1 = On Hand
2 = Returned
3 = In Use
4 = In Service
5 = Damaged

This is a way we can categorise where and what state/status the inventory is in.

The stock levels for each item are stored in the table IV00102, keyed by ITEMNMBR and LOCNCODE. To avoid creating four times more rows, the table architect decided to pivot the table, giving each quantity type its own column. The field names are as follows:

QTYTYPE Field Name Description
1 QTYONHND On Hand
2 QTYRTRND Returned
3 QTYINUSE In Use
4 QTYINSVC In Service
5 QTYDMGED Damaged

 

This presents a problem, as the other inventory tables have a QTYTYPE field and hold separate rows for the different quantity types, which makes joining to IV00102 troublesome. One solution (I know there are others) is to use the UNPIVOT operator in T-SQL to unpivot the IV00102 table, causing the columns to present themselves as extra rows.

SELECT ITEMNMBR
,LOCNCODE
,CASE valuename
WHEN 'QTYONHND'
THEN 1
WHEN 'QTYRTRND'
THEN 2
WHEN 'QTYINUSE'
THEN 3
WHEN 'QTYINSVC'
THEN 4
WHEN 'QTYDMGED'
THEN 5
END AS QTYTYPE
,QtyValue
FROM IV00102
UNPIVOT(QtyValue FOR valuename IN (
QTYONHND
,QTYRTRND
,QTYINUSE
,QTYINSVC
,QTYDMGED
)) UnPiv
WHERE ITEMNMBR = '100XLG'

This gives rise to the following result set.

SQL results

See how each quantity type now has its own row rather than being a named column? We also used a CASE statement to map the column names back to their index numbers.
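For what it is worth, another way to get the same shape of result is CROSS APPLY with a VALUES constructor, which folds the number mapping into the unpivot itself. A sketch that should return the same rows as the query above, assuming the quantity columns are not nullable (UNPIVOT silently drops NULLs, CROSS APPLY does not):

SELECT ITEMNMBR
,LOCNCODE
,v.QTYTYPE
,v.QtyValue
FROM IV00102
CROSS APPLY (VALUES
(1, QTYONHND)
,(2, QTYRTRND)
,(3, QTYINUSE)
,(4, QTYINSVC)
,(5, QTYDMGED)
) v (QTYTYPE, QtyValue)
WHERE ITEMNMBR = '100XLG'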

Example: reconciling inventory values using SQL

In the blog post Reconciling Quantity on Hand – SQL Script, Mahmood M. Alsaadi shows a script to reconcile inventory, so let us not reinvent the wheel and start with that. The example provided in his post, at the time of writing, did not take the quantity types into account. Unfortunately I ended up debugging the SQL to work this out, only to return to the comments on the original post and find someone else had done the same and pointed it out. However, the solution they proposed was to restrict the query to QTYTYPE=1 so that only on hand quantities are reconciled. I felt I could do better than that!

I built on the original script, adding in the UNPIVOT introduced above:

SELECT TRX_BALANCE.ITEMNMBR AS ItemNumber
,TRXLOCTN AS Location
,Master_Balance.QTYTYPE AS QTYTYPE
,BALANCE AS TRX_BALANCE
,QtyValue AS Master_Balance
,ATYALLOC AS Master_AllocatedQuantity
,QtyAvailable
,BALANCE - QtyValue AS Variance
FROM (
SELECT ITEMNMBR
,TRXLOCTN
,QTYTYPE
,SUM(QTYRECVD) - SUM(QTYSOLD) AS BALANCE
FROM dbo.IV10200
--WHERE IV10200.ITEMNMBR='40-322'
GROUP BY ITEMNMBR
,TRXLOCTN
,QTYTYPE
) AS TRX_BALANCE
LEFT OUTER JOIN (
SELECT ITEMNMBR
,LOCNCODE
,CASE valuename
WHEN 'QTYONHND'
THEN 1
WHEN 'QTYRTRND'
THEN 2
WHEN 'QTYINUSE'
THEN 3
WHEN 'QTYINSVC'
THEN 4
WHEN 'QTYDMGED'
THEN 5
END AS QTYTYPE
,QtyValue
,CASE valuename
WHEN 'QTYONHND'
THEN ATYALLOC
ELSE 0
END AS ATYALLOC
,CASE valuename
WHEN 'QTYONHND'
THEN QtyValue - ATYALLOC
ELSE 0
END AS QtyAvailable
FROM IV00102
UNPIVOT(QtyValue FOR valuename IN (
QTYONHND
,QTYRTRND
,QTYINUSE
,QTYINSVC
,QTYDMGED
)) IV00102Pivot
) AS Master_Balance ON TRX_BALANCE.ITEMNMBR = Master_Balance.ITEMNMBR
AND TRX_BALANCE.TRXLOCTN = Master_Balance.LOCNCODE
AND TRX_BALANCE.QTYTYPE = Master_Balance.QTYTYPE
WHERE BALANCE - QtyValue <> 0

This will output the item, location and quantity type of the items that have an incorrect inventory level in the IV00102 table (this is the table viewed when looking at the Dynamics GP item enquiry form).

This can be a useful script to set up as a scheduled SQL Agent job that notifies your GP admin that inventory needs reconciling, even listing the items that need attention. It could also lead to generating a macro to do the reconcile… one for another day.
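A minimal sketch of that job step, assuming Database Mail is configured, the reconciliation query above has been wrapped in a hypothetical view named dbo.InventoryVariances in the sample company database TWO, and the mail profile and recipient are placeholders:

-- Email the GP admin whenever the reconcile query finds variances
IF EXISTS (SELECT 1 FROM TWO.dbo.InventoryVariances)
BEGIN
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'GPAlerts'
,@recipients = 'gpadmin@example.com'
,@subject = 'Dynamics GP items needing an inventory reconcile'
,@query = 'SELECT ItemNumber, Location, QTYTYPE, Variance FROM dbo.InventoryVariances'
,@execute_query_database = 'TWO'
,@attach_query_result_as_file = 1
END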

Document attach feature: Dynamics GP database and BusObjKey formats

I've been working on an archive project recently where I've been archiving records from GP into another archive database. This meant looking at how attachments are handled in Dynamics GP, in order to carry any attachments related to the records being moved over into the archive.
 
The table CO00102 relates an attachment to the record it is attached to and contains the BusObjKey column, which is implemented as a resource path back to the database record the attachment relates to. Using a resource path makes it flexible, allowing the widest variety of GP objects to be compatible with the attachment system.
 
CO00102

The resource path is implemented in an inconsistent manner, varying due to the different nature of the records involved and also (I think) through misuse by developers.

An example of the format pattern is:

0\PM\Purchase Order History\290941

Where the elements are as follows:

0 = Dynamics GP Product Dictionary ID (see hyperlink)
PM = Table Series ID
Purchase Order History = Table "Display Name" (see Microsoft Dynamics GP >> Resource Descriptions >> Tables)
290941 = Record ID; could be a customer ID, purchase order number or other identifier for the record

However, for attachments made against notes windows, the Record ID is the NOTEINDX (the ID of the note record), but encoded into a hexadecimal version! Some developer was obviously thinking we didn't have enough hoops to jump through when they imagined that one up and fancied burning some more CPU on the SQL server!

Update 2017-01-12: If you've used the OLE notes migration tool, you will notice that migrated OLE notes look like they use this way of encoding the record ID too, which might be where this scheme came from.

The article Document Attach vs. Notes and how to locate whether a master record has a document attached explains how to get from the note index against an attachment record to the hex version of that note index for participating in the BusObjKey column.

CONVERT(VARCHAR(MAX), CONVERT(BINARY(4), CAST(NOTEINDX AS INTEGER)), 2)
 
In the screenshot above, the note ending in number 000001CF2 is actually note index ID 7410.
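Going the other way, the same CONVERT style will turn the hex back into the note index. A quick sketch, assuming the hex record ID portion has already been split out of the BusObjKey string:

-- '00001CF2' is the hex record ID portion of a BusObjKey; this returns 7410
SELECT CONVERT(INT, CONVERT(BINARY(4), '00001CF2', 2)) AS NOTEINDX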
 
Now examining the final format in the list: this one is for the Advanced Credit Control module (module 1577). For this module they decided to shorten the format of the path. Note how it is one "folder" shorter in depth than the other paths; it misses out the "Table Series" and jumps straight to the table display name. Doing this breaks some SQL scripts you might encounter out on the internet that are used for reporting on attachments. They assume the same number of path separators when deconstructing the path, but this format has one fewer. Oh dear! So what I'm saying is that you must program/script defensively when working with these BusObjKey fields, as you are at the mercy of developers; expect to use a lot of CASE statements or to create a SQL scalar function to convert your BusObjKey values!
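As an example of that defensive parsing, here is a minimal sketch that pulls the trailing record ID segment out of the path using plain string functions, so it works however many separators the path happens to have (keys containing no backslash at all are simply filtered out):

-- Take everything after the last backslash of the BusObjKey as the record ID
SELECT BusObjKey
,RIGHT(BusObjKey, CHARINDEX('\', REVERSE(BusObjKey)) - 1) AS RecordID
FROM CO00102
WHERE CHARINDEX('\', BusObjKey) > 0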

I would be really interested to expand this article, so if you have encountered any other formats, please drop me a comment against this post!