DECLARE @ROWCOUNTER AS INT;
SET @ROWCOUNTER = 903843; -- any non-zero seed; overwritten by @@ROWCOUNT after the first batch
WHILE @ROWCOUNTER != 0
BEGIN
    DELETE TOP (1000) FROM IV00108
    OUTPUT DELETED.[ITEMNMBR], DELETED.[CURNCYID], DELETED.[PRCLEVEL], DELETED.[UOFM], DELETED.[TOQTY], DELETED.[FROMQTY], DELETED.[UOMPRICE], DELETED.[QTYBSUOM], DELETED.[DEX_ROW_TS] INTO IV00108_History
    WHERE PRCLEVEL IN ('LIST');
    SET @ROWCOUNTER = @@ROWCOUNT;
END;
--SELECT * FROM IV00108_History
Create a history table with the same schema as the live table (might want to remove the indexes to keep size down).
Use the above script, supplying the price list name to move the prices from the live IV00108 table to the new custom table IV00108_history.
This moves the records in batches of 1000 to keep the transaction locking under control in the database. If too many rows are locked at once, SQL Server escalates from row locks to page locks and eventually to table locks; performance degrades and users start to complain at that point. The price table has three million rows in it, so this is a wise precaution.
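To watch the escalation described above while the batch loop runs, you can inspect current locks from a second session. A minimal sketch using the standard sys.dm_tran_locks DMV (run it against the same database while the delete is in progress):

```sql
-- Sketch: summarise the locks currently held in this database.
-- As batch sizes grow you would expect to see KEY (row) locks give way
-- to PAGE and eventually OBJECT (table) level locks.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC;
```

If the batch size of 1000 still shows escalation on your hardware, reducing it is the simplest lever to pull.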
Quality and finance audits
In annual audits, as a software manager, the question always crops up: how is work in progress protected? All the project source code is normally safely locked away in Team Foundation Server, which is in turn backed up, with the backups shipped off site. But what about drive failure on the local developer machine? Hours or days of work since the last check-in could be lost!
My answer to this is to use the Auto Shelve extension together with a weekly backup of my development machines.
Auto Shelve protects against work lost to drive failure since the last check-in
Auto Shelve is a Visual Studio extension that periodically and automatically shelves your pending changes into source control as you work. It does this in the background and so has minimal impact. Should my SSD fail catastrophically, work since my last check-in will have been preserved in a shelveset on the Team Foundation Server.
After setting it up, Auto Shelve keeps overwriting the shelveset with the latest pending changes. After using it for some time now, I can say it is effortless backup and I love it. I will love it even more the day I have a disaster on my hands.
Install it through Tools >> Extensions and Updates.
Search the Online node on the left for autoshelve and install it.
Go to the settings to set your shelveset name and the frequency of shelving.
This is a polished Excel document that you can point at your Dynamics GP database to get dashboard information presented in a clean interface.
See this post here Analytics For Dynamics GP – BI Solution From Mark Polino (DynamicAccounting.Net) #MSDYNG
Looking at the dashboard got me thinking. I really struggle with dashboards; I worry about what the summaries are hiding and always want to see more. That said, they are useful for some KPI measurements and as early, high-level warning signs.
Personally, with 30,000+ customers and a similar number of products, a typical dashboard needs to take a different shape, so it's not for me. It is great to see, though, and others may find it useful, especially those with fewer customers and products to keep tabs on than we have.
Now back to my Power Pivot and DAX – loving the BI experience in Office 2013!
AND 1 = CASE
    WHEN @IncludeFreight = 'Y' THEN 1
    WHEN (ItemNumber NOT LIKE '9898%') AND (ItemNumber <> 'FREIGHT') THEN 1
    END
The above is a snippet from a "catch all" query that I uncovered in one of our reporting SQL statements. It was declared WITH RECOMPILE, as it should be in order to optimise properly, but the query was refusing to finish executing against the 4.5 million rows in the table involved, generating lots of IO waits on the table.
It seems that the query optimiser just couldn't handle the CASE in the WHERE clause with parameters.
Changing it to the following made it execute in seconds.
AND ( @IncludeFreight='Y'
OR (@IncludeFreight='N' AND ItemNumber != 'FREIGHT' AND ItemNumber NOT LIKE '9898%'))
Most of the theory behind this behaviour can be seen in this article: http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/
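The article linked above also suggests the statement-level OPTION (RECOMPILE) hint as another way to handle catch-all queries (available from SQL Server 2008 SP1 CU5 onwards), since it lets the optimiser compile a plan for the actual parameter values and prune the dead branch. A sketch only, with a hypothetical table name standing in for our reporting table:

```sql
-- Sketch: statement-level recompile for a catch-all filter.
-- dbo.ReportingTable and the SELECT list are hypothetical; the
-- @IncludeFreight parameter and ItemNumber filters are from the query above.
SELECT ItemNumber
FROM dbo.ReportingTable
WHERE (@IncludeFreight = 'Y'
   OR (@IncludeFreight = 'N' AND ItemNumber <> 'FREIGHT' AND ItemNumber NOT LIKE '9898%'))
OPTION (RECOMPILE);
```

The trade-off is a compile on every execution, which is usually acceptable for reporting queries like this one but worth weighing for high-frequency calls.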