Limitations using Multiple Binning and a SQL script - Dynamics GP

Priorities for bins can be entered against an item in the Item Site Default Bins window. The documentation says this is so the bins can be prioritised, controlling the order in which they are consumed. In practice the GP application does not seem to use this priority for anything other than ordering the pick list in the UI, from which the user can manually choose where to take the stock from. To me this does not seem well integrated, but to keep the application UI simple and flexible I can see why it may have been done this way.

The default position is to take the stock from the bin defined in the specific boxes above the priority scrolling window. For example, the Sales Order Fulfilment bin will be used by default, and the user is prompted once supply from that bin is exhausted; it is at this point the user is given the priority list to help them choose where the remainder of the stock should be taken from.

Item Site Default Bins Window Dynamics GP
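
For reference, the fulfilment default shown in that window is held on the item/site record in IV00102, the same column the script later in this post joins on. A quick look, with a hypothetical item and site:

SELECT ITEMNMBR
	,LOCNCODE
	,SOFULFILLMENTBIN
FROM IV00102
WHERE ITEMNMBR = 'WIDGET100' -- hypothetical item number
	AND LOCNCODE = 'MAIN' -- hypothetical site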

Back in the day, the first thing I did with GP was to automate picking and fulfilment. Advanced Distribution didn’t exist, or we didn’t know about it back then. Picking and fulfilment in a distribution business with a large warehouse is important, so a better solution was produced. When orders are sent to the warehouse from the SOP entry screen (a button alternative to save), the order is checked for credit, for ship-complete requirements, and for many other conditions. A Windows service performs all these checks; finally, if appropriate, a pick list is printed to PDF for archive purposes and also to a printer for warehouse picking. Another copy is sent to the large automated storage carousels to allow auto-picking from these motorised vertical stores when the picker arrives at them.

The picker takes the next pick list from the printer and picks it; new orders get new pick lists on the printer. All our documentation is printed using Reporting Services, a decision made in the betas of RS; we adopted it years before GP got native support for RS. Before that we used the free Crystal .NET version that shipped with the first versions of .NET. Basically, GP Report Writer sucks, even more so when multilingual, highly formatted, multi-output-format invoices/order confirmations/pick lists/despatch notes/receipts/quotations etc. are needed.

A gripe I have with the native multiple binning is that the location of the goods to be picked cannot be “allocated” to the order before picking. How can a picker be directed to the product if it is split over bins, when the bins and quantities are not tied to the order? Hence I have created a table to hold this info against the order: which bins should be used for the pick, for that item. In GP it is only when an item on the order is fulfilled that the link is made, allocating stock in the bin to the order. The goods have to be picked before they can be fulfilled, so this direct link between Qty Fulfilled and Qty Allocated seems wrong to me for our use case.
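
By way of illustration, here is a minimal sketch of such a child table, with entirely hypothetical names and GP-style key columns; one row per bin an order line should be picked from, captured at pick time:

-- Hypothetical bespoke pick allocation table (names and types are mine, not GP's)
CREATE TABLE canford.CA_PickListBins (
	SOPNUMBE CHAR(21) NOT NULL -- SOP order number
	,LNITMSEQ INT NOT NULL -- order line sequence
	,ITEMNMBR CHAR(31) NOT NULL -- item number
	,LOCNCODE CHAR(11) NOT NULL -- site
	,BIN CHAR(15) NOT NULL -- bin to pick from
	,QTYTOPICK NUMERIC(19, 5) NOT NULL -- qty to take from this bin
	,CONSTRAINT PK_CA_PickListBins PRIMARY KEY (SOPNUMBE, LNITMSEQ, BIN)
	)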

I am getting off the point, which is that the fulfilment we use is also a bespoke replacement for the native GP system and is very wizard and barcode driven. I don’t want pickers choosing where to take stock from, I want to push them to the stock. Besides, if product variations are held in different locations, sales users can then direct the picker to the correct variation (say split packs and sealed packs).

The advantage of all this being bespoke is that the priorities of the bins can actually be used. I wrote the script below to act as the source of bin quantities for an item at a location. These are polled at pick production time and frozen as an allocation, by convention, in a child table of the pick list table. This means the bin locations can be printed on the pick lists, sending the picker to all the locations where stock needs picking, not just the default.

I thought this might be handy for others using Excel or other techniques to pick items. Feel free to hack it to your purposes!

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author: Tim Wappat
-- Create date: 12th Jan 2016
-- Description: Fetch Bin Qtys for an item ordered by priority
-- Defaults to the SOP Fulfilment Bin if no Priorities
-- =============================================
CREATE PROCEDURE canford.CA_SOP_BinQtysByItem (
	@ITEMNMBR CHAR(31)
	,@LOCNCODE CHAR(11)
	,@QTYTYPE SMALLINT
	)
AS
SET NOCOUNT ON;

-- Gets the bin qtys ordered by bin priority; bins with no priority row
-- default to 0 for the SOP fulfilment bin and sort last (255) otherwise
SELECT t1.ITEMNMBR
	,t1.LOCNCODE
	,t1.BIN
	,t1.QTYTYPE
	,t1.QUANTITY
	,t1.ATYALLOC
	,ISNULL(t2.[PRIORITY], CASE
			WHEN t3.SOFULFILLMENTBIN = t1.BIN
				THEN 0
			ELSE 255
			END) AS [PRIORITY]
FROM IV00112 t1
LEFT JOIN IV00117 t2 ON t1.ITEMNMBR = t2.ITEMNMBR
	AND t1.LOCNCODE = t2.LOCNCODE
	AND t1.BIN = t2.BIN
LEFT JOIN IV00102 t3 ON t3.ITEMNMBR = t1.ITEMNMBR
	AND t3.LOCNCODE = t1.LOCNCODE
WHERE t1.ITEMNMBR = @ITEMNMBR
	AND t1.QTYTYPE = @QTYTYPE
	AND t1.LOCNCODE = @LOCNCODE
-- order by the same computed priority as the SELECT, so the fulfilment
-- bin comes first and unprioritised bins come last
ORDER BY ISNULL(t2.[PRIORITY], CASE
		WHEN t3.SOFULFILLMENTBIN = t1.BIN
			THEN 0
		ELSE 255
		END)
GO

GRANT EXECUTE
ON canford.CA_SOP_BinQtysByItem
TO DYNGRP
GO
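
By way of example, to list the bin quantities for a hypothetical item at a hypothetical site (in GP's quantity type enumeration, QTYTYPE 1 is On Hand):

EXEC canford.CA_SOP_BinQtysByItem
	@ITEMNMBR = 'WIDGET100' -- hypothetical item
	,@LOCNCODE = 'MAIN' -- hypothetical site
	,@QTYTYPE = 1 -- 1 = On Hand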

Replicating GP price table to website

To provide our website with bang up to date product prices as they are in our ERP system, we replicate the price table from our ERP system to the website SQL server database. The price table holds nearly two million price rows consisting of many combinations of currency, item, price quantity, units of measure, discount breaks and customer specific price lists.

The replicated table works great, until a big price update is required. If most of the prices are updated, say in line with inflation, the update hits a good number of the rows in the table. This causes a BIG transaction to make its way through the relatively thin wire to our website. From opening the transaction (and thus locking the subscriber table) to committing it can take a long time locally, and then everything must make its way through the slow connection to the website and be committed there. All these processes take a finite amount of time. The lock held on the price table at the website database while all this happens is the problem: any reads on the price table are blocked until everything has passed through, bringing the website to a halt for tens of minutes.

To avoid this, queries at the website that interrogate the replicated table could be run at the READ UNCOMMITTED isolation level. However, this could potentially lead to problems in reading “dirty records” that are not ready for public consumption. This is significant when you consider these are price tables; reading prices that are in an unknown state is a no-no.
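
For the record, the rejected approach would look something like this (table and column names are hypothetical):

-- Dirty reads possible: a half-applied price update could be served
SELECT ItemPrice
FROM dbo.PriceTable WITH (NOLOCK)
WHERE ItemNumber = 'WIDGET100'

-- or, equivalently, for the whole session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED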

Solutions

First Idea

The first was to take a database snapshot before the bulk price update and switch all the views on the table to use the snapshot, letting replication take its time updating the records in the underlying table. Once finished, the views could be switched back to point at the original table again. This could perhaps be controlled by a replicated signalling table from the ERP system so that we don’t have to issue SQL between the databases. It should work well in that once the all-clear signal is set in the publication database, it will not propagate through to the subscriber until all the other changes in the log have been replayed and committed there.
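
A rough sketch of that idea; the snapshot name, logical file name, file path, and view/table names are all assumptions:

-- 1. Snapshot the subscriber database before the bulk update arrives
-- (NAME must match the logical data file name of the source database)
CREATE DATABASE ERPReplication_Snap ON (
	NAME = ERPReplicationdatabase_Data
	,FILENAME = 'D:\Snapshots\ERPReplication_Snap.ss'
	) AS SNAPSHOT OF ERPReplicationdatabase
GO
-- 2. Point the website's view at the snapshot while replication catches up
ALTER VIEW dbo.WebPrices AS SELECT * FROM ERPReplication_Snap.dbo.PriceTable
GO
-- 3. On the all-clear, switch back and drop the snapshot
ALTER VIEW dbo.WebPrices AS SELECT * FROM dbo.PriceTable
GO
DROP DATABASE ERPReplication_Snap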

Second Idea

The second idea was to switch the subscriber database into snapshot isolation mode, simply by executing the following commands:

ALTER DATABASE [ERPReplicationdatabase]
SET ALLOW_SNAPSHOT_ISOLATION ON

ALTER DATABASE [ERPReplicationdatabase]
SET READ_COMMITTED_SNAPSHOT ON

Once the database is in this mode, reads use row versions held in the version store rather than taking shared locks. Reads are never blocked: while a transaction is under way, readers are given the committed version of each row from before that transaction began. So reading at the website is not blocked while a big replicated transaction is in flight, and what is read is the state before the transaction started, which for our application is perfect. (Note that SET READ_COMMITTED_SNAPSHOT ON needs the database to be free of other active connections before it will complete.)
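
Both settings can be verified from sys.databases:

SELECT name
	,snapshot_isolation_state_desc
	,is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'ERPReplicationdatabase'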

No DML commands need issuing or signalling between the databases. This is the cleanest solution for what is required. The next task is to ensure that all the changes made to the price tables are made inside one transaction, to keep the reads off any of the changed data until it is fully committed to the database.
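
As a sketch, with hypothetical table and column names; readers under READ_COMMITTED_SNAPSHOT keep seeing the old prices right up until the COMMIT:

BEGIN TRANSACTION

-- e.g. a blanket inflationary uplift across the whole price list
UPDATE dbo.PriceTable
SET ItemPrice = ItemPrice * 1.05

COMMIT TRANSACTION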

@@SERVERNAME is NULL & SQL Server Replication

Issuing the following query should return the server name of the instance you are working with:

SELECT @@SERVERNAME

In the scenario about to be described, the server had had some work done on it which left this query returning NULL. This causes problems when attempting to run tools such as the replication monitors and wizards, as they appear to use this query internally and complain that a connection cannot be made to a (NULL) server during the login process. Various other errors also start occurring in SQL Server Management Studio.
 
As the server had been used for replication in the past, there was a node under Linked Servers for itself. The server was acting as its own distributor in replication, and thus had a distribution database under the System Databases node in Management Studio. There was also a linked server named repl_distributor, which I take to be a pointer to the distributor, in this case the same server.
 
The problem was that the Replication >> Local Publications node contained a publication that was not working.
 
Trying to run any commands resulted in the server complaining about the NULL server name. Incidentally, the following command returned the correct server name:
SELECT SERVERPROPERTY('MachineName'), SERVERPROPERTY ('InstanceName')

It seems SERVERPROPERTY('MachineName') tracks the actual machine name, whereas @@SERVERNAME reflects whatever local server name is registered, so the two can drift apart.
 

Change @@SERVERNAME

All the server names “registered” can be viewed with

sp_helpserver

In the returned table it was possible to see the linked server, but there was no server with an ID of 0. The row with an ID of 0 holds the server name returned by @@SERVERNAME. To set this, add a server with the following:

sp_addserver '<YourServerName>','local'

The 'local' parameter tells it that you are registering the special local server name. However, attempting this here resulted in the server complaining that the server already existed, due to the linked server.
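
As an aside, the same list can be had from the sys.servers catalog view, where server_id 0 is the local server:

SELECT server_id
	,name
	,is_linked
FROM sys.servers
ORDER BY server_id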

It should be possible to drop the existing linked server by issuing the following command:

sp_dropserver '<YourServerName>','droplogins'

This didn’t work as it complained that the server was acting as a publisher. Argh!

Temporary Change to @@SERVERNAME

In order to “get into” the SQL tools, I created a fictitious server name and added it to the local hosts file of the server.

 
sp_addserver '<YourServerName>TEMP','local'

In the hosts file C:\WINDOWS\system32\drivers\etc\hosts

127.0.0.1       localhost
127.0.0.1    <YourServerName>TEMP

YOU MUST restart the SQL Server service for the @@SERVERNAME value to refresh; easy to forget!

So now @@SERVERNAME returned a false server name, but at least, working on the server itself, I could use Management Studio to connect under this fake name and start accessing the normal replication tools.

Removing replication

 

At this point the aim was to totally remove replication, and then add it all back once it was gone. For this there is a guidance document, How to manually remove a replication in SQL Server 2000 or in SQL Server 2005; this seemed to work for 2008 too.

For each replicated database, this was used to rip out all replication from the databases.

exec master.dbo.sp_removedbreplication '<replicated_database_name>'
exec sp_dropdistributor @no_checks=1, @ignore_distributor=1
 
The rest of this section is a medley of commands that can be used to get to the point where replication is gone.
 
To drop the replication distribution database, use this:

alter database distribution set offline
drop database distribution

exec master.dbo.sp_serveroption @server=N'<YourServerName>', @optname=N'dist', @optvalue=N'false'



Some helpful procedures and queries follow; the last two seemed to be the final step needed in this case to allow the dropping of the various objects (the linked server and the distribution database).

sp_msrepl_check_server
sp_helpdistpublisher
sp_get_distributor
sp_helpdistributor
sp_helppublication
select * from msdb..msdistpublishers
delete from msdb..msdistpublishers
 
“There are still remote logins or linked logins for the server”: one of the biggest breakthroughs with all this was the clue to try right-clicking the object you are trying to destroy, scripting it, and then using that script to get the server names for the manual replication-removal method above and the other commands listed. It also gives you the scaffold to change the option values to false for the linked servers. See the following fragment, which I hacked to get our replication and publication options set to false after using the scripting technique.
 

/****** Object: LinkedServer [<YourLinkedServerName>] Script Date: 02/18/2011 02:43:09 ******/
EXEC master.dbo.sp_addlinkedserver @server=N'<YourLinkedServerName>', @srvproduct=N'SQL Server'
/* For security reasons the linked server remote logins password is changed with ######## */
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'<YourLinkedServerName>', @useself=N'False', @locallogin=NULL, @rmtuser=N'webreplication', @rmtpassword='########'
GO
EXEC master.dbo.sp_droplinkedsrvlogin @rmtsrvname=N'<YourLinkedServerName>', @locallogin=NULL
GO
EXEC master.dbo.sp_serveroption @server=N'<YourLinkedServerName>', @optname=N'dist', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'<YourLinkedServerName>', @optname=N'data access', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'<YourLinkedServerName>', @optname=N'sub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'<YourLinkedServerName>', @optname=N'pub', @optvalue=N'false'
GO


Changing @@SERVERNAME to what it should be

Now it was possible to change the server name back to what it should have been in the first place, remembering to delete the entry in the hosts file.

sp_dropserver '<YourServerName>TEMP','droplogins'
sp_addserver '<YourServerName>','local'
 
Again, YOU MUST restart the SQL Server service for the @@SERVERNAME value to refresh; easy to forget!

sp_helpserver

sp_helpserver should now return the server names, with the actual server name having an ID of 0.

SQL Jobs

 
Don’t forget to delete all the SQL replication jobs and agent jobs.
 
 
Thus, after a few hours, the server had no replication left and it was possible to start again.