A few months ago we kept hitting the seventy-five concurrent user limit on our GP server. Rather than rushing out to buy more licences, it was time to investigate how the system was actually being used.
To do this, a SQL job was set up to sample the number of concurrent users on the system throughout the working day, logging the results into a SQL Server table for later analysis.
I was not too interested at this point in the detail of who was logged in and where, merely how many. A few minutes later we had created a table in the BI database, UserConcurentCount, with two columns: UserCount and TimeDate.
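For reference, the monitoring table is trivially simple; something along these lines (the exact data types are an assumption):

```sql
/* Minimal monitoring table - column names as described above,
   exact data types are an assumption */
CREATE TABLE BI.dbo.UserConcurentCount (
    UserCount INT NOT NULL,
    TimeDate  DATETIME NOT NULL
);
```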
The SQL job had a single step, running every 10 minutes, that did the following:
/* Insert a snapshot of how many users are in the system into the monitoring table */
INSERT INTO BI.dbo.UserConcurentCount (UserCount, TimeDate)
SELECT COUNT(*), GETDATE()
FROM DYNAMICS..ACTIVITY;
This results in a table from which, with a bit of T-SQL and Excel, a histogram of the number of users in the system through the day can be built. The problem turned out to be the changing work patterns of a small number of part-time staff in some departments, causing a pinch point at the changeover halfway through the day, when both sets of staff are in the system. That now seems to be resolved and we hover under the user limit.
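As a sketch, the histogram feed can be produced by bucketing the samples by hour of day before handing them to Excel; the shape of the query is roughly:

```sql
/* Bucket the samples by hour of day to feed the Excel histogram */
SELECT DATEPART(hour, TimeDate) AS HourOfDay,
       AVG(UserCount)           AS AvgUsers,
       MAX(UserCount)           AS PeakUsers
FROM BI.dbo.UserConcurentCount
GROUP BY DATEPART(hour, TimeDate)
ORDER BY HourOfDay;
```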
Take care to put a clean-up script on a weekly job to clear old entries out of this log table; there is no need to build up excessive records unless the history is useful to you.
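The weekly clean-up step need only be a single DELETE; the 90-day retention here is an arbitrary assumption:

```sql
/* Weekly clean-up: keep roughly the last 90 days of samples
   (the retention period is an assumption - adjust to taste) */
DELETE FROM BI.dbo.UserConcurentCount
WHERE TimeDate < DATEADD(day, -90, GETDATE());
```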
This implementation of Dynamics GP had some new challenges. The company have a trade counter at the front of the warehouse, so they needed a way to enter orders, pick and fulfil them, and present an invoice to the customer, all while the customer waits. As this was trade, customers need invoices to take away with the goods.
The workflow has ended up like this:
- Customer goes to trade counter
- Trade counter staff work with the customer to enter the items they require onto an invoice-type SOP document in Dynamics GP. There is no need for a sales order, as the items will not be back ordered; if back ordering is required, the affected items can be pushed through to an order.
- Send the invoice to the warehouse batch, allowing the warehouse pick printing service to print off the pick list and maintain the integrity of the pick by locking access to the document.
- Staff pick the items for the invoice.
- Goods arrive at the counter and the customer agrees it is all they want; customers have an annoying tendency to change their minds or want to add items to the invoice. If they do add or change anything, the invoice is recalled from the warehouse batch, the changes are made, and a new edition of the pick list is printed by submitting it to the warehouse batch again.
- Once everyone is happy, the sales order processing fulfilment module I wrote fulfils the order: it records it as collect type, defaults the picker, packer and checker names, prints delivery notes (one for the customer to sign, the other for the counter records), and prints invoices as required for the customer to take away.
PDF versions of all the documents (invoice, pick lists, despatch notes) are, as usual, automatically stored on the document server for later viewing, reprinting, or serving from the web server as required.
Often the telesales team or the website take orders for customer collection; these are dealt with by transferring them to invoice as the customer arrives at the counter, and otherwise the process remains the same.
Order intake statistics
This concept of receiving sales orders into invoice-type documents breaks one of the business rules our reporting uses for establishing order intake, which was previously calculated from sales orders alone. Some work on the business intelligence was therefore required to work out what constituted a genuine order-intake document (indeed, line) and what was just part of the process document flow. That is: don't count an invoice created from a sales order as order intake, unless lines were added to the invoice by editing it after the transfer, in which case those lines do need to be counted. I've avoided having to write this report, which frankly is great, as I can get on with some more process improvements.
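Although I escaped writing the report, the rule itself can be sketched in T-SQL. On a standard GP install the SOP history lines carry the originating document in ORIGTYPE/ORIGNUMB, so invoice lines with no originating order are the ones to count as fresh intake; verify the table and column names against your own version before relying on this:

```sql
/* Sketch of the order-intake rule against the GP SOP history line table
   (SOP30300); SOPTYPE 2 = order, 3 = invoice. Lines transferred from an
   order carry the originating document in ORIGTYPE/ORIGNUMB, so invoice
   lines with a blank ORIGNUMB were keyed directly onto the invoice. */
SELECT SOPNUMBE, LNITMSEQ, ITEMNMBR, XTNDPRCE
FROM SOP30300
WHERE SOPTYPE = 2                       -- all sales order lines
   OR (SOPTYPE = 3 AND ORIGNUMB = ''); -- invoice lines added by hand
```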
Quick and smooth picking of goods
This solution seems to be an improvement on the previous ERP system that was in use, and I intend to make some more changes to the fulfilment software to improve it further.
Reporting services used to produce invoices via XML web services
I used Reporting Services, in an attempt to move away from the Crystal Reports that comes with .NET. I had to call the report from a Windows service and from the .NET application, and as the invoice layout is subject to many changes in the early days while things settle in, I chose to use a Reporting Services server to print the invoices. This ensures that any changes to the report are reflected both on the invoices produced automatically by the fulfilment software and on those produced in batches for other orders where the invoices are posted. I used the web services API to invoke printing of the report; more about that in the next blog post. The speed of invoice printing is not quite what I would like with this method, but we have not yet looked at optimising the server or the code, so I am confident improvements will soon follow.
During the hours after switching on Dynamics GP, I decided that a Twitter-like service could have provided a good mechanism for communicating snag status from us, the implementation team of three, to the large number of employees in the company we were putting it into. A thought for the next major system implementation that affects a large user community.
Today, another Dynamics GP installation is live and hanging together. Coding into the early hours of Monday morning paid off.
It is always a learning experience delving into the inner workings of a company, trying to extract reality from fantasy as regards business processes, whilst navigating the rocks that lie beneath in the form of people's pride, work history, attitudes and capacity to adapt to change.
It never fails to amaze me how you can ask so many people from the same department and have each one claim that the thing you know is white is in fact a different shade of grey. This is where experience in mapping processes kicks in: you implement the system and transform the data during the data load according to how you know they really use it, not how they tell you they use it! This saved me several times during this project, together with my saviour, Reporting Services, which came to my aid more than once.
For this company, orders were coming in from the website, telesales, the trade counter, post and email, going to the warehouse for fulfilment, and going out on a next-day carrier. Switching over the ERP system was bound to be hard work. Thanks to plenty of planning it went fairly smoothly, considering the sorts of things I've seen go wrong before. We had a big load of GP modifications (VBA, .NET and SQL) running to make it a slick system; mostly these were already tried and tested, even if they were implemented slightly differently from before. Of course there was an issue log growing by the hour from Monday morning, but the majority were trivial issues, like pick notes and picklists truncating the new, extra-long item numbers: a two-minute fix in Reporting Services. The modification that totalled up the weight of the order as the lines were entered in SOP transaction entry was wrong by a factor of 100; although it was picked up in UAT, the fix got lost before go-live, which did annoy me. It was quickly fixed by nipping into the stored procedure on SQL Server and dividing the output of the proc by 100.
I soon realised that communicating these quick fixes was taking longer than actually making them. Worse, the communication of what was going on was not always clear enough: people were still not using weights a couple of hours after a fix because they had been at lunch when the announcement was made, and other such scenarios occurred too.
I now think I should really have used a Twitter-like (http://www.twitter.com) service to broadcast information over the first two or three days. This would have let the workers see, in real time, that issues were getting addressed. I have seen reports of this type of service being implemented internally by companies to communicate short messages to staff: information that would otherwise get lost waiting to be included in other, larger communications.
Using a Twitter-like service would give stakeholders, of all levels, a better view of the volume of snags being cleared. For those who have not tried Twitter, a stream of micro messages is provided to the consumer of the service; the key is that they are limited to the size of an SMS message, which distils and concentrates the message content.
Those people with lower-priority or more convoluted issues felt that their problems were not being addressed. We knew we were clearing lots of more important issues, but they had no sight of that. A stream of messages saying things like "inability to charge customers at trade counter RESOLVED" helps put their problems into the context of the larger picture.
Obviously there continues to be a place for a shared SharePoint site giving a view of issues and their status, but this is quite depressing/scary for us and the company stakeholders to look at halfway through the first couple of days after switch-on. It shows every little thing still to be addressed, not the bigger picture of how much is actually working and how little disruption there has been to the business.
Finally, I was glad to find that taking down one ERP system and replacing it with another does not always have to hurt; it didn't this time. Or is that just experience teaching us to put contingencies and effort in the correct proportions in the correct places... :)
Some years ago I wrote a lovely bit of software that allowed us to gain much more control over the way orders are handled by the warehouse at despatch.
Once an order is sent to the "Warehouse" batch, a custom Windows service prints off a pick list for that order, or sends the order to a "Pending Stock" batch if it is awaiting allocation of more goods due to pending purchase orders or stock shortages. The order is checked for weight, and manual handling warnings are printed on the pick if any item requires special handling, as are notices for hazardous chemicals or problems with air freighting the item. The pick list is then added to a despatch screen as awaiting picking and fulfilment.
The operative takes the pick list from the printer and goes off to pick the order (version II was always intended to feed the virtual pick list to PDAs that the staff would carry; not got that one done yet!). Once the pick is complete, the items are weighed and the operative scans the barcode on the pick list at the despatch terminal. The terminal asks the user for the weight of the consignment cartons/boxes, and for the dimensions if they are exceptional or the order is for export. The wizard then asks for the picker, checker and despatcher names for that pick. Finally it calculates the best carrier for the consignment, depending on the weight; whether the order has non-stock items (so there would be a delay if an item were lost in normal post); whether the value of the consignment is over a threshold; the time of day (to catch the next carrier arriving); and the dimensions, which are checked against those supported by different carriers, including whether the consignment is on a pallet. The soup is stirred and a service and carrier selected to get the items to the customer by the time they requested.
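The carrier-selection rules themselves live in the .NET despatch wizard, but the flavour of the decision can be sketched as a T-SQL CASE expression; every column name and threshold below is hypothetical:

```sql
/* Illustrative only - the real logic is in the .NET despatch wizard,
   and every name and threshold here is an assumption */
SELECT CASE
         WHEN OnPallet = 1                    THEN 'Pallet network'
         WHEN WeightKg > 30 OR MaxDimCm > 120 THEN 'Oversize carrier'
         WHEN ConsignmentValue > 500
           OR HasNonStockItems = 1            THEN 'Tracked next day'
         ELSE 'Standard next day'
       END AS SelectedCarrier
FROM Consignment;  -- hypothetical consignment table
```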
A despatch note is printed by the despatch terminal, tracking numbers are obtained from the carrier systems, labels are printed for the parcels, and XML is sent to the carrier machines so they know all about what they have to deliver.
It automates much more, such as preventing duplicate despatches, taking notice of process holds and customer credit issues, and charging credit cards if they were applied to the order. It's great!
However, I've another company implementation to do over the next couple of weeks: we are putting Dynamics GP into a computer peripheral company that we acquired. Although they have very similar requirements to the existing companies, they add the complexity of a trade counter operation. This means the software now needs to support sales staff at the counter fulfilling orders, producing invoices and so on. Cool, I like a challenge. So I've cracked open the solution file and started looking at how it all works again, as a refresher, to see what needs adapting for the next implementation. I've realised that times have moved on with GP and .NET: we now have WCF and WPF to help me out, together with better web services in GP. This application is all direct-to-table, which was the only way to go when it was written; eConnect cost an arm and a leg and was laden with bugs. Now it is a mature, affordable solution. Thanks, MS!
Thus I start thinking about where to go, as I also have to consider multiple binning, which was never used in the old implementation. This is where you hold stock in multiple locations in the warehouses and set a priority for where the stock should be drawn from: useful if you wish to keep bulk stock up at height and draw it down to accessible locations further down the racks. This led me to a blog of interest, http://scruffylookingcatherder.com/archive/2008/01/03/custom-dynamics-warehousing.aspx, where Jacob sounds like he is in a similar situation to my own, using similar technology sets to solve the problems. Jacob also has some interesting posts on development to read.
Now I just need to get my skates on and work out how to get my application working for next week....