Office Snipping Tool for creating documentation screen grabs

The Snipping Tool is a handy utility introduced into the Windows operating system and one of the tools I use most frequently in Windows. It is useful when creating documentation for Dynamics GP processes and other software.

Screenshot of the Windows Snipping Tool user interface

Recently I wanted to take a screenshot of the Snipping Tool itself, but for good reasons the Snipping Tool will not take screenshots of itself (it would probably cause a wormhole to open up in space and time).

Durwin Brown suggested I use the Office screenshot tool to achieve my goal.

Below I show the example with Word, but this can also be done from the Insert menu of Outlook and other applications in the Office suite.

Insert menu of Word with the Screenshot button pressed, showing all available windows to capture

When clicked, Word's Screenshot button shows all the windows currently open on the computer, saving you from having to hunt the window down in the Z-order as you would with the Snipping Tool. This alone makes it easier to use when ploughing through documentation of a process with screenshots. Until going back to this tool, I had not realised this advantage of the built-in functionality over the Snipping Tool, and I will change my usage going forward.

At the bottom you see "Screen Clipping". This is the crosshair, select-a-region style of screen capture we are used to with the Snipping Tool, but the content goes straight into the Office document.

It is a great tool if you have lots of documentation to create in a hurry; the integrated nature of the screenshot makes the whole process feel a little slicker than using the free-standing Snipping Tool.

This, as you can see from the first image in this post, allowed me to grab the screenshot of the Snipping Tool that I was originally after.

Chrome Browser Tab Discards

Have you noticed how tabs that have been left untouched in the Chrome browser appear to reload when you finally click on them again? This is tab discarding at work, a memory-saving exercise by the Chrome team, who have recognised the bad rep Chrome has been getting as a memory hog.

When the machine is under memory pressure, Chrome will discard uninteresting background tabs to free up some memory. When a discarded tab is clicked, it comes back to life.

You can view the tab discards by going to the URL chrome://discards in your browser. Remember that Chrome only discards tabs you are not interacting with, and only if the system is under memory pressure, so you may well see no discards at all.

Below is a screenshot of my discards where I can see two discarded tabs.


In the post Tab Discarding in Chrome: a Memory-Saving Experiment, the Chrome team show a plot of how, at around 50MB per tab, memory soon gets eaten up when a dozen tabs are left open to read later. The tab discard functionality addresses this issue by removing those background tabs from memory until they are needed again.
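The back-of-envelope arithmetic behind that plot is easy to check; the tab count and per-tab figure below are just the illustrative numbers quoted above, not measurements:

```shell
# Rough cost of leaving tabs open to read later, at ~50MB per tab
# (the per-tab figure is the one quoted by the Chrome team; real pages vary).
tabs=12
mb_per_tab=50
echo "$((tabs * mb_per_tab))MB tied up by ${tabs} idle tabs"
```

A dozen forgotten tabs already accounts for roughly 600MB, which is exactly the kind of waste discarding claws back.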

I had noticed something different about Chrome's behaviour a while back. The discards were formally discovered when we saw some odd behaviour while debugging our ecommerce platform.

Scott Hanselman has an interesting post, Two Must-Have Tools for a More Readable Web, on the subject of "read it later", the habit of leaving tabs open to come back to read later. He shows there are healthier alternative ways of handling this need, though in the responses to the post others seem to have built their own ways to deal with it.

Managing document attachments in Dynamics GP

What is Document Attach?

Dynamics GP 2013 R2 introduces Document Attach, allowing files (photos, PDFs, drawings, spreadsheets, emails, etc.) to be associated with, or attached to, objects in GP. An attachment icon appears on transactions and cards, window notes and many other places. An example would be attaching customer purchase orders and emails to sales orders.

In addition to providing a repository for documents related to GP windows, transactions and master data, the documents for some GP objects can be set to "flow". This means the attached document will also attach itself to downstream objects created from that object. Put simply, attaching a contract terms and conditions PDF to a customer and setting it to flow will make that document attach itself to quotes, orders and then invoices created for that same customer, automatically. The same applies to, say, a Word health and safety (COSHH) data sheet attached to an item: the data sheet will follow the item through the transactions that include it. Imagine how helpful that would be if exposed to, say, an order enquiry view on an eCommerce site.

What did it replace?

Something had to be done to replace the aged OLE notes system used by previous versions for basic attachments, for a number of reasons. The dawn of HTML web clients meant some application architectures had to change to enable delivery to remote web browsers. OLE notes were also often not well understood by system administrators. This frequently led to them being misconfigured, or to configurations being corrupted or broken. I have often seen OLE notes set back to default local storage on a PC, ultimately causing the documents on that one workstation to be lost. I have also seen multiple document stores created, due to different points in the network UNC path being set as the root of the store by different configurations, and I have even seen document stores get misplaced on the network. OLE notes were also locked away by the nature of the implementation, in a silo that was difficult to penetrate and thus to housekeep. Writing scripts to remove or audit attachments past their retention periods was a tricky challenge.

As OLE notes were a second-class feature, the importance and location of the store was not high on the radar of network admins, sometimes leading to inadequate backups and permissions being set on those folders. It also led to challenges in Terminal Server or Citrix deployments.

How does it work and what challenges does it bring?

Document Attach instead puts the documents into BLOB fields in the Dynamics GP database, which is great in that everything is now encapsulated in one place. When I first saw the tweets from Convergence announcing Document Attach as a feature, it sank in what the consequences would be. The reason for my horror was that this was back before we had a SAN and Veeam for virtualisation backups. I knew we had over 50GB of data in our OLE notes, and 200GB in our own custom document attach functionality. I assumed it would be sensible to migrate that 200GB from our solution to the new native functionality. Our main GP company database was 100GB, hardly fitting on the tape backups as it was and taking longer than we'd like to ship to our DR site. So this would make our 100GB database grow to over 350GB (assuming extra metadata stored in the database), ouch!
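Tallying those figures makes the scale of the problem obvious; a quick sketch using the sizes quoted above, before any metadata overhead:

```shell
# Projected company database size if both attachment stores move into SQL.
# Sizes are the ones from our environment, quoted above; metadata overhead
# would push the real figure higher still.
company_db_gb=100    # main GP company database
ole_notes_gb=50      # legacy OLE notes store
custom_store_gb=200  # our custom document attach store
echo "Projected: $((company_db_gb + ole_notes_gb + custom_store_gb))GB plus metadata"
```

That is the database more than tripling overnight, which is why the backup and restore implications below mattered so much to us.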

At that time SQL storage was much more expensive than file storage, due to the higher spec of the drives required to serve SQL Server compared with a file store. Servicing SQL backups was also more expensive in the skills and rigour required.

For information on the tables and SQL involved, see this post: Document Attach vs. Notes and how to locate whether a master record has a document attached.

Impact on test environments

It also filled me with dread that when we do our automated weekly restore of the live company to test, all this data would be duplicated into the test company too, now that it resides in the company database. The same was true when it was pushed into the developer clone of the live environment, the developer clone having both the live and test databases, as it is deliberately a duplicate of live. Previously, OLE notes were not required in the test environments as they were simply not used in any test scenarios we had encountered. Approaching a terabyte of extra space was going to be required across those environments.

Often during development we'd restore SQL backups of the database, say to test an integration, then restore again and again through testing. Tripling the size of the restore would make this take a really impractical amount of time. Currently, if a table gets deleted in SQL Server, restoring the company into another database and rolling it forward to the point before the delete to recover the data takes only about half an hour; a new risk is introduced if the restore takes much longer.


Steve Endow wrote a blog post about this, How will Dynamics GP Document Attach affect my database size? My conclusion is that if you store customer POs, drawings for custom artwork for products, quote requests, email trails and so on against orders, like we do, then yes, database size will rocket!

I looked at partitioning and advanced SQL Server techniques, but mostly they require SQL Server Enterprise edition, and have you seen the cost of running Enterprise SQL Server?!

So, to mitigate the issues arising from the change, we will:

  • Keep our custom file-store-backed attachment system; it allows drag and drop from Outlook and other locations that users don't want to lose.
  • Let the SAN soak up the data sizes.
  • Let modern backup techniques deal with the backup sizes.
  • Use the table storage to our advantage; it will allow better housekeeping and management of the attachments, helping us keep the overall size down.
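That last point is what makes table storage attractive: attachments can finally be audited with plain SQL. Below is a minimal sketch of such an audit. The table and column names (dbo.CO00103, fileName, BinaryBlob) are my assumptions about the Document Attach schema, as are the server and database names, so verify everything against your own GP version before relying on it:

```shell
# Hypothetical attachment-size audit for Document Attach.
# ASSUMPTIONS: the BLOB table is dbo.CO00103 with fileName and BinaryBlob
# columns; check these against your own GP company database first.
QUERY="SELECT TOP 20 fileName,
       DATALENGTH(BinaryBlob) / 1048576 AS SizeMB
FROM dbo.CO00103
ORDER BY DATALENGTH(BinaryBlob) DESC;"
echo "$QUERY"
# To run it for real (server and database names are placeholders):
# sqlcmd -S GPSQLSERVER -d COMPANYDB -Q "$QUERY"
```

A query like this, scheduled regularly, is the kind of housekeeping that was effectively impossible against the old OLE notes silo.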

I found it interesting how well Mark Polino sums up the same issues I was originally feeling, in his post Four reasons why Microsoft Dynamics GP 2013 R2's new Doc Attach is here to stay:

The file attach functionality in GP has been effectively broken for a long time. It's clunky to use, easy to setup incorrectly, and easy to break. The new database storage model is harder on database admins and will require better education for customers, but in the end I think it's a better option for the business as a whole.

I think what Mark says is mostly true, but if system admins now have responsibility for this, then give them more options around the backing store, and the flexibility to handle what can be a very big chunk of storage differently from the rest of the database, ideally in its own database as an option. Perhaps looking at SQL Server FILESTREAM would help too?

Dynamics GP auto roll out of updates

GP is very configurable; that is one of its big strengths. IT teams can update reports and form layouts, developers can create wonderful new add-ins, new modules can be added or removed, and so on. With these different sources of updates, and the frequency of changes, it becomes important to keep all GP users in sync with the current Dynamics GP add-ins, reports and forms. I thought others might be interested in how we tackle this on our infrastructure.

New install

GP is first installed from a packaged version of the base GP release, say GP 2013 R2 as it was the day we upgraded. The customised reports and forms are contained in an MSI package that is silently deployed through our network group policy management. There is a packaging option under the main setup of Dynamics GP to create an MSI of the configuration for your company. This is great for getting the application installed with registry settings and all the other dependencies bootstrapped on too.

Machines that have GP installed then also pick up a further group policy that runs a Windows batch file. This batch file deletes the add-in directory of the GP application and then copies, from a deployment location on the network, the GP program file differences and the add-in directory over the top of the current program files directory, thus patching the install with our current forms and dictionary modifications in addition to the current add-ins. Any config files are also copied over. Over the years we have tried many ways of deploying these files more elegantly, but time and again have found ourselves caught out by unusual events and circumstances leading to failed deployments; hence the brute-force, ugly method we now use. At one point I was creating a custom MSI and deploying it through network management tools, but this was too cumbersome for the operations guys (and devs), who want a quick and simple way to roll out minor tweaks such as a new reports dictionary.

The batch file has a constant defining the current release version, which it compares against a text file in the application directory whose filename carries the version number.
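The logic of that sync script looks something like the following. The real thing is a Windows batch file rolled out by group policy; this is a simplified shell equivalent, and the paths, marker-file naming and function name are all made up for illustration:

```shell
# Simplified shell sketch of our Windows batch sync script. Paths and
# the marker-file naming convention are placeholders, not the real ones.
sync_gp() {
    deploy_share=$1   # network deployment location
    gp_dir=$2         # local GP program files directory
    release=$3        # current release version, a constant in the script

    # The installed version is recorded by the filename of a marker text
    # file, e.g. version-2.13.txt, which doubles as the version history.
    if [ ! -e "$gp_dir/version-$release.txt" ]; then
        # Brute force: delete the add-ins directory, then copy the current
        # add-ins and changed program files wholesale over the top.
        rm -rf "$gp_dir/AddIns"
        cp -R "$deploy_share/AddIns" "$gp_dir/AddIns"
        cp -R "$deploy_share/files/." "$gp_dir/"
    fi
}
# Example (placeholder paths): sync_gp /srv/gp-deploy /opt/GP 2.14
```

Because the marker file ships with the deployment payload, a successful copy also stamps the machine as current, so re-running the script is a harmless no-op.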

With the batch file keeping the files in sync, when a user launches GP it will always be up to date. The years have taught me nothing is that simple. Some users don't log out very often, and others decide they will just end task or close the batch file window that performs the copy, as they are too busy to wait the twenty seconds for the update. This leads to the dangerous situation of thinking everyone is running the same version when they are not. So I added a version checker into our Visual Studio modifications.

Version Check performed by Login Window

In the login window, our GP checks whether the user has the current version and, if not, asks if they would like to upgrade. The current version is maintained by a file with a version number in the filename, which also acts as a version history text file. When challenged, if the user chooses to upgrade, GP launches a separate updater application that gives the user a clear message to shut down all instances of GP and wait. The updater application waits for all instances of GP to close on the machine (we use a different mechanism for Terminal Server). Once the last instance closes, the same sync batch script is called from the updater application, in a hidden window to minimise the chance the user will cancel it. GP is updated, the updater application closes and GP is launched again; the user is now allowed to log in.
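The updater's wait-then-sync behaviour can be sketched like this. Again, the real application is a Windows program watching GP processes; the process name, script path and use of pgrep here are stand-ins for illustration:

```shell
# Sketch of the updater: wait for every GP instance to close, then run
# the same sync script used at startup. Process name and script path are
# placeholders; the real updater watches Windows processes.
wait_and_update() {
    process=$1      # e.g. the GP executable name
    sync_script=$2  # the shared sync script

    # Block until no instance of the process remains on this machine.
    while pgrep -x "$process" >/dev/null 2>&1; do
        sleep 5
    done
    # Run the sync; the real version runs it in a hidden window so the
    # user is less likely to cancel it mid-copy.
    sh "$sync_script"
}
# Example (placeholder names): wait_and_update "DynamicsGP" /opt/GP/sync.sh
```

Reusing the startup sync script here means there is exactly one copy of the deployment logic to maintain, however the update is triggered.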

This worked great until I discovered a couple of users who never clicked update, always choosing to avoid installing it because they were too busy to wait for the update to be applied. I discovered this when they started to experience issues due to being so far behind the current release version.

Forced updates

This made me force the update by disabling the login button on the login form until the user has the correct version, effectively forcing them to update (unless they are in an admin security group in Active Directory). Since implementing this, I can be confident that everyone is running the current version of GP throughout the organisation. It also means that if I update the GP install, simply by logging out of GP and back in again the users get the next version, which leads to really quick releases in production.

For the terminal servers, the auto-updater application can be run from a scheduled task, where it waits for the last user to log out and then performs the upgrade.