Dynamics GP drill down logging to trace file for diagnosing problems

In previous posts I’ve looked at the protocol handler used to provide the drill down features in reports and other applications used with Dynamics GP.

Asynchronous pluggable Protocol Handler for Dynamics GP (for drilldown/drillback)
Dynamics GP Drill Down Protocol Handler error

In those posts, I investigated the debug switches that can be added to the protocol handler’s configuration file and showed some of the various errors that can be generated by Dynamics GP drill down.

The configuration file for the Dynamics GP protocol handler can normally be found here:

\Program Files (x86)\Common Files\Microsoft Shared\Dexterity\Microsoft.Dynamics.GP.ProtocolHandler.exe.config

Adding these switches to the above file will cause a dialog box to pop up when errors occur, which the user can then screenshot and pass to you.

<add key="DebugMode" value="true" />
<add key="UseWindowsEventLog" value="true" />
<add key="UseLogFile" value="true" />

Debug switches for Dynamics GP drill down

Example of error window generated after applying these switches:

Error window from Microsoft dynamics GP drill down

Logging Dynamics GP Drill down errors to trace file

However WCF, the underlying technology that the protocol handler uses to talk to GP, also allows us to log activity to a trace file that can be analysed. To use this, change the configuration file to look like the following, then create a C:\Log folder for the log file to go to, or change the location in the line:

<add initializeData="C:\Log\WcfTraceServer.svclog" 

The level of detail and what is logged can be changed with different settings in this XML; this example gets you going without learning the details of WCF, which are beyond the scope of this post.


WCF debug nodes added to configuration for Dynamics GP Drill Down debug

Now, when an exception occurs, the trace file will be generated in that folder:

example of file created

Using the Service Trace Viewer Tool

This file is an XML file that is difficult to read and understand; however, you can use the Service Trace Viewer Tool (SvcTraceViewer.exe) to investigate it, as shown in the example below. This gives a richer environment for investigating errors and allows a less disruptive way of capturing them from the client machine.

Debugging Dynamics GP drill down with Service Trace Viewer Tool
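If you cannot run SvcTraceViewer.exe on the machine in question, the .svclog can also be pre-filtered with a small script. The sketch below uses a hypothetical, heavily trimmed sample record and assumes the usual E2ETraceEvent layout that the Service Trace Viewer reads; real records carry far more detail:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down sample of two trace records. A .svclog file is a
# sequence of <E2ETraceEvent> elements with no single root, so we wrap the
# content in a dummy root before parsing.
SAMPLE_SVCLOG = """
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
  <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
    <SubType Name="Error">0</SubType>
    <TimeCreated SystemTime="2017-08-06T20:29:30Z" />
  </System>
</E2ETraceEvent>
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
  <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
    <SubType Name="Information">0</SubType>
    <TimeCreated SystemTime="2017-08-06T20:29:31Z" />
  </System>
</E2ETraceEvent>
"""

def count_by_severity(svclog_text):
    """Tally trace records by their SubType severity name."""
    root = ET.fromstring("<root>" + svclog_text + "</root>")
    counts = {}
    for elem in root.iter():
        if elem.tag.endswith("}SubType"):  # match regardless of namespace
            name = elem.get("Name")
            counts[name] = counts.get(name, 0) + 1
    return counts

print(count_by_severity(SAMPLE_SVCLOG))  # {'Error': 1, 'Information': 1}
```

Pointing the same function at a real trace file (read its text, then call `count_by_severity`) gives a quick first impression of how many errors the service logged before opening the viewer.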

Armed with this information from the log file, it is much easier to investigate any errors the service may be encountering, errors that would otherwise be hidden from the user and admin. Below is a full configuration file, given as an example to show the context of the changes.

The Configuration Editor Tool (SvcConfigEditor.exe) is my recommended way to edit WCF configuration files, though it may be daunting for those who do not have a basic understanding of WCF.

Example of full configuration file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <source propagateActivity="true" name="System.ServiceModel" switchValue="Information,ActivityTracing">
        <listeners>
          <add type="System.Diagnostics.DefaultTraceListener" name="Default">
            <filter type="" />
          </add>
          <add name="traceListener">
            <filter type="" />
          </add>
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging">
        <listeners>
          <add type="System.Diagnostics.DefaultTraceListener" name="Default">
            <filter type="" />
          </add>
          <add name="traceListener">
            <filter type="" />
          </add>
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add initializeData="C:\Log\WcfTraceServer.svclog" type="System.Diagnostics.XmlWriterTraceListener"
           name="traceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, ProcessId, ThreadId, Callstack">
        <filter type="" />
      </add>
    </sharedListeners>
    <trace autoflush="true" />
  </system.diagnostics>
  <system.serviceModel>
    <diagnostics>
      <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true"
                      logMessagesAtTransportLevel="true" />
    </diagnostics>
    <bindings>
      <netNamedPipeBinding>
        <binding name="NetNamedPipeBinding_IDrillBackToGP" closeTimeout="00:01:00"
                 openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                 transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions"
                 hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                 maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="Transport">
            <transport protectionLevel="EncryptAndSign" />
          </security>
        </binding>
      </netNamedPipeBinding>
    </bindings>
    <client>
      <endpoint address="net.pipe://dynamicsgpdrillback/" binding="netNamedPipeBinding"
                bindingConfiguration="NetNamedPipeBinding_IDrillBackToGP"
                contract="DynamicsGPDrillBackService.IDrillBackToGP" name="NetNamedPipeBinding_IDrillBackToGP" />
    </client>
  </system.serviceModel>
  <appSettings>
    <!-- String value, please use good file system notation (i.e. "c:\Dynamics\GP\DynamicsGPDrillBack.xml") -->
    <add key="BindingName" value="NetNamedPipeBinding_IDrillBackToGP" />
    <!-- Boolean values only (true/false) -->
    <add key="DebugMode" value="true" />
    <add key="UseWindowsEventLog" value="true" />
    <add key="UseLogFile" value="true" />
  </appSettings>
</configuration>

Dynamics GP: Item description has a one hundred character capacity?

A little curiosity of mine is this finding and why it is so…

Dynamics GP Item description field


Fill the item description field of an item in Dynamics GP and then paste the text into a notepad application to measure its length. You will find it has a capacity of one hundred characters…

measure capacity of UI as 100


Yet have a look at the database: it has a field size of 101…

Database field length is 101

 

But look, the UI is limiting the keyable length to 100…

Keyable length of Item Description 100

 

So there is an “extra” inaccessible character in the descriptions that you cannot use? What secret confidential information do you keep in your description’s extra character?

 

[Edit 2017/08/08] David in the comments explains this for us as:

Every string field of even length will have an extra character at the database level. 

This is a Dexterity feature from legacy behaviour. 

Before SQL Server was used as the database, Ctree and Btrieve were used. They performed better when each record in a table was a multiple of 16 bits (2 bytes), as the early x86 processors were 16-bit. 

To ensure this, string fields were padded to make the storage length an even number. Strings are stored as the characters of the string plus a length byte, and so can be 0–255 characters long. 

On your screenshot, the keyable length is 100, plus a storage byte = 101, plus pad to even gives 102 with an extra hidden character. 

Odd length strings don't need padding to be an even total size. 
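David's arithmetic can be sketched as a couple of small functions. This is my interpretation of his comment, with made-up function names for illustration: the SQL column keeps the pad character but not the Dexterity length byte, which is why the database shows 101 for a keyable length of 100.

```python
def dexterity_storage_length(keyable_length):
    """Dexterity stored a string as its characters plus one length byte,
    padded so the total was an even number of bytes (a legacy of 16-bit
    Ctree/Btrieve record layouts)."""
    total = keyable_length + 1  # characters + length byte
    if total % 2 != 0:
        total += 1              # pad up to an even byte count
    return total

def sql_column_width(keyable_length):
    """The SQL table keeps the pad character but not the length byte,
    so even keyable lengths gain one hidden character."""
    return dexterity_storage_length(keyable_length) - 1

print(sql_column_width(100))  # Item Description: keyable 100 -> char(101)
print(sql_column_width(99))   # odd lengths need no pad -> char(99)
```

Running this for the item description gives 100 + length byte = 101, padded to 102 in Dexterity storage, and a 101-character SQL column once the length byte is dropped, matching the screenshots above.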

Why does Dynamics GP use Table Heaps (no clustered indexes)?

First we should define what a table heap is. A heap is a table without a clustered index.

Data is stored in the heap without specifying an order. Usually data is initially stored in the order in which the rows are inserted into the table, but the Database Engine can move data around in the heap to store the rows efficiently, so the data order cannot be predicted.

 

There are sometimes good reasons to leave a table as a heap instead of creating a clustered index, but using heaps effectively is an advanced skill. Most tables should have a carefully chosen clustered index unless a good reason exists for leaving the table as a heap.

 

Generally I would say we are more used to finding a clustered index on a table, but looking below you can see the more usual picture for the indexes of a GP table.

Indexes of a typical Dynamics GP table

Microsoft Dynamics GP does not use many clustered indexes in its database.

 

If you read The Microsoft Dynamics® GP Architecture White Paper, it would seem to be a decision based on research and data:

Excerpt from The Microsoft Dynamics GP Architecture White Paper

So which tables? Let’s look for tables in the SOP series that have clustered indexes on them:

SELECT
     t.name AS table_name,
     I.type_desc AS index_type,
     I.is_unique AS is_unique_index
FROM sys.tables AS t
INNER JOIN sys.schemas AS s
     ON t.schema_id = s.schema_id
INNER JOIN sys.indexes AS I
     ON t.object_id = I.object_id
WHERE I.type_desc = 'CLUSTERED'
     AND t.name LIKE 'Sop%'
ORDER BY 1;

Query results: SOP tables with clustered indexes

 

It is interesting that only seven of the SOP tables have clustered indexes; the rest are heaps. So even particularly large tables, such as SOP30300 (an 8GB table in one of the companies I work with), are heaps. This at first glance seems wrong; in fact the MS advice from Heaps (Tables without Clustered Indexes) is:

Do not use a heap when there are no non-clustered indexes and the table is large. In a heap, all rows of the heap must be read to find any row.
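To get a feel for why that advice exists, here is a toy Python analogy (not SQL Server internals): a heap lookup must potentially read every row, while a clustered index keeps rows ordered on the key so a seek is effectively a binary search.

```python
import bisect
import random

# Toy data standing in for table rows keyed by an integer.
random.seed(42)
keys = random.sample(range(1_000_000), 100_000)

def heap_lookup(rows, key):
    """Heap: no order, so every row may need to be read (O(n) scan)."""
    for i, k in enumerate(rows):
        if k == key:
            return i
    return -1

# "Clustered index": rows kept physically sorted on the key,
# enabling O(log n) seeks instead of full scans.
sorted_keys = sorted(keys)

def clustered_lookup(rows, key):
    i = bisect.bisect_left(rows, key)
    return i if i < len(rows) and rows[i] == key else -1

target = keys[-1]
assert heap_lookup(keys, target) == len(keys) - 1   # read all 100,000 rows
assert clustered_lookup(sorted_keys, target) != -1  # ~17 comparisons
```

The asymmetry only grows with table size, which is why an 8GB heap with frequent single-row lookups looks so surprising at first glance.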

 

I do wonder about the conditions under which the decision not to use clustered indexes was made, but have to trust the research was accurate (/s).

 

So the reason? GP didn’t always run on SQL Server; in the old days it ran on ISAM databases. The database schema that GP uses owes a lot to that legacy, which is the reason for many of the oddities and shortcomings in GP’s use of SQL Server. I expect this use of heap tables also comes from that legacy, and that the study showed no performance benefit from introducing a clustered index.

 

There are reports of people converting the primary keys into clustered indexes with success, and it is possible to write a T-SQL script to go through the database doing this for all tables too. However, this is internet forum hearsay, so I cannot vouch for the accuracy of this information nor the rigour of the testing. I also fear that the work can be undone when tables are dropped during upgrades, losing the changes. I'm pretty certain this would also invalidate your support with MS, as all that careful performance tuning that the GP database schema has experienced over the years may not be compatible with this action.  ;)

Finding a rogue “dll” causing trouble with “different version of the same dependent assembly”

In Dynamics GP development, we have lots of .dll files around, arising from support for many version releases of GP. These files litter our projects, and sometimes a dll may go astray and cause trouble by ending up in a folder where it does not belong.

This PowerShell command is a quick way to look for all the versions of a .NET assembly (dll) within a folder tree.


Get-ChildItem -Filter Microsoft.Dexterity.Bridge.dll -Recurse | Select-Object -ExpandProperty VersionInfo | Out-String -Width 180

 
Output of the Get-ChildItem version scan

ref: Stackoverflow Get file version and assembly version of DLL files in the current directory and all sub directories

The versions can be seen on the left and any offending .dll files that are not in the correct directory for their actual version number can be quickly and easily identified.

This avoids getting into <assemblyBinding> redirects when dealing with the error at compile time of

Found conflicts between different versions of the same dependent assembly
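
For reference, should you ever need to go the redirect route anyway, a binding redirect in the application’s .config file looks like the fragment below. The version numbers and public key token here are placeholders for illustration; they must match your actual assemblies:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Hypothetical example: redirect older builds of an assembly to one
             version. The publicKeyToken placeholder must be replaced with the
             token of the real signed assembly. -->
        <assemblyIdentity name="Microsoft.Dexterity.Bridge"
                          publicKeyToken="xxxxxxxxxxxxxxxx" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-14.0.0.0" newVersion="14.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>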