NAV TechDays 2017 – Antwerp, Belgium, Day One

For day two: Day two summary

So here we are in Antwerp, Belgium to attend my first NAV event. Being primarily involved with GP, this is both a research trip and a chance to network with the wider Microsoft Dynamics community.

NAV TechDays model

Venue and organisation

There are 1236 participants at today's event, all there to find out more about NAV development, and that is a sizeable, focused audience! Look at the global magnet NAV TechDays has become!

attendance

That is an amazing turnout for such a niche; in fact, it is 200 up on the previous year. If numbers continue to grow like this, the event could well outgrow the venue in a couple more years. The venue is a multiplex cinema complex that has been designed to double as a conference venue. It splits in two when a conference is on, providing very acceptable facilities. For the sessions, it was nice to be somewhere where the seats were plush and comfy, and the projection screens, obviously, were giant and of excellent quality. They were so big as to be able to show a real-time view of the presenter combined with the computer feed.

Exhibitors area

I feel it worthwhile to praise the NAV TechDays conference logistics and general arrangements; from an attendee perspective, staying at the conference hotel, it has been effortless to attend. There were shuttle buses from the hotel to the venue, and shuttle buses laid on from the conference to the airports after the event. Food, coffee and chiller cabinets of Coke/Fanta etc. were open and free throughout: quality food, breads, soups and a wonderful selection of excellent food choices in the evening. The venue is arranged with the food and beverage counters intermingled with the exhibitor stands. This makes for high-volume, repeated footfall for the exhibitors at each break, at lunch and during the walking dinner in the evening. The free drinks bars in the evening were amazing, with a good selection of drink options, including Belgian beer tasting with a wide selection of beer strengths and types to try. All this makes it easier for the exhibitors to prey on the drunk and helpless! As the sponsors and exhibitors make these sorts of events economically viable, it is great to see the event give them the best opportunities. The signage around the venue and conference hotel meant all information was available up front, with shuttle timetables, cloakrooms etc. all well advertised. The social evening went well, with casino games and a busy floor right into the evening. The sessions I attended all ran to time, starting promptly and ending on schedule.

belgium beer

I found it interesting that there was a mini presentation area off the exhibition hall for short presentations by exhibitors during the breaks. As the area gets very loud at these times, with everyone chatting and getting drinks, Bluetooth headphones were worn by anyone wanting to watch. I thought this solved the problem of condensing more into a limited time: those wishing to attend these mini sessions could grab a pastry and a drink and sit down for the session immediately. It was also good for the people running the sessions, as they were very visible, and the ones I saw were well attended.

The A/V set up was on the whole good – helped by the fact it was a cinema, so the acoustics were good, but the AV company did a good job in addition. I've seen it before, but it is worth pointing out that the Q&A at the end of each session used catch box microphones: basically a microphone mounted in a foam block, so the block can be chucked to the person wanting to ask a question, allowing them to quickly get the microphone and everyone else to hear the question. The block also acts as a baton for those with the right to talk, as sometimes found in debates. The block shown has a headset mic lying on top of it that is nothing to do with the device.

catch box


This conference is an indie conference; these tend to be the best, as no commercial agenda (other than the conference's survival) is influencing it, generally meaning the content should be much more in tune with what the visitors would like to see. It was nice to hear Microsoft say they will support the event as long as it is around. The NAV community seemed really friendly and the day felt much like a GP community event, just with fewer familiar faces.


Keynote opening Session – Vincent Nicolas, Thomas Hejlsberg

Next year NAV will be getting a rebrand – to become Dynamics 365; the current working code name is “Tenerife”. This led to speculation as to what name #NAVTechDays will take for 2018.

The Common Data Service (CDS) of Dynamics 365 has not been widely adopted yet, and that must change.

Machine learning and other Azure services, such as Cognitive Services, are touted as areas to watch going forward, transforming the way businesses work in ways not seen before.

It was noted that machine learning examples can already be found in NAV (forecasting).

There was a push for adoption of the now comprehensive and constantly growing offerings in the Azure cloud.


A deeper insight was provided into how Service Fabric provisions a NAV instance so quickly while providing adequate reliability. Buffer tenants – pre-provisioned databases – wait in the wings and are dynamically allocated to a tenant when provisioning occurs, avoiding lengthy preparation and completing within a few seconds. Databases are shared between tenants on Azure SQL; the load each shard is under is monitored, and databases are moved around the elastic pool to even out the load on machines. Shared databases lead to extra work in backing up only the information required for a given tenant, but offer overall cost savings by sharing machine resources.

The elastic pool optimiser is responsible for making sure the load is balanced; if a database gets too hot, it will be moved to a quieter host machine.

This is all Service Fabric – it can kind of be thought of as the Azure operating system.

It was noted that the NAV team can now roll it out to a new datacentre quickly as required.


Telemetry required to keep Azure running to required service levels

Monitoring – Diagnostics – Analysis

There is extensive telemetry used to keep the services running sweetly. Logs are stripped of personal information, as the monitoring software or nodes may not be in the same geo location as the server; this is done to prevent personal info crossing country borders. Kusto, Geneva and other names were thrown around in the context of these operations.

ICM incidents are logged, and we saw behind the scenes how the engineers can drill right into any problems, right down to the blocks of code running that may be causing the issue.


Extensions 1 is dead, 2 is current, 3 is in planning.

This is a C/SIDE replacement with a new compiler.

We saw how the click-and-drag GUI visual designer works for pages. This was impressive. It makes it easy to move a field, remove it, or add a new one. The visual designer mode seems very nice and helpful to use. The terms “page extension” and “form” must be used carefully or you’ll get a laugh from the audience.

Once the visual design is in place, use the Visual Studio Code editor with the AL extension to start coding against it. The design changes are exported to a zip file that is then extracted to the developer's file system, and VS Code is opened against it. Rather than being linked to the NAV database, Visual Studio Code simply uses the file system as its repository.

There is no project file, merely a folder structure.

Project is defined by the folders and the launch.json file.

The launch.json file contains the connection details of the server and is not normally checked into source control.

To know what NAV looks like, it is necessary to extract the symbol reference from the NAV instance.

It was demonstrated how Visual Studio Code updates the NAV instance with F6, which overwrites any existing configuration.

It was also shown that a start object ID is required in the launch.json file to get going.


The PageExtension object type was shown in the context of extending the customer card.

VS Code gets the symbol information from NAV so it knows what is present in the NAV instance – a bit like running DAG.exe against Dynamics GP to get the reference assemblies for GP add-ins.

F6 and F7 must be used carefully as you can overwrite changes if you don’t pay attention.

If further changes are made then they can be synchronised either way as required by VS Code.


It was announced that PageExtension has now been fixed so it has access to CurrPage.


Some changes to the architecture are being made. As NAV holds its code in the database, the limit on the record row size has caused problems for some, and multi-tenancy also causes issues. Now each extension will get its own table, making things better isolated. Companion tables are used for database extensions, which makes more sense on shared databases where one customer has extended the database and others have not. The companion tables are SQL-joined to the base product tables, so the AL code does not care; it sees the record fields through the join.
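To make that concrete, here is a purely illustrative SQL sketch of the idea; the table and field names are invented for the example and are not the actual NAV schema.

-- Illustrative only: a base table and a per-extension companion table
-- joined on the primary key, so the platform surfaces one logical record.
SELECT c.[No_],
       c.[Name],
       ext.[LoyaltyPoints]                         -- hypothetical extension field
FROM   [CRONUS$Customer] AS c                      -- base product table
LEFT JOIN [CRONUS$Customer$MyExtension] AS ext     -- hypothetical companion table
       ON ext.[No_] = c.[No_];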


Deploy and Install

These operations have extra event triggers, introduced per database (I think) and per company, to allow upgrade code to run appropriately and to be able to tell the difference between upgrades and installs.


If schema changes mean a column is no longer used in the database, the column is abandoned, to be later removed by a clean-up routine. Upgrades also run like a transaction, so if anything fails, everything reverts back to how it was.


API

There are 44 APIs, all documented for use in NAV, and the REST APIs now support actions too. The aim is that the APIs should be really easy for non-NAV people to consume.


UX

For 2018 the UX team have worked hard. Although people seem to call the Windows client the role tailored client, the web client is really role tailored too, so you still see this in the web client. New, cleaner, simplified navigation comes in, and the Outlook-style sidebar is gone as it doesn't fit the modern application. There is now also a new click preview that pops up to preview things that can be clicked; this may replace the fact boxes eventually.

The grid views have been refined, inspired by the Financial Times website's treatment of figures in grids.


Docker

A quick introduction to Docker and how it can make life so much easier for getting the latest working images of the NAV product on a local developer machine.

This included a demo, but as I’ve done this before, I didn’t pay attention.


Future is here…

Emphasis was placed on how cognitive services will change the way we work and offer many new services, by showing the voice comprehension trial video of McDonald's drive-through ordering that we saw at GPUG Summit in Tampa the previous October. It is still impressive and does show how, in controlled subsets, these technologies can make business processes more efficient and less error prone.

It was also pointed out that SQL server now has the PREDICT() function so predictions using Azure ML can be made right from the database itself.
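As a rough sketch of what that native scoring looks like in T-SQL (the model and table names below are invented for illustration, not from the session):

-- SQL Server 2017 native scoring: score rows against a stored ML model.
-- dbo.Models and dbo.SalesHistory are hypothetical objects.
DECLARE @model varbinary(max) =
    (SELECT ModelData FROM dbo.Models WHERE ModelName = 'item_demand_forecast');

SELECT d.ItemNo, d.PeriodStart, p.PredictedQty
FROM PREDICT(MODEL = @model, DATA = dbo.SalesHistory AS d)
WITH (PredictedQty float) AS p;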



Deep Dive into the New Tools – Stanislaw Stempin, Jesper Schulz-Wedde

This was after the break and turned into an enjoyable, joined-up look at the development process in the new world of the designer and VS Code.

Get VS Code, install the AL extension.

Hit F1 and type al.go; this will kick you off.


After the wizard you have an empty project. The project is defined by the file structure; there are no project files. You do have a launch.json that should not go into source control.

We saw how to build an extension that uses the Bing translation service to translate the language on an item.


Remember VS Code is not connected to the NAV database; instead it is connected to NAV via a service, using symbol references. When finished, it compiles and zips up the content, sending it back to the service, which compiles it again and makes the objects that get bunged into the database.

Symbols are like the DAG in GP: they capture the current state of NAV so that at design time you can get IntelliSense etc.

The formats of the files are cleaned-up versions of the txt obtained by exporting code from NAV, made more structured and human readable.

The many NAV designers are gone for now. You have to start with a blank code page, but IntelliSense, context-aware filling etc. in VS Code help you find your way.

The Code resembles the grids (so they say).

There are also a number of code snippets available for extending pages, tables etc. that make it easier.

Ctrl+F5 and F6/F7 are your friends in this world. You've got to watch that you don't overwrite your DB with code from VS Code if it has changes in it, so take care with F6/F7.


Again it was pointed out that on the cloud you are not alone on the machine, so resources are shared and data is sandboxed.

Note that:

.NET interop is not available any more – although some useful, frequently used .NET functions have been implemented, like HttpClient. However, there is no access to the file system, as that is dangerous on a shared cloud machine. Replacements are being thought about for File.IO in another form.

Some platform APIs are not available

Methods not supported.


You can debug using a service. Debugging is started in a debugging context to allow others to continue to use the application. It uses a bi-directional SignalR connection to debug remotely.

Multiple sessions may be debugged at the same time due to context.


Apps can have dependencies on each other. They are compiled into a .dll-like assembly for referencing.


Client Addin

This used to take three pages to explain and was painful to implement, so improvements have been made. It is now really simple to do.

Client Translation

Resources can now be exported as XLIFF, a universal file format that language translators use, with many commonly available editors. Just set the metadata up appropriately to get it output for translation.

XLIFF

Legacy upgrades

it is possible, using

finSQL.exe

and txt2al.exe

to export the existing code and get it converted to AL. It will not re-architect the code to be event driven, but it helps get the leg work done.

Mark Brummel has a webcast about this that I've watched in the past on the SQL Skills YouTube channel (I think it's in this one: https://www.youtube.com/watch?v=EqCxrxR9f3Y&index=23&list=PLhZ3P-LY7CqnJY3p9AuwSBC6TOgYFPnY3).


Improvements in testability.

Improvements in developer experience around testability – I lost concentration by this point, but have seen blog posts and webinars out there on this.



Azure Functions Deep Dive – Vjekoslav Babic


Most of the Azure Functions talk covered ground I'd done before; it was more an introduction than a deep dive, although it did give the NAV context for their use. A common use case may be to get around the lack of .NET interop.

Performance considerations due to latency were also given by stress testing functions in different geo locations. Something to be aware of if you have an organisation split over the globe.

functionperf

There was also a nice demo of how to get continuous deployment working with GitHub and Azure Functions.


Creating Great APIs – Anders Larsen, Nikola Kukrika

apitoken

api entities

This turned out not to be about how to version, plan and document your API with Swagger. Instead it was about the NAV APIs, which to be fair was just as interesting. I tried to play with them back in the early preview days, but I guess nothing was ready back then, which is why I failed to get anything working.

The three ways to authenticate were covered, and how to get keys etc.

The APIs are off by default, so you have to turn them on; this was also shown.

end points overview

It seems the NAV team are being wagged by the Office team, as they have to conform to their standards: the API is part of the Office APIs, so it must be performant to be included in the Graph API. To get that performance it has been necessary to build shadow tables to pre-compute the computed columns and the like. I guess this is one advantage we have in Dynamics GP, as it holds summary data in the database rather than dynamically calculating it.
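As a loose sketch of why the shadow tables help (hypothetical table names, not the real NAV schema): rather than aggregating ledger entries on every API call, the API reads a maintained, pre-computed figure.

-- Computed on the fly for every request (expensive at API scale)
SELECT c.CustomerNo, SUM(e.Amount) AS Balance
FROM dbo.Customer AS c
JOIN dbo.CustLedgerEntry AS e ON e.CustomerNo = c.CustomerNo
GROUP BY c.CustomerNo;

-- Shadow table approach: read a value that is kept up to date elsewhere
SELECT s.CustomerNo, s.Balance
FROM dbo.CustomerBalanceShadow AS s;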

Complex types and parent-child relationships were also covered, as was reading binary data like images and PDFs.


The slides, when they come out, will be self-explanatory for this session, so I can't add much. We did get some handy URLs for getting started though.

Api


getstarted apis



Walking Dinner

A good chance to network; I got to meet Mark Brummel & James Crowter. Everyone says it, but one of the best things is meeting social media friends in person.

markb


Finally I got a picture with #IamDynGP and NAVTechDays combined…


iamgp

Service Unavailable (HTTP Error 503) Server Error in ‘/lus’ Application – Dynamics GP

Recently, user home pages have been showing a few different error messages under the “connect widget” area of the home page. The Connect widget is considered by some to be a waste of bandwidth and so is disabled for all users; to others it is a useful feed of news and information about Dynamics GP.

Service Unavailable, HTTP Error 503. The service is unavailable.

The content that is shown on the user home pages within the “connect” area is populated by Dynamics GP polling a Microsoft server over the internet to fetch the content with a web request. If the server that Dynamics GP is contacting to obtain that home page content is misconfigured, or is experiencing stability issues, then a number of error messages can be handed back to GP and shown to the user. This results in an error being seen rather than the intended content, which in turn may lead to support calls from users.

Dynamics GP Service Unavailable HTTP 503 error

A more severe version of this is when the user home page area for connect says

Server Error in ‘/lus’ Application

This is the ASP.NET application (lus) running on Microsoft's IIS server throwing an error; it is the application responsible for serving the Dynamics GP home page Connect content. So this is a problem that Microsoft would need to resolve with their application on their server. Behind the scenes, the home page makes a web request to a URL along the following lines to fetch the content:

https://online.dynamics.com/lus/? – see the lus on the end of the URI? That is the application name.

Server Error in /lus Application Dynamics GP homepage

There are also reports of this error arising from malformed URLs caused by unusual characters in the company name of the Dynamics GP company (e.g. angle braces on <test> company instance names), where those characters are not correctly encoded by GP when the web request is made. The following posts have information on this issue:

Home Screen Connect Error – Server Error in ‘/lus’ Application.’

Server Error in /lus Application Error on Home Screen

I have not seen the character issue on GP2015 with the test company. Generally, in the cases I have seen, the errors have not persisted for more than a few hours, presumably until someone at Microsoft addresses and fixes the issue.

Jan 2018 update

Two new messages in the Dynamics GP Connect window for this month. The message Dynamics GP is giving is

Can’t reach this page

that is then followed by

Can’t connect securely to this page

when you click “refresh”.

This content area shows a webpage from Microsoft, served over https; however, it looks like there is no SSL certificate bound to the web server site, causing an error page rather than the content to be displayed on the GP client.

Dynamics GP - Can't reach this page

Dynamics GP - Can't connect securely to this page

Dynamics GP Connect “Navigation to the webpage was cancelled”

Jan 19th 2018 Update

It seems the long running issues with the connect page continue with a new variation of

Navigation to the webpage was cancelled

Update 24th January 2018

The word from Microsoft is:

The servers for Connect are still being worked on and until that is completed, Connect on the GP home page will work sporadically, if at all.

Recovering a deleted Reporting Services Report

Yes, I deleted the wrong report while housekeeping. This raised the question of how to recover a report once it has been deleted. There is no recycle bin or undelete option in Reporting Services, at least in the version I work with and at the time of writing (Nov 2017).

It turns out to be very simple and quick to restore, if you have backups of your “ReportServer” database, which of course everyone has.

Option 1 – Restore to point in time

The simplest way is to restore the “ReportServer” database from backup to a point in time just before the deletion, but this would lose any changes made on the report server since the report was deleted. In my case I had spent a few hours after deleting the report housekeeping all kinds of things on the report server, so this was not desirable as I'd lose that work.

Option 2 – Restore the single report from backup

Slightly more involved as an option, but still quick and easy; follow the steps below, it only takes a few minutes to do (an equivalent T-SQL restore script is sketched after the screenshot below).

To restore the “ReportServer” database

    • Right click the database in SSMS, select restore
    • Rename the Destination Database to ReportServerRestored
    • Use the check boxes to find the point in time to restore to, here I chose to not apply the logs to make for a speedy restore
    • Click Files on page selector in left hand side. If the default path is not appropriate (under Restore As, in the grid, you may need to scroll left to see), then change the path that the database files will be restored to
    • Click options on page selector in left hand side to select options. Unselect Take tail-log backup.
    • Click ok to start the restore

2017-11-02_12-07-37
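If you prefer T-SQL to the SSMS dialog, an equivalent restore looks roughly like this; the backup file name, logical file names and target paths are assumptions you will need to adjust to your environment.

-- Restore a copy of ReportServer under a new name, relocating the files.
-- Only the full backup is applied here, matching the speedy restore above.
RESTORE DATABASE ReportServerRestored
FROM DISK = N'D:\Backups\ReportServer_full.bak'
WITH
    MOVE N'ReportServer'     TO N'D:\Data\ReportServerRestored.mdf',
    MOVE N'ReportServer_log' TO N'D:\Data\ReportServerRestored_log.ldf',
    RECOVERY, STATS = 10;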

Extract the XML that represents the report from the restored database

Reports are stored as XML in a binary field in a table in the database. The following article explains how to get at the XML that defines the reports:

 Extracting SSRS Report RDL (XML) from the ReportServer database

Using the information in this article we can get to the report we need by running the script in SSMS.

--The first CTE gets the content as a varbinary(max)
--as well as the other important columns for all reports,
--data sources and shared datasets.
WITH ItemContentBinaries AS
(
    SELECT
        ItemID, Name, [Type]
        ,CASE [Type]
            WHEN 2 THEN 'Report'
            WHEN 5 THEN 'Data Source'
            WHEN 7 THEN 'Report Part'
            WHEN 8 THEN 'Shared Dataset'
            ELSE 'Other'
         END AS TypeDescription
        ,CONVERT(varbinary(max), Content) AS Content
    FROM ReportServerRestored.dbo.Catalog
    WHERE [Type] IN (2,5,7,8)
),
--The second CTE strips off the BOM if it exists...
ItemContentNoBOM AS
(
    SELECT
        ItemID, Name, [Type], TypeDescription
        ,CASE
            WHEN LEFT(Content,3) = 0xEFBBBF
                THEN CONVERT(varbinary(max), SUBSTRING(Content, 4, LEN(Content)))
            ELSE Content
         END AS Content
    FROM ItemContentBinaries
)
--The outer query gets the content in its varbinary, varchar and xml representations...
SELECT
    ItemID, Name, [Type], TypeDescription
    ,Content                                          --varbinary
    ,CONVERT(varchar(max), Content) AS ContentVarchar --varchar
    ,CONVERT(xml, Content) AS ContentXML              --xml
FROM ItemContentNoBOM
WHERE Name LIKE '%{enter part of report name here}%'

Note you need to edit the {enter part of report name here} placeholder to be what it says; you will then be presented with the row(s) of interest. Work out which is the report you need if multiple rows are returned (note the restored database name is embedded in the SQL, so change it if you restored to another name).

 2017-11-02_12-35-35

On the row (example shown above), click the hyperlink in the “ContentXML” column to open the XML in an XML editor within SSMS (cool feature). Then go to File >> Save As… within SSMS to save it to a drive for later import to the report server.

Rename the xml file

Rename the .xml file as a .rdl file. You can give the file an appropriately descriptive name for the report, as this will later show in the report server.

Upload the report definition file to the server

Upload the .rdl file to the reports folder on the Reporting Services server. Navigate to the folder it should reside in and select “Upload File” as shown below.

2017-11-02_12-22-29

Recreate subscriptions and schedules

You will have to manually set up subscription schedules again for the report if they existed before.

Delete the restored database

From SSMS, right click the database name and select Delete to remove it. Check that the database files are also deleted on the server.
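Alternatively, if you prefer to do this from a query window, something along these lines does the same job (it assumes nothing else is using the restored copy):

-- Remove the temporary restored copy once the report has been recovered.
USE master;
ALTER DATABASE ReportServerRestored SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- drop any open connections
DROP DATABASE ReportServerRestored;  -- this also deletes the database files from disk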

 

The report is now restored and the task is completed!

If you found this useful please comment, it helps motivate me to keep adding to the blog…

Dynamics GP drill down logging to trace file for diagnosing problems

In previous posts I've looked at the protocol handler used to create the drill down features in reports and other applications used with Dynamics GP.

Asynchronous pluggable Protocol Handler for Dynamics GP (for drilldown/drillback)
Dynamics GP Drill Down Protocol Handler error

In those posts, I investigated the debug switches that can be added to the protocol handler's configuration file and showed some of the various errors that can be generated from a Dynamics GP drill down.

The configuration file for the Dynamics GP protocol handler can normally be found here:

\Program Files (x86)\Common Files\Microsoft Shared\Dexterity\Microsoft.Dynamics.GP.ProtocolHandler.exe.config

Adding these switches to the above file will cause a dialog box to pop up when errors occur, which the user can then screenshot and pass to you.

<add key="DebugMode" value="true" />
<add key="UseWindowsEventLog" value="true" />
<add key="UseLogFile" value="true" />

Debug switches for Dynamics GP drill down

Example of error window generated after applying these switches:

Error window from Microsoft dynamics GP drill down

Logging Dynamics GP Drill down errors to trace file

However WCF, the underlying enabling technology that the protocol handler is utilising to talk to GP, also allows us to log activity to a log file that can be analysed. To utilise this, change the configuration file to look like the following, then create a C:\log folder for the log file to go to, or change the location in the line:

<add initializeData="C:\Log\WcfTraceServer.svclog" 

The detail of the logging and what is logged can be changed with different settings in this XML. This example gets you going without learning the details of WCF, which are beyond the scope of this post.


WCF debug nodes added to configuration for Dynamics GP Drill Down debug

Now when an exception occurs the file will be generated in the folder:

example of file created

Using the Service Trace Viewer Tool

This file is an XML file that is difficult to read and understand; however, you can use the Service Trace Viewer Tool (SvcTraceViewer.exe) to investigate it. I have shown an example below. This gives a richer environment to investigate errors and a less disruptive way of capturing them from the client machine.

Debugging Dynamics GP drill down with Service Trace Viewer Tool

Armed with this information from the log file, it is much easier to investigate any errors the service may be encountering, errors that otherwise would be hidden from the user and admin. Below is a full configuration file given as an example to show the context of the changes.

The WCF Configuration Editor Tool (SvcConfigEditor.exe) is my recommended way to edit WCF configuration files, but it may be daunting for those who do not have a basic understanding of WCF.

Example of full configuration file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <source propagateActivity="true" name="System.ServiceModel" switchValue="Information,ActivityTracing">
        <listeners>
          <add type="System.Diagnostics.DefaultTraceListener" name="Default">
            <filter type="" />
          </add>
          <add name="traceListener">
            <filter type="" />
          </add>
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging">
        <listeners>
          <add type="System.Diagnostics.DefaultTraceListener" name="Default">
            <filter type="" />
          </add>
          <add name="traceListener">
            <filter type="" />
          </add>
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add initializeData="C:\Log\WcfTraceServer.svclog" type="System.Diagnostics.XmlWriterTraceListener"
           name="traceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, ProcessId, ThreadId, Callstack">
        <filter type="" />
      </add>
    </sharedListeners>
    <trace autoflush="true" />
  </system.diagnostics>
  <system.serviceModel>
    <diagnostics>
      <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true"
                      logMessagesAtTransportLevel="true" />
    </diagnostics>
    <bindings>
      <netNamedPipeBinding>
        <binding name="NetNamedPipeBinding_IDrillBackToGP" closeTimeout="00:01:00"
                 openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                 transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions"
                 hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                 maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="Transport">
            <transport protectionLevel="EncryptAndSign" />
          </security>
        </binding>
      </netNamedPipeBinding>
    </bindings>
    <client>
      <endpoint address="net.pipe://dynamicsgpdrillback/" binding="netNamedPipeBinding"
                bindingConfiguration="NetNamedPipeBinding_IDrillBackToGP"
                contract="DynamicsGPDrillBackService.IDrillBackToGP" name="NetNamedPipeBinding_IDrillBackToGP" />
    </client>
  </system.serviceModel>
  <appSettings>
    <!-- String value, please use good file system notation (i.e. "c:\Dynamics\GP\DynamicsGPDrillBack.xml") -->
    <add key="BindingName" value="NetNamedPipeBinding_IDrillBackToGP" />
    <!-- Boolean values only (true/false) -->
    <add key="DebugMode" value="true" />
    <add key="UseWindowsEventLog" value="true" />
    <add key="UseLogFile" value="true" />
  </appSettings>
</configuration>