After working to support Dynamics GP macros in our Visual Studio add-in for GP, I discovered a new layer of the macro language not touched on in Mark Polno’s publication of Kevin Gross’s GP Macro Reference.
How to use macros to activate .NET add-in forms
Use the following command:
NewActiveWin dictionary ‘default’ form [customID] window [.NET form name]
[customID] appears to be a jumble of characters that uniquely identifies this as an add-in form.
[.NET form name] is the name of the .NET form we are trying to show.
For example:
NewActiveWin dictionary 'default' form pARthOSt window AuxFormIV00101
Macro commands passed to RecordMacroItem
Any commands sent to the macro subsystem from the .NET form are wrapped in a “ShellCommand” statement. To record macro commands into the currently recording macro, call the following method on the form derived from Microsoft.Dexterity.Shell.DexUIForm:
RecordMacroItem(MacroCommandText as string, MacroComment as string)
The text passed in MacroCommandText is wrapped in a “ShellCommand” statement in the resulting macro file. In the line below, the inner quoted text is the string that was passed as MacroCommandText when the macro was recorded:
ShellCommand 'ClickHit field "btnOK"'
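As a sketch, a button handler on an add-in form might record its click like this. The form name, control name and comment text are purely illustrative; only DexUIForm and RecordMacroItem come from the actual API described above.

```csharp
// Hypothetical sketch: recording a macro command from a .NET add-in form.
// AuxFormIV00101 and btnOK are illustrative names, not from a real project.
using Microsoft.Dexterity.Shell;

public class AuxFormIV00101 : DexUIForm
{
    private void btnOK_Click(object sender, System.EventArgs e)
    {
        // If a macro is currently being recorded, this is written to the
        // macro file wrapped as: ShellCommand 'ClickHit field "btnOK"'
        RecordMacroItem("ClickHit field \"btnOK\"", "OK button pressed");

        // ...normal click handling continues here...
    }
}
```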
Long Macro Lines
There is another scenario that must be dealt with: the add-in macro equivalent of
ContTypeTo field 'Account Description' , 'nt 1'
This command continues typing into the Account Description field, appending to whatever is already there. For the add-in equivalent, the macro system wraps the whole lot in a ShellCommandBegin block, with each line starting with ShellCommandAppend. The following is an example of this arrangement:
ShellCommandAppend 'TypeTo field "rebHTMLText", "<ul>
<li>HDMI cable with swivel ends - up to 180 De'
ShellCommandAppend 'grees </li>
<li>Reduces stress on your cables and risk of disconnecting </li>
ShellCommandAppend 'i>Provides both high definition video and multi-channel audio connection between'
ShellCommandAppend ' digital high definition AV sources such as Blu-ray, DVD players etc. </li>
ShellCommandAppend 'Transfer bandwidth: 10.2Gbps / 340Mhz (v1.3)</li>
<li>Signal Type: Transmission '
ShellCommandAppend 'minimised differential signalling (TMDS)</li>
<li>Connector Type: Gold plated </'
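Notice the fragments split mid-word (“De” / “grees”): the recorder appears to chop the command text into fixed-width pieces and quote each piece on its own ShellCommandAppend line, and replaying simply concatenates the quoted payloads back together. A rough sketch of that chunking idea follows; the 80-character width is a guess from the recorded output, not a documented constant.

```csharp
// Rough sketch of how a long macro command could be split into
// ShellCommandAppend lines. The chunk width of 80 is an assumption
// inferred from recorded output, not a documented value.
using System;
using System.Collections.Generic;

static class MacroWriter
{
    public static IEnumerable<string> WrapLongCommand(string command, int width = 80)
    {
        for (int i = 0; i < command.Length; i += width)
        {
            int len = Math.Min(width, command.Length - i);
            // Each fragment is quoted on its own line; fragments may
            // therefore break in the middle of a word, as seen above.
            yield return "ShellCommandAppend '" + command.Substring(i, len) + "'";
        }
    }
}
```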
How annoying is it that docking can’t be used when developing add-ins for Dynamics GP forms? If a panel is docked, it slides under the toolbar at the top and upsets the visual styles, such as the separator lines on the buttons.
See below, where the panel has been put behind the toolbar, losing the button effects and toolbar visuals (see highlighted area). The toolbar, it seems, is painted onto the form itself.
Just one of those niggles. I end up floating all my controls in a container with anchors set in all directions, which is nothing like as robust as just setting Dock to Fill.
Replicating GP price table to website
To provide our website with bang-up-to-date product prices, as they stand in our ERP system, we replicate the price table from the ERP system to the website SQL Server database. The price table holds nearly two million price rows covering many combinations of currency, item, price quantity, unit of measure, discount breaks and customer-specific price lists.
The replicated table works great, until a big price update is required. If most of the prices are updated, say in line with inflation, a good number of the rows in the table are hit. This creates a BIG transaction that must make its way through the relatively thin wire to our website. From opening the transaction (and thus locking the subscriber table) to committing it can take a long time locally, and the transaction then has to travel over the slow connection to the website and be committed there. While all this is happening, the lock on the price table at the website database blocks any reads of that table until everything has passed through, bringing the website to a halt for tens of minutes.
To avoid this, the queries at the website that interrogate the replicated table could be run at the READ UNCOMMITTED isolation level. However, this could potentially lead to reading “dirty records” that are not ready for public consumption. That is significant when you consider these are price tables: reading prices that are in an unknown state is a no-no.
The first idea was to take a database snapshot before the bulk price update and switch all the views on the table to use the snapshot, letting replication take its time updating the records in the underlying table. Once finished, the views could be switched back to point at the original table again. This could perhaps be controlled by a replicated signalling table from the ERP system, so that we don’t have to issue SQL between the databases. It should work well in that once the all-clear signal is set in the publication database, it will not propagate through to the subscriber until after all the other changes in the log have been replayed and committed at the subscriber.
The second idea was to switch the subscriber database into snapshot isolation mode, simply by executing the following commands:
ALTER DATABASE [ERPReplicationdatabase]
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE [ERPReplicationdatabase]
SET READ_COMMITTED_SNAPSHOT ON
Once the database is in snapshot isolation mode, reads use row versions held by the database. Readers are never blocked: while a write transaction is underway, any read sees the last committed version of each row from before that transaction began. So reading at the website is not blocked while a transaction is being applied, and what is read is the state before the transaction started, which for our application is perfect.
No DML commands need issuing and no signalling between the databases is required, making this the cleanest solution for what we need. The next task is to ensure that all the changes made to the price tables are made inside one transaction, to keep the reads off any of the changed data until it is fully committed to the database.
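As a sketch, the bulk price update on the publisher would then be wrapped in a single explicit transaction, so that once replicated, subscriber readers only ever see either the old prices or the new prices, never a mixture. Table and column names here are illustrative, not from the real schema.

```sql
-- Illustrative only: dbo.PriceTable and UnitPrice are made-up names.
-- One explicit transaction means replication applies the whole price
-- change atomically at the subscriber.
BEGIN TRANSACTION;

UPDATE dbo.PriceTable
SET UnitPrice = UnitPrice * 1.05;  -- e.g. a 5% inflation uplift

COMMIT TRANSACTION;
```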
After pulling a very large solution apart into smaller projects over the last day or two, a Windows Forms event handler that used to work started complaining about the signature not matching for the event. A custom class inherited from EventArgs and added some properties for this event, and the definition for that custom event argument resided in one of the class libraries that had been refactored.
To the eye, and in the IntelliSense tooltips, the signatures looked identical. The breakthrough came from right-clicking on one of the event arguments and choosing “Go To Definition”, which led to the class viewer rather than the code defining the class. This was a clue, as that normally only happens when the source is not loaded in the IDE, or when the project is holding a file reference to the .dll of the class library in question rather than a project reference.
Furthermore, clicking “Go To Definition” on the other side of the event handler did lead to the source definition of the custom class.
I looked in the class viewer in Visual Studio and found two identical copies of the class. Somehow a cached version of the old, pre-refactor .dll was still in the solution. After deleting the /obj and /bin contents for all projects and removing and re-adding the references, it all came back to life as it should.