Smart List, Excel cannot open file because the file format or extension is not valid – Dynamics GP

Exporting from a Dynamics GP smart list to Excel can cause the following error on GP 2013 R2 and later.

Excel cannot open the file “xx.xlsx” because the file format or the file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file


The community post SmartList Export - Excel cannot open the file... has some ideas for Smart List Builder and what may cause the issue; however, we saw this issue today in ordinary smart lists.

By prematurely stopping the export at different points (press Stop), it is possible to home in on the threshold record where the export fails. Limiting the number of columns in the smart list also narrows the search down to the field involved. In this example the culprit was some extended characters that had found their way from an ecommerce solution, through eConnect, into the customer name field. Bear in mind that it is easy to make wrong assumptions about the tables and fields behind a smart list; for example, the customer name may originate from the name entered on an order rather than from the debtor maintenance record. Try to establish the true source of the data in the smart list.

Once the offending character was identified, it was edited out of the name in the GP user interface, after which the export to Excel worked correctly again.

The following Stack Overflow question helped by providing a T-SQL function that allows such characters to be found:

Find Non-Ascii Characters in One Varchar Column or Multiple Columns using SQL Server

The following function looks for characters that are “non-ASCII”:

create function [dbo].[udf_ContainsNonASCIIChars]
(
    @string nvarchar(4000),
    @checkExtendedCharset bit
)
returns bit
as
begin

    -- Start at 1: SUBSTRING is 1-based, so starting at 0 would skip the first character
    declare @pos int = 1;
    declare @char varchar(1);
    declare @return bit = 0;

    while @pos <= len(@string)
    begin
        select @char = substring(@string, @pos, 1);

        if ascii(@char) < 32 or ascii(@char) > 126
        begin
            if @checkExtendedCharset = 1
            begin
                -- Permit common extended-charset characters (accents, currency symbols etc.)
                if ascii(@char) not in (9,124,130,138,142,146,150,154,158,160,170,176,180,181,183,184,185,186,192,193,194,195,196,197,199,200,201,202,203,204,205,206,207,209,210,211,212,213,214,216,217,218,219,220,221,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,248,249,250,251,252,253,254,255)
                begin
                    select @return = 1;
                    break;  -- offending character found, stop scanning
                end
            end
            else
            begin
                select @return = 1;
                break;
            end
        end

        select @pos = @pos + 1;
    end

    return @return;

end



The above function can be called against GP table fields like this; run it for any suspect fields and tables:

-- Find Non-ASCII in Debtor CUSTNAME
select *
from RM00101
where dbo.udf_ContainsNonASCIIChars(CUSTNAME, 1) = 1

-- Find Non-ASCII in Purchase order work, Vendorname
select *
from POP30300
where dbo.udf_ContainsNonASCIIChars(VENDNAME, 1) = 1

-- Find Non-ASCII in Vendors, Vendorname
select *
from PM00200
where dbo.udf_ContainsNonASCIIChars(VENDNAME, 1) = 1

Note that many characters such as “™” and “®” don’t seem to cause issues. The character that caused the problem in this case was a “T”-like character that had been substituted for the “&” character. Investigating, we found this to be character code 22 (the ASCII SYN control character). Having a numeric code is useful, because attempting to copy and paste the character itself causes issues in editors.



Now that there is a character code number to work with, a SQL REPLACE is possible for occurrences of this character in the affected fields. As always, back up your data, and don’t experiment if you don’t know what you are doing; contact your support partner instead.

SELECT REPLACE(VENDNAME, char(22), '&') from pm00200 where vendorid='DELO'
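The SELECT above only previews the result. To actually fix the data, a corresponding UPDATE might look like the sketch below; the vendor ID is the example from above, the CHARINDEX guard is my own addition to limit the rows touched, and, as stated, take a backup before running anything like this:

```sql
-- Preview first with the SELECT above, then replace CHAR(22) with '&'
UPDATE PM00200
SET VENDNAME = REPLACE(VENDNAME, CHAR(22), '&')
WHERE VENDORID = 'DELO'
  AND CHARINDEX(CHAR(22), VENDNAME) > 0
```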

It is also a good idea to check your integrations and integrating applications to find the source of the corruption.

I would like some further investigation time to see which other characters can cause this error; my guess is that it is all the low-value control codes (below 32). It would be possible to make a SQL reporting job to notify users of data issues if they recur in the future.
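As a sketch, such a reporting job could be a scheduled SQL Agent step running a query like the one below and alerting if any rows come back. It reuses the function and the example tables from above; the output column aliases are my own naming:

```sql
-- List rows containing suspect characters, across the fields checked earlier
select 'RM00101' as TableName, CUSTNMBR as KeyValue, CUSTNAME as BadValue
from RM00101
where dbo.udf_ContainsNonASCIIChars(CUSTNAME, 1) = 1
union all
select 'PM00200', VENDORID, VENDNAME
from PM00200
where dbo.udf_ContainsNonASCIIChars(VENDNAME, 1) = 1
```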

Check for User Messages(1) – Dynamics GP Process Monitor

Since GP 2013, when checking the GP Process Monitor (Microsoft Dynamics GP >> Process Monitor), the message “Check for User Messages(1)” may sometimes be found in the process queue.

The origin of this message is a new feature of GP that allows administrators to send messages to GP users. These messages pop up on the users’ screens after a short delay.

This is achieved through a polling process that runs regularly in each client instance and checks for any messages waiting to be displayed to the user. It is this process, waiting in the queue, that shows up as “Check for User Messages(1)”.

It keeps adding itself to the process queue. After running check links or similar very long processes, I’ve seen dozens of these stacked up awaiting processing. Once they reach the top of the queue, these processes quickly pop off and disappear.

See below process monitor screen shot for an example of stuck processes:


Send User Message Functionality

The new message functionality is found under:
Microsoft Dynamics GP>> Tools>> Utilities >> System>> Send Users Message


The user is selected and the message entered.


After around 30 seconds or so, this results in the following on the user’s screen.


Obviously this process is polling a table in the DYNAMICS database. The table is SY30000, and the message is placed in the “Offline_Message” field; see below, where the message sent to me is “test”.
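The pending messages can be inspected directly in SQL. The table and field names are those identified above; the assumption here is that the field is blank when no message is waiting (a read-only peek, not something to edit by hand):

```sql
-- Show any users with a message waiting to be delivered
SELECT * FROM DYNAMICS..SY30000 WHERE Offline_Message <> ''
```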


Stuck Process Queue

If the process queue gets stuck (crashes), it no longer processes queued jobs; however, the “check for user messages” process continues to be added to the queue every minute or so. As the queue has crashed and is no longer removing items, it just builds up, filling with this process. The result is this process appearing to swamp the queue.

As the “check for user messages” process is the most frequently added process, it is the one most likely to be added immediately after any crash occurs in the queue. This makes it appear, when looking at the process monitor, as if “check for user messages” caused the problem; however, it is merely a victim. The process actually responsible is whichever one immediately preceded it in the queue, as that is the item that caused the queue to halt.

Investigating why that previous process crashed can be done by checking SQL logs, switching on logging in GP (use the Support Debugging Tool), using SQL Profiler, swapping modified reports back to unmodified vanilla ones, and so on. This is really a different subject, and I’d encourage you to involve a GP consultant, as investigations can be arduous.


I’ve just noticed that the “i” button on the User Activity window (Microsoft Dynamics GP >> Tools >> Utilities >> System >> User Activity), shown below, also acts as a shortcut to the send message window.


SQL Server 2012 SP2 Replication Log Reader error v.11.00.5058

After moving Dynamics GP to a newly built server using a backup-and-restore strategy, the Dynamics GP SQL replication kept causing problems. The following message flooded the server’s error logs. I ended up rebuilding the replication configuration entirely to solve the issue, but there may be some clues in the steps I went through that might help someone else out there. There was a lot more digging and prodding than the summary below shows!

The last step did not log any message! (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)


It was possible to add text logging by going to the “Run Agent” step in the Log Reader SQL job and adding the following to the list of parameters:

-Publisher [xxxxx] -PublisherDB [xxxxx] -Distributor [xxxxx] -DistributorSecurityMode 1  -Continuous -Output f:\SQLBackups\Reports\ReplicationOutputLog.txt


The ReplicationOutputLog.txt then contained the following:

Validating publisher

            DBMS: Microsoft SQL Server
            Version: 11.00.5058
            catalog name:
            user name: dbo
            API conformance: 0
            SQL conformance: 0
            transaction capable: 1
            read only: F
            identifier quote char: "
            non_nullable_columns: 0
            owner usage: 15
            max table name len: 128
            max column name len: 128
            need long data len:
            max columns in table: 1000
            max columns in index: 16
            max char literal len: 131072
            max statement len: 131072
            max row size: 131072
2015-02-06 08:54:59.278 OLE DB xxxxxx 'xxxxxxx': {call sp_MSensure_single_instance(N'Repl-LogReader-0-xxxx-19', 2)}
2015-02-06 08:54:59.278 Parameter values obtained from agent profile:
            -pollinginterval 5000
            -historyverboselevel 1
            -logintimeout 15
            -querytimeout 1800
            -readbatchsize 500
            -readbatchsize 500000
2015-02-06 08:54:59.278 Status: 0, code: 22037, text: 'The last step did not log any message!'.
2015-02-06 08:54:59.278 The last step did not log any message!


We were also getting lots of errors along the lines of “Only one Log Reader Agent or log-related procedure” … “can connect to a database at a time” (see below for a screenshot).


Google revealed some posts about SP1/SP2 hotfixes and cumulative updates for SQL 2012 fixing issues in the area of multiple log reader errors like this one. Other posts talked about the database not being upgraded correctly, with new schema differences between versions. My conclusion from reading these posts was that the SQL replication database (distribution) may have been brought over from the previous server (we don’t know the full history of whether it was restored and/or replication was rebuilt by scripts). The restored database may not have been correctly upgraded by whatever is in SP1/SP2: both were applied prior to the restore of our data, so, given the timeline, any operations included in SP1/SP2 would not have been applied to it.

After a few evenings spent removing everything I could find of relevance in the replication distribution database tables, and clearing out and recreating the log reader agent and snapshot agent jobs, problems still persisted. After countless recreations of the publications and subscriptions, it felt like remnants of a previous version’s replication were clinging on deep in the bowels of replication (amazing how gut feeling develops with experience).

Failing in patience, and for lack of a silver bullet, the solution was to remove the subscriptions and publications and disable replication on the server. I then ensured the replication distribution database and its data files were gone, ensured no SQL jobs relating to replication were left behind (there were some, so I removed them too), and checked all the replication-related nodes in SSMS to make sure nothing remained at all, including on the subscriber (a shadow of the subscription was left on the subscriber).
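The teardown steps above can be sketched in T-SQL using the standard replication system procedures. The publication and database names below are placeholders, not the real ones from this environment; treat it as an outline, take backups, and expect leftovers to still need manual checking in SSMS:

```sql
-- On the publisher, in the published database: drop subscriptions, then the publication
EXEC sp_dropsubscription @publication = N'GPPublication', @subscriber = N'all', @article = N'all';
EXEC sp_droppublication @publication = N'GPPublication';

-- Disable publishing on the database
EXEC master.dbo.sp_replicationdboption @dbname = N'PublishedDB', @optname = N'publish', @value = N'false';

-- On the distributor: remove the distributor configuration (and the distribution database)
EXEC master.dbo.sp_dropdistributor @no_checks = 1;
```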

Then I built replication anew: created a new distribution database (with a new name, for peace of mind), created a new publication, added the articles and created new subscribers.

After reinitializing all subscriptions and taking new snapshots, everything started working as it should.

I feel lucky that I don’t have a totally mission-critical replication setup. It helps that we keep two copies of our replicated tables in different schemas, with views fronting them; this allows us to toggle between live and cached data modes while “messing on with replication”.

Hey, it’s working!


The only thing left to figure out is whether the “one log reader” error is something to worry about, or whether it will simply go away with the next service pack, whenever that is…