Recovering a deleted Reporting Services Report

Yes, I deleted the wrong report while housekeeping. This raised the question of how to recover a report once it has been deleted. There is no recycle bin or undelete option in Reporting Services, at least not in the version I work with at the time of writing (Nov 2017).

It turns out to be very simple and quick to restore, provided you have backups of your “ReportServer” database, which of course everyone has.

Option 1 – Restore to point in time

The simplest way is to restore the “ReportServer” database from backup to a point in time just before the deletion, but this would lose any changes made on the report server since the report was deleted. In my case I had spent a few hours after deleting the report housekeeping all kinds of things on the report server, so this was not desirable as I’d lose that work.

Option 2 – Restore the single report from backup

This option is slightly more involved, but still quick and easy. Follow the steps below; it only takes a few minutes to do.

To restore the “ReportServer” database

    • Right click the database in SSMS and select Restore
    • Rename the Destination Database to ReportServerRestored
    • Use the check boxes to choose the point in time to restore to; here I chose not to apply the logs, to make for a speedy restore
    • Click Files in the page selector on the left hand side. If the default path is not appropriate, change the path that the database files will be restored to (under Restore As in the grid; you may need to scroll the grid to see it)
    • Click Options in the page selector on the left hand side and unselect Take tail-log backup
    • Click OK to start the restore

Restore database options in SSMS
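If you prefer T-SQL to the restore dialog, the equivalent restore looks roughly like the sketch below. The backup path, logical file names and restore-to paths are assumptions for illustration only; check the real logical names with RESTORE FILELISTONLY against your own backup.

-- Sketch only: file paths and logical names below are assumptions, adjust to your environment.
RESTORE DATABASE ReportServerRestored
FROM DISK = N'D:\Backups\ReportServer.bak'
WITH MOVE N'ReportServer' TO N'D:\Data\ReportServerRestored.mdf',
     MOVE N'ReportServer_log' TO N'D:\Data\ReportServerRestored_log.ldf',
     RECOVERY,      -- no log backups applied, matching the speedy restore above
     STATS = 10;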

Extract the XML that represents the report from the restored database

Report definitions are stored as XML in a binary column of a table in the database. The following article explains how to get at the XML that defines the reports:

 Extracting SSRS Report RDL (XML) from the ReportServer database

Using the information in this article we can get to the report we need by running the script in SSMS.

--The first CTE gets the content as a varbinary(max)
--as well as the other important columns for all reports,
--data sources and shared datasets.
WITH ItemContentBinaries AS
(
    SELECT
        ItemID, Name, [Type]
        ,CASE Type
            WHEN 2 THEN 'Report'
            WHEN 5 THEN 'Data Source'
            WHEN 7 THEN 'Report Part'
            WHEN 8 THEN 'Shared Dataset'
            ELSE 'Other'
         END AS TypeDescription
        ,CONVERT(varbinary(max), Content) AS Content
    FROM ReportServerRestored.dbo.Catalog
    WHERE Type IN (2,5,7,8)
),
--The second CTE strips off the BOM if it exists...
ItemContentNoBOM AS
(
    SELECT
        ItemID, Name, [Type], TypeDescription
        ,CASE
            WHEN LEFT(Content,3) = 0xEFBBBF
                THEN CONVERT(varbinary(max), SUBSTRING(Content, 4, LEN(Content)))
            ELSE Content
         END AS Content
    FROM ItemContentBinaries
)
--The outer query gets the content in its varbinary, varchar and xml representations...
SELECT
    ItemID, Name, [Type], TypeDescription
    ,Content --varbinary
    ,CONVERT(varchar(max), Content) AS ContentVarchar --varchar
    ,CONVERT(xml, Content) AS ContentXML --xml
FROM ItemContentNoBOM
WHERE Name LIKE '%{enter part of report name here}%'

Note that you need to edit {enter part of report name here} to be what it says; you will then be presented with the row(s) of interest. Work out which is the report you need if multiple rows are returned (note the restored database name is embedded in the SQL, so change it if you restored to another name).

Query results showing the report row to recover

On the row of interest (example shown above), click the hyperlink in the “ContentXML” column to open the XML in an XML editor within SSMS (a cool feature). Then go to File>>Save As.. within SSMS to save it to a drive for later import to the report server.

Rename the xml file

Rename the xml file to have a .rdl extension. You can make the filename an appropriately descriptive name for the report, as this will later show in the report server.

Upload the report definition file to the server

Upload the .rdl file to the reports folder on the Reporting Services server. Navigate to the folder it should reside in and select “Upload File”, as shown below.

Upload File option in the Report Server folder

Recreate subscriptions and schedules

You will have to manually set up subscription schedules again for the report if they existed before.

Delete the restored database

From SSMS right click the database name and select delete to remove it. Check that the database files are also deleted on the server.
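If you prefer T-SQL, something like this does the same job (kicking out any remaining connections first):

-- Drop the restored copy once the report has been recovered.
ALTER DATABASE ReportServerRestored SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE ReportServerRestored;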

 

The report is now restored and the task is completed!

If you found this useful please comment, it helps motivate me to keep adding to the blog…

Dynamics GP drill down logging to trace file for diagnosing problems

In previous posts I’ve looked at the protocol handler used to provide the drill down features in reports and other applications used with Dynamics GP.

Asynchronous pluggable Protocol Handler for Dynamics GP (for drilldown/drillback)
Dynamics GP Drill Down Protocol Handler error

In those posts, I investigated the debug switches that can be added to the protocol handler’s configuration file and showed some of the various errors that can be generated from a Dynamics GP drill down.

The configuration file for the Dynamics GP protocol handler can normally be found here:

\Program Files (x86)\Common Files\Microsoft Shared\Dexterity\Microsoft.Dynamics.GP.ProtocolHandler.exe.config

Adding these switches to the above file will cause a dialog box to pop up when errors occur, which the user can then screenshot and pass to you.

<add key="DebugMode" value="true" />
<add key="UseWindowsEventLog" value="true" />
<add key="UseLogFile" value="true" />

Debug switches for Dynamics GP drill down

Example of error window generated after applying these switches:

Error window from Microsoft dynamics GP drill down

Logging Dynamics GP Drill down errors to trace file

However WCF, the underlying technology that the protocol handler uses to talk to GP, also allows us to log activity to a log file that can then be analysed. To do this, change the configuration file to look like the following, then create a C:\Log folder for the log file to go to, or change the location in the line:

<add initializeData="C:\Log\WcfTraceServer.svclog" 

The detail of the logging and what is logged can be changed with different settings in this XML; this example gets you going without needing to learn the details of WCF, which are beyond the scope of this post.


WCF debug nodes added to configuration for Dynamics GP Drill Down debug

Now, when an exception occurs, the trace file will be generated in that folder:

example of file created

Using the Service Trace Viewer Tool

This is an XML file that is difficult to read and understand; however, you can use the Service Trace Viewer Tool (SvcTraceViewer.exe) to investigate it. I have shown an example below. This gives a richer environment for investigating errors and allows a less disruptive way of capturing them from the client machine.

Debugging Dynamics GP drill down with Service Trace Viewer Tool

Armed with this information from the log file, it is much easier to investigate any errors the service may be encountering, errors that would otherwise be hidden from the user and admin. Below is a full configuration file given as an example to show the context of the changes.

The WCF Configuration Editor Tool (SvcConfigEditor.exe) is my recommended way to edit WCF configuration files, but it may be daunting for those who do not have a basic understanding of WCF.

Example of full configuration file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <source propagateActivity="true" name="System.ServiceModel" switchValue="Information,ActivityTracing">
        <listeners>
          <add type="System.Diagnostics.DefaultTraceListener" name="Default">
            <filter type="" />
          </add>
          <add name="traceListener">
            <filter type="" />
          </add>
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging">
        <listeners>
          <add type="System.Diagnostics.DefaultTraceListener" name="Default">
            <filter type="" />
          </add>
          <add name="traceListener">
            <filter type="" />
          </add>
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add initializeData="C:\Log\WcfTraceServer.svclog" type="System.Diagnostics.XmlWriterTraceListener"
           name="traceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, ProcessId, ThreadId, Callstack">
        <filter type="" />
      </add>
    </sharedListeners>
    <trace autoflush="true" />
  </system.diagnostics>
  <system.serviceModel>
    <diagnostics>
      <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true"
                      logMessagesAtTransportLevel="true" />
    </diagnostics>
    <bindings>
      <netNamedPipeBinding>
        <binding name="NetNamedPipeBinding_IDrillBackToGP" closeTimeout="00:01:00"
                 openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                 transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions"
                 hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                 maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="Transport">
            <transport protectionLevel="EncryptAndSign" />
          </security>
        </binding>
      </netNamedPipeBinding>
    </bindings>
    <client>
      <endpoint address="net.pipe://dynamicsgpdrillback/" binding="netNamedPipeBinding"
                bindingConfiguration="NetNamedPipeBinding_IDrillBackToGP"
                contract="DynamicsGPDrillBackService.IDrillBackToGP" name="NetNamedPipeBinding_IDrillBackToGP" />
    </client>
  </system.serviceModel>
  <appSettings>
    <!-- String value, please use good file system notation (i.e. "c:\Dynamics\GP\DynamicsGPDrillBack.xml") -->
    <add key="BindingName" value="NetNamedPipeBinding_IDrillBackToGP" />
    <!-- Boolean values only (true/false) -->
    <add key="DebugMode" value="true" />
    <add key="UseWindowsEventLog" value="true" />
    <add key="UseLogFile" value="true" />
  </appSettings>
</configuration>

Dynamics GP, Item description is one hundred character capacity?

A little curiosity of mine is around this finding & why it is so…

Dynamics GP Item description field


Fill the item description field of an item in Dynamics GP and then paste the text into a notepad application to measure its length. You will find it has a capacity of one hundred characters…

measure capacity of UI as 100


Yet have a look at the database, it has a field size of 101…

Database field length is 101
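As a quick way to check this yourself, the catalog views show the declared length. This is only a sketch; it assumes the item master table IV00101 and its ITEMDESC column in a GP company database.

-- Assumed object names (dbo.IV00101 / ITEMDESC); run in a GP company database.
SELECT c.name AS column_name,
       t.name AS data_type,
       c.max_length AS length_in_bytes  -- shows 101, one more than the 100 keyable characters
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.IV00101')
  AND c.name = 'ITEMDESC';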

 

But look, the UI is limiting the keyable length to 100…

Keyable length of Item Description 100

 

So there is an “extra”, inaccessible character in the description that you cannot use? What secret confidential information do you keep in that extra description character?

 

[Edit 2017/08/08] David in the comments explains this for us as:

Every string field of even length will have an extra character at the database level. 

This is a Dexterity feature from legacy behaviour. 

Before SQL was used as the database, Ctree and Btrieve were used. They performed better when each record in a table was a multiple of 16 bits, 2 bytes, as the early x86 processors were 16 bit. 

To ensure this, string fields were padded to make the storage length an even number. Strings require the number of characters in the string plus a length byte when stored and so can be 0-255 characters long. 

On your screenshot, the keyable length is 100, plus a storage byte = 101, plus pad to even gives 102 with an extra hidden character. 

Odd length strings don't need padding to be an even total size. 
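David’s arithmetic boils down to a tiny formula: storage = keyable length + 1 length byte, rounded up to an even number. A throwaway T-SQL illustration of my own (a sketch, not anything GP itself runs):

-- Dexterity storage size = keyable length + 1 length byte, padded up to an even number of bytes.
DECLARE @keyable int = 100;
SELECT @keyable + 1 + ((@keyable + 1) % 2) AS dexterity_storage_bytes;  -- 102 for a 100 character field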

Why does Dynamics GP use Table Heaps (no clustered indexes)?

First we should define what a table heap is. A heap is a table without a clustered index.

Data is stored in the heap without specifying an order. Usually data is initially stored in the order in which the rows are inserted into the table, but the Database Engine can move data around in the heap to store the rows efficiently, so the order of the data cannot be predicted.

 

There are sometimes good reasons to leave a table as a heap instead of creating a clustered index, but using heaps effectively is an advanced skill. Most tables should have a carefully chosen clustered index unless a good reason exists for leaving the table as a heap.

 

Generally I would say we are more used to finding a clustered index on a table, but looking below you can see the more typical picture for the indexes of a GP table.

Indexes of a GP table, all non-clustered

Microsoft Dynamics GP does not use many clustered indexes in its database.
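You can see the split for yourself with a quick count against a company database; a heap shows up in sys.indexes with index_id 0 and a clustered table with index_id 1. This is a generic sketch, not anything GP-specific.

-- Count how many tables are heaps versus clustered in the current database.
SELECT CASE i.index_id WHEN 0 THEN 'HEAP' ELSE 'CLUSTERED' END AS storage_type,
       COUNT(*) AS table_count
FROM sys.tables AS t
INNER JOIN sys.indexes AS i
     ON i.object_id = t.object_id AND i.index_id IN (0, 1)
GROUP BY CASE i.index_id WHEN 0 THEN 'HEAP' ELSE 'CLUSTERED' END;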

 

If you read The Microsoft Dynamics® GP Architecture White Paper, then it would seem to be a decision based on research and data:

Extract from the Microsoft Dynamics GP Architecture White Paper

So which tables? Let’s look for tables in the SOP series that have clustered indexes on them:

SELECT
     t.name AS table_name,
     I.type_desc AS index_type,
     I.is_unique AS is_unique_index
FROM sys.tables AS t
INNER JOIN sys.schemas AS s
     ON t.schema_id = s.schema_id
INNER JOIN sys.indexes AS I
     ON t.object_id = I.object_id
WHERE I.type_desc = 'CLUSTERED'
AND t.name LIKE 'SOP%'
ORDER BY 1;

Query results, SOP tables with clustered indexes

 

Interestingly, only seven of the SOP tables have clustered indexes; the rest are heaps. So large tables such as SOP30300, which is particularly large (an 8 GB table in one of the companies I work with), are heaps. This at first glance seems wrong; in fact the MS advice from Heaps (Tables without Clustered Indexes) is:

Do not use a heap when there are no non-clustered indexes and the table is large. In a heap, all rows of the heap must be read to find any row.
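To see how big a particular heap is on your own system, something like the sketch below works; SOP30300 is used as the example here, and the query simply sums the partition stats for the table’s heap or clustered index.

-- Is the table a heap, and how much space does it take? (sketch, SOP30300 as the example)
SELECT i.type_desc AS storage_type,
       SUM(ps.row_count) AS row_count,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.indexes AS i
INNER JOIN sys.dm_db_partition_stats AS ps
     ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.SOP30300')
  AND i.index_id IN (0, 1)
GROUP BY i.type_desc;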

 

I do wonder about the conditions under which the decision to not use clustered indexes was made, but have to trust the research was accurate (/S).

 

So the reason? GP didn’t always run on SQL Server; it ran on ISAM databases in the old days. The database schema that GP uses owes a lot to that legacy, which is the reason for many of the oddities and shortcomings in GP’s use of SQL Server. I expect this use of heap tables is also from that legacy, and that the study showed there were no performance benefits from introducing a clustered index.

 

There are reports of people converting the primary keys into clustered indexes with success, and it is possible to write a DDL script to go through the database doing this for all tables too. However this is internet forum hearsay, so I cannot vouch for the accuracy of this information nor the rigour of the testing. I also fear that the work could be undone when tables are dropped during upgrades, losing the changes. I'm pretty certain this would also invalidate your support with MS, as all that careful performance tuning that the GP database schema has experienced over the years may not be compatible with this action.  ;)
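For illustration only, converting a single table’s primary key into a clustered one would look something like the sketch below. The constraint name and key columns here are assumptions (check the real ones with sp_helpindex first), and given the upgrade and support caveats above I am not recommending you actually do this.

-- Hypothetical sketch: constraint name and key columns are assumed, verify with sp_helpindex 'SOP30300'.
-- A nonclustered primary key has to be dropped and recreated to become clustered.
ALTER TABLE dbo.SOP30300 DROP CONSTRAINT PKSOP30300;
ALTER TABLE dbo.SOP30300 ADD CONSTRAINT PKSOP30300
     PRIMARY KEY CLUSTERED (SOPTYPE, SOPNUMBE, LNITMSEQ, CMPNTSEQ);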