Team Foundation Server Source Control locks

Locks

Moving laptops, desktops or operating systems, or having a developer move on, all occasionally cause issues with items left checked out to workspaces you have no control over.

There is a good blog post here on how to resolve these issues from the command line with TF.exe. You can remove workspaces or undo checkout locks on source code using the commands outlined below.

Undoing a checkout that belongs to another user

James Manning's blog

The key command for my own reference is:

TF.exe undo
 /workspace:<WorkspaceName>;<UserName>
 $/<TeamProject>/<FileLocation> 
 /s:http://<YourTFSServer>:8080

I had to use this today after finding a couple of instances of code checked out on my old Vista installation, even though the files had been migrated to my new Windows 7 install. So I undid the checkouts, finished the changes and checked in the new versions.
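
For completeness, when the whole workspace is dead (the machine is gone), the workspace itself can be removed rather than just undoing the checkout. From memory the companion command is along these lines; check tf help workspace before relying on it:

TF.exe workspace /delete
 /s:http://<YourTFSServer>:8080
 <WorkspaceName>;<UserName>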

Prerequisites in VS2008 Setup project

Directory path for prerequisites

For .NET Visual Studio installation projects, right-clicking the setup project in Solution Explorer allows the properties of that project to be viewed.

Setup Properties Window 

In here is a button that allows the project prerequisites to be defined. When the setup project is built, it produces a setup.exe and a yourproject.msi.

Prerequisites Window

Setup.exe is a bootstrap program; that is, it can run other things before your MSI installer starts executing. Customers should therefore be advised to run the setup.exe rather than the MSI.

Most often in .NET programming this is a check that the operating system you are installing the software onto is running the required .NET Framework version, installing it if it is found to be missing. The checks that setup.exe executes are defined in the project prerequisites window above. If a package needs installing, there are three choices for where the bootstrapper can automatically load it from: a manually entered directory path, the path the installer is running from, or the Internet. If installing from the installer's path is chosen, the build copies the package directory to the root of the compiled setup project output. It gets the package from a package store elsewhere on the machine.

The path to the central prerequisites store is something I keep getting confused about and forgetting between the times I set up installer projects, so here is a reminder to myself. For my installation of VS2008 it is:

C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages

In here you find a subdirectory for each package, such as the .NET Framework, that we may wish to install. Visual Studio iterates through this directory to display the available packages in the prerequisites window.
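
For example, on my VS2008 machine the directory contains subfolders along these lines, the exact set depending on which SDKs are installed:

DotNetFX
DotNetFX35
SqlExpress
WindowsInstaller3_1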

Bootstrap package store locations

In each of these package subdirectories is a product.xml file that defines the criteria for checking for the existence of the product on the machine, and how to install it if it is missing.

Custom packages

It is easy to take one of these product.xml files and hack it to install your own package, which is what we do to install a media player for one of our products. You can find guides to doing this by searching for "custom installer packages guide" on a search engine.
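
To give a flavour of the shape of the file, here is a cut-down sketch; every product name, registry key and file name below is hypothetical, so treat it as an illustration of the structure rather than a working manifest (each package also carries localised display strings in a package.xml alongside it):

<?xml version="1.0" encoding="utf-8"?>
<Product xmlns="http://schemas.microsoft.com/developer/2004/01/bootstrapper"
         ProductCode="MyCompany.MediaPlayer">
  <!-- Files copied into the package directory next to setup.exe -->
  <PackageFiles>
    <PackageFile Name="mpsetup.exe"/>
  </PackageFiles>
  <!-- How to detect that the product is already on the machine -->
  <InstallChecks>
    <RegistryCheck Property="PlayerVersion"
                   Key="HKLM\Software\MyCompany\Player" Value="Version"/>
  </InstallChecks>
  <!-- How to install it when the check finds nothing -->
  <Commands>
    <Command PackageFile="mpsetup.exe" Arguments="/quiet">
      <InstallConditions>
        <BypassIf Property="PlayerVersion" Compare="ValueExists"/>
      </InstallConditions>
      <ExitCodes>
        <ExitCode Value="0" Result="Success"/>
        <DefaultExitCode Result="Fail" FormatMessageFromSystem="true"/>
      </ExitCodes>
    </Command>
  </Commands>
</Product>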

Google Mini Remove URL from index

One of my ASP.NET ecommerce applications uses URL rewriting for product pages. For example:

Item SKU Number: 473-151
Product Description: Bright Products, Black box converter
Website URL: http://www.mydomain.com/Products/473-151- Bright Products, Black box converter

Note: the text after the SKU item number is irrelevant, as it is disregarded by the ASP.NET engine; only the 473-151 finds the page.
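
The rewriting code itself is beyond the scope of this post, but the idea is roughly the following sketch; the names are hypothetical, not our production code:

'Pull the SKU from the front of the rewritten path and ignore the
'descriptive text that follows it
Dim slug As String = Request.Path.Substring(Request.Path.LastIndexOf("/"c) + 1)
Dim m As System.Text.RegularExpressions.Match = _
    System.Text.RegularExpressions.Regex.Match(slug, "^(\d+-\d+)")
If m.Success Then
    Dim sku As String = m.Groups(1).Value 'just "473-151" selects the page
End If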

Google Mini

We use a Google Mini to index the site and provide search results to site users. As a product page can historically be reached by a number of differing routes using different access URLs, and to help keep the page count down in the results from the box, we use the canonical link tag to provide what should be the definitive URL for the page.

Canonical headers are supported by all the search engines of importance. The tag looks like this:
<link rel="canonical" href="http://www.mydomain.com/Products/473-151- Bright Products, Black box converter" />

Change of description

Recently a supplier complained that the description of the product in the URL for the item was wrong, although on the page itself it was correct. On investigation it turned out the item description had been changed because the supplier had rebranded, see below.

Item SKU Number: 473-151
Product Description: Mighty Products, Black box converter
Website URL: http://www.mydomain.com/Products/473-151- Bright Products, Black box converter

This meant that when the item was searched for on the Google Mini, it found “Black box converter” but showed the incorrect URL above. It should have the following URL:

Website URL: http://www.mydomain.com/Products/473-151- Mighty Products, Black box converter

What's wrong?

So what is wrong? It turns out that the Google Mini still has the old URL in its index, and the page is very persistent at staying there. Thus the box happily crawls it each night.

It seems “the index” is a list of pages the Google Mini has found at some time in the past. A page can by now have been “unlinked” from the site, with no inbound links to it, but it will still persist in the index and thus in results.

The only ways to remove a page from the Google Mini index are highlighted in the document Administering Crawl for Web and File Share Content: Introduction, where it states that a document is removed from the index when:

  • The license limit is exceeded
  • The crawl pattern is changed
  • The robots.txt file is changed
  • Document is not found (404)

These are the only ways that a page will be removed. In this scenario the old URL still returns a valid page, as it has the same item SKU number, so it keeps being indexed under the wrong URL, potentially forever!

It is also worth noting that if you are having problems with re-indexing the content of a page, rather than its URL, check the “Last-Modified” header being returned by the web server in the response for that page. This is particularly an issue with dynamic pages; static pages are normally dealt with appropriately using the last-modified date of the file on the site's file system. You can study the headers from a page using a developer toolbar (now built into IE8).
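
For a dynamic page you can set that header yourself. A minimal sketch, assuming the product record carries a last-updated timestamp (ProductDetails.LastUpdated is a hypothetical field):

'Give the dynamic page a real Last-Modified header so a crawler can
'tell whether the content has changed since its last visit
Response.Cache.SetLastModified(Me.ProductDetails.LastUpdated)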

Solution attempt 1

Aha, I thought, I know how to tackle this. The old URL no longer exists, as it has been superseded by the new page, so the ASP.NET site should issue a Response.Status of “301 Moved Permanently” to force the Google Mini off the old page, index the new page, register that URL and presumably drop the old URL from the index.

A couple of lines and the problem was solved, I thought.

'Compare the canonical URL for this product with the URL actually requested
If Not officialUriForPage.PathAndQuery.EndsWith(Request.RawUrl) Then
  'Requested via an out-of-date URL: issue a permanent redirect to the
  'current product URL so that crawlers update their index
  Response.Clear()
  Response.Status = "301 Moved Permanently"
  Response.AddHeader("Location", utility.GetPublicProductURL( _
            Me.ProductDetails.ProductId, Me.ProductDetails.ItemDescription))
  Response.End()
End If

So now the old page issues a “301 Moved Permanently” response to the browser and the Google Mini; the Mini should go and index the new page and drop the old URL. However, it doesn't work that way.

Solution attempt 2

After the overnight index, solution 1 turned out a failure. Reading the documentation again, it turns out the Google Mini is being helpful and returning both URLs, the new one and the moved one, for any search that hits inside the new URL's content. It seems the four methods of removal noted earlier really are the only ways to remove a page from the index.

Action

I could put the URL I wanted removed from the index into the “Don't Crawl URLs” box of the crawl pattern definition in the Google crawl admin pages. This would cause the Google Mini, after 15 minutes to six hours, to re-examine the index, realise the page should no longer be there and remove it. This works under the criterion “The crawl pattern is changed”, item two of the list of removal conditions above. I could then remove the don't-crawl URL from the Google box again, so I don't forget it and accidentally block a future replacement URL. This would work for a few pages, but we have about 15,000 products online and need something better.

Instead I went for the last option in the list, “If the search appliance receives a 404 (Document not found) error from the Web server when attempting to fetch a document, the document is removed from the index.”.

Hence I changed the code sample above to respond with our generic 404 Not Found page rather than the moved redirect. Check that the 404 page responds with a 404 status code in its headers, or the Google Mini will not see the 404 status. However, I don't want this to happen for end users, only for the Google Mini: end users just want to be redirected to the new URL, and a 404 Not Found is rude and would lose sales, as users assume the item no longer exists. Luckily the Google box sends a configurable user agent in its requests, so we can behave differently towards it.

If Not IsNothing(System.Web.HttpContext.Current.Request. _
         ServerVariables("HTTP_USER_AGENT")) _
    AndAlso System.Web.HttpContext.Current.Request. _
         ServerVariables("HTTP_USER_AGENT").Contains("gsa-crawler") Then
    'Not found for the Google Mini: the 404 status line is what makes
    'the appliance drop the page from its index
    Response.Clear()
    Response.Status = "404 Not Found"
    Response.AddHeader("Location", "/ErrorPages/404.aspx")
    Response.End()
Else
    'Everyone else gets the friendly permanent redirect to the new URL
    Response.Clear()
    Response.Status = "301 Moved Permanently"
    Response.AddHeader("Location", common.utility.GetPublicProductURL( _
                Me.ProductDetails.ProductId, Me.ProductDetails.ItemDescription))
    Response.End()
End If
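
To check the behaviour without waiting for the overnight crawl, you can fake the appliance's request. A quick test sketch; "gsa-crawler" is the default user agent prefix, but verify the configured string on your own box:

Dim req As System.Net.HttpWebRequest = CType(System.Net.WebRequest.Create( _
    "http://www.mydomain.com/Products/473-151- Bright Products, Black box converter"), _
    System.Net.HttpWebRequest)
req.UserAgent = "gsa-crawler"
req.AllowAutoRedirect = False
Try
    req.GetResponse()
Catch ex As System.Net.WebException
    'A 404 surfaces as a WebException; the status code confirms the crawler path
    Dim resp As System.Net.HttpWebResponse = _
        CType(ex.Response, System.Net.HttpWebResponse)
    Console.WriteLine(resp.StatusCode) 'expect NotFound (404)
End Try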

I hope the problem is now resolved.

Invalid Viewstate - Request path: /ScriptResource.axd

We have been getting these errors in our logs more and more; they have been building up recently. Thirty today was the final straw, and I had to investigate further.
The exception is as follows:

Process information: 
    Process ID: 5164 
    Process name: w3wp.exe 
    Account name: NT AUTHORITY\NETWORK SERVICE 
 
Exception information: 
    Exception type: HttpException 
    Exception message: Invalid viewstate. 
 
Request information: 
    Request URL: http://www.mydomain.co.uk/ScriptResource.axd?d=zssJ_ZkntDf8wJe24iZ0zF_fJVnfucP3oqIlDwt8BO1EweVFWfL2juu9RlhRVTDPTWMvo7NxKPBKbedKroducts</a></span></li><li><span><ahttp://www.mydomain.co.uk/ScriptResource.axd?d=zssJ_ZkntDf8wJe24iZ0zF_fJVnfucP3oqIlDwt8BO1EweVFWfL2juu9RlhRVTDPTWMvo7NxKPBKbedKroducts&lt;/a></span></li><li><span><ahttp://www.mydomain.co.uk/ScriptResource.axd?d=zssJ_ZkntDf8wJe24iZ0zF_fJVnfucP3oqIlDwt8BO1EweVFWfL2juu9RlhRVTDPTWMvo7NxKPBKbedKroducts&lt;/a></span></li><li><span><ahttp://www.mydomain.co.uk/ScriptResource.axd?d=zssJ_ZkntDf8wJe24iZ0zF_fJVnfucP3oqIlDwt8BO1EweVFWfL2juu9RlhRVTDPTWMvo7NxKPBKbedKroducts&lt;/a></span></li><li><span><a%20href=href=href=href= 
    Request path: /ScriptResource.axd 

I found a promising lead on the Project 31-A blog that led me to Microsoft Connect and this report: Bug IE8 – 4K dropped - "Invalid viewstate" when loading ScriptResource.axd or WebResource.axd (asp net)
It seems there is an IE8 bug that might explain the growing number of occurrences. The description of the issue matches what we are seeing to a tee. I have started work on mitigating our exposure to the issue by removing the META content-type tags (content="text/html; charset=ISO-8859-1") from our pages that have them, and I hope this reduces the impact.
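
If those pages still need to declare ISO-8859-1, one option (my assumption, not something the bug report prescribes) is to move the charset out of the markup and into the HTTP response header, for example from a base page (or site-wide via the globalization element in web.config):

'Declare the charset on the Content-Type response header instead of in
'a META tag within the markup
Response.Charset = "ISO-8859-1"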

I am worried that this IS affecting the user experience of our site. More testing required on this one…