Monday, November 16, 2009

WCF middle tier client – ClientBase proxy (svcutil) vs. ChannelFactory

After running with WCF based SOA in our production for a year or so, we have been working on our next major version. We have added some new services lately.

During the load test, we encountered the following error:

System.Net.Sockets.SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted

After some investigation (see Durgaprasad Gorti's WebLog) we understood that as we put a huge load of requests on our middle-tier application (actually an ASP.NET web service), it sends lots of requests to other services in our distributed system. We used TCPView to see what was going on with the ports – and saw that each request opened a new client port – which was never reused until the OS released it 4 minutes later.

We changed the TCPTimedWaitDelay value in the registry to 30 seconds – and it solved the problem.
We decided, however, that this is not a real solution – just a workaround. The real problem is the lack of port reuse – and we need to solve it – as we don’t know when we will hit the wall next time.
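For reference, the registry workaround can be applied with a command like the following (the value is in seconds; note that it only takes effect after a reboot, and you should verify the valid range for your Windows version):

```
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```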

Our code used a client proxy generated by Svcutil.exe, and we created a new client each time a request was needed:

System.ServiceModel.ClientBase<T> clientProxy = Activator.CreateInstance(proxyType, new object[] { endpoint.Binding, endpoint.Address }) as System.ServiceModel.ClientBase<T>;

Now, one of our developers said that in .NET 3.5 (actually .NET 3.0 SP1) Microsoft added some kind of caching which should have solved this issue.

As we were running in a .NET 3.5 SP1 environment – this clearly hadn’t solved the problem. So we went to look for the reason. It didn’t take long to find Wenlong Dong's blog post, “Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices”.

It was very clear from his blog that we failed to benefit from the built-in caching because we used a constructor that takes a Binding as a parameter. Actually, even if we hadn’t used a constructor that disables the caching – it would have been disabled anyway, because we also accessed the ChannelFactory property:

clientProxy.ChannelFactory.Endpoint.Behaviors.Add(epb);

We wanted to fix this by calling a different constructor and by adding the behavior to our configuration, but we couldn’t, as our configuration isn’t based on a config file but is read from a central DB – and there is no constructor that doesn’t read configuration from config files (there should be such a constructor in 4.0 – see this new constructor).

You can work around this in various ways, like this one by Pablo M. Cibraro.
Also read, in great detail, how the client actually works – at What WCF client does.

Anyway, we decided to take a different direction.

If you read Microsoft’s Middle-Tier Client Applications, you’ll see 2 options:

  • Cache the WCF client object and reuse it for subsequent calls where possible.
  • Create a ChannelFactory object and then use that object to create new WCF client channel objects for each call.

Clearly, the first option… is not an option, as it talks about subsequent calls.
We need a robust, multi-threaded middle tier.

So we were left with only one valid solution – to use a ChannelFactory instead of the client proxy, and to cache it (essentially redoing the caching mechanism Microsoft added for ClientBase).
We can succeed in caching where the framework fails, because we allow ourselves to make some assumptions regarding the reuse of the same ChannelFactory.

We use the cached factory (creating one if it doesn’t exist) to return a typed channel by calling its CreateChannel(). This opens a new socket for each request, unless an already-used socket has been closed and is available for reuse (you should, of course, close each created channel by calling its Close() method).
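A minimal sketch of this approach might look like the following. The cache key and the locking strategy are our own illustrative assumptions, not the exact production code:

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Channels;

public static class CachedChannelFactory
{
    // One factory per contract type + endpoint address (hypothetical cache key).
    private static readonly Dictionary<string, ChannelFactory> _cache =
        new Dictionary<string, ChannelFactory>();
    private static readonly object _sync = new object();

    public static TContract CreateChannel<TContract>(Binding binding, EndpointAddress address)
    {
        string key = typeof(TContract).FullName + "|" + address.Uri;
        ChannelFactory factory;
        lock (_sync)
        {
            if (!_cache.TryGetValue(key, out factory))
            {
                factory = new ChannelFactory<TContract>(binding, address);
                // Endpoint behaviors must be added here, once, before the first
                // channel is created - not per call as with ClientBase.
                _cache.Add(key, factory);
            }
        }
        return ((ChannelFactory<TContract>)factory).CreateChannel();
    }
}
```

Each returned channel should still be closed after use – e.g. `((IClientChannel)channel).Close();` on success, or `Abort()` on failure – so its socket becomes available for reuse.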

This is what we wanted.

The load test finished successfully.

Thanks Daniel, Michael & Eyal.

Thursday, October 29, 2009

Convert folder to branch failure - Deleted folder

When you try to convert a folder into a branch in TFS 2010, you may fail if you already have a branch somewhere in its subfolders.


The error is TF203028: You cannot create a branch at $serverPath because a branch already exists at $SubFolderServerPath.

You can fix this by converting this subfolder back to a folder (as you can't have a branch inside a branch).



You might, however, not see any such branch folder.
In this case - just set Visual Studio to show deleted items in Team Explorer.
Then you will be able to convert this subfolder back into a folder - which will fix the problem.

Bypass Gated Check in from TF Checkin Command Line (TFS 2010)

In Team System 2010, Microsoft added a long-wanted feature – Gated Check-in.

This feature lets you promise you won’t break the build with your check-in, by converting the check-in into a shelveset – running a build with your changes, and only if the build succeeds – checking in the code on your behalf.

It gives you, of course, the option to bypass this behavior (if you have the required permission).

My problem started with the fact that during the build process itself – we perform a check-in…
So this check-in failed with the following message (and exited with code 1):

Your check-in has been placed into shelveset Gated_XXXXXXX;Domain\User and submitted for validation by build definition \Project-Name\Build-Definition.

I looked on MSDN for a TF Checkin flag that would replace the GUI option to bypass the gated build:

[Screenshot: MSDN documentation for tf checkin]

But as you can see, no new option was added.

Luckily, I also ran the TF checkin /? command:

[Screenshot: tf checkin /? output]

And as you can see, there are a few new options here:

  1. shelveset
  2. bypass
  3. login

These 3 new options were added to support the “Gated Check-in” feature.
I was actually looking for the /bypass option – which, what a surprise, bypasses this feature.
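So a build-time check-in like ours can be sketched as follows (the comment text and item path are placeholders, not our actual values):

```
tf checkin /bypass /noprompt /comment:"Check-in performed by the build itself" $/Project-Name/Some/Path
```

With /bypass, the check-in goes straight into version control instead of being shelved and queued for a validation build.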

I hope that this MSDN document will be updated soon.

Monday, October 26, 2009

Upgrade from TFS 2008 to TFS 2010 Beta 2 – long wait (literally)

Yes! We have finally got the TFS 2010 Beta 2!!!

I have restored our production Team Foundation Server Databases into a clean computer.
The setup was very easy – just selecting Upgrade flow, and filling some details.

Then it arrived to the most critical part – upgrading the DBs themselves.

It took almost 6 hours…

[Screenshot: TFS upgrade progress]

but… it finished successfully!

[Screenshot: TFS upgrade completed successfully]

So the proof of concept for upgrading has proven the concept!

Now, of course, we still need to see how to upgrade our customized CMMI process – to get all the new stuff Microsoft put in the new one. I guess we will use Allen Clark’s Enabling New Features of Visual Studio Team System 2010 Beta 1 in Upgraded Team Projects.

Friday, May 01, 2009

Real Incremental Build - Part 4 – Compare Assemblies using ILDASM

Part 1 – for motivation.
Part 2 – Plan for getting only new/updated files.
Part 3 – Out of the box Incremental build with Team Build.

In the last part we succeeded in getting only new/updated files – except for binaries which were regenerated even though no actual change was made to their code, because some reference in their dependency tree had changed.

We want only real updated assemblies.

Again, the trivial solution doesn’t work:

If you try to compare each assembly with its equivalent in the original build you’ll find that each time you build a project, the assembly is different! (even if no code was changed)

Here comes ILDASM (MSIL Disassembler) to the rescue.

This tool takes an assembly (either DLL or EXE) and can create a text file containing the full structure and actual IL code of the file.

If you compare 2 disassembled files which should be identical, you’ll probably see differences in the following lines:

  1. Time-date stamp
    This parameter is regenerated on each build – so it will be different every time.
  2. MVID
    This is a generated GUID – regenerated each time.
  3. .ver
    Lines with .ver are declarations of the specific versions of referenced assemblies.
    These lines change if a referenced assembly has changed its version.
    This happens due to the use of * in the AssemblyVersion attribute in AssemblyInfo.cs.
    You need to decide whether this is a breaking change from your point of view – or not.
    In my case – we don’t work with signed DLLs, and I don’t mind what the specific version is.
  4. PrivateImplementationDetails
    I don’t know for sure what this means, but from my experience it changes without any real change having happened.
  5. WARNING:
    These lines happen to change as well.

So here is the actual process:

  1. Iterate through all the remaining files.
  2. If the file’s extension is either .dll or .exe – run ILDASM on the file. These are the parameters I used, though to tell the truth – I haven’t investigated them much:

    ildasm.exe assemblyFileName /OUT=out.txt /ALL /RAWEH /SOURCE /LINENUM /CAVERBAL /NOBAR /UTF8 /TYPELIST

    Run it a second time for the original assembly file too.
  3. Compare the 2 output files (I just iterate through the lines in both files) and ignore the differences described above (Time-date stamp, MVID, etc.).

    Whenever I met a line containing “WARNING:”, I would jump to the next line which doesn’t contain it.
    This is because I have seen cases where only one of the assemblies contained such a line, which will break your comparison if you don’t jump to the next valid line.
  4. If the files are equal – delete the assembly from the drop location.
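The comparison step can be sketched roughly as follows. Simply dropping the known-noisy lines from both files before comparing is a simplification of the line-skipping logic described above; the marker strings are assumptions based on the list of differences in this post:

```csharp
using System;
using System.IO;
using System.Linq;

public static class AssemblyComparer
{
    // Markers for lines that differ even when the code is unchanged
    // (see the list of ILDASM output differences above).
    private static readonly string[] IgnoredMarkers =
        { "Time-date stamp", "MVID", ".ver", "PrivateImplementationDetails", "WARNING:" };

    public static bool AreEquivalent(string disasmFileA, string disasmFileB)
    {
        return Filter(disasmFileA).SequenceEqual(Filter(disasmFileB));
    }

    // Keep only lines that carry meaningful IL, dropping the known-noisy ones.
    private static string[] Filter(string path)
    {
        return File.ReadAllLines(path)
                   .Where(line => !IgnoredMarkers.Any(marker => line.Contains(marker)))
                   .ToArray();
    }
}
```

If AreEquivalent returns true for the two disassembled outputs, the assembly can be deleted from the drop location.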

The result of this procedure is: Diff package containing only new/updated files

Easy it was, wasn’t it?

Real Incremental Build - Part 3 – Out of the box with Team Build

See Part 1 – for motivation.
See Part 2 – Plan for getting only new/updated files.

Ok, so how do we actually perform our plan?

We will write an application which does the following:

  1. Run Build with the original source:

    See Building a Specific Version with Team Build 2008.
    To put the bottom line here – add the following parameter to the command-line arguments:
    /p:GetVersion=version (where version can be C### – which means – Changeset number ###)

    For doing this programmatically, use IBuildServer.GetBuildDefinition(teamProject, name) and call CreateBuildRequest.

    The following line sets the command line to get the version you want:
    BuildRequest.CommandLineArguments = "/p:GetVersion=" + OriginalVersionSpec;

    Call QueueBuild(BuildRequest) to actually start the build. 
  2. Wait till the build ends:

    You can create a Timer and check the build status each time via QueuedBuild.Status.
    You’ll need to refresh the build object first – using QueuedBuild.Refresh().
    Save the finish time for later use.
  3. Get changes only:

    This is the actual incremental build (all previous steps were preparations).
    You do this the same way as the build before (with the /p:GetVersion).
    But this time, you add another parameter: IncrementalBuild

    The full parameter string will look like this: (see the semicolon separator) 

    "/p:IncrementalBuild=true;GetVersion=" + CurrentVersionSpec
  4. Rebuild:

    Queue this build as well, and again wait for it to finish.
  5. Delete old files:

    Iterate through all files recursively in the drop location of the incremental build.
    Delete all files with LastWriteTime older than the finish time of the original build.
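The steps above can be sketched roughly like this, based on the TFS 2008 build object model. The server URL, polling interval, and helper structure are illustrative assumptions, and the exact Refresh/QueueBuild overloads may vary between TFS versions:

```csharp
using System;
using System.IO;
using System.Threading;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

public class IncrementalBuildDriver
{
    public static void Run(string serverUrl, string teamProject, string definitionName,
                           string originalVersionSpec, string currentVersionSpec)
    {
        TeamFoundationServer tfs = new TeamFoundationServer(serverUrl);
        IBuildServer buildServer = (IBuildServer)tfs.GetService(typeof(IBuildServer));
        IBuildDefinition definition =
            buildServer.GetBuildDefinition(teamProject, definitionName);

        // Steps 1-2: build the original source and remember when it finished.
        IBuildDetail original = QueueAndWait(buildServer, definition,
            "/p:GetVersion=" + originalVersionSpec);
        DateTime originalFinish = original.FinishTime;

        // Steps 3-4: incremental build that gets only the new changes.
        IBuildDetail incremental = QueueAndWait(buildServer, definition,
            "/p:IncrementalBuild=true;GetVersion=" + currentVersionSpec);

        // Step 5: delete every file not rewritten by the incremental build.
        foreach (string file in Directory.GetFiles(
                     incremental.DropLocation, "*", SearchOption.AllDirectories))
        {
            if (File.GetLastWriteTime(file) < originalFinish)
                File.Delete(file);
        }
    }

    private static IBuildDetail QueueAndWait(IBuildServer buildServer,
        IBuildDefinition definition, string commandLineArguments)
    {
        IBuildRequest request = definition.CreateBuildRequest();
        request.CommandLineArguments = commandLineArguments;
        IQueuedBuild queued = buildServer.QueueBuild(request, QueueOptions.None);
        while (queued.Status != QueueStatus.Completed)
        {
            Thread.Sleep(10000); // poll every 10 seconds
            queued.Refresh();    // update the queued build's status
        }
        return queued.Build;
    }
}
```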

So now it seems we have what we wanted.

In the Drop location of the incremental build – only newer files will remain.
All files which weren’t changed (or executables which weren’t regenerated due to no code change) were deleted – leaving only new/updated files.

Or is it?

If you’ll look at the results – you’ll see lots of binaries (DLLs and EXEs) which shouldn’t be there.
No code change was done in their projects.
So why are they here?

Because at least one of their referenced assemblies has changed…

So we succeeded partially:
For all file types except assemblies – we have updated files only.
For assemblies – we have all files which had a change somewhere in their dependency tree.

Back to square one?

Solution in part 4...

Thursday, April 30, 2009

Real Incremental Build - Part 2 – Getting only new/updated files

See part 1 for motivation.

So how do you prepare a package with only the new and updated files?
(whether those are binaries, images, aspx files, resource files, etc.)

The simplistic answer would be to ask the developers which files they touched, and either take those files themselves or their products (e.g. the assembly which was generated from the source file).

However, there could be lots of developers involved, each of whom changed multiple files.
Try to keep track of all those files…

You can use your source control to find all new and changed files, of course, but that still requires someone to sit and analyze which of them can be taken as is, and which of them generate changes in end products.

The knowledge of which file update ends up in a new version of which DLL or EXE sits inside the project files. So you must use this knowledge automatically.

Which means… Build. 

But wouldn’t a build give us the whole package again?
No. At least not if you use Incremental Build.

Remember when you change some code in Visual Studio after you have already compiled a big solution?
The next build finishes faster.
Why? Because it rebuilds only those projects that have been changed.

So let’s see how can we use it:

  1. Build the original source (which is already in use in production).
  2. Write down the build finish time.
  3. Get only the changes from source control (all changes from the last original check-in till the last fix).
  4. Build again.
  5. Get only the new files (newer than the original build’s finish time) – or – delete all older files.

How to do it technically?

See part 3.