.NET (4.0 and up) Enterprise Caching Strategies & Tips

http://blogs.msdn.com/b/paolos/archive/2011/04/05/how-to-use-a-wcf-custom-channel-to-implement-client-side-caching.aspx

With version 6.0 the Enterprise Library Caching Block is obsolete; it has been replaced by MemoryCache or AppFabric.

We frequently get asked about best practices for using Windows Azure Cache/AppFabric Cache. The compilation below is an attempt at putting together an initial list of best practices. I’ll publish an update to this in the future if needed.

I’m breaking down the best practices to follow by various topics.

Using Cache APIs

1.    Have Retry wrappers around cache API calls

Calls into the cache client can occasionally fail for a number of reasons, such as transient network errors, cache servers being unavailable due to maintenance/upgrades, or cache servers being low on memory. In these cases the cache client raises a DataCacheException with an error code that indicates the reason for the failure. There is a good overview of how an application should handle these exceptions on MSDN.

It is a good practice for the application to implement a retry policy. You can implement a custom policy or consider using a framework like the Transient Fault Handling Application Block.
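
For illustration, here is a minimal retry wrapper around a cache read. This is only a sketch: the error codes treated as transient (RetryLater, Timeout) and the linear back-off are assumptions that you should align with the MSDN guidance, or replace entirely with the Transient Fault Handling Application Block.

using System;
using System.Threading;
using Microsoft.ApplicationServer.Caching;

public static class CacheRetry
{
    // Sketch only: the set of transient error codes and the back-off are assumptions.
    public static T Get<T>(DataCache cache, string key, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return (T)cache.Get(key);
            }
            catch (DataCacheException ex)
            {
                bool transient = ex.ErrorCode == DataCacheErrorCode.RetryLater ||
                                 ex.ErrorCode == DataCacheErrorCode.Timeout;
                if (!transient || attempt >= maxAttempts)
                    throw;

                // Simple linear back-off before the next attempt.
                Thread.Sleep(TimeSpan.FromMilliseconds(100 * attempt));
            }
        }
    }
}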

2.    Keep static instances of DataCache/DataCacheFactory

Instances of DataCacheFactory (and hence DataCache instances indirectly) maintain TCP connections to the cache servers. These objects are expensive to create and destroy. In addition, you want to have as few of them as needed to ensure cache servers are not overwhelmed with too many connections from clients.

You can find more details on connection management here. Please note that the ability to share connections across factories is currently available only in the November 2011 release of the Windows Azure SDK (and higher versions). Windows Server AppFabric 1.1 does not have this capability yet.

The overhead of creating new factory instances is lower if connection pooling is enabled. In general, though, it is a good practice to pre-create an instance of DataCacheFactory/DataCache and use it for all subsequent calls to the APIs. Avoid creating an instance of DataCacheFactory/DataCache on each of your request processing paths.
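
A minimal sketch of the pattern, assuming the cache client is configured in app.config/web.config; the class and member names here are illustrative, not from the SDK.

using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheClient
{
    // One lazily created factory and cache shared by the whole process.
    private static readonly Lazy<DataCacheFactory> Factory =
        new Lazy<DataCacheFactory>(() => new DataCacheFactory(), isThreadSafe: true);

    private static readonly Lazy<DataCache> Cache =
        new Lazy<DataCache>(() => Factory.Value.GetDefaultCache(), isThreadSafe: true);

    public static DataCache Default
    {
        get { return Cache.Value; }
    }
}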

3.    WCF Services using Cache Client

It is a common practice for WCF services to use a cache to improve their performance. However, unlike ASP.NET web applications, WCF services are susceptible to I/O-thread starvation issues when making blocking calls (such as cache API calls) that further require I/O threads to receive responses (such as responses from cache servers).

This issue is described in detail in the following KB article. The typical symptom, if you run into this, is that during a sudden burst of load the cache API calls time out. You can confirm whether you are running into this situation by plotting the thread count values against incoming requests/second as shown in the KB article.

4.    If the app is using Lock APIs – handle ObjectLocked and ObjectNotLocked exceptions

If you are using the lock-related APIs, please ensure you are handling exceptions such as ObjectLocked (the object being referred to is currently locked) and ObjectNotLocked (the object being referred to is not locked by any client).

GetAndLock can fail with “<ERRCA0011>:SubStatus<ES0001>:Object being referred to is currently locked, and cannot be accessed until it is unlocked by the locking client. Please retry later.” error if another caller has acquired a lock on the object.

The code should handle this error and implement an appropriate retry policy.

PutAndUnlock can fail with “<ERRCA0012>:SubStatus<ES0001>:Object being referred to is not locked by any client” error.

This typically means that the lock timeout specified when the lock was acquired was not long enough because the application request took longer to process. Hence the lock expired before the call to PutAndUnlock and the cache server returns this error code.

The typical fix here is both to review your request processing time and to set a higher lock timeout when acquiring the lock.
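
Here is a hedged sketch of that pattern: retry when the object is locked by another client, and treat ObjectNotLocked on PutAndUnlock as a sign that the lock timeout was too short. The timeouts and retry count below are placeholders, not recommendations.

using System;
using System.Threading;
using Microsoft.ApplicationServer.Caching;

public static class LockedUpdate
{
    public static void Update(DataCache cache, string key, Func<object, object> update)
    {
        for (int attempt = 0; attempt < 3; attempt++)
        {
            try
            {
                DataCacheLockHandle lockHandle;
                object current = cache.GetAndLock(key, TimeSpan.FromSeconds(30), out lockHandle);
                cache.PutAndUnlock(key, update(current), lockHandle);
                return;
            }
            catch (DataCacheException ex)
            {
                if (ex.ErrorCode == DataCacheErrorCode.ObjectLocked)
                {
                    // Another client holds the lock: back off briefly and retry.
                    Thread.Sleep(200);
                }
                else
                {
                    // ObjectNotLocked here usually means our lock expired before
                    // PutAndUnlock; increase the lock timeout or shorten processing.
                    throw;
                }
            }
        }
        throw new TimeoutException("Could not acquire the cache lock after 3 attempts.");
    }
}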

You can also run into this error when using the session state provider for cache. If you are running into this error from session state provider, the typical solution is to set a higher executionTimeout for your web app.

Session State Provider Usage

You can find more info about the ASP.NET session state providers for AppFabric Cache here and for Azure Cache here.

The session state provider has an option to store the entire session as 1 blob (useBlobMode=”true” which is the default), or to store the session as individual key/value pairs.

useBlobMode=”true” incurs fewer round trips to cache servers and works well for most applications.

If you have a mix of small and large objects in the session, useBlobMode=”false” (a.k.a. granular mode) might work better, since it avoids fetching the entire (large) session object on every request. The cache should also be marked as a non-evictable cache if the useBlobMode=”false” option is being used. Because Windows Azure Shared Cache does not give you the ability to mark a cache as non-evictable, please note that useBlobMode=”true” is the only supported option against Windows Azure Shared Cache.

Performance Tuning and Monitoring

            Tune MaxConnectionsToServer

Connection management between cache clients and servers is described in more detail here. Consider tuning the MaxConnectionsToServer setting. This setting controls the number of connections from a client to the cache servers. (MaxConnectionsToServer * number of DataCacheFactory instances * number of application processes) is a rough value for the number of connections that will be opened to each of the cache servers. So, if you have 2 instances of your web role with 1 cache factory and MaxConnectionsToServer set to 3, there will be 3 * 1 * 2 = 6 connections opened to each of the cache servers.

Setting this to the number of cores (of the application machine) is a good place to start. If you set it too high, a large number of connections can get opened to each of the cache servers and can impact throughput.

If you are using Azure Cache SDK 1.7, maxConnectionsToServer defaults to the number of cores of the application machine. The on-premise AppFabric Cache (v1.0/v1.1) had a default of one, so that value might need to be tuned.
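
For reference, here is a hedged sketch of setting this programmatically through DataCacheFactoryConfiguration (the same setting can also be made in the dataCacheClient configuration section); the host name and port below are placeholders for your cache cluster.

using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheConnectionSetup
{
    public static DataCache CreateCache()
    {
        var config = new DataCacheFactoryConfiguration
        {
            // Start with the number of cores on the application machine and tune from there.
            MaxConnectionsToServer = Environment.ProcessorCount,
            // "cachehost"/22233 are placeholders for your cache cluster endpoint.
            Servers = new[] { new DataCacheServerEndpoint("cachehost", 22233) }
        };

        var factory = new DataCacheFactory(config);
        return factory.GetDefaultCache();
    }
}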

            Adjust Security Settings

The default security setting for on-premise AppFabric Cache is to run with security on, at the EncryptAndSign protection level. If you are running in a trusted environment and don’t need this capability, you can turn it off by explicitly setting security to off.

The security model for Azure Cache is different, and the above adjustment is not needed for Azure Cache.

            Monitoring

There is also a good set of performance counters on the cache servers that you can monitor to get a better understanding of cache performance issues. Some of the counters that are typically useful for troubleshooting include:

1)     %cpu used up by cache service

2)     %time spent in GC by cache service

3)     Total cache misses/sec – A high value here can indicate that your application performance might suffer because it is not able to fetch data from the cache. Possible causes for this include eviction and/or expiry of items from the cache.

4)     Total object count – Gives an idea of how many items are in the cache. A big drop in object count could mean eviction or expiry is taking place.

5)     Total client reqs/sec – This counter is useful in giving an idea of how much load is being generated on the cache servers by the application. A low value here usually means some sort of a bottleneck outside of the cache server (perhaps in the application or network), and hence very little load is being placed on the cache servers.

6)     Total Evicted Objects – If cache servers are constantly evicting items to make room for newer objects in the cache, it is usually a good indication that you will need more memory on the cache servers to hold the dataset for your application.

7)     Total failure exceptions/sec and Total Retry exceptions/sec

Lead host vs Offloading

This applies only to on-premise AppFabric Cache deployments. There is a good discussion of the tradeoffs/options in this blog. As noted in the blog, with v1.1 you can use SQL Server just to store the configuration info and use the lead-host model for the cluster runtime. This option is attractive if setting up a highly available SQL Server for offloading purposes is hard.

Other Links

Here are a set of blogs/articles that provide more info on some of the topics covered above.

1)     Jason Roth and Jaime Alva have written an article providing additional guidance to developers using Windows Azure Caching.

2)     Jaime Alva’s blog post on logging/counters for On-premise appfabric cache.

3)     MSDN article about connection management between cache client & servers.

4)     Amit Yadav and Kalyan Chakravarthy’s blog on lead host vs offloading options for cache clusters.

5)     MSDN article on common cache exceptions and Transient Fault Handling Application Block


How to Use Unity as a Thread-Safe Dependency Container

First, let me define the problem. The Constructor Injection pattern is easy to understand until a follow-up question comes up:

Where should we compose object graphs?

It’s easy to understand that each class should require its dependencies through its constructor, but this pushes the responsibility of composing the classes with their dependencies to a third party. Where should that be?

It seems to me that most people are eager to compose as early as possible, but the correct answer is:

As close as possible to the application’s entry point.

This place is called the Composition Root of the application and defined like this:

A Composition Root is a (preferably) unique location in an application where modules are composed together.

This means that all the application code relies solely on Constructor Injection (or other injection patterns), but is never composed. Only at the entry point of the application is the entire object graph finally composed.

The appropriate entry point depends on the framework:

  • In console applications it’s the Main method
  • In ASP.NET MVC applications it’s global.asax and a custom IControllerFactory
  • In WPF applications it’s the Application.OnStartup method
  • In WCF it’s a custom ServiceHostFactory
  • etc

The Composition Root is an application infrastructure component. Only applications should have Composition Roots. Libraries and frameworks shouldn’t.

The Composition Root can be implemented with Poor Man’s DI, but is also the (only) appropriate place to use a DI Container.

A DI Container should only be referenced from the Composition Root. All other modules should have no reference to the container.

Using a DI Container is often a good choice. In that case it should be applied using the Register Resolve Release pattern entirely from within the Composition Root.
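
To make the Register Resolve Release idea concrete, here is a minimal sketch of a console-application Composition Root using Unity. The IMessageService, SmtpMessageService and Greeter types are illustrative, not part of the article.

using Microsoft.Practices.Unity;

public interface IMessageService { void Send(string text); }

public class SmtpMessageService : IMessageService
{
    public void Send(string text) { /* send the message */ }
}

public class Greeter
{
    private readonly IMessageService _messages;

    // Constructor Injection: the class asks for its dependency, it never news it up.
    public Greeter(IMessageService messages) { _messages = messages; }

    public void SayHello() { _messages.Send("Hello"); }
}

public static class Program
{
    public static void Main()
    {
        // Register
        using (var container = new UnityContainer())
        {
            container.RegisterType<IMessageService, SmtpMessageService>();

            // Resolve: the only place in the application that references the container.
            var greeter = container.Resolve<Greeter>();
            greeter.SayHello();
        } // Release: the container is disposed at application shutdown.
    }
}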

Read more in Dependency Injection in .NET.

Having seen the benefits of a Composition Root over the Service Locator anti-pattern (which entailed a global Unity container), here is the solution I use.

The solution consists of two classes. DependencyFactory is the public wrapper for an internal class, DependencyFactoryInternal. The outer class presents easy-to-use static methods and acquires the right kind of lock (read or write) for whatever operation you’re trying to do.

The first time you access the container, the code will load the type registrations from your .config file. You can also register types programmatically with the static RegisterInstance method:

DependencyFactory.RegisterInstance<IMyType>(new MyConcreteType());

To resolve a type, use the static Resolve method:

var myObject = DependencyFactory.Resolve<IMyType>();

Finally, the static FindRegistrationSingle method exposes an existing ContainerRegistration.
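
For example, assuming the IMyType registration from the snippets above, a lookup might look like this:

var registration = DependencyFactory.FindRegistrationSingle(
    r => r.RegisteredType == typeof(IMyType));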

public sealed class DependencyFactory : IDisposable
    {
        #region Properties
        /// <summary>
        /// Get the singleton Unity container.
        /// </summary>
        public IUnityContainer Container
        {
            get { return DependencyFactoryInternal.Instance.Container; }
        }
        #endregion

        #region Constructors
        /// <summary>
        /// Default constructor. Obtains a write lock on the container since this is the safest policy.
        /// Also uses a relatively long timeout.
        /// </summary>
        public DependencyFactory()
            : this(true, 10000)
        {
        }

        /// <summary>
        /// Construct an object that can access the singleton Unity container until the object is Disposed.
        /// </summary>
        /// <param name="allowContainerModification">True to allow modification of the container.
        /// The caller is responsible for behaving according to this pledge!
        /// The default is to allow modifications, which results in a write lock rather than
        /// the looser read lock. If you're sure you're only going to be reading the container, 
        /// you may want to consider passing a value of false.</param>
        /// <param name="millisecondsTimeout">Numer of milliseconds to wait for access to the container,
        /// or -1 to wait forever.</param>
        internal DependencyFactory(bool allowContainerModification, int millisecondsTimeout = 500)
        {
            if (allowContainerModification)
                DependencyFactoryInternal.Instance.EnterWriteLock(millisecondsTimeout);
            else
                DependencyFactoryInternal.Instance.EnterReadLock(millisecondsTimeout);
        }
        #endregion

        #region Methods
        /// <summary>
        /// Resolves a type in the static Unity container.
        /// This is a convenience method, but has the added benefit of only enforcing a Read lock.
        /// </summary>
        /// <param name="overrides">Overrides to pass to Unity's Resolve method.</param>
        /// <typeparam name="T">The type to resolve.</typeparam>
        /// <returns>A concrete instance of the type T.</returns>
        /// <remarks>
        /// If you already have a DependencyFactory object, call Resolve on its Container property instead.
        /// Otherwise, you'll get an error because you have the same lock on two threads.
        /// </remarks>
        static public T Resolve<T>(params ResolverOverride[] overrides)
        {
            using (var u = new DependencyFactory(false, 10000))
            {
                return u.Container.Resolve<T>(overrides);
            }
        }

        /// <summary>
        /// Convenience method to call RegisterInstance on the container.
        /// Constructs a DependencyFactory that has a write lock.
        /// </summary>
        /// <typeparam name="T">The type of register.</typeparam>
        /// <param name="instance">An object of type T.</param>
        static public void RegisterInstance<T>(T instance)
        {
            using (var u = new DependencyFactory())
            {
                u.Container.RegisterInstance<T>(instance);
            }
        }

        /// <summary>
        /// Find the single registration in the container that matches the predicate.
        /// </summary>
        /// <param name="predicate">A predicate on a ContainerRegistration object.</param>
        /// <returns>The single matching registration. Throws an exception if there is no match,
        /// or if there is more than one.</returns>
        /// <remarks>
        /// Only uses a read lock on the container.
        /// </remarks>
        static public ContainerRegistration FindRegistrationSingle(Func<ContainerRegistration, bool> predicate)
        {
            using (var u = new DependencyFactory(false,10000))
            {
                return u.Container.Registrations.Single(predicate);
            }
        }

        /// <summary>
        /// Acquires a write lock on the Unity container, and then clears it.
        /// </summary>
        static public void Clear()
        {
            using (var u = new DependencyFactory())
            {
                DependencyFactoryInternal.Instance.Clear();
            }
        }
        #endregion

        #region IDisposable Members
        /// <summary>
        /// Dispose the object, releasing the lock on the static Unity container.
        /// </summary>
        public void Dispose()
        {
            if (DependencyFactoryInternal.Instance.IsWriteLockHeld)
                DependencyFactoryInternal.Instance.ExitWriteLock();
            if (DependencyFactoryInternal.Instance.IsReadLockHeld)
                DependencyFactoryInternal.Instance.ExitReadLock();
        }
        #endregion
    }

Internal class

    /// <summary>
    /// Singleton that owns the Unity container and its reader/writer lock.
    /// </summary>
    /// <remarks>
    /// This class is internal; consumers should go through DependencyFactory.
    /// </remarks>
    internal class DependencyFactoryInternal
    {
        #region Fields
        // Lazy-initialized, static instance member.
        private static readonly Lazy<DependencyFactoryInternal> _instance
            = new Lazy<DependencyFactoryInternal>(() => new DependencyFactoryInternal(),
            true /*thread-safe*/ );

        private ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
        #endregion

        #region Properties
        /// <summary>
        /// Get the single instance of the class.
        /// </summary>
        public static DependencyFactoryInternal Instance
        {
            get { return _instance.Value; }
        }

        IUnityContainer _container = null;
        /// <summary>
        /// Get the instance of the Unity container.
        /// </summary>
        public IUnityContainer Container
        {
            get
            {
                try
                {
                    return _container ?? (_container = new UnityContainer().LoadConfiguration());
                }
                catch (Exception ex)
                {
                    throw new InvalidOperationException("Could not load the Unity configuration.", ex);
                }
            }
        }

        /// <summary>
        /// Tells whether the underlying container is null. Intended only for unit testing. 
        /// (Note that this property is internal, and thus available only to this assembly and the
        /// unit-test assembly.)
        /// </summary>
        internal bool ContainerIsNull 
        { 
            get { return _container == null; } 
        }

        /// <summary>
        /// Tell whether the calling thread has a read lock on the singleton.
        /// </summary>
        internal bool IsReadLockHeld { get { return _lock.IsReadLockHeld; } }

        /// <summary>
        /// Tell whether the calling thread has a write lock on the singleton.
        /// </summary>
        internal bool IsWriteLockHeld { get { return _lock.IsWriteLockHeld; } }
        #endregion

        #region Constructor
        /// <summary>
        /// Private constructor. 
        /// Makes it impossible to use this class except through the static Instance property.
        /// </summary>
        private DependencyFactoryInternal()
        {
        }
        #endregion

        #region Methods
        /// <summary>
        /// Enter a read lock on the singleton.
        /// </summary>
        /// <param name="millisecondsTimeout">How long to wait for the lock, or -1 to wait forever.</param>
        public void EnterReadLock(int millisecondsTimeout)
        {
            if (!_lock.TryEnterReadLock(millisecondsTimeout))
                throw new TimeoutException(Properties.Resources.TimeoutEnteringReadLock);
        }

        /// <summary>
        /// Enter a write lock on the singleton.
        /// </summary>
        /// <param name="millisecondsTimeout">How long to wait for the lock, or -1 to wait forever.</param>
        public void EnterWriteLock(int millisecondsTimeout)
        {
            if (!_lock.TryEnterWriteLock(millisecondsTimeout))
                throw new TimeoutException(Properties.Resources.TimeoutEnteringWriteLock);
        }

        /// <summary>
        /// Exit the lock obtained with EnterReadLock.
        /// </summary>
        public void ExitReadLock()
        {
            _lock.ExitReadLock();
        }

        /// <summary>
        /// Exit the lock obtained with EnterWriteLock.
        /// </summary>
        public void ExitWriteLock()
        {
            _lock.ExitWriteLock();
        }

        /// <summary>
        /// Clear the Unity container and the lock so we can restart building the dependency injections.
        /// </summary>
        /// <remarks>
        /// Intended for unit testing.
        /// </remarks>
        public void Clear()
        {
            if (_container != null)
                _container.Dispose();
            _container = null;
            while (_lock.IsWriteLockHeld)
                _lock.ExitWriteLock();
            while (_lock.IsReadLockHeld)
                _lock.ExitReadLock();
        }
        #endregion
    }

Entity Framework 6 is Feature Complete, Including Async; RTM Later This Year

I have tried to put together some information related to the release of the RC of EF6.

 

What is New in EF6

This is the complete list of new features in EF6.

Tooling

Our focus with the tooling has been on adding EF6 support and enabling us to easily ship out-of-band between releases of Visual Studio.

The tooling itself does not include any new features, but most of the new runtime features can be used with models created in the EF Designer.

Runtime

The following features work for models created with Code First or the EF Designer:

  • Async Query and Save adds support for the task-based asynchronous patterns that were introduced in .NET 4.5. We’ve created a walkthrough and a feature specification for this feature (a short sketch follows this list).
  • Connection Resiliency enables automatic recovery from transient connection failures. The feature specification shows how to enable this feature and how to create your own retry policies.
  • Code-Based Configuration gives you the option of performing configuration – that was traditionally performed in a config file – in code. We’ve created an overview with some examples and a feature specification.
  • Dependency Resolution introduces support for the Service Locator pattern and we’ve factored out some pieces of functionality that can be replaced with custom implementations. We’ve created a feature specification and a list of services that can be injected.
  • Interception/SQL logging provides low-level building blocks for interception of EF operations with simple SQL logging built on top. We’ve created a feature specification for this feature and Arthur Vickers has created a multi-part blog series covering this feature.
  • Testability improvements make it easier to create test doubles for DbContext and DbSet. We’ve created walkthroughs showing how to take advantage of these changes using a mocking framework or writing your own test doubles.
  • Enums, Spatial and Better Performance on .NET 4.0 – By moving the core components that used to be in the .NET Framework into the EF NuGet package we are now able to offer enum support, spatial data types and the performance improvements from EF5 on .NET 4.0.
  • DbContext can now be created with a DbConnection that is already opened which enables scenarios where it would be helpful if the connection could be open when creating the context (such as sharing a connection between components where you can not guarantee the state of the connection).
  • Default transaction isolation level is changed to READ_COMMITTED_SNAPSHOT for databases created using Code First, potentially allowing for more scalability and fewer deadlocks.
  • DbContext.Database.UseTransaction and DbContext.Database.BeginTransaction are new APIs that enable scenarios where you need to manage your own transactions.
  • Improved performance of Enumerable.Contains in LINQ queries.
  • Significantly improved warm up time (view generation) – especially for large models – as the result of contributions from AlirezaHaghshenas and VSavenkov.
  • Pluggable Pluralization & Singularization Service was contributed by UnaiZorrilla.
  • Improved Transaction Support updates the Entity Framework to provide support for a transaction external to the framework as well as improved ways of creating a transaction within the Framework. See this feature specification for details.
  • Entity and complex types can now be nested inside classes.
  • Custom implementations of Equals or GetHashCode on entity classes are now supported. See the feature specification for more details.
  • DbSet.AddRange/RemoveRange were contributed by UnaiZorrilla and provides an optimized way to add or remove multiple entities from a set.
  • DbChangeTracker.HasChanges was contributed by UnaiZorrilla and provides an easy and efficient way to see if there are any pending changes to be saved to the database.
  • SqlCeFunctions was contributed by ErikEJ and provides a SQL Compact equivalent to the SqlFunctions.
  • Extensive API changes as a result of polishing the design and implementation of new features. In particular, there have been significant changes in Custom Code First Conventions and Code-Based Configuration. We’ve updated the feature specs and walkthroughs to reflect these changes.
  • EF Designer now supports EF6 in projects targeting .NET Framework 4. This limitation from EF6 Beta 1 has now been removed.
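
As a quick illustration of the async support mentioned above, here is a minimal sketch using EF6's async query and save APIs. The Blog and BlogContext types are illustrative and not from the EF documentation.

using System.Data.Entity;
using System.Linq;
using System.Threading.Tasks;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public static class AsyncSample
{
    public static async Task RenameFirstBlogAsync()
    {
        using (var db = new BlogContext())
        {
            // FirstOrDefaultAsync and SaveChangesAsync are the new task-based APIs in EF6.
            var blog = await db.Blogs.OrderBy(b => b.Id).FirstOrDefaultAsync();
            if (blog != null)
            {
                blog.Name = "Renamed";
                await db.SaveChangesAsync();
            }
        }
    }
}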

The following features apply to Code First only:

  • Custom Code First Conventions allow you to write your own conventions to help avoid repetitive configuration. We provide a simple API for lightweight conventions as well as some more complex building blocks to allow you to author more complicated conventions. We’ve created a walkthrough and a feature specification for this feature (see the sketch after this list).
  • Code First Mapping to Insert/Update/Delete Stored Procedures is now supported. We’ve created a feature specification for this feature.
  • Idempotent migrations scripts allow you to generate a SQL script that can upgrade a database at any version up to the latest version. The generated script includes logic to check the __MigrationsHistory table and only apply changes that haven’t been previously applied. Use the following command to generate an idempotent script.
    Update-Database -Script -SourceMigration $InitialDatabase
  • Configurable Migrations History Table allows you to customize the definition of the migrations history table. This is particularly useful for database providers that require the appropriate data types etc. to be specified for the Migrations History table to work correctly. We’ve created a feature specification for this feature.
  • Multiple Contexts per Database removes the previous limitation of one Code First model per database when using Migrations or when Code First automatically created the database for you. We’ve created a feature specification for this feature.
  • DbModelBuilder.HasDefaultSchema is a new Code First API that allows the default database schema for a Code First model to be configured in one place. Previously the Code First default schema was hard-coded to “dbo” and the only way to configure the schema to which a table belonged was via the ToTable API.
  • DbModelBuilder.Configurations.AddFromAssembly method  was contributed by UnaiZorrilla. If you are using configuration classes with the Code First Fluent API, this method allows you to easily add all configuration classes defined in an assembly. 
  • Custom Migrations Operations were enabled by a contribution from iceclow and this blog post provides an example of using this new feature.
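
To make the lightweight custom conventions concrete, here is a minimal sketch applied in OnModelCreating. SchoolContext is an illustrative context name and the 200-character limit is an arbitrary example, not a recommendation.

using System.Data.Entity;

public class SchoolContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Lightweight convention: every string property gets a 200-character limit
        // unless it is configured otherwise elsewhere.
        modelBuilder.Properties<string>()
                    .Configure(p => p.HasMaxLength(200));
    }
}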

How to Diagnose .NET Memory Issues in Production using Visual Studio

One of the issues that frequently affects .NET applications running in production environments is problems with their memory use which can impact both the application and potentially the entire machine. To help with this, we’ve introduced a feature in Visual Studio 2013 to help you understand the .NET memory use of your applications from .dmp files collected on production machines. In this post, I’ll first discuss common types of memory problems and why they matter. I’ll then walk through how to collect data, and finally describe how to use the new functionality to solve memory problems in your applications.

Why worry about memory problems

.NET is a garbage collected runtime, which means most of the time the framework’s garbage collector takes care of cleaning up memory and the user never notices any impact. However, when an application has a problem with its memory this can have a negative impact on both the application and the machine.

  1. Memory leaks are places in an application where objects are meant to be temporary, but instead are left permanently in memory. In a garbage collected runtime like .NET, developers do not need to explicitly free memory like they need to do in a runtime like C++. However, the garbage collector can only free memory that is no longer being used, which it determines based on whether the object is reachable (referenced) by other objects that are still active in memory. So a memory leak occurs in .NET when an object is still reachable from the roots of the application but should not be (e.g. a static event handler references an object that should be collected). When memory leaks occur, memory usually increases slowly over time until the application starts to exhibit poor performance. Physical resource leaks are a subcategory of memory leaks where a physical resource, such as a file or OS handle, is accidentally left open or retained. This can lead to errors later in execution as well as increased memory consumption.
  2. Inefficient memory use is when an application is using significantly more memory than intended at any given point in time, but the memory consumption is not the result of a leak. An example of inefficient memory use in a web application is querying a database and bringing back significantly more results than are needed by the application.
  3. Unnecessary allocations. In .NET, allocation is often quite fast, but the overall cost can be deceptive because the garbage collector (GC) needs to clean it up later. The more memory that gets allocated, the more frequently the GC will need to run. These GC costs are often negligible to the program’s performance, but for certain kinds of apps these costs can add up quickly and make a noticeable impact on the performance of the app.

If an application suffers from a memory problem, there are three common symptoms that may affect end users.

  1. The application can crash with an “Out of Memory” exception. This is a relatively common problem for 32-bit applications because they are limited to only 4 GB of total virtual address space. It is however less common for 64-bit applications because they are given much higher virtual address space limits by the operating system.
  2. The application will begin to exhibit poor performance. This can occur because the garbage collector is running frequently and competing for CPU resources with the application, or because the application constantly needs to move memory between RAM (physical memory) and the hard drive (virtual memory); which is called paging.
  3. Other applications running on the same machine exhibit poor performance. Because the CPU and physical memory are both system resources, if an application is consuming a large amount of these resources, other applications are left with insufficient amounts and will exhibit negative performance.

In this post I’ll be covering a new feature added to Visual Studio 2013 intended to help identify memory leaks and inefficient memory use (the first two problem types discussed above). If you are interested in tools to help identify problems related to unnecessary allocations, see .NET Memory Allocation Profiling with Visual Studio 2012.

Collecting the data

To understand how the new .NET memory feature for .dmp files helps us to find and fix memory problems let’s walk through an example. For this purpose, I have introduced a memory leak when loading the Home page of a default MVC application created with Visual Studio 2013 (click here to download). However to simulate how a normal memory leak investigation works, we’ll use the tool to identify the problem before we discuss the problematic source code.

The first thing I am going to do is launch the application without debugging to start it in IIS Express. Next I am going to open Windows Performance Monitor to track the memory usage during my testing of the application. Then I’ll add the “.NET CLR Memory -> # Bytes in all Heaps” counter, which will show me how much memory I’m using in the .NET runtime (which I can see is ~3.5 MB at this point). You may use alternate or additional tools in your environment to detect when memory problems occur; I’m simply using Performance Monitor as an example. The important point is that a memory problem is detected that you need to investigate further.


The next thing I’m going to do is refresh the home page five times to exercise the page load logic. After doing this I can see that my memory has increased from ~3.5 MB to ~13 MB so this seems to indicate that I may have a problem with my application’s memory since I would not expect multiple page loads by the same user to result in a significant increase in memory.


For this example I’m going to capture a dump of iisexpress.exe using ProcDump, and name it “iisexpress1.dmp” (notice I need to use the -ma flag to capture the process memory, otherwise I won’t be able to analyze the memory). You can read about alternate tools for capturing dumps in what is a dump and how do I create one?


Now that I’ve collected a baseline snapshot, I’m going to refresh the page an additional 10 times. After the additional refreshes I can see that my memory use has increased to ~21 MB. So I am going to use procdump.exe again to capture a second dump I’ll call “iisexpress2.dmp”


Now that we’ve collected the dump files, we’re ready to use Visual Studio to identify the problem.

Analyzing the dump files

The first thing we need to do to begin analysis is open a dump file. In this case I’m going to choose the most recent dump file, “iisexpress2.dmp”.

Once the file is open, I’m presented with the dump file summary page in Visual Studio, which gives me information such as when the dump was created, the architecture of the process, the version of Windows, and what version of the .NET runtime (CLR version) the process was running. To begin analyzing the managed memory, click “Debug Managed Memory” in the “Actions” box in the top right.


This will begin analysis


Once analysis completes I am presented with Visual Studio 2013’s brand new managed memory analysis view. The window contains two panes: the top pane contains a list of the objects in the heap grouped by their type name, with columns that show their count and total size. When a type or instance is selected in the top pane, the bottom pane shows the objects that reference that type or instance, which prevent it from being garbage collected.


[Note: At this point Visual Studio is in debug mode since we are actually debugging the dump file, so I have closed the default debug windows (watch, call stack, etc.) in the screenshot above.]

Thinking back to the test scenario I was running there are two issues I want to investigate. First, 16 page loads increased my memory by ~18 MB which appears to be an inefficient use of memory since each page load should not use over 1 MB. Second, as a single user I’m requesting the same page multiple times, which I expect to have a minimal effect on the process memory, however the memory is increasing with every page load.

Improving the memory efficiency

First, I want to see if I can make page loading more memory efficient, so I’ll start looking at the objects that are using the most memory in the type summary (top pane) of the memory analysis window.

Here I see that Byte[] is the type that is using the most memory, so I’ll expand the System.Byte[] line to see the 10 largest Byte[]’s in memory. I see that this and all of the largest Byte[]’s are ~1 MB each, which seems large, so I want to determine what is using these large Byte[]’s. Clicking on the first instance shows me it is being referenced by a SampleLeak.Models.User object (as are all of the largest Byte[]’s if I work my way down the list).


At this point I need to go to my application’s source code to see what User is using the Byte[] for. Navigating to the definition of User in the sample project I can see that I have a BinaryData member that is of type byte[]. It turns out when I’m retrieving my user from the database I’m populating this field even though I am not using this data as part of the page load logic.

public class User : IUser
{
    [Key]
    public string Id { get; set; }

    public string UserName { get; set; }

    public byte[] BinaryData { get; set; }
}

Which is populated by the query

User user = MockDatabase.SelectOrCreateUser(
    "select * from Users where Id = @p1",
    userID);

In order to fix this, I need to modify my query to only retrieve the Id and UserName when I’m loading a page, I’ll retrieve the binary data later only if and when I need it.

User user = MockDatabase.SelectOrCreateUser(
    "select Id, UserName from Users where Id = @p1",
    userID);

Finding the memory leak

The second problem I want to investigate is the continual growth of the memory that is indicating a leak. The ability to see what has changed over time is a very powerful way to find leaks, so I am going to compare the current dump to the first one I took. To do this, I expand the “Select Baseline” dropdown, and choose “Browse…” This allows me to select “iisexpress1.dmp” as my baseline.


Once the baseline finishes analyzing, I have two additional columns, “Count Diff” and “Total Size Diff”, that show me the change between the baseline and the current dump. Since I see a lot of system objects I don’t control in the list, I’ll use the Search box to find all objects in my application’s top-level namespace, “SampleLeak”. After I search, I see that SampleLeak.Models.User has increased the most in both size and count (there are an additional 10 objects compared to the baseline). This is a good indication that User may be leaking.


The next thing to do is determine why the User objects are not being collected. To do this, I select the SampleLeak.Models.User row in the top table. This will then show me the reference graph for all User objects in the bottom pane. Here I can see that SampleLeak.Models.User[] has added an additional 10 references to User objects (notice the reference count diff matches the count diff of User).


Since I don’t remember explicitly creating a User[] in my code, I’ll expand the reference graph back to the root to figure out what is referencing the User[]. Once I’ve finished the expansion, I can see that the User[] is part of a List<User> which is directly referenced by a root (the type of root is displayed in []’s next to the root type; in this case System.Object[] is a pinned handle).


What I need to do next is determine where in my application I have a List<> that User objects are being added to. To do this, I search for List<User> in my sample application using Edit -> Find -> Quick Find (Ctrl + F). This takes me to the UserRepository class I added to the application.

public static class UserRepository
{
    // Store a local copy of recent users in memory to prevent extra database queries
    static private List<User> m_userCache = new List<User>();
    public static List<User> UserCache { get { return m_userCache; } }

    public static User GetUser(string userID)
    {
        // Retrieve the user's database record
        User user = MockDatabase.SelectOrCreateUser(
            "select Id, UserName from Users where Id = @p1",
            userID);

        // Add the user to the cache before returning
        m_userCache.Add(user);
        return user;
    }
}

Note: at this point, determining the right fix usually requires an understanding of how the application works. In the case of my sample application, when a user loads the Home page, the page’s controller queries the UserRepository for the user’s database record. If the user does not have an existing record, a new one is created and returned to the controller. In my UserRepository I have created a static List<User> that I’m using as a cache to keep local copies so I don’t always need to query the database. However, statics are automatically rooted, which is why the List<User> shows as directly referenced by a root rather than by UserRepository.

Coming back to the investigation, a review of the logic in my GetUser() method reveals that the problem is I’m not checking the cache before querying the database, so on every page load I’m creating a new User object and adding it to the cache. To fix this problem I need to check the cache before querying the database.

public static User GetUser(string userID)
{
    // Check to see if the user is in the local cache
    var cachedUser = from user in m_userCache
                     where user.Id == userID
                     select user;

    if (cachedUser.Count() > 0)
    {
        return cachedUser.FirstOrDefault();
    }
    else
    {
        // User is not in the local cache, retrieve the user from the database
        User user = MockDatabase.SelectOrCreateUser(
            "select Id, UserName from Users where Id = @p1",
            userID);

        // Add the user to the cache before returning
        m_userCache.Add(user);

        return user;
    }
}

Validating the fix

Once I make these changes I want to verify that I have correctly fixed the problem. In order to do this, I’ll launch the modified application again and after 20 page refreshes, Performance Monitor shows me only a minimal increase in memory (some variation is to be expected as garbage builds up until it is collected).

Just to definitively validate the fixes, I’ll capture one more dump; a look at it shows me that Byte[] is no longer the object type taking up the most memory. When I do expand Byte[] I can see that the largest instance is much smaller than the previous 1 MB instances, and it is not being referenced by User. Searching for User shows me one instance in memory rather than 20, so I am confident I have fixed both of these issues.


How To Use TFS 2013 with SharePoint 2013 SP1 and SQL Server 2012 SP1 on Windows Server 2012 R2

In new deployment scenarios you will need TFS 2013 or 2012 on a Windows Server 2012 R2 machine. That OS will never support SharePoint 2010, so we need SharePoint 2013 SP1, which does support Windows Server 2012 R2.

Run all Windows Updates before installing SharePoint 2013, and get the CU updates for SQL Server 2012 SP1 and SharePoint 2013 SP1.

The box in this example already has TFS 2013 on Windows Server 2012 R2. Installing updates is the key step that will prevent tantrums from SharePoint: always install the required updates, and ideally the optional ones as well.

Installation of SharePoint 2013 with SP1

The SharePoint team has really slicked up the installation process for SharePoint.

Use the auto-run that comes from running the DVD directly, or you can just run the “prerequisiteinstaller” from the root first.


When the prerequisites are complete you can start the installation proper and enter your key. If you get this wrong you will next be completing an uninstall so you can pick the right option. Avoid the Express option at all costs; in this case we already have Team Foundation Server 2013 SP1 installed and already have SQL Server 2012 SP2 installed.

The configuration wizard will lead you through the custom config, but if you are running on a local computer with no domain, like me, then you will have to run a command line to generate the configuration database before you proceed.

Well, do not despair, because PowerShell, as always, is your friend. Just start the SharePoint 2013 Management PowerShell console and use the cmdlet:

New-SPConfigurationDatabase


Now that we have a farm, we can complete the configuration. Just work through the wizard, although you are on your own if you select Kerberos for single sign-on.

SharePoint 2013 SP1 will then run through its configuration steps and give you a functional, but empty, SharePoint environment. At the end you get a simple Finish button and some instructions that you need to follow to get your site to render in a browser.

Info: SharePoint 2013 now works in Chrome and other non-Microsoft browsers…

Now you get almost 25 services that you can choose to install or not. If you leave them all ticked you will get about 10-12 new databases in SQL Server; it is too hard to figure out what the dependencies are and what you actually need.

If the verification of the SharePoint configuration passes, then the configuration should work.

Configuring and extending SharePoint can take a while and will add the TFS solutions into SharePoint. A site template will be added, but as we want the nice new SharePoint 2013 SP1 interface we will need to create the site manually.

Configuration completed successfully

Now that the SharePoint bits have been set up, we will have a default link configured between SharePoint and Team Foundation Server. If we had a separate Team Foundation Server instance, we would need to tell it where the TFS server is.

Info: You have to install the Extensions for SharePoint Products on every front-end server in your SharePoint farm.

SharePoint 2013 Web Applications Configuration in Team Foundation Server 2013

Now that we have installed and configured the bits for SharePoint, as well as telling it where the TFS server is, we need to tell TFS where to go.

There is no account listed as an administrator! Since I am using the Builtin\Administrator user as both the TFS Service Account and the SharePoint Farm Admin, you don’t need one.

Site Configuration Collections

In order to have different sites for different collections, and to enable the ability to have the same Team Project name in multiple collections, you need to create a root collection site under the main site. Some folks like to create this at the ^/sites/[collection] level, but I always create the collection site as a sub-site of the root. This has the benefit of creating automatic navigation between the sites…

This is the final test: when you click OK the Admin Console will go off and try to hook into, or create, a site for us. If you do want a greater degree of separation between the sites and have them in different site collections, you can indeed do that as well. You may want to do that if you are planning to separate collections across multiple environments, but I can think of very few reasons that you would want to do that.

Using the new Team Project Site

If we create a new team project, the template from the Process Template that you selected will be used to create the new site. These templates are designed to work in any version of SharePoint, but they may not look great.

This team project was created before there was ever a SharePoint server, so it has no portal. Let’s go ahead and create one manually.

They have moved things around a little in SharePoint, and we now create new sub-sites from the “View Content” menu.

This, while much more hidden, is really not something you do every day. You are much more likely to be adding apps to an existing site, so having this a few more clicks away is not a big deal.

When we create the new site we have two options. We can create it using the provided “Team Foundation Project Portal” template, but it results in a slightly ugly site, or we can use the default “Team Site” template to get a more native 2013 feel.

This is due to the features not yet being enabled… so head on over to “cog | Site Settings | Site Actions | Manage site features” to enable them.

You can enable one of:

  • Agile Dashboards with Excel Reporting – for the MSF for Agile Software Development 6.x Process Template
  • CMMI Dashboards with Excel Reports – for the MSF for CMMI Software Development 6.x Process Template
  • Scrum Dashboards with Reporting – for the Visual Studio Scrum 2.x Process Template (recommended)

The one you pick depends on the Process Template that you used to create the Team Project. I will activate the Scrum one, as I used the Visual Studio Scrum 2.0 Process Template, which I heartily recommend. You will have noticed that there are 2 or 3 for each of the “Agile | CMMI | Scrum” monikers, and this is due to the different capabilities that you might have. For example:

  • Agile Dashboards – I have TFS with no Reporting Services or Analysis Services
  • Agile Dashboards with Basic Reporting – I have Reporting Services and Analysis Services but not SharePoint Enterprise
  • Agile Dashboards with Excel Reporting – I have Everything! Reporting Services, Analysis Services and SharePoint Enterprise

If you enable the highest level of the one you want, it will figure out which one you can actually run; in this case I can do “Scrum Dashboards with Reporting”.


The Scrum template does not have any built-in Excel reports, but it does have Reporting Services reports. Now when I return to the homepage I get the same or similar portal you would have seen in older versions with SharePoint 2010.

Conclusion

Team Foundation Server 2013 & 2012 work with SharePoint 2013 SP1 on Windows Server 2012 R2, and we have manually created our Team Project Portal site.

Effective Unit Testing with Microsoft Fakes, Now Free for VS Premium, & eBook

 

An initiative of the ALM Rangers was to create some value-add guidance on Microsoft Fakes for the ALM community. Last week the eBook Better Unit Testing with Microsoft Fakes was shipped.

When an ALM Rangers project gets signoff it needs to pass a final hurdle before getting the green-light to proceed. This involves presenting the project’s Vision, Epics, Delivery Artifacts, Milestones etc… and then hoping that fellow ALM Rangers will offer to join the project.

It was delivered as the ALM Rangers’ first eBook, on a subject that just got a much broader audience now that Microsoft Fakes will be included in Visual Studio Premium with Update 2.

All the information about Update 2 has been covered by Brian Harry here:

http://blogs.msdn.com/b/bharry/archive/2013/01/30/announcing-visual-studio-2012-update-2-vs2012-2.aspx.

You can also find other resources on Fakes here, some of them in German.

The book can be downloaded here: download the guide and see what Microsoft Fakes is all about.

What you will find in V1 is briefly outlined here:

Foreword

Introduction

Chapter 1: A Brief Theory of Unit Testing

  • Software testing

  • The fine line between good and flawed unit testing

Chapter 2: Introducing Microsoft Fakes

  • Stubs
  • Shims
  • Choosing between a stub and a shim

Chapter 3: Migrating to Microsoft Fakes

  • Migrating from Moles to Microsoft Fakes
  • Migrating from commercial and open source frameworks

Chapter 4: Miscellaneous Topics

  • Targeting Microsoft .NET Framework 4
  • Adopting Microsoft Fakes in a team
  • You can’t Fake everything!
  • Verbose logging
  • Working with strong named assemblies
  • Optimizing the generation of Fakes
  • Looking under the covers
  • Refactoring code under test
  • Removing Fakes from a project
  • Using Fakes with Team Foundation Version Control
  • Using Microsoft Fakes with ASP.NET MVC

Chapter 5: Advanced Techniques

  • Dealing with Windows Communication Foundation (WCF) service boundaries
  • Dealing with non-deterministic calculations
  • Gathering use-case and other analytical information
  • Analyzing internal state
  • Avoiding duplication of testing structures

Chapter 6: Hands-on Lab

  • Exercise 1: Using Stubs to isolate database access (20 – 30 min)
  • Exercise 2: Using Shims to isolate from file system and date (20 – 30 min)
  • Exercise 3: Using Microsoft Fakes with SharePoint (20 – 30 min)
  • Exercise 4: Bringing a complex codebase under test (20 – 30 min)

In Conclusion

Appendix

How to create C++ Apps with XAML

This blog post will help explain how XAML and C++ work together in the build system to make a Windows Store application that still respects the C++ language build model and syntax.  (Note: this blog post is targeted towards Windows Store app developers.)

The Build Process

From a user-facing standpoint, Pages and other custom controls are really a trio of user-editable files.  For example, the definition of the class MainPage is comprised of three files: MainPage.xaml, MainPage.xaml.h, and MainPage.xaml.cpp.  Both mainpage.xaml and mainpage.xaml.h contribute to the actual definition of the MainPage class, while MainPage.xaml.cpp provides the method implementations for those methods defined in MainPage.xaml.h.  However, how this actually works in practice is far more complex.

This drawing is very complex, so please bear with me while I break it down into its constituent pieces.

Every box in the diagram represents a file.  The light-blue files on the left side of the diagram are the files which the user edits.  These are the only files that typically show up in the Solution Explorer.  I’ll speak specifically about MainPage.xaml and its associated files, but this same process occurs for all xaml/h/cpp trios in the project.

The first step in the build is XAML compilation, which will actually occur in several steps.  First, the user-edited MainPage.xaml file is processed to generate MainPage.g.h.  This file is special in that it is processed at design-time (that is, you do not need to invoke a build in order to have this file be updated).  The reason for this is that edits you make to MainPage.xaml can change the contents of the MainPage class, and you want those changes to be reflected in your Intellisense without requiring a rebuild.  Except for this step, all of the other steps only occur when a user invokes a Build.

Partial Classes

You may note that the build process introduces a problem: the class MainPage actually has two definitions, one that comes from MainPage.g.h:

 partial ref class MainPage : public ::Windows::UI::Xaml::Controls::Page,
      public ::Windows::UI::Xaml::Markup::IComponentConnector
{
public:
   void InitializeComponent();
   virtual void Connect(int connectionId, ::Platform::Object^ target);

private:
   bool _contentLoaded;

};

And one that comes from MainPage.xaml.h:

public ref class MainPage sealed
{
public:
   MainPage();

protected:
   virtual void OnNavigatedTo(Windows::UI::Xaml::Navigation::NavigationEventArgs^ e) override;
};

This issue is reconciled via a new language extension: Partial Classes.

The compiler parsing of partial classes is actually fairly straightforward.  First, all partial definitions for a class must be within one translation unit.  Second, all class definitions must be marked with the keyword partial except for the very last definition (sometimes referred to as the ‘final’ definition).  During parsing, the partial definitions are deferred by the compiler until the final definition is seen, at which point all of the partial definitions (along with the final definition) are combined together and parsed as one definition.  This feature is what enables both the XAML-compiler-generated file MainPage.g.h and the user-editable file MainPage.xaml.h to contribute to the definition of the MainPage class.

Compilation

For compilation, MainPage.g.h is included in MainPage.xaml.h, which is further included in MainPage.xaml.cpp.  These files are compiled by the C++ compiler to produce MainPage.obj.  (This compilation is represented by the red lines in the above diagram.)  MainPage.obj, along with the other obj files that are available at this stage are passed through the linker with the switch /WINMD:ONLY to generate the Windows Metadata (WinMD) file for the project. This process is denoted in the diagram by the orange line.  At this stage we are not linking the final executable, only producing the WinMD file, because MainPage.obj still contains some unresolved externals for the MainPage class, namely any functions which are defined in MainPage.g.h (typically the InitializeComponent and Connect functions).  These definitions were generated by the XAML compiler and placed into MainPage.g.hpp, which will be compiled at a later stage.

MainPage.g.hpp, along with the *.g.hpp files for the other XAML files in the project, will be included in a file called XamlTypeInfo.g.cpp.  This is for build performance optimization: these various .hpp files do not need to be compiled separately but can be built as one translation unit along with XamlTypeInfo.g.cpp, reducing the number of compiler invocations required to build the project.

Data Binding and XamlTypeInfo

Data binding is a key feature of XAML architecture, and enables advanced design patterns such as MVVM.  C++ fully supports data binding; however, in order for the XAML architecture to perform data binding, it needs to be able to take the string representation of a field (such as “FullName”) and turn that into a property getter call against an object.  In the managed world, this can be accomplished with reflection, but native C++ does not have a built-in reflection model.

Instead, the XAML compiler (which is itself a .NET application) loads the WinMD file for the project, reflects upon it, and generates C++ source that ends up in the XamlTypeInfo.g.cpp file.  It will generate the necessary data binding source for any public class marked with the Bindable attribute.

It may be instructive to look at the definition of a data-bindable class and see what source is generated that enables the data binding to succeed.  Here is a simple bindable class definition:

[Windows::UI::Xaml::Data::Bindable]
public ref class SampleBindableClass sealed {
public:
   property Platform::String^ FullName;
};

When this is compiled, because the class definition is public, it will end up in the project's WinMD file.

The XAML compiler processes this WinMD and adds source to two important functions within XamlTypeInfo.g.cpp: CreateXamlType and CreateXamlMember.

The source added to CreateXamlType generates basic type information for the SampleBindableClass type, provides an Activator (a function that can create an instance of the class) and enumerates the members of the class:

if (typeName == L"BlogDemoApp.SampleBindableClass")
{
  XamlUserType^ userType = ref new XamlUserType(this, typeName, GetXamlTypeByName(L"Object"));
  userType->KindOfType = ::Windows::UI::Xaml::Interop::TypeKind::Custom;
  userType->Activator =
   []() -> Platform::Object^
   {
     return ref new ::BlogDemoApp::SampleBindableClass();
   };
  userType->AddMemberName(L"FullName");
  userType->SetIsBindable();
  return userType;
}

Note how a lambda is used to adapt the call to ref new (which will return a SampleBindableClass^) into the Activator function (which always returns an Object^).

From String to Function Call

As I mentioned previously, the fundamental issue with data binding is transforming the text name of a property (in our example, “FullName”) into the getter and setter function calls for this property.  This translation magic is implemented by the XamlMember class.

XamlMember stores two function pointers: Getter and Setter.  These function pointers are defined against the base type Object^ (which all WinRT and fundamental types can convert to/from).  A XamlUserType stores a map<String^, XamlMember^>; when data binding requires a getter or setter to be called, the appropriate XamlMember can be found in the map and its associated Getter or Setter function pointer can be invoked.

The source added to CreateXamlMember initializes these Getter and Setter function pointers for each property.  These function pointers always take a parameter of type Object^ (the instance of the class to get from or set to) and either return an Object^ (in the case of a getter) or take a second parameter of type Object^ (in the case of a setter).

if (longMemberName == L"BlogDemoApp.SampleBindableClass.FullName")
{
  XamlMember^ xamlMember = ref new XamlMember(this, L"FullName", L"String");
  xamlMember->Getter =
  [](Object^ instance) -> Object^
  {
    auto that = (::BlogDemoApp::SampleBindableClass^)instance;
    return that->FullName;
  };

  xamlMember->Setter =
  [](Object^ instance, Object^ value) -> void
  {
    auto that = (::BlogDemoApp::SampleBindableClass^)instance;
    that->FullName = (::Platform::String^)value;
  };
  return xamlMember;
}

The two lambdas defined here use the lambda ‘decay to pointer’ functionality to bind to the Getter and Setter members.  These function pointers can then be called by the data binding infrastructure, passing in an object instance, in order to get or set a property based only on its name.  Within the lambdas, the generated code adds the proper type casts in order to marshal to/from the actual types.
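
As a rough illustration of that lookup path, here is a hedged sketch; GetMemberByName is an assumed helper for this example and not necessarily part of the generated XamlTypeInfo classes:

// Hypothetical sketch only: resolving a property name to a type-erased getter call.
Platform::Object^ GetPropertyByName(XamlUserType^ type,
                                    Platform::String^ propertyName,
                                    Platform::Object^ instance)
{
    // Resolve the member descriptor from the type's member map (assumed helper).
    XamlMember^ member = type->GetMemberByName(propertyName);

    // Invoke the stored Getter: Object^ in, Object^ out, as described above.
    auto getter = member->Getter;
    return getter(instance);
}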

Final Linking and Final Thoughts

After compiling the XamlTypeInfo.g.cpp file into XamlTypeInfo.g.obj, we can link this object file along with the other object files to generate the final executable for the program.  This executable, along with the previously generated WinMD file and your XAML files, is packaged into the app package that makes up your Windows Store application.

A note: the Bindable attribute described in this post is one way to enable data binding in WinRT, but it is not the only way.  Data binding can also be enabled on a class by implementing either the ICustomPropertyProvider interface or IMap<String^,Object^>.  These other implementations would be useful if the Bindable attribute cannot be used, particularly if you want a non-public class to be data-bindable.

For additional info, I recommend looking at this walkthrough, which will guide you through building a fully-featured Windows Store Application in C++/XAML from the ground up.  The Microsoft Patterns and Practices team has also developed a large application which demonstrates some best practices when developing Windows Store Applications in C++: project Hilo.  The sources and documentation for this project can be found at http://hilo.codeplex.com/

TFS Services RTM – still free

The Team Foundation team has updated the TFS service to RTM – https://tfs.visualstudio.com/


Free Plan till 2013

 

A plan for up to 5 users is now available!

What it includes
  • Up to 5 users
  • Unlimited number of projects
  • Version control
  • Work item tracking
  • Agile planning tools
  • Feedback management
  • Build (still in preview)
MSDN

The service will be a benefit of certain MSDN subscriptions at no additional charge.

When will other service plans be announced?

Windows 8 RTM

 

Windows 8 has reached Release to Manufacturing (RTM) and is now being issued to all PC OEM and manufacturing partners.

More details http://blogs.msdn.com/b/b8/archive/2012/08/01/releasing-windows-8-august-1-2012.aspx

  • August 15th: Developers will be able to download the final version of Windows 8 via your MSDN subscriptions and DreamSpark Premium
  • August 15th: IT professionals testing Windows 8 in organizations will be able to access the final version of Windows 8 through your TechNet subscriptions.
  • August 16th: Education institutions with existing Microsoft Software Assurance for Windows will be able to download Windows 8 Enterprise edition through the Volume License Service Center (VLSC), allowing you to test, pilot and begin adopting Windows 8 Enterprise within your organization.
  • August 16th: Microsoft Partner Network members will have access to Windows 8.
  • August 20th: Microsoft Action Pack Providers (MAPS) receive access to Windows 8.
  • September 1st: Volume License customers without Software Assurance will be able to purchase Windows 8 through Microsoft Volume License Resellers and Academic License Resellers.

So over the next few days and weeks you will see announcements of exciting new models of PCs loaded with Windows 8, with general and online availability of Windows 8 on October 26, 2012.

More details http://windowsteamblog.com/windows/b/bloggingwindows/archive/2012/08/01/windows-8-has-reached-the-rtm-milestone.aspx

Also, Windows Server 2012 has been released to manufacturing.

On September 4, Windows Server 2012 will be generally available for evaluation and download by all customers around the world.  On that day Microsoft will also host an online launch event where executives, engineers, customers and partners will share more about how Windows Server 2012 can help organizations of all sizes realize the benefits of what Microsoft calls the Cloud OS.  You will be able to learn more about the features and capabilities and connect with experts and peers.  You'll also be able to collect points along the way for the chance to win some amazing prizes.  You don't want to miss it.  Visit this site to save the date for the launch event.

More details http://blogs.technet.com/b/windowsserver/archive/2012/08/01/windows-server-2012-released-to-manufacturing.aspx

Why TFS 2012 rocks and what you need to know to scale out

 

Experiences from the Microsoft Developer Division: DevDiv is running on TFS 2012 RC!

Back in the beginning, the DevDiv server was the dogfood server for the Microsoft Developer Division.  Then, with everyone there shipping products in Visual Studio, there were too many critical deadlines to be able to put early, sometimes raw, builds on that server.  So TFS was instead dogfooded on a server called pioneer, as described here.  Pioneer is used mostly by the teams in ALM (TFS, Test & Lab Management, and Ultimate) and has been running TFS 2012 since February 2011, a full year before beta.  Never before has the team been able to use TFS so early in the product cycle, and getting that much usage on early TFS 2012 builds really showed in the successful upgrade of the DevDiv server.

The Developer Division also runs TFS 2012 in the cloud at http://tfspreview.com, which has been running for a year now.  While that is not a dogfood effort, it has helped improve TFS 2012 significantly.  The other dogfooding effort leading up to this upgrade was Microsoft IT, which upgraded a TFS server to TFS 2012 Beta; lessons were learned from that as well.

The scale of the DevDiv server is huge: it has been used by 3,659 users in the last 14 days.  Nearly all of those users work in a single team project for the delivery of Visual Studio 2012.  The branches and workspaces are huge (a full branch has about 5M files, and a typical dev workspace about 250K files).  For the TFS 2010 product cycle, the server was not upgraded until after RTM.  Because this upgrade could be done on TFS 2012 RC, the issues found will be fixed in the RTM release of TFS 2012!

Here’s the topology of the DevDiv TFS deployment, which I’ve copied from Grant Holliday’s blog post on the upgrade to TFS 2010 RTM two years ago.  I’ll call out the major features.

  • We use two application tiers behind an F5 load balancer.  The ATs will each handle the DevDiv load by themselves, in case we have to take one offline (e.g., hardware issues).
  • There are two SQL Server 2008 R2 servers in a failover configuration.  We are running SP1 CU1.  TFS 2012 requires an updated SQL 2008 for critical bug fixes.
  • SharePoint and SQL Analysis Services are running on a separate computer in order to balance the load (cube processing is particularly intensive).
  • We use version control caching proxy servers both in Redmond and for remote offices.

These statistics will give you a sense of the size of the server.  There are two collections: one that is in use now and has been used since the beginning of the 2012 product cycle (collection A), and the original collection that was used by everyone up through the 2010 product cycle (collection B).  The 2010 collection had grown in uncontrolled ways, and there were more than a few hacks in it from the early days of scaling to meet demand.  Since moving to a new collection, the team has been able to pare back the old collection, and the result of those efforts is a set of tools used on both collections (they will eventually be released).  Both collections were upgraded.  The third column is a server called pioneer.

Grant posted the queries to get these stats on your own server (some need a little tweaking because of schema changes, and build statistics still need to be added).  Also, the file size now includes all of the files (version control, work item attachments, and test attachments), as they are all stored in the same set of tables.

 

                               Coll. A         Coll. B         Pioneer
Recent Users                   3,659           1,516           1,143
Build agents and controllers   2,636           284             528
Files                          16,855,771      21,799,596      11,380,950
Uncompressed File Size (MB)    14,972,584      10,461,147      6,105,303
Compressed File Size (MB)      2,688,950       3,090,832       2,578,826
Checkins                       681,004         2,294,794       133,703
Shelvesets                     62,521          12,967          14,829
Merge History                  1,512,494,436   2,501,626,195   162,511,653
Workspaces                     22,392          6,595           5,562
Files in workspaces            4,668,528,736   366,677,504     406,375,313
Work Items                     426,443         953,575         910,168
Areas & Iterations             4,255           12,151          7,823
Work Item Versions             4,325,740       9,107,659       9,466,640
Work Item Attachments          144,022         486,363         331,932
Work Item Queries              54,371          134,668         28,875

The biggest issue faced after the upgrade was getting the builds going again.  DevDiv (collection A) has 2,636 build agents and controllers, with about 1,600 in use at any given time.  Pioneer didn't have nearly that many running.  On the DevDiv server, the result was that a connection limit was hit, and the controllers and agents would randomly go online and offline.

The upgrade to TFS 2012 RC was a huge success, and it was a very collaborative effort across TFS, central engineering, and IT.  As a result of this experience and the experience on pioneer, TFS 2012 is not only a great release with an incredible set of features, it is also running at high scale on a mission-critical server!