Migration from File Shares to Document Management (SharePoint 2013) using OneDrive (Skydrive) for Business

Here are some tips on how to migrate file shares to SharePoint and use OneDrive (SkyDrive) for Business (ODFB), aimed at anyone who is planning to migrate file share content into SharePoint and wants to use ODFB to synchronize the SharePoint content offline.

Note: these steps are valid for both SharePoint 2013 on-premises and SharePoint Online (SPO).

Info about SharePoint Limits

http://technet.microsoft.com/en-us/library/cc262787(v=office.15).aspx#Limits

First Step – Analyze your File Shares

As a first step, try to understand the data that resides on the file shares. Ask yourself the following questions:

  • What is the total size of the file share data that the customer wants to migrate?
  • How many files are there in total?
  • What are the largest file sizes?
  • How deep are the folder structures nested?
  • Is there any content that is not being used anymore?
  • What file types are there?

Let me try to explain why you should ask yourself these questions.

Total Size

If the total size of the file shares is more than the storage capacity you have in SharePoint, you need to buy additional storage (SPO) or increase your disk capacity (on-premises). To determine how much storage you will have in SPO, please check the total available tenant storage in the tables in this article. Another issue that may arise is that you reach the capacity limit per site collection. For SPO that is 1000 Gigabyte (changed from 100 GB to 1 TB); for on-premises the recommended size per site collection is still around 200 Gigabyte.

What if we have more than 1000 Gigabyte?

  • Try to divide the file share content over multiple site collections when it concerns content which needs to be shared with others.
  • If certain content is just for personal use, try to migrate that specific content into the personal site of the user.

How Many Files

The total number of files on the file shares is important, as there are limits in both SharePoint and ODFB that can leave a library or list in an unusable state within SharePoint, and you might also end up with missing files when using the ODFB client.

First, in SPO we have a fixed limit of 5000 items per view, folder or query. The reasoning behind this 5000 limit boils down to how SQL Server works under the hood; if you would like to know more about it, please read this article. On-premises there is a way to raise this limit, but it is not something we recommend, as performance can decrease significantly when you increase it.

Secondly, ODFB also has a limit of 5000 items for synchronizing team sites and 20,000 items for synchronizing personal sites. This means that if you have a document library that contains more than 5000 items, the remaining items will not be synchronized locally.

There is also a limit of 5 million items within a document library, but most customers in the SMB segment won’t reach that limit very easily.

What should I do if the data I want to migrate to a document library contains more than 5000 items in one folder?

  • Try to divide that amount over multiple subfolders, or create additional views that limit the number of documents displayed.

But wait! If I already have 5000 items in one folder, doesn’t that mean that the rest of the documents won’t get synchronized when I use ODFB?

Yes, that is correct. So if you would like to use ODFB to synchronize documents offline, make sure that the total number of documents per library in a team site does not exceed 5000.

How do I work around that limit?

  • Look at the folder structure of the file share content and see if you can divide that data across multiple sites and/or libraries. If there is a Marketing folder, for example, it might make more sense to migrate that data into a separate site anyway, as this department probably wants to store additional information besides just documents (e.g. a calendar, general info about the marketing team, a site mailbox, etc.). An additional benefit of spreading the data over multiple sites/libraries is that it gives ODFB users more granularity in choosing what data they take offline. If you were to migrate everything into one big document library (not recommended), all users would need to synchronize everything, which can have a severe impact on your network bandwidth.

Largest File Sizes

Another limit that exists in both SPO and on-premises is the maximum file size. For both, the maximum size per file is 2 Gigabyte. On-premises the default is 250 MB, but it can be increased to a maximum of 2 Gigabyte.

So, what if I have files that exceed this size?

  • Well, they won’t fit in SharePoint, so you can’t migrate them. See what type of files they are and determine what they are used for in the organization. Examples could be software distribution images, large media files, training courses or other materials. If these are still being used and not highly confidential, it is not a bad thing to keep them on alternative storage like a SAN, NAS or DVDs. If it concerns data that just needs to be kept for legal reasons and doesn’t need to be retrieved instantly, you might put it on DVD or an external hard drive and store it in a safe, for example.

Folder Structures

Another important aspect to look at on your file shares is the depth of nested folders and the length of file names. The recommended total length of a URL in SharePoint is around 260 characters. You might think that 260 characters is pretty lengthy, but remember that URLs in SharePoint often have encoding applied to them, which takes up additional space. E.g. a space is one character, but URL-encoded it becomes %20, which takes up three characters. The problem is that you can run into issues when the URL becomes too long. More details about the exact limits can be found here, but as a best practice try to keep the URL length of a document under 260 characters.
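To illustrate the effect of that encoding, here is a minimal C# sketch (the site and file names are made up) that compares the raw and URL-encoded length of a document URL against the 260-character guidance:

    using System;

    class UrlLengthCheck
    {
        static void Main()
        {
            // Hypothetical document URL built from a site, a library, folders and a file name.
            string url = "https://contoso.sharepoint.com/sites/Human Resources/Shared Documents/" +
                         "Annual Review 2013/Performance Review Template modified by Andre v.1.2.docx";

            // Spaces and other special characters are percent-encoded in the final URL,
            // so the effective length is longer than the raw string suggests (a space becomes %20).
            string encoded = Uri.EscapeUriString(url);

            Console.WriteLine("Raw length:     {0}", url.Length);
            Console.WriteLine("Encoded length: {0}", encoded.Length);
            Console.WriteLine(encoded.Length > 260
                ? "Over the recommended 260 characters"
                : "Within the recommended 260 characters");
        }
    }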

What if I have files that will have more than 260 characters in total URL length?

  • Make sure you keep your site URLs short (the site title can still be long). E.g. don’t call the URL Human Resources, but call it HR. When you land on the site, you will still see the full name Human Resources, as Site Title and URL are separate things in SharePoint.
  • Shorten the document name (e.g. strip off …v.1.2, or …modified by Andre), as SharePoint has versioning built in. More information about versioning can be found here.

Idle Content

Migrating file shares into SharePoint is often also a good moment to clean up some of the information that the organization has been collecting over the years. If there is a lot of content that has not been accessed for a couple of years, what would be the point of migrating that data to SharePoint?

So, what should I do when I come across such content?

  • Discuss this with the customer and determine if it is really necessary to keep this data.
  • If the data cannot be purged, you might consider storing it on a DVD or external hard drive and keep it in a safe.
  • If the content has multiple versions, such as proposal 1.0.docx, proposal 1.1.docx, proposal final.docx and proposal modified by Andre.docx, you might consider moving just the latest version instead of migrating them all. This manual process might be time consuming, but it can save you lots of storage space in SharePoint. Versioning is also built into SharePoint and is optimized for storing multiple versions of the same document. For example, SharePoint only stores the delta between versions, saving storage space that way. This functionality is called Shredded Storage.

Types of Files

Determine what kind of files the customer has. Are they mainly Office documents? If so, then SharePoint is the best place to store such content. However, if you come across developer code, for example, it is not a good idea to move that into SharePoint. There are also file extensions that are not allowed in SPO and/or on-premises. A complete list of blocked file types for both SPO and on-premises can be found here.

What if I come across such blocked file extensions?

  • Well, you can’t move them into SharePoint, so you should ask yourself: do I still need these files? And if so, is there an alternative storage facility, such as a NAS, I can store them on? If it concerns developer code, you might want to store it in Team Foundation Server (or the hosted Team Foundation Service) instead.

Tools for analyzing and fixing file share data

In order to determine whether you have large files or exceed the 5000-item limit, for example, you need some kind of tooling. There are a couple of approaches here; a small home-grown scan is also sketched after the list below.

  • There is a PowerShell script that has been pimped up by Hans Brender, which checks for blocked file types, bad characters in file and folder names, and the maximum URL length. The script will even fix invalid characters and file extensions for you. It is a great script, but it requires some knowledge of PowerShell. Another alternative I was pointed at is a tool called SharePrep. This tool scans for URL length and invalid characters.
  • There are other 3rd party tools that can scan your file share content, such as TreeSize. However, such tools do not necessarily check for the SharePoint limitations we talked about in the earlier paragraphs, but at least they will give you a lot more insight into the size of the file share content.
  • Finally, there are 3rd party migration tools that will move the file share content into SharePoint and check for invalid characters, extensions and URL length upfront. We will dig into these tools in the second step – migrating your data.
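If you want a quick, home-grown check before reaching for any of the tools above, the following C# sketch walks a file share and flags the limits discussed earlier. The share path and thresholds are assumptions; adjust them to your environment.

    using System;
    using System.IO;
    using System.Linq;

    class FileShareScan
    {
        const long MaxFileSizeBytes = 2L * 1024 * 1024 * 1024; // 2 GB maximum file size
        const int MaxPathLength = 260;                          // recommended maximum URL length
        const int MaxItemsPerFolder = 5000;                     // list view / ODFB sync threshold

        static void Main()
        {
            string root = @"\\fileserver\marketing";            // hypothetical file share

            // Flag folders that contain more than 5,000 items.
            foreach (string folder in Directory.EnumerateDirectories(root, "*", SearchOption.AllDirectories)
                                               .Concat(new[] { root }))
            {
                int itemCount = Directory.EnumerateFileSystemEntries(folder).Count();
                if (itemCount > MaxItemsPerFolder)
                    Console.WriteLine("More than 5,000 items: {0} ({1} items)", folder, itemCount);
            }

            // Flag files that are too large or whose path is likely to exceed the URL guidance.
            // Note: paths longer than ~260 characters may throw PathTooLongException on older .NET versions.
            foreach (string file in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
            {
                if (new FileInfo(file).Length > MaxFileSizeBytes)
                    Console.WriteLine("Larger than 2 GB: " + file);

                if (file.Length > MaxPathLength)
                    Console.WriteLine("Path longer than 260 characters: " + file);
            }
        }
    }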

Second Step – Migrating your data

So, now that we have analyzed our file share content, it is time to move it into SharePoint. There are a couple of approaches here.

Document Library Open with Explorer

If you are in a document library, you can open the library in Windows Explorer. That way you can simply copy and paste files from the file share into SharePoint.


There are some drawbacks to this approach. First of all, I’ve seen lots of issues when trying to open a library in Windows Explorer. Secondly, the technology used for copying the data into SharePoint is not very reliable, so keep that in mind when copying larger chunks of data. There is also drag & drop, but this is limited to files (no folders) and a maximum of 100 files per drag. This would mean that if you have 1000 files, you need to drag them in 10 chunks. More information can be found in this article. Checking for invalid characters, extensions and URL length upfront is also not addressed when using the Open with Explorer method.

Pros: Free, easy to use, works fine for smaller amounts of data

Cons: Not always reliable, no metadata preservation, no upfront detection of things like invalid characters, file type restrictions, path lengths etc.

OneDrive (formerly SkyDrive) for Business

You could also use ODFB to upload the data into a library. This is fine as long as you don’t sync more than 5000 items per library. Remember, though, that ODFB is not a migration tool but a sync tool, so it is not optimized for copying large chunks of data into SharePoint. Things like character and file type restrictions and path length checks are on the ODFB team’s list to address, but they are not there yet.

The main drawback of using either the Open with Explorer option or ODFB is that these tools don’t preserve the metadata of the files and folders on the file shares. By this I mean that fields like the modified date or the owner are not migrated into SharePoint. The owner becomes the user who is copying the data, and the modified date becomes the timestamp of when the copy operation was executed. So if this metadata on the file shares is important, don’t use either of the methods mentioned earlier, but use one of the third party tools below.

Pros: Free, easy to use, works fine for smaller amounts of data (max 5000 items per team site library or 20,000 per personal site)

Cons: No metadata preservation, no upfront detection of things like invalid characters, file type restrictions, path lengths etc.

3rd party tools

Here are some of the 3rd party tools that provide the additional detection, fixing and migration capabilities we mentioned earlier:

Some of them focus on SMB, while others focus more on the enterprise segment. We can’t express a preference for one tool or another, but most of the tools have a free trial version available, so you can try them out yourself.

Summary

When should I use what approach?

Here is a short summary of capabilities:

| Capability | Open in Explorer | OneDrive for Business (with latest update) | 3rd party |
| --- | --- | --- | --- |
| Amount of data | Relatively small | No more than 5000 items per library | Larger data sets |
| Invalid character detection | No | No | Mostly yes¹ |
| URL length detection | No | No | Mostly yes¹ |
| Metadata preservation | No | No | Mostly yes¹ |
| Blocked file types detection | No | No | Mostly yes¹ |

¹ This depends on the capabilities of the 3rd party tool.

Troubleshooting

ODFB gives me issues when synchronizing data
Please check that you have the latest version of ODFB installed. There have been stability issues in earlier builds of the tool, but most of those issues should be fixed by now. You can check whether you are running the latest version by opening Word -> File -> Account and clicking Update Options -> View Updates. If your installed version number is lower than the latest available one, click the Disable Updates button (click Yes if prompted), then click Enable Updates (click Yes if prompted). This will force a download of the latest version of Office and thus the latest version of the ODFB tool.


If you are running the stand-alone version of ODFB, make sure you have downloaded the latest version from here.

Why is the upload taking so long?
This really depends on a lot of things. It can depend on:

  • The method or tool that is used to upload the data.
  • The available bandwidth for uploading the data. Tips:
    • Check your upload speed at http://www.speedtest.net and do a test against your nearest Office 365 data center. This will give you an indication of the maximum upload speed.
    • Companies often have less upload bandwidth available than people at home. If you have the chance, uploading from a home location might be faster.
    • Schedule the upload for times when more bandwidth is available (usually at night).
    • Test your upload speed upfront by uploading roughly 1% of the data; multiply the time it takes by 100 and you have a rough estimate of the total upload time (see the sketch after this list).
  • The computers used for uploading the data. A slow laptop can become a bottleneck while uploading the data.
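To illustrate the 1% sampling tip, here is a tiny C# sketch; all the numbers in it are made up and stand in for your own measurements:

    using System;

    class UploadEstimate
    {
        static void Main()
        {
            // Hypothetical measurements: a ~1% sample of the data took 42 minutes to upload.
            double sampleSizeMb = 500;
            double sampleMinutes = 42;
            double totalSizeMb = 50000;   // total size of the file share content

            double estimatedMinutes = sampleMinutes * (totalSizeMb / sampleSizeMb);
            Console.WriteLine("Estimated total upload time: {0:F1} hours", estimatedMinutes / 60);
        }
    }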

How to Use Unity – A Thread-Safe Dependency Container

First, I will try to define the problem. The Constructor Injection pattern is easy to understand until a follow-up question comes up:

Where should we compose object graphs?

It’s easy to understand that each class should require its dependencies through its constructor, but this pushes the responsibility of composing the classes with their dependencies to a third party. Where should that be?

It seems to me that most people are eager to compose as early as possible, but the correct answer is:

As close as possible to the application’s entry point.

This place is called the Composition Root of the application and is defined like this:

A Composition Root is a (preferably) unique location in an application where modules are composed together.

This means that all the application code relies solely on Constructor Injection (or other injection patterns), but is never composed. Only at the entry point of the application is the entire object graph finally composed.

The appropriate entry point depends on the framework:

  • In console applications it’s the Main method
  • In ASP.NET MVC applications it’s global.asax and a custom IControllerFactory
  • In WPF applications it’s the Application.OnStartup method
  • In WCF it’s a custom ServiceHostFactory
  • etc

The Composition Root is an application infrastructure component. Only applications should have Composition Roots; libraries and frameworks shouldn’t.

The Composition Root can be implemented with Poor Man’s DI, but is also the (only) appropriate place to use a DI Container.

A DI Container should only be referenced from the Composition Root. All other modules should have no reference to the container.

Using a DI Container is often a good choice. In that case it should be applied using the Register Resolve Release pattern entirely from within the Composition Root.
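As a minimal sketch of what that looks like in a console application (the interface and class names here are made up, and Unity’s Microsoft.Practices.Unity namespace is assumed), the Register, Resolve and Release steps all live in Main:

    using System;
    using Microsoft.Practices.Unity;

    // Hypothetical abstraction and implementation used only for this sketch.
    interface IMessageWriter { void Write(string message); }

    class ConsoleMessageWriter : IMessageWriter
    {
        public void Write(string message) { Console.WriteLine(message); }
    }

    // The rest of the code base uses only Constructor Injection and never sees the container.
    class Application
    {
        private readonly IMessageWriter _writer;
        public Application(IMessageWriter writer) { _writer = writer; }
        public void Run() { _writer.Write("Composed at the Composition Root."); }
    }

    class Program
    {
        static void Main()
        {
            // Register, Resolve and Release happen only here, at the application's entry point.
            using (IUnityContainer container = new UnityContainer())   // Release happens on Dispose.
            {
                container.RegisterType<IMessageWriter, ConsoleMessageWriter>();   // Register

                var app = container.Resolve<Application>();                       // Resolve the graph
                app.Run();
            }
        }
    }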

Read more in Dependency Injection in .NET.

Having seen the benefits of a Composition Root over the Service Locator anti-pattern, let’s look at a thread-safe wrapper around a global Unity container.

The solution consists of two classes. DependencyFactory is the public wrapper for an internal class, DependencyFactoryInternal. The outer class presents easy-to-use static methods and acquires the right kind of lock (read or write) for whatever operation you’re trying to do.

The first time you access the container, the code loads the type registrations from your .config file. You can also register types programmatically with the static RegisterInstance method:

DependencyFactory.RegisterInstance<IMyType>(new MyConcreteType());

To resolve a type, use the static Resolve method:

var myObject = DependencyFactory.Resolve<IMyType>();

Finally, the static FindRegistrationSingle method exposes an existing ContainerRegistration.

public sealed class DependencyFactory : IDisposable
    {
        #region Properties
        /// <summary>
        /// Get the singleton Unity container.
        /// </summary>
        public IUnityContainer Container
        {
            get { return DependencyFactoryInternal.Instance.Container; }
        }
        #endregion

        #region Constructors
        /// <summary>
        /// Default constructor. Obtains a write lock on the container since this is the safest policy.
        /// Also uses a relatively long timeout.
        /// </summary>
        public DependencyFactory()
            : this(true, 10000)
        {
        }

        /// <summary>
        /// Construct an object that can access the singleton Unity container until the object is Disposed.
        /// </summary>
        /// <param name="allowContainerModification">True to allow modification of the container.
        /// The caller is responsible for behaving according to this pledge!
        /// The default is to allow modifications, which results in a write lock rather than
        /// the looser read lock. If you're sure you're only going to be reading the container, 
        /// you may want to consider passing a value of false.</param>
        /// <param name="millisecondsTimeout">Numer of milliseconds to wait for access to the container,
        /// or -1 to wait forever.</param>
        internal DependencyFactory(bool allowContainerModification, int millisecondsTimeout = 500)
        {
            if (allowContainerModification)
                DependencyFactoryInternal.Instance.EnterWriteLock(millisecondsTimeout);
            else
                DependencyFactoryInternal.Instance.EnterReadLock(millisecondsTimeout);
        }
        #endregion

        #region Methods
        /// <summary>
        /// Resolves a type in the static Unity container.
        /// This is a convenience method, but has the added benefit of only enforcing a Read lock.
        /// </summary>
        /// <param name="overrides">Overrides to pass to Unity's Resolve method.</param>
        /// <typeparam name="T">The type to resolve.</typeparam>
        /// <returns>A concrete instance of the type T.</returns>
        /// <remarks>
        /// If you already have a DependencyFactory object, call Resolve on its Container property instead.
        /// Otherwise, you'll get an error because you have the same lock on two threads.
        /// </remarks>
        static public T Resolve<T>(params ResolverOverride[] overrides)
        {
            using (var u = new DependencyFactory(false, 10000))
            {
                return u.Container.Resolve<T>(overrides);
            }
        }

        /// <summary>
        /// Convenience method to call RegisterInstance on the container.
        /// Constructs a DependencyFactory that has a write lock.
        /// </summary>
        /// <typeparam name="T">The type of register.</typeparam>
        /// <param name="instance">An object of type T.</param>
        static public void RegisterInstance<T>(T instance)
        {
            using (var u = new DependencyFactory())
            {
                u.Container.RegisterInstance<T>(instance);
            }
        }

        /// <summary>
        /// Find the single registration in the container that matches the predicate.
        /// </summary>
        /// <param name="predicate">A predicate on a ContainerRegistration object.</param>
        /// <returns>The single matching registration. Throws an exception if there is no match, 
        /// <remarks>
        /// Only uses a read lock on the container.
        /// </remarks>
        /// or if there is more than one.</returns>
        static public ContainerRegistration FindRegistrationSingle(Func<ContainerRegistration, bool> predicate)
        {
            using (var u = new DependencyFactory(false,10000))
            {
                return u.Container.Registrations.Single(predicate);
            }
        }

        /// <summary>
        /// Acquires a write lock on the Unity container, and then clears it.
        /// </summary>
        static public void Clear()
        {
            using (var u = new DependencyFactory())
            {
                DependencyFactoryInternal.Instance.Clear();
            }
        }
        #endregion

        #region IDisposable Members
        /// <summary>
        /// Dispose the object, releasing the lock on the static Unity container.
        /// </summary>
        public void Dispose()
        {
            if (DependencyFactoryInternal.Instance.IsWriteLockHeld)
                DependencyFactoryInternal.Instance.ExitWriteLock();
            if (DependencyFactoryInternal.Instance.IsReadLockHeld)
                DependencyFactoryInternal.Instance.ExitReadLock();
        }
        #endregion
    }

Internal class

    /// <summary>
    /// Thread-safe singleton that owns the Unity container and its reader/writer lock.
    /// </summary>
    /// <remarks>
    /// This class is internal; consumers should go through DependencyFactory.
    /// </remarks>
    internal class DependencyFactoryInternal
    {
        #region Fields
        // Lazy-initialized, static instance member.
        private static readonly Lazy<DependencyFactoryInternal> _instance
            = new Lazy<DependencyFactoryInternal>(() => new DependencyFactoryInternal(),
            true /*thread-safe*/ );

        private ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
        #endregion

        #region Properties
        /// <summary>
        /// Get the single instance of the class.
        /// </summary>
        public static DependencyFactoryInternal Instance
        {
            get { return _instance.Value; }
        }

        IUnityContainer _container = null;
        /// <summary>
        /// Get the instance of the Unity container.
        /// </summary>
        public IUnityContainer Container
        {
            get
            {
                try
                {
                    return _container ?? (_container = new UnityContainer().LoadConfiguration());
                }
                catch (Exception ex)
                {
                    throw new InvalidOperationException("Could not load the Unity configuration.", ex);
                }
            }
        }

        /// <summary>
        /// Tells whether the underlying container is null. Intended only for unit testing. 
        /// (Note that this property is internal, and thus available only to this assembly and the
        /// unit-test assembly.)
        /// </summary>
        internal bool ContainerIsNull 
        { 
            get { return _container == null; } 
        }

        /// <summary>
        /// Tell whether the calling thread has a read lock on the singleton.
        /// </summary>
        internal bool IsReadLockHeld { get { return _lock.IsReadLockHeld; } }

        /// <summary>
        /// Tell whether the calling thread has a write lock on the singleton.
        /// </summary>
        internal bool IsWriteLockHeld { get { return _lock.IsWriteLockHeld; } }
        #endregion

        #region Constructor
        /// <summary>
        /// Private constructor. 
        /// Makes it impossible to use this class except through the static Instance property.
        /// </summary>
        private DependencyFactoryInternal()
        {
        }
        #endregion

        #region Methods
        /// <summary>
        /// Enter a read lock on the singleton.
        /// </summary>
        /// <param name="millisecondsTimeout">How long to wait for the lock, or -1 to wait forever.</param>
        public void EnterReadLock(int millisecondsTimeout)
        {
            if (!_lock.TryEnterReadLock(millisecondsTimeout))
                throw new TimeoutException(Properties.Resources.TimeoutEnteringReadLock);
        }

        /// <summary>
        /// Enter a write lock on the singleton.
        /// </summary>
        /// <param name="millisecondsTimeout">How long to wait for the lock, or -1 to wait forever.</param>
        public void EnterWriteLock(int millisecondsTimeout)
        {
            if (!_lock.TryEnterWriteLock(millisecondsTimeout))
                throw new TimeoutException(Properties.Resources.TimeoutEnteringWriteLock);
        }

        /// <summary>
        /// Exit the lock obtained with EnterReadLock.
        /// </summary>
        public void ExitReadLock()
        {
            _lock.ExitReadLock();
        }

        /// <summary>
        /// Exit the lock obtained with EnterWriteLock.
        /// </summary>
        public void ExitWriteLock()
        {
            _lock.ExitWriteLock();
        }

        /// <summary>
        /// Clear the Unity container and the lock so we can restart building the dependency injections.
        /// </summary>
        /// <remarks>
        /// Intended for unit testing.
        /// </remarks>
        public void Clear()
        {
            if (_container != null)
                _container.Dispose();
            _container = null;
            while (_lock.IsWriteLockHeld)
                _lock.ExitWriteLock();
            while (_lock.IsReadLockHeld)
                _lock.ExitReadLock();
        }
        #endregion
    }
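For completeness, here is a hedged sketch of how the wrapper might be exercised from multiple threads; IMyType and MyConcreteType are the hypothetical types from the earlier snippets.

    using System.Threading.Tasks;

    class ThreadSafetyDemo
    {
        static void Main()
        {
            // Registration takes a write lock and should be done up front.
            DependencyFactory.RegisterInstance<IMyType>(new MyConcreteType());

            // Each Resolve call only takes a read lock internally, so parallel resolution is safe.
            Parallel.For(0, 100, i =>
            {
                var service = DependencyFactory.Resolve<IMyType>();
                service.DoWork();   // DoWork is a hypothetical member of IMyType
            });
        }
    }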

Why TFS 2012 rocks and what you need to know to scale out

 

Experiences from the Microsoft Developer Division: the Developer Division is running on TFS 2012 RC!

Back in the beginning, the DevDiv server was the dogfood server for the Microsoft Developer Division.  Then, with all of the teams shipping products in Visual Studio, there were too many critical deadlines to be able to put early, sometimes raw, builds on that server.   So TFS was dogfooded on a server called pioneer, as described here.  Pioneer is used mostly by the teams in ALM (TFS, Test & Lab Management, and Ultimate), and it has been running TFS 2012 since February 2011, a full year before beta. Never before have we been able to use TFS so early in the product cycle, and our ability to get that much usage on early TFS 2012 really showed in the successful upgrade of the DevDiv server.

The Developer Division also runs TFS 2012 in the cloud at http://tfspreview.com, and that has been running for a year now.  While that’s not a dogfood effort, it has helped us improve TFS 2012 significantly. The other dogfooding effort leading up to this upgrade was Microsoft IT.  They upgraded a TFS server to TFS 2012 Beta, and we learned from that as well.

The scale of the DevDiv server is huge: it was used by 3,659 users in the last 14 days.  Nearly all of those users are working in a single team project for the delivery of Visual Studio 2012.  Our branches and workspaces are huge (a full branch has about 5M files, and a typical dev workspace 250K files).  For the TFS 2010 product cycle, the server was not upgraded until after RTM.  Having been able to do this upgrade with TFS 2012 RC, the issues found will be fixed in the RTM release of TFS 2012!

Here’s the topology of the DevDiv TFS deployment, which I’ve copied from Grant Holliday’s blog post on the upgrade to TFS 2010 RTM two years ago.  I’ll call out the major features.

  • We use two application tiers behind an F5 load balancer.  Each AT can handle the DevDiv load by itself, in case we have to take one offline (e.g., hardware issues).
  • There are two SQL Server 2008 R2 servers in a failover configuration.  We are running SP1 CU1.  TFS 2012 requires an updated SQL 2008 for critical bug fixes.
  • SharePoint and SQL Analysis Services are running on a separate computer in order to balance the load (cube processing is particularly intensive).
  • We use version control caching proxy servers both in Redmond and for remote offices.

These statistics will give you a sense of the size of the server.  There are two collections: one that is in use now and has been used since the beginning of the 2012 product cycle (collection A), and the original collection, which was used by everyone up through the 2010 product cycle (collection B).  The 2010 collection had grown in uncontrolled ways, and there were more than a few hacks in it from the early days of scaling to meet demand.  Since moving to a new collection, the team has been able to pare back the old collection, and the result of those efforts is a set of tools used on both collections (they will eventually be released).  Both collections were upgraded.  The third column is a server we call pioneer.

Grant posted the queries to get the stats on your own server (some need a little tweaking because of schema changes, and we need to add build).  Also, the file size is now all of the files, including version control, work item attachments, and test attachments, as they are all stored in the same set of tables now.

 

| | Coll. A | Coll. B | Pioneer |
| --- | --- | --- | --- |
| Recent users | 3,659 | 1,516 | 1,143 |
| Build agents and controllers | 2,636 | 284 | 528 |
| Files | 16,855,771 | 21,799,596 | 11,380,950 |
| Uncompressed file size (MB) | 14,972,584 | 10,461,147 | 6,105,303 |
| Compressed file size (MB) | 2,688,950 | 3,090,832 | 2,578,826 |
| Checkins | 681,004 | 2,294,794 | 133,703 |
| Shelvesets | 62,521 | 12,967 | 14,829 |
| Merge history | 1,512,494,436 | 2,501,626,195 | 162,511,653 |
| Workspaces | 22,392 | 6,595 | 5,562 |
| Files in workspaces | 4,668,528,736 | 366,677,504 | 406,375,313 |
| Work items | 426,443 | 953,575 | 910,168 |
| Areas & iterations | 4,255 | 12,151 | 7,823 |
| Work item versions | 4,325,740 | 9,107,659 | 9,466,640 |
| Work item attachments | 144,022 | 486,363 | 331,932 |
| Work item queries | 54,371 | 134,668 | 28,875 |

The biggest issue faced after the upgrade was getting the builds going again.  DevDiv has 2,636 build agents and controllers, with about 1,600 being used at any given time.  On pioneer we didn’t have nearly that many running.  The result was that we hit a connection limit, and the controllers and agents would randomly go online and offline.

The upgrade to TFS 2012 RC was a huge success, and it was a very collaborative effort across TFS, central engineering, and IT.  As a result of this experience and the experience on pioneer, TFS 2012 is not only a great release with an incredible set of features, but it’s also running at high scale on a mission critical server!

Baseless Merge with TFS 2010

 

Baseless merge is integrated into the merge wizard in TFS 2012.

First I would like to say this should be avoided if at all possible. Having a relationship between branches makes it much easier to deal with branching. So unless you absolutely have to merge between unrelated branches, try not to.

What you need to know: TFS does not automatically create a merge relationship between two sibling branches (branches that share the same parent). This is by design. The VSTS Rangers Guidance II suggests avoiding baseless merges for normal merging scenarios.

The best way to merge changes from one sibling (source) branch to another sibling (target) branch is to reverse-integrate (RI) the change to the parent branch first, and then forward-integrate (FI) the change to the sibling (target) branch. A huge risk with doing a sibling-to-sibling merge is that future siblings (which branch from the same parent) won’t have the change, since it was never RI’d back to the parent branch.

As you know, baseless merges introduce a myriad of problems including:

  1. Conflicts cannot be auto-merged
  2. Deletes are not propagated

Now that disclaimers are out of the way, what is a baseless merge?

Take the following branch hierarchy. Dev can be merged with QA, and QA can be merged with Hotfix or Prod. If you try to merge a change from Hotfix directly to Dev, the UI will not let you. There is no relationship between them, so it would be a baseless merge.

If I were to make a change in the Hotfix branch and attempt to merge it with Dev, that option would not be available. As you can see in the merge dialog below, the only option available for merging is the QA branch.

If I were to merge changeset 108 from Hotfix to QA, the visualizer would show something like this.

If I wanted to merge the latest version of Hotfix into Dev, skipping QA, I would have to do a baseless merge from the command line, using the /baseless switch on the tf merge command:

tf merge /recursive /baseless Hotfix Dev

If we then take a look at the visualizer, we can see that we did a baseless merge, denoted by the dotted line.

TFS 2012 View of Branching

Baseless merge in the UI – Another long-standing piece of feedback is that people want to be able to initiate baseless merges in the UI.  We’ve supported it in the command line for a long time, but, much as I said about rollback in my post on the Power Tools, for many people, if it’s not in the UI, it’s not in the product.  Now it’s in the UI.  If you initiate a merge from Source Control Explorer, the merge wizard now has a browse button that allows you to go find branches to do a baseless merge against.

Let’s look at an example.  I started with a branch structure that looked like this:

Hierarchy

As you can see there’s no merge relationship between destination and AlternateDestination.  But let’s imagine that I have some changes in destination that I want in AlternateDestination but I don’t want to go through Source.  I want a baseless merge from destination to AlternateDestination.  I can now go to Source Control Explorer and start the standard merge experience:

MergeWiz

As always, only Source is in the list because it is the only related branch to destination.  However, unlike TFS 2010, there is now a Browse… button for baseless merges.  If you click it, you get:

BrowseBranch

and here you can see I can pick AlternateDestination.  And if I do, and hit OK, I go back to the wizard:

BaselessMerge

And it warns me that I’m doing a baseless merge but I can hit next and then finish and proceed with it.  So, now you can do baseless merges in the UI!

Merge on unshelve – From the beginning, people have loved shelving as a simple, clean way to package up changes and set them aside.  However, we’ve often heard the complaint that the inability to unshelve into a workspace with pending changes and deal with the merge conflicts was a serious limitation.  No more!  We have now built merging into the unshelve operation, and it works just like merging does elsewhere in the product.

This post has gotten plenty long, so I’m not going to include screenshots of this.  The new thing is that merges are performed, conflicts are filed, etc. in this scenario.  All the UI around that is the same as in other scenarios (like merge and get) where merges happen.

.Net Framework 4.0.2 Rollup

 

.Net Framework 4.0.2 Release Information Rollup

Multi-Targeting Pack for Microsoft .NET Framework 4.0.2 (KB2544526)

This update adds support for designing and developing applications for the Update 4.0.2 for Microsoft .NET Framework 4 by using Microsoft Visual Studio 2010 SP1 or later. The MT Pack adds new reference assemblies, IntelliSense files, and other supporting files. For further details about the contents of this Targeting Pack refer to the Knowledge Base Article KB2544526.

Update 4.0.2 for Microsoft .NET Framework 4 – Design-time Update for Visual Studio 2010 SP1 (KB2544525)

This package contains updated design-time files for Visual Studio 2010 SP1 corresponding to Update 4.0.2 for Microsoft .NET Framework 4. For further details about the contents of Update 4.0.2 for Microsoft .NET Framework 4 – Design-time Update please refer to the Knowledge Base Article KB2544525.
This design time package installs the following individual packages:

  • Update 4.0.2 for Microsoft .NET Framework 4 – Runtime Update (KB2544514)
  • Multi-Targeting Pack for the Microsoft .NET Framework 4.0.2 (KB2544526)
  • Visual Studio 2010 SP1 Update for enabling workflow state machine designer (KB2495593)

Update 4.0.2 for Microsoft .NET Framework 4 – Runtime Update (KB2544514)

Update 4.0.2 for Microsoft .NET Framework 4 package contains updated runtime files. For further details about the contents of this Runtime Update please refer to the Knowledge Base Article KB2544514.

Multi-Targeting Pack for Microsoft .NET Framework 4.0.1 (KB2495638)

This update adds support for designing and developing applications for Update 4.0.1 for the Microsoft .NET Framework 4 by using Microsoft Visual Studio 2010 SP1 or higher. The MT Pack adds new reference assemblies, IntelliSense files, and other supporting files. For further details about the contents of this Targeting Pack refer to the Knowledge Base Article KB2495638.

Article ID: 2544525 – Update 4.0.2 for Microsoft .NET Framework 4 – Design-Time Update for Visual Studio 2010 SP1

This update adds support for designing and developing applications on Microsoft Visual Studio 2010 SP1 for the Update 4.0.2 for Microsoft .NET Framework 4.
This update installs the packages that are described in the following Microsoft Knowledge Base articles:

To use new features that are provided by this update, follow these steps:

  1. Install the .NET Framework 4.0.2 – Design-Time Update for Visual Studio 2010 SP1.
  2. Open Visual Studio 2010 SP1.
  3. Create a new workflow project, and then set the target framework for the project to .NET Framework 4.0.2 Client Profile or to .NET Framework 4.0.2.
    Note The target framework can be changed by using the Target Framework list in the Project Properties dialog box.
  4. After the project is created, you can code and use the designer to build a .NET Framework 4.0.2-based application.

Note: if you set the target framework to .NET Framework 4.0.2, IntelliSense is exposed for all the new public APIs from Update 4.0.2 for Microsoft .NET Framework 4 – Runtime Update.

Article ID: 2544514 – Update 4.0.2 for Microsoft .NET Framework 4 – Runtime Update

Update 4.0.2 for Microsoft .NET Framework 4 is now available. This update contains some new features that are based on specific requests from some top customers and on some important .NET Framework scenarios. This update also contains some important software updates for ClickOnce and for .NET Framework 4-based Windows Presentation Foundation (WPF) applications.
Notes

  • This update release updates only the runtime files for the Microsoft .NET Framework 4. For more information about the details of this update, see the "More Information" section.
  • This update contains all the runtime changes from the following update:

    2478063 (http://support.microsoft.com/kb/2478063 / ) Update 4.0.1 for Microsoft .NET Framework 4 – Runtime Update

    Therefore, this update is a cumulative update. Any application built for the .NET Framework 4.0.1 can run on a computer that has the .NET Framework 4 and the Update 4.0.2 runtime installed. We recommend that applications built for the Microsoft .NET Framework 4.0.1 be upgraded to the Update 4.0.2 runtime. However, this upgrade is optional.

  • We do not support applications that were built by using this update on any prerelease version of the .NET Framework 4, such as a Beta. Additionally, we recommend that any such application be upgraded to at least the Microsoft .NET Framework 4 RTM.

Features that are introduced by this update

  • AlwaysOn support in SqlClient
  • SQL Server Express Local Database Runtime (LocalDB) support in SqlClient (see the sketch below)
  • A new DbProviderFactories.GetFactory overload
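As a quick illustration of the LocalDB support, here is a minimal sketch; the instance name is the default LocalDB one and is an assumption, and it requires SQL Server Express LocalDB on the machine together with the 4.0.2 runtime update (or a later framework) so that SqlClient understands the (localdb) syntax:

    using System;
    using System.Data.SqlClient;

    class LocalDbDemo
    {
        static void Main()
        {
            // Hypothetical LocalDB instance name; adjust to whatever is installed locally.
            string connectionString = @"Data Source=(localdb)\v11.0;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connected to: " + connection.DataSource);
            }
        }
    }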

Issues that this update resolves

Issue 1
Consider the following scenario:

  • You create a .NET Framework application that targets the Microsoft .NET Framework 4.0.1 or the Microsoft .NET Framework 4.0.1 Client Profile.
  • You publish the application.
  • You install the application. ClickOnce is used to install the application.

In this scenario, the installation fails, and you receive an error message that contains the following text:

Version 4.0.1 is required

Issue 2
Consider the following scenario:

  • You create a Windows Presentation Foundation (WPF) application.
  • You set a window to be a child window of two windows.

In this scenario, WPF reports incorrect child-window dimensions.
Note After you install this update, WPF returns the correct dimensions of the child window.

Enterprise Library v5 & Contrib

 

A couple of days ago, the EntLibContrib project reached an important milestone – porting a big chunk of the EntLibContrib codebase to be compatible with EntLib v5. They released the binaries via NuGet (search for ‘entlibcontrib’ or ‘entlib’). Here’s what this October 2011 release includes:

  • EntLibContrib 5.0 – Oracle ODP.NET Data Provider
    • allows you to use the Oracle Data Provider for .NET (ODP.NET) with the Microsoft Enterprise Library Data Access Application Block (DAAB).
  • EntLibContrib 5.0 – MySql Data Provider
    • allows you to use the MySql .NET Connector with the Microsoft Enterprise Library Data Access Application Block (DAAB).
  • EntLibContrib 5.0 – Query Application Block
    • provides a common interface to query and update data stored in a DB, XML file or Web/WCF service.
  • EntLibContrib 5.0 – Query Application Block Database Extensions
    • allow you to use the Query Application Block with the Microsoft Enterprise Library Data Access Application Block.
  • EntLibContrib 5.0 – Validation Application Block Extensions
    • provide additional validators, design-time enhancements and some other features.
  • EntLibContrib 5.0 – Logging Application Block Extensions
    • provide additional trace listeners, log entries parsing support and some other features.
  • EntLibContrib 5.0 – Exception Handling Application Block Extensions
    • provide additional exception handlers.
  • EntLibContrib 5.0 – Policy Injection Application Block Extensions
    • provide additional PIAB call handlers.
  • EntLibContrib 5.0 – Common Extensions
    • extend the Microsoft Enterprise Library Common Infrastructure by providing new configuration element classes (TypeConfigurationElement and AnonymousConfigurationElement) that help deal with design-time support for your custom providers.
  • EntLibContrib 5.0 – Source Code
    • this package includes a zip file with the source code of all the extensions and the additional blocks of EntLibContrib v5. This source code can also be used in combination with the PDBs for debugging purposes.

The following packages are planned to be subsequently released in November:

  • EntLibContrib 5.0 – Castle Windsor Configuration Support
    • This Castle Windsor IContainerConfigurator implementation will allow EntLib to use Windsor as the container (instead of Unity).
  • EntLibContrib 5.0 – Autofac Configuration Support
    • This Autofac IContainerConfigurator implementation will allow EntLib to use Autofac as the container (instead of Unity).
  • Additional Data Providers for DB2, PostgreSQL, SQLite, and SqlEx
  • EntLibContrib 5.0 – Resource Application Block
    • a full application block of configurable providers for Globalization and Localization, complete with configuration console designer, Unity support, group policy support and instrumentation.
  • EntLibContrib 5.0 – Mapping Application Block
    • complements the Query Application Block (QAB), managing the data transfer objects used by the QAB and mapping them to and from fully typed domain objects.

Congratulations to the EntLibContrib community on this release! It’s great to see such work coming from community enthusiasts. If you get to use these extensions and benefit from them, make sure to send your kudos to the contributors. And if you build your own extensions and feel that other EntLib users may benefit from them, consider sharing them via EntLibContrib as well.