TFS 2015 Update 1 RC is Here

Following the new release cadence, the Microsoft Developer Division has released Visual Studio 2015 Update 1 CTP and TFS 2015 Update 1 RC. Download them if you like.

The backlog looks fancier and has more features that can be used in planning meetings.

Working with tasks directly on the Kanban board

Some of my favorites, which have been in VSO for a while now, are:

  • Version control: use Git and Team Foundation Version Control in the same project, history and getting started improvements on the web portal, social #ID in pull requests, commit details summary is easier to read, and an improved experience for cloning Git repositories.
  • Backlogs: multi-select on all backlogs, drag any item to an iteration from anywhere, Add panel on the iteration backlog, line on the burndown indicates actual capacity, configure settings directly, add/remove users in the sprint plan, and multiple activities per team member in planning capacity for a sprint.
  • Kanban boards: query on columns, card coloring, tag coloring, inline renaming of columns and swimlanes, reorder cards when changing columns, configure settings directly, and hide empty fields on cards.
  • Work items and tasks: tasks as checklist, link to branches and pull requests in work items, task board card coloring, and limit values shown for Work Item type in queries.
  • Build: improved access control for resources, improved source control integration, usability fixes in Build Explorer, and parity with XAML builds for label sources and client-side workspace mappings.
  • Testing: export test outcome for manual tests and test result retention policy improvements.
  • Dashboards: 100% customizable with new widgets and multiple dashboards.
  • SonarQube: analysis works for Java programs built with a Maven task, and the SonarQube Analysis build tasks work with both on-premises and hosted agents.

SonarQube build tasks


You can read more about it here.

Live from Advanced Developer Conference 2015 – #Adc15

This year, like every year, ppedv AG hosts the Advanced Developer Conference in Mannheim, with an interesting content and speaker list where you can find top-quality Microsoft and Microsoft community speakers such as:

Immo Landwerth – Senior Program Manager, .NET Core team, Microsoft
Tarek Madkour – Principal Group Program Manager, Visual Studio IDE, Microsoft
Laurent Bugnion – creator of MVVM Light
Neno Loje – ALM MVP / Scrum Master
Thomas Schissler – Head of Development, ALM MVP, Scrum Trainer
Christian Binder – ALM Architect, Microsoft


There is also interesting content on DevOps, agile delivery, .NET Core and the Visual Studio IDE, as well as Azure microservices, EF7, architecture, and coding.

You can also view the sessions on the ppedv YouTube channel.

Silverlight 6 – a No-Go from the Microsoft Team

A few months ago, the Microsoft team posted its answer to a Visual Studio UserVoice case.

In short:

While Microsoft continues to support Silverlight, and remains committed to doing so into 2021, there will be no new development work except security fixes and high-priority reliability fixes.

Silverlight out-of-browser apps will work in Windows 10. Silverlight controls and apps will continue to work in Internet Explorer until October 12, 2021, on down-level browsers and on the desktop.

Our support lifecycle policy for Silverlight remains unchanged:

As was stated in this blog post, the need for browser extensions including ActiveX controls and Browser Hosted Objects has been significantly reduced by the advances of HTML5-era capabilities which also produce interoperable code across browsers. On the web, plugin-based architectures, such as Silverlight, are moving towards modern open standards, such as HTML5, CSS and JavaScript which are supported on a wide variety of browsers and platforms.

For future development, we recommend modernizing Silverlight applications to HTML5 solutions which provide broader reach across platforms and browsers. In addition to the solutions listed in the blog post above, there is a free .NET Technology Guide eBook to help with migration planning located at:

Leaks Analysis with the Memory Usage Tool in Visual Studio 2015

Memory Usage tool in the Diagnostics Tool window

Visual Studio 2015 CTP 6 introduced the new debugger-integrated diagnostics tools, including the Memory Usage tool. For the first time, you could investigate memory growth on the managed heap without leaving everyone’s favorite tool, the debugger. Based on your feedback, we’ve been refining the experience for Visual Studio 2015 RC. This post shows how to use the Memory Usage tool while debugging to find and fix a common source of leaks in .NET code: event handlers.

The Test app

For this walkthrough, I’ll be using the WPF version of our sample app, PhotoFilter. You can find it in the PhotoFilter.WPF folder inside the solution (zip file).

PhotoFilter loads all the images in your Pictures library, and displays them in a list. Double-click any image to open an ImagePage view in a new window. While the MainWindow list displays smaller thumbnails, the ImagePage view shows a scaled version of the full image. PhotoFilter offers two ways to view this larger image: either change the selection in MainWindow to update the image displayed in the open ImagePage window, or close the ImagePage window and open a new one by double-clicking again in the Main Window’s list.

Is it even a leak?

There are only a handful of ways to leak memory in the managed, garbage-collected environment of the .NET CLR. One of the more common scenarios is when the garbage collector refuses to clean up an object, even though we are certain it’s beyond its useful life. This is generally an indication that some object somewhere is holding a reference to the should-be-dead object. Sometimes these references are very subtle, or not apparent in our own code. Using the Memory Usage tool, we can not only discover the leaks, but also track down the references that are keeping the zombies alive.
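To make the pattern concrete, here is a minimal sketch (Publisher and Subscriber are illustrative names, not types from the sample app) of how an event subscription keeps an otherwise-dead object alive:

```csharp
using System;

class Publisher
{
    // The delegate stored behind this event holds a reference
    // to every object that has subscribed a handler.
    public event EventHandler Changed;
}

class Subscriber
{
    public Subscriber(Publisher p)
    {
        // Subscribing stores a reference to 'this' inside the publisher's
        // event delegate. As long as the publisher is reachable, this
        // subscriber cannot be collected, even if nothing else refers to it.
        p.Changed += OnChanged;
    }

    void OnChanged(object sender, EventArgs e) { }
}
```

As long as the Publisher instance is reachable, every Subscriber that ever subscribed stays reachable through the event’s delegate list.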

Find the leak

I start debugging to run the application, exercising the code paths while watching the Memory graph. The graph displays process memory using a metric called Private Bytes. You can find more on Private Bytes in one of my earlier blog posts on the Memory Usage tool. An app’s managed heap is part of the process memory, so increases in the heap will also cause increases in the Private Bytes.


Repeatedly opening and closing new ImagePage windows shows a disturbing trend on the graph. Each time I open the ImagePage window, the memory climbs. The first jump, at point A, I expect to happen. Each time my application opens the ImagePage, the bitmap being displayed is fully decoded into memory. Point B is where I get concerned. At that point, I closed the ImagePage window, and opened and closed three new ones for the next pictures in the list. After closing the first ImagePage, I expect the next garbage collection to clean up the ImagePage and all the memory it used. Instead, what I see is a stepped pattern of memory always showing a net increase. Each new ImagePage adds to the overall process memory. Closing them doesn’t result in the memory being cleaned up. These are signs of a leak.

Finding the source

I only exercised the code paths for these two views. So, I can be pretty sure that this leak behavior is somehow related to the ImagePage objects. One option, something we’re all pretty familiar with, is digging around the code seeing if anything looks “suspicious” or “smells”. With VS2015, instead of hunting and smelling, we can use the Memory Usage tool.

I’ll do that by taking snapshots before and after the interesting parts, and then investigating the diff to better understand what’s keeping the zombie objects alive.

The Memory Usage tab

In the Diagnostic Tools window, I’ll switch to the Memory Usage tab. Once there, a simple toolbar shows me all the basic interactions.


Note: “Take Snapshot” temporarily pauses the process if it’s running, and walks the managed heap. This finds all the objects that are still live and not eligible for clean up by the garbage collector. Once a snapshot completes, an overview of its key stats appears in the table below the toolbar.

Getting back to my investigation, I start debugging. Then I wait for the app to start and for the memory graph to stabilize. For many applications, you’ll want to interact with it first to ensure you’ve eliminated any initialization costs before considering the memory usage stabilized. This app is very simple, so I won’t worry about additional initialization costs for my current investigation.


Once the graph has settled, I take the first snapshot, which will serve as the baseline for comparison. Because I first noticed the issue by opening and closing the ImagePage view a few times, for my investigation I’ll follow the same steps. This time, however, I’ll open and close it a total of ten times. This should help amplify any “spikes” in the data.

Before taking the second snapshot, I first want to get my app into a break state. Unlike snapshots taken while the app is running, snapshots taken while broken have a super-power: you can inspect the values of the individual instances of objects live on the heap. This super-power is only available while you’re still in the same break state that the snapshot was taken in. Once you continue, or take another step, instance inspection won’t be available on that snapshot again.

But, how do I know where to set my breakpoint? No need! Once I’ve completed my repro steps, I can just press the Break All button on the Debug toolbar.


Now that we’re in a break state, I’ll go ahead and take the second snapshot. Once it’s finished, I’ll keep the process paused. Notice the gray arrow to the left of the second snapshot in the table below? That indicates that the object inspection super-power is available for that snapshot as long as I don’t continue, step, or stop debugging.


Snapshot overview table

Let’s take a look at what each snapshot shows us in the overview. From left-to-right:

  • The snapshot’s sequential number. Numbers are reset each debugging session.
  • The process running time when the snapshot was taken.
  • The count of the live objects on the managed heap. In parenthesis is the diff of the count from the preceding snapshot.
  • The size of the live objects on the managed heap, also followed by a diff in parenthesis.

Each blue metric in the table is a link that launches the Heap View for the snapshot. Heap View will be the focus of most of your memory investigations. I’ll start with the live object count diff. By clicking the object count diff link (gold arrow in above image), Heap View opens in diff mode, sorted by the “Count Diff” column. By default, the diff mode compares the chosen snapshot to the one immediately preceding it. If you have more than two snapshots, you can use the “Compare to” dropdown to customize which snapshot to compare against.


There’s quite a bit of data in the Heap View. Depending on your knowledge of the .NET Framework, the types at the top of the table may look completely unfamiliar. Don’t let that discourage you! A very simple strategy lets you start with the types you know best: the ones in your own code. I’ll show you how.

In the top-right corner of the Heap View, there’s a search box. I can quickly narrow down the types in the table by searching for my app’s module name ‘PhotoFilter’.


And there it is, right at the top of the Types table: PhotoFilter.WPF.ImagePage. A total of 10 instances are still alive, despite the fact that the windows hosting the views are long closed. Now, I’ve confirmed the leak, and know one of the players. Unfortunately, I still don’t know why these ImagePage objects are zombies.

Instances, instances

When hovering over the entry for PhotoFilter.WPF.ImagePage in the table, you’ll see an icon appear. This is the Instances view icon. I click it, and navigate to a new view that shows data on the individual instances of ImagePage.


Because this snapshot is super-power enabled, I can inspect each instance, with full DataTip support for complex values.


Inspecting each ImagePage, I confirm that these are the views of the images I clicked on. These should have been cleaned up by the garbage collector, but some object somewhere is holding a reference to each instance. By selecting an instance in the top pane, the Paths to Root will open in the bottom pane. This view shows a bottom-up hierarchy of what objects are holding references that prevent garbage collection. Here, in the Instances view, the tree will auto-expand to show the primary roots. Following these paths usually reveals the culprit. For ImagePage, it’s also worth noting that each instance has the exact same type hierarchy in its Paths to Root. So, for my investigation, a single code fix might be all I need.


Right below PhotoFilter.WPF.ImagePage is a suspicious entry: SelectionChangedEventHandler. Event handler subscription is a well-known cause of leaking objects in .NET. Continuing up the tree, I can see that the event handler belongs to a ListView. My app only has one ListView, on the MainWindow. I know the major players are the ImagePage, a SelectionChangedEventHandler, and the ListView that owns it. At this point, it’s a good idea to take a look at the code. I’ll begin with my own code, the ImagePage code-behind.

Right away, in the ImagePage constructor, I see all the major players come together.


A reference to a ListView is passed to the ImagePage constructor (line 51), and the new instance of ImagePage subscribes to the SelectionChanged event of that ListView (line 56). Looking at the subscribed event handler, _parentList_SelectionChanged, this code implements the feature that updates an open ImagePage view when the selection changes in the ListView on the MainWindow.

An object that subscribes to an event of a longer-lived object needs to explicitly unsubscribe from that event at some point, or else the shorter-lived object will never really die. For PhotoFilter, I decided to override the Window.OnClosed handler, and unsubscribe from the SelectionChangedEventHandler there (line 73).
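Sketched from that description (the exact code lives in the screenshots, so treat this as an approximation), the subscribe/unsubscribe pairing looks roughly like:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;

public partial class ImagePage : Window
{
    private readonly ListView _parentList;

    public ImagePage(ListView parentList)
    {
        InitializeComponent();
        _parentList = parentList;
        // Subscribing here roots this ImagePage in the ListView's
        // event delegate list.
        _parentList.SelectionChanged += _parentList_SelectionChanged;
    }

    protected override void OnClosed(EventArgs e)
    {
        // Unsubscribe so the closed window becomes eligible for collection.
        _parentList.SelectionChanged -= _parentList_SelectionChanged;
        base.OnClosed(e);
    }

    private void _parentList_SelectionChanged(object sender, SelectionChangedEventArgs e)
    {
        // Update the displayed image from the new selection.
    }
}
```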


Now, when I close an ImagePage window, it unsubscribes itself from the ListView.SelectionChanged event. If that event handler was the only reference rooting the object, they should now be cleaned up by the garbage collector.


It’s always good to verify a fix, so I’ll rerun the experiment to make sure memory is getting cleaned up by the garbage collector as expected. Looking at the graph, after restarting the app and opening and closing ImagePage 10 times, this now appears to be exactly what’s happening. Before the fix, process memory was around 350MB. After the fix, it’s now less than 100MB. Problem solved!

Code Quality improvements with MSBuild & Team Build – SonarQube integration

Technical debt is the set of problems in a development effort that make forward progress on customer value inefficient.  Technical debt saps productivity by making code hard to understand, fragile, difficult to validate, and creates unplanned work that blocks progress. Technical debt is insidious.  It starts small and grows over time through rushed changes, lack of context and lack of discipline. Organizations often find that more than 50% of their capacity is sapped by technical debt.


SonarQube is an open source platform that is the de facto solution for understanding and managing technical debt.

Customers have been telling us and SonarSource, the company behind SonarQube, that the SonarQube analysis of .Net apps and integration with Microsoft build technologies needs to be considerably improved.

Over the past few months we have been collaborating with our friends from SonarSource and are pleased to make available a set of integration components that allow you to configure a Team Foundation Server (TFS) Build to connect to a SonarQube server and send the following data, which is gathered during a build under the governance of quality profiles and gates defined on the SonarQube server.

  • results of .Net and JavaScript code analysis
  • code clone analysis
  • code coverage data from tests
  • metrics for .Net and JavaScript

We have initially targeted TFS 2013 and above, so customers can try out these bits immediately with code and build definitions that they already have. We have tried using the above bits with builds in Visual Studio Online (VSO), using an on-premises build agent, but we have uncovered a bug around the discovery of code coverage data which we are working on resolving. When this is fixed we’ll send out an update on this blog. We are also working on integration with the next generation of build in VSO and TFS.

In addition, SonarSource have produced a set of .Net rules, written using the new Roslyn-based code analysis framework, and published them in two forms: a NuGet package and a VSIX. With this set of rules, the analysis that is done as part of the build can also be done live inside Visual Studio 2015, exploiting the new Visual Studio 2015 code analysis experience.

The source code for the above has been made available; specifically:

We are also grateful to our ever-supportive ALM Rangers who have, in parallel, written a SonarQube Installation Guide, which explains how to set up a production ready SonarQube installation to be used in conjunction with Team Foundation Server 2013 to analyse .Net apps. This includes reference to the new integration components mentioned above.


This is only the start of our collaboration. We have lots of exciting ideas on our backlog, so watch this space.

As always, we’d appreciate your feedback on how you find the experience and ideas about how it could be improved to help you and your teams deliver higher quality and easier to maintain software more efficiently.

If you have any technical issues, then please make your way over to the Q&A site, tagging your questions with sonarqube, and optionally tfs, c#, .net, etc. For the current list of sonarqube questions, see

Smart Unit Testing With Visual Studio 2015

Smart Testing (formerly Pex & Moles) has a new name now – IntelliTest. It explores your .NET code to generate test data and a suite of unit tests. For every statement in the code, a test input is generated that will execute that statement. A case analysis is performed for every conditional branch in the code. For example, if statements, assertions, and all operations that can throw exceptions are analyzed. This analysis is used to generate test data for a parameterized unit test for each of your methods, creating unit tests with maximum code coverage. Then you bring your domain knowledge to improve these unit tests.
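For instance, a small method like the following (my own illustrative example, not from the product documentation) has three reachable branches, so IntelliTest would aim to generate at least three inputs, one reaching each return statement:

```csharp
public static class Examples
{
    public static string Classify(int value)
    {
        if (value < 0)
            return "negative";   // branch 1: e.g. generated input -1
        if (value == 0)
            return "zero";       // branch 2: generated input 0
        return "positive";       // branch 3: e.g. generated input 1
    }
}
```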

When you run IntelliTest, you can easily see which tests are failing and add any necessary code to fix them. You can select which of the generated tests to save into a test project to provide a regression suite. As you change your code, rerun IntelliTest to keep the generated tests in sync with your code changes.

Get started with IntelliTest

You must use Visual Studio Enterprise.

Explore: Use IntelliTest to explore your code paths and generate test data
  1. Open your solution in Visual Studio. Then open the class file that has methods you want to test.
  2. Right-click in a method in your code and choose Run IntelliTest to generate unit tests for all the code paths in your method.

    Right-click in your method to generate unit tests

    A parameterized unit test is generated for this method. The test data is created to exercise the code paths in the method. IntelliTest runs your code many times with different inputs. Each run is represented in the table showing the input test data and the resulting output or exception.

    Exploration Results window is displayed with tests

    To generate unit tests for all the public methods in a class, simply right-click in the class rather than a specific method. Then choose Run IntelliTest. Use the drop-down list in the Exploration Results window to display the unit tests and the input data for each method in the class.

    Select the test results to view from the list

    For tests that pass, check that the reported results in the result column match your expectations for your code. For tests that fail, fix your code and add exception handling if necessary. Then rerun IntelliTest to see if your fixes generated more test data from different code paths.

Persist: Save test data and unit tests as a regression suite
  • Select the data rows that you want to save with the parameterized unit test into a test project.

    Select tests; right-click and choose Save

    You can view the test project and the parameterized unit test that has been created with a PexMethod attribute. (The individual unit tests, corresponding to each of the rows, are saved in the .g.cs file in the test project.) The unit tests are created using the Visual Studio test framework, so you can run them and view the results from Test Explorer just as you would for any unit tests that you created manually.

    Open class file in test method to view unit test

    Any necessary references are also added to the test project.

    If the method code changes, rerun IntelliTest to keep the unit tests in sync with the changes.
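For reference, a saved parameterized test typically has a shape like this (Calculator is a hypothetical class under test here; the names will differ in your project):

```csharp
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(Calculator))]
public partial class CalculatorTest
{
    // The parameterized unit test; IntelliTest supplies the argument
    // values, and the individual saved cases end up in the .g.cs file.
    [PexMethod]
    public int AddTest(int a, int b)
    {
        return Calculator.Add(a, b);
    }
}
```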

Assist: Use IntelliTest to find issues in your code
  1. If you have more complex code, IntelliTest can help you discover any issues for unit testing. For example, suppose you have a method that has an interface as a parameter, and there is more than one class that implements that interface. After you run IntelliTest, warnings are displayed for this issue. View the warnings to decide what you want to do.

    Right-click method and choose Smart Unit Tests


  2. After you investigate the code and understand what you want to test, you can fix a warning to choose which classes to use to test the interface.

    Right-click the warning and choose Fix

    This choice is added into the PexAssemblyInfo.cs file.

    [assembly: PexUseType(typeof(Camera))]


  3. Now you can rerun IntelliTest to generate a parameterized unit test and test data just using the class that you fixed.

    Rerun Smart Unit Tests to generate the test data

Q & A

Q: Can you use IntelliTest for unmanaged code?

A: No, IntelliTest only works with managed code, because it analyzes the code by instrumenting the MSIL instructions.

Q: When does a generated test pass or fail?

A: It passes like any other unit test if no exceptions occur. It fails if any assertion fails, or if the code under test throws an unhandled exception.

If you have a test that can pass if certain exceptions are thrown, you can set one of the following attributes based on your requirements at the test method, test class or assembly level:

  • PexAllowedExceptionAttribute
  • PexAllowedExceptionFromTypeAttribute
  • PexAllowedExceptionFromTypeUnderTestAttribute
  • PexAllowedExceptionFromAssemblyAttribute
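As a hedged sketch of the first attribute at the test-method level (Parser is a hypothetical class under test):

```csharp
using System;
using Microsoft.Pex.Framework;

[PexClass]
public partial class ParserTest
{
    // Runs that throw ArgumentNullException are treated as passing
    // instead of failing.
    [PexMethod]
    [PexAllowedException(typeof(ArgumentNullException))]
    public void ParseTest(string input)
    {
        Parser.Parse(input);
    }
}
```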

Q: Can I add assumptions to the parameterized unit test?

A: Yes, use assumptions to specify which test data is not required for the unit test for a specific method. Use the PexAssume class to add assumptions. For example, you can add an assumption that the lengths variable is not null like this.
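A minimal sketch of that assumption (assuming the parameterized test takes an int[] lengths parameter, as in this example):

```csharp
using Microsoft.Pex.Framework;

public partial class LengthsTest
{
    [PexMethod]
    public void CapacityTest(int[] lengths)
    {
        // Discard any generated input where lengths is null; IntelliTest
        // drops test data that violates an assumption.
        PexAssume.IsNotNull(lengths);
        // ... exercise the code under test with lengths ...
    }
}
```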


If you add an assumption and rerun IntelliTest, the test data that is no longer relevant will be removed.

Q: Can I add assertions to the parameterized unit test?

A: Yes, IntelliTest will check that what you are asserting in your statement is in fact correct when it runs the unit tests. Use the PexAssert class to add assertions. For example, you can add an assertion that two variables are equal.

PexAssert.AreEqual(a, b);

If you add an assertion and rerun IntelliTest, it will check that your assertion is valid and the test fails if it is not.

Q: What testing frameworks does IntelliTest support?

A: Currently only MSTest is supported.

Q: Can I learn more about how the tests are generated?

A: Yes, read this blog post about the model and process.

New Features for TFS @ Visual Studio Online and a Free eBook from the ALM Rangers Team

Visual Studio Online (VSO) is available to teams 24×7, and we explore it as we evolve, innovate, and continuously fine-tune our processes. The recent Managing Agile Open-Source Software Projects with Microsoft Visual Studio Online eBook is already dated, thanks to the cool features that are released as part of the regular service updates.

Here are the top features I noted today while working on the setup of the VSO Extensibility Ecosystem, App Sample and Guidance project, which will be covered in more detail in one of the upcoming posts.

  1. After creating a new team project, you can fast-track to your Kanban board to get organised, or to your source control system to manage your code.
  2. Your backlog visually indicates which work items are managed by your team or by another team. For example, 10516-10528 are owned by another team and cannot be reordered on the ALM team board.
    If we show the area path, this becomes even more evident.
  3. The Value Area allows us to define a business or architecture (runway) value for each work item, as used by the Scaled Agile Framework (SAFe).
  4. We can opt in to the Epic backlog level and stop decorating some of our Features with an Epic tag to simulate an Epic.
  5. Board columns can be customised, allowing us to introduce our favourite “in flight” analogy for active projects.
  6. Definition of Done (DoD) can be specified for each column; instead of searching for the DoD in a document or work item, you simply click the icon on the column.
  7. Part of the customisation is the ability to split columns into Doing and Done.


Microsoft Development News – Visual Studio 2015 RC and VS Code Multiplatform

Today at the Build conference, the release of Visual Studio 2015 RC was announced. This version includes many new features and updates, such as tools for Universal Windows app development; cross-platform mobile development for iOS, Android, and Windows, including Xamarin, Apache Cordova, and Unity; portable C++ libraries; native-activity C++ templates for Android; and more.

And now, you can watch our great Build 2015 session recordings as they become available, or catch-up on your favorite features with 40+ of our brand new short Connect(“on-demand”); feature videos.

To install the most recent version of Visual Studio 2015, use the following link.

Download: Visual Studio 2015 RC

To learn more about the most recent version of TFS, see the Team Foundation Server RC release notes.

Windows Holographic is another announcement, regarding the vision of HoloLens and integration with everything from IoT to home media.


Important: Most applications you build with Visual Studio 2015 RC are considered “go-live” and can be redistributed and used in production settings as outlined in the license agreement. However, those that are built for Windows 10 cannot be distributed or uploaded to the Windows Store. Instead, you will have to rebuild applications built for Windows 10 by using the final version of Visual Studio 2015 before submitting to the Windows Store. Also, please note that ASP.NET 5 is still in preview and is not recommended for production use at this time. You are free to use ASP.NET 4.6 in production.

Last November, Microsoft said that it would bring some of the core features of its .NET platform — which has traditionally been Windows-only — to Linux and Mac. Today, at its Build developer conference, the company announced its first full preview of the .NET Core runtime for Linux and Mac OS X.

In addition, Microsoft is making the release candidate of the full .NET framework for Windows available to developers today.

The highlight here, though, is obviously the release of .NET Core for platforms other than Windows. As Microsoft VP of its developer division S. “Soma” Somasegar told me earlier this week, the company now aims to meet developers where they are — instead of necessarily making them use Windows — and .NET Core is clearly part of this move.

Microsoft says it is taking .NET cross-platform in order to build and leverage a bigger ecosystem for it. As the company also noted shortly after the original announcement, it decided that, to take .NET cross-platform, it had to do so as an open source project. To shepherd it going forward, Microsoft also launched the .NET Foundation last year.

While it’s still somewhat of a shock for some to see Microsoft active in the open-source world, it’s worth remembering that the company has made quite a few contributions to open source projects lately.

Even before the .NET framework announcement, the company had already open-sourced the Roslyn .NET compiler platform. Earlier this year, Microsoft shuttered its MS Open Tech subsidiary, which was mostly responsible for its open source projects, in order to bring these projects into the overall Microsoft fold.

Free TFS eBook – Managing Agile Open-Source Software Projects with Microsoft Visual Studio Online

We’re happy to announce the release of our newest free ebook, Managing Agile Open-Source Software Projects with Microsoft Visual Studio Online (ISBN 9781509300648), by Brian Blackman, Gordon Beeming, Michael Fourie, and Willy-Peter Schaub.

With this ebook, the ALM Rangers share their best practices in managing solution requirements and shipping solutions in an agile environment, an environment where transparency, simplicity, and trust prevail. The ebook is for Agile development teams and their Scrum Masters who want to explore and learn from the authors’ “dogfooding” experiences and their continuous adaptation of software requirements management. Product Owners and other stakeholders will also find value in this ebook by learning how they can support their Agile development teams and by gaining an understanding of the constraints of open-source community projects.

Download all formats (PDF, Mobi and ePub) at the Microsoft Virtual Academy.

Below you’ll find the ebook’s Foreword and a few helpful sections from its Introduction:


The ALM Rangers are a special group for several reasons. Not only are they innovative and focused on the real world, providing value-added solutions for the Visual Studio developer community, but they live and work in all four corners of the globe. The ALM Rangers are a volunteer organization. Talk about dedication! When we were offered the opportunity to write a foreword for this book, we knew we’d be part of something special.

The ALM Rangers don’t pontificate that they’ve found the one true way. This is practical advice and examples for producing great software by those who’ve done it and–most importantly–are still innovating and coding. Readers will find that they have virtual coworkers who share their experiences with honesty and humor, revealing learnings and what has worked for them. This doesn’t mean that this book lacks prescriptive guidance. The Rangers have embraced Visual Studio Online as their one and only home. They are evolving with the product, embracing open source software in GitHub to learn how successful OSS projects are run there and what the community values most. They’ve created an ecosystem that identifies the “low hanging fruit” and tracks it from idea to solution, and they never fail to recognize the Rangers and the ALM VPs who dedicate their personal time and passion to their OSS projects.

The extensive guidance shared here is not an end-to-end plan for everyone, although it could be used as a definitive guide for some teams. One of the many assets of this book is its organization into practical walkthroughs of typical ALM Ranger projects from idea to solution, presented as an easy to consume reference. Other bonuses are an appendix to quick-start your own project and reference checklists to keep you on track.

Among the authors, this book was called the “v1 dawn edition.” True to their core value of “learn from and share all experiences,” the ALM Rangers are always mindful that producing great software means continuous refinements from new learnings and feedback and that there will be more versions of this book. But first we invite you to immerse yourself in Managing Agile Open-Source Software Projects with Microsoft Visual Studio Online.

In the true spirit of Agile, ongoing innovation,

Sam Guckenheimer
Clemri Steyn

This book assumes that you have at least a minimal understanding of Agile, Lean, and Scrum development concepts and are familiar with Team Foundation Server (TFS) and Visual Studio Online (VSO). To go beyond this book and expand your knowledge of Agile practices or Visual Studio technologies, MSDN and other Microsoft Press books offer both complete introductions and comprehensive information.

This book might not be for you if …
This book might not be for you if you are looking for an in-depth discussion focused on the process, development, or architecture of software requirements, tooling, or practices.

Similarly, if you are looking for source code or guidance on ALM, DevOps, or proven and official frameworks such as Agile, Scrum and Kanban, this book will not be fully relevant, and we recommend that you consider these publications instead:

Release Management and DevOps with TFS 2013 and future – FAQ

The subject of operations (DevOps) has meanwhile become a hot aspect of the ALM of any application from the point of view of delivery, quality, cost, and reliability. This is the first part of a series on the topic.

I will use a dummy MVC web application that is basically the Visual Studio 2013 template plus a small modification to the web.config, so that we can demo the workflow. The same flow can be applied, with varying complexity, to any kind of application (native, multi-platform, managed, or hybrid combinations).

The example I have shows a message on the home page from a setting in the web.config. This setting varies based on the environment (dev, prod, etc.) the application is running on, and it is the value that will be changed by web.config transformations and Release Management.

Although I’m using an AppSettings key, you can use it for other settings and connection strings as well. The same rules apply.

I made the following modifications:
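As a minimal sketch of such a modification (the key name EnvLabel and the message text are assumptions for illustration, not the post's actual values), the web.config could carry the setting like this:

```xml
<!-- web.config (fragment): hypothetical setting read by the home page -->
<configuration>
  <appSettings>
    <!-- EnvLabel is an assumed key name; it holds the message shown on the home page -->
    <add key="EnvLabel" value="Running locally from Visual Studio" />
  </appSettings>
</configuration>
```

The home page would then read this value (for example via `ConfigurationManager.AppSettings["EnvLabel"]`) and render it.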




The application should show the message from the web.config setting as shown below (running from Visual Studio):

Setting up web.config transforms

I’ll not go into too much detail on how to set up web.config transforms or what their transformation syntax is, as they are very well explained on MSDN.

The strategy that I used here was to structure my transformations so that:

  1. The original web.config file has the settings needed for the developer to run in debug mode in Visual Studio
  2. I added a new configuration (Release) that has the web.config transformations to put tokens in the keys/values that need to be changed later by Release Management:

The same transformed config, using the Release configuration, will be used by Release Management (RM) to deploy to all the environments in the release path. RM will then substitute the “__EnvLabel__” token with the appropriate value for each target deployment environment.
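A minimal sketch of such a Release transform (again assuming the hypothetical EnvLabel key) could use the standard XDT SetAttributes/Match syntax:

```xml
<!-- Web.Release.config (fragment): replaces the dev value with an RM token -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- After the transform runs at build time, the value becomes the __EnvLabel__ token -->
    <add key="EnvLabel" value="__EnvLabel__"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```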

TFS Build

You can use any template for the build, including the RM template to trigger a release directly from the build.

The trick here is to force the web.config transformation to happen during build, so you end up with a web.config that has the tokens RM is expecting. By default, web.config transformations only happen during deployment.

To enable this behaviour, add the following arguments to MSBuild:

/p:UseWPP_CopyWebApplication=true /p:PipelineDependsOnBuild=false

Also, make sure that you’re building the correct configuration that has the transforms to add the token (in our example, Any CPU and Release):

Once the build finishes, the output should look like this (notice that the web.config has been transformed to include the token):

Release Path setup

Now, we have to take that config and replace the tokens with the proper values according to the target environment. In this example, I’m going to use an Agent-based setup. I’ll update this later to also include an Agent-less deployment using PowerShell Desired State Configuration (DSC).

First, we configure the Release Management Server, the Release Management Client, and a deployment agent on the target server. You can find more detailed information on setting up those parts here: Install Release Management Server and Client, Install Deployment Agents.

I used the same target server to simulate both DEV and PROD IIS environments, by using two different ports:

Then I configured a server in RM to point to that box:

Next I configured the two environments, DEV and PROD:

The next step is to configure a release path with a DEV -> PROD deployment flow:

The setup above is, of course, overly simplistic for demo purposes. What you should keep in mind is that you can configure who can approve (individuals or groups), who can validate, and whether those steps are automated. In my example, I don’t need approvals for DEV but need one for PROD. When the deployment workflow is initiated, DEV will be deployed automatically, while PROD will wait for an approval before proceeding.

The next step is to configure the components to be deployed, in this case our web application:

I chose to use the “builds with application” option, since I’m going to use the build definition that will be defined in the release path.

Next, I’ll set up the deployment tool. In my case, XCopy, but you can use MSDeploy or another tool as appropriate:

The secret of the web.config token replacement lies here, in the Configuration Variables. We specify that the replacement happens “Before Installation”, so that the config file is updated before being copied to the target server. We also specify a wildcard to tell RM which files to scan for tokens: in this case, *.config.
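RM performs this substitution internally; purely as an illustration of the effect (not of how RM implements it), replacing tokens in *.config files amounts to something like:

```shell
# Create a sample tokenized config, as it would come out of the build
printf '<add key="EnvLabel" value="__EnvLabel__" />\n' > web.config

# Replace the token with the value configured for the target environment (e.g. DEV)
sed -i 's/__EnvLabel__/DEV/g' web.config

# The token is now gone, replaced by the environment value
cat web.config
```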

Next we define the release template. In this example:

  • You can see the DEV -> PROD workflow
  • Servers in the deployment (I only have one in this example, but you will most likely have several)
  • Components: the application being deployed (you might have to add it manually by right-clicking Components)
  • I included optional steps to back up the current site and to roll back in case the deployment fails (more of a best practice; they’re not strictly necessary)
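The optional backup/rollback steps follow a simple pattern. As a conceptual sketch in plain shell commands (folder names are made up, and RM would run the real steps through its XCopy/PowerShell tools):

```shell
set -e
mkdir -p site backup

# Current site content before the new deployment
echo "v1" > site/index.html

# 1. Back up the current site
cp -r site backup/site-prev

# 2. Deploy the new version; if it fails, restore the backup
if ! echo "v2" > site/index.html; then
    rm -rf site && cp -r backup/site-prev site   # rollback
fi
```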

Notice that the “EnvLabel” token we specified before gets a value here, depending on the environment, along with the other variables like target and backup folders:


Let’s get some action going!

We initiate the release manually, choosing the template we configured previously, selecting PROD as the last stage, and selecting the last build as the one to deploy:

After a while we should see the following results:

Notice that the deployment to DEV was completed, but deployment to PROD is waiting for approval.

DEV has been successfully deployed:

While PROD has not yet:

The reason is that we’re waiting for approval. Let’s go ahead and open the approval window using the RM web client:

Clicking “Approve” for the selected release, we get the following dialog:

If we want to approve but delay the deployment until later (an off-hours deployment), we can click “Deferred Deployment” and select a day/time for when the deployment will execute:

After approval is given, the workflow resumes and finishes deploying to PROD:

There you have it!