Entity Framework 6 is Feature Complete, Including Async – RTM Later This Year

I have tried to put together some information related to the release of the RC of EF6.

 

What is New in EF6

This is the complete list of new features in EF6.

Tooling

Our focus with the tooling has been on adding EF6 support and enabling us to easily ship out-of-band between releases of Visual Studio.

The tooling itself does not include any new features, but most of the new runtime features can be used with models created in the EF Designer.

Runtime

The following features work for models created with Code First or the EF Designer:

  • Async Query and Save adds support for the task-based asynchronous patterns that were introduced in .NET 4.5. We’ve created a walkthrough and a feature specification for this feature (a combined sketch of several of these runtime features follows this list).
  • Connection Resiliency enables automatic recovery from transient connection failures. The feature specification shows how to enable this feature and how to create your own retry policies.
  • Code-Based Configuration gives you the option of performing configuration – that was traditionally performed in a config file – in code. We’ve created an overview with some examples and a feature specification.
  • Dependency Resolution introduces support for the Service Locator pattern and we’ve factored out some pieces of functionality that can be replaced with custom implementations. We’ve created a feature specification and a list of services that can be injected.
  • Interception/SQL logging provides low-level building blocks for interception of EF operations with simple SQL logging built on top. We’ve created a feature specification for this feature and Arthur Vickers has created a multi-part blog series covering this feature.
  • Testability improvements make it easier to create test doubles for DbContext and DbSet. We’ve created walkthroughs showing how to take advantage of these changes using a mocking framework or writing your own test doubles.
  • Enums, Spatial and Better Performance on .NET 4.0 – By moving the core components that used to be in the .NET Framework into the EF NuGet package we are now able to offer enum support, spatial data types and the performance improvements from EF5 on .NET 4.0.
  • DbContext can now be created with a DbConnection that is already opened, which enables scenarios where it is helpful for the connection to be open when creating the context (such as sharing a connection between components where you cannot guarantee the state of the connection).
  • Default transaction isolation level is changed to READ_COMMITTED_SNAPSHOT for databases created using Code First, potentially allowing for more scalability and fewer deadlocks.
  • DbContext.Database.UseTransaction and DbContext.Database.BeginTransaction are new APIs that enable scenarios where you need to manage your own transactions.
  • Improved performance of Enumerable.Contains in LINQ queries.
  • Significantly improved warm-up time (view generation) – especially for large models – as the result of contributions from AlirezaHaghshenas and VSavenkov.
  • Pluggable Pluralization & Singularization Service was contributed by UnaiZorrilla.
  • Improved Transaction Support updates the Entity Framework to provide support for a transaction external to the framework as well as improved ways of creating a transaction within the framework. See this feature specification for details.
  • Entity and complex types can now be nested inside classes.
  • Custom implementations of Equals or GetHashCode on entity classes are now supported. See the feature specification for more details.
  • DbSet.AddRange/RemoveRange were contributed by UnaiZorrilla and provide an optimized way to add or remove multiple entities from a set.
  • DbChangeTracker.HasChanges was contributed by UnaiZorrilla and provides an easy and efficient way to see if there are any pending changes to be saved to the database.
  • SqlCeFunctions was contributed by ErikEJ and provides a SQL Compact equivalent to the SqlFunctions.
  • Extensive API changes as a result of polishing the design and implementation of new features. In particular, there have been significant changes in Custom Code First Conventions and Code-Based Configuration. We’ve updated the feature specs and walkthroughs to reflect these changes.
  • EF Designer now supports EF6 in projects targeting .NET Framework 4. This limitation from EF6 Beta 1 has now been removed.
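
To make a few of the runtime features above concrete, here is a minimal sketch (the Blog model and context are hypothetical, not from the EF team's walkthroughs) that combines async query/save, SQL logging via Database.Log, DbSet.AddRange, and an explicit transaction via Database.BeginTransaction:

using System;
using System.Data.Entity;
using System.Linq;
using System.Threading.Tasks;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

class Program
{
    static void Main()
    {
        RunAsync().Wait();
    }

    static async Task RunAsync()
    {
        using (var db = new BlogContext())
        {
            // Interception/SQL logging: write every generated SQL statement to the console.
            db.Database.Log = Console.Write;

            // DbSet.AddRange: add multiple entities in one call.
            db.Blogs.AddRange(new[]
            {
                new Blog { Name = "EF6 Async" },
                new Blog { Name = "EF6 Logging" }
            });

            // Async save, using the .NET 4.5 task-based pattern.
            await db.SaveChangesAsync();

            // Async query.
            var efBlogs = await db.Blogs
                                  .Where(b => b.Name.Contains("EF6"))
                                  .ToListAsync();

            // Explicit transaction management with Database.BeginTransaction.
            using (var transaction = db.Database.BeginTransaction())
            {
                db.Blogs.Add(new Blog { Name = "Transactional" });
                await db.SaveChangesAsync();
                transaction.Commit();
            }
        }
    }
}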

The following features apply to Code First only:

  • Custom Code First Conventions allow you to write your own conventions to help avoid repetitive configuration. We provide a simple API for lightweight conventions as well as some more complex building blocks to allow you to author more complicated conventions. We’ve created a walkthrough and a feature specification for this feature (see the sketch after this list).
  • Code First Mapping to Insert/Update/Delete Stored Procedures is now supported. We’ve created a feature specification for this feature.
  • Idempotent migrations scripts allow you to generate a SQL script that can upgrade a database at any version up to the latest version. The generated script includes logic to check the __MigrationsHistory table and only apply changes that haven’t been previously applied. Use the following command to generate an idempotent script.
    Update-Database -Script -SourceMigration $InitialDatabase
  • Configurable Migrations History Table allows you to customize the definition of the migrations history table. This is particularly useful for database providers that require the appropriate data types etc. to be specified for the Migrations History table to work correctly. We’ve created a feature specification for this feature.
  • Multiple Contexts per Database removes the previous limitation of one Code First model per database when using Migrations or when Code First automatically created the database for you. We’ve created a feature specification for this feature.
  • DbModelBuilder.HasDefaultSchema is a new Code First API that allows the default database schema for a Code First model to be configured in one place. Previously the Code First default schema was hard-coded to “dbo” and the only way to configure the schema to which a table belonged was via the ToTable API.
  • DbModelBuilder.Configurations.AddFromAssembly method was contributed by UnaiZorrilla. If you are using configuration classes with the Code First Fluent API, this method allows you to easily add all configuration classes defined in an assembly.
  • Custom Migrations Operations were enabled by a contribution from iceclow and this blog post provides an example of using this new feature.
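
As a hedged illustration of the lightweight conventions API and HasDefaultSchema mentioned above (the schema name and the "Key" naming pattern here are my own examples, not the EF team's):

using System.Data.Entity;

public class MyContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Code First default schema for the whole model (instead of the hard-coded "dbo").
        modelBuilder.HasDefaultSchema("sales");

        // Lightweight convention: map every string property to nvarchar(200)
        // instead of configuring HasMaxLength on each property individually.
        modelBuilder.Properties<string>()
                    .Configure(p => p.HasMaxLength(200));

        // Convention keyed off a naming pattern: any property named "Key"
        // becomes the primary key of its entity.
        modelBuilder.Properties()
                    .Where(p => p.Name == "Key")
                    .Configure(p => p.IsKey());
    }
}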

How To Troubleshoot TFS Build Server Errors

Sometimes there are helpless situations where you think you have tried every possible suggestion from the search engines to bring the build server back, but it just won’t work. Before hunting around for a solution it is important to understand what the problem is. If the error messages in the build logs don’t seem to help, you can always enable tracing on the build server to get more information on what could be the main cause of the failure.

Here is how to enable tracing on the TFS client, the TFS 2010/11 server, and the build server.


Enable TFS Tracing on the Client Machine
  • On the client machine, shut down Visual Studio and navigate to C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE.


  • Search for devenv.exe.config, make a backup copy of the config file, then right-click the file and select Edit from the context menu. If the file is not already there, create it.


  • Edit devenv.exe.config by adding the code snippet below just before the closing </configuration> tag.
<system.diagnostics>
  <switches>
    <add name="TeamFoundationSoapProxy" value="4" />
    <add name="VersionControl" value="4" />
  </switches>
  <trace autoflush="true" indentsize="3">
    <listeners>
      <add name="myListener"
           type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           initializeData="c:\tf.log" />
      <add name="perfListener"
           type="Microsoft.TeamFoundation.Client.PerfTraceListener,Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    </listeners>
  </trace>
</system.diagnostics>
  • The initializeData path above (c:\tf.log) is where the log file will be created. If the folder is not already there, create it.
  • Start Visual Studio, and after a bit of activity you should see the new log file being created in the folder specified in the config file.
Enable Tracing on Team Foundation Server 2010/2011
  • On the Team Foundation Server, navigate to C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services, right-click web.config and select Edit from the context menu.


  • Search for the <appSettings> node in the config file and set the value of the key ‘traceWriter’ to true, as in the snippet below.
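
The relevant part of web.config should then look something like this (the surrounding keys, which vary by installation, are omitted):

<appSettings>
  <!-- ... existing keys ... -->
  <add key="traceWriter" value="true" />
</appSettings>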


  • In the <system.diagnostics> section, change the value of each switch from 0 to 4 to raise the trace level to maximum, so that diagnostic-level trace information is written.


  • Restart the TFS application pool to force this change to take effect. The application pool restart will impact anyone using the TFS server at the time. Note – it is recommended that you do not make changes like this to a TFS production application server; this can have serious consequences and can even jeopardize the installation of your server.


Enable Tracing on Build Controller/Agents
  • Log on to the build controller/agent and navigate to the directory C:\Program Files\Microsoft Team Foundation Server 2010\Tools.


  • Look for the configuration file ‘TFSBuildServiceHost.exe.config’. If it is not already there, create a new text file and rename it to ‘TFSBuildServiceHost.exe.config’.


  • To enable tracing, uncomment the <system.diagnostics> section, or paste in the snippet below if it is not already there.
<configuration>
  <system.diagnostics>
    <switches>
      <add name="BuildServiceTraceLevel" value="4" />
    </switches>
    <trace autoflush="true" indentsize="4">
      <listeners>
        <add name="myListener"
             type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
        <remove name="Default" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
  • The initializeData path above (c:\logs\TFSBuildServiceHost.exe.log) is where the log file will be created. If the folder is not already there, create it, and make sure that the account running the build service has permission to write to it.


  • Restart the build controller/agent service from the administration console (or run net stop tfsbuildservicehost followed by net start tfsbuildservicehost) for the new setting to be picked up.


Other Info

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN.

How to Diagnose .NET Memory Issues in Production using Visual Studio

One of the issues that frequently affects .NET applications running in production environments is a problem with memory use, which can impact both the application and potentially the entire machine. To help with this, we’ve introduced a feature in Visual Studio 2013 to help you understand the .NET memory use of your applications from .dmp files collected on production machines. In this post, I’ll first discuss common types of memory problems and why they matter. I’ll then walk through how to collect data, and finally describe how to use the new functionality to solve memory problems in your applications.

Why worry about memory problems

.NET is a garbage collected runtime, which means most of the time the framework’s garbage collector takes care of cleaning up memory and the user never notices any impact. However, when an application has a problem with its memory this can have a negative impact on both the application and the machine.

  1. Memory leaks are places in an application where objects are meant to be temporary, but instead are left permanently in memory. In a garbage-collected runtime like .NET, developers do not need to explicitly free memory as they do in a language like C++. However, the garbage collector can only free memory that is no longer being used, which it determines based on whether the object is reachable (referenced) by other objects that are still active in memory. So a memory leak occurs in .NET when an object is still reachable from the roots of the application but should not be (e.g. a static event handler references an object that should be collected; see the sketch after this list). When memory leaks occur, memory usually increases slowly over time until the application starts to exhibit poor performance. Physical resource leaks are a subcategory of memory leaks where a physical resource such as a file or OS handle is accidentally left open or retained. This can lead to errors later in execution as well as increased memory consumption.
  2. Inefficient memory use is when an application is using significantly more memory than intended at any given point in time, but the memory consumption is not the result of a leak. An example of inefficient memory use in a web application is querying a database and bringing back significantly more results than are needed by the application.
  3. Unnecessary allocations. In .NET, allocation is often quite fast, but the overall cost can be deceptive, because the garbage collector (GC) needs to clean it up later. The more memory that gets allocated, the more frequently the GC will need to run. These GC costs are often negligible to the program’s performance, but for certain kinds of apps these costs can add up quickly and make a noticeable impact on the performance of the app.
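
To make the static event handler example in point 1 concrete, here is a minimal sketch (the Publisher and Subscriber types are hypothetical) of how a static event roots objects that were meant to be temporary:

using System;

static class Publisher
{
    // A static event is reachable from a root for the life of the AppDomain,
    // so every handler (and the object it belongs to) stays reachable too.
    public static event EventHandler Updated;

    public static void RaiseUpdated()
    {
        var handler = Updated;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

class Subscriber
{
    private readonly byte[] _data = new byte[1024 * 1024]; // ~1 MB payload

    public Subscriber()
    {
        // Subscribing without ever unsubscribing leaks this instance:
        // the static event keeps a reference, so the GC can never collect it.
        Publisher.Updated += OnUpdated;
    }

    private void OnUpdated(object sender, EventArgs e) { }
}

class Program
{
    static void Main()
    {
        for (int i = 0; i < 100; i++)
            new Subscriber(); // each iteration strands ~1 MB in memory

        GC.Collect();
        Console.WriteLine("Bytes still allocated: " + GC.GetTotalMemory(true));
    }
}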

If an application suffers from a memory problem, there are three common symptoms that may affect end users.

  1. The application can crash with an “Out of Memory” exception. This is a relatively common problem for 32-bit applications because they are limited to only 4 GB of total virtual address space. It is, however, less common for 64-bit applications, because they are given much higher virtual address space limits by the operating system.
  2. The application will begin to exhibit poor performance. This can occur because the garbage collector is running frequently and competing for CPU resources with the application, or because the application constantly needs to move memory between RAM (physical memory) and the hard drive (virtual memory), which is called paging.
  3. Other applications running on the same machine exhibit poor performance. Because the CPU and physical memory are both system resources, if an application is consuming a large amount of these resources, other applications are left with insufficient amounts and will exhibit negative performance.

In this post I’ll be covering a new feature added to Visual Studio 2013 intended to help identify memory leaks and inefficient memory use (the first two problem types discussed above). If you are interested in tools to help identify problems related to unnecessary allocations, see .NET Memory Allocation Profiling with Visual Studio 2012.

Collecting the data

To understand how the new .NET memory feature for .dmp files helps us to find and fix memory problems let’s walk through an example. For this purpose, I have introduced a memory leak when loading the Home page of a default MVC application created with Visual Studio 2013 (click here to download). However to simulate how a normal memory leak investigation works, we’ll use the tool to identify the problem before we discuss the problematic source code.

The first thing I am going to do is launch the application without debugging to start it in IIS Express. Next I am going to open Windows Performance Monitor to track the memory usage during my testing of the application. Then I’ll add the “.NET CLR Memory -> # Bytes in all Heaps” counter, which will show me how much memory I’m using in the .NET runtime (~3.5 MB at this point). You may use alternate or additional tools in your environment to detect when memory problems occur; I’m simply using Performance Monitor as an example. The important point is that a memory problem is detected that you need to investigate further.

[Screenshot: home page with the default Performance Monitor counters]

The next thing I’m going to do is refresh the home page five times to exercise the page load logic. After doing this I can see that my memory has increased from ~3.5 MB to ~13 MB, which seems to indicate that I may have a problem with my application’s memory, since I would not expect multiple page loads by the same user to result in a significant increase in memory.


For this example I’m going to capture a dump of iisexpress.exe using ProcDump and name it “iisexpress1.dmp” (notice I need to use the -ma flag to capture the full process memory; otherwise I won’t be able to analyze the memory). You can read about alternate tools for capturing dumps in what is a dump and how do I create one?
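
For reference, the ProcDump invocation I’m describing looks something like this (assuming a single iisexpress.exe process is running):

procdump -ma iisexpress.exe iisexpress1.dmp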


Now that I’ve collected a baseline snapshot, I’m going to refresh the page an additional 10 times. After the additional refreshes I can see that my memory use has increased to ~21 MB. So I use procdump.exe again to capture a second dump, which I’ll call “iisexpress2.dmp”.


Now that we’ve collected the dump files, we’re ready to use Visual Studio to identify the problem.

Analyzing the dump files

The first thing we need to do to begin analysis is open a dump file. In this case I’m going to choose the most recent dump file, “iisexpress2.dmp”.

Once the file is open, I’m presented with the dump file summary page in Visual Studio, which gives me information such as when the dump was created, the architecture of the process, the version of Windows, and the version of the .NET runtime (CLR version) the process was running. To begin analyzing the managed memory, click “Debug Managed Memory” in the “Actions” box in the top right.


This will begin the analysis.


Once analysis completes I am presented with Visual Studio 2013’s brand new managed memory analysis view. The window contains two panes: the top pane contains a list of the objects in the heap, grouped by type name, with columns that show their count and total size. When a type or instance is selected in the top pane, the bottom pane shows the objects that reference the selected type or instance, preventing it from being garbage collected.


[Note: At this point Visual Studio is in debug mode since we are actually debugging the dump file, so I have closed the default debug windows (watch, call stack, etc.) in the screenshot above.]

Thinking back to the test scenario I was running, there are two issues I want to investigate. First, 16 page loads increased my memory by ~18 MB, which appears to be an inefficient use of memory, since each page load should not use over 1 MB. Second, as a single user I’m requesting the same page multiple times, which I expect to have a minimal effect on the process memory; however, the memory is increasing with every page load.

Improving the memory efficiency

First, I want to see if I can make page loading more memory efficient, so I’ll start looking at the objects that are using the most memory in the type summary (top pane) of the memory analysis window.

Here I see that Byte[] is the type using the most memory, so I’ll expand the System.Byte[] line to see the 10 largest Byte[]s in memory. I see that all of the largest Byte[]s are ~1 MB each, which seems large, so I want to determine what is using them. Clicking on the first instance shows me it is being referenced by a SampleLeak.Models.User object (as are all of the largest Byte[]s as I work my way down the list).


At this point I need to go to my application’s source code to see what User is using the Byte[] for. Navigating to the definition of User in the sample project I can see that I have a BinaryData member that is of type byte[]. It turns out when I’m retrieving my user from the database I’m populating this field even though I am not using this data as part of the page load logic.

public class User : IUser
{
    [Key]
    public string Id { get; set; }

    public string UserName { get; set; }

    public byte[] BinaryData { get; set; }
}

This is populated by the query:

User user = MockDatabase.SelectOrCreateUser(
    "select * from Users where Id = @p1",
    userID);

In order to fix this, I need to modify the query to retrieve only the Id and UserName when loading a page; I’ll retrieve the binary data later, only if and when I need it.

User user = MockDatabase.SelectOrCreateUser(
    "select Id, UserName from Users where Id = @p1",
    userID);

Finding the memory leak

The second problem I want to investigate is the continual growth of the memory, which indicates a leak. The ability to see what has changed over time is a very powerful way to find leaks, so I am going to compare the current dump to the first one I took. To do this, I expand the “Select Baseline” dropdown and choose “Browse…”, which allows me to select “iisexpress1.dmp” as my baseline.


Once the baseline finishes analyzing, I have two additional columns, “Count Diff” and “Total Size Diff”, that show the change between the baseline and the current dump. Since I see a lot of system objects I don’t control in the list, I’ll use the Search box to find all objects in my application’s top-level namespace, “SampleLeak”. After I search, I see that SampleLeak.Models.User has increased the most in both size and count (there are an additional 10 objects compared to the baseline). This is a good indication that User may be leaking.


The next thing to do is determine why User objects are not being collected. To do this, I select the SampleLeak.Models.User row in the top table. This shows the reference graph for all User objects in the bottom pane. Here I can see that SampleLeak.Models.User[] has added an additional 10 references to User objects (notice the reference count diff matches the count diff of User).


Since I don’t remember explicitly creating a User[] in my code, I’ll expand the reference graph back to the root to figure out what is referencing the User[]. Once I’ve finished the expansion, I can see that the User[] is part of a List<User> which is directly referenced by a root (the type of root is displayed in brackets next to the root type; in this case System.Object[] is a pinned handle).


What I need to do next is determine where in my application I have a List<> that User objects are being added to. To do this, I search for List<User> in my sample application using Edit -> Find -> Quick Find (Ctrl+F). This takes me to the UserRepository class I added to the application.

public static class UserRepository
{
    // Store a local copy of recent users in memory to prevent extra database queries
    private static List<User> m_userCache = new List<User>();
    public static List<User> UserCache { get { return m_userCache; } }

    public static User GetUser(string userID)
    {
        // Retrieve the user's database record
        User user = MockDatabase.SelectOrCreateUser(
            "select Id, UserName from Users where Id = @p1",
            userID);

        // Add the user to the cache before returning
        m_userCache.Add(user);
        return user;
    }
}

Note: at this point, determining the right fix usually requires an understanding of how the application works. In the case of my sample application, when a user loads the Home page, the page’s controller queries the UserRepository for the user’s database record. If the user does not have an existing record, a new one is created and returned to the controller. In my UserRepository I have created a static List<User> that I’m using as a cache to keep local copies so I don’t always need to query the database. However, statics are automatically rooted, which is why the List<User> shows as directly referenced by a root rather than by UserRepository.

Coming back to the investigation, a review of the logic in my GetUser() method reveals the problem: I’m not checking the cache before querying the database, so on every page load I’m creating a new User object and adding it to the cache. To fix this, I need to check the cache before querying the database.

public static User GetUser(string userID)
{
    // Check to see if the user is in the local cache
    var cachedUser = from user in m_userCache
                     where user.Id == userID
                     select user;

    if (cachedUser.Count() > 0)
    {
        return cachedUser.FirstOrDefault();
    }
    else
    {
        // User is not in the local cache, retrieve the user from the database
        User user = MockDatabase.SelectOrCreateUser(
            "select Id, UserName from Users where Id = @p1",
            userID);

        // Add the user to the cache before returning
        m_userCache.Add(user);

        return user;
    }
}

Validating the fix

Once I’ve made these changes, I want to verify that I have correctly fixed the problem. To do this, I launch the modified application again, and after 20 page refreshes Performance Monitor shows only a minimal increase in memory (some variation is to be expected as garbage builds up until it is collected).

Just to definitively validate the fixes, I capture one more dump. A look at it shows that Byte[] is no longer the object type taking up the most memory. When I expand Byte[], I can see that the largest instance is much smaller than the previous 1 MB instances, and it is not referenced by User. Searching for User shows one instance in memory rather than 20, so I am confident I have fixed both of these issues.


How To Use TFS 2013 with SharePoint 2013 SP1 and SQL 2012 SP1 on Windows Server 2012 R2

In new deployment scenarios you may need TFS 2013 or 2012 on a Windows Server 2012 R2 machine. Windows Server 2012 R2 will never support SharePoint 2010, so for now we need SharePoint 2013 SP1, which does support Windows Server 2012 R2.

Run all Windows Updates before installing SharePoint 2013, and get the CU updates for SQL 2012 SP1 and SharePoint 2013 SP1.

This box already has TFS 2013 on Windows Server 2012 R2. Installing updates is the key step that will prevent tantrums from SharePoint. Always install all of the required updates, and ideally the optional ones as well.

Installation of SharePoint 2013 with SP1

The SharePoint team has really slicked up the installation process for SharePoint.

Use the auto-run that comes from running the DVD directly, or just run the “prerequisiteinstaller” from the root first.


When the prerequisites are complete you can start the installation proper and enter your key. If you get this wrong, your next task will be an uninstall so that you can pick the right option. Avoid the Express option at all costs; in this case we already have Team Foundation Server 2013 SP1 installed and already have SQL Server 2012 SP2 installed.

The configuration wizard will lead you through the custom config, but if you are running on a local computer with no domain, like me, then you will have to run a command line to generate the configuration database before you proceed.

Well, do not despair, because PowerShell – as always – is your friend. Just start the SharePoint 2013 Management PowerShell console and use the cmdlet:

New-SPConfigurationDatabase
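
A typical invocation looks something like this (the database, server, and passphrase values below are placeholders for your environment):

New-SPConfigurationDatabase -DatabaseName SharePoint_Config `
    -DatabaseServer YourSqlServer `
    -AdministrationContentDatabaseName SharePoint_AdminContent `
    -Passphrase (ConvertTo-SecureString "YourFarmPassphrase" -AsPlainText -Force) `
    -FarmCredentials (Get-Credential)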


We now have a farm, and we can complete the configuration. Just work through the wizard, although you are on your own if you select Kerberos for single sign-on.

SharePoint 2013 SP1 will then run through its configuration steps and give you a functional, but empty, SharePoint environment. At the end you get a simple Finish button and some instructions that you need to follow to get your site to render in a browser.

Info: SharePoint 2013 now works in Chrome and other non-Microsoft browsers…

Now you get almost 25 services that you can choose to install or not. If you leave them all ticked then you will get about 10-12 new databases in SQL; it is too hard to figure out what the dependencies are and what you actually need.

If the verification of the SharePoint configuration passes, then the configuration should work.

The configuring and extending process can take a while and will add the solution into SharePoint. A site template will be added, but as it will likely not match the nice new SharePoint 2013 SP1 interface, we will need to create the site manually.

Configuration completed successfully

Now that the SharePoint bits have been set up, we have a default link between SharePoint and Team Foundation Server. If we had a separate Team Foundation Server instance, however, we would need to tell it where the TFS server is.

Info: You have to install the Extensions for SharePoint Products on every front-end server in your SharePoint farm.

SharePoint 2013 Web Applications Configuration in Team Foundation Server 2013

Now that we have installed and configured the bits for SharePoint, as well as telling it where the TFS server is, we need to tell TFS where to go.

There is no account listed as an administrator! Because I am using the Builtin\Administrator user as both the TFS service account and the SharePoint farm admin, you don’t need one.

Site Configuration Collections

In order to have different sites for different collections, and to enable the ability to have the same team project name in multiple collections, you need to create a root collection site under the main site. Some folks like to create this at the ^/sites/[collection] level, but I always create the collection site as a sub-site of the root. This has the benefit of creating automatic navigation between the sites…

This is the final test: when you click OK, the Admin Console will go off and try to hook into, or create, a site for us. If you do want a greater degree of separation between the sites and have them in different collections, you can indeed do that as well. You may want to do that if you are planning to separate collections across multiple environments, but I can think of very few reasons that you would want to.

Using the new Team Project Site

If we create a new team project, the template from the process template that you selected will be used to create the new site. These templates are designed to work in any version of SharePoint, but they may not look cool.

This team project was created before there was ever a SharePoint server, so it has no portal. Let’s go ahead and create one manually.

Things have moved around a little in SharePoint, and we now create new sub-sites from the “View Content” menu.

This, while much more hidden, is really not something you do every day. You are much more likely to be adding apps to an existing site, so having this a few more clicks away is not a big deal.

When we create the new site we have two options. We can create it using the provided “Team Foundation Project Portal” template, but it results in a slightly ugly site; or we can use the default “Team Site” template to get a more native 2013 feel.

This is due to the features not yet being enabled… so head on over to “cog | Site Settings | Site Actions | Manage site features” to enable them.

You can enable one of:

  • Agile Dashboards with Excel Reporting – for the MSF for Agile Software Development 6.x Process Template
  • CMMI Dashboards with Excel Reports – for the MSF for CMMI Software Development 6.x Process Template
  • Scrum Dashboards with Reporting – for the Visual Studio Scrum 2.0 Recommended Process Template

The one you pick depends on the process template that you used to create the team project. I will activate the Scrum one, as I used the Visual Studio Scrum 2.0 Recommended Process Template, which I heartily recommend. You will have noticed that there are 2 or 3 for each of the “Agile | CMMI | Scrum” monikers, and this is due to the different capabilities that you might have. For example:

  • Agile Dashboards – I have TFS with no Reporting Services or Analysis Services
  • Agile Dashboards with Basic Reporting – I have Reporting Services and Analysis Services but not SharePoint Enterprise
  • Agile Dashboards with Excel Reporting – I have Everything! Reporting Services, Analysis Services and SharePoint Enterprise

If you enable the highest level of the one you want, it will figure out which one you can actually run, and in this case I can do “Scrum Dashboards with Reporting”.

[Screenshots: SharePoint Scrum and Agile dashboards]

The Scrum template does not have any built-in Excel reports, but it does have Reporting Services reports. Now when I return to the homepage I get the same or similar portal you would have seen in old versions with SharePoint 2010.

Conclusion

Team Foundation Server 2013 & 2012 work with SharePoint 2013 SP1 on Windows Server 2012 R2, and we have manually created our Team Project Portal site.

Effective Unit Testing with Microsoft Fakes, now free for VS Premium, plus an eBook

 

An initiative of the ALM Rangers was to create some value-add guidance on Microsoft Fakes for the ALM community. Last week, the eBook – Better Unit Testing with Microsoft Fakes – was shipped.

When an ALM Rangers project gets sign-off, it needs to pass a final hurdle before getting the green light to proceed. This involves presenting the project’s vision, epics, delivery artifacts, milestones, etc., and then hoping that fellow ALM Rangers will offer to join the project.

It was delivered as the ALM Rangers’ first eBook, on a subject that just got a much broader audience now that Microsoft Fakes will be included in Visual Studio Premium with Update 2.

Brian Harry has pointed out all the info on Update 2 here:

http://blogs.msdn.com/b/bharry/archive/2013/01/30/announcing-visual-studio-2012-update-2-vs2012-2.aspx.

You can also find other resources on Fakes here, some of them in German.

The book can be downloaded here: download the guide and see what Microsoft Fakes is all about.

What you will find in V1 is briefly outlined here:

Foreword

Introduction

Chapter 1: A Brief Theory of Unit Testing

  • Software testing

  • The fine line between good and flawed unit testing

Chapter 2: Introducing Microsoft Fakes

  • Stubs
  • Shims
  • Choosing between a stub and a shim
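
To give a flavor of the stub/shim distinction covered in Chapter 2, here is a minimal sketch (the IUserRepository interface, the User class, and the generated MyApp.Fakes names are hypothetical; ShimDateTime comes from a Fakes assembly generated for System):

using System;
using Microsoft.QualityTools.Testing.Fakes;

public class FakesExamples
{
    public void StubExample()
    {
        // Stub: a generated, lightweight implementation of an interface you own.
        // Fakes names each delegate property MethodName + ParameterTypes.
        var repository = new MyApp.Fakes.StubIUserRepository
        {
            GetUserByNameString = name => new User { UserName = name }
        };
        // ...pass `repository` to the class under test...
    }

    public void ShimExample()
    {
        // Shim: detours a member you do NOT own, such as DateTime.Now,
        // for the lifetime of the ShimsContext.
        using (ShimsContext.Create())
        {
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2013, 1, 1);
            // ...code under test that reads DateTime.Now sees 2013-01-01 here...
        }
    }
}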

Chapter 3: Migrating to Microsoft Fakes

  • Migrating from Moles to Microsoft Fakes
  • Migrating from commercial and open source frameworks

Chapter 4: Miscellaneous Topics

  • Targeting Microsoft .NET Framework 4
  • Adopting Microsoft Fakes in a team
  • You can’t Fake everything!
  • Verbose logging
  • Working with strong named assemblies
  • Optimizing the generation of Fakes
  • Looking under the covers
  • Refactoring code under test
  • Removing Fakes from a project
  • Using Fakes with Team Foundation Version Control
  • Using Microsoft Fakes with ASP.NET MVC

Chapter 5: Advanced Techniques

  • Dealing with Windows Communication Foundation (WCF) service boundaries
  • Dealing with non-deterministic calculations
  • Gathering use-case and other analytical information
  • Analyzing internal state
  • Avoiding duplication of testing structures

Chapter 6: Hands-on Lab

  • Exercise 1: Using Stubs to isolate database access (20 – 30 min)
  • Exercise 2: Using Shims to isolate from file system and date (20 – 30 min)
  • Exercise 3: Using Microsoft Fakes with SharePoint (20 – 30 min)
  • Exercise 4: Bringing a complex codebase under test (20 – 30 min)

In Conclusion

Appendix

Rhino Mocks vs Moq – Best .Net Mocking Frameworks

I’ve heard multiple people state that they prefer Moq over Rhino Mocks because the Rhino Mocks syntax isn’t as clean. While I don’t have a strong preference between the two, I’d like to help dispel that myth. It is true that, if you choose the wrong syntax, Rhino can be more complicated. This is because Rhino has been around for a while and its original syntax pre-dates the improved Arrange/Act/Assert syntax. I go into a lot more detail on Rhino Mocks and the appropriate way to use it, but to dispel the myth that Rhino Mocks is complicated I would like to compare the syntax for creating stubs and mocks using Rhino vs. Moq:

Moq Syntax:

  //Arrange
  var mockUserRepository = new Mock<IUserRepository>();
  var classUnderTest = new ClassToTest(mockUserRepository.Object);
  mockUserRepository
      .Setup(x => x.GetUserByName("user-name"))
      .Returns(new User());

  //Act
  classUnderTest.MethodUnderTest();

  //Assert
  mockUserRepository
      .Verify(x => x.GetUserByName("user-name"));

Rhino Stub Syntax:

  //Arrange
  var mockUserRepository = MockRepository.GenerateMock<IUserRepository>();
  var classUnderTest = new ClassToTest(mockUserRepository);
  mockUserRepository
      .Stub(x => x.GetUserByName("user-name"))
      .Return(new User());

  //Act
  classUnderTest.MethodUnderTest();

  //Assert
  mockUserRepository
      .AssertWasCalled(x => x.GetUserByName("user-name"));

Notice that there are only a few differences in those examples: the use of “new Mock” instead of MockRepository, Setup vs. Stub, Verify vs. AssertWasCalled, Returns vs. Return, and the need to call “.Object” on the mock object when using Moq. So why do so many people think Rhino Mocks is more complicated? Because of its older Record/Replay syntax, which required code like the following:

using ( mocks.Record() )
{
    Expect
        .Call( mockUserRepository.GetUserByName("user-name") )
        .Return( new User() );
}

using ( mocks.Playback() )
{
    var classUnderTest = new ClassUnderTest(mockUserRepository);
    classUnderTest.TestMethod();
}

There is also another benefit of using Rhino Mocks: the support for Rhino in StructureMap’s AutoMocker is cleaner, due to the fact that the mocks returned by Rhino are actual implementations of the interface being mocked, rather than a wrapper around the implementation, which is what Moq provides. AutoMocker allows you to abstract away the construction of the class under test so that your tests aren’t coupled to its constructors. This reduces refactoring tension when new dependencies are added to your classes. It also cleans up your tests a bit when you have a number of dependencies. If you haven’t used AutoMocker, you can quickly learn how it works by checking out the AutoMocker module in the Pluralsight Rhino Mocks video. Here is a quick comparison of using AutoMocker with Moq vs. Rhino:

Moq syntax using MoqAutoMocker:

  //Arrange
  var autoMocker = new MoqAutoMocker<ClassUnderTest>();
  Mock.Get(autoMocker.Get<IUserRepository>())
      .Setup(x => x.GetUserByName("user-name"))
      .Returns(new User());

  //Act
  autoMocker.ClassUnderTest.MethodUnderTest();

  //Assert
  Mock.Get(autoMocker.Get<IUserRepository>())
      .Verify(x => x.GetUserByName("user-name"));

Rhino syntax using RhinoAutoMocker:

  //Arrange
  var autoMocker = new RhinoAutoMocker<ClassUnderTest>();
  autoMocker.Get<IUserRepository>()
      .Stub(x => x.GetUserByName("user-name"))
      .Return(new User());

  //Act
  autoMocker.ClassUnderTest.MethodUnderTest();

  //Assert
  autoMocker.Get<IUserRepository>()
      .AssertWasCalled(x => x.GetUserByName("user-name"));

I hope that this clarifies, for some, the correct way to use Rhino Mocks and illustrates that it is every bit as simple as Moq when used correctly.

I have to say, I did prefer Moq slightly to Rhino Mocks. The newer Arrange, Act, Assert (AAA) style syntax in Rhino Mocks is a huge improvement over the old Record/Replay mess. But Moq has the benefit of being born at the right time and has AAA-style calls without the burden of supporting deprecated syntax. This means Moq has cleaner documentation, fewer options, and fewer ways to get confused by the API. I really couldn’t find anything I needed that Moq couldn’t do. I did get a little tripped up having to get the object out of the mock with myMock.Object calls, but that wasn’t a big deal.

Here’s the NuGet Moq install package syntax to add Moq to your unit test project:
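
From the Package Manager Console, that is:

Install-Package Moq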


Why TFS 2012 rocks and what you need to know to scale out

 

Experiences from the Microsoft Developer Division – the Developer Division is running on TFS 2012 RC!

Back in the beginning, the DevDiv server was the dogfood server for the Microsoft Developer Division. Then, as all of the teams were shipping products in Visual Studio, there were too many critical deadlines to be able to put early, sometimes raw, builds on the server. So TFS was dogfooded on a server called pioneer, as described here. Pioneer is used mostly by the teams in ALM (TFS, Test & Lab Management, and Ultimate), and it has been running TFS 2012 since February 2011, a full year before the beta. Never before had the team been able to use TFS so early in the product cycle, and the ability to get that much usage on early TFS 2012 builds really showed in the successful upgrade of the DevDiv server.

The Developer Division also runs TFS 2012 in the cloud at http://tfspreview.com, and that has been running for a year now. While that is not a dogfood effort, it has helped improve TFS 2012 significantly. The other dogfooding effort leading up to this upgrade was Microsoft IT: they upgraded a TFS server to TFS 2012 Beta, and lessons were learned from that as well.

The scale of the DevDiv server is huge: it has been used by 3,659 users in the last 14 days. Nearly all of those users are working in a single team project for the delivery of Visual Studio 2012. The branches and workspaces are huge (a full branch has about 5M files, and a typical dev workspace 250K files). For the TFS 2010 product cycle, the server was not upgraded until after RTM. Having been able to do this upgrade with TFS 2012 RC, the issues found will be fixed in the RTM release of TFS 2012!

Here’s the topology of the DevDiv TFS deployment, which I’ve copied from Grant Holliday’s blog post on the upgrade to TFS 2010 RTM two years ago.  I’ll call out the major features.

  • We use two application tiers behind an F5 load balancer.  The ATs will each handle the DevDiv load by themselves, in case we have to take one offline (e.g., hardware issues).
  • There are two SQL Server 2008 R2 servers in a failover configuration.  We are running SP1 CU1.  TFS 2012 requires an updated SQL 2008 for critical bug fixes.
  • SharePoint and SQL Analysis Services are running on a separate computer in order to balance the load (cube processing is particularly intensive).
  • We use version control caching proxy servers both in Redmond and for remote offices.

These statistics will give you a sense of the size of the server. There are two collections: one that is in use now and has been used since the beginning of the 2012 product cycle (collection A), and the original collection which was used by everyone up through the 2010 product cycle (collection B). The 2010 collection had grown in uncontrolled ways, and there were more than a few hacks in it from the early days of scaling to meet demand. Since moving to a new collection, the team has been able to pare back the old collection, and the result of those efforts has been a set of tools used on both collections (they will eventually be released). Both collections were upgraded. The third column is a server called pioneer.

Grant posted the queries to get the stats on your own server (some need a little tweaking because of schema changes, and build needs to be added). Also, the file size now covers all of the files, including version control, work item attachments, and test attachments, as they are all stored in the same set of tables now.

 

                              Coll. A          Coll. B          Pioneer
Recent Users                  3,659            1,516            1,143
Build agents and controllers  2,636            284              528
Files                         16,855,771       21,799,596       11,380,950
Uncompressed File Size (MB)   14,972,584       10,461,147       6,105,303
Compressed File Size (MB)     2,688,950        3,090,832        2,578,826
Checkins                      681,004          2,294,794        133,703
Shelvesets                    62,521           12,967           14,829
Merge History                 1,512,494,436    2,501,626,195    162,511,653
Workspaces                    22,392           6,595            5,562
Files in workspaces           4,668,528,736    366,677,504      406,375,313
Work Items                    426,443          953,575          910,168
Areas & Iterations            4,255            12,151           7,823
Work Item Versions            4,325,740        9,107,659        9,466,640
Work Item Attachments         144,022          486,363          331,932
Work Item Queries             54,371           134,668          28,875

The biggest issue faced after the upgrade was getting the builds going again. DevDiv (collection A) has 2,636 build agents and controllers, with about 1,600 in use at any given time. On pioneer there were nowhere near that many running. The result was that a connection limit was hit, and the controllers and agents would randomly go online and offline.

The upgrade to TFS 2012 RC was a huge success, and it was a very collaborative effort across TFS, central engineering, and IT. As a result of this experience and the experience on pioneer, TFS 2012 is not only a great release with an incredible set of features, but it is also running at high scale on a mission-critical server!

What you need to use Hosted TFS 2012 (Team Foundation Service)

 

You can connect to the Team Foundation Service Preview from a number of applications, as well as by visiting your account URL at tfspreview.com. In the following post we talk about the client software necessary to connect your development environment to your new Team Foundation Server in the cloud.

  • Visual Studio 2010 SP1 or Microsoft Test Manager 2010 SP1
    To connect and authenticate with the Team Foundation Service Preview you need to install the compatibility GDR.
    Note: You must have Service Pack 1 for Visual Studio 2010 installed before installing the GDR above.
  • Eclipse
    For Eclipse 3.5 and higher (or IDEs based on those versions of Eclipse) on any operating system (including Mac and Linux as well as Windows), you can install the TFS plug-in for Eclipse, which comes as part of the Team Explorer Everywhere 11 Beta.
  • Build Server (Build Controller and Agent)
    To have a build server that talks to the Team Foundation Service Preview, you need to install the Build Service from the latest Team Foundation Server 11 Beta.
  • Visual Studio 11 Beta
    Also, don’t forget that the Visual Studio 11 Beta has all the bits built in to make use of your Team Foundation Service Preview account. Not only that, some of the great new features of the Team Foundation Service Preview will only light up in the newest version of Visual Studio. We are keen to get your feedback on this beta release, so download it, give it a go on a test machine, and let us know what you think.

Note: the links above now point to the beta; you need to use the RC.

Introducing the Windows 8 Beta

 

A reimagined Windows

A day ago in Barcelona, Microsoft announced the release of the Windows 8 Consumer Preview, available to download for anyone interested in trying it out.

With Windows 8, the whole experience of Windows has been reimagined. It’s designed to work on a wide range of devices, from touch-enabled tablets, to laptops, to desktops and all-in-ones. Microsoft designed Windows 8 to give you instant access to your apps, files, and the information you care about most, so you spend less time navigating and more time doing what you actually want to do. Move between Windows 8 PCs easily and access your files and settings from virtually anywhere. Touch is a first-class experience, and navigating with a mouse and keyboard is fast and fluid. And just like Windows 7, reliability and security features are built in.


This is still just a small preview of what you’ll find in Windows 8. If you want to see more, check out the Windows 8 In Depth series.

Update 29.02.2012: Downloads of the Windows 8 Consumer Preview are available, as is Windows 8 Server.

Download Windows 8 Server Beta as 64-bit ISO
Download Windows 8 Server Beta as English VHD
Download Windows 8 Consumer Preview as German 32-bit ISO
Download Windows 8 Consumer Preview as German 64-bit ISO

Other languages

Serial number for the Windows 8 Consumer Preview: DNJXJ-7XBW8-2378T-X22TX-BKG7J

Some things you should know before installing Windows 8 Consumer Preview

Before you start the download, there are a few things to keep in mind.

First, this is a prerelease operating system

The Windows 8 Consumer Preview is just that: a preview of what’s to come. It represents a work in progress, and some things will change before the final release.

Second, you should be pretty comfortable with new technology

If you’re used to running prerelease software, you’re OK with a little troubleshooting, and you don’t mind doing a few technical tasks here and there, then you’ll probably be OK giving the Windows 8 Consumer Preview a spin. If a list of hardware specs is a little overwhelming for you, or you’re not sure what you’d do if something unexpected happened, this might not be the time to dive in.

As with prerelease software in general, there won’t be official support for the Windows 8 Consumer Preview, but if you have problems, please share them. You can post a detailed explanation of any issues you run into at the Windows 8 Consumer Preview forum. In addition, the Windows 8 Consumer Preview FAQ on the Windows website has information that could help you out and make the Windows 8 experience more productive.

And finally, you’ll need the right hardware

Windows 8 Consumer Preview should run on the same hardware that powers Windows 7 today. In general, you can expect Windows 8 Consumer Preview to run on a PC with the following:

  • 1 GHz or faster processor
  • 1 GB RAM (32-bit) or 2 GB RAM (64-bit)
  • 16 GB available hard disk space (32-bit) or 20 GB (64-bit)
  • DirectX 9 graphics device with WDDM 1.0 or higher driver
  • 1024 x 768 minimum screen resolution

However, there are some additional requirements to take into consideration in order to use certain features in Windows 8. To use the Snap feature, you will need a PC with a 1366×768 resolution or higher. If you want to use touch, you’ll need a multitouch-capable laptop, tablet, or display. Windows 8 supports up to five simultaneous touch points, so if your hardware doesn’t, you may find typing on the onscreen keyboard and using certain controls more of a challenge. You’ll also need an Internet connection to try out the Windows Store, to download and install apps, and to take your settings and files with you from one Windows 8 PC to another.

Performance Increases


One of the issues that’s been on our minds since Microsoft previewed this new interface is whether it will keep bogging Windows down with more running processes, and whether running a full Windows desktop on a low-powered tablet is really a good idea (after all, we’ve seen Windows run on netbooks).

Microsoft knows these fears and has addressed them: Windows 8 is slated to have better performance than Windows 7, even with the Metro interface running on top of a desktop. I ran a few tests back when the Developer Preview came out and found that to be the case, especially when it comes to boot times. Tablet and netbook users especially should notice a fairly significant performance increase with Windows 8, particularly considering that any of your tablet-based apps will suspend themselves when you jump into the traditional desktop, so they don’t take up any of your resources.

The following tests were performed on an (overclocked) 3.8 GHz i7 machine with 6 GB of RAM, a 2 TB hard drive, and an Nvidia GeForce 9800 GT, connected to the internet over Ethernet at a maximum speed of 20 Mbps.

Test                                    Windows 8                          Windows 7
Boot Time (Windows Screen to Desktop)   0:10                               0:35
Compress a ~700MB File                  0:29                               0:32
Decompress a ~700MB File                0:11                               0:12
Duplicate a ~700MB File                 0:01                               0:02
Encode a Movie in Handbrake             8:06                               8:15
Cold Start 9 Applications               0:46                               0:46
Open 10 Tabs in Chrome                  0:07                               0:07
3DMark10 Score                          6470 (5218 Graphics, 23098 CPU)    6455 (5199 Graphics, 23448 CPU)
Total Time                              9:56                               10:29

The Lock Screen


Windows 8’s lock screen is pretty much what you’d expect: it’s got a beautiful picture along with a few little widgets full of information, like the time, how many emails you have, and so on. However, after swiping to unlock, Windows 8 shows off some pretty neat touch-based features, particularly a "picture password" feature. Instead of using a PIN or a lock pattern to get into your system, you swipe invisible gestures using a picture to orient yourself (in the example they showed, the password was to tap on a person’s nose and swipe left across their arm). Android modders might find this similar to CyanogenMod’s lock screen gestures.

The Home Screen


The home screen is very familiar to anyone who’s used Windows Phone. You’ve got a set of tiles, each of which represents an application, and many of which show information and notifications that correspond to the app. For example, your email tile will tell you how many unread emails you have (and who they’re from), your calendar tile will show upcoming events, your music tile will show you what’s playing, and so on. You can also create tiles for games, contacts, and even traditional Windows apps that will pull you into the Windows desktop. The tablet-optimized apps are all full screen and "immersive", though, and you can rearrange their icons on the home screen easily (just as you would on any other tablet platform).

Running Apps


Running a basic app works as you expect—you tap on its home screen icon and it goes full screen. The browser has lots of touch-based controls, like pinch to zoom and copy and paste, and you can access options like search, share, and settings through the Charms bar, which you can get by swiping from the right edge of the screen or pressing Win+C. Apps can share information with one another easily, such as selected text or photos. After picking your media from one app, you’ll then be able to choose which app you want to share with, and work with it from there. For example, you can share photos to Facebook, send text from a web page in an email, and so on.

None of this is brand new to touch-based platforms, but what is new is the ability to not only multitask, but run these apps side by side. Say you want to watch a video and keep an eye on your news feed at the same time. Just like in Windows 7 for the desktop, you can dock an app to one side of the screen while docking another app at the opposite side, which is a seriously cool feature. Imagine being able to IM and play a game at the same time, or browse the web while writing an email. It’s a fantastic way to fix one of the big shortcomings of mobile OSes, thus allowing you to ignore the full desktop interface more often and stay in the touch-friendly, tablet view.

The Desktop


The traditional desktop is still there, though it may be a tad different than what you’re used to. First and foremost, there’s no start button to speak of. Your taskbar merely shows the apps you have pinned, with your system tray on the right, as usual. You can jump back to the start screen (that is, the Metro screen) by pressing the Windows key or by moving your mouse to the bottom left corner of the screen. Other than that, everything looks pretty similar (though the windows no longer have rounded corners). The Control Panel has been updated a bit, as well as the Task Manager and Windows Explorer, which we’ll discuss below.

A New Task Manager


Microsoft’s finally redesigned the task manager, and it looks pretty great. You have a very simple task manager for basic task killing, but if you’re a more advanced user, you can bring up the detailed task manager filled with information on CPU and RAM usage, Metro app history, and even startup tweaking—so you can get rid of apps that launch on startup without going all the way into msconfig. For more information, see the in-depth look at the new Task Manager.

Windows Explorer


Most of this isn’t new information: we’ll have native ISO mounting in Windows Explorer, a new Office-style ribbon, and a one-folder-up button like the old days of XP (thank God). It also has a really cool "quick access" toolbar in the left-hand corner of the title bar that gives you super quick access to your favorite buttons from the ribbon. For more info, check out the in-depth look at the new Windows Explorer.

Other Features

Along with these cool features, Windows 8 also comes with other features we’ve come to know and love in our mobile OSes. It’s got system-wide spellchecking, so you don’t have to rely on a specific app to keep your writing top-notch, as well as a system-wide search feature that lets you search anything from your music library to your contacts to the web itself. It also has a really cool feature for desktop users that lets you run the Metro UI on one monitor while running the traditional desktop on the other.

It also has a really cool feature called "refresh your PC", where you can do a clean install with the tap of a button. Whether you’re selling your machine or just want a cleaner, faster installation of Windows, you can do it all in one click. You can even set refresh points, similar to restore points, so you can refresh your PC to the way it was at a certain point in time.

The Windows Store

The Windows Store, which is now available in the Consumer Preview, looks much like the home screen, with tiles that correspond to different categories and featured apps. From there, you can look at a more detailed list of the available apps in a given section. And the store contains not only touch-based apps for the tablet interface, but also some of the more traditional desktop Windows apps you’re used to, so you have one portal to discover all your Windows apps no matter which interface you’re using.

Right now, the Windows Store is full of free apps from Microsoft and its partners, so you can check out some of the upcoming apps now. When Windows 8 officially releases to the public, though, you should find many more apps in the store, including paid ones. What’s really cool about the app store is that you can try apps before you buy, and then download the full version without losing your place in the app or reinstalling anything.

Sync All Your Data to the Cloud

The cloud is taking center stage, with your Microsoft account driving all the syncing in Windows 8. Your address book, photos, SkyDrive data, and even data within third-party apps can sync up to the cloud, and you can access them on any Windows 8 device, even a brand new one. Just sign in and you’ll have access to everything (not unlike Chrome OS, which immediately loaded your themes and extensions when you logged in; great for lending your computer to a friend). The address book also syncs with other services like Facebook and Twitter. You can even sync all of your settings from one Windows 8 PC to another: just sign into Windows 8 with a Microsoft account and all your themes, languages, app settings, taskbar, and other preferences will show right up. It’s a pretty neat feature if you have multiple Windows 8 PCs and don’t want to set them all up separately; just a few taps and you’ve got all your preferences ready to go.

Charms let us work faster

Windows 8 builds in new, fast ways to get around the operating system and do common tasks. They’re called charms. Swipe in from the right edge of the screen or move your mouse to the upper-right corner, and the charms bar appears (you can also use the Windows key + C). The charms are the quickest way to navigate to key tasks in Windows 8. You can go to the Start screen, or use the charms for quick shortcuts to common tasks.

Charms appear on the right side of the screen

Search

Just like in Windows 7, with Windows 8, you can easily search for apps, settings, or files on your PC. And with the Search charm, searching now goes even deeper. You can search within apps and on the web, so you can find a specific email quickly in the Mail app, or see what a friend has put on Facebook using the People app. You can also get search results from within apps right from the Start screen. If the info you need is on the web, just choose Internet Explorer in your search results, and Search brings the results right to you. Apps designed specifically for Windows 8 can use the Search charm easily, so as you install more apps, you can find movie reviews or show times, opinions on restaurants, or even stock prices (just to name a few), without having to hunt around. If you’re using a keyboard, you can also search right from the Start screen – just start typing, and the results will appear. You can filter results to view apps or settings, or to search within individual apps.

The Search charm lets you search within apps like Internet Explorer

Share

When I read something great on the web or see a picture that makes me laugh, I like to pass it on. The Share charm makes it incredibly easy. And just like with Search, apps can hook into Share easily, so you don’t have to jump in and out of an app to share great content. You can quickly send wise words with the Mail app or share a photo on SkyDrive. The apps you use most often are listed first for quick access, and you can choose whether to share with just one person, or with all of your contacts at once.

Sharing the Windows Phone website via Mail with the Share charm

Devices

The Devices charm lets you get to the devices you want to use so you can do things like get photos from a digital camera, stream video to your TV, or send files to a device, all from one place. For example, if you’re watching a movie in the Video app and want to share it with everyone in the room, the Devices charm lets you stream the video right to your Xbox to show it on your TV.

Troubleshooting TFS 2010 Reports & Warehouse

Background: Physical Architecture of TFS Reporting

Each TFS component maintains its own set of transaction databases. This includes work items, source control, tests, bugs, and Team Build. This data is aggregated into a relational database. The data is then placed in an Online Analytical Processing (OLAP) cube to support trend-based reporting and more advanced data analysis.

The TfsWarehouse relational database is a data warehouse designed to be used for data querying rather than transactions. Data is transferred from the various TFS databases, which are optimized for transaction processing, into this warehouse for reporting purposes. The warehouse is not the primary reporting store, but you can use it to build reports. The TfsReportDS data source points to the relational database. The Team System Data Warehouse OLAP Cube is an OLAP database that is accessed through SQL Server Analysis Services. The cube is useful for reports that provide data analysis of trends such as ‘how many bugs closed this month versus last month?’ The TfsOlapReportDS data source points to the Team System Data Warehouse OLAP cube in the analysis services database.
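
To make the distinction concrete, here is a minimal sketch of querying the relational warehouse directly with ADO.NET. The database, table, and column names are assumptions based on a default TFS 2010 installation (the warehouse database is typically named Tfs_Warehouse in TFS 2010; earlier versions used TfsWarehouse), so adjust them for your deployment:

using System;
using System.Data.SqlClient;

class WarehouseQuery
{
    static void Main()
    {
        // Assumption: default warehouse database name for a TFS 2010 data tier.
        var connectionString =
            "Data Source=<datatier>;Initial Catalog=Tfs_Warehouse;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            // DimWorkItem is one of the dimension tables in the default warehouse schema.
            "SELECT [System_State], COUNT(*) AS WorkItems " +
            "FROM dbo.DimWorkItem GROUP BY [System_State]", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader["System_State"], reader["WorkItems"]);
            }
        }
    }
}

Keep in mind that the warehouse lags behind the operational stores, so counts retrieved this way reflect the last successful refresh, not the live transactional data.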

10 Steps to Troubleshoot TFS Reporting

1. On the TFS Application tier server, open an Administrative Command Prompt

2. Run the following command: Net Stop TFSJobAgent

3. Once this completes, run the following command to restart the TFSJobAgent: Net Start TFSJobAgent

4. Open the TFS Administration console, and select the Reporting Node

5. Click the Start Rebuild link to rebuild the warehouse. Refresh this page until it displays “Configured and Jobs Enabled”

6. Open a web browser and navigate to the warehousecontrolservice.asmx page at:

http://<server>:8080/tfs/teamfoundation/administration/v3.0/warehousecontrolservice.asmx

7. Click ProcessWarehouse, then click Invoke on the subsequent page. This should return True.

8. Return to the WarehouseControlService.asmx page, then click ProcessAnalysisDatabase.

9. Enter Full for the processingType, then click Invoke; this should also return True.

10. Return to the WarehouseControlService.asmx page and click GetProcessingStatus; this should return the processing status page with the current processing results. It should indicate that Full Analysis processing is occurring. Refresh this page until the status (ResultMessage) of the “Full Analysis Database Sync” indicates “Succeeded”. (A scripted version of steps 2–3 and 7–9 is sketched after this list.)
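
If you run through these steps often, the service restart and the web service invocations can be scripted. The following C# sketch is illustrative only: the service name TFSJobAgent comes from the steps above, but posting directly to the .asmx methods assumes the HTTP POST protocol is reachable (it generally is when run locally on the application tier) and that the parameter names match what the .asmx test page on your own server shows.

using System;
using System.Collections.Specialized;
using System.Net;
using System.ServiceProcess; // requires a reference to System.ServiceProcess.dll

class WarehouseReset
{
    static void Main()
    {
        // Steps 2-3: restart the TFS Job Agent (run elevated on the application tier).
        using (var jobAgent = new ServiceController("TFSJobAgent"))
        {
            jobAgent.Stop();
            jobAgent.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(2));
            jobAgent.Start();
            jobAgent.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(2));
        }

        var baseUrl = "http://localhost:8080/tfs/teamfoundation/administration/v3.0/warehousecontrolservice.asmx";

        using (var client = new WebClient { UseDefaultCredentials = true })
        {
            // Step 7: kick off warehouse processing; the response XML should contain "true".
            var response = client.UploadValues(baseUrl + "/ProcessWarehouse", new NameValueCollection());
            Console.WriteLine(System.Text.Encoding.UTF8.GetString(response));

            // Steps 8-9: fully process the Analysis Services database.
            response = client.UploadValues(baseUrl + "/ProcessAnalysisDatabase",
                new NameValueCollection { { "processingType", "Full" } });
            Console.WriteLine(System.Text.Encoding.UTF8.GetString(response));
        }
    }
}

Steps 4–5 and 10 still happen in the Administration console and the browser; the GetProcessingStatus page remains the easiest way to watch for “Succeeded”.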

Refresh TFS Warehouse, Cube and Reports on demand

By default, TFS processes its Data Warehouse and Analysis Services cube (and thus updates the data for the reports) every 2 hours. Be careful about lowering the interval below one hour:

Important

If you reduce the interval to less than the default of two hours (7200 seconds), processing of the data warehouse will consume server resources more frequently. Depending on the volume of data that your deployment has to process, you may want to reduce the interval to one hour (3600 seconds) or increase it to more than two hours. [Source: MSDN]
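
If you do decide to change the interval, it is exposed as a setting on the same warehouse control service used in the steps above. Here is a minimal sketch, assuming the ChangeSetting operation and the RunIntervalSeconds setting name as they appear on the .asmx test page of a default TFS 2010 install (verify both on your own server):

using System;
using System.Collections.Specialized;
using System.Net;

class ChangeRefreshInterval
{
    static void Main()
    {
        // Assumption: operation and parameter names as shown on the .asmx test page.
        var url = "http://localhost:8080/tfs/teamfoundation/administration/v3.0/"
                + "warehousecontrolservice.asmx/ChangeSetting";

        using (var client = new WebClient { UseDefaultCredentials = true })
        {
            var response = client.UploadValues(url, new NameValueCollection
            {
                { "settingId", "RunIntervalSeconds" },
                { "newValue", "3600" } // one hour; the default is 7200 seconds (two hours)
            });
            Console.WriteLine(System.Text.Encoding.UTF8.GetString(response));
        }
    }
}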

Alternatively, you can use this small command-line utility from Neno Loje:

Syntax/Usage:

tfsrefreshwarehouse.exe /server:http://servername:8080/tfs [/full] [/status]

Manually process the TFS Warehouse and Cube

Using just the /status parameter returns useful information about cube processing:

Using /status shows the last and next scheduled sync times

(Note: The user needs to have the ‘Administer Warehouse’ permission in TFS.)

Download the tool from here: TfsRefreshWarehouse.exe (.ZIP, 12.8 KB)