New Features in Visual Studio 2015

A few weeks ago the product team presented what's new in Visual Studio 2015 CTP 6, so here is a large part of the changes.

CTP 6

UI debugging tools for XAML

Two new tools have been added, the Live Visual Tree and the Live Property Explorer, which you can use to inspect the visual tree of your running WPF application, as well as the properties of any element in the tree. In short, these tools allow you to select any element in your running app and show the final, computed and rendered properties. Here's more:

  • Live Visual Tree. Now, you can view the full visual tree of a running application during a debug session. The Live Visual Tree is available when you press F5 in Visual Studio or attach to a running application. You can use the Live Visual Tree to select elements in a running application for inspection in the Live Property Explorer. Descendant count is displayed, and if the source information is available, you can instantly find the file and location of the element’s definition.
  • Live Property Explorer. Use this new tool to inspect the properties set on any element in a running application, grouped by the scope in which they are set. You can modify these properties during a debugging session and immediately see their changes in the running application.

Picking apart how properties override each other and figuring out winning behavior can prove difficult at design time. Now, by using our new UI debugging tools for XAML, you can perform these inspections at runtime, when you can take everything into account.

(Support for Windows Store apps will be released in a future update.)

UI Debugging Tools for XAML, full screen

Single sign-in

You, like many other developers today, take advantage of multiple cloud services when developing your applications. For example, you’ve probably added a cloud backend to your applications to store data, stored your source code in the cloud, or published an application to a store.

In the past, each of these services required a separate sign-in process, and each service managed the signed-in user state separately.

With this release, we are reducing the authentication prompts required to access many integrated cloud services in Visual Studio. Now, when you authenticate to the first cloud service in Visual Studio, we will automatically sign you in, or reduce the authentication prompts for other integrated cloud services.

CodeLens

With CodeLens, you can find out more about your code while staying focused on your work in the editor. In CTP 6, you can now see the history of your C++, SQL, or JavaScript files versioned in Git repositories by using CodeLens file-level indicators. When working with source control in Git and work items in TFS, you can also get information about the work items associated with C++, SQL, or JavaScript files by using CodeLens file-level work items indicators.

CodeLens file level team indicators
Learn more about CodeLens.

Code Maps

When you want to understand specific dependencies in your code, visualize them by creating Code Maps. You can then navigate these relationships by using the map, which appears next to your code. This helps you track your place in the code while you work.

  • Performance improvements. With this release, now you can get reactive Code Maps more quickly: drag and drop operations produce an immediate result, and the links between nodes are created much more quickly, without affecting subsequent user-initiated operations such as expanding a node or requesting more nodes. When you create Code Maps without building the solution, all the corner cases—such as when assemblies are not built—are now processed.
  • Code Map filtering. Now you can quickly unclutter your Code Maps by filtering for nodes and groups. You can show or hide code elements on a map based on their category, as well as group code elements by solution folders, assemblies, namespaces, project folders, and types.
  • Dependency links. Dependency links no longer represent the inheritance from System.Object, System.ValueType, System.Enum, and System.Delegate, which makes it easier to see external dependencies in your code map.
  • Improved top-down diagrams. For medium to large Visual Studio solutions, you can now use a simplified Architecture menu to get a more useful Code Map for your solution. The assemblies of your solution are grouped alongside the solution folders, so you can see them in context and leverage the effort you’ve put into structuring the solution. You’ll immediately see project and assembly references, and then the link types appear. Also, the assemblies external to your solution are grouped in a more compact way.
  • Improved link filtering. Link filtering now applies to cross group links, which makes working with the filter window less intrusive than it was in previous releases.

Learn more about Code Maps.

Diagnostics Tools

In CTP 6, the Diagnostic Tools debugger window has the following improvements:

  • Supports 64-bit Windows Store apps
  • The timeline zooms as necessary so the most recent break event is always visible

Exception Settings

You can configure debugger exception settings by using the Exception Settings tool window. The new window is non-modal and includes improved performance, search, and filter capabilities.

Exceptions Settings - Break when Thrown window

JavaScript Editor

  • Task List support. You can use the Task List feature to review task comments, such as // TODO, in your JavaScript code. Learn more about the Task List in Visual Studio.
  • Object literal IntelliSense. The JavaScript editor now provides you with IntelliSense suggestions when passing an object literal to functions documented using JSDoc.

Unit Tests

In the Visual Studio 2015 Preview, we introduced Smart Unit Tests, which explore your .NET code to generate test data and a suite of unit tests. In CTP 6, the following functionality has been added:

  • Parameterized Unit Tests. Smart Unit Tests enables support for an API that you can use to guide test data generation, specify correctness properties of the code under test, and direct the exploration of the code under test. This API is available in the Microsoft.Pex.Framework namespace and can be used in the test methods (parameterized unit tests, factory methods) generated by Smart Unit Tests. Consequently, the “Smart Unit Tests” context menu command is now available from the generated test methods as well.
  • Test stubs creation. “Create Unit Tests” is now available on the context menu as a command that provides the ability to create and configure a test project, a test class, and a test stub.

Learn more about Smart Unit Tests.
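To make the shape of these generated parameterized tests concrete, here is a rough sketch using the Microsoft.Pex.Framework attributes mentioned above; the StringUtil class and its Reverse method are hypothetical stand-ins for your own code under test, and the tests Smart Unit Tests generates for you may look slightly different.

using System;
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical code under test, used only to illustrate the test shape.
public static class StringUtil
{
    public static string Reverse(string s)
    {
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

[TestClass]
[PexClass(typeof(StringUtil))]
public partial class StringUtilTests
{
    // A parameterized unit test: the exploration engine supplies values for 's'.
    [PexMethod]
    public void ReverseTwiceIsIdentity(string s)
    {
        PexAssume.IsNotNull(s);                // constrain the generated inputs
        string roundTripped = StringUtil.Reverse(StringUtil.Reverse(s));
        Assert.AreEqual(s, roundTripped);      // the correctness property
    }
}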

Visual Studio Emulator for Android

In CTP 6, the Visual Studio Emulator for Android now supports the following:

  • OpenGL ES
  • Android Version 5.0 (Lollipop, API Level 21)
  • Camera interaction using image files or your webcam
  • Multi-touch input

Visual Studio Tools for Apache Cordova

Over the last few releases, we listened to your feedback and broadened the number of devices you can debug to, as follows:

  • Android 4.4, Android 4.3 and earlier with jsHybugger
  • iOS 6, 7, and 8
  • Windows Store 8.1

With CTP 6, we are broadening our debugging support further. You can now debug your Apache Cordova apps that target Windows Phone 8.1.

You can set breakpoints, inspect variables, use the console, and perform other debugging tasks on your Windows Phone 8.1 emulator or attached device.
Debugging with Visual Studio Tools for Apache Cordova
Learn more about the Visual Studio Tools for Apache Cordova.

Visual Studio C++ for Cross-Platform Mobile Development

You can use Visual Studio to share, reuse, build, deploy, and debug your cross-platform mobile code, all within a single solution. And in CTP 6, the following has also been added or updated:

  • Support for Android API Level 21 (Lollipop).
  • Improvements to Android Logcat. (Logcat is a diagnostic tool and essential for a good edit->build->debug experience.)
    Use Logcat to do the following:
    • Search for specific log messages by using search bar.
    • Use Toggle Autoscroll to view upcoming log messages easily.
    • Clear previous log output messages.
    • Choose between various log levels.
  • A new template based on makefile support for Android, which allows using an external build system (including NDK-BUILD).
  • Precompiled headers in all templates (including Dynamic Shared Library, Static Library, and Cross-platform mobile templates).

ASP.NET

In this CTP 6 release, the following new features and performance improvements have been added:

  • Run and debug settings are now stored in debugSetting.json, which can be customized to configure how the project is started.
  • Add reference to a system assembly.
  • Improved IntelliSense while editing project.json.
  • New Web API template.
  • Improvements to call out ASP.NET 4.6/ASP.NET 5 on the New ASP.NET Project (One ASP.NET) dialog.
  • Ability to use a Windows PowerShell script which can be customized for the web publish experience for ASP.NET 5.
  • You can use Lambda expressions in the debugger watch windows for ASP.NET 5 applications when running on the Desktop CLR.

Learn more about ASP.NET 5 updates in Visual Studio 2015 CTP 6.

Visual C++

The following new features have been added to Visual C++ in CTP 6:

  • Control Flow Guard (CFG). With this new security feature, simply add a new option to your project, and the Visual C++ compiler will now inject extra security checks into your binaries to help detect attempts to hijack your code. When the check fires, it will stop execution of your code before a hijacker can do damage to your data or PC. Learn more about Control Flow Guard.
    Note: We have updated the command options. Instead of using the /d2guard4 switch as you did in CTP 5, you should now use /guard:cf in CTP 6.
  • Typename keyword. Users can now write typename instead of class in a template template parameter. Learn more about typename.

Related releases

For additional features of Visual Studio 2015, see our Preview release notes.

CTP 5

XAML Language Service

The XAML language service has been rebuilt on top of the .NET Compiler Platform (“Roslyn”) so that we can provide you with a fast, reliable, and modern XAML editing experience that includes IntelliSense.

This makes the XAML authoring experience equal to other first-class languages in Visual Studio. We’ll also be able to deliver powerful feature sets around cross-language refactoring to you at a much faster cadence.

Timeline Tool

Our new Timeline tool in CTP 5 provides you with a scenario-centric view of the resources that your applications consume, which you can use to inspect, diagnose, and improve the performance of your WPF and Windows Store 8.1 applications.

The Timeline tool, which is in the Performance and Diagnostics hub, shows you how much time your application spends in preparing UI frames and in servicing networks and disk requests, and it does so in the context of scenarios such as Application Load and Page Load.

The new Timeline tool
Learn more about the new Timeline Tool in Visual Studio 2015.  (The new Timeline tool replaces the XAML UI Responsiveness tool.)

Diagnostics Tools

A new Diagnostic Tools window has been added in CTP 5 that appears when you start debugging (press F5). The Diagnostics Tools window contains the following features:

  • Debugger Events (with IntelliTrace)
    Debugger Events (with IntelliTrace) gives you access to all Break, Output, and IntelliTrace events collected during your debugging session. The data is presented both as a timeline and as a tabular view. The two views are synchronized and can interact with each other.
    Learn more about IntelliTrace in Visual Studio 2015.
  • Memory Usage
    The Memory Usage tool allows you to monitor the memory usage of your app while you are debugging. You can also take and compare detailed snapshots of native and managed memory to analyze the cause of memory growth and memory leaks.
  • CPU Usage
    The CPU Usage tool allows you to monitor the CPU usage of your application while you are debugging.
    (This tool replaces the CPU time PerfTip that was available in the Preview release of Visual Studio 2015.)

The Diagnostics Tools window supports the following project types and debugging configurations:

  • Managed WPF, WinForms, and Console projects
  • Native Win32, Console, and MFC projects
  • ASP.NET 4 using IIS express only
    (ASP.NET 5 and IIS are not supported at this time)
  • Managed or Native 32-bit Windows Store projects running locally
    (Windows Store projects that are 64-bit, using JavaScript, running on a remote device, or running on a phone are not supported at this time)

Learn more about the Diagnostics Tools window in Visual Studio 2015.

ASP.NET

In this CTP 5 release, we’ve added some new features to the ASP.NET 5 experience, as well as improved its performance.

  • Now, you can add a reference to a standard C# project.
    (In previous releases, the Add Reference dialog only supported referencing other ASP.NET 5 projects.)
  • We’ve added IntelliSense and validation to our HTML, CSS, and JavaScript editors.
  • We’ve improved our support of client-side task runners (such as Grunt and Gulp) that run alongside the Task Runner Explorer.
  • For ASP.NET 5 projects, you can select the browser you want while running or debugging a project.
    Select the browser you want
  • You can define custom commands in project.json file, which you can launch by using the ASP.NET 5 command-line tools. And now, you can also run and debug your custom commands directly in Visual Studio 2015.
  • The ASP.NET 5 templates have been updated to include a project.json file that uses the latest packages, and some bugs in the template content have been fixed.

Learn more about ASP.NET updates in Visual Studio 2015.

Visual C++

In CTP 5, the following new features were added to Visual C++, continuing the standards-conformance work begun in Visual Studio 2012 and Visual Studio 2013.

  • Digit separators: Now, you can intersperse numeric literals with single quotes to make them more readable. For example, int x = 1'000'000;
  • Universal character names in literals: You can now write basic characters, like 'A' and the line feed character, as code points in literals. For example, const char *s = "\u0041\u000A";

.NET (4.0 and up) Enterprise Caching Strategies & Tips

http://blogs.msdn.com/b/paolos/archive/2011/04/05/how-to-use-a-wcf-custom-channel-to-implement-client-side-caching.aspx

As of version 6.0, the Enterprise Library Caching Block is obsolete; it has been replaced by MemoryCache or AppFabric.
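For the in-process option, here is a minimal sketch of using System.Runtime.Caching.MemoryCache as the replacement; the key name, the value, and the five-minute expiration are illustrative assumptions only.

using System;
using System.Runtime.Caching;

class MemoryCacheSample
{
    static void Main()
    {
        // The default in-process cache instance.
        ObjectCache cache = MemoryCache.Default;

        // Cache an item with an absolute expiration of five minutes.
        var policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(5) };
        cache.Set("customer:42", "Contoso Ltd.", policy);

        // On a later read, fall back to the data source if the entry has expired.
        var value = cache.Get("customer:42") as string ?? LoadFromStore("customer:42");
        Console.WriteLine(value);
    }

    // Placeholder for whatever loads the value when it is not cached.
    static string LoadFromStore(string key) { return "Contoso Ltd."; }
}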

We frequently get asked about best practices for using Windows Azure Cache/AppFabric Cache. The compilation below is an attempt at putting together an initial list of best practices. I’ll publish an update to this in the future if needed.

I’m breaking down the best practices to follow by various topics.

Using Cache APIs

1.    Have Retry wrappers around cache API calls

Calls into the cache client can occasionally fail for a number of reasons, such as transient network errors, cache servers being unavailable due to maintenance/upgrades, or cache servers being low on memory. In these cases the cache client raises a DataCacheException with an error code that indicates the reason for the failure. There is a good overview of how an application should handle these exceptions on MSDN.

It is a good practice for the application to implement a retry policy. You can implement a custom policy or consider using a framework like the Transient Fault Handling Application Block.
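A minimal custom-policy sketch, assuming the Microsoft.ApplicationServer.Caching client and treating DataCacheErrorCode.RetryLater as the transient case; a real policy would cover more error codes or simply use the Transient Fault Handling Application Block instead.

using System;
using System.Threading;
using Microsoft.ApplicationServer.Caching;

static class CacheRetry
{
    // Retries a cache call a few times when the client reports a transient failure.
    public static T Execute<T>(Func<T> cacheCall, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return cacheCall();
            }
            catch (DataCacheException ex)
            {
                bool transient = ex.ErrorCode == DataCacheErrorCode.RetryLater;
                if (!transient || attempt >= maxAttempts) throw;
                Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt)); // simple back-off
            }
        }
    }
}

// Usage: var value = CacheRetry.Execute(() => cache.Get("someKey"));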

2.    Keep static instances of DataCache/DataCacheFactory

Instances of DataCacheFactory (and hence DataCache instances, indirectly) maintain TCP connections to the cache servers. These objects are expensive to create and destroy. In addition, you want to have as few of these as needed to ensure that the cache servers are not overwhelmed with too many connections from clients.

You can find more details on connection management here. Please note that the ability to share connections across factories is currently available only in the November 2011 release of the Windows Azure SDK (and higher versions). Windows Server AppFabric 1.1 does not have this capability yet.

The overhead of creating new factory instances is lower if connection pooling is enabled. In general, though, it is a good practice to pre-create an instance of DataCacheFactory/DataCache and use it for all subsequent calls to the APIs. Avoid creating an instance of DataCacheFactory/DataCache on each of your request processing paths.
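A sketch of keeping one shared factory and cache instance; the cache name "default" is an assumption, so substitute whatever cache you have configured.

using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheClient
{
    // The factory (and its TCP connections) is created once and reused everywhere.
    private static readonly Lazy<DataCacheFactory> Factory =
        new Lazy<DataCacheFactory>(() => new DataCacheFactory());

    private static readonly Lazy<DataCache> Cache =
        new Lazy<DataCache>(() => Factory.Value.GetCache("default"));

    public static DataCache Instance
    {
        get { return Cache.Value; }
    }
}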

3.    WCF Services using Cache Client

It is a common practice for WCF services to use a cache to improve their performance. However, unlike ASP.NET web applications, WCF services are susceptible to I/O-thread starvation issues when making blocking calls (such as cache API calls) that further require I/O threads to receive responses (such as responses from cache servers).

This issue is described in detail in the following KB article. The typical symptom that surfaces if you run into this is that cache API calls time out when you have a sudden burst of load. You can confirm whether you are running into this situation by plotting the thread count values against incoming requests/second, as shown in the KB article.
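A sketch of one mitigation that is often suggested for this class of problem: raising the minimum number of I/O completion-port threads at start-up so the pool does not have to grow under a sudden burst. The figure of four per core is an illustrative assumption, not a recommendation; follow the KB article for the guidance that applies to your host.

using System;
using System.Threading;

static class ThreadPoolTuning
{
    // Call once at service start-up, e.g. from Application_Start or the host initializer.
    public static void RaiseMinimums()
    {
        int workerThreads, completionPortThreads;
        ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);

        // Keep the worker minimum, raise the I/O completion-port minimum.
        ThreadPool.SetMinThreads(workerThreads,
            Math.Max(completionPortThreads, Environment.ProcessorCount * 4));
    }
}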

4.    If the app is using lock APIs, handle ObjectLocked and ObjectNotLocked exceptions

If you are using lock-related APIs, please ensure you are handling exceptions such as ObjectLocked (object being referred to is currently locked) and ObjectNotLocked (object being referred to is not locked by any client).

GetAndLock can fail with “<ERRCA0011>:SubStatus<ES0001>:Object being referred to is currently locked, and cannot be accessed until it is unlocked by the locking client. Please retry later.” error if another caller has acquired a lock on the object.

The code should handle this error and implement an appropriate retry policy.

PutAndUnlock can fail with “<ERRCA0012>:SubStatus<ES0001>:Object being referred to is not locked by any client” error.

This typically means that the lock timeout specified when the lock was acquired was not long enough because the application request took longer to process. Hence the lock expired before the call to PutAndUnlock and the cache server returns this error code.

The typical fix here is to both review your request processing time as well as set a higher lock timeout when acquiring a lock.

You can also run into this error when using the session state provider for cache. If you are running into this error from session state provider, the typical solution is to set a higher executionTimeout for your web app.
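A sketch of handling both failure modes around GetAndLock/PutAndUnlock; the 30-second lock timeout and the Process helper are assumptions for illustration.

using System;
using Microsoft.ApplicationServer.Caching;

static class LockedUpdate
{
    public static void Update(DataCache cache, string key)
    {
        DataCacheLockHandle handle;
        try
        {
            // Lock the item for up to 30 seconds while the request is processed.
            object item = cache.GetAndLock(key, TimeSpan.FromSeconds(30), out handle);

            object updated = Process(item);

            // Write the new value back and release the lock in one call.
            cache.PutAndUnlock(key, updated, handle);
        }
        catch (DataCacheException ex)
        {
            if (ex.ErrorCode == DataCacheErrorCode.ObjectLocked)
            {
                // Another client holds the lock: back off and retry later.
            }
            else if (ex.ErrorCode == DataCacheErrorCode.ObjectNotLocked)
            {
                // Our lock expired before PutAndUnlock: review processing time
                // or acquire the lock with a longer timeout.
            }
            else
            {
                throw;
            }
        }
    }

    // Placeholder for the real business logic.
    static object Process(object item) { return item; }
}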

Session State Provider Usage

You can find more info about the ASP.NET session state providers for AppFabric cache here and for Azure cache here.

The session state provider has an option to store the entire session as 1 blob (useBlobMode=”true” which is the default), or to store the session as individual key/value pairs.

useBlobMode=”true” incurs fewer round trips to cache servers and works well for most applications.

If you have a mix of small and large objects in the session, useBlobMode=”false” (a.k.a. granular mode) might work better, since it avoids fetching the entire (large) session object for all requests. The cache should also be marked as nonEvictable if the useBlobMode=”false” option is being used. Because Azure Shared Cache does not give you the ability to mark a cache as non-evictable, please note that useBlobMode=”true” is the only supported option against Windows Azure Shared Cache.

Performance Tuning and Monitoring

Tune MaxConnectionsToServer

Connection management between cache clients and servers is described in more detail here. Consider tuning the MaxConnectionsToServer setting. This setting controls the number of connections from a client to the cache servers. (MaxConnectionsToServer * number of DataCacheFactory instances * number of application processes) is a rough value for the number of connections that will be opened to each of the cache servers. So, if you have 2 instances of your web role with 1 cache factory and MaxConnectionsToServer set to 3, there will be 3*1*2 = 6 connections opened to each of the cache servers.

Setting this to number of cores (of the application machine) is a good place to start. If you set this too high, a large number of connections can get opened to each of the cache servers and can impact throughput.

If you are using Azure Cache SDK 1.7, maxConnectionsToServer defaults to the number of cores of the application machine. The on-premises AppFabric cache (v1.0/v1.1) had a default of one, so that value might need to be tuned.
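If you configure the cache client in code rather than in the application config file, the setting can be applied roughly like this, starting at one connection per core as suggested above.

using System;
using Microsoft.ApplicationServer.Caching;

static class TunedFactory
{
    public static DataCacheFactory Create()
    {
        var config = new DataCacheFactoryConfiguration();

        // Start with one connection per core on the application machine and tune from there.
        config.MaxConnectionsToServer = Environment.ProcessorCount;

        return new DataCacheFactory(config);
    }
}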

Adjust Security Settings

The default security setting for the on-premises AppFabric cache is to run with security on at the EncryptAndSign protection level. If you are running in a trusted environment and don’t need this capability, you can turn it off by explicitly setting security to off.

The security model for Azure cache is different, and the above adjustment is not needed for Azure cache.

Monitoring

There is also a good set of performance counters on the cache servers that you can monitor to get a better understanding of cache performance issues. Some of the counters that are typically useful for troubleshooting include the following (a small sketch for polling one of them programmatically follows the list):

1)     %cpu used up by cache service

2)     %time spent in GC by cache service

3)     Total cache misses/sec – A high value here can indicate your application performance might suffer because it is not able to fetch data from cache. Possible causes for this include eviction and/or expiry of items from cache.

4)     Total object count – Gives an idea of how many items are in the cache. A big drop in object count could mean eviction or expiry is taking place.

5)     Total client reqs/sec – This counter is useful in giving an idea of how much load is being generated on the cache servers from the application. A low value here usually means some sort of a bottleneck outside of the cache server (perhaps in the application or network) and hence very little load is being placed on cache servers.

6)     Total Evicted Objects – If cache servers are constantly evicting items to make room for newer objects in cache, it is usually a good indication that you will need more memory on the cache servers to hold the dataset for your application.

7)     Total failure exceptions/sec and Total Retry exceptions/sec
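If you want to pull one of these counters programmatically, for example to log it alongside application metrics, a System.Diagnostics sketch might look like the following; the category and counter names are placeholders, so check the exact names exposed on your cache servers.

using System;
using System.Diagnostics;
using System.Threading;

static class CacheCounterSample
{
    static void Main()
    {
        // Placeholder names: replace with the actual category/counter names
        // exposed by the cache host on your servers.
        using (var misses = new PerformanceCounter("AppFabric Caching:Host", "Total cache misses / sec"))
        {
            misses.NextValue();        // the first call primes the counter
            Thread.Sleep(1000);        // sample over one second
            Console.WriteLine("Cache misses/sec: {0}", misses.NextValue());
        }
    }
}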

Lead host vs Offloading

This applies only to on-premises AppFabric cache deployments. There is a good discussion of the tradeoffs/options in this blog. As noted in the blog, with v1.1 you can use SQL to store just the config info and use the lead-host model for the cluster runtime. This option is attractive if setting up a highly available SQL Server for offloading purposes is hard.

Other Links

Here are a set of blogs/articles that provide more info on some of the topics covered above.

1)     Jason Roth and Jaime Alva have written an article providing additional guidance to developers using Windows Azure Caching.

2)     Jaime Alva’s blog post on logging/counters for the on-premises AppFabric cache.

3)     MSDN article about connection management between cache client & servers.

4)     Amit Yadav and Kalyan Chakravarthy’s blog on lead host vs offloading options for cache clusters.

5)     MSDN article on common cache exceptions and Transient Fault Handling Application Block

Migration from File Shares to Document Management (SharePoint 2013) using OneDrive (Skydrive) for Business

Some tips on how to migrate file shares to SharePoint and use OneDrive (SkyDrive) for Business (ODFB), if you are planning to migrate file share content into SharePoint and want to use ODFB to synchronize the SharePoint content offline.

Note that these steps are valid both for SharePoint 2013 on-premises and SharePoint Online (SPO).

Info about SharePoint Limits

http://technet.microsoft.com/en-us/library/cc262787(v=office.15).aspx#Limits

First Step  – Analyze your File Shares

As a first step, try to understand the data that resides on the file shares. Ask yourself the following questions:

  • What is the total size of the file share data that the customer wants to migrate?
  • How many files are there in total?
  • What are the largest file sizes?
  • How deep are the folder structures nested?
  • Is there any content that is not being used anymore?
  • What file types are there?

Let me try to explain why you should ask yourself these questions.

Total Size

If the total size of the file shares is more than the storage capacity you have in SharePoint, you need to buy additional storage (SPO) or increase your disk capacity (on-premises). To determine how much storage you will have in SPO, please check the total available tenant storage in the tables in this article. Another issue that may arise is that you reach the capacity limit per site collection in SharePoint. For SPO that is 1000 gigabytes (changed from 100 GB to 1 TB); for on-premises the recommended size per site collection is still around 200 gigabytes.

What if we have more than 1000 Gigabyte?

  • Try to divide the file share content over multiple site collections when it concerns content which needs to be shared with others.
  • If certain content is just for personal use, try to migrate that specific content into the personal site of the user.

How Many Files

The total number of files on the file shares is important, as there are some limits in both SharePoint and ODFB that can result in an unusable state of the library or list within SharePoint, and you might also end up with missing files when using the ODFB client.

First, in SPO we have a fixed limit of 5000 items per view, folder or query. Reasoning behind this 5000 limit boils all the way down to how SQL works under the hood. If you would like to know more about it, please read this article. In on-prem there is a way to boost this up, but it is not something we recommend as the performance can significantly decrease when you increase this limit.

Secondly, for ODFB there is also a 5000-item limit for synchronizing team sites and a 20000-item limit for synchronizing personal sites. This means that if you have a document library that contains more than 5000 items, the rest of the items will not be synchronized locally.

There is also a limit of 5 million items within a document library, but I guess that most customers in SMB won’t reach that limit very easily.

What should I do if my data that I want to migrate to a document library contains more than 5000 items in one folder?

  • Try to divide that amount over multiple subfolders or create additional views that will limit the amount of documents displayed.

But wait! If I already have 5000 items in one folder, doesn’t that mean that the rest of the documents won’t get synchronized when I use ODFB?

Yes, that is correct. So if you would like to use ODFB to synchronize documents offline, make sure that the total number of documents per library in a team site does not exceed 5000.

How do I fix that limit ?

  • Look at the folder structure of the file share content and see if you can divide that data across multiple sites and/or libraries. So if there is a folder marketing for example, it might make more sense to migrate that data into a separate site anyway, as this department probably wants to store additional information besides just documents (e.g. calendar, general info about the marketing team, site mailbox etc). An additional benefit of spreading the data over multiple sites/libraries is that it will give the ODFB users more granularity about what data they can take offline using ODFB. If you would migrate everything into one big document library (not recommended), it would mean that all users will need to synchronize everything which can have a severe impact on your network bandwidth.

Largest File Sizes

Another limit that exists in SPO and on-prem is the maximum file size. For both, the maximum size per file is 2 gigabytes. In on-prem the default is 250 MB, but it can be increased to a maximum of 2 gigabytes.

So, what if I have files that exceed this size?

  • Well, it won’t fit in SharePoint, so you can’t migrate these. So, see what type of files they are and determine what they are used for in the organization. Examples could be software distribution images, large media files, training courses or other materials. If these are still being used and not highly confidential, it is not a bad thing to keep these on alternative storage like a SAN, NAS or DVDs. If it concerns data that just needs to be kept for legal reasons and doesn’t need to be retrieved instantly, you might just put it on DVD or an external hard drive and store it in a safe, for example.

Folder Structures

Another important aspect to look at on your file shares is the depth of nested folders and the length of file names. The recommended total length of a URL in SharePoint is around 260 characters. You might think that 260 characters is pretty lengthy, but remember that URLs in SharePoint often have encoding applied to them, which takes up additional space. E.g. a space is one character, but URL-encoded it becomes %20, which takes up three characters. The problem is that you can run into issues when the URL becomes too long. More details about the exact limits can be found here, but as a best practice try to keep the URL length of a document under 260 characters.

What if I have files that will have more than 260 characters in total URL length?

  • Make sure you keep your site URLs short (the site title name can be long though). E.g. don’t call the URL Human Resources, but call it HR. If you land on the site, you would still see the full name Human Resources as Site Title and URL are separate things in SharePoint.
  • Shorten the document name (e.g. strip off …v.1.2, or …modified by Andre), as SharePoint has versioning built in. More information about versioning can be found here.

Content Idle

Migrating file shares into SharePoint is often also a good moment to clean up some of the information that the organization has been collecting over the years. If you find there is a lot of content that has not been accessed for a couple of years, what would be the point of migrating that data to SharePoint?

So, what should I do when I come across such content?

  • Discuss this with the customer and determine if it is really necessary to keep this data.
  • If the data cannot be purged, you might consider storing it on a DVD or external hard drive and keep it in a safe.
  • If the content has multiple versions, such as proposal 1.0.docx, proposal 1.1.docx, proposal final.docx, proposal modified by Andre.docx, you might consider just moving the latest version instead of migrating them all. This manual process might be time consuming, but can save you lots of storage space in SharePoint. Versioning is also built into the SharePoint system and is optimized for storing multiple versions of the same document. For example, SharePoint stores only the delta of the next version, saving storage space that way. This functionality is called Shredded Storage.

Types of Files

Determine what kind of files the customer has. Are they mainly Office documents? If so, then SharePoint is the best place to store such content. However, if you come across developer code, for example, it is not a good idea to move that into SharePoint. There are also other file extensions that are not allowed in SPO and/or on-premises. A complete list of blocked file types for both SPO and on-premises can be found here.

What if I come across such blocked file extensions?

  • Well, you can’t move them into SharePoint, so you should ask yourself: do I still need these files? And if so, is there an alternative storage facility, such as a NAS, that I can store these files on? If it concerns developer code, you might want to store such code on a Team Foundation Service server instead.

Tools for analyzing and fixing file share data

In order to determine if you have large files or exceed the 5000 limit for example, you need to have some kind of tooling. There are a couple of approaches here.

  • There is a PowerShell script that has been pimped up by  Hans Brender, which checks for blocked file types, bad characters in files and folders and finally for the maximum URL length. The script will even allow you to fix invalid characters and file extensions for you. It is a great script, but requires you to have some knowledge about PowerShell. Another alternative I was pointed at is a tool called SharePrep. This tool does a scan for URL length and invalid characters.
  • There are other 3rd party tools that can do a scan of your file share content such as Treesize. However such tools do not necessarily check for the SharePoint limitations we talked about in the earlier paragraphs, but at least they will give you a lot more insight about the size of the file share content.
  • Finally there are actual 3rd party migration tools that will move the file share content into SharePoint, but will check for invalid characters, extensions and URL length upfront. We will dig into these tools in Step 2 – Migrating your data.
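If you only need a quick first pass before reaching for one of the tools above, a small scan can already flag the obvious offenders. This is a minimal C# sketch; the blocked-character set and the 260-character figure are taken from the guidance above rather than from an authoritative list, and the share path is hypothetical.

using System;
using System.IO;

class FileShareScan
{
    // Characters SharePoint does not accept in file/folder names (assumed set; verify
    // against the current documentation) and the recommended URL length discussed above.
    static readonly char[] InvalidChars = { '~', '"', '#', '%', '&', '*', ':', '<', '>', '?', '{', '|', '}' };
    const int MaxUrlLength = 260;

    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : @"\\fileserver\share"; // hypothetical share

        foreach (string path in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
        {
            string name = Path.GetFileName(path);

            // The local path length is only a rough proxy for the final SharePoint URL length.
            if (path.Length > MaxUrlLength)
                Console.WriteLine("Too long ({0} chars): {1}", path.Length, path);

            if (name.IndexOfAny(InvalidChars) >= 0)
                Console.WriteLine("Invalid character(s): {0}", path);
        }
    }
}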

Second Step – Migrating your data

So, now that we have analyzed our file share content, it is time to move them into SharePoint. There are a couple of approaches here.

Document Library Open with Explorer

If you are in a document library you can open the library in Windows Explorer. That way you can just copy and paste the files into SharePoint.


There are some drawbacks to this approach. First of all, I’ve seen lots of issues trying to open the library in Windows Explorer. Secondly, the technology that is used for copying the data into SharePoint is not very reliable, so keep that in mind when copying larger chunks of data. Finally, there is also drag & drop you can use, but this is limited to files (no folders) and only does a maximum of 100 files per drag. So this would mean if you have 1000 files, you need to drag them in 10 chunks of 100. More information can be found in this article. Checking for invalid characters, extensions and URL length upfront is also not addressed when using the Open with Explorer method.

Pros: Free, easy to use, works fine for smaller amounts of data

Cons: Not always reliable, no metadata preservations, no detection upfront for things like invalid characters, file type restrictions, path lengths etc.

OneDrive (formerly SkyDrive) for Business

You could also use ODFB to upload the data into a library. This is fine as long as you don’t sync more than 5000 items per library. Remember, though, that ODFB is not a migration tool but a sync tool, so it is not optimized for copying large chunks of data into SharePoint. Things like character and file type restrictions, path length, etc. are on the list of the ODFB team to address, but they are not there yet.

The main drawback of using either the Open in Explorer option or ODFB is that these tools don’t preserve the metadata of the files and folders on the file shares. By this I mean that things like the modified date or owner field are not migrated into SharePoint. The owner will become the user that is copying the data, and the modified date will be the timestamp of when the copy operation was executed. So if this metadata on the file shares is important, don’t use any of the methods mentioned earlier, but use one of the third-party tools below.

Pros: Free, easy to use, works fine for smaller amounts of data (max 5000 per team site library or 20000 per personal site)

Cons: No metadata preservations, no detection upfront for things like invalid characters, file type restrictions, path lengths etc.

3rd party tools

Here are some of the 3rd party tools that will provide additional detection, fixing and migration capabilities that we mentioned earlier:

Some have a focus on SMB, while others are more focused on the enterprise segment. We won’t express a preference for one tool or another, but most of the tools have a free trial version available, so you can try them out yourself.

Summary

When should I use what approach?

Here is a short summary of capabilities:

                               Open in Explorer    OneDrive for Business (with latest update)    3rd party
Amount of data                 Relatively small    No more than 5000 items per library           Larger data sets
Invalid character detection    No                  No                                            Mostly yes (1)
URL length detection           No                  No                                            Mostly yes (1)
Metadata preservation          No                  No                                            Mostly yes (1)
Blocked file types detection   No                  No                                            Mostly yes (1)

(1) This depends on the capabilities of the 3rd party tool.

Troubleshooting

ODFB gives me issues when synchronizing data
Please check whether you have the latest version of ODFB installed. There have been stability issues in earlier released builds of the tool, but most of the issues should be fixed by now. You can check whether you are running the latest version by opening Word -> File -> Account and clicking Update Options -> View Updates. If your current version number is lower than the latest available one, click the Disable Updates button (click Yes if prompted), then click Enable Updates (click Yes if prompted). This will force downloading the latest version of Office and thus the latest version of the ODFB tool.


If you are running the stand-alone version of ODFB, make sure you have downloaded the latest version from here.

Upload latency: why is the upload process taking so long?
This really depends on a lot of things. It can depend on:

  • The method or tool that is used to upload the data
  • The available bandwidth for uploading the data. Tips:
    • Check your upload speed at http://www.speedtest.net and do a test for your nearest Office 365 data center. This will give you an indication of the maximum upload speed.
    • Companies often have less upload bandwidth available than people at home. If you have the chance, uploading from a home location might be faster.
    • Schedule the upload at times when there is more bandwidth available for uploading the data (usually at night).
    • Test your upload speed upfront by uploading maybe 1% of the data. Multiply that time by 100 and you have a rough estimate of the total upload time.
  • The computers used for uploading the data. A slow laptop can become a bottleneck while uploading the data.

How To Use TFS 2013 with SharePoint 2013 Sp1 and Sql 2012 sp1 on Windows 2012 R2

In new deployment scenarios you will have TFS 2013 or 2012 on a Windows Server 2012 R2 server, which will never support SharePoint 2010, so we need SharePoint 2013 SP1, which does support Windows Server 2012 R2.

Run all Windows Updates before installing SharePoint 2013, and get the CU updates for SQL Server 2012 SP1 and SharePoint 2013 SP1.

That box already has TFS 2013 on a Windows Server 2012 R2 server. Installing updates is the key step that will prevent tantrums from SharePoint. Always install all of the required updates, and ideally the optional ones as well.

Installation of SharePoint 2013 with SP1

The SharePoint team has really slicked up the installation process for SharePoint.

Use the auto-run that comes from running the DVD directly, or just run the “prerequisiteinstaller” from the root first.


When the prerequisites are complete you can start the installation proper and enter your key. If you get this wrong you will next be completing an uninstall to pick the right option. Avoid Express at all costs; in this case we already have Team Foundation Server 2013 SP1 and SQL Server 2012 SP2 installed.

The configuration wizard will lead you through the custom config, but if you are running on a local computer with no domain, like me, then you will have to run a command line to generate the database before you proceed.

Well, do not despair, because PowerShell, as always, is your friend. Just start the SharePoint 2013 Management Shell and use the cmdlet:

New-SPConfigurationDatabase


Now that we have a farm, we can complete the configuration. Just work through the wizard, although you are on your own if you select Kerberos for single sign-on.

SharePoint 2013 SP1 will then run through its configuration steps and give you a functional but empty SharePoint environment. At the end you get a simple Finish button and some instructions that you need to follow to get your site to render in a browser.

Info: SharePoint 2013 now works in Chrome and other non-Microsoft browsers…

Now you get almost 25 services that you can choose to install or not. If you leave them all ticked then you will get about 10-12 new databases in SQL; it’s hard to figure out what the dependencies are and what you actually need.

If the verification of the SharePoint configuration passes, then the configuration should work.

Configuring and extending can take a while and will add solutions into SharePoint. A site template will be added, but as it will likely not look like the nice new SharePoint 2013 SP1 interface, we will need to create the site manually.

Configuration completed successfully

Now that the SharePoint bits have been set up, we will have a default link set up between SharePoint and Team Foundation Server, although if we had a separate Team Foundation Server instance we would need to tell it where the TFS server is.

Info: You have to install the Extensions for SharePoint Products on every front end server for your SharePoint farm.

SharePoint 2013 Web Applications Configuration in Team Foundation Server 2013

Now that we have installed and configured the bits for SharePoint, as well as telling it where the TFS server is, we need to tell TFS where to go.

There is no account listed as an administrator! Since I am using the Builtin\Administrator user as both the TFS Service Account and the SharePoint Farm Admin, you don’t need one.

Site Configuration Collections

In order to have different sites for different collections, and to enable the ability to have the same Team Project name in multiple collections, you need to create a root collection site under the main site. Some folks like to create this at the /sites/[collection] level, but I always create the collection site as a sub-site of the root. This has the benefit of creating automatic navigation between the sites…

This is the final test, as when you click OK the Admin Console will go off and try to hook into, or create, a site for us. If you do want a greater degree of separation between the sites and have them in different collections, you can indeed do that as well. You may want to do that if you are planning to separate collections to multiple environments, but I can think of very few reasons that you would want to do that.

Using the new Team Project Site

If we create a new team project, the template from the Process Template that you selected will be used to create the new site. These templates are designed to work in any version of SharePoint, so they may not look as cool as they could.

This team project was created before there was ever a SharePoint server, so it has no portal. Let’s go ahead and create one manually.

They have moved things around a little in SharePoint and we now create new sub sites from the “View Content” menu.

This, while much more hidden, is really not something you do every day. You are much more likely to be adding apps to an existing site, so having this a few more clicks away is not a big deal.

When we create the new site we have two options. We can create it using the provided “Team Foundation Project Portal” template, but it results in a slightly ugly site, or we can use the default “Team Site” template to get a more native 2013 feel.

This is due to the features not yet being enabled… so head on over to “cog | Site Settings | Site Actions | Manage site features” to enable them.

You can enable one of:

  • Agile Dashboards with Excel Reporting – for the MSF for Agile Software Development 6.x Process Template
  • CMMI Dashboards with Excel Reports – for the MSF for CMMI Software Development 6.x Process Template
  • Scrum Dashboards with Reporting – for the Visual Studio Scrum 2.0 Recommended Process Template

The one you pick depends on the Process Template that you used to create the Team Project. I will activate the Scrum one, as I used the Visual Studio Scrum 2.0 Recommended Process Template, which I heartily recommend. You will have noticed that there are 2 or 3 for each of the “Agile | CMMI | Scrum” monikers, and this is due to the different capabilities that you might have. For example:

  • Agile Dashboards – I have TFS with no Reporting Services or Analysis Services
  • Agile Dashboards with Basic Reporting – I have Reporting Services and Analysis Services but not SharePoint Enterprise
  • Agile Dashboards with Excel Reporting – I have Everything! Reporting Services, Analysis Services and SharePoint Enterprise

If you enable the highest level of the one you want, it will figure out the one that you can run, and in this case I can do “Scrum Dashboards with Reporting”.

The Scrum template does not have any built-in Excel reports, but it does have Reporting Services reports. Now when I return to the homepage I get the same/similar portal you would have seen in older versions with SharePoint 2010.

Conclusion

Team Foundation Server 2013 & 2012 work with SharePoint 2013 SP1 on Windows Server 2012 R2, and we have manually created our Team Project Portal site.

SharePoint 2010 Sp1 available

 

Hi.

Service Pack 1 for SharePoint 2010 and the June 2011 Cumulative Update are now available. But how do we install these updates?

The update process is completely different from the last CU.

The process would be:

  1. SharePoint Foundation 2010 SP1
  2. SharePoint Foundation 2010 SP1 language packs
  3. SharePoint Server 2010 SP1
  4. SharePoint Server 2010 SP1 language packs

You should run psconfig -cmd upgrade -inplace b2b -wait on each server after installing all of the packages.

After that, installing the June 2011 Cumulative Update is strongly recommended.

Finally, here is a whitepaper with the new features:

http://go.microsoft.com/fwlink/?LinkId=221773