#MSFT is cool again – a review from Build 2019

 

At the Microsoft Build 2019 conference, Microsoft announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company’s vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming.

“As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in,” said Satya Nadella, CEO, Microsoft.

“Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone.”

Watch it live here!

Increased developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

Microsoft Graph data connect is a new service that combines analytics data from the Microsoft Graph with customers’ business data. It will provide Office 365 data and Microsoft Azure resources to users via a toolset; the migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace.

It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search provides a unified search experience across Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

  • Search box displacement
  • Zero query typing and key-phrase suggestion feature
  • Query history feature, and personal search query history
  • Administrator access to the history of popular searches for their organizations, but not to search history for individual users
  • Files/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here.

Fluid Framework

As the name suggests Microsoft’s newly launched Fluid framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing.

Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software development kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook.
Read more about Fluid framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features in Microsoft’s flagship web browser, Microsoft Edge. New features include:

  • Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab. This allows businesses to run legacy Internet Explorer-based apps in a modern browser.
  • Privacy Tools: Additional privacy controls which allow customers to choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit how third parties can track users across the web. “Unrestricted” allows all third-party trackers to work in the browser, “Balanced” prevents third-party trackers from sites the user has not visited before, and “Strict” blocks all third-party trackers.
  • Collections: Collections allows users to collect, organize, share and export content more efficiently and with Office integration.

Microsoft is also migrating Edge to Chromium, which will make Edge easier for third parties to develop for.

For more details, visit Microsoft’s developer blog.

New toolkit enhancements in Microsoft 365 Platform

Windows Terminal

Windows Terminal is Microsoft’s new application for Windows command-line users. Top features include:

  • User interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering
  • Multiple tab support and theming and customization features
  • Powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the “React Native for Windows” implementation.

React Native for Windows is under the MIT License and allows developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test; more mature releases will follow soon.

Windows Subsystem for Linux 2

At Build 2019, Microsoft rolled out a new architecture for the Windows Subsystem for Linux: WSL 2. Microsoft will also ship a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file-system performance increases (up to twice the speed for file-system-heavy operations, such as Node Package Manager install). WSL 2 also supports running Linux Docker containers.

The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple Developer Tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform and will be available in 2020. .NET 5 will include all .NET Core features as well as further additions:

  • One Base Class Library containing APIs for building any type of application
  • More choice on runtime experiences
  • Java interoperability will be available on all platforms.
  • Objective-C and Swift interoperability will be supported on multiple operating systems
  • .NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios.
  • .NET 5 also will offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs)

Detailed information here.

ML.NET 1.0

ML.NET is Microsoft’s open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday. Some new features in this release are:

  • Automated Machine Learning Preview: Automatically transforms the input data and selects the best-performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports Regression and Classification ML tasks.
  • ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model.
  • ML.NET CLI Preview: ML.NET CLI is a dotnet tool which generates ML.NET Models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML Task and produces the best model.

Visual Studio IntelliCode, Microsoft’s tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft’s AI-assisted coding tool, is now generally available. It is essentially an enhanced IntelliSense, Microsoft’s extremely popular code-completion feature. IntelliCode is trained on the code of thousands of open-source GitHub projects that have at least 100 stars.

It is available for C# and XAML in Visual Studio and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and it includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft’s Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month.

The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution.

  • Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
  • Multiple dashboards and data visualization options for different types of users
  • Inbound and outbound data connectors, so that operators can integrate with other systems
  • Ability to add custom branding and operator resources to an IoT Central application with new white labeling options

New Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language for connecting IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play-enabled devices in Microsoft’s Azure IoT Device Catalog. The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes and trip intelligence. This API also will provide transit services to help with city planning, logistics, and transportation.

Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-source project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public or private cloud or on-premises, including Azure Kubernetes Service (AKS) and Red Hat OpenShift.

KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP. KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters.
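
To make the trigger idea concrete, here is a minimal, hypothetical sketch of a KEDA ScaledObject that scales a worker deployment based on the length of an Azure Storage queue. The apiVersion and field names vary between KEDA releases, so treat every name below as an assumption rather than exact syntax:

    # Hypothetical sketch: scale the "orders-worker" Deployment on Azure Storage queue length.
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: orders-worker-scaler
    spec:
      scaleTargetRef:
        name: orders-worker            # the Deployment KEDA should scale
      minReplicaCount: 0               # scale to zero when the queue is empty
      maxReplicaCount: 20
      triggers:
        - type: azure-queue
          metadata:
            queueName: orders
            queueLength: "5"                          # target messages per replica
            connectionFromEnv: AZURE_STORAGE_CONNECTION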

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK), released as an extension of Microsoft’s Defending Democracy Program, that enables end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Patterns and practices – architecture with Kubernetes and Docker


How has the world of software development changed in the era of Docker and Kubernetes? Is it possible to build an architecture once and for all using these technologies? Is it possible to unify the processes of development and integration when everything is “packed” in containers? What are the requirements for such decisions? What restrictions do they bring into play? Will they make the lives of developers easier or, instead, add unnecessary complications?

It’s time to shed light on these (and some other) questions! (In text and original illustrations.)

This article will take you on a journey from real life to development processes to architecture and back to real life, giving answers to the most important questions at each of these stops along the way. We will try to identify a number of components and principles that should become part of an architecture and demonstrate a few examples without going into the realms of their implementation.

The conclusion of the article may upset or please you. It all depends on your experience, your perception of this three-chapter story, and perhaps even your mood at the time of reading. Let me know what you think by posting comments or questions below!

From real life to development workflows


For the most part, all development processes that I have ever seen or been honored to set up served one simple goal: reduce the time between the birth of an idea and its delivery to production, while maintaining a certain degree of code quality.

It doesn’t matter whether the idea is good or bad. Bad ideas come and go fast: you just try them and discard them. What’s worth mentioning here is that rolling back from a bad idea falls on the shoulders of a robot which automates your workflow.

Continuous integration and delivery seem like a life saver in the world of software development. What can be simpler than that, after all? You have an idea, you have the code, so go for it! It would’ve been flawless if not for a slight problem: the process of integration and delivery is rather difficult to formalize in isolation from the technology and business processes that are specific to your company.

However, despite the seeming complexity of the task, life constantly throws in excellent ideas that bring us (well, myself for sure) a little closer to building a flawless mechanism that can be useful in almost any occasion. The most recent step toward such a mechanism for me has been Docker and Kubernetes, whose level of abstraction and ideological approach made me think that 80% of issues can now be solved using practically the same methods.

The remaining 20% obviously didn’t go anywhere. But this is exactly where you can focus your inner creative genius on interesting work, rather than deal with the repetitive routine tasks. Taking care of the “architectural framework” just once will let you forget about the 80% of solved problems.

What does all of this mean, and how exactly does Docker solve the problems of the development workflow? Let’s look at a simple process, which also happens to be sufficient for a majority of work environments:


With due approach, you can automate and unify everything from the sequence below, and forget about it for months to come.

Setting up a development environment


A project should contain a docker-compose.yml file, which can spare you the trouble of thinking about what you need to do to run the application/service on the local machine. A simple docker-compose up command should start your application with all its dependencies, populate the database with fixtures, upload the local code inside the container, enable code tracing for compilation on the fly, and eventually start responding at the expected port. Even when setting up a new service, you needn’t worry about how to start it, where to commit changes or which frameworks to use. All of this should be described in advance in the standard instructions and dictated by the service templates for different setups: frontend, backend, and worker.
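
As a rough illustration (the service names, images and ports below are hypothetical, not a recommendation for any particular stack), such a docker-compose.yml might look like this:

    # Sketch of a docker-compose.yml for local development; all names are illustrative.
    version: "3"
    services:
      app:
        build: .                            # build the service from the local Dockerfile
        ports:
          - "8080:8080"                     # the port the service should respond on
        volumes:
          - ./src:/app/src                  # mount local code into the container
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:11
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app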

Automated testing


All you want to know about the “black box” (more about why I call the container this will follow later in the text) is that everything’s all right inside it. Yes or no. 1 or 0. Having a finite number of commands that can be executed inside the container, and a docker-compose.yml describing all of its dependencies, you can easily automate and unify testing without focusing too much on the implementation details.

For example, like this!

Here, testing means not only and not so much unit testing, but also functional testing, integration testing, code style (linting) and duplication checks, checking for outdated dependencies, violations of licenses for used packages, and many other things. The point is that all of this should be encapsulated inside your Docker image.
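
One way to keep that encapsulation concrete is to describe the test run itself as a service in the same docker-compose.yml, so automation only ever executes one fixed command against the black box. A minimal sketch, assuming hypothetical make targets that wrap the whole check suite:

    # Sketch: a dedicated "test" service; the command encapsulates unit/functional tests,
    # linting, duplication and dependency checks, whatever your image provides.
    services:
      test:
        build: .
        command: sh -c "make lint && make test"   # hypothetical targets
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on:
          - db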

Systems delivery


It doesn’t matter when and where you want to install your project. The result, just like the installation process, should always be the same. There is also no difference as to which part of the whole ecosystem you’re going to be installing or which git repo you’ll be getting it from. The most important component here is idempotence. The only thing that you should specify is the variables that control the installation.

Here’s the algorithm that seems to me quite effective at solving this problem:

  1. Collect images from all of your Dockerfiles (for example, like this)
  2. Using a meta-project, deliver these images to Kubernetes via Kube API. Initiating a delivery usually requires several input parameters:
  • Kube API endpoint
  • a “secret” object that varies for different contexts (local/showroom/staging/production)
  • the names of the systems to display and the tags of the Docker images for these systems (obtained at the previous step)

As an example of a meta-project that encompasses all systems and services (in other words, a project that describes how the ecosystem is arranged and how updates are delivered to it), I prefer to use Ansible playbooks with this module for integration with the Kube API. However, sophisticated automators can refer to other options, and I’ll dwell on my own choices later. Whatever you pick, you have to think of a centralized/unified way of managing the architecture. Having one will let you conveniently and uniformly manage all services/systems and neutralize any complications that the upcoming jungle of technologies and systems performing similar functions may throw at you.
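
Purely as an illustration of such a meta-project task (the module name and parameters follow the stock Ansible Kubernetes module, but the names, namespace and registry are assumptions), a delivery step might look roughly like this:

    # Sketch of an Ansible task that applies a Deployment through the Kube API.
    # The endpoint, the "secret" context object and the image tag come from the
    # input parameters listed above.
    - name: Deliver the orders service to the target context
      k8s:
        state: present
        kubeconfig: "{{ kubeconfig_path }}"        # selects the Kube API endpoint/context
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: orders
            namespace: "{{ target_namespace }}"    # local / showroom / staging / production
          spec:
            replicas: 2
            selector:
              matchLabels: { app: orders }
            template:
              metadata:
                labels: { app: orders }
              spec:
                containers:
                  - name: orders
                    image: "registry.example.com/orders:{{ image_tag }}"   # tag from the build step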

Typically, an installation of the environment is required in:

  • “ShowRoom”: for some manual checks or debugging of the system
  • “Staging”: for near-live environments and integrations with external systems (usually located in the DMZ, as opposed to ShowRoom)
  • “Production”: the actual environment for the end user

Continuity in integration and delivery


If you have a unified way of testing Docker images, or “black boxes”, you can assume that the results of such tests would allow you to seamlessly (and with a clear conscience) integrate the feature branch into the upstream or master branches of your git repository.

Perhaps the only deal breaker here is the sequence of integration and delivery. When there are no releases, how do you prevent a “race condition” on one system with a set of parallel feature branches?

Therefore, this process should be started only when there’s no competition, otherwise the “race condition” will keep haunting your mind:

  1. Try to update the feature-branch to upstream (git rebase/merge)
  2. Build images from Dockerfiles
  3. Test all the built images
  4. Start and wait until the systems with the images from step 2 are delivered
  5. If the previous step failed, roll back the eco-system to the previous state
  6. Merge the feature branch into upstream and send it to the repository

Any failure at any step should terminate the delivery process and return the task to the developer to fix the error, whether it’s a failed test or a merge conflict.

You can use this process to work with more than one repository. Just do each step for all repositories at once (step 1 for repos A and B, step 2 for repos A and B, and so on), instead of doing the whole process repeatedly for each individual repository (steps 1–6 for repo A, steps 1–6 for repo B, and so on).

In addition, Kubernetes allows you to roll out updates in parts for carrying out various A/B tests and risk analysis. Kubernetes does it internally by separating services (access points) and applications. You can always balance the new and the old versions of a component in the desired proportion to facilitate problem analysis and make way for a potential rollback.
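
As a hedged sketch of that balancing idea (the names and replica counts are hypothetical): the Service below selects only on the shared app label, so pods from the old and new Deployments both receive traffic, roughly in proportion to their replica counts.

    # Sketch: the Service ignores the version label, so orders-v1 and orders-v2 pods
    # share traffic; shifting replicas between the two Deployments shifts the proportion.
    apiVersion: v1
    kind: Service
    metadata:
      name: orders
    spec:
      selector:
        app: orders                  # version label intentionally omitted
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-v2
    spec:
      replicas: 1                    # start small, grow as confidence increases
      selector:
        matchLabels: { app: orders, version: v2 }
      template:
        metadata:
          labels: { app: orders, version: v2 }
        spec:
          containers:
            - name: orders
              image: registry.example.com/orders:2.0.0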

Rollback systems

One of the mandatory requirements for an architectural framework is the ability to reverse any deployment. This, in turn, entails a number of explicit and implicit nuances. Here are some of the most important of them:

  • A service should be able to set up its environment as well as roll back changes. For example, database migrations, RabbitMQ schemas and so on.
  • If it’s not possible to roll back the environment, the service should be polymorphic and support both the old and the new versions of the code. For example: database migrations shouldn’t disrupt the old versions of the service (usually the 2 or 3 past versions)
  • Backwards compatibility of any service update. Usually, this is API compatibility, message formats and so on.

It is fairly simple to roll back states in a Kubernetes cluster (run kubectl rollout undo deployment/some-deployment and Kubernetes will restore the previous “snapshot”), but for this to work, your meta-project should contain information about this snapshot. More complex delivery rollback algorithms are highly discouraged, although they are sometimes necessary.

Here’s what can trigger the rollback mechanism:

Ensuring information security and audit

There is no single workflow that can magically “build” bulletproof security and protect your ecosystem from both external and internal threats, so you need to make sure that your architectural framework is executed with an eye on the standards and security policies of the company at each level and in all subsystems.

I will address all three levels of the proposed solution later, in the section about monitoring and alerting, which themselves also happen to be critical to system integrity.

Kubernetes has a set of good built-in mechanisms for access control, network policies, and audit of events, as well as other powerful tools related to information security, which can be used to build an excellent perimeter of protection that can resist and prevent attacks and data leaks.
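
As one small example of those built-in mechanisms (the names below are hypothetical), a NetworkPolicy can ensure that only the API gateway is allowed to reach a service’s pods:

    # Sketch: only pods labelled app=api-gateway may reach the orders pods on port 8080.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: orders-allow-gateway-only
      namespace: production
    spec:
      podSelector:
        matchLabels:
          app: orders
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: api-gateway
          ports:
            - protocol: TCP
              port: 8080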

From development workflows to architecture

An attempt to build a tight integration between the development workflows and the ecosystem should be taken seriously. Adding a requirement for such integration to the traditional set of requirements for an architecture (flexibility, scalability, availability, reliability, protection against threats, and so on) can greatly increase the value of your architectural framework. It is so crucial an aspect that it has resulted in the emergence of a concept called “DevOps” (Development + Operations), which is a logical step towards total automation and optimization of the infrastructure. However, given a well-designed architecture and reliable subsystems, DevOps tasks can be minimized.

Micro-service architecture

There is no need to go into details about the benefits of a service-oriented architecture (SOA), including why services should be “micro”. I will only say that if you have decided to use Docker and Kubernetes, then you most likely understand (and accept) that it’s difficult and even ideologically wrong to have a monolithic architecture. Designed to run a single process and work with persistence, Docker forces us to think within the DDD (Domain-Driven Design) framework. In Docker, packed code is treated as a black box with some exposed ports.

Critical components and solutions of the ecosystem

From my experience of designing systems with an increased availability and reliability, there are several components that are crucial to the operation of micro-services. I’ll list and talk about each of these components later, and even though I’ll be referring to them in the context of a Kubernetes environment, you can refer to my list as a checklist for any other platform.

If you (like me) come to the conclusion that it’d be great to manage each of these components as a regular Kubernetes service, then I’d recommend running them in a separate cluster other than “production”, for example a “staging” cluster, because it can save your life when the production environment is unstable and you desperately need a source of its images, code, or monitoring tools. That solves the chicken-and-egg problem, so to speak.

Identity service


As usual, it all starts with access: to servers, virtual machines, applications, office mail, and so on. If you are or want to be a client of one of the major enterprise platforms (IBM, Google, Microsoft), the access issue will be handled by one of the vendor’s services. But what if you want to have your own solution, managed only by you and within your budget?

This list should help you decide on the appropriate solution and estimate the effort required to set up and maintain it. Of course, your choice must be consistent with the company’s security policy and approved by the information security department.

Automated service provisioning


Although Kubernetes requires only a handful of components on the physical machines/cloud VMs (docker, kubelet, kube-proxy, an etcd cluster), you still need to automate the addition of new machines and cluster management. Here are a few simple ways to do it:

  • KOPS: this tool allows you to install a cluster on one of the two cloud providers, AWS or GCE
  • Terraform: this lets you manage the infrastructure for any environment, and follows the ideology of IaC (Infrastructure as Code)
  • Ansible: a versatile tool for automation of any kind

Personally, I prefer the third option (with a little Kubernetes integration module), since it allows me to work with both servers and k8s objects and carry out any kind of automation. However, nothing stops you from using Terraform and its Kubernetes module. KOPS doesn’t work well with “bare metal”, but it’s still a great tool to use with AWS/GCE.

Log collection and analysis


The only way for any Docker container to make its logs accessible is to write them to the STDOUT or STDERR of the root process running in the container. The service developer doesn’t really care what happens next with the log data; the main thing is that the logs should be available when necessary and preferably contain records reaching back to a certain point in the past. All responsibility for fulfilling these expectations lies with Kubernetes and the engineers who support the ecosystem.

In the official documentation, you can find a description of the basic (and good) strategy for working with logs, which will help you choose a service for aggregation and storage of huge volumes of text data.

Among the recommended services for a logging system, the same documentation mentions fluentd for collecting data (when launched as an agent on each node of the cluster) and Elasticsearch for storing and indexing the data. You may disagree with the efficiency of this solution, but it’s reliable and easy to use, so I think it’s at least a good start.

Elasticsearch is a resource-intensive solution but it scales well and has ready-made Docker images to run both an individual node and a cluster of a required size.
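
A minimal sketch of the agent half of that strategy, assuming a stock fluentd image that ships node logs to an in-cluster Elasticsearch (the image name, namespace and endpoint are assumptions):

    # Sketch: fluentd as a DaemonSet, one agent per node, shipping logs to Elasticsearch.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: logging
    spec:
      selector:
        matchLabels: { app: fluentd }
      template:
        metadata:
          labels: { app: fluentd }
        spec:
          containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch   # assumed image
              env:
                - name: FLUENT_ELASTICSEARCH_HOST
                  value: elasticsearch.logging.svc.cluster.local
                - name: FLUENT_ELASTICSEARCH_PORT
                  value: "9200"
              volumeMounts:
                - name: varlog
                  mountPath: /var/log        # container stdout/stderr ends up under the node's log path
          volumes:
            - name: varlog
              hostPath:
                path: /var/log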

Tracing system


As perfect as your code can be, failures do happen and then you want to study them with a fine-tooth comb on production and try to understand “what the hell went wrong if everything worked fine on my local machine?”. Slow database queries, improper caching, slow disks or connectivity with an external resource, transactions in the ecosystem, bottlenecks and under-scaled computing services are some of the reasons why you will be forced to track and estimate the time spent executing your code under a real load.

OpenTracing and Zipkin cope with this task for most modern programming languages without adding any extra burden beyond instrumenting the code. Of course, all the collected data should be stored in a suitable place, which becomes one of the ecosystem’s components.

The complexities that arise when instrumenting the code and forwarding “Trace Id” through all the services, message queues, databases, and so on are solved by the above-mentioned development standards and service templates. The latter also take care of uniformity of the approach.

Monitoring and alerting


Prometheus has become the de facto standard in modern systems and, more importantly, it is supported in Kubernetes almost out of the box. You can refer to the official Kubernetes documentation to find out more about monitoring and alerting.

Monitoring is one of the few auxiliary systems that must be installed inside a cluster. And the cluster is an entity that is subject to monitoring. But monitoring of a monitoring system (pardon the tautology) can only be performed from the outside (for example, from the same “staging” environment). In this case, cross-checking comes in handy as a convenient solution for any distributed environment, which wouldn’t complicate the architecture of your highly unified ecosystem.

The whole range of monitoring is divided into three completely logically isolated levels. Here are what I think are the most important examples of tracking points at each level:

  • Physical level: network resources and their availability; disks (I/O, available space); basic resources of individual nodes (CPU, RAM, LA)
  • Cluster level: availability of the main cluster systems on each node (kubelet, kube API, DNS, etcd, and so on); the number of free resources and their uniform distribution; monitoring of permitted vs. actual resource consumption by services; reloading of pods
  • Service level: any kind of application monitoring, from database contents to the frequency of API calls; number of HTTP errors on the API gateway; size of the queues and utilization of the workers; multiple metrics for the database (replication lag, time and number of transactions, slow requests and more); error analysis for non-HTTP processes; monitoring of requests sent to the log system (you can transform any requests into metrics)

As for alert notifications at each level, I’d recommend using one of the countless external services that can send notifications to email, SMS or make calls to a mobile number. I’ll also mention another system, OpsGenie, which has close integration with the Prometheus Alertmanager.

OpsGenie is a flexible alerting tool which helps to deal with escalations, round-the-clock duty, notification channel selection and much more. It’s also easy to distribute alerts among teams. For example, different levels of monitoring should send notifications to different teams/departments: physical: Infra + DevOps; cluster: DevOps; application: each to the relevant team.
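
For instance, a service-level alert from the list above could be expressed as a Prometheus alerting rule like the following sketch; the metric and label names depend entirely on your exporters and your Alertmanager routing (for example to an OpsGenie receiver), so treat them as assumptions:

    # Sketch: alert when more than 5% of API gateway requests fail over 10 minutes.
    groups:
      - name: service-level
        rules:
          - alert: ApiGatewayHighErrorRate
            expr: |
              sum(rate(http_requests_total{job="api-gateway", status=~"5.."}[5m]))
                / sum(rate(http_requests_total{job="api-gateway"}[5m])) > 0.05
            for: 10m
            labels:
              severity: critical
              team: application      # used by Alertmanager routing, e.g. to an OpsGenie receiver
            annotations:
              summary: "More than 5% of API gateway requests are failing"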

API gateway and Single Sign-on


To handle tasks such as authorization, authentication, user registration (external users, i.e. clients of the company) and other kinds of access control, you will need a highly reliable service that can maintain flexible integration with your API gateway. There is no harm in using the same solution as for the “Identity service”; however, you may want to separate the two resources to achieve a different level of availability and reliability.

The inter-service integration should not be complicated and your services should not worry about authorization and authentication of users and each other. Instead, the architecture and the ecosystem should have a proxy service that handles all communications and HTTP traffic.

Let’s consider the most suitable way of integration with the API gateway, and hence with your entire ecosystem: tokens. This method is good for all three access scenarios: from the UI, from service to service, and from an external system. The task of obtaining a token (based on the login and password) then lies with the UI itself or with the service developer. It also makes sense to distinguish between the lifetime of tokens used in the UI (a shorter TTL) and in other cases (a longer and custom TTL).

Here are some of the problems that the API gateway resolves:

  • Access to the ecosystem’s services from outside and within (services do not communicate directly with each other)
  • Integration with a Single Sign-on service: transformation of tokens and appending HTTPS requests with headers containing user identification data (ID, roles, other details) for the requested service; enabling/disabling access control to the requested service based on the roles received from the Single Sign-on service
  • Single point of monitoring for HTTP traffic
  • Combining API documentation from different services (for example, combining Swagger’s json/yml files)
  • Ability to manage routing for the entire ecosystem based on the domains and requested URIs
  • Single access point for external traffic, and integration with the access provider

Event bus and Enterprise Integration/Service bus


If your ecosystem contains hundreds of services that work in one macro domain, you will have to deal with thousands of possible ways in which the services can communicate. To streamline data flows, you should think of the ability to distribute messages over a large number of recipients upon the occurrence of certain events, regardless of the contexts of the events. In other words, you need an event bus to publish events based on a standard protocol and to subscribe to them.

As an event bus, you can use any system that can operate as a so-called broker: RabbitMQ, Kafka, ActiveMQ, and others. In general, high availability and consistency of data are critical for microservices, but you’ll still have to sacrifice something to achieve a proper distribution and clustering of the bus, due to the CAP theorem.

Naturally, the event bus should be able to solve all sorts of problems of inter-service communication, but as the number of services grows from hundreds to thousands to tens of thousands, even the best event-bus-based architecture will fail, and you will need to look for another solution. A good example would be the integration bus approach, which can extend the capabilities of the “dumb pipe, smart consumer” tactic described above.

There are dozens of reasons for using the “Enterprise Integration/Service Bus” approach, which aims to reduce the complexities of a service-oriented architecture. Here are just a few of these reasons:

  • Aggregation of multiple messages
  • Splitting of one event into several events
  • Synchronous/transactional analysis of the system response to an event
  • Coordination of interfaces, which is especially important for integration with external systems
  • Advanced logic of event routing
  • Multiple integration with the same services (from the outside and within)
  • Non-scalable centralization of the data bus

As open-source software for an enterprise integration bus, you may want to consider Apache ServiceMix, which includes several components essential for the design and development of this kind of SOA.

Databases and other stateful services


Docker, as well as Kubernetes, has changed the rules of the game once and for all for services that require data persistence and work closely with the disk. Some say that such services should “live” the old way, on physical servers or virtual machines. I respect this opinion and won’t go into arguments about its pros and cons, but I’m fairly certain that such statements exist only because of a temporary lack of knowledge, solutions, and experience in managing stateful services in a Docker environment.

I should also mention that a database often occupies the central place in the storage world, and therefore the solution you select should be fully prepared to work in a Kubernetes environment.

Based on my experience and the market situation, I can distinguish the following groups of stateful services along with examples of the most suitable Docker-oriented solutions for each of them:

Dependency mirrors


If you haven’t yet encountered a situation where the packages or dependencies you need have been removed or made temporarily unavailable on a public server, don’t think that it will never happen. To avoid any unwanted unavailability and provide security for the internal systems, make sure that neither building nor delivery of your services requires an Internet connection. Configure mirroring and copying of all dependencies to the internal network: Docker images, rpm packages, source repositories, python/go/js/php modules.

Each of these and any other types of dependencies have their own solutions. The most common can be found by googling “private dependency mirror for …”.
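
For Docker images specifically, one common option is to run the open-source Docker registry inside your network as a pull-through cache. A hedged sketch of its config.yml (the storage path and upstream URL are assumptions):

    # Sketch: registry config.yml acting as a pull-through cache (mirror) for Docker Hub.
    version: 0.1
    storage:
      filesystem:
        rootdirectory: /var/lib/registry        # cached images live here
    http:
      addr: :5000
    proxy:
      remoteurl: https://registry-1.docker.io   # upstream registry being mirrored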

From architecture to real life


Like it or not, sooner or later your entire architecture will be doomed to failure. It always happens: technologies become obsolete fast (1–5 years), methods and approaches a bit slower (5–10 years), design principles and fundamentals only occasionally (10–20 years), yet inevitably.

Mindful of the obsolescence of technology, always try to keep your ecosystem at the peak of tech innovations, plan and roll out new services to meet the needs of developers, business and end users, promote new utilities to your stakeholders, deliver knowledge to move your team and the company forward.

Stay on top of the game by integrating into the professional community, reading relevant literature and socializing with colleagues. Be aware of your opportunities as well as the correct use of new trends in your project. Experiment and apply scientific methods to analyze the results of your research, or rely on the conclusions of other people that you trust and respect.

It is difficult to prepare yourself for fundamental changes, yet possible if you are an expert in your field. All of us will only witness a few major technological changes throughout our lives, but it’s not the amount of knowledge in our heads that makes us professionals and brings us to the top, it’s the openness of our minds to new ideas and the ability to accept metamorphosis.

Overview: Microsoft Technical Summit 2016 and .NET Developer Conference

I recently gave a talk at http://www.dotnet-developer-conference.de/ about microservices and scalable backends with Azure Service Fabric, and also about a transition path from a legacy application to a scalable application.

Slides: https://1drv.ms/b/s!Akicn3-kM3q2hcpDxdbumEC7gd9usw

Microsoft’s Technical Summit is an event mainly for developers, providing announcements around Microsoft’s developer stack.

Once again, a sensation on the topic of Microsoft and Linux: Microsoft has joined the Linux Foundation and will be a board member there in the future! At the same time, Google has become a member of the .NET Foundation!

Microsoft has unveiled Visual Studio for Mac. The new IDE is not a port of the existing Visual Studio, which runs only on Windows, but an extended version of Xamarin Studio, which Microsoft obtained with its acquisition of Xamarin at the beginning of 2016 and which in turn is based on the free MonoDevelop. As a first step in Microsoft’s “mobile first, cloud first” strategy, the new IDE only supports the development of Xamarin apps for iOS and Android, as well as web applications and REST-based web services with .NET Core. As programming languages, developers can choose between C# and F#.

Visual Studio “15” is now called Visual Studio 2017 and is now available as a release candidate.

Visual Studio 2017 and Visual Studio for Mac now offer a graphical preview for Xamarin.Forms. Until now, developers could write XAML tags in the editor but only saw the result at run time. Designing the UI graphically with the mouse, as developers of Windows Presentation Foundation (WPF), Silverlight or Windows universal apps are used to, is still not possible for a Xamarin.Forms interface.

.NET Core, ASP.NET Core and Entity Framework Core version 1.1 have been released.

ASP.NET Core 1.1 is said to be 15 percent faster and adds numerous features (URL rewriting, response caching, HTTP compression, the new WebListener process host, using view components as tag helpers, new storage locations for sensitive data).

In Entity Framework Core 1.1, Microsoft brings back some programming features from the predecessor Entity Framework 6.1 that were missing in Entity Framework Core 1.0: searching for objects in the cache with Find(), explicit loading of related objects with the Load() method, refreshing already loaded objects with Reload(), and simple querying of object state, old and new, with GetDatabaseValues() and GetModifiedProperties(). Mapping to simple fields instead of properties and the recovery of lost database connections (connection resiliency) with Microsoft SQL Server or SQL Azure are also possible again. Brand new is the ability to map objects to the memory-optimized tables of Microsoft SQL Server 2016. In addition, the makers have simplified the API with which the standard functions of Entity Framework Core can be replaced by custom implementations.

.NET Core 1.1 adds Linux Mint 18, openSUSE 42.1, macOS 10.12 and Windows Server 2016 as supported operating systems. Samsung also delivers .NET Core for Tizen. A further step in the direction of cross-platform compatibility of Microsoft products is .NET Core for the Tizen operating system, which Samsung, a member of the .NET Foundation since June 2016, has developed together with the corresponding Visual Studio tools.

Team Foundation Server “15” becomes TFS 2017 and receives its version number like Visual Studio. It had previously reached the release candidate 2 stage and is now available as an RTM version.

The new Visual Studio Mobile Center is a cloud application that automatically builds source code hosted on GitHub for iOS and Android apps at each commit, tests it on real hardware, and distributes successfully tested app packages to beta testers. Usage analytics and run-time crash analysis are also possible. The supported programming languages are Swift, Objective-C, Java and C# (Xamarin). Support for Cordova and Windows universal apps is planned.

SQL Server for Linux: In March, Microsoft announced that it would in the future also provide SQL Server for Linux. A preview version of the database server is now available for Red Hat Enterprise Linux, Ubuntu Linux, macOS and Windows, as well as for Docker. A version for SUSE Linux Enterprise Server is to follow soon. The latest release is called SQL Server vNext Community Technology Preview and represents an evolution of SQL Server 2016; new features beyond the platform independence were shown in the keynote but are not yet listed on the website.

For SQL Server 2016, Service Pack 1 is now available; in addition to bug fixes, it contains significant improvements to the licensing model.

Xamarin for Free with Visual Studio & Mono Open Source

Visual Studio now includes Xamarin at no extra cost, as announced yesterday at the #Build2016 conference, and that is a huge benefit for developers.

 

  The Xamarin platform will be in every edition of Visual Studio, including the widely available Visual Studio Community Edition, which is free for individual developers, open source projects, academic research, education, and small teams. Now we can develop and publish native apps for iOS and Android with C# or F# directly within Visual Studio with no limits on app size.

For Mac developers, Xamarin Studio is now available as a benefit of Visual Studio Professional or Enterprise subscriptions. Developers can use the newly created Xamarin Studio Community Edition for free.

To give it a try and begin developing iOS and Android apps with the full version of Xamarin and C#, download Xamarin Studio or Xamarin for Visual Studio.

Another big announcement is that the Mono Project is being added to the .NET Foundation, including some previously proprietary mobile-specific improvements to the Mono runtime. Mono will also be re-released under the MIT License to enable an even broader set of uses for everyone. More details are on the Mono Project blog.

The changes to Mono remove all barriers to adopting a modern, performant .NET runtime in any software product, embedded device, or engine, and open the door to easily integrating C# with apps and games on iOS, Android, Mac, and Windows and any emerging platforms developers want to target in the future.

Microsoft’s Developer Division is Expanding :) – With Xamarin

Why Xamarin?

Xamarin is the manufacturer of tools for cross-platform development based on C#. The company also employs the developers who created Mono and Moonlight as open-source alternatives to .NET and Silverlight.

The Californian company Xamarin provides an app-development environment based on the C# programming language and .NET classes. In addition to iOS, OS X, Android and Windows, Xamarin now also supports tvOS, watchOS and PlayStation. The basis for Xamarin’s products is Mono, an open-source implementation of Microsoft’s .NET Framework, which has existed since 2001.

Does what belongs together come together?

Mono is a separate implementation, because at that time Microsoft declared the source code of .NET not as “open source” but as “shared source”: following the motto “just look, don’t touch”, further use of the source code was not allowed. In painstaking work, the Mono Project rebuilt the .NET runtime environment, the C# compiler and a large part of the .NET class library, striving to remain compatible with Microsoft’s model.

  

Now it is official – Microsoft has acquired Xamarin, a manufacturer of tools for cross-platform development

Why two implementations?

.NET developers can now hope that the two platforms will grow closer together. While many base classes are uniform, there has been no sophisticated user-interface technology that runs on all platforms. Instead, with Windows Presentation Foundation (WPF), Windows Runtime XAML and Xamarin.Forms, there are three dialects of the Extensible Application Markup Language (XAML), which are not entirely compatible. Xamarin.Forms at least runs on iOS and Android as well as Windows 10 universal apps, but its development still lies far behind the other XAML dialects.

Also, at a time when Microsoft is developing the .NET Framework as .NET Core, itself open source and platform-neutral, the continuation of Mono as a competitor to .NET Core no longer appears meaningful.

Read more details about the acquisition on Scott Gu’s blog:

https://weblogs.asp.net/scottgu/welcoming-the-xamarin-team-to-microsoft

Best Features of TFS 2015 Update 1

Shortly before Christmas 2015, TFS 2015 Update 1 was released; this release includes some new features that bring more productivity to team development and DevOps.

Here are some of my favorites:

TFVC and Git in the same Team Project

Lots of my customers have TFVC (centralized version control) in TFS. When Git support came out, the only option they had if they wanted to switch to Git was to create a new “Git-based” Team Project and port source code over. Then they got into a horrible situation where work items were all in the TFVC Team Project, and the source code was in the new Git Team Project.

Now, you can simply add a new Git repo to an existing TFVC Team Project! Navigate to the Code hub in Web Access, click the repository drop-down (in the top left of the Code pane) and select “New Repository”:

image

Enter the name of your repo and click Create. You’ll see the new “Empty Git page” (with a handy “Clone in Visual Studio” button):

image

The Repository drop-down now shows multiple repos, each with their corresponding TFVC or Git icon:

image

You can also add TFVC to a Git Team Project! This makes sense if you want to source control large assets. That way you can have your code in Git, and then source control your assets in TFVC, all in the same team project.

If you’re looking for alternatives to supporting large files in Git, then you’ll be pleased to note that VSO supports Git-LFS. Unfortunately, it’s not in this CTP – though it is planned for the Update 1 Release. As a matter of interest, the real issue is the NTLM authentication support for Git-LFS – the product team are going to submit a PR to the GitHub Git-LFS repo so that it should be supported by around the time Update 1 releases.

Query and Notifications on Kanban Column

Customizing Kanban columns is great – no messing in the XML WITD files – just open the settings, map the Kanban column to the work item states, and you’re good to go. But what if you want to query on Kanban column – or get a notification if a work item moves to a particular column? Until now, you couldn’t.

If you open a work item query editor, you’ll see three additional fields that you can query on:

  • Board Column – which column the work item is in. Bear in mind that the same work item could be in different columns for different teams.
  • Board Column Done – corresponding to the “Doing/Done” split
  • Board Lane – the swimlane that the work item is in

image

Not only can you query on these columns, but you can also add alerts using these fields. That way you could, for example, create an alert for “Stories moved to Done in the Testing column in the Expedite Lane”.

Pull Requests in Team Explorer

You’ll need Visual Studio 2015 Update 1 for this to work. Once you have Update 1, you’ll be able to see and create Pull Requests in the Team Explorer Pull Requests tile. You can also filter PRs and select between Active, Completed and Abandoned PRs. You can see PRs you’ve created as well as PRs assigned to you or your team. Clicking a PR opens it up in Web Access:

 

Team Board Enhancements

There’s a lot to discuss under this heading. If you’re using the Kanban boards, you’ll want to upgrade just for these enhancements.

Team Board Settings Dialog

The Board Settings Dialog has been revamped. Now you can customize the cards, columns, CFD and team settings from a single place – not a single “admin” page in sight! Just click the gear icon on the top right of a Backlog board, and the Settings dialog appears:

image

Field Settings

TFS 2015 RTM introduced field customization, so not much has changed here. There’s an additional setting that allows you to show/hide empty fields – if you’ve got a lot of cards, hiding empty fields makes the cards smaller where possible, allowing more cards on the board than before.

Customisable Styles

You can now set conditional styling on the cards. For example, I’ve added some style rules that color my cards red (redder and reddest) depending on the risk:

image

You can drag/drop the rules (they fire in order) and of course you can add rules for multiple fields and conditions. You can change the card color and/or the title color (and font style) if the condition matches. Here’s my board after setting the styles:

image
Tag Coloring

You can now colorize your tags. You can see the iPhone and WindowsPhone tags colored in the board above because of these settings:

image
Team Board Settings

Under board settings, you’ll be able to customize the Columns (and their state mappings), the Doing/Done split, and the Definition of Done. Again you’ll see a drag/drop theme allowing you to re-order columns.

image

The same applies to the swimlanes configuration.

As a bonus, you can rename a Kanban column directly on the board by clicking the header:

image

Charts and General

Under “Charts” and “General” you’ll be able to configure the CFD chart as well as the Backlogs (opt in/out of backlogs), Working Days and how your bugs appear (Backlogs or Task boards or neither). These settings used to be scattered around the UI, so it’s great to have a single place to set all of these options.

Tasks as Checklists

If you use Tasks as checklists, then this is a great new feature. Each Story (or Requirement or PBI, depending on your template) card now shows how many child Tasks it has. Clicking on the indicator opens up the checklist view:

image

You can drag/drop to reorder, check to mark complete and add new items.

Task Board Enhancements

The Task board also gets some love – conditional styling (just like the Kanban cards) as well as the ability to add a Task to a Story inline.

More Activities Per Team Member

You can now set multiple activities per team member. I’ve always thought that this feature has been pretty limited without this ability:

image

Now you have a real reason to use the Activity field on the Task! The Task burn down now also shows actual capacity in addition to the ideal trend:

image

As a bonus, you can now also add new Team members directly from the Capacity page – without having to open up the Team administration page.

Team Dashboards

The “old” Home page (or Team Landing page) let you have a spot to pin charts or queries or build tiles. However, you couldn’t really customize how widgets were positioned, and if you had a lot of favorites, the page got a little cluttered. Enter Dashboards. You can now create a number of Dashboards and customize exactly which widgets appear (and where). Here I’m creating a new “Bugs” dashboard that will only show Bug data. Once you’ve created the Dashboard, just click the big green “+” icon on the lower right to add widgets:

image

Once you’ve added a couple of widgets, you can drag/drop them around to customize where they appear. Some widgets require configuration – like this “Query Tile” widget, where I am selecting which query to show as well as title and background color:

image

Here I’m customizing the Query widget:

image

You can see how the widgets actually preview the changes as you set them.

To add charts to a Dashboard, you need to go to the Work|Queries pane, then select the chart and add it to the Dashboard from the fly-out menu:

image

Similarly, to add a Build widget to the Dashboard you need to navigate to builds and add it to the Dashboard of your choice from the list of Builds on the left.

Now I have a really cool looking Bugs Dashboard!

image

Test Result Retention Policies

There is a tool for cleaning up test results (the Test Attachment Cleaner in the TFS Power Tools), but most users only run it when space starts getting low. Now you can set retention policies that allow TFS to clean up old runs, results and attachments automatically. Open up the administration page and navigate to the Test tab:

image

Team Queries: Project-Scoping Work Item Types and States

If you have multiple Team Projects, and at least one of them uses a different template, then you’ll know that it can be a real pain when querying, since you get all the work item types and all the states – even if you don’t need them. For example, I’ve got a Scrum project and an Agile project. In RTM, when I created a query in the Agile project, the Work Item types drop-down lists Product Backlog items too (even though they’ll never be in my Agile Team Project). Now, by default, only Work Item Type (and States) that appear in your Team Project show in the drop-down lists. If you want to see other work item types, then you’re doing a “cross-Project” query and there’s an option to turn that on (“Query across projects”) to the top-right of the query editor:

image

Policies for Work Item Branch

Now, in addition to Build and Code Review policies for Pull Requests in Git branch policies, you can also require that the commits are linked to work items:

image

You can also just link the PR to a work item to fulfill the policy.

Labeling and Client-side workspace mapping in Builds

The build agent gets an update, and there are some refreshed Tasks (including SonarQube begin and end tasks). More importantly, you can now label your sources on (all or just successful) builds:

image

Also, if you’re building from a TFVC repo, you can now customize the workspace mapping:

image

And stay tuned, because some other features that are in VSO now will come in Update 2, like:

 

Build widgets in the catalog

As Karen wrote about in the dashboards futures blog, one area we’re focusing on is improving the discoverability and ease in bringing different charts to your dashboard. With this update, you’ll see a new option to add a build history chart from the dashboard catalog, and you’ll be able to configure the build definition displayed directly from the dashboard.

Adding a build history chart from the dashboard catalog

Markdown widget with file from repository

The first version of the markdown widget allowed custom markdown stored inside the widget. You can now choose to display any markdown file in your existing repository.

Selecting a markdown file for display in the widget

Or add the file to any dashboard in your team project directly from the Code Explorer.

Displaying a markdown file in a dashboard
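Under the covers the widget is just rendering a file from your repo, so you can pull the same content yourself with the Git Items REST API. This is a sketch with placeholders (repo name, path, account, token); depending on the api-version you may need an extra parameter such as download or includeContent to get the raw text rather than item metadata.

```python
# Sketch: fetch the markdown file the widget points at via the Git Items API.
# Repo, path, account, and PAT are placeholders; content vs. metadata behaviour
# depends on the api-version and extra query parameters.
import requests

account = "https://myaccount.visualstudio.com"
project, pat = "MyProject", "<personal-access-token>"
repo, path = "MyRepo", "/docs/TeamDashboard.md"  # hypothetical file

url = f"{account}/{project}/_apis/git/repositories/{repo}/items"
resp = requests.get(url, params={"path": path, "api-version": "1.0"},
                    auth=("", pat))
resp.raise_for_status()
print(resp.text)  # the markdown the widget renders
```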

Check out all the January features in detail for Visual Studio Team Services:

https://www.visualstudio.com/news/2016-jan-25-vso

Azure Stack vs Azure Pack

Microsoft announced Azure Stack at its Ignite event last year, for running something like Azure on-premises, but how does it differ from the existing Azure Pack, which kind of does the same thing?

The answer goes to the heart of how Microsoft is changing to become a cloud-first company, at least within its own special meaning of “cloud”. Ignite attendees heard about new versions of Windows Server, SharePoint, Exchange and SQL Server, and the common thread running through all these announcements is that features first deployed in Office 365 or Azure are now coming to the on-premises editions.

Why Azure Pack and Azure Stack?

We are all living in a cloud computing world now, and IT people talk about the “cloud” more and more often. Microsoft Azure is at the top of the list of providers of proven, stable cloud services. It includes IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service) and many more cloud-related services. As we all know, Azure has been very successful in terms of availability, security, performance and so on. But most enterprises and businesses have already invested a lot in building their own infrastructure, and this is even more true for managed service providers. So instead of moving every service to the cloud, people have become more interested in a hybrid-cloud model: some services use the public cloud while, at the same time, other services run from the datacenter.

To address the hybrid-cloud model, Microsoft decided to bring Azure technologies to the public so companies could use the same technologies used in Azure in their own datacenters. The result was “Azure Pack”.

According to Microsoft,

Windows Azure Pack provides a multi-tenant, self-service cloud that works on top of your existing software and hardware investments. Building on the familiar foundation of Windows Server and System Center, Windows Azure Pack offers a flexible and familiar solution that your business can take advantage of to deliver self-service provisioning and management of infrastructure — Infrastructure as a Service (IaaS), and application services — Platform as a Service (PaaS), such as Web Sites and Virtual Machines.

Windows-Azure-Pack_452x298

This was a big relief for MSPs, as they could offer a portal to their customers to manage their resources efficiently.

Azure Pack depends mainly on infrastructure running Windows Server and System Center. It uses System Center Virtual Machine Manager to manage virtual machines, and System Center Service Provider Foundation to integrate the related operations between the portals and the services. The following are some great features of Azure Pack:

1.    Portal for tenants to manage their resources
2.    Portal for system administrators to manage cloud services and tenants
3.    Automation using runbooks
4.    Service Bus feature to provide reliable messaging between applications
5.    Database services (MSSQL, MySQL)
6.    Web Sites service to set up a scalable web hosting platform
7.    Console Connect feature to connect to a VM remotely even when a physical network interface is not available
8.    Multi-factor authentication using ADFS

Why Azure Stack?

Azure Pack was the first big step down this path, but the technology keeps changing every day. With the new version of Windows Server, software-defined storage and software-defined networking bring revolutionary change. The answer to these new requirements is Azure Stack. Microsoft keeps sharpening the Azure platform, and with Azure Stack it brings the same proven cloud capabilities to the hybrid cloud.

Azure Pack was “an effort to replicate the cloud experience,” Microsoft’s Ryan O’Hara (senior director, product management) told the press at Ignite. By contrast, Azure Stack is “a re-implementation of not only the experience but the underlying services, the management model as well as the datacenter infrastructure.”

In other words, there is more Azure and less System Center in Stack versus Pack, and that is a good indication of Microsoft’s direction. That said, Microsoft’s Azure Stack slide says “powered by Windows Server, System Center and Azure technologies,” so we should expect bits of System Center to remain.

According to Mike Neil, General Manager for Windows Server at Microsoft:

Microsoft Azure Stack extends the agile Azure model of application development and deployment to your datacenter. Azure Stack delivers IaaS and PaaS services into your datacenter so you can easily blend your enterprise applications such as SQL Server, SharePoint, and Exchange with modern distributed applications and services while maintaining centralized oversight. Using Azure Resource Manager (just released in preview last week), you get consistent application deployments every time, whether provisioned to Azure in the public cloud or Azure Stack in a datacenter environment. This approach is unique in the industry and gives your developers the flexibility to create applications once and then decide where to deploy them later – all with role-based access control to meet your compliance needs.

Built on the same core technology as Azure, Azure Stack packages Microsoft’s investments in automated and software-defined infrastructure from our public cloud datacenters and delivers them to you for a more flexible and secure datacenter environment. For example, Azure Stack includes a scalable and flexible software-defined Network Controller and Storage Spaces Direct with automated sync and failover. Shielded VMs and Guarded Hosts bring “zero-trust” software-defined security to your private cloud so you can securely segment organizations and workloads and centrally control and monitor access and administration rights. Furthermore, Azure Stack will simplify the complex process of deploying private/hosted clouds based on our experience building the Microsoft Cloud Platform System, a converged infrastructure solution.

server-cloud-may4b-1

Azure Pack “depends” on System Center services. Azure Stack, by contrast, will not “depend” on System Center, but it is possible to integrate it with Operations Management Suite and System Center.

Despite this disparity, Microsoft’s general approach seems to be to evolve and optimize server products for Azure and Office 365, and then to trickle down features to the on-premises editions where possible. It therefore pays for developers and admins working on Microsoft’s platform to keep an eye on the cloud platforms, since this is what you will get in a year or two even if you have no intention of becoming a cloud customer.

This approach does make sense, in that characteristics desirable in a cloud product, such as resilience and scalability, are also desirable on premises. It may give you pause for thought though if the pieces you depend on have no relevance in Microsoft’s cloud. We have already seen how the company killed Small Business Server, for which the last full version was in 2011.

That brings us to Azure Stack, the purpose of which is to bring pieces of Azure into your datacenter for your very own Microsoft cloud. The existing Azure Pack already does this, but it is essentially a wrapper for System Center components (especially SCVMM) that allows use of the Azure portal and some other features on premises.

Stay tuned on – https://azure.microsoft.com/en-us/blog

Build your own Cloud with Azure Stack

There are many cloud platforms used to build custom hybrid-cloud scenarios, like Cloud Foundry, which is used quite often by infrastructure providers such as Intel, EMC², VMware and Swisscom. Two days ago another one arrived from Microsoft – Azure Stack, which is great for #hybrid #cloud.

Azure Stack

Microsoft has launched the public preview of Azure Stack, something that has been in TAP for several months now. You can find the download on MSDN right now or from the download link.

image

Azure Stack is a collection of software technologies that Microsoft uses for its Azure cloud computing infrastructure. It consists of “operating systems, frameworks, languages, tools and applications we are building in Azure” that are being extended to individual datacenters, Microsoft explained in the white paper. However, Azure Stack is specifically designed for enterprise and service provider environments.

For instance, Microsoft has to scale its Azure infrastructure as part of operations. That’s done at a minimum by adding 20 racks of servers at a time. Azure Stack, in contrast, uses management technologies “that are purpose built to supply Azure Service capacity and do it at enterprise scale,” Microsoft’s white paper explained.

Azure Stack has four main layers, starting with a Cloud Infrastructure layer at its base, which represents Microsoft’s physical datacenter capacity (see chart).

The Azure Stack software.

Next up the stack there’s an Extensible Service Framework layer. It has three sublayers. The Foundational Services sublayer consists of solutions needed to create things like virtual machines, virtual networks and storage disks. The Additional Services sublayer provides APIs for third-party software vendors to add their services. The Core Services sublayer includes services commonly needed to support both PaaS and IaaS services.

The stack also contains a Unified Application Model layer, which Microsoft describes as a fulfillment service for consumers of cloud services. Interactions with this layer are carried out via Azure Resource Manager, which is a creation tool for organizations using cloud resources. Azure Resource Manager also coordinates requests for Azure services.
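To make the “write the template once, deploy it to Azure or Azure Stack” idea concrete, here is a minimal sketch of a template deployment through the ARM REST API. The subscription id, resource group, token and the Azure Stack endpoint URL are placeholders, and different api-versions use slightly different template schemas, so treat this as an illustration rather than a recipe.

```python
# Sketch: deploy the same ARM template to public Azure or to an Azure Stack
# ARM endpoint. Subscription, resource group, token, and endpoint are placeholders.
import requests

arm_endpoint = "https://management.azure.com"  # or your Azure Stack ARM endpoint
subscription = "<subscription-id>"
resource_group = "demo-rg"
token = "<bearer-token-from-azure-ad>"

template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "name": "demostorage12345",
        "apiVersion": "2015-06-15",
        "location": "[resourceGroup().location]",
        "properties": {"accountType": "Standard_LRS"}
    }]
}

url = (f"{arm_endpoint}/subscriptions/{subscription}/resourcegroups/"
       f"{resource_group}/providers/Microsoft.Resources/deployments/demo-deployment")
body = {"properties": {"template": template, "mode": "Incremental"}}

resp = requests.put(url, json=body, params={"api-version": "2015-01-01"},
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json().get("properties", {}).get("provisioningState"))
```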

You can find tools for integration, deployment and operations here – https://azure.microsoft.com/en-us/documentation/articles/azure-stack-tools-paas-services/#

Microsoft has also released a whitepaper with more information on key Azure Stack concepts and capabilities, which should help you gain a much richer understanding of the approach.

Summary of Microsoft Connect 2015

I have tried to put together most of the Microsoft technology announcements from Connect 2015. Some of the news was updated on 30 November with the RTM releases of .NET 4.6.1, .NET Core and VS 2015 Update 1.

Brian posted some of them here too – http://blogs.msdn.com/b/bharry/archive/2015/11/30/vs-2015-update-1-and-tfs-2015-update-1-are-available.aspx

.NET Team – http://blogs.msdn.com/b/dotnet/archive/2015/12/01/the-week-in-net-12-1-2015.aspx 

The main announcements are below…

  • Visual Studio Code beta release – extensibility support added, now an open source project
  • .NET Core 5 RC and ASP.NET 5 RC with Go-Live license – you can start using them in production; now RTM (updated Dec.)
  • Visual Studio Online is now Visual Studio Team Services – agile team collaboration and DevOps
  • Visual Studio Dev Essentials – priority forum support, Pluralsight, Wintellect, Xamarin (Azure early 2016)
  • Visual Studio cloud subscriptions
    • Monthly subscriptions include the VS Pro or Enterprise IDE and access to VSTS
    • Annual subscriptions include technical support incidents, Azure credits, etc.
  • Visual Studio Marketplace – a central place to find, acquire, and install extensions for all editions of Visual Studio
  • VS 2015 Update 1 and TFS 2015 Update 1 will both ship on November 30th
  • Xamarin 4 support – an end-to-end solution to build, test, and monitor native mobile apps with VS 2015 Update 1
  • iOS builds with MacinCloud on VSTS – currently in preview at $30/month per agent with no limits on build hours

Other Announcements

Read all the details here:

There are also more than 70 on-demand videos with additional details: connect2015

Microsoft Unity DI Container is now Open Source

A few months after it was announced that Prism was handed over to new owners, the Patterns and Practices team spent some time and effort identifying the needs and vision around Unity, and finding owners who would invest in the project and support the community.

 

Going Forward

Be sure to read the official announcement from the new team and follow their work on the new GitHub repo. Let them know what you’d like to see in future releases of Unity and help them continue to grow the community.

Fork it and let’s begin to improve what we need, as I wrote before in another blog post about DI – https://dumians.wordpress.com/2013/10/02/dependency-containers-and-thread-safe-how-to/.
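To close with something concrete about the DI idea itself (not Unity’s actual API – Unity is a .NET container, and this is just an illustrative toy in Python): the register/resolve pattern with a thread-safe singleton lifetime looks roughly like this.

```python
# Toy sketch of the register/resolve DI pattern with a thread-safe singleton
# lifetime. This is NOT Unity's API - just an illustration of the concept.
import threading

class Container:
    def __init__(self):
        self._factories = {}
        self._singletons = {}
        self._lock = threading.Lock()

    def register(self, key, factory, singleton=False):
        self._factories[key] = (factory, singleton)

    def resolve(self, key):
        factory, singleton = self._factories[key]
        if not singleton:
            return factory(self)
        with self._lock:  # guard lazy singleton creation across threads
            if key not in self._singletons:
                self._singletons[key] = factory(self)
        return self._singletons[key]

# Usage: a "repository" registered as a singleton, and a "service" depending on it.
c = Container()
c.register("repo", lambda c: {"connection": "..."}, singleton=True)
c.register("service", lambda c: ("service using", c.resolve("repo")))
print(c.resolve("service"))
```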