Patterns and practices – architecture with Kubernetes and Docker


How has the world of software development changed in the era of Docker and Kubernetes? Is it possible to build an architecture once and for all using these technologies? Is it possible to unify the processes of development and integration when everything is “packed” in containers? What are the requirements for such decisions? What restrictions do they bring into play? Will they make developers’ lives easier or, instead, add unnecessary complications?

It’s time to shed light on these (and some other) questions, in text and original illustrations!

This article will take you on a journey from real life to development processes to architecture and back to real life, giving answers to the most important questions at each of these stops along the way. We will try to identify a number of components and principles that should become part of an architecture, and demonstrate a few examples without diving deep into their implementation.

The conclusion of the article may upset or please you. It all depends on your experience, your perception of this three-chapter story, and perhaps even your mood at the time of reading. Let me know what you think by posting comments or questions below!

From real life to development workflows


For the most part, all development processes that I have ever seen or been honored to set up served one simple goal: reduce the time between the birth of an idea and its delivery to production, while maintaining a certain degree of code quality.

It doesn’t matter whether the idea is good or bad. Bad ideas come and go fast: you just try them and discard them, letting them disintegrate. What’s worth mentioning here is that rolling back from a bad idea falls on the shoulders of a robot that automates your workflow.

Continuous integration and delivery seem like a life saver in the world of software development. What can be simpler than that, after all? You have an idea, you have the code, so go for it! It would’ve been flawless if not for a slight problem: the process of integration and delivery is rather difficult to formalize in isolation from the technology and business processes that are specific to your company.

However, despite the seeming complexity of the task, life constantly throws in excellent ideas that bring us (well, myself for sure) a little closer to building a flawless mechanism that can be useful in almost any occasion. The most recent step toward such a mechanism for me has been Docker and Kubernetes, whose level of abstraction and ideological approach made me think that 80% of issues can now be solved using practically the same methods.

The remaining 20% obviously didn’t go anywhere. But this is exactly where you can focus your inner creative genius on interesting work, rather than deal with the repetitive routine tasks. Taking care of the “architectural framework” just once will let you forget about the 80% of solved problems.

What does all of this mean, and how exactly does Docker solve the problems of the development workflow? Let’s look at a simple process, which also happens to be sufficient for a majority of work environments:


With the right approach, you can automate and unify everything in the sequence below and forget about it for months to come.

Setting up a development environment


A project should contain a docker-compose.yml file, which can spare you the trouble of thinking about what you need to do and how to run the application/service on the local machine. A simple docker-compose up command should start your application with all its dependencies, populate the database with fixtures, mount the local code inside the container, enable on-the-fly compilation of the code, and eventually start responding at the expected port. Even when setting up a new service, you needn’t worry about how to start it, where to commit changes, or which frameworks to use. All of this should be described in advance in the standard instructions and dictated by the service templates for the different setups: frontend, backend, and worker.
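
To make this concrete, here is a minimal sketch of what such a docker-compose.yml might look like. The service names, images, and helper scripts are hypothetical and only illustrate the idea:

    version: "3"
    services:
      backend:                      # hypothetical service name
        build: .
        command: ./run-dev.sh       # assumed entry script that enables on-the-fly compilation
        volumes:
          - ./:/app                 # mount the local code into the container
        ports:
          - "8080:8080"             # the port the service is expected to respond on
        depends_on:
          - db
      db:
        image: postgres:11          # whatever database the service depends on
        environment:
          POSTGRES_DB: app
      fixtures:
        build: .
        command: ./load-fixtures.sh # hypothetical script that populates the database with fixtures
        depends_on:
          - db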

Automated testing


All you want to know about the “black box” (more on why I call the container this will follow later in the text) is that everything’s all right inside it. Yes or no. 1 or 0. With a finite number of commands that can be executed inside the container, and a docker-compose.yml describing all of its dependencies, you can easily automate and unify testing without focusing too much on the implementation details.

For example, like this!

Here, testing means not only and not so much unit testing as functional testing, integration testing, code style and duplication checks, checking for outdated dependencies, license violations in the used packages, and many other things. The point is that all of this should be encapsulated inside your Docker image.
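
A rough sketch of how such a unified entry point might look (the file and script names are hypothetical): the same project can carry a docker-compose override whose only job is to run the whole test suite inside the image, so CI needs just one command per repository, e.g. docker-compose -f docker-compose.test.yml run --rm tests.

    # docker-compose.test.yml – a hypothetical override used by CI
    version: "3"
    services:
      tests:
        build: .
        command: ./run-tests.sh   # assumed wrapper: unit + functional tests, lint, license and dependency checks
        depends_on:
          - db
      db:
        image: postgres:11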

Systems delivery


It doesn’t matter when and where you want to install your project. The result, just like the installation process, should always be the same. There is also no difference as to which part of the whole ecosystem you’re going to be installing or which git repo you’ll be getting it from. The most important component here is idempotence. The only thing that you should specify is the variables that control the installation.

Here’s the algorithm that seems to me quite effective at solving this problem:

  1. Build images from all of your Dockerfiles (for example, like this)
  2. Using a meta-project, deliver these images to Kubernetes via Kube API. Initiating a delivery usually requires several input parameters:
  • Kube API endpoint
  • a “secret” object that varies for different contexts (local/showroom/staging/production)
  • the names of the systems to deploy and the tags of the Docker images for these systems (obtained at the previous step)

As an example of a meta-project that encompasses all systems and services (in other words, a project that describes how the ecosystem is arranged and how updates are delivered to it), I prefer to use Ansible playbooks with this module for integration with the Kube API. Sophisticated automators can refer to other options, and I’ll dwell on my own choices later. In any case, you have to think of a centralized/unified way of managing the architecture. Having one will let you conveniently and uniformly manage all services/systems and neutralize any complications that the upcoming jungle of technologies and systems performing similar functions may throw at you.
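
To illustrate the idea (this is only a sketch; the module invocation and variable names are assumptions rather than the exact module the author links to), a delivery task in such a playbook could look roughly like this:

    # A minimal meta-project task: apply a templated manifest through the Kube API
    - name: Deliver a system to the cluster
      kubernetes.core.k8s:
        host: "{{ kube_api_endpoint }}"      # the Kube API endpoint passed as an input parameter
        api_key: "{{ kube_api_token }}"      # credentials taken from the per-context "secret" object
        namespace: "{{ target_namespace }}"
        state: present
        definition: "{{ lookup('template', 'deployment.yml.j2') }}"  # rendered with the image tags from the build step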

Typically, an installation of the environment is required in:

  • “ShowRoom”: for some manual checks or debugging of the system
  • “Staging”: for near-live environments and integrations with external systems (usually located in the DMZ as opposed to ShowRoom)
  • “Production”: the actual environment for the end user

Continuity in integration and delivery


If you have a unified way of testing Docker images, or “black boxes”, you can assume that the results of such tests allow you to seamlessly (and with a clear conscience) integrate a feature branch into the upstream or master branches of your git repository.

Perhaps the only deal breaker here is the sequence of integration and delivery. When there are no releases, how do you prevent a “race condition” on one system with a set of parallel feature branches?

Therefore, this process should be started only when there’s no competition, otherwise the “race condition” will keep haunting your mind:

  1. Try to update the feature-branch to upstream (git rebase/merge)
  2. Build images from Dockerfiles
  3. Test all the built images
  4. Start and wait until the systems with the images from step 2 are delivered
  5. If the previous step failed, roll back the eco-system to the previous state
  6. Merge the feature branch into upstream and push it to the repository

Any failure at any step should terminate the delivery process and return the task to the developer to fix the error, whether it’s a failed test or a merge conflict.

You can use this process to work with more than one repository. Just do each step for all repositories at once (step 1 for repos A and B, step 2 for repos A and B, and so on), instead of doing the whole process repeatedly for each individual repository (steps 1–6 for repo A, steps 1–6 for repo B, and so on).
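
As a hedged sketch, here is how the sequence above might map onto a CI configuration. The syntax is GitLab-CI-like YAML, and all stage and script names are hypothetical; any CI server with sequential stages will do:

    stages:
      - rebase
      - build
      - test
      - deliver
      - merge

    rebase:
      stage: rebase
      script: ./ci/rebase-on-upstream.sh     # step 1: git rebase/merge onto upstream

    build:
      stage: build
      script: ./ci/build-images.sh           # step 2: build images from the Dockerfiles

    test:
      stage: test
      script: ./ci/test-images.sh            # step 3: test all the built images

    deliver:
      stage: deliver
      script: ./ci/deliver.sh                # step 4: deliver the systems with the new images
      after_script:
        - ./ci/rollback-on-failure.sh        # step 5: roll the ecosystem back if delivery failed

    merge:
      stage: merge
      script: ./ci/merge-to-upstream.sh      # step 6: merge the feature branch into upstream and push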

In addition, Kubernetes allows you to roll out updates in parts to carry out various A/B tests and risk analysis. Kubernetes does this internally by separating services (access points) and applications. You can always balance the new and the old versions of a component in the desired proportion to facilitate problem analysis and make way for a potential rollback.
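
A minimal sketch of that mechanism (all names and labels are hypothetical): the Service selects pods only by the app label, so both Deployments receive traffic roughly in proportion to their replica counts.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-component
    spec:
      selector:
        app: my-component          # matches both the stable and the canary pods
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-component-stable
    spec:
      replicas: 9                  # roughly 90% of the traffic
      selector:
        matchLabels: { app: my-component, track: stable }
      template:
        metadata:
          labels: { app: my-component, track: stable }
        spec:
          containers:
            - name: app
              image: registry.local/my-component:1.0.0   # hypothetical image
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-component-canary
    spec:
      replicas: 1                  # roughly 10% of the traffic goes to the new version
      selector:
        matchLabels: { app: my-component, track: canary }
      template:
        metadata:
          labels: { app: my-component, track: canary }
        spec:
          containers:
            - name: app
              image: registry.local/my-component:1.1.0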

Rollback systems

One of the mandatory requirements for an architectural framework is the ability to reverse any deployment. This, in turn, entails a number of explicit and implicit nuances. Here are some of the most important of them:

  • A service should be able to set up its environment as well as roll back changes. For example, database migrations, RabbitMQ schemas, and so on.
  • If it’s not possible to roll back the environment, the service should be polymorphic and support both the old and the new versions of the code. For example: database migrations shouldn’t disrupt the old versions of the service (usually, the 2 or 3 previous versions)
  • Backwards compatibility of any service update. Usually, this means API compatibility, message formats, and so on.

It is fairly simple to roll back states in a Kubernetes cluster (run kubectl rollout undo deployment/some-deployment and Kubernetes will restore the previous “snapshot”), but for this to work, your meta-project should contain information about this snapshot. More complex delivery rollback algorithms are highly discouraged, although they are sometimes necessary.
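
For the built-in mechanism to have something to return to, the Deployment simply has to keep enough revision history. A short sketch (the limit and image are arbitrary examples):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: some-deployment
    spec:
      revisionHistoryLimit: 10   # how many old ReplicaSets ("snapshots") Kubernetes keeps for rollbacks
      selector:
        matchLabels: { app: some-app }
      template:
        metadata:
          labels: { app: some-app }
        spec:
          containers:
            - name: app
              image: registry.local/some-app:2.0.0   # hypothetical image
    # Rolling back is then a single command:
    #   kubectl rollout undo deployment/some-deployment
    #   kubectl rollout undo deployment/some-deployment --to-revision=3   # or to a specific revision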

Here’s what can trigger the rollback mechanism:

Ensuring information security and audit

There is no single workflow that can magically “build” bulletproof security and protect your ecosystem from both external and internal threats, so you need to make sure that your architectural framework is executed with an eye on the standards and security policies of the company at each level and in all subsystems.

I will address all three levels of the proposed solution later, in the section about monitoring and alerting, which themselves also happen to be critical to system integrity.

Kubernetes has a set of good built-in mechanisms for access control, network policies, audit of events, and other powerful tools related to information security, which can be used to build an excellent perimeter of protection that can resist and prevent attacks and data leaks.
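
As a small illustration of the network-policy part (namespace and labels are hypothetical), a policy like this lets a database be reached only by the pods of its own backend:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-allow-backend-only
      namespace: my-system
    spec:
      podSelector:
        matchLabels:
          app: my-db               # the policy applies to the database pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: my-backend  # only the backend pods may connect
          ports:
            - protocol: TCP
              port: 5432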

From development workflows to architecture

An attempt to build tight integration between the development workflows and the ecosystem should be taken seriously. Adding a requirement for such integration to the traditional set of requirements for an architecture (flexibility, scalability, availability, reliability, protection against threats, and so on) can greatly increase the value of your architectural framework. It is such a crucial aspect that it has resulted in the emergence of the concept called “DevOps” (Development Operations), which is a logical step towards total automation and optimization of the infrastructure. However, given a well-designed architecture and reliable subsystems, DevOps tasks can be minimized.

Micro-service architecture

There is no need to go into the details of the benefits of a service-oriented architecture (SOA), including why services should be “micro”. I will only say that if you have decided to use Docker and Kubernetes, then you most likely understand (and accept) that it’s difficult and even ideologically wrong to have a monolithic architecture. Designed to run a single process and to work with persistence, Docker forces us to think within the DDD (Domain-Driven Design) framework. In Docker, packed code is treated as a black box with some exposed ports.

Critical components and solutions of the ecosystem

From my experience of designing systems with increased availability and reliability, there are several components that are crucial to the operation of microservices. I’ll list these components and talk about each of them later, and even though I’ll be referring to them in the context of a Kubernetes environment, you can use my list as a checklist for any other platform.

If you (like me) have come to the conclusion that it’d be great to manage each of these components as a regular Kubernetes service, then I’d recommend running them in a separate cluster other than “production”, for example a “staging” cluster, because it can save your life when the production environment is unstable and you desperately need a source of its images, code, or monitoring tools. That solves the chicken-and-egg problem, so to speak.

Identity service


As usual, it all starts with access: to servers, virtual machines, applications, office mail, and so on. If you are or want to be a client of one of the major enterprise platforms (IBM, Google, Microsoft), the access issue will be handled by one of the vendor’s services. But what if you want to have your own solution, managed only by you and within your budget?

This list should help you decide on the appropriate solution and estimate the effort required to set up and maintain it. Of course, your choice must be consistent with the company’s security policy and approved by the information security department.

Automated service provisioning


Although Kubernetes requires only a handful of components on the physical machines/cloud VMs (docker, kubelet, kube-proxy, etcd cluster), you still need to automate the addition of new machines and cluster management. Here are a few simple ways to do it:

  • KOPS: this tool allows you to install a cluster on one of two cloud providers, AWS or GCE
  • Terraform: this lets you manage the infrastructure for any environment, and follows the ideology of IaC (Infrastructure as Code)
  • Ansible: a versatile tool for automation of any kind

Personally, I prefer the third option (with a little Kubernetes integration module), since it allows me to work with both servers and k8s objects and carry out any kind of automation. However, nothing stops you from using Terraform and its Kubernetes module. KOPS doesn’t work well with “bare metal”, but it’s still a great tool to use with AWS/GCE!
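
A very rough sketch of the third option (the package names depend on your distribution, the Kubernetes package repository is assumed to be configured already, and joining the node to the cluster is left out):

    # add-node.yml – prepare new machines to become cluster nodes
    - hosts: new_k8s_nodes
      become: yes
      tasks:
        - name: Install the container runtime and Kubernetes node components
          apt:
            name:
              - docker.io
              - kubelet
              - kubeadm
            state: present
            update_cache: yes

        - name: Make sure docker and kubelet are running
          service:
            name: "{{ item }}"
            state: started
            enabled: yes
          loop:
            - docker
            - kubelet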

Log collection and analysis


The only way for any Docker container to make its logs accessible is to write them to STDOUT or STDERR of the root process running in the container. The service developer doesn’t really care what happens to the log data next; the main thing is that the logs should be available when necessary and preferably contain records going back to a certain point in the past. All responsibility for fulfilling these expectations lies with Kubernetes and the engineers who support the ecosystem.

In the official documentation, you can find a description of the basic (and a good) strategy for working with logs, which will help you choose a service for aggregating and storing huge volumes of text data.

Among the recommended services for a logging system, the same documentation mentions fluentd for collecting data (launched as an agent on each node of the cluster) and Elasticsearch for storing and indexing the data. You may disagree with the efficiency of this solution, but it’s reliable and easy to use, so I think it’s at least a good start.

Elasticsearch is a resource-intensive solution but it scales well and has ready-made Docker images to run both an individual node and a cluster of a required size.
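
The standard pattern from the documentation is to run the collector as a DaemonSet that tails container logs from each node. A trimmed-down sketch (the image tag, namespace, and Elasticsearch address are placeholders, and the service account/RBAC setup is omitted for brevity):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: logging
    spec:
      selector:
        matchLabels: { app: fluentd }
      template:
        metadata:
          labels: { app: fluentd }
        spec:
          containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch  # pick a current tag
              env:
                - name: FLUENT_ELASTICSEARCH_HOST
                  value: elasticsearch.logging.svc   # wherever your Elasticsearch lives
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
                - name: containers
                  mountPath: /var/lib/docker/containers
                  readOnly: true
          volumes:
            - name: varlog
              hostPath: { path: /var/log }
            - name: containers
              hostPath: { path: /var/lib/docker/containers }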

Tracing system


As perfect as your code may be, failures do happen, and then you want to study them with a fine-tooth comb in production and try to understand “what the hell went wrong if everything worked fine on my local machine?”. Slow database queries, improper caching, slow disks or connectivity with an external resource, transactions in the ecosystem, bottlenecks, and under-scaled computing services are some of the reasons why you will be forced to track and estimate the time spent executing your code under real load.

OpenTracing and Zipkin cope with this task for most modern programming languages, without adding any extra burden once the code is instrumented. Of course, all the collected data should be stored in a suitable place, which is itself one of the components of the ecosystem.

The complexities that arise when instrumenting the code and forwarding “Trace Id” through all the services, message queues, databases, and so on are solved by the above-mentioned development standards and service templates. The latter also take care of uniformity of the approach.

Monitoring and alerting


Prometheus has become the de facto standard in modern systems and, more importantly, it is supported in Kubernetes almost out of the box. You can refer to the official Kubernetes documentation to find out more about monitoring and alerting.

Monitoring is one of the few auxiliary systems that must be installed inside a cluster. And the cluster is an entity that is subject to monitoring. But monitoring of a monitoring system (pardon the tautology) can only be performed from the outside (for example, from the same “staging” environment). In this case, cross-checking comes in handy as a convenient solution for any distributed environment, which wouldn’t complicate the architecture of your highly unified ecosystem.

The whole range of monitoring can be divided into three logically isolated levels. Here is what I think are the most important tracking points at each level (a sample alerting rule for the cluster level follows the list):

  • Physical level: network resources and their availability; disks (I/O, available space); basic resources of individual nodes (CPU, RAM, LA)
  • Cluster level: availability of the main cluster systems on each node (kubelet, kube API, DNS, etcd, and so on); the amount of free resources and their uniform distribution; monitoring of permitted vs. actual resource consumption by services; reloading of pods
  • Service level: any kind of application monitoring, from database contents to the frequency of API calls; the number of HTTP errors on the API gateway; the size of the queues and the utilization of the workers; multiple metrics for the database (replication lag, time and number of transactions, slow queries, and more); error analysis for non-HTTP processes; monitoring of requests sent to the logging system (you can transform any request into metrics)
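
Here is the promised sketch of an alerting rule in Prometheus’s rule format (the thresholds are arbitrary, and the second expression assumes kube-state-metrics is installed):

    groups:
      - name: cluster-level
        rules:
          - alert: NodeDown
            expr: up{job="kubernetes-nodes"} == 0    # the label values depend on your scrape configuration
            for: 5m
            labels:
              severity: critical
              level: cluster
            annotations:
              summary: "Node {{ $labels.instance }} is unreachable"

          - alert: PodRestartingTooOften
            expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
            for: 10m
            labels:
              severity: warning
              level: cluster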

As for the alert notifications at each level, I’d recommend using one of the countless external services that can send notifications to email or SMS, or make calls to a mobile number. I’ll also mention another system, OpsGenie, which has close integration with Prometheus’s Alertmanager.

OpsGenie is a flexible tool for alerting which helps to deal with escalations, round-the-clock duty, notification channel selection, and much more. It’s also easy to distribute alerts among teams. For example, different levels of monitoring should send notifications to different teams/departments: physical to Infra + DevOps, cluster to DevOps, application to each relevant team.
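
A hedged sketch of such routing in Alertmanager’s configuration (the team names are hypothetical, and the OpsGenie receiver only illustrates the integration mentioned above):

    route:
      receiver: devops              # default receiver
      group_by: [alertname, level]
      routes:
        - match: { level: physical }
          receiver: infra-and-devops
        - match: { level: cluster }
          receiver: devops
        - match: { level: service }
          receiver: service-teams   # could be split further, one route per team

    receivers:
      - name: infra-and-devops
        opsgenie_configs:
          - api_key: "<secret>"     # placeholder
      - name: devops
        opsgenie_configs:
          - api_key: "<secret>"
      - name: service-teams
        email_configs:
          - to: team@example.com    # placeholder address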

API gateway and Single Sign-on


To handle tasks such as authorization, authentication, user registration (external users who are the company’s clients), and other kinds of access control, you will need a highly reliable service that can maintain flexible integration with your API gateway. There’s no harm in using the same solution as for the “Identity service”; however, you may want to separate the two resources to achieve a different level of availability and reliability.

The inter-service integration should not be complicated and your services should not worry about authorization and authentication of users and each other. Instead, the architecture and the ecosystem should have a proxy service that handles all communications and HTTP traffic.

Let’s consider the most suitable way of integrating with the API gateway, and hence with your entire ecosystem: tokens. This method is good for all three access scenarios: from the UI, from service to service, and from an external system. The task of receiving a token (based on a login and password) then lies with the UI itself or with the service developer. It also makes sense to distinguish between the lifetime of tokens used in the UI (shorter TTL) and in other cases (longer and custom TTL).

Here are some of the problems that the API gateway resolves:

  • Access to the ecosystem’s services from outside and within (services do not communicate directly with each other)
  • Integration with a Single Sign-on service: transformation of tokens, and appending HTTP(S) requests with headers that contain user identification data (ID, roles, other details) for the requested service; enabling/disabling access control to the requested service based on the roles received from the Single Sign-on service
  • Single point of monitoring for HTTP traffic
  • Combining API documentation from different services (for example, combining Swagger’s json/yml files)
  • Ability to manage routing for the entire ecosystem based on the domains and requested URIs (see the Ingress sketch after this list)
  • Single access point for external traffic, and integration with the access provider
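
Here is the promised minimal routing sketch using a standard Kubernetes Ingress (hosts, paths, and service names are hypothetical; a dedicated API gateway product would express the same idea in its own configuration):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ecosystem-routing
    spec:
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /orders
                pathType: Prefix
                backend:
                  service:
                    name: orders-service
                    port: { number: 80 }
              - path: /users
                pathType: Prefix
                backend:
                  service:
                    name: users-service
                    port: { number: 80 }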

Event bus and Enterprise Integration/Service bus


If your ecosystem contains hundreds of services that work in one macro domain, you will have to deal with thousands of possible ways in which the services can communicate. To streamline data flows, you should think of the ability to distribute messages over a large number of recipients upon the occurrence of certain events, regardless of the contexts of the events. In other words, you need an event bus to publish events based on a standard protocol and to subscribe to them.

As an event bus, you can use any system that can operate as a so-called broker: RabbitMQ, Kafka, ActiveMQ, and others. In general, high availability and consistency of data are critical for microservices, but you’ll still have to sacrifice something to achieve proper distribution and clustering of the bus, due to the CAP theorem.

Naturally, the event bus should be able to solve all sorts of inter-service communication problems, but as the number of services grows from hundreds to thousands to tens of thousands, even the best event-bus-based architecture will fail, and you will need to look for another solution. A good example is an integration bus approach, which can extend the capabilities of the “dumb pipe, smart consumer” tactic described above.

There are dozens of reasons for using the “Enterprise Integration/Service Bus” approach, which aims to reduce the complexity of a service-oriented architecture. Here are just a few of these reasons:

  • Aggregation of multiple messages
  • Splitting of one event into several events
  • Synchronous/transactional analysis of the system response to an event
  • Coordination of interfaces, which is especially important for integration with external systems
  • Advanced logic of event routing
  • Multiple integration with the same services (from the outside and within)
  • Non-scalable centralization of the data bus

As open-source software for an enterprise integration bus, you may want to consider Apache ServiceMix, which includes several components essential for the design and development of this kind of SOA.

Databases and other stateful services


Docker, as well as Kubernetes, has changed the rules of the game once and for all for services that require data persistence and work closely with the disk. Some say that such services should “live” the old way, on physical servers or virtual machines. I respect this opinion and won’t go into arguments about its pros and cons, but I’m fairly certain that such statements exist only because of a temporary lack of knowledge, solutions, and experience in managing stateful services in a Docker environment.

I should also mention that a database often occupies the central place in the storage world, and therefore the solution you select should be fully prepared to work in a Kubernetes environment.

Based on my experience and the market situation, I can distinguish the following groups of stateful services along with examples of the most suitable Docker-oriented solutions for each of them:

Dependency mirrors


If you haven’t yet encountered a situation where the packages or dependencies you need have been removed from a public server or made temporarily unavailable, don’t assume that this will never happen. To avoid any unwanted unavailability and provide security for the internal systems, make sure that neither building nor delivering your services requires an Internet connection. Configure mirroring and copying of all dependencies to the internal network: Docker images, rpm packages, source repositories, python/go/js/php modules.

Each of these and any other types of dependencies have their own solutions. The most common ones can be found by googling “private dependency mirror for …”.
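
For Docker images specifically, one common approach is to run a pull-through cache of the public registry on the internal network. A hedged compose sketch (the port and upstream URL are the usual defaults; everything else is up to you):

    version: "3"
    services:
      registry-mirror:
        image: registry:2
        ports:
          - "5000:5000"
        environment:
          REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io  # act as a pull-through cache of Docker Hub
        volumes:
          - ./registry-data:/var/lib/registry   # keep the cached layers on local disk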

From architecture to real life


Like it or not, sooner or later your entire architecture is doomed to fail. It always happens: technologies become obsolete quickly (in 1–5 years), methods and approaches a bit more slowly (in 5–10 years), and design principles and fundamentals only occasionally (in 10–20 years), yet inevitably.

Mindful of the obsolescence of technology, always try to keep your ecosystem at the peak of tech innovations, plan and roll out new services to meet the needs of developers, business and end users, promote new utilities to your stakeholders, deliver knowledge to move your team and the company forward.

Stay on top of the game by integrating into the professional community, reading relevant literature, and socializing with colleagues. Be aware of your opportunities and of how to correctly apply new trends in your projects. Experiment and apply scientific methods to analyze the results of your research, or rely on the conclusions of other people whom you trust and respect.

It is difficult to prepare yourself for fundamental changes, yet possible if you are an expert in your field. All of us will witness only a few major technological changes throughout our lives, but it’s not the amount of knowledge in our heads that makes us professionals and brings us to the top; it’s openness to new ideas and the ability to accept metamorphosis.

Microsoft Developer Division is Expanding :) – With Xamarin

Why Xamarin

Xamarin is a manufacturer of tools for cross-platform development based on C#. The company also employs the developers who created Mono and Moonlight as open-source alternatives to .NET and Silverlight.

The Californian company Xamarin provides an app-development environment based on the C# programming language and .NET classes. In addition to iOS, OS X, Android, and Windows, Xamarin now also supports tvOS, watchOS, and PlayStation. The basis for Xamarin’s products is Mono, an open-source implementation of Microsoft’s .NET Framework, which has existed since 2001.

Is what belongs together finally coming together?

Mono is a separate implementation because, back then, Microsoft declared the source code of .NET not as “open source” but as “Shared Source”: following the motto “just look, don’t touch”, further use of the source code was not allowed. In painstaking work, the Mono project rebuilt the .NET runtime environment, the C# compiler, and a large part of the .NET class library, striving to remain compatible with Microsoft’s model.

  

Now it’s official – Microsoft has acquired Xamarin, a manufacturer of tools for cross-platform development.

Why two implementations?

.NET developers can now hope that the two platforms will grow closer together. While many base classes are uniform, there has been no sophisticated user interface technology that runs on all platforms. With Windows Presentation Foundation (WPF), Windows Runtime XAML, and Xamarin.Forms, there are instead three dialects of the Extensible Application Markup Language (XAML), which are not entirely compatible with one another. Xamarin.Forms at least runs on iOS and Android as well as Windows 10 Universal apps, but its development still lies far behind the other XAML dialects.

Also, at a time when Microsoft is developing the .NET Framework as .NET Core, itself open source and platform-neutral, continuing Mono as a competitor to .NET Core no longer seems meaningful.

Read more details about the acquisition on Scott Gu’s blog:

https://weblogs.asp.net/scottgu/welcoming-the-xamarin-team-to-microsoft

Azure Stack vs Azure Pack

Microsoft announced Azure Stack at its Ignite event last year, for running something like Azure on-premises, but how does it differ from the existing Azure Pack, which kind of does the same thing?

This answer goes to the heart of how Microsoft is changing to become a cloud-first company, at least within its own special meaning of “cloud”. Ignite attendees heard about new versions of Windows Server, SharePoint, Exchange and SQL Server, and the common thread running through all these announcements is that features first deployed in Office 365 or Azure are now coming to the on-premises editions.

Why Azure Pack and Azure Stack?

We are all living in a cloud computing world now, and IT people talk about the “cloud” more and more often. Microsoft Azure is at the top of the list of providers of proven, stable cloud services. It offers IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service), and many more cloud-related services. As we all know, Azure has been very successful in terms of availability, security, performance, and so on. But most enterprises and businesses have already invested heavily in building their own infrastructure; this is even more true for managed service providers. So instead of moving every service to the cloud, people have become more interested in a hybrid-cloud model: some services use public cloud services while, at the same time, other services run from the datacenter.

To address the hybrid-cloud model, Microsoft decided to bring Azure technologies to the public so that companies can use the same technologies used in Azure in their own datacenters. The result was “Azure Pack”.

According to Microsoft,

Windows Azure Pack provides a multi-tenant, self-service cloud that works on top of your existing software and hardware investments. Building on the familiar foundation of Windows Server and System Center, Windows Azure Pack offers a flexible and familiar solution that your business can take advantage of to deliver self-service provisioning and management of infrastructure — Infrastructure as a service (Iaas), and application services — Platform as a Service (PaaS), such as Web Sites and Virtual Machines.


This was a big relief for MSPs, as they can offer a portal to their customers to manage their resources efficiently.

Azure Pack depends mainly on infrastructure running on Windows Server and System Center. It uses System Center Virtual Machine Manager to manage virtual machines, and the System Center Service Provider Foundation service to integrate all the related operations between the portals and services. The following are some great features of Azure Pack:

1.    Portal for tenants to manage their resources
2.    Portal for system administrators to manage cloud services and tenants
3.    Automation using runbooks
4.    Service Bus feature to provide reliable messaging between applications
5.    Database services (MSSQL, MySQL)
6.    Website services to set up a scalable web hosting platform
7.    Console Connect feature to connect to a VM remotely, even when a physical network interface is not available
8.    Multi-factor authentication using ADFS

Why Azure Stack?

Well, Azure Pack was the first big step along that path, but technology keeps changing every day. With the new version of Windows Server, software-defined storage and software-defined networking will bring revolutionary change. The solution for facing these new requirements is Azure Stack. Microsoft keeps sharpening up the Azure platform, and with Azure Stack it brings the same proven cloud capabilities to the hybrid cloud.

Azure Pack was “an effort to replicate the cloud experience,” Microsoft’s Ryan O’Hara (senior director, product management) told the press at Ignite. By contrast, Azure Stack is “a re-implementation of not only the experience but the underlying services, the management model as well as the datacenter infrastructure.”

In other words, there is more Azure and less System Center in Stack versus Pack, and that is a good indication of Microsoft’s direction. That said, Microsoft’s Azure Stack slide says “powered by Windows Server, System Center and Azure technologies,” so we should expect bits of System Center to remain.

According to Mike Neil, General Manager for Windows Server at Microsoft:

Microsoft Azure Stack extends the agile Azure model of application development and deployment to your datacenter. Azure Stack delivers IaaS and PaaS services into your datacenter so you can easily blend your enterprise applications such as SQL Server, SharePoint, and Exchange with modern distributed applications and services while maintaining centralized oversight. Using Azure Resource Manager (just released in preview last week), you get consistent application deployments every time, whether provisioned to Azure in the public cloud or Azure Stack in a datacenter environment. This approach is unique in the industry and gives your developers the flexibility to create applications once and then decide where to deploy them later – all with role-based access control to meet your compliance needs.

Built on the same core technology as Azure, Azure Stack packages Microsoft’s investments in automated and software-defined infrastructure from our public cloud datacenters and delivers them to you for a more flexible and secure datacenter environment. For example, Azure Stack includes a scalable and flexible software-defined Network Controller and Storage Spaces Direct with automated sync and failover. Shielded VMs and Guarded Hosts bring “zero-trust” software-defined security to your private cloud so you can securely segment organizations and workloads and centrally control and monitor access and administration rights. Furthermore, Azure Stack will simplify the complex process of deploying private/hosted clouds based on our experience building the Microsoft Cloud Platform System, a converged infrastructure solution.


Azure Pack “depends” on System Center services. Azure Stack, on the other hand, will not “depend” on System Center, but it is possible to integrate it with Operations Management Suite and System Center.

Despite this disparity, Microsoft’s general approach seems to be to evolve and optimize server products for Azure and Office 365, and then to trickle down features to the on-premises editions where possible. It therefore pays for developers and admins working on Microsoft’s platform to keep an eye on the cloud platforms, since this is what you will get in a year or two even if you have no intention of becoming a cloud customer.

This approach does make sense, in that characteristics desirable in a cloud product, such as resilience and scalability, are also desirable on premises. It may give you pause for thought though if the pieces you depend on have no relevance in Microsoft’s cloud. We have already seen how the company killed Small Business Server, for which the last full version was in 2011.

That brings us to Azure Stack, the purpose of which is to bring pieces of Azure into your datacenter for your very own Microsoft cloud. The existing Azure Pack already does this, but it was essentially a wrapper for System Center components (especially SCVMM) that allowed use of the Azure portal and some other features on premises.

Stay tuned on – https://azure.microsoft.com/en-us/blog

Build your own Cloud with Azure Stack

Many cloud platforms are used to build custom hybrid cloud scenarios; Cloud Foundry, for example, is used quite often by infrastructure providers like Intel, EMC², VMware, and Swisscom. As of two days ago, there is another one from Microsoft – Azure Stack, which is great for #hybrid #cloud.

Azure Stack

Microsoft has launched the public preview of Azure Stack, something that has been in TAP for several months now. You can find the download on MSDN right now or from the download link.


Azure Stack is a collection of software technologies that Microsoft uses for its Azure cloud computing infrastructure. It consists of “operating systems, frameworks, languages, tools and applications we are building in Azure” that are being extended to individual datacenters, Microsoft explained in its white paper. However, Azure Stack is specifically designed for enterprise and service provider environments.

For instance, Microsoft has to scale its Azure infrastructure as part of operations. That’s done at a minimum by adding 20 racks of servers at a time. Azure Stack, in contrast, uses management technologies “that are purpose built to supply Azure Service capacity and do it at enterprise scale,” Microsoft’s white paper explained.

Azure Stack has four main layers, starting with a Cloud Infrastructure layer at its base, which represents Microsoft’s physical datacenter capacity (see chart).

The Azure Stack software.

Next up the stack there’s an Extensible Service Framework layer. It has three sublayers. The Foundational Services sublayer consists of solutions needed to create things like virtual machines, virtual networks and storage disks. The Additional Services sublayer provides APIs for third-party software vendors to add their services. The Core Services sublayer includes services commonly needed to support both PaaS and IaaS services.

The stack also contains a Unified Application Model layer, which Microsoft describes as a fulfillment service for consumers of cloud services. Interactions with this layer are carried out via Azure Resource Manager, which is a creation tool for organizations using cloud resources. Azure Resource Manager also coordinates requests for Azure services.

Tools for Integration, Deployment and Operations you can find here – https://azure.microsoft.com/en-us/documentation/articles/azure-stack-tools-paas-services/#

Microsoft has also released a whitepaper providing more information on key Azure Stack concepts and capabilities, which should help you gain a much richer understanding of the approach.

More MS open-source and multi-platform tools – .NET Core is “Go Live”

At Connect 2015, within a few hours Microsoft announced a lot of new things. Here they are:

CONNECT 2015 KEYNOTES AND VIDEOS

You can watch everyone’s talks here

Enjoy, devs! :)

Free TFS book – Managing Agile Open-Source Software Projects with Microsoft Visual Studio Online

We’re happy to announce the release of our newest free ebook, Managing Agile Open-Source Software Projects with Microsoft Visual Studio Online (ISBN 9781509300648), by Brian Blackman, Gordon Beeming, Michael Fourie, and Willy-Peter Schaub.

With this ebook, the ALM Rangers share their best practices in managing solution requirements and shipping solutions in an agile environment, an environment where transparency, simplicity, and trust prevail. The ebook is for Agile development teams and their Scrum Masters who want to explore and learn from the authors’ “dogfooding” experiences and their continuous adaptation of software requirements management. Product Owners and other stakeholders will also find value in this ebook by learning how they can support their Agile development teams and by gaining an understanding of the constraints of open-source community projects.

Download all formats (PDF, Mobi and ePub) at the Microsoft Virtual Academy.

Below you’ll find the ebook’s Foreword and a few helpful sections from its Introduction:

Foreword

The ALM Rangers are a special group for several reasons. Not only are they innovative and focused on the real world, providing value-added solutions for the Visual Studio developer community, but they live and work in all four corners of the globe. The ALM Rangers are a volunteer organization. Talk about dedication! When we were offered the opportunity to write a foreword for this book, we knew we’d be part of something special.

The ALM Rangers don’t pontificate that they’ve found the one true way. This is practical advice and examples for producing great software by those who’ve done it and–most importantly–are still innovating and coding. Readers will find that they have virtual coworkers who share their experiences with honesty and humor, revealing learnings and what has worked for them. This doesn’t mean that this book lacks prescriptive guidance. The Rangers have embraced Visual Studio Online as their one and only home. They are evolving with the product, embracing open source software in GitHub to learn how successful OSS projects are run there and what the community values most. They’ve created an ecosystem that identifies the “low hanging fruit” and tracks it from idea to solution, and they never fail to recognize the Rangers and the ALM VPs who dedicate their personal time and passion to their OSS projects.

The extensive guidance shared here is not an end-to-end plan for everyone, although it could be used as a definitive guide for some teams. One of the many assets of this book is its organization into practical walkthroughs of typical ALM Ranger projects from idea to solution, presented as an easy to consume reference. Other bonuses are an appendix to quick-start your own project and reference checklists to keep you on track.

Among the authors, this book was called the “v1 dawn edition.” True to their core value of “learn from and share all experiences,” the ALM Rangers are always mindful that producing great software means continuous refinements from new learnings and feedback and that there will be more versions of this book. But first we invite you to immerse yourself in Managing Agile Open-Source Software Projects with Microsoft Visual Studio Online.

In the true spirit of Agile, ongoing innovation,

Sam Guckenheimer
Clemri Steyn

Introduction
This book assumes that you have at least a minimal understanding of Agile, Lean, and Scrum development concepts and are familiar with Team Foundation Server (TFS) and Visual Studio Online (VSO). To go beyond this book and expand your knowledge of Agile practices or Visual Studio technologies, MSDN and other Microsoft Press books offer both complete introductions and comprehensive information.

This book might not be for you if …
This book might not be for you if you are looking for an in-depth discussion focused on the process, development, or architecture of software requirements, tooling, or practices.

Similarly, if you are looking for source code or guidance on ALM, DevOps, or proven and official frameworks such as Agile, Scrum and Kanban, this book will not be fully relevant, and we recommend that you consider these publications instead: