
Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.

In part 1 of this series I discussed the general Azure account and subscription structure that we settled on. In this post I will go into a little more detail about how we achieved this, what automation we felt was critical to success, and how we used VSO to accomplish our end-to-end Azure/VSO pipeline.

image

Azure

After the above structure was set up in the Enterprise Portal, the next biggest thing to look at is identity.

User Identity:

Most enterprise users will already have an on-premises Active Directory, with a management policy already in place as to how users/computers are managed by a given team. Microsoft provides Azure AD Connect, a service that will constantly sync changes between your on-premises AD and your AD in Azure. Do not underestimate how long it will take to set this up; there are many prerequisites you will have to work through with your on-premises AD before you are able to sync it with Azure. I really would not recommend progressing any further with Azure until you have this, or an equivalent strategy for user and administrator access, set up; we pressed ahead without it and the later migration is going to be difficult.

Access Control:

The client was keen to embrace a less restrictive access control policy for developers than perhaps is ‘normal’, in order to achieve a low-maintenance, highly flexible environment, but with tight control around key environments for consistency. It was decided that all developers would have full read-write control of their own program’s dev subscription (i.e. all projects within the program they belong to), full read permissions to the QA subscription, and no rights to staging or production. To ensure consistency we developed a set of custom PowerShell scripts to hand over to the team in charge of account admin; this would ensure the naming of groups was correct and reduce the chance of accidental escalation of privilege to other environments.
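As a rough illustration, a hand-over script of this kind can put the naming convention and role scopes in one place. The sketch below uses the 2015-era Azure PowerShell cmdlets in AzureResourceManager mode; the group name, object IDs and subscription IDs shown are entirely hypothetical placeholders, and the exact `New-AzureRoleAssignment` parameters should be checked against your installed module version.

```powershell
# Sketch: grant a program's dev AD group rights in the right subscriptions only.
# All names and IDs below are placeholders.
Switch-AzureMode AzureResourceManager

$devGroupObjectId  = "00000000-0000-0000-0000-000000000000"  # AD group object ID
$devSubscriptionId = "11111111-1111-1111-1111-111111111111"
$qaSubscriptionId  = "22222222-2222-2222-2222-222222222222"

# Full read-write on the program's own dev subscription
New-AzureRoleAssignment -ObjectId $devGroupObjectId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/$devSubscriptionId"

# Read-only on QA; no assignment at all for stage or production
New-AzureRoleAssignment -ObjectId $devGroupObjectId `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/$qaSubscriptionId"
```

Because the admin team only ever runs the script, the group names and scopes stay consistent across programs.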

Tear down and re-provision:

For a ‘DevOps’ pipeline to work efficiently, and for all key stakeholders to have confidence in it, it is vital that everyone is encouraged / coerced into scripting every last detail of a project. To encourage this we set up Azure Automation jobs that would tear down the resource groups within the development subscriptions every night (with a few minor exclusions for VMs hooked up to Release Manager; we will come back to this later). This meant that we as developers had to ensure we added every last setting that described a given project to our Azure Resource Manager (ARM) templates.
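The nightly job itself can be very simple. Below is a minimal sketch using the 2015-era Azure PowerShell cmdlets in AzureResourceManager mode; the subscription name and exclusion list are hypothetical (the exclusions would hold things like the Release Manager bounce VM's resource group).

```powershell
# Sketch of a nightly Azure Automation runbook: delete every resource
# group in the dev subscription except a small exclusion list.
Switch-AzureMode AzureResourceManager
Select-AzureSubscription -SubscriptionName "ProgramA-Dev"

# Resource groups that must survive the nightly tear down
$exclusions = @("BuildBounce", "CoreNetworking")

Get-AzureResourceGroup |
    Where-Object { $exclusions -notcontains $_.ResourceGroupName } |
    ForEach-Object {
        Write-Output "Removing resource group $($_.ResourceGroupName)"
        Remove-AzureResourceGroup -Name $_.ResourceGroupName -Force
    }
```

Anything a team forgot to capture in their ARM template is simply gone the next morning, which concentrates minds wonderfully.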

In production a tear down and re-provision strategy is obviously not always practical, so we also wanted to test an inline upgrade. To do this, our QA deployment would delete the given resource group, retrieve the ARM template used in the current live environment and apply it to QA, before applying the new QA ARM template of our new deployment, which would inline-upgrade the environment. This requires a high level of co-ordination, and does have some downfalls (i.e. depending on the changes between different versions of your ARM template you may be unable to do a live deployment and have to arrange downtime for the release).
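In script form the QA run looks roughly like the sketch below (2015-era cmdlets; the resource group and template file names are hypothetical, and in practice the "live" template would be pulled from whatever store holds the currently deployed version):

```powershell
# Sketch: rebuild QA as a copy of live, then inline-upgrade it with the
# new template, proving the upgrade path before it reaches production.
Switch-AzureMode AzureResourceManager
Select-AzureSubscription -SubscriptionName "ProgramA-QA"

# 1. Start from a clean slate
Remove-AzureResourceGroup -Name "ProjectX-QA" -Force

# 2. Recreate QA from the ARM template currently running in production
New-AzureResourceGroup -Name "ProjectX-QA" -Location "North Europe" `
    -TemplateFile ".\live-current.json"

# 3. Apply the new release's template as an inline upgrade
New-AzureResourceGroupDeployment -ResourceGroupName "ProjectX-QA" `
    -TemplateFile ".\qa-new.json"
```

If step 3 cannot run against the live-shaped environment without deleting resources, that is your early warning that the production release will need downtime.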

VSO:

Azure subscription location:

As with on-premises, your VSO and build server access control needs to be extremely tight (having permissions to check in changes, release new builds and administer VSO or Azure subscriptions is as good as having admin access to any PC/server/subscription in any environment that you are deploying to). We hence decided to separate VSO into its own Azure subscription named “dev tools”, where we would also later place our build VMs. This isn’t entirely necessary, especially now with RBAC in Azure, but it feels like a good separation of concerns where access control can be kept tight and interlinked resources between subscriptions can be kept to a minimum (i.e. Release Manager has to be able to ‘see’ all subscriptions in order to release the software).

Collections:

At the time of writing VSO can only support one team collection per VSO account; unfortunately this means you either opt for multiple VSO accounts or flatten your structure and utilize team projects. Neither option is ideal, and it is good to hear that Microsoft is working to bring this feature to VSO, but after some trial and error with both approaches we settled on one VSO account with a flatter team project structure. This does mean we lost one level of abstraction, but on the whole this has not been a problem for the client. The client’s on-premises TFS 2013 only had a handful of collections anyway (one for each ‘Program of works’), so moving everything up one level and applying security at the team project level has mostly been okay. When/if Microsoft brings multiple collection support to VSO, the migration path should be much more straightforward than it would have been from multiple VSO accounts.

Branch structure:

This is one area with no change between Azure and on-premises. Following the best practice set out in the book ‘Professional Team Foundation Server 2013', we implemented a streamlined branch structure. What branching structure you adopt will be very dependent on your team and the business around you; have a read of the different options set out in the book above and see what is best for you.

Build & Release:

VSO introduced ‘Build vNext’ in May 2015 and as far as possible we planned to utilize the many benefits this offered (no XAML build scripts being the main one). However, VSO has not yet released ‘Release vNext’, and until it does the current Release Manager 2015 for VSO has a dependency on XAML builds.

Microsoft has been clear that Build vNext is the future, and we were keen to establish a template for later migration, so we planned to utilize Build vNext for our CI builds while having the XAML build (hooked into the Release Manager 2015 pipeline) operate as a “rolling build” after a given number of successful check-ins to perform our Continuous Delivery (CD). Please see part 5 of this series for further detail on the setup.

Summary:

Hopefully this has given an insight into how we set up Azure and VSO for this client. In the next set of posts I intend to dive a little deeper technically, so if you have not already done so, now is a good time to sign up for Azure.


I was having a problem with VSO Build.Preview / Build.vNext builds not being picked up by my self-hosted VSO build agent (on an Azure VM). Unfortunately this is the 2nd or 3rd time I have hit this problem, hence I felt a blog post was in order. When queuing a build I was greeted with the message “this build has been queued and is waiting to start”. This is normal at the start of every build; what is not normal is for the build to never move on and to sit there permanently.

image

  1. The first thing I always check in this situation is that the build server is available and enabled in VSO at https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection/_admin/_AgentQueue; both were, and my build agent was “Green”.
  2. I then RDP’d to the build server to establish if the VSO agent was active in the processes tab; it was. I then kicked off another build and could instantly see activity on the agent, but this soon died off and I was again left with the above issue.
  3. I then prompted an update of the agent by visiting https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection/_admin/_AgentQueue, selecting the queue in the left hand menu and selecting “Update all Agents”. I then repeated points 1 and 2, still no luck.
    image
  4. I rebooted the build server and tried steps 1 and 2 again, still no luck.
  5. I found this article on MSDN that described my exact issue, but it seemed to suggest this was a one-off event related to a VSO update. So I left everything alone overnight and tried the next day; same again. Coincidentally, or not, I only ever appear to have these issues after Microsoft have rolled out a VSO update; to date I have not seen this outside the VSO update ship window.

At this stage I had no choice, I had to reinstall the agent.

Reinstall the Agent

  1. Go to https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection/_admin/_AgentQueue and delete the build agent in your queue.
  2. Log on to your build server, open the Windows Services console and locate the windows service “VSO Agent” (this name depends on whether you accepted the default settings at the point of install). Stop this service, then right click the service, go to Properties, and note down the install location of the service.
  3. Before we reinstall the agent we need to delete the workspace that the agent was using. To do this open a developer command prompt (type cmd in the start menu and select the Visual Studio cmd prompt) and run the below commands to list and then delete the required workspaces.

tf workspaces /server:https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection

From the above output note down the workspace and the owner and then execute

tf workspace /delete /server:https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection “workspacename;owner”

Note the semicolon between workspacename and owner.

4. Now open up Windows Explorer and navigate to the directory you noted down at step 2, where you will find in the agent folder a script named “ConfigureAgent.cmd”. Open a command prompt, navigate to this directory and run this cmd file.

5. You will be prompted whether you wish to update the local agent’s settings; say yes, then accept all default options (your settings should already be stored from the previous install).

6. Now try another build. For me this was all I had to do; if it still does not work at this point I would suggest repeating steps 1-3 before downloading a fresh agent from VSO.

Cheers

Bryan


I am currently working with a client that has an Azure API app that I would like to test for both performance and load. I was busy setting up web performance and load testing using Visual Studio 2015 and Visual Studio Online when I hit strange assembly redirect issues. I eventually discovered the problem was due to the test runner’s inability to access the load test’s app.config file. In this post I will explain how I got round this.

In order to set up the web performance test I first needed to authenticate each test with the API gateway, which had Azure Active Directory authentication in front of it. This meant I had to write a “request plug-in” for the web performance test that would get the “x-zumo-auth” token and append it to the request headers of any request going to the API. To do this I needed a couple of packages from NuGet, which added the appropriate settings to my app.config. On running the tests I was hitting what looked like a classic dll-not-found exception; however the binding redirects were correct, and so was “copy always”, which meant the dlls were getting copied to the correct test results output directory, but the test was still looking for an old version of a given dll.

I eventually discovered the app.config that was included in the web and load test project was not being used by the test runner. It turns out Visual Studio and VSO delegate the running of these tests to “QTAgent_40.exe”, which is located in the program files directory and which in turn points at the test directory (not your bin folder) to pick up the dll containing the tests. All this means that when you call ConfigurationManager.AppSettings[configKey]; you are really reading the contents of QTAgent_40.exe’s app.config, not your own. I came across this article from MS DevOps engineer Donovan Brown, which pointed me in the direction of my final solution below.

To solve the issue I read the assembly bindings in from my app.config by parsing the XML file (no managed classes are available to access this area of the app.config), then subscribed to the AppDomain’s AssemblyResolve event to listen out for any dlls that were not automatically being resolved. I then added a call to my new static class and method at the top of my request plug-in, before calling the AD authentication libraries. If the event fired I would go through the preloaded settings from my app.config and point the test to the correct version of the dll. Code below.

Cheers

Bryan


using System;
using System.Reflection;
using System.IO;
using System.Xml;

public static class AssemblyBinder
{
    private static XmlNodeList assemblyBindingFromAppContext;
    private static XmlNamespaceManager docNamespace;

    public static void Resolve()
    {
        LoadConfig();

        AppDomain.CurrentDomain.AssemblyResolve += delegate (object sender, ResolveEventArgs e)
        {
            var requestedName = new AssemblyName(e.Name);

            foreach (XmlNode assembly in assemblyBindingFromAppContext)
            {
                var selectSingleNode = assembly.SelectSingleNode("./bindings:assemblyIdentity/@name", docNamespace);
                if (selectSingleNode != null)
                {
                    var filename = selectSingleNode.Value;

                    if (requestedName.Name == filename)
                    {
                        // might want to add version number etc checks on the requestedName
                        try
                        {
                            // load the dll from the test directory ourselves rather
                            // than letting QTAgent look for the wrong version
                            return Assembly.LoadFrom($"{filename}.dll");
                        }
                        catch (Exception ex)
                        {
                            throw new FileNotFoundException($"Could not find {filename}.dll. The error is {ex.Message}");
                        }
                    }
                }
            }
            return null;
        };
    }

    private static void LoadConfig()
    {
        // the runner copies the config next to the test assembly as "<AssemblyName>.config"
        var docname = Path.Combine(Environment.CurrentDirectory, $"{Assembly.GetExecutingAssembly().ManifestModule.Name}.config");

        var xmlDoc = new XmlDocument();
        xmlDoc.Load(docname);

        docNamespace = new XmlNamespaceManager(xmlDoc.NameTable);
        docNamespace.AddNamespace("bindings", "urn:schemas-microsoft-com:asm.v1");

        if (xmlDoc.DocumentElement != null)
        {
            assemblyBindingFromAppContext = xmlDoc.DocumentElement.SelectNodes("//bindings:dependentAssembly", docNamespace);
        }
    }
}


Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 1 – General Overview & Account Structure.

Over the past few months I have been working in conjunction with James Kewin, consulting at a financial services client with the ambition to host a new greenfield project on Azure. Added to this, the client wished to provide a template for migration of their existing infrastructure at a later date. In accordance with this goal James and I have spent several months of R&D moving our enterprise client to the cloud, from a complete end-to-end Application Lifecycle Management point of view, with the latest in best practice throughout the pipeline.

This blog series is a combination of the recommendations we implemented, that address enterprise issues with a move online, and a step-by-step tutorial of how we set up Azure, Visual Studio Online, Release Manager and how we implemented many DevOps techniques such as continuous delivery, environment tear down and re-provision, and the obsession with automation that is required in this new “cloud” world.

Tip: Azure is constantly subject to change; what I write here is likely not to be valid in 6 months. Always double check current limitations and advancements. This series was first written in Jun-Sep 2015. It is also important to note that what is the correct setup for one organisation may not be correct for all; please get in touch with any comments or questions.

Azure Subscriptions & the Enterprise portal

image

Most of you will be aware of the Azure portal above, which seems great, but how do you go about structuring this in the enterprise world of dev, test, stage, demo and production environments, with the associated security lockdown and audit requirements, across 100s if not 1000s of applications? Luckily if you have those kinds of requirements you probably have a Microsoft Enterprise Agreement, and as part of this Microsoft provides the Azure Enterprise portal, which gives us some functionality to manage, report and plan our subscription structures. The enterprise portal adds another 2 layers on top of a standard subscription: the first is an “Account”, which has a single administrator with full control over multiple subscriptions, and the topmost level is a “Department”, a collection of multiple Accounts where we can set budgets for Azure usage.

Design principles

In designing our department, account and subscription structures there were several design principles we wanted to keep in mind. These were:

  • Maintenance – 200+ subscriptions (1 for every application) is impractical to maintain given subscription setup overheads.
  • Extensibility – There should be room to grow.
  • Flexibility – Any decision now should be easy to change going forward.
  • Security – Environments & customer data should be segregated and access control applied appropriately.
  • Azure Limits – Be conscious of hard Azure subscription limits (500 resource groups in a subscription etc.), as well as resource limitations and scale limits.
  • Costs – Hard “cut out” department limits to prevent accidental budget blowout.
  • Reporting – It should be easy for management to retrieve cost reporting at all levels.

Constraints of Azure Department, Account and Subscription structures

It is all very well having a set of design principles, but we had to be aware of the current constraints and considerations associated with how we structure the Azure Enterprise Portal in relation to these principles. At the current time of writing (Aug 2015) these are:

  • Cost Caps. Unlike an Azure public subscription, a subscription tied to an Enterprise Agreement only has hard cut-out limits for your budget at the department level. All controls, and visibility, of costs are at department level only, something your dev team is not likely to have access to. It is all too easy for a junior dev to create several whopping VMs that cost £1,000s per month each if you let them.

  • Subscriptions are difficult to move between accounts, but accounts are easy to move between departments. There is no way of moving a subscription between accounts yourself, and although you can contact Microsoft support and have them move your subscription to another account for you, my personal experience of doing this has not been pleasant (hint: it involved a great deal of downtime). As such it is easier to assume that once a subscription is attached to an account it is fixed there until such time as Microsoft add the functionality to the Azure portal, like they have for moving an account to a different department with 0 effort and 0 downtime. **UPDATE** MS have now added this ability to the enterprise portal (Oct 2015)**

  • Subscription Security. Azure is very much still in “migration” mode from the old https://manage.windowsazure.com to the new https://portal.azure.com/. While the new portal provides Role Based Access Control (RBAC) to manage who has access to see, modify and create a given asset, server, VM etc. (and in turn the data associated with those assets), not all asset types are available in the new portal as yet, so you are forced to provide your team with a greater level of access to a subscription than is perhaps desired. Notable exceptions from the new portal are Service Bus and Azure Active Directory. Locking down who has access to a production asset vs a dev, test, stage or demo asset within one subscription is therefore currently not possible, so our environments will have to reside in multiple subscriptions for audit and security requirements, especially in financial services.

How did we implement the enterprise structure in an Azure Department/account/subscription structure?

It is important to note here that there are many ways to map your Enterprise structure into an Azure one. This is something you really want to get right from the outset, so take those extra few weeks and speak to all interested parties across the organization.

To understand why we implemented the structure below it is probably best I give some context. Our client is not too dissimilar to many medium / large enterprise organizations in that they have multiple “programs of works” running simultaneously, each with multiple teams/projects, each of which has its own Dev, QA, BA and Architecture teams. We wanted to find something that was going to be suitable for everyone while still allowing each “program” a deal of flexibility in how they operate.

At the current time our client also has a clear line in the sand between development and operations, anything client/public facing is the responsibility of a dedicated Operations support team. We modelled the account and subscription structure for stage/demo and production environments accordingly.

image

image

Why did we implement it this way?

Rationale for Recommendations:

Departments

  • A way to maintain hard account limits/cut-outs to prevent budget overrun.
  • Keep departments to a minimum to reduce maintenance.
  • Departments are easy to create, and move accounts between them, should the need arise later.
  • Don’t optimise prematurely, but still leave room for flexibility. Two departments seem to fulfil budget cap requirements at present; if one program blows the budget for the full department it will be easy to split at a later date.

Accounts

  • Accounts split by department, to make it possible to move accounts to their own departments should it be required later.
  • Accounts should be controlled by a central “Core” team, the suggestion being IT Infrastructure / IT Helpdesk, as maintenance plans need to be set up per subscription (as do build server and VSO rights).

Subscriptions

  • Due to maintenance setup requirements and the manageability overhead, we wanted to keep the overall number of subscriptions small.
  • Unlike Account and Department structures (which are virtual to users), switching and navigating Subscriptions is a “physical” workflow switch, which means closing one subscription to get into the other. It is hence important that any developer is able to navigate the subscription structure easily, and that there is no confusion as to which application and app environments live in which subscription. We therefore recommended that subscriptions be kept at department level with one subscription per environment, giving most developers at most 2-3 subscriptions to filter between.
  • “New” asset types can be fully managed at resource or resource group level by Role based access control. “Old” assets such as service bus can only be managed by named person access. To make administration and lock down of QA easier, QA and Dev have been separated into their own subscription.
  • Subscription creation would be handled via a central team to ensure both naming and security consistency.

Resource Groups

  • Resource groups can be used as a security ring-fence for each client. For data protection, and to facilitate easy deletion at a client’s request, no data will cross resource group boundaries.
  • Due to security setup and for naming consistency, resource groups should be provisioned (by script) by a central team (IT Infrastructure). The resources would then be deployed by Release Manager (by script) by the individual teams.
  • All assets for a given deployed customer will be contained in their own resource group for isolation purposes.
  • A “QA on Demand” environment would be temporarily spun up to test feature differences between clients on each build in another resource group not attached to the main pipeline.

The Filing Cabinet Analogy for the non-tech types

All of the above can be a little heavy for tech types, far less non-tech types, when trying to establish requirements. Jamie came up with an excellent analogy to explain this to upper management and why they needed to care.

The Paper (our projects):

Traditionally developers cared about developing the software, with little thought given to the underlying hardware this would be hosted on – that was a problem for architects and the infrastructure team. In our new tear down/up DevOps cloud (insert buzz word) world, developers now specify the infrastructure setup and configuration as yet another project in their solution. Azure Resource Manager (ARM) templates are the next evolution of PowerShell DSC with a particular focus on Azure. ARM templates allow developers to describe their desired Azure configuration and state as a JSON description of all the assets in their solution, further enabling the constantly repeatable tear down and re-provision nature of DevOps.
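For those who have not seen one, an ARM template is just a JSON document with a handful of well-known sections; the empty skeleton below (using the 2015-01-01 deployment template schema) is the shape every project's infrastructure description starts from:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
```

Everything the nightly tear down deletes has to be recoverable from the "resources" section, which is exactly the discipline we were after.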

image

The Folders (Resource Groups):

Azure now provides a logical grouping container called “Resource Groups” into which we can group our infrastructure assets and deployed content (i.e. different projects, clients etc). These allow you to manage, via RBAC (role based access control), which technical staff have access to administer given assets, with a couple of exceptions at the current time of writing. Resource Groups are the perfect “folder”, grouping items together with associated metadata; for example we can “Tag” a resource group with any given string we like, which will appear on the billing breakdown the finance department receives.

image

The Drawer (A subscription):

Azure has a soft limit of 800 resource groups (or folders) per subscription (soft in that a call to MS support can have this increased). Our client has multiple products/projects under development within a given program, so each subscription will contain multiple Resource Groups. We likened the subscription to the drawer of a filing cabinet.

image

Note: In the diagram above we mention release manager 2015 and the concept of a bounce VM. This is to deal with specific limitations that currently exist in Release Manager at the time of writing (Aug/Sep 2015), we cover this later in part 6 of this series.

The Cabinet (The account containing multiple subscriptions):

Referring back to the Enterprise Portal, multiple subscriptions are managed by 1 account owner. For example our Dev and QA subscriptions are managed by 1 account per program.

image

Multiple Cabinets (The Department):

The topmost level of the Azure enterprise portal is the department, which is a group of 1 or more accounts. It is at this level that budget caps are maintained. These caps, when breached, will knock out everything under that department from being usable. For our requirements it was agreed that it was only necessary to have 2 caps, one for non-prod and one for production with a very large cap (no cap is a really bad idea; I’ve heard the stories 1st hand of organisations racking up £10,000s of bills in several days).

Cost caps do not affect the reporting of cost within that department; using “tags” we can specify down to the asset level who should be cross-billed for the usage of a given deployed resource. This was a key factor in negotiating this simplified structure within our client’s organisation.
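For example, a resource group can be tagged at creation time with the cost centre it should be billed to. This is a sketch only: the 2015-era cmdlets took tags as Name/Value hashtables, and the group and tag names shown are hypothetical.

```powershell
# Sketch: tag a resource group so finance can cross-bill its usage.
Switch-AzureMode AzureResourceManager
New-AzureResourceGroup -Name "ProjectX-Dev" -Location "North Europe" `
    -Tag @{ Name = "costCentre"; Value = "ProgramA" }
```

The tag then flows through to the billing breakdown without any caps or restrictions being involved.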

image

See any gaps in our thinking?

Hopefully the above has explained the general high level principles we considered in setting up the overall Azure structure. The rest of this series will be more step-by-step “tutorial” style posts on exactly how we achieved this.

We would love any feedback you have on any of this series, so please feel free to use the comments below… I will respond.


Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

As described in Part 5 we will be using Release Manager for our full pipeline. At the present time VSO does not have a web interface for us to administer Release Manager, although a new web interface is in the pipeline, as announced at Build 2015 (skip to 28min in for a demo). The good news, however, is that we can currently do this via the desktop client tools that are used for on-premises deployments. If you do not have it already, download and install Release Management 2015 from MSDN here (note an MSDN Enterprise subscription is required for this download). Most of what I describe below is covered in this Channel 9 video over here.

1). Once installed, open the Release Management 2015 client tools from your desktop and you should be presented with the below screen. Enter your VSO URL (e.g. https://YourCompany.visualstudio.com) and click OK.

image

2). You will now be prompted to sign in to the Live/work/school account associated with your VSO subscription. Note: you must sign in for the first time as the Live account that is declared as the VSO account owner; after that you can go to Administration –> Manage Users –> New and add any additional users you wish.

Before we go any further with the Release Manager setup, we first need to address some limitations of the current Release Manager as of Aug 2015. Unfortunately we are unable to deploy directly to App Service environments from Release Manager, but there is a workaround to allow us to do this. Release Manager currently deploys to Virtual Machines via cloud services just fine, hence we are going to use Virtual Machines (of which we will require one in every subscription we are deploying into) to bounce our build to the appropriate App Service (or indeed any Azure asset) contained in that subscription. As we are likely to be doing the VM setup in more than one subscription it is worth scripting this; luckily I have gone through this pain already, so please find a couple of PowerShell functions that will help you do this over here.

Tip: Remember to execute the below PowerShell commands before executing the setup script.

Tip 2: Remember to exclude this new resource group from your clear down scripts.


# Sign in and select the subscription that will hold the bounce VM
Add-AzureAccount
Get-AzureSubscription
Select-AzureSubscription -SubscriptionName "Developer Tools"

# Switch to the Resource Manager cmdlets and create the resource group
Switch-AzureMode AzureResourceManager
New-AzureResourceGroup -Name BuildBounce -Location "North Europe" -Force

# Create the classic bounce VM using the helper function linked above
Create-BuildBounceVM -AzureSubscriptionName "Development" `
    -VMName "bldbounce" `
    -Location "North Europe" `
    -StorageAccountName "buildbouncedev" `
    -VMSize "ExtraSmall" `
    -OSName "Windows Server 2012 R2 Datacenter" `
    -VMUserName "Ouradmin" `
    -VMPassword "MyPasswordIs-01" `
    -ResourceGroupName "BuildBounce"

3). Step 2 above will take 10 minutes or so to run. While we are waiting we can gather the information for the next step: download the publish settings file from https://manage.windowsazure.com/publishsettings, which contains our subscription details. Open this file with Notepad once downloaded.

Note: this file contains sensitive data; delete it as soon as you have retrieved the details we require.
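If you prefer to stay in PowerShell, the classic module can fetch (and later import) the file for you. The local path below is only an example.

# Opens a browser to download the .publishsettings file for your account.
Get-AzurePublishSettingsFile

# Once downloaded, you can also import it locally (path is an example):
Import-AzurePublishSettingsFile -PublishSettingsFile "C:\temp\MySubs.publishsettings"

# Remember to delete the file afterwards - it contains a management certificate.
Remove-Item "C:\temp\MySubs.publishsettings"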

4). We now want to give Release Manager access to all the Azure subscriptions we intend to deploy into. To do this, go to the Administration tab –> Manage Azure, then click the New button. Using the information from the publish settings file we downloaded in step 3, add as many Azure environments as you require. You can reuse the storage account created in step 2; make sure step 2 is complete before clicking Save.

image

6). Now that we have our subscriptions added, we can set up our “Stage Types” in Release Manager. Go to Administration –> Manage Pick Lists –> Stage Type and click the Add button. A stage type is one of the stages your release will go through on its way to prod; this is usually a one-to-one mapping to “Environments”, but not always. Add all that you need; I have added Dev, QA, UAT-Stage, Demo and Prod.

image

7). We can now set up our “environments”, i.e. the physical stages a release moves through. You will need at least one for each Azure subscription you wish to move your release through. Go to Configure Paths and click New vNext: Azure.

image

8). The new environment template will appear; now select the “Link Azure Environment” button at the top right.

image

9). Under Azure environments, select the appropriate subscription and you should see the VM/cloud service we created earlier. If you don't, please double-check that you created “Classic” VMs and not the current ARM style; at this point in time Release Manager only supports classic VMs.

image

10). Select the name of the VM you created earlier and select link.

11). You will now be taken back to the New vNext: Azure template window. Now click “Link Azure Servers” and select the full VM name (e.g. vmbounce.cloudapp.net…) before clicking the “Link” button.

image

12). You can now save your template. If you wish, you can restrict this environment to a particular stage via “Stage Type Security”; this is optional.

14). We now need to set up the vNext Release Path. Still under “Configure Paths”, select the vNext Release Paths tab and select New.

image

15). Click Add, then select your stage (i.e. Dev), the environment we saved in step 12, and any approvers you require for this stage. For simplicity I have set these to Automated. Now click Save and Close.

image

16). Now select the Configure Apps tab (top right), then the Components tab (top left), and select “New vNext”. In the Source tab, under “Builds with Application”, add a single “\” to the build drop location. Give your component a name and click Save and Close.

image

17). Before we go any further we need to create a XAML build definition (Release Manager at this stage only supports XAML builds). To do this, open Visual Studio, navigate to Team Explorer, click Builds, then New XAML Build. Configure your trigger as desired, scope your “Source Settings” to the lowest possible tree level you can, and in the Build Defaults tab select the classic build controller we set up in part 4 of this blog series. Now select the Process tab and enter the below as additional arguments to MSBuild; this makes sure MSBuild and Release Manager can interoperate.


/p:DeployOnBuild=True /p:AutoParameterizationWebConfigConnectionStrings=False

image
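For reference, the same switches passed on a local command line would look something like the sketch below; the MSBuild path and solution name are examples, not taken from this setup.

# Illustrative local equivalent of the build arguments (paths and names are examples).
& "C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" .\MyWebApp.sln `
    /p:DeployOnBuild=True `
    /p:AutoParameterizationWebConfigConnectionStrings=False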

18). Now back over to Release Manager: select Configure Apps, then the “vNext Release Templates” sub-tab at the top left. Click New, give your pipeline a name, select the release path we created in step 14, and check the box “Can Trigger a Release from a Build”. You will also want to select the build definition: click the Edit button and select the team project and build definition. Note: if the build definition list is empty you will first need to create a XAML build (step 17).

image

19). Now right-click Components in the toolbox, click Add, select the component we created in step 16 and click the “Link” button.

image

20). Now drag the “Deploy Using PS/DSC” action onto your template and double-click it. Select your bounce server name, and enter the username and password you created when provisioning the VM with the script in step 2 above. Select the component name we created in step 16, set the PSScriptPath to “Configuration\publish.ps1”, then scroll down the component and set “SkipCaCheck” to “True”.

image
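The content of Configuration\publish.ps1 is up to you; it is whatever the bounce VM needs to run to push the drop on to the target App Service. A minimal sketch using the classic module's Publish-AzureWebsiteProject might look like the following (the web app name, package name and parameter wiring are assumptions for illustration):

# Minimal sketch of Configuration\publish.ps1, run on the bounce VM.
# $applicationPath is assumed to be supplied by Release Manager with the drop location.
param($applicationPath)

# Assumes credentials were already imported on the VM (publish settings or Add-AzureAccount).
Select-AzureSubscription -SubscriptionName "Development"

# Push the web deploy package to the target App Service (names are examples).
Publish-AzureWebsiteProject -Name "mywebapp-dev" `
    -Package (Join-Path $applicationPath "MyWebApp.zip")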

21). You should now have a pipeline to push a build into a given environment. To provision other environments simply repeat steps 2 – 20 above.
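Since the provisioning from step 2 is scripted, repeating it per subscription can be reduced to a simple loop; the subscription and storage account names below are assumptions to illustrate the pattern.

# Provision a bounce VM in each subscription we deploy into (names are examples).
$targets = @(
    @{ Sub = "QA";         Storage = "buildbounceqa"  },
    @{ Sub = "Staging";    Storage = "buildbouncestg" },
    @{ Sub = "Production"; Storage = "buildbounceprd" }
)

foreach ($t in $targets) {
    Select-AzureSubscription -SubscriptionName $t.Sub
    Create-BuildBounceVM -AzureSubscriptionName $t.Sub -VMName "bldbounce" `
        -Location "North Europe" -StorageAccountName $t.Storage -VMSize "ExtraSmall" `
        -OSName "Windows Server 2012 R2 Datacenter" -VMUserName "Ouradmin" `
        -VMPassword "MyPasswordIs-01" -ResourceGroupName "BuildBounce"
}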

Summary:

In this post we completed the setup of an end-to-end deployment pipeline in Azure. You should now be able to check in to VSO and have your project built, tested and deployed into an Azure environment.