
Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 1 – General Overview & Account Structure.

Over the past few months I have been working alongside James Kewin, consulting at a Financial Services client with the ambition to host a new greenfield project on Azure. Added to this, the client wished the work to provide a template for migrating their existing infrastructure at a later date. With this goal in mind, James and I have spent several months of R&D moving our enterprise client to the cloud, from a complete end-to-end Application Lifecycle Management point of view, with the latest best practice throughout the pipeline.

This blog series is a combination of the recommendations we implemented, that address enterprise issues with a move online, and a step-by-step tutorial of how we set up Azure, Visual Studio Online, Release Manager and how we implemented many DevOps techniques such as continuous delivery, environment tear down and re-provision, and the obsession with automation that is required in this new “cloud” world.

Tip: Azure is constantly subject to change; what I write here is likely not to be valid in 6 months. Always double-check current limitations and advancements. This series was first written in Jun-Sep 2015. It is also important to note that what is the correct setup for one organisation may not be correct for all; please get in touch with any comments or questions.

Azure Subscriptions & the Enterprise portal

image

Most of you will be aware of the Azure portal above, which seems great, but how do you go about structuring this in the enterprise world of Dev, Test, Stage, Demo and Production environments, with the associated security lockdown and audit requirements, across 100s if not 1000s of applications? Luckily, if you have those kinds of requirements you probably have a Microsoft Enterprise Agreement, and as part of this Microsoft provides the Azure Enterprise portal, which gives us some functionality to manage, report on and plan our subscription structures. The enterprise portal adds another two layers on top of a standard subscription: the first is an “Account”, which has a single administrator with full control over multiple subscriptions; the top-most level is a “Department”, a collection of multiple Accounts where we can set budgets for Azure usage.

Design principles

In designing our department, account and subscription structures there were several design principles we wanted to keep in mind. These were:

  • Maintenance: 200+ subscriptions (one for every application) would be impractical to maintain, given the per-subscription setup overheads.
  • Extensibility: There should be room to grow.
  • Flexibility: Any decision now should be easy to change going forward.
  • Security: Environments and customer data should be segregated, with access control applied appropriately.
  • Azure Limits: Be conscious of hard Azure subscription limits (500 resource groups in a subscription etc), as well as resource limitations and scale limits.
  • Costs: Hard “cut out” department limits to prevent accidental budget blowout.
  • Reporting: It should be easy for management to retrieve cost reporting at all levels.

Constraints of Azure Department, Account and Subscription structures

It is all very well having a set of design principles, but we had to be aware of the current constraints and considerations associated with how we structure the Azure Enterprise Portal in relation to these principles. At the current time of writing (Aug 2015) these are:

  • Cost Caps. Unlike an Azure public subscription, a subscription tied to an Enterprise Agreement only has hard cut-out limits for your budget at the department level. All controls, and visibility, of costs are at the department level only, something your dev team is not likely to have access to. It is all too easy for a junior dev to create several whopping VMs that cost £1,000s per month each if you let them.

  • Subscriptions are difficult to move between accounts, but accounts are easy to move between departments. There is no way of moving a subscription between accounts yourself, and although you can contact Microsoft support and have them move your subscription to another account for you, my personal experience of doing this has not been pleasant (hint: it involved a great deal of downtime). As such it is easier to assume that once a subscription is attached to an account it is fixed there, until such time as Microsoft add the functionality to the Azure portal, as they have for moving an account to a different department with zero effort and zero downtime. **UPDATE** MS have now added this ability to the enterprise portal (Oct 2015) **

  • Subscription Security. Azure is very much still in “migration” mode from the old https://manage.windowsazure.com to the new https://portal.azure.com/. While the new portal provides Role Based Access Control (RBAC) to manage who has access to see, modify and create each asset, server, VM etc (and in turn the data associated with those assets), not all asset types are available in the new portal as yet, so you are forced to provide your team with a greater level of access to a subscription than is perhaps desired. Notable exceptions to the new portal are Service Bus and Azure Active Directory. Locking down who has access to a production asset vs a dev, test, stage or demo asset within one subscription is therefore currently not possible, so our environments will have to reside in multiple subscriptions to satisfy audit and security requirements, especially in financial services.

How did we implement the enterprise structure in an Azure Department/account/subscription structure?

It is important to note here that there are many ways to map your Enterprise structure into an Azure one. This is something you really want to get right from the outset, so take those extra few weeks and speak to all interested parties across the organization.

To understand why we implemented the structure below, it is probably best I give some context. Our client is not too dissimilar to many medium/large enterprise organizations in that they have multiple “programs of work” running simultaneously, each with multiple teams/projects, each of which has its own Dev, QA, BA and Architecture teams. We wanted to find something that was going to be suitable for everyone while still allowing each “program” a degree of flexibility in how they operate.

At the current time our client also has a clear line in the sand between development and operations, anything client/public facing is the responsibility of a dedicated Operations support team. We modelled the account and subscription structure for stage/demo and production environments accordingly.

image

image

Why did we implement it this way?

Rationale for our recommendations:

Departments

  • A way to maintain hard account limits/cut-outs to prevent budget overrun.
  • Keep departments to a minimum to reduce maintenance.
  • Departments are easy to create, and accounts are easy to move between them, should the need arise later.
  • Don’t optimise prematurely, but still leave room for flexibility. Two departments seem to fulfil budget cap requirements at present; if one program blows the budget for the full department it will be easy to split at a later date.

Accounts

  • Accounts split by department, to make it possible to move accounts to their own departments should it be required later.
  • Accounts should be controlled by a central “Core” team, our suggestion being IT Infrastructure / IT Helpdesk, as maintenance plans need to be set up per subscription (as do build server and VSO rights).

Subscriptions

  • Due to maintenance setup requirements and the manageability overhead, we wanted to keep the overall number of subscriptions small.
  • Unlike Account and Department structures (which are virtual to users), switching and navigating subscriptions is a “physical” workflow switch, which means closing one subscription to get into another. It is hence important that any developer can navigate the subscription structure easily, with no confusion as to which application and app environments live in which subscription. We therefore recommended that subscriptions be kept at a department level with one subscription per environment, giving most developers at most 2-3 subscriptions to filter between.
  • “New” asset types can be fully managed at resource or resource group level by role-based access control. “Old” assets such as Service Bus can only be managed by named-person access. To make administration and lockdown of QA easier, QA and Dev have been separated into their own subscriptions.
  • Subscription creation would be handled via a central team to ensure both naming and security consistency.

Resource Groups

  • Resource groups can be used as a security ring-fence for each client. For data protection reasons, and to facilitate easy deletion at a client’s request, no data will cross resource group boundaries.
  • Due to security setup and for naming consistency, resource groups should be provisioned (by script) by a central team (IT Infrastructure). The resources would then be deployed by Release Manager (by script) by the individual teams.
  • All assets for a given deployed customer will be contained in their own resource group for isolation purposes.
  • A “QA on Demand” environment would be temporarily spun up to test feature differences between clients on each build in another resource group not attached to the main pipeline.
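To make the central provisioning idea concrete, a rough sketch of such a script, using the Azure PowerShell cmdlets of the era, might look like the following. The group name, user and scope are entirely invented for illustration, and role-assignment parameter names varied between early Azure PowerShell releases, so treat this as a sketch rather than a drop-in script:

```powershell
# Hypothetical example: provision a ring-fenced resource group for one client
# and grant that client's team Contributor rights on that group only.
Switch-AzureMode AzureResourceManager
New-AzureResourceGroup -Name "ClientA-Prod" -Location "North Europe"
New-AzureRoleAssignment -SignInName "clienta-team@yourcompany.com" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/ClientA-Prod"
```

Because the group is the RBAC boundary, deleting a client really is as simple as deleting the one resource group.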

The Filing Cabinet Analogy for the non-tech types

All of the above can be a little heavy for tech types, let alone non-tech types, when trying to establish requirements. James came up with an excellent analogy to explain this to upper management and why they needed to care.

The Paper (our projects):

Traditionally developers cared about developing the software, with little thought given to the underlying hardware it would be hosted on – that was a problem for architects and the infrastructure team. In our new tear down/up DevOps cloud (insert buzzword) world, developers now specify the infrastructure setup and configuration as yet another project in their solution. Azure Resource Manager templates (ARM templates) are the next evolution of PowerShell DSC with a particular focus on Azure. ARM templates allow developers to describe their desired Azure configuration and state as a JSON description of all assets in their solution, further enabling the constantly repeatable nature of tear down and re-provision in a DevOps context.
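To give a feel for what such a JSON description looks like, here is a minimal illustrative skeleton declaring a single storage account; a real template for a full environment would contain many more parameters and resources:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2015-06-15",
      "location": "North Europe",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}
```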

image

The Folders (Resource Groups):

Azure now provides a logical grouping container called “Resource Groups” that we can group our infrastructure assets and deployed content into (i.e. different projects, clients etc). These allow you to manage RBAC (role-based access control), controlling which technical staff have access to administer given assets, with a couple of exceptions at the current time of writing. Resource Groups are the perfect “folder”, grouping items together with associated metadata; for example we can “Tag” a resource group with any string we like, and it will appear on the billing breakdown the finance department receives.
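As a small illustration of the tagging mentioned above (the group and tag names are invented), tagging a resource group with the Azure PowerShell cmdlets of the time looked something like this; the tag then flows through to the billing breakdown:

```powershell
# Tag a resource group so its spend can be cross-charged on the bill.
# The 0.9.x-era cmdlets expected a Name/Value hashtable for -Tag.
Switch-AzureMode AzureResourceManager
Set-AzureResourceGroup -Name "ClientA-Prod" `
    -Tag @{ Name = "CostCentre"; Value = "FIN-1234" }
```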

image

The Drawer (A subscription):

Azure has a soft limit of 800 resource groups (or folders) per subscription (a soft limit in that a call to MS support can have this increased). Our client has multiple products/projects under development within a given program, so each subscription will contain multiple Resource Groups. We likened the subscription to the drawer of a filing cabinet.

image

Note: In the diagram above we mention release manager 2015 and the concept of a bounce VM. This is to deal with specific limitations that currently exist in Release Manager at the time of writing (Aug/Sep 2015), we cover this later in part 6 of this series.

The Cabinet (The account containing multiple subscriptions):

Referring back to the Enterprise Portal, multiple subscriptions are managed by 1 account owner. For example our Dev and QA subscriptions are managed by 1 account per program.

image

Multiple Cabinets (The Department):

The topmost level of the Azure enterprise portal is the grouping of departments, a department being a group of 1 or more accounts. It is at this level that budget caps are maintained. These caps, when breached, will knock out everything under that department from being usable. For our requirements it was agreed that only 2 caps were necessary: one for non-prod, and one for production with a very large cap (no cap at all is a really bad idea; I’ve heard the stories first hand of organisations racking up bills of £10,000s in several days).

Cost caps do not affect the reporting of costs within that department; using “tags” we can specify, down to the asset level, who should be cross-billed for the usage of a given deployed resource. This was a key factor in negotiating this simplified structure within our client’s organisation.

image

See any gaps in our thinking?

Hopefully the above has explained the general high-level principles we considered in setting up the overall Azure structure. The rest of this series of posts will be more “tutorial” style, covering step by step exactly how we achieved this.

We would love any feedback you have on any of this series, please feel free to use the comments below… I will respond.


In May this year Troy Hunt, a respected application security expert, asked the question "Do you really want bank grade security in your SSL?", accompanying this with an analysis of the Australian banks. This was followed several weeks later by Mark (second name unknown), who compiled the British equivalent, which continues to be updated every few days. Unfortunately not much progress has been made in the months since these articles were first written, so I wanted to try a slightly different angle.

While I don’t want to repeat what Mark has already done at a UK level, I do want to localise this to Scotland, twist it a little to cover other financial services institutions, and explain why the financial IT industry in Scotland has far-reaching effects right across the world. Hopefully by blogging on this I will reach these security professionals in Scotland to make them aware, and hopefully prompt action. Lastly, I would like to share some experiences I have had in trying to inform these organisations of issues, and the success and frustration I have faced.

Before I progress though, it is important to note:

Security is a continual compromise between usability, practicality and maintaining backward compatibility. It is never black and white.

Why does SSL matter?

For the less technical among you, SSL is the technology equivalent of the lock on your front door. It is the first, and main, line of defence in protecting all your sensitive financial data as it passes across the internet (think https and the green bar in your browser). How this is implemented and configured matters greatly:

Would you put the dinky bolt you use on your shed on your front door?

While OWASP only lists security misconfiguration at number 5 in its top 10 list of security threats, it is one of the easiest items on the list to fix TODAY! I don’t take pleasure from writing this particular post. I, like many of you, use these very institutions myself, and I don’t want to shame anyone. However, we as an IT community in Scotland need to do all we can to persuade some (not all) of these organisations to start taking security issues as seriously, and as urgently, as their PR and marketing departments claim they do. These organisations are not short of budget to deal with these issues, see the £300m IT systems in a subsidiary of RBS alone, but some are short on the agility to respond quickly. The banking industry in the UK is well known for being as agile as a hippo without legs.
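If you want to sanity-check your own endpoints beyond the SSL Labs site, a quick and admittedly rough way to see whether a server still accepts a legacy protocol is to attempt a handshake at that version. The hostname below is a placeholder; this PowerShell sketch tries an SSL 3.0 handshake against port 443:

```powershell
# Rough check: does the server complete an SSL 3.0 handshake?
# An exception (the catch branch) is the result you want to see.
$client = New-Object System.Net.Sockets.TcpClient("www.example.com", 443)
$ssl = New-Object System.Net.Security.SslStream($client.GetStream())
try {
    $ssl.AuthenticateAsClient("www.example.com", $null,
        [System.Security.Authentication.SslProtocols]::Ssl3, $false)
    "SSL 3.0 accepted - investigate!"
} catch {
    "SSL 3.0 rejected - good."
} finally {
    $client.Close()
}
```

The same pattern works for probing SSL 2.0 or confirming TLS 1.2 support, by swapping the SslProtocols value.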

Financial Services in Scotland – From an IT perspective.

Despite the frankly embarrassing commercial issues at some organisations in the past few years, Scotland is one of Europe’s leading financial centres and the biggest financial hub in the UK outside of London, with a history of some ‘Scottish’ banks going back 300 years. Banks are only one part of this; added to ‘Banks’ are many other retail financial services providers, such as Standard Life, Aegon, Royal London, Scottish Widows and others, that we perhaps trust with our most important life savings in the form of pension funds, savings and ISAs. I wanted to establish whether other FS (Financial Services) institutions were taking ‘bank grade security’ any more seriously than banks of the traditional form.

It is important to note here that I am restricting this to ‘Retail’ providers; I would be here for weeks doing this exercise if I were to extend it into the wholesale or services market. From initial research I came up with a list of 80 medium to large FS organisations employing IT staff here (think large fund managers, asset managers, fund platforms etc).

Added to the 1000s of IT jobs from ‘Scottish’ FS providers, and what most people don’t realise, is that banks and FS institutions from across the world have large IT footprints here in Scotland, with many shipping financial systems and websites worldwide (Ireland, Australia, Canada, America, Switzerland, South Africa, Germany and France are some of the countries I know of from personal experience). HSBC, Barclays, Prudential, Morgan Stanley and others employ 1000s of IT workers in Scotland, with JP Morgan alone recently expanding their European technology hub beyond 800 people.

The institutions I list below may not all be ‘Scottish’, but they do all employ large numbers of IT professionals here, and it is these professionals I hope to reach, to encourage the correction of any part of the ship they are responsible for. This is not to say all these organisations will have their SSL managed here, but responsibility starts with those in the know; hopefully the correct people/teams can be found internally if they don’t reside locally.

The Results

It is important to note that having less than an “A” grade doesn't mean these organisations are lacking attention from their security teams; a “B” grade could be considered perfectly acceptable if known exploits are not viable at the present time (though the term viable in reference to RC4 is now on the borderline of practical and is worth keeping an eye on). However, those scoring grade “F” (where exploits are extremely practical, i.e. POODLE) really should be working urgently to get these issues patched yesterday, and/or mitigated by other means! The below results were analysed using the Qualys SSL Labs tools.

** Update 31/8/15 - Aegon now showing an "A-", 2 working days after publishing. Kudos to the cheetahs at Aegon for their quick action :-) **

** Update 17/9/15 - HSBC now showing a strong "B" grade **

Company | Overall Grade | More Details | Notes
Royal Bank of Scotland (RBS) | A | Here | RBS moved from C to A since Mark's initial May report.
Virgin Money | A | Here |
Morgan Stanley | A | Here |
Scottish Widows | A- | Here |
Barclays | A- | Here |
Standard Life | B | Here | Weak DH key
Nationwide | B | Here | RC4 cipher
Clydesdale Bank | C | Here | RC4 still supported; no TLS 1.2 support
Royal London (Scottish Life) | C | Here | RC4 still supported; no TLS 1.2 support
JP Morgan | C | Here | SSL3 & RC4 still supported
HSBC | C | Here | SSL3 & RC4 still supported; no TLS 1.2 support
Bank of Scotland (HBOS) | C | Here | SSL3 & RC4 still supported; no TLS 1.2 support
Sainsbury's Bank | C | Here | SSL3 & RC4 still supported; no TLS 1.2 support
Halifax | C | Here | Vulnerable to SSL 3 POODLE; SSL3 & RC4 still supported; no TLS 1.2 support
Lloyds Bank | C | Here | Vulnerable to SSL 3 POODLE; SSL3 & RC4 still supported; no TLS 1.2 support
TSB | F | Here | Vulnerable to POODLE; SSL3 & RC4 still supported; no TLS 1.2 support
Tesco Bank | F | Here | Vulnerable to POODLE; no TLS 1.2 support
Aegon | F | Here | Vulnerable to POODLE; vulnerable to SSL 3 POODLE; SSL2, SSL3 & RC4 still supported

Tip: Terminating your SSL on a windows server? Use this tool to configure your SSL to PCI compliance and SSL Labs A grade!
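For context on what such configuration tools are doing under the hood: on Windows, the protocols Schannel offers are controlled by registry keys. Disabling SSL 3.0 for incoming connections amounts to something like the sketch below; it requires elevation and a reboot to take effect, and you should of course test on a non-production box first:

```powershell
# Disable SSL 3.0 for incoming (server) connections via Schannel.
# Requires an elevated session; a reboot is needed to take effect.
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "Enabled" -Value 0 `
    -PropertyType DWord -Force | Out-Null
```

The same key structure (with "Client" instead of "Server", or other protocol names such as "TLS 1.2") controls the rest of the Schannel protocol matrix.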

Summary

As can be seen above, some institutions are doing OK; special recognition to RBS for their improvement in the last few months. On the other hand, others (Halifax, Lloyds, TSB, Tesco Bank and Aegon) appear to have room to pull their trousers up in regards to patching vulnerabilities, mitigating them, and general configuration. There also appears to be no split between retail banks and other FS institutions: some come near the top (Scottish Widows), while others look like they may be terminating SSL on something last configured in 1990 (Aegon, SSL2 support, do you really need that?).

I contacted, or certainly tried to (none of them make it easy to report security concerns to the correct people who know what gobbledygook I speak of here), some of the organisations at the bottom of this list. After many frustrating weeks trying to establish who to contact, I managed to get a friend to put me in touch with the security team at one of these institutions, who called me. I was told that they were aware of Troy's original article and Mark's follow-up, and that, separately, their POODLE vulnerabilities had been highlighted by their own PCI/DSS auditors. They claimed they had a “Program of Works underway” to remove the deficient kit they are terminating SSL on, but they could not disclose externally when they expected this to be completed.

While the organisations that still support SSL3 and RC4 may have valid legacy compatibility reasons for doing so (that’s a big ‘may’ that is worth questioning), those that have known exploitable vulnerabilities left unpatched or unmitigated should be doing something yesterday to protect their customers' security. Quite frankly, the response I talked about above is simply not fast enough (early May to late Aug). In the case of old hardware that can’t be removed today, why not terminate the traffic at Cloudflare? Yes, it is less than ideal, being a worldwide CDN etc, and likely not PCI/DSS compliant, but still much better than leaving a POODLE-sized hole in your SSL implementation (see my note at the start of this blog post about the trade-off).

Can Financial Services organisations possibly move like a cheetah on security though?

On doing the study of non-retail institutions I happened to come across a medium to large organisation that came out with a grade “C”. Fortunately I had direct access to the correct people inside this organisation, and in less than 24 hours that organisation moved from a “C” to an “A”. So yes, with enough effort, determination and empowerment of the correctly skilled staff, it is possible for organisations in financial services to move like a cheetah on security concerns.

Would love to know your thoughts in the comments. Special thanks to several individuals for helping directly or indirectly with this post, you know who you are.

Cheers

Bryan


Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

As described in Part 5, we will be using Release Manager for our full pipeline. At the present time VSO does not have a web interface for administering Release Manager, although a new web interface is in the pipeline, as announced at Build 2015 (skip to 28 min in for a demo). The good news is that we can currently do this via the desktop client tools used for on-premises deployments. If you do not have it already, download and install Release Management 2015 from MSDN here (note: an MSDN Enterprise subscription is required for this download). Most of what I describe below is covered in this Channel 9 video over here.

1). Once installed, open the Release Management 2015 client tools from your desktop and you should be presented with the below screen. Enter your VSO URL (eg https://YourCompany.visualstudio.com) and click OK.

image

2). You will now be prompted to sign in to the Live/work/school account associated with your VSO subscription. Note: you must sign in for the first time as the Live account that is declared as the VSO account owner; after that you can go to Administration –> Manage Users –> New and add any additional users you wish.

Before we go any further with the Release Manager setup, we first need to address some limitations of the current Release Manager as of Aug 2015. Unfortunately we are unable to deploy directly to App Service environments from Release Manager, but there is a workaround. Release Manager can currently deploy to Virtual Machines via cloud services without issue, so we are going to use Virtual Machines (we will require one in every subscription we are deploying into) to bounce our build to the appropriate App Service (or indeed any Azure asset) contained in that subscription. As we are likely to be doing the VM setup in more than one subscription it is worth scripting this; luckily I have been through this pain already, so please find a couple of PowerShell functions that will help you do this over here.

Tip: Remember to execute the below PowerShell commands before executing the setup script.

Tip 2: Remember to exclude this new resource group from your clear down scripts.


# Sign in and select the subscription the bounce VM will live in
Add-AzureAccount
Get-AzureSubscription
Select-AzureSubscription -SubscriptionName "Developer Tools"

# Switch to the ARM cmdlets to create the resource group for the bounce VM
Switch-AzureMode AzureResourceManager
New-AzureResourceGroup -Name BuildBounce -Location "North Europe" -Force

# Create-BuildBounceVM is one of the helper functions from the script linked above
Create-BuildBounceVM -AzureSubscriptionName "Development" -VMName "bldbounce" -Location "North Europe" -StorageAccountName "buildbouncedev" -VMSize "ExtraSmall" -OSName "Windows Server 2012 R2 Datacenter" -VMUserName "Ouradmin" -VMPassword "MyPasswordIs-01" -ResourceGroupName "BuildBounce"

3). Step 2 above will take 10 minutes or so to run. While we are waiting we can gather the information for the next step. To do this we need to download the publish settings file from https://manage.windowsazure.com/publishsettings, which contains our subscription details. Open this file with Notepad once downloaded.

Note: this file contains sensitive data; you will want to delete it as soon as you have retrieved the data we require.
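If you prefer to stay in PowerShell, the Azure cmdlets can trigger the same download (and import the file locally if you need it later); the path below is just an example:

```powershell
# Opens the browser at the publish settings download for your signed-in account
Get-AzurePublishSettingsFile

# Optionally import the downloaded file for local PowerShell use,
# then delete it, as it contains management credentials.
Import-AzurePublishSettingsFile -PublishSettingsFile "C:\temp\MySub.publishsettings"
```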

4). We now want to give Release Manager access to all the Azure subscriptions we intend to deploy into. To do this, go to the Administration tab –> Manage Azure, then click the New button. Using the information from the publish settings file downloaded in step 3, add as many Azure environments as you require. You can reuse the storage account created in step 2; make sure step 2 is complete before clicking Save.

image

6). Now that we have our subscription added, we can set up our “Stage Types” in Release Manager. Go to Administration –> Manage Pick Lists –> Stage Type and click the Add button. A stage type represents the stages your release will go through on its way to prod; this is usually a one-to-one mapping to “Environments”, but not always. Add all that you need. I have added Dev, QA, UAT-Stage, Demo and Prod.

image

7). We can now set up our “environments”, i.e. the physical stages a release moves through. You will need at least one for each Azure subscription you wish to move your release through. Go to Configure Paths and click on New vNext: Azure.

image

8). The new environment template will appear, now select the “Link Azure Environment” button at the top right.

image

9). Under Azure environments, select the appropriate subscription and you should see the VM/cloud service we created earlier. If you don’t, please double-check that you created “Classic” VMs and not the current ARM style; at this point in time Release Manager only supports classic VMs.

image

10). Select the name of the VM you created earlier and select link.

11). You will now be taken back to the New vNext:Azure template window. Now click “Link Azure Servers” and select the full VM name (eg. vmbounce.cloudapp.net…) before clicking the “link” button.

image

12). You can now save your template. If you wish, you can restrict this environment to a particular stage under “Stage Type Security”; this is optional.

14). We now need to set up the vNext Release Path. Still under “Configure Paths”, select the vNext Release Paths tab and click New.

image

15). Click Add, select your stage (i.e. Dev), the environment we saved back in step 12, and any approvers you require for this stage. For simplicity I have set these to Automated. Now click Save and Close.

image

16). Now select the Configure Apps tab (top right), then the Components tab (top left), and select “New vNext”. In the Source tab, under “Builds with Application”, add a single “\” to the build drop location. Give your component a name and click Save and Close.

image

17). Before we go any further we need to create a XAML build definition (Release Manager at this stage only supports XAML builds :-( ). To do this open Visual Studio, navigate to Team Explorer, click Builds, and click New XAML Build Definition. Configure your trigger as desired, scope your “Source Settings” to the lowest possible tree level you can, and in the Build Defaults tab select the classic build controller we set up in part 4 of this blog series. Now select the Process tab and enter the below as additional arguments to MSBuild. This makes sure MSBuild and Release Manager can interop together.


/p:DeployOnBuild=True /p:AutoParameterizationWebConfigConnectionStrings=False

image

18). Now back over in Release Manager, select Configure Apps, then select the “vNext Release Templates” sub-tab at the top left. Click New and give your pipeline a name, select the release path we created in step 14, and check the box “Can Trigger a Release from a Build”. You will also want to select the build definition: click the Edit button and select the team project and build definition. Note: if the build definition list is empty you will first need to create a XAML build (step 17).

image

19). Now right-click Components in the toolbox and click Add, select the component we created at step 16 and click the “Link” button.

image

20). Now drag the “Deploy Using PS/DSC” action onto your template and double-click it. Select your bounce server name, enter the username and password you created when creating the VM with the script at step 2 above, select the component name we created at step 16, and set the PSScriptPath to “Configuration\publish.ps1”. Then scroll down in the component and set “SkipCaCheck” to “True”.
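The publish.ps1 referenced above is your own deployment script, executed on the bounce VM; its contents depend entirely on what you are deploying. Purely as an illustrative sketch (the subscription name, web app name and package path are all hypothetical), a script that bounces a web deploy package on to an App Service might look like:

```powershell
# Illustrative only: run on the bounce VM to push a build package
# to an Azure Web App. Names and paths here are hypothetical.
Select-AzureSubscription -SubscriptionName "Development"
Publish-AzureWebsiteProject -Name "my-dev-webapp" `
    -Package "$PSScriptRoot\..\_PublishedWebsites\MyApp.zip"
```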

image

21). You should now have a pipeline to push a build into a given environment. To provision other environments simply repeat steps 2 – 20 above.

Summary:

In this post we have completed the setup of an end-to-end development pipeline in Azure. You should now be able to successfully check into VSO and have your project built, tested and deployed into an Azure environment.


Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 5 - Setting up Continuous delivery (CD) on CI Development builds.

Now that we have a build server (see part 4), we can set up our CI build and our continuous deployment build to the development server. We have several options for how to do this:

- Use Azure Websites' built-in "Continuous Delivery" from VSO. This is great for simple deployments, but as we start adding more complex features to our enterprise apps it is unlikely to suffice going forward.

- Use VSO's Resource Group deployment functionality from the vNext CI build. Unfortunately this is not an option available to all of us (see Abrish's comments on this MSDN article), hence until this is fully released we need something else.

- Use vNext for CI, and use a XAML build plus Release Manager for CD on a rolling-build basis. We will be using Release Manager throughout the rest of our pipeline to production; although not ideal due to the performance overhead, especially for CD, this is the only real option available.

We chose the last option as we felt it offered a gentle way into the new build system while still giving us full compatibility with the Release Manager functionality we required. To do this we first need to set up a vNext CI build from VSO:

1). First open the VSO web interface and navigate to the "Build" tab. Click the green plus button to create a new build definition, choose "Visual Studio" and click OK.

image

2). You should now see the New Build definition page. As this is for CI purposes only, we can delete the “Index sources…” and “Publish Build Artifacts” steps from the default template.

image

image

3). In the "Visual Studio Build" step select the "…" button next to Solution and navigate to your Dev branch, then select the .sln (solution) file. I also prefer my builds to be clean every time, so I enabled the "Clean" checkbox.

image

4). Now select the Visual Studio Test step and expand Advanced options. To better encourage best practice check the checkbox “Code Coverage Enabled”. If your unit test projects are in the same solution as your other projects you should not need to change the default “Test Assembly” location.

image

5). Now click on the “Repository” tab and change the cloak location for your build folder. In my case this is a folder inside the dev branch.

image

6). Now select the Triggers tab and enable “Continuous Integration (CI)”

image

7). Now select the "General" tab and change the Default Queue to the build server group we set up in part 4.

image

8). We are now good to go: click Save, give the definition a name, and check in some changes to test. If you hit issues, make sure you do not have NuGet packages checked in with your solution and that the "Restore NuGet Packages" checkbox is enabled on the Visual Studio Build step.

image

Summary:

We have now set up the CI build for our project, but still need a way of continuously deploying it to the given Azure environment. In the next article I will discuss how this can be achieved today with Release Manager 2015 on VSO while we wait for Release Management vNext to be released by Microsoft.

Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 4 – Setting up Build Servers.

Now that we have an Azure environment, we need some way to build our assets before deploying them out there. With VSO we currently have 2x2 options. 2x2? Yes: see the grid below, and note that you can mix and match from each column.

Templates    Controllers
vNext        Hosted
XAML         Our own VM

VSO and TFS 2015 introduced new build templates that are night-and-day better than the old XAML ones; for an overview of why, see Colin Dembovsky's blog post. While XAML builds are still supported, they finally appear to be headed for the retirement home. The other choice we have with VSO is whether to use the hosted build controllers provided by VSO, or to provision our own VMs to build what we have defined in our templates (whether that is XAML or vNext). After some extensive testing, which is a blog post on its own, we came to the conclusion that at present the hosted build controllers are simply too slow for anything more than a couple of developers (20 times slower in some cases) – and since this is an article about moving the enterprise to the cloud, our developer numbers are likely to be in the 10s to 100s. Due to the pricing model of "build minutes", and just how dramatically slow the hosted builds are, on any non-trivial project the hosted build controllers are as expensive as provisioning the VMs yourself.

Setting up the Build Server:

Luckily a Microsoft engineer, Thiago Almeida, has made setting up a build server and attaching it to VSO incredibly straightforward by providing an ARM template in his GitHub repository. Before using this, though, we have to set up Alternate Authentication Credentials on our VSO account so our build agent can access it.

1). Go over to your VSO account and sign in as a service account (a generic user you have set up for this purpose) with admin rights across the full VSO collection.

image

1.1). Click your name / the service account's name and select "My Profile".

image

1.2). In my profile select the security tab, then “Alternate Authentication Credentials”.

image

1.3). At the present time the build controller setup will not work with Personal Access Tokens, so dismiss the warning below and check "Enable Alternate Authentication Credentials". Now enter a username and password and click "Save".

image

Setup Agent Pool (optional)

2). You can create a custom pool that holds your build agents together. This can be useful if you plan on creating build agents both in Azure and on-premises.

2.1). To do this go to the Agent Pool Admin panel at https://VSOUSERNAME.visualstudio.com/_admin/_AgentPool and select “New Pool”

image

2.2). Enter a pool name and click OK; note down this name for later.

Deploying the Build VM

3). Open the PowerShell ISE on your desktop and run the following commands. These will first prompt you to log in, then display a list of all subscriptions your account has access to, then select the subscription we wish to run our script in, before finally switching PowerShell over to Azure Resource Manager mode.


# Log in to your Azure account (interactive prompt)
Add-AzureAccount
# List the subscriptions this account can access
Get-AzureSubscription
# Select the subscription to deploy into - substitute your own name
Select-AzureSubscription -SubscriptionName "Developer Tools"
# Switch the (pre-1.0) Azure PowerShell module into Resource Manager mode
Switch-AzureMode AzureResourceManager

3.1). We can now execute the line of PowerShell below, which will use the template provided by Thiago and deploy the assets into a resource group named "BuildServers".


New-AzureResourceGroupDeployment -Name BuildServerSetup -ResourceGroupName BuildServers -TemplateUri https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/visual-studio-vsobuildagent-vm/azuredeploy.json -vmVisualStudioVersion VS-2015-Enterprise-AzureSDK-2.7-WS2012R2

PowerShell will now prompt you for all parameters required by the script. Note: it will take many minutes for Azure to spin this environment up. Pay careful attention to the parameter guidelines over here.

image

Here is what I provided:

Param                Value
StorageName          companybuildserverstorage
deployLocation       "North Europe"
vmName               CompanyBuildServer01
vmAdminUserName      admin
vmAdminPassword      ***********
vmIPPublicDnsName    companybuildserver01
vsoAccount           company
vsoUser              The user created above
vsoPass              *********
poolName             "Default"
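If you would rather not be prompted for each value, the same parameters can be passed inline. A sketch under the assumption that the template accepts the parameter names listed above; all values are examples, so swap in your own:

```powershell
# Non-interactive variant of the deployment in 3.1. Every value below is an
# example placeholder; the passwords are read securely rather than hard-coded.
New-AzureResourceGroupDeployment `
    -Name BuildServerSetup `
    -ResourceGroupName BuildServers `
    -TemplateUri https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/visual-studio-vsobuildagent-vm/azuredeploy.json `
    -vmVisualStudioVersion VS-2015-Enterprise-AzureSDK-2.7-WS2012R2 `
    -StorageName companybuildserverstorage `
    -deployLocation "North Europe" `
    -vmName CompanyBuildServer01 `
    -vmAdminUserName admin `
    -vmAdminPassword (Read-Host -AsSecureString "VM admin password") `
    -vmIPPublicDnsName companybuildserver01 `
    -vsoAccount company `
    -vsoUser builduser `
    -vsoPass (Read-Host -AsSecureString "VSO alternate credentials password") `
    -poolName Default
```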

Tip: use the following to clear down the resource group if you need to run the deployment above more than once.

# Remove every resource inside the BuildServers resource group
Get-AzureResourceGroup BuildServers | ForEach-Object { Remove-AzureResource -ResourceId $_.ResourceId -Force }

3.2). An alternative to 3.1 above is to hit the button below, which will take you into the Azure portal to deploy the ARM template manually. I would recommend 3.1 though; getting used to PowerShell, if you are not already, will make your life on Azure much easier.

3.3). You can now go back over to the Agent Pool admin page and expand the agent pool you created (or the default pool if you did not). If the script was successful you should see your new build server under the agent pool. If you don't, make sure you are part of the Team Foundation Server administrators group and try again.

Deploying a XAML Build Agent

Unfortunately, in order for us to use Release Manager online as of Aug/Sep 2015, our builds must be XAML based, and as such we can't fully switch over to the nice new "Team Build vNext" system. We must therefore configure the VM we created in step 3 with a XAML build agent.

4). First, remote desktop (RDP) onto the build server you just created and download TFS 2015 from MSDN here.

4.1). Now run the TFS 2015 setup and when prompted select “Configure XAML Build Service” and select “Start Wizard”.

image

4.2). Select your feedback participation options on the next screen and click Next.

image

4.3). You will now be asked to configure which TFS instance to connect to (VSO in our case). Select Browse, then Servers, then Add, and enter your VSO address in the format https://YOURID.visualstudio.com

image

4.4). Click OK and you should be asked to sign into your live account; use an ID with admin rights and click Login. If you are successful you should see the screen below.

image

4.5). Click on Close and then “Connect”.

4.6). You will now be taken back to the build service configuration screen; click Next.

image

4.7). On the next screen you will be prompted to select the number of build agents you wish to place on this server. TFS best practice says one agent per CPU core available, but since this is Azure and we can scale when we need to, override the recommendation, select "4" and click Next.

image

4.8). You will now be prompted to select the account to run the agents as. You may wish to set up a custom user in your AD, but using the built-in "Local Service" account should suffice in most cases. Select this and click Next.

image

4.9). Click Next again on the "Review" screen and the installer will run some checks. If all is successful you will be shown the screen below; click "Configure".

image

4.10). Everything being OK, you should now be presented with the screen below. You can now exit the wizard; the XAML build server should now be visible to you when creating XAML build templates through Visual Studio.

image

Summary:

In this post we have set up a VM in Azure, set up a vNext build controller on that VM, then set up a XAML build agent on the same VM, and made them all available inside VSO to accept builds. In the next post I will show how we set up the build templates for the project.