**UPDATE** – InstaPage has been updated since this was written; however, I have personally kept the setup below.

Okay, hands up: as a software developer I hate the idea of these codeless website builders. They produce unoptimised, SEO-unfriendly, slow and buggy experiences, often with a dire standard of code and a ridiculous ongoing monthly cost.

Coding a website is not monetarily costly for me; I have the relevant skills to code, design, optimise and secure a website myself. But while getting a website up and running may not hit my physical wallet, doing anything properly takes an extraordinary amount of time; something I have been trying to optimise in my life lately to find more time for projects. When you assign a cost to your time it really helps cut out the unnecessary parts (Twitter, Facebook and the like).

I left my inner techy shivering asking ‘what the hell are you thinking about’ and signed up for a website builder

The abbreviated background story

Someone I consider rather successful suggested to me that if I too wanted to build products and get traction, I had to be able to verify that ideas were going to be successful before committing time and money to them. Aka fail fast. So I left my inner techy shivering, asking ‘what the hell are you thinking about’, and signed up for a website builder, or more particularly a landing page builder. After some very light research, I chose InstaPage. I will perhaps go into some detail in another blog about the product I built (https://www.motremind.co.uk), but basically the landing page just had to collect an email address and a couple of other little pieces of data.

I signed up for the ~$33 month-to-month plan, had the site built in one night and chucked some ads on AdWords to see the reaction. Surprisingly, I did get some signups, so I spent the next couple of months building out the backend of the system, leaving the front end there collecting emails. When the backend and reminder system was finally built, I set up Twitter and Facebook accounts and prepared to announce to the world.

Before I did, I popped a support call into InstaPage, as per their help centre article, to enable SSL on my landing page. For those not in the know, $33 per month for hosting a trivial website is rather expensive, hence I had assumed after reading the help centre that of course SSL would be included in my plan and the support team would sort it out while I slept. In my haste to get officially going I announced my new product to my friends and invited them to like/follow the pages on Twitter and Facebook. Big lesson learned: I should have waited a day for the big reveal.

What, no SSL? I am paying for this, right?

I woke up the next morning to a reply to my support ticket telling me that SSL was a “Premium” feature (BTW, no mention of this on the pricing page, or I would have run for the hills months before). I jumped over to check out how many extra dollars they were trying to squeeze me for; my jaw hit the floor and rage built inside! A minimum of $149 month to month ($127 if you want to give them the money upfront for a year). So $116 for SSL and a bunch of other features I would never use. Are they having a laugh? With projects like Let’s Encrypt offering free SSL certs these days, my inner techy is utterly appalled. These types of website builders are aimed at non-tech people not in the know; furthermore, landing pages by their very nature are usually collecting some form of personal details – in my case only an email address… but still. I was taking some rather deserved flak from tech friends on Facebook by now for the lack of SSL.

What would I do if I developed the site from scratch?

Setting up SSL on InstaPage for Free while on a Basic plan

Well, one thing was for sure: I was not stumping up that extra $116 each month; my inner geek was livid by this point. I decided it was time to let the inner geek come back out and find a solution. What would geek me do if I was developing the site from scratch myself? Well, with my security and performance hat on, no professional external-facing website I build is without Cloudflare unless there is a very good and specific reason. Usually I advise clients to stump up the cash and pay, but I was being cheap here – validating an idea. I jumped across to Cloudflare and checked out the features available on their free plan – more than adequate, I thought. So here is how to do it:

Step 1: – Sign up for a Cloudflare account

Step 2: – Go to https://www.cloudflare.com/a/add-site and enter the URL of your InstaPage account.

Step 3: – Let Cloudflare work its magic to figure out the required settings to keep your site online (DNS for us geeks).

Step 4: – Log into your domain registrar’s portal (the people you purchased the domain name from) and update the “Name servers” to point them at Cloudflare.

Step 5: – Get a cup of tea and wait 10 min (24 hours max) and your domain should now be getting served by Cloudflare.


Step 6: – SEO optimise the SSL end point.

Technically the http and https versions of your site are seen as two separate websites by Google, which can adversely affect your page rank, as well as meaning people can still access the non-secure version. Hence we want to redirect the http traffic to https. Our “Basic” InstaPage account does not give us access to the code, so traditional redirects are not going to work here. Cloudflare to the rescue again! Head back over to your Cloudflare account and select “Page Rules”. Set a rule to “Always use SSL” for “http://yourdomainname.com/*” (note the * at the end).


Step 7: – While we are at it, we might as well try to optimise one of the other downfalls of website builders: page speed. Google offers a free PageSpeed checker over here. You will likely find, like I did, that the page speed of your landing page is terrible, which is again bad for SEO. Hence add another Page Rule in Cloudflare to “Cache Everything” for “*.yourdomainname.com/*” (again, note the *).

Step 8: – Re-run the PageSpeed test and you will likely see a small improvement. Unfortunately we will never make large gains without access to the code, but a gain is a gain.
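For the curious, the two page rules from steps 6 and 7 can also be created through Cloudflare’s v4 API rather than the dashboard. The below is only a sketch: the zone ID, email and API key are placeholders, and you should double check the payload shapes against the current Page Rules API docs before relying on them.

```shell
# Placeholder zone ID and credentials - substitute your own
ZONE_ID="your-zone-id"
AUTH=(-H "X-Auth-Email: you@example.com" -H "X-Auth-Key: your-api-key" -H "Content-Type: application/json")

# Rule 1 (step 6): force all http traffic over to https
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/pagerules" "${AUTH[@]}" \
  --data '{"targets":[{"target":"url","constraint":{"operator":"matches","value":"http://yourdomainname.com/*"}}],"actions":[{"id":"always_use_https"}],"status":"active"}'

# Rule 2 (step 7): cache everything to improve page speed
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/pagerules" "${AUTH[@]}" \
  --data '{"targets":[{"target":"url","constraint":{"operator":"matches","value":"*.yourdomainname.com/*"}}],"actions":[{"id":"cache_level","value":"cache_everything"}],"status":"active"}'
```

Scripting the rules this way also makes it trivial to reapply them if you ever move the domain to a fresh Cloudflare zone.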

Any questions?

Any questions please do get in touch in the comments.

Do dheagh shlàinte


Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.

In part 1 of this series I discussed the general Azure account and subscription structure that we settled on. In this post I will go into a little more detail on how we achieved this, what automation we felt was critical to success, and how we used VSO to accomplish our end-to-end Azure/VSO pipeline.



After the above structure was set up in the Enterprise Portal, the next biggest thing to look at is identity.

User Identity:

Most enterprise users will already have an Active Directory on premise, with a management policy already in place as to how users/computers are managed by a given team. Microsoft provides Azure AD Connect, a service that will constantly sync changes between your on-premise AD and your AD in Azure. Do not underestimate how long it will take to set this up; there are many prerequisites that you will have to work through with your on-premise AD before you are able to sync it with Azure. I really would not recommend progressing any further with Azure until you have this, or an equivalent strategy for user and administrator access, set up; we did, and the later migration is going to be difficult.

Access Control:

The client was keen to embrace a less restrictive access control policy on developers than is perhaps ‘normal’, in order to achieve a low-maintenance, highly flexible environment, but with tight control around key environments for consistency. It was decided that all developers would have full read-write control over their own program’s dev subscription (i.e. all projects within the program they belong to), with full read permissions to the QA subscription and no rights to staging or production. To ensure consistency we developed a set of custom PowerShell scripts to hand over to the team in charge of account admin; this would ensure the naming of groups was correct and reduce the chance of accidental escalation of privilege to other environments.
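Our actual scripts belong to the client, but the following sketch shows the general shape of what we handed over. Cmdlet names are from the 2015-era Azure PowerShell module in AzureResourceManager mode, and the group naming convention, subscription IDs and role choices here are purely illustrative assumptions.

```powershell
# Sketch: grant a program's developer group Contributor on its dev
# subscription and Reader on QA (no assignment at all for stage/prod).
param(
    [string]$ProgramName       = "ProgramA",
    [string]$DevSubscriptionId = "00000000-0000-0000-0000-000000000000",
    [string]$QaSubscriptionId  = "11111111-1111-1111-1111-111111111111"
)

Switch-AzureMode AzureResourceManager

# A consistent naming convention reduces the chance of assigning
# rights to the wrong group by accident
$devGroup = Get-AzureADGroup -SearchString "AZ-$ProgramName-Developers"

# Read-write on the program's own dev subscription
New-AzureRoleAssignment -ObjectId $devGroup.Id `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/$DevSubscriptionId"

# Read-only on QA
New-AzureRoleAssignment -ObjectId $devGroup.Id `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/$QaSubscriptionId"
```

Keeping the scope and role names as parameters of a central script is what stops an admin typo silently escalating a team into staging or production.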

Tear down and re-provision:

For a ‘DevOps’ pipeline to work efficiently, and for all key stakeholders to have confidence in it, it is vital that everyone is encouraged / coerced into scripting every last detail of a project. To encourage this we set up Azure Automation jobs that would tear down the resource groups within the development subscriptions every night (with a few minor exclusions for VMs hooked up to Release Manager; we will come back to this later). This meant that we as developers had to ensure we added every last setting to the Azure Resource Manager (ARM) templates that described a given project.
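A minimal sketch of such a nightly tear-down runbook is below. It assumes the 2015-era Azure Automation setup (PowerShell workflow runbooks, a credential asset for authentication, and the AzureResourceManager cmdlet mode); the credential name, subscription name and exclusion list are illustrative only.

```powershell
# Sketch: nightly runbook that deletes every resource group in the dev
# subscription except an exclusion list (e.g. the Release Manager bounce VM).
workflow Clear-DevResourceGroups
{
    # Authenticate with a credential asset stored in the Automation account
    $cred = Get-AutomationPSCredential -Name "TearDownAccount"
    Add-AzureAccount -Credential $cred

    Select-AzureSubscription -SubscriptionName "Development"
    Switch-AzureMode AzureResourceManager

    # Resource groups that must survive the night
    $exclusions = @("BuildBounce")

    foreach ($group in Get-AzureResourceGroup)
    {
        if ($exclusions -notcontains $group.ResourceGroupName)
        {
            # -Force suppresses the confirmation prompt
            Remove-AzureResourceGroup -Name $group.ResourceGroupName -Force
        }
    }
}
```

The scheduled job runs after hours, so by morning anything a developer forgot to capture in their ARM template is simply gone – a brutal but very effective teacher.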

In production a tear-down-and-re-provision strategy is obviously not always practical, so we also wanted to test an inline upgrade. To do this, our QA deployment would delete the given resource group, retrieve the ARM template used in the current live environment and apply it to QA, before applying the new QA ARM script of our new deployment, which would inline-upgrade the environment. This requires a high level of co-ordination and does have some downfalls (i.e. depending on the changes between different versions of your ARM script, you may be unable to do a live deployment and have to arrange downtime for release).


Azure subscription location:

As with on premise, your VSO and build server access control needs to be extremely tight (having permissions to check in changes, release new builds and administer VSO or Azure subscriptions is as good as having admin access to any PC/server/subscription in any environment that you are deploying to). We hence decided to separate VSO into its own Azure subscription named “Dev Tools”, where we would also later place our build VMs. This isn’t entirely necessary, especially now with RBAC in Azure, but it feels like a good separation of concerns where access control can be kept tight and interlinked resources between subscriptions can be kept to a minimum (i.e. Release Manager has to be able to ‘see’ all subscriptions in order to release the software).


At the current time of writing, VSO can only support one team collection per VSO account; unfortunately this means you either opt for multiple VSO accounts or flatten your structure and utilize team projects. Neither option is ideal, and it is good to hear that Microsoft are working to bring this feature to VSO, but after some trial and error with both approaches we settled on one VSO account with a flatter team project structure. This does mean we lost one level of abstraction, but on the whole this has not been a problem for the client. The client’s on-premise TFS 2013 only had a handful of collections anyway (one for each ‘Program of works’), so moving everything up one level and applying security at the team project level has mostly been okay. When/if Microsoft bring multiple-collection support into VSO, the migration path should be much more straightforward than it would have been from multiple VSO accounts.

Branch structure:

This is one area with no change between Azure and on-premise. Following the best practice set out in the book ‘Professional Team Foundation Server 2013’, we implemented a streamlined branch structure. Whatever branching structure you adopt will be very dependent on your team and the business around you; have a read of the different options set out in the book above and see what is best for you.

Build & Release:

VSO introduced ‘Build vNext’ in May 2015 and as far as possible we planned to utilize the many benefits this offered (no XAML build scripts being the main one). However, VSO has not yet released ‘Release vNext’, and until such time the current Release Manager 2015 for VSO has a dependency on XAML builds.

Microsoft has been clear that Build vNext is the future, and we were keen to establish a template for later migration, so we planned to utilize Build vNext for our CI builds while having the XAML build (hooked into the Release Manager 2015 pipeline) operate as a “rolling build” after a given number of successful check-ins to perform our Continuous Delivery (CD). Please see part 5 of this series for further detail on the setup.


Hopefully this has given an insight into how we set up Azure and VSO for this client. I intend to dive a little deeper technically in the next set of posts, so if you have not already done so, now is a good time to sign up for Azure.



Part 1 – General Overview & Account Structure.

Over the past few months I have been working, in conjunction with James Kewin, consulting at a financial services client with the ambition to host a new greenfield project on Azure. Added to this, the client wished to produce a template for migrating their existing infrastructure at a later date. In line with this goal, James and I have spent several months on R&D, moving our enterprise client to the cloud from a complete end-to-end Application Lifecycle Management point of view, with the latest best practice throughout the pipeline.

This blog series is a combination of the recommendations we implemented, which address enterprise issues with a move online, and a step-by-step tutorial on how we set up Azure, Visual Studio Online and Release Manager, and how we implemented many DevOps techniques such as continuous delivery, environment tear-down and re-provision, and the obsession with automation that is required in this new “cloud” world.

Tip: Azure is constantly subject to change; what I write here is likely not to be valid in 6 months. Always double check current limitations and advancements. This series was first written in Jun–Sep 2015. It is also important to note that the correct setup for one organisation may not be correct for all; please get in touch with any comments or questions.

Azure Subscriptions & the Enterprise portal


Most of you will be aware of the Azure portal above, which seems great, but how do you go about structuring this in the enterprise world of dev, test, stage, demo and production environments, with the associated security lockdown and audit requirements, across 100s if not 1000s of applications? Luckily, if you have those kinds of requirements you probably have a Microsoft Enterprise Agreement, and as part of this Microsoft provides the Azure Enterprise Portal, which gives us some functionality to manage, report on and plan our subscription structures. The Enterprise Portal adds another two layers on top of a standard subscription: the first is an “Account”, which has a single administrator with full control over multiple subscriptions, and the top-most level is a “Department”, which is a collection of multiple Accounts where we can set budgets for Azure usage.

Design principles

In designing our department, account and subscription structures there were several design principles we wanted to keep in mind. These were:

  • Maintainability – 200+ subscriptions (one for every application) is impractical to maintain given the subscription setup overheads.
  • Growth – there should be room to grow.
  • Flexibility – any decision now should be easy to change going forward.
  • Security – environments & customer data should be segregated, with access control applied appropriately.
  • Azure limits – be conscious of hard Azure subscription limits (500 resource groups in a subscription etc.), as well as resource limitations and scale limits.
  • Costs – hard “cut out” department limits to prevent accidental budget blowout.
  • Reporting – it should be easy for management to retrieve cost reporting at all levels.

Constraints of Azure Department, Account and Subscription structures

It is all very well having a set of design principles, but we had to be aware of the current constraints and considerations associated with how we structure the Azure Enterprise Portal in relation to these principles. At the current time of writing (Aug 2015) these are:

  • Cost caps. Unlike an Azure public subscription, a subscription tied to an Enterprise Agreement only has hard cut-out limits for your budget at the department level. All control, and visibility, of costs is at the department level only – something your dev team is not likely to have access to. It is all too easy for a junior dev to create several whopping VMs that cost £1,000s per month each if you let them.

  • Subscriptions are difficult to move between accounts, but accounts are easy to move between departments. There is no way of moving a subscription between accounts yourself, and although you can contact Microsoft support and have them move your subscription to another account for you, my personal experience in doing this has not been pleasant (hint: it involved a great deal of downtime). As such, it is easier to assume that once a subscription is attached to an account it is fixed there until such time as Microsoft add the functionality to the Azure portal, as they have for moving an account to a different department with zero effort and zero downtime. **UPDATE** MS have now added this ability to the Enterprise Portal (Oct 2015)**

  • Subscription security. Azure is very much still in “migration” mode from the old https://manage.windowsazure.com to the new https://portal.azure.com/. While the new portal provides Role-Based Access Control (RBAC) to manage who has access to see, modify and create which asset, server, VM etc. (and in turn the data associated with those assets), not all asset types are available in the new portal as yet; hence you are forced to provide your team with a greater level of access to a subscription than is perhaps desired. Notable exceptions from the new portal are Service Bus and Azure Active Directory. Locking down who has access to a production asset vs a dev, test, stage or demo asset within one subscription is therefore currently not possible, so our environments will have to reside in multiple subscriptions for audit and security requirements, especially in financial services.

How did we implement the enterprise structure in an Azure Department/account/subscription structure?

It is important to note here that there are many ways to map your Enterprise structure into an Azure one. This is something you really want to get right from the outset, so take those extra few weeks and speak to all interested parties across the organization.

To understand why we implemented the structure below it is probably best I give some context. Our client is not too dissimilar to many medium/large enterprise organizations in that they have multiple “programs of works” running simultaneously, each with multiple teams/projects, each of which has its own Dev, QA, BA and Architecture teams. We wanted to find something that was going to be suitable for everyone while still allowing each “program” a degree of flexibility in how they operate.

At the current time our client also has a clear line in the sand between development and operations, anything client/public facing is the responsibility of a dedicated Operations support team. We modelled the account and subscription structure for stage/demo and production environments accordingly.



Why did we implement it this way?

Rationale for the recommendations:


  • A way to maintain hard account limits/cut-outs to prevent budget overrun.
  • Keep departments to a minimum to reduce maintenance.
  • Departments are easy to create, and move accounts between them, should the need arise later.
  • Don’t optimise prematurely, but still leave room for flexibility. Two departments seem to fill budget cap requirements at present, if one program blows the budget for the full department it will be easy to split at a later date.


  • Accounts split by department, to make it possible to move accounts to their own departments should it be required later.
  • Accounts should be controlled by a central “Core” team – the suggestion is IT Infrastructure / IT Helpdesk – as maintenance plans need to be set up per subscription (as do build server and VSO rights).


  • Due to maintenance setup requirements and the manageability overhead, we wanted to keep the overall number of subscriptions small.
  • Unlike account and department structures (which are virtual to users), switching and navigating subscriptions is a “physical” workflow switch, which means closing one subscription to get into the other. It is hence important that any developer should be able to navigate the subscription structure easily, with no confusion as to which applications and app environments live in which subscription. Hence, we recommended that subscriptions be kept at a department level with one subscription per environment, giving most developers at most 2–3 subscriptions to filter between.
  • “New” asset types can be fully managed at resource or resource group level by Role based access control. “Old” assets such as service bus can only be managed by named person access. To make administration and lock down of QA easier, QA and Dev have been separated into their own subscription.
  • Subscription creation would be handled via a central team to ensure both naming and security consistency.

Resource Groups

  • Resource groups can be used as a security ring-fence for each client. For data protection, and to facilitate easy deletion at a client’s request, no data will cross resource group boundaries.
  • Due to security setup and for naming consistency, resource groups should be provisioned (by script) by a central team (IT Infrastructure). The resources would then be deployed by Release Manager (by script) by the individual teams.
  • All assets for a given deployed customer will be contained in their own resource group for isolation purposes.
  • A “QA on Demand” environment would be temporarily spun up to test feature differences between clients on each build in another resource group not attached to the main pipeline.

The Filing Cabinet Analogy for the non-tech types

All of the above can be a little heavy for tech types, far less non-tech types, when trying to establish requirements. Jamie came up with an excellent analogy to explain this to upper management and why they needed to care.

The Paper (our projects):

Traditionally developers cared about developing the software, with little thought given to the underlying hardware it would be hosted on – that was a problem for architects and the infrastructure team. In our new tear-down/up DevOps cloud (insert buzzword) world, developers now specify the infrastructure setup and configuration as yet another project in their solution. Azure Resource Manager (ARM) templates are the next evolution of PowerShell DSC, with a particular focus on Azure. ARM templates allow developers to describe their desired Azure configuration and state in a JSON description of all the assets in their solution, further enabling the constantly repeatable tear-down-and-re-provision nature of DevOps.
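To give a flavour of what that JSON description looks like, here is a minimal ARM template sketch declaring a single storage account. The parameter name, API version and account type are illustrative; a real project template would declare every asset in the solution the same way.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2015-06-15",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}
```

Because the template is just another file in source control, the same description that builds the dev environment overnight is the one that eventually builds production.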


The Folders (Resource Groups):

Azure now provides a logical grouping container called “Resource Groups” into which we can group our infrastructure assets and deployed content (i.e. different projects, clients etc). These allow you to manage, via RBAC (role-based access control), which technical staff have access to administer given assets, with a couple of exceptions at the current time of writing. Resource groups are the perfect “folder”, grouping items together with associated metadata; for example we can “tag” a resource group with any given string we like, which will appear on the billing breakdown the finance department receives.
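As a sketch of the tagging idea, the snippet below creates a resource group carrying a cross-billing tag. It uses the 2015-era `-Tag` hashtable syntax from the AzureResourceManager cmdlet mode; the group name, location and cost-centre value are made up for illustration.

```powershell
# Sketch: a tagged resource group whose tag surfaces on the EA billing breakdown
Switch-AzureMode AzureResourceManager

New-AzureResourceGroup -Name "ClientA-Prod" -Location "North Europe" `
    -Tag @{ Name = "CostCentre"; Value = "FIN-1234" }
```

Because the tag flows through to the invoice detail, finance can attribute spend per client or team without ever touching the portal.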


The Drawer (A subscription):

Azure has a soft limit of 800 resource groups (or folders) per subscription (soft in that a call to MS support can have this increased). Our client has multiple products/projects under development within a given program, so each subscription will contain multiple resource groups. We likened the subscription to the drawer of a filing cabinet.


Note: In the diagram above we mention release manager 2015 and the concept of a bounce VM. This is to deal with specific limitations that currently exist in Release Manager at the time of writing (Aug/Sep 2015), we cover this later in part 6 of this series.

The Cabinet (The account containing multiple subscriptions):

Referring back to the Enterprise Portal, multiple subscriptions are managed by 1 account owner. For example our Dev and QA subscriptions are managed by 1 account per program.


Multiple Cabinets (The Department):

The top-most level of the Azure Enterprise Portal is the department, which is a group of one or more accounts. It is at this level that budget caps are maintained. These caps, when breached, will knock out everything under that department from being usable. For our requirements it was agreed that it was only necessary to have two caps: one for non-prod, and one for production with a very large cap (no cap is a really bad idea – I’ve heard first-hand stories of organisations racking up £10,000s bills in several days).

Cost caps do not affect the reporting of cost within that department; using “tags” we can specify, down to the asset level, who should be cross-billed for the usage of a given deployed resource. This was a key factor in negotiating this simplified structure within our client’s organisation.


See any gaps in our thinking?

Hopefully the above has explained the general high-level principles we considered in setting up the general Azure structure. The rest of this series will be more “tutorial” in style, covering step by step exactly how we achieved this.

We would love any feedback you have on any of this series; please feel free to use the comments below… I will respond.



Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

As described in part 5, we will be using Release Manager for our full pipeline. At the present time VSO does not have a web interface for us to administer Release Manager (although a new web interface is in the pipeline, as announced at Build 2015 – skip to 28 min in for a demo); however, the good news is that we can currently do this via the desktop client tools that are used for on-premise deployments. If you do not have it already, download and install Release Management 2015 from MSDN here (note: an MSDN Enterprise subscription is required for this download). Most of what I describe below is covered in this Channel 9 video over here.

1). Once installed, open the Release Management 2015 client tools from your desktop and you should be presented with the below screen. Enter your VSO URL (e.g. https://YourCompany.visualstudio.com) and click OK.


2). You will now be prompted to sign into the Live/work/school account associated with your VSO subscription. Note: you must sign in for the first time as the Live account that is declared as the VSO account owner; after that you can go to Administration –> Manage Users –> New and add any additional users you wish.

Before we go any further with the Release Manager setup, we first need to address some limitations of the current Release Manager as of Aug 2015. Unfortunately we are unable to deploy directly to App Service environments from Release Manager, but there is a workaround. Release Manager currently allows us to deploy to virtual machines via cloud services just fine, hence we are going to use virtual machines (of which we will require one in every subscription we are deploying into) to bounce our build to the appropriate App Service (or indeed any Azure asset) contained in that subscription. As we are likely to be doing the VM setup in more than one subscription it is worth scripting this; luckily I have gone through this pain already, so please find a couple of PowerShell functions that will help you do this over here.

Tip: Remember to execute the below PowerShell commands before executing the setup script.

Tip 2: Remember to exclude this new resource group from your clear down scripts.

# Select the subscription that will host the bounce VM and switch to ARM mode
Select-AzureSubscription -SubscriptionName "Developer Tools"
Switch-AzureMode AzureResourceManager

# Create a resource group for the bounce VM (remember to exclude it from the nightly tear-down)
New-AzureResourceGroup -Name BuildBounce -Location "North Europe" -Force

# Provision the bounce VM using the Create-BuildBounceVM helper function linked above
Create-BuildBounceVM -AzureSubscriptionName "Development" -VMName "bldbounce" -Location "North Europe" -StorageAccountName "buildbouncedev" -VMSize "ExtraSmall" -OSName "Windows Server 2012 R2 Datacenter" -VMUserName "Ouradmin" -VMPassword "MyPasswordIs-01" -ResourceGroupName "BuildBounce"

3). The script above will take 10 minutes or so to run. While we are waiting we can gather the information for the next step. To do this we will need to download the publish settings file from https://manage.windowsazure.com/publishsettings, which contains our subscription details. Open this file with Notepad once downloaded.

Note: this file does contain sensitive data, you will want to delete this as soon as you have retrieved the data we require.

4). We now want to give Release Manager access to all the Azure subscriptions we intend to deploy into. To do this, go to the Administration tab –> Manage Azure, then click on the New button. Using the information from the publish settings file we downloaded in step 3, add as many Azure environments as you require. You can reuse the storage account that you created in step 2; make sure step 2 is complete before clicking Save.


6). Now we have our subscriptions added, we can set up our “Stage Types” in Release Manager. Go to Administration –> Manage Pick Lists –> Stage Type and click the Add button. Stage types are the stages your release will go through on its way to prod; this is usually a one-to-one mapping to “environments”, but not always. Add all that you need; I have added Dev, QA, UAT-Stage, Demo and Prod.


7). We can now set up our “environments”, i.e. the physical stages a release moves through. You will need at least one for each Azure subscription you wish to move your release through. Go to Configure Paths –> and click on New vNext: Azure.


8). The new environment template will appear, now select the “Link Azure Environment” button at the top right.


9). Under Azure environments, select the appropriate subscription and you should see the VM/cloud service we created earlier. If you don’t, please double check that you created “Classic” VMs and not the current ARM style; at this point in time Release Manager only supports classic VMs.


10). Select the name of the VM you created earlier and select link.

11). You will now be taken back to the New vNext:Azure template window. Now click “Link Azure Servers” and select the full VM name (eg. vmbounce.cloudapp.net…) before clicking the “link” button.


12). You can now save your template. If you wish you can restrict this environment to a particular stage by selecting the “Stage Type Security”, this is optional.

14). We now need to set up the vNext Release Path. Still under “Configure Paths”, select the vNext Release Paths tab and select New.


15). Click Add, select your stage (e.g. Dev), the environment we saved in step 12, and any approvers you require for this stage. For simplicity I have set these to Automated. Now click Save and Close.


16). Now select the Configure Apps tab (top right), then the Components tab (top left). Now select “New vNext”. In the Source tab, under “Builds with Application”, add a single “\” to the build drop location. Give your component a name and click Save and Close.


17). Before we go any further we need to create a XAML build definition (Release Manager at this stage only supports XAML builds). To do this, open Visual Studio, navigate to Team Explorer, click Builds, and click New XAML Build. Configure your trigger as desired, scope your “Source Settings” to the lowest possible tree level you can, and in the Build Defaults tab select the classic build controller we set up in part 4 of this blog series. Now select the Process tab and enter the below as additional arguments to MSBuild. This makes sure MSBuild and Release Manager can interop together.

/p:DeployOnBuild=True /p:AutoParameterizationWebConfigConnectionStrings=False
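If you want to sanity-check these switches locally before wiring up the XAML build, you can pass the same properties to MSBuild from a developer command prompt (the solution name below is a placeholder):

```
msbuild MySolution.sln /p:Configuration=Release ^
    /p:DeployOnBuild=True ^
    /p:AutoParameterizationWebConfigConnectionStrings=False
```

DeployOnBuild runs the web publishing pipeline as part of the build, so the drop contains the published site output rather than just raw binaries. Setting AutoParameterizationWebConfigConnectionStrings to False stops Web Deploy replacing your connection strings with SetParameters tokens, which would otherwise interfere with config handling in the later stages of the pipeline.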


18). Now back over to Release Manager: select Configure Apps, then select the “vNext Release Templates” sub-tab at the top left. Click New, give your pipeline a name, select the release path we created in step 14, and check the box “Can Trigger a Release from a Build”. You will also want to select the build definition: click the Edit button and select the team project and build definition. Note: if the build definition list is empty, you will first need to create a XAML build as per step 17.


19). Now right-click Components in the toolbox and click Add, then select the component we created in step 16 and click the “Link” button.


20). Now drag the “Deploy Using PS/DSC” action onto your template and double-click on it. Select your bounce server name, and enter the username and password you created when creating the VM with the script in step 2 above. Select the component name we created in step 16, set the PSScriptPath to “Configuration\publish.ps1”, then scroll down the component and set “SkipCaCheck” to “True”.
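What publish.ps1 contains depends entirely on your application, but as a rough sketch of the shape of such a script (the website name, paths and the $applicationPath variable — which Release Manager populates with the local copy of the component’s build drop — are all assumptions here, not the script from this series), a minimal deployment might just copy the published site into IIS:

```powershell
# Illustrative publish.ps1 sketch – adjust names and paths to your app.
# $applicationPath is set by Release Manager to the folder containing
# the component's build drop on the target machine.

$source      = Join-Path $applicationPath "_PublishedWebsites\MyWebApp"
$destination = "C:\inetpub\wwwroot\MyWebApp"

# Copy the published site over the top of the existing deployment.
Copy-Item -Path "$source\*" -Destination $destination -Recurse -Force
```

In a real pipeline you would typically add steps here to stop/start the site, apply environment-specific config, and so on.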


21). You should now have a pipeline to push a build into a given environment. To provision other environments, simply repeat steps 2–20 above.


In this post we have completed the setup of an end-to-end development pipeline in Azure. You should now be able to check into VSO and have your project built, tested and deployed into an Azure environment.


Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 – Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 5 – Setting up Continuous delivery (CD) on CI Development builds.

Now that we have a build server (see part 4), we can set up our CI build and our continuous deployment build to the development server. We have several options for how to do this:

- Use Azure Websites’ built-in “Continuous Delivery” from VSO. This is great for simple deployments, but as we start adding more complex features to our enterprise apps this is unlikely to suffice going forward.

- Use the VSO Resource Group deployment functionality from the vNext CI build. Unfortunately this is not an option available to all of us (see Abrish's comments on this MSDN article), hence until this is fully released we need something else.

- Use vNext for CI, and use a XAML build and Release Manager for CD on a rolling-build basis. We will be using Release Manager throughout the rest of our pipeline to production; although not ideal due to the performance overhead, especially for CD, this is the only real option available.

We chose to implement the last option, as we felt it offered us a gentle way into the new build system while still giving us full compatibility with the Release Manager functionality we required. To do this we first need to set up a vNext CI build from VSO:

1). First open up the VSO web interface and navigate to the “Build” tab. Click the green plus button to create a new build definition. Choose Visual Studio and click OK.


2). You should now see the New Build definition page. As this is for CI purposes only, we can delete the “Index sources…” and “Publish Build Artifacts” steps from the default template.



3). In the “Visual Studio Build” step, select the “…” button next to Solution, navigate to your Dev branch and select the .sln (solution) file. I also prefer my builds to be clean every time, so I enabled the “Clean” checkbox.


4). Now select the Visual Studio Test step and expand the Advanced options. To better encourage best practice, check the “Code Coverage Enabled” checkbox. If your unit test projects are in the same solution as your other projects, you should not need to change the default “Test Assembly” location.


5). Now click on the “Repository” tab and change the cloak location for your build folder. In my case this is a folder inside the dev branch.


6). Now select the Triggers tab and enable “Continuous Integration (CI)”.


7). Now select the “General” tab and change the Default Queue to the Build Server group we set up in blog post part 4.


8). We are now good to go: click the Save button, give it a name, and check in some changes to test. If you hit any issues, please make sure you do not have NuGet packages checked in with your solution and that the “Restore NuGet Packages” checkbox is enabled on the Visual Studio Build step.
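One common way to keep restored NuGet packages out of TFVC source control is a .tfignore file at the root of your branch; a minimal example (the folder layout is an assumption — adjust it to where your packages folder actually sits) looks like this:

```
# .tfignore – exclude restored NuGet packages from version control
\packages
# but keep repositories.config if your solution still uses one
!\packages\repositories.config
```

With the packages folder ignored and “Restore NuGet Packages” enabled on the build step, the build server pulls packages fresh on every build instead of relying on checked-in binaries.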



We have now set up the CI build for our project, but we still need a way of continuously deploying it to the given Azure environment. In the next article I will discuss how this can currently be achieved with Release Manager 2015 on VSO while we wait for Release vNext to be released by Microsoft.