January 2016 feels like a milestone for me; my rookie year in the contract/freelance world is over, yet my 10th year in IT has just begun. Here is my journey so far, and why I could not be more glad that I stumbled into a career that is different every day - Software Engineering.


Those who know me well will know my first love was, and is, Music; it was my best subject at school by far. In 2004 I had 2 years left at school and I really had to start focusing my subject choices on ones that would allow me to get to University. I did not have a clue if I wanted to go to University, far less what I wanted to do there, so I decided I needed some work experience over the Easter break to ease that decision. Unfortunately there was some bureaucratic block that year on the school arranging any work experience activities (the old Health and Safety police were all over it), so I got out there and found some myself. A career in Music was the logical place to start, so I wrote to every recording studio I could find in Edinburgh offering my free labour over the Easter break. Fortunately a music shop-cum-talent-agency-cum-instrument-rental-cum-recording-studio replied and I had a 2 week placement sorted out. I had a blast there and used the two weeks to question everyone in sight. Unfortunately I realised soon after this that any “career” in music was going to be difficult: there was no structured career path (apart from teaching, which I did not want to do), and opportunities were scarce if not totally non-existent. Worse still, most of those that were available did not pay well.

I always seemed to do well in subjects I enjoyed at school, and a close second for me was Computing. On further research I realised that jobs with computers were aplenty: there was a skills shortage in nearly every area and salaries were excellent. Fortunately for me the entry requirements for university to do computing or music are nearly identical (maths and/or physics, English, music/computing), so hedging my bets going into my last 2 years was a little easier knowing this… take the subjects that allowed me to do either and make the choice in two years' time.

Two years soon passed; it was 2006 and I was finishing up my last year at school with a conditional offer to do Computer Science at Heriot-Watt University in Edinburgh, with backup offers to do Music at Edinburgh. By May/June 2006 I had several months off before I would find out my final school grades and whether I had made the cut for university. I still did not have a clue at this stage what I wanted to do with computers, and again I needed some work experience to help with the decision. Some shameless begging paid off once more and I secured a job interview for a summer job within the IT Department of my local council.

Don’t be afraid to ask: speak with any contact you have, and if you don’t have a contact, write to everyone you can think of anyway - it worked for me. Getting experience before I had to was the best thing I ever did.

My first year in IT

It turns out I got the summer job for the short 6-8 weeks before I started university, helping with general admin activities (paper was king back in 2006 and lots of paper needed filing, but I was extremely grateful just to get my foot in the door to do anything). I was 16/17 at this point, the youngest person on the whole floor, if not in the building, of several hundred people… I was way out of my depth in a professional environment at this age. To say I was wet behind the ears was an understatement, but I have never been shy about asking questions, and by interrogating those around me I learned quickly. I think it was at some point during this time that I realised people skills are as important as, if not more important than, what you can do technically. With the right attitude you will always learn the technical aspects. Those few short weeks passed quickly and I knew by this stage I had been accepted into University and computing was most definitely the correct choice. Despite this, becoming a software engineer wasn't even on my radar at this stage.

That summer job turned into something else over the next few years; I went from helping with paperwork to doing 1st line support, to 2nd line support, to 3rd line, on into software development, and even automating a few processes and digitising some of those forms along the way (all while working a 2nd job in a shop some nights and at the weekend). I left “for good” more than once to go back to uni (big thanks for all the whisky, guys) only to end up back there between university semesters, and even working part-time during term time. I can’t thank Bob, Wendy, Jennifer, Ann and so many others enough for the chance they gave me in their teams; my experience over those 4 years leapfrogged my career far beyond just the technical skills I learned there. Being in a professional environment so young made me grow up fast, it gave me my thirst to know more, and having experienced the full breadth of IT still benefits me over others to this day.

Post University

After university a few uni mates and I tried to start a software company; mobile apps were all the rage in 2009/10 and we were buzzing with ideas. Once again those big ears of mine were a little wet, and we were all a little naive in estimating how much we each needed to earn to actually make a living from it, far less how much money it takes to actually launch a product. Some of our ideas from those days have since proven absolutely valid; others had the same ideas, and some of those companies are now part of the Silicon Valley Unicorn club (companies valued at over $1 billion).

After we gave up on this I ended up working for a music streaming company (think Spotify-like), and the fit could not have been better for me, with my 1st and 2nd favourite topics together at last. Yet again I was the youngest in the office, but to me this has always been a benefit. Being the underdog and feeling slightly out of your depth makes you pull your socks up, learn quickly, absorb everything from those around you and just get on with it. Never be afraid to ask for help from others, and don’t forget to return the favour when others ask you.

Two and a half years later I was ready for the next challenge; letting your technical or professional skills stagnate in this industry really is not an option, and fortunately there is no shortage of great jobs for software engineers. If you don’t believe me, take a look at the job boards yourself, or at this article a friend of mine posted today on the jobs companies will be hiring for in 4 years' time.

Next I took up a position with one of the UK’s largest automotive retailers on a multi-million-pound redevelopment and modernisation of their Point Of Sale system. I was joint first in the door, and the team grew and grew over the next 18 months. Eventually I fell into the position of leading a team of 12 extremely experienced guys, and yet again my learning curve was exponential – trying to coordinate/manage people who are decades older than you is not easy, and neither is dealing with half the cutting-edge technology that was being used. Due to the incredible pace of change in software, every day is a school day. Unlike in many other industries, career progression in software does not necessarily mean you have to go into management. It was for this reason that, 2 and a half years after joining, my time there had to come to an end; leading a team was great experience, but at present I still enjoy the technical aspects too much to give that up. I considered accepting a job offer from another organisation as a technical architect (basically the job I was already doing minus the day-to-day man management), but added to this it had been a few years since I had attempted any of my own ideas.

My first year Contracting – a little rocky but extremely satisfying

In Sept 2014 I verbally accepted a contract offer and was in the process of getting the required paperwork in place days before I left for a holiday to Australia. I received a call from the agency while I was in the departure lounge to say the client's project had been cancelled by their client, and the offer was being withdrawn. Fortunately I had not yet handed in my notice for my permanent job.

In Dec 2014 I accepted a 7 month contract/freelance position with one of the big financial services firms in Edinburgh. The road was a little bumpy again; on my first day on site I found out there had been some mix-up between the client and the agency I was contracting through… my 7 month contract was now 5 months. Despite my attempt to renegotiate terms the client was having none of it; I was the small fish and just had to take the hit. After this there were yet more problems with legal clauses in the contract that, again, I had to take the hit on. A little more water evaporated from behind my ears. Despite all this the contract actually went well; the work was not quite as interesting as I was used to, but to me contracting offered a new freedom.

My next contract was with another financial services company, much smaller than the first. They were extremely forward-thinking for an FS institution and extremely proud of the “culture” about their offices, so much so that I went through a total of 4 interviews to make sure I was the correct fit for the organisation. I was fortunate enough to work yet again with some incredibly talented people; getting new perspectives is always enlightening. The application the client was developing attempted to use the latest in cloud computing technology – an area of software moving at light speed just now, so much so that articles written only months prior were already out of date in most of the research I helped with. It was round about then that I asked myself whether anyone in this industry is an expert. The answer, in most situations in a cloud-first world, is no. An attitude to learning, embracing change, managing risk and not forgetting the product needs shipped are all more important qualities in a software engineer today.

Unfortunately, despite this contract being great both technically and culturally, the business application we were developing was scrapped. Only 1 month after negotiating a contract extension for another few months, I was told the project was ending in 4 weeks' time, a week before Christmas, and yet again hours before I was due to finish up for a scheduled 2 week holiday.

Contracting has many benefits:

- No “career” pressure making you feel obliged to do more than you're paid for. I am happy to say 35 hour weeks have now become my norm (goodbye to those occasional 70 hour weeks I had seen in the past).

- Taking holidays, albeit unpaid, whenever I like and for however long I like is a nice perk. The flexibility is great.

- The chance to run a Ltd company as a stepping stone, hopefully, into my own products. The shorter work weeks have given me the chance to work on my own ideas again.

- It is all proportionate to the risk, but financially contracting is far better than permanent work. Although money is not everything, this allowed me to take another programmer on for a few hours each week to help get my ideas into products.

Contracting is not for everyone, the risk associated with it has played on my mind several times in the last year. For anyone thinking about a leap into the contract market here is my advice:

- Make sure your personal outgoings are absolutely as low as they can be.

- Make sure you have both a personal and a business “buffer fund”: money you can forget you have, for situations like the ones I have described above.

- Get used to going for interviews; you will be doing a lot of them. Practice here is key: you should have rehearsed the answer to every possible question they may ask before you go in.

- Make time for dealing with recruiters: go for lunch, have a chat about the market. Keep your eyes open though; there are good and bad agencies, and I can tell a few horror stories myself.

- Make an investment in learning tools. Being a contractor means you should be up and going in days, ready to hit the ground running with best practice and speed. Unless you take time to read blogs and industry news and watch tutorials, you're not going to be able to do this.

1 Mod 10

Well, for those into Maths, 1 % 10 = 1, and really that's how I feel every day in this industry. I may be 1 year into contracting and 10 years into the IT/Software industry, but every day is like my first day; this industry is moving at a phenomenal pace, and anything I did 3 years ago is either no longer valid or there is a new, better way of doing it. There has never been a better time to be a software engineer; if you love learning new things and can embrace change, software engineering probably has a job for you.

Do dheagh shlàinte (to your good health)



Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.

In part 1 of this series I discussed the general Azure account and subscription structure that we settled on. In this post I will go into a little more detail on how we achieved this, what automation we felt was critical to success, and how we used VSO to accomplish our end-to-end Azure/VSO pipeline.



After the above structure was set up in the Enterprise Portal, the next biggest thing to look at is identity.

User Identity:

Most Enterprise users will already have an on-premises Active Directory, with a management policy already in place as to how users/computers are managed and by which team. Microsoft provides Azure AD Connect, a service that will continually sync changes between your on-premises AD and your Azure AD. Do not underestimate how long it will take to set this up; there are many prerequisites that you will have to work through with your on-premises AD before you are able to sync it with Azure. I really would not recommend progressing any further with Azure until you have this, or an equivalent strategy for user and administrator access, set up; we progressed without it, and the later migration is going to be difficult.
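
As an aside, recent builds of the Azure AD Connect tool ship an ADSync PowerShell module on the sync server that lets you trigger a sync manually - handy when testing the setup. A minimal sketch (module and cmdlet availability depend on your tool version):

Import-Module ADSync
# A delta cycle pushes recent on-premises changes up to Azure AD.
Start-ADSyncSyncCycle -PolicyType Delta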

Access Control:

The client was keen to embrace a less restrictive access control policy for developers than is perhaps ‘normal’, in order to achieve a low-maintenance, highly flexible environment, but with tight control around key environments for consistency. It was decided that all developers would have full read-write control over their own program's dev subscription (i.e. all projects within the program they belong to), with full read permissions to the QA subscription and no rights to staging or production. To ensure consistency we developed a set of custom PowerShell scripts to hand over to the team in charge of account admin; this would ensure group naming was correct and reduce the chance of accidental escalation of privilege to other environments.
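
The scripts themselves belonged to the client, but a minimal sketch of the idea using the AzureRM cmdlets (all names here are hypothetical) looks something like this:

# Grant a program's developer group Contributor on its Dev subscription and
# Reader on QA; Stage and Prod get no assignment at all.
param(
    [Parameter(Mandatory)] [string] $ProgramName,       # e.g. "ProgramA"
    [Parameter(Mandatory)] [string] $DevSubscriptionId,
    [Parameter(Mandatory)] [string] $QaSubscriptionId
)

# A consistent group naming convention makes accidental privilege escalation easier to spot.
$group = Get-AzureRmADGroup -SearchString "SG-$ProgramName-Developers"

Select-AzureRmSubscription -SubscriptionId $DevSubscriptionId
New-AzureRmRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Contributor" -Scope "/subscriptions/$DevSubscriptionId"

Select-AzureRmSubscription -SubscriptionId $QaSubscriptionId
New-AzureRmRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Reader" -Scope "/subscriptions/$QaSubscriptionId"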

Tear down and re-provision:

For a ‘DevOps’ pipeline to work efficiently, and for all key stakeholders to have confidence in it, it is vital that everyone is encouraged / coerced into scripting every last detail of a project. To encourage this we set up Azure Automation jobs that would tear down the resource groups within the development subscriptions every night (with a few minor exclusions for VMs hooked up to Release Manager; we will come back to this later). This meant that we as developers had to ensure we added every last setting to the Azure Resource Manager (ARM) templates that described a given project.
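
The nightly job is essentially a loop over the subscription's resource groups with an exclusion list. A minimal runbook sketch (the credential asset and group names are hypothetical):

# Authenticate inside Azure Automation using a stored credential asset.
$cred = Get-AutomationPSCredential -Name "AzureRunAccount"
Add-AzureRmAccount -Credential $cred | Out-Null
Select-AzureRmSubscription -SubscriptionName "ProgramA-Dev" | Out-Null

# Resource groups that survive the night, e.g. the VMs hooked up to Release Manager.
$excluded = @("RG-ReleaseManager-Bounce", "RG-BuildAgents")

Get-AzureRmResourceGroup |
    Where-Object { $excluded -notcontains $_.ResourceGroupName } |
    ForEach-Object {
        # -Force suppresses the confirmation prompt; everything deleted here must be
        # re-creatable from its ARM template the next morning.
        Remove-AzureRmResourceGroup -Name $_.ResourceGroupName -Force
    }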

In production a tear-down and re-provision strategy is obviously not always practical, so we also wanted to test an inline upgrade. To do this, our QA deployment would delete the given resource group, retrieve the ARM template used in the current live environment, apply this to QA, and then apply the new QA ARM template of our new deployment to upgrade the environment inline. This requires a high level of co-ordination and does have some downfalls (i.e. depending on the changes between different versions of your ARM template, you may be unable to do a live deployment and have to arrange downtime for the release).
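
Stripped of the pipeline plumbing, the QA flow reduces to two deployments against a freshly re-created resource group. A sketch, with hypothetical names and template paths:

# Re-create QA from scratch.
Remove-AzureRmResourceGroup -Name "RG-MyApp-QA" -Force
New-AzureRmResourceGroup -Name "RG-MyApp-QA" -Location "North Europe"

# 1. Bring QA up to the state production is in today.
New-AzureRmResourceGroupDeployment -ResourceGroupName "RG-MyApp-QA" -TemplateFile ".\templates\myapp.live.json"

# 2. Apply the new release's template on top, exercising the inline upgrade path.
New-AzureRmResourceGroupDeployment -ResourceGroupName "RG-MyApp-QA" -TemplateFile ".\templates\myapp.vnext.json" -Mode Incremental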


Azure subscription location:

As with on-premises, your VSO and build server access control needs to be extremely tight (having permission to check in changes, release new builds and administer VSO or Azure subscriptions is as good as having admin access to any PC/server/subscription in any environment that you are deploying to). We hence decided to separate VSO into its own Azure subscription named “dev tools”, where we would also later place our build VMs. This isn't entirely necessary, especially now with RBAC in Azure, but it feels like a good separation of concerns where access control can be kept tight and interlinked resources between subscriptions can be kept to a minimum (i.e. Release Manager has to be able to ‘see’ all subscriptions in order to release the software).


At the current time of writing VSO can only support 1 team collection per VSO account; unfortunately this means you either opt for multiple VSO accounts or flatten your structure and utilize team projects. Neither option is ideal, and it is good to hear that Microsoft are working to bring this feature to VSO, but after some trial and error with both approaches we settled on 1 VSO account with a flatter team project structure. This does mean we lost one level of abstraction, but on the whole this has not been a problem for the client. The client's on-premises TFS 2013 only had a handful of collections anyway (one for each ‘Program of works’), so moving everything up one level and applying security at the team project level has mostly been okay. When/if Microsoft bring multiple collection support to VSO, the migration path should be much more straightforward than it would have been from multiple VSO accounts.

Branch structure:

This is one area with no change between Azure and on-premises. Following the best practice set out in the book ‘Professional Team Foundation Server 2013’, we implemented a streamlined branch structure. What branching structure you adopt will be very dependent on your team and the business around you; have a read at the different options set out in the book above and see what is best for you.

Build & Release:

VSO introduced ‘Build vNext’ in May 2015 and as far as possible we planned to utilize the many benefits this offered (no XAML build scripts being the main one). However, VSO has not yet released ‘Release vNext’, and until such time the current Release Manager 2015 for VSO has a dependency on XAML builds.

Microsoft has been clear that Build vNext is the future, and we were keen to establish a template for later migration, so we planned to utilize Build vNext for our CI builds while having the XAML build (hooked into the Release Manager 2015 pipeline) operate as a “rolling build” after a given number of successful check-ins to perform our Continuous Delivery (CD). Please see part 5 of this series for further detail on the setup.


Hopefully this has given an insight into how we set up Azure and VSO for this client. In the next set of posts I intend to dive a little deeper technically, so if you have not already done so, now is a good time to sign up for Azure.


I was having a problem with VSO Build.Preview / Build.vNext builds not being picked up by my self-hosted VSO build agent (on an Azure VM). Unfortunately this is the 2nd or 3rd time I have hit this problem, hence I felt a blog post was in order. When queuing a build I was greeted with the message “this build has been queued and is waiting to start”. This is normal at the start of every build; what is not normal is for the build never to move on, sitting there permanently.


  1. The first thing I always check in this situation is that the build server is available and enabled in VSO at https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection/_admin/_AgentQueue; both were, and my build agent was “Green”.
  2. I then RDP’d to the build server to establish whether the VSO agent was active in the processes tab; it was. I then kicked off another build and could instantly see activity on the agent, but this soon died off and I was again left with the above issue.
  3. I then prompted an update of the agent by visiting https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection/_admin/_AgentQueue, selecting the queue in the left-hand menu and selecting “Update all Agents”. I then repeated steps 1 and 2; still no luck.
  4. I rebooted the build server and tried steps 1 and 2 again; still no luck.
  5. I found this article on MSDN that described my exact issue, but it seemed to suggest this was a one-off event related to a VSO update. So I left everything alone overnight and tried the next day; same again. Coincidentally, or not, I only ever appear to have these issues after Microsoft have rolled out a VSO update; to date I have not seen this outside the VSO update ship window.

At this stage I had no choice, I had to reinstall the agent.

Reinstall the Agent

  1. Go to https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection/_admin/_AgentQueue and delete the build agent in your queue.
  2. Log on to your build server, open the Windows Services console and locate the Windows service “VSO Agent” (the name depends on whether you accepted the default settings at the point of install). Stop this service, then right-click the service, go to Properties, and note down the install location of the service.
  3. Before we reinstall the agent we need to delete the workspace that the agent was using. To do this open a developer command prompt (type cmd in the start menu and select the Visual Studio command prompt) and run the commands below to list and then delete the required workspaces.

tf workspaces /server:https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection

From the above output note down the workspace name and the owner, then execute:

tf workspace /delete /server:https://YOURCOMPANYNAME.visualstudio.com/DefaultCollection “workspacename;owner”

Note the semicolon between workspacename and owner.

4. Now open Windows Explorer and navigate to the directory you noted down at step 2; in the agent folder you will find a script named “ConfigureAgent.cmd”. Open a command prompt, navigate to this directory and run this cmd file.

5. You will be asked whether you wish to update the local agent's settings; say yes, then accept all default options (your settings should already be stored from the previous install).

6. Now try another build. For me this was all I had to do; if it still does not work at this point I would suggest repeating steps 1-3 before downloading a fresh agent from VSO.




I am currently working with a client that has an Azure API app that I would like to test for both performance and load. I was busy setting up web performance and load testing using Visual Studio 2015 and Visual Studio Online when I hit strange assembly redirect issues. I eventually discovered the problem was due to the test runner's inability to access the load test's app.config file. In this post I will explain how I got round this.

In order to set up the web performance test I first needed to authenticate each test with the API Gateway, which had Azure Active Directory authentication in front of it. This meant writing a “request plug-in” for the web performance test that would get the “x-zumo-auth” token and append it to the request headers of any request going to the API. To do this I needed a couple of packages from NuGet, which added the appropriate settings to my app.config. On running the tests I was hitting what looked like a classic dll-not-found exception; however the binding redirects were correct, and so was “copy always”, which meant the dlls were being copied to the correct test results output directory, yet the test was still looking for an old version of a given dll.

I eventually discovered that the app.config included in the web and load test project was not being used by the test runner. It turns out Visual Studio and VSO delegate the running of these tests to “QTAgent_40.exe”, which is located in the Program Files directory and in turn points to the test directory (not your bin folder) to pick up the dll containing the tests. All this means that when you call ConfigurationManager.AppSettings[configKey] you are really reading the contents of QTAgent_40.exe's app.config, not your own. I came across this article from MS DevOps engineer Donovan Brown, which pointed me in the direction of my final solution below.

To solve the issue I read the assembly bindings in from my app.config by parsing the XML file (no managed classes are available to access this area of the app.config), then subscribed to an event that listens out for any dlls that are not automatically resolved. I then added a call to my new static class and method at the top of my request plug-in, before calling the AD authentication libraries. If the event fired I would go through the preloaded settings from my app.config and point the test to the correct version of the dll. Code below.



using System;
using System.IO;
using System.Reflection;
using System.Xml;

public static class AssemblyBinder
{
    private static XmlNodeList assemblyBindingFromAppContext;
    private static XmlNamespaceManager docNamespace;

    public static void Resolve()
    {
        // Parse the <assemblyBinding> section out of our own app.config first.
        LoadConfig();

        // Fires whenever the runtime fails to resolve an assembly on its own.
        AppDomain.CurrentDomain.AssemblyResolve += delegate (object sender, ResolveEventArgs e)
        {
            var requestedName = new AssemblyName(e.Name);
            foreach (XmlNode assembly in assemblyBindingFromAppContext)
            {
                var selectSingleNode = assembly.SelectSingleNode("./bindings:assemblyIdentity/@name", docNamespace);
                if (selectSingleNode == null)
                    continue;

                var filename = selectSingleNode.Value;
                if (requestedName.Name == filename)
                {
                    // Might want to add version number etc. checks on requestedName here.
                    try
                    {
                        return Assembly.LoadFrom($"{filename}.dll");
                    }
                    catch (Exception ex)
                    {
                        throw new FileNotFoundException($"Could not find {filename}. The error is {ex.Message}");
                    }
                }
            }
            return null;
        };
    }

    private static void LoadConfig()
    {
        // The test runner copies our config next to the test dll as "<assembly>.dll.config".
        var docname = Path.Combine(Environment.CurrentDirectory, $"{Assembly.GetExecutingAssembly().ManifestModule.Name}.config");

        var xmlDoc = new XmlDocument();
        xmlDoc.Load(docname);

        docNamespace = new XmlNamespaceManager(xmlDoc.NameTable);
        docNamespace.AddNamespace("bindings", "urn:schemas-microsoft-com:asm.v1");

        if (xmlDoc.DocumentElement != null)
            assemblyBindingFromAppContext = xmlDoc.DocumentElement.SelectNodes("//bindings:dependentAssembly", docNamespace);
    }
}


Part 1 – General Overview & Account Structure.
Part 2 – Setting up the Azure and Visual Studio Online (VSO) structure and maintenance.
Part 3 – Setting up the Azure Environments with ARM and PowerShell.
Part 4 – Setting up Build Servers.
Part 5 – Setting up Continuous delivery (CD) on CI Development builds.
Part 6 - Setting up Release Manager for Dev nightly, QA, Stage and Prod deployments.

Part 1 – General Overview & Account Structure.

Over the past few months I have been working in conjunction with James Kewin, consulting at a Financial Services client with the ambition to host a new greenfield project on Azure. Added to this, the client wished to create a template for migrating their existing infrastructure at a later date. In line with this goal, James and I have spent several months on R&D in moving our enterprise client to the cloud, from a complete end-to-end Application Lifecycle Management point of view, with the latest best practice throughout the pipeline.

This blog series is a combination of the recommendations we implemented, which address enterprise issues with a move online, and a step-by-step tutorial on how we set up Azure, Visual Studio Online and Release Manager, and how we implemented many DevOps techniques such as continuous delivery, environment tear-down and re-provision, and the obsession with automation that is required in this new “cloud” world.

Tip: Azure is constantly subject to change; what I write here may well not be valid in 6 months. Always double check current limitations and advancements. This series was first written in Jun-Sep 2015. It is also important to note that the correct setup for one organisation may not be correct for all; please get in touch with any comments or questions.

Azure Subscriptions & the Enterprise portal


Most of you will be aware of the Azure portal above, which seems great, but how do you go about structuring this in the enterprise world of Dev, Test, Stage, Demo and Production environments, with the associated security lockdown and audit requirements, across hundreds if not thousands of applications? Luckily, if you have those kinds of requirements you probably have a Microsoft Enterprise Agreement, and as part of this Microsoft provides the Azure Enterprise Portal, which gives us some functionality to manage, report on and plan our subscription structures. The Enterprise Portal adds another 2 layers on top of a standard subscription: the first is an “Account”, which has a single administrator with full control over multiple subscriptions, and the topmost level is a “Department”, which is a collection of multiple Accounts where we can set budgets for Azure usage.

Design principles

In designing our department, account and subscription structures there were several design principles we wanted to keep in mind. These were:



  • Maintainability. 200+ subscriptions (1 for every application) is impractical given the subscription setup overheads.
  • Growth. There should be room to grow.
  • Flexibility. Any decision now should be easy to change going forward.
  • Security. Environments & customer data should be segregated and access control applied appropriately.
  • Azure Limits. Be conscious of hard Azure subscription limits (800 resource groups in a subscription etc.), as well as resource limitations and scale limits.
  • Costs. Hard “cut out” department limits to prevent accidental budget blowout.
  • Reporting. It should be easy for management to retrieve cost reporting at all levels.

Constraints of Azure Department, Account and Subscription structures

It is all very well having a set of design principles, but we had to be aware of the current constraints and considerations associated with how we structure the Azure Enterprise Portal in relation to these principles. At the current time of writing (Aug 2015) these are:

  • Cost Caps. Unlike an Azure public subscription, a subscription tied to an Enterprise Agreement only has hard cut-out limits for your budget at the department level. All controls, and visibility, of costs are at department level only, something your dev team is not likely to have access to. It is all too easy for a junior dev to create several whopping VMs that cost £1,000s per month each if you let them.

  • Subscriptions are difficult to move between accounts, but accounts are easy to move between departments. There is no way of moving a subscription between accounts yourself, and although you can contact Microsoft support and have them move your subscription to another account for you, my personal experience of doing this has not been pleasant (hint: it involved a great deal of downtime). As such it is easier to assume that once a subscription is attached to an account it is fixed there, until such time as Microsoft add the functionality to the Azure portal like they have for moving an account to a different department with 0 effort and 0 downtime. **UPDATE** MS have now added this ability to the Enterprise Portal (Oct 2015)**

  • Subscription Security. Azure is very much still in “migration” mode from the old https://manage.windowsazure.com to the new https://portal.azure.com/. While the new portal provides Role-Based Access Control (RBAC) to manage who has access to see, modify and create a given asset, server, VM etc. (and in turn the data associated with those assets), not all asset types are available in the new portal as yet, so you are forced to provide your team with a greater level of access to a subscription than is perhaps desired. Notable exceptions from the new portal are Service Bus and Azure Active Directory. Locking down who has access to a production asset vs a dev, test, stage or demo asset within one subscription is therefore currently not possible, so our environments will have to reside in multiple subscriptions for audit and security requirements, especially in financial services.

How did we implement the enterprise structure in an Azure Department/account/subscription structure?

It is important to note here that there are many ways to map your Enterprise structure into an Azure one. This is something you really want to get right from the outset, so take those extra few weeks and speak to all interested parties across the organization.

To understand why we implemented the structure below it is probably best I give some context. Our client is not too dissimilar to many medium/large enterprise organizations in that they have multiple “programs of works” running simultaneously, each with multiple teams/projects, each of which has its own Dev, QA, BA and Architecture teams. We wanted to find something that was going to be suitable for everyone while still allowing each “program” a degree of flexibility in how they operate.

At the current time our client also has a clear line in the sand between development and operations; anything client/public facing is the responsibility of a dedicated Operations support team. We modelled the account and subscription structure for the stage/demo and production environments accordingly.



Why did we implement it this way?

Rationale for the recommendations:


Departments:

  • A way to maintain hard budget limits/cut-outs to prevent budget overrun.
  • Keep departments to a minimum to reduce maintenance.
  • Departments are easy to create, and accounts are easy to move between them, should the need arise later.
  • Don’t optimise prematurely, but still leave room for flexibility. Two departments seem to fulfil budget cap requirements at present; if one program blows the budget for the full department it will be easy to split it out at a later date.


Accounts:

  • Accounts split by department, to make it possible to move accounts to their own departments should it be required later.
  • Accounts should be controlled by a central “Core” team – the suggestion is IT Infrastructure / IT Helpdesk – as maintenance plans need to be set up per subscription (as do build server and VSO rights).


Subscriptions:

  • Due to maintenance setup requirements and the manageability overhead, we wanted to keep the overall number of subscriptions small.
  • Unlike Account and Department structures (which are virtual to users), switching and navigating subscriptions is a “physical” workflow switch, which means closing one subscription to get into the other. It is hence important that any developer can navigate the subscription structure easily, with no confusion as to which application and app environments live in which subscription. We therefore recommended that subscriptions be kept at department level with one subscription per environment, giving most developers at most 2-3 subscriptions to filter between.
  • “New” asset types can be fully managed at resource or resource group level by role-based access control. “Old” assets such as Service Bus can only be managed by named-person access. To make administration and lockdown of QA easier, QA and Dev have been separated into their own subscriptions.
  • Subscription creation would be handled by a central team to ensure both naming and security consistency.

Resource Groups

  • Resource groups can be used as a security ring-fence for each client. For data protection reasons, and to facilitate easy deletion at a client’s request, no data will cross resource group boundaries.
  • Due to the security setup, and for naming consistency, resource groups should be provisioned (by script) by a central team (IT Infrastructure); a sketch of such a script follows below. The resources would then be deployed into them by Release Manager (again by script) by the individual teams.
  • All assets for a given deployed customer will be contained in their own resource group for isolation purposes.
  • A “QA on Demand” environment would be temporarily spun up on each build, to test feature differences between clients, in another resource group not attached to the main pipeline.
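
To make the central provisioning step concrete, here is a minimal sketch of what such a script might look like (all names are hypothetical and your conventions will differ):

# Create a client's ring-fenced resource group, tag it for cross-billing,
# and scope the delivery team's rights to that group only.
param(
    [Parameter(Mandatory)] [string] $ClientName,   # e.g. "ClientA"
    [Parameter(Mandatory)] [string] $TeamGroupId   # AD object id of the delivery team's group
)

$rgName = "RG-MyApp-$ClientName"                   # enforced naming convention

New-AzureRmResourceGroup -Name $rgName -Location "North Europe" -Tag @{ Name = "CostCentre"; Value = $ClientName }

# Contributor on this resource group only; nothing outside the ring-fence.
New-AzureRmRoleAssignment -ObjectId $TeamGroupId -RoleDefinitionName "Contributor" -ResourceGroupName $rgName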

The Filing Cabinet Analogy for the non-tech types

All of the above can be a little heavy for tech types, far less non-tech types, when trying to establish requirements. James came up with an excellent analogy to explain this to upper management and why they needed to care.

The Paper (our projects):

Traditionally developers cared about developing the software, with little thought given to the underlying hardware it would be hosted on – that was a problem for the architects and the infrastructure team. In our new tear-down/up DevOps cloud (insert buzzword) world, developers now specify the infrastructure setup and configuration as yet another project in their solution. Azure Resource Manager (ARM) templates are the next evolution of PowerShell DSC, with a particular focus on Azure. ARM templates allow developers to declare their desired Azure configuration and state as a JSON description of all the assets in their solution, further enabling the constantly repeatable tear-down and re-provision cycle of a DevOps context.


The Folders (Resource Groups):

Azure now provides a logical grouping container called “Resource Groups” into which we can group our infrastructure assets and deployed content (i.e. different projects, clients etc.). These allow you to manage, via RBAC (role-based access control), which technical staff have access to administer given assets, with a couple of exceptions at the current time of writing. Resource groups are the perfect “folder”, grouping items together with associated metadata; for example we can “tag” a resource group with any string we like, and that tag will appear on the billing breakdown the finance department receives.


The Drawer (A subscription):

Azure has a soft limit of 800 resource groups (or folders) per subscription (a soft limit in that a call to MS support can have it increased). Our client has multiple products/projects under development within a given program, so each subscription will contain multiple resource groups. We likened the subscription to the drawer of a filing cabinet.


Note: In the diagram above we mention Release Manager 2015 and the concept of a bounce VM. This is to deal with specific limitations that exist in Release Manager at the time of writing (Aug/Sep 2015); we cover this later in part 6 of this series.

The Cabinet (The account containing multiple subscriptions):

Referring back to the Enterprise Portal, multiple subscriptions are managed by 1 account owner. For example, our Dev and QA subscriptions are managed by 1 account per program.


Multiple Cabinets (The Department):

The topmost level of the Azure Enterprise Portal is the Department, which is a group of 1 or more accounts. It is at this level that budget caps are maintained. These caps, when breached, will knock out everything under that department from being usable. For our requirements it was agreed that only 2 caps were necessary: one for non-prod, and one for production with a very large cap (no cap is a really bad idea; I have heard first-hand stories of organisations racking up bills of £10,000s in a matter of days).

Cost caps do not affect the reporting of cost within a department; using “tags” we can specify, down to the asset level, who should be cross-billed for the usage of a given deployed resource. This was a key factor in negotiating this simplified structure within our client’s organisation.


See any gaps in our thinking?

Hopefully the above has explained the general high-level principles we considered in setting up the overall Azure structure. The rest of this series will be more “tutorial” style, showing step by step exactly how we achieved this.

We would love any feedback you have on this series; please feel free to use the comments below… I will respond.