Ok, so let's start this off with a little background of where we came from and where we wanted to get.

Our environment at the time consisted of a development environment with a single server on the same domain as the deployment server, and a production environment at a third-party host on a different domain with no direct connectivity to the build/deployment server. We used GitHub Enterprise on-premises for source control and TeamCity for continuous integration and deployments. Production deployments to each of the 7 servers were a manual process done with a bunch of PowerShell commands to pull servers out of the farm, update the code, warm up the site, and put them back in.
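The manual process followed a classic rolling pattern: take one server out of the farm, update it, warm it up, and put it back before moving on to the next. A minimal sketch of that loop is below; the actual scripts were PowerShell, and every function name here is a hypothetical stand-in rather than a real command we used.

```python
# Sketch of the rolling deployment loop described above.
# The real scripts were PowerShell; these step names are
# hypothetical stand-ins, not the actual commands.

def deploy_rolling(servers, log):
    """Update servers one at a time so the farm stays up throughout."""
    for server in servers:
        log.append(("remove_from_farm", server))  # stop routing traffic to it
        log.append(("update_code", server))       # copy the new package over
        log.append(("warm_up", server))           # hit the site so caches load
        log.append(("return_to_farm", server))    # resume routing traffic

actions = []
deploy_rolling(["web01", "web02"], actions)
```

Because only one server is out of rotation at a time, the farm keeps serving traffic, but running each step by hand across every server is exactly why the deployments below took hours.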

What we wanted was to bring the production servers on-premises, on a separate domain in our DMZ (for security), using the same build in all environments, with zero-downtime deployments and the ability to stop a deployment if something went wrong using a staging environment pointing to production. Additionally (a hope), we wanted to be able to build a new server with just the Windows OS and have the deployment ensure everything needed to run the site was installed automatically: certificates, server roles, features, permissions, app pools, site bindings, etc. And of course we wanted it all completely automated.

In our original process, TeamCity built and deployed to the dev environment on every check-in to the development branch. This was done using MSBuild with some command-line parameters, which is very, very limited in terms of deployments. The dev environment was also compiled and deployed using the Debug configuration. When we were satisfied with the state of the dev environment, we would merge the dev branch into the release branch, at which point TeamCity would build a new production package using the Release configuration, containing all of the files needed to deploy to the production environment. We would then manually copy that production package to the server and run the PowerShell scripts to deploy to each of the servers. The production deployment started at 10pm and took about 2 hours to complete the 6-server deployment. Needless to say, it was horrible.

To meet our requirements (with some added bonuses), we chose to move our source to TFS on-premises (Reason #1), switch from TeamCity to TFS for builds (Reason #2), and use Octopus as the deployment server (Reason #3).

Reason #1: TFS on-premises was chosen because other projects were already in it; the big one is our data warehouse, which uses TFS version control. We also really liked all of the additional task features, gated check-ins, and tons of other goodness. We already needed to run the build agents internally because we weren't allowed to make Octopus externally facing, and it's what the I.T. guys really wanted. For us, it just made sense. Using VSTS (the cloud version) would have worked just as well and includes the same features as TFS.

Reason #2: With the move to TFS for source control, the built-in build/release management in TFS, and the fact that we needed to rework the entire build process for this deployment change anyway, it made sense to move it all into TFS. And we're very, very glad we did. Work items tied to builds and releases have been absolutely wonderful for project/task management.

Reason #3: We had already chosen Octopus as the deployment engine for a couple of other projects prior to this one. That said, we would have chosen Octopus again for this project due to its workflow features (gated releases, for example) and the customization available. We also looked at Microsoft's Release Management for the entire deployment process, but it was a bit limited in what we could do with it.

Next up will be lessons learned after creating many projects in Octopus, both large and small.