Azure Site Recovery – Using PowerShell

This is a follow-up post to accompany my previous post, Azure Site Recovery – From Start to Finish. In that post, I demonstrate how to go from a shiny new Azure subscription to fully protecting your VMware infrastructure and your physical servers using Azure Site Recovery. When I wrote it (I use Word for blog posts) it ran 32 pages, with a lot of detail on the process and plenty of insight. This is the skinny version of that post: a PowerShell script hosted on GitHub that automates the steps for you. So instead of going through the article and doing something like this each time

In the Azure console:

  • Click Recovery Services vaults > vault.
  • In the Resource Menu, click Site Recovery > Prepare Infrastructure > Protection goal.
  • In Protection goal, select To Azure > Yes, with VMware vSphere Hypervisor.
  • Click > Source

 

You can instead do this.

Of course, it isn’t actually that easy. You must open PowerShell first, then run the script. I kid; there is more you will have to customize in the script to make it work for you, and I will walk through that next. First, I will walk through the PowerShell script itself. It may help make the process a bit clearer when reading the longer article.

In this short code block I create the resource group, storage account, my Vnet and subnet, and the actual Recovery Services vault. That really isn’t a lot of clicking in the console either, but it still isn’t as efficient or as fast as PowerShell.
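Here is a minimal sketch of what that block of the script does; the resource names, region, and address ranges are placeholders you would replace with your own, and it assumes an authenticated session (Login-AzureRmAccount) with the AzureRM modules installed.

```powershell
# Placeholder names and region -- substitute your own values
$location = "EastUS"
$rgName   = "ASR-RG"

# Resource group to hold everything
New-AzureRmResourceGroup -Name $rgName -Location $location

# Storage account for the replicated data (standard storage)
New-AzureRmStorageAccount -ResourceGroupName $rgName -Name "asrstorage01" `
    -Location $location -SkuName Standard_LRS

# Vnet with a single subnet to act as the recovery network
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "Recovery" -AddressPrefix "192.168.1.0/24"
New-AzureRmVirtualNetwork -ResourceGroupName $rgName -Name "ASR-Vnet" `
    -Location $location -AddressPrefix "192.168.0.0/16" -Subnet $subnet

# The Recovery Services vault itself
New-AzureRmRecoveryServicesVault -Name "ASR-Vault" -ResourceGroupName $rgName -Location $location
```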

 

In the next section, I simply create the vault key and then import it into Azure.
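That part of the script looks roughly like this; the vault and path names are placeholders, and the exact cmdlet set can vary with your AzureRM module version.

```powershell
# Look up the vault created earlier and set it as the Site Recovery context
$vault = Get-AzureRmRecoveryServicesVault -Name "ASR-Vault" -ResourceGroupName "ASR-RG"
Set-AzureRmSiteRecoveryVaultSettings -ARSVault $vault

# Download the vault credentials file -- the on-premise Config & Process
# server uses this file to register itself with the vault
Get-AzureRmRecoveryServicesVaultSettingsFile -Vault $vault -Path "C:\Temp"
```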

In this final section of the script, policies are created and then uploaded to Azure.

Additional Information

Make sure you have the latest version of the AzureRM.SiteRecovery module (4.2.1 as of today):

Install-Module AzureRM.SiteRecovery

 

Change the email address to someone who has rights in Azure to complete all of this work.

Login-AzureRmAccount -Credential (Get-Credential -Credential anthony@anthonyonazure.com)

 

There are a lot of lines of code that have been commented out in the middle of the script. This is the automation portion of the on-premise installation. It could easily be adapted inside the current script to install the Config & Process server on the same server that configured Azure. I also included a link for downloading the provider if you are protecting Hyper-V.

 

The last section is also commented out. Until you register an on-premise server with ASR, the Protection Container doesn’t exist, and the last line that contacts Azure will fail until it does. The on-premise install you need to complete is also commented out, so you can see where this could all be automated.

To see if the Protection Container has been created, use the following PowerShell command.



Get-AzureRmSiteRecoveryProtectionContainer

 

If you are looking for the script, it is in my GitHub repository, AnthonyOnAzure.

Azure Site Recovery – From Start to Finish

This is a start to finish plan on how to use Azure Site Recovery (ASR) to protect your VMware and physical servers in the event of a disaster. You can protect far more than just VMware and physical servers with ASR but this is the most common scenario I encounter with customers and a straightforward place to begin.

Every infrastructure should have a business continuity and disaster recovery (BCDR) plan in place for times when there is an outage, or worse, a disaster, natural or not. I have many customers who have a DR site, meaning a second data center that duplicates all critical workloads hosted in their primary data center. As you can imagine, paying the bill for servers, cooling, floor space, racks, administration, electricity, and so on that isn’t being used by the company to generate revenue is never something your CIO wants to sign off on. But the alternative is to hope they never get a phone call (assuming the phones are up) from the CEO asking why those services are not up. I have been in the room when those phone calls take place, and the usual reaction is sheer panic. It is what we call a resume-generating event for most CIOs, and likely one or more directors as well.

What is site recovery?

I will start with a simple conceptual design to explain how a typical design works to protect your infrastructure. Next, I will explain the logical design of a typical architecture. After the foundation has been established I explain the design principles and finally implementation. Along the way I will provide the details that lead to the design decisions I have chosen and possible alternative design decisions.

ASR supports two main orchestrated recovery scenarios. The first is orchestration of workload recovery between a customer’s primary and secondary datacenters. The second is orchestration of workload recovery and migration between a customer datacenter and Azure IaaS, or simply migration. In this disaster recovery scenario, you will be running your workloads in Azure IaaS only until you can resume operations on-premise.

This article is focused on the on-premise infrastructure: using ASR to protect a mix of VMware servers and physical servers that also include physical storage. The concepts and methodologies here apply to workloads like SQL and Dynamics, and to protecting other platforms such as Hyper-V, VMM, and Azure; the difference between these is the options available.

Other common scenarios ASR supports include replicating Active Directory and DNS, and protecting SQL Server, SharePoint, Dynamics AX, RDS, Exchange, SAP, IIS, and Citrix XenApp and XenDesktop.

In this conceptual diagram, on the left is the on-premise infrastructure that has servers we want to protect. On the right is Azure. In the on-premise portion, notice all the servers are sending traffic to the Process & Config server; this server is used to communicate with Azure. It handles a number of things, but most important is that it replicates the changes on your servers to Azure, keeping the two in sync. The red cog on each server is the Mobility Service; this is what actually tracks the changes on your servers and reports them to the Process & Config server. On the right side (secondary site) of the diagram you see just the Recovery Vault and ASR Failover Orchestration. The Process & Config server replicates the servers’ data to the Recovery Vault, where it is stored. When a disaster occurs, the configuration stored in the Recovery Vault is what Azure will use to build the Azure VMs that duplicate your on-premise servers. It is important to note that replication happens directly with Azure storage; the traffic is not processed by the Site Recovery service. Only the metadata required to orchestrate replication and failover is sent to the Site Recovery service. And that is the final piece, listed below as the ASR Failover Orchestration. What isn’t shown below is the Azure storage (blob storage). Azure storage is part of your Recovery Vault, and to keep this diagram simple I decided not to show the storage separately, because the data doesn’t actually pass from the Recovery Vault to the blob storage.

If you were designing recovery for a publicly hosted workload the conceptual design and even the services would be different. Adding in Traffic Manager hosted in Azure would then give you the ability to failover from on-premise into Azure without having to make public DNS changes and waiting for them to propagate.

This second diagram is a logical depiction of the same workload recovery process. On the left is the on-premise infrastructure, the right is Azure. I won’t explain this here because it would be more depth than is really warranted but if you need to understand where specific services are in the stack or how Azure compares to the logical design of VMware this diagram is for you.


Capacity Planning

Capacity planning is critical to a successful DR solution. Without the proper tools and guidance this would be a challenging task for most. As an example of how things differ from the DR solutions designed decades ago: Azure Site Recovery requires a storage account in Azure. The storage account is defined in the replication settings of your recovery plan. This means that any virtual machine migrated using a recovery plan will be placed in the same configured storage account.

Pay close attention to this paragraph. Storage accounts have limits; one of those is the total storage (500 TB), and the maximum concurrent IOPS for a single storage account is 20,000. A single disk can have up to 500 IOPS. So, the maximum number of VHDs that can be placed into a storage account is 40, assuming the disks are all concurrently using the max IOPS. If the VHDs are using a lower average IOPS, then more VHDs can be in a single storage account. This refers to standard, not premium, storage.

Instead of trying to calculate all of this, you can thankfully just download the site recovery capacity planning tool provided for free by Microsoft, which will figure all of this out and more for you. The details on how to use it are here. The tool needs to run for a couple of weeks to collect data, so you should download it now and get it collecting.

There is also an equivalent tool for Hyper-V replication.


Capacity Planning continued

If the workloads you are protecting are large enough, you could be failing over more VMs than what is supported in a single storage account. If that is the case, you will need to automate the process of unprotecting the VMs, changing their storage account, and then applying protection to them again. You could also have a recovery plan call another recovery plan. Both of these scenarios are beyond the scope of this post, but it is important to understand the limits of storage accounts in Azure, or your recovery could turn into a nightmare as you wait and wait for your now heavily throttled VHDs to boot. The tool will figure out where to place all the VHDs to avoid this, automatically.

Prior to designing your recovery plan you should know how many storage accounts are required. This is based on the total size of all replicated data, the daily churn rate of your protected workload, and the days of retention: essentially the total amount of storage (ts) that would be required after you have completed the initial replication to Azure, plus the amount of data that changes per day, or churn rate (cr), times the number of days of retention (rd).

Or expressed simply as ts + (cr x rd). The problem with this calculation is it does not consider the IOPS limit of 20,000 per storage account. But you will also need to know what size VMs in Azure you will be running to really know what IOPS you will need for your storage account.
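The estimate above can be sketched in a few lines of PowerShell; the numbers here are made up for illustration, and the per-account limits come from the storage-account discussion earlier.

```powershell
# Illustrative inputs -- replace with the figures from your own environment
$ts = 10 * 1024   # total replicated data after initial replication, in GB (10 TB)
$cr = 200         # daily churn rate, in GB
$rd = 3           # days of retention

# ts + (cr x rd)
$requiredGB = $ts + ($cr * $rd)   # 10840 GB in this example

# Storage-account limits mentioned earlier: 20,000 IOPS per account,
# up to 500 IOPS per disk -> at most 40 fully loaded VHDs per account
$maxDisksPerAccount = 20000 / 500  # 40
```

As the text notes, this still ignores the IOPS side: you also need to know the Azure VM sizes you will run to know the IOPS your storage accounts must sustain.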

But there are other factors to consider when planning for scale and capacity of ASR. For instance, a Process server can only replicate a maximum of 2 TB a day. A VM being replicated can only be assigned to a single Process server, but more important is that the sum of the daily churn of all VMs being replicated by a Process server needs to be considered. You can add additional Process servers to handle more than 2 TB of churn per day.

Azure Site Recovery services requires that a Site Recovery Vault is created. The vault contains the Azure Site Recovery settings, the certificate and the vault key. The certificate is used as part of the data encryption process and must be created and uploaded into the vault. The vault key is used to allow communication from the provider to the vault. This is depicted in the logical diagram above.

If you read all of this after the note about the free tool that does a better job than I can in Excel, kudos. There is some important information in the previous paragraphs.

Carefully consider growth factor

Let’s assume we ran the tool for more than two weeks and have great data. We now have a really good idea of our capacity: we have the number of storage accounts we will need, the number of Azure cores required for a failover (and we won’t exceed them), we know where our VM storage placement is going to be, which VMs are not compatible and why, we know how long the initial replication into Azure will take and our daily churn rate, we have factored in our growth, and we even did what-if analysis.

To Do’s:

  • Download the site recovery planning tool and run it!
  • Carefully consider growth factor
  • Identify which VMs are not compatible and why
  • Review prerequisites for VMware replication

Networking

To begin planning your recovery, you first need to determine if your failover requires your servers to maintain their same IP address. Since most of what you would fail over in this scenario does require static IP addresses, that means you will also need to fail over your subnet as part of the recovery plan. Because Azure does not support stretched networking, and because these are not externally facing workloads, you will need a network connection between Azure and your on-premise data center for the services to be accessible. This can be in the form of a VPN connection or an ExpressRoute connection. You can even automate the building of your VPN connection as part of your recovery plan.

Review the prerequisites for VMware to Azure replication

Azure Virtual Networks

‘Azure virtual networks define the network connectivity within Azure datacenters, between Azure datacenters, and between Azure datacenters and on-premises datacenters. It is possible to configure Azure virtual networks to be isolated or to be connected to other virtual networks and allow traffic to route between the virtual networks.

Each virtual network definition has an address space pool and one or more subnets. Subnets should be defined within the virtual network prior to placing any virtual machines. Once a subnet has an IP Address in use, it cannot be modified without removing all devices using IP addresses.’

Given the limit of 2048 VMs per virtual network in Azure, a subnet mask of 255.255.248.0 (/21) is the largest range of addresses that should be assigned without wasting addresses.
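A quick sanity check of that sizing; this also accounts for the handful of addresses Azure reserves in every subnet.

```powershell
# Total addresses in a /21: 2^(32-21) = 2048, right at the per-Vnet VM limit
$prefix = 21
$total  = [math]::Pow(2, 32 - $prefix)   # 2048

# Azure reserves 5 addresses per subnet, leaving 2043 usable
$usable = $total - 5                     # 2043
```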

When you create a VM in Azure you must also deploy that VM to a Vnet with a subnet. You can no longer deploy a VM without a subnet like you could with Azure classic (ASM). That means if your Azure Vnet uses an IP range that matches your on-premise network, those two networks should never be connected at the same time.

Failing Over a Subnet

In this diagram, I show how to accomplish this. On the left, again is your on-premise datacenter, on the right is Azure. We are going to failover the server within the 192.168.1.0/24 subnet, and because it has a static IP, when it comes up in Azure it needs to have that same IP and come up on that same subnet, called our recovery network.


What is going to happen in this scenario is we are going to automate the building of the VPN connection using a runbook, the building of two additional VNets in Azure, and then the Vnet-to-Vnet connection between our recovery network and the new 172.16.x.x Vnet that contains the two subnets. This will all be automated in our recovery plan. But if you wanted, or needed, a persistent connection between your on-premise data center and Azure for other Azure services, then you would most likely have the additional VNets already in Azure, and you would just automate the Vnet-to-Vnet connection so that it was not in place until you were failing over to Azure.

During the failover the router will programmatically failover the entire subnet from your on-premise network. Pay close attention to that statement, it means anything on that subnet will need to be included in your recovery plan or when your recovery plan runs any on-premise assets on that subnet will lose their network connection to that subnet. It also means that you will most likely need to make updates to routes as part of this process. The automated migration of the subnet and updating of routes is specific to your networking equipment and can’t be covered here.

Here are the steps you would need to follow to complete this scenario.

Steps required prior to any failover

  • Creation of the Azure Virtual network that will act as the recovery network (192.168.1.0/24 in the diagram above). This is done much later in the process.
  • Configure the VM properties by specifying the static IP address being used on-premise.

Steps required during a failover

Azure Vnet network and subnets need to be deployed (172.16.1.0/24 and 172.16.2.0/24 in the diagram above).

Alternatively, these could be deployed prior to a failover as part of the ASR setup.

VPN Site-to-Site (S2S) connection is required to be in place connecting on-premise and Azure networks.

It is important to note that creating the VPN Gateway device in Azure can take ~45 minutes; I just deployed one and it took 25 minutes. Depending on your RPO, this may be acceptable.

Vnet to Vnet connection between Azure Networks.

  • After documenting all of this and going through all of the steps, there are two issues with the previous steps. (1) The recovery network is required at the time replication is enabled on protected systems. (2) Taking 25-45 minutes for the gateway to become available is going to be a problem for most people when they are planning for DR. As of today there is no way to simply shut down a gateway device; you can delete one, but it isn’t possible to turn one on, configure it, and then turn it back off. If a customer already has VPN, ExpressRoute, or ExpressRoute plus VPN into Azure, this isn’t going to impact them, but for customers who host everything in Azure with only some resources on-premise (or the other way around), it complicates the decision.
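For reference, the Azure side of the S2S VPN described above can be scripted roughly as follows. Names, regions, and address prefixes are placeholders, and this is only the Azure half (the on-premise router configuration is, as noted, specific to your equipment). The final cmdlet is the long-running step that takes 25-45 minutes.

```powershell
$rgName   = "ASR-RG"
$location = "EastUS"

# The gateway needs a dedicated subnet named exactly "GatewaySubnet"
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName $rgName -Name "ASR-Vnet"
Add-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" `
    -AddressPrefix "192.168.255.0/27" -VirtualNetwork $vnet
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

# Public IP for the gateway endpoint
$pip = New-AzureRmPublicIpAddress -Name "ASR-GW-IP" -ResourceGroupName $rgName `
    -Location $location -AllocationMethod Dynamic

# Tie the gateway to the subnet and public IP
$vnet     = Get-AzureRmVirtualNetwork -ResourceGroupName $rgName -Name "ASR-Vnet"
$gwSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipConfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconfig" `
    -SubnetId $gwSubnet.Id -PublicIpAddressId $pip.Id

# The slow step: provisioning the gateway itself
New-AzureRmVirtualNetworkGateway -Name "ASR-VPN-GW" -ResourceGroupName $rgName `
    -Location $location -IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased
```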

To Do’s:

Identify and list your subnets to be failed over from production to Azure

List of static IP addresses for servers being protected by ASR

ASR Configuration

The image below is an example of a high-level architecture of ASR. The left side represents the on-premise enterprise, and Azure is on the right side. Worth noting is the location of the Process & Config server, and that each server being protected by ASR has the Mobility Service installed. In Azure, the recovery network is included, as well as the blob storage where the replicated data is stored. If you go back to the concept diagram, neither the blob storage nor the Vnet (recovery network) is shown. That was intentional, but they are shown here since they have been covered in the previous topics.


Recovery Vault Setup

Before you can configure Azure or on-premise the Recovery Vault must be created in Azure. As you can see here even for backups everything relies on the Recovery Services Vault (Recovery Vault or vault).


Creating a Recovery Services Vault

The simplest way to create one would be with New-AzureRmRecoveryServicesVault. But since we are going to rely on this for recovery, we should plan and execute carefully, so we will step through the configuration instead.
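For completeness, the one-liner version looks like this (names and region are placeholders, and the resource group must already exist):

```powershell
# Create a Recovery Services vault in an existing resource group
New-AzureRmRecoveryServicesVault -Name "ASR-Vault" -ResourceGroupName "ASR-RG" -Location "EastUS"
```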

Assuming you do not already have a Recovery Services Vault in Azure, finding where to create one can be tricky. It is actually part of OMS; both ASR and Azure Backup are part of OMS. What I found easiest is to just type “Recovery Services” in the search bar, and it is the first result: “Backup and Site Recovery (OMS)”.



Prepare Recovery Vault

If you are ready to create the Recovery Vault, click on Create.

This just takes you to a new set of blades to fill out the required parameters. You should already know where, geographically, you want your recovery vault based on the two-week-plus analysis. I haven’t talked about naming conventions in Azure, but I have an entire presentation on how to do this in Azure, and how not to do it, which is what usually happens. If you have an established naming convention in Azure, and it is well thought out, use it. This isn’t your typical resource group that you are creating, so there is some latitude for being creative should you not yet have a standard. Once you are happy with all the fields, click the Create button and off we go.

Technically this is the Recovery Vault, which is almost all the settings, configuration and management of both ASR and Azure Backup. That little icon in the diagram doesn’t really do it justice.


Before we switch gears and move to the on-premise setup we need to do a few small things first.

Download the Config & Process server software, and the vault registration key to go with it. Then make sure we have at least one storage account and Vnet (we will do the Vnet later).

In the Azure console:

  • Click Recovery Services vaults > vault.
  • In the Resource Menu, click Site Recovery > Prepare Infrastructure > Protection goal.
  • In Protection goal, select To Azure > Yes, with VMware vSphere Hypervisor.
  • Click > Source
  • Then click the plus next to Configuration Server to open the final blade where the link is to download the Config & Process server software.
  • On the Add Server blade, download the Config & Process server software (3.), and also download the vault registration key (4.), we will need it so don’t forget where it saves to.

On-premise setup

Now we move temporarily to on-premise setup.

This involves:

  • Installing and setting up the Config & Process server
  • Preparing VMware for discovery and communication with ASR
  • Preparing for and deploying the Mobility Service

Verify the hardware prerequisites for the Config & Process server

To install the Config & Process server you must also have a VMware VM ready and an account with the proper rights in VMware for discovery.

How to influence the bandwidth on your process server consumes

Installing the Config & Process server

Config & Process server minimum requirements

To install the Config & Process server, on the VMware VM you created, copy the unified installer for Azure Site Recovery that we downloaded from the Azure console.

  • On the configuration server VM, make sure that the system clock is synchronized with a time server. If it’s more than 15 minutes ahead or behind, setup might fail.
  • Run setup as a Local Administrator on the configuration server VM.
  • Make sure TLS 1.0 is enabled on the VM.

After you launch the installer there are several clicks of the Next button, which you can manage without my witty banter.



(This is a condensed version of the dialog boxes)

Preparing VMware

To prepare on-premises VMware servers to interact with Azure Site Recovery, and to prepare VMware VMs for the installation of the Mobility service, two accounts are required.

First up, Add the VMware account for automatic discovery

Once the Config & Process server install is complete, you will see CSPSConfigtool.exe on the desktop of the server.

Launch that to setup the account for communication between VMware and ASR.

Click Manage Accounts > Add Account.

In Account Details, add the account that will be used for automatic discovery.


Site Recovery automatically discovers VMs located on vSphere ESXi hosts, and/or managed by vCenter servers. To do this, Site Recovery needs credentials that can access vCenter servers and vSphere ESXi hosts. Create those as follows:

  • To use a dedicated account, create a role at the vCenter level with the permissions described here. Give it a name such as Azure_Site_Recovery.
  • Then, create a user on the vSphere host/vCenter server, and assign the role to the user. You specify this user account during Site Recovery deployment.

Mobility Service Install

Prepare an account to install the Mobility service agent on VMs.

The Mobility Service must be installed on all VMs you want to replicate. There are a number of ways to install the service, including manual installation, push installation from the Site Recovery process server, and installation using methods such as System Center Configuration Manager.

Each installation method follows its own set of rules, so I have included links to all of them so you don’t get stuck here.

Install Mobility Service by using software deployment tools like System Center Configuration Manager

Install Mobility Service by using Azure Automation and Desired State Configuration (Automation DSC)

Install Mobility Service manually by using the graphical user interface (GUI)

Install Mobility Service manually at a command prompt

Install Mobility Service by push installation from Azure Site Recovery

The setup logs can be found under %ProgramData%\ASRSetupLogs\ASRUnifiedAgentInstaller.log

The AgentConfiguration logs can be found under %ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurator.log

Push Installation

If you want to use push installation, you need to prepare an account that Site Recovery can use to access the VMs.

  • You can use a domain or local account
  • For Windows, if you’re not using a domain account, you need to disable Remote User Access control on the local machine.

To do this, in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System, add the DWORD entry LocalAccountTokenFilterPolicy with a value of 1.
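That registry change can be made with one PowerShell command, run from an elevated prompt on the VM:

```powershell
# Create (or overwrite) the DWORD value that disables Remote User Access control
# for local accounts, as required for push installation
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" `
    -Name "LocalAccountTokenFilterPolicy" -PropertyType DWord -Value 1 -Force
```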

That’s it for on-premise setup! There isn’t much to it except installing the server.

Enabling Replication for VMware in Azure

Ground rules

  • VMware VMs must have the Mobility service component installed. – If a VM is prepared for push installation, the process server automatically installs the Mobility service when you enable replication.
  • Your Azure user account needs specific permissions to enable replication of a VM to Azure
  • When you add or modify VMs, it can take up to 15 minutes or longer for changes to take effect, and for them to appear in the portal.
  • You can check the last discovered time for VMs in Configuration Servers > Last Contact At.
  • To add VMs without waiting for the scheduled discovery, highlight the configuration server (don’t click it), and click Refresh.
  • By default all disks on a machine are replicated. You can exclude disks from replication. For example you might not want to replicate disks with temporary data, or data that’s refreshed each time a machine or application restarts (for example pagefile.sys or SQL Server tempdb). When you add a machine to a replication plan you have to specify the OS disk, and can then choose which disks you want to include in the replication. Learn more

And we still need to create the recovery network. This is the Vnet that all the VMs get deployed to during failover, and it is required when we enable replication.

Enable Replication


This creates the recovery (failover) Vnet with a single subnet. It also creates the external-facing Vnet with a subnet. The critical part of this setup is that Vnet peering is not set up, so the recovery network does not connect over VPN or ExpressRoute to the on-premise network. If it did, that alone could cause an outage.

As part of the Recovery Vault setup, protected items are configured. Adding a protected item in the vault allows you to establish a connection to the ASR Configuration & Process server. Once the connection is established, and the Mobility Service is deployed on all targeted systems, you can select individual servers to enable for protection. You can see here I have a system called DC2 that is failing to replicate, and a number of other systems on which protection has just been enabled. Until the service is installed on each VMware VM, you will not see them listed in the Azure portal.


Once protection is enabled, the encryption and replication of the contents of the hard drives begins, and the data is eventually copied from the Process & Config server to the storage account in the vault. Once the initial replication of the protected systems’ hard drives has occurred, they can be added to a recovery plan.

But before we get too deep into recovery we first need to cover replication, recovery does us no good without the data.


Replication

It is important to plan and execute when dealing with important things. Protecting your employer’s data is paramount and just as important is protecting their critical services. That means before you begin making replication policies you plan them out. Only after all stakeholders have signed off on the plan, can you then execute on that plan by creating the policies.

When crafting replication policies, much will depend on the workloads and how important their data is. SAP, Salesforce, those types of systems: you cannot afford to lose the data in them without losing revenue. Your ConfigMgr server, well, if you built your architecture the way I, other MVPs, and Microsoft recommend, it can be down for quite a while, possibly more than a day, before anyone even notices, and even longer before problems really start. Keep in mind this is not your recovery plan; this is separate, so the choices you make for replication are independent (to a point) from failover and recovery. It would be wise to review your current replication policies and see if they need to be revised based on changes since they were created back in the 90s. Whether you revise them or not, by the time you create the policies in ASR everything should already be planned out and just a matter of filling in the boxes. Don’t just wing it; this is someone’s data, and it likely impacts revenue somehow.

The basic replication policy has three settings: name, recovery point retention, and app-consistent snapshot frequency.

The frequency of replication will depend on the type of systems being replicated.

For VMware and physical servers, replication frequency is not relevant, as changes are replicated continuously.

SAN replication is synchronous, and Hyper-V VMs can be replicated every 30 seconds, 5 minutes, or 15 minutes.

You can, of course, have multiple replication policies. And even though replication may be enabled, policies in place, and systems talking with ASR, until you add them to a replication policy, they are not replicating their data.

Staggering systems is a common tactic for the initial replication to control the amount of data, time of day, and to get the highest priority workloads protected first.

To create a Replication Policy

To create a new replication policy, in the Azure console:

  • Click Site Recovery infrastructure > Replication Policies > +Replication Policy.
  • In Create replication policy, specify a policy name.
  • In RPO threshold, specify the RPO limit. This value specifies how often data recovery points are created. An alert is generated if continuous replication exceeds this limit.

  • In Recovery point retention, specify (in hours) how long the retention window is for each recovery point. Replicated VMs can be recovered to any point in a window. Up to 24 hours retention is supported for machines replicated to premium storage, and 72 hours for standard storage.
  • In App-consistent snapshot frequency, specify how often (in minutes) recovery points containing application-consistent snapshots will be created.
  • Click OK to create the policy.
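The same policy can be created from PowerShell with the AzureRM.SiteRecovery module mentioned at the start. Treat this as a sketch: the parameter names below are from memory for the VMware (InMageAzureV2) scenario and may differ by module version, so check Get-Help New-AzureRmSiteRecoveryPolicy before relying on it.

```powershell
# Illustrative policy: 24 hours of retention, app-consistent snapshots hourly.
# Parameter names are assumptions -- verify against your module version.
New-AzureRmSiteRecoveryPolicy -Name "VMware-Policy" `
    -ReplicationProvider "InMageAzureV2" `
    -RecoveryPoints 24 `
    -ApplicationConsistentSnapshotFrequencyInHours 1
```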


    Questions about security, encryption, compliance?

    Adding Systems to your Replication Plan

    After you have created the replication policy rules, you can select VMs to add to the policy.

  • Select “3. Virtual Machines” and then from the blade choose each VM you want to add. They are displayed 10 at a time so you can browse the pages of VMs or use the search bar.
  • Simply select the box next to the VM to add it.
  • When you are done selecting the VMs click the OK button.

    When you add a VM to the replication policy you must then specify which OS type, Windows or Linux, and then from the list of drives specify which drive is the OS disk. It will do its best to guess which one is the correct one but you have to have one selected for each VM to proceed.

    The far-right column, Disks To Replicate, this is where you can choose which disks you want to include in replication. I mentioned earlier you could exclude disks that didn’t need to be replicated, and this is where you configure that.

    Keep in mind that a VM can belong to only one replication policy. So you may need to craft a special policy for that one system or workload, because if you do not include one of its disks here, that disk cannot be replicated as part of another policy.


    Now that we have replication working, we can take advantage of the time it takes for the initial replication to complete to prepare for and build our recovery plans.

    Setting Static IPs

    In most cases, and in ours, our on-premise systems require a static IP, and when a recovered system runs in Azure we need it to have the same static IP address on the same subnet, as if nothing had happened. After a protected system has finished its initial replication we can configure these settings.

    To configure a static IP for a protected system, in the Azure console, open the Recovery Services vault.

    • Click Replicated Items > select the name of the system you wish to configure.
    • Click Compute and Network on the replicated items Settings blade.
    • On the Compute and Network blade under the Network Properties select the appropriate Target Network > Target Subnet and then enter the static IP address in the Target IP field. When you are finished click Save.
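This setting can be scripted as well. The sketch below only builds the parameter set; the NIC and network IDs are hypothetical placeholders you would pull from the protected item, and the Set-AzureRmRecoveryServicesAsrReplicationProtectedItem call (as I understand the AzureRM ASR module) is left commented since it needs a live vault context:

```powershell
# Placeholder values; in a real run you would get these from the protected item, e.g.:
#   $rpi   = Get-AzureRmRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container -FriendlyName 'DC2'
#   $nicId = $rpi.NicDetailsList[0].NicId
$nicParams = @{
    PrimaryNic                 = '<nic-id>'                  # NIC to configure (placeholder)
    RecoveryNetworkId          = '<target-vnet-resource-id>' # the Target Network from the blade
    RecoveryNicSubnetName      = 'RecoverySubnet'            # the Target Subnet from the blade
    RecoveryNicStaticIPAddress = '172.16.1.10'               # the Target IP field
}
# Set-AzureRmRecoveryServicesAsrReplicationProtectedItem -InputObject $rpi @nicParams
$nicParams
```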



Recovery Plans

A recovery plan allows you to establish a grouping of protected VMs/workloads. All the VMs within the recovery plan are failed over as a unit during a disaster recovery situation. Within the recovery plan you can create one or more groups of VMs to control the startup process when the failover occurs. Here you can see I am going to add the same VM DC2 to a recovery plan.


Once I have added my protected items and created the recovery plan, I can then customize it, such as by grouping VMs. Here you can see the right blade has the group button.


There are actually four different customizations you can make to a recovery plan, and there are options to run any of them before a group begins as well as after a group has finished.

Add new groups – allows you to add recovery plan groups to the current one.

Add a manual action – This pauses the failover for a manual step you need to take or wait on to complete.

Add a script – This allows you to run a script before and after each group.

Add Azure runbooks – This allows you to extend recovery plans to do almost anything you need: automate tasks, create single-step recovery and failback, check for flooding before continuing. Almost limitless. I will use this to achieve quite a bit of automation during a failover.

To create a recovery plan, in the Azure Console.

  1. Click Recovery Plans > Create Recovery Plan. Specify a name for the recovery plan, and a source and target. The source location must have virtual machines that are enabled for failover and recovery.

    For a VMware VM or a physical on-premises server to Azure, select a configuration server as the source, and Azure as the target.

  2. In Select virtual machines, select the virtual machines (or replication group) that you want to add to the default group (Group 1) in the recovery plan.

At this point you can customize the recovery plan using groups and the four additional customizations discussed above.

Achieving Faster Recovery Times Using Runbooks and Workflows

Back in the Networking section I discussed the process of failing over an entire subnet during a disaster. I also listed several other requirements to be automated for a successful recovery. To achieve success, we could do all the steps manually, document each step of each task and hope the process was still the same when we actually needed to go through it, or we could automate them and include them as part of our recovery plan.

In this section I am going to briefly talk about Azure Automation and describe what we need to achieve and then demonstrate how easy it can be to automate these complex and inter-connected processes.

The automation of this process is actually quite extensive and requires additional details to make it usable. At its highest level there are nine steps in the process that have to be automated for a successful recovery.

  • Two VNets, each with a subnet, are created.
  • The peering between the new VNets is created.
  • The gateway subnet for the VPN connection on the external Vnet is created.
  • The local gateway connection and configuration for the VPN tunnel is created.
  • A request for a public IP address for the VPN GW is completed.
  • The VPN GW IP addressing configuration is created.
  • Then the VPN GW is built (this can take 45 minutes).
  • The on-premise VPN device needs to be configured.
  • Then finally, the VPN connection between the Azure and the on-premise VPN device is established.

In the process of writing this I have discovered that my original idea didn’t quite fit as I had hoped, but for the purpose of showing off the power of PowerShell runbooks we will stick with the original “requirements” listed above.


Make sure your VPN device is compatible

Azure Automation

Before we add the runbooks to our recovery plan they first have to exist in Azure Automation. If you haven't used Azure Automation yet you will need to set it up, and specifically have the automation account created.

What is Azure Automation?

Once you have Azure Automation set up and working you will also need to add PowerShell modules to your automation account. The runbooks will not work without them; it is similar to not having PowerShell modules installed on your server or work computer. If you are using a new automation account and have already added the modules to your subscription, you do not need to add them again; they are installed per subscription.

If you do know how to use Azure Automation, you may wonder why I don't just use a PowerShell Workflow instead. The advantages of a workflow over a runbook seem to make sense. I won't get into the details of why here, but I wasn't going to use parallel processing (we need these to run in serial), and checkpoints don't really buy us much here. Since the scripts are small, the PowerShell runbook should complete faster because it doesn't have to be compiled like a workflow does, and speed is really what we are after. There is an outage, after all.

As much as I would like to spend a lot of time on automation this is already lengthy enough so I won’t walk through the process of creating each runbook, testing it, and publishing it. If you are new to runbooks in Azure here is a great short walkthrough on creating your first Azure PowerShell runbook that explains it well enough to reuse my runbooks.

In each recovery plan we can use Azure Automation to inject a pre or post action between each group in the recovery plan. This lets us run code before any other action is carried out by the recovery plan, run code as the last action in the recovery plan, and of course anywhere in between those two points. I will insert a pre-action at the start that kicks off a runbook I have stored in my Azure Automation library.

When you select which type of action you want to run you are presented with the insert action blade where you can choose which runbook you want to run.

The recovery side is Azure in this case, since that is where the recovery is taking place. For security reasons, you must use the correct automation account to be able to see and run the runbook. And that is all you must do. The code to complete all the original tasks is in the block below. It could (should) be broken up into distinct runbooks instead of being combined, but I didn't think I would even get the time to build more than a couple of commands. Enjoy.

Param(
[Parameter(Mandatory = $true)]
[string]$ResourceGroupName,
[Parameter(Mandatory = $false)]
[string]$Location = 'westus',
[Parameter(Mandatory = $false)]
[string]$SharedKey = '!k$x0VEZwy8&UQ9wO',
[Parameter(Mandatory = $false)]
[string]$DnsServer,
[Parameter(Mandatory = $false)]
[string]$ExtVnetName = 'ExternalVnet',
[Parameter(Mandatory = $false)]
[string]$RecVnetName = 'RecoveryVnet',
[Parameter(Mandatory = $true)]
[string]$VPNDeviceIP
)
# Authenticate to Azure AD with the Run As connection
$Conn = Get-AutomationConnection -Name AzureRunAsConnection
Add-AzureRmAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint

# Subnets for each new VNet
$ExternalSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name ExternalSubnet -AddressPrefix '192.168.1.0/24'
$RecoverySubnet = New-AzureRmVirtualNetworkSubnetConfig -Name RecoverySubnet -AddressPrefix '172.16.1.0/24'

New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location

# Create the new VNets (append -DnsServer $DnsServer to set a custom DNS server)
$VNet1 = New-AzureRmVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $ExtVnetName -AddressPrefix '192.168.0.0/21' -Location $Location -Subnet $ExternalSubnet
$VNet2 = New-AzureRmVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $RecVnetName -AddressPrefix '172.16.0.0/21' -Location $Location -Subnet $RecoverySubnet

# Create the VNet-to-VNet peering between the new VNets
Add-AzureRmVirtualNetworkPeering -Name 'ExternalVnetToRecoveryVnet' -VirtualNetwork $VNet1 -RemoteVirtualNetworkId $VNet2.Id
Add-AzureRmVirtualNetworkPeering -Name 'RecoveryVnetToExternalVnet' -VirtualNetwork $VNet2 -RemoteVirtualNetworkId $VNet1.Id

# Create the gateway subnet for the VPN connection on the external VNet
Add-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '192.168.4.0/27' -VirtualNetwork $VNet1
Set-AzureRmVirtualNetwork -VirtualNetwork $VNet1

# Create the local (on-premise) gateway connection with multiple on-premise address prefixes
New-AzureRmLocalNetworkGateway -Name LocalSite -ResourceGroupName $ResourceGroupName -Location $Location -GatewayIpAddress $VPNDeviceIP -AddressPrefix @('10.0.0.0/24','20.0.0.0/24')

# Request the public IP address for the VPN gateway
$GWPIP = New-AzureRmPublicIpAddress -Name GWPIP -ResourceGroupName $ResourceGroupName -Location $Location -AllocationMethod Dynamic

# Set the VPN gateway IP addressing configuration
$VNetVPN = Get-AzureRmVirtualNetwork -Name $ExtVnetName -ResourceGroupName $ResourceGroupName
$SubnetVPN = Get-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $VNetVPN
$GwIpConfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name 'GwIpConfig' -SubnetId $SubnetVPN.Id -PublicIpAddressId $GWPIP.Id

# Create the VPN gateway (this can take 45 minutes)
New-AzureRmVirtualNetworkGateway -Name 'VNetGW' -ResourceGroupName $ResourceGroupName -Location $Location -IpConfigurations $GwIpConfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

# Configure the on-premise VPN device using the shared key and the public IP of the gateway

# Create the VPN connection - set variables
$Gateway = Get-AzureRmVirtualNetworkGateway -Name 'VNetGW' -ResourceGroupName $ResourceGroupName
$Local = Get-AzureRmLocalNetworkGateway -Name LocalSite -ResourceGroupName $ResourceGroupName

# Create the VPN connection
New-AzureRmVirtualNetworkGatewayConnection -Name 'ExtGWConnection' -ResourceGroupName $ResourceGroupName -Location $Location -VirtualNetworkGateway1 $Gateway -LocalNetworkGateway2 $Local -ConnectionType IPsec -RoutingWeight 10 -SharedKey $SharedKey

 


As for the runbooks, here is an example of the one I created in Azure that creates the two VNets and the peering between them.

When I run a test of it in the portal you can see it works. Notice on the left there are parameters to enter; these are based on the parameters specified in the first few lines of the runbook.


Testing Failover

It is recommended that for a test failover you choose a network that is isolated from the production recovery site network you provided in the Compute and Network settings for the virtual machine. By default, when you create an Azure virtual network it is isolated from other networks. The test network should mimic your production network:

  1. The test network should have the same number of subnets as your production network, with the same names as the subnets in your production network.
  2. The test network should use the same IP range as your production network.
  3. Update the DNS of the test network to the IP that you gave as the target IP for the DNS virtual machine under Compute and Network settings. Go through the test failover considerations for Active Directory section for more details.

Prepare to connect to Azure VMs after failover

Running the test failover

  1. Select Recovery Plans > test2, then click Test Failover.


  2. Select a recovery point to fail over to. You can use one of the following options:
    1. Latest processed: This option fails over all virtual machines of the recovery plan to the latest recovery point that has already been processed by the Site Recovery service. When you are doing a test failover of a virtual machine, the time stamp of the latest processed recovery point is also shown. If you are doing a failover of a recovery plan, you can go to an individual virtual machine and look at the Latest Recovery Points tile to get this information. As no time is spent processing unprocessed data, this is a low-RTO (Recovery Time Objective) failover option.
    2. Latest app-consistent: This option fails over all virtual machines of the recovery plan to the latest application-consistent recovery point that has already been processed by the Site Recovery service. When you are doing a test failover of a virtual machine, the time stamp of the latest app-consistent recovery point is also shown. If you are doing a failover of a recovery plan, you can go to an individual virtual machine and look at the Latest Recovery Points tile to get this information.
    3. Latest: This option first processes all the data that has been sent to the Site Recovery service, creating a recovery point for each virtual machine before failing over to it. This option provides the lowest RPO (Recovery Point Objective), because the virtual machine created after failover has all the data that had been replicated to the Site Recovery service when the failover was triggered.
    4. Latest multi-VM processed: This option is only available for recovery plans that have at least one virtual machine with multi-VM consistency ON. Virtual machines that are part of a replication group fail over to the latest common multi-VM consistent recovery point. Other virtual machines fail over to their latest processed recovery point.
    5. Latest multi-VM app-consistent: This option is only available for recovery plans that have at least one virtual machine with multi-VM consistency ON. Virtual machines that are part of a replication group fail over to the latest common multi-VM application-consistent recovery point. Other virtual machines fail over to their latest application-consistent recovery point.
    6. Custom: If you are doing a test failover of a virtual machine, you can use this option to fail over to a particular recovery point.
  3. Select an Azure virtual network: Provide an Azure virtual network where the test virtual machines will be created. Site Recovery attempts to create the test virtual machines in a subnet with the same name and with the same IP as provided in the Compute and Network settings of the virtual machine. If a subnet of the same name is not available in the Azure virtual network provided for the test failover, the test virtual machine is created in the first subnet alphabetically. If the same IP is not available in the subnet, the virtual machine gets another available IP address in the subnet. Read this section for more details
  4. If you’re failing over to Azure and data encryption is enabled, in Encryption Key select the certificate that was issued when you enabled data encryption during Provider installation. You can ignore this step if you have not enabled encryption on the virtual machine.
  5. Track failover progress on the Jobs tab. You should be able to see the test replica machine in the Azure portal.
  6. To initiate an RDP connection to the virtual machine, you will need to add a public IP on the network interface of the failed-over virtual machine. If you are failing over to a Classic virtual machine, you need to add an endpoint on port 3389.
  7. Once you’re done, click Cleanup test failover on the recovery plan. In Notes record and save any observations associated with the test failover. This deletes the virtual machines that were created during test failover.

Prerequisites

This is a compiled list of prerequisites listed throughout the document. I didn't feel like I did a good job making them known in some parts, so I wanted to put them all in a single place to hopefully make finding them easier.

Review the prerequisites for VMware to Azure replication

Prepare an account to install the Mobility service agent on VMs

Which VMs are not compatible and why

Config & Process server minimum requirements

Your Azure user account needs specific permissions to enable replication of a VM to Azure

To install the Config & Process server you must also have a VMware VM ready and an account with the proper rights in VMware for discovery.

Make sure your VPN device is compatible

To-Do’s

This is like the prerequisites section above. I wanted to capture all of the To-Do’s in one place so they were easier to find and utilize.

Download the site recovery planning tool and run it!

Carefully consider growth factor

Which VMs are not compatible and why

Identify and list your subnets to be failed over from production to Azure

List of static IP addresses for servers being protected by ASR


STIG Compliance with SCAP and DCM in Configmgr

In this post, I demonstrate how to use System Center Configuration Manager to evaluate clients for STIG compliance.

The process at a high level is:

      1. Download the corresponding STIG compliance SCAP files.
      2. Convert the downloaded files into a CAB file for import.
      3. Import the CAB file into Configmgr
      4. Deploy the Configuration Baseline to the corresponding device collection
      5. Evaluate the compliance reporting from the clients
      6. Export the compliance data to SCAP format

Important information

For SCAP 1.0 – 1.2, if you don't specify the benchmark/profile, scaptodcm.exe will generate a DCM CAB for each benchmark in the content file. Fun.

If you specify multiple values for a single variable in an external variable file, then the scaptodcm.exe tool will treat the values as an array.

The settings you may see in the screenshots are NOT worthy of a production site; my clients and site servers have very aggressive schedules because I don't have the same concerns as I would when running in production. DO NOT use my schedule settings.

Don't confuse the two executables: scaptodcm.exe and dcmtoscap.exe have different jobs.

When converting the compliance data to a results file, if the clients evaluated multiple datastream \ benchmark \ profiles, use the -select parameter to specify the same datastream \ benchmark \ profile that was run on the client.

SCAP 3.0 extensions must be installed on your Configmgr server.

At the bottom you will find two tables: one defines the command line parameters and the other lists the relevant log files if you need to troubleshoot.

 

Download the appropriate SCAP files

1.  Download the SCAP content from: http://iase.disa.mil/stigs/scap/Pages/index.aspx

(Screenshot: the DISA download listing showing the Windows 2012 and 2012 R2 DC STIG Benchmark and the Windows 2012 and 2012 R2 MS STIG Benchmark, Ver 2, Rel 6, dated 10/28/2016.)

2.  Extract the ZIP file to a folder for conversion. I just created a subfolder under the directory where scaptodcm.exe lives to keep things simple. You should choose the location that makes sense to you.

(Screenshot: the extracted files under Program Files (x86)\SCAP Extensions\2012: the U_Windows_2012_and_2012_R2_MS_V2R6_STIG_SCAP_1-1 Benchmark-cpe-dictionary.xml, Benchmark-cpe-oval.xml, Benchmark-oval.xml, and Benchmark-xccdf.xml files.)

In this example, we will import the Windows 2012 and 2012 R2 MS STIG Benchmark – Ver 2, Rel 6.

3.  After extracting the ZIP file, from a command prompt with administrative permissions run the appropriate command line to convert the SCAP data stream file and XCCDF benchmark profile to a DCM .CAB file (assuming you are using a SCAP 1.0 or 1.1 file).

In this example the file is SCAP 1.1 so the parameter -xccdf is required.

Below is an example command line of each type of content, 1.0/1.1, 1.2, and Oval. See my note above regarding the -select parameter.

SCAP 1.0/1.1 Content: scaptodcm.exe -xccdf scap_gov.nist_Test-Win10_xccdf.xml -cpe scap_gov.nist_Test-Win10_cpe.xml -out <folder> -select XCCDFBenchmarkID/ProfileID

SCAP 1.2 Content: scaptodcm.exe -scap scap_gov.nist_USTCB-ie11.xml -out <folder> -select SCAPDataStreamID/BenchMarkID/ProfileID

SCAP OVAL Content: scaptodcm.exe -oval OvalFile.xml -variable OvalExternalVariableFile.xml -out <folder>
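Since the parameter set differs by content type, it can help to build the argument list once and reuse it when converting many files. This is only a sketch: the helper name is mine, and the switches are taken from the example command lines above.

```powershell
# Build a scaptodcm.exe argument list for a given SCAP content type.
function Get-ScapToDcmArgs {
    param(
        [ValidateSet('1.0-1.1', '1.2', 'oval')]
        [string]$ContentType,
        [string]$InputFile,
        [string]$CpeFile,   # only needed for 1.0/1.1 content
        [string]$OutDir,
        [string]$Select     # datastream/benchmark/profile IDs, as applicable
    )
    switch ($ContentType) {
        '1.2'     { $argList = @('-scap',  $InputFile) }
        '1.0-1.1' { $argList = @('-xccdf', $InputFile, '-cpe', $CpeFile) }
        'oval'    { $argList = @('-oval',  $InputFile) }
    }
    $argList += @('-out', $OutDir)
    if ($Select) { $argList += @('-select', $Select) }
    $argList
}

# Usage (requires the SCAP extensions to be installed):
#   $a = Get-ScapToDcmArgs -ContentType '1.2' -InputFile 'scap_ds.xml' -OutDir '.\out' -Select 'DS/Benchmark/Profile'
#   & .\scaptodcm.exe @a
```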

The conversion process may take a minute or more depending on the XML file.

Running ScapToDcm.exe with no parameters shows the usage for each content type:

Usage for SCAP 1.2: ScapToDcm.exe -scap ScapDataStreamFile [-out OutputDirectory] [-select ScapDataStreamID/XccdfBenchmarkID/XccdfProfileID] [-log LogFileName] [-batch 500]
Usage for SCAP 1.1/1.0: ScapToDcm.exe -xccdf XccdfFile -cpe CPEFile [-out OutputDirectory] [-select XccdfBenchmarkID/XccdfProfileID] [-log LogFileName] [-batch 500]
Usage for OVAL: ScapToDcm.exe -oval OvalFile [-variable OvalExternalVariableFile] [-out OutputDirectory] [-log LogFileName] [-batch 500]

(Screenshot: the conversion output validating the XCCDF schema, then processing each XCCDF profile (MAC-1/2/3 Classified, Public, and Sensitive) and the OVAL and CPE files of the Windows 2012 / 2012 R2 MS V2R6 STIG benchmark.)

Once the conversion is completed, the CAB file should be located in the directory specified in the command line with the -out parameter.

Next, we import the file into Configmgr for deployment, evaluation, and compliance reporting.

Import the Compliance Settings CAB files into Configmgr

The next step in the process is to use the Configuration Manager Console to import the Compliance Settings-compliant .cab files into Configuration Manager. When you import the .cab files you created earlier in this process, one or more configuration items and configuration baselines are created in the Configuration Manager database. Later in the process, you can assign each of the configuration baselines to a computer collection in Configuration Manager.

To import the Compliance Settings CAB files into Configmgr

1.  In the Configuration Manager Console, navigate to Assets and Compliance > Compliance Settings > Configuration Baselines.

2.  In the menu bar, click the blue arrow Import Configuration Data.

3.  To begin the import process click the Add button.

4.  Browse to the directory that we specified earlier with the scaptodcm.exe tool -out <path\file>, and select the CAB file that was created. 

Click Yes in the Configuration Manager Security Warning dialog box.

(Dialog text: "The publisher of Windows 2012_MS STIG.cab file could not be verified. Are you sure that you want to import this file?")

You should now see the CAB file listed in the Select Files page similar to this.

(Screenshot: the Import Configuration Data Wizard, Select Files page, listing the Windows 2012 MS STIG.cab file.)

You could add more than one CAB file here, if you created or had multiple CAB files to import simply repeat the steps 3 and 4.

5.  Click the Next button.

This starts the verification process for the import; it can take several minutes depending on the number of Configuration Items being imported.

(Screenshot: the Verifying Configuration Data progress dialog.)

6.  Once the verification is finished you will be at the Summary page which will list all the Configuration Items being imported.

(Screenshot: the Summary page listing the 408 configuration items to be imported.)

7.  Click the Next button and this will start the import into Configmgr. This may also take a few minutes.

(Screenshot: the Confirmation page showing each configuration item imported with a Success status.)

8.  Click the close button to exit the import wizard.

The new configuration baseline appears in the information pane of the Configuration Manager Console.

(Screenshot: the new configuration baseline, Windows Server 2012 / 2012 R2 Member Server Security Technical Implementation Guide, listed as Enabled.)

The name of the configuration baseline is taken from the display name section of the XCCDF/Datastream XML and is constructed using the following convention:

ABC[XYZ], where ABC is the XCCDF Benchmark ID, and XYZ is the XCCDF Profile ID (if a profile is selected).
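The convention can be expressed as a tiny helper; the function name is mine, purely for illustration:

```powershell
# Compose the baseline display name per the ABC[XYZ] convention described above:
# benchmark ID, plus the profile ID in brackets when a profile is selected.
function Get-BaselineName {
    param(
        [Parameter(Mandatory)] [string]$BenchmarkId,
        [string]$ProfileId
    )
    if ($ProfileId) { '{0}[{1}]' -f $BenchmarkId, $ProfileId } else { $BenchmarkId }
}

# Get-BaselineName -BenchmarkId 'Windows_2012_MS_STIG' -ProfileId 'MAC-1_Sensitive'
# returns 'Windows_2012_MS_STIG[MAC-1_Sensitive]'
```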

 

Assign configuration baselines to the computer collections

Prior to assigning the configuration baseline to a collection of computers, ensure that you have created the appropriate collections targeting the type of clients you want to assess.

Continuing with the same STIG benchmark as earlier, I am going to target my Windows Server 2012 and Windows Server 2012 R2 clients, but exclude my domain controllers since they have a different set of compliance checks.

(Screenshot: the query statement properties for the SRV 2012 Servers collection, with criteria on Operating System Name and Version and on Computer System Domain Role.)

After creating the appropriate computer collections for the clients that you want to assess for compliance, you are ready to assign the Configuration Baselines that you imported. 

Here is the WQL syntax for my collection, which includes, Server 2012, Server 2012 R2, but excludes my domain controllers.

select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_COMPUTER_SYSTEM on SMS_G_System_COMPUTER_SYSTEM.ResourceID = SMS_R_System.ResourceId where SMS_R_System.OperatingSystemNameandVersion like "%Server%" and (SMS_R_System.OperatingSystemNameandVersion like "%6.2%" or SMS_R_System.OperatingSystemNameandVersion like "%6.3%") and SMS_G_System_COMPUTER_SYSTEM.DomainRole < 4

The key in this query to exclude domain controllers is the "SMS_G_System_COMPUTER_SYSTEM.DomainRole < 4" criterion.

DomainRole 4 = LM_Workstation, LM_Server, SQLServer, Backup_Domain_Controller, Timesource, NT, DFS

DomainRole 5 = LM_Workstation, LM_Server, SQLServer, Primary_Domain_Controller, Timesource, NT, DFS

DomainRole 3 = LM_Workstation, LM_Server, NT, Server_NT

DomainRole 1 = LM_Workstation, LM_Server, SQLServer, NT, Potential_Browser, Backup_Browser

NT indicates a workstation

Server_NT indicates a member server

Notice, however, that the PDC and BDC have neither of these designations.

SQL_Server, Timesource, and DFS are all optional roles, so you may not see them in the roles description.
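The `< 4` criterion is easier to see against the Win32_ComputerSystem DomainRole values (0-5); a quick sketch:

```python
# Win32_ComputerSystem.DomainRole values; the WQL filter
# "DomainRole < 4" keeps everything below the domain controllers.
DOMAIN_ROLES = {
    0: "Standalone Workstation",
    1: "Member Workstation",
    2: "Standalone Server",
    3: "Member Server",
    4: "Backup Domain Controller",
    5: "Primary Domain Controller",
}

def is_excluded_dc(domain_role: int) -> bool:
    """Mirror of the collection criterion: True when the client is a DC."""
    return domain_role >= 4

for role, name in DOMAIN_ROLES.items():
    print(role, name, "excluded" if is_excluded_dc(role) else "included")
```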

To see the descriptions and their ID numbers stored in the database, you can run this query:

SELECT TOP (1000) [ResourceID],[Description0],[Domain0],[DomainRole0],[Manufacturer0],[Model0],[Name0],[NumberOfProcessors0],[Roles0],[Status0],[SystemType0],[UserName0]

FROM [CM_MSC].[dbo].[v_GS_COMPUTER_SYSTEM]

Note: Change the MSC to your site code to run the query.
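If you script the query, substituting your site code is a one-liner; a minimal sketch (the column list is trimmed for brevity):

```python
SITE_CODE = "MSC"  # change to your site code

# Template of the query above; only the site database name varies
query_template = (
    "SELECT TOP (1000) [ResourceID],[Name0],[DomainRole0],[Roles0] "
    "FROM [CM_{site}].[dbo].[v_GS_COMPUTER_SYSTEM]"
)

print(query_template.format(site=SITE_CODE))
```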

To assign a configuration baseline to a computer collection

1.  In the Configuration Manager console, go to Assets and Compliance > Compliance Settings > Configuration Baselines.

2.  In the navigation pane, click <configuration_baseline>, where <configuration_baseline> is the name of the configuration baseline that you want to assign to a computer collection.

[Screenshot: the Configuration Baselines node with the Windows Server 2012 / 2012 R2 Member Server STIG baseline selected.]

3.  In the menu bar, click the green Deploy arrow. (Notice a theme?)

4.  In the Deploy Configuration Baseline Dialog click the Browse button and select the collection you want to target.

[Screenshot: the Deploy Configuration Baselines dialog, showing the selected baseline, the remediation and alert check boxes, the target collection picker, and the compliance evaluation schedule.]

DO NOT select remediation. This particular baseline's Configuration Items do not contain any remediation, and selecting remediation will cause problems with your baseline.

FYI:  Both the Configuration Baseline, and the Configuration Items it contains, must be configured for remediation for remediation to work.

You will need to use your discretion on the alerts and the schedule. I have mine set to 4 hours – but mine is a lab. The default client setting is every 7 days.

FYI:  If you have multiple baselines deployed (or just one), you can use the Send Schedule tool in the Configmgr Toolkit to view a client's scheduled time to run evaluations. You can also use it to trigger the evaluation.

If you are going to target multiple collections, you will need to repeat this process. If, however, you want to deploy multiple Configuration Baselines to the same collection, you can select and add them without having to repeat this process.

Verify that the compliance data has been collected

Before exporting the compliance data back to SCAP format, we need to verify that the data has been collected. After you assign a Configuration Baseline to a computer collection, the client on each computer in that collection evaluates its settings and automatically gathers the compliance information. The client will then deliver that information to its management point, then the MP will deliver the information to the primary site where the client is assigned. Finally, the compliance information is stored in the Configuration Manager database.

From the client side, you can see whether the baseline has been downloaded by viewing the Configurations tab in the Configmgr client properties.

[Screenshot: Configuration Manager Properties, Configurations tab, showing the assigned Windows Server baseline and its last evaluation time.]

It will also indicate whether the baseline has run and what the results were. If you select View Report, you will get a report that shows pass/fail for each CI. You can see this client has run the baseline and is non-compliant.

[Screenshot: Configuration Manager Properties, Configurations tab, showing the baseline evaluated with a compliance state of Noncompliant.]

Verify that the compliance data has been loaded into the database

There are a couple of options in the Configmgr console to check whether the data has been loaded into the database yet. Under Monitoring > Deployments you can see the compliance percentage.

[Screenshot: Monitoring > Deployments showing the baseline deployment to the SRV 2012 and 2012 R2 collection and its compliance percentage.]

You can also view the count of compliant and non-compliant clients, as well as failures under Assets and Compliance > Compliance Settings > Configuration Baselines.

[Screenshot: the Configuration Baselines node showing the Compliance Count, Noncompliance Count, and Failure Count columns.]

Both of these are still at zero… and it has been a while since the client completed its evaluation. I list the relevant log files below for troubleshooting, but we don't need those here. If you are viewing the baseline in the console by selecting the Configuration Baselines node, look to the left of the green Deploy arrow we used earlier in the menu bar. The summarization has not run yet. You can see the schedule using the Schedule Summarization button (the default is 2 hours), and even force it to run now by clicking the Run Summarization button.

[Screenshot: ribbon buttons including Show Members, Schedule Summarization, Run Summarization, Export Baseline, and Deploy.]

I forced mine to run and now I have the non-compliant results from my client showing in the console.

[Screenshot: the Compliance Count and Noncompliance Count columns now populated.]

Export compliance results to SCAP

The next task in the process is to export the Compliance Settings compliance data to SCAP format, which is an ARF report file in XML \ human-readable format.

Exporting the compliance data to an XCCDF \ DataStream ARF results file

1.  On your Configmgr server where you installed the SCAP extensions, click Start > All Programs > SCAP Extensions > SCAP Extensions.  Or you can open a command prompt and navigate to the folder.

2.  At the command prompt, enter the correct command line and press ENTER.

Note: The exe is dcmtoscap.exe not scaptodcm.exe like we used when creating the CAB file.

FYI:  If it helps to make sense of the naming of the file, in Configmgr 2007 Configuration Baselines and Configuration Items were part of the Desired Configuration Management feature, or DCM. Trust me, I wrote the chapter on it in the 2007 R2 book.

[Screenshot: the dcmtoscap.exe command line run from the SCAP Extensions folder.]

Here is my command line if you want to copy-n-paste it.

dcmtoscap.exe -xccdf .\2012\U_Windows_2012_and_2012_R2_MS_V2R6_STIG_SCAP_1-1_Benchmark-xccdf.xml -cpe .\2012\U_Windows_2012_and_2012_R2_MS_V2R6_STIG_SCAP_1-1_Benchmark-cpe-dictionary.xml -server cm-cb.corp.configmgr.com -database CM_MSC -collection MSC00014 -out .\output

Below are the command line parameters for each type.

For SCAP 1.0/1.1 content

dcmtoscap.exe -xccdf <xccdf.xml> -cpe <cpe.xml> -server <CMSiteServerMachineName> -database <CMSiteDatabaseName> -collection <deviceCollectionID> OR -machine <host name> -select <xccdfBenchmark \ profile> -out <outputResultFolder>

For SCAP 1.2 content (such as the latest USGCB content):

dcmtoscap.exe -scap <scapdatastreamfile.xml> -server <CMSiteServerMachineName> -database <CMSiteDatabaseName> -collection <deviceCollectionID> OR -machine <host name> -select <datastream \ xccdfBenchmark \ profile> -out <outputResultFolder>

For single OVAL file with external variables

dcmtoscap.exe -oval <singleOvalFile.xml> [-variable <externalVariableFile.xml>] -server <CMSiteServerMachineName> -database <CMSiteDatabaseName> -collection <deviceCollectionID> -out <outputResultFolder>
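All three variants share the server, database, collection, and output switches, so if you run this often, a small helper that assembles the argument list can save typing. This is my own sketch, not part of the SCAP extensions, and the file names in the example are placeholders:

```python
def dcmtoscap_args(content, server, database, collection, out, select=None):
    """Assemble a dcmtoscap.exe argument list. `content` holds the
    content-type switches, e.g. {"-xccdf": ..., "-cpe": ...} for
    SCAP 1.0/1.1, {"-scap": ...} for 1.2, or {"-oval": ...}."""
    args = ["dcmtoscap.exe"]
    for switch, value in content.items():
        args += [switch, value]
    args += ["-server", server, "-database", database,
             "-collection", collection]
    if select:  # add when a benchmark/profile selection is needed
        args += ["-select", select]
    args += ["-out", out]
    return args

# SCAP 1.1 example mirroring the command line above
print(" ".join(dcmtoscap_args(
    {"-xccdf": r".\2012\benchmark-xccdf.xml",
     "-cpe": r".\2012\benchmark-cpe-dictionary.xml"},
    server="cm-cb.corp.configmgr.com", database="CM_MSC",
    collection="MSC00014", out=r".\output")))
```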

And if all goes well you should end up with something like this for your output.  

[Screenshot: the output folder containing the generated XCCDF/CPE/OVAL result XML files and a text report for the Windows 2012 MS STIG.]

And then we are done.  

Next will be how to use the Baseline Configurations and check for compliance against VM’s running in Azure.  Then, I will conclude with custom remediation when clients fail evaluation.

Command line parameters for SCAP extensions

-server [SQLServer\SQLInstance] – Specify the name of the Configmgr site database server and the SQL instance. Required: Yes

-database [SQLDatabase] – Specify the name of the Configuration Manager site database. Required: Yes

-collection [collection id] – Specify the collection ID to generate the SCAP report for. Required: Yes (when -machine is not specified)

-machine [machine name] – Specify the computer name to generate the SCAP report for. Required: Yes (when -collection is not specified)

-organization [organization name] – Specify the organization name to display in the report. It can be separated by ';' to specify a multi-line organization name. Required: No

-type [thin / full / fullnosc] – Specify the OVAL result type: thin result, full result, or full result without system characteristics. Required: No (if not specified, the default value is full)

-cpe [cpe-dictionary file] – Specify the CPE dictionary file. Required: Yes (for SCAP 1.0 \ 1.1)

-scap [scap data stream file] – Specify the SCAP data stream file. Required: Yes (for a SCAP 1.2 data stream; mutually exclusive with -xccdf and -oval \ -variable)

-xccdf [xccdf file] – Specify the XCCDF file. Required: Yes (for SCAP 1.0 \ 1.1 XCCDF; mutually exclusive with -scap and -oval \ -variable)

-oval [oval file] – Specify the OVAL file. Required: Yes (for a standalone OVAL file; mutually exclusive with -xccdf and -scap)

-variable [oval external variable file] – Specify the OVAL external variable file. Required: No (optional for a standalone OVAL file when there is an external OVAL variable file; mutually exclusive with -xccdf and -scap)

-select [xccdf benchmark \ profile] – Select the XCCDF benchmark and profile from either the SCAP data stream or the XCCDF file. Required: Yes (for SCAP 1.0 \ 1.1 and 1.2; a selection must be made to generate a report matching the corresponding DCM baseline in the Configuration Manager database)

-out [output directory] – Specify where to output the Compliance Settings cab file. Required: No (if not specified, the tool only lists the content without conversion)

-log [log file] – Specify the log file. Required: No (if not specified, the log is written to the SCAPToDCM.log \ DCMtoSCAP.log file)

-help / -? – Print out tool usage. Required: No


Log Files

These are the relevant log files for troubleshooting the deployment and evaluation of Configuration Baselines

CIAgent.log – Records details about the process of remediation and compliance for compliance settings, software updates, and application management.
CIDownloader.log – Records details about configuration item definition downloads.
CITaskManager.log – Records information about configuration item task scheduling.
DCMAgent.log – Records high-level information about the evaluation, conflict reporting, and remediation of configuration items and applications.
DCMReporting.log – Records information about reporting policy platform results into state messages for configuration items.
DcmWmiProvider.log – Records information about reading configuration item synclets from Windows Management Instrumentation (WMI).
PolicyAgent.log – Records requests for policies made by using the Data Transfer Service.
Scheduler.log – Records activities of scheduled tasks for all client operations.
StatusAgent.log – Records status messages that are created by the client components.
DataTransferService.log – Records all BITS communication for policy or package access.
PolicyEvaluator.log – Records details about the evaluation of policies on client computers, including policies from software updates.

Configmgr SQL Server Sizing and Optimization

SQL-Calc-02

6/19/2017 – Docs.com is being retired.  Here is the link on OneDrive.

Update: Link for calculator updated.  Any problems email, tweet, or comment.  Thanks for letting me know.

Update II: Link updated after continued download and editing issues for some.  I moved the file to docs.com for sharing.  Docs.com does require you to sign in to download using an O365, Facebook, or Outlook.com / Live.com / Hotmail.com account.  I have also added it to Dropbox; if you really don't want to sign in to download the calculator, you can get it here.  I will be testing out the collection features and posting other files to docs.com, since creating an article with every doc isn't always feasible.  Any problems, just leave a comment, email, tweet, and so on.

Several weeks ago I wrote a post about sizing for Configmgr, and a few days later I was discussing it with two friends and MVPs when they pointed out that my information about using a single file for the Configmgr data file was incorrect.  I had gotten this information from another friend who works with SQL, so I shared that info with them, and they told me I (and he) were WRONG!  It is a good thing my wife never reads my blog, because if she saw that I had admitted publicly, in writing, that I was wrong, it would probably be printed up as wallpaper and used in every room in our house, with a large all-weather banner hanging from our roof for added effect.

So I have been doing a lot of reading on SQL over the last several weeks, and I have updated my Configmgr database sizing worksheet quite a bit, including tabs covering disk configuration and server memory calculations.  For most people, this information will be nothing more than interesting; for others like me who architect, build, and remediate Configmgr, it can be quite handy.

There is quite a bit of information I learned that isn't included in this worksheet.  I will briefly mention some of it below should you want to do your own research, or just send me an email and I will be happy to answer any questions and share my research notes, stored in OneNote of course.

And if you find any errors, please let me know.

Optimize for Ad Hoc Workloads – True
Set the “AUTO_CLOSE” option to OFF
Rebuild Indexes CMMonitor maintenance db via Steve Thompson
Update Statistics Scheduled Tasks or Auto
Adjust Power Savings Plan to MAX
HBA Drivers and Queue Depth
Virtualized SQL requires additional consideration over physical
Exclude SQL processes, folders, and files from AV just like Inbox folders
Enable instant file initialization
NIC performance and tweaks for RSS / vRSS, VMQ
WSUS and SUP db need love too
Performance Counters
SQLServer:Buffer Manager -> Buffer cache hit ratio (>90-95%)
SQLServer:Buffer Manager -> Free pages (>640)
SQLServer:Buffer Manager -> Lazy writes/sec (<20)
SQLServer:Buffer Manager -> Page life expectancy (>300)
SQLServer:Buffer Manager -> Page reads/sec (<90)
SQLServer:Buffer Manager -> Page writes/sec (<90)
SQLServer:Memory Manager -> Target Server Memory (KB) (Target >= Total)
SQLServer:Memory Manager -> Total Server Memory (KB) (Target >= Total)
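Those thresholds are easy to turn into a quick pass/fail check over sampled values; a sketch (the `check` helper and the sample numbers are mine):

```python
# (comparison, threshold) pairs straight from the counter list above
THRESHOLDS = {
    "Buffer cache hit ratio": (">", 90),
    "Free pages": (">", 640),
    "Lazy writes/sec": ("<", 20),
    "Page life expectancy": (">", 300),
    "Page reads/sec": ("<", 90),
    "Page writes/sec": ("<", 90),
}

def check(samples: dict) -> list:
    """Return the counters whose sampled value violates its threshold."""
    bad = []
    for name, value in samples.items():
        op, limit = THRESHOLDS[name]
        ok = value > limit if op == ">" else value < limit
        if not ok:
            bad.append(name)
    return bad

print(check({"Page life expectancy": 120, "Lazy writes/sec": 5}))
```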

Configmgr Sizing Worksheets

Hierarchy Planning Worksheet v2.2

DB Sizing Worksheet v2.2

Several years ago Kent A. and someone else put out a couple of spreadsheets on database sizing and server sizing for Configmgr.  I used them off and on for a few years, and over time I have revised them based on newer SQL Server best practices (SQL 2012 R2+).

In the days of SQL 2008 and earlier, the common db practice was to have multiple files for the Configmgr database (MDF & NDF files), while the TempDB was configured to use a single file (MDF file).  And all database transaction log files (LDF files) were configured as single files alongside each database.

TempDB or tempdb database: a globally available database used for holding temporary objects such as local and global tables, stored procedures, table variables, row versions, and query results. The tempdb is temporary; it gets recreated at each reboot or SQL service start.  It is literally empty at each start, all data is purged, and the db cannot be backed up.  The tempdb.mdf is the data file; templog.ldf is the log file for the tempdb.  The tempdb autogrowth setting can cause serious performance issues if the size of the db is too small and it is constantly growing the db.  A common practice when running a SQL server in an Azure VM is to put the tempdb files on the D drive since the drive is non-persistent across reboots.

The best practice for SQL 2012 R2 and beyond is to use a single database file for the Configmgr database and multiple files for the TempDB files (MDF & NDF files).  The transaction log files (LDF files) are still configured for a single file as before.  My buddy Steve Thompson points this out in a recent post where he discusses the proper tempdb creation practices.  Steve is also a former SQL MVP, now a Configmgr MVP so he knows both well.
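The related rule of thumb for the tempdb data file count, one file per logical core capped at eight, is trivial to sketch (go beyond eight only if you actually see allocation contention):

```python
def tempdb_data_files(logical_cores: int) -> int:
    """Common guidance: one tempdb data file per logical core, up to 8."""
    return min(logical_cores, 8)

for cores in (2, 4, 16):
    print(cores, "cores ->", tempdb_data_files(cores), "tempdb data files")
```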

The topic of using a CAS or multiple primaries for a customer came up on a discussion list today, and when I replied I had screenshots of my SQL sizing spreadsheet and my site sizing spreadsheet.  A few people asked me for copies of them.  They can be downloaded now, and as they get updated I will do my best to make the new versions available.

Any questions please let me know.

P.S. SQL should always be local.

Azure Stack Home Lab Build

In the beginning there was…nothing

Microsoft Azure Stack is not something I could build and run on my desktop like my Configmgr lab, which runs on my W10 desktop while I use it for work as well.  I had to decide whether I wanted to spend the money on a server or wait for the hardware to fall into my hands.  Because I saw this as a real opportunity to learn something that would be valuable to my customers and myself, and because my wife…I mean CFO, approved the request, I purchased the individual parts to build my own server for my home lab.

Having had a server in my home lab previously, I knew of the drawbacks: noise and heat.  Lots of both.  Doing a customer demo when you are sweating and they cannot hear you makes things difficult.  That meant I had to meet the hardware requirements of Azure Stack, and it had to be quiet and cool.

And then there was…

After lots of research, I ordered all of my hardware.  Things have not changed much since I got my A+ back in 1995, thankfully.  And yes, it did boot the first time I powered it on.  I won't get into the hardware details, but you can take a stab at it based on the images.

Empty Case
The case is a little larger than I expected. It has plenty of room to add additional drives later on though. It is behind my desk between the desk and the haze-grey wall you see here. It would actually fit under my desk but until I decide to move my Dell desktop it will stay back there.

 

IMG_0870
MSI motherboard. While it is a gaming motherboard, it supports my needs, which don't include gaming.
IMG_0874
Motherboard installed, i7 CPU installed, H2O-powered CPU cooler installed and connected. The memory is installed, but I later moved it for quad channel.
Hard Drives
This shows the five drives mounted. It is a bit blurry, but my Pearl Izumi shoes are in focus.
First boot
This shows the system running. Memory moved for quad channel, video card mounted, power supply, so on.
CPU Temp
Closer shot of the system running. You can see the “40” indicating the CPU temp.
If I were in charge of the world
And now Windows Server 2016 TP4 installing from USB. With a shot of my whiteboard cleaned off for Azure Stack testing. And also my daughter's "If I Were In Charge Of The World" worksheet from first grade.
MAS Extracting
Azure Stack extracting in Server 2016 TP4

IMG_0887

You can download the bits for Azure Stack here, and this is a link to the instructions on how to set up Azure Stack.  The instructions assume you have met the list of hardware requirements listed here and other requirements listed here.

Since I had a few hours while it installed, I went for a run out at Goldmine trail, which is about 20 minutes from me.

IMG_0890

The install threw an error, and I had to go visit a customer to upgrade them to Configmgr current branch, so it would have to wait until I got back.  At least my server was nice and warm at home.

DCCX7560

Back at home.  My install on the base OS did not work.  It complained that my time was off.  There is no UTC/local time setting in my BIOS, and here in AZ we don't do DST, so today we are equal to MST.  All times and time zones matched, so after talking to a couple of Azure TSPs I decided to install using the VHD method, which worked; not sure why, but it did.  It only took about two hours to install instead of the estimated four.

After returning from the northwest…

I completed the setup in my lab and it was pretty painless to get running.  I have since reinstalled it on the same system, starting from scratch.

When setup is complete and you run Failover Cluster Manager or Hyper-V Manager, you will see the list of VMs running.  If the PowerShell script throws an error during setup, fix the issue and then run the script again; it will pick up where it left off.  At the end you should have something like this.

ClusterMGR-01

Azure Stack VMs

Here is a breakdown of each VM you have running once the installation of Azure Stack has completed:

ADVM Virtual machine that hosts Active Directory, DNS, and DHCP services for Microsoft Azure Stack. These infrastructure foundational services are required to bring up the Azure Stack as well as the ongoing maintenance.

ACSVM Virtual machine that hosts the Azure Consistent Storage services. These services run on the Service Fabric on a dedicated virtual machine.

MuxVM Virtual machine that hosts the Microsoft software load balancer component and network multiplexing services.

NCVM Virtual machine that hosts the Microsoft network controller component, which is a key component of the Microsoft software-defined networking technology. These services run on the Service Fabric on this dedicated virtual machine.

NATVM Virtual machine that hosts the Microsoft network address translation component. This enables outbound network connectivity from Microsoft Azure Stack.

xRPVM Virtual machine that hosts the core resource providers of Microsoft Azure Stack, including the Compute, Network, and Storage resource providers.

SQLVM Virtual machine that hosts the SQL Server instances used by various fabric services (ACS and xRP services).

PortalVM Virtual machine that hosts the Control Plane (Azure Resource Manager) and Azure portal services and various experiences (including services supporting admin experiences and tenant experiences).

ClientVM Virtual machine that is available to developers for installing PowerShell, Visual Studio, and other tools.

And…Storage services in the operating system on the physical host include:

ACS Blob Service – Azure Consistent Storage Blob service, which provides blob and table storage services.
SoFS – Scale-out File Server.
ReFS CSV – Resilient File System Cluster Shared Volume.
Virtual Disk, Storage Space, and Storage Spaces Direct are the respective underlying storage technologies in Windows Server that enable the Microsoft Azure Stack core storage resource provider.

 

Here is a graphic representation of the VMs and services.

Azure Stack TP1 POC logical architecture

Manually Shutting Down Azure Stack

If you need to manually shut down and start up the VMs, this is the recommended order:

ClientVM, PortalVM, xRPVM, ACSVM, MuxVM, NCVM, SQLVM

Hyper-V will shut down the other three for you as you shut down your server.  The start order is the reverse.  If you do a manual shutdown, then after you start the VMs ensure that all the services on xRPVM are running and happy; if they are not, you will have issues, since it hosts the core of the Azure Stack resources.
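Since the start order is just the shutdown order reversed, deriving one from the other keeps the two lists from drifting apart; a quick sketch:

```python
# Recommended manual shutdown order for the Azure Stack VMs
SHUTDOWN_ORDER = ["ClientVM", "PortalVM", "xRPVM", "ACSVM",
                  "MuxVM", "NCVM", "SQLVM"]

# The start order is simply the shutdown order reversed
START_ORDER = list(reversed(SHUTDOWN_ORDER))

print(START_ORDER)
```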

Azure Stack Portal

This is a set of images on the ClientVM, showing the portal view, then the ARM template resources imported from GitHub, then as it appears in the marketplace.  Then a custom-built VM template imported into the marketplace, using PowerShell to add a SQL 2014 custom VM into the platform image repository (PIR), and finally the custom SQL 2014 VM listed in the marketplace.

Image 001

Image 002

Image 003

Image 004

Image 005

Image 006

Image 008

Netflix Search Hacks

I am not a huge fan of TV or the movies; there are just so many other things I would rather be doing with that time.  But some days I just need to relax and not think about anything, and that is usually when Netflix or Amazon comes in handy.  My biggest gripe about Netflix is the variety, and so…

http://instantwatcher.com/

Sort by release year, rotten tomatoes freshness, new & noteworthy, so on.

"A better way to search for Amazon Prime and Netflix videos" – Consumer Reports

"about a quadrillion times easier to browse than Netflix's own site"

Netflix Secret Categories

http://netflixcodes.me/

http://www.netflix.com/browse/genre/XXXXXXX (replace the X's with the code in the URL)
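Building the browse URL from a genre code is a one-liner; a sketch (1365 is the commonly published code for Action & Adventure):

```python
def netflix_genre_url(code: int) -> str:
    """Build the hidden-category browse URL described above."""
    return f"http://www.netflix.com/browse/genre/{code}"

print(netflix_genre_url(1365))
```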

Extended list of categories and codes

http://ogres-crypt.com/public/NetFlix-Streaming-Genres2.html

Netflix: around the world

“We’ve done the hard work and uncovered the world’s largest list of Netflix genres with over 27,002 unique genres on the list”

http://www.finder.com/netflix/genre-list

Search every Netflix region

http://flixed.io/

To view Netflix in other countries use one of these services

https://www.smartflix.io/

https://www.unblock-us.com/

https://adfreetime.com/

https://www.filmefy.com/

https://flixsearch.io/

Chrome search extension (unofficial)

https://chrome.google.com/webstore/detail/netflix-super-browse/iejponamigpndjgdmnpelkohnbpancjf/