Encrypting SQL Server Backups Using 7-Zip & PowerShell

If, like me, you’re moving off LiteSpeed compression to native SQL Server 2012 backup compression, there is one feature you’re likely to miss: encryption.  Here is how you can use 7-Zip via PowerShell to implement it (7-Zip’s default encryption for the .7z format is AES-256).

if (-not (Test-Path "$env:ProgramFiles\7-Zip\7z.exe")) { throw "$env:ProgramFiles\7-Zip\7z.exe needed" }
Set-Alias sz "$env:ProgramFiles\7-Zip\7z.exe"

$path = "C:\PathToBackupFolder"
$password = "TestPassword"

$files = Get-ChildItem -Path $path | Where-Object { $_.Extension -eq ".bak" }
foreach ($file in $files) {
	# -t7z produces a .7z archive; with a password, 7-Zip encrypts it with AES-256.
	$NewFile = "$path\$file" -replace "\.bak$", ".bak.7z"
	$cmdOutput = sz a -t7z "$NewFile" $file.FullName "-p$password"
	if ($cmdOutput -contains "Everything is Ok") {
		Remove-Item -Path $file.FullName
	}
	else {
		Send-MailMessage -To "whomever@wherever.com" `
			-Subject "Error in compressing file on ServerName" `
			-Body "There was an error compressing the file <b>$file</b> on ServerName.  Please look into it." `
			-SmtpServer "mail.wherever.com" `
			-From "server@reedbusiness.com" -BodyAsHtml
	}
}
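Going the other way (before a restore), the archives have to be extracted again.  A minimal sketch, reusing the example folder and password from above:

```powershell
Set-Alias sz "$env:ProgramFiles\7-Zip\7z.exe"

$path = "C:\PathToBackupFolder"
$password = "TestPassword"

# x extracts with full paths; -o sets the output folder (no space after -o); -y answers prompts.
Get-ChildItem -Path $path -Filter *.bak.7z | ForEach-Object {
    sz x $_.FullName "-p$password" "-o$path" -y
}
```

Once extracted, the .bak files can be restored as normal.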

Identity Bug in SQL Server 2012

Apparently this bug has been hanging around since April 2012, but I only heard about it this morning.

DROP TABLE dbo.TestTable;
GO

-- Table definition reconstructed: any small table with an IDENTITY column reproduces the jump.
CREATE TABLE dbo.TestTable (
	ID INT IDENTITY(1,1) NOT NULL,
	Value VARCHAR(10) NULL
);
GO

INSERT INTO dbo.TestTable(Value) VALUES ('Test');
GO 101

-- Now restart the SQL Server service, then run the second batch:

INSERT INTO dbo.TestTable(Value) VALUES ('Test');
GO 101

SELECT * FROM dbo.TestTable;

Now look at the IDENTITY values for the rows: instead of the second batch continuing at 102, the values skip upwards by 1000.  Scary.

There is a Connect item for this located here.  Please vote!  The Connect item states that it is related to AlwaysOn.  This is not true; this bug occurs on any restart of the SQL Server service.

A work-around for this is to make sure to checkpoint the database before a restart, but obviously it’s not the best work-around, as you don’t always control the failover (HA).  All that being said, if you checkpoint before a restart, you’ll not see the identity bug.
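A hedged sketch of that workaround (the database name is a placeholder; run it against each database that relies on IDENTITY values before a planned restart):

```sql
-- Flush dirty pages to disk before a planned restart so the current
-- identity values are recovered correctly on startup.
USE MyDatabase;
GO
CHECKPOINT;
GO
```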



Also, I’ve just installed SQL Server 2012 SP1 CTP and the bug still exists.

Testing SQL Server AlwaysOn in AWS

This is part deux of the post Configuring SQL Server AlwaysOn in AWS.  I’m calling it “Testing SQL Server AlwaysOn in AWS 2: The Reckoning”.

Keep in mind, this is a multi-subnet configuration.  This plays into the testing below, but it actually works out to our advantage.  I’ve not tested AlwaysOn in a single subnet configuration yet.

Once again, a shout out is deserved to Jeremiah Peschka (blog | twitter), as I’d never have gotten this far without this post.

So, now we have our listener set up, IPs configured, and we’re ready to start seeing how well the failovers work.  Let’s start by pinging the listener from the primary server:


So far so good.  Let’s ping it from the Secondary:


Awesome.  Let’s ping it from the DNS server:


Looking good.

So, let’s check connections with the native client.  On the desktop of the Primary, right-click and create a file called test.udl and save it.  Next, double-click the file.  It should bring up a dialog like this:


Click on the Provider tab, and change the provider to SQL Server Native Client 11.0, click Next and fill out the connection information, entering in your listener name in the “Select or enter a server name” drop-down.  Next, select or type in a database name and click Test Connection.


If you typed everything in correctly, you should see this:


You should get the same results on the secondary as well.

Now, let’s manually fail the database over and re-test.  Go on the primary server, right-click on the Availability Group and click Failover…


Go through the various steps required to failover the Availability group to the secondary server.  I’ll skip images and descriptions for this process, as this is not the main focus of this post.

So, the Availability Group has been failed over.  Let’s test the primary, secondary & DNS server connectivity to the listener via ping again:

Secondary (was primary): no love (host unreachable).
Primary (was secondary): replies just fine.
DNS server: no love (timeout).

So, at this point only 1 out of 3 servers can reach the listener.  Let’s check our UDL files again on the Secondary (used to be primary):


No love.  But wait!  Click on the All tab and find the property named Extended Properties, click Edit Value… and enter in MultiSubnetFailover=Yes.  Now, select the Connection tab and click Test Connection again.





The reason this works is that the provider looks for this property and aggressively tries both IP addresses listed in DNS.  Refer to the section Connecting with MultiSubnetFailover in this article for more details.
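The same property works outside of a UDL file; it can go straight into a connection string.  Two hedged examples (the listener and database names here are made up):

```text
SQL Server Native Client (OLE DB):
Provider=SQLNCLI11;Data Source=MyListener,1433;Initial Catalog=MyDatabase;Integrated Security=SSPI;MultiSubnetFailover=Yes;

.NET SqlClient (note the value is True here, not Yes):
Server=MyListener,1433;Database=MyDatabase;Integrated Security=True;MultiSubnetFailover=True;
```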


What I’ve found is that the process AWS uses to update DNS to point to the correct host is inconsistent at best.  Sometimes it updates after 5 minutes, sometimes 15, sometimes never.  That’s no big deal for clients that use the SQL Server Native Client connection with the MultiSubnetFailover=Yes setting, but for other clients it will definitely present a problem.

Let’s try one more thing.  Let’s cause an automatic failover.  Let’s go into the AWS console and stop the instance that the current primary is running on.  In my case, the Primary is running on RHDB1. 


Go into the AWS EC2 dashboard and stop the instance.


Okay, automatic failover.  RHDB2 is now the primary. 


Ping from the DNS server resolves, the UDL resolves, pinging the listener resolves.  Just for grins, fire the other server back up and let’s shut down the primary via the AWS console again.  Currently, the Primary is RHDB2 and RHDB1 is the secondary.


Let’s kill RHDB2.


Automatic failover again:


Pinging the listener from RHDB1 works just fine.  The UDL connection also works fine.  Ping from the DNS server, not immediately, but it seemed to resolve after about a minute.

By no means is this a 100% thorough test, but my results are promising.  Automatic failover seems to work just fine in this case.  What doesn’t work reliably is the DNS resolution.  But if you use MultiSubnetFailover when connecting with the SQL Server Native Client, it more or less negates the DNS resolution issues.

Configuring SQL Server AlwaysOn in AWS

This post primarily concentrates on setting up the listener for a SQL Server Availability Group in AWS in a multi-subnet cluster.  Just a warning: the listener does not function in AWS as you would expect with on-premises instances.  Don’t assume this article solves listener issues.  It does not.  I’ll outline what the issues are with the listener in a later post.

I’m assuming you’ve already done the following:

  1. Installed SQL Server 2012 on a couple of EC2 instances located in different subnets.
  2. Created a few test databases (and done full backups) on one of said database EC2 instances.
  3. Created an availability group hosting said databases.

Before I begin, a shout out is deserved to Jeremiah Peschka (blog | twitter), as I’d never have gotten this figured out without this post.

Create the listener

In your availability group, create your listener


Enter the Listener DNS Name, enter the port (I left the default port of 1433 for this demonstration to keep it simple), and select Static IP for the Network Mode.


Next, at the bottom of the New Availability Group Listener dialog, click the Add button.  This will bring up the Add IP Address dialog.  Enter in your static IP addresses.  If these IP addresses already exist, you’ll get this error message:

The Windows Server Failover Clustering (WSFC) resource control API returned error code 5057.  The WSFC service may not be running or may not be accessible in its current state, or the specified arguments are invalid.  For information about this error code, see "System Error Codes" in the Windows Development documentation.
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator. (Microsoft SQL Server, Error: 41009)


You’ll want to create new IP addresses for your listener that are not being used by the failover cluster itself.  The IP addresses will be added to DNS automatically, as will be illustrated further down in the post.  In my case, each node already has its existing addresses in its own subnet, and for the listener I added one new, unused address per subnet.

Obviously, you have to run this as an account that has the proper permissions to add entries to DNS. In my case, I had god rights.  You’ll probably want to be a bit more restrictive. 


Next, if you look in your DNS, you’ll see it added the listener name with both IPs.


Next, go to your AWS console.  In the console under your EC2 Dashboard under Network & Security, click the Network Interfaces. 


Select the network interface that is associated with your AMI and click Manage Private IP Addresses.


Now add the IP addresses that the listener created to said network interfaces.  Do this for both IP addresses, one on each of the network interfaces: add only the .10-subnet address to the AMI in the .10 subnet, and only the .11-subnet address to the AMI in the .11 subnet.  (It won’t allow you to add an address from the wrong subnet anyway, but just trying to be clear.)
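If you’d rather script this step than click through the console, the AWS CLI has a matching call.  A sketch only; the interface ID and address below are placeholders for your own values:

```shell
# Attach the listener's IP as a secondary private IP on the matching interface.
aws ec2 assign-private-ip-addresses --network-interface-id eni-0123456789abcdef0 --private-ip-addresses 10.0.10.25
```

Run it once per subnet, against the network interface that lives in that subnet.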


Your listener should now be ready for testing.  This is where the fun really begins.  I’ll split the testing out to another post, as this one is getting pretty big.

Implicit Conversions in SQL Server

Mismatches between data types on columns (or variables) can have disastrous results.  It’s bitten me twice this past week.  Take the following query:


Which results in the following plan being generated:


Four billion rows to an index spool?  Really?

The issue here was that the U.UserName column was defined as NVARCHAR(256), whereas the SH.UserName column was defined as VARCHAR(256).  This forces every row in the SearchHistory table to be converted from VARCHAR to NVARCHAR (5,620,388 rows, and since it’s a nested subquery, multiply that by the number of rows the parent query returns).  This was pegging the CPU on a production server nightly.

The way SQL Server determines which column will be converted is strictly based on data type precedence.  A VARCHAR will always be converted up to an NVARCHAR rather than the inverse.  If you look at the data type precedence list, conversion happens from the bottom of the list upwards.

If I convert U.UserName from NVARCHAR to VARCHAR, it avoids having to convert the entire 5-million-plus-row table to NVARCHAR in order to do the join.  This fixes the symptom, but the best solution is to ensure that your types match in the first place.  This applies not only to joins, but to variables as well.
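As a sketch of the pattern (the table and column names are borrowed from the scenario above; everything else is made up):

```sql
-- Before: every SH.UserName (VARCHAR) is implicitly converted to NVARCHAR
-- to match U.UserName, so the index on SH.UserName can't be seeked.
SELECT SH.*
FROM dbo.SearchHistory AS SH
INNER JOIN dbo.Users AS U
    ON SH.UserName = U.UserName;

-- After: convert the single higher-precedence side down instead.
SELECT SH.*
FROM dbo.SearchHistory AS SH
INNER JOIN dbo.Users AS U
    ON SH.UserName = CONVERT(VARCHAR(256), U.UserName);
```

Keep in mind this is the symptom fix described above; it quietly assumes U.UserName never contains characters that fall outside the VARCHAR code page.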

This blog post from Jonathan Kehayias will help you find any queries that may be doing implicit conversions.