Database Mirroring in Azure

Not many people I know like to try things they are unfamiliar with, because the unfamiliar is scary. However, working in a startup, like my current company, basically forces one to always learn more and learn faster. Hence, after getting approval from top management, my senior and I migrated our web applications to Microsoft Azure.

Just when we thought we had done everything beautifully, our instances on Azure went down for 72 minutes on 4 August, one month after the migration. The reason given by the Microsoft team was that there was an issue in one of the clusters within the data centre. Three weeks later, our database instance on Azure went down again for 22 minutes because of a scheduled system update.

Fortunately, Microsoft Singapore was willing to guide us in making our web applications highly available. I am very happy to have Chun Siong, a Technical Evangelist from Microsoft Singapore, helping us out.

Last month, Chun Siong successfully set up database mirroring for our database instances on Azure. Since he did all of the work himself, I had to redo everything from the beginning in order to learn and master database mirroring.

In this post, I will share the mistakes I made when I tried doing database mirroring myself, so that I won’t repeat them again.

Beginning of the Journey

There is an easy-to-follow tutorial available on MSDN about how to implement database mirroring in Azure. I used it as a reference to set up one principal database server, one mirror database server, and one witness server within the same availability set.

Elements in my simple database mirroring setup.

Mistake #1: Firewall Blocking Remote Access of SQL Server

If I had read the tutorial carefully, I wouldn’t have made this mistake, because it’s mentioned at the beginning of the tutorial.

I discovered this mistake only when I tried to connect to the mirror server from the principal database server. It kept throwing Error 1418, saying that the mirror server was not reachable. After going through a checklist for the error, I found out that it was because I had never created an inbound rule on Windows Firewall to allow access to SQL Server.
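The fix is a Windows Firewall inbound rule on each database server. As a sketch (assuming the conventional mirroring endpoint port 5022; adjust the port and rule name to your own setup), run the following in an elevated command prompt:

```shell
:: Allow inbound TCP traffic to the database mirroring endpoint.
:: 5022 is the commonly used endpoint port; substitute your own if different.
netsh advfirewall firewall add rule name="SQL Mirroring Endpoint" dir=in action=allow protocol=TCP localport=5022
```

The same rule is needed on the principal, mirror, and witness machines.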

Thanks to Chun Siong for pointing it out as well. =)

By the way, the article about Error 1418 (http://msdn.microsoft.com/en-us/library/aa337361.aspx) has a checklist for verifying that everything is done correctly. I copied and pasted it below for quick reference.

  1. Make sure that the mirror database is ready for mirroring.
  2. Make sure that the name and port of the mirror server instance are correct.
  3. Make sure that the destination mirror server instance is not behind a firewall.
  4. Make sure that the principal server instance is not behind a firewall.
  5. Verify that the endpoints are started on the partners by using the state or state_desc column of the sys.database_mirroring_endpoints catalog view. If either endpoint is not started, execute an ALTER ENDPOINT statement to start it.
  6. Make sure that the principal server instance is listening on the port assigned to its database mirroring endpoint and that the mirror server instance is listening on its port. If a partner is not listening on its assigned port, modify the database mirroring endpoint to listen on a different port.

If the items above are not helpful to you, there is also another detailed blog post about this Error 1418 written by Pinal Dave.
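Item 5 of the checklist, for example, can be verified with a quick T-SQL query on each partner (a sketch; the endpoint name "Mirroring" is the tutorial's convention and may differ in your setup):

```sql
-- Check that the mirroring endpoint is STARTED and note its port.
SELECT e.name, e.state_desc, t.port
FROM sys.database_mirroring_endpoints AS e
JOIN sys.tcp_endpoints AS t ON t.endpoint_id = e.endpoint_id;

-- If state_desc is not STARTED, start the endpoint:
ALTER ENDPOINT Mirroring STATE = STARTED;
```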

There is a need to allow the access of SQL server in three instances.

Mistake #2: Typo when Creating Certificates

In the tutorial, the recommended way to deploy database mirroring is to use certificates. After the certificates of the three servers are created, we need to grant login permission on each server to the other two servers. That is where we use the certificates to create a common login account called DBMirroringLogin.
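As a sketch, granting the mirror server a login on the principal looks roughly like this (the certificate name and file path here are my own placeholders; the MSDN tutorial has the full scripts):

```sql
-- On the principal: create the common login and a user for it,
-- import the partner's certificate, and allow it to connect
-- to the mirroring endpoint.
CREATE LOGIN DBMirroringLogin WITH PASSWORD = '<StrongPasswordHere>';
CREATE USER DBMirroringUser FOR LOGIN DBMirroringLogin;

CREATE CERTIFICATE MirrorServerCert
    AUTHORIZATION DBMirroringUser
    FROM FILE = 'C:\Certs\MirrorServerCert.cer';

GRANT CONNECT ON ENDPOINT::Mirroring TO DBMirroringLogin;
```

The same pattern is repeated on each server for the other two partners, which is exactly where a password typo can slip in.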

I had one typo in the password of one of the certificates. I only realized it at the very end, when I tried to connect to my witness server. So, yup, be careful during the database mirroring configuration steps. One small mistake can waste a lot of time in finding out why.

Grant login permissions to other two servers.

Mistake #3: Mismatch Edition of Principal and Mirror

I only had time to learn database mirroring using my personal account after work. So I screamed in my room the moment I realized that the mirror server is not allowed to use Standard Edition while the principal is not using Standard Edition.

The mirror server instance cannot be Standard Edition if the principal server instance is not Standard Edition.

So in the end, I shut down the mirror instance and created another virtual machine with Enterprise Edition SQL Server installed. Fortunately, this could be done quite fast on Microsoft Azure. I did not want to reuse the old name, so I named the new mirror server mydb-01-kagami.
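The edition of each instance can be checked up front with SERVERPROPERTY, which would have saved me the scream:

```sql
-- Run on both the principal and the mirror; the editions must be
-- compatible (e.g. both Standard, or both Enterprise).
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
```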

Kagami means "mirror" in Japanese. (Image Credit: Lucky Star)

Mistake #4: Three Virtual Machines Not in Same Availability Set

The principal database, witness, and mirror database instances need to be put inside the same availability set.

When I was deploying the database mirroring, I forgot to put the witness instance in the same availability set as the principal and mirror. As a result, I couldn’t successfully connect to the witness from the principal.

Three instances need to be in the same availability set.

Work and Learn

I spent about three days in the Microsoft office learning from Chun Siong. It then took me another month to do it myself. Wait, what? One month, seriously? Don’t be surprised. As usual, I have only a little time (about half an hour per day) after work for my personal projects. Sometimes, once I reached my room from the office, I just jumped into bed and fell asleep within minutes. So, in fact, I only spent about 15 to 20 hours learning database mirroring myself. Hence, I am really glad to have colleagues as well as friends from Microsoft who are willing to support me in my learning journey.

Finally, some little notes to myself and to readers who want to try out database mirroring (on Azure).

  1. Be very careful during the whole database mirroring configuration process. Don’t make typos or set something up wrongly. You may need to delete an instance and create a new one because of such mistakes;
  2. The witness (but not the principal and mirror) can use the Express Edition of SQL Server. So, to save cost, please use that;
  3. Set the database to the full recovery model before backing up the database on the principal;
  4. Remember to enable named pipes;
  5. Use Database Mirroring Monitor to understand more about the status of mirroring session.
  6. Some good resources to refer to:
It’s enjoyable to work in the Microsoft Singapore office. You can see the beautiful MBS from there.
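Note 3 above, for example, boils down to something like this in T-SQL (a sketch, assuming a database named MyDb and a local backup path):

```sql
-- Mirroring requires the full recovery model; set it before taking
-- the full backup that will be restored WITH NORECOVERY on the mirror.
ALTER DATABASE MyDb SET RECOVERY FULL;

BACKUP DATABASE MyDb TO DISK = 'C:\Backup\MyDb.bak';
BACKUP LOG MyDb TO DISK = 'C:\Backup\MyDb_log.bak';
```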

App_offline.htm: The Super Weapon to Take an ASP .NET Website Down

ASP .NET

When I do maintenance work on my ASP .NET web application, I always take the web application down temporarily. This is because the application domain restarts whenever I deploy a new version of the application. Hence, in order to prevent online users from making requests while the website is still being deployed, we need to take the website down and show the online users a friendly message that the website is currently unavailable.

Two Steps to Take Website Down

Luckily, there is a very convenient way of doing that in ASP .NET 2.0 or later.

Firstly, a file with the name “App_offline.htm” needs to be created. This is the only web page that will be shown to online visitors while the website is down. Thus, we put a friendly message in this page to notify visitors that the website is currently under maintenance.

Secondly, we will put this file in the root of the website virtual directory.
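Such a page might look like the following (a minimal sketch; the wording is just an example):

```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Under Maintenance</title>
</head>
<body>
    <h1>We will be back soon!</h1>
    <p>The website is currently under maintenance. Sorry about that!</p>
    <!-- Extra padding can be added in a comment like this one to keep
         the file above 512 bytes, so Internet Explorer shows this page
         instead of its own generic message. -->
</body>
</html>
```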

Finally, visit the website. You will realize that no matter which page of the website you visit, you will always be redirected to App_offline.htm, the page telling you that the website is under maintenance.

Website is under maintenance. Sorry about that!

Things to Take Note of

Current Requests Are Still Processed

According to an interesting experiment shared on Stack Overflow, only new requests will be redirected to App_offline.htm. Requests already in progress when App_offline.htm is uploaded will still be processed.

Minimum of 512 Bytes

It turns out that Internet Explorer will show its own generic status code message if App_offline.htm contains fewer than 512 bytes.
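One simple guard against this is to pad the page before deploying it. A small Python sketch (the function name and padding trick are my own; the 512-byte threshold is the one described above):

```python
# Ensure an App_offline.htm payload is at least 512 bytes so that
# Internet Explorer renders it instead of its own generic error page.
MIN_BYTES = 512

def pad_for_ie(html: str) -> str:
    """Pad the page with an HTML comment until it reaches 512 bytes."""
    size = len(html.encode("utf-8"))
    if size >= MIN_BYTES:
        return html  # already large enough, leave untouched
    padding = "<!-- " + "." * (MIN_BYTES - size) + " -->"
    return html + padding

page = "<html><body><h1>Under maintenance</h1></body></html>"
padded = pad_for_ie(page)
print(len(padded.encode("utf-8")) >= MIN_BYTES)  # True
```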

Permission of App_offline.htm

Yes, check the permissions of the file under Properties -> Security, because if App_offline.htm doesn’t have the correct permissions, it will not work as expected either.

Conclusion

As demonstrated above, it is indeed very easy to take an ASP .NET website down. What a convenient App_offline.htm!

AWSome Day – Learning AWS from Experts and IAM

AWS + IAM

It’s fortunate to work in a company which encourages employees to attend courses, workshops, and training to expand their skill sets. Last month, when I told my boss about AWSome Day, a training event held by AWS expert technical instructors, my boss immediately gave me one day of leave (without deducting my annual leave) to attend the event. In addition, I’m glad to have awesome teammates who helped me handle my work on that day so that I could concentrate during the event. Thus, I would like to write a series of blog posts to share what I’ve learnt at AWSome Day.

Amazon AWSome Day

This is the second time AWSome Day has been organized in Singapore. Many of last year’s AWS Summit attendees were looking for more professional training from AWS, and thus AWSome Day once again came to Singapore. This year, the event was at Raffles City Convention Centre, which is just a 5-minute walk from my office. Oh my tian, that is so convenient!

AWSome Day, Awesome Place - Raffles City Convention Centre

The registration started at 8am. After that, Richard Harshman, the Head of AWS ASEAN, gave an opening keynote. He shared with us how AWS has removed the barrier of entry to starting a business online and increased innovation. My friend who works in an MNC once told me that he was given access to powerful servers to do crazy stuff. I am not as lucky as him; I work in a startup which does not have sufficient financial capability for that. Hence, I agreed with Richard that AWS (and other cloud computing services as well) does reduce the cost of innovation and experimentation.

Richard also shared with us a story of how, with the help of AWS, a startup in Malaysia managed to get a few million visits monthly without an in-house sysadmin. Yup, our company also does not have a sysadmin. Normally, the work of a sysadmin is done by the developers. Hence, we are always looking for ways to reduce the time spent on sysadmin tasks so that developers have more time to focus on improving the applications to serve our customers better. So, a cloud computing infrastructure with broad and deep services to support online workloads helps high-volume, low-margin businesses like ours.

Currently, our company is using both AWS and Microsoft Azure. So, when Richard shared a graph showing how both AWS and Microsoft are now leaders in cloud computing services, I was glad that we made the right choice to use services from both of them.

After the opening keynote, we had a short coffee break and then began the 6-hour AWS training, which was delivered by Denny Daniel, Technical Trainer at AWS. Since the training covered many interesting topics, I will not blog about all of them here, because most readers would just go tl;dr. I will only write about what I learnt and found useful in my career. So, if you are interested in the event, why not join the future training offered by AWS Singapore? =)

Episode 01: Who am I? I am, I am… I am Identity and Access Management (IAM)!

One of the main concerns about hosting our applications in the cloud is security. One of the security tools provided by AWS is called Identity and Access Management, or IAM. It enables the system admin to manage users and their access rights in AWS. Hence, each user accessing AWS has their own security credentials and individual permissions to each AWS service and resource.

Create User

After a user has been created, we are given a one-time opportunity to download and keep the user’s security credentials (Access Key ID and Secret Access Key). Since the keys are displayed only once, if the secret key is lost, we must delete the access key and then create a new one.

IAM is secure by default. This means that, by default, IAM users do not have permission to create or modify Amazon EC2 resources. Hence, an IAM policy, which is just a JSON document specifying the rules, is needed.
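For example, a policy document granting an IAM user read-only (describe) access to EC2 resources looks like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        }
    ]
}
```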

Besides creating users, we are able to create groups. Thus, instead of assigning each similar user the same set of access control policies, we can assign the users to a group and then bind the access control policies to the group. This undoubtedly eases user management. In addition, AWS even allows us to customize the permissions based on a given template!

There are many, many permission templates available when creating a user group.

Another thing that I find interesting is how IAM works with tags.

In order to manage Amazon EC2 resources effectively, we can now tag the resources ourselves with a combination of a key and a value. For example, we can tag our instances in EC2 by environment. So, we can have one instance tagged with “Environment=Production” and another instance tagged with “Environment=Test”. After that, we can grant an IAM user permission to the instances by using the tag with the condition key ec2:ResourceTag/Environment.
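A sketch of such a policy, which lets a user stop and start only the instances tagged Environment=Test:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "Test"}
            }
        }
    ]
}
```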

Finally, at the event, Denny also shared with us a YouTube video about the best practices of using IAM. I am not sure if I found the exact one he was referring to; anyway, the following video is what I found on YouTube.

The video is a bit long. So, for those who say tl;dw, I summarize the 10 tips below.

  1. Create individual users. Do not just use the root credential. Do not have one user account that everybody in the team uses to do everything;
  2. Manage permissions with groups so that only one change is needed to update permissions for multiple users. Even if you only have one user in the team, it’s encouraged to create a group for that user, because at some point there will be new users who need the same permissions;
  3. Grant least privilege. Only grant the permissions that are required by the users to do their jobs. There is then less chance of people making mistakes. Avoid assigning the asterisk (*) policy for permissions, which means full access, unless the account is for an admin;
  4. Use a policy to force users to have a strong password;

    Password Policy
  5. Enable Multi-Factor Authentication (MFA) for privileged users;

    Enable MFA.
  6. Use IAM roles for Amazon EC2 instances;
  7. Use IAM roles to share access without the need to share security credentials;
  8. Rotate security credentials regularly. Access keys need to be rotated. Make sure the old access keys have been deleted after the rotation;
  9. Restrict privileged access further with conditions. There are two types of conditions. One is AWS common conditions, such as date, time, MFA, secure transport (allowing traffic over SSL only), source IP, etc. The other is service-specific conditions. Some services provide hundreds of conditions that we can control;
  10. Reduce or remove the use of root account.
“What? You are always using root credential?” The best practice of all: Don’t use root access. (Image Credit: Is the Order a Rabbit?)

Next Episode

There were many topics about AWS covered during the event, and IAM is just a small part of it. However, with IAM alone, I already feel that there are many areas waiting for me to discover. Hence, I will continue to write more about what I’ve learned in future blog posts.

Also, due to the fact that I am new to AWS, if you spot anything wrong in my posts, feel free to correct me in the comment section below. =)

Successfully Sent An Email via hMailServer

After setting up a mail server on my laptop last year, I couldn’t successfully send an email to myself using hMailServer.

Last month, a reader, Aaron Watson, suggested that the problem could be due to SMTP authentication. Thanks to his message, I had a new lead for finding out why my hMailServer was not working.

First of all, I set a new IP range record in hMailServer with both Lower IP and Upper IP being the same as my Windows Private IPv4 Address.

Next, I unchecked the checkbox which says “External to external e-mail addresses” under “Require SMTP authentication”.

Uncheck "External to external e-mail addresses" option.

Finally, hMailServer should look something like the following screenshot.

Internet Option IP Range

With this step done, I can now successfully send emails via hMailServer on my laptop. My laptop is also an SMTP server now. =D