TCP Listener on Microsoft Azure for IoT Devices

cloud-service-worker-role-automation-runbook.png

After working on the beacon projects about half a year ago, I was given a new task: building a dashboard to display data collected from IoT devices. The IoT devices are basically GPS trackers with a few additional sensors, such as temperature and shake detection.

I’m new to the IoT field, so in this article I’m going to share what I learnt and the challenges I faced in this project, hoping it will benefit juniors who are going to do similar things.

Project Requirements

We plan to host the service that receives data from the IoT devices on Microsoft Azure. There will eventually be thousands or even millions of the same devices deployed, so a cloud platform will help us scale up easily.

We also need to store the data in order to display it on dashboards and in reports for business use cases.

Challenge 1: Azure IoT Hub and The Restriction of Device Firmware

In the documentation of the device protocol, there is a set of instructions as follows.

First when device connects to server, module sends its IMEI as login request. IMEI is sent the same way as encoding barcode. First comes short identifying number of bytes written and then goes IMEI as text (bytes).

After receiving IMEI, server should determine if it would accept data from this module. If yes server will reply to module 01 if not 00.

I am not sure who wrote the documentation, but I am certain that the English is not easy to comprehend on a first read.

Anyway, this looked like a good indication that Azure IoT Hub would be helpful, because it provides secure and reliable C2D (Cloud-to-Device) and D2C (Device-to-Cloud) communication with HTTP, AMQP, and MQTT support.

However, as I read further into the device documentation, I realised that the device could only send TCP packets in a protocol defined by the device manufacturer. In addition, the device doesn’t allow us to update its firmware at the moment, making it impossible for it to send data using the protocols accepted by Azure IoT Hub.

There is a fierce discussion about this on Stack Overflow. Unfortunately, none of the respondents understood what the OP was trying to say.

So, I had to say bye-bye to Azure IoT Hub and move on to building a TCP listener myself on Azure.

Challenge 2: Hosting TCP Listener on Azure

There is a great code sample on how to build a TCP listener in C# to listen for connections from TCP network clients.

So, where could we put this code?

Could we use Azure App Service, such as Functions or Web Apps? Unfortunately, no. Only ports 80/TCP and 443/TCP are exposed publicly, and the only protocol that works is HTTP. In addition, App Service runs entirely on IIS; the web server provides the whole platform, so there is no room for long-running processes or threads that can sit and wait for communication on another port outside of IIS.

The only easy option we have now is to use an Azure Cloud Service with a Worker Role. A Worker Role does not use IIS, and it can run our app standalone.

creating-worker-role.png
Creating a new Cloud Service project with one Worker Role on Visual Studio 2017.

A default template of the WorkerRole class is provided.

public class WorkerRole : RoleEntryPoint
{
    private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
    private readonly ManualResetEvent runCompleteEvent = new ManualResetEvent(false);

    public override void Run()
    {
        Trace.TraceInformation("TrackerTcpListener is running");

        try
        {
            this.RunAsync(this.cancellationTokenSource.Token).Wait();
        }
        finally
        {
            this.runCompleteEvent.Set();
        }
    }

    public override bool OnStart()
    { 
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit = 12;

        // For information on handling configuration changes
        // see the MSDN topic at https://go.microsoft.com/fwlink/?LinkId=166357.

        bool result = base.OnStart();

        Trace.TraceInformation("TrackerTcpListener has been started");

        return result;
    }

    public override void OnStop()
    {
        Trace.TraceInformation("TrackerTcpListener is stopping");

        this.cancellationTokenSource.Cancel();
        this.runCompleteEvent.WaitOne();

        base.OnStop();

        Trace.TraceInformation("TrackerTcpListener has stopped");
    }

    private async Task RunAsync(CancellationToken cancellationToken)
    {
        // TODO: Replace the following with your own logic.
        while (!cancellationToken.IsCancellationRequested)
        {
            Trace.TraceInformation("Working");
            await Task.Delay(1000);
        }
    }
}

It’s obvious that the first method we are going to work on is the RunAsync method with a “TODO” comment.

However, before that, we need to define an IP Endpoint for this TCP listener so that we can tell the IoT device to send the packets to the specified port on the IP address.

worker-role-endpoints.png
Configuring Endpoints of a Cloud Service.

With endpoints defined, we can then proceed to modify the code.

private async Task RunAsync(CancellationToken cancellationToken)
{
    try
    {
        TcpClient client;

        while (!cancellationToken.IsCancellationRequested)
        {
            var ipEndPoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TcpListeningEndpoint1"].IPEndpoint;
            
            var listener = new System.Net.Sockets.TcpListener(ipEndPoint) { ExclusiveAddressUse = false };
            listener.Start();

            // Perform a blocking call to accept requests.
            client = listener.AcceptTcpClient();

            // Get a stream object for reading and writing
            NetworkStream stream = null;

            try
            {
                stream = client.GetStream();

                await ProcessInputNetworkStreamAsync(stream);
            }
            catch (Exception ex)
            {
                // Log the exception
            }
            finally
            {
                // Shutdown and end connection
                if (stream != null)
                {
                    stream.Close();
                }

                client.Close();

                listener.Stop();
            }
        }
    }
    catch (Exception ex)
    {
        // Log the exception
    }
}

The code for the method ProcessInputNetworkStreamAsync above is as follows.

private async Task ProcessInputNetworkStreamAsync(NetworkStream stream)
{
    byte[] bytes = new byte[5120];
    int i = 0;
    byte[] b = null;
    var receivedData = new List<string>();

    while ((i = await stream.ReadAsync(bytes, 0, bytes.Length)) != 0)
    {
        receivedData = new List<string>();

        for (int reading = 0; reading < i; reading++)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                ms.Write(bytes, reading, 1);
                b = ms.ToArray();
            }
            
            receivedData.Add(ConvertHexadecimalByteArrayToString(b));
        }

        Trace.TraceInformation("Received Data: " + string.Join(",", receivedData.ToArray()));

        // Respond from the server to the device
        byte[] serverResponse = ConvertStringToHexadecimalByteArray("<some text to send back to the device>");
        await stream.WriteAsync(serverResponse, 0, serverResponse.Length);
    }
}

You may wonder what I am doing above with the ConvertHexadecimalByteArrayToString and ConvertStringToHexadecimalByteArray methods. They are needed because the packets used in the device’s TCP protocol are in hexadecimal. There is a very interesting discussion on Stack Overflow about how to do the conversion, so I won’t repeat it here.
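
For completeness, here is a minimal sketch of what those two helper methods might look like. This is my own illustration rather than code from the project, and it assumes each byte maps to a plain two-character hexadecimal string; adjust the formatting to match the actual device protocol.

// A minimal sketch of the two conversion helpers referenced above.
// Requires: using System; using System.Text;
private string ConvertHexadecimalByteArrayToString(byte[] bytes)
{
    var builder = new StringBuilder(bytes.Length * 2);

    foreach (byte b in bytes)
    {
        // "X2" formats a byte as two uppercase hexadecimal characters, e.g. 0x0F -> "0F".
        builder.Append(b.ToString("X2"));
    }

    return builder.ToString();
}

private byte[] ConvertStringToHexadecimalByteArray(string hex)
{
    // Every two characters in the string represent one byte.
    var bytes = new byte[hex.Length / 2];

    for (int i = 0; i < bytes.Length; i++)
    {
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    }

    return bytes;
}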

Challenge 3: Multiple Devices

The code above only handles one port. Unfortunately, the IoT device doesn’t send its IMEI or any other identification number when the actual data packet is sent to the server. That means if more than one IoT device sends data to the same port, we have no way to identify on the server side which device sent the data.

Hence, we need to make our TCP listener listen on multiple ports. The way I chose is to use a List&lt;Task&gt; in the Run method, as shown in the code below.

public override void Run()
{
    try
    {
        // Reading a list of ports assigned for trackers use
        ...

        var tasks = new List<Task>();
        
        foreach (var port in trackerPorts)
        {
            tasks.Add(this.RunAsync(this.cancellationTokenSource.Token, port));
        }
 
        Task.WaitAll(tasks.ToArray());
    }
    finally
    {
       this.runCompleteEvent.Set();
    }
}
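
With this change, RunAsync also needs to accept the port it should listen on. The sketch below is one possible shape for that overload; it assumes each tracker port is exposed as its own instance endpoint on the Worker Role and that trackerPorts holds the endpoint names (the "TcpListeningEndpoint1" style names are only illustrative), so adjust it to match the actual endpoint configuration.

// A sketch of the modified RunAsync, assuming trackerPorts in Run() holds the
// names of the instance endpoints configured for the trackers.
private async Task RunAsync(CancellationToken cancellationToken, string endpointName)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        // Resolve the endpoint assigned to this tracker.
        var ipEndPoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints[endpointName].IPEndpoint;

        var listener = new System.Net.Sockets.TcpListener(ipEndPoint) { ExclusiveAddressUse = false };
        listener.Start();

        // Perform a blocking call to accept a request from the device assigned to this port.
        TcpClient client = listener.AcceptTcpClient();

        try
        {
            using (NetworkStream stream = client.GetStream())
            {
                await ProcessInputNetworkStreamAsync(stream);
            }
        }
        catch (Exception)
        {
            // Log the exception
        }
        finally
        {
            client.Close();
            listener.Stop();
        }
    }
}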

Challenge 4: Worker Role Not Responding Irregularly

This turned out to be the biggest challenge of using a Worker Role. After receiving data from the IoT devices for one or two days, the server stopped recording any new data even though the devices were working fine. So far, I’m still not sure about the cause, even though other people have encountered similar issues.

Hence, I had to find a way to restart the Worker Role automatically, so I decided to use a PowerShell script to reboot the instance. There is a code sample on the Microsoft TechNet Gallery – Script Center which does a similar thing.

I proceeded to use Azure Automation, which provides Runbooks to help handle the creation, deployment, monitoring, and maintenance of Azure resources. The PowerShell Workflow Runbook that I use for rebooting the Worker Role daily is as follows.

workflow Reboot-CloudService
{
    Write-Output "Started!"
    
    $azureSubscriptionId = Get-AutomationVariable -Name "AzureSubscriptionId"
    $cloudServiceName = Get-AutomationVariable -Name "CloudServiceName"
    $workerRoleInstanceName = Get-AutomationVariable -Name "WorkerRoleInstanceName" 
    
    $myCredential = Get-AutomationPSCredential -Name "Chun Lin"
    Add-AzureAccount -Credential $myCredential
    
    Select-AzureSubscription -SubscriptionId $azureSubscriptionId

    Write-Output "Restarting for cloud service: $cloudServiceName."

    Reset-AzureRoleInstance -ServiceName $cloudServiceName -Slot "Production" -InstanceName $workerRoleInstanceName -Reboot

    Write-Output "Restarted successfully!"
}

In case you wonder where I defined the values of variables such as AzureSubscriptionId, CloudServiceName, and WorkerRoleInstanceName, as well as the Automation PowerShell credential, they are all easily found in the Azure Portal under the “Shared Resources” section of the Azure Automation Account.

variables-and-credentials-in-automation.png
Providing credentials and variables for the Runbook.

After setting up the Runbook, we need to define schedules in the Automation Account and then link them to the Runbook.

setting-schedules-for-automation.png
Setting up schedule and linking it to the Runbook.

There is another tool in the Azure Portal that I find very useful for debugging the PowerShell script in the Runbook. It is called the “Test Pane”. With it, we can easily find out whether the PowerShell script is correctly written to produce the desired outcome.

test-pane.png
Test Pane available in Runbook.

After that, we can easily get a summary of how the job runs on Azure Portal, as shown in the following screenshot.

azure-automation.png
Job Statistics of Azure Automation.

Yup, that’s all I learnt in December while everyone was enjoying the winter festivals. Please comment if you find a better alternative to handle the challenges above. Thanks in advance, and happy new year to you!


Journey to Microsoft Azure: Good and Bad Times

I told my friends about the problems I encountered on Microsoft Azure. One of them, Riza, then asked me to share my experience of hosting web applications on Azure during the Singapore Azure Community meetup two weeks ago.

Azure Community March Meetup in Microsoft Singapore office. (Photo credit: Riza)

Problems with On-Premise Servers

Our web applications were hosted on-premise for about 9 years. Recently, we realised that our systems were running slower and slower, and clients kept receiving timeout exceptions. At the same time, we also ran out of storage space. We had to drive all the way to the data centre, which is about 15 km away from our office, just to connect a 1 TB external hard disk to our server.

Hence, in one of our company meetings in June, we finally decided to migrate our web applications and databases to the cloud. None of the developers besides me knew about cloud hosting, so we all agreed to use Microsoft Azure, the only cloud computing platform that I was familiar with.

Self Learning Microsoft Azure on MVA

When I first heard last year that the top management of our company intended to migrate our web applications to the cloud, I had already started learning Azure on Microsoft Virtual Academy (MVA) in my own time and at my own pace.

MVA is an online learning platform that offers free IT training to the public, including some useful introductory courses on Microsoft Azure, as listed below.

  1. Establish Microsoft Azure IaaS Technical Fundamentals
  2. Windows Azure for IT Pros Jump Start
  3. Microsoft Azure IaaS Deep Dive Jump Start
  4. SQL Server in Windows Azure Virtual Machines Jump Start

If you have noticed, the courses above are mostly related to IaaS. This is because IaaS was the most straightforward way for us to migrate our systems and databases from on-premise to the cloud. If we had chosen PaaS, we would have needed to redo our entire code base.

You can enjoy the fun shows presented by David and David on MVA

If you are more into reading books, you can also check out some free eBooks about Microsoft Azure available on MVA. Personally, I didn’t read any of the books because I found watching the MVA training videos far more interesting.

I learnt after work and during weekends. I started learning Azure around March, and the day we did the migration from on-premise to Azure was in July. So I basically had a crash course in Azure over just four months.

Looking back, I would not recommend this learning approach. If you are going to learn Azure, it’s important to understand the key concepts by reading books and talking to people who are more experienced with Microsoft Azure and networking. Otherwise, you might run into problems that are hard to fix at a later stage.

Migration at Midnight

Before doing a migration, we had to do some preparation work.

Firstly, we called our clients one by one. Because we also hosted clients’ websites on our server, we needed to inform them to update the A records in their DNS. Later, we found out that they should in fact have been using CNAME records, so that a change of IP address on our side wouldn’t affect them.

Secondly, we prepared a file called app_offline.htm. This file is placed in the root folder of a web application hosted on our on-premise server; it shows a page telling our online users that the application is under maintenance, no matter which web page they visit.

Website is under maintenance. Sorry about that!

Finally, we backed up all the databases running on our on-premise servers. Because our databases were big, it took about 20 to 30 minutes just to back up one database. Of course, this could only be done right before we migrated to the cloud.

We chose to do the migration at midnight because we had many online transactions going on during the day. In our company, only my senior and I were in charge of the migration. The following schedule lists the main activities during our midnight migration.

  • 2am – 3am: Uploading app_offline.htm and backing up databases
  • 3am – 4am: Restoring databases on Azure
  • 4am – 5am: Uploading web applications to Azure and updating DNS in Route 53

Complaints Received on First Day after Migration

We needed to finish the migration by 5am because that is when our clients start logging in to our web applications. So, everything was done in a rush, and we received a number of calls from our clients after 6am that day.

Some clients complained that our system had become very slow. It turned out that this was because we had not put our web application and databases in the same virtual network (VNet). Without them in the same VNet, every time our web application called the databases, the traffic had to go through the Internet instead of an internal connection. The connection was therefore slow and expensive (Azure charged us for outbound data transfer).

We also received calls complaining that websites were gone. That was actually caused by clients not updating their DNS records fast enough.

Another interesting problem was that part of our system was rejected by a client’s network, because they only allowed traffic from certain IP addresses. So, we had to give them the new IP address of our Azure server before everything could work on their side again.

Downtime: The Impact and Microsoft Responses

The web applications have been running on Azure for about 8 months, since July 2014. We encountered roughly 10 downtimes. Some were because of our own wrong setup; some were due to Azure platform errors, as reported by the Microsoft Azure team.

Our first downtime happened on 4 August 2014, from 12pm to 1:30pm. We expect high traffic to our websites at noon, so the downtime caused us to lose a large amount of online sales. The Microsoft Azure team later reported that the cause was that all our deployments were in the affected cluster in the Southeast Asia data centre.

Traffic Manager Came to Rescue

That was when we started to plan hosting a backup of all our web applications in another Azure data centre. We then used Traffic Manager to do failover load balancing, so that when our primary server went down, the backup server would still be there running fine.

Azure Traffic Manager helps to redirect traffic to deployments in another DC when current DC fails to work.

In their reply, the Microsoft Azure team also mentioned that the uptime SLA for virtual machines requires 2 or more instances. Hence, they highly recommended implementing an availability set configuration for our deployment. Before that, we had always thought it was sufficient to have one instance running. However, planned maintenance in Azure was in fact quite frequent, and sometimes the maintenance took a long time to complete.

Database Mirroring: DB Will Always be Available

So, in addition to Traffic Manager, we also applied database mirroring to our setup. We then had three database servers instead of just one: one as principal, one as witness, and one as mirror. The steps on how we set that up can be found in another post of mine.

Elements in my simple database mirroring setup.

With all of this set up, we thought downtime would not happen again. However, we soon realised that the database mirroring was not working.

When the principal went down, there was an automatic failover. However, none of our web applications could connect to the mirror. Also, when the original principal came back online, it would remain a mirror until I did a manual failover. After a few experiments with Microsoft engineers, we concluded that it could be due to the fact that our web applications were not in the same virtual network as the database instances.
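
For context, an ADO.NET client normally has to be told about the mirror explicitly: the connection string for a mirrored database usually declares a failover partner, roughly as sketched below with hypothetical server and database names. Whether or not that alone would have helped, the virtual network issue above still had to be resolved.

// A sketch of a SqlClient connection string for a mirrored database, using
// hypothetical server names. "Failover Partner" tells the client which server
// to try when the principal is unreachable.
var connectionString =
    "Data Source=principal-server;" +
    "Failover Partner=mirror-server;" +
    "Initial Catalog=MyDatabase;" +
    "User ID=<login ID>;Password=<password>;";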

Availability Set: At Least One VM is Running

Up to this point, I haven’t talked about configuring two virtual machines in an availability set. That is to make sure that, within the same data centre, when one of the virtual machines goes down, another is still up and running. However, because our web applications were all using an old version of the .NET Framework, the Azure Redis Cache service couldn’t even help.

Our web applications use session state a lot. Hence, without Redis as an external session state provider, we had no choice but to use SQL Server as the external session state provider. Otherwise, we would be limited to running the web applications on only one instance.

Soon, we found out that we couldn’t even use SQL Server mode for session state, because some of the values stored in our session were not serialisable. We had no option but to rely on Traffic Manager at that moment.
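
To illustrate the constraint: the SQLServer (and StateServer) session state modes serialise session values, so every type stored in the session must be serialisable. The class below is a hypothetical example, not one from our code base.

// Requires: using System; using System.Collections.Generic;
// SQLServer session state mode serialises session values, so types stored in
// the session must be marked [Serializable] (or otherwise serialisable).
[Serializable]
public class ShoppingCart
{
    public List<string> ProductCodes { get; set; }
    public decimal Total { get; set; }
}

// Session["Cart"] = new ShoppingCart();  // works with SQLServer mode only if serialisable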

In October 2014, a few days after we encountered our third downtime, Microsoft Azure announced a new distribution mode in Azure Load Balancer called Source IP Affinity. We were so happy when we heard that, because it meant sticky sessions would be possible on Azure. Soon, we successfully configured a second instance in the same availability set.

Source IP Affinity

High Availability

After all this had been done, there were still downtimes or restarts of one of the virtual machines. However, thanks to the load balancer and Traffic Manager, our websites stayed up and running. Regarding the random restarts of virtual machines, the Microsoft Azure team investigated and identified that some of them were due to platform bugs.

There is still more work to be done to achieve high availability for our web applications on Azure. If you are interested in finding out more about high availability and disaster recovery on Azure, please read this article from Microsoft Azure.

Migrating Back to On-Premise?

When we were still on-premise, we had only one web server and one database server. However, when we moved to Azure, we had to set up seven servers. So, it was a challenge to explain the increase in cost to our managers.

Sometimes, our developers would also be asked by managers whether moving back to on-premise was a better option. I have no answer for that. However, if we migrated back to on-premise and a downtime happened, who would be in charge of fixing the problems quickly?

Hence, what we can do now as developers is to learn as much as we can about improving the performance and stability of our web applications on Azure. In addition, we will also need to seek help from the Microsoft Azure team, if necessary, to introduce new cloud solutions to our web applications.

Claudia Madobe, the heroine of Microsoft Azure, is cute but how much do we really know about her? (Image Credit: Microsoft)

Setting Up MS SQL Server on Azure Virtual Machine

MS SQL Server 2012 + Azure VM

So, now we have an ASP .NET web application running on Microsoft Azure. What we are going to do next is to host our MS SQL Server in the cloud as well.

There are two options available in Microsoft Azure to host our SQL database. One is the well-known Azure SQL Database, a Platform as a Service implementation of a relational database service in the cloud. The other option was introduced with the new Infrastructure as a Service capabilities of Microsoft Azure: it is now possible to easily deploy instances of MS SQL Server in an Azure Virtual Machine.

Azure SQL Database or SQL Server in Azure VM?

Personally, I prefer to deploy SQL Server directly in a virtual machine. At least the entire process looks about the same as what I have already done on our on-premise database server. So, having SQL Server deployed on an Azure virtual machine means that the developers do not need to make huge changes to our existing applications. In addition, migrating existing applications to the cloud normally requires emulating on-premises behaviour. In short, choosing SQL Server in an Azure virtual machine saves time on migration.

The following is a nice decision diagram that I found on an MSDN blog to help us choose which option to use. There is also a comparison summary of the two options, Azure SQL Database and SQL Server in an Azure Virtual Machine.

To use Azure SQL Database or SQL Server in Azure VM? (Image Credit: MSDN Windows Azure Blog)

 

Creating the Virtual Machine with MS SQL Server Installed

The entire process of creating a virtual machine to host MS SQL Server is similar to creating a virtual machine for Windows Server. The only main difference is probably choosing an appropriate image. There are a few editions of SQL Server 2012 to choose from, and you can find a comprehensive comparison of them on the MSDN website.

Choose "SQL Server 2012" image to deploy MS SQL Server on the new virtual machine.

The following table shows the pricing of each edition running on an Azure VM as well as the disk sizes available. Here I only pay attention to the memory-intensive instances, i.e. A5, A6, and A7. They have larger RAM and disk sizes and are thus considered optimal for hosting databases and other high-throughput applications. The data shown in the table applies to virtual machines deployed in Asia Pacific Southeast, i.e. Singapore.

Asia Pacific Southeast (Singapore) VM pricing for each edition of SQL Server (screenshot taken on 20 April 2014)

Connect to SQL Server Database Engine on Azure VM

After the virtual machine is up and running, we can immediately RDP into the VM. There, we just need to launch Microsoft SQL Server Management Studio to access the database with Windows Authentication.

Running SQL Server Management Studio on the virtual machine.

Open TCP Port 1433

SQL Server typically uses TCP port 1433 for remote connections to the database. So, we need to add an endpoint as well as open the port in the virtual machine’s firewall. However, to reduce exposure to attacks, it’s recommended to specify a different public port when creating the endpoint in Azure.

1433: A TCP port normally used by MS SQL Server for remote connection to the database.

SQL Server Authentication

We need to change the server authentication to “SQL Server and Windows Authentication mode”. This enables us to create logins in SQL Server which are not based on Windows user accounts; both the login IDs and passwords are stored in SQL Server. This allows SQL Server to continue supporting our third-party applications that require SQL Server Authentication. After that, we just right-click on the server in the Microsoft SQL Server Management Studio Object Explorer to restart it.

SQL Server and Windows Authentication Mode

Connecting Application to the SQL Server

To connect your ASP .NET web application to the database, you can just key in the server name and port number, together with the login ID and password, in the following connection string in web.config, which is used to connect to the instance of SQL Server running on the Azure VM.

<add key="strDBconn" value="Data Source=****.cloudapp.net,<port-number>;Initial Catalog=<database name>;UID=<login ID>;PWD=<login password>" />
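
On the application side, reading this setting and opening a connection could look roughly like the sketch below; the key name strDBconn matches the entry above, and the query is just a placeholder to verify connectivity.

// A minimal sketch of using the connection string defined above.
// Requires: using System.Configuration; using System.Data.SqlClient;
string connectionString = ConfigurationManager.AppSettings["strDBconn"];

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (var command = new SqlCommand("SELECT GETDATE()", connection))
    {
        // A trivial round trip to confirm the app can reach SQL Server on the Azure VM.
        var serverTime = command.ExecuteScalar();
    }
}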

Conclusion

The steps to deploy Microsoft SQL Server on an Azure virtual machine are quite straightforward. There is also official, detailed documentation about provisioning a SQL Server Azure Virtual Machine. I like one of its diagrams, which shows the two main connection paths. The complete diagram is shown below.

SQL Server Azure VM Connection Paths (Image Credit: Microsoft Azure Documentation Center)

In addition, there are some other online resources with more detailed discussions on several related topics.

Deploy MongoDB to Azure: It’s Never Been Easier

WebMatrix + MongoLab + Windows Azure

This post continues the story of my MongoDB self-learning back in January. The theme of my March self-learning is Windows Azure, so I guess it’s a good opportunity to combine the two areas of knowledge. So, let’s continue the story now.

Basically, after the one-month MongoDB learning in January, I successfully built a simple web application that allows users to add pinpoints on Google Maps and stores that info in MongoDB. However, all of that was happening on my local machine. So, how do we deploy it on, for example, Azure for the public to access?

Fortunately, with the help of Microsoft WebMatrix, the whole process is rather simple and straightforward.

Deploy The Website in 3 Simple Steps

Firstly, there is a Publish feature available in WebMatrix. After adding your Windows account in WebMatrix, a simple Publish interface allows you to publish the current website to either a new site or an existing site on Azure.

Create a new website on Windows Azure with WebMatrix.

Secondly, we need to create a new MongoDB database in the cloud. The Windows Azure Store offers a web-based cloud MongoDB management tool called MongoLab. Currently, MongoLab provides a free sandbox plan for developers to try out MongoDB on Windows Azure. It also provides other plans with Replica Set clusters on shared or dedicated Azure VMs; normally those are for large, heavy-traffic sites. For learning purposes, a free plan with 0.5 GB of storage is enough.

MongoLab: One of the MongoDB hosting platforms available on Windows Azure.

Thirdly, once the MongoLab service is added, we can happily get the Connection Info of the database and paste it into our code in WebMatrix.

var monk = require('monk');
var dbc = monk('mongodb://GCLMongoDB:.../GCLMongoDB');
Connection Info can be found on Windows Azure Portal.

Finally, we can just hit the Publish button on WebMatrix to launch the website on Windows Azure with MongoDB. Ta-da!

MongoLab Helpful Features

In MongoLab, we can view the documents in either a list view or a table view. The list view lets us read all the documents stored in the collection in JSON format, and we can scroll through a consecutive set of documents, each shown in its entirety. By clicking on a document, we can edit or delete it.

Documents can be presented as JSON List View in MongoLab.

In the table view, we get to choose the format of the displayed table by defining how to translate the JSON documents in the collection into a 2D table. This is especially useful for those who are familiar with relational databases but still new to document databases.

Documents can be shown as table in MongoLab as well.

In addition, there is an editor for writing queries. A friendly quick reference is displayed at the side of the page to guide new developers on how to write queries.

We can write queries in MongoLab too!

For database backups, there is a charge of $0.50 per run plus $0.02 per run per GB if we store our backups in a MongoLab-owned cloud container. Hence, even for the small database I have above (2.49 KB), 30 backups a month would already cost about $15 (30 runs × $0.50 per run, with the per-GB charge being negligible at this size).

Conclusion

My friend once said that I use too many Microsoft developer tools and products without knowing what they really handle for me in the background. I think that’s kind of true. As we can see, deploying both the website and MongoDB on Windows Azure took only a few simple steps. Thus, I’d encourage learning this way only if you are totally new to MongoDB and just want an overview of how a Node.JS website can work together with MongoDB in the cloud.

If you want to learn more about MongoDB, you can also check out the slides from the presentation at Singapore MongoDB User Group Meetup #2. The first half of the slides covers some fundamental knowledge about MongoDB, which is quite useful for those who are new to this document database.