Nowadays everything related to the cloud and serverless computing has become very popular, and Azure Functions is one such offering. Microsoft defines it as:

Azure Functions is a solution for easily running small pieces of code, or “functions,” in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. Functions can make development even more productive, and you can use your development language of choice, such as C#, F#, Node.js, Python or PHP. Pay only for the time your code runs and trust Azure to scale as needed. Azure Functions lets you develop serverless applications on Microsoft Azure.

And I really could write a lot about how comfortable it is to use, how easy it is to implement, and about the other great features of this solution. Azure Functions works perfectly when you do not need a huge backend. I have tested this scenario many times and have always been satisfied.

Of course, I made some mistakes at the beginning, and I would like to describe the two most important ones.

Pre-compilation

The first of those mistakes concerns pre-compilation of Azure Functions, or to be more precise, the lack of it. In the Azure Portal you can easily create and modify a function without any additional tools, which is very convenient. You get error reports as you work and you can test your function right there. But this approach has one major problem: each function that has not been used for more than 5 minutes is put into idle mode. The next execution of such a function takes longer than a normal one, because the function has to be compiled and indexed before it can run again. This is called a cold start. This scenario is typical for the consumption pricing plan. Of course, that extra time will not increase your bill for the usage of Azure Functions.

There is a very easy solution to this issue: upload an already compiled function to Azure. This means creating a normal DLL and defining which methods should be executed as functions; this configuration is placed in the function.json file (see the sketch after the list below). The main benefit is a shorter response time, because the function is already compiled. There are also other, smaller benefits:


  • we can use the full support of Visual Studio while writing the code
  • it is easier to write unit tests
  • it is easier to connect such a solution to CI
  • moreover, you can turn already implemented code into an Azure Function quite fast
  • you do not need a project.json file to manage your NuGet dependencies.

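As a reference point, a function.json for a pre-compiled, HTTP-triggered function could look roughly like this (the assembly, namespace, and method names are placeholders, not from a real project):

```json
{
  "scriptFile": "..\\bin\\MyFunctions.dll",
  "entryPoint": "MyFunctions.ProxyFunction.Run",
  "disabled": false,
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

The scriptFile property points at the compiled DLL, and entryPoint names the static method to run, so the runtime can skip the compilation step that causes the slow cold start described above.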

I should probably add a few words of comment about the first three points. Remember that right now there is no fully working support for Azure Functions in Microsoft's tools. There is no extension for Visual Studio 2017; you can only find a beta version of such an extension for Visual Studio 2017 Preview, and I believe nobody wants to install the same product on their machine in two different versions. For Visual Studio 2015 you can find an extension that helps with Azure Functions, but it is still a preview version and IntelliSense does not work there.

Atomic action

The second mistake was related to the size of a function. My function was too big and performed more than one operation. According to Microsoft's recommendation, each function should do only one thing; we can say that a function should perform an atomic action. In line with that, Microsoft defined a maximum execution time for a function: 5 minutes. Any execution that takes longer will be cancelled.

So we should think about what an atomic action actually means. I will start with the incorrect behaviour. One of my functions was a proxy: it exposed a WebAPI endpoint that accepted data from a form filled in by a user and sent it to the next service by calling that service's WebAPI. It was just a few lines of code.

Was it an atomic action or not? Unfortunately, in the context of Azure Functions, it was not.
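To make the anti-pattern concrete, a minimal sketch of such a proxy as a pre-compiled C# function might look like this (the downstream URL and all names here are placeholders for illustration, not the original code):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class ProxyFunction
{
    private static readonly HttpClient Client = new HttpClient();

    // Everything happens inside one HTTP-triggered function:
    // it receives the form data and immediately forwards it downstream,
    // so the user waits for the external call to finish.
    [FunctionName("ProxyFunction")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req)
    {
        string formData = await req.Content.ReadAsStringAsync();

        // Placeholder URL of the next service.
        HttpResponseMessage response = await Client.PostAsync(
            "https://external.example/api/items",
            new StringContent(formData));

        return new HttpResponseMessage(response.StatusCode);
    }
}
```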

The function from the example should be divided into two smaller ones. The first function should collect the data entered by the user and put it into a queue for later processing. The second function should then process the data from the queue; in our example, it should call the external WebAPI. With this approach we can very quickly send the user a confirmation that we have collected the data and queued it for later processing. Using an Azure Storage Queue also guarantees the safety of the data: Microsoft claims that it is not possible to lose data from this type of queue. Only the later processing remains, and with such an architecture we can do it without blocking the user.
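A rough sketch of that split, assuming an Azure Storage Queue named form-submissions and the same placeholder URL as before, could look like this:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SplitFunctions
{
    private static readonly HttpClient Client = new HttpClient();

    // Function 1: only collects the data and puts it on the queue,
    // so the user gets a confirmation almost immediately.
    [FunctionName("CollectData")]
    public static async Task<HttpResponseMessage> Collect(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [Queue("form-submissions")] IAsyncCollector<string> queue)
    {
        string formData = await req.Content.ReadAsStringAsync();
        await queue.AddAsync(formData);
        return new HttpResponseMessage(HttpStatusCode.Accepted);
    }

    // Function 2: triggered by the queue, performs the actual call
    // to the external WebAPI without blocking the user.
    [FunctionName("ProcessData")]
    public static async Task Process(
        [QueueTrigger("form-submissions")] string formData)
    {
        await Client.PostAsync(
            "https://external.example/api/items",
            new StringContent(formData));
    }
}
```

Each function now does one thing, and each easily fits within the 5-minute execution limit.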

This approach is not perfect, and you should also remember about some drawbacks. The first of them is that you will need to maintain more elements in the cloud. Maybe more important is error handling: in one bigger function we could handle all errors in one place. With the recommended approach we can still detect issues that occur while adding items to the queue and inform users about them, but it is much harder to inform the user about errors that occur in the second function. We need additional code to handle that, which means that in the end our solution becomes even more complicated.
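One possible shape of that additional code, sticking with the hypothetical names above: catch failures in the queue-triggered function and record them somewhere a separate notification mechanism can pick them up. Rethrowing instead would let the runtime retry the message and eventually move it to the poison queue.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ProcessWithErrorHandling
{
    private static readonly HttpClient Client = new HttpClient();

    [FunctionName("ProcessDataSafely")]
    public static async Task Run(
        [QueueTrigger("form-submissions")] string formData,
        [Queue("failed-submissions")] IAsyncCollector<string> failures)
    {
        try
        {
            // Placeholder URL of the external WebAPI.
            HttpResponseMessage response = await Client.PostAsync(
                "https://external.example/api/items",
                new StringContent(formData));
            response.EnsureSuccessStatusCode();
        }
        catch (HttpRequestException)
        {
            // Park the failed item so another component can notify
            // the user; this second queue name is an assumption.
            await failures.AddAsync(formData);
        }
    }
}
```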