Why you shouldn’t trust the documentation?

Today Microsoft Azure introduced the new F-series VM sizes (you can read about it here) and our software faced a bizarre issue.

When we create a new instance in Azure, we have an implementation that chooses the VM size based on several criteria. One of them is the number of data disks we need. As you probably know, each VM size comes with a specific number of CPU cores and amount of RAM, and has a limit on how many data disks can be attached to it.

What the algorithm does is simple: get the list of all VM sizes available in a specific region, filter them by our criteria, and then choose the cheapest one from the list.
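A minimal sketch of that selection logic (the prices, field names, and the `min_data_disks` criterion here are illustrative, not our actual implementation):

```python
from dataclasses import dataclass

@dataclass
class VmSize:
    name: str
    cores: int
    memory_mib: int
    max_data_disks: int
    hourly_price: float  # illustrative numbers; real prices come from a separate pricing source

def choose_vm_size(available, min_data_disks):
    """Filter by the data-disk criterion, then pick the cheapest match."""
    candidates = [vm for vm in available if vm.max_data_disks >= min_data_disks]
    return min(candidates, key=lambda vm: vm.hourly_price, default=None)

sizes = [
    VmSize("Standard_A2_v2", cores=2, memory_mib=4096, max_data_disks=4, hourly_price=0.091),
    VmSize("Standard_F1s",   cores=1, memory_mib=2048, max_data_disks=4, hourly_price=0.062),
]
# Once the API reports 4 data disks for Standard_F1s, the cheaper F1s wins:
print(choose_vm_size(sizes, 3).name)  # Standard_F1s
```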

Up until now, when we had 3 data disks the algorithm chose the “Standard_A2_v2” VM size, which has 2 CPU cores, 4 GiB of RAM, and supports up to 4 data disks. Today the chosen VM size was “Standard_F1s”, which (according to the spec) has 1 CPU core, 2 GiB of RAM, and supports up to 2 data disks.

Our software started to fail with multiple “Cannot access memory” errors, because we assumed the VM has at least 4 GiB of RAM (and we utilize it pretty well).

The question we tried to answer is: how did we end up with a VM size that supports up to 2 data disks when the algorithm looked for one that supports at least 3?

Reviewing the code didn’t provide the answer, so we went to check the API:

PS C:\Users\alexander> Login-AzureRmAccount
PS C:\Users\alexander> Get-AzureRmVmSize -Location "North Europe" | Sort-Object Name | ft Name, NumberOfCores, MemoryInMB, MaxDataDiskCount -AutoSize

Name                   NumberOfCores MemoryInMB MaxDataDiskCount
----                   ------------- ---------- ----------------
Standard_F1s                       1       2048                4
Standard_F2s                       2       4096                8

From the documentation (that can be found here) we see that “Standard_F1s” supports a maximum of 2 data disks, while the API above reports 4.
So it looks like someone here is lying 🙂

The real problem that caused our software to fail is a hidden assumption we made: that instances with more than 2 data disks come with a minimal amount of RAM (at least 4 GiB). The solution is simple: add another criterion to the algorithm.
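A hedged sketch of what that extra criterion could look like (the field names, prices, and thresholds are illustrative, not our production code):

```python
def choose_vm_size(available, min_data_disks, min_memory_mib):
    """Filter by data disks AND a minimum-RAM criterion, then pick the cheapest match."""
    candidates = [
        vm for vm in available
        if vm["max_data_disks"] >= min_data_disks and vm["memory_mib"] >= min_memory_mib
    ]
    return min(candidates, key=lambda vm: vm["price"], default=None)

sizes = [
    {"name": "Standard_F1s",   "memory_mib": 2048, "max_data_disks": 4, "price": 0.062},
    {"name": "Standard_A2_v2", "memory_mib": 4096, "max_data_disks": 4, "price": 0.091},
]
# Requiring at least 4 GiB of RAM explicitly now excludes the 2 GiB F1s,
# regardless of what MaxDataDiskCount the API reports:
print(choose_vm_size(sizes, 3, 4096)["name"])  # Standard_A2_v2
```

The point of the extra parameter is that the memory requirement becomes an explicit input instead of an assumption riding on the disk count.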

The lesson I learned (and not for the first time) is: don’t trust the documentation and don’t make hidden assumptions… They will eventually break and hit you back.

On the same topic: can you guess the complexity of the std::list::size() function in C++11?

Hint: According to the documentation, it should be O(1).

– Alexander
