Algorithm complexity — asymptotic analysis

Glauber Cini
Mar 15, 2021


For my first post, I decided to write about asymptotic analysis, a topic that, in my opinion, should be basic knowledge for every software engineer.

So, instead of basing this post on several books, references, or many other blogs, I prefer to go straight to the point and explain the subject in a brief but enlightening way.

The goal of this post is not to become the “ultimate reference” on the topic (it is light-years away from that) but rather to spark some interest in its relevance and to serve as a reference for other posts on this blog.

Big-O and the amount of time

The image above was taken from a great but concise post written by Daniel Miessler; you can read it here.

Generally speaking, we can say that Big-O is a measure that lets us compare the efficiency of algorithms given the size of the input data.

Whenever I am going to work with a large volume of data, I use charts like the one in this post as a “thinking aid”. The pain of building inefficient systems is far greater than the effort spent designing and architecting them better in the first place.

To make it easier to understand how much time each complexity takes, we assume throughout this post that every operation takes 1 second to complete.

So…

Constant Complexity -> O(1):
The most common operations here are simple ones, such as arithmetic or logic operations.
Example:

int x = 10 * n;
//or
if(x < 10)...

Both snippets “always take the same amount of time” to complete.
In this case we can say that, for any input size “n”, the algorithm will always take 1 second to finish.
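Another everyday example of constant time is direct access by index; a minimal sketch (the array and values below are made up for illustration):

int[] prices = { 10, 25, 7, 42 };
int third = prices[2];      // direct index access: O(1), regardless of the array length
bool isCheap = third < 10;  // a single comparison: also O(1)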

Logarithmic Complexity -> O(log n):
The logarithm is the inverse of the exponential function. We can imagine (or even confirm in the chart above) that this class of algorithms has a low execution time relative to the size of its input.
Example:

public static int BinarySearch(int[] inputArray, int targetValue)
{
    int min = 0;
    int max = inputArray.Length - 1;

    while (min <= max)
    {
        // Midpoint of the current search range.
        int guess = (min + max) / 2;

        if (targetValue == inputArray[guess])
        {
            return guess;
        }
        else if (targetValue < inputArray[guess])
        {
            // Target is in the lower half; discard the upper half.
            max = guess - 1;
        }
        else
        {
            // Target is in the upper half; discard the lower half.
            min = guess + 1;
        }
    }

    return -1; // Not found.
}
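A quick usage sketch (the array below is just an illustration; remember that the input must already be sorted):

int[] sorted = { 1, 3, 5, 7, 9, 11 };
int position = BinarySearch(sorted, 7); // returns 3, after only a few halving steps
int missing = BinarySearch(sorted, 4);  // returns -1: the value is not present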

My line of thought for this kind of problem is: “any problem where each iteration lets me cut the problem size in half”. Binary search is perhaps the most classic example of this; however, it requires a sorted array to work. Here we already have some hints for future thinking, such as:

Is it better to keep my data sorted for searching later, or is the cost of keeping it sorted higher than the cost of searching an unsorted structure?

Speaking of time, it can also be estimated using the logarithm function (base 10 here, to match the powers of ten below; changing the base only changes the result by a constant factor). Such as:

Array of 1 item: log(1) = 0 (less than 1 second)

Array of 10 items: log(10) = 1 (seconds)

Array of 100 items: log(100) = 2 (seconds)

Array of 1000 items: log(1000) = 3 (seconds)
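If you want to check these numbers yourself, here is a small sketch using .NET’s Math functions (base 2 is the base binary search actually follows; the 1-second-per-operation analogy is still ours):

Console.WriteLine(Math.Log10(1000));  // 3     -> about 3 "seconds" in our analogy
Console.WriteLine(Math.Log(1000, 2)); // ~9.97 -> about 10 halving steps for 1000 items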

Linear Complexity -> O(n):
Linear complexity is perhaps the most common in day-to-day development, because its running time grows at the same rate as the input size.

There is no lack of examples of this complexity: any loop that goes through the entire input will be, at minimum, of linear complexity.
Example:

public static int UnsortedSearch(int[] inputArray, int targetValue)
{
    // Check every element until the target is found.
    for (int i = 0; i < inputArray.Length; i++)
    {
        if (inputArray[i] == targetValue)
        {
            return i;
        }
    }

    return -1; // Not found: the whole array was scanned.
}
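A usage sketch, mirroring the binary search example but with no ordering requirement (the array is again just an illustration):

int[] unsorted = { 9, 1, 7, 3, 11, 5 };
int position = UnsortedSearch(unsorted, 3); // returns 3, after scanning 4 elements
int missing = UnsortedSearch(unsorted, 4);  // returns -1, after scanning all 6 elements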

Speaking of “search algorithms”, the example above comes in handy. It works whether the list is sorted or not, at the cost that its Big-O worst case, “in seconds”, is the same as the size of the array.
So we have:

Array with 1 item: 1 second

Array with 10 items: 10 seconds

Array with 100 items: 100 seconds

Array with 1000 items: 1000 seconds

Linearithmic Complexity -> O(n log n):
This one can be explained as a combination of the two previous complexities: we can think of it as if, for each iteration of a linear algorithm O(n), we had an extra step of complexity O(log n).
I believe a closer-to-reality example is Heapsort; you can find a Java implementation, followed by a better explanation than mine, here.

class HeapSort
{
    public static void Sort(int[] elements)
    {
        // Phase 1: turn the array into a max-heap, O(n).
        BuildHeap(elements);

        // Phase 2: repeatedly move the largest element to the end
        // and restore the heap: n swaps * O(log n) heapify = O(n log n).
        for (int swapToPos = elements.Length - 1; swapToPos > 0; swapToPos--)
        {
            ArraySwap(elements, 0, swapToPos);

            Heapify(elements, swapToPos, 0);
        }
    }

    private static void BuildHeap(int[] elements)
    {
        int lastParentNode = elements.Length / 2 - 1;

        for (int i = lastParentNode; i >= 0; i--)
        {
            Heapify(elements, elements.Length, i);
        }
    }

    private static void Heapify(int[] heap, int length, int parentPos)
    {
        while (true)
        {
            int leftChildPos = parentPos * 2 + 1;
            int rightChildPos = parentPos * 2 + 2;

            // Find the largest value among the parent and its children.
            int largestPos = parentPos;

            if (leftChildPos < length && heap[leftChildPos] > heap[largestPos])
            {
                largestPos = leftChildPos;
            }

            if (rightChildPos < length && heap[rightChildPos] > heap[largestPos])
            {
                largestPos = rightChildPos;
            }

            // The heap property already holds: stop sinking.
            if (largestPos == parentPos)
            {
                break;
            }

            ArraySwap(heap, parentPos, largestPos);

            parentPos = largestPos;
        }
    }

    private static void ArraySwap(int[] vet, int offset1, int offset2)
    {
        int tmp = vet[offset1];
        vet[offset1] = vet[offset2];
        vet[offset2] = tmp;
    }
}

Algorithms in this category tend to sit in a middle ground: “not as efficient as the linear ones, but not as slow as the quadratic ones”.

The time spent here is the product (not the sum) of the two previous complexities: n iterations, each costing about log n.
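A quick usage sketch of the class above (the array is illustrative):

int[] data = { 42, 7, 19, 3, 25 };
HeapSort.Sort(data);
// data is now { 3, 7, 19, 25, 42 }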

Quadratic Complexity -> O(n²):
This complexity is easy to spot, because the analogy is very clear: it appears when we have one iteration inside another (a for inside a for). Each time we add a new iteration inside the existing iteration block, we also increase the exponent of the complexity; for example, a for inside a for inside a for is O(n³).

I usually treat solutions of this magnitude as the ones with the greatest initial risk of being inefficient, from the very first minute of code.

Algorithms that fit into this group usually have code close to the snippet below:

for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
    {
        //operations
    }
}
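As a more concrete (and hypothetical) illustration, a naive duplicate check compares every element against every other element:

public static bool HasDuplicates(int[] values)
{
    for (int i = 0; i < values.Length; i++)
    {
        for (int j = i + 1; j < values.Length; j++)
        {
            if (values[i] == values[j])
            {
                return true;
            }
        }
    }

    return false; // worst case: no duplicates, roughly n²/2 comparisons were made
}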

Here the time estimate is useful again:

Input size with 1 element = 1 second

Input size with 10 elements = 100 seconds

Input size with 100 elements = 10,000 seconds

Input size with 1000 elements = 1,000,000 seconds

Exponential Complexity -> O(2ⁿ):
This group of algorithms usually fits thoughts like: “which values [of a list], taken together, would produce a certain result?”. This differs from plain brute force in that it does not include permutations (3 + 5 is the same as 5 + 3). In college, the classic examples are the recursive implementation of the Fibonacci series or the Tower of Hanoi. On a daily basis, perhaps the closest example is something like “I need to find a value X that is given by the sum of some items in a list, but we do not know which ones”, or “if I can choose 3 pizza flavors, how many possible combinations can I make?”. In short: the search for all “subsets of a set”.

class Subsets
{
    public static List<decimal[]> Create(decimal[] values)
    {
        var result = new List<decimal[]>();
        var subset = new Stack<decimal>();
        int idx = 0;

        Seek(values, result, subset, idx);

        return result;
    }

    private static void Seek(decimal[] values, List<decimal[]> result, Stack<decimal> subset, int index)
    {
        // Every non-empty partial subset built so far is a valid subset.
        if (subset.Count > 0) result.Add(subset.ToArray());

        for (int i = index; i < values.Length; i++)
        {
            subset.Push(values[i]); // include values[i] in the current subset

            Seek(values, result, subset, i + 1);

            subset.Pop(); // backtrack: exclude values[i] again
        }
    }
}

I have, in fact, used this algorithm several times to help coworkers find out which values were producing a certain sum, in order to track down possible financial or accounting differences.
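A sketch of that idea (the values, the target sum and the use of System.Linq’s Sum are assumptions for illustration):

decimal[] entries = { 10.50m, 3.25m, 7.00m, 4.75m };
decimal target = 15.25m;

foreach (var subset in Subsets.Create(entries))
{
    if (subset.Sum() == target) // LINQ Sum over the candidate subset
    {
        Console.WriteLine(string.Join(" + ", subset)); // e.g. "4.75 + 10.50"
    }
}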

However, this group of algorithms will always have long running times, as we can see below:

Input of size 1 = 2 seconds

Input of size 10 = 1024 seconds

Input of size 100 = 1267650600228229401496703205376 seconds

Input of size 1000 = ? seconds

Factorial Complexity -> O(n!):

This class of problems is worth knowing about, but not worth going deeper into here on the blog.
Factorial problems are, for example, those where we need to search through all possible combinations and permutations to find the best result.
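As a rough sketch of why n! explodes so quickly, the snippet below enumerates every ordering of a list (the class and its names are made up for illustration; it assumes the usual System.Collections.Generic namespace). With only 10 elements there are already 3,628,800 permutations.

class Permutations
{
    // Recursively generates every ordering of the items from position 'start' onward.
    public static void Generate(int[] items, int start, List<int[]> output)
    {
        if (start == items.Length - 1)
        {
            output.Add((int[])items.Clone()); // one complete ordering
            return;
        }

        for (int i = start; i < items.Length; i++)
        {
            (items[start], items[i]) = (items[i], items[start]); // choose items[i] for this position
            Generate(items, start + 1, output);
            (items[start], items[i]) = (items[i], items[start]); // backtrack
        }
    }
}

Calling Permutations.Generate(new[] { 1, 2, 3 }, 0, list) fills list with all 3! = 6 orderings.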

Conclusion

The blog does not intend to keep this theoretical bias; instead, it will try to bring in technology problems faced in the real world.

Even so, having an idea of the complexity of an algorithm is an indispensable step when looking for new approaches to code and performance gains.
