Introduction to Asymptotic Notations Big O (O), Big Omega (Ω) and Big Theta (Ө)
In this article, I will explain the theory behind these asymptotic notations.
Assume that you are given a problem and asked to write a program to solve it. If you have previous coding experience, you know that there are usually multiple ways to approach a problem. In computer science, we use different algorithms and data structures to solve problems, and there is a plethora of them to choose from when developing a solution. Thus, when we choose an algorithm or a data structure, we have to be conscious of the time and memory it takes. In computer science, we call this analysing the time and space complexity of an algorithm.
The time complexity of an algorithm quantifies the amount of time taken by the algorithm to run with respect to the size of the input.
The space complexity of an algorithm is the total space taken by the algorithm with respect to the input size. Note that space complexity includes both auxiliary space (the extra or temporary space used by the algorithm) and the space used by the inputs.
Usually, we prefer the algorithm that takes the least amount of time and memory to solve a problem; we try to be efficient. If you need to justify selecting one algorithm over a set of alternatives, time and space complexity analysis is one good basis. You may have seen these asymptotic notations a lot if you are experienced in working with data structures and algorithms.
Before you start designing or analysing an algorithm, you should get familiar with the terminology used in this context.
Big O Notation
Assume that you wrote an algorithm, executed it, and found that as the size of your input increases, the time the algorithm takes to execute increases as shown in the following graph. Refer to Graph_01 below.
The Y-axis denotes the time while the X-axis denotes the input size. The function f(n) represents your algorithm's time-input relationship. Now let's try to understand what the worst case, or the upper bound, of this function f(n) is. In order to analyse this, let's create another function cg(n) where cg(n) is greater than f(n) after a particular input value, which is n0 as per Graph_01 below. Here n0 should be greater than or equal to 1 because the input cannot be 0.
In other words, after a given input value n0, for any n value greater than n0, cg(n) is greater than or equal to f(n).
Thus, we say that if f(n) ≤ cg(n) for all n ≥ n0, where c and n0 are real numbers with c > 0 and n0 ≥ 1, then f(n) = O(g(n)).
Example: Go through the following example, where I have taken a sample function f(n) = 4n + 3 and g(n) = n. Now we need to see whether we can say f(n) = O(g(n)). How can we check that? We have to try to find values for c and n0 that fulfil the condition f(n) ≤ cg(n) for all n ≥ n0, where c > 0 and n0 ≥ 1. If we are successful, then we can say f(n) = O(g(n)) when f(n) = 4n + 3 and g(n) = n. See the proof below in Figure_01.
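If Figure_01 is hard to follow, here is a minimal numeric sketch of the same check in Python (my own illustration, not from the figure; the constants c = 5 and n0 = 3 are one valid choice, not the only one, since any c > 4 works with a suitable n0):

```python
# Numeric check of the Big O condition for f(n) = 4n + 3 and g(n) = n.
# With c = 5 and n0 = 3, we have 4n + 3 <= 5n for every n >= 3.

def f(n):
    return 4 * n + 3

def g(n):
    return n

c, n0 = 5, 3
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) <= 5*g(n) holds for all tested n >= 3, so f(n) = O(n)")
```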
Now we have found that when f(n) = 4n + 3 and g(n) = n, we can say that f(n) = O(g(n)). Notice that if g(n) = n satisfies f(n) = O(g(n)), then any function greater than n, that is g(n) = n^2, g(n) = n^3, ..., g(n) = n^n, g(n) = 2^n, will also satisfy it. Yet, when we are dealing with big O, we go for the least upper bound, or tight upper bound. A tight upper bound is the best of all the worst-case times that the algorithm can take.
Concluding the big O explanation: if your algorithm's running time follows a function like 4n + 3, then you can say its time complexity is O(n).
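To connect this to real code, consider the sketch below (the function and sample values are my own, not from the article). A single pass over the input does a constant amount of work per element, so its running time has the form an + b, and by the argument above its time complexity is O(n):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    # The loop inspects each element at most once, so the total work
    # grows linearly with len(items): the time complexity is O(n).
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([7, 3, 9, 4], 9))  # prints 2
```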
Big Omega (Ω) Notation
Again, let's assume that the following graph (Graph_02) represents your algorithm's time-input relationship. That means f(n) below shows how the execution time depends on the size of the algorithm's input.
Again, the Y-axis denotes the time while the X-axis denotes the input size. The function f(n) represents your algorithm's time-input relationship. Now let's try to understand what the best case, or the lower bound, of this function f(n) is. In order to analyse this scenario, let's create another function cg(n) where cg(n) is less than f(n) after a particular input value (which is n0 as per Graph_02 below). Here again n0 should be greater than or equal to 1 (n0 ≥ 1) because the input cannot be 0.
Thus, we say that if f(n) ≥ cg(n) for all n ≥ n0, where c and n0 are real numbers with c > 0 and n0 ≥ 1, then f(n) = Ω(g(n)).
We can take the same example as above.
Example: Let's take the same function f(n) = 4n + 3 and g(n) = n. Now we need to see whether we can say f(n) = Ω(g(n)). For that, we have to try to find values for c and n0 that fulfil the condition f(n) ≥ cg(n) for all n ≥ n0, where c > 0 and n0 ≥ 1. If we are successful, then we can say f(n) = Ω(g(n)) when f(n) = 4n + 3 and g(n) = n. See the proof below in Figure_02.
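As with Figure_02, here is a minimal numeric sketch of the check (my own illustration; c = 4 and n0 = 1 are one valid choice, since 4n + 3 ≥ 4n for every n ≥ 1):

```python
# Numeric check of the Big Omega condition for f(n) = 4n + 3 and g(n) = n.
# With c = 4 and n0 = 1, f(n) >= 4*g(n) holds for every n >= 1.

def f(n):
    return 4 * n + 3

def g(n):
    return n

c, n0 = 4, 1
assert all(f(n) >= c * g(n) for n in range(n0, 10_000))
print("f(n) >= 4*g(n) holds for all tested n >= 1, so f(n) = Ω(n)")
```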
So 4n + 3 is lower bounded by the function n. As with big O, let's see whether 4n + 3 is also lower bounded by n^2, n^3, and so on. To check this, as in the example above, we have to try to find values for c and n0 that fulfil the condition 4n + 3 ≥ cn^2 for all n ≥ n0, where c > 0 and n0 ≥ 1. But you can see that this is not possible. On the other hand, instead of higher-order functions like n^2 and n^3, let's take anything less than n, for example log n or log log n. If you try these, you will see that they satisfy the condition. However, here we take only the tight lower bound, which means the greatest lower bound: the largest function that still lower bounds f(n).
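A quick sketch makes it easy to see why n^2 cannot lower bound 4n + 3 (the constant 0.001 is an arbitrary choice for illustration): for any fixed c > 0, cn^2 eventually overtakes 4n + 3, so the condition fails for all sufficiently large n.

```python
# For any fixed c > 0, c * n^2 eventually exceeds 4n + 3,
# so 4n + 3 >= c * n^2 cannot hold for all large n.

def f(n):
    return 4 * n + 3

c = 0.001  # even a tiny constant eventually breaks the bound
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n, f(n) >= c * n * n)
# True for small n, then False once c * n^2 overtakes 4n + 3.
```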
Big Theta (Ө) Notation
You saw that in the Big O definition we found the upper bound of the function f(n), and in Big Omega we found the lower bound of the function f(n). Here we look at both the upper bound and the lower bound of the function f(n) by changing the c value of the cg(n) function. If the function f(n) is bounded by c1g(n) and c2g(n) after a given value of n (let's say n0), for all n ≥ n0, then we say f(n) = Ө(g(n)). These constants c1 and c2 can be different. Refer to Graph_03 below.
Here also the Y-axis denotes the time while the X-axis denotes the input size. The function f(n) represents your algorithm's time-input relationship. Here, any n value greater than n0 will satisfy c1g(n) ≤ f(n) and c2g(n) ≥ f(n).
Thus, we say that if c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0, where c1, c2 and n0 are real numbers with c1, c2 > 0 and n0 ≥ 1, then f(n) = Ө(g(n)).
Again, take the same example.
Example: Let's take the same function f(n) = 4n + 3 and g(n) = n. Now we need to see whether we can say f(n) = Ө(g(n)). For that, we have to try to find values for c1, c2 and n0 that fulfil the condition c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0, where c1, c2 > 0 and n0 ≥ 1. See the proof below in Figure_03.
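Again, here is a minimal numeric sketch of the check (my own illustration; c1 = 4, c2 = 5 and n0 = 3 are one valid choice, since 4n ≤ 4n + 3 ≤ 5n for every n ≥ 3):

```python
# Numeric check of the Big Theta condition for f(n) = 4n + 3 and g(n) = n.
# With c1 = 4, c2 = 5 and n0 = 3: 4n <= 4n + 3 <= 5n for every n >= 3.

def f(n):
    return 4 * n + 3

def g(n):
    return n

c1, c2, n0 = 4, 5, 3
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("c1*g(n) <= f(n) <= c2*g(n) for all tested n >= 3, so f(n) = Ө(n)")
```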
It is important to remember that theta (Ө) is called asymptotically equal, or an asymptotically tight bound. That means if the leading term of the function f(n) is n (in our example f(n) = 4n + 3, so the leading term of 4n + 3 is n), then you can take n as g(n), and we say f(n) is equal to Ө(n). Likewise, if the leading term of the function f(n) is n^2, then it is equal to Ө(n^2).
If f(n) = n^2 + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n^2. The function f(n) is then said to be "asymptotically equivalent to n^2, as n → ∞". This is often written symbolically as f(n) ~ n^2, which is read as "f(n) is asymptotic to n^2" (wikipedia.org).
Well, those are the Big O (O), Big Omega (Ω) and Big Theta (Ө) asymptotic notations in a nutshell. Please leave your comments and feedback as always.
If you like this article and the content that I create, you can buy me a virtual coffee 😊
Cheers!