Teaching Kids Programming – Algorithmic Runtime and Space Complexity


Teaching Kids Programming: Videos on Data Structures and Algorithms

We analyse an algorithm for its complexity in terms of runtime and the space it allocates. Complexity lets us compare algorithms, so that we know which one runs faster given the same input on the same PC.

Let’s assume the given problem has an input size of N, for example, N numbers in a list.

Space complexity is measured similarly to runtime complexity: it is the amount of memory we require, based on the size of the input, in order to run the algorithm.

O(1) Constant

These algorithms’ runtime does not vary with the input size. For example, returning the number of elements in an array can usually be done in O(1) constant time, because the computer stores the size alongside the numbers.

Common O(1) constant examples involve computing something from a known closed-form equation. For example, the sum of the first N positive integers can be computed directly using the equation n(n+1)/2.
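A minimal sketch of the closed-form approach (the function name is ours for illustration):

```python
def sum_first_n(n: int) -> int:
    # Closed-form equation n(n+1)/2: one multiplication and one
    # division, regardless of n - O(1) constant time.
    return n * (n + 1) // 2
```

Compare this to summing a loop over 1..N, which would take O(N) time for the same answer.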

O(Sqrt(N))

Checking whether a number is prime runs in O(Sqrt(N)), because we only need to check the possible factors from 2 to Sqrt(N) to see whether any of them divides it. In the worst case, when the number is indeed prime, we check roughly Sqrt(N) candidates – and we can skip 1 and every even number bigger than 2.
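A trial-division primality check along these lines might look like this:

```python
def is_prime(n: int) -> bool:
    # Trial division up to sqrt(n): O(sqrt(N)) time, O(1) space.
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        # Skip every even number bigger than 2.
        return False
    i = 3
    while i * i <= n:      # only test factors up to sqrt(n)
        if n % i == 0:
            return False
        i += 2             # odd candidates only
    return True
```

The condition `i * i <= n` avoids computing a square root explicitly while still stopping at Sqrt(N).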

O(LogN)

Binary Search falls into this category. Given a sorted list of, say, 8 elements, it takes at most 3 checks (worst case) to find the number we are looking for, because we halve the search space every time.
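A standard iterative binary search, as a sketch:

```python
def binary_search(nums, target):
    # Halve the search space each step: O(log N) comparisons.
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1              # not found
```

With 8 elements the loop runs at most 3 full halvings before the range is exhausted, matching the example above.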

Linear O(N)

If we want to find the max/min or the average, we need to iterate over the numbers in the array once. The time required is T(N), and thus the complexity is O(N).

If we want to find the index of the first zero, the worst-case time is still T(N), when the zero is at the end of the array (or absent), and thus the complexity is still O(N) in the worst case.

What if we only need to iterate over half the array, or need to iterate over it twice? The time will be T(N/2) or T(2N) respectively, but the complexity is still O(N): we drop the constant factors.
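The two linear examples above can be sketched as single passes over the list (function names are ours):

```python
def find_max(nums):
    # One pass over N numbers: T(N) steps, O(N) time.
    best = nums[0]
    for x in nums[1:]:
        if x > best:
            best = x
    return best

def first_zero_index(nums):
    # Worst case scans the entire list (zero at the end,
    # or no zero at all): still O(N).
    for i, x in enumerate(nums):
        if x == 0:
            return i
    return -1
```

Running both functions back to back would take T(2N) time in total, which is still O(N) once the constant is dropped.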

O(NLogN)

QuickSort and Merge Sort have this complexity (for QuickSort, on average). However, in the worst case – when the pivot is not selected optimally, for example the array is already sorted and we always pick the first element as the pivot – QuickSort degrades to O(N^2).
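A Merge Sort sketch shows where the O(NLogN) comes from: the list is halved Log N times, and each level merges N elements.

```python
def merge_sort(nums):
    # Divide: split into halves until size <= 1 (log N levels).
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])
    right = merge_sort(nums[mid:])
    # Conquer: merge two sorted halves in O(N) per level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Unlike QuickSort, Merge Sort keeps O(NLogN) even in the worst case, at the cost of O(N) extra space for the merge.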

O(N^2) Quadratic

Enumerating every possible pair gives an O(N^2) algorithm. If we only consider pairs where index i is smaller than or equal to j, the time is T(N^2/2), which is still O(N^2) once we drop the constants.
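The nested loop for pairs with i ≤ j can be sketched as:

```python
def all_pairs(nums):
    # Nested loops over indices i <= j: about N^2/2 pairs,
    # which is O(N^2) after dropping the constant factor.
    pairs = []
    for i in range(len(nums)):
        for j in range(i, len(nums)):
            pairs.append((nums[i], nums[j]))
    return pairs
```

For N = 3 this produces 3 + 2 + 1 = 6 pairs, i.e. N(N+1)/2 in general.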

Simple sorting algorithms such as Bubble Sort, Selection Sort, and Insertion Sort have O(N^2) complexity.
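Bubble Sort is the simplest of the three to sketch, and its two nested passes make the O(N^2) cost easy to see:

```python
def bubble_sort(nums):
    # Repeatedly swap adjacent out-of-order pairs; after pass i,
    # the i largest elements are in place. Nested loops do up to
    # N(N-1)/2 comparisons: O(N^2) time.
    nums = list(nums)  # sort a copy, leave the input unchanged
    n = len(nums)
    for i in range(n):
        for j in range(n - 1 - i):
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return nums
```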

O(N^3), O(N!), O(2^N)

These complexities grow extremely fast. For example, generating all permutations of an N-element sequence has O(N!) factorial complexity, and iterating over all subsets of a set takes O(2^N) time.
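The 2^N subsets can be enumerated with a bitmask, where each bit decides whether the corresponding element is in or out (the function name is ours):

```python
def all_subsets(nums):
    # Each of the N elements is either in or out of a subset,
    # so there are 2^N subsets: O(2^N) time just to list them.
    n = len(nums)
    subsets = []
    for mask in range(1 << n):  # 1 << n is 2^n
        subsets.append([nums[i] for i in range(n) if mask >> i & 1])
    return subsets
```

For N = 3 this yields 8 subsets; for the permutations, Python's `itertools.permutations` produces all N! orderings.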

The right column is only a rough indication of the worst complexity you can usually afford for a given input size.
n ≤ 10 – O(n!)
n ≤ 20 – O(2^n)
n ≤ 500 – O(n^3)
n ≤ 5000 – O(n^2)
n ≤ 10,000 – O(n^2 log n)
n ≤ 10^6 – O(n log n) or O(n)
n is large – O(1) or O(log n)

–EOF (The Ultimate Computing & Technology Blog) —
