โ† Back to Materials

📜 Code Examples

All pseudocode from the lecture slides — organized by chapter

📚 8 Lectures
📝 25 Code Examples
🎯 Covering Slides 7–188
L1

Lecture 1 — Correctness

1 example

sum โ€” Array Sum

Slide 7 W(n) = Θ(n)
sum(array, len){
    sum = 0
    i = 0
    while(i < len){
        sum += array[i]
        i++
    }
    return sum
}
Example of pseudocode usage — not any particular programming language, but precisely expresses the algorithm.

algori — Find Maximum

Slide 21 W(n) = Θ(n)
algori(Arr, len){
    i = 1
    x = Arr[0]
    while(i < len){
        if (Arr[i] > x){
            x = Arr[i]
        }
        i++
    }
    return x
}
Proof of total correctness using loop invariant: "x is the maximum among the first i elements".
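The invariant-based argument can be checked mechanically in a runnable translation (Python here; the function name and the `assert` are mine, not the slides'):

```python
def find_max(arr):
    """Return the maximum of a non-empty sequence, following the slide's algori."""
    i = 1
    x = arr[0]
    while i < len(arr):
        # Loop invariant from the slides: x is the maximum among the first i elements
        assert x == max(arr[:i])
        if arr[i] > x:
            x = arr[i]
        i += 1
    return x
```

The assertion restates the invariant verbatim; if the loop body were wrong, it would fire on the first broken iteration.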
L2

Lecture 2 — Complexity

1 example

find โ€” Sequential Search

Slide 25 W(n) = n · A(n) = n/2
find(arr, len, key){
    i = 0
    while(i < len){
        if (arr[i] == key)
            return i
        i++
    }
    return -1
}
Time complexity depends on both data size (length) and particular input (position of key).
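A direct Python rendering of the pseudocode (names are mine), useful for experimenting with how the key's position drives the running time:

```python
def find(arr, key):
    """Sequential search: index of the first occurrence of key, or -1 if absent."""
    i = 0
    while i < len(arr):
        if arr[i] == key:
            return i          # best case: key at position 0
        i += 1
    return -1                 # worst case: key absent, n comparisons
```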
L3

Lecture 3 — Searching & Positional Statistics

3 examples

Binary Search

Slide 53 W(n) = Θ(log n) · S(n) = O(1)
search(S, len, key){

    l = 0
    r = len - 1

    while(l <= r){
        m = (l + r) / 2
        if(S[m] == key) return m
        else
            if(S[m] > key) r = m - 1
            else l = m + 1
    }

    return -1
}
Requires random access (RAM assumption). Sequence must be sorted. T(n) = T(n/2) + 1.
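A runnable version (Python, my naming). One note beyond the slides: in fixed-width integer languages, `m = l + (r - l) / 2` is the overflow-safe form of the midpoint computation.

```python
def binary_search(s, key):
    """Search a sorted sequence s; return an index of key, or -1 if absent."""
    l, r = 0, len(s) - 1
    while l <= r:
        m = (l + r) // 2          # floor of the midpoint
        if s[m] == key:
            return m
        elif s[m] > key:
            r = m - 1             # discard the right half
        else:
            l = m + 1             # discard the left half
    return -1
```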

Partition Procedure

Slide 60 W(n) = n + O(1) · S(n) = O(1)
// input: S - sequence; l,r - left/right indexes
// output: index i of pivot M; array reorganised
partition(S, l, r)
  // Selects element M and re-organises S such that:
  //   all elements left of M are ≤ M
  //   all elements right of M are ≥ M
  // Returns the final index i of element M
The partition procedure specification. See L5 Slide 96 for Hoare's implementation.

Hoare's Algorithm (QuickSelect)

Slide 62 W(n) = Θ(n²) · A(n) = Θ(n)
// Find k-th positional statistic using partition
quickSelect(S, l, r, k):
    // 1. call partition on the sequence
    m = partition(S, l, r)

    // 2. if returned index equals k → return S[k]
    if(m == k) return S[k]

    // 3. else recurse on the "left" or "right" sub-sequence
    if(k < m) return quickSelect(S, l, m - 1, k)
    else       return quickSelect(S, m + 1, r, k)
Invented by C.A.R. Hoare. Average time is linear (independently of k) — much faster than the "repeated minimum" algorithm.
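A runnable sketch of quickSelect. For compactness it uses a Lomuto-style partition (pivot = rightmost element) rather than the Hoare scheme of slide 96; the recursion is exactly the slide's:

```python
def partition(s, l, r):
    """Lomuto-style partition: pivot s[r]; returns the pivot's final index."""
    pivot = s[r]
    i = l
    for j in range(l, r):
        if s[j] < pivot:
            s[i], s[j] = s[j], s[i]
            i += 1
    s[i], s[r] = s[r], s[i]
    return i

def quickselect(s, l, r, k):
    """Return the k-th smallest element (0-based) of s[l..r], in place."""
    m = partition(s, l, r)
    if m == k:
        return s[k]
    if k < m:
        return quickselect(s, l, m - 1, k)   # answer is in the left part
    return quickselect(s, m + 1, r, k)       # answer is in the right part
```

Usage: `quickselect(a, 0, len(a) - 1, k)`; note the list is reordered as a side effect.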
L4

Lecture 4 — Sorting I (Selection, Insertion, Merge)

4 examples

Selection Sort

Slide 68 W(n) = A(n) = B(n) = Θ(n²)
selectionSort(S, len){
    i = 0
    while(i < len){
        mini = indexOfMin(S, i, len)
        swap(S, i, mini)
        i++
    }
}

// where:
// indexOfMin(S, i, len) - return index of minimum among S[j], i ≤ j < len
// swap(S, i, mini)      - swap positions of S[i] and S[mini]
Invariant: after iteration i, S[0..i-1] contains the i smallest elements in sorted order.
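The pseudocode with its two helpers spelled out, as runnable Python (names follow the slides where possible):

```python
def index_of_min(s, i, n):
    """Index of the minimum among s[j], i <= j < n."""
    mini = i
    for j in range(i + 1, n):
        if s[j] < s[mini]:
            mini = j
    return mini

def selection_sort(s):
    """Sort s in place by repeatedly selecting the minimum of the suffix."""
    n = len(s)
    for i in range(n):
        mini = index_of_min(s, i, n)
        s[i], s[mini] = s[mini], s[i]   # swap(S, i, mini)
```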

Insertion Sort

Slide 70 W(n) = Θ(n²) · B(n) = Θ(n)
insertionSort(arr, len){

    for(next = 1; next < len; next++){

        curr = next;
        temp = arr[next];

        while((curr > 0) && (temp < arr[curr - 1])){

            arr[curr] = arr[curr - 1];
            curr--;
        }

        arr[curr] = temp;
    }
}
Best case Θ(n) when the array is already sorted — only one comparison per iteration. Stable sort.
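The same shifting scheme in Python (`next` renamed to `nxt` to avoid shadowing the builtin):

```python
def insertion_sort(arr):
    """Sort arr in place by inserting each element into the sorted prefix."""
    for nxt in range(1, len(arr)):
        curr = nxt
        temp = arr[nxt]
        # Shift larger prefix elements one slot right instead of swapping
        while curr > 0 and temp < arr[curr - 1]:
            arr[curr] = arr[curr - 1]
            curr -= 1
        arr[curr] = temp
```

On a sorted array the `while` condition fails immediately for every `nxt`, which is the Θ(n) best case from the slide.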

Merge Sort (Divide & Conquer)

Slide 76 W(n) = Θ(n log n) · S(n) = Θ(n)
mergeSort(S, len){
    // 1. divide the sequence into 2 halves
    // 2. sort each half separately (recursively)
    // 3. merge the sorted halves

    if(len <= 1) return S

    mid = len / 2
    left  = mergeSort(S[0..mid-1], mid)
    right = mergeSort(S[mid..len-1], len - mid)

    return merge(left, mid, right, len - mid)
}
Depth of recursion is log₂(len). On each level, merge is called for arrays of total length len.

Merge Function

Slide 78 W(n) = Θ(n)
// input: a1, a2 - sorted sequences (of lengths len1, len2)
// output: the merged (and sorted) sequence of a1 and a2

merge(a1, len1, a2, len2){

    i = j = k = 0;
    result[len1 + len2]  // memory allocation

    while((i < len1) && (j < len2))
        if(a1[i] < a2[j]) result[k++] = a1[i++];
        else result[k++] = a2[j++];

    while(i < len1) result[k++] = a1[i++];

    while(j < len2) result[k++] = a2[j++];

    return result;
}
Linear time merge of two sorted sequences. Requires Θ(n) auxiliary space for the result array.
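Both slides combined into runnable Python. This sketch passes slices instead of (pointer, length) pairs, which costs extra copying but keeps the divide/sort/merge structure visible:

```python
def merge(a1, a2):
    """Linear-time merge of two sorted lists into a new sorted list."""
    result = []
    i = j = 0
    while i < len(a1) and j < len(a2):
        if a1[i] < a2[j]:
            result.append(a1[i]); i += 1
        else:
            result.append(a2[j]); j += 1
    result.extend(a1[i:])      # at most one of these two
    result.extend(a2[j:])      # tails is non-empty
    return result

def merge_sort(s):
    """Divide & conquer: split, sort halves recursively, merge."""
    if len(s) <= 1:
        return s
    mid = len(s) // 2
    return merge(merge_sort(s[:mid]), merge_sort(s[mid:]))
```

Taking `a2`'s element on ties (`else` branch) keeps the sort stable, matching the pseudocode.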
L5

Lecture 5 — Sorting II (QuickSort, CountSort, RadixSort)

4 examples

Hoare's Partition (Implementation)

Slide 96 W(n) = n + O(1) · S(n) = O(1)
// input: a - array of integers; l,r - leftmost and rightmost indexes
// output: final index of pivot M; array reorganised

partition(a, l, r){

    i = l + 1;
    j = r;
    m = a[l];         // pivot = leftmost element
    temp;

    do{
        while((i < r) && (a[i] <= m)) i++;
        while((j > i) && (a[j] >= m)) j--;
        if(i < j) {temp = a[i]; a[i] = a[j]; a[j] = temp;}
    }while(i < j);

    // when (i==r):
    if(a[i] > m) {a[l] = a[i - 1]; a[i - 1] = m; return i - 1;}
    else {a[l] = a[i]; a[i] = m; return i;}
}
Hoare's partition scheme — two inward-moving pointers, leftmost element as pivot. O(1) extra space.

QuickSort

Slide 97 W(n) = Θ(n²) · A(n) = Θ(n log n)
// input: a - array of integers; l,r - leftmost and rightmost indexes
// (the procedure does not return anything)

quicksort(a, l, r){

    if(l >= r) return;
    k = partition(a, l, r);
    quicksort(a, l, k - 1);
    quicksort(a, k + 1, r);
}
Divide and conquer approach by C.A.R. Hoare. Space: O(n) worst case, O(log n) average (recursion stack).
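A runnable translation of slide 96's partition together with quicksort (Python; the do-while is rendered as `while True`/`break`):

```python
def partition(a, l, r):
    """Slide-96 partition: pivot = leftmost element, two inward pointers.
    Returns the pivot's final index; a[l..r] is reorganised in place."""
    i, j = l + 1, r
    m = a[l]
    while True:
        while i < r and a[i] <= m:
            i += 1
        while j > i and a[j] >= m:
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            break
    # Pointers have met: put the pivot into its final position
    if a[i] > m:
        a[l], a[i - 1] = a[i - 1], m
        return i - 1
    a[l], a[i] = a[i], m
    return i

def quicksort(a, l, r):
    """Sort a[l..r] in place; call as quicksort(a, 0, len(a) - 1)."""
    if l >= r:
        return
    k = partition(a, l, r)
    quicksort(a, l, k - 1)
    quicksort(a, k + 1, r)
```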

CountSort

Slide 107 W(n) = Θ(n + max) · Stable
// input: a - array of non-negative integers; l - its length

countSort(a, l){

    max = maxValue(a, l);
    l1 = max + 1;
    counts[l1];           // allocate counts array
    result[l];            // allocate result array
    for(i = 0; i < l1; i++) counts[i] = 0;

    for(i = 0; i < l; i++) counts[a[i]]++;
    for(i = 1; i < l1; i++) counts[i] += counts[i - 1];
    for(i = l - 1; i >= 0; i--)
        result[--counts[a[i]]] = a[i];
}
Based on direct addressing. Does NOT use comparisons. The pre-decrement in the last line avoids shifting. Requires Θ(n + max) space.
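The three passes (count, prefix-sum, place) in runnable Python. This version returns the result array rather than leaving it implicit:

```python
def count_sort(a):
    """Stable counting sort for non-negative integers."""
    if not a:
        return []
    mx = max(a)
    counts = [0] * (mx + 1)
    for x in a:
        counts[x] += 1                   # histogram
    for i in range(1, mx + 1):
        counts[i] += counts[i - 1]       # prefix sums: # of elements <= i
    result = [0] * len(a)
    for x in reversed(a):                # right-to-left preserves stability
        counts[x] -= 1                   # the pre-decrement from the slide
        result[counts[x]] = x
    return result
```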

RadixSort

Slide 109 W(n) = Θ(d · (n + k))
// RadixSort is a scheme, not a single algorithm.
// It applies a stable internal sorting algorithm
// to consecutive positions starting from the LAST position.

radixSort(a, len, d):   // d = number of positions (e.g. digits)
    for(pos = d - 1; pos >= 0; pos--)
        stableSort(a, len, by position pos)
        // e.g. countSort on the digit at position pos
Ideal for lexicographic sorting of fixed-length objects (strings, multi-digit numbers). Uses any stable sort as the internal algorithm — CountSort is a natural choice.
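A concrete instance of the scheme, assuming base-10 numbers of at most `d` digits and CountSort as the internal stable sort (per-digit, so k = 10):

```python
def radix_sort(a, d, base=10):
    """Sort non-negative integers of at most d digits, least significant first."""
    for pos in range(d):
        digit = lambda x: (x // base ** pos) % base
        counts = [0] * base
        for x in a:
            counts[digit(x)] += 1
        for i in range(1, base):
            counts[i] += counts[i - 1]
        result = [0] * len(a)
        for x in reversed(a):            # stability is essential here:
            counts[digit(x)] -= 1        # it preserves the order fixed by
            result[counts[digit(x)]] = x # the previous (lower) positions
        a = result
    return a
```

Note the loop runs from the last (least significant) position, exactly as the scheme prescribes; an unstable inner sort would destroy the work of earlier passes.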
L6

Lecture 6 — Sequences & Abstract Data Structures

2 examples

Print SList

Slide 116 O(n)
print(SList l){
    node = l.head
    while(node != null){
        print node.element
        node = node.next
    }
}
Basic iteration over a singly linked list starting from the head.

Splice (DList)

Slide 120 O(1)
// Cut out sublist (a,...,b) and insert it after t
splice(a, b, t){
    a_prev = a.prev; b_next = b.next;
    a_prev.next = b_next; b_next.prev = a_prev;
    
    t_next = t.next;
    b.next = t_next; a.prev = t;
    t.next = a; t_next.prev = b;
}
Splice is a powerful O(1) operation in doubly-linked lists. Most other operations (move, push, pop) can be implemented using it.
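A runnable sketch with an explicit node type (mine, not the slides'). It assumes `a.prev`, `b.next` and `t.next` all exist, i.e. the spliced range and target are interior nodes (or the list uses sentinels), and that `t` is not inside (a,...,b):

```python
class Node:
    def __init__(self, element):
        self.element = element
        self.prev = None
        self.next = None

def splice(a, b, t):
    """Cut the sublist (a,...,b) out of its doubly linked list
    and re-insert it right after node t. O(1): six pointer updates."""
    # unlink a..b from its current position
    a.prev.next = b.next
    b.next.prev = a.prev
    # link a..b in after t
    t_next = t.next
    b.next = t_next
    a.prev = t
    t.next = a
    t_next.prev = b
```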
L8

Lecture 8 — Priority Queues & Heaps

5 examples

Upheap (Swim Up)

Slide 153 O(log n)
upheap(i)              // i > 0, heap ok under i
    key = heap[i]
    parent = i / 2
    while((parent > 0) && (heap[parent] > key))
        heap[i] = heap[parent]
        i = parent
        parent /= 2
    heap[i] = key
Called when key of parent[i] > key of i. The element "swims up" until heap order is restored.

Downheap (Sink Down)

Slide 154 O(log n)
downheap(i)
    l = 2*i              // left son
    r = 2*i + 1          // right son
    if l <= n and heap[l] < heap[i]:
        min = l
    else:
        min = i
    if r <= n and heap[r] < heap[min]:  // n is the size of heap
        min = r
    if min != i:
        swap(i, min)     // swap the elements under indexes (not indexes themselves)
        downheap(min)    // go down
Called when one child of i has a lower key. Recursion can be replaced with a while loop for efficiency.

PQ Operations on Binary Heap

Slide 155 insert O(log n) · delMin O(log n) · findMin O(1)
// insert(x): add x to the bottom and restore heap order upward
insert(x):
    add x at the bottom of the heap
    upheap(bottom)                  // O(log n)

// findMin(): root always holds the minimum
findMin():
    return heap[root]               // O(1)

// delMin(): move bottom element to root and restore heap order downward
delMin():
    min = heap[root]
    heap[root] = heap[bottom]
    remove bottom
    downheap(root)                  // O(log n)
    return min

BuildHeap (Construct)

Slide 156 O(n) — not O(n log n)!
// Naive: n × insert → Θ(n log n)
// Faster way — bottom-up construction:

for(i = n/2; i > 0; i--) downHeap(i)
downHeap is called for at most 2ᵈ nodes at depth d, each call costing O(h − d). Total: O(Σ 2ᵈ(h − d)) = O(n).

HeapSort

Slide 158 W(n) = Θ(n log n) · S(n) = O(1)
// Sort using a Priority Queue (pq = priority queue object)

heapSort(s):
    // Phase 1: insert all elements into PQ
    while(s.hasNext())
        pq.insert(s.next())

    // Phase 2: extract min repeatedly
    while(!pq.isEmpty())
        result.add(pq.delMin())
If we place each extracted minimum in the last freed slot of the array, we obtain O(1) space complexity (in-place).
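The in-place variant, sketched in Python. To sort ascending while reusing the freed suffix, this sketch uses a max-heap (so the extracted maximum lands in its final slot); the structure — O(n) bottom-up buildHeap, then n downheaps — is the same:

```python
def heapsort(a):
    """In-place heapsort: build a max-heap bottom-up, then repeatedly
    swap the root (current maximum) into the last released slot."""
    n = len(a)

    def downheap(i, size):
        while True:
            l, r = 2 * i + 1, 2 * i + 2          # 0-indexed children
            largest = i
            if l < size and a[l] > a[largest]:
                largest = l
            if r < size and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):          # O(n) buildHeap (slide 156)
        downheap(i, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]              # max to its final place
        downheap(0, end)                         # restore heap on the prefix
    return a
```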
L9

Lecture 9 — Binary Search Trees

5 examples

BST Search (Recursive + Iterative)

Slide 185 O(h) where h = height
// Recursive version — called with node == root
searchRecursive(node, key):
    if ((node == null) or (node.key == key)) return node
    if (key < node.key) return searchRecursive(node.left, key)
    else return searchRecursive(node.right, key)

// Iterative version — called with node == root
searchIterative(node, key):
    while ((node != null) and (node.key != key))
        if (key < node.key) node = node.left
        else node = node.right
    return node

BST Minimum, Maximum & Successor

Slide 186 O(h)
minimum(node):         // called with node == root
    while (node.left != null) node = node.left
    return node

maximum(node):         // called with node == root
    while (node.right != null) node = node.right
    return node

successor(node):
    if (node.right != null) return minimum(node.right)
    p = node.parent
    while ((p != null) and (node == p.right))
        node = p
        p = p.parent
    return p

// predecessor is analogous to successor

BST Insert

Slide 187 O(h)
insert(node, key):
    if (key < node.key) then
        if node.left == null:
            n = create new node with key
            node.left = n
        else: insert(node.left, key)
    else:  // (key >= node.key)
        if node.right == null:
            n = create new node with key
            node.right = n
        else: insert(node.right, key)

BST Delete

Slide 188 O(h)
procedure delete(node, key)
    if (key < node.key) then
        delete(node.left, key)
    else if (key > node.key) then
        delete(node.right, key)
    else begin  // key == node.key
        if node is a leaf then
            deletesimple(node)
        else
            if (node.left != null) then
                find x = the rightmost node in node.left
                node.key := x.key;
                delete1(x);
            else
                proceed analogously for node.right
                (we are looking for the leftmost node now)
Three cases: (1) leaf — just remove, (2) one child — replace with child, (3) two children — replace with predecessor/successor then delete that node.
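The three cases in a runnable sketch. Unlike the slides' version, this one has each call return the (possibly new) subtree root, which sidesteps parent pointers; node and helper names are mine:

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(node, key):
    """Slide-187 insert: smaller keys go left, others right."""
    if key < node.key:
        if node.left is None:
            node.left = BSTNode(key)
        else:
            insert(node.left, key)
    else:
        if node.right is None:
            node.right = BSTNode(key)
        else:
            insert(node.right, key)

def delete(node, key):
    """Delete key from the subtree at node; returns the new subtree root."""
    if node is None:
        return None
    if key < node.key:
        node.left = delete(node.left, key)
    elif key > node.key:
        node.right = delete(node.right, key)
    else:
        if node.left is None:                    # cases 1 & 2: <= one child
            return node.right
        if node.right is None:
            return node.left
        x = node.left                            # case 3: rightmost node
        while x.right is not None:               # of the left subtree
            x = x.right
        node.key = x.key                         # copy predecessor's key up,
        node.left = delete(node.left, x.key)     # then delete it below
    return node

def inorder(node, out):
    """Collect keys in sorted order (sanity check for BST shape)."""
    if node:
        inorder(node.left, out)
        out.append(node.key)
        inorder(node.right, out)
```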

Dynamic Ordered Set โ€” ADT Operations

Slide 183 BST: all O(h)
// Abstract data structure extending the dictionary (type K is linearly ordered)

search(K key)               // find element by key
insert(K key, V value)      // insert key-value pair
delete(K key)               // remove element by key
minimum()                   // return element with smallest key
maximum()                   // return element with largest key
predecessor(K key)          // return element with next smaller key
successor(K key)            // return element with next larger key
Hash table efficiently supports search/insert/delete but NOT the order-related operations (min, max, predecessor, successor).