Implementation of Chaining

In hashing, a hash function maps keys to indices in a table. Two or more keys may map to the same index, causing a collision. Chaining resolves collisions: each cell of the hash table points to a linked list (chain) of records that share the same hash value.

Let's create a hash function such that our hash table has N buckets. To insert a node into the hash table, we first compute the hash index for the given key using the hash function.

Example: hashIndex = key % noOfBuckets

  • Insert: Move to the bucket for the computed hash index and append the new node to the end of that bucket's list.
  • Delete: Compute the hash index for the key, move to that bucket, search its list for the node with the given key, and remove it if found.
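The two operations above can be sketched with a plain array of arrays standing in for the table of linked lists; the bucket count of 7 and the helper names here are illustrative choices, not part of any library:

```javascript
// Minimal sketch of chaining: each bucket holds an array acting as its chain.
// The bucket count (7) and the function names are illustrative assumptions.
const buckets = Array.from({ length: 7 }, () => []);

function hashIndex(key) {
    return key % buckets.length;
}

function insertKey(key) {
    buckets[hashIndex(key)].push(key);      // append to the end of the chain
}

function deleteKey(key) {
    const chain = buckets[hashIndex(key)];
    const pos = chain.indexOf(key);         // search the chain for the key
    if (pos !== -1) chain.splice(pos, 1);   // remove it if found
}

insertKey(15);
insertKey(8);                // 8 % 7 === 1, collides with 15
deleteKey(15);
console.log(buckets[1]);     // [ 8 ]
```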

Example:

Let's say we have a hash table with 7 buckets (0, 1, 2, 3, 4, 5, 6). Keys arrive in the order: 15, 11, 27, 8.

  • 15 % 7 = 1
  • 11 % 7 = 4
  • 27 % 7 = 6
  • 8 % 7 = 1 (Collision at index 1! Add to the chain).

Method 1: Fixed Size Array

This method has no concept of rehashing. The array has a fixed size, i.e., a fixed number of buckets.

Implementation

class Hash {
    constructor(bucket) {
        this.BUCKET = bucket;            // number of buckets
        this.table = new Array(bucket);  // one chain per bucket
        for (let i = 0; i < bucket; i++) {
            this.table[i] = [];
        }
    }
 
    hashFunction(key) {
        // Keys are assumed to be non-negative integers
        return key % this.BUCKET;
    }
 
    insertItem(key) {
        // Append the key to the end of its bucket's chain
        const index = this.hashFunction(key);
        this.table[index].push(key);
    }
 
    deleteItem(key) {
        // Search the chain in the key's bucket and remove the key if found
        const index = this.hashFunction(key);
        const pos = this.table[index].indexOf(key);
        if (pos !== -1) {
            this.table[index].splice(pos, 1);
        }
    }
 
    displayHash() {
        for (let i = 0; i < this.BUCKET; i++) {
            let str = i.toString();
            for (let j = 0; j < this.table[i].length; j++) {
                str += " --> " + this.table[i][j];
            }
            console.log(str);
        }
    }
}
 
// Example Usage
let h = new Hash(7);
h.insertItem(15);
h.insertItem(11);
h.insertItem(27);
h.insertItem(8);
h.displayHash();

Output:

0
1 --> 15 --> 8
2
3
4 --> 11
5
6 --> 27

Time Complexity (Method 1)

  • Search: O(1 + n/m)
  • Delete: O(1 + n/m)

Here n is the total number of elements in the hash table and m is the number of buckets, so n/m is the load factor (α). The load factor should be kept as small as possible: as α grows, chains get longer and the chance of collision increases.

  • Expected chain length: O(α)
  • Expected time to search: O(1 + α)
  • Expected time to insert / delete: O(1 + α)

Auxiliary Space: O(1), since no extra space is allocated dynamically during operations beyond the fixed buckets.
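For the example above, the load factor works out as follows (a quick arithmetic check, using the 4 keys and 7 buckets from the Method 1 example):

```javascript
// Load factor for the Method 1 example: 4 keys stored in 7 buckets.
const n = 4;          // elements in the table (15, 11, 27, 8)
const m = 7;          // number of buckets
const alpha = n / m;  // load factor
console.log(alpha.toFixed(2)); // "0.57"
```

So an average search in that table inspects about 1 + 0.57 chain nodes.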


Method 2: Dynamic Rehashing

Let's discuss another method where there is no fixed bound on the number of buckets: the number of buckets grows whenever the load factor exceeds 0.5.

When the load factor exceeds 0.5, we rehash: allocate a new array of double the size and re-insert all existing values into it using the hash function. Because the hash function depends on the number of buckets, it now distributes keys differently than before.

  • Load Factor = number of elements in Hash Map / total number of buckets
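As a quick illustration of this trigger, the following sketch simulates how the capacity grows as elements are inserted one at a time (the initial capacity of 5 matches the implementation below; the helper function is illustrative, not part of the hash map):

```javascript
// Illustrative helper: simulates capacity growth under the rule
// "rehash (double the buckets) whenever load factor > 0.5",
// starting from the initial capacity of 5 used in the implementation.
function capacityAfter(insertions, capacity = 5) {
    let elements = 0;
    for (let i = 0; i < insertions; i++) {
        elements++;
        if (elements / capacity > 0.5) capacity *= 2; // rehash
    }
    return capacity;
}

console.log(capacityAfter(3)); // 10  (third insert makes 3/5 > 0.5)
console.log(capacityAfter(6)); // 20  (sixth insert makes 6/10 > 0.5)
```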

Implementation

class Node {
    constructor(key, value) {
        this.key = key;
        this.value = value;
        this.next = null;   // next node in this bucket's chain
    }
}
 
class RehashMap {
    constructor() {
        this.numOfElements = 0;
        this.capacity = 5;  // initial number of buckets
        this.arr = new Array(this.capacity).fill(null);
    }
 
    hashFunction(key) {
        // Simple hash for string keys: sum of character codes mod capacity
        let hash = 0;
        for (let i = 0; i < key.length; i++) {
            hash += key.charCodeAt(i);
        }
        return hash % this.capacity;
    }
 
    insert(key, value) {
        let bucketIndex = this.hashFunction(key);
        let head = this.arr[bucketIndex];
        
        // If the key already exists, update its value
        while (head !== null) {
            if (head.key === key) {
                head.value = value;
                return;
            }
            head = head.next;
        }
 
        // Otherwise insert a new node at the head of the chain
        let newNode = new Node(key, value);
        newNode.next = this.arr[bucketIndex];
        this.arr[bucketIndex] = newNode;
        this.numOfElements++;
 
        // Rehash when the load factor exceeds 0.5
        let loadFactor = this.numOfElements / this.capacity;
        if (loadFactor > 0.5) {
            this.rehash();
        }
    }
 
    search(key) {
        // Walk the chain in the key's bucket
        let head = this.arr[this.hashFunction(key)];
        while (head !== null) {
            if (head.key === key) {
                return head.value;
            }
            head = head.next;
        }
        return null; // key not found
    }
 
    rehash() {
        console.log("Rehashing initiated...");
        let oldArr = this.arr;
        this.capacity = this.capacity * 2;
        this.arr = new Array(this.capacity).fill(null);
        this.numOfElements = 0;
 
        // Re-insert every node; the hash function now uses the new capacity
        for (let i = 0; i < oldArr.length; i++) {
            let head = oldArr[i];
            while (head !== null) {
                this.insert(head.key, head.value);
                head = head.next;
            }
        }
    }
}
 
// Example Usage
let map = new RehashMap();
map.insert("hello", 1);
map.insert("world", 2);
map.insert("chain", 3);  // load factor 3/5 > 0.5, so this triggers rehashing
console.log(map.search("world")); // 2

Complexity Analysis (Method 2)

  • Time Complexity of Insert: O(N) in the worst case. An insert that pushes the load factor above 0.5 triggers rehashing, which re-inserts all N existing elements. Inserts that do not trigger rehashing take expected O(1 + α) time, so insertion is amortized O(1) on average.
  • Space Complexity of Insert: O(N) in the worst case, because rehashing allocates a new array of double the size and re-inserts all the elements into it.
  • Time Complexity of Search: O(N) in the absolute worst case, when all keys collide into a single chain of length N; the expected time is O(1 + α).
  • Space Complexity of Search: O(1), since searching uses no extra space.