DSA Roadmap
Introduction to Programming Languages
This roadmap focuses on foundational programming concepts and data structures, building up to more complex algorithms. The examples and explanations are presented in JavaScript to help you grasp the key principles.
1. Basics of Programming
Data Types
- Primitive: Basic data types like int, char, long, float, boolean, and double represent single values in memory. JavaScript covers most of these with its single Number type:
let num = 10; // Number
let char = 'A'; // Character (a one-character string in JavaScript)
let isActive = true; // Boolean
let decimal = 3.14; // Number (JavaScript has no separate double type)
- Non-Primitive: Complex data types like Object, Array, and Function are objects or collections of primitives rather than single values.
const person = { name: "Alice", age: 30 }; // Object
const scores = [90, 85, 77]; // Array
const big = 123456789012345678901234567890n; // BigInt (arbitrary-precision integer)
Variables
- Declaration: Variables are containers for storing data values. In JavaScript, use let, const, or var.
- Scope: Determines where variables can be accessed within the code (global, local, or block scope).
- Constants: Variables declared with const cannot be reassigned.
// Variable Declaration and Assignment
let x = 5 // Declaration and initialization
const PI = 3.14 // Constant declaration
// Variable Scope
function example() {
  let localVar = 10 // Local scope
  globalVar = 20 // Implicit global: avoid this; it throws in strict mode
}
Conditional Cases
- If-Else Statements: Control flow structure that executes code based on a boolean condition.
- Ternary Operator: A shorthand way to write an if-else statement. Example: condition ? expr1 : expr2.
- Nested Conditions: Conditions within conditions, allowing more complex decision-making.
// If-Else Statement
if (condition) {
// Execute if condition is true
} else {
// Execute if condition is false
}
// Ternary Operator
result = (condition) ? value1 : value2
Switch Cases
- Switch Statement Syntax: A control statement that tests a variable against multiple values.
- Case Blocks: Each case matches a value and executes the corresponding code block.
- Default Case: Executes if none of the case values match.
- Fall-through and Break: Without a break statement, execution continues into the next case block.
// Switch Statement
switch (expression) {
case value1:
// Code to execute if expression equals value1
break
case value2:
// Code to execute if expression equals value2
break
default:
// Code to execute if expression doesn't match any case
}
Loops
- For: Executes a block of code a specific number of times.
- While: Continues execution as long as a condition is true.
- Do While: Executes a block of code once, then repeats as long as a condition is true.
// For Loop
for (let i = 0; i < n; i++) {
// Code to execute n times
}
// While Loop
while (condition) {
// Code to execute while condition is true
}
// Do While Loop
do {
// Code to execute at least once, and then while condition is true
} while (condition)
Array
- Multidimensional Arrays: Arrays within arrays, allowing for complex data structures like matrices.
- Dynamic Size Arrays: Arrays that can grow or shrink in size. In JavaScript, arrays are dynamic by nature.
// One-dimensional Array
const array = [1, 2, 3, 4, 5]
// Multidimensional Array
const matrix = [
  [1, 2, 3],
  [4, 5, 6],
  [7, 8, 9]
]
Basic Data Structures
- HashMaps: A collection of key-value pairs that allows fast data retrieval.
- Sets: A collection of unique elements, useful for removing duplicates.
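In JavaScript both structures are built in; a quick sketch using Map and Set:

```javascript
// Map: key-value pairs with fast lookup
const ages = new Map();
ages.set("Alice", 30);
ages.set("Bob", 25);
console.log(ages.get("Alice")); // 30
console.log(ages.has("Carol")); // false

// Set: unique elements, handy for removing duplicates
const unique = new Set([1, 2, 2, 3, 3, 3]);
console.log([...unique]); // [1, 2, 3]
```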
Design Principles
- DRY (Don't Repeat Yourself): Avoid redundancy by reusing code.
- KISS (Keep It Simple, Stupid): Write simple and clear code to reduce complexity.
- SOLID Principles: Guidelines for writing maintainable and scalable object-oriented code.
Object-Oriented Programming (OOP)
- Pillars:
- Encapsulation: Bundling data and methods within objects.
- Inheritance: Mechanism for a class to inherit properties and methods from another class.
- Polymorphism: The ability to process objects differently based on their data type.
- Abstraction: Hiding complex implementation details and showing only the essentials.
- Components:
- Classes: Templates for creating objects.
- Objects: Instances of classes.
- Static Members: Class-level variables or methods shared among all instances.
- Getters, Setters: Methods to get and set private properties.
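A compact illustration of these components, using a hypothetical Circle/Cylinder pair (the example is not part of the roadmap itself):

```javascript
class Circle {
  static count = 0; // Static member shared by all instances
  #radius; // Private field (encapsulation)

  constructor(radius) {
    this.#radius = radius;
    Circle.count++;
  }

  // Getter and setter for the private property
  get radius() { return this.#radius; }
  set radius(r) { if (r > 0) this.#radius = r; }

  area() { return Math.PI * this.#radius ** 2; }
}

// Inheritance: Cylinder reuses Circle's state and behavior
class Cylinder extends Circle {
  constructor(radius, height) {
    super(radius);
    this.height = height;
  }
  // Polymorphism: overrides area() with its own meaning
  area() { return 2 * Math.PI * this.radius * (this.radius + this.height); }
}

const c = new Circle(2);
console.log(c.area().toFixed(2)); // 12.57
```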
Design Patterns
- Singleton: Ensures a class has only one instance.
- Builder: Simplifies the creation of complex objects.
- Bridge: Separates an object's interface from its implementation.
- Command: Encapsulates requests as objects.
- Iterator: Provides a way to access elements of a collection sequentially.
- Factory: Creates objects without specifying the exact class.
- Adapter: Allows incompatible interfaces to work together.
- Observer: Establishes a subscription mechanism to notify multiple objects of changes.
- Proxy: Provides a placeholder for another object to control access.
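As a taste of what these look like in practice, here is a minimal Singleton sketch in JavaScript (the Config class is a made-up example):

```javascript
class Config {
  static #instance = null;

  // Always returns the same instance, creating it lazily on first use
  static getInstance() {
    if (Config.#instance === null) {
      Config.#instance = new Config();
    }
    return Config.#instance;
  }
}

const a = Config.getInstance();
const b = Config.getInstance();
console.log(a === b); // true
```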
2. Building Blocks of DSA
Bit Operations
- AND, OR, NOT, XOR: Fundamental bitwise operations used to manipulate individual bits.
// AND Operation
result = a & b // 1 only where both bits are 1
// OR Operation
result = a | b // 1 where either bit is 1
// NOT Operation
result = ~a // Flips every bit
// XOR Operation
result = a ^ b // 1 where the bits differ
Binary Operations
- Left Shifts, Right Shifts: Move bits left or right, effectively multiplying or dividing by powers of two.
- Count Bits: Count the number of set bits (1s) in a binary number.
// Left Shift
result = a << n // Shift bits of a to the left by n positions
// Right Shift
result = a >> n // Shift bits of a to the right by n positions
// Count Bits
count = countSetBits(a) // Function to count set bits in a binary number
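countSetBits is left abstract above; one possible implementation uses Kernighan's trick of clearing the lowest set bit on each iteration:

```javascript
function countSetBits(a) {
  let count = 0;
  while (a !== 0) {
    a &= a - 1; // Clears the lowest set bit
    count++;
  }
  return count;
}

console.log(countSetBits(13)); // 3 (13 = 0b1101)
```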
Time & Space Complexities
- O(1): Constant Time. Executes in the same time regardless of input size; the time required does not change as the input grows.
- O(log n): Logarithmic Time. Execution time grows logarithmically as the input size increases. This is typical of algorithms that divide the problem in each step, such as binary search.
- O(√n): Square Root Time. Execution time grows proportionally to the square root of the input size. This complexity often appears in algorithms involving factors or paired operations.
- O(n): Linear Time. Execution time grows linearly with the input size: if the input size doubles, the execution time also doubles. This is common in algorithms that process each element of an array or list.
- O(n log n): Linearithmic Time. Execution time grows as a combination of linear and logarithmic rates. This complexity is often seen in efficient sorting algorithms like Merge Sort and Quick Sort.
- O(n²): Quadratic Time. Execution time grows quadratically with the input size. This occurs in algorithms with nested iterations over the data, such as certain sorting algorithms.
- O(n³): Cubic Time. Execution time grows cubically with the input size. This complexity is typical of algorithms with three nested loops over the input.
- O(2ⁿ): Exponential Time. Execution time grows exponentially with the input size. This is often seen in algorithms that make multiple recursive calls per step, such as naive recursive solutions that dynamic programming later optimizes.
- O(n*m): Bilinear (Two-Dimensional Linear) Time. Execution time grows proportionally to the product of two different input sizes. This is common in algorithms with nested iterations over two different data structures.
// Constant Time Complexity
function constantOperation() {
// O(1) - Executes in constant time
}
// Linear Time Complexity
function linearOperation(array) {
for (let i = 0; i < array.length; i++) {
// O(n) - Executes once for each element in the array
}
}
// Logarithmic Time Complexity
function logarithmicOperation(array) {
let left = 0;
let right = array.length - 1;
while (left <= right) {
let mid = Math.floor((left + right) / 2);
// O(log n) - Reduces the search space by half each iteration
}
}
// Square Root Time Complexity
function squareRootOperation(array) {
let limit = Math.sqrt(array.length);
for (let i = 0; i < limit; i++) {
// O(√n) - Iterates up to the square root of the array length
}
}
// Linearithmic Time Complexity
function linearithmicOperation(array) {
function mergeSort(arr) {
if (arr.length <= 1) return arr;
let mid = Math.floor(arr.length / 2);
let left = mergeSort(arr.slice(0, mid));
let right = mergeSort(arr.slice(mid));
// Merge operation: O(n)
return merge(left, right);
}
function merge(left, right) {
// Merging two sorted arrays: O(n)
}
mergeSort(array); // O(n log n) - Typical of efficient sorting algorithms
}
// Quadratic Time Complexity
function quadraticOperation(array) {
for (let i = 0; i < array.length; i++) {
for (let j = 0; j < array.length; j++) {
// O(n²) - Nested iterations over the array
}
}
}
// Cubic Time Complexity
function cubicOperation(array) {
for (let i = 0; i < array.length; i++) {
for (let j = 0; j < array.length; j++) {
for (let k = 0; k < array.length; k++) {
// O(n³) - Three nested iterations over the array
}
}
}
}
// Exponential Time Complexity
function exponentialOperation(n) {
  // O(2^n) - Two recursive calls per level grow exponentially with input size
  if (n <= 0) return 1;
  return exponentialOperation(n - 1) + exponentialOperation(n - 1);
}
// Bilinear (Two-Dimensional Linear) Time Complexity
function bilinearOperation(array1, array2) {
for (let i = 0; i < array1.length; i++) {
for (let j = 0; j < array2.length; j++) {
// O(n * m) - Nested iterations over two different arrays
}
}
}
Number Systems
- Binary: Base-2 numeral system, using digits 0 and 1.
- Decimal: Base-10 numeral system, the most common system for human use.
- Conversion: Methods to convert between different number systems.
// Binary to Decimal Conversion
decimal = binaryToDecimal(binary)
// Decimal to Binary Conversion
binary = decimalToBinary(decimal)
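The conversion helpers above are placeholders; in JavaScript they can be written with the built-in parseInt radix argument and Number.prototype.toString:

```javascript
function binaryToDecimal(binary) {
  return parseInt(binary, 2); // Interpret the string as base-2
}

function decimalToBinary(decimal) {
  return decimal.toString(2); // Render the number in base-2
}

console.log(binaryToDecimal("1010")); // 10
console.log(decimalToBinary(10)); // "1010"
```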
3. Linear Data Structures
Arrays
- One-dimensional: A single line of elements, accessible by index.
- Multidimensional: Arrays within arrays, allowing for complex data structures like grids.
- Dynamic Size Arrays: Arrays that adjust their size automatically (e.g., JavaScript arrays).
// One-dimensional Array Access
element = array[index]
// Multidimensional Array Access
element = matrix[row][column]
Stacks
- Normal Stack: A LIFO (Last In, First Out) structure for storing data.
- Two Stack: Implementation of two stacks within a single array to save space.
- Min Stack: A stack that can return the minimum value in constant time.
// Stack Operations
push(stack, item)
item = pop(stack)
top = peek(stack)
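The Min Stack mentioned above can be sketched with a second stack that tracks the running minimum (class and method names are illustrative):

```javascript
class MinStack {
  constructor() {
    this.stack = [];
    this.mins = []; // mins top is always the current minimum
  }
  push(x) {
    this.stack.push(x);
    const min = this.mins.length === 0 ? x : Math.min(x, this.mins[this.mins.length - 1]);
    this.mins.push(min);
  }
  pop() {
    this.mins.pop();
    return this.stack.pop();
  }
  peek() { return this.stack[this.stack.length - 1]; }
  getMin() { return this.mins[this.mins.length - 1]; } // O(1)
}

const s = new MinStack();
s.push(5); s.push(2); s.push(7);
console.log(s.getMin()); // 2
s.pop(); s.pop();
console.log(s.getMin()); // 5
```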
Queues
- Normal Queues: A FIFO (First In, First Out) structure for processing elements.
- Queues with Stacks: Implementing queues using stacks to demonstrate versatility.
- Priority Queues: A queue where each element has a priority, and higher priority elements are dequeued before lower priority ones.
- Double Ended Queues: Queues that allow insertion and deletion at both ends.
// Queue Operations
enqueue(queue, item)
item = dequeue(queue)
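The queue-with-stacks idea can be sketched like this: one stack receives enqueues, the other serves dequeues, and elements move across only when the out-stack is empty (names are illustrative):

```javascript
class QueueWithStacks {
  constructor() {
    this.inStack = [];
    this.outStack = [];
  }
  enqueue(item) {
    this.inStack.push(item);
  }
  dequeue() {
    if (this.outStack.length === 0) {
      // Moving everything across reverses LIFO order into FIFO order
      while (this.inStack.length > 0) {
        this.outStack.push(this.inStack.pop());
      }
    }
    return this.outStack.pop();
  }
}

const q = new QueueWithStacks();
q.enqueue(1); q.enqueue(2); q.enqueue(3);
console.log(q.dequeue()); // 1
console.log(q.dequeue()); // 2
```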
Linked Lists
- Singly Linked List: A list where each node points to the next, with a single reference.
- Doubly Linked List: Each node points to both the next and previous nodes.
- Circular Linked List: The last node points back to the first node, forming a circle.
// Singly Linked List Node
node = { value, next }
// Operations
insertFront(head, value)
insertEnd(head, value)
deleteNode(head, value)
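A minimal runnable version of these operations, where each function returns the (possibly new) head; this is a sketch, not the only way to structure it:

```javascript
function insertFront(head, value) {
  return { value, next: head }; // New node becomes the head
}

function insertEnd(head, value) {
  const node = { value, next: null };
  if (head === null) return node;
  let cur = head;
  while (cur.next !== null) cur = cur.next; // Walk to the last node
  cur.next = node;
  return head;
}

function deleteNode(head, value) {
  if (head === null) return null;
  if (head.value === value) return head.next; // Deleting the head
  let cur = head;
  while (cur.next !== null && cur.next.value !== value) cur = cur.next;
  if (cur.next !== null) cur.next = cur.next.next; // Unlink the match
  return head;
}

let head = null;
head = insertEnd(head, 1);
head = insertEnd(head, 2);
head = insertFront(head, 0); // List: 0 -> 1 -> 2
head = deleteNode(head, 1); // List: 0 -> 2
```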
HashMaps
- Hash Functions: Functions that map data to a fixed-size array.
- Hash Tables: Data structures that implement associative arrays with hash functions.
- Linear Probing: A collision resolution method in hash tables.
- Quadratic Probing: A collision resolution technique that uses a quadratic function to find the next available slot.
- Double Hashing: A collision resolution method using two hash functions.
// HashMap Operations
put(key, value)
value = get(key)
remove(key)
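A toy open-addressing table showing linear probing in action (illustrative only: fixed capacity, no resizing or deletion):

```javascript
class LinearProbingTable {
  constructor(capacity = 8) {
    this.capacity = capacity;
    this.keys = new Array(capacity).fill(null);
    this.values = new Array(capacity).fill(null);
  }
  hash(key) {
    let h = 0;
    for (const ch of String(key)) h = (h * 31 + ch.charCodeAt(0)) % this.capacity;
    return h;
  }
  put(key, value) {
    let i = this.hash(key);
    // On collision, step forward one slot at a time (linear probing)
    while (this.keys[i] !== null && this.keys[i] !== key) {
      i = (i + 1) % this.capacity;
    }
    this.keys[i] = key;
    this.values[i] = value;
  }
  get(key) {
    let i = this.hash(key);
    while (this.keys[i] !== null) {
      if (this.keys[i] === key) return this.values[i];
      i = (i + 1) % this.capacity;
    }
    return undefined; // Hit an empty slot: key is absent
  }
}

const table = new LinearProbingTable();
table.put("apple", 3);
table.put("banana", 5);
console.log(table.get("apple")); // 3
```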
4. Non-Linear Data Structures
Trees
- Types:
- Binary Tree: A tree where each node has at most two children.
- AVL Tree: A self-balancing binary search tree where the height difference between subtrees is at most one.
- Search:
- Breadth-First Search: Explores all nodes at the present depth before moving on to the nodes at the next depth level.
- Depth-First Search: Explores as far as possible along each branch before backtracking.
- Traversal:
- Pre-Order Traversal: Visit the root, then left subtree, then right subtree.
- Post-Order Traversal: Visit the left subtree, then right subtree, then root.
- In-Order Traversal: Visit the left subtree, root, then right subtree.
// Binary Tree Node
node = { value, left, right }
// Traversals (each returns early on an empty subtree)
preOrder(node) {
  if node == null { return }
  visit(node)
  preOrder(node.left)
  preOrder(node.right)
}
inOrder(node) {
  if node == null { return }
  inOrder(node.left)
  visit(node)
  inOrder(node.right)
}
postOrder(node) {
  if node == null { return }
  postOrder(node.left)
  postOrder(node.right)
  visit(node)
}
Graphs
- Directed Graphs: Graphs where edges have a direction.
- Undirected Graphs: Graphs where edges have no direction.
// Graph Representation
graph = { vertex1: [edges], vertex2: [edges] }
// BFS (Breadth-First Search)
bfs(startVertex) {
  markVisited(startVertex)
  queue.enqueue(startVertex)
  while not queue.isEmpty() {
    vertex = queue.dequeue()
    visit(vertex)
    for each neighbor of vertex {
      if not visited(neighbor) {
        markVisited(neighbor) // Mark on enqueue so a vertex is queued only once
        queue.enqueue(neighbor)
      }
    }
  }
}
// DFS (Depth-First Search)
dfs(vertex) {
  markVisited(vertex)
  visit(vertex)
  for each neighbor of vertex {
    if not visited(neighbor) {
      dfs(neighbor)
    }
  }
}
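A runnable JavaScript version of BFS over an adjacency-list object (the sample graph is made up):

```javascript
// BFS returning the order in which vertices are visited
function bfs(graph, start) {
  const visited = new Set([start]);
  const queue = [start];
  const order = [];
  while (queue.length > 0) {
    const vertex = queue.shift();
    order.push(vertex);
    for (const neighbor of graph[vertex]) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor); // Mark on enqueue to avoid duplicates
        queue.push(neighbor);
      }
    }
  }
  return order;
}

const graph = { A: ["B", "C"], B: ["D"], C: ["D"], D: [] };
console.log(bfs(graph, "A")); // ["A", "B", "C", "D"]
```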
Heaps
- Features & Common Use: A specialized tree-based data structure that satisfies the heap property, commonly used in priority queues.
Tries
- Features & Use Cases: A tree-like data structure used to store a dynamic set of strings, often used for prefix search and autocomplete systems.
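A minimal Trie sketch in JavaScript with insert, exact search, and prefix lookup (method names are illustrative):

```javascript
class Trie {
  constructor() {
    this.root = {}; // Each node is a plain object keyed by character
  }
  insert(word) {
    let node = this.root;
    for (const ch of word) {
      if (!node[ch]) node[ch] = {}; // Create the child on demand
      node = node[ch];
    }
    node.end = true; // Mark a complete word
  }
  search(word) {
    let node = this.root;
    for (const ch of word) {
      node = node[ch];
      if (node === undefined) return false;
    }
    return node.end === true;
  }
  startsWith(prefix) {
    let node = this.root;
    for (const ch of prefix) {
      node = node[ch];
      if (node === undefined) return false;
    }
    return true;
  }
}

const trie = new Trie();
trie.insert("car");
trie.insert("card");
console.log(trie.search("car")); // true
console.log(trie.search("ca")); // false
console.log(trie.startsWith("ca")); // true
```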
5. Algorithms
Searching Algorithms
- Linear Search: Sequentially checks each element of the list until a match is found or the list ends.
- Binary Search: Efficiently searches a sorted list by repeatedly dividing the search interval in half.
- Ternary Search: Divides the search space into three parts and narrows down to one-third each time.
- Jump Search: Searches sorted arrays by jumping ahead by fixed steps and then performing a linear search in that range.
- Exponential Search: Searches through an unbounded or infinite list by checking increasingly larger ranges.
// Linear Search
linearSearch(array, target) {
for each item in array {
if item == target {
return index
}
}
return not found
}
// Binary Search
binarySearch(sortedArray, target) {
low = 0
high = length(sortedArray) - 1
while low <= high {
mid = floor((low + high) / 2)
if sortedArray[mid] == target {
return mid
} else if sortedArray[mid] < target {
low = mid + 1
} else {
high = mid - 1
}
}
return not found
}
String Manipulations
- Reverse Strings: Reversing the characters in a string.
- Rotate Strings: Rotating characters in a string to the left or right.
- Remove Strings: Deleting specified characters or substrings.
- Insert Strings: Inserting characters or substrings into a string.
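Possible JavaScript implementations of these four manipulations (function names are illustrative):

```javascript
// Reverse a string
function reverseString(s) {
  return s.split("").reverse().join("");
}

// Rotate a string left by k positions
function rotateLeft(s, k) {
  k %= s.length;
  return s.slice(k) + s.slice(0, k);
}

// Remove every occurrence of a substring
function removeSubstring(s, sub) {
  return s.split(sub).join("");
}

// Insert a substring at a given index
function insertSubstring(s, sub, index) {
  return s.slice(0, index) + sub + s.slice(index);
}

console.log(reverseString("abc")); // "cba"
console.log(rotateLeft("abcde", 2)); // "cdeab"
console.log(removeSubstring("banana", "an")); // "ba"
console.log(insertSubstring("helo", "l", 2)); // "hello"
```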
Sorting Algorithms
Bubble Sort
- Description: A simple comparison-based algorithm where adjacent elements are repeatedly swapped if they are in the wrong order. It is known for its simplicity but is inefficient for large datasets.
- Time Complexity: O(n²) in the worst and average cases.
- Space Complexity: O(1), as it sorts the list in place.
- Use Case: Suitable for small datasets or when the list is nearly sorted.
Insertion Sort
- Description: Builds the final sorted array one element at a time. It works by taking one element from the input data and inserting it into the correct position in the already sorted part of the array.
- Time Complexity: O(n²) in the worst and average cases, but O(n) for nearly sorted data.
- Space Complexity: O(1), as it sorts the list in place.
- Use Case: Efficient for small datasets or lists that are already mostly sorted.
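A sketch of Insertion Sort in JavaScript; it sorts in place and also returns the array for convenience:

```javascript
function insertionSort(array) {
  for (let i = 1; i < array.length; i++) {
    const current = array[i];
    let j = i - 1;
    // Shift larger elements right to open a slot for current
    while (j >= 0 && array[j] > current) {
      array[j + 1] = array[j];
      j--;
    }
    array[j + 1] = current;
  }
  return array;
}

console.log(insertionSort([5, 2, 9, 1, 5])); // [1, 2, 5, 5, 9]
```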
Quick Sort
- Description: A divide-and-conquer algorithm that selects a 'pivot' element and partitions the array into two sub-arrays according to whether the elements are less than or greater than the pivot. The sub-arrays are then recursively sorted.
- Time Complexity: O(n log n) on average, but O(n²) in the worst case if the pivot selection is poor.
- Space Complexity: O(log n) due to the recursion stack.
- Use Case: Preferred for large datasets due to its average-case efficiency.
Selection Sort
- Description: Divides the input list into two parts: the sorted part at the beginning and the unsorted part at the end. It repeatedly selects the smallest (or largest, depending on sorting order) element from the unsorted part and swaps it with the leftmost unsorted element.
- Time Complexity: O(n²) in the worst and average cases.
- Space Complexity: O(1), as it sorts the list in place.
- Use Case: Useful when memory is a concern, though generally outperformed by other algorithms like Quick Sort.
Merge Sort
- Description: A stable, divide-and-conquer algorithm that splits the array into halves, recursively sorts each half, and then merges the sorted halves back together.
- Time Complexity: O(n log n) in all cases (worst, average, and best).
- Space Complexity: O(n) due to the additional space needed for merging.
- Use Case: Ideal for large datasets and linked lists, as it provides consistent performance.
Counting Sort
- Description: A non-comparison-based algorithm that counts the occurrences of each unique element in the input array. The count is then used to determine the position of each element in the sorted output array.
- Time Complexity: O(n + k), where n is the number of elements and k is the range of the input data.
- Space Complexity: O(n + k) due to the additional storage required for counting.
- Use Case: Suitable for sorting integers or other discrete data types with a limited range of values.
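A sketch of Counting Sort for non-negative integers; this simplified variant rebuilds the output directly from the counts rather than computing prefix positions, which is fine for plain numbers but forfeits stability:

```javascript
function countingSort(array) {
  if (array.length === 0) return array;
  const max = Math.max(...array);
  const counts = new Array(max + 1).fill(0);
  for (const x of array) counts[x]++; // Count occurrences of each value
  const result = [];
  for (let value = 0; value <= max; value++) {
    for (let c = 0; c < counts[value]; c++) result.push(value);
  }
  return result;
}

console.log(countingSort([4, 2, 2, 8, 3])); // [2, 2, 3, 4, 8]
```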
Bucket Sort
- Description: A non-comparison sorting algorithm that distributes the elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or recursively applying the bucket sort.
- Time Complexity: O(n + k), where n is the number of elements and k is the number of buckets.
- Space Complexity: O(n + k), as it requires additional space for the buckets.
- Use Case: Effective for sorting floating-point numbers or uniformly distributed data.
Radix Sort
- Description: A non-comparison-based sorting algorithm that processes numbers digit by digit, starting from the least significant digit to the most significant digit. It uses a stable subroutine like Counting Sort to sort the digits.
- Time Complexity: O(nk), where n is the number of elements and k is the number of digits in the largest number.
- Space Complexity: O(n + k), where k is the base used for sorting.
- Use Case: Best suited for sorting large numbers of integers or strings of equal length.
Heap Sort
- Description: A comparison-based algorithm that uses a binary heap data structure. The array is first converted into a max-heap (or min-heap), and then the largest (or smallest) element is repeatedly removed and placed in the correct position in the sorted array.
- Time Complexity: O(n log n) in all cases.
- Space Complexity: O(1), as it sorts the list in place.
- Use Case: Preferred in situations where memory usage is a concern and a guaranteed O(n log n) performance is required.
// Bubble Sort
bubbleSort(array) {
for i = 0 to length(array) - 1 {
for j = 0 to length(array) - i - 1 {
if array[j] > array[j + 1] {
swap(array[j], array[j + 1])
}
}
}
}
// Quick Sort
quickSort(array, low, high) {
if low < high {
pivotIndex = partition(array, low, high)
quickSort(array, low, pivotIndex - 1)
quickSort(array, pivotIndex + 1, high)
}
}
// Merge Sort
mergeSort(array) {
if length(array) <= 1 {
return array
}
mid = floor(length(array) / 2)
left = mergeSort(array[0..mid-1])
right = mergeSort(array[mid..end])
return merge(left, right)
}
// Heapify
heapify(array, n, i) {
largest = i
left = 2 * i + 1
right = 2 * i + 2
if left < n and array[left] > array[largest] {
largest = left
}
if right < n and array[right] > array[largest] {
largest = right
}
if largest != i {
swap(array[i], array[largest])
heapify(array, n, largest)
}
}
// Heap Sort
heapSort(array) {
  n = length(array)
  // Build a max-heap from the last non-leaf node down to the root
  for i = floor(n / 2) - 1 downto 0 {
    heapify(array, n, i)
  }
  // Repeatedly move the current maximum to the end and re-heapify the rest
  for i = n - 1 downto 1 {
    swap(array[0], array[i])
    heapify(array, i, 0)
  }
}
}