Updated interview section with some patterns

2025-01-15 17:56:35 -05:00
parent f69fcfc50b
commit 65ead5a6f2
4 changed files with 352 additions and 12 deletions


@@ -31,3 +31,136 @@ The space complexity of a graph is O(v+e) where v is the number of vertices and
- Depth First Search Traversing: O(v+e)
- Breadth First Search Traversing: O(v+e)
## Searching
To search a graph, you can use either DFS or BFS.
```python
def DFS(graph, start):
    visited = set()  # keep track of the visited nodes
    stack = [start]  # use a stack to keep track of nodes to visit next
    while stack:
        node = stack.pop()  # get the next node to visit
        if node not in visited:
            visited.add(node)  # mark the node as visited
            print(node, end=' ')  # visit the node (print its value in this case)
            # add the node's unvisited neighbours to the stack
            stack.extend(n for n in graph[node] if n not in visited)

def BFS(graph, start):
    visited = set()  # keep track of the nodes that we've visited
    queue = [start]  # use a queue to implement the BFS
    while queue:
        node = queue.pop(0)  # dequeue a node from the front of the queue
        if node not in visited:
            visited.add(node)  # mark the node as visited
            print(node, end=' ')  # visit the node (print its value in this case)
            queue.extend(graph[node])  # enqueue all neighbours

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E'],
}
DFS(graph, 'A')  # Output: A C F E B D
print()
BFS(graph, 'A')  # Output: A B C D E F
```
## Finding Paths
The below algorithms find all possible paths from the start to the goal, assuming all edges have equal weight. If you need the shortest path in a weighted graph, you can add the accumulated weight to the tuple stored on the stack and pick the lightest path after all paths have been found.
```python
def dfs_paths(graph, start, goal):
    stack = [(start, [start])]
    while stack:
        (vertex, path) = stack.pop()
        # Skip neighbours already in the current path to avoid cycles
        for neighbour in (n for n in graph[vertex] if n not in path):
            if neighbour == goal:
                yield path + [neighbour]
            else:
                stack.append((neighbour, path + [neighbour]))

# Will always yield the shortest path first (if all edges are equal)
def bfs_paths(graph, start, goal):
    queue = [(start, [start])]
    while queue:
        (vertex, path) = queue.pop(0)
        for neighbour in (n for n in graph[vertex] if n not in path):
            if neighbour == goal:
                yield path + [neighbour]
            else:
                queue.append((neighbour, path + [neighbour]))
```
### Finding the Shortest Path (Dijkstra's Algorithm)
```python
from collections import defaultdict

class Graph():
    def __init__(self):
        """
        self.edges is a dict of all possible next nodes
        e.g. {'X': ['A', 'B', 'C', 'E'], ...}
        self.weights has all the weights between two nodes,
        with the two nodes as a tuple as the key
        e.g. {('X', 'A'): 7, ('X', 'B'): 2, ...}
        """
        self.edges = defaultdict(list)
        self.weights = {}

    def add_edge(self, from_node, to_node, weight):
        # Note: assumes edges are bi-directional
        self.edges[from_node].append(to_node)
        self.edges[to_node].append(from_node)
        self.weights[(from_node, to_node)] = weight
        self.weights[(to_node, from_node)] = weight

def dijkstra(graph, initial, end):
    # shortest_paths is a dict of nodes
    # whose value is a tuple of (previous node, weight)
    shortest_paths = {initial: (None, 0)}
    current_node = initial
    visited = set()
    while current_node != end:
        visited.add(current_node)
        destinations = graph.edges[current_node]
        weight_to_current_node = shortest_paths[current_node][1]
        for next_node in destinations:
            weight = graph.weights[(current_node, next_node)] + weight_to_current_node
            if next_node not in shortest_paths:
                shortest_paths[next_node] = (current_node, weight)
            else:
                current_shortest_weight = shortest_paths[next_node][1]
                if current_shortest_weight > weight:
                    shortest_paths[next_node] = (current_node, weight)
        next_destinations = {node: shortest_paths[node] for node in shortest_paths if node not in visited}
        if not next_destinations:
            return "Route Not Possible"
        # next node is the destination with the lowest weight
        current_node = min(next_destinations, key=lambda k: next_destinations[k][1])
    # Work back through destinations in shortest path
    path = []
    while current_node is not None:
        path.append(current_node)
        next_node = shortest_paths[current_node][0]
        current_node = next_node
    # Reverse path
    path = path[::-1]
    return path

graph = Graph()
graph.add_edge('A', 'B', 1)
graph.add_edge('B', 'C', 2)
graph.add_edge('A', 'C', 5)
print(dijkstra(graph, 'A', 'C'))  # ['A', 'B', 'C'] -- going via B costs 3 vs 5 direct
```


@@ -0,0 +1,170 @@
# Patterns
## Prefix Sum
Useful for finding the sum of elements in a subarray. In this pattern, you store the running sum of the elements up to each index in an array.
```python
from itertools import accumulate

array = [1, 2, 3, 4, 5, 6, 7]
prefix_sum = list(accumulate(array))  # [1, 3, 6, 10, 15, 21, 28]
# sum of array[i..j] inclusive = prefix_sum[j] - prefix_sum[i-1] (just prefix_sum[j] when i == 0)
print(prefix_sum[4] - prefix_sum[1])  # sum of array[2..4] = 3 + 4 + 5 = 12
```
## Two Pointers
Can be helpful in array problems where you iterate from the beginning and end of the array, moving the pointers towards each other until they meet. You can also start from the middle and work outwards.
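As a minimal sketch of the converging-pointers variant, here is the classic pair-with-target-sum problem on a sorted array (`pair_with_sum` is an illustrative name, not from the original notes):

```python
def pair_with_sum(sorted_arr, target):
    left, right = 0, len(sorted_arr) - 1
    while left < right:
        current = sorted_arr[left] + sorted_arr[right]
        if current == target:
            return (left, right)
        if current < target:
            left += 1   # need a larger sum, move the left pointer right
        else:
            right -= 1  # need a smaller sum, move the right pointer left
    return None  # no pair adds up to the target

print(pair_with_sum([1, 2, 4, 7, 11, 15], 15))  # (2, 4) since 4 + 11 == 15
```

Because the array is sorted, each step safely discards one end, giving O(n) instead of the O(n^2) of checking every pair.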
## Sliding Window
Helpful for finding subarrays that meet a criteria. You can check sums of subarrays by creating a window of length k, check the currently selected elements, and then when moving the window subtract the left-most element and add the new right-most element.
```python
def max_subarray(arr, k):
    n = len(arr)
    window_sum = sum(arr[:k])  # sum of the first window
    max_sum = window_sum
    max_start_index = 0
    for i in range(n - k):
        # slide the window: drop the left-most element, add the new right-most
        window_sum = window_sum - arr[i] + arr[i + k]
        if window_sum > max_sum:
            max_sum = window_sum
            max_start_index = i + 1
    return arr[max_start_index:max_start_index + k], max_sum
```
## Fast and Slow Pointers
Useful for working with arrays and linked lists. The slow pointer moves 1 node per loop, and the fast pointer moves 2 nodes per loop. This can be used to find a cycle in a linked list, or to find the middle of a linked list, since when the fast pointer reaches the end, the slow pointer is at the middle.
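A minimal sketch of the find-the-middle case (the `Node` class here is an assumed bare-bones linked-list node, not from the original notes):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def find_middle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next       # moves 1 node per loop
        fast = fast.next.next  # moves 2 nodes per loop
    return slow  # when fast hits the end, slow is at the middle

# Build 1 -> 2 -> 3 -> 4 -> 5
head = Node(1)
node = head
for v in [2, 3, 4, 5]:
    node.next = Node(v)
    node = node.next
print(find_middle(head).value)  # 3
```

For cycle detection, the same two pointers are compared each loop: if they ever point at the same node, the list has a cycle.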
## Linked List In-Place Reversal
You can do this by creating 3 pointers: `prev`, `curr`, and `next`.
```python
def reverse_linked_list(head):
    prev = None
    curr = head
    while curr is not None:
        next = curr.next  # remember the rest of the list
        curr.next = prev  # reverse the link
        prev = curr
        curr = next
    return prev  # prev is the new head
```
## Monotonic Stack
Useful for finding the next greatest or next smaller element in an array.
```python
# Find the next greater element for each item in the array
def next_greater_element(arr):
    n = len(arr)
    stack = []  # indices of elements still waiting for a greater element
    result = [-1] * n
    for i in range(n):
        while stack and arr[i] > arr[stack[-1]]:
            result[stack.pop()] = arr[i]
        stack.append(i)
    return result

arr = [1, 4, 6, 3, 2, 7]
result = next_greater_element(arr)  # [4, 6, 7, 7, 7, -1]
```
## Top K Elements
This pattern helps find the top K elements in a dataset that satisfy a criterion (greatest, least, etc.). This can be done by using a min/max-heap capped at K slots, where the top element is the next one to check against. Since popping and inserting in a heap of size k is O(log k), the time complexity is O(n log k).
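A minimal sketch of the top-K-largest case using Python's `heapq` min-heap (`top_k_largest` is an illustrative name):

```python
import heapq

def top_k_largest(nums, k):
    heap = []  # min-heap holding the k largest elements seen so far
    for n in nums:
        heapq.heappush(heap, n)
        if len(heap) > k:
            heapq.heappop(heap)  # evict the smallest, keeping only the top k
    return sorted(heap, reverse=True)

print(top_k_largest([3, 1, 5, 12, 2, 11], 3))  # [12, 11, 5]
```

Keeping a min-heap (not a max-heap) for the k *largest* is the key trick: the heap's root is the weakest of the current top k, so it is the one to compare against and evict.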
## Overlapping Intervals
This pattern can be used when solving problems involving intervals or ranges that may overlap, such as merging them or finding intersections. This is done by sorting all intervals by their start, and then checking whether each interval overlaps the previous one (they overlap when the start of the next interval is less than or equal to the end of the previous one).
```python
# Merging intervals
# [[1,3], [2,6], [8,10], [15,18]] -> [[1,6], [8,10], [15,18]]

# Interval intersection
# [[0,2], [5,10], [13,23], [24,25]] and [[1,5], [8,12], [15,24], [25,26]]
# -> [[1,2], [5,5], [8,10], [15,23], [24,24], [25,25]]

# Insert interval
# [[1,2], [3,5], [6,7], [8,10], [12,16]] with [4,8]
# -> [[1,2], [3,10], [12,16]]
```
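The merging case above can be sketched as follows (one possible implementation of the sort-then-sweep idea):

```python
def merge_intervals(intervals):
    intervals.sort(key=lambda iv: iv[0])  # sort by interval start
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:  # overlaps the previous interval
            merged[-1][1] = max(merged[-1][1], end)  # extend it
        else:
            merged.append([start, end])  # no overlap, start a new interval
    return merged

print(merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]))  # [[1, 6], [8, 10], [15, 18]]
```

After sorting, only the most recently merged interval can overlap the next one, so a single pass suffices: O(n log n) for the sort, O(n) for the sweep.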
## Modified Binary Search
This pattern is helpful when solving problems where you need to find elements in an imperfectly sorted array (nearly sorted, rotated sorted, unknown length, containing duplicates). It can also help find the first or last occurrence of an element, the square root of a number, or a peak element.
```python
def search_rotated_array(nums, target):
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        if nums[left] <= nums[mid]:  # left half is sorted
            if nums[left] <= target < nums[mid]:
                right = mid - 1
            else:
                left = mid + 1
        else:  # right half is sorted
            if nums[mid] < target <= nums[right]:
                left = mid + 1
            else:
                right = mid - 1
    return -1

print(search_rotated_array([4, 5, 6, 7, 0, 1, 2], 0))  # 4
```
## Binary Tree Traversal
Traversing a tree can use either BFS or DFS, but which one you use can depend on what the use case is.
- To retrieve values, use DFS inorder traversal.
- To create a copy of a tree, use DFS preorder traversal.
- To process child nodes before the parent (like deleting a tree), use DFS postorder traversal.
- If you need to explore all nodes at current level before moving on, use BFS.
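As a minimal sketch of the first bullet, an inorder traversal that collects values (the `TreeNode` class is an assumed bare-bones node, not from the original notes):

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def inorder(node, out):
    if node is None:
        return out
    inorder(node.left, out)   # Left
    out.append(node.value)    # Root
    inorder(node.right, out)  # Right
    return out

#      2
#     / \
#    1   3
root = TreeNode(2, TreeNode(1), TreeNode(3))
print(inorder(root, []))  # [1, 2, 3] -- on a BST, inorder yields sorted values
```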
## Matrix Traversal
Matrix traversal can be seen as graph problems, so you can use DFS or BFS to solve a lot of these problems.
```python
# Number of Islands problem
def count_islands(grid):
    if not grid or not grid[0]:
        return 0
    rows, cols = len(grid), len(grid[0])
    islands = 0

    def dfs(i, j):
        # stop at the grid edges, or when the cell is water or already visited
        if i < 0 or i >= rows or j < 0 or j >= cols or grid[i][j] != '1':
            return
        grid[i][j] = '0'  # sink the cell so it isn't counted twice
        dfs(i + 1, j)
        dfs(i - 1, j)
        dfs(i, j + 1)
        dfs(i, j - 1)

    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == '1':
                dfs(i, j)
                islands += 1
    return islands
```
## Backtracking
This can be used to explore all potential solution paths, backtracking out of the paths that do not lead to a solution. This can be helpful for finding permutations, combinations, and all possible paths between start and end points.
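A minimal sketch of the permutations case, showing the choose / explore / un-choose structure that defines backtracking:

```python
def permutations(nums):
    results = []

    def backtrack(path, remaining):
        if not remaining:            # no choices left: path is a full permutation
            results.append(path[:])
            return
        for i in range(len(remaining)):
            path.append(remaining[i])                           # choose
            backtrack(path, remaining[:i] + remaining[i + 1:])  # explore
            path.pop()                                          # un-choose (backtrack)

    backtrack([], nums)
    return results

print(len(permutations([1, 2, 3])))  # 6
```

The `path.pop()` after the recursive call is the "backtracking" step: it undoes the last choice so the loop can try the next one from the same state.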


@@ -19,3 +19,39 @@ Each node in a binary tree has at most 2 children. Many operations on a binary
### Binary Search Tree
A binary search tree or BST is a tree where the left child node is always less than the parent, and the right child node is always greater. This allows for very fast searching since it is easy to make a decision on which path to check down next.
## Searching
There are two main methods for searching trees: **Breadth First Search** and **Depth First Search**. In BFS, we traverse the tree level by level, and in DFS we traverse sub-tree by sub-tree, going down to the bottom and working our way up. BFS uses a queue (FIFO) and DFS uses a stack (LIFO). Within DFS we can traverse in order (Left-Root-Right), preorder (Root-Left-Right), or postorder (Left-Right-Root).
```python
def BFS(root):
    if root is None:
        return
    queue = [root]
    while queue:
        node = queue.pop(0)  # dequeue a node from the front of the queue
        print(node.value, end=' ')  # visit the node (print its value in this case)
        # Can also be done with n children if not a binary tree
        # enqueue left child
        if node.left:
            queue.append(node.left)
        # enqueue right child
        if node.right:
            queue.append(node.right)

def DFS(node):
    if node is None:
        return
    # This example is pre-order, but just rearrange the order of these
    # statements to make it in-order or post-order
    # Visit the node (print its value in this case)
    print(node.value, end=' ')
    # Recursively call DFS on the left child
    DFS(node.left)
    # Recursively call DFS on the right child
    DFS(node.right)
```


@@ -17,7 +17,8 @@
{"text": "Stacks and Queues", "link": "/interview/ds/stackqueue"},
{"text": "Strings", "link": "/interview/ds/string"},
{"text": "Graphs", "link": "/interview/ds/graph"},
{"text": "Trees", "link": "/interview/ds/tree"}
{"text": "Trees", "link": "/interview/ds/tree"},
{"text": "Patterns", "link": "/interview/ds/patterns"}
]
},
{