Data Structures - Space and Time Complexity
Observing the time complexity of different algorithms
- Space and Time Complexity
- Constant O(1)
- Linear O(n)
- Quadratic O(n^2)
- Logarithmic O(log n)
- Exponential O(2^n)
- Hacks
- Question 1
- Question 2
- Question 3
- Question 4
- Question 5
- Question 6
- Question 7
- Question 8
- Question 9
- Question 10
Space and Time Complexity
Space complexity refers to the amount of memory used by an algorithm to complete its execution, as a function of the size of the input. The space complexity of an algorithm can be affected by various factors such as the size of the input data, the data structures used in the algorithm, the number and size of temporary variables, and the recursion depth. Time complexity refers to the amount of time required by an algorithm to run as the input size grows. It is usually measured in terms of the "Big O" notation, which describes the upper bound of an algorithm's time complexity.
Why do you think a programmer should care about space and time complexity?
- Space and time complexity are an integral consideration to make about the functionality and running quality of a program. Failing to consider space and time complexity can get in the way of the functionality of the program. Creating bounds for the complexity of a program makes sure that it functions accurately. For example, if time complexity is not considered, then a time-based process that uses the data outputted from a certain long process may fail.
- There's also a level of convenience for the user to consider. If a program is demanding on a computer and takes a long time to run, people won't want to use it.
Take a look at our Lassen Volcano example from the data compression tech talk. The first code block is the original image. In the second code block, change the baseWidth to rescale the image.
from IPython.display import Image, display
from pathlib import Path
# prepares a series of images
def image_data(path=Path("images/"), images=None):  # path of static images is defaulted
    for image in images:
        # File to open
        image['filename'] = path / image['file']  # file with path
    return images

def image_display(images):
    for image in images:
        display(Image(filename=image['filename']))

if __name__ == "__main__":
    lassen_volcano = image_data(images=[{'source': "Peter Carolin", 'label': "Lassen Volcano", 'file': "lassen-volcano.jpg"}])
    image_display(lassen_volcano)
from IPython.display import HTML, display
from pathlib import Path
from PIL import Image as pilImage
from io import BytesIO
import base64
# prepares a series of images
def image_data(path=Path("images/"), images=None):  # path of static images is defaulted
    for image in images:
        # File to open
        image['filename'] = path / image['file']  # file with path
    return images
def scale_image(img):
    baseWidth = 625
    #baseWidth = 1250
    #baseWidth = 2500
    #baseWidth = 5000  # see the effect of doubling or halving the baseWidth
    #baseWidth = 10000
    #baseWidth = 20000
    #baseWidth = 40000
    scalePercent = (baseWidth / float(img.size[0]))
    scaleHeight = int((float(img.size[1]) * float(scalePercent)))
    scale = (baseWidth, scaleHeight)
    return img.resize(scale)
def image_to_base64(img, format):
    with BytesIO() as buffer:
        img.save(buffer, format)
        return base64.b64encode(buffer.getvalue()).decode()
def image_management(image):
    # Image open returns a PIL image object
    img = pilImage.open(image['filename'])
    # Python Image Library operations
    image['format'] = img.format
    image['mode'] = img.mode
    image['size'] = img.size
    image['width'], image['height'] = img.size
    image['pixels'] = image['width'] * image['height']
    # Scale the Image
    img = scale_image(img)
    image['pil'] = img
    image['scaled_size'] = img.size
    image['scaled_width'], image['scaled_height'] = img.size
    image['scaled_pixels'] = image['scaled_width'] * image['scaled_height']
    # Scaled HTML
    image['html'] = '<img src="data:image/png;base64,%s">' % image_to_base64(image['pil'], image['format'])
if __name__ == "__main__":
    images = image_data(images=[{'source': "Peter Carolin", 'label': "Lassen Volcano", 'file': "lassen-volcano.jpg"}])
    # Display metadata and the scaled view for each image
    for image in images:
        image_management(image)
        print("---- meta data -----")
        print(image['label'])
        print(image['source'])
        print(image['format'])
        print(image['mode'])
        print("Original size: ", image['size'], " pixels: ", f"{image['pixels']:,}")
        print("Scaled size: ", image['scaled_size'], " pixels: ", f"{image['scaled_pixels']:,}")
        print("-- original image --")
        display(HTML(image['html']))
Do you think this is a time complexity or space complexity or both problem?
- I think it's both a time and a space problem. Since the image wasn't able to display in VSCode and the process ran for around a minute and a half, this is not a well-optimized function for basic laptops.
numbers = list(range(1000))
print(numbers)
print(numbers[263])
ncaa_bb_ranks = {1:"Alabama",2:"Houston", 3:"Purdue", 4:"Kansas"}
#look up a value in a dictionary given a key
print(ncaa_bb_ranks[1])
Space
This function takes two number inputs and returns their sum. The function does not create any additional data structures or variables that are dependent on the input size, so its space complexity is constant, or O(1). Regardless of how large the input numbers are, the function will always require the same amount of memory to execute.
def sum(a, b):
    return a + b

print(sum(90, 88))
print(sum(.9, .88))
Time
An example of a linear time algorithm is traversing a list or an array. When the size of the list or array increases, the time taken to traverse it also increases linearly with the size. Hence, the time complexity of this operation is O(n), where n is the size of the list or array being traversed.
for i in numbers:
    print(i)
Space
This function takes a list of elements arr as input and returns a new list with the elements in reverse order. The function creates a new list reversed_arr of the same size as arr to store the reversed elements. The size of reversed_arr depends on the size of the input arr, so the space complexity of this function is O(n). As the input size increases, the amount of memory required to execute the function also increases linearly.
def reverse_list(arr):
    n = len(arr)
    reversed_arr = [None] * n  # create a list of None values based on the length of arr
    for i in range(n):
        reversed_arr[n-i-1] = arr[i]  # copy each element of arr into the mirrored index of reversed_arr
    return reversed_arr

print(numbers)
print(reverse_list(numbers))
Time
An example of a quadratic time algorithm is nested loops. When there are two nested loops that both iterate over the same collection, the time taken to complete the algorithm grows quadratically with the size of the collection. Hence, the time complexity of this operation is O(n^2), where n is the size of the collection being iterated over.
for i in numbers:
    for j in numbers:
        print(i, j)
Space
This function takes two matrices matrix1 and matrix2 as input and returns their product as a new matrix. The function creates a new matrix result with dimensions m by n to store the product of the input matrices. The size of result depends on the size of the input matrices, so the space complexity of this function is O(n^2). As the size of the input matrices increases, the amount of memory required to execute the function also increases quadratically.
- Main take away is that a new matrix is created.
def multiply_matrices(matrix1, matrix2):
    m = len(matrix1)
    n = len(matrix2[0])
    # Build the result matrix row by row; note that [[0] * n] * m would
    # alias the same row object m times and corrupt the result
    result = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for k in range(len(matrix2)):
                result[i][j] += matrix1[i][k] * matrix2[k][j]
    return result

print(multiply_matrices([[1,2],[3,4]], [[3,4],[1,2]]))
Time
An example of a log time algorithm is binary search. Binary search is an algorithm that searches for a specific element in a sorted list by repeatedly dividing the search interval in half. As a result, the time taken to complete the search grows logarithmically with the size of the list. Hence, the time complexity of this operation is O(log n), where n is the size of the list being searched.
def binary_search(arr, low, high, target):
    while low <= high:
        mid = (low + high) // 2  # integer division
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found

target = 263
result = binary_search(numbers, 0, len(numbers) - 1, target)
print(result)
Space
A recursive version of the same algorithm has O(log n) space complexity. (The iterative loop above actually uses O(1) space, since it only keeps a few index variables.) A recursive implementation takes an array arr, its lower and upper bounds low and high, and a target value target, and searches for target by recursively dividing the search space in half until the target is found or the search space is empty. It does not create any new data structures that depend on the size of arr; instead it uses the call stack to keep track of the recursive calls. Since the maximum depth of the recursion is O(log n), where n is the size of arr, the space complexity of this version is O(log n). As the size of arr increases, the amount of memory required to execute the function grows logarithmically.
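Such a recursive version might look like this; a minimal sketch, where binary_search_recursive is an illustrative name rather than code from the original post:

```python
# Recursive binary search: each call halves the search space,
# so the call stack grows to at most O(log n) frames.
def binary_search_recursive(arr, low, high, target):
    if low > high:
        return -1  # target not present
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search_recursive(arr, mid + 1, high, target)
    else:
        return binary_search_recursive(arr, low, mid - 1, target)

numbers = list(range(1000))
print(binary_search_recursive(numbers, 0, len(numbers) - 1, 263))  # 263
```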
Time
An example of an O(2^n) algorithm is the recursive implementation of the Fibonacci sequence. The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. The recursive implementation of the Fibonacci sequence calculates each number by recursively calling itself with the two preceding numbers until it reaches the base case (i.e., the first or second number in the sequence). The algorithm takes O(2^n) time in the worst case because it has to calculate each number in the sequence by making two recursive calls.
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

#print(fibonacci(5))
#print(fibonacci(10))
#print(fibonacci(20))
print(fibonacci(30))
#print(fibonacci(40))
Space
This function takes a set s as input and generates all possible subsets of s. The function does this by recursively generating the subsets of the set without the first element, and then adding the first element to each of those subsets to generate the subsets that include the first element. The function creates a new list for each recursive call that stores the subsets, and each element in the list is a new list that represents a subset. The number of subsets that can be generated from a set of size n is 2^n, so the space complexity of this function is O(2^n). As the size of the input set increases, the amount of memory required to execute the function grows exponentially.
def generate_subsets(s):
    if not s:
        return [[]]
    subsets = generate_subsets(s[1:])
    return [[s[0]] + subset for subset in subsets] + subsets

print(generate_subsets([1,2,3]))
#print(generate_subsets(numbers))
Using the time library, we are able to see the difference in time it takes to calculate the fibonacci function above.
- Based on what is known about the other time complexities, hypothesize the resulting elapsed time if the function is replaced.
The difference in timing between the functions seems to follow their mathematical growth rates. For example, in the exponential case below, each increment of n roughly doubles the elapsed time, because the number of recursive calls is continually multiplied by 2.
For a logarithmic function, the difference in timing would be far less extreme as n grows, since logarithmic functions flatten out rather than accelerate. Likewise, it makes sense that a quadratic function's elapsed time grows faster and faster as n increases.
import time
start_time = time.time()
print(fibonacci(34))
end_time = time.time()
total_time = end_time - start_time
print("Time taken:", total_time, "seconds")
start_time = time.time()
print(fibonacci(35))
end_time = time.time()
total_time = end_time - start_time
print("Time taken:", total_time, "seconds")
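To test the hypothesis, the exponential fibonacci can be swapped for a linear-time replacement and timed the same way. This is a sketch; sum_to_n is an illustrative helper, not from the original notebook:

```python
import time

def sum_to_n(n):
    # O(n): one addition per value of i
    total = 0
    for i in range(n):
        total += i
    return total

start_time = time.time()
print(sum_to_n(1_000_000))
end_time = time.time()
print("Time taken:", end_time - start_time, "seconds")
```

Even with an input many orders of magnitude larger than fibonacci(35)'s, the linear loop should finish in a fraction of the time.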
Hacks
- Record your findings when testing the time elapsed of the different algorithms.
- Although we will go more in depth later, time complexity is a key concept that relates to the different sorting algorithms. Do some basic research on the different types of sorting algorithms and their time complexity.
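As a starting point for that research, this sketch compares a quadratic bubble sort against Python's built-in sort (Timsort, O(n log n)); bubble_sort is an illustrative helper written for this comparison:

```python
import random
import time

def bubble_sort(arr):
    # O(n^2): repeated passes, swapping adjacent out-of-order pairs
    arr = arr[:]  # work on a copy
    n = len(arr)
    for i in range(n):
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

data = [random.randint(0, 10000) for _ in range(2000)]

start = time.time()
bubble_sort(data)
print("bubble sort:", time.time() - start, "seconds")

start = time.time()
sorted(data)
print("built-in sort:", time.time() - start, "seconds")
```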
Complete the Time and Space Complexity analysis questions linked below: Practice
Why is time and space complexity important when choosing an algorithm?
- Time and space complexity are important considerations when choosing an algorithm because they directly impact the efficiency of the algorithm. The time complexity determines how long an algorithm takes to execute, and the space complexity determines how much memory the algorithm uses. For large datasets or real-time applications, it is crucial to choose an algorithm with low time complexity. Similarly, when working with limited memory resources or large datasets, choosing an algorithm with low space complexity is essential to avoid memory issues.
Should you always use a constant time algorithm / should you never use an exponential time algorithm? Explain.
- No, the choice of an algorithm depends on the specific problem, context, and constraints you are working with. While constant time algorithms are generally more efficient than exponential time algorithms, there are cases where using an exponential time algorithm might be necessary or even preferable.
What are some general patterns that you noticed to determine each algorithm's time and space complexity?
- Constant time complexity (O(1)): Algorithms that have a constant number of operations, regardless of the input size, exhibit constant time complexity.
- Linear time complexity (O(n)): If the number of operations is directly proportional to the input size, the algorithm has linear time complexity. For example, if you iterate through an array or list once, the time complexity is O(n).
- Quadratic time complexity (O(n^2)): Algorithms with nested loops that iterate through the input typically exhibit quadratic time complexity. For example, comparing each element of an array to every other element results in O(n^2) complexity.
- Logarithmic time complexity (O(log n)): If the algorithm reduces the input size by a factor (usually 2) in each iteration, it has logarithmic time complexity. Binary search is an example, as it divides the search space in half with each step.
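These patterns can be checked by counting operations directly; this sketch counts loop iterations for a small input (the counter functions are illustrative):

```python
def count_linear(n):
    ops = 0
    for _ in range(n):      # one pass over the input: O(n)
        ops += 1
    return ops

def count_quadratic(n):
    ops = 0
    for _ in range(n):      # nested loops over the input: O(n^2)
        for _ in range(n):
            ops += 1
    return ops

def count_logarithmic(n):
    ops = 0
    i = n
    while i > 1:            # input halved each step: O(log n)
        i //= 2
        ops += 1
    return ops

print(count_linear(16), count_quadratic(16), count_logarithmic(16))  # 16 256 4
```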
a = 0
b = 0
for i in range(N):
    a = a + random()
for i in range(M):
    b = b + random()
My answer: 3
Reasoning: The time complexity of the two for loops is O(N) and O(M) respectively, but since they are not nested, the time complexity cannot be obtained by multiplying N and M. Rather, it is represented as O(N + M). The size of the variables is constant and independent of the input size, so the space complexity can be expressed as O(1), i.e., constant.
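The additive behavior can be confirmed by counting iterations: two sequential loops perform exactly N + M steps, not N * M. A quick check (count_sequential is an illustrative helper):

```python
def count_sequential(N, M):
    # Two loops one after another: their iteration counts add
    ops = 0
    for _ in range(N):
        ops += 1
    for _ in range(M):
        ops += 1
    return ops

print(count_sequential(100, 250))  # 350, i.e. N + M rather than N * M
```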
a = 0
for i in range(N):
    for j in reversed(range(i, N)):
        a = a + i + j
My answer: 4
Reasoning: These for loops are nested, making them a classic example of quadratic time complexity, which we covered in class. Thus, I could quickly determine that their time complexity is O(N*N).
k = 0
for i in range(n//2, n):
    for j in range(2, n, pow(2,j)):
        k = k + n / 2
My answer: 2 (Correct)
Reasoning: This one had me a bit unsure since I struggled to understand the range function's step argument. Perhaps my lack of experience with it was the issue. While I recognized the first loop as linear, the second loop appeared to be incrementing in powers of two, similar to logarithmic growth with a base of 2. Consequently, I determined the time complexity to be O(n * log2(n)), or more commonly, O(n log n).
My answer: 2
Reasoning: Upon encountering the term "asymptotically," I had to research its meaning. Consequently, I learned that it refers to X being more efficient than Y once n exceeds a certain positive constant limit.
a = 0
i = N
while (i > 0):
    a += i
    i //= 2
My answer: 4
Reasoning: This problem presented a strange scenario as it bore similarities to the log base 2 example we had earlier, but had not been expressed as a while loop yet. Ultimately, I managed to give an informed response, but lacked complete confidence in my answer.
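Counting iterations confirms the logarithmic growth; this sketch mirrors the quiz loop with an illustrative counter added:

```python
def count_halving(N):
    ops = 0
    i = N
    while i > 0:   # i is halved each pass, so the loop runs O(log N) times
        ops += 1
        i //= 2
    return ops

# For N = 1024 the loop runs floor(log2(1024)) + 1 = 11 times
print(count_halving(1024))  # 11
```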
My answer: 3
Reasoning: This is just a basic concept that we learned from the lesson.
My answer: 2
Reasoning: It appeared to be a typical textbook definition, but I was uncertain about the meaning of the term "primitive." I did know, however, that it did not involve determining the size of the data or merely counting the number of algorithms.
for i in range(n):
    i = i*k
My answer: 3
Reasoning: From a mathematical perspective, the value can only be 3. The linear answers of 1 or 2 are not viable, since the loop multiplies the counter i, which determines how long the program keeps running, by k each iteration. In other words, the program runs until k has been multiplied by itself enough times to reach n, growth that would be graphically represented on a logarithmic scale.
value = 0
for i in range(n):
    for j in range(i):
        value = value + 1
My answer: 4 wrong Correct answer: 3
Reasoning: This type of loop was new to me, and I struggled to convert it to Big O notation. After contemplating it further, I realized that the range function establishes n as a ceiling that can never be reached. For instance, if n were 2, the loop would iterate through 0 and 1 but never reach 2. Consequently, a time complexity of O(n * (n - 1)), which simplifies to O(n^2), seems appropriate.
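The iteration count can be verified directly: the inner loop runs 0 + 1 + ... + (n-1) = n(n-1)/2 times, which is still O(n^2). A quick check, with an illustrative counter:

```python
def count_triangular(n):
    ops = 0
    for i in range(n):
        for j in range(i):   # inner loop runs i times
            ops += 1
    return ops

n = 100
print(count_triangular(n), n * (n - 1) // 2)  # both 4950
```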
My answer: 2
Reasoning: One reason why I responded in this manner is that I tend to answer "false" whenever a question about time complexity is coupled with the phrase "always runs faster." Additionally, even though I'm not familiar with the concept of "worst-case running time," I have observed that linear algorithms can occasionally outperform logarithmic ones, particularly when dealing with smaller values of n.