#### WANT TO UNDERSTAND EFFICIENCY OF PROGRAMS

Challenges in understanding the efficiency of a solution to a computational problem:

• a program can be implemented in many different ways
• you can solve a problem using only a handful of different algorithms
• would like to separate choices of implementation from choices of more abstract algorithm

#### HOW TO EVALUATE EFFICIENCY OF PROGRAMS

• measure with a timer
• count the operations
• abstract notion of order of growth
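The first approach can be sketched with Python's `time` module; `c_to_f` here is a made-up example function, not from the course. The measured time differs between machines, Python versions, and runs, which is exactly why timing alone is a weak measure of algorithmic efficiency:

```python
import time

def c_to_f(c):
    # hypothetical function to time: convert Celsius to Fahrenheit
    return c * 9 / 5 + 32

start = time.perf_counter()   # high-resolution clock
result = c_to_f(100000)
elapsed = time.perf_counter() - start
# elapsed depends on the machine and the run, not just the algorithm
```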

#### ORDERS OF GROWTH

Goals:

• want to evaluate program’s efficiency when input is very big
• want to express the growth of program’s run time as input size grows
• want to put an upper bound on growth
• do not need to be precise: “order of” not “exact” growth

#### MEASURING ORDER OF GROWTH: BIG OH NOTATION

• Big Oh notation measures an upper bound on the asymptotic growth, often called order of growth
• Big Oh or O() is used to describe worst case
• worst case occurs often and is the bottleneck when a program runs
• express rate of growth of program relative to the input size
• evaluate algorithm not machine or implementation

#### SIMPLIFICATION EXAMPLES

• drop constants and multiplicative factors

• focus on dominant terms

O(n^2): n^2 + 2n + 2 - highest-order term is n^2

O(n^2): n^2 + 100000n + 3^1000 - highest-order term is n^2 (3^1000 is just a large constant)

O(n): log(n) + n + 4 - n grows faster than log(n) for large n

O(n log(n)): 0.0001*n*log(n) + 300n - n log(n) dominates at large values

O(3^n): 2n^30 + 3^n - 3^n is larger than n^30 at large values
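A quick numerical check of the dominant-term rule, using the first example above (a sketch, not from the course notes):

```python
# n^2 + 2n + 2 is within a tiny fraction of n^2 once n is large,
# which is why the lower-order terms and constants are dropped
n = 10**6
exact = n**2 + 2*n + 2
dominant = n**2
ratio = exact / dominant   # tends to 1 as n grows
```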

#### COMPLEXITY CLASSES ORDERED LOW TO HIGH

    O(1) : constant
        ↓
    O(log n) : logarithmic
        ↓
    O(n) : linear
        ↓
    O(n log n) : log-linear
        ↓
    O(n^c) : polynomial (c a constant)
        ↓
    O(c^n) : exponential (c a constant)

#### ANALYZING PROGRAMS AND THEIR COMPLEXITY

• combine complexity classes

• analyze statements inside functions
• apply some rules, focus on dominant term

• Law of Addition for O():

• used with sequential statements
• O(f(n)) + O(g(n)) is O( f(n) + g(n) )
• for example,

        for i in range(n):
            print('a')
        for j in range(n*n):
            print('b')

  is O(n) + O(n*n) = O(n + n^2) = O(n^2) because of the dominant term

• Law of Multiplication for O():

• used with nested statements/loops
• O(f(n)) x O(g(n)) is O( f(n) x g(n) )
• for example,

        for i in range(n):
            for j in range(n):
                print('a')

  is O(n) * O(n) = O(n*n) = O(n^2) because the outer loop runs n times and the inner loop runs n times for every outer loop iteration
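Both laws can be checked by counting iterations directly; `count_ops` is a made-up helper, not from the course:

```python
def count_ops(n):
    """Count loop iterations: a sequential O(n) loop followed by a
    nested O(n)*O(n) loop gives n + n^2 operations, i.e., O(n^2)."""
    ops = 0
    for i in range(n):          # Law of Addition: O(n) + ...
        ops += 1
    for i in range(n):          # Law of Multiplication:
        for j in range(n):      # O(n) * O(n) = O(n^2)
            ops += 1
    return ops
```

For n = 10 this returns 10 + 100 = 110 operations; the n^2 term dominates as n grows.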

#### CONSTANT COMPLEXITY

• complexity independent of inputs
• very few interesting algorithms in this class, but can often have pieces that fit this class
• can have loops or recursive calls, but number of iterations or calls independent of size of input
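A minimal sketch of a constant-time piece (`first_char` is a made-up name for illustration):

```python
def first_char(s):
    """O(1): the work done does not depend on len(s)."""
    if len(s) == 0:   # len() on a Python string is itself O(1)
        return None
    return s[0]       # indexing is O(1) regardless of string size
```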

#### LOGARITHMIC COMPLEXITY

• complexity grows as log of size of one of its inputs
• examples:

• bisection search
• binary search of a list
• intToStr below: the loop runs once per decimal digit of i, i.e., about log10(i) times

    def intToStr(i):
        """Convert a nonnegative int to its decimal string; O(log i)."""
        digits = '0123456789'
        if i == 0:
            return '0'
        result = ''
        while i > 0:
            result = digits[i%10] + result
            i = i//10
        return result
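The binary search of a sorted list mentioned above can be sketched as follows; each step halves the search range, so at most about log2(n) comparisons are made:

```python
def bisect_search(L, e):
    """Return True if e is in sorted (ascending) list L; O(log len(L))."""
    lo, hi = 0, len(L)
    while lo < hi:
        mid = (lo + hi) // 2
        if L[mid] == e:
            return True
        elif L[mid] < e:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid       # discard the upper half
    return False
```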


#### LINEAR COMPLEXITY

• searching a list in sequence to see if an element is present
• add characters of a string, assumed to be composed of decimal digits
    def addDigits(s):
        """Sum the digit characters of string s; O(len(s))."""
        val = 0
        for c in s:
            val += int(c)
        return val
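The sequential list search mentioned in the first bullet can be sketched as follows; in the worst case every element is inspected once, so it is O(n):

```python
def linear_search(L, e):
    """Return True if e is in L; worst case visits all n elements -> O(n)."""
    for item in L:
        if item == e:
            return True
    return False
```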

    def fact_iter(n):
        """Iterative factorial; the loop runs n times, so O(n)."""
        prod = 1
        for i in range(1, n+1):
            prod *= i
        return prod

    # recursive factorial
    def fact_recur(n):
        """ assume n >= 0 """
        if n <= 1:
            return 1
        else:
            return n*fact_recur(n - 1)

    # the iterative and recursive factorial implementations have the same order of growth

#### POLYNOMIAL COMPLEXITY

• most common polynomial algorithms are quadratic, i.e., complexity grows with square of size of input
• commonly occurs when we have nested loops or recursive function calls
    # QUADRATIC COMPLEXITY

    def isSubset(L1, L2):
        """True if every element of L1 also appears in L2;
        nested loops give O(len(L1) * len(L2))."""
        for e1 in L1:
            matched = False
            for e2 in L2:
                if e1 == e2:
                    matched = True
                    break
            if not matched:
                return False
        return True

    def g(n):
        """ assume n >= 0; nested loops run n*n times -> O(n^2) """
        x = 0
        for i in range(n):
            for j in range(n):
                x += 1
        return x

#### EXPONENTIAL COMPLEXITY

• recursive functions where more than one recursive call for each size of problem

• Towers of Hanoi
• many important problems are inherently exponential

• unfortunate, as cost can be high
• will lead us to consider approximate solutions more quickly
    def genSubsets(L):
        """Return all subsets of L; the result doubles with each element,
        so O(2^len(L))."""
        if len(L) == 0:
            return [[]]                  # the only subset of the empty list
        smaller = genSubsets(L[:-1])     # all subsets without the last element
        extra = L[-1:]                   # a list holding just the last element
        new = []
        for small in smaller:
            new.append(small + extra)    # each smaller subset plus the last element
        return smaller + new
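The Towers of Hanoi mentioned above is another exponential example. Counting moves rather than printing them makes the growth visible; this counting version is a sketch, not the course's solver:

```python
def hanoi_moves(n):
    """Moves needed to solve Towers of Hanoi with n disks.
    T(n) = 2*T(n-1) + 1 with T(1) = 1, which solves to 2^n - 1 -> O(2^n)."""
    if n <= 1:
        return 1
    # move n-1 disks aside, move the largest, move n-1 disks back on top
    return 2 * hanoi_moves(n - 1) + 1
```

Doubling the recursion at each level is exactly what pushes the cost to 2^n - 1 moves.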

    # Fibonacci, linear complexity - O(n)
    def fib_iter(n):
        if n == 0:
            return 0
        elif n == 1:
            return 1
        else:
            fib_i = 0
            fib_ii = 1
            for i in range(n-1):
                tmp = fib_i
                fib_i = fib_ii
                fib_ii = tmp + fib_ii
            return fib_ii

    # Fibonacci, exponential complexity - O(2^n)
    def fib_recur(n):
        """ assumes n an int >= 0 """
        if n == 0:
            return 0
        elif n == 1:
            return 1
        else:
            return fib_recur(n-1) + fib_recur(n-2)


#### COMPLEXITY OF COMMON PYTHON FUNCTIONS

Lists: n is len(L)

• index O(1)
• store O(1)
• length O(1)
• append O(1)
• == O(n)
• remove O(n)
• copy O(n)
• reverse O(n)
• iteration O(n)
• in list O(n)

Dictionaries: n is len(d)

worst case

• index O(n)
• store O(n)
• length O(n)
• delete O(n)
• iteration O(n)

average case

• index O(1)
• store O(1)
• delete O(1)
• iteration O(n)
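The practical consequence of the tables above: membership testing with `in` is O(n) on a list but average O(1) on a dict (or set), because dicts hash the key straight to its bucket. A small sketch:

```python
# membership testing: O(n) on a list, average O(1) on a dict
L = list(range(100000))
d = dict.fromkeys(L)        # same keys, hash-based lookup
in_list = 99999 in L        # scans elements one by one
in_dict = 99999 in d        # hashes the key directly
```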

Reference

• edX course offered by MIT
• 6.00.1x Introduction to Computer Science and Programming Using Python