Challenges in understanding the efficiency of a solution to a computational problem:

- a program can be **implemented in many different ways**
- you can solve a problem using only a handful of different **algorithms**
- we would like to separate choices of implementation from choices of the more abstract algorithm

Ways to evaluate efficiency:

- measure with a **timer**
- **count** the operations
- use an abstract notion of **order of growth**

Goals:

- want to evaluate program’s efficiency when input is very big
- want to express the growth of program’s run time as input size grows
- want to put an upper bound on growth
- do not need to be precise: “order of” not “exact” growth

- Big Oh notation measures an upper bound on the asymptotic growth, often called the order of growth
- Big Oh or O() is used to describe the worst case
- the worst case occurs often and is the bottleneck when a program runs
- expresses the rate of growth of a program's run time relative to the input size
- evaluates the algorithm, not the machine or implementation

- drop constants and multiplicative factors
- focus on the dominant term

**O(n^2): n^2 + 2n + 2** - highest-order term is n^2

**O(n^2): n^2 + 100000n + 3^1000** - highest-order term is n^2 (3^1000 is just a large constant)

**O(n): log(n) + n + 4** - n grows faster than log(n) for large n

**O(n log(n)): 0.0001*n*log(n) + 300n** - n log(n) dominates at large values

**O(3^n): 2n^30 + 3^n** - 3^n grows faster than n^30 at large values
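A quick numeric check (this snippet and the function name `f` are my own illustration, not from the course) of why the dominant term wins: evaluating n^2 + 2n + 2 at a large n gives a value almost indistinguishable from n^2 alone.

```python
# Illustration: for large n the highest-order term dominates,
# so n^2 + 2n + 2 behaves like n^2 and is O(n^2).
def f(n):
    return n**2 + 2*n + 2

ratio = f(10**6) / (10**6)**2   # very close to 1 for large n
```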

```
O(1)       : constant
    ↓
O(log n)   : logarithmic
    ↓
O(n)       : linear
    ↓
O(n log n) : log-linear
    ↓
O(n^c)     : polynomial
    ↓
O(c^n)     : exponential
```

Combining complexity classes:

- analyze statements inside functions
- apply some rules; focus on the dominant term

Law of Addition for O():

- used with sequential statements
- **O(f(n)) + O(g(n)) is O( f(n) + g(n) )**

Law of Multiplication for O():

- used with nested statements/loops
- **O(f(n)) x O(g(n)) is O( f(n) x g(n) )**
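The two laws can be seen in a small sketch (the function `count_ops` is a hypothetical example of mine, not course code): a sequential loop contributes O(n), a nested pair of loops contributes O(n) x O(n) = O(n^2), and their sum is O(n^2) by the law of addition.

```python
def count_ops(n):
    ops = 0
    for i in range(n):        # sequential loop: O(n)
        ops += 1
    for i in range(n):        # nested loops: O(n) * O(n) = O(n^2)
        for j in range(n):
            ops += 1
    return ops                # O(n) + O(n^2) is O(n^2)
```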

**Constant complexity O(1)**:

- complexity is independent of the size of the inputs
- very few interesting algorithms fall entirely in this class, but programs can often have pieces that fit this class
- can have loops or recursive calls, but the number of iterations or calls is independent of the size of the input

**Logarithmic complexity O(log n)**:

- complexity grows as the log of the size of one of its inputs
- examples:
  - bisection search
  - binary search of a list
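Bisection search over a sorted list can be sketched as follows (the function name `bisect_search` is my own; the course covers the same idea). Each iteration halves the search range, which is what makes it O(log n).

```python
def bisect_search(L, e):
    """Search a sorted list L for e; O(log n) because the
    range under consideration halves on every iteration."""
    low, high = 0, len(L) - 1
    while low <= high:
        mid = (low + high) // 2
        if L[mid] == e:
            return True
        elif L[mid] < e:
            low = mid + 1     # discard the lower half
        else:
            high = mid - 1    # discard the upper half
    return False
```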


```
def intToStr(i):
    """Convert a non-negative int to a decimal string;
    O(log i), since the loop runs once per digit."""
    digits = '0123456789'
    if i == 0:
        return '0'
    result = ''
    while i > 0:
        result = digits[i % 10] + result
        i = i // 10
    return result
```

**Linear complexity O(n)**:

- searching a list in sequence to see if an element is present
- adding the characters of a string, assumed to be composed of decimal digits
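The first example, scanning a list in sequence, can be sketched as follows (the function name `linear_search` is mine, not from the course). In the worst case every element is examined, so it is O(n).

```python
def linear_search(L, e):
    """Check every element of L in sequence: O(n) where n is len(L)."""
    for item in L:
        if item == e:
            return True
    return False
```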


```
def addDigits(s):
    """Sum the digits of a decimal string; O(len(s))."""
    val = 0
    for c in s:
        val += int(c)
    return val
```


```
def fact_iter(n):
    """Iterative factorial; the loop runs n times, so O(n)."""
    prod = 1
    for i in range(1, n+1):
        prod *= i
    return prod
```


```
# recursive function
def fact_recur(n):
    """ assume n >= 0 """
    if n <= 1:
        return 1
    else:
        return n * fact_recur(n - 1)
# the iterative and recursive factorial implementations
# have the same order of growth: O(n)
```

**Polynomial complexity O(n^c)**:

- the most common polynomial algorithms are quadratic, i.e., complexity grows with the square of the size of the input
- commonly occurs when we have nested loops or recursive function calls


```
# QUADRATIC COMPLEXITY
def isSubset(L1, L2):
    """True if every element of L1 appears in L2;
    the nested loops give O(len(L1) * len(L2))."""
    for e1 in L1:
        matched = False
        for e2 in L2:
            if e1 == e2:
                matched = True
                break
        if not matched:
            return False
    return True
```


```
def g(n):
    """ assume n >= 0 """
    x = 0
    for i in range(n):      # outer loop: n iterations
        for j in range(n):  # inner loop: n iterations each
            x += 1
    return x                # O(n) * O(n) = O(n^2)
```

**Exponential complexity O(c^n)**:

- recursive functions that make more than one recursive call for each size of problem
- example: Towers of Hanoi
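For Towers of Hanoi, each problem of size n spawns two subproblems of size n-1 (plus one move), so the move count doubles with each added disk. A minimal sketch (the function name `hanoi_moves` is my own) that counts moves rather than printing them:

```python
def hanoi_moves(n):
    """Moves needed to solve Towers of Hanoi with n disks.
    Two recursive subproblems per level gives 2^n - 1 moves: O(2^n)."""
    if n == 1:
        return 1
    # move n-1 disks aside, move the largest, move n-1 back on top
    return 2 * hanoi_moves(n - 1) + 1
```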

Many important problems are inherently exponential:

- unfortunate, as the cost can be high
- will lead us to consider approximate solutions more quickly


```
# EXPONENTIAL COMPLEXITY: generate all subsets (the power set) of L
def genSubsets(L):
    if len(L) == 0:
        return [[]]                  # a list containing only the empty set
    smaller = genSubsets(L[:-1])     # all subsets without the last element
    extra = L[-1:]                   # a list holding just the last element
    new = []
    for small in smaller:
        new.append(small + extra)    # each smaller subset plus the last element
    return smaller + new             # output doubles per element: O(2^n)
```
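Why this is exponential: each element of L doubles the number of subsets, so the result has 2^len(L) entries. That doubling can be captured in a tiny recurrence (the helper `count_subsets` is my own illustration, not course code):

```python
def count_subsets(n):
    """A set of n elements has 2**n subsets: each element doubles
    the count (subsets with it, plus subsets without it)."""
    if n == 0:
        return 1              # only the empty set
    return 2 * count_subsets(n - 1)
```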


```
# Fibonacci, iterative: linear complexity O(n)
def fib_iter(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        fib_i = 0
        fib_ii = 1
        for i in range(n-1):
            tmp = fib_i
            fib_i = fib_ii
            fib_ii = tmp + fib_ii
        return fib_ii
```


```
# Fibonacci, recursive: exponential complexity O(2^n)
def fib_recur(n):
    """ assumes n is an int >= 0 """
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib_recur(n-1) + fib_recur(n-2)
```

**Lists**: n is len(L)

- index O(1)
- store O(1)
- length O(1)
- append O(1)
- == O(n)
- remove O(n)
- copy O(n)
- reverse O(n)
- iteration O(n)
- in list O(n)

**Dictionaries**: n is len(d)

worst case

- index O(n)
- store O(n)
- length O(n)
- delete O(n)
- iteration O(n)

average case

- index O(1)
- store O(1)
- delete O(1)
- iteration O(n)

Reference

- edX course offered by MIT
- 6.00.1x Introduction to Computer Science and Programming Using Python