We've been using worst-case analysis so far. Why?
Back to linear search on a list of size "n": in the worst case we scan the entire list -- performance O(n).
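To make that concrete, here is a minimal linear search sketch in Python -- the function name and return convention are illustrative, not part of the original notes:

```python
def linear_search(items, key):
    """Return the index of key in items, or -1 if key is absent."""
    for i, item in enumerate(items):
        if item == key:
            return i  # best case: match at index 0, one comparison -> O(1)
    return -1         # worst case: scanned all n elements -> O(n)
```

A hit at the front returns after a single comparison; a key near the end, or a miss, forces a scan of essentially the whole list.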
Why be so pessimistic?
- Let's say we assume the best case (we find a match at the front of the list) -- performance O(1)
- Then we do a demo bragging about how fast our search is
- We open up a list with a trillion elements to show off
- We ask someone attending the demo to try a search
- They enter a search key for an element that is towards the end of the list (or they make a typo that produces no match)
- In both of those cases, our search performance is O(n)
- This is called the Law of Demos
- The "Law of Demos" could just as easily be called the "Law of Production Software"
- We can easily be overly optimistic
However, there are cases where an average-case analysis tells us more. We'll look at those as well.
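As a preview of what average-case analysis buys us: if the search key is equally likely to be at any position, a successful linear search inspects (1 + 2 + ... + n) / n = (n + 1) / 2 elements on average -- still O(n), but half the worst case. A quick simulation sketch to check that figure (the helper name and parameters are illustrative):

```python
import random

def comparisons(items, key):
    """Count how many elements a linear search inspects to find key."""
    for count, item in enumerate(items, start=1):
        if item == key:
            return count
    return len(items)  # a miss inspects all n elements

n = 1_000
items = list(range(n))
trials = 10_000
# Every trial picks a key that is present, uniformly at random.
avg = sum(comparisons(items, random.choice(items)) for _ in range(trials)) / trials
print(f"average comparisons: {avg:.1f}  (theory: {(n + 1) / 2})")
```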