After graduating from Flatiron School, I wondered: what should I learn next? After a few attempts at applying to engineering positions and taking some coding challenges, I stumbled upon a question I didn't know the answer to. I was asked to analyze an algorithm using Big O notation, and I had no idea what that was.
As I began diving into this new terminology, I had two important questions:
1 — What is an algorithm in software development and programming?
A programming algorithm is a procedure or formula for solving a problem. It consists of a sequence of specified actions that describe how to do something, and your computer will carry out those actions exactly that way every time.
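As a tiny illustration of that definition (my own example, not from the challenge), here is an algorithm in Ruby that finds the largest number in an array by following the same sequence of steps every time:

```ruby
# An algorithm as a sequence of specified actions:
# walk the array once, remembering the biggest value seen so far.
def find_max(numbers)
  max = numbers.first
  numbers.each do |n|
    max = n if n > max # keep the largest value seen so far
  end
  max
end

puts find_max([3, 7, 2, 9, 4]) # prints 9
```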
2 — What is Big O Notation?
Big O notation is a simplified analysis of an algorithm's efficiency. In other words, it tells you how the number of operations an algorithm performs grows as its input grows. We will use Big O notation to compare different algorithms by the number of operations they make.
Big O establishes a worst-case run time
Imagine you are working in HR and you need to find the record of an employee. You are going to use a simple search algorithm to go through all the records in the company’s database.
That search takes O(n) time to run, where n is the number of records in the database. This means that, in the worst-case scenario, you need to search through every single record.
Let's say that when you run the simple search, you find the record at the very first entry in the database. You didn't have to look at every entry; you found it on your first try. So, does this algorithm take O(n) or O(1) time? Keeping in mind that O(n) means checking the whole database (every element in it), this particular run took O(1) time because the record was found on the first try. In this case, O(1) is the best-case scenario. But Big O notation focuses on the worst-case scenario, which is O(n) for simple search. It's a reassurance that simple search will never be slower than O(n) time.
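To make the HR example concrete, here is a minimal sketch of simple (linear) search in Ruby. The employee names are made up for illustration:

```ruby
# Simple (linear) search: check each record in order until we find a match.
# Worst case, we look at every one of the n records, so this is O(n).
def simple_search(records, target)
  records.each_with_index do |record, index|
    return index if record == target # best case: found on the first try
  end
  nil # not found after checking all n records
end

employees = ["Ana", "Bruno", "Carla", "Diego"]
puts simple_search(employees, "Ana")   # best case: index 0, one step
puts simple_search(employees, "Diego") # worst case: checked every record
```

Even though the first call finishes in one step, simple search is still classified as O(n), because Big O describes the worst case.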
Examples of the most common run times in Big O notation:
O(n) = Simple search
O(log n) = Binary search
O(n * log n) = Quicksort
O(n²) = Selection sort
O(n!) = Traveling salesperson
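For contrast with simple search, here is a sketch of binary search, the O(log n) entry above. It only works on a sorted array, because it halves the search range on every step:

```ruby
# Binary search: repeatedly halve the search range, so O(log n).
# The input array must be sorted.
def binary_search(sorted, target)
  low = 0
  high = sorted.length - 1
  while low <= high
    mid = (low + high) / 2
    return mid if sorted[mid] == target
    if sorted[mid] < target
      low = mid + 1   # target must be in the upper half
    else
      high = mid - 1  # target must be in the lower half
    end
  end
  nil # target is not in the array
end

puts binary_search([1, 3, 5, 7, 9, 11], 7) # prints 3
```

Doubling the array size only adds one extra step, which is why O(log n) grows so much more slowly than O(n).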
I will work through the following challenge, which was presented to me when I applied for a software engineering position.
Given the array array = [1, 2, 3, 4, 50, 6, 7, 88, 9, 90], return the sum of all the numbers inside the array.
Let's try this in Ruby:

def sum_all_numbers(array)
  total = 0
  array.each { |i| total += i }
  total
end

array = [1, 2, 3, 4, 50, 6, 7, 88, 9, 90]
puts sum_all_numbers(array) # the result will be 260
This method runs in O(n) time (or "linear time") relative to its input: it touches each of the n elements exactly once. An array of 1,000 items requires roughly 1,000 steps, while an array of 1 item requires just one.
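For contrast, here is a sketch (my own example) of an operation that genuinely is O(1), or "constant time": grabbing the first element takes the same single step whether the array holds 1 item or 1,000:

```ruby
# O(1): one step regardless of how large the array is.
def first_element(array)
  array.first
end

puts first_element([1, 2, 3, 4, 50, 6, 7, 88, 9, 90]) # prints 1
```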
In the next post, I will go over time complexity in more depth.