All the technical interview questions I've come across at one point or another, with an attempt (hopefully) at explaining some of the concepts!
Sunday, November 6, 2011
Big O notation
Logarithmic Running Time, an explanation
When computing the running time for some set of operations, how do you know whether something is log base 3 or 2 or 10? I will refer to log base x as log_x from here on out.
A set of operations is said to take log_x n steps if you're decreasing the remaining work by a multiplicative factor of x at each step. Take merge sort, for example: we divide the list by 2 each time, so that takes log_2 n levels of division (n being the size of the list). So if you're dividing the work by a constant factor each time, you end up with a logarithmic running time.
Also, all logs can be reduced to log_2. Why and how, you might be wondering, since you've probably noticed the running times of algorithms are almost always written with log_2. That is because you can convert between logs of any base; they are all the same within a constant factor, and Big O ignores constant factors.
Equation:
log_b (n) = log_x (n) / log_x (b)
Example with converting from base 2 to 10:
log_2 (n) = log_10 (n) / log_10 (2)
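A minimal sketch of both ideas in Java (java.lang.Math has no log2, so we derive it from the natural log via the change-of-base formula):

class LogDemo {
    // Change of base: log_b(n) = log_x(n) / log_x(b), with x = e here
    // (Math.log is the natural log).
    static double logBase(double b, double n) {
        return Math.log(n) / Math.log(b);
    }

    public static void main(String[] args) {
        // Halving the work each step takes log_2(n) steps:
        int n = 1024, steps = 0;
        while (n > 1) { n /= 2; steps++; }
        System.out.println(steps);            // 10
        System.out.println(logBase(2, 1024)); // approximately 10.0, same answer via the formula
    }
}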
Properties of Sorting Algorithms
Run time complexity:
The number of steps/operations required to complete the algorithm.
Performance:
How fast can the algorithm run? This is a bit different than running time complexity, because here you care about how long each operation takes. For example, suppose you need to look up something in a DB to complete your sorting, because, let us say, a list only has ids but you want it sorted according to names as well - looking up those names is expensive and takes time.
//how long it would take to carry out 10 operations of varying lengths:
10 operations of x (each x takes 10 ms) = 10 x 10 ms = 100 ms
10 operations of y (each y takes 100 ms) = 10 x 100 ms = 1000 ms
Another topic that falls under performance is cache locality. Whether you're frequently accessing nearby items while you're processing a list is important, especially when your list is large and does not fit into memory. If you're constantly thrashing, moving things in and out of memory (for example by constantly traversing the entire list), you have bad cache locality. If you're working on a subset of the list and then move on to work on the next subset, without that thrashing, you have good cache locality.
Good cache locality: quick sort
Bad cache locality: heap sort, merge sort (if not done in place)
Storage (in-place vs not):
Extra storage needed to hold things while sorting a list.
An algorithm is in-place if it does not need extra storage space while sorting the list (minus a few variables). For example, while sorting, it does not need to create a new list to put the sorted elements into; it can sort and place elements in their proper positions directly in the existing list.
Stable vs Non-Stable:
An algorithm is stable if it does not change the relative position of items that are equal.
So after sorting a list, the relative position of items that are equal is guaranteed to stay the same.
Sort the list below by GPA (high to low), input List below:
[(Anna, 4.0), (David, 4.0), (John, 2.0), (Kelly, 3.0)]
Stable should produce:
[(Anna, 4.0), (David, 4.0), (Kelly, 3.0), (John, 2.0)]
The relative position of Anna and David is maintained; Anna still comes before David.
It should not produce (while non-stable algorithms could produce this):
[(David, 4.0), (Anna, 4.0), (Kelly, 3.0), (John, 2.0)]
You choose a stable sort if you care about the current order of the list you're about to sort (because it has some property you want to preserve).
So, like the example above, if I have a list sorted in alphabetical order and now want to sort it by GPA while keeping the alphabetical ordering among equal GPAs, I'd want to choose a stable algorithm.
Why not just sort and take all the various properties into account at once? Who cares which algorithm you use? Because it might be expensive to look up the data needed for the exact ordering you want. Let us say you have a list with ids and GPAs only. You want ties broken by alphabetical order of names, even though names aren't included. If the list is presorted alphabetically, a stable sort retains that ordering for free. If you don't use one, you have to do a DB lookup (which is slow) to get each name and factor it into your comparison when doing the sort.
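Here's a minimal sketch in Java of why stability matters (the Student class and the sample data are made up for illustration; Collections.sort is documented to be a stable sort):

import java.util.*;

class StableSortDemo {
    // Simple (name, gpa) pair, hypothetical, just for this demo.
    static class Student {
        final String name; final double gpa;
        Student(String name, double gpa) { this.name = name; this.gpa = gpa; }
        public String toString() { return "(" + name + ", " + gpa + ")"; }
    }

    public static void main(String[] args) {
        // Already in alphabetical order by name.
        List<Student> list = new ArrayList<Student>(Arrays.asList(
            new Student("Anna", 4.0), new Student("David", 4.0),
            new Student("John", 2.0), new Student("Kelly", 3.0)));

        // Collections.sort is stable, so Anna stays before David after
        // sorting by GPA (high to low): equal GPAs keep their old order.
        Collections.sort(list, new Comparator<Student>() {
            public int compare(Student a, Student b) {
                return Double.compare(b.gpa, a.gpa); // descending GPA
            }
        });
        System.out.println(list); // [(Anna, 4.0), (David, 4.0), (Kelly, 3.0), (John, 2.0)]
    }
}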
Saturday, November 5, 2011
ClOsUrEs!
function A() {
    var count = 0;          // local to A, but captured by B's closure
    function B() {
        count++;            // B can still read and update count
        console.log(count);
    }
    return B;               // count lives on even after A returns
}

var counter = A();
counter(); // prints 1
counter(); // prints 2 - the closure remembers count between calls
Event-Driven Programming
A programming concept where program flow is driven by events: an event or action takes place, and a specific response is generated based on that event.
Many GUIs are event-driven and have a main UI loop that runs and receives events, such as mouse clicks, and then processes them.
Another example is interrupts (a signal to a component that indicates a change or response is needed). One example is hardware interrupts such as powering on/off a computer.
Asynchronous APIs and function callbacks could be used to implement event-driven programming.
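As a rough sketch of the callback flavor of this in Java (the ClickListener interface and Button class are hypothetical, invented for illustration, not a real GUI toolkit):

// Hypothetical listener interface: the "response" to an event.
interface ClickListener {
    void onClick();
}

// Hypothetical event source: it doesn't know what the response will be,
// it just invokes whatever callback was registered.
class Button {
    private ClickListener listener;
    void setOnClick(ClickListener l) { listener = l; }
    void simulateClick() { // a real UI loop would call this on a mouse event
        if (listener != null) listener.onClick();
    }
}

class EventDemo {
    public static void main(String[] args) {
        Button button = new Button();
        button.setOnClick(new ClickListener() {
            public void onClick() { System.out.println("Button clicked!"); }
        });
        button.simulateClick(); // prints "Button clicked!"
    }
}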
Wednesday, November 2, 2011
Iterator Pattern
Sudo vs Su
Gang of Four
Wednesday, October 26, 2011
Mutex vs Semaphore
Object in Java
Queue, Deque, Stack
Thursday, October 20, 2011
List in Java
Data Binding
There are two types: XML data binding and UI data binding. It's a technique that binds an element to the data/logic behind it, with the ability to access and retrieve that data. Any change in the data will be reflected in the element bound to that data.
Examples of UI binding: WPF, Cocoa with nibs and variables
In XML data binding, an object is used to represent the XML data; one accesses the object to retrieve data about the XML.
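A minimal sketch of UI-style binding in plain Java, using java.beans.PropertyChangeSupport (a real JDK class; the Model class and the pretend "label" are made up for illustration):

import java.beans.*;

class Model {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String name = "";

    void addListener(PropertyChangeListener l) { pcs.addPropertyChangeListener(l); }

    void setName(String newName) {
        String old = name;
        name = newName;
        pcs.firePropertyChange("name", old, newName); // notify bound elements
    }
}

class BindingDemo {
    public static void main(String[] args) {
        Model model = new Model();
        // "Bind" a pretend UI label to the model: any change in the data
        // is reflected in the element bound to it.
        model.addListener(new PropertyChangeListener() {
            public void propertyChange(PropertyChangeEvent evt) {
                System.out.println("label now shows: " + evt.getNewValue());
            }
        });
        model.setName("Alice"); // prints: label now shows: Alice
    }
}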
Merge Sort
Merge sort is a divide and conquer algorithm that works by dividing a list continuously and then merging/sorting the smaller lists.
Average and Worst case running time: O(n logn)
n, because you visit all the elements to merge them at each level. log n, because you're constantly splitting the array in 2 (n/2, n/4, ...), so there are log_2 n levels.
Note: quick sort is usually faster in practice because it has a lower constant factor and better cache locality, even though both average O(n log n); merge sort wins in the worst case, where quick sort can degrade to O(n^2).
Most implementations of merge sort require O(n) extra space. That's why some people prefer heap sort, which is also O(n log n) running time but requires only constant extra space (O(1)).
An interesting note: Java uses a modified version of merge sort for its sort() algorithm.
Implementation of merge sort that you can run:
import java.util.List;
import java.util.ArrayList;

class MergeSort {
    List<Integer> list;

    MergeSort() {
        list = new ArrayList<Integer>(6);
        System.out.println("size before we put anything in it: " + list.size());
        list.add(new Integer(1));
        list.add(new Integer(5));
        list.add(new Integer(3));
        list.add(new Integer(4));
        list.add(new Integer(20));
        list.add(new Integer(0));
        list.add(new Integer(6));
        list.add(new Integer(10));
    }

    public static void main(String args[]) {
        MergeSort ms = new MergeSort();
        ms.sort();
    }

    void sort() {
        if (list != null) {
            List<Integer> result = mergesorthelp(list);
            for (Integer i : result) {
                System.out.println(i);
            }
        }
    }

    // Divide: recursively split the list in half, then merge the sorted halves.
    List<Integer> mergesorthelp(List<Integer> list) {
        if (list.isEmpty() || list.size() == 1) { // base case: 0 or 1 elements is already sorted
            return list;
        }
        List<Integer> a, b, result;
        a = mergesorthelp(list.subList(0, list.size() / 2));
        b = mergesorthelp(list.subList(list.size() / 2, list.size()));
        result = merge(a, b);
        return result;
    }

    // Conquer: merge two sorted lists by repeatedly taking the smaller head element.
    List<Integer> merge(List<Integer> a, List<Integer> b) {
        List<Integer> newList = new ArrayList<Integer>();
        int indexA = 0;
        int indexB = 0;
        int sizeA = a.size();
        int sizeB = b.size();
        while (indexA < sizeA || indexB < sizeB) {
            if (indexA < sizeA && indexB < sizeB) {
                // both lists still have elements: take the smaller head
                if (a.get(indexA) > b.get(indexB)) {
                    newList.add(b.get(indexB));
                    indexB++;
                } else {
                    newList.add(a.get(indexA));
                    indexA++;
                }
            } else if (indexA < sizeA) { // only a has elements left
                newList.add(a.get(indexA));
                indexA++;
            } else if (indexB < sizeB) { // only b has elements left
                newList.add(b.get(indexB));
                indexB++;
            }
        } // end while
        return newList;
    }
}
Wednesday, October 19, 2011
Simple Java program to test how much time an action takes
Threads vs Processes
Tuesday, October 18, 2011
Red Black Tree
Hashtable vs HashMap
Hashtable                          | HashMap
Synchronized                       | Unsynchronized
Does not allow null keys or values | Allows one null key and any number of null values
Extends Dictionary                 | Extends AbstractMap
Default capacity is 11             | Default capacity is 16
Both implement the Map interface.
Both have a default load factor of 0.75, after which they resize.
I also read that HashMap does not guarantee the order of the map to remain constant over time.
Synchronized in Java
You can make a non-synchronized collection synchronized by adding the following line:
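For example, with the wrappers in java.util.Collections (these are real JDK methods; m here is just an example variable):

Map<String, String> m = Collections.synchronizedMap(new HashMap<String, String>());
List<Integer> list = Collections.synchronizedList(new ArrayList<Integer>());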
You can make a block of code/access to an object synchronized by doing this:
synchronized (m) {
    // only one thread at a time can hold m's lock and run this block
}
Synchronized vs Unsynchronized
Synchronized | Unsynchronized
StringBuffer | StringBuilder
Vector       | ArrayList
Hashtable    | HashMap
Note on synchronization: only individual operations on the same object are synchronized; you may still need to do additional synchronization outside of that class. Two threads can't add and remove at the exact same time, but a compound action (like check-then-act) still needs its own synchronization, see the example:
Thread 1                            | Thread 2
if (!list.isEmpty()) { print list } | for (i = 0; i < size; i++) { list.remove(); }
Thread 1 obtains the lock on the list just long enough to check that it isn't empty, then releases it. It suspends before it prints the list. Thread 2 removes a bunch of items. Thread 1 continues and now prints an empty list (not what it wanted to do).
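A minimal sketch of the fix in Java: hold the list's lock across the whole compound action (this is the documented idiom for the Collections.synchronized* wrappers, which lock on the wrapper object itself):

import java.util.*;

class CompoundActionDemo {
    public static void main(String[] args) {
        final List<String> list = Collections.synchronizedList(new ArrayList<String>());
        list.add("a");

        // Each call below is individually synchronized, but together they form
        // a check-then-act sequence, so we lock the list for the whole thing.
        synchronized (list) {
            if (!list.isEmpty()) {
                for (String s : list) { // iterating also requires holding the lock
                    System.out.println(s);
                }
            }
        }
    }
}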
dead beef
Lazy Initialization
Adsense vs Adwords
Monday, October 17, 2011
Bubble Sort
Best Case: O(n), everything already sorted. You take one pass to figure that out.
Worst case running time: O(n^2).
In place: Yes.
Code:
swapped = false;
do {
    swapped = false;
    for (i = 0; i < size - 1; i++) { //you want to stop at the one before last since you're comparing the next item
        if (arr[i] > arr[i+1]) { //only swap when the pair is out of order, otherwise the loop never terminates
            swap(arr[i], arr[i+1]);
            swapped = true;
        }
    }
} while (swapped);
What you will realize is that on the first pass, you place the biggest element at the end of the array (fixed in its correct place); on the 2nd pass, the 2nd biggest, etc. So you don't really need to iterate through the whole array every time, only up to arr.size minus the number of passes already done (since those items are fixed in place).
swapped = false;
int index = arr.size;
do {
    swapped = false;
    for (i = 0; i < index - 1; i++) { //you want to stop at the one before last since you're comparing the next item
        if (arr[i] > arr[i+1]) {
            swap(arr[i], arr[i+1]);
            swapped = true;
        }
    }
    index--; //the largest remaining element is now fixed at the end, skip it next pass
} while (swapped);
You will realize that for every element that is not in its place (off by one position), you will need 1 full pass to fix it, plus a final pass to go through the array and make sure everything is good.
1 swap to get arr ordered = 2 passes
2 swaps to get arr ordered = 3 passes
n swaps = n+1 passes
To keep it easy, we will define 1 pass to mean you've visited all the elements (n). In reality it's more like n-i, with i being the pass count, because as you put items in place you don't need to traverse to that point in the array. So if you know an array is almost sorted, say 4 elements are out of place/4 swaps are needed for the array to be sorted, then you're doing about 5n operations, which is O(n). That might be better than some other algorithms.
So in the worst case, an element is very far off (at the other end of the array); you will need n+1 passes to fix it, and each pass examines n items. Thus, O(n^2).
Wednesday, October 12, 2011
Nil, nil, null, NULL
Java
null - a reserved literal that denotes a reference pointing to no object.
Often internally represented as zero, but you cannot compare an int to it.
Javascript
undefined - the variable hasn't even been assigned a value yet
null - it has been assigned, and its value is null (nothing)
If you compare the two with ==, you'll get that they are equal to each other, but they are separate concepts; if you want to find out whether a variable is undefined, don't compare it to null (use === or typeof instead).
C
NULL - (void *)0, to be used with pointers
You shouldn't use this with ints; you might get a warning or an error.
C++
NULL - 0
Can be used with pointers and ints. For a pointer, it signifies not pointing to anything.
Objective-C
nil - to be used with an object pointer (id) to indicate 'no value'.
Nil - to be used with a class pointer to indicate 'no value' for an Objective-C class.
NSNull - an object representation of nil/null. Where you cannot pass nil or use a null value (in collections, for example), use this singleton class to represent a null value.
Tuesday, October 11, 2011
There is a 1/365 chance that you share the same birthday with a random person, and 364/365 chance you don't. Don't take the wager.
360 degrees in a full circle. 90 degrees per quarter (one quarter for every 3-hour segment).
So that's 30 degrees per hour. Each hour can be broken up into 4 parts (one for every 15 minutes), 30/4 = 7.5 degrees.
At 3:15 the minute hand points at the 3, and the hour hand is 1/4 of the way from the 3 to the 4. It is a quarter of the way to 4 because only 15 minutes (which is 1/4 of 60 minutes) have passed. That makes the angle between the hands 7.5 degrees.
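A small Java sketch that does this arithmetic for any time (the method name is mine; angles in degrees):

class ClockAngle {
    // Hour hand: 30 degrees per hour, plus 0.5 degrees per minute of drift.
    // Minute hand: 6 degrees per minute.
    static double angleBetweenHands(int hour, int minute) {
        double hourAngle = (hour % 12) * 30.0 + minute * 0.5;
        double minuteAngle = minute * 6.0;
        double diff = Math.abs(hourAngle - minuteAngle);
        return Math.min(diff, 360.0 - diff); // always report the smaller angle
    }

    public static void main(String[] args) {
        System.out.println(angleBetweenHands(3, 15)); // 7.5
    }
}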
Number of people needed to click to make $20:
10 people to make $1
x 20
200 people to make $20
Those 200 are the 20% that clicked.
(20/100) * x = 200
20 * x = 20000
x = 1000
So 20% of 1000 people is 200. So, 1000 people.
Saturday, October 8, 2011
Why?
The blog will contain questions I've come across and will aim to explain the concepts for others' benefit as well. Happy Coding!