Sunday, September 9, 2012

C Knowledge Based Questions

When can passing a struct in C be expensive?

When it is passed by value, which causes the entire contents of the struct to be copied for the call. For better performance, pass a pointer to the struct instead.
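For example (a sketch; the struct and its size are made up for illustration):

struct big_data {
    char buffer[4096];
};

//Pass by value: all 4096 bytes get copied on every call.
void process_by_value(struct big_data d);

//Pass by pointer: only the address (4 or 8 bytes) gets copied.
void process_by_pointer(const struct big_data *d);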


What does volatile in C mean?

Volatile is a qualifier in C that tells the compiler the variable is subject to change (by entities external to the code), so do not optimize accesses to it. This means the compiler must not reorder accesses to that variable, nor cache its value in a register (it must be read from memory every time).

This qualifier gets used in embedded systems programming, where the code may interface with hardware, for example polling a status register.
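For example, a sketch of polling a status register (the address and bit mask are made up):

//0x40001000 is a hypothetical memory-mapped status register
volatile unsigned int *status_reg = (volatile unsigned int *) 0x40001000;

void wait_until_ready(void) {
    //Without volatile, the compiler could read the register once
    //and optimize this into an infinite loop.
    while ((*status_reg & 0x1) == 0)
        ;  //spin until the hardware sets the ready bit
}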


Is C pass by reference or pass by value?

Pass by value means you cannot modify the original argument passed in. Pass by reference allows you to modify the original argument passed in.

C is pass by value. You can mimic passing by reference by passing a pointer/address to the memory you'd like to change, and then dereferencing the pointer to change its value.
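For example (a minimal sketch):

#include <stdio.h>

void increment(int *n) {
    (*n)++;  //dereference the pointer to modify the caller's variable
}

int main(void) {
    int x = 5;
    increment(&x);      //pass the address of x
    printf("%d\n", x);  //prints 6
    return 0;
}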


What does typedef do in C?
It allows you to create a new name (alias) for an existing type. For example:

typedef int my_int;

my_int x;

typedef struct my_node {
    int data;               //standard C requires at least one member
    struct my_node *next;
} node;

node a_node; //vs. without typedef: struct my_node a_node;


Will the following program compile in C?

0 //or 0;

Answer: No. A C program may contain preprocessor directives, global declarations, and functions, and it must contain a main function. The code above is missing main, and '0' by itself is not a valid declaration. However, if the line '0;' appeared inside main, the code would compile; it is just an expression statement whose value is discarded.


What are preprocessor directives?

They are instructions to the preprocessor, which runs before the compiler. For example, #include fetches the contents of the listed file and inserts it into the program, and #define creates a named constant or macro that gets substituted throughout the code before compilation.

Examples:
#include <stdio.h>
#define SIZE 10


What does 'const' mean?

Data that is declared const cannot be modified.

int x = 5;
const int * var = &x; //cannot modify what var points to
int * const var = &x; //pointer cannot be modified

The trick is to read right to left. In line 2, var is a pointer to an int that is constant. In line 3, var is a constant pointer to an int.

If a struct is declared const, all members of that struct take on that attribute.


Explain the difference between a struct and a union.

Unions are similar to structs in how they are syntactically defined (except for the keyword union vs struct) and how you access their members (dot, or arrow through a pointer). However, a union only allocates enough memory for its largest member, so all of its members share the same memory. If you set one member of a union, you shouldn't try reading a different member, as the behavior is undefined. A struct allocates enough memory for all of its members, and you can use all of them at once.
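A quick sketch that shows the memory difference (exact sizes depend on the platform):

#include <stdio.h>

union u_example {
    int i;     //4 bytes, typically
    double d;  //8 bytes, typically
};

struct s_example {
    int i;
    double d;
};

int main(void) {
    //The union is only as big as its largest member;
    //the struct holds both (plus possible padding).
    printf("union: %zu bytes\n", sizeof(union u_example));   //typically 8
    printf("struct: %zu bytes\n", sizeof(struct s_example)); //typically 16
    return 0;
}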


How do *, /, +, - get evaluated?

In order of precedence: (* or /), then (+ or -). Operators of equal precedence get evaluated left to right. For example, 2 + 3 * 4 evaluates to 14, and 8 / 4 * 2 evaluates to 4.


What's the order of precedence of ||, !, &&?

In order of precedence: !, &&, ||

Fun fact: when ||'ing two expressions, if the first is true, the second expression does not get evaluated (short-circuit evaluation; && works the same way when its first expression is false).
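This short-circuiting is commonly used to guard pointer dereferences. A minimal sketch:

#include <stddef.h>

struct item { int value; };

int is_positive(const struct item *p) {
    //If p is NULL, the right-hand side is never evaluated,
    //so we never dereference a NULL pointer.
    return p != NULL && p->value > 0;
}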


How do the postfix ++ and prefix ++ operators get evaluated?

++x; vs x++;

Prefix: x gets incremented before its value is used in the expression.
Postfix: the current value of x is used in the expression, then x gets incremented.

Postfix has higher precedence.
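For example (a minimal sketch):

#include <stdio.h>

int main(void) {
    int x = 5;
    int a = ++x;  //x becomes 6 first, then a = 6
    int y = 5;
    int b = y++;  //b = 5, then y becomes 6
    printf("a=%d x=%d b=%d y=%d\n", a, x, b, y);  //a=6 x=6 b=5 y=6
    return 0;
}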


What should main return if all was successful?

0


What is the max value of unsigned char?

255
An unsigned char is 1 byte (8 bits), so it has 2^8 = 256 possible values.
One of those values must represent zero, which is why the max is 256 - 1 = 255.


What is the min and max value of a signed char? 

-128 and 127 respectively (assuming two's complement representation). Zero takes up one of the non-negative bit patterns, which is why the positive range is one smaller.


When you declare a pointer, what is it initialized to? How about int x;?

Nothing. A local (automatic) variable holds garbage: whatever happened to be in that memory (e.g. on the stack) previously. The same goes for int x;. Reading an uninitialized variable is undefined behavior. (Globals and statics, by contrast, are zero-initialized.)

Explain auto, static, register, and extern.

auto: the default storage class. The variable has local scope in the function or block where it was declared. It's visible only in that scope, and its memory is freed upon exiting the scope.

static: static local variables are initialized only once, and their value is remembered across subsequent calls. static on file-scope functions and variables reduces visibility so those symbols are only visible within that compilation unit.

register: this keyword suggests (hints) to the compiler that the variable should be kept in a register. The & (address-of) operator cannot be used on a variable declared register (in C).

extern: if this keyword is placed in front of a variable or function, that variable or function can be referenced across multiple C files. It extends the symbol's visibility. It doesn't define the variable, it only declares it; it's a way to declare a variable while linking against the actual definition/symbol in some other file. tl;dr: find this symbol during linking.
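A sketch showing static and extern together (the file names are made up):

/* counter.c */
#include <stdio.h>

int shared = 0;            //definition; visible to other files

static int hidden = 0;     //static: only visible in this file

void count(void) {
    static int calls = 0;  //initialized once, remembered across calls
    calls++;
    printf("called %d times\n", calls);
}

/* main.c */
extern int shared;         //declaration; the definition lives in counter.c
void count(void);

int main(void) {
    shared = 42;
    count();  //prints "called 1 times"
    count();  //prints "called 2 times"
    return 0;
}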


When a floating point number is cast to an int, does it get truncated or rounded up?

The cast truncates: the fractional part is dropped (rounding toward zero). If you want to round up, you'll have to account for that while the number is still a float.

int x = (int) 1.2;  //x = 1
     x = (int) 1.7;  //x = 1


Why can casting be dangerous?

Because you can lose part of the data. In the example below, the fractional part of our floating point value is lost.

float y = 3.1;
int x = (int) y;  //x is 3


What is argc?

Argument Count. It contains the number of arguments passed into the program, plus 1 for the executable/program name itself.


What does argv[0] correspond to?

The program name.
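Putting argc and argv together, a minimal sketch:

#include <stdio.h>

int main(int argc, char *argv[]) {
    printf("program name: %s\n", argv[0]);
    for (int i = 1; i < argc; i++) {
        printf("arg %d: %s\n", i, argv[i]);
    }
    return 0;
}

//Running "./prog hello world" prints:
//program name: ./prog
//arg 1: hello
//arg 2: world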

Thursday, September 6, 2012

OO Programming

Polymorphism

When a function or an object can take on different forms. Inheritance and virtual functions in C++ are an example of this.

Encapsulation

When you restrict access to parts of an object. The Singleton pattern, the Factory pattern, and getters and setters are examples of this.

Composition

Aggregating simple objects into more complex objects (i.e., when an object is made up of other objects). This is a good way of reusing code. For example, a car object can contain a window object and a stereo object.
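Composition maps directly onto C structs too. A sketch of the car example (the member names are made up):

struct window { int is_open; };
struct stereo { int volume; };

//A car is composed of (contains) a window and a stereo.
struct car {
    struct window driver_window;
    struct stereo stereo;
};

void demo(void) {
    struct car c;
    c.driver_window.is_open = 0;
    c.stereo.volume = 7;
}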

Sunday, November 6, 2011

Big O notation

A classification of an algorithm's running time on input of size n, as n grows very large.

Smallest to Largest:
O(1) Constant
O(log n) Logarithmic
O(n^c) Fractional power, 0 < c < 1
O(n) Linear
O(n log n) Linearithmic (equivalently, O(log n!))
O(n^2) Quadratic
O(n^c) Polynomial, c > 2
O(c^n) Exponential, c > 1
O(n!) Factorial


Logarithmic Running Time, an explanation

When computing the running time for some set of operations, how do you know whether something is log base 3, 2, or 10? I will refer to log base x as log_x from here on out.

A set of operations takes log_x n steps if you're decreasing the remaining work by a multiplicative factor x at each step. Take merge sort for example: we repeatedly divide the list in half, so reducing a list of size n down to single elements takes log_2 n halvings. Whenever you divide the work by a constant factor each time, you end up with a logarithmic number of steps.

Also, all logs can be reduced to log_2. You might be wondering why and how, since you've probably noticed the running time of algorithms is almost always written with log_2.

That is because you can convert between any of the logs. They are all the same within a constant factor.

Equation:
log_b (n) = log_x (n) / log_x (b)

Example with converting from base 2 to 10:
log_2 (n) = log_10 (n) / log_10 (2)

log_2 (n) = log_10 (n) / constant


Real example of why a function's running time is logarithmic:
Take a list with 128 elements, apply merge sort to it. With merge sort, you keep dividing by 2:
128
64 64
32 32 32 32
16 16 16 16 16 16 16 16
..
2...2

At each step you process all n numbers, and there are 7 halving steps before every piece is down to a single element. That's where log_2 128 = 7 comes from.
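A quick way to convince yourself is to count the halvings in code, a minimal sketch:

#include <stdio.h>

int main(void) {
    int n = 128;
    int steps = 0;
    while (n > 1) {
        n /= 2;   //divide the work by a constant factor
        steps++;
    }
    printf("%d\n", steps);  //prints 7, i.e. log_2 128
    return 0;
}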

What the algorithm looks like if mapped on an x, y axis:
The logarithmic function measures a rate of growth. It is the inverse of the exponential function, and as a result it is a slow-growth function (the opposite of an exponential).

An exponential function's graph hugs the x-axis at first, then shoots upward faster and faster. Since the logarithmic function is its inverse, its graph is that curve mirrored across the line y = x: it rises quickly for small inputs and then flattens out, crossing the x-axis at (1, 0) and dipping toward -y as x approaches 0. You can tell that for very large input, growth will be slow.

Inverse relationship example:
log_2 128 = 7
2^7 = 128

Properties of Sorting Algorithms

Run time complexity:

The number of steps/operations required to complete the algorithm, expressed as a function of the input size.

Performance:

How fast can the algorithm run in practice? This is a bit different than running time complexity, because here you care about how long each operation takes. For example, suppose a list only has ids, but you want it sorted according to names as well; if you need to look up each name in a DB to complete your sorting, those lookups are expensive and take time.

//how long it would take to carry out 10 operations of varying lengths:
10 operations of x (each x takes 10 ms) = 10 x 10 ms = 100 ms
10 operations of y (each y takes 100 ms) = 10 x 100 ms = 1000 ms

Another topic that falls under performance is cache locality: whether you're accessing nearby items as you process a list matters, especially when your list is large and does not fit in the cache. If you're constantly thrashing, moving things in and out of the cache (for example by repeatedly traversing the entire list), you have bad cache locality. If you work on one subset of the list, then move on and work on the next subset without that thrashing, you have good cache locality.

Good cache locality: quick sort
Bad cache locality: heap sort, merge sort (if not done in place)

Storage (in-place vs not):

Extra storage needed to hold things while sorting a list.

An algorithm is in-place if it does not need extra storage space while sorting the list (aside from a few variables, i.e. O(1) extra space). For example, while the algorithm is sorting, it does not need to create a new list to put the sorted elements into; it can rearrange the elements directly within the existing list.

Stable vs Non-Stable:

An algorithm is stable if it does not change the relative order of items that compare equal.
So after sorting a list, items that are equal are guaranteed to stay in their original relative order.

Sort the input list below by GPA (high to low):
[(Anna, 4.0), (David, 4.0), (John, 2.0), (Kelly, 3.0)]

Stable should produce:
[(Anna, 4.0), (David, 4.0), (Kelly, 3.0), (John, 2.0)]

The relative position of Anna and David is maintained: Anna still comes before David.

It should not produce (while non-stable algorithms could produce this):
[(David, 4.0), (Anna, 4.0), (Kelly, 3.0), (John, 2.0)]

The sort order is correct, but the relative position of David and Anna is not the original one.

You'd choose a stable sort if you care about the current order of the list you're about to sort (because it has some property you want to preserve).

So like the example above: if I have a list sorted alphabetically, and now I want to sort it by GPA while keeping the alphabetical order among equal GPAs, I'd want to choose a stable algorithm.

Why not just sort and take the various properties into account directly? Who cares which algorithm you use? Because it might be expensive to look up the data needed for the exact ordering you want. Say you have a list with ids and GPAs only, and you want ties broken by alphabetical order of names, which aren't included in the list. If the list is presorted alphabetically, a stable sort retains that ordering for free. If you don't use one, you have to do a DB lookup (which is slow) for every name in order to factor it into your comparison.
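A sketch of the idea in C: insertion sort is stable because it only moves an element past neighbors that are strictly smaller, so equal elements never pass each other. (The standard library's qsort, by contrast, is not guaranteed to be stable.)

#include <stdio.h>

struct student { char name[16]; double gpa; };

//Stable insertion sort, descending by GPA.
//Using > (not >=) in the comparison preserves the relative
//order of students with equal GPAs.
void sort_by_gpa(struct student *a, int n) {
    for (int i = 1; i < n; i++) {
        struct student key = a[i];
        int j = i - 1;
        while (j >= 0 && key.gpa > a[j].gpa) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

int main(void) {
    struct student list[] = {
        {"Anna", 4.0}, {"David", 4.0}, {"John", 2.0}, {"Kelly", 3.0}
    };
    sort_by_gpa(list, 4);
    for (int i = 0; i < 4; i++)
        printf("(%s, %.1f)\n", list[i].name, list[i].gpa);
    //Anna still comes before David:
    //(Anna, 4.0) (David, 4.0) (Kelly, 3.0) (John, 2.0)
    return 0;
}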

Saturday, November 5, 2011

ClOsUrEs!

Tell me about closures, mentioning something other than their cool sounding name!

It is where a function accesses non-local variables (variables outside of its own scope) and the function gets used at some later point in time, potentially outside the scope those variables came from. It is as if you're wrapping the function code and references to the outside variables in a sack for later use.

The typical scenario is when an embedded function (a function defined within another function) accesses its parent function's variables (aka lexical closure). Say that embedded function is then returned to be used later on. That is a closure: the function is associated with state bound to the variables outside of its scope, and what gets created is a package of that function's code plus references to the outside variables it accesses, usable later in the code.

I guess closure got its name from the function capturing not just its input but its surroundings (the non-local variables), as if it's closing over them for later use.

Some languages have this built in, others don't.

Some languages require the non-local variables to be in the parent's scope (lexical closure), others don't.

//Closure example, pseudo code
function A() {
    count = 0;
    function B() {
        count++;
        print count;
    }
    return B;
}

A closure is created that consists of B's code and a reference to the variable count. You can capture the closure for later use. Since that closure (count and B()) has to live beyond the lifetime of the function that creates it (function A), it cannot live on the stack.

This is different from static variables in Java/C++ and global variables in C, because here you're not reusing the same count variable on each call: each call to A creates a fresh count, and each returned closure has its own.

Closures are usually stored on the heap. As an optimization, if a closure's lifetime is contained within the function that created it, it could be kept on the stack, but in general that's not the case.

Languages such as Java don't have closures built in. In those languages, you'd have to pass outside variables into the functions that want to use them. Actually, in Java you can't have embedded functions like that at all; you'd need an object that invokes them. How to implement closures in Java is another post altogether.
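C doesn't have closures either, but you can approximate the counter example above by bundling the captured state with the function that uses it. A sketch (this struct-plus-function pattern is an illustration of the idea, not a language feature):

#include <stdio.h>
#include <stdlib.h>

//The "captured" state that would live in A's scope.
struct counter {
    int count;
};

//Plays the role of B: operates on the captured state.
void increment_and_print(struct counter *c) {
    c->count++;
    printf("%d\n", c->count);
}

//Plays the role of A: allocates the state on the heap so it
//outlives this function, like a real closure's environment.
struct counter *make_counter(void) {
    struct counter *c = malloc(sizeof *c);
    c->count = 0;
    return c;
}

int main(void) {
    struct counter *c1 = make_counter();
    struct counter *c2 = make_counter();
    increment_and_print(c1);  //1
    increment_and_print(c1);  //2
    increment_and_print(c2);  //1 -- each "closure" has its own count
    free(c1);
    free(c2);
    return 0;
}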

Event-Driven Programming

What is event-driven programming?

A programming concept where program flow is driven by events: an event or action takes place, and a specific response is generated based on that event.

Many GUIs are event-driven and have a main UI loop that runs and receives events, such as mouse clicks, and then processes them.

Another example is interrupts (a signal to a component indicating that a change occurred or a response is needed), such as the hardware interrupts involved in powering a computer on or off.

Asynchronous APIs and function callbacks can be used to implement event-driven programming.
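A sketch of the callback idea in C (the event type and loop are made up for illustration):

#include <stdio.h>

enum event_type { EVENT_CLICK, EVENT_KEYPRESS };

//A callback: the response that runs when an event arrives.
typedef void (*event_handler)(enum event_type);

void on_event(enum event_type e) {
    if (e == EVENT_CLICK)
        printf("mouse clicked\n");
    else
        printf("key pressed\n");
}

int main(void) {
    event_handler handler = on_event;

    //A real UI loop would block waiting for events from the OS;
    //here we just feed in two fake events.
    enum event_type queue[] = { EVENT_CLICK, EVENT_KEYPRESS };
    for (int i = 0; i < 2; i++)
        handler(queue[i]);

    return 0;
}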