Saturday, October 4, 2014

A list of references in machine learning and programming languages

 

Today I have to dispose of some papers that I printed during my PhD studies. Most of them are research papers or notes, and I had great joy when I first read them. Some of them I constantly refer back to. Since all these materials are freely available online, I am writing down this list so I can find them again easily. (For reading, I still prefer paper; it just consumes too much space.)


Machine learning/statistics

Minka, A comparison of numerical optimizers for logistic regression.

          Estimating a Gamma distribution.

          Beyond Newton’s method.

Steyvers, Multidimensional scaling.

von Luxburg, A tutorial on spectral clustering.

Das et al., Google news personalization: scalable online collaborative filtering.

Hofmann, Probabilistic latent semantic analysis.

Deerwester et al., Indexing by latent semantic analysis.

Yi Wang, Distributed Gibbs sampling of latent Dirichlet allocation, the gritty details.

Blei et al., Latent Dirichlet allocation.

Herbrich et al., TrueSkill: a Bayesian skill rating system.

                      Web-scale Bayesian click-through rate prediction for sponsored search.

                      Matchbox: large scale online Bayesian recommendations.

Wickham, Tidy data.

                The split-apply-combine strategy for data analysis.

DiMaggio, Mapping great circles tutorial.

Breiman, Statistical modeling: the two cultures. (with comments)

Wagstaff, Machine learning that matters.

Shewchuk, An introduction to the conjugate gradient method without the agonizing pain.

Mohan et al., Web-search ranking with initialized gradient boosted regression trees.

Burges, From RankNet to LambdaRank to LambdaMART: an overview.

 

Programming

Goodger, Code Like a Pythonista: Idiomatic Python.

C++ FAQ, inheritance.

Wadler: Monads for functional programming.

            How to make ad-hoc polymorphism less ad hoc.

            Comprehending monads.

            The essence of functional programming.

Jones: A system of constructor classes: overloading and implicit higher-order polymorphism.

          Functional programming with overloading and higher-order polymorphism.

          Implementing type classes.

          Monad transformers and modular interpreters.

Yorgey, The Typeclassopedia.

Augustsson, Implementing Haskell overloading.

Damas and Milner, Principal type-schemes for functional programs.

Wehr et al., ML modules and Haskell type classes: a constructive comparison.

Bernardy et al., A comparison of C++ concepts and Haskell type classes.

Garcia et al., A comparative study of language support for generic programming.

Conchon et al., Designing a generic graph library using ML functors.

Fenwick, A New Data Structure for Cumulative Frequency Tables.

Petr Mitrichev, Fenwick tree range updates (http://petr-mitrichev.blogspot.hk/2013/05/fenwick-tree-range-updates.html), and his blog.

Flanagan et al., The essence of compiling with continuations.

Haynes et al., Continuations and coroutines.

Notes on the CLR garbage collector.

Odersky et al., An overview of the Scala programming language.

                       Lightweight modular staging: a pragmatic approach to runtime code generation and compiled DSLs.

                       JavaScript as an embedded DSL.

                       StagedSAC: a case study in performance-oriented DSL development.


F# language reference, chapter 5, types and type constraints.

Rémy, Using, understanding, and unraveling the OCaml language.

Carette et al., Finally tagless, partially evaluated.

Gordon et al., A model-learner pattern for Bayesian reasoning.

                     Measure transformer semantics for Bayesian machine learning.

Daniels et al., Experience report: Haskell in computational biology.

Kiselyov et al., Embedded probabilistic programming.

Hudak, Conception, evolution, and application of functional programming languages.

Hudak et al., A gentle introduction to Haskell 98.

                  A history of Haskell: Being lazy with class.

Yallop et al., First-class modules: hidden power and tantalizing promises.

Lämmel et al., Software extension and integration with type classes.

HaskellWiki, OOP vs type classes.

                   All about monads.

 

Course notes

Tao, linear algebra.

MIT 18.440 Probability and Random Variables. OCW link.

MIT 18.05 Introduction to Probability and Statistics. OCW link.

MIT 6.867 Machine learning. Good problem sets.  OCW link.

MIT 18.01 Single Variable Calculus. OCW link.

Princeton COS424, Interacting with Data.

Stanford CS229. Machine learning.

Cornell CS3110, Data Structures and Functional Programming.

FISH 554 Beautiful Graphics in R. (formerly FISH 507H; course website no longer available)

Haskell course notes, CS240h, Stanford. Recent offering.

Liang Huang’s course notes, Python, Programming Languages.

MISC

Balanced Search Trees Made Simple (in C#)

Fast Ensembles of Sparse Trees (fest)

An implementation of top-down splaying.

Largest Rectangle in a Histogram.

Monday, July 14, 2014

R Notes: Functions

R's semantics is modeled after Scheme's. Scheme is a functional language, and R is functional too. This note is about functions in R: many of R's strange-looking usages are just syntactic sugar for special function calls.

What is rownames(x) <- c('A','B','C')?

y <- c(1, 2, 3, 4, 5, 6)
x <- matrix(y, nrow = 3, ncol = 2)
rownames(x) <- c("A", "B", "C")
x


##   [,1] [,2]
## A    1    4
## B    2    5
## C    3    6


How can we assign c('A','B','C') to the return value of rownames(x), and yet the assignment takes effect on x?! This is because the statement is syntactic sugar for the following function call:



x <- `rownames<-`(x, c("A1", "B1", "C1"))


First, rownames<- is indeed a function! Because the name contains special characters, to apply it we need to put backticks (or quotes) around it. Second, `rownames<-`(x, c('A1','B1','C1')) is a pure function call: it returns a new copy of x and does not change the row names of x. To get the assignment effect, we must explicitly assign the return value back to x.



The technical term for functionname<- is replacement function. The general rule:



f(x) <- y is transformed into x <- `f<-`(x, y).



Besides rownames, many other functions have a twin replacement function, for example diag and length.
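A quick sketch of both (`length<-` and `diag<-` are part of base R):

y <- c(1, 2, 3)
length(y) <- 5  # same as y <- `length<-`(y, 5); pads with NA
y

## [1]  1  2  3 NA NA

m <- diag(3)
diag(m) <- c(7, 8, 9)  # same as m <- `diag<-`(m, c(7, 8, 9))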



A special case is the index operator `[`. Its replacement function is `[<-`:



y <- c(1, 2, 3)
`[`(y, 1)


## [1] 1


`[<-`(y, 2, 100)


## [1]   1 100   3


More examples are in the R Language Definition, section 3.4.4.



Operators are functions



Operators in R are indeed function calls.



"+"(c(1,2), 2)  # same as c(1,2) + 2


## [1] 3 4


Because operators are functions, we can define new operators using function syntax:



# strict vector add
"%++%" <- function(x, y) {
  n <- length(x)
  m <- length(y)
  if (n != m) {
    stop("length does not match")
  } else {
    x + y
  }
}
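A quick check of the new operator:

c(1, 2) %++% c(3, 4)

## [1] 4 6

c(1, 2) %++% c(3, 4, 5)

## Error: length does not match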


Self-defined operators become more powerful when used with S3 objects (see below).



S3 functions



There are two object systems in R, S3 and S4. S3 objects are lists, so their fields are accessed with the same operator used to access list fields, $. S4 objects are intended to be safer than S3 objects; their fields are accessed with @. Many packages, especially those with a long history, such as lars, rpart and randomForest, use S3 objects. S3 objects are easier to use and understand.



Knowing how to implement an S3 object is very useful when we need to check the source code of a specific R package. And when we want to release a small piece of code for others to use, S3 objects provide a simple interface.



The easiest way to understand S3 objects is the following analogy:

    R                         ANSI C
    S3 object (an R list)     C struct
    S3 functions              functions on the struct


#include <stdio.h>
#include <stdlib.h>

struct MyVec {
    int *A;  /* data */
    int n;   /* number of elements */
};

int safe_get(struct MyVec vec, int i) {
    if (i < 0 || i >= vec.n) {
        fprintf(stderr, "index error");
        exit(1);
    }
    return vec.A[i];
}



In R, the S3 object is implemented as:



vec <- list()
vec$A <- c(1, 2, 3)
vec$n <- length(vec$A)
class(vec) <- "MyVec"


In the S3 object system, method names cannot be chosen freely as they are in C. They must follow the pattern functionname.classname. Here the class name is MyVec, so all method names must end with .MyVec.



"[.MyVec" <- function(vec, i) {
  if (i <= 0 || i > vec$n) {
    stop("index error")
  }
  vec$A[i]
}


Let's implement the replacement function too:



"[<-.MyVec" <- function(vec, i, value) {
  if (i <= 0 || i > vec$n)
    stop("index error")
  vec$A[i] <- value
  vec
}


Let's play with MyVec objects:



vec[3]


## [1] 3


vec[2] <- 100
vec[30]


## Error: index error


We can also add other member functions for MyVec, such as summary.MyVec, print.MyVec, plot.MyVec, etc. To use these functions we don't have to spell out the full names; we can just call summary, print, and plot. R inspects the S3 class attribute (here, MyVec) and finds the corresponding functions automatically.
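For example, a minimal print method (a sketch; the output format is my own choice):

print.MyVec <- function(x, ...) {
  cat("MyVec of length", x$n, ":", x$A, "\n")
}
print(vec)  # dispatches to print.MyVec; typing vec alone does too

## MyVec of length 3 : 1 100 3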



Default parameters and ...



Many functions in R have a long list of parameters, for example the plot function. It would become tedious, even impossible, for the end user to supply a value for every parameter. So, to keep interfaces clean, R supports default parameters. A simple example below:



add2 <- function(a, b = 10) {
  a + b
}
add2(5)


## [1] 15


What I want to emphasize in this section is ..., the variable-length argument list. It is used universally in R to build good function interfaces. If you read the documentation of general functions such as summary and plot, most of their signatures include ....



Consider the following case: I am implementing a statistical function garchFit, say for GARCH model calibration, and inside it I use the optimizer optim, which has a lot of parameters. Now I need to think about the API of my calibration function, because I want others to use it as well. Shall I expose the parameters of optim among garchFit's parameters? Yes: I want to give the users of my function some freedom in the optimization. But a single procedure in optim, such as L-BFGS, already has many parameters. On one side, I want to give the user the option to specify these parameters; on the other side, if I expose all of them in garchFit, the parameter list grows far too long. ... comes to the rescue! See the following example:



f1 <- function(a = 1, ...) {
  a * f2(...)
}
f2 <- function(b = 5, ...) {
  b * f3(...)
}
f3 <- function(c = 10) {
  c
}
f1()


## [1] 50


f1(a = 1, b = 2)


## [1] 20


f1(c = 3)


## [1] 15


A casual user of f1 only needs to study its exposed parameter a, while advanced users have the option to specify the parameters of f2 and f3 when calling f1.



Global variables



Global variables are readily accessible from any R function. However, to change the value of a global variable, we need the special assignment operator <<-. Python has similar rules. See the following example:



a <- 100
foo <- function() {
  b <- a  # reads the global a
  a <- 10 # does NOT change the global a: it creates a local a and assigns 10 to it
  c(a, b)
}
foo()


## [1]  10 100


a


## [1] 100


boo <- function() {
  a <<- 10
}
boo()
a


## [1] 10


Here our scopes have two layers: "global" and the top-level functions (foo and boo). When there are more layers, i.e. nested functions, the <<- operator assigns to the variable with the same name in the closest enclosing layer. It is generally very bad practice to reuse a variable name across function layers (except for throwaway names like i, j, x). assign is more general; see ?assign for the documentation.
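For instance, a sketch using assign (boo2 is just an illustrative name); it lets us name the target environment explicitly:

a <- 100
boo2 <- function() {
  assign("a", 10, envir = globalenv())  # here, same effect as a <<- 10
}
boo2()
a

## [1] 10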



Variable scope



I think this is the trickiest part of R programming for C programmers, because the block syntax {...} does not introduce a new scope in R, while in C/C++/Java/C#/etc. it does. In R, only a function introduces a new scope.



Please see my previous post: Subtle Variable Scoping in R



quote, substitute, eval, etc.



Many language-level features of R, such as the debugging function trace, are implemented in R itself rather than by interpreter hacks, because R supports metaprogramming. I will write another post on these special functions.

Saturday, July 12, 2014

R Notes: vectors

 

R is different from C-family languages. It has a C-like syntax, but a Lisp semantics. Programmers from the C/C++/Java world find many usages in R ad hoc and end up memorizing special cases. This is because they use R from a C perspective. R is a very elegant language if we unlearn some C concepts and learn R's rules. I am writing several R notes to explain the important language rules. This is the first note.

 

The atomicity of R vectors

The atomic data structure in R is the vector. This is very different from any C-family language. In C/C++, built-in types such as int and char are the atomic data structures, while a C array (a contiguous data block in memory) is obviously not the simplest type. In R, the vector is the most basic data structure. There is no scalar data structure in R: you cannot have a scalar int as in int x = 10 in C.

The atomicity of R vectors is stated in many documents. It is usually skimmed over by R learners because many of them come from C, where an array is a composite data structure. Many seemingly special cases in the R language all come from the atomicity of R vectors, and I will try to cover them coherently.
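A quick check of this atomicity (is.vector and length are base R):

v <- 3.14
is.vector(v)

## [1] TRUE

length(v)  # even a "scalar" is a vector of length 1

## [1] 1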

Display

x <- 10  # equivalent to x <- c(10)
x # or equivalent to print(x)


## [1] 10


y <- c(10, 20)
y


## [1] 10 20


What does [1] mean in the output? It means the output is a vector, and the values printed on that line start at index 1. x is a vector of length 1, so its value prints as [1] 10, while y is a vector of length 2, so its value prints as [1] 10 20. For a longer vector, the output contains more indices to assist reading:



z <- 1:25
print(z)


##  [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
## [24] 24 25


Vectors with different types



Though vectors in R are atomic, they come in several types: integer, double (float), complex, character, and logical vectors. Integer and double vectors are numeric vectors. We have seen numeric vectors above; let's see more types of vectors below:



x <- c(1, 2.1)
mode(x)


## [1] "numeric"


y <- c("a", "bb", "123")
mode(y)


## [1] "character"


z <- complex(real = 1, imaginary = 1)
mode(z)


## [1] "complex"


Notice that in R a string (in R's terms, the character type) is like the int, float, and logical types: it is not a vector of chars. R does not differentiate between a single character and a sequence of characters. R has special functions such as paste and strsplit for string processing, but R's character type is not a composite type, and it is not a vector of chars either!
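A small illustration (length, nchar, and strsplit are all base R):

s <- "hello"
length(s)  # one character value, not five chars

## [1] 1

nchar(s)  # the number of chars must be asked for explicitly

## [1] 5

strsplit(s, "")[[1]]  # an explicit split gives a character vector of single chars

## [1] "h" "e" "l" "l" "o"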



matrix and array



A matrix is a vector augmented with extra attributes, which makes matrix an R class. Its core data structure is still a vector. See the example below:



y <- c(1, 2, 3, 4, 5, 6)
x <- matrix(y, nrow = 3, ncol = 2)
class(x)


## [1] "matrix"


rownames(x) <- c("A", "B", "C")
colnames(x) <- c("V1", "V2")
attributes(x)


## $dim
## [1] 3 2
##
## $dimnames
## $dimnames[[1]]
## [1] "A" "B" "C"
##
## $dimnames[[2]]
## [1] "V1" "V2"


x


##   V1 V2
## A  1  4
## B  2  5
## C  3  6


as.vector(x)


## [1] 1 2 3 4 5 6


In R, arrays are less frequently used. A 2D array is indeed a matrix. To find out more: ?array. We can say that an array/matrix is a vector (augmented with dim and other attributes), but we cannot say that a vector is an array. In OOP terminology, array/matrix is a subtype of vector.
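We can see this subtyping directly: attaching a dim attribute to a plain vector turns it into a matrix (base R only):

v <- 1:6
dim(v) <- c(2, 3)  # v is now a 2 x 3 matrix; the data are unchanged
class(v)

## [1] "matrix"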



operators



Because the fundamental data structure in R is the vector, all the basic operators are defined on vectors. For example, + is vector addition; adding two vectors of length 1 is just a special case.



When the two vectors are not of the same length, the shorter one is recycled (repeated) until it matches the length of the longer one. For example:



x <- c(1, 2, 3, 4, 5)
y <- c(1)
x + y # y is repeated to (1,1,1,1,1)


## [1] 2 3 4 5 6


z <- c(1, 2)
x + z # z is repeated to (1,2,1,2,1), a warning is triggered


## Warning: longer object length is not a multiple of shorter object length


## [1] 2 4 4 6 6


+, -, *, /, etc. are vector operators. When they are used on matrices, their semantics are the same as for vectors: a matrix is treated as one long vector, its columns concatenated in order. So do not expect them to work as matrix operators! For example:



x <- c(1, 2)
y <- matrix(1:6, nrow = 2)
x * y


##      [,1] [,2] [,3]
## [1,]    1    3    5
## [2,]    4    8   12


For matrix multiplication, we shall use the dedicated operator:



x %*% y  # 1 x 2 * 2 x 3 = 1 x 3


##      [,1] [,2] [,3]
## [1,]    5   11   17


y %*% x  # dimensions do not match: y is 2 x 3, so the right operand must have length 3


## Error: non-conformable arguments


The single-character operators all operate element-wise on vectors and generate a vector of the same length, so &, |, etc. are element-wise logical operators, while && and || are special operators that generate a logical vector of length 1 (usually used in if clauses).



x <- c(T, T, F)
y <- c(T, F, F)
x & y


## [1]  TRUE FALSE FALSE


x && y


## [1] TRUE


math functions



All R math functions take vector inputs and generate vector outputs. For example:



exp(1)


## [1] 2.718


exp(c(1))


## [1] 2.718


exp(c(1, 2))


## [1] 2.718 7.389


sum(matrix(1:6, nrow = 2))  # matrix is a vector, for row/col sums, use rowSums/colSums


## [1] 21


cumsum(c(1, 2, 3))


## [1] 1 3 6


which.min(c(3, 1, 2))


## [1] 2


sqrt(c(3, 2))


## [1] 1.732 1.414


NA and NULL



NA is a valid value that marks a missing entry; NULL means empty.



print(NA)


## [1] NA


print(NULL)


## NULL


c(NA, 1)


## [1] NA  1


c(NULL, 1)


## [1] 1


 



 



* I find knitr integrated with the RStudio IDE very helpful for writing tutorials.

Sunday, March 30, 2014

Book review: The Geometry of Multivariate Statistics

 

This book was written by the late professor Thomas D. Wickens, and it is by far the best book on understanding linear regression I have read. As noted in the preface, most books approach multivariate statistics from two angles: algebra and computation. The algebraic approach to multivariate statistics, and to multiple regression in particular, is self-contained: built on linear algebra and probability, it can develop all the theorems and algorithms and give rigorous proofs of the correctness and error bounds of the methods. The computational approach teaches students how to run the software and interpret the results, for example reading the p-values of the regression coefficients. But because it never dives into the theory, the computational approach cannot tell you how the F-test and the corresponding p-values change when the properties of the data change, and it leaves statements like "insignificant p-values don't mean no correlation with the target" at a superficial level, without any deep understanding. The geometrical approach taken by Prof. Wickens has exactly this aim: to understand the meaning behind the linear algebra equations and behind the output of the software!

Geometry is a perfect tool for understanding multivariate statistics, first because it helps in understanding linear algebra (quite a few linear algebra books take a geometric approach), and second because it helps in understanding probability distributions as well. For one-dimensional distributions, we think in terms of PDF and CDF curves; for multivariate normal distributions, the density contours in high-dimensional space are elliptical. This book combines these two geometrical pictures in a unified framework and develops the key concepts of multivariate statistics in only 160 pages!

The main feature of this book is that every concept is developed geometrically, without using linear algebra or calculus at all. There are two spaces in which to view the data points: the variable space (row space) and the subject space (column space). When the number of variables exceeds three, it becomes hard to visualize the data points in the variable space, yet it is still conceptually easy to see them in the subject space! Most of the concepts in this book are developed in this subject space; only occasionally does the author switch to the variable space to provide another view (e.g. in the PCA chapter). For example, the linear regression equation:

\hat{y} = b_1 x_1 + b_2 x_2 + \cdots + b_p x_p

(all vectors here are demeaned, so there is no offset parameter a)

Basically this equation defines a linear space spanned by x_1 to x_p. The aim of linear regression is to make \hat{y} as close to the target y as possible. In the subject space:

y = \hat{y} + e

where e is the residual vector, which is perpendicular to the space spanned by the xs when the distance |e| is minimized. By writing the perpendicularity constraints as equations, we can derive the normal equation of linear regression, which in an algebraic approach is usually stated directly as X'X b = X'y rather than derived from geometric intuition.
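Spelled out, writing X = [x_1, ..., x_p] for the matrix of predictors: minimizing |e| makes e = y - Xb perpendicular to every predictor, i.e. x_j'e = 0 for j = 1, ..., p. Stacking these p equations gives

X'(y - Xb) = 0, i.e. X'X b = X'y.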

Other multivariate statistics books (e.g. Johnson & Wichern) do introduce geometrical explanations in places, but those bits are embedded inside a large mass of linear algebra equations. That makes the geometrical explanations in those books merely supportive of the algebra, and therefore not self-contained! In this book, everything must be developed geometrically; there is no corner where the author can hide a geometric concept that has not been developed before. For example, this book gives a very good explanation of the demeaning operation. In multiple regression, demeaning introduces a space spanned by the vector 1 = (1,1,...,1)', and the original y and xs share their mean components in this same space. This space is perpendicular to the residual vector e, therefore e'1 = 0! (If e'1 != 0, then e has a non-zero projection onto the 1-space, which could be absorbed into the constant offset of the regression equation.)

For practitioners, section 4.2 (goodness of fit) and chapter 5 of this book are the most useful. The coefficient of determination is interpreted geometrically as:

R^2 = \cos^2 \theta, where \theta is the angle between y and \hat{y}

Chapter 5 is titled "Configurations of multiple regression vectors"; it explains how the regression behaves for different configurations of the xs. When the predictors (xs) are highly correlated, the regression becomes unstable (more analysis in Chapter 6). The book proposes several approaches to deal with the multicollinearity problem. One of them is to minimize not |e| alone (since the position of the regression space spanned by the collinear xs is unstable), but |e| plus some criterion that stabilizes the position of the regression space. This leads to ridge regression and other variants. Ridge regression is l2-regularized regression, and the regularization is said to "control model complexity". In the multicollinearity setting, geometry gives us a much better understanding of what model complexity actually is!

Other highlights of this book: the geometry of linear algebra meets the geometry of probability in Chapter 6 (tests); geometric interpretations of the analysis of variance (Sec 3.4 & Chapter 8), PCA (Chapter 9), and CCA (Chapter 10). Each chapter refreshed some of my memory and added something new, geometrically. Strongly recommended for anyone who works with regression problems or does data mining in general.

This last paragraph is not about this particular book, but about my motivation for reading statistics books recently. Many data miners understand regression as loss minimization, perhaps plus knowing that a small p-value is a good indication of a useful feature (which is in fact not accurate). I was such a data miner until half a year ago. Few of us know how the p-value in multiple regression (and in other models, e.g. logistic regression) is actually calculated. Software packages nowadays are very advanced and make models like linear regression/PCA/CCA seem as simple as one line of R/Matlab. But if we don't know these models by hand and by heart and understand their various properties, how can we feel confident applying them? And as researchers in data mining, how can we confidently modify existing models to avoid their weaknesses in a particular application if we have only a shallow understanding of those weaknesses? We cannot stop our understanding of regularization at the one sentence "controlling model complexity"; we need to go deeper and learn what lies behind that abstract term. This is why, while writing my PhD thesis, I am also learning and re-learning statistics and machine learning models. I should have done this, say, four years ago! But it is never too late to learn, and learning these topics after several successful data mining projects gives me a new perspective.

Monday, October 7, 2013

A Collection of Interview Questions on Arrays

 

One-dimensional arrays are probably the most important data type in programming languages. First of all, computer memory supports random access and can be treated as one big array. Second, other data structures can be implemented or simulated with arrays: strings, stacks, queues, binary heaps, linked lists, binary search trees, and graphs can all be implemented using arrays. Because of this fundamental role, questions on arrays are commonly asked in technical interviews. The difficulty of such questions varies from implementing a strcpy function to applying algorithmic tricks. In this post, I try to cover a diverse set of these array questions, and I will keep the post updated as I encounter new problems.

Coding interview questions usually test two aspects:

1. The ability to write correct and stylish programs. The interviewee should write syntactically correct C-family code and argue the correctness of the program by checking boundary conditions, loop invariants, etc. If the problem is relatively easy, not only correctness but also the cleanliness and readability of the code count.

2. The ability to think quickly and solve hard algorithmic problems. Google/Facebook/MSFT tend to ask tricky questions in their interviews. Most experienced programmers don't think that ACM/ICPC-style algorithmic problems really test a programmer's ability to code; however, they are a good way to see whether the interviewee is clever (and clever people learn and adapt fast). More importantly, coding these problems shows how you can pack several coding tricks into, say, 20 lines of code.

Actually many problems here test both abilities, and in many cases writing a correct program with good style weighs more than knowing the algorithmic trick for a specific problem.

Several problems in my collection are taken from a problem series by Chen Liren (link). If you read Chinese, I highly recommend this series, which contains many good questions and experience reports from real interviews.

 

 

The problem descriptions and some keywords are in bold.

 

Maximum subarray problem. This problem is so famous that it has its own Wikipedia page. It was popularized by Jon Bentley in his CACM column Programming Pearls and later collected in the book of the same name.

This problem can be solved in O(n) time with O(1) space using dynamic programming. The DP solution itself is not very interesting. What interests me are two other solutions: one uses a technique called cumulative sums, the other a more general technique, divide and conquer.
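Still, for completeness, a sketch of the O(n) DP (Kadane's algorithm) in R:

max_subarray <- function(a) {
  best <- a[1]
  cur <- a[1]
  for (x in a[-1]) {
    cur <- max(x, cur + x)   # best sum of a subarray ending at this element
    best <- max(best, cur)
  }
  best
}
max_subarray(c(-2, 1, -3, 4, -1, 2, 1, -5, 4))  # 6, from the subarray 4 -1 2 1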

Cumulative sums are a very useful pre-computation technique that allows O(1) range-sum queries. Precomputing the sums costs O(n); solving the maximum subarray problem by enumerating all ranges on top of it costs O(n^2), though.
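A sketch of the idea in R (cumsum is built in; range_sum is a hypothetical helper):

a <- c(3, -1, 4, -1, 5)
cs <- c(0, cumsum(a))                          # cs[i+1] = sum of a[1..i]
range_sum <- function(i, j) cs[j + 1] - cs[i]  # sum of a[i..j] in O(1)
range_sum(2, 4)

## [1] 2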

Divide and conquer applies to a lot of sequence problems, including many sorting and searching algorithms. The trick is to design an O(n) merging step. In this problem the O(n) merge is simple; in other problems, e.g. the closest point pair, the O(n) merge is non-trivial.

An extension of this problem is to find the maximum submatrix. An O(n^3) algorithm is expected.

Another variant of this problem: find two disjoint subarrays A and B such that |sum(A)-sum(B)| is maximized. Suppose A is on the left and B on the right: for each position i, we need the maximum/minimum subarray in the ranges [1,i] and [i+1,n].

Given an array that is a rotation of a sorted array (elements are unique). A rotation of an array splits the original array into two parts and concatenates the left part onto the tail of the right part. E.g., {3,4,5,1,2} is a rotation of {1,2,3,4,5}. Given this rotated array, find the smallest element.

Obviously we can do a linear scan and find the first number that is smaller than its left neighbor, but that costs O(n). Because the original array is sorted, it is natural to expect binary search to bring the running time down to O(log n). In designing any binary search algorithm, it is important to prove its correctness using a loop invariant. [Bentley's pearls and CLRS both cover loop invariants.] The algorithm sketch:

l = 1, r = n
while a[l] > a[r] do
    if l + 1 == r then
        return r          // index of the smallest element
    m = (l + r) / 2
    if a[m] > a[l] then
        l = m
    else
        r = m
return l                  // loop ended with a[l] <= a[r]: the minimum is at l

Another binary search problem: given a sorted array, an index i with A[i] == i is called a magic index. Find a magic index.


Another binary search problem: given an array A = BC, where B is sorted in non-increasing order and C is sorted in non-decreasing order, recover B and C.


Not binary search, but incremental search: given an array A in which any two consecutive numbers differ by 1, find the index of a target number T. E.g. A = {4,5,6,5,6,7,8,9,10,9} and T = 9.


 


Given an array of integers, write an algorithm that shuffles the array so that all odd numbers are ahead of all even numbers.


This problem tests the understanding of quicksort. In quicksort's partition procedure, once we fix a pivot, an O(n) pass puts all elements less than or equal to the pivot on the left and all greater elements on the right. For this problem, we redefine the comparison so that any odd number is "smaller" than any even number.


If C/C++ is used, writing a general partition procedure that takes a comparison function pointer (C) or comparison functor (C++) as input would be nice.


A variant of this problem: 3-way partition. Given an array and an index i, design an in-place algorithm that puts all numbers < A[i] on the left, all numbers = A[i] in the middle, and the rest on the right.


Another variant of the partition algorithm: sort an array of 1s/2s/3s without counting their occurrences; use a new partition algorithm instead.


 


Inversions. Given an array, count the pairs (i,j) with i < j such that A[i] > A[j].


This problem has several solutions. One is to modify the merge procedure of merge sort to count the inversions contributed in each merge. Other solutions require advanced data structures, which we don't cover here.

#include <climits>

const int INF = INT_MAX;
const int MAXN = 1000005;      // adjust to the maximum input size
int a[MAXN], L[MAXN], R[MAXN];
long long res;                 // the number of inversions

void merge(int l, int m, int r)
{
    int n1 = m - l + 1, n2 = r - m;
    for (int i = 0; i < n1; i++) L[i] = a[l + i];
    L[n1] = INF;
    for (int i = 0; i < n2; i++) R[i] = a[m + i + 1];
    R[n2] = INF;

    int j = 0, k = 0;
    for (int i = l; i <= r; i++)
        if (L[j] <= R[k]) a[i] = L[j++];
        else { a[i] = R[k++]; res += n1 - j; }  // the right element jumps over the remaining n1-j left elements
}

void merge_sort(int l, int r)
{
    if (l < r) {
        int m = (l + r) / 2;
        merge_sort(l, m);
        merge_sort(m + 1, r);
        merge(l, m, r);
    }
}

Majority. Given an array of n integers, find the number that occurs more than n/2 times, or report that no such number exists. E.g., in the array {1,2,2,2,3}, 2 is such a number.


Solution 1: If such a number exists, the n/2-th number in the sorted array must equal it. So we can use a linear-time algorithm to find the n/2-th number of an unsorted array. In the literature this k-th number problem is called the selection problem; it has a complicated worst-case linear algorithm by Blum et al. (four of the five authors of that paper were awarded the Turing award!) and an O(n) average-time algorithm based on quicksort partitioning.


Solution 2: use a property of the problem: removing any two numbers that are different leaves the majority number the majority. One pass through the array using this property yields a candidate; a second linear scan checks whether the candidate really occurs more than n/2 times.
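A sketch of this pairing argument (it is the Boyer-Moore majority vote) in R:

majority <- function(a) {
  cand <- a[1]
  cnt <- 0
  for (x in a) {
    if (cnt == 0) {        # no pending candidate: adopt x
      cand <- x
      cnt <- 1
    } else if (x == cand) {
      cnt <- cnt + 1
    } else {
      cnt <- cnt - 1       # "remove" a pair of different numbers
    }
  }
  if (sum(a == cand) > length(a) / 2) cand else NA  # second scan verifies
}
majority(c(1, 2, 2, 2, 3))

## [1] 2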


A variant of this problem is to find a number that occurs more than n/3 times. The above two solutions still apply.


 


Given an array of integers, concatenate all numbers in some order so that the combined string is lexicographically smallest. E.g., for {3,32,321} the answer is 321323.


Define a new comparison operator on two numbers and sort the array with it. The new comparator is

compare x y =
    if str(xy) > str(yx) then
        1
    else if str(xy) < str(yx) then
        -1
    else
        0

The tricky part is to prove that this order does yield the smallest string. To prove that the order is well defined, we need to verify that it is reflexive, antisymmetric, and transitive.


Given a sorted array and an integer x, output the number of times x occurs in the array.


A linear scan costs O(n). A straightforward binary search can locate the first occurrence of x, after which we count forward until the next number differs; but x may occur O(n) times. A workaround is to binary search again from the first occurrence: suppose it is at position i with m numbers to its right. Check position i + m/2; if it equals x, check i + m/2 + m/4; if not, check i + m/4, and so on. This is non-trivial to get correct because of the boundary conditions.


The best solution is to use the two binary search functions in the C++ STL, lower_bound and upper_bound: lower_bound finds the smallest position i such that A[i] >= x, and upper_bound finds the smallest position i such that A[i] > x. The answer is simply upper_bound(x) - lower_bound(x). These functions already take care of the boundary conditions and of the case where x does not occur in the array.
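For a sorted integer vector, a similar effect can be sketched in R with findInterval, which binary-searches for the number of elements <= x (count_x is a hypothetical helper; subtracting the count for x - 1 works because the values are integers):

A <- c(1, 2, 2, 2, 3, 5)
count_x <- function(A, x) findInterval(x, A) - findInterval(x - 1, A)
count_x(A, 2)

## [1] 3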


Given an array of integers, all integers occur exactly twice except for one. Find the only integer that occurs once.


Use a bit trick: x xor x = 0, so xor-ing all the numbers cancels every value that occurs twice and leaves the single one.
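A one-line sketch in R (bitwXor is in base R):

find_single <- function(a) Reduce(bitwXor, a)
find_single(c(4, 7, 4, 9, 7))

## [1] 9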


Variant: Given an array, except for one X, all other numbers occur exactly 3 times. Find X.


Now the xor trick alone does not work, but the idea is the same: count the set bits at each position modulo 3; the bits that remain belong to X.


 


Integers and sum. Subproblem 1: given a sorted array, find two numbers that sum to m. Subproblem 2: the array is not sorted. Subproblem 3: find four numbers that sum to m.


For subproblem 1, there is a trick called ruler reading. Keep two pointers at the left and right ends of the array, and move them inward until the sum is m. O(n).
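A sketch of the two-pointer scan in R (two_sum_sorted is a hypothetical helper returning the indices):

two_sum_sorted <- function(a, m) {
  l <- 1
  r <- length(a)
  while (l < r) {
    s <- a[l] + a[r]
    if (s == m) return(c(l, r))
    if (s < m) l <- l + 1 else r <- r - 1  # only one direction can help
  }
  NULL  # no pair sums to m
}
two_sum_sorted(c(1, 3, 5, 8, 11), 13)  # indices of 5 and 8

## [1] 3 4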


For subproblem 2, hashing or sorting.


For subproblem 3, there is a trick called two-head search. First compute the sums of all pairs (O(n^2)) and sort them or put them in a hash table. For a+b+c+d = m we then only need to enumerate the pair (a,b) and check whether a pair summing to m-a-b exists, by binary search or hashing.


Another problem using the ruler-reading technique: given an array of positive integers and a bound S, find a subarray of minimal length whose sum is >= S.


Arrange 0 to n-1 in a circle; starting from 0, delete every m-th number and output the deleted numbers in sequence. E.g., for {0,1,2,3} and m=2, the sequence is 1,3,2,0.


Solution: O(n); in every step, a modulo operation finds the exact number to delete.
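A direct simulation sketch in R (each step locates the victim with one modulo operation; deleting from an R vector makes this simulation O(n^2) overall, while careful index bookkeeping achieves O(n)):

josephus_order <- function(n, m) {
  a <- 0:(n - 1)
  out <- integer(n)
  idx <- 0
  for (k in n:1) {              # k numbers remain in the circle
    idx <- (idx + m - 1) %% k   # position of the next number to delete
    out[n - k + 1] <- a[idx + 1]
    a <- a[-(idx + 1)]
  }
  out
}
josephus_order(4, 2)

## [1] 1 3 2 0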


Given an array of integers, find a maximum subset that forms a sequence of consecutive integers. E.g., for {15, 7, 12, 6, 14, 13, 9, 11} the answer is {11,12,...,15}.


The obvious solution is to sort the numbers in O(n log n) and then do a linear scan, but this problem asks for a running time better than O(n log n). Hint: put consecutive numbers into the same set and use disjoint-set algorithms to reach O(n log log n).


Given k sorted arrays, find a range [a,b] with minimal length such that each array has at least one element in [a,b].


Example,


1: 4, 10, 15, 24, 26


2: 0, 9, 12, 20


3: 5, 18, 22, 30


Minimal range is [20,24].


If we imagine the numbers of all k arrays merged in sorted order, each labeled 1 to k by its source array, then our minimal range must contain all k labels. This range check can be done while merge-sorting the k arrays. Hint: keep a pointer into each array.


Given an array of 0/1, find the longest subarray such that the number of 0s and the number of 1s are the same.


Using the cumulative-sum technique, the problem can be solved in O(n^2). Hint: replace each 0 with -1; then a balanced subarray is exactly one whose sum is 0, i.e. one whose endpoints have equal cumulative sums.


Largest Rectangle in a Histogram. Given an array of bar heights, compute the rectangle with the largest area.


Solution: pre-compute two arrays L and R, where L[i] is the leftmost position such that all bars in [L[i], i] have height >= h[i]. The following C++ snippet computes L with a stack (R is symmetric):

int t = 0;                                    // st[] is a stack of indices; t is its size
for (int i = 0; i < n; i++) {
    while (t > 0 && h[st[t-1]] >= h[i]) t--;  // pop bars at least as tall as h[i]
    L[i] = (t == 0) ? 0 : (st[t-1] + 1);
    st[t++] = i;
}

Many other solutions to this problem: http://www.informatik.uni-ulm.de/acm/Locals/2003/html/judge.html


Variant: given a 0/1 matrix, find the largest submatrix consisting of only 1s.


A much easier problem: given an unsorted array A, find, in linear time, indices i and j (j > i) such that A[j] - A[i] is maximized.


In-place shuffle array: 1, 2, 3, ..., 2n => 1, n+1, 2, n+2, etc. Solution: Peiyush Jain, A Simple In-Place Algorithm for In-Shuffle. http://arxiv.org/pdf/0805.1598.pdf


Coin change problem. Given an array A and a target X, count how many subsets of A sum to X. Solution: DP.


Given two arrays S and T, each a permutation of 0 to n-1. By repeatedly swapping 0 with another number, change S into T. What if the minimal number of swaps is required?


Solution: fix one position in every step. If minimal swaps are required, use BFS/shortest-path algorithms.


Given an unsorted array, find the longest arithmetic progression. Case 1: keep the order of the original array. Case 2: no need to keep the order.


Solution:


For case 1: use the following hash data structure to store the n(n-1)/2 difference pairs:


map<int, set<pair<int, int> > >


difference -> sorted set of (i,j) pairs [difference = a[j]-a[i] ]


For case 2: since the order does not matter, we first sort the array in O(n log n); the longest arithmetic progression is then a subsequence of the sorted array. This can be solved by DP:


dp[i][j] = dp[j][k] + 1, if i < j < k and a[j] - a[i] = a[k] - a[j]


dp[i][n-1] = 2, for all i < n-1


 


Given an array of n unique numbers from 1 to n with 1/2/k numbers deleted, how do we identify the deleted number(s)?


For k=1, calculate sum.


For k=2, calculate sum and sum of squares.


For k=3, calculate sum, sum of squares, and sum of cubes.


Time: O(nk)


Another solution: count the 0/1 bits at each position. Time: O(n log n).


An array of length n+1 is populated with integers in the range [1, n]. Find a duplicate integer (just one, not all) in linear time with O(1) space. The array is read-only and may not be modified. Variation: what if the array may be written into but must be left unmodified by the algorithm?


Solution: http://gurmeet.net/puzzles/duplicate-integer/ and


http://www.quora.com/Programming-Interviews/What-are-the-best-programming-interview-questions-youve-ever-asked-or-been-asked


The best solution requires solving several nontrivial subproblems, which are actually linked list problems!


Given an unsorted array A of n distinct natural numbers, find the first natural number that does not appear in the array.


 


I first saw this problem in R. Bird's Pearls of Functional Algorithm Design, where it is the first pearl. There are at least two good solutions (favoring different tradeoffs):


1. Among 0 to n there must be an integer that does not appear in A. We use an extra boolean array of size n+1 and scan A, marking each number in that range as it occurs. Then we scan the boolean array and report the first unmarked number. This solution is very easy to implement and costs O(n) time and O(n) space. Can we reduce the space cost?
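A sketch of this marking solution in R (minfree is the name Bird uses):

minfree <- function(a) {
  n <- length(a)
  seen <- logical(n + 1)        # positions 1..n+1 stand for the numbers 0..n
  seen[a[a <= n] + 1] <- TRUE   # numbers > n can never be the answer
  which(!seen)[1] - 1           # first unmarked number
}
minfree(c(0, 1, 2, 4, 6))

## [1] 3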


2. Divide and conquer. Check this property: among low to low+n there must be a number that does not occur in A, where low is a lower bound on the values of A (A contains no numbers smaller than low). Suppose we also know an upper bound high (a value bigger than all numbers in A); then A holds at most high-low distinct numbers. If and only if high-low == n do the numbers in [low, high) all occur in A exactly once; if high-low > n, then some number in [low, high) does not occur in A.


With this property we can design a divide-and-conquer algorithm that discards half of the numbers in each step. The following implementation is from Bird's book:

minfree xs = minfrom 0 (length xs, xs)

minfrom a (n, xs)
  | n == 0      = a
  | m == b - a  = minfrom b (n - m, vs)  -- discard the numbers < b
  | otherwise   = minfrom a (m, us)      -- discard the numbers >= b
  where (us, vs) = partition (< b) xs    -- partition as in quicksort
        b = a + 1 + n `div` 2
        m = length us

This method uses O(n) time and O(log n) stack space, but the space cost can be reduced to O(1) by writing an iterative version of the procedure. And unlike the linear selection algorithm, which is O(n) only on average, the algorithm here is O(n) in the worst case.


 


 


Two problems on permutation arrays:


Find the next permutation. This function is implemented in the C++ STL, and its Wikipedia page has a solution. The following is an OCaml implementation:

let next_permutation a =
  let swap i j =
    let tmp = a.(i) in
    a.(i) <- a.(j);
    a.(j) <- tmp
  in
  let n = Array.length a in
  let k = ref 0 in
  let i = ref (-1) in
  begin
    (* find largest i such that a(i) < a(i+1) *)
    while !k < n - 1 do
      if a.(!k) < a.(!k + 1) then
        i := !k;
      k := !k + 1
    done;
    if !i = -1 then None  (* if such i does not exist, no next permutation *)
    else begin
      (* find largest j such that a(j) > a(i) *)
      let k = ref (!i + 1) in
      let j = ref !k in
      while !k < n do
        if a.(!k) > a.(!i) then
          j := !k;
        k := !k + 1
      done;
      (* swap i j *)
      swap !i !j;
      (* reverse all after index i *)
      let k1 = ref (!i + 1) in
      let k2 = ref (n - 1) in
      while !k1 < !k2 do
        swap !k1 !k2;
        k1 := !k1 + 1;
        k2 := !k2 - 1
      done;
      Some a
    end
  end

Find the inverse of a permutation. E.g. A = [4,2,1,3] is a permutation; its inverse is B = [3,2,4,1], i.e. B[A[i]] = i (indices start from 1).


Solution: invert each cycle of the permutation, i.e. reverse several circular linked lists.


 


Problems with no good solutions yet:


 


Given an array A of length n, filled with numbers from 1 to n. Some numbers occur multiple times, some don't occur at all. Identify the two groups of numbers in O(n) time and O(1) space.


Given an array where some numbers occur once, some occur twice, and exactly one X occurs three times, find X. (No solution yet.)

  

Sunday, September 22, 2013

Kindle Wordbook -- Making a Personalized Dictionary from Books You’ve Read

 

The Kindle Paperwhite is a really good e-reader. Since I got it I have read and re-read several books with great joy and ease. As I am not a native English speaker, I rely on the dictionary function in the Paperwhite. Of course, dictionary lookup is implemented similarly in other reading software on Android and iOS, but I found the Paperwhite offers the best touch experience (in the appendix of this article I compare dictionary lookup in the Paperwhite with iBooks and the Duokan reader). The Paperwhite is also more suitable for reading plain words than a tablet or phone, in terms of light, size, and battery life.

Kindle stores all highlighted text, bookmarks, and notes in a text file: My Clippings.txt. Whenever I look a word up in the dictionary, I immediately highlight it. This highlighting costs no extra touch, because one touch is needed anyway to close the dictionary window.


Figure 1. My Clippings.txt opened in a text editor.

One problem with highlighted words is that they are just words: they carry neither their dictionary meanings nor the context in which they were highlighted. For many words I have looked up before, when I encounter them again later, I still cannot remember their meanings.

So my idea is to parse My Clippings.txt, extract the highlighted words, and then

1) find the Chinese meanings of them using an online dictionary, and

2) find all occurrences in my past readings.

For the first task, I can use the API of an online dictionary. After some searching, I found that www.wordreference.com provides a simple API. An alternative is dict.baidu.com, which can be accessed via HTTP requests.

If I just aimed to get the Chinese meanings of the unknown words, I would only need the first task. But by merely looking at the Chinese translations I easily forget some of them, and even if I remember a word's vague meaning, I still don't know when and how to use it. The vocabulary we can apply in writing and speaking is much smaller than the vocabulary we recognize while reading. By reviewing these words in their context, I can remember them better; more importantly, by studying their usage in different texts by different authors, I can work out when and how they are used. For example, I got to know the word `remorse' while reading Mary Shelley's Frankenstein, and the word is now linked with various episodes and images from the novel. The word not only has a meaning, but a more concrete feeling. Acquiring such a concrete feeling is, I think, a prerequisite to using the word confidently and correctly.

So I wrote a little Python program to do the two tasks automatically for me. The program can be downloaded at:

https://github.com/yinz/pwdict

And have a look at my recent words in browser:


Figure 2. View word book in a browser.

Python is very good for trying out ideas and getting things done quickly, thanks to its vast libraries and dynamic nature. In this project I needed HTTP requests, JSON parsing, reading epub files, and splitting English text into sentences; there are Python libraries for all of these tasks. The HTTP and JSON libraries are built in. I modified the epub2txt project to extract the plain text content from epub files, the typical book format used across mobile reading software. Splitting an article into individual sentences seems easy but is tricky in many special cases, so I used the NLTK library for that.

Appendix. Why dictionary lookup in the Paperwhite is better

Paperwhite: one touch to open the dictionary, and one touch to go back (marking the word costs no extra touches).

Duokan and iBooks: two touches to open the dictionary, and one touch to go back. Marking the word costs two extra touches!

(Figure: side-by-side comparison of dictionary lookup.)

Wednesday, July 17, 2013

Subtle Variable Scoping in R

 

A language manual usually defines how the language behaves, but it does not warn you when a feature you assume should be supported isn't. As an example, I will talk about the subtle variable scoping of the R language.

{} code blocks

A lot of programmers coming from C/C++/Java assume that code blocks inside {} introduce a new scope. However, in dynamic languages like R/Python/JavaScript/Matlab, code blocks do not introduce new scopes; only functions do. This difference can cause subtle bugs.

For example, the following R function returns a list of quadratic function objects:

make_funcs <- function(a, b, c_) {
  n <- length(a)
  fs <- list()
  for (i in 1:n) {
    fs[[i]] <- function(x) {
      cat(sprintf('eval %.1f*x^2+%.1f*x+%.1f where x = %.1f\n', a[i], b[i], c_[i], x))
      a[i]*x*x + b[i]*x + c_[i]
    }
  }
  cat(sprintf('variable i is still in the scope and has value %d\n', i))
  fs
}


The input to this function is three vectors of numbers, representing the three coefficients of the quadratic forms. Let's make three function objects using the following coefficients:



a <- c(1,2,3)
b <- c(4,5,6)
c_ <- c(-1,-1,-1)


fs <- make_funcs(a, b, c_)

fs[[1]](1)
fs[[2]](1)
fs[[3]](1)


We are supposed to get three different function values. However, checking the output shows that all three functions are the same:



> fs[[1]](1)
eval 3.0*x^2+6.0*x+-1.0 where x = 1.0
[1] 8
> fs[[2]](1)
eval 3.0*x^2+6.0*x+-1.0 where x = 1.0
[1] 8
> fs[[3]](1)
eval 3.0*x^2+6.0*x+-1.0 where x = 1.0
[1] 8


It seems that the three fs[[i]] use the same variable i when they are evaluated. That is, when the three functions are created, the R interpreter just remembers i as a variable in its parent function. The result is then easy to explain: after the loop has finished, the variable i has the value 3, and it is still in scope inside make_funcs.



Let's see how we would write make_funcs in F#:



// a direct translation
let make_funcs (a: int array, b: int array, c: int array) =
    let n = Array.length a
    let fs = new ResizeArray<(int -> int)>()
    for i = 0 to n - 1 do
        fs.Add(fun x -> a.[i]*x*x + b.[i]*x + c.[i])
    fs.ToArray()

// a more functional translation
let make_funcs2 (a: int array, b: int array, c: int array) =
    Array.zip3 a b c
    |> Array.map (fun (a0, b0, c0) ->
        (fun x -> a0*x*x + b0*x + c0))


The following code makes three different functions, as we expect:



let a = [| 1; 2; 3 |]
let b = [| 4; 5; 6 |]
let c = [| -1; -1; -1 |]

let fs = make_funcs (a, b, c)

fs.[0](1) // 4
fs.[1](1) // 6
fs.[2](1) // 8


Why does the F# code work as expected? When the three functions are created, they also know that the variable i is to be found in the parent scope; however, in F# each iteration of the for loop binds a fresh i, so the three i's have three independent scopes!



The behavior of the R version is definitely not what we want. How do we make three different functions? The answer is to make three different variables, one per closure, inside a new function:



make_funcs2 <- function(a, b, c_) {
  n <- length(a)
  fs <- list()
  for (i in 1:n) {
    fs[[i]] <-
      (function() {
        j <- i
        function(x) {
          cat(sprintf('eval %.1f*x^2+%.1f*x+%.1f where x = %.1f\n', a[j], b[j], c_[j], x))
          a[j]*x*x + b[j]*x + c_[j]
        }
      })()
  }
  fs
}


Each time through the for loop, I create a new function that defines a variable j inside itself, and this new function returns the inner function(x). Notice that the wrapper function is executed n times in the loop, therefore creating n different j's.



This trick is used ubiquitously in JavaScript. For example, instead of writing



{
  var a = 1;
  // code blocks
}


we define a function and execute it immediately to make local variables:



(function() {
  var a_is_hidden_from_outside = 1;
  // in other words, no new variable in the global space is introduced.
})()


The assignment operators <- and <<-



It seems that the block syntax {} can be translated as (function() {})() in R/JavaScript. But in R, things can be more subtle. See the third version of make_funcs:



make_funcs3 <- function(a, b, c_) {
  n <- length(a)
  fs <- list()
  for (i in 1:n)
    (function() {
      j <- i
      fs[[i]] <-
        function(x) {
          cat(sprintf('eval %.1f*x^2+%.1f*x+%.1f where x = %.1f\n', a[j], b[j], c_[j], x))
          a[j]*x*x + b[j]*x + c_[j]
        }
      cat(sprintf('ls length = %d\n', length(fs)))
    })()

  fs
}

a <- c(1,2,3)
b <- c(4,5,6)
c_ <- c(-1,-1,-1)

fs <- make_funcs3(a, b, c_)

fs[[1]](1)


We translate the {} after the for loop into (function() {})(), so the assignment to fs[[i]] now happens inside the function wrapper. However, the code does not run correctly:



> fs <- make_funcs3(a, b, c_)
ls length = 1
ls length = 2
ls length = 3

>
> fs[[1]](1)
Error in fs[[1]] : subscript out of bounds
> length(fs)
[1] 0


The transcript suggests that fs grows while the for loop runs, but the fs modified inside the wrapper is a different (local) variable from the fs outside: each assignment to fs[[i]] creates a fresh local copy. The fs outside the loop was only initialized and never had any elements added.



The assignment operator <- creates a new variable inside a function; it does not search for an existing variable with the same name in the parent environments. To get the behavior we intended, we have to use the <<- operator:



make_funcs3 <- function(a, b, c_) {
  n <- length(a)
  fs <- list()
  for (i in 1:n)
    (function() {
      j <- i
      fs[[i]] <<-
        function(x) {
          cat(sprintf('eval %.1f*x^2+%.1f*x+%.1f where x = %.1f\n', a[j], b[j], c_[j], x))
          a[j]*x*x + b[j]*x + c_[j]
        }
      cat(sprintf('ls length = %d\n', length(fs)))
    })()
  fs
}


And now this function runs as we expect.



Summary



When using R for interactive data analysis and plotting, most of the time we won't touch these subtle language features: we just copy and paste code snippets from the R help and the web and adapt them to our own data analysis. But when we get into real R programming, these issues do occur, and they will bite us if we assume that our C++/Java experience carries over to R.