Saturday, October 4, 2014

A list of references in machine learning and programming languages

 

Today I have to dispose of some papers that I printed during my PhD studies. Most of them are research papers or notes, and I had great joy when I first read them. Some of them I constantly refer back to. Since all these materials are freely available online, I am writing down this list so that I can find them again more easily. (For reading, I still prefer paper; it just takes up too much space.)


Machine learning/statistics

Minka, A comparison of numerical optimizers for logistic regression.

          Estimating a Gamma distribution.

          Beyond Newton’s method.

Steyvers, Multidimensional scaling.

von Luxburg, A tutorial on spectral clustering.

Das et al., Google news personalization: scalable online collaborative filtering.

Hofmann, Probabilistic latent semantic analysis.

Deerwester et al., Indexing by latent semantic analysis.

Yi Wang, Distributed Gibbs sampling of latent Dirichlet allocation, the gritty details.

Blei et al., Latent Dirichlet allocation.

Herbrich et al., TrueSkill: a Bayesian skill rating system.

                      Web-scale Bayesian click-through rate prediction for sponsored search.

                      Matchbox: large scale online Bayesian recommendations.

Wickham, Tidy data.

                The split-apply-combine strategy for data analysis.

DiMaggio, Mapping great circles tutorial.

Breiman, Statistical modeling: the two cultures. (with comments)

Wagstaff, Machine learning that matters.

Shewchuk, An introduction to the conjugate gradient method without the agonizing pain.

Mohan et al., Web-search ranking with initialized gradient boosted regression trees.

Burges, From RankNet to LambdaRank to LambdaMART: an overview.

 

Programming

Goodger, Code Like a Pythonista: Idiomatic Python.

C++ FAQ, inheritance.

Wadler: Monads for functional programming.

            How to make ad-hoc polymorphism less ad hoc.

            Comprehending monads.

            The essence of functional programming.

Jones: A system of constructor classes: overloading and implicit higher-order polymorphism.

          Functional programming with overloading and higher-order polymorphism.

          Implementing type classes.

          Monad transformers and modular interpreters.

Yorgey, The Typeclassopedia.

Augustsson, Implementing Haskell overloading.

Damas and Milner, Principal type-schemes for functional programs.

Wehr et al., ML modules and Haskell type classes: a constructive comparison.

Bernardy et al., A comparison of C++ concepts and Haskell type classes.

Garcia et al., A comparative study of language support for generic programming.

Conchon et al., Designing a generic graph library using ML functors.

Fenwick, A New Data Structure for Cumulative Frequency Tables.

Petr Mitrichev, Fenwick tree range updates (http://petr-mitrichev.blogspot.hk/2013/05/fenwick-tree-range-updates.html), and his blog.

Flanagan et al., The essence of compiling with continuations.

Haynes et al., Continuations and coroutines.

Notes on CLR garbage collector.

Odersky et al., An overview of the Scala programming language.

                       Lightweight modular staging: a pragmatic approach to runtime code generation and compiled DSLs.

                       JavaScript as an embedded DSL.

                       StagedSAC: a case study in performance-oriented DSL development.

F# language reference, chapter 5, types and type constraints.

Rémy, Using, understanding, and unraveling the OCaml language.

Carette et al., Finally tagless, partially evaluated.

Gordon et al., A model-learner pattern for Bayesian reasoning.

                     Measure transformer semantics for Bayesian machine learning.

Daniels et al., Experience report: Haskell in computational biology.

Kiselyov et al., Embedded probabilistic programming.

Hudak, Conception, evolution, and application of functional programming languages.

Hudak et al., A gentle introduction to Haskell 98.

                  A history of Haskell: Being lazy with class.

Yallop et al., First-class modules: hidden power and tantalizing promises.

Lämmel et al., Software extension and integration with type classes.

HaskellWiki, OOP vs type classes.

                   All about monads.

 

Course notes

Tao, linear algebra.

MIT 18.440 Probability and Random Variables. OCW link.

MIT 18.05 Introduction to Probability and Statistics. OCW link.

MIT 6.867 Machine learning. Good problem sets.  OCW link.

MIT 18.01 Single Variable Calculus. OCW link.

Princeton COS424, Interacting with Data.

Stanford CS229. Machine learning.

Cornell CS3110, Data Structures and Functional Programming.

FISH 554 Beautiful Graphics in R. (formerly FISH 507H; course website no longer available)

Haskell course notes, CS240h, Stanford. Recent offering.

Liang Huang’s course notes, Python, Programming Languages.

MISC

Balanced Search Trees Made Simple (in C#)

Fast Ensembles of Sparse Trees (fest)

An implementation of top-down splaying.

Largest Rectangle in a Histogram.

Monday, July 14, 2014

R Notes: Functions

R's semantics is modeled after Scheme, a functional language, and R is functional too. This note is about functions in R: many of R's seemingly strange usages are just syntactic sugar for special function calls.

What is rownames(x) <- c('A','B','C')?

y <- c(1, 2, 3, 4, 5, 6)
x <- matrix(y, nrow = 3, ncol = 2)
rownames(x) <- c("A", "B", "C")
x


##   [,1] [,2]
## A    1    4
## B    2    5
## C    3    6


How can we assign c('A','B','C') to the return value of rownames(x), yet the assignment takes effect on x?! This is because the statement is syntactic sugar for the following function call:



x <- `rownames<-`(x, c("A1", "B1", "C1"))


First, rownames<- is indeed a function! Because its name contains special characters, we need to put backticks (or quotes) around the name to call it. Second, `rownames<-`(x, c('A1','B1','C1')) is a pure function call: it returns a new copy of x and does not change the row names of x itself. To make the assignment take effect, we must assign the return value back to x explicitly.



The technical term for functionname<- is a replacement function. The general rule is:



f(x) <- y is transformed into x <- `f<-`(x, y).



Besides rownames, there are many other functions that have a twin replacement function, for example, diag and length.
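
For instance, here is a minimal sketch of the same pattern with length<- and diag<- (the particular values are arbitrary):

x <- c(1, 2, 3)
length(x) <- 5   # same as x <- `length<-`(x, 5); pads with NA
x

## [1]  1  2  3 NA NA

m <- matrix(1:4, nrow = 2)
diag(m) <- 0     # same as m <- `diag<-`(m, 0)
m

##      [,1] [,2]
## [1,]    0    3
## [2,]    2    0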



A special case is the index operator "[". Its replacement function is "[<-":



y <- c(1, 2, 3)
"["(y,1)


## [1] 1


`[<-`(y, 2, 100)


## [1]   1 100   3


More examples are in the R Language Definition, section 3.4.4.



Operators are functions



Operators in R are indeed function calls.



"+"(c(1,2), 2)  # same as c(1,2) + 2


## [1] 3 4


Because operators are functions, we can define new operators using function syntax:



# strict vector add
"%++%" <- function(x, y) {
    n <- length(x)
    m <- length(y)
    if (n != m) {
        stop("length does not match")
    } else {
        x + y
    }
}


Self-defined operators become more powerful when used with S3 objects (see below).



S3 functions



There are two object systems in R, S3 and S4. S3 objects are lists, so their fields are accessed with $, the same operator used to access list fields. S4 objects are intended to be safer than S3 objects; their fields (slots) are accessed with @. Many packages, especially those with a long history, such as lars, rpart and randomForest, use S3 objects. S3 objects are easier to use and understand.
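
As a minimal sketch of the two access styles (the class and slot names here are made up for illustration):

s3 <- list(x = 1)
class(s3) <- "MyS3"
s3$x                 # S3: fields are list fields, accessed with $

## [1] 1

setClass("MyS4", representation(x = "numeric"))
s4 <- new("MyS4", x = 1)
s4@x                 # S4: slots are accessed with @

## [1] 1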



Knowing how to implement an S3 object is very useful when we need to check the source code of a specific R package. And when we want to release a small piece of code for others to use, S3 objects provide a simple interface.



The easiest way to understand S3 objects is the following analogy:

R                               ANSI C
S3 object (an R list object)    C struct
S3 functions                    functions operating on the struct


struct MyVec {
    int *A;
    int n;
};

int safe_get(struct MyVec vec, int i) {
    if (i < 0 || i >= vec.n) {
        fprintf(stderr, "index error");
        exit(1);
    }
    return vec.A[i];
}



In R, the S3 object is implemented as:



vec <- list()
vec$A <- c(1, 2, 3)
vec$n <- length(vec$A)
class(vec) <- "MyVec"


In the S3 object system, method names cannot be chosen freely as they can in C. They must follow the pattern “functionname.classname”. Here the class name is MyVec, so all the method names must end with .MyVec.



"[.MyVec" <- function(vec, i) {
if (i <= 0 || i > vec$n) {
stop("index error")
}
vec$A[i]
}


Let's implement the replacement function too:



"[<-.MyVec" <- function(vec, i, value) {
if (i <= 0 || i > vec$n)
stop("index error")
vec$A[i] <- value
vec
}


Let's play with MyVec objects:



vec[3]


## [1] 3


vec[2] <- 100
vec[30]


## Error: index error


We can also add other member functions for MyVec, such as summary.MyVec, print.MyVec, plot.MyVec, etc. To use these functions, we don't have to spell out the full function names; we can just call summary, print, and plot. R will inspect the S3 class attribute (in our case, MyVec) and find the corresponding functions automatically.
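
For example, a minimal print method might look like this (just a sketch; the message format is arbitrary):

print.MyVec <- function(x, ...) {
    cat("MyVec of length", x$n, ":", x$A, "\n")
}
print(vec)   # R dispatches to print.MyVec based on class(vec); typing vec alone also works

## MyVec of length 3 : 1 100 3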



Default parameters and ...



Many functions in R have a long list of parameters, the plot function for example. It would become tedious, and sometimes impossible, for the end user to assign a value to every parameter. So, to keep interfaces clean, R supports default parameters. A simple example:



add2 <- function(a, b = 10) {
    a + b
}
add2(5)


## [1] 15


What I want to emphasize in this section is ..., the variable-number-of-arguments mechanism. It is used throughout R to build good function interfaces: if you read the documentation of general R functions such as summary and plot, most of their interfaces include ....



Consider the following case: I am implementing a statistical function garchFit, say for GARCH model calibration, and inside it I use the optimizer optim, which has a lot of parameters. Now I need to think about the API of my GARCH calibration function because I want others to use it as well. Shall I expose the parameters of optim among garchFit's parameters? Yes, since I want to give the users of my function some freedom in the optimization. But a single procedure in optim, such as L-BFGS, already has many parameters. On one side, I want to give users the option to specify these parameters; on the other side, if I expose all of them in garchFit, the parameter list grows far too long. ... comes to the rescue! See the following example:



f1 <- function(a = 1, ...) {
    a * f2(...)
}
f2 <- function(b = 5, ...) {
    b * f3(...)
}
f3 <- function(c = 10) {
    c
}
f1()


## [1] 50


f1(a = 1, b = 2)


## [1] 20


f1(c = 3)


## [1] 15


A simple user of f1 would only need to study its exposed parameter a, while advanced users have options to specify parameters in f2 and f3 when calling f1.



Global variables



Global variables are readily accessible (readable) from any R function. However, to change the value of a global variable inside a function, we need the special assignment operator <<-. Python has similar rules. See the following example:



a <- 100
foo <- function() {
    b <- a   # reads the global a
    a <- 10  # does not change the global a: this creates a local a and assigns 10 to it
    c(a, b)
}
foo()


## [1]  10 100


a


## [1] 100


boo <- function() {
    a <<- 10
}
boo()
a


## [1] 10


Here our scopes have two layers: “global” and the top-level functions (foo and boo). When there are more layers, i.e., nested functions, the <<- operator finds the variable with the same name in the closest enclosing layer and assigns to it. It is generally bad practice to reuse the same variable name across different function layers (except for variables like i, j, x). assign is more general; check ?assign for the documentation.
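
A sketch with one more layer of nesting shows how <<- walks outward to the nearest enclosing definition, and how assign can target the global environment explicitly (the function names are made up):

a <- 1
outer_fn <- function() {
    a <- 2                                    # a local a, one layer out
    inner_fn <- function() {
        a <<- 3                               # rebinds the a in outer_fn, not the global a
        assign("a", 99, envir = globalenv())  # changes the global a directly
    }
    inner_fn()
    a
}
outer_fn()

## [1] 3

a

## [1] 99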



Variable scope



I think this is the trickiest part of R programming for C programmers, because the block syntax {...} does not introduce a new scope in R, while in C/C++/Java/C#/etc. it does! In R, only a function introduces a new scope.
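
A small illustration of the difference:

for (i in 1:3) {
    tmp <- i * 10   # braces do not create a new scope
}
tmp                 # still visible after the loop

## [1] 30

f <- function() {
    inside <- 1     # a function body does create a new scope
}
f()
exists("inside")

## [1] FALSE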



Please see my previous post: Subtle Variable Scoping in R



quote, substitute, eval, etc.



Many language-level features of R, such as the debugging function trace, are implemented in R itself rather than by interpreter hacks, because R supports meta-programming. I will write another post about these special functions.
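
As a tiny taste (just a sketch; the full treatment is left for that later post):

e <- quote(x + y)             # capture an unevaluated expression
e

## x + y

eval(e, list(x = 1, y = 2))   # evaluate it against a supplied list/environment

## [1] 3

g <- function(arg) substitute(arg)   # recover the caller's unevaluated expression
g(sin(pi / 2))

## sin(pi/2)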

Saturday, July 12, 2014

R Notes: vectors

 

R is different from C-family languages. It has C-like syntax but Lisp-like semantics. Programmers from the C/C++/Java world find many usages in R ad hoc and end up memorizing special cases, because they approach R from a C perspective. R is a very elegant language once we unlearn some C concepts and learn R's own rules. I am writing several R notes to explain the important ones. This is the first note.

 

The atomicity of R vectors

The atomic data structure in R is the vector. This is very different from any C-family language. In C/C++, built-in types such as int and char are the atomic data structures, while a C array (a contiguous block of memory) is clearly not the simplest type. In R, the vector is the most basic data structure: there is no scalar data structure at all. You cannot have a scalar int in R the way you can write int x = 10 in C.

The atomicity of R vectors is stated in many documents. It is usually skipped over by R learners because many of them come from C, where an array is a composite data structure. Many seemingly special cases in the R language all come from the atomicity of R vectors, and I will try to cover them coherently.

Display

x <- 10  # equivalent to x <- c(10)
x # or equivalent to print(x)


## [1] 10


y <- c(10, 20)
y


## [1] 10 20


What does [1] mean in the output? It means that the printed line starts at index 1 of the vector. x is a vector of length 1, so it prints as [1] 10, while y is a vector of length 2, so it prints as [1] 10 20. For a longer vector, the output contains more indices to assist human reading:



z <- 1:25
print(z)


##  [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
## [24] 24 25


Vectors with different types



Though vectors in R are atomic, they come in different types: integer vectors, double (float) vectors, complex vectors, character vectors and logical vectors. Integer and double vectors are both numeric vectors. Above we have seen numeric vectors; let's see more types of vectors below:



x <- c(1, 2.1)
mode(x)


## [1] "numeric"


y <- c("a", "bb", "123")
mode(y)


## [1] "character"


z <- complex(real = 1, imaginary = 1)
mode(z)


## [1] "complex"


Notice that in R, a string (in R's terms: the character type) is like the int, float and logical types: it is not a vector of chars. R does not differentiate between a single character and a sequence of characters. R has a set of special functions such as paste and strsplit for string processing, but R's character type is not a composite type, and it is not a vector of chars either!
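
A short illustration of this point:

s <- "hello"
length(s)              # a length-1 character vector, not 5 chars

## [1] 1

nchar(s)               # the number of characters is a separate question

## [1] 5

strsplit(s, "")[[1]]   # splitting yields a character vector of single letters

## [1] "h" "e" "l" "l" "o"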



matrix and array



A matrix is a vector augmented with extra attributes, and this makes matrix an R class. Its core data structure is still a vector. See the example below:



y <- c(1, 2, 3, 4, 5, 6)
x <- matrix(y, nrow = 3, ncol = 2)
class(x)


## [1] "matrix"


rownames(x) <- c("A", "B", "C")
colnames(x) <- c("V1", "V2")
attributes(x)


## $dim
## [1] 3 2
##
## $dimnames
## $dimnames[[1]]
## [1] "A" "B" "C"
##
## $dimnames[[2]]
## [1] "V1" "V2"


x


##   V1 V2
## A  1  4
## B  2  5
## C  3  6


as.vector(x)


## [1] 1 2 3 4 5 6


In R, arrays are less frequently used. A 2D array is indeed a matrix. To find out more: ?array. We can say that an array/matrix is a vector (augmented with dim and other attributes), but we cannot say that a vector is an array. In OOP terminology, array/matrix is a subtype of vector.
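
A quick check of these relationships:

a <- array(1:6, dim = c(2, 3))   # a 2D array ...
is.matrix(a)                     # ... is a matrix

## [1] TRUE

m <- matrix(1:6, nrow = 2)
is.vector(m)                     # not reported as a plain vector, because of its dim attribute ...

## [1] FALSE

length(m)                        # ... but the underlying data is still a length-6 vector

## [1] 6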



operators



Because the fundamental data structure in R is the vector, all the basic operators are defined on vectors. For example, + is really vector addition; adding two vectors of length 1 is just a special case.



When the two vectors are not of the same length, the shorter one is recycled (repeated) to the length of the longer one. For example:



x <- c(1, 2, 3, 4, 5)
y <- c(1)
x + y # y is repeated to (1,1,1,1,1)


## [1] 2 3 4 5 6


z <- c(1, 2)
x + z # z is repeated to (1,2,1,2,1), a warning is triggered


## Warning: longer object length is not a multiple of shorter object length


## [1] 2 4 4 6 6


+, -, *, /, etc. are vector operators. When they are used on matrices, their semantics are the same as when dealing with vectors: a matrix is treated as one long vector, stored column by column. So do not expect them to work as matrix operators! For example:



x <- c(1, 2)
y <- matrix(1:6, nrow = 2)
x * y


##      [,1] [,2] [,3]
## [1,]    1    3    5
## [2,]    4    8   12


For matrix multiplication, we shall use the dedicated operator:



x %*% y  # 1 x 2 * 2 x 3 = 1 x 3


##      [,1] [,2] [,3]
## [1,]    5   11   17


y %*% x  # dimensions do not conform: y is 2 x 3, so the right operand would need length 3


## Error: non-conformable arguments


The single-character operators all operate on vectors and generate a vector of the same length. So & and | are element-wise logical operators, while && and || are special operators that examine only the first element of each operand and generate a logical vector of length 1 (usually used in if conditions).



x <- c(T, T, F)
y <- c(T, F, F)
x & y


## [1]  TRUE FALSE FALSE


x && y


## [1] TRUE


math functions



All R math functions take vector inputs and generate vector outputs. For example:



exp(1)


## [1] 2.718


exp(c(1))


## [1] 2.718


exp(c(1, 2))


## [1] 2.718 7.389


sum(matrix(1:6, nrow = 2))  # matrix is a vector, for row/col sums, use rowSums/colSums


## [1] 21


cumsum(c(1, 2, 3))


## [1] 1 3 6


which.min(c(3, 1, 2))


## [1] 2


sqrt(c(3, 2))


## [1] 1.732 1.414


NA and NULL



NA is a valid value (it marks a missing value), while NULL means empty: no value at all.



print(NA)


## [1] NA


print(NULL)


## NULL


c(NA, 1)


## [1] NA  1


c(NULL, 1)


## [1] 1


 



 



* I find knitr integrated with the RStudio IDE very helpful for writing tutorials.

Sunday, March 30, 2014

Book review: The Geometry of Multivariate Statistics

 

This book was written by the late Professor Thomas D. Wickens, and it is by far the best book on understanding linear regression that I have read. As noted in the preface, most books on multivariate statistics approach it from two angles: algebra and computation. The algebraic approach to multivariate statistics, and to multiple regression in particular, is self-contained: built on linear algebra and probability, it can develop all the theorems and algorithms and give rigorous proofs of the correctness and error bounds of the methods. The computational approach teaches students how to run the software and interpret the results, for example reading the p-values of the regression coefficients. However, because it does not dive into the theory, the computational approach does not tell one how the F-test and the corresponding p-values change when the properties of the data change, and it leaves statements like “insignificant p-values don’t mean no correlation with the target” at a superficial level, without any deep understanding. The geometrical approach taken by Prof. Wickens has exactly this aim: to understand the meaning behind the linear algebra equations and the output of the software!

Geometry is a perfect tool for understanding multivariate statistics because, first, it can help understand linear algebra (quite a few linear algebra books take a geometric approach) and, second, it can help understand probability distributions as well. For one-dimensional distributions, we think in terms of PDF and CDF curves; for multivariate normal distributions, the PDF contours in high-dimensional space are ellipsoids. This book combines these two geometric pictures in a unified framework and develops several key concepts of multivariate statistics in only 160 pages!

The main feature of this book is that every concept is developed geometrically, without using linear algebra or calculus at all. There are two spaces in which to view the data points: the variable space (row space) and the subject space (column space). When the number of variables exceeds three, it becomes hard to visualize the data points in the variable space, yet it is still conceptually easy to see them in the subject space! Most of the concepts in this book are developed in the subject space; only occasionally does the author switch to the variable space to provide another view (e.g. in the PCA chapter). For example, the linear regression equation:

$\hat{y} = b_1 x_1 + b_2 x_2 + \cdots + b_p x_p$

(all vectors here are demeaned, so there is no offset parameter a)

Basically this equation defines a linear space spanned by x_1 to x_p. The aim of linear regression is to make \hat{y} as close to the target y as possible. In the subject space:

$y = \hat{y} + e$

where e is the residual vector, which is perpendicular to the space spanned by the x's when the distance |e| is minimized. By writing the perpendicularity constraints as equations, we can derive the normal equation of linear regression, which in an algebraic treatment is usually stated directly as $X'Xb = X'y$ rather than derived from geometric intuition.
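
Spelling that step out (a quick sketch in the same notation, writing X for the matrix whose columns are the x's):

$$e = y - Xb, \qquad X'e = 0 \;\Longrightarrow\; X'(y - Xb) = 0 \;\Longrightarrow\; X'Xb = X'y.$$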

Other multivariate statistics books (e.g. Johnson & Wichern) do introduce geometrical explanations in places, but those bits are embedded inside a large set of linear algebra equations. That makes the geometrical explanations in those books merely supportive of the algebraic equations, and therefore not self-contained! In this book, everything has to be developed geometrically; there is no corner in which the author can hide a geometric concept that has not been developed before. For example, this book gives a very good explanation of the demeaning operation. In multiple regression, demeaning introduces a space spanned by the 1 vector (1,1,…,1)', and the original y and x's share their mean components in this same space. This space is perpendicular to the residual vector e, therefore e'1 = 0! (If e'1 != 0, then e has a non-zero projection onto the 1-space, which could be absorbed into the constant offset of the regression equation.)

For practitioners, section 4.2 (goodness of fit) and chapter 5 of this book are most useful. The coefficient of determination is interpreted geometrically as:

$R^2 = \cos^2\theta$, where $\theta$ is the angle between $y$ and $\hat{y}$.

Chapter 5’s title is Configurations of multiple regression vectors, and it explains how the regression behaves under different configurations of the x's. When the predictors (x's) are highly correlated, the regression becomes unstable (more analysis in Chapter 6). The book proposes several approaches to deal with the multicollinearity problem. One of them is not to minimize |e| alone (as the position of the regression space spanned by nearly collinear x's is not stable), but |e| plus some criterion that stabilizes the position of the regression space. This leads to ridge regression and other variants. Ridge regression is l2-regularized regression, and regularization is usually explained as “controlling model complexity”. In the multicollinearity setting, geometry gives us a much better understanding of what that model complexity actually is!
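
In symbols, the stabilized criterion is the familiar ridge objective (a sketch; $\lambda$ is the regularization weight):

$$\min_b \; |y - Xb|^2 + \lambda |b|^2,$$

which keeps |e| small while shrinking b, pinning down the otherwise unstable position of the fitted vector when the x's are nearly collinear.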

Other points in this book: the geometry of linear algebra meets the geometry of probability in Chapter 6 (tests); geometric interpretations of analysis of variance (Sec 3.4 & Chapter 8), PCA (Chapter 9), and CCA (Chapter 10). Each chapter of this book refreshes some of my memory and adds something new, geometrically. Strongly recommended for anyone who works with regression problems or does data mining in general.

This last paragraph is not about this particular book, but about my motivation for reading statistics books recently. Many data miners understand regression as loss minimization, perhaps plus knowing that a small p-value is a good indication of a useful feature (which is in fact not accurate). I was such a data miner until half a year ago. But few of them know how the p-values in multiple regression (and in other models, e.g. logistic regression) are actually calculated. Software packages nowadays are very advanced and make models like linear regression/PCA/CCA seem as simple as one line of code in R/Matlab. But if we don't know them by hand and by heart and understand the various properties of these simple models, how can we feel confident when applying them? On the other hand, as researchers in data mining, how can we confidently modify existing models to avoid their weaknesses in a particular application if we have only a shallow understanding of those weaknesses? Therefore, for example, we cannot stop our understanding of regularization at the one sentence “controlling model complexity.” We need to go deeper than that, to know what is behind that abstract term “model complexity”. This is why, while writing my PhD thesis, I am also learning and re-learning statistics and machine learning models. I should have done this, say, four years ago! But it is never too late to learn, and learning them after doing several successful data mining projects gives me a new perspective.