There are 3 posts in this series:
1. An F# wrapper for Weka. The minimal wrapper in F# for Weka.
2. More features. (not available yet) This post will contain improvements over the minimal wrapper, e.g. more Dataset processing functions, some plotting functionality, etc.
3. Tutorial/Usage examples. (this post) This post is for end users who may have no interest in reading the implementation details, but rather want to know how to use this wrapper to perform data mining tasks in .Net.
The easiest way is to use Visual Studio 2010 with the latest F# PowerPack. Download the release/source code at http://wekasharp.codeplex.com/ and change the dll references to the ones in the bin\Release folder.
Visual Studio 2008 should also be able to open the project file. Only the parallel functionality cannot be used in 2008, as parallel sequences (PSeq) were only added in .Net 4.0.
Support for Mono is not ready yet; however, it should be straightforward to figure out. All necessary runtimes are included in the release, so the user does not need to download Weka or the IKVM runtime separately.
The user is encouraged to look at the script code in Script.fsx and run it to get a general feel for the wrapper. These examples cover typical usage of WekaSharp.
Reading the other two posts is also very useful for using WekaSharp and, more importantly, for modifying the source code of WekaSharp to your own needs.
As Weka is licensed under the GPL, WekaSharp is also released under the GPL.
The Efficiency Concern
People may wonder how fast the wrapper is compared to the original Weka in Java. I haven’t thoroughly tested this yet. Some casual tests show that they are about the same, at least on Windows. For instance, I ran a 3-fold cross validation on the letter dataset using J48 decision trees; both the wrapper and Weka (run from its GUI) took about 20 seconds. It is quite surprising how good a job IKVM’s compiler does.
As I will show later, WekaSharp has some parallel constructs, which enable us to utilize multiple cores more conveniently.
I’ve provided a module for profiling. The design is similar to Matlab’s, but better:) You can use the Profile.tic() and Profile.toc(“str”) pair to time the execution of F# code, or you can obtain multiple independent timers by using Profile.getWatch().
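For example, a profiling session might look like the following (the tic/toc usage follows the names given above; the exact return type of getWatch is an assumption):

let timingExample () =
    // time one region of code with the Matlab-style tic/toc pair
    Profile.tic()
    let sum = List.fold (+) 0 [1 .. 1000000]
    Profile.toc("summing: ")
    // when timed regions overlap, use a separate timer instead
    // (assumed to behave like a System.Diagnostics.Stopwatch)
    let watch = Profile.getWatch()
    sum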
The Dataset and IO
The Dataset module contains functions to manipulate datasets. Most of the functions are pure, i.e., they do not modify the input dataset but create and return a new dataset after processing.
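To illustrate this pure style, here is a minimal sketch of what such a function can look like, written directly against the IKVM-compiled Weka classes (an illustration of the style only, not WekaSharp’s actual implementation):

open weka.core

// A "pure" dataset transformation in the style of the Dataset module:
// copy the input, modify the copy, return the copy -- the input is untouched.
// (Illustrative sketch; WekaSharp's actual functions may differ.)
let setLastAttributeAsClass (ds: Instances) =
    let copy = new Instances(ds)                   // Weka's copy constructor
    copy.setClassIndex(copy.numAttributes() - 1)   // label = last column
    copy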
Currently four data file formats are supported: Weka’s ARFF, LibSVM, CSV and SVMLight. Both loading and saving are supported for each, i.e. there are eight IO functions.
The following code shows how to load an ARFF dataset and set the last column as the label:
// (file paths and loading helper names below are illustrative;
//  see Script.fsx in the release for the exact API)
let sonar =
    @"d:\data\sonar.arff"
    |> Dataset.readArff
    |> Dataset.setClassIndexWithLastAttribute
let iris =
    @"d:\data\iris.arff"
    |> Dataset.readArff
    |> Dataset.setClassIndexWithLastAttribute
Data mining algorithms usually have parameters, sometimes very complicated ones. F# provides a very convenient syntax construct for optional parameters.
For example, the full parameter set of an SVM includes the SVM type, the kernel type, the C value, the parameters for the kernel, etc. Maybe you just need to set the C value: Parameter.SVM.MakePara(C = c).
Here Parameter is a module for parameters setting. For each data mining algorithm, there are two members:
* DefaultPara. The default parameter string, the same as the one used in the Weka GUI.
* MakePara. A function to make different parameters.
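The MakePara style is built on F#’s optional parameters for static members. The following self-contained sketch (not WekaSharp’s actual code) shows the mechanism; the option flags are only Weka-flavored illustrations:

// A minimal sketch of the MakePara pattern using F# optional parameters.
// (Not WekaSharp's actual implementation -- for illustration only.)
type SVMPara() =
    static member MakePara(?C: float, ?kernelType: string) =
        let c = defaultArg C 1.0                  // default C value
        let kernel = defaultArg kernelType "RBF"  // default kernel
        sprintf "-C %f -K %s" c kernel            // Weka-style option string

// callers set only the parameters they care about:
let p1 = SVMPara.MakePara(C = 10.0)
let p2 = SVMPara.MakePara(kernelType = "Linear", C = 10.0)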
Here are several more complicated examples of .MakePara method for different algorithms:
Parameter.SVM.MakePara(kernelType = Parameter.SVMKernelType.LinearKernel, C = 10.0)
Parameter.KMeans.MakePara(K=5, maxIter=10, seed=10)
Parameter.KNN.MakePara(distanceFunction = Parameter.Manhattan, K = 3)
You can also use the IDE support (IntelliSense) to find out which parameters a data mining algorithm supports.
Classification and its evaluation
WekaSharp supports most of the common classification algorithms:
type ClassifierType =
J48 | LogReg | NeuralNet | KNN | NaiveBayes | SVM | LibLinear | LinReg | AdaBoost | Bagging
There are three types of classification tasks:
type ClaEvalTask =
| CrossValidation of int * Dataset * ClassifierType * parameters
| RandomSplit of float * Dataset * ClassifierType * parameters
| TrainTest of Dataset * Dataset * ClassifierType * parameters
To run a task, you use the evalClassify function in the Eval module. The following code shows a complete example using J48 as the classifier:
(* playing decision trees on Iris dataset *)
// load the dataset
// (the loading helper names are illustrative; see Script.fsx for the exact API)
let iris =
    @"d:\data\iris.arff"
    |> Dataset.readArff
    |> Dataset.setClassIndexWithLastAttribute
// describe 3 kinds of classification tasks
let j48Tt = TrainTest(iris, iris, ClassifierType.J48, Parameter.J48.DefaultPara)
let j48Cv = CrossValidation(5, iris, ClassifierType.J48, Parameter.J48.DefaultPara)
let j48Rs = RandomSplit(0.7, iris, ClassifierType.J48, Parameter.J48.DefaultPara)
// perform the task and get result
let ttAccuracy = j48Tt |> Eval.evalClassify |> Eval.getAccuracy
let cvAccuracy = j48Cv |> Eval.evalClassify |> Eval.getAccuracy
let rsAccuracy = j48Rs |> Eval.evalClassify |> Eval.getAccuracy
The evalClassify function returns a result object; you can use “.” on that object in the IDE to discover the various kinds of results available. Above, we use predefined functions to extract the accuracy from it.
Clustering and its evaluation
Performing clustering is very similar to classification. There are two types of clustering tasks:
type CluEvalTask =
| ClusterWithLabel of Dataset * ClustererType * parameters
| DefaultCluster of Dataset * ClustererType * parameters
ClusterWithLabel means that the label of the dataset will be used for evaluation. DefaultCluster does not require the dataset to have labels (in fact, it does not allow the dataset to have labels either), so the result only contains the cluster assignments, not accuracy, etc.
The following code shows a complete clustering example:
// (the loading helper names are illustrative; see Script.fsx for the exact API)
let irisLabeled =
    @"d:\data\iris.arff"
    |> Dataset.readArff
    |> Dataset.setClassIndexWithLastAttribute
let kmeansTask = ClusterWithLabel(irisLabeled, ClustererType.KMeans, Parameter.KMeans.MakePara(K=3))
let emTask = ClusterWithLabel(irisLabeled, ClustererType.EM, Parameter.EM.MakePara(K=3))
let dbscanTask = ClusterWithLabel(irisLabeled, ClustererType.DBScan, Parameter.DBScan.DefaultPara)
let kmeansResult = Eval.evalClustering kmeansTask |> Eval.getClusterSummary
let emResult = Eval.evalClustering emTask |> Eval.getClusterSummary
let dbscanResult = Eval.evalClustering dbscanTask |> Eval.getClusterSummary
Bulk & parallel processing tasks
Sometimes you need to run multiple tasks. For example, you may want to run the same task several times to see the mean and variance of the result, to try different parameters for an algorithm, or simply to run several different data mining tasks.
The following example shows how to create a bulk of tasks and run them:
// load the data set
// (the loading helper names are illustrative; see Script.fsx for the exact API)
let sonar =
    @"d:\data\sonar.arff"
    |> Dataset.readArff
    |> Dataset.setClassIndexWithLastAttribute
// set different parameters
let Cs = [0.01; 0.1; 1.; 10.; 50.; 100.; 500.; 1000.; 2000.; 5000. ]
// make the tasks with the parameter set
let tasks =
    Cs
    |> List.map (fun c -> Parameter.SVM.MakePara(C = c))
    |> List.map (fun p -> CrossValidation(3, sonar, ClassifierType.SVM, p))
// run the tasks sequentially and collect the accuracies
Profile.tic()
let results =
    tasks
    |> List.map Eval.evalClassify
    |> List.map Eval.getAccuracy
Profile.toc("sequential time: ")
Here I created different SVM tasks for different C values, run them and get the accuracy as a list.
F# provides very easy syntax for performing multiple tasks at the same time, so I provide the evalBulkClassifyParallel function:
Profile.tic()
let resultsParallel =
    tasks
    |> Eval.evalBulkClassifyParallel
    |> List.map Eval.getAccuracy
Profile.toc("parallel (PSeq) time: ")
// sequential time: : 9767.804800 ms
// parallel (PSeq) time: : 6154.715500 ms
As the profiling shows, on a two-core machine, executing the tasks in parallel does boost the speed.
The plotting functionality is not finished yet, but it is already quite usable. To plot the accuracies for the different C values in the SVM experiment, we can use:
// do the plot
lc.column(y = results, xname = "different C", yname = "Accuracy", title = "SVM on sonar",
          isValueShownAsLabel = true) |> display
Making Datasets from Memory
All the above examples use data from files. In practice, we may want to convert data already in memory, e.g. in an array, into a Weka dataset. The following example shows how to create a dataset from F# arrays:
(* create dataset from F# arrays *)
// make the data array
let data = [| 0.; 0.;
              1.; 1.;
              0.; 1.;
              1.; 0.; |]
let xorArray = Array2D.init 4 2 (fun i j -> data.[i*2 + j])
// make weka dataset from array
let xor0 = Dataset.from2DArray xorArray false
// add labels
let xor = xor0 |> Dataset.addClassLabels ["T"; "T"; "F"; "F"]
// make svm tasks
let rbfTask = TrainTest(xor, xor, ClassifierType.SVM, Parameter.SVM.DefaultPara)
let linearTask = TrainTest(xor, xor, ClassifierType.SVM, Parameter.SVM.MakePara(kernelType = Parameter.SVMKernelType.LinearKernel, C = 10.0) )
// rbf svm gets 100% accuracy
let rbfAccuracy = rbfTask |> Eval.evalClassify |> Eval.getAccuracy
// linear svm does not work on XOR data set
let linearAccuracy = linearTask |> Eval.evalClassify |> Eval.getAccuracy
As a user of WekaSharp, I already benefit from it: I can both process the data and run data mining algorithms over it in F#. Previously, I needed to write my data to an ARFF file, call Weka, and parse the output to get the result. The F# solution is far more declarative.
Another benefit is that we can write extensions to Weka in F#!