Kaggle Competition Past Winner Solutions

We learn more from code, and especially from great code. It does not always have to be the 1st-place solution, because we also learn from seeing what separates a stellar solution from a merely good one. I will post solutions I come across so we can all learn to become better!

I collected the following source code and interesting discussions from past Kaggle competitions for learning purposes. Not every competition is listed, because I am collecting them manually and because for some competitions no solutions were shared. I will add more as time goes by. Thank you.


Prudential Life Insurance Assessment [Mon 23 Nov 2015 – Mon 15 Feb 2016]
Rossmann Store Sales [Wed 30 Sep 2015 – Mon 14 Dec 2015]
Airbnb New User Bookings [Wed 25 Nov 2015 – Thu 11 Feb 2016]


Walmart Recruiting: Trip Type Classification [Mon 26 Oct 2015 – Sun 27 Dec 2015]


Algorithmic Trading Challenge

Allstate Purchase Prediction Challenge

Amazon.com – Employee Access Challenge

AMS 2013-2014 Solar Energy Prediction Contest

Belkin Energy Disaggregation Competition

Challenges in Representation Learning: Facial Expression Recognition Challenge

Challenges in Representation Learning: The Black Box Learning Challenge

Challenges in Representation Learning: Multi-modal Learning

Detecting Insults in Social Commentary

EMI Music Data Science Hackathon

Galaxy Zoo – The Galaxy Challenge

Global Energy Forecasting Competition 2012 – Wind Forecasting

KDD Cup 2013 – Author-Paper Identification Challenge (Track 1)

KDD Cup 2013 – Author Disambiguation Challenge (Track 2)

Large Scale Hierarchical Text Classification

Loan Default Prediction – Imperial College London

Merck Molecular Activity Challenge

MLSP 2013 Bird Classification Challenge

Observing the Dark World

PAKDD 2014 – ASUS Malfunctional Components Prediction

Personalize Expedia Hotel Searches – ICDM 2013

Predicting a Biological Response

Predicting Closed Questions on Stack Overflow

See Click Predict Fix

See Click Predict Fix – Hackathon

StumbleUpon Evergreen Classification Challenge

The Analytics Edge (15.071x)

The Marinexplore and Cornell University Whale Detection Challenge

Walmart Recruiting – Store Sales Forecasting

Source: http://www.chioka.in/kaggle-competition-solutions/


Read More

Installing Kafka on Mac OSX

Apache Kafka is a highly scalable publish-subscribe messaging system that can serve as the data backbone in distributed applications. With Kafka’s producer-consumer model it becomes easy to implement multiple data consumers that do live monitoring as well as persistent data storage for later analysis.

STEP 1: Installation. The best way to install the latest version of the Kafka server on OS X, and to keep it up to date, is via Homebrew.

$ brew install kafka

This will also install all dependencies, such as ZooKeeper, which is required to run the server. (more…)
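
The excerpt stops at installation, but as a rough illustration of the producer-consumer model mentioned above (my own sketch, not from the original post), here is what publishing and consuming looks like with the third-party kafka-python client, assuming a broker is already running on localhost:9092 and a topic named "test-topic" exists or auto-creation is enabled:

from kafka import KafkaProducer, KafkaConsumer

# Producer: publish a couple of messages to the topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("test-topic", b"first event")
producer.send("test-topic", b"second event")
producer.flush()

# Consumer: read everything from the beginning of the topic, then give up
# after 5 seconds of silence so the script terminates.
consumer = KafkaConsumer(
    "test-topic",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.value)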

Read More

Lucene In-Memory Search Example and Sample Code

More sample code:


import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

import java.io.IOException;

public class LuceneTest {

   public static void main(String[] args) {
      // Construct a RAMDirectory to hold the in-memory representation
      // of the index.
      RAMDirectory idx = new RAMDirectory();

      try {
         // Make a writer to create the index
         IndexWriter writer =
                 new IndexWriter(idx,
                         new StandardAnalyzer(Version.LUCENE_30),
                         IndexWriter.MaxFieldLength.LIMITED);

         // Add some Document objects containing quotes
         writer.addDocument(createDocument("Theodore Roosevelt",
                 "It behooves every man to remember that the work of the " +
                         "critic, is of altogether secondary importance, and that, " +
                         "in the end, progress is accomplished by the man who does " +
                         "things."));
         writer.addDocument(createDocument("Friedrich Hayek",
                 "The case for individual freedom rests largely on the " +
                         "recognition of the inevitable and universal ignorance " +
                         "of all of us concerning a great many of the factors on " +
                         "which the achievements of our ends and welfare depend."));
         writer.addDocument(createDocument("Ayn Rand",
                 "There is nothing to take a man's freedom away from " +
                         "him, save other men. To be free, a man must be free " +
                         "of his brothers."));
         writer.addDocument(createDocument("Mohandas Gandhi",
                 "Freedom is not worth having if it does not connote " +
                         "freedom to err."));

         // Optimize and close the writer to finish building the index
         writer.optimize();
         writer.close();

         // Build an IndexSearcher using the in-memory index
         Searcher searcher = new IndexSearcher(idx);

         // Run some queries
         search(searcher, "freedom");
         search(searcher, "free");
         search(searcher, "progress or achievements");

         searcher.close();
      } catch (IOException ioe) {
         // In this example we aren't really doing any I/O, so this
         // exception should never actually be thrown.
         ioe.printStackTrace();
      } catch (ParseException pe) {
         pe.printStackTrace();
      }
   }

   /**
    * Make a Document object with an un-indexed title field and an
    * indexed content field.
    */
   private static Document createDocument(String title, String content) {
      Document doc = new Document();

      // Add the title as an unindexed field...
      doc.add(new Field("title", title, Field.Store.YES, Field.Index.NO));

      // ...and the content as an indexed, stored field. Lucene can index
      // very large chunks of text without storing the content verbatim in
      // the index; here we store it so it can be printed with the hits.
      doc.add(new Field("content", content, Field.Store.YES, Field.Index.ANALYZED));
      return doc;
   }

   /**
    * Searches for the given string in the "content" field.
    */
   private static void search(Searcher searcher, String queryString)
           throws ParseException, IOException {
      // Build a Query object against the "content" field
      QueryParser parser = new QueryParser(Version.LUCENE_30, "content",
              new StandardAnalyzer(Version.LUCENE_30));
      Query query = parser.parse(queryString);

      int hitsPerPage = 10;

      // Search for the query
      TopScoreDocCollector collector = TopScoreDocCollector.create(5 * hitsPerPage, false);
      searcher.search(query, collector);
      ScoreDoc[] hits = collector.topDocs().scoreDocs;

      int hitCount = collector.getTotalHits();
      System.out.println(hitCount + " total matching documents");

      // Examine the hits to see if there were any matches
      if (hitCount == 0) {
         System.out.println(
                 "No matches were found for \"" + queryString + "\"");
      } else {
         System.out.println("Hits for \"" +
                 queryString + "\" were found in quotes by:");

         // Iterate over the matching documents
         for (int i = 0; i < hits.length; i++) {
            ScoreDoc scoreDoc = hits[i];
            int docId = scoreDoc.doc;
            float docScore = scoreDoc.score;
            System.out.println("docId: " + docId + "\t" + "docScore: " + docScore);

            Document doc = searcher.doc(docId);

            // Print the value that we stored in the "title" field. Note
            // that this Field was not indexed, but (unlike the "content"
            // field) was stored verbatim and can be retrieved.
            System.out.println("  " + (i + 1) + ". " + doc.get("title"));
            System.out.println("Content: " + doc.get("content"));
         }
      }
   }
}





Read More


I provide basic indexing and retrieval code using the PyLucene 3.0 API. Lucene in Action (2nd Ed.) covers Lucene 3.0, but the PyLucene code samples have not been updated for the 3.0 API, only the Java ones. Unfortunately, there is currently little (no?) example PyLucene code in the blogosphere. If you have links to more Lucene 3.0 tutorials and samples, please share them in the comments.


In the spirit of LingPipe’s Lucene 2.4 in 60 seconds, here are relevant PyLucene 3.0 code snippets from my biased-text-sample project, for indexing and retrieval. (more…)
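
To give a flavour of what such snippets look like, here is my own minimal sketch (not the post's code), assuming PyLucene 3.0's flat lucene namespace, which mirrors the Java API:

import lucene

# PyLucene embeds a JVM; it must be started before any Lucene class is used.
lucene.initVM()

# --- Indexing into an in-memory directory ---
directory = lucene.RAMDirectory()
analyzer = lucene.StandardAnalyzer(lucene.Version.LUCENE_30)
writer = lucene.IndexWriter(directory, analyzer, True,
                            lucene.IndexWriter.MaxFieldLength.LIMITED)

doc = lucene.Document()
doc.add(lucene.Field("content",
                     "Freedom is not worth having if it does not connote freedom to err.",
                     lucene.Field.Store.YES, lucene.Field.Index.ANALYZED))
writer.addDocument(doc)
writer.optimize()
writer.close()

# --- Retrieval ---
searcher = lucene.IndexSearcher(directory, True)
query = lucene.QueryParser(lucene.Version.LUCENE_30, "content", analyzer).parse("freedom")
for hit in searcher.search(query, 10).scoreDocs:
    print(searcher.doc(hit.doc).get("content"))
searcher.close()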

Read More

NiFi: Thinking Differently About DataFlow

Recently a question was posed to the Apache NiFi (Incubating) Developer Mailing List about how best to use Apache NiFi to perform Extract, Transform, Load (ETL) types of tasks. The question was “Is it possible to have NiFi service setup and running and allow for multiple dataflows to be designed and deployed (running) at the same time?”

The idea here was to create several disparate dataflows that run alongside one another in parallel. Data comes from Source X and it’s processed this way. That’s one dataflow. Other data comes from Source Y and it’s processed this way. That’s a second dataflow entirely. Typically, this is how we think about dataflow when we design it with an ETL tool. And this is a pretty common question for new NiFi users. With NiFi, though, we tend to think about designing dataflows a little bit differently. Rather than having several disparate, “stovepiped” flows, the preferred approach with NiFi is to have several inputs feed into the same dataflow. Data can then be easily routed (via RouteOnAttribute, for example) to “one-off subflows” if need be.

One of the benefits of having several disparate dataflows, though, is that it makes it much easier to answer when someone comes to you and says “I sent you a file last week. What did you do with it?” or “How do you process data that comes from this source?” You may not know exactly what happened to a specific file that they sent you, step by step, because of the different decision points in the flow, but at least you have a good idea by looking at the layout of the dataflow. (more…)

Read More

Apache Nifi (aka HDF) data flow across data center

Short Description:

This article provides a step-by-step overview of how to set up a cross-data-center data flow using Apache NiFi.


Traditionally, enterprises have dealt with data flows or data movement within their own data centers. But as the world has become flatter and a global presence has become the norm for companies, enterprises face the challenge of collecting and connecting data from their global footprint. This problem confronted the NSA a decade ago, and they came up with a solution using a product that was later named Apache NiFi.

Apache NiFi is an easy-to-use, powerful, and reliable system for processing and distributing data. Within NiFi, as you will see, I will be able to build a global data flow with minimal to no coding. You can learn the details about NiFi from the Apache NiFi website; it is one of the most well-documented Apache projects.

The focus of this article is one specific feature within NiFi that I believe no other software product does as well as NiFi: “site-to-site” protocol data transfer.

Business use case

One of the classic business problems is pushing data from a location that has a small IT footprint to the main data center, where all the data is collected and connected. This small IT footprint could be an oil rig in the middle of the ocean, a small bank branch in a remote mountain town, a sensor on a vehicle, and so on. The business wants a mechanism to push the data generated at these various locations to, say, headquarters in a reliable fashion, with all the bells and whistles of an enterprise data flow: lineage, security, provenance, auditing, ease of operations, and so on.

The data generated at my sources comes in various formats such as txt, csv, json, xml, audio, and image, and in sizes ranging from a few MBs to GBs. I want to break these files into smaller chunks, since I have low bandwidth at my source data centers, then stitch them back together at the destination and load the result into my centralized Hadoop data lake.

Solution Architecture

Apache Nifi (aka Hortonworks Data Flow) is a perfect tool to solve this problem. The overall architecture looks something like Fig 1.

We have Australian and Russian data centers from which we want to move data to the US headquarters. We will have what we call edge instances of NiFi sitting in the Australian and Russian data centers, acting as data acquisition points. We will then have a NiFi processing cluster in the US where we receive and process all the data coming from these global locations. We will build this end-to-end flow without any coding, just with a drag-and-drop GUI.

Build the data flow

Here are the high level steps to build the overall data flow.

Step 1) Set up a NiFi instance at the Australian data center that will act as the data acquisition instance. I will create a local instance of NiFi that stands in for my Australian data center.

Step 2) Set up a NiFi instance on a CentOS-based virtual machine that will act as my NiFi data processing instance. This could be a NiFi cluster as well, but in my case it will be just a single instance.

Step 3) Build the NiFi data flow for the processing instance. This will have an input port, which indicates that this instance can accept data from other NiFi instances (a minimal site-to-site configuration sketch follows these steps).

Step 4) Build the NiFi data flow for the data acquisition instance. This will have a “remote process group” that talks to the NiFi data processing instance via the site-to-site protocol.

Step 5) Test out the overall flow.
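
As a rough sketch (my own, not from the attached document), site-to-site must be enabled in nifi.properties on the receiving processing instance before a remote process group can deliver to its input port; the port number below is a placeholder:

# nifi.properties on the processing (receiving) instance -- values are placeholders
# Port that remote process groups will connect to for site-to-site transfers
nifi.remote.input.socket.port=10000
# Set to true to require two-way TLS for site-to-site connections
nifi.remote.input.secure=false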

Attached is the document that provides detailed step-by-step instructions on how to set this up.



Read More

Accurately Measuring Model Prediction Error

When assessing the quality of a model, being able to accurately measure its prediction error is of key importance. Often, however, techniques of measuring error are used that give grossly misleading results. This can lead to the phenomenon of over-fitting where a model may fit the training data very well, but will do a poor job of predicting results for new data not used in model training. Here is an overview of methods to accurately measure model prediction error.

Measuring Error

When building prediction models, the primary goal should be to make a model that most accurately predicts the desired target value for new data. The measure of model error that is used should be one that achieves this goal. In practice, however, many modelers instead report a measure of model error that is based not on the error for new data but instead on the error for the very same data that was used to train the model. The use of this incorrect error measure can lead to the selection of an inferior and inaccurate model.

Naturally, any model is highly optimized for the data it was trained on. The expected error the model exhibits on new data will always be higher than the error it exhibits on the training data. As an example, we could go out and sample 100 people and create a regression model to predict an individual’s happiness based on their wealth. We can record the squared error for how well our model does on this training set of a hundred people. If we then sampled a different 100 people from the population and applied our model to this new group, the squared error would almost always be higher in this second case.

It is helpful to illustrate this fact with an equation. We can develop a relationship between how well a model predicts on new data (its true prediction error and the thing we really care about) and how well it predicts on the training data (which is what many modelers in fact measure).

True Prediction Error = Training Error + Training Optimism

Here, Training Optimism is basically a measure of how much worse our model does on new data compared to the training data. The more optimistic we are, the better our training error looks relative to the true error, and the worse our training error is as an approximation of the true error.

The Danger of Overfitting

In general, we would like to be able to make the claim that the optimism is constant for a given training set. If this were true, we could argue that the model that minimizes training error will also be the model that minimizes the true prediction error for new data. As a consequence, even though our reported training error might be a bit optimistic, using it to compare models would still lead us to select the best model amongst those we have available. So we could in effect ignore the distinction between the true error and training error for model selection purposes.

Unfortunately, this does not work. It turns out that the optimism is a function of model complexity: as complexity increases so does optimism. Thus our relationship above for true prediction error becomes something like this:

True Prediction Error = Training Error + f(Model Complexity)

How is the optimism related to model complexity? As model complexity increases (for instance, by adding parameter terms in a linear regression) the model will always do a better job fitting the training data. This is a fundamental property of statistical models [1]. In our happiness prediction model, we could use people’s middle initials as predictor variables and the training error would go down. We could use stock prices on January 1st, 1990 for a now-bankrupt company, and the error would go down. We could even just roll dice to get a data series and the error would still go down. No matter how unrelated the additional factors are to the model, adding them will cause training error to decrease.

But at the same time, as we increase model complexity we can see a change in the true prediction accuracy (what we really care about). If we build a model for happiness that incorporates clearly unrelated factors such as stock ticker prices from a century ago, we can say with certainty that such a model must necessarily be worse than the model without the stock ticker prices. Although the stock prices will decrease our training error (if only very slightly), they must also increase our prediction error on new data, since they increase the variability of the model’s predictions, making new predictions worse. Furthermore, even adding clearly relevant variables to a model can in fact increase the true prediction error if the signal-to-noise ratio of those variables is weak.

Let’s see what this looks like in practice. We can implement our wealth and happiness model as a linear regression. We can start with the simplest regression possible, Happiness = a + b·Wealth + ε, and then add polynomial terms to model nonlinear effects. Each polynomial term we add increases model complexity. So we could get an intermediate level of complexity with a quadratic model like Happiness = a + b·Wealth + c·Wealth² + ε, or a high level of complexity with a higher-order polynomial like

Happiness = a + b·Wealth + c·Wealth² + d·Wealth³ + e·Wealth⁴ + f·Wealth⁵ + g·Wealth⁶ + ε.


The figure below illustrates the relationship between the training error, the true prediction error, and optimism for a model like this. The scatter plots on top illustrate sample data with regression lines corresponding to different levels of model complexity.

Training, optimism and true prediction error.

Increasing the model complexity will always decrease the model training error. At very high levels of complexity, we should be able to in effect perfectly predict every single point in the training data set and the training error should be near 0. Similarly, the true prediction error initially falls. The linear model without polynomial terms seems a little too simple for this data set. However, once we pass a certain point, the true prediction error starts to rise. At these high levels of complexity, the additional complexity we are adding helps us fit our training data, but it causes the model to do a worse job of predicting new data.
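
A quick numerical sketch of this behaviour (my own illustration, not the article's code): fit polynomials of increasing degree to simulated wealth/happiness data and compare the error on the fitting sample with the error on a held-out sample. Training error falls with every added degree, while the held-out error stops improving and typically worsens at the higher degrees (exact numbers depend on the random seed).

import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Simulated "wealth vs. happiness" data: a noisy linear relationship.
wealth = rng.uniform(0, 10, size=200)
happiness = 2.0 + 0.5 * wealth + rng.normal(scale=2.0, size=200)

# Hold out part of the sample to stand in for "new data".
train_x, test_x = wealth[:100], wealth[100:]
train_y, test_y = happiness[:100], happiness[100:]

for degree in (1, 2, 5, 9):
    model = Polynomial.fit(train_x, train_y, degree)      # fit a polynomial of this degree
    train_mse = np.mean((model(train_x) - train_y) ** 2)  # error on the data used for fitting
    test_mse = np.mean((model(test_x) - test_y) ** 2)     # error on held-out "new" data
    print(f"degree {degree}: training MSE {train_mse:.2f}, holdout MSE {test_mse:.2f}")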

This is a case of overfitting the training data. In this region the model training algorithm is focusing on precisely matching random chance variability in the training set that is not present in the actual population. We can see this most markedly in the model that fits every point of the training data; clearly this is too tight a fit to the training data.

Preventing overfitting is a key to building robust and accurate prediction models. Overfitting is very easy to miss when only looking at the training error curve. To detect overfitting you need to look at the true prediction error curve. Of course, it is impossible to measure the exact true prediction curve (unless you have the complete data set for your entire population), but there are many different ways that have been developed to attempt to estimate it with great accuracy. The second section of this work will look at a variety of techniques to accurately estimate the model’s true prediction error.

An Example of the Cost of Poorly Measuring Error

Let’s look at a fairly common modeling workflow and use it to illustrate the pitfalls of using training error in place of the true prediction error [2]. We’ll start by generating 100 simulated data points. Each data point has a target value we are trying to predict along with 50 different parameters. For instance, the target value could be the growth rate of a species of tree and the parameters could be precipitation, moisture levels, pressure levels, latitude, longitude, etc. In this case, however, we are going to generate every single data point completely randomly. Each number in the data set is completely independent of all the others, and there is no relationship between any of them.

For this data set, we create a linear regression model where we predict the target value using the fifty regression variables. Since we know everything is unrelated, we would hope to find an R² of 0. Unfortunately, that is not the case; instead we find an R² of 0.5. That’s quite impressive given that our data is pure noise! However, we want to confirm this result, so we run an F-test. This test measures the statistical significance of the overall regression to determine whether it is better than what would be expected by chance. Using the F-test we find a p-value of 0.53, which indicates our regression is not significant.

If we stopped there, everything would be fine; we would throw out our model, which would be the right choice (it is pure noise after all!). However, a common next step would be to throw out only the parameters that were poor predictors, keep the ones that are relatively good predictors, and run the regression again. Let’s say we kept the parameters that were significant at the 25% level, of which there are 21 in this example. Then we rerun our regression.

In this second regression we would find:

  • An R² of 0.36
  • A p-value of 5×10⁻⁴
  • 6 parameters significant at the 5% level

Again, this data was pure noise; there was absolutely no relationship in it. But from our data we find a highly significant regression, a respectable R² (one that would be considered high in some fields, such as the social sciences), and 6 significant parameters!

This is quite a troubling result, and this procedure is not an uncommon one, yet it clearly leads to incredibly misleading results. It shows how easily a statistical analysis can be heavily biased if care is not taken to accurately measure error.
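
The procedure is easy to reproduce. Below is a minimal sketch (my own, not the author's code; the exact R² and p-values will differ with the random seed) that regresses pure noise on 50 pure-noise predictors and then refits using only the predictors that looked "promising" in the first pass:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# 100 observations, 50 predictors, all pure noise: there is nothing to find.
X = rng.normal(size=(100, 50))
y = rng.normal(size=100)

full = sm.OLS(y, sm.add_constant(X)).fit()
print(full.rsquared, full.f_pvalue)        # non-trivial R², insignificant overall F-test

# Keep only the predictors that look "promising" (p < 0.25) and refit.
keep = [i for i, p in enumerate(full.pvalues[1:]) if p < 0.25]
refit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
print(refit.rsquared, refit.f_pvalue)      # now the overall regression looks significant
print((refit.pvalues[1:] < 0.05).sum())    # and several parameters look significant too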


Read More


A time series is a sequence of data points, typically consisting of successive measurements made over a time interval. Forecasting is the use of a model to predict the future based on past information. This problem is a bit different from the more familiar pure cross-sectional problem.

In this post, I take a recent Kaggle challenge as an example, sharing the findings and tricks I used. The competition – Rossmann Store Sales – attracted 3,738 data scientists, making it the second most popular competition ever by number of participants. Rossmann is a drugstore giant that operates over 3,000 stores across Europe, and it challenged Kagglers to forecast 6 weeks of daily sales for 1,115 stores. The data mainly comprises store index, store type, competitor store information, holiday events, promotion events, whether the store was open, customer counts, and the sales we are tasked to predict.

When doing time series forecasting, a few things specific to time series that you need to know about are:

  1. time-dependent features.
  2. validating by time.

I will walk through them later; a quick sketch of the second point appears below. (more…)
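
The sketch below is my own illustration rather than code from the post; the Date column name follows the Rossmann data layout. Instead of a random split, it holds out the most recent six weeks of training history as a validation set and derives time-dependent features from the date itself:

import pandas as pd

# Rossmann-style training data with a Date column.
train = pd.read_csv("train.csv", parse_dates=["Date"])

# Validate by time: the last 6 weeks of history stand in for the 6-week test horizon.
cutoff = train["Date"].max() - pd.Timedelta(weeks=6)
fit_set = train[train["Date"] <= cutoff].copy()
val_set = train[train["Date"] > cutoff].copy()

# Simple time-dependent features derived from the date.
for df in (fit_set, val_set):
    df["DayOfWeek"] = df["Date"].dt.dayofweek
    df["Month"] = df["Date"].dt.month
    df["Year"] = df["Date"].dt.year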

Read More

Getting Started with Markov Chains

There are a number of R packages devoted to sophisticated applications of Markov chains. These include msm and SemiMarkov for fitting multistate models to panel data, mstate for survival analysis applications, TPmsm for estimating transition probabilities for 3-state progressive disease models, heemod for applying Markov models to health care economic applications, HMM and depmixS4 for fitting Hidden Markov Models, and mcmc for working with Monte Carlo Markov Chains. All of these assume considerable knowledge of the underlying theory. To my knowledge, only DTMCPack and the relatively recent package markovchain were written to facilitate basic computations with Markov chains.

In this post, we’ll explore some basic properties of discrete time Markov chains using the functions provided by the markovchain package, supplemented with standard R functions and a few functions from other contributed packages. Chapter 11 of Snell’s online probability book will be our guide. The calculations displayed here illustrate some of the theory developed in that document; in the text below, section numbers refer to it.

A large part of working with discrete time Markov chains involves manipulating the matrix of transition probabilities associated with the chain. This first section of code replicates the Oz transition probability matrix from section 11.1 and uses the plotmat() function from the diagram package to illustrate it. Then, the efficient operator %^% from the expm package is used to raise the Oz matrix to the third power. Finally, left matrix multiplication of Oz^3 by the distribution vector u = (1/3, 1/3, 1/3) gives the weather forecast three days ahead.
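
The post's code uses R (markovchain, diagram, expm); the same arithmetic can be cross-checked in a few lines of Python/NumPy, using the Land of Oz matrix from section 11.1 with states ordered Rain, Nice, Snow:

import numpy as np

# Land of Oz transition matrix from section 11.1 (rows/columns: Rain, Nice, Snow).
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

# Three-step transition probabilities: P raised to the third power.
P3 = np.linalg.matrix_power(P, 3)

# Left-multiply the distribution u = (1/3, 1/3, 1/3) by P^3 to get the
# weather forecast three days ahead.
u = np.array([1 / 3, 1 / 3, 1 / 3])
print(u @ P3)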


Read More

Hadoop filesystem at Twitter

Twitter runs multiple large Hadoop clusters that are among the biggest in the world. Hadoop is at the core of our data platform and provides vast storage for analytics of user actions on Twitter. In this post, we will highlight our contributions to ViewFs, the client-side Hadoop filesystem view, and its versatile usage here.

ViewFs makes the interaction with our HDFS infrastructure as simple as a single namespace spanning all datacenters and clusters. HDFS Federation helps with scaling the filesystem to our needs for the number of files and directories, while NameNode High Availability helps with reliability within a namespace. These features combined add significant complexity to managing and using our several large Hadoop clusters with varying versions. ViewFs removes the need for us to remember complicated URLs by using simple paths. Configuring ViewFs itself is a complex task at our scale. Thus, we run TwitterViewFs, a ViewFs extension we developed that dynamically generates a new configuration, so we have a simple holistic filesystem view.

Hadoop at Twitter: scalability and interoperability
Our Hadoop filesystems host over 300PB of data on tens of thousands of servers. We scale HDFS by federating multiple namespaces. This approach allows us to sustain a high HDFS object count (inodes and blocks) without resorting to a single large Java heap size that would suffer from long GC pauses and the inability to use compressed oops. While this approach is great for scaling, it is not easy for us to use because each member namespace in the federation has its own URI. We use ViewFs to provide an illusion of a single namespace within a single cluster. As seen in Figure 1, under the main logical URI we create a ViewFs mount table with links to the appropriate mount point namespaces for paths beginning with /user, /tmp, and /logs, respectively.
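
For readers unfamiliar with ViewFs, a mount table like the one described above is declared in core-site.xml; the sketch below uses a hypothetical mount-table name and NameNode hostnames, not Twitter's actual configuration:

<property>
  <name>fs.defaultFS</name>
  <value>viewfs://dw</value>
</property>
<!-- Each logical path is linked to the namespace that actually hosts it. -->
<property>
  <name>fs.viewfs.mounttable.dw.link./user</name>
  <value>hdfs://user-nn.example.com:8020/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.dw.link./tmp</name>
  <value>hdfs://tmp-nn.example.com:8020/tmp</value>
</property>
<property>
  <name>fs.viewfs.mounttable.dw.link./logs</name>
  <value>hdfs://logs-nn.example.com:8020/logs</value>
</property>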

Read More