Cross-domain Heuristic Search Challenge 2011

Normalized function value plots

In 2011, our research group participated in a hyper-heuristic competition organized by the ASAP research group at the University of Nottingham. Although we did not finish among the top entries, some of our results suggested that Machine Learning can be beneficial when coupled with meta-heuristics.

After the end of the competition, we spent some effort analyzing the results. The first aspect that drew our attention was that, although many algorithms obtained solutions with similar costs, the competition's F1 ranking placed them far apart. So we decided that, for the sake of analysis, a different ranking should be used.

We decided to rank them based on their normalized function value which, we believe, is more representative of the differences in quality between algorithms. In particular, for each instance of a problem domain, a participant's function value (obtained by taking the median of 31 repetitions) is rescaled to [0,1] by applying the following formula:

$$\hat{f}(h_k, p, i) = \frac{f(h_k, p, i) - \min_{h \in H} f(h, p, i)}{\max_{h \in H} f(h, p, i) - \min_{h \in H} f(h, p, i)}$$

where H is the set of all hyper-heuristics, h_k is the hyper-heuristic being normalized, p is the problem domain, i is the problem instance at hand, and f(h, p, i) denotes the median function value obtained by hyper-heuristic h on that instance. Below are the plots of the distributions of the results based on this measure, ordered by the median of the normalized function value (the lower the better).
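To make the computation concrete, here is a minimal R sketch of this min-max normalization. It assumes a long-format data frame with one row per (hyper-heuristic, domain, instance); the column names domain, instance and value are hypothetical and may not match those of the data frame provided in the Code section below.

library(plyr) # provides ddply for per-group transformations

# For every (domain, instance) pair, rescale the median function values of all
# hyper-heuristics to [0,1]. Groups where every hyper-heuristic ties would
# yield NaN and need special handling.
normalizeValues <- function(experiments) {
  ddply(experiments, .(domain, instance), transform,
        normalized = (value - min(value)) / (max(value) - min(value)))
}

# usage: normalized <- normalizeValues(experiments)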

SAT

Bin Packing

Personnel Scheduling

Flow Shop Scheduling

TSP

VRP

Overall

Code

The plots were produced using the R language and the ggplot2 plotting system. You can download both the code and the data frame. The suggested usage is the following (inside the R console):

library(ggplot2) # load needed libraries
library(plyr)

source("normalied.R") # load code
load("experiments.dat") # load data frame
chescPlot(experiments, "SAT") # to produce per-domain plots
chescPlot(experiments) # to produce overall plots
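
Assuming chescPlot returns the ggplot object (rather than only printing it as a side effect), a plot can also be saved to disk with ggplot2's ggsave, for example:

p <- chescPlot(experiments, "SAT")     # assumes chescPlot returns a ggplot object
ggsave("sat-normalized.pdf", plot = p, # write the plot to a PDF file
       width = 8, height = 5)          # size in inches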