Imbalanced data sets are a special case of classification problem in which the class distribution is not uniform among the classes. Typically, they are composed of two classes: the majority (negative) class and the minority (positive) class.
These data sets pose a challenging problem for Data Mining, since standard classification algorithms usually assume a balanced training set, which introduces a bias towards the majority class. Each data file has the following structure:
 @relation: Name of the data set
 @attribute: Description of an attribute (one for each attribute)
 @inputs: List with the names of the input attributes
 @output: Name of the output attribute
 @data: Starting tag of the data
The rest of the file contains all the examples belonging to the data set, expressed in comma-separated values format.
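As an illustration, a file following the tags above might look like the hypothetical snippet below, and can be read with a few lines of Python. This is a minimal sketch based on the description in this page, not an official KEEL reader; the example attribute names are invented.

```python
# Hypothetical data file following the header tags described above.
example = """@relation iris_example
@attribute sepalLength real [4.3, 7.9]
@attribute class {positive, negative}
@inputs sepalLength
@output class
@data
5.1, positive
6.2, negative
"""

def parse_keel(text):
    """Minimal illustrative parser: collect @-tag values, then read
    comma-separated rows after the @data tag."""
    header, rows, in_data = {}, [], False
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if in_data:
            rows.append([v.strip() for v in line.split(",")])
        elif line.lower().startswith("@data"):
            in_data = True  # everything after this tag is data
        else:
            tag, _, value = line.partition(" ")
            header.setdefault(tag.lstrip("@"), []).append(value)
    return header, rows

header, rows = parse_keel(example)
```

After parsing, `header` maps each tag to the list of values seen for it (one entry per `@attribute` line), and `rows` holds the examples as lists of strings.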
All the Imbalanced data sets presented in this webpage are partitioned using 5-fold stratified cross-validation. Note that the data set is divided into 5 folds (rather than more) so that a sufficient number of minority-class examples is available in the test partitions. In this way, the test partition examples are more representative of the underlying knowledge.
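The stratified partitioning described above can be sketched in plain Python: each class's indices are dealt round-robin across the folds, so every fold keeps (approximately) the original class proportions. This is an illustrative sketch, not the exact procedure used to build the published partitions:

```python
from collections import defaultdict
import random

def stratified_kfold(y, k=5, seed=0):
    """Assign example indices to k folds so each fold preserves the
    class proportions of y (illustrative sketch)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    folds = [[] for _ in range(k)]
    for label, idx in by_class.items():
        rng.shuffle(idx)
        for j, i in enumerate(idx):
            folds[j % k].append(i)  # deal this class's indices round-robin
    return folds
```

With 90 negative and 10 positive examples, every fold receives 18 negatives and 2 positives, so each test partition still contains minority-class examples.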
We divide our Imbalanced data sets into the following sections:
 Imbalance ratio between 1.5 and 9
 Imbalance ratio higher than 9 - Part I
 Imbalance ratio higher than 9 - Part II
 Imbalance ratio higher than 9 - Part III
 Multiple class imbalanced problems
 Noisy and Borderline Examples
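The imbalance ratio (IR) used to organise these sections is the number of majority-class examples divided by the number of minority-class examples; for instance, a data set with 90 negative and 10 positive examples has IR = 9. A one-line helper:

```python
from collections import Counter

def imbalance_ratio(y):
    """IR = (# majority-class examples) / (# minority-class examples)."""
    counts = Counter(y)
    return max(counts.values()) / min(counts.values())
```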
Imbalance ratio between 1.5 and 9
From Fernández, A., García, S., del Jesus, M. J., and Herrera, F. 2008. A study of the behaviour of linguistic fuzzy rule based classification systems in the framework of imbalanced datasets. Fuzzy Sets and Systems 159, 18 (Sep. 2008), 2378-2398. 

Below you can find all the Imbalanced data sets available with imbalance ratio between 1.5 and 9. For each data set, its name, number of instances, number of attributes (Real/Integer/Nominal valued) and imbalance ratio value are shown.
The table allows you to download each data set in KEEL format (inside a ZIP file). Additionally, each data set can be obtained already partitioned, by means of a 5-fold cross-validation procedure.
By clicking on the column headers, you can order the table by name (alphabetically), by the number of examples, by the number of attributes or by IR. Clicking again will sort the rows in reverse order.
Imbalance ratio higher than 9 - Part I
From Fernández, A., García, S., del Jesus, M. J., and Herrera, F. 2008. A study of the behaviour of linguistic fuzzy rule based classification systems in the framework of imbalanced datasets. Fuzzy Sets and Systems 159, 18 (Sep. 2008), 2378-2398. 

From Fernández, A., del Jesus, M. J., and Herrera, F. 2009. Hierarchical fuzzy rule based classification systems with genetic rule selection for imbalanced datasets. Int. J. Approx. Reasoning 50, 3 (Mar. 2009), 561-577. 

Below you can find the first block of the Imbalanced data sets available with imbalance ratio higher than 9. For each data set, its name, number of instances, number of attributes (Real/Integer/Nominal valued) and imbalance ratio value are shown.
The table allows you to download each data set in KEEL format (inside a ZIP file). Additionally, each data set can be obtained already partitioned, by means of a 5-fold cross-validation procedure.
By clicking on the column headers, you can order the table by name (alphabetically), by the number of examples, by the number of attributes or by IR. Clicking again will sort the rows in reverse order.
Below you can find the second block of the Imbalanced data sets available with imbalance ratio higher than 9. For each data set, its name, number of instances, number of attributes (Real/Integer/Nominal valued) and imbalance ratio value are shown.
The table allows you to download each data set in KEEL format (inside a ZIP file). Additionally, each data set can be obtained already partitioned, by means of a 5-fold cross-validation procedure.
By clicking on the column headers, you can order the table by name (alphabetically), by the number of examples, by the number of attributes or by IR. Clicking again will sort the rows in reverse order.
Below you can find the third block of the Imbalanced data sets available with imbalance ratio higher than 9. For each data set, its name, number of instances, number of attributes (Real/Integer/Nominal valued) and imbalance ratio value are shown.
The table allows you to download each data set in KEEL format (inside a ZIP file). Additionally, each data set can be obtained already partitioned, by means of a 5-fold cross-validation procedure.
By clicking on the column headers, you can order the table by name (alphabetically), by the number of examples, by the number of attributes or by IR. Clicking again will sort the rows in reverse order.
Below you can find all the Multiclass Imbalanced data sets available. For each data set, its name, number of instances, number of attributes (Real/Integer/Nominal valued) and imbalance ratio value are shown.
The table allows you to download each data set in KEEL format (inside a ZIP file). Additionally, each data set can be obtained already partitioned, by means of a 5-fold cross-validation procedure.
By clicking on the column headers, you can order the table by name (alphabetically), by the number of examples, by the number of attributes or by IR. Clicking again will sort the rows in reverse order.
Noisy and Borderline Examples
From K. Napierala, J. Stefanowski, S. Wilk. Learning from Imbalanced Data in Presence of Noisy and Borderline Examples. 7th International Conference on Rough Sets and Current Trends in Computing (RSCTC 2010). LNCS 6086, Springer 2010, Warsaw (Poland, 2010), 158-167. 

Below you can find several synthetic Imbalanced data sets used in the above paper, whose examples are divided into three categories by the authors: safe, borderline and noisy examples.
 Borderline examples are located in the area surrounding class boundaries, where the minority and majority classes overlap.
 Safe examples are placed in relatively homogeneous areas with respect to the class label.
 Noisy examples are individuals from one class occurring in safe areas of the other class.
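The three categories above can be approximated by inspecting the class make-up of each example's neighbourhood. The sketch below labels each example from its k nearest neighbours using simple thresholds; it is one plausible reading of the definitions above, not necessarily the exact labelling rule used by the authors:

```python
import numpy as np

def categorise(X, y, k=5):
    """Label each example 'safe', 'borderline' or 'noisy' from the
    class make-up of its k nearest neighbours (illustrative sketch)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)        # exclude each point itself
    nn = np.argsort(d, axis=1)[:, :k]  # k nearest neighbour indices
    out = []
    for i in range(len(X)):
        same = int(np.sum(y[nn[i]] == y[i]))
        if same == 0:
            out.append("noisy")        # surrounded only by the other class
        elif same < k / 2:
            out.append("borderline")   # mostly other-class neighbourhood
        else:
            out.append("safe")         # mostly same-class neighbourhood
    return out
```

For two well-separated clusters, every point is labelled safe; planting a single opposite-class point inside one cluster makes that point noisy.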
For each data set, its name, number of instances, number of attributes (Real/Integer/Nominal valued) and imbalance ratio value are shown.
The table allows you to download each data set in KEEL format (inside a ZIP file). Additionally, each data set can be obtained already partitioned, by means of a 5-fold cross-validation procedure.
By clicking on the column headers, you can order the table by name (alphabetically), by the number of examples, by the number of attributes or by IR. Clicking again will sort the rows in reverse order.
This subsection contains a collection of some of the previous data sets, already preprocessed by several oversampling techniques. For each technique, a ZIP file containing the 5-fold cross-validation partitions of each data set on this page is provided. Moreover, a brief description of and references for each method can be found below:
Imbalance ratio between 1.5 and 9
Type of preprocessing  Data sets 
SMOTE  
SMOTE+ENN  
SMOTE+Tomek Links  
Imbalance ratio higher than 9 - Part I
Type of preprocessing  Data sets 
SMOTE  
SMOTE+ENN  
SMOTE+Tomek Links  
SMOTE-RSB*  
Imbalance ratio higher than 9 - Part II
Type of preprocessing  Data sets 
SMOTE  
SMOTE+ENN  
SMOTE+Tomek Links  
Borderline 1  
Borderline 2  
SafeLevels  
SMOTE-RSB*  
 SMOTE: The Synthetic Minority Over-sampling Technique (Chawla et al., 2002) is an oversampling technique for the minority class. It works by taking each minority-class sample and introducing synthetic examples along the line segments joining it to any/all of its k nearest minority-class neighbours.
 SMOTE+ENN: This method applies the Edited Nearest Neighbour rule (ENN; Wilson, 1972) as a cleaning method over the data set obtained by applying SMOTE. It was proposed by Batista et al., 2004, where the use of 3 neighbours for ENN is suggested.
 SMOTE+Tomek Links: This method applies Tomek Links (Tomek, 1976) as a cleaning method over the data set obtained by applying SMOTE. It was also proposed by Batista et al., 2004.
 Borderline: These methods only oversample, or strengthen, the borderline minority examples (Han et al., 2005). First, the borderline minority examples are found; then, synthetic examples are generated from them and added to the original training set. For every minority example pi, the method computes its m nearest neighbours from the whole training set, and lets n be the number of majority examples among them. If all m nearest neighbours are majority examples (n = m), pi is considered to be noise and takes no part in the following steps. If m/2 <= n < m, i.e. the number of pi's majority nearest neighbours is larger than the number of its minority ones, pi is considered easily misclassified and is put into a set called DANGER. If 0 <= n < m/2, pi is safe and does not participate further. The examples in the DANGER set are the borderline data of the minority class P. For each example in DANGER, its k nearest neighbours from P are computed and synthetic examples are generated as in SMOTE.
 SafeLevels: This method (Bunkhumpornpat et al., 2009) computes a safe level for each positive instance before generating synthetic instances. Each synthetic instance is positioned closer to the instance with the largest safe level, so all synthetic instances are generated only in safe regions.
 SMOTE-RSB*: This method (Ramentol et al., 2011) first applies the SMOTE algorithm and then selects only the synthetic minority examples that belong to the lower approximation according to Rough Set Theory (Pawlak, 1982). This process is repeated until the training set is balanced.
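As a concrete reference point for the methods above, the core SMOTE interpolation step can be sketched in a few lines of NumPy. This is an illustrative sketch of the technique, not the implementation used to build the preprocessed partitions:

```python
import numpy as np

def smote(X_min, n_synthetic, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between each
    minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # never pick a point as its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest minority neighbours
    out = []
    for _ in range(n_synthetic):
        i = rng.integers(n)                        # random minority sample
        j = nn[i, rng.integers(min(k, n - 1))]     # one of its neighbours
        gap = rng.random()                         # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because each synthetic point is a convex combination of two minority points, all generated samples lie within the convex hull of the minority class; the cleaning steps of SMOTE+ENN and SMOTE+Tomek Links would then remove samples in overlapping regions.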
Collecting Data Sets
If you have some example data sets and you would like to share
them with the rest of the research community by means of this page, please be so
kind as to send your data to the Webmaster Team with the following information:
 Person(s) responsible for the data (full name, affiliation, e-mail, web page, ...).
 Training and test data sets considered, preferably in ASCII format.
 A brief description of the application.
 References where the data are used.
 Results obtained by the methods proposed by the authors or used for comparison.
 Type of experiment developed.
 Any additional useful information.
Collecting Results
If you have applied your methods to some of the problems
presented here, we will be glad to show your results on this page. Please be so kind as to send the following information to the Webmaster Team:
 Name of the application considered and type of experiment developed.
 Results obtained by the methods proposed by the authors or used for comparison.
 References where the results are shown.
 Any additional useful information.
Contact Us
If you are interested in being informed of updates made to
this page, or you would like to comment on it, please contact the Webmaster Team.
