Abstract
Both supervised and unsupervised machine learning algorithms are often based on regression to the mean. The mean, however, is easily biased by unevenly distributed data, in particular by outlier records. Batch normalization methods address this problem to some extent, but they also distort the data. In text-based data, the problem is even more pronounced, since distance-based distinctions between outlier records and the rest diminish as dimensionality increases. The most reliable way to obtain unbiased data is therefore to identify the outliers themselves. To this end, techniques based on multidimensional scaling (MDS) and agglomerative clustering are proposed for detecting outlier records in text-based data. For both methods, two of the most common distance metrics are applied: Euclidean distance and cosine distance. Furthermore, in the MDS approach, both the metric and non-metric versions of the algorithm are used, whereas in the agglomerative approach, the last-p and level-cutoff techniques are applied. The methods are also compared with a raw-data-based method, which selects the element most distant from all others according to a given distance metric. Experiments were conducted on overlapping subsets of a dataset containing roughly 2000 records of descriptive image captions. The algorithms were also compared in terms of efficiency against a proposed algorithm and evaluated through human judgment of the described images. Unsurprisingly, cosine distance proved to be the most effective distance metric. According to the human evaluation, the metric-MDS-based algorithm outperformed the others. The presented algorithms successfully identified outlier records.
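To make the best-performing variant concrete, the following minimal sketch illustrates one way the metric-MDS approach with cosine distance could be realized. The TF-IDF vectorization, the two-dimensional embedding, and the farthest-from-centroid selection rule are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of a metric-MDS-based outlier detector for text records.
# Assumptions (not specified in the abstract): captions are vectorized with
# TF-IDF, the embedding is two-dimensional, and the single record farthest
# from the embedding centroid is flagged as the outlier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.manifold import MDS

def find_outlier_caption(captions):
    # Vectorize the text records (hypothetical choice of TF-IDF).
    X = TfidfVectorizer().fit_transform(captions)

    # Pairwise cosine distances, the metric found most effective here.
    D = cosine_distances(X)

    # Metric MDS on the precomputed dissimilarity matrix.
    mds = MDS(n_components=2, metric=True,
              dissimilarity="precomputed", random_state=0)
    emb = mds.fit_transform(D)

    # Flag the record farthest from the centroid of the embedding.
    centroid = emb.mean(axis=0)
    dists = np.linalg.norm(emb - centroid, axis=1)
    return int(np.argmax(dists))

captions = [
    "a dog running across a grassy field",
    "a dog chasing a ball in the park",
    "a puppy playing on the lawn",
    "a spreadsheet of quarterly sales figures",  # likely outlier
]
print(find_outlier_caption(captions))  # -> 3
```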