Randomized Outlier Detection with Trees

Author: S. Buschjäger, P. J. Honysz, K. Morik
Journal: Int. J. Data Sci. Anal.
Year: 2020

Citation information

S. Buschjäger, P. J. Honysz, K. Morik: Randomized Outlier Detection with Trees. Int. J. Data Sci. Anal. 13, 91-104 (2020). https://doi.org/10.1007/s41060-020-00238-w

Isolation Forest (IF) is a popular outlier detection algorithm that isolates outlying observations from regular observations by building multiple random isolation trees. The average number of comparisons required to isolate a given observation can then be used as a measure of its outlierness. Multiple extensions of this approach have been proposed in the literature, including the Extended Isolation Forest (EIF) and SCiForest. However, we find that a theoretical explanation for why IF, EIF, and SCiForest perform so well in practice is lacking.
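The path-length scoring idea described above can be tried directly with the off-the-shelf IsolationForest implementation in scikit-learn. The tiny synthetic dataset below is purely illustrative and not from the paper:

```python
# Minimal sketch: scoring outlierness with scikit-learn's IsolationForest.
# The data here is a synthetic assumption for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
inliers = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # dense cluster
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))    # far-away points
X = np.vstack([inliers, outliers])

forest = IsolationForest(n_estimators=100, random_state=0).fit(X)

# score_samples returns an anomaly score derived from the average path
# length across trees: outliers need fewer comparisons to isolate, so
# they receive lower (more negative) scores than inliers.
scores = forest.score_samples(X)
print(scores[:200].mean() > scores[200:].mean())  # inliers score higher
```

Because the five uniform points sit far from the Gaussian cluster, random splits isolate them after only a few comparisons, which is exactly the short-average-path-length signal IF exploits.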
In this paper, we present a theoretical framework that views these approaches from a distributional viewpoint. Using this viewpoint, we show that isolation-based approaches first accurately approximate the data distribution and then approximate the coefficients of its mixture components via the average path length. Building on this framework, we derive the Generalized Isolation Forest (GIF), which also trains random isolation trees but moves beyond the average path length when combining them. Like the original IF variants, GIF partitions the data into multiple subspaces by sampling random splits; it then directly estimates the mixture coefficients of a mixture distribution to score the outlierness of entire regions of data.

In an extensive evaluation, we compare GIF with 18 state-of-the-art outlier detection methods on 14 different datasets. We show that GIF outperforms three competing tree-based methods and performs competitively with nearest-neighbor approaches at a lower runtime. Finally, we present a case study that uses GIF to detect transaction fraud in financial data.
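To make the distributional viewpoint concrete, the following toy sketch grows random trees whose leaves partition the data into regions and scores each point by the estimated probability mass of its region, a crude stand-in for a mixture coefficient. Every function name and parameter here is an illustrative assumption; this is not the paper's actual GIF estimator:

```python
# Toy sketch (assumed, simplified): score points by the probability mass
# of the random-tree region they fall into, instead of by path length.
# Points in low-mass regions are treated as outliers.
import numpy as np

def grow_tree(X, idx, rng, depth, max_depth):
    """Recursively split the sample indices `idx` with random axis-aligned
    cuts and return the resulting leaves (lists of indices)."""
    if depth >= max_depth or len(idx) <= 1:
        return [idx]
    d = rng.randint(X.shape[1])                 # random split dimension
    lo, hi = X[idx, d].min(), X[idx, d].max()
    if lo == hi:
        return [idx]
    s = rng.uniform(lo, hi)                     # random split value
    left, right = idx[X[idx, d] < s], idx[X[idx, d] >= s]
    return (grow_tree(X, left, rng, depth + 1, max_depth)
            + grow_tree(X, right, rng, depth + 1, max_depth))

def region_mass_scores(X, n_trees=25, max_depth=5, seed=0):
    """Average, over trees, the fraction of training data sharing each
    point's leaf; a low average mass marks the point as an outlier."""
    rng = np.random.RandomState(seed)
    scores = np.zeros(len(X))
    for _ in range(n_trees):
        for leaf in grow_tree(X, np.arange(len(X)), rng, 0, max_depth):
            scores[leaf] += len(leaf) / len(X)
    return scores / n_trees

rng = np.random.RandomState(1)
X = np.vstack([rng.normal(size=(200, 2)),            # dense inlier cluster
               rng.normal(7.0, 0.2, size=(3, 2))])   # tiny distant cluster
scores = region_mass_scores(X)
print(scores[:200].mean() > scores[200:].mean())  # isolated points get low mass
```

The distant three-point cluster ends up in leaves containing almost no other data, so its average region mass is small, which mirrors how GIF scores entire regions rather than individual isolation paths.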