Clustering High-dimensional Data: Balancing Abstraction and Representation

Claudia Plant, Lena G. M. Bauer, Christian Böhm

Wednesday, January 21, 2026, 10:45 a.m. - 12:30 p.m. at Singapore EXPO

Summary and Outline

How can we find a natural grouping of a large real-world data set? Clustering requires a balance between abstraction and representation. To identify clusters, we need to abstract from superfluous details of individual objects, such as the background or lighting in images, as shown in Figure 1.

Figure 1: Conflicting Goals Abstraction and Representation

But we also need a rich representation that emphasizes the key features shared by groups of objects that distinguish them from other groups of objects. Each clustering algorithm implements a different trade-off between abstraction and representation. Classical K-means implements a high level of abstraction – details are simply averaged out – combined with a very simple representation – all clusters are Gaussians in the original data space.
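To make this trade-off concrete, here is a minimal pure-NumPy sketch of Lloyd's algorithm for k-means (an illustration with toy data of our own, not code from the tutorial notebook): the update step literally averages out the details of individual cluster members, and each cluster is represented by nothing more than a centroid in the original data space.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal Lloyd's algorithm: abstraction by averaging cluster members."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centroid
        # in the original data space (the very simple representation).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: individual details are simply averaged out
        # (the high level of abstraction).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated toy blobs around (0, 0) and (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

On data like this, where the clusters really are compact Gaussian-like blobs, the averaged-out representation is all that is needed; the rest of the tutorial concerns data where it is not.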

We will see how approaches to subspace and deep clustering support high-dimensional and complex data by allowing richer representations. However, with increasing representational expressiveness comes the need to explicitly enforce abstraction in the objective function to ensure that the resulting method performs clustering and not just representation learning. We will see how current deep clustering methods define and enforce abstraction through centroid-based and density-based clustering losses. Balancing the conflicting goals of abstraction and representation is challenging.
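As a concrete example of a centroid-based clustering loss, the objective of DEC [6] can be sketched in a few lines of NumPy (a simplified illustration with random stand-in embeddings and centroids, not the full method): embedded points are softly assigned to centroids with a Student's t kernel, the assignments are sharpened into a target distribution, and the KL divergence between the two explicitly enforces abstraction on the learned representation.

```python
import numpy as np

def soft_assignments(Z, centroids, alpha=1.0):
    """Student's t kernel: q[i, j] = soft assignment of point i to centroid j."""
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpen q: squaring emphasizes confident assignments (abstraction)."""
    w = q ** 2 / q.sum(axis=0)       # normalize by soft cluster frequencies
    return w / w.sum(axis=1, keepdims=True)

def kl_clustering_loss(q, p):
    """KL(P || Q): the clustering loss minimized alongside the embedding."""
    return float((p * np.log(p / q)).sum())

rng = np.random.default_rng(0)
Z = rng.normal(size=(10, 2))          # stand-in for autoencoder embeddings
centroids = rng.normal(size=(3, 2))   # stand-in for learned cluster centers
q = soft_assignments(Z, centroids)
p = target_distribution(q)
loss = kl_clustering_loss(q, p)
```

In the full method, this loss is backpropagated through the encoder, so minimizing it pulls the representation toward the sharpened, more abstract assignments rather than merely describing the data.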

Ideas from subspace clustering help by learning one latent space for the information relevant to clustering and a second latent space that captures all other information in the data. The tutorial ends with an outlook on future research in clustering. In our view, future methods will balance abstraction and representation more adaptively to improve performance, energy efficiency and interpretability. By automatically finding the sweet spot between abstraction and representation, the human brain is very good at clustering and related tasks such as one-shot learning. So, there is still much to be explored.

For more details, see our paper "Clustering High-dimensional Data: Balancing Abstraction and Representation: Tutorial at AAAI 2026" [1].

Presentation and Resources

The best way to experience the opportunities as well as the challenges of Deep Clustering is to experiment with the algorithms yourself. Hence, we provide a Google Colab notebook [2] that includes pre-trained models of a variety of Deep Clustering algorithms, trained on a subset of the German Traffic Sign Recognition Benchmark (GTSRB) data set [3]. The programming language is Python, and the algorithm implementations are taken from the Python ClustPy package [4,5]. Data loading and pre-processing of the GTSRB images is already provided in the notebook (see Figure 2).

Figure 2: Data loading and pre-processing in the notebook

The pre-trained models can also be loaded to compare their results. Optionally, the algorithms can be executed and trained anew with different settings to see the influence of individual parameters on the clustering results.

Figure 3: Code cell for loading or retraining DEC

Figure 3 shows the corresponding code cell for the algorithm DEC [6]. Setting ‘TRAIN = False’ loads the pre-trained model, while ‘TRAIN = True’ retrains the algorithm. In the DEC(…) call, parameters such as the number of clustering epochs can be adjusted. Detailed descriptions of the algorithm-specific hyperparameters can be found in the Git repository of the ClustPy package [4].
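The load-or-train pattern of that cell can be sketched generically as follows (with a hypothetical file name and a stub model standing in for ClustPy's DEC, whose exact constructor arguments are documented in the ClustPy repository [4]):

```python
import pickle
from pathlib import Path

# Hypothetical cache file; the notebook ships its own pre-trained models.
MODEL_PATH = Path("dec_model.pkl")
TRAIN = False  # False: load a previously saved model; True: train anew

class StubModel:
    """Stand-in for a clustering model with a fit()/labels_ interface."""
    def fit(self, X):
        self.labels_ = [0] * len(X)
        return self

def load_or_train(X):
    if not TRAIN and MODEL_PATH.exists():
        with MODEL_PATH.open("rb") as f:
            return pickle.load(f)       # reuse the saved model
    model = StubModel().fit(X)          # (re)train with the current settings
    with MODEL_PATH.open("wb") as f:
        pickle.dump(model, f)           # cache the result for later runs
    return model

model = load_or_train([[0.0], [1.0], [2.0]])
```

In the notebook, the same flag decides between loading a pre-trained DEC model and rerunning the training cell with whatever hyperparameters you set.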

Happy experimenting!

Should you not be familiar with Google Colab, here is a quick-start guide on how to run the notebook:

  1. Open the notebook link in Google Colab and make sure you are signed into a Google account.
  2. Click Runtime → Change runtime type, select GPU (recommended) or CPU, then click Save. Note that loading our pre-trained models only works when connected to a GPU.
  3. Run the cells from top to bottom using Runtime → Run all or by pressing the ⏵ button on each cell.
  4. When prompted, allow access/permissions (e.g., for Google Drive) and follow any on-screen instructions in the output area.
  5. You can either load and evaluate the provided models (in the cells) or train your own by running the training cells and adjusting parameters as needed (see Fig. 3).
  6. If something fails, re-run the cell (or Runtime → Restart and run all) to reset the environment and try again.

For more details, we recommend this Beginner’s Guide [7] and the introduction on Colab itself [8].

[1] http://arxiv.org/abs/2601.11160

[2] https://drive.google.com/file/d/12mnc2I-2ygq4Xh-g1E4nT9NowMbk7U8R/view?usp=drive_link (Note: You might have to click 'Open with Colab' in the top right to see the notebook.)

[3] https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign

[4] https://github.com/collinleiber/ClustPy

[5] https://ieeexplore.ieee.org/document/10411702

[6] Xie, J., Girshick, R., & Farhadi, A. (2016, June). Unsupervised deep embedding for clustering analysis. In International conference on machine learning (pp. 478-487). PMLR.

[7] https://www.marqo.ai/blog/getting-started-with-google-colab-a-beginners-guide

[8] https://colab.research.google.com/?hl=en

[9] Tutorial slides