The Beginner's Guide to Dimensionality Reduction

Explore the methods that data scientists use to visualize high-dimensional data.

July 12, 2018

Art or science?

Your browser has just loaded information about roughly 800 artworks from the collection of the Metropolitan Museum of Art. The museum has publicly released a large dataset about its collection [5]; we are displaying just a small fraction of it. The artworks are positioned randomly.

Hover over an artwork to see its details.

Each artwork includes basic metadata, such as its title, artist, date made, medium, and dimensions. Data scientists refer to the metadata describing each data point (here, an artwork) as its features. Below are 10 artworks from the dataset.


Year  Title                  Artist
1680  Spindle-back armchair
1875  Armchair               Pottier and Stymus Manufacturing Company
1776  Basket                 Myer Myers
1650  Basin                  Master Potter A
1710  Two-handled Bowl       Cornelius Kierstede
1876  The Bryant Vase        James Horton Whitehouse | Tiffany & Co. | Eugene J. Soligny | Augustus Saint-Gaudens
1765  Bureau table           John Townsend
1880  Cabinet                Daniel Pabst
1866  Cabinet                Alexander Roux
1835  Celery vase            Boston & Sandwich Glass Company

Projecting onto a line

These features can be thought of as vectors in a high-dimensional space. We want to visualize these vectors, but we can’t show all of their dimensions at once.

The answer is to project the data into a lower dimension, one that can be visualized. This kind of projection is called an embedding.

Computing a 1-dimensional embedding requires taking each artwork and computing a single number to describe it. A benefit of reducing to 1D is that the numbers, and therefore the artworks, can be sorted along a line.

On the right you see the artworks positioned according to their average pixel brightness. Notice that the images are sorted, with the darkest images appearing at the top and the brightest images at the bottom!
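
As a rough sketch of how such a one-number embedding could be computed (the file names and image-loading code here are illustrative assumptions, not the article's actual pipeline):

```python
import numpy as np
from PIL import Image

def average_brightness(path):
    """Load an image, convert it to grayscale, and return its mean pixel value (0-255)."""
    pixels = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return float(pixels.mean())

# Hypothetical artwork image files.
paths = ["armchair.jpg", "basket.jpg", "basin.jpg"]

# The 1D embedding: a single number per artwork.
embedding = [average_brightness(p) for p in paths]

# Sorting the numbers sorts the artworks along a line, darkest to brightest.
for brightness, path in sorted(zip(embedding, paths)):
    print(f"{path}: {brightness:.1f}")
```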

For the mathematically inclined

Dimensionality reduction can be formulated mathematically in the context of a given dataset. Consider a dataset represented as a matrix $X$ of size $m \times n$, where $m$ is the number of rows of $X$ and $n$ is the number of columns.

Typically, the rows of the matrix are data points and the columns are features. Dimensionality reduction will reduce the number of features of each data point, turning $X$ into a new matrix, $X'$, of size $m \times d$, where $d < n$. For visualizations we typically set $d$ to 1, 2, or 3.

Say $m = n$, that is, $X$ is a square matrix. Performing dimensionality reduction on $X$ and setting $d = 2$ will change it from a square matrix into a tall, rectangular matrix.

$$X = \begin{bmatrix} x & x & x \\ x & x & x \\ x & x & x \end{bmatrix} \implies \begin{bmatrix} x' & x' \\ x' & x' \\ x' & x' \end{bmatrix} = X'$$

Each data point now has only two features, i.e., each data point has been reduced from a 3-dimensional vector to a 2-dimensional vector.
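
To make the shapes concrete, here is a minimal sketch of a linear reduction in NumPy; the projection matrix below is random purely to illustrate the change in shape, whereas a real algorithm such as PCA would choose it carefully:

```python
import numpy as np

m, n, d = 5, 3, 2          # 5 data points, 3 features, reduce to 2
X = np.random.rand(m, n)   # original data matrix, shape (5, 3)

# A linear reduction maps each n-dimensional row to a d-dimensional row.
# W is random here just to show the shapes; PCA, for example, would pick
# directions that preserve as much variance as possible.
W = np.random.rand(n, d)
X_prime = X @ W

print(X.shape)        # (5, 3)
print(X_prime.shape)  # (5, 2)
```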

Embedding data in two dimensions

The same brightness feature can be used to position the artworks in 2D space instead of 1D. The pieces have more room to spread out.

On the right you see a simple 2-dimensional embedding based on image brightness, but this isn’t the only way to position the artworks. In fact, there are many possible embeddings, and some are more useful than others.

Use the slider to vary the influence that brightness and artwork age have in determining the embedding positions. As you move the slider from brightness toward artwork age, the embedding stops highlighting bright and dark images and starts to cluster recent, modern-day images in the bottom-left corner, while older artworks are pushed farther away (hover over images to see their dates).

[Slider: Brightness ↔ Artwork Age]
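
The interactive above is not reproduced here, but as a rough sketch of the idea, a user-driven embedding can blend two normalized features with an explicit weight (the feature values and the particular blend below are illustrative assumptions):

```python
import numpy as np

def normalize(values):
    """Rescale a feature so it lies in the range [0, 1]."""
    v = np.asarray(values, dtype=np.float32)
    return (v - v.min()) / (v.max() - v.min())

# Hypothetical per-artwork features.
brightness = normalize([12.0, 200.0, 90.0, 145.0])              # average pixel brightness
age = normalize([2018 - y for y in [1680, 1875, 1776, 1966]])   # years since creation

# w = 0 uses only brightness, w = 1 uses only age -- the "slider".
w = 0.3
blended = (1 - w) * brightness + w * age

# A simple user-driven 2D embedding: the blended score on one axis and
# brightness on the other. The influence of each feature is explicit.
embedding_2d = np.column_stack([blended, brightness])
print(embedding_2d)
```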

Real-world algorithms

We just showed an example of a user-driven embedding, where the exact influence of each feature on the result is known. However, you may have noticed that it’s hard to find meaningful combinations of feature weights by hand.

State-of-the-art algorithms can find an optimal combination of features so that distances in the high-dimensional space are preserved in the embedding. Use the tool below to project the artworks using three commonly used algorithms.

In this example we are performing reduction on the pixels of each image: each image is flattened into a single vector, where each pixel represents one feature. We then reduce these vectors to two dimensions.
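
A minimal sketch of that pixel-based pipeline (the image files, the 64 × 64 resize, and the use of scikit-learn's PCA here are assumptions chosen for illustration):

```python
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA

# Hypothetical artwork image files.
paths = ["armchair.jpg", "basket.jpg", "basin.jpg", "vase.jpg"]

# Flatten each image into a single vector: one feature per pixel.
# Resizing to a common size keeps every vector the same length.
size = (64, 64)
X = np.stack([
    np.asarray(Image.open(p).convert("L").resize(size), dtype=np.float32).ravel()
    for p in paths
])  # shape: (num_images, 4096)

# Reduce the 4096-dimensional pixel vectors to two dimensions.
X_2d = PCA(n_components=2).fit_transform(X)
print(X_2d.shape)  # (num_images, 2)
```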

Principal component analysis

Pros:

  • Relatively computationally cheap.
  • Can save the embedding model and later project new data points into the reduced space (see the sketch after this list).

Cons:

  • Being a linear reduction limits the information that can be captured; clusters are not as clearly separated as with other algorithms.
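
As a brief illustration of the second pro above, a fitted PCA model can be reused to embed data it has never seen; the arrays below are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: 100 "training" points and 5 new points, 20 features each.
X_train = np.random.rand(100, 20)
X_new = np.random.rand(5, 20)

# Fit the reduction once...
pca = PCA(n_components=2).fit(X_train)

# ...then project new data points into the same 2D space later.
X_new_2d = pca.transform(X_new)
print(X_new_2d.shape)  # (5, 2)
```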

There are many algorithms that compute a dimensionality reduction of a dataset. Simpler algorithms such as principal component analysis (PCA) maximize the variance preserved in the data to produce the best possible embedding. More complicated algorithms, such as t-distributed stochastic neighbor embedding (t-SNE) [2], iteratively produce highly clustered embeddings. Unfortunately, whereas before the influence of each feature was explicitly known, one must now relinquish control to the algorithm to determine the best embedding, which means it is no longer clear what features of the data were used to compute it. This makes it easy to misinterpret what an embedding is showing [10].

Dimensionality reduction, and more broadly the field of unsupervised learning, is an active area of research where researchers are developing new techniques to create better embeddings. A new technique, uniform manifold approximation and projection (UMAP) [4], is a non-linear reduction that aims to create visually striking embeddings fast, scaling to larger datasets.

Try it for yourself

Dimensionality reduction is a powerful tool to better understand high-dimensional data. If you have your own dataset and wish to visualize it using dimensionality reduction, there are a number of different algorithms [3] and implementations available. In Python, the scikit-learn package [7, 8] provides APIs for many unsupervised dimensionality reduction algorithms, as well as manifold learning: an approach to non-linear dimensionality reduction.

You can find Python implementations of the three algorithms we used for the artworks here: PCA, t-SNE [2], and UMAP [4].
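
As a quick sketch, all three can be called in a few lines; the data here is a placeholder, and the umap-learn package must be installed separately:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap  # from the umap-learn package

# Placeholder data: 200 points with 50 features each.
X = np.random.rand(200, 50)

pca_2d = PCA(n_components=2).fit_transform(X)
tsne_2d = TSNE(n_components=2, perplexity=30).fit_transform(X)
umap_2d = umap.UMAP(n_components=2, n_neighbors=15).fit_transform(X)

print(pca_2d.shape, tsne_2d.shape, umap_2d.shape)  # each (200, 2)
```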

Acknowledgments

  • This article was created using Idyll.
  • The source code is available on GitHub.

References

  1. Über die stetige Abbildung einer Linie auf ein Flächenstück.
    David Hilbert.
    Dritter Band: Analysis, Grundlagen der Mathematik, Physik, Verschiedenes, 1890.
  2. Visualizing data using t-SNE.
    Laurens van der Maaten, Geoffrey Hinton.
    Journal of Machine Learning Research, 2008.
  3. Dimensionality reduction: a comparative review.
    Laurens van der Maaten, Eric Postma, Jaap Van den Herik.
    Tilburg University Technical Report, TiCC-TR, 2009.
  4. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction.
    Leland McInnes, John Healy.
    arXiv, 2018.
  5. The Metropolitan Museum of Art Open Access.
    The Metropolitan Museum of Art.
    GitHub, 2017.
  6. Analysis of the clustering properties of the Hilbert space-filling curve.
    Bongki Moon, Hosagrahar V Jagadish, Christos Faloutsos, Joel H. Saltz.
    IEEE Transactions on Knowledge and Data Engineering, 2001.
  7. Scikit-learn: Machine Learning in Python.
    F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay.
    Journal of Machine Learning Research, 2011.
  8. API design for machine learning software: experiences from the scikit-learn project.
    Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, Gael Varoquaux.
    ECML PKDD Workshop: Languages for Data Mining and Machine Learning, 2013.
  9. Embedding Projector: Interactive visualization and interpretation of embeddings.
    Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda B Viegas, Martin Wattenberg.
    arXiv, 2016.
  10. How to Use t-SNE Effectively.
    Martin Wattenberg, Fernanda Viegas, Ian Johnson.
    Distill, 2016.