MDS
Multidimensional scaling of a FluidDataSet
Multidimensional Scaling (MDS) transforms a DataSet to a lower number of dimensions while trying to preserve the distance relationships between the data points, so that even with fewer dimensions the differences and similarities between points can still be observed and used effectively.
First, MDS computes a distance matrix by calculating the distance between every pair of points in the dataset. It then positions all the points in the lower number of dimensions (specified by numDimensions) and iteratively shifts them around until the distances between the points in the lower-dimensional space are as close as possible to the distances in the original dimensional space.
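To make this procedure concrete, here is a minimal sketch of the idea in Python/NumPy (an illustration under simplified assumptions, not FluCoMa's implementation): build a distance matrix, then nudge randomly initialised low-dimensional positions by gradient descent until their pairwise distances approximate the originals.

```python
import numpy as np

def mds_sketch(X, num_dims=2, iters=500, lr=0.01, seed=0):
    """Illustrative metric MDS by gradient descent on 'stress'."""
    n = len(X)
    # 1. Distance matrix: Euclidean distance between every pair of points.
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # 2. Random starting positions in the lower-dimensional space.
    Y = np.random.default_rng(seed).normal(scale=0.01, size=(n, num_dims))
    # 3. Iteratively shift points so low-dimensional distances match the
    #    originals, i.e. minimise the stress  sum (d_ij - D_ij)^2.
    for _ in range(iters):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1)) + 1e-12   # current distances
        grad = 2 * (((d - D) / d)[..., None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

# e.g. embed 100 points of 13-dimensional MFCC-like data into 2-D
Y = mds_sketch(np.random.rand(100, 13), num_dims=2)
```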
Unlike the other dimensionality reduction algorithms, MDS does not have a fit or transform method, nor can it transform data points stored in buffers. This is because the algorithm performs the fit and transform as a single process using only the data in the source DataSet, so incorporating new data points would require re-fitting the model.
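In terms of the hypothetical sketch above, this means that incorporating even a single new point re-runs the whole process on the enlarged dataset, and the positions of the existing points may change:

```python
import numpy as np

X = np.random.rand(100, 13)         # existing data
new_point = np.random.rand(1, 13)   # a new point to incorporate

# No transform step exists: the model is re-fitted from scratch on the
# enlarged dataset, so previous positions are not guaranteed to persist.
Y_new = mds_sketch(np.vstack([X, new_point]))
```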
What makes MDS more flexible than the other dimensionality reduction algorithms in FluCoMa (PCA and UMAP) is that MDS allows different measures of distance to be used when computing the distance matrix (see the list below). This lets you explore different ways of measuring the distance between data points (i.e., comparing their similarity or difference) during the dimensionality reduction process. Exploring different measures of difference may create different musical relationships between points in the data.
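For instance, here is a hypothetical sketch (using SciPy and scikit-learn rather than FluCoMa) of how swapping the metric that builds the distance matrix yields different embeddings of the same data:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

X = np.random.rand(100, 13)  # stand-in for a 13-dimensional MFCC dataset

# The same data, embedded from distance matrices built with different metrics.
embeddings = {}
for metric in ("cityblock", "euclidean", "sqeuclidean", "chebyshev"):
    D = squareform(pdist(X, metric=metric))            # distance matrix
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    embeddings[metric] = mds.fit_transform(D)          # one 2-D layout per metric
```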
Comparing Measures of Distance
Below are plots of two-dimensional representations of the same MFCC analyses (originally in 13 dimensions) using each of the distance metrics available in MDS. The colour is arbitrarily assigned so that you can track the location changes of each point in space.
Just by looking at these plots, there is no real way of knowing which distance measure might be best for a given dataset or application. However, different distance measures will create different lower-dimensional representations of the data that may have significant musical impacts!
Distance Measures
- Manhattan Distance: The sum of the absolute differences between two points in each dimension. This is also called the Taxicab Metric.
- Euclidean Distance: Square root of the sum of the squared differences between points in each dimension (Pythagorean Theorem). This metric is the default, as it is the most commonly used.
- Squared Euclidean Distance: The square of the Euclidean Distance between points. This measure penalises larger distances more strongly, making distant points seem even more distant, which may reveal clusters more clearly.
- Minkowski Max Distance: The distance between two points is reported as the largest difference between those two points in any one dimension. Also called the Chebyshev Distance or the Chessboard Distance.
- Minkowski Min Distance: The distance between two points is reported as the smallest difference between those two points in any one dimension.
- Symmetric Kullback Leibler Divergence: Computes the distance between two points by finding the relative entropy when comparing each to the other: given point A, how likely is point B, and, given point B, how likely is point A? These two quantities sum to give the measured distance between the points. Because this computation takes the logarithm of the values, the Symmetric Kullback Leibler Divergence only makes sense with non-negative data.
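For reference, here are hypothetical one-line implementations of these measures for two equal-length NumPy vectors (illustrative only, not FluCoMa's internal code):

```python
import numpy as np

def manhattan(a, b):        # Taxicab Metric
    return np.abs(a - b).sum()

def euclidean(a, b):        # Pythagorean Theorem (the default)
    return np.sqrt(((a - b) ** 2).sum())

def sq_euclidean(a, b):     # penalises larger distances more strongly
    return ((a - b) ** 2).sum()

def minkowski_max(a, b):    # Chebyshev / Chessboard Distance
    return np.abs(a - b).max()

def minkowski_min(a, b):    # smallest per-dimension difference
    return np.abs(a - b).min()

def sym_kl(a, b, eps=1e-12):
    # Relative entropy in both directions, summed; because of the
    # logarithm, this only makes sense for non-negative data.
    a, b = a + eps, b + eps
    return (a * np.log(a / b)).sum() + (b * np.log(b / a)).sum()
```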