8 comments

  • romanfll 10 hours ago
    Author here. I built this because I needed to run dimensionality reduction entirely in the browser (client-side) for an interactive tool. The standard options (UMAP, t-SNE) were either too heavy for JS/WASM or required a GPU backend to run at acceptable speeds for interactive use.

    This approach ("Sine Landmark Reduction") uses linearised trilateration—similar to GPS positioning—against a synthetic "sine skeleton" of landmarks.

    The main trade-offs:

    It is O(N) and deterministic (solves Ax=b instead of iterative gradient descent).

    It forces the topology onto a loop structure, so it is less accurate than UMAP for complex manifolds (like Swiss Rolls), but it guarantees a clean layout for user interfaces.

    It can project ~9k points (50 dims) to 3D in about 2 seconds on a laptop CPU. Python implementation and math details are in the post. Happy to answer questions!
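
    For the curious, here is a minimal sketch of the trilateration step (illustrative only: the landmark count, the random stand-ins for the sine skeleton, and its low-dimensional projection are assumptions, not the exact code from the post):

        import numpy as np
        from sklearn.metrics import pairwise_distances

        def trilaterate(L_low, D):
            """Linearised trilateration: one least-squares solve, no iterations."""
            d_sq = D ** 2
            A = 2.0 * (L_low[1:] - L_low[0])          # rows: 2 * (L_i - L_0)
            b = (d_sq[:, :1] - d_sq[:, 1:]            # d_0^2 - d_i^2 ...
                 + np.sum(L_low[1:] ** 2, axis=1)     # ... + ||L_i||^2
                 - np.sum(L_low[0] ** 2))             # ... - ||L_0||^2
            return np.linalg.lstsq(A, b.T, rcond=None)[0].T

        rng = np.random.default_rng(0)
        X = rng.normal(size=(9000, 50))       # N points, 50 dims
        L_high = rng.normal(size=(100, 50))   # stand-in for the high-dim sine skeleton
        L_low = L_high[:, :3]                 # stand-in for the landmarks' 3-D positions
        D = pairwise_distances(X, L_high)     # N x K distances to the landmarks
        Y = trilaterate(L_low, D)             # (9000, 3) embedding in a single pass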

    • lmeyerov 9 hours ago
      Fwiw, we are heavy UMAP users (pygraphistry), and find CPU UMAP fine for interactive use at up to 30K rows and GPU at 100K rows, then generally switch to a trained mode when > 100K rows. Our use case is often highly visual: seeing correlations and linking similar entities together into explorable & interactive network diagrams. For headless use, like daily anomaly detection, we will do this at much larger scales.

      We see a lot of wide social, log, and cyber data where this works, anywhere from 5-200 dim. Our bio users are trickier, as we can have 1K+ dimensions pretty fast. We find success there too, and mostly get into preconditioning tricks for those.

      At the same time, I'm increasingly thinking of learning neural embeddings in general for these instead of traditional clustering algorithms. As scales go up, the performance argument here goes up too.

      • romanfll 1 hour ago
        The shift from Explicit Reduction to GNNs/Embeddings is where the high-end is going in my view… We hit this exact fork in the road with our forecasting/anomaly detection engine (DriftMind). We considered heavy embedding models but realised that for edge streams, we couldn't afford the inference cost or the latency of round-tripping to a GPU server. It feels like the domain is splitting into 'Massive Server-Side Intelligence' (I am a big fan of Graphistry) and 'Hyper-Optimized Edge Intelligence' (where we are focused).
      • abhgh 7 hours ago
        I was not aware this existed and it looks cool! I am definitely going to take out some time to explore it further.

        I have a couple of questions for now: (1) I am confused by your last sentence. It seems you're saying embeddings are a substitute for clustering. My understanding is that you usually apply a clustering algorithm over embeddings - good embeddings just ensure that the grouping produced by the clustering algo "makes sense".

        (2) Have you tried PaCMAP? I found it to produce high-quality and quick results when I tried it. I haven't tried it in a while though - and I vaguely remember that it wouldn't install properly on my machine (a Mac) the last time I reached for it. Their group has some new stuff coming out too (on the linked page).

        [1] https://github.com/YingfanWang/PaCMAP

        • lmeyerov 6 hours ago
          We generally run UMAP on regular semi-structured data like database query results. We automatically feature encode that for dates, bools, low-cardinality vals, etc. If there is text, and the right libs available, we may also use text embeddings for those columns. (cucat is our GPU port of dirtycat/skrub, and pygraphistry's .featurize() wraps around that).

          My last sentence was about more valuable problems: we are finding it makes sense to go straight to GNNs, LLMs, etc. and embed multidimensional data that way rather than via UMAP dimensionality reductions. We can still use UMAP as a generic hammer to control further dimensionality reductions, but the 'hard' part would be handled by the model. With neural graph layouts, we can potentially even skip UMAP for that too.
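
          As a rough illustration of that division of labour (not Graphistry's actual pipeline; the embedding below is a random stand-in for whatever neural model does the hard part):

              import numpy as np
              import umap

              # The upstream model handles the semantics; UMAP is only the final 2-D layout step.
              emb = np.random.default_rng(0).normal(size=(5000, 384))  # stand-in neural embeddings
              xy = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(emb)
              print(xy.shape)  # (5000, 2)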

          Re: PaCMAP, we have been eyeing several new tools here, but so far haven't felt the need internally to move from UMAP to them. We'd need to see significant improvements, given that the quality engineering in UMAP has set the bar high. In theory I can imagine some tools doing better in the future, but their creators haven't made the engineering investment, so internally we'd rather stay with UMAP. We make our API pluggable, so you can pass in results from other tools, but we haven't heard much from others going down that path.

          • abhgh 4 hours ago
            Thank you. Your comment about using LLMs to semantically parse diverse data as a first step makes sense. In fact, come to think of it, the same is true in the area of prompt optimization - such as MIPROv2 [1] - where the LLM is used to create initial prompt guesses based on its understanding of the data. And I agree that UMAP still works well out of the box and has been pretty much like this since its introduction.

            [1] Section C.1 in the Appendix here https://arxiv.org/pdf/2406.11695

    • threeducks 8 hours ago
      Without looking at the code, O(N * k) with N = 9000 points and k = 50 dimensions should take in the order of milliseconds, not seconds. Did you profile your code to see whether there is perhaps something that takes an unexpected amount of time?
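
      Back-of-envelope for the dominant cost, the point-to-landmark distance matrix (the landmark count K and the throughput figure are assumptions):

          # N points, K landmarks, d input dims; K = 100 is assumed, and 1 GFLOP/s is a
          # deliberately pessimistic throughput for vectorised NumPy on a laptop CPU.
          N, K, d = 9000, 100, 50
          flops = N * K * 3 * d        # subtract, square, accumulate per coordinate
          print(flops / 1e9)           # ~0.14 GFLOP, i.e. well under a second even at 1 GFLOP/s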
      • romanfll 5 hours ago
        The '2 seconds' figure is the end-to-end time on a standard laptop. I quoted 2s to set realistic expectations for the user experience, not to count CPU cycles. You are right that the core linear algebra (Ax=b) takes milliseconds; the bottleneck is the DOM/rendering overhead. Strictly speaking, the math itself is blazing fast.
        • moralestapia 3 hours ago
          This is great Roman, congrats on the amazing work :)!
      • jdhwosnhw 2 hours ago
        That's not how big-O notation works. You don't know what proportionality constants are being hidden by the notation, so you can't make any assertions about absolute runtimes.
        • threeducks 1 hour ago
          It is true that big-O notation does not necessarily tell you anything about the actual runtime, but if the hidden constant appears suspiciously large, one should double-check whether something else is going on.
      • donkeybeer 8 hours ago
        If he wrote the for loop in plain Python instead of NumPy or C or whatever, it could be a plausible runtime.
      • yorwba 7 hours ago
        Each of the N data points is processed through several expensive linear algebra operations. O(N * k) just expresses that if you double N, the runtime also at most doubles. It doesn't mean it has to be fast in an absolute sense for any particular value of N and k.
        • akoboldfrying 7 hours ago
          Didn't read TFA, but it's hard to think of a linear algebra operation that is both that slow and takes time independent of n and k.
    • yxhuvud 5 hours ago
      FWIW, there are iterative SVD implementations out there that could potentially be useful as well in certain contexts when you get more data over time in a streamed manner.
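
      For example, scikit-learn's IncrementalPCA is one such option (possibly not the specific implementations meant above):

          import numpy as np
          from sklearn.decomposition import IncrementalPCA

          # Streamed/incremental PCA: update the fit batch by batch instead of refitting from scratch.
          ipca = IncrementalPCA(n_components=3)
          rng = np.random.default_rng(0)
          for _ in range(10):                              # e.g. ten batches arriving over time
              ipca.partial_fit(rng.normal(size=(1000, 50)))
          Y = ipca.transform(rng.normal(size=(5, 50)))     # project new points with the current fit
          print(Y.shape)  # (5, 3)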
      • leecarraher 2 hours ago
        Are you referring to this paper: https://arxiv.org/abs/1501.01711? I believe it won best paper at ICML or another high-impact venue. I recall the published paper and algorithm being compact and succinct, something that took less than a day to implement.
    • aoeusnth1 10 hours ago
      This is really cool! Are you considering publishing a paper on it? This seems conceptually similar to landmark MDS / Isomap, except using PCA on the landmark matrix instead of MDS. (https://cannoodt.dev/2019/11/lmds-landmark-multi-dimensional...)
      • romanfll 9 hours ago
        Thanks! You nailed the intuition! Yes, it shares DNA with Landmark MDS, but we needed something strictly deterministic for the UI. Re: publishing: we don't have a paper planned for this specific visualisation technique yet; I just wanted to open-source it because it solved a major bottleneck for our dashboard. However, our main research focus at Thingbook is DriftMind (a cold-start streaming forecaster and anomaly detector, preprint here: https://www.researchgate.net/publication/398142288_DriftMind...). That paper is currently under peer review! It shares the same 'efficiency-first' philosophy as this visualisation tool.
  • zipy124 6 hours ago
    Something seems off here. t-SNE should not be taking 15-25 seconds for only 5k points and 20 dimensions; more like 1-2 seconds. Also, since the proposed alternative is not as accurate, you could probably reduce the t-SNE iteration count somewhat if speed is the goal, at the risk of quality. Alternatively, UMAP on this would take milliseconds, bordering on real-time with aggressive tuning.
  • trgn 4 hours ago
    Glad to see 2D mapping is still of interest. 20 years ago, information visualization, data cartography, exploratory analytics, etc. were pretty alive, but the field never really took off or found a reliable niche in industry or a real end-user application. Why map it, when the machine can just tell you.

    Would be nice to see it come back. Would love to browse for books and movies on maps again, rather than getting lists regurgitated at me.

  • rundev 5 hours ago
    The claim of linear runtime is only true if K is independent of the dataset size, so it would have been nice to see an exploration of how different values of K impact results. I.e., does clustering get better for larger K, and if so, by how much? The values 50 and 100 seem arbitrary and even suspiciously close to sqrt(N) for the 9K dataset.
    • romanfll 4 hours ago
      Thanks for your comment.

      To clarify: K is a fixed hyperparameter in this implementation, strictly independent of N. Whether we process 9k points or 90k points, we keep K at ~100.

      We found that increasing K yields diminishing returns very quickly. Since the landmarks are generated along a fixed synthetic topology, increasing K essentially just increases resolution along that specific curve; once you have enough landmarks to define the curve's structure, adding more doesn't reveal new topology… it just adds computational cost to the distance matrix calculation.

      Re: sqrt(N): that is purely a coincidence!

  • jmpeax 9 hours ago
    > They typically need to compare many or all points to each other, leading to O(N²) complexity.

    UMAP is not O(n²); it is O(n log n).

    • romanfll 9 hours ago
      Thanks for your comment! You are right: with approximate nearest-neighbour construction, UMAP comes down to roughly O(N log N). I should have been more precise in the document. The main point is that even O(N log N) could be too much if you run this in a browser. Thanks for clarifying!
      • emil-lp 8 hours ago
        If k=50, then I'm pretty sure O(n log n) beats O(nk).
        • romanfll 4 hours ago
          You are strictly correct for a single pass! log2(9000) ≈ 13, which is indeed much smaller than k = 50. The missing variable in that comparison is the number of iterations: t-SNE and UMAP are iterative optimisation algorithms, and they repeat that O(N log N) step hundreds of times to converge. My approach is a closed-form linear solution (Ax=b) that runs exactly once. So the wall-clock comparison is effectively iterations × (N log N) vs. 1 × (N × k). That need for convergence is where the speedup comes from, not the complexity class per se.
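
          Back-of-envelope with those numbers (the epoch count below is a typical default used for illustration, not a measured figure):

              import math
              N, k, epochs = 9000, 50, 500           # epochs is a hypothetical iteration count
              iterative = epochs * N * math.log2(N)  # ~5.9e7 "units"
              one_shot = N * k                       # ~4.5e5 "units"
              print(iterative / one_shot)            # ~130x before constant factors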
  • benob 8 hours ago
    Is there a pip installable version?
    • romanfll 4 hours ago
      Not yet, but coming...
  • memming 10 hours ago
    first subsample a fixed number of random landmark points from data, then...
    • romanfll 9 hours ago
      Thanks for your comment. You are spot on, that is effectively the standard Nyström/Landmark MDS approach.

      The technique actually supports both modes in the implementation (synthetic skeleton or random subsampling). However, for this browser visualisation, we default to the synthetic sine skeleton for two reasons:

      1. Determinism: Random landmarks produce a different layout every time you calculate the projection. For a user interface, we needed the layout to be identical every time the user loads the data, without needing to cache a random seed.

      2. Topology forcing: By using a fixed sine/loop skeleton, we implicitly 'unroll' the high-dimensional data onto a clean reduced structure. We found this easier for users to visually navigate compared to the unpredictable geometry that comes from a random subset.

      • HelloNurse 8 hours ago
        You don't need a "proper" random selection: if your points are sorted deterministically and not too adversarially, any reasonably unbiased selection (e.g. every Nth point) is pseudorandom.
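
        A sketch of that idea (the sort key here is an arbitrary choice, just to make the selection reproducible):

            import numpy as np

            def strided_landmarks(X, n_landmarks):
                # Deterministic, seed-free pick: stable sort by some key, then take every Nth row.
                order = np.argsort(np.linalg.norm(X, axis=1), kind="stable")
                step = max(1, len(X) // n_landmarks)
                return X[order[::step][:n_landmarks]]

            X = np.random.default_rng(0).normal(size=(9000, 50))
            print(strided_landmarks(X, 100).shape)  # (100, 50)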
  • aw123 8 hours ago
    I asked an LLM to test it on the digits dataset and [here](https://imgur.com/a/XAt0VRU) are the results.

    import numpy as np
    import time
    import matplotlib.pyplot as plt
    from sklearn.base import BaseEstimator, TransformerMixin
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler, MinMaxScaler
    from sklearn.datasets import load_digits
    from sklearn.metrics import pairwise_distances
    from sklearn.manifold import TSNE

    try:
        import umap
        HAS_UMAP = True
    except ImportError:
        HAS_UMAP = False
        print("Warning: 'umap-learn' not installed. Comparison will be skipped.")

    class SineLandmarkReduction(BaseEstimator, TransformerMixin):
        def __init__(self, n_components=2, n_landmarks=50,
                     mode='data_derived',  # 'sine' or 'data_derived'
                     distance_warping=1.0, random_state=42):
            self.n_components = n_components
            self.n_landmarks = n_landmarks
            self.mode = mode
            self.p = distance_warping
            self.random_state = random_state
            self.rng = np.random.RandomState(random_state)

        def _generate_sine_landmarks(self, n_features):
            """Generates the high-dim 'sine skeleton'."""
            a = self.rng.uniform(0.5, 2.0, n_features)
            omega = self.rng.uniform(0.5, 1.5, n_features)
            phi = self.rng.uniform(0, 2 * np.pi, n_features)
            
            t = np.linspace(0, 2 * np.pi, self.n_landmarks)
            
            L_high = (a[:, None] * np.sin(omega[:, None] * t + phi[:, None])).T
            return L_high
    
        def fit(self, X, y=None):
            self.scaler = StandardScaler()
            X_scaled = self.scaler.fit_transform(X)
            n_samples, n_features = X_scaled.shape
            
            if self.mode == 'sine':
                self.L_high = self._generate_sine_landmarks(n_features)
                l_min, l_max = self.L_high.min(), self.L_high.max()
                x_min, x_max = X_scaled.min(), X_scaled.max()
                self.L_high = (self.L_high - l_min) / (l_max - l_min) * (x_max - x_min) + x_min
                
            else: # 'data_derived'
                indices = self.rng.choice(n_samples, self.n_landmarks, replace=False)
                self.L_high = X_scaled[indices].copy()
    
            self.pca_landmarks = PCA(n_components=self.n_components)
            self.L_low = self.pca_landmarks.fit_transform(self.L_high)
            
            self.L0_low = self.L_low[0]
            self.L_others_low = self.L_low[1:]
            
            # Linearised trilateration setup: subtracting the L_0 equation from each
            # ||y - L_i||^2 = d_i^2 leaves a linear system whose rows are 2 * (L_i - L_0).
            self.A = 2 * (self.L_others_low - self.L0_low)
            self.A_pinv = np.linalg.pinv(self.A)
            
            self.L0_sq_norm = np.sum(self.L0_low**2)
            self.Li_sq_norms = np.sum(self.L_others_low**2, axis=1)
            
            return self
    
        def transform(self, X):
            X_scaled = self.scaler.transform(X)
            
            # Distances from every point to every high-dimensional landmark (N x K).
            D = pairwise_distances(X_scaled, self.L_high, metric='euclidean')
            
            if self.p != 1.0:
                D = np.power(D, self.p)
                
            D_sq = D**2
            
            d0_sq = D_sq[:, 0:1]
            di_sq = D_sq[:, 1:]
            
            term_dist = d0_sq - di_sq
            term_geom = self.Li_sq_norms - self.L0_sq_norm
            
            # Right-hand side: b_i = (d_0^2 - d_i^2) + (||L_i||^2 - ||L_0||^2), one row per point.
            B = term_dist + term_geom

            # Closed-form solve for all points at once via the precomputed pseudoinverse.
            Y = np.dot(self.A_pinv, B.T).T
            
            return Y
    
    if HAS_UMAP:
        digits = load_digits()
        X = digits.data
        y = digits.target

        print(f"Dataset: Digits (N={X.shape[0]}, D={X.shape[1]})")
        print("-" * 40)
    
        start = time.time()
        umap_reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=42)
        X_umap = umap_reducer.fit_transform(X)
        umap_time = time.time() - start
        print(f"UMAP Time:  {umap_time:.4f} s")
    
        start = time.time()
        tsne_reducer = TSNE(n_components=2, perplexity=30, random_state=42)
        X_tsne = tsne_reducer.fit_transform(X)
        tsne_time = time.time() - start
        print(f"t-SNE Time: {tsne_time:.4f} s")
    
        start = time.time()
        slr = SineLandmarkReduction(n_landmarks=50, mode='data_derived', distance_warping=0.5)
        X_slr = slr.fit_transform(X)
        slr_time = time.time() - start
        print(f"SLR Time:   {slr_time:.4f} s")
        print("-" * 40)
        print(f"SLR vs UMAP Speedup:  {umap_time / slr_time:.1f}x")
        print(f"SLR vs t-SNE Speedup: {tsne_time / slr_time:.1f}x")
    
        fig, axes = plt.subplots(1, 3, figsize=(15, 5))
        
        sc1 = axes[0].scatter(X_umap[:, 0], X_umap[:, 1], c=y, cmap='Spectral', s=5, alpha=0.7)
        axes[0].set_title(f"UMAP\nTime: {umap_time:.3f}s")
        axes[0].axis('off')
        
        sc2 = axes[1].scatter(X_tsne[:, 0], X_tsne[:, 1], c=y, cmap='Spectral', s=5, alpha=0.7)
        axes[1].set_title(f"t-SNE\nTime: {tsne_time:.3f}s")
        axes[1].axis('off')
        
        sc3 = axes[2].scatter(X_slr[:, 0], X_slr[:, 1], c=y, cmap='Spectral', s=5, alpha=0.7)
        axes[2].set_title(f"Sine Landmark Reduction (SLR)\nTime: {slr_time:.3f}s")
        axes[2].axis('off')
        
        plt.tight_layout()
        plt.savefig('comparison_plot.png', dpi=150, bbox_inches='tight')
        print("\nPlot saved to comparison_plot.png")
        plt.show()
    
    else:
        print("Please install umap-learn to run the comparison: pip install umap-learn")