Decision trees – the unreasonable power of nested decision rules

(mlu-explain.github.io)

164 points | by mschnell 4 hours ago

8 comments

  • srean 39 minutes ago
    A 'secret weapon' that has served me very well for learning classifiers is to first learn a linear classifier. I am almost hesitant to give this away (kidding).

    Use the non-thresholded version of that linear classifier output as one additional feature dimension over which you learn a decision tree. Then wrap this whole thing up as a boosted tree.

    One of the reasons it works so well is that it plays to their strengths:

    (i) Decision trees have a hard time fitting linear functions (they have to stair-step a lot, therefore many internal nodes) and

    (ii) linear functions are terrible where equi-label regions have a partitioned structure.

    In the decision tree building process, the first cut is usually on the added synthetic linear feature, which earns the tree the linear classifier's accuracy right away and leaves the DT algorithm free to work on the parts where the linear classifier struggles. This idea is not that different from boosting.

    One could also consider different (random) rotations of the data to form a forest of trees built using the steps above, but that was usually not necessary. Or rotate the axes so that all of them are orthogonal to the linear classifier.

    One place where DTs struggle is when the features themselves are very (column) sparse: there are not many places to put a cut.
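
    A minimal sketch of the trick in Python with sklearn (the dataset, model choices, and hyperparameters here are illustrative assumptions, not part of the original recipe):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # 1. Fit a linear classifier and keep its non-thresholded output.
      lin = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      f_tr = lin.decision_function(X_tr)[:, None]  # raw margin, not 0/1
      f_te = lin.decision_function(X_te)[:, None]

      # 2. Append that score as one extra feature dimension.
      X_tr_aug = np.hstack([X_tr, f_tr])
      X_te_aug = np.hstack([X_te, f_te])

      # 3. Boosted trees on the augmented data: the first cut tends to land
      #    on the synthetic feature, inheriting the linear accuracy.
      gbt = GradientBoostingClassifier().fit(X_tr_aug, y_tr)
      print(gbt.score(X_te_aug, y_te))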

  • lokimedes 1 hour ago
    When I worked at CERN around 2010, Boosted Decision Trees were the most popular classifier, exactly due to their (potential for) explainability along with their power of expression. We had a cultural aversion to neural networks back then, especially if the model was used directly in a physics analysis. Times have changed…
    • srean 3 minutes ago
      > Times have changed…

      This makes me a little concerned.

      The Ptolemaic system achieved a far better fit than the Copernican one because it was a universal approximator (an epicyclic system is a form of Fourier analysis). But the epicycles were not the right thing to use to work out the causal mechanics.

      In physics we would want to do more than just curve fitting.

    • wodenokoto 1 hour ago
      Are boosted decision trees the same as a boosted random forest?
      • hansvm 37 minutes ago
        The training process differs, but the resulting model only differs in data rather than code -- you evaluate a bunch of trees and add them up.

        For better or for worse (usually for better), boosted decision trees work harder to optimize the tree structure for a given problem. Random forests rely on enough trees being good enough.

        Ignoring tree split selection, one technique people sometimes use makes the two model types more related: in gradient boosting, once the splits are chosen, optimizing the weights/leaves is a sparse linear algebra problem (iterative if your error is not MSE). That step would unify some part of the training between the two model types.
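
        As a small sketch of the "differs in data rather than code" point (the trees are assumed to be already-fitted sklearn regressors; the interfaces are illustrative):

          def predict_boosted(trees, x, learning_rate=0.1, base_score=0.0):
              # boosting: weighted sum of tree outputs on top of a base score
              return base_score + learning_rate * sum(t.predict([x])[0] for t in trees)

          def predict_forest(trees, x):
              # random forest: plain average of the same kind of tree outputs
              return sum(t.predict([x])[0] for t in trees) / len(trees)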

      • boccaff 1 hour ago
        short answer: No.

        longer answer: Random forests average multiple trees that are trained in a way that reduces the correlation between them (bagging with modified trees). Boosting trains trees sequentially, with each new tree fit to the residuals left by the ensemble so far.

        I am assuming that you meant boosted decision trees, sometimes called gradient-boosted decision trees, since one usually boosts individual trees rather than forests. I think xgboost added boosted RFs, and you can boost any supervised model, but it is not usual.
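
        A minimal sketch of that training difference using sklearn trees (the loss, depths, and tree counts here are illustrative assumptions):

          import numpy as np
          from sklearn.tree import DecisionTreeRegressor

          def fit_bagged(X, y, n_trees=100, seed=0):
              # bagging: independent trees on bootstrap resamples, averaged later
              rng = np.random.default_rng(seed)
              trees = []
              for _ in range(n_trees):
                  idx = rng.integers(0, len(X), len(X))
                  trees.append(DecisionTreeRegressor(max_features="sqrt").fit(X[idx], y[idx]))
              return trees

          def fit_boosted(X, y, n_trees=100, lr=0.1):
              # boosting: each new tree is fit to the residuals left so far
              pred = np.full(len(y), y.mean())
              trees = []
              for _ in range(n_trees):
                  t = DecisionTreeRegressor(max_depth=3).fit(X, y - pred)
                  pred += lr * t.predict(X)
                  trees.append(t)
              return trees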

  • fooker 2 hours ago
    Fun fact: single-bit neural networks are decision trees.

    In theory, this means you can 'compile' most neural networks into chains of if-else statements, but it's not well understood when this sort of approach works well.
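
    A toy illustration of the correspondence (a made-up two-input network with step activations, unrolled by hand):

      # One hidden layer of step ("single bit") units: each unit is a
      # threshold test, so the network unrolls into nested if-else rules.
      def net(x1, x2):
          h1 = 1 if 0.7 * x1 - 0.3 * x2 > 0.5 else 0   # hidden unit 1
          h2 = 1 if -0.2 * x1 + 0.9 * x2 > 0.1 else 0  # hidden unit 2
          return 1 if h1 + h2 > 1.5 else 0             # output: both must fire

      def net_as_tree(x1, x2):
          # the same function written as a decision tree
          if 0.7 * x1 - 0.3 * x2 > 0.5:
              if -0.2 * x1 + 0.9 * x2 > 0.1:
                  return 1
          return 0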

  • kqr 2 hours ago
    Experts' nebulous decision making can often be modelled with simple decision trees, or even decision chains (linked lists). Even when the expert thinks their decision making is more complex, a simple decision tree often models the expert's decisions better than the rules proposed by the experts themselves.

    I've long dismissed decision trees because they seem so ham-fisted compared to regression and distance-based clustering techniques, but they are undoubtedly very effective.

    See more in chapter seven of the Oxford Handbook of Expertise. It's fascinating!

    • ablob 1 hour ago
      I once saw a visualization that basically partitioned decisions on a 2D plane. From that perspective, decision trees might just be a fancy name for k-d trees partitioning the possibility space, with an action attached to each volume.

      Given that assumption, the nebulous decision making could stem from experts' decisions being more nuanced in the granularity of the surface separating two distinct actions. It might be a rough technique, but it should nonetheless lead to some pretty good approximations.

      • srean 1 hour ago
        You have this a little backwards, so much so that it is unintentionally hilarious.

        Decision trees predate KD trees by a decade.

        Both use recursive partitioning of a function's domain, a fundamental and old idea.

  • zelphirkalt 2 hours ago
    Decision trees are great. My favorite classical machine learning algorithm or group of algorithms, as there are many slight variations of decision trees. I wrote a purely functional (kind of naive) parallelized implementation in GNU Guile: https://codeberg.org/ZelphirKaltstahl/guile-ml/src/commit/25...

    Why "naive"? Because there is no such thing as NumPy or data frames in the Guile ecosystem to my knowledge, and the data representation is therefore probably quite inefficient.

    • srean 1 hour ago
      What benefit do numpy or dataframes bring to decision tree logic over what is available in Guile already? Honest question.

      Guile-like languages are very well suited to decision trees, because manipulating and operating on trees is their mother tongue. The only thing that would be a bit more work would be to compile the decision tree into machine code, so that one doesn't have to traverse a runtime structure; the compiled form is more efficient.
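
      As a sketch of that compilation idea (in Python for illustration; the node format is made up, and a Guile version would emit (if ...) forms instead):

        # Hypothetical node format: a leaf is a plain value, an internal
        # node is (feature_index, threshold, left_subtree, right_subtree).
        def compile_tree(node, indent="    "):
            if not isinstance(node, tuple):
                return f"{indent}return {node!r}\n"
            feat, thr, left, right = node
            return (f"{indent}if x[{feat}] <= {thr}:\n"
                    + compile_tree(left, indent + "    ")
                    + f"{indent}else:\n"
                    + compile_tree(right, indent + "    "))

        tree = (0, 0.5, "no", (1, 2.0, "no", "yes"))
        src = "def predict(x):\n" + compile_tree(tree)
        exec(src)                    # defines predict(x) as straight branches
        print(predict([0.9, 3.0]))   # -> yes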

      BTW take a look at Lush, you might like it.

      https://lush.sourceforge.net/

      https://news.ycombinator.com/item?id=2406325

      If you are looking for vectors and tensors in Guile, there is this

      https://wedesoft.github.io/aiscm/

      • zelphirkalt 38 minutes ago
        I think data frames are quite memory efficient and can store non-uniform data types (as can vectors in Guile). Generally, a ton of work has gone into making operations on data frames fast; I don't think a normal vector or multi-dimensional array can easily compete. Data frame operations are probably also compiled down to quite efficient machine code. Not sure whether Guile's native data structures can match that; maybe they can.

        Also I think I did not optimize for memory usage, and my implementation might keep copies of subsets of data points for each branch. I was mostly focused on the algorithm, not that much on data representation.

        Another point, not really efficiency related, is that data frames come with lots of functionality for handling non-numeric data. If I recall correctly, they support things like one-hot encoding. My implementation simply assumes all you have is numbers.

        There might also be efficiency left on the table in my implementation because I use Guile's native number types, which allow for arbitrarily large integers (often unnecessary), and I might even have used exact fractions instead of inexact floats.

        I guess, though, that with good, suitable data structures and a bit of reworking, one could get a production-ready implementation out of my naive one. It would still be trivially parallelized, with the linear speedup my purely functional implementation enables (within some bounds, probably, since decision trees usually shouldn't be too deep anyway, to avoid overfitting).

        Thanks for the links!

        • srean 32 minutes ago
          For linear algebraic transformations applied to several rows at once, I wholeheartedly agree.

          Not so convinced about decision trees, though (they process one row at a time).

          Yeah, unless you actually had to deal with arbitrarily large integer features, Guile's integers would come with a big efficiency hit.

      • boccaff 59 minutes ago
        Tree algorithms in sklearn use parallel arrays to represent the tree structure.
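
        Concretely, a fitted sklearn tree exposes those arrays directly, and traversal is just index chasing (a small sketch; the dataset and depth are arbitrary):

          from sklearn.datasets import load_iris
          from sklearn.tree import DecisionTreeClassifier

          X, y = load_iris(return_X_y=True)
          t = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y).tree_

          # parallel arrays: children_left/right, feature, threshold, value
          node, x = 0, X[0]
          while t.children_left[node] != -1:  # -1 marks a leaf
              if x[t.feature[node]] <= t.threshold[node]:
                  node = t.children_left[node]
              else:
                  node = t.children_right[node]
          print(t.value[node])  # class distribution at the reached leaf
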
  • xmprt 3 hours ago
    Interesting website and great presentation. My only note is that the color contrast of some of the text makes it hard to read.
    • thesnide 3 hours ago
      Exactly my thought, and here the reader view of FF is a godsend.

      Having 'accessible' content is not only for people with disabilities; it also helps with bad color taste.

      well, at least bad taste for readable content ;)

  • moi2388 2 hours ago
    That was beautifully presented!