On 20 March 2020, researchers from MIT presented an approach to shrinking machine learning models that not only reduces their size effectively, but has been shown to consistently outperform other model-compression methods.
Alex Renda, or @alex_renda_ on Twitter, called it a pruning algorithm that fits in a Tweet.
His tweet is here.
Alex also mentioned that “the standard things people do to prune their models are crazy complicated” — and, truly, no argument there.
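For a sense of why a tweet is enough, the core idea of magnitude pruning — which this line of work builds on — can be sketched in a few lines of NumPy. The function name and parameters below are illustrative, not taken from the paper or the tweet:

```python
import numpy as np

def magnitude_prune(weights, fraction=0.25):
    """Zero out the smallest-magnitude `fraction` of weights.

    This is plain magnitude pruning: weights whose absolute value
    falls below the chosen quantile are set to zero, and a binary
    mask records which weights survive.
    """
    threshold = np.quantile(np.abs(weights), fraction)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Toy example: prune a random 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, fraction=0.25)
print(mask.sum(), "of", w.size, "weights survive")
```

In practice this step is wrapped in a loop — train, prune a fraction of the remaining weights, then retrain — but the pruning rule itself really is this simple.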
This is exciting because, as more machine learning models move onto smaller devices like smartphones and laptops, they need to be compact in order to use less storage, run faster, and consume less processing power, which in turn saves battery.
Most importantly, a well-pruned model that still performs well is critical because it means features powered by machine learning can run directly on your device, giving you fast responses and a great experience.
We’re looking forward to this new research being applied.