A method to shrink machine learning models effectively

Researchers from MIT find a way to shrink machine learning models effectively.


On 20 March 2020, researchers from MIT presented an approach to shrinking machine learning models that not only reduces their size effectively, but has also been shown to consistently outperform other pruning methods.

Alex Renda (@alex_renda_ on Twitter) called it "a pruning algorithm that fits in a tweet."


Alex also mentioned that "the standard things people do to prune their models are crazy complicated". No argument there.
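To get a sense of how simple the tweeted recipe is by comparison: the core pruning step is just global magnitude pruning, that is, zero out the smallest-magnitude weights across the whole network, then retrain and repeat. Here is a minimal numpy sketch of that step; the `magnitude_prune` helper and the toy layer shapes are illustrative, not the paper's actual code:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Globally zero out the smallest-magnitude weights.

    `weights` is a list of numpy arrays standing in for a network's
    weight tensors; `fraction` is the share of all weights to remove.
    Returns pruned copies of the tensors plus the binary keep-masks.
    """
    # Find one magnitude threshold across every tensor (global pruning).
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, fraction)

    # Keep only weights whose magnitude exceeds the threshold.
    masks = [np.abs(w) > threshold for w in weights]
    pruned = [w * m for w, m in zip(weights, masks)]
    return pruned, masks

# Toy example: two random "layers", prune 80% of all weights.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 4)), rng.normal(size=(8, 2))]
pruned, masks = magnitude_prune(layers, 0.8)
kept = sum(int(m.sum()) for m in masks)
total = sum(m.size for m in masks)
print(kept, total)  # roughly 20% of the 32 weights survive
```

In the full recipe, this pruning step alternates with retraining the surviving weights, which is what recovers the accuracy the pruning removes.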

This is exciting because as more and more machine learning models move onto smaller devices like our smartphones and laptops, they need to be made small in order to use less storage, run faster, and use less processing power, which in turn saves your battery.

Most importantly, a well-pruned model that still performs well matters because it lets features powered by machine learning run directly on your device, giving you fast responses and a great experience.

We’re looking forward to this new research being applied.

Eugene Ching, founder of Qavar, an AI and cybersecurity company. We use machine learning to bring insights into your business, and defend you against digital threats.

Don't miss out. Find out how leveraging AI or automation can help you.

Subscribe to receive practical tips, advice and ideas on how AI, machine learning and technology can help you grow your business.