How to Use the MSE in Data Science
Mean squared error (MSE) for performance evaluation and optimization
Let’s talk about a crucial piece that’s subtly missing from most discussions about the mean squared error (MSE): metrics and loss functions are not quite the same thing. You need two separate scoring functions for model performance evaluation and optimization in machine learning. The MSE could be either or neither… that’s up to you.
In a previous article, I explained this at a high level, but I didn’t give you concrete examples. In this article, I’ll show instead of tell, so you can see more clearly what I mean by performance evaluation versus optimization. I’ll use the mean squared error (MSE) for the demo, but please bear in mind that while the MSE is an easy metric to work with, it’s not always the best choice for your needs. Let’s dive in!
What is the MSE?
The mean squared error (MSE) is one of many metrics you could use to measure your model’s performance. You calculate the MSE by finding the errors, squaring them, and taking the mean. If this feels fuzzy or if you’re not sure what a model error is, I recommend taking a quick detour to my gentle MSE intro article before continuing.
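Those three steps (find the errors, square them, take the mean) translate directly into a few lines of code. Here’s a minimal sketch using NumPy, with made-up target values and predictions just for illustration:

```python
import numpy as np

# Hypothetical observed values and model predictions (made-up numbers).
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

errors = y_true - y_pred          # step 1: find the errors
squared_errors = errors ** 2      # step 2: square them
mse = squared_errors.mean()       # step 3: take the mean
print(mse)                        # → 0.875
```

Squaring does two jobs at once: it keeps negative and positive errors from canceling out, and it punishes large errors more than small ones.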
Why might we wish to calculate the MSE?
Two* main reasons:
- Performance evaluation: at a glance, how well is our model doing? In other words, can we get a quick read on what we’re working with?
- Model optimization: is this the best possible fit or can we improve it? In other words, which model gets closest to our datapoints?
Performance evaluation (metric)
The goal of performance evaluation is for a person (you, me, whoever) to read a score and understand something about our model.
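To make that concrete, here’s a toy sketch (all numbers invented) of using the MSE as a metric to get a quick read on two competing models fit to the same targets, where the lower score tells a human reader which model is doing better:

```python
import numpy as np

# Toy data: the same targets scored against two hypothetical models.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
pred_a = np.array([1.1, 2.0, 2.8, 4.3])   # model A's predictions
pred_b = np.array([0.5, 2.5, 3.5, 3.0])   # model B's predictions

def mse(y, yhat):
    """Mean squared error: average of the squared errors."""
    return np.mean((y - yhat) ** 2)

# At a glance: the lower MSE identifies the better-fitting model.
print(f"Model A MSE: {mse(y_true, pred_a):.4f}")  # 0.0350
print(f"Model B MSE: {mse(y_true, pred_b):.4f}")  # 0.4375
```

Used this way, the MSE is a metric: nothing about the models changes, we’re just reading a score to evaluate them.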