the output using those values. As you may have guessed, it performs quite poorly. However, we can test the accuracy of our model's forecasts by comparing them to the expected results, and then we can change the values we use for W and b so that our model produces more accurate forecasts.
This procedure is then repeated. One round of weight
and bias updates is referred to as a “training step.”
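To make this concrete, here is a minimal sketch of repeated training steps, assuming a simple linear model y = W * x + b trained by gradient descent on squared error; the data and learning rate are hypothetical, chosen only for illustration.

    import numpy as np

    # Hypothetical data: x = input feature, y = expected result
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([3.0, 5.0, 7.0, 9.0])

    W, b = 0.0, 0.0            # initial (poor) values for W and b
    learning_rate = 0.01       # hypothetical step size

    for step in range(100):    # each pass is one "training step"
        y_pred = W * x + b                  # forecast with current W and b
        error = y_pred - y                  # compare forecasts to expected results
        grad_W = 2 * np.mean(error * x)     # gradient of mean squared error w.r.t. W
        grad_b = 2 * np.mean(error)         # gradient w.r.t. b
        W -= learning_rate * grad_W         # change W toward better forecasts
        b -= learning_rate * grad_b         # change b toward better forecasts

After enough steps, W approaches 2 and b approaches 1, the values that fit this toy data.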
Let's take a closer look at what this means for our dataset in this specific scenario. At first glance, it appears that we've simply drawn a random line through the data. As the training progresses, the line gets closer and closer to an optimal separation of wine and beer.
EVALUATION PHASE
After the training phase is completed, it is time to evaluate the model to see if it is any good. The dataset that we previously set aside comes into play at this point in the process. The evaluation phase allows us to test our model using data that was never used for training. The resulting metric lets us estimate how well the model will perform when applied to data it has not yet been exposed to, which is meant to be an indication of how the model might perform in the real world.
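As a sketch of what this evaluation might look like, the snippet below compares a model's forecasts on held-out data against the true labels; the labels and predictions here are hypothetical stand-ins (0 = beer, 1 = wine).

    import numpy as np

    def accuracy(y_true, y_pred):
        # Fraction of evaluation examples the model got right
        return np.mean(np.array(y_true) == np.array(y_pred))

    y_true = [0, 1, 1, 0, 1, 0, 0, 1]   # actual drinks in the evaluation set
    y_pred = [0, 1, 0, 0, 1, 0, 1, 1]   # model's forecasts for those examples
    print(accuracy(y_true, y_pred))     # 0.75, a rough estimate of real-world performance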
According to a commonly used rule of thumb, a good training-evaluation split should be in the 80/20 or 70/30 range. The right ratio depends largely on the size of the original dataset: if you have a large amount of data, a smaller percentage of it may be sufficient for the evaluation dataset.
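For illustration, an 80/20 split can be produced with scikit-learn's train_test_split; the feature matrix and labels below are random placeholders standing in for the wine-and-beer data.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Hypothetical features (e.g. colour, alcohol content) and labels (0 = beer, 1 = wine)
    X = np.random.rand(100, 2)
    y = np.random.randint(0, 2, size=100)

    # Reserve 20% of the examples for the evaluation phase
    X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.2, random_state=42)
    print(len(X_train), len(X_eval))    # 80 20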