Page 70 - Data Science Algorithms in a Week
54 Olmer Garcia and Cesar Diaz
minimizing errors. There are two types of supervised learning tasks: classification and
regression.
Classification: In this type, the problem inputs are divided into two or
more classes, and the learner must produce a model that maps unseen
inputs to one or more of these classes. This formulation characterizes most
pattern recognition tasks.
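As an illustrative sketch (not taken from the text), classification can be demonstrated with a minimal nearest-centroid classifier: each class is summarized by the mean of its training points, and an unseen input is assigned to the class with the closest centroid. The helper names `fit_centroids` and `predict` are hypothetical.

```python
# Minimal nearest-centroid classifier (illustrative sketch).
def fit_centroids(samples):
    """samples: dict mapping class label -> list of 2-D points."""
    centroids = {}
    for label, points in samples.items():
        n = len(points)
        centroids[label] = (sum(p[0] for p in points) / n,
                            sum(p[1] for p in points) / n)
    return centroids

def predict(centroids, x):
    """Assign the unseen point x to the class with the nearest centroid."""
    return min(centroids,
               key=lambda c: (x[0] - centroids[c][0]) ** 2 +
                             (x[1] - centroids[c][1]) ** 2)

training = {"A": [(0.0, 0.0), (1.0, 0.0)],
            "B": [(5.0, 5.0), (6.0, 5.0)]}
model = fit_centroids(training)
print(predict(model, (0.5, 0.2)))  # the point lies near class "A"
```

The training pairs (input, class label) play the role of the instructor: the model only generalizes patterns it was shown with labels.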
Regression: When the output space consists of values of continuous
variables (the outputs are continuous rather than discrete), the learning
task is known as the problem of regression or function learning.
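A regression sketch under the same spirit (illustrative, not from the text): least-squares fitting of a line y = a·x + b, where the learner maps inputs to a continuous output rather than a discrete class. The name `fit_line` is hypothetical.

```python
# Least-squares fit of a line y = a*x + b (illustrative sketch).
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n                      # mean of the inputs
    my = sum(ys) / n                      # mean of the outputs
    # slope: covariance(x, y) / variance(x)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    b = my - a * mx                       # intercept through the means
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                 # data generated by y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)                               # slope ~2.0, intercept ~1.0
```

Here the "target values" are real numbers, so the learner minimizes the squared error between predictions and observed outputs instead of counting misclassifications.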
Unsupervised Learning: When the data is a sample of objects without
associated target values, the problem is known as unsupervised learning. In
this case, there is no instructor: the learning algorithm has no labels and is
left on its own to find some “structure” in its input. We have training
samples of objects and may be able to extract some “structure” from them.
If such structure exists, it is possible to exploit this redundancy and find a
short description of the data that captures the similarity between pairs of
objects.
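A common way to find such structure is clustering. The sketch below (illustrative, not from the text) is a one-dimensional k-means: with no labels at all, the algorithm groups points around k means, producing a short description of the data (the cluster centers). The name `kmeans_1d` is hypothetical.

```python
# Minimal 1-D k-means (illustrative sketch of unsupervised structure finding).
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        # update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]     # unlabeled sample
found = sorted(kmeans_1d(data, centers=[0.0, 5.0]))
print(found)                               # centers near 1.0 and 9.0
```

The two centers are exactly the "short description" the text mentions: six numbers are summarized by two, exploiting the redundancy among similar objects.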
Reinforcement Learning: The challenge in reinforcement learning is to learn
what to do in order to maximize a given reward. Indeed, in this type,
feedback is provided in the form of rewards and punishments. The learner
gains information about its actions: a reward or punishment is given based
on the level of success or failure of each action. Ergodicity is important in
reinforcement learning.
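The reward-driven feedback loop can be sketched in the simplest setting, a bandit-style action-value estimate (an illustrative choice, not from the text): the learner keeps a running average Q(a) of the rewards observed for each action it tries, and the reward sequences below are made up. The name `update` is hypothetical.

```python
# Incremental action-value estimation (illustrative bandit sketch).
def update(q, counts, action, reward):
    """Running-average update: Q(a) += (r - Q(a)) / n(a)."""
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]

q = {"left": 0.0, "right": 0.0}
counts = {"left": 0, "right": 0}

# made-up experience: "right" tends to pay 1.0, "left" pays 0.2
for r in [1.0, 1.0, 1.0]:
    update(q, counts, "right", r)
for r in [0.2, 0.2]:
    update(q, counts, "left", r)

best = max(q, key=q.get)
print(best, q)  # "right" ends up with the higher estimated value
```

Unlike supervised learning, the learner is never told which action was correct; it only observes the reward of the actions it actually took, which is why sufficient exploration of every action (and, in the multi-state case, ergodicity) matters.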
Semi-supervised Learning: Consists of the combination of supervised and
unsupervised learning. In some books, it refers to mixing unlabeled data
with labeled data to build a better learning system (Camastra & Vinciarelli,
2007).
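One way to mix the two kinds of data is self-training, sketched below (an illustrative technique chosen here, not one the text prescribes): a classifier fit on the few labeled points pseudo-labels the unlabeled ones and is then refit on the combined set. The names `centroids` and `nearest` are hypothetical.

```python
# Self-training sketch: labeled + pseudo-labeled data (illustrative).
def centroids(pairs):
    """pairs: list of (value, label); returns label -> mean value."""
    sums, counts = {}, {}
    for x, lab in pairs:
        sums[lab] = sums.get(lab, 0.0) + x
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

def nearest(cents, x):
    return min(cents, key=lambda lab: (x - cents[lab]) ** 2)

labeled = [(0.0, "low"), (10.0, "high")]       # scarce labeled data
unlabeled = [1.0, 2.0, 9.0, 8.0]               # plentiful unlabeled data

cents = centroids(labeled)                     # 1) fit on labeled data only
pseudo = [(x, nearest(cents, x)) for x in unlabeled]  # 2) pseudo-label
cents = centroids(labeled + pseudo)            # 3) refit on the combined set
print(cents)  # centers shift toward where the unlabeled data actually lies
```

The unlabeled points never came with targets, yet they still improve the model, which is the core promise of semi-supervised learning.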
Deep Learning
Deep learning has become a popular term. It can be defined as the use of
neural networks with multiple layers on big-data problems. So why is it perceived as a
“new” concept, if neural networks have been studied since the 1940s? Because the
parallel computing power of graphics processing units (GPUs) and distributed systems,
along with efficient optimization algorithms, has made neural networks practical for
contemporary, complex problems (e.g., voice recognition, search engines, and
autonomous vehicles). To better understand this concept, we first present a brief review
of neural networks, and then proceed to some common concepts of deep learning.
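Before the review, a tiny forward pass illustrates why stacking layers matters (an illustrative sketch with hand-set weights, not an example from the text): two layers with a ReLU nonlinearity compute XOR, a function no single linear layer can represent.

```python
# Two-layer network with hand-set weights computing XOR (illustrative).
def relu(z):
    """Rectified linear unit, the common deep-learning nonlinearity."""
    return max(0.0, z)

def forward(x1, x2):
    # hidden layer: two units, weights chosen by hand
    h1 = relu(1.0 * x1 + 1.0 * x2)        # fires if any input is on
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)  # fires only if both are on
    # output layer: linear combination of the hidden activations
    return 1.0 * h1 - 2.0 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, forward(a, b))  # reproduces XOR: 0, 1, 1, 0
```

In practice the weights are not set by hand but learned by the optimization algorithms mentioned above, and GPUs make that optimization feasible for networks with millions of such units.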