Page 169 - AWSAR 2.0
or bad. This decision depends entirely on questions such as:
Whether the hospitality extended was adequate? Was the place clean? Was there proper ambient lighting? Was the place crowded? Was the food served in time? How was the service? Did the cost justify the taste?
If we can form such near-perfect questionnaires, then our brain becomes capable of deciding on our experience at the restaurant. Similarly, a neural network produces its output with the help of its intricate neuro-synaptic circuitry. By the way, you may be wondering: how did we form the above questions about our restaurant experience? A straightforward answer may be: we have been to many restaurants in our life, and those experiences made preparing the questionnaire quite easy. Isn't it? Some questions come directly from our own experience, and some come from what experts say about their experiences with various restaurants.
A special kind of AI tool that can help us form these external features from past experience along with some expert knowledge is called Inductive Logic Programming (ILP). This is the tool that can also help us produce features using data and expert knowledge defined in terms of logical clauses. We build our ILP engine with the idea of a hide-and-seek game, where there are infinite locations in which a set of hiders can hide, and the goal of the ILP engine is to find them. Essentially, a hider means a good feature that will go as an input to our neural network.
The Setting
Let’s go back to our village version of hide-and-seek that we have described earlier.
Mr. Tirtharaj Dash || 145
Let there be n possible locations. The hider has to select one of these locations and hide there until the seeker finds him. The hider's selection of a hiding location is based on preferential ordering; that is, the hider assigns some probability to each location for selection. This forms a distribution over locations called the 'hider distribution'. Naturally, the hider does not disclose this distribution to the seeker. In the worst-case scenario, the seeker will not find the hider until he has visited (opened) all locations other than the one where the hider is hiding. In this case, the seeker makes n-1 mistakes before he finds the hider. Can the seeker do better than this? The answer to this is "Yes". But the very first idea that comes to our mind is:
If the seeker could know the hider distribution exactly, then every time the hider hides using his distribution, the seeker would be able to find him.
Isn’t it?
Interestingly, our theory proves that this again leads to the worst-case result of n-1 mistakes. Further, our theory finds a seeker who is a bit more random than the hider. Let's see what exactly we mean here.
Let's assume that the hider distribution is known to the seeker, and the seeker knows that his performance will be bad if he uses that same distribution himself. So, he constructs a distribution in which locations for which the hider's selection probabilities are low get higher importance, and locations for which the hider's selection probabilities are high get lower importance. Essentially, this seeker distribution has more randomness than the hider's distribution, yet it minimizes the mean number of mistakes the seeker makes before finding the hider.
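To get some intuition for this, here is a small, hypothetical Python simulation (a sketch for illustration only, not the actual theory or code behind our ILP engine). A seeker repeatedly guesses locations at random from a chosen distribution until he finds the hider. When the seeker copies the hider's own distribution, the mean number of mistakes comes out close to n-1; a flatter, "more random" distribution (here, illustratively, weights proportional to the square roots of the hider's probabilities) does noticeably better.

```python
import random

# A toy simulation of the hide-and-seek game (an illustrative sketch,
# not the actual theory or implementation described in this essay).
def mean_mistakes(p, q, trials=20000, seed=0):
    """Mean number of wrong guesses before the seeker finds the hider.

    Each round: the hider's location is drawn from distribution p; the
    seeker then guesses locations independently from distribution q
    (with replacement) until he hits the hider's location.
    """
    rng = random.Random(seed)
    locs = range(len(p))
    total = 0
    for _ in range(trials):
        hider = rng.choices(locs, weights=p)[0]
        mistakes = 0
        while rng.choices(locs, weights=q)[0] != hider:
            mistakes += 1
        total += mistakes
    return total / trials

# A hypothetical hider distribution over n = 5 locations.
p = [0.5, 0.2, 0.15, 0.1, 0.05]

q_same = p                      # seeker copies the hider's distribution
q_flat = [x ** 0.5 for x in p]  # flatter ("more random"); random.choices
                                # normalizes the weights internally

print(round(mean_mistakes(p, q_same), 2))  # close to n - 1 = 4
print(round(mean_mistakes(p, q_flat), 2))  # noticeably fewer mistakes
```

In this toy with-replacement setting, copying the hider's distribution gives a mean of exactly n-1 mistakes in expectation, while the flatter distribution trades a little precision on likely spots for far fewer wasted guesses on unlikely ones.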