Figure 2 shows the average search time taken by each action source, from which the slowest action source, the one that slows the system down when searching, can be identified.

Figure 2: Graph indicating the average search time of action sources

Figure 3 indicates the search times for the available products, from which the product that returns results in the lowest search time can be identified.

Figure 3: Graph indicating the average search time of products
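Both identifications reduce to simple aggregate comparisons over the recorded timings. As an illustration only, with made-up action-source names and timing values rather than the project's actual measurements, the slowest action source can be picked out as follows:

# Hypothetical search timings in seconds, grouped by action source.
timings = {
    "orders":   [0.42, 0.55, 0.48],
    "products": [1.90, 2.10, 2.30],
    "users":    [0.30, 0.28, 0.35],
}

# Average per source, then take the maximum: the slowest action source.
averages = {source: sum(t) / len(t) for source, t in timings.items()}
slowest = max(averages, key=averages.get)
print(slowest, round(averages[slowest], 2))  # products 2.1

The same comparison with min instead of max yields the fastest product, as in Figure 3.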
5 RESULTS AND DISCUSSION
The Elasticsearch configuration was up and running successfully in the test environment after some effort. This can be considered a successful implementation, because the testing proved the stability of the database migration in that environment. Therefore, after the client's confirmation, the changes can also be released to the live production environment.

The storage issue was resolved by the data cubes. This part of the research was not communicated properly to the client; therefore, it would be an additional development for the enhancement of resource utilization. Since the developments are not strictly tied to the domain, this solution could be applied to other projects as well, with suitable modifications. Creating the data cubes without any tools, relying only on the basic concept of data cubes, was a challenge, but that will be an advantage in the future, since the solution does not depend on any framework or utility.

As a further improvement to this Business Intelligence part, the database could be scheduled to keep statistical data according to the following structure:

Keep 3 weeks of raw data. This makes it possible to trace the reasons for any recent errors recorded in the statistical data.

After 3 weeks, keep 2 weeks of data averaged per minute. According to this, data older than 3 weeks is reduced to per-minute average data and the raw data is deleted.

Keep hourly data for one month from that point. Data older than 5 weeks is averaged into hourly data and the rest is deleted.

After a month, keep daily data for 3 months.

After 3 months, keep weekly data for 1 year.

Keep monthly data from 1 year onwards.

Furthermore, these time periods need to be analyzed against the client's requirements so that they remain relevant to the client's usage patterns.
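To make the scheduling concrete, the following is a minimal sketch of the first rollup step, averaging raw rows older than 3 weeks into per-minute rows and then deleting the aggregated raw data. It uses Python with an in-memory SQLite database as a stand-in for MySQL, and the table and column names (raw_stats, per_minute_stats, ts, value) are hypothetical, not taken from the project:

import sqlite3

# In-memory stand-in for the MySQL statistics database; schema is hypothetical.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE raw_stats (ts TEXT, value REAL);
    CREATE TABLE per_minute_stats (minute TEXT PRIMARY KEY, avg_value REAL);
""")
db.executemany("INSERT INTO raw_stats VALUES (?, ?)", [
    ("2017-01-01 10:00:05", 120.0),
    ("2017-01-01 10:00:35", 140.0),
    ("2017-01-01 10:01:10", 90.0),
])

# Rollup: average raw rows older than the cutoff into per-minute rows,
# then delete the raw rows that were aggregated.
cutoff = "2017-02-01 00:00:00"  # in production: roughly now minus 21 days
db.execute("""
    INSERT OR REPLACE INTO per_minute_stats (minute, avg_value)
    SELECT strftime('%Y-%m-%d %H:%M', ts), AVG(value)
    FROM raw_stats
    WHERE ts < ?
    GROUP BY 1
""", (cutoff,))
db.execute("DELETE FROM raw_stats WHERE ts < ?", (cutoff,))
db.commit()

print(db.execute(
    "SELECT * FROM per_minute_stats ORDER BY minute").fetchall())
# [('2017-01-01 10:00', 130.0), ('2017-01-01 10:01', 90.0)]

In MySQL itself the same step could run as a scheduled event or an external cron job, and the hourly, daily, weekly and monthly rollups follow the same pattern with coarser date truncation.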
Under this structure, the storage savings of the suggested solution are as follows:

For 3 weeks, keep the original data (no compression).
For the next 2 weeks (per minute): 60 * 24 * 14 = 20160 rows
For the next month (per hour): 24 * 30 = 720 rows
For the next three months (per day): 30 * 3 = 90 rows
For the next one year (per week): 4 * 12 = 48 rows
For another 5 years (per month): 12 * 5 = 60 rows
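As a quick check, these row counts can be reproduced directly; the short Python snippet below restates the arithmetic (the 48 weekly rows follow from the 4-weeks-per-month approximation used above, whereas a calendar year has 52 weeks):

# Aggregated rows retained per tier, reproducing the arithmetic above.
tiers = {
    "per minute (2 weeks)": 60 * 24 * 14,  # 20160 rows
    "per hour (1 month)": 24 * 30,         # 720 rows
    "per day (3 months)": 30 * 3,          # 90 rows
    "per week (1 year)": 4 * 12,           # 48 rows (4 weeks/month approximation)
    "per month (5 years)": 12 * 5,         # 60 rows
}
for tier, rows in tiers.items():
    print(f"{tier}: {rows} rows")
print("total beyond the raw window:", sum(tiers.values()))  # 21078 rows

Roughly twenty-one thousand aggregated rows covering more than six years is small next to the continuously growing raw data, which is where the storage saving comes from.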