B. Bias and fairness in AI algorithms: AI systems, including emotional AI, can inherit biases from the data used to train them. According to a 2018 study by researchers at MIT and Stanford, face recognition software from well-known technology companies such as IBM, Microsoft and Face++ showed notable biases, especially against women and people with darker skin tones. For AI algorithms to operate fairly, it is crucial to select representative and diverse training datasets and to apply strategies that reduce bias in the models themselves.
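For readers who want a concrete sense of what such a bias check can look like in practice, the short Python sketch below compares a model's error rate across two demographic groups and reports the gap. It is an illustration only: the group names, the sample records and the disparity measure are hypothetical and are not drawn from the study cited above or from any particular audit toolkit.

# Minimal sketch: auditing a classifier's error rate across demographic groups.
# All records below are hypothetical and exist only to illustrate the idea.
from collections import defaultdict

# Each record: (group label, true label, model prediction)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

# Error rate per group; a large gap between groups signals a potential bias problem.
rates = {group: errors[group] / totals[group] for group in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: error rate = {rate:.2f}")

# A simple disparity measure: the worst group's error rate divided by the best group's.
worst, best = max(rates.values()), min(rates.values())
if best > 0:
    print(f"error-rate disparity (worst/best): {worst / best:.2f}")

Audits of this kind are typically run on each demographic slice of a held-out test set before a model is deployed, which is one practical way of acting on the point about representative and diverse data made above.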
C. The potential for emotional manipulation: Emotional AI can be used to manipulate people for a range of purposes, such as social engineering, marketing and politics. One notable incident involved the now-defunct data firm Cambridge Analytica, which used psychological profiling and targeted advertising to influence voter behaviour during the 2016 US presidential election. Protecting the public interest requires establishing rules and laws that prevent the unethical use of emotional AI for manipulation.

D. Balancing AI-driven emotional support with human interaction: While AI-driven emotional support technologies, such as therapeutic chatbots and adaptive learning systems, can be highly beneficial, it is crucial to balance their use with the maintenance of strong human ties. A 2020 study published in the Journal of Medical Internet Research found that although AI-driven mental health interventions can be helpful, they should not replace human therapists. Integrating AI-powered tools with human support frameworks helps ensure that technology amplifies, rather than supplants, human relationships.

E. Deepfakes: 'Deepfake' is a portmanteau of 'deep learning' and 'fake'. A deepfake is a type of synthetic media in which one person's identity is replaced with another's; the term can also refer to computer-generated images of people who do not exist. Deepfake technology combines deep learning and facial recognition algorithms with artificial neural networks such as variational autoencoders (VAEs) and generative adversarial networks (GANs). Deepfake videos are among the newest techniques used by cybercriminals, and several cases have already been filed in India.
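To make the GAN idea mentioned in the deepfake section more concrete, the sketch below shows a minimal generator-versus-discriminator training loop on low-dimensional toy data. It is an illustration only, assuming PyTorch is available: the network sizes, learning rates and synthetic 'real' data are arbitrary, and it is nothing like an actual deepfake pipeline, which operates on images or video.

# Minimal GAN sketch: a generator learns to produce samples that a discriminator
# cannot distinguish from 'real' data. Toy vectors are used instead of images.
import torch
from torch import nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real training data
    fake = generator(torch.randn(64, latent_dim))  # generator's current forgeries

    # 1) Train the discriminator to label real samples 1 and fake samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator so the discriminator calls its fakes real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

Broadly, the same adversarial contest, scaled up to face images and combined with reconstruction networks such as autoencoders, is how deepfake generators achieve their realism.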
Conclusion:
To summarize, emotional AI, or affective computing, represents an advanced branch of artificial intelligence that aims to give machines the ability to recognize, interpret and simulate human emotions accurately. It is a field that brings technology together with psychology, cognitive science and human-computer interaction, leading to systems that can sense and respond to the emotional states of humans. Emotional AI is also growing in importance as a tool in the banking industry, where it is used in areas such as marketing and customer service, security and compliance, and risk assessment.

However, there are challenges, such as data privacy concerns, bias in machine learning algorithms, emotional manipulation by machines and the recent rise of deepfakes. The responsible development and application of emotional AI depend heavily on addressing these ethical issues. By encouraging responsible dialogue, setting rules and promoting ethical behaviour, we can maximize the potential advantages of emotional AI while lowering the risks and ensuring a beneficial effect on society.

Note: 'Views and opinions expressed in the article are the author's and not the Bank's.'