AI in Finance: Need for explainability and trust
Speaker: Nitendra Rajput – Gurgaon, India
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
The field of Artificial Intelligence has seen tremendous technological advancement in the past few years. Such advancement means that algorithms can now move beyond the lab and work on real-world problems and datasets. Accuracies have increased to the point where these algorithms can be relied upon for key decision making. We have thus reached a stage where humans depend on algorithms to solve critical problems: making life-and-death decisions while driving, deciding which drug to give to which patient, or deciding which company to acquire. However, such high-stakes decisions need supporting reasoning before decision-makers can begin to trust the output of AI systems.
In this talk, we will elaborate on a sub-area of Artificial Intelligence that deals with the ability to justify, explain, and sell the output of an AI system. We will discuss how certain machine learning algorithms can be designed to be naturally explainable, and how the output of a machine learning algorithm can be made interpretable, and hence trustworthy, to decision-makers.
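To illustrate what "naturally explainable" can mean, here is a minimal sketch (not from the talk itself; the feature names and weights are assumed for illustration): a linear scoring model whose prediction decomposes exactly into per-feature contributions, so every decision comes with a built-in justification.

```python
# Illustrative sketch: an additive (linear) model is naturally explainable
# because its score is exactly the sum of per-feature contributions.

def explain_linear(weights, bias, features):
    """Return the model score and each feature's additive contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-risk weights and applicant (assumed, for illustration).
weights = {"income": 0.4, "debt_ratio": -1.2, "late_payments": -0.8}
bias = 0.5
applicant = {"income": 1.0, "debt_ratio": 0.3, "late_payments": 2.0}

score, contribs = explain_linear(weights, bias, applicant)
# Each contribution answers "why": here late_payments contributes -1.6,
# the dominant reason for the low overall score of -1.06.
```

Because the decomposition is exact rather than approximate, a decision-maker can audit precisely which inputs drove the outcome; post-hoc methods for opaque models trade this exactness for flexibility.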
About this Lecture
Number of Slides: 25
Duration: 60 minutes
Languages Available: English
Last Updated:
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.