Tesh (Nitesh) Goyal is the Head of User Research on Responsible AI Tools at Google Research. His work at Google has led to the launch of ML-based tools such as Harassment Manager, which empowers targets of online harassment; ML-based moderation that reduces the production of toxic content on platforms like OpenWeb; and multiple NLP-based tools that reduce biased sensemaking in criminal justice. He received his MSc in Computer Science from UC Berkeley and RWTH Aachen before receiving his PhD in Information Science from Cornell University. His research has been supported by a German Government Fellowship, the National Science Foundation, and a MacArthur Genius Grant. Frequently collaborating with industry (Google Research, Yahoo Labs, HP Labs, Bloomberg Labs), he has published in top-tier HCI conferences and journals (CHI, CSCW, JASIST, ICTD, ICIC, and Ubicomp/IMWUT) and has received two Best Paper Honorable Mention awards (CHI, CSCW) and one nomination (ICTD Journal).
Tesh has served on the Organizing Committee for ACM SIGCHI conferences multiple times, including as Technical Program Chair at CHI 2023, GDI Chair at CHI 2021, Doctoral Consortium Chair at IMX 2021, and D&I Lunch Chair at CHI 2018-2020, and more than ten times as Associate Chair at CHI and CSCW conferences since 2016. Tesh has also been appointed Adjunct Professor in the NYU Computer Science Department. His work is frequently covered in the press, and he has been invited to speak at academic institutions in North America, Europe, and Asia on Responsible AI and Sensemaking.
"You have to prove the threat is real": Understanding the Needs of Female Journalists and Activists
Online harassment is a major societal challenge that impacts multiple communities. Some members of the community, like female journalists and activists, bear significantly higher impacts since...
Effects of Sensemaking Translucence on Distributed Collaborative Analysis
Collaborative sensemaking requires that analysts share their information and insights with each other, but this process of sharing runs the risks of prematurely focusing the investigation on...
Impact of Data Annotator Identity on ML Model Outcomes: Unpacking Specialized Rater Pools
Machine learning models are commonly used to detect toxicity in online conversations. These models are trained on datasets annotated by human raters. We explore how raters' self-described...
Intelligent Interruption Management using Electro Dermal Activity based Physiological Sensor for Collaborative Sensemaking
Sensemaking tasks are difficult to accomplish with limited time and attentional resources because analysts are faced with a constant stream of new information. While this information is often...
Leveraging AI Responsibly in Sensemaking for Successful Human AI Workflows
My research vision is to enable experts and non-experts to successfully make sense of complex world problems. As a Human-Computer Interaction researcher, I iteratively focus on studying how...
RAMPARTS: Supporting Sensemaking with Spatially-Aware Mobile Interactions
Synchronous colocated collaborative sensemaking requires that analysts share their information and insights with each other. The challenge is knowing the right time to share what...