ADS Invited Speakers

From video streaming to telehealth: data-driven approaches to building user-facing products

Healthcare is different from any other application domain, or is it? While it is true that specific aspects, such as high-stakes decisions and a complex regulatory framework, make healthcare somewhat different, it is also the case that many lessons learned from building data-driven products in other domains translate remarkably well into healthcare. This is particularly so because healthcare is also a user-facing domain, where users can be either patients or healthcare professionals. Given that data has been shown to improve user experience while ensuring quality and scalability, few would argue that healthcare cannot benefit from being much more data-driven than it has traditionally been. In this talk, I will describe how decades of experience building impactful data and AI solutions into user-facing products can be leveraged to revolutionize telehealth. At Curai, we combine approaches such as state-of-the-art large language models with expert systems in areas such as NLP, vision, and automated diagnosis to augment and scale doctors, and to improve user experience and healthcare outcomes. We will look at some of these applications while analyzing the role of data and ML algorithms in making them possible.


Bio: Xavier Amatriain (Ph.D.) is co-founder and CTO of Curai, a series B health tech startup. Before that, he led Engineering at Quora and was Research/Engineering Director at Netflix, where he started and led the algorithms team that built the famous Netflix recommendations. Prior to that, he was a researcher in both academia and industry. With over 100 research publications (and 5k citations), Xavier is best known for his work on AI and machine learning in general, and recommender systems in particular. For the past five years at Curai, Xavier has led teams at the intersection of product development and medical AI research and engineering. Curai has built an end-to-end virtual primary care service that provides high-quality medical care for under $10 a month, thanks to its care delivery platform and state-of-the-art medical AI. On the research side, Curai focuses on medical language understanding and generation, and multimodal medical reasoning.

Training deep vision models in low-data regimes

Large-scale vision models, pre-trained on vast quantities of often unlabelled data, give state-of-the-art performance when fine-tuned for a wide variety of downstream tasks. For these downstream tasks, the amount of training data available is usually orders of magnitude less than what was used during pre-training, and in many cases there is little to no labelled data available. In this talk, I discuss approaches to leveraging pre-trained models for vision tasks in low-data regimes. Focusing on video segmentation and object detection, I show that incorporating domain-specific and modality-specific inductive biases leads to improved model performance when training data is limited.


Bio: Naila Murray obtained a BSE in electrical engineering from Princeton University in 2007. In 2012, she received her Ph.D. from the Universitat Autonoma de Barcelona, in affiliation with the Computer Vision Center. She joined Xerox Research Centre Europe in 2013 as a research scientist in the computer vision team, working on topics including fine-grained visual categorization, image retrieval and visual attention. From 2015 to 2019 she led the computer vision team at Xerox Research Centre Europe, and continued to serve in this role after its acquisition and transition to becoming NAVER LABS Europe. In 2019, she became the director of science at NAVER LABS Europe. In 2020, she joined Facebook AI Research where she is a senior research engineering manager for EMEA. She has served as area chair for ICLR 2018, ICCV 2019, ICLR 2019, CVPR 2020, ECCV 2020, and program chair for ICLR 2021. Her current research interests include few-shot learning and domain adaptation.

Applications of data science for autonomous vehicles

Autonomous vehicles (AVs) have the potential to improve public health as well as revolutionize many sectors of the economy. To realize that potential, AV deployment needs to be safe, responsible, and efficient/cost-effective. Data science, a combination of disciplines including statistics, data engineering, computer science, and operations research, with a bias toward applications, can support these AV deployment goals. We describe two (high-level) case studies applying data science at Cruise. The first considers how to test AV performance in a simulated environment. Specifically, we sketch a framework for measuring “scenario realism”, a metric for how likely a given scenario is to occur on-road. This metric includes multiple dimensions/sub-metrics, such as “perception realism” and “policy realism”. We then describe some potential benefits of both realistic and “unrealistic” scenarios in measuring and improving AV safety. The second case study describes how data science is used to help improve the efficiency of AV deployment. Specifically, given that AVs travel along a road network to serve rider demand, various constraints imposed by this network are discussed and methods to optimize service subject to these constraints are described.


Bio: Geoffrey Chi-Johnston is a Senior Staff Tech Lead Manager at Cruise. He supports the Safety Data Science team, which builds algorithms for quantifying and benchmarking autonomous vehicle performance. Prior to Cruise, Geoff worked at Apple on the Apple Watch, where he helped develop the Fall Detection algorithm as well as supporting other health and activity features. He is a co-inventor on multiple patent grants and applications. He received a National Science Foundation Graduate Research Fellowship, a PhD from Columbia University in Sustainable Development and was a Postdoctoral Fellow at Johns Hopkins University. His academic work focused on mathematical modeling of infectious diseases and evaluation of intervention strategies, including co-authored papers in Science, PNAS, and PLoS Computational Biology.

LinkedIn

Designing Performant Recommender Systems Using Linear Programming based Global Inference

Large-scale recommender systems conduct many millions of inferences each day, involving millions of users and items. Each online inference (recommending a small set of items to a user) is usually done using a scoring function that combines many estimated utilities (e.g., pClick, pView, pSkip) to score candidate items. The combination is usually formed in an ad hoc fashion. Linear programming (LP) offers a systematic approach to forming this combination by posing global inference objectives and constraints on the utilities. The same LP approach easily extends to applications in which users and items need to be matched while satisfying a huge number of item-level global constraints. Such LPs are of an extreme scale, having many billions of variables. A scalable LP solver named DuaLip has been developed for such LPs and open-sourced. A practical design approach using DuaLip has also been used successfully in several applications. This talk will give an overview of the problem, the class of applications, the DuaLip LP solver, and the practical design approach.
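The contrast between ad hoc per-user scoring and global inference can be sketched with a toy example. All utilities, weights, and the one-item-per-user setup below are invented for illustration; the formulation in the talk is an LP with billions of variables, which is what a solver like DuaLip targets, whereas this tiny instance can simply be enumerated:

```python
from itertools import permutations

# Invented utility estimates for 3 users x 3 items (illustration only).
p_click = [[0.9, 0.8, 0.1],
           [0.8, 0.7, 0.2],
           [0.3, 0.2, 0.1]]
p_view  = [[0.5, 0.9, 0.4],
           [0.6, 0.7, 0.3],
           [0.7, 0.1, 0.2]]
w_click, w_view = 1.0, 0.5   # ad hoc linear combination of the utilities

def score(u, i):
    return w_click * p_click[u][i] + w_view * p_view[u][i]

# Per-user greedy scoring: each user independently gets their top item.
# This can oversubscribe an item when item-level constraints exist.
greedy = [max(range(3), key=lambda i, u=u: score(u, i)) for u in range(3)]

# Global inference: show each item to at most one user (capacity 1).
# Small enough here to enumerate all assignments; at scale this becomes
# a constrained optimization problem posed as an LP.
best = max(permutations(range(3)),
           key=lambda a: sum(score(u, a[u]) for u in range(3)))

print(greedy)  # users 1 and 2 both pick item 0: capacity violated
print(best)    # feasible assignment maximizing total combined score
```

Here the greedy pass selects item 0 for two different users, while the global solve finds the best assignment that respects the per-item capacity constraint, which is the kind of item-level global constraint the abstract describes.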


Bio: Keerthi is a Principal Staff Researcher in the AI Group of LinkedIn, where he works on distributed training of machine learning and AI systems, huge-scale linear programming, and information extraction projects. Prior to LinkedIn, he was a Distinguished Researcher at Criteo Research, working on fundamental and applied research problems in computational advertising. Before that, he was at Microsoft, first with the CISL team in Big Data and later with the FAST division of Microsoft Office, and earlier with the Machine Learning Group of Yahoo! Research in Santa Clara, CA. Prior to joining Yahoo! Research, he worked for 11 years at the Indian Institute of Science, Bangalore, and for 5 years at the National University of Singapore. During those sixteen years his research focused on the development of practical algorithms for a variety of areas, such as machine learning, robotics, computer graphics, and optimal control. Overall, he has published more than 100 papers in leading journals and conferences. Keerthi has been an Action Editor of JMLR (Journal of Machine Learning Research) since 2008. Previously, he was an Associate Editor for the IEEE Transactions on Automation Science and Engineering.

JP Morgan Chase

Task Centric AI

The use of AI is gaining traction as organizations realize the advantages of using algorithms to streamline tasks and improve their accuracy. In this talk, Sameena will build the case for task-centric AI using a variety of use cases from industry, stepping through examples that range from large amounts of data to no data, from detecting key insights to rare unexpected events, and from incorporating active domain expertise to passive sensor feedback.


Bio: Sameena Shah is a Managing Director and AI Executive at JP Morgan, where she and her team work across the firm to create AI technologies for business transformation and growth. She is a highly accomplished leader with over 20 years of experience in AI, engineering, and data. Her leadership has resulted in award-winning AI technologies that have transformed products and businesses. Previously, Sameena was a Managing Director at S&P Global, where she led the firm’s strategy and development for Augmented Intelligence. Prior to that, Sameena worked at Thomson Reuters, a Schonfeld securities hedge fund, and Yahoo! Research, and ran her own AI consultancy firm. Sameena has a PhD in AI, an MS in Computer Science from IIT Delhi, and a BS in Electronics Engineering. She is passionate about AI and change, and is a frequent invited speaker at top forums, including TED talks and keynotes at premier AI conferences (IJCAI 2021). She is a recipient of several scientific and industry awards, including Microsoft's top PhD thesis in the country award, the Cloudera top AI/ML application award, the Google Women in Engineering award, and a United States CTO office nomination, and she is a JPMC prolific inventor with 30+ patents and 60+ peer-reviewed publications.

Microsoft

PLM-NLG Model-based Online Advertising Automation

Pre-trained large language models (PLMs) have made breakthroughs in transfer learning for a wide range of NLP tasks, including intent representation and language generation. In this talk we will present some recent work on PLMs for natural language generation (NLG), aimed at improving different aspects of model capability and model transferability, as well as their applications in online advertising automation. To address the limited generative commonsense reasoning performance of NLG models, we developed the Knowledge Filtering and Contrastive learning Network (KFCNet), which retrieves from an external knowledge base to enrich the inputs with high-quality contextual prototypes and applies contrastive learning separately to the encoder and the decoder, within a general encoder-decoder architecture. KFCNet achieves state-of-the-art performance on many generative reasoning tasks and is deployed in the Microsoft advertising system for query rewriting. Targeting NLG tasks in the marketing and advertising domains, we developed CULG, a cross-lingual commercial universal language generation model. We proposed four commercial generation tasks and a two-stage training strategy for pre-training. Extensive experiments demonstrated that the proposed strategy improves performance on several commercial generation tasks by a large margin compared to single-stage pre-training. To speed up NLG model inference without sacrificing performance, we developed BANG, a new pre-training model that Bridges the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. The pre-trained BANG model can simultaneously support AR, NAR, and semi-NAR generation to meet different requirements. Experiments on many NLG tasks show that BANG significantly improves NAR and semi-NAR performance while attaining performance comparable to strong AR pre-trained models, with much higher inference speed. BANG is deployed in Microsoft online advertising systems for online serving.


Bio: Ruofei Zhang is a VP & Distinguished Engineer at Microsoft Bing Ads, where he oversees R&D and engineering of query/ads understanding and matching algorithms, relevance ranking, NLP and computer vision machine learning models, and the large-scale distributed serving systems that power ads retrieval in the Microsoft Advertising Marketplace. He also drives business growth strategies, technical directions, product roadmaps, and the operating cadence of Microsoft Shopping and Vertical Ads. Prior to joining Microsoft, Ruofei was an R&D Director and Principal Scientist at Yahoo Labs, managing its Data Mining and Relevance Optimization Group in the Advertising Science Department. Ruofei has co-authored two monographs, on multimedia data mining and deep learning technologies respectively, published more than 60 papers in premier journals and top conferences in the areas of machine learning, data mining, NLP, and computer vision, and has been granted 23 US patents. He regularly serves as a program committee member, reviewer, speaker, and panelist for numerous leading academic conferences and journals as well as federal agencies including the NSF. Ruofei received the 2022 Distinguished Alumni Award from the Thomas Watson College of Engineering & Applied Science of the State University of New York at Binghamton, and is also on the Industry Advisory Board of its Computer Science Department.

Accelerating eye movement research via ML-based smartphone gaze technology

Eye movements are thought to be a window to the mind, and have been extensively studied across neuroscience, psychology, and HCI. However, progress in this area has been severely limited, as the underlying eye tracking technology relies on specialized hardware that is expensive (up to $30,000) and hard to scale. In this talk, I will present our recent work from Google, which shows that ML applied to smartphone selfie cameras can enable accurate gaze estimation, comparable to state-of-the-art hardware-based mobile eye trackers, at 1/100th the cost and without any additional hardware. Via extensive experiments, we show that our smartphone gaze technology can successfully replicate key findings from prior eye movement research in neuroscience and psychology, across a variety of tasks including traditional oculomotor tasks, saliency analyses on natural images, and reading comprehension. We also show that smartphone gaze could serve as a potential digital biomarker for detecting mental fatigue. These results show that smartphone gaze technology has the potential to unlock advances by scaling eye movement research, and to enable new applications for improved wellness and accessibility, such as gaze-based interaction for patients with ALS or stroke who cannot otherwise interact with devices.


Bio: Vidhya Navalpakkam is a Principal Research Scientist at Google Research, where she leads an interdisciplinary team focused on modeling human attention and behavior at scale. Her work is at the intersection of computer science, neuroscience, and psychology. Prior to joining Google 10 years ago, she worked briefly at Yahoo Research. She enjoyed modeling attention mechanisms in the brain during her postdoc at Caltech and her PhD at USC. She has a Bachelor's in Computer Science from IIT Kharagpur.

An overview of AWS AI/ML’s recent contributions to open source ML tools: Accelerating discovery and innovation

High-quality open source software has played an important role in expanding and democratizing technology. In the field of data science and machine learning, open source has been the norm by which scientific advances are disseminated and a critical factor in the rapid development and deployment of data-driven solutions to an ever-increasing set of problems and domains. Open source ML tools have allowed individual scientists and practitioners to benefit from the work of their peers and harness their work and expertise. This talk will provide an overview of some of AWS AI/ML’s efforts in developing and contributing to open source tools that advance the science and infrastructure in important and emerging areas of machine learning. These include AutoGluon, an easy-to-use AutoML tool for text, image, and tabular data; Deep Graph Library, a framework that simplifies the development, training, and use of graph neural networks; and DoWhy, a library for causal inference that allows robust estimation of causal effects. These tools bring state-of-the-art, easy-to-use ML capabilities into the hands of ML practitioners and developers to accelerate discovery and innovation.


Bio: George Karypis is a Distinguished McKnight University Professor at the University of Minnesota, Twin Cities and an Amazon Scholar & Sr. Principal Scientist at Amazon Web Services (AWS). His research interests span the areas of data mining, high performance computing, information retrieval, collaborative filtering, bioinformatics, cheminformatics, and scientific computing. His research has resulted in the development of software libraries for serial and parallel graph partitioning (METIS and ParMETIS), hypergraph partitioning (hMETIS), parallel Cholesky factorization (PSPASES), collaborative filtering-based recommendation algorithms (SUGGEST), clustering high-dimensional datasets (CLUTO), finding frequent patterns in diverse datasets (PAFI), and protein secondary structure prediction (YASSPP). He has coauthored over 250 papers on these topics and two books (“Introduction to Protein Structure Prediction: Methods and Algorithms” (Wiley, 2010) and “Introduction to Parallel Computing” (Addison Wesley, 2003, 2nd edition)). In addition, he serves on the program committees of many conferences and workshops on these topics, and on the editorial boards of the IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Knowledge Discovery from Data, Data Mining and Knowledge Discovery, Social Network Analysis and Data Mining Journal, International Journal of Data Mining and Bioinformatics, Current Proteomics, Advances in Bioinformatics, and Biomedicine and Biotechnology. At Amazon, his team works on areas such as large-scale distributed training of deep learning models, model compression, natural language processing (NLP), graph neural networks (GNNs), multi-modal representation learning, and multi-task learning.

Bayesian Health

Accountable AI Evaluation Framework for Intelligent Care Augmentation and Adaptive AI in Healthcare

The recent controversies around health AI applications and algorithmic bias have sparked an ongoing debate about the ethics and responsibility of such applications in patient care, where they directly affect outcomes and healthcare quality. Following the 21st Century Cures Act, the FDA has released guidelines on software as a medical device (SaMD) and real-world evidence (RWE), approved more than 60 SaMD applications, and incorporated RWE into more than 100 recent decisions on new drugs and biologics. However, as the nature of health AI applications has incurred a new set of considerations in AI adoption, we still need to curate more best-practice examples and converge on a set of industry standards for defining and enabling responsible and ethical AI in healthcare. This talk will review the recent development of the accountable AI evaluation and deployment framework. We will also discuss a couple of case studies, such as the recently published sepsis studies in Nature Medicine, to assess the impact of clinical AI adoption and behavioral change on patient outcomes and care quality (e.g., an 18.2% reduction in mortality rate at a 90% clinical adoption rate). In addition, with increased staff shortages and clinician burnout rates, the healthcare industry is also going through significant consolidations and transitions, putting AI adoption at the center of business priorities. Therefore, this talk will also introduce an AI Implementation Checklist and illustrate best practices across the full spectrum of real-world evidence validation and bias detection and mitigation.


Bio: Pei-Yun Sabrina Hsueh (Ph.D., FAMIA) is the global health AI leader and a pioneer in personal health informatics at Bayesian Health Inc. She is currently serving on the Practitioners Board of the ACM, as the Vice-Chair of the AMIA 2022 SPC, and as the incoming Co-Chair of the AMIA AI Evaluation Showcase 2023. Previously at IBM Research, she co-chaired the Health Informatics Professional Community and was elected an IBM Academy of Technology Member. In her roles, she is actively leading industrial best practice in health AI, with a focus on establishing a responsible and ethical AI governance framework and operationalizing AI in workflows. Her dedication has won her recognitions such as the AMIA Distinguished Paper Award, Fellow of the AMIA, Google European Anita Borg Scholar, High-Value Inventions, Eminence and Excellence, and Manager Choice awards. She is on the Editorial Board of Sensors Journal, Frontiers in Public Health, and the JAMIA OPEN Special Issue on Precision Medicine. Her commitment has led to 20+ patents, 50+ technical articles, and two new textbooks: Machine Learning for Medicine and Healthcare (in prep.) and Personal Health Informatics - Patient Participation in Precision Health (in print by Springer Nature).