NeurIPS 2022 Tutorials


Tutorials

The Role of Meta-learning for Few-shot Learning

Eleni Triantafillou

[ Virtual ]

Abstract

While deep learning has driven impressive progress, one of the toughest remaining challenges is generalization beyond the training distribution. Few-shot learning is an area of research that aims to address this by striving to build models that can learn new concepts rapidly, in a more "human-like" way. While many influential few-shot learning methods were based on meta-learning, recent progress has been made by simpler transfer learning algorithms, and it has in fact been suggested that few-shot learning might be an emergent property of large-scale models. In this talk, I will give an overview of the evolution of few-shot learning methods and benchmarks from my point of view, and discuss the evolving role of meta-learning for this problem. I will discuss lessons learned from using larger and more diverse benchmarks for evaluation, as well as trade-offs between different approaches, closing with a discussion of open questions.

Link to slides: https://drive.google.com/file/d/1ZIULjhFjyNqjSS10p-5CDaqgzlrZcaGD/view?usp=sharing

Foundational Robustness of Foundation Models

Pin-Yu Chen · Sijia Liu · Sayak Paul

[ Virtual ]

Abstract

Foundation models, which adopt the methodology of deep learning with pre-training on large-scale unlabeled data and fine-tuning with task-specific supervision, are becoming a mainstream technique in machine learning. Although foundation models hold many promises in learning general representations and few-shot/zero-shot generalization across domains and data modalities, at the same time they raise unprecedented challenges and considerable risks in robustness and privacy due to their use of excessive volumes of data and complex neural network architectures. This tutorial aims to deliver a Coursera-like online tutorial containing comprehensive lectures, a hands-on and interactive Jupyter/Colab live coding demo, and a panel discussion on different aspects of trustworthiness in foundation models.

Lifelong Learning Machines

Tyler Hayes · Dhireesha Kudithipudi · Gido van de Ven

[ Virtual ]

Abstract

Incrementally learning new information from a non-stationary stream of data, referred to as lifelong learning, is a key feature of natural intelligence, but an open challenge for deep learning. For example, when artificial neural networks are trained on samples from a new task or data distribution, they tend to rapidly lose previously acquired capabilities, a phenomenon referred to as catastrophic forgetting. In stark contrast, humans and other animals are able to incrementally learn new skills without compromising those that were learned before. Numerous deep learning methods for lifelong learning have been proposed in recent years, yet a substantial gap remains between the lifelong learning abilities of artificial and biological neural networks.

In this tutorial, we start by asking what key capabilities a successful lifelong learning machine should have. We then review the current literature on lifelong learning, and we ask how far we have come. We do this in two parts. First, we review the popular benchmarks and setups currently used in the literature, and we critically assess to what extent they measure progress relevant for lifelong learning applications in the real world. Second, we review the strategies for lifelong learning that have been explored so far, and we …
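The forgetting phenomenon described above is easy to reproduce. The following is a minimal sketch of our own (toy data and a plain logistic-regression learner, not from the tutorial): a model trained on task A, then trained only on a conflicting task B, loses its task-A accuracy entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    """Binary task: the label is the sign of the first feature.
    Task B flips the labels, so the two tasks directly conflict."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, lr=0.5, steps=200):
    """Plain gradient descent on the logistic loss."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

Xa, ya = make_task(500, flip=False)         # task A
Xb, yb = make_task(500, flip=True)          # task B, seen later

w = train(np.zeros(2), Xa, ya)
acc_A_before = accuracy(w, Xa, ya)          # near-perfect after training on A
w = train(w, Xb, yb)                        # keep training, but only on task B
acc_A_after = accuracy(w, Xa, ya)           # task-A skill is destroyed
```

Lifelong learning methods (replay, regularization, parameter isolation) aim to prevent exactly this collapse in `acc_A_after`.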

Neurosymbolic Programming

Swarat Chaudhuri · Jennifer J Sun · Armando Solar-Lezama

[ Virtual ]

Abstract

This tutorial will provide an overview of recent advances in Neurosymbolic Programming. The objective in this area is to learn neurosymbolic programs, which combine elements of both neural networks and classical symbolic programs with the aim of inheriting the benefits of both. A key advantage of the neurosymbolic programming approach is that one learns models that look more like the models that domain experts write by hand in code, but that are also more expressive than classical interpretable models in machine learning. Also, neurosymbolic programs can more easily incorporate prior knowledge and are easier to analyze and verify. From the point of view of techniques, neurosymbolic programming combines ideas from machine learning and program synthesis and represents an exciting new contact point between the two communities. This tutorial will cover a broad range of basic concepts in the area, including neurosymbolic architectures, domain-specific languages, architecture/program search algorithms, meta-learning algorithms such as library learning, and applications to science and autonomy. Our panel will discuss open challenges in the field and ways in which machine learning and programming languages researchers can come together to address them. The tutorial is an abridged version of the tutorial at the Neurosymbolic Programming summer school …
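To make the idea concrete, here is a deliberately tiny sketch of our own (not from the tutorial): a program template drawn from a one-rule DSL, whose symbolic structure (an if/else) stays interpretable while its numeric holes are fit to data by search.

```python
# A minimal "neurosymbolic" program: the symbolic part is a fixed if/else
# template from a tiny DSL; the learned part is its real-valued holes,
# fit here by exhaustive search (real systems combine program search with SGD).

def program(x, theta, a, b):
    """DSL template: 'if x < theta then a*x else b*x'."""
    return a * x if x < theta else b * x

# Target behavior the learned program should reproduce.
data = [(i / 10, (2 * (i / 10) if i / 10 < 0.5 else 0.5 * (i / 10)))
        for i in range(10)]

def loss(theta, a, b):
    return sum((program(x, theta, a, b) - y) ** 2 for x, y in data)

# Search over the holes; the resulting model reads like hand-written code.
best_loss, theta, a, b = min(
    (loss(t / 10, a, b), t / 10, a, b)
    for t in range(11) for a in (0.5, 1.0, 2.0) for b in (0.5, 1.0, 2.0)
)
```

The recovered program, `if x < 0.5 then 2*x else 0.5*x`, is exactly the kind of expressive-yet-auditable model the abstract describes.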

Advances in NLP and their Applications to Healthcare

Ndapa Nakashole

[ Virtual ]

Abstract

Recent advances in Natural Language Processing (NLP) have propelled the state of the art to new highs. One such advance is the use of external memory to support reasoning in deep learning models such as Transformers.
Without external memory to store sufficient background knowledge, reasoning in NLP systems must be performed based on limited information, leading to poor performance on knowledge-rich tasks. Conversely, NLP systems with access to external memory have achieved significant performance gains on many important tasks, including question answering (QA) and tasks associated with QA such as fact verification and entity linking. The tutorial will present: (1) an overview of state-of-the-art approaches for representing background knowledge in addressable memory, and (2) applications in the healthcare domain.
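As a toy illustration of the external-memory idea (a bag-of-words sketch of our own; real systems use learned dense retrievers and far larger stores): facts live in an addressable memory, and the system answers by first retrieving the most relevant entry.

```python
import math
from collections import Counter

# Toy external memory: background facts kept outside the model's parameters.
MEMORY = [
    "Paris is the capital of France",
    "Insulin regulates blood glucose levels",
    "The mitochondria is the powerhouse of the cell",
]

def bow(text):
    """Bag-of-words vector (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(question, memory=MEMORY):
    """Address the memory: return the stored fact most similar to the question."""
    q = bow(question)
    return max(memory, key=lambda fact: cosine(q, bow(fact)))

fact = retrieve("which hormone regulates glucose levels")
```

The retrieved fact then conditions the downstream reasoning step, so the model itself need not memorize all background knowledge.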

Probabilistic Circuits: Representations, Inference, Learning and Applications

Antonio Vergari · YooJung Choi · Robert Peharz

[ Virtual ]

Abstract

In several real-world scenarios, decision making involves complex reasoning, i.e., the ability to answer complex probabilistic queries. Moreover, in many sensitive domains like healthcare and economic decision making, the result of these queries is required to be exact, as approximations without guarantees would make the decision-making process brittle. In all these scenarios, tractable probabilistic inference and learning are becoming increasingly indispensable. In this tutorial, we will introduce the framework of probabilistic circuits (PCs), under which one can learn deep generative models that guarantee exact inference in polynomial (often linear) time. Thanks to recent algorithmic and theoretical results, which we will discuss in this tutorial, PCs have achieved impressive results in probabilistic modeling, sometimes outperforming intractable models such as variational autoencoders. We will present the syntax and semantics of PCs and show how several commonly used ML models -- from Gaussian mixture models to HMMs and decision trees -- can be understood as computational graphs within the PC framework. We will discuss how PCs are special cases of neural networks, and how restricting these networks with certain structural properties enables different tractability scenarios. This unified view of probabilistic ML models opens up a range of ways to learn PCs …
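To make the GMM-as-circuit view concrete, here is a minimal sketch of our own: a two-component Gaussian mixture written as a sum node over product nodes over univariate leaves. Both the joint density and an exact marginal come from the same single bottom-up pass.

```python
import math

def gauss(x, mu, sigma):
    """Univariate Gaussian density (a leaf of the circuit)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# A 2-component GMM over (X1, X2) as a probabilistic circuit:
# one sum node (mixture weights) over product nodes over Gaussian leaves.
WEIGHTS = [0.3, 0.7]
PARAMS = [((0.0, 1.0), (0.0, 1.0)),   # component 1: (mu, sigma) for X1 and X2
          ((3.0, 1.0), (3.0, 1.0))]   # component 2

def circuit(x1=None, x2=None):
    """One bottom-up pass. Passing None for a variable marginalizes it
    exactly: its leaf integrates to 1, and the pass stays linear-time."""
    total = 0.0
    for w, ((m1, s1), (m2, s2)) in zip(WEIGHTS, PARAMS):
        l1 = gauss(x1, m1, s1) if x1 is not None else 1.0
        l2 = gauss(x2, m2, s2) if x2 is not None else 1.0
        total += w * l1 * l2      # product node feeding the weighted sum node
    return total

joint = circuit(x1=0.5, x2=0.5)    # exact joint density p(x1, x2)
marginal = circuit(x1=0.5)         # exact marginal p(x1), same single pass
```

The structural property exploited here is decomposability (each product's children cover disjoint variables), which is what makes marginalization exact rather than approximate.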

Algorithmic fairness: at the intersections

Golnoosh Farnadi · Q.Vera Liao · Elliot Creager

[ Virtual ]

Abstract

As machine learning models permeate every aspect of decision-making systems in consequential areas such as healthcare, banking, hiring, and education, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, privacy, robustness, and interpretability. Initially studied in isolation, recent work has emerged at the intersection of these different fields of research, leading to interesting questions on how fairness can be achieved under privacy, interpretability, and robustness constraints. Given the interesting questions that emerge at the intersection of these different fields, this tutorial aims to investigate how these different topics relate, and how they can augment each other to provide better or more suited definitions and mitigation strategies for algorithmic fairness. We are particularly interested in addressing open questions in the field, such as: How can algorithmic fairness be achieved under privacy constraints? What are the trade-offs when we consider algorithmic fairness together with robustness? Can we develop fair and explainable models? We will also articulate some limitations of technical approaches to algorithmic fairness, and discuss critiques that are coming from outside of computer science.

Advances in Bayesian Optimization

Janardhan Rao Doppa · Virginia Aglietti · Jacob Gardner

[ Virtual ]

Abstract

Many engineering, scientific, and industrial applications, including automated machine learning (e.g., hyper-parameter tuning), involve making design choices to optimize one or more expensive-to-evaluate objectives. Some examples include tuning the knobs of a compiler to optimize the performance and efficiency of a set of software programs; designing new materials to optimize strength, elasticity, and durability; and designing hardware to optimize performance, power, and area. Bayesian Optimization (BO) is an effective framework to solve black-box optimization problems with expensive function evaluations. The key idea behind BO is to build a cheap surrogate model (e.g., a Gaussian process) using the real experimental data, and employ it to intelligently select the sequence of function evaluations using an acquisition function, e.g., expected improvement (EI).

The goal of this tutorial is to present recent advances in BO by focusing on challenges, principles, algorithmic ideas and their connections, and important real-world applications. Specifically, we will cover recent work on acquisition functions, BO methods for discrete and hybrid spaces, BO methods for high-dimensional input spaces, causal BO, and key innovations in the BoTorch toolbox, along with a hands-on demonstration.
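The surrogate-plus-acquisition loop described above can be sketched in a few dozen lines. This is a self-contained toy of our own (not BoTorch); the lengthscale, candidate grid, and test function are illustrative choices.

```python
import math
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between 1-D point sets."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Cheap GP surrogate: posterior mean and std at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """Closed-form EI acquisition (maximization) under the Gaussian posterior."""
    z = (mu - best) / sigma
    Phi = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (mu - best) * Phi + sigma * phi

f = lambda x: -(x - 0.7) ** 2              # the "expensive" black box (toy)
X = np.array([0.0, 0.5, 1.0]); y = f(X)    # evaluations so far
Xs = np.linspace(0.0, 1.0, 201)            # candidate grid
mu, sd = gp_posterior(X, y, Xs)
x_next = Xs[np.argmax(expected_improvement(mu, sd, y.max()))]  # next evaluation
```

EI trades off the posterior mean (exploitation) against the posterior uncertainty (exploration); here it selects a point in the unexplored high-mean region between 0.5 and 1.0.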

Incentive-Aware Machine Learning: A Tale of Robustness, Fairness, Improvement, and Performativity

Chara Podimata

[ Virtual ]

Abstract

When an algorithm can make consequential decisions for people's lives, people have an incentive to respond to the algorithm strategically in order to obtain a more desirable decision. This means that unless the algorithm adapts to this strategizing, it may end up producing decisions that are incompatible with the original policy's goal. This has been the mantra of the rapidly growing research area of incentive-aware Machine Learning (ML). In this tutorial, we introduce this area to the broader ML community. After a primer on the basic background needed, we introduce the audience to the four perspectives that have been studied so far: the robustness perspective (where the decision-maker tries to create algorithms that are robust to strategizing), the fairness perspective (where we study the inequalities that arise or are reinforced as a result of strategizing), the improvement perspective (where the learner tries to incentivize effort exertion towards genuinely improving the agents' underlying features), and the performativity perspective (where the decision-maker wishes to achieve a notion of stability in these settings).
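A one-dimensional toy of our own makes the robustness perspective concrete: agents below a threshold game their reported score when the gap is small enough to be worth the cost, and a strategy-aware decision-maker responds by shifting the threshold.

```python
def best_response(score, threshold, budget=0.1):
    """An agent games its reported score up to the threshold, but only when
    the gap is within its manipulation budget (an illustrative cost model)."""
    gap = threshold - score
    if 0 < gap <= budget:
        return threshold          # just enough manipulation to get accepted
    return score                  # already accepted, or gaming is too costly

def robust_threshold(threshold, budget=0.1):
    """A strategy-aware decision-maker anticipates gaming by raising the bar."""
    return threshold + budget

true_scores = [0.2, 0.55, 0.9]
naive = [best_response(s, 0.6) >= 0.6 for s in true_scores]      # 0.55 games its way in
t = robust_threshold(0.6)
robust = [best_response(s, t) >= t for s in true_scores]         # gaming no longer pays
```

Under the naive threshold the 0.55 agent is accepted by gaming; the shifted threshold restores decisions that match the true scores, which is the essence of strategy-robust classification.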

Data Compression with Machine Learning

Karen Ullrich · Yibo Yang · Stephan Mandt

[ Virtual ]

Abstract

The efficient communication of information has enormous societal and environmental impact, and stands to benefit from the machine learning revolution seen in other fields. Through this tutorial, we hope to disseminate the ideas of information theory and compression to a broad audience, overview the core methodologies in learning-based compression (i.e., neural compression), and present the relevant technical challenges and open problems defining a new frontier of probabilistic machine learning. Besides covering the technical grounds, we will also explore the broader underlying themes and future research in our panel discussion, focusing on the interplay between computation and communication, the role of machine learning, and societal considerations such as fairness, privacy, and energy footprint as we strive to make our learning and information processing systems more efficient.
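As a small taste of the information-theoretic grounding (a sketch of our own, not from the tutorial): Shannon entropy gives the per-symbol lower bound, in bits, that any lossless code for a source can achieve, and is the baseline that learned compressors try to approach on complex data.

```python
import math
from collections import Counter

def entropy_bits(message):
    """Shannon entropy of the empirical symbol distribution: the per-symbol
    lower bound (in bits) for losslessly coding this source."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h = entropy_bits("abracadabra")   # about 2.04 bits per character
```

A fixed 8-bit encoding spends 8 bits per character; an entropy coder matched to this distribution needs only about 2.04, and neural compressors generalize this idea by learning the distribution itself.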

Creative Culture and Machine Learning

Negar Rostamzadeh · Cheng-Zhi Anna Huang · Mark Riedl

[ Virtual ]

Abstract

Creative domains form a significant part of modern society and have a major influence on the economy and cultural life. Over the last decade, the rapid development of ML technologies such as generative models has led to the creation of multiple creative applications. In this tutorial, we talk about co-creativity and generative art in computer vision, NLP, and interactive music generation, and the interplay between these modalities.

While there are opportunities for ML to empower artists to create and distribute their work, there are risks and harms when using these technologies in cultural contexts. These include harms arising from tools intended to support the creative process (e.g. biased or unsafe output, such as deepfakes) or harms incurred in creative output (e.g. individual or systemic inequity). At the same time, these systems can have broader and long-term social implications on the shape and diversity of culture more generally, which we discuss in this tutorial.

Finally in this tutorial, we discuss open questions on co-creation process, interplay between modalities, assessment of creative systems, and broader impact of these technologies and potential harms that can stem from these models.

Fair and Socially Responsible ML for Recommendations: Challenges and Perspectives

Ashudeep Singh · Manish Raghavan · Hannah Korevaar

[ Virtual ]

Abstract

Algorithmic rankings have become increasingly common in the online world. From social media to streaming services and from e-commerce to hiring, ranking has become the primary way online users ingest information. In many cases, recommendations have implications for both the users and the items (or creators) being ranked. These systems are also increasingly personalized to viewers, relying on imperfect information to learn and respond to user preferences. In recent years, the machine learning community has become increasingly aware of potential harms to consumers (e.g., echo chambers, addictive design, virality of harmful content) and creators (e.g., access to opportunity, misattribution and appropriation). In this tutorial, we will explore the current state of research on responsible recommendations and the primary challenges with understanding, evaluating, and training these systems for users and content providers. This tutorial additionally presents the primary challenges in applying this research in practice. The perspectives and methods presented in this tutorial apply to recommendation systems generally and will not include any specific information regarding actual recommendation products. The tutorial is designed to be accessible to a broad audience of machine learning practitioners; some background in predictive systems and ranking is beneficial but not required.

Theory and Practice of Efficient and Accurate Dataset Construction

Frederic Sala · Ramya Korlakai Vinayak

[ Virtual ]

Abstract

Data is one of the key drivers of progress in machine learning. Modern datasets require scale far beyond the ability of individual domain experts to produce. To overcome this limitation, a wide variety of techniques have been developed to build large datasets efficiently, including crowdsourcing, automated labeling, weak supervision, and many more. This tutorial describes classical and modern methods for building datasets beyond manual hand-labeling. It covers both theoretical and practical aspects of dataset construction. Theoretically, we discuss guarantees for a variety of crowdsourcing, active learning-based, and weak supervision techniques, with a particular focus on generalization properties of downstream models trained on the resulting datasets. Practically, we describe several popular systems implementing such techniques and their use in industry and beyond. We cover both the promise and potential pitfalls of using such methods. Finally, we offer a comparison of automated dataset construction versus other popular approaches to dealing with a lack of large amounts of labeled data, including few- and zero-shot methods enabled by foundation models.
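The simplest crowdsourcing aggregator mentioned above, majority vote, fits in a few lines (a sketch of our own; methods like Dawid-Skene additionally weight annotators by their estimated reliability):

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate noisy crowd labels item-by-item via (unweighted) majority vote."""
    return [Counter(votes).most_common(1)[0][0] for votes in labels_per_item]

# Three annotators label four items; the third annotator is unreliable.
crowd_labels = [
    ["cat", "cat", "dog"],
    ["dog", "dog", "dog"],
    ["cat", "cat", "cat"],
    ["dog", "dog", "cat"],
]
consensus = majority_vote(crowd_labels)   # one label per item
```

Even this unweighted vote recovers the correct label whenever a majority of annotators is right, which is why it remains the standard baseline against which weak-supervision and reliability-weighted aggregators are compared.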


FAQs

What is the acceptance rate for NeurIPS 2022?

NeurIPS 2022 Statistics

Conference   | Submissions (review scores: min / max / avg / std) | Accepted (review scores: min / max / avg / std)
NeurIPS 2022 | 10411 (1.60 / 8.20 / 5.97 / 0.72)                  | 2671 (25.66%) (3.60 / 8.20 / 6.04 / 0.64)
NeurIPS 2021 | 9122 (2.50 / 8.70 / 6.38 / 0.62)                   | 2334 (25.59%) (4.60 / 8.70 / 6.45 / 0.53)
NeurIPS 2020 | 9467                                               | 1899 (20.06%)

What is the NeurIPS 2022 workshop on tackling climate change with machine learning?

For this iteration of the workshop, the keynote talks and panel discussions were particularly focused on exploring the theme of climate change-informed metrics for AI, focusing both on (a) the domain-specific metrics by which AI systems should be evaluated when used as a tool for climate action, and (b) the climate ...

Do NeurIPS workshops have proceedings?

In some situations, you may invite submission for both proceedings and non-proceedings. For example, full-length papers go to proceedings, extended abstracts go to non-proceedings. Note that NeurIPS will not itself publish proceedings for workshops: workshops will need to set up their own proceedings if desired.

Where is the NeurIPS 2022 conference?

NeurIPS 2022 will be a Hybrid Conference with a physical component at the New Orleans Convention Center during the first week, and a virtual component the second week.

What is a good NeurIPS score?

Overall score:

9: Top 15% of accepted papers, strong accept.
8: Top 50% of accepted papers, clear accept.
7: Good paper, accept.

Is NeurIPS a top conference?

NeurIPS (Neural Information Processing Systems) is one of the premier conferences on machine learning and computational neuroscience.

What is the impact factor of NeurIPS?

While ISI does not officially estimate the impact factor for conferences, this Microsoft page suggests that the impact factor of NeurIPS has a steady state value ~30 - 40.

Can machine learning solve climate change?

The use of artificial intelligence (AI) can contribute to the fight against climate change. Existing AI systems include tools that predict weather, track icebergs and identify pollution. AI can also be used to improve agriculture and reduce its environmental impact, the World Economic Forum says.

How can ICT solve climate change?

ICT can be used in a number of ways to study and manage the environment, locally and globally. These come under three broad headings: observation, analysis, and sharing of data. Satellite-based sensors monitor and provide information on barometric pressure, water temperature and wave action.

Does NeurIPS provide food?

Meals ARE NOT provided.

However, there will be water and snacks available for your child. If your child has any dietary allergies please indicate them on the registration form below.

Can anyone attend NeurIPS?

You must be a full time student in an accredited undergraduate, masters or graduate program or have submitted an accepted paper while you were a full time student. You will also need to present the documentation when you check in at the registration desk.

How many accepted papers are there in NeurIPS?

This year's organizers received a record number of paper submissions. Of the 13,300 submitted papers, reviewed by 968 area chairs, 98 senior area chairs, and 396 ethics reviewers, 3,540 were accepted, after 502 papers were flagged for ethics review.

How many people attend NeurIPS?

NeurIPS Statistics

Conference   | Submissions | Location
NeurIPS 2021 | 9122        | Virtual
NeurIPS 2020 | 9467        | Virtual
NeurIPS 2019 | 6743        | Vancouver, Canada
NIPS 2018    | 4856        | Montreal, Canada
(8 more rows)

Where will NeurIPS be in 2024?

The thirty-eighth annual conference will be held Monday, December 9 through Sunday, December 15, 2024, at the Vancouver Convention Center.

How many submissions are there in NeurIPS 2022?

Get Ready for the NeurIPS 2022 Datasets and Benchmarks Track

This year, we received 447 submissions on a breadth of topics, out of which 163 have been accepted for publication. The acceptance rate was 36.46%. Please explore the list of accepted papers.

What was the rejection rate of NeurIPS?

Machine Learning and Learning Theory

Conference | Acceptance rate (long papers)
NeurIPS'19 | 21.1% (1428/6743): 36 orals, 164 spotlights, 1228 posters
NeurIPS'20 | 20.1% (1900/9454): 105 orals, 280 spotlights, 1515 posters
NeurIPS'21 | 25.7% (2344/9122): 55 orals, 260 spotlights, 2029 posters
(57 more rows)

How many papers get accepted to NeurIPS?

Number of Unique Authors Explodes

At the same time, the number of submissions accepted to NeurIPS has even more dramatically ballooned, from 411 in 2014 to 3584 in 2023.

What is the acceptance rate for DAC 2022?

Acceptance rates for manuscript publication are uniform across all topic areas and they have been hovering between 20%-25% for the past several years.

What is the acceptance rate of ICC 2022?

After a rigorous review process, 962 papers were accepted, thus yielding an acceptance rate of 38.03%. The geographic breakdown of the authors of the accepted papers is as follows: 49.7% from Asia-Pacific, 27.2% from Europe, Middle-East and Africa (EMEA), and the rest from Americas.
