Posted by Wenhao Yu and Fei Xia, Research Scientists, Google
Empowering end-users to interactively teach robots to perform novel tasks is a crucial capability for their successful integration into real-world applications. For example, a user may want to teach a robot dog to perform a new trick, or teach a manipulator robot how to organize a lunch box based on user preferences. The recent advancements in large language models (LLMs) pre-trained on extensive internet data have shown a promising path towards achieving this goal. Indeed, researchers have explored diverse ways of leveraging LLMs for robotics, from step-by-step planning and goal-oriented dialogue to robot-code-writing agents.
While these methods impart new modes of compositional generalization, they focus on using language to link together new behaviors from an existing library of control primitives that are either manually engineered or learned a priori. Despite having internal knowledge about robot motions, LLMs struggle to directly output low-level robot commands due to the limited availability of relevant training data. As a result, the expressiveness of these methods is bottlenecked by the breadth of the available primitives, the design of which often requires extensive expert knowledge or massive data collection.
In “Language to Rewards for Robotic Skill Synthesis”, we propose an approach to enable users to teach robots novel actions through natural language input. To do so, we leverage reward functions as an interface that bridges the gap between language and low-level robot actions. We posit that reward functions provide an ideal interface for such tasks given their richness in semantics, modularity, and interpretability. They also provide a direct connection to low-level policies through black-box optimization or reinforcement learning (RL). We developed a language-to-reward system that leverages LLMs to translate natural language user instructions into reward-specifying code and then applies MuJoCo MPC to find optimal low-level robot actions that maximize the generated reward function. We demonstrate our language-to-reward system on a variety of robotic control tasks in simulation using a quadruped robot and a dexterous manipulator robot. We further validate our method on a physical robot manipulator.
The language-to-reward system consists of two core components: (1) a Reward Translator, and (2) a Motion Controller. The Reward Translator maps natural language instructions from users to reward functions represented as Python code. The Motion Controller optimizes the given reward function using receding horizon optimization to find the optimal low-level robot actions, such as the amount of torque that should be applied to each robot motor.
LLMs cannot directly generate low-level robot actions due to the lack of relevant data in their pre-training datasets. We propose using reward functions to bridge the gap between language and low-level robot actions, enabling novel, complex robot motions from natural language instructions.
Reward Translator: Translating user instructions to reward functions
The Reward Translator module was built with the goal of mapping natural language user instructions to reward functions. Reward tuning is highly domain-specific and requires expert knowledge, so it was not surprising to us when we found that LLMs trained on generic language datasets are unable to directly generate a reward function for specific hardware. To address this, we apply the in-context learning ability of LLMs. Furthermore, we split the Reward Translator into two sub-modules: the Motion Descriptor and the Reward Coder.
Motion Descriptor
First, we design a Motion Descriptor that interprets input from a user and expands it into a natural language description of the desired robot motion following a predefined template. This Motion Descriptor turns potentially ambiguous or vague user instructions into more specific and descriptive robot motions, making the reward coding task more stable. Moreover, users interact with the system through the motion description field, so this also provides a more interpretable interface for users compared to directly showing the reward function.
To create the Motion Descriptor, we use an LLM to translate the user input into a detailed description of the desired robot motion. We design prompts that guide the LLMs to output the motion description with the right amount of details and format. By translating a vague user instruction into a more detailed description, we are able to more reliably generate the reward function with our system. This idea can also be potentially applied more generally beyond robotics tasks, and is relevant to Inner-Monologue and chain-of-thought prompting.
Reward Coder
In the second stage, we use the same LLM from the Motion Descriptor for the Reward Coder, which translates the generated motion description into a reward function. Reward functions are represented as Python code to benefit from the LLMs’ knowledge of reward, coding, and code structure.
Ideally, we would like to use an LLM to directly generate a reward function R(s, t) that maps the robot state s and time t to a scalar reward value. However, generating a correct reward function from scratch is still a challenging problem for LLMs, and correcting the errors requires the user to understand the generated code in order to provide the right feedback. As such, we pre-define a set of reward terms that are commonly used for the robot of interest and allow the LLM to compose different reward terms to formulate the final reward function. To achieve this, we design a prompt that specifies the available reward terms and guides the LLM to generate the correct reward function for the task.
The internal structure of the Reward Translator, which is tasked to map user inputs to reward functions.
Motion Controller: Translating reward functions to robot actions
The Motion Controller takes the reward function generated by the Reward Translator and synthesizes a controller that maps robot observation to low-level robot actions. To do this, we formulate the controller synthesis problem as a Markov decision process (MDP), which can be solved using different strategies, including RL, offline trajectory optimization, or model predictive control (MPC). Specifically, we use an open-source implementation based on the MuJoCo MPC (MJPC).
MJPC has demonstrated the interactive creation of diverse behaviors, such as legged locomotion, grasping, and finger-gaiting, while supporting multiple planning algorithms, such as iterative linear–quadratic–Gaussian (iLQG) and predictive sampling. More importantly, the frequent re-planning in MJPC makes it robust to uncertainties in the system and, when combined with LLMs, enables an interactive motion synthesis and correction system.
Examples
Robot dog
In the first example, we apply the language-to-reward system to a simulated quadruped robot and teach it to perform various skills. For each skill, the user will provide a concise instruction to the system, which will then synthesize the robot motion by using reward functions as an intermediate interface.
Dexterous manipulator
We then apply the language-to-reward system to a dexterous manipulator robot to perform a variety of manipulation tasks. The dexterous manipulator has 27 degrees of freedom, which is very challenging to control. Many of these tasks require manipulation skills beyond grasping, making it difficult for pre-designed primitives to work. We also include an example where the user can interactively instruct the robot to place an apple inside a drawer.
Validation on real robots
We also validate the language-to-reward method using a real-world manipulation robot to perform tasks such as picking up objects and opening a drawer. To perform the optimization in Motion Controller, we use AprilTag, a fiducial marker system, and F-VLM, an open-vocabulary object detection tool, to identify the position of the table and objects being manipulated.
Conclusion
In this work, we describe a new paradigm for interfacing an LLM with a robot through reward functions, powered by a low-level model predictive control tool, MuJoCo MPC. Using reward functions as the interface enables LLMs to work in a semantically rich space that plays to their strengths, while ensuring the expressiveness of the resulting controller. To further improve the performance of the system, we propose to use a structured motion description template to better extract internal knowledge about robot motions from LLMs. We demonstrate our proposed system on two simulated robot platforms and one real robot for both locomotion and manipulation tasks.
Acknowledgements
We would like to thank our co-authors Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, and Yuval Tassa for their help and support in various aspects of the project. We would also like to acknowledge Ken Caluwaerts, Kristian Hartikainen, Steven Bohez, Carolina Parada, Marc Toussaint, and the greater teams at Google DeepMind for their feedback and contributions.
Heterogeneous Memory Management (HMM) is a CUDA memory management feature that extends the simplicity and productivity of the CUDA Unified Memory programming model to include system allocated memory on systems with PCIe-connected NVIDIA GPUs. System allocated memory refers to memory that is ultimately allocated by the operating system; for example, through malloc, mmap, the C++ new operator (which of course uses the preceding mechanisms), or related system routines that set up CPU-accessible memory for the application.
Previously, on PCIe-based machines, system allocated memory was not directly accessible by the GPU. The GPU could only access memory that came from special allocators such as cudaMalloc or cudaMallocManaged.
With HMM enabled, all application threads (GPU or CPU) can directly access all of the application’s system allocated memory. As with Unified Memory (which can be thought of as a subset of, or precursor to HMM), there is no need to manually copy system allocated memory between processors. This is because it is automatically placed on the CPU or GPU, based on processor usage.
Within the CUDA driver stack, CPU and GPU page faults are typically used to discover where the memory should be placed. Again, this automatic placement already happens with Unified Memory—HMM simply extends the behavior to cover system allocated memory as well as cudaMallocManaged memory.
This new ability to directly read or write to the full application memory address space will significantly improve programmer productivity for all programming models built on top of CUDA: CUDA C++, Fortran, standard parallelism in Python, ISO C++, ISO Fortran, OpenACC, OpenMP, and many others.
In fact, as the upcoming examples demonstrate, HMM simplifies GPU programming to the point that GPU programming is nearly as accessible as CPU programming. Some highlights:
Explicit memory management is not required for functionality when writing a GPU program; therefore, an initial “first draft” program can be small and simple. Explicit memory management (for performance tuning) can be deferred to a later phase of development.
GPU programming is now practical for programming languages that do not distinguish between CPU and GPU memory.
Large applications can be GPU-accelerated without requiring large memory management refactoring, or changes to third-party libraries (for which source code is not always available).
As an aside, new hardware platforms such as NVIDIA Grace Hopper natively support the Unified Memory programming model through hardware-based memory coherence among all CPUs and GPUs. For such systems, HMM is not required, and in fact, HMM is automatically disabled there. One way to think about this is to observe that HMM is effectively a software-based way of providing the same programming model as an NVIDIA Grace Hopper Superchip.
To learn more about CUDA Unified Memory, see the resources section at the end of this post.
Unified Memory before HMM
The original CUDA Unified Memory feature introduced in 2013 enables you to accelerate a CPU program with only a few changes, as shown below:
Before HMM CPU only
void sortfile(FILE* fp, int N) {
char* data;
data = (char*)malloc(N);
fread(data, 1, N, fp);
qsort(data, N, 1, cmp);
use_data(data);
free(data);
}
After HMM CUDA Unified Memory (2013)
void sortfile(FILE* fp, int N) {
char* data;
cudaMallocManaged(&data, N);
fread(data, 1, N, fp);
qsort<<<...>>>(data, N, 1, cmp);
cudaDeviceSynchronize();
use_data(data);
cudaFree(data);
}
This programming model is simple, clear, and powerful. Over the past 10 years, this approach has enabled countless applications to easily benefit from GPU acceleration. And yet, there is still room for improvement: note the need for a special allocator: cudaMallocManaged, and the corresponding cudaFree.
What if we could go even further, and get rid of those? That’s exactly what HMM does.
Unified Memory after HMM
On systems with HMM (detailed below), continue using malloc and free:
Before HMM CPU only
void sortfile(FILE* fp, int N) {
char* data;
data = (char*)malloc(N);
fread(data, 1, N, fp);
qsort(data, N, 1, cmp);
use_data(data);
free(data);
}
After HMM CUDA Unified Memory + HMM (2023)
void sortfile(FILE* fp, int N) {
char* data;
data = (char*)malloc(N);
fread(data, 1, N, fp);
qsort<<<...>>>(data, N, 1, cmp);
cudaDeviceSynchronize();
use_data(data);
free(data);
}
With HMM, the memory management is now identical between the two.
System allocated memory and CUDA allocators
GPU applications using CUDA memory allocators work “as is” on systems with HMM. The main difference in these systems is that system allocation APIs like malloc, C++ new, or mmap now create allocations that may be accessed from GPU threads, without having to call any CUDA APIs to tell CUDA about the existence of these allocations. Table 1 captures the differences between the most common CUDA memory allocators on systems with HMM:
| Memory allocators on systems with HMM | Placement | Migratable | Accessible from CPU | Accessible from GPU | Accessible from RDMA |
| --- | --- | --- | --- | --- | --- |
| System allocated (malloc, mmap, …) | First-touch GPU or CPU | Y | Y | Y | Y |
| CUDA managed (cudaMallocManaged) | First-touch GPU or CPU | Y | Y | Y | N |
| CUDA device-only (cudaMalloc, …) | GPU | N | N | Y | Y |
| CUDA host-pinned (cudaMallocHost, …) | CPU | N | Y | Y | Y |
Table 1. Overview of system and CUDA memory allocators on systems with HMM
In general, selecting the allocator that better expresses the application’s intent may enable CUDA to deliver better performance. With HMM, these choices become performance optimizations that do not need to be made upfront, before accessing the memory from the GPU for the first time. HMM enables developers to focus on parallelizing algorithms first, and to perform memory allocator-related optimizations later, when doing so improves performance.
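To make the “as is” behavior concrete, here is a minimal, hypothetical CUDA C++ sketch (not from the post): on an HMM-capable system, a plain malloc allocation can be handed directly to a kernel. The kernel name and sizes are illustrative only.
#include <cstdio>
#include <cstdlib>
// Hypothetical kernel: doubles every element of a plain malloc'd array.
__global__ void scale(int* data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= 2;
}
int main() {
  const int n = 1 << 20;
  // System allocated memory; no cudaMalloc or cudaMallocManaged needed with HMM.
  int* data = (int*)malloc(n * sizeof(int));
  for (int i = 0; i < n; ++i) data[i] = i;
  scale<<<(n + 255) / 256, 256>>>(data, n);  // GPU reads and writes malloc'd memory
  cudaDeviceSynchronize();
  printf("data[42] = %d\n", data[42]);       // CPU sees the GPU's writes
  free(data);
}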
Seamless GPU acceleration for C++, Fortran, and Python
HMM makes it significantly easier to program NVIDIA GPUs with standardized and portable programming languages like Python that do not distinguish between CPU and GPU memory and assume all threads may access all memory, as well as programming languages described by international standards like ISO Fortran and ISO C++.
These languages provide concurrency and parallelism facilities that enable implementations to automatically dispatch computations to GPUs and other devices. For example, since C++17, the standard library algorithms from the <algorithm> header accept execution policies that enable implementations to run them in parallel.
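As a minimal sketch of this mechanism (not from the original sample code), the snippet below passes an execution policy to a standard algorithm; when compiled with NVC++ and -stdpar=gpu, such a call may be offloaded to the GPU.
#include <algorithm>
#include <execution>
#include <vector>
int main() {
  std::vector<double> v(1'000'000, 1.0);
  // std::execution::par lets the implementation run the algorithm in parallel;
  // NVC++ with -stdpar=gpu may offload this loop to the GPU.
  std::for_each(std::execution::par, v.begin(), v.end(),
                [](double& x) { x *= 2.0; });
}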
Sorting a file in place from the GPU
For example, before HMM, sorting a file larger than CPU memory in-place was complicated, requiring sorting smaller parts of the file first, and merging them into a fully-sorted file afterwards. With HMM, the application may map the file on disk into memory using mmap, and read and write to it directly from the GPU. For more details, see the HMM sample code file_before.cpp and file_after.cpp on GitHub.
Before HMM Dynamic Allocation
void sortfile(FILE* fp, int N) {
std::vector<char> buffer;
buffer.resize(N);
fread(buffer.data(), 1, N, fp);
// std::sort runs on the GPU:
std::sort(std::execution::par,
buffer.begin(), buffer.end(),
std::greater{});
use_data(std::span{buffer});
}
After HMM CUDA Unified Memory + HMM (2023)
void sortfile(int fd, int N) {
auto buffer = (char*)mmap(NULL, N,
PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
// std::sort runs on the GPU:
std::sort(std::execution::par,
buffer, buffer + N,
std::greater{});
use_data(std::span{buffer, N});
}
The NVIDIA C++ Compiler (NVC++) implementation of the parallel std::sort algorithm sorts the file on the GPU when using the -stdpar=gpu option. There are many restrictions on the use of this option, as detailed in the HPC SDK documentation.
Before HMM: GPU may only access dynamically allocated memory on the heap within code compiled by NVC++. That is, automatic variables on CPU thread stacks, global variables, and memory-mapped files are not accessible from the GPU (see examples below).
After HMM: GPU may access all system allocated memory, including data dynamically allocated on the heap in CPU code compiled by other compilers and third-party libraries, automatic variables on CPU thread stacks, global variables in CPU memory, memory-mapped files, and so on.
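As a hedged illustration of the “After HMM” case (assuming NVC++ with -stdpar=gpu on an HMM-capable system; not from the original post), a GPU-offloaded parallel algorithm can touch an automatic array on the CPU thread stack directly, with no special allocator involved:
#include <algorithm>
#include <execution>
int global_sum = 0;  // global variable in CPU memory
int main() {
  int stack_data[4] = {1, 2, 3, 4};  // automatic array on the CPU thread stack
  // With HMM, a GPU-offloaded parallel algorithm can read and write the
  // stack array in place.
  std::for_each(std::execution::par, stack_data, stack_data + 4,
                [](int& x) { x *= 10; });
  // The CPU observes the GPU's writes directly; the global is also plain CPU memory.
  for (int x : stack_data) global_sum += x;
}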
Atomic memory operations and synchronization primitives
HMM supports all memory operations, including atomic memory operations. That is, programmers may use atomic memory operations to synchronize GPU and CPU threads with flags. While certain parts of the C++ std::atomic API rely on system calls that are not yet available on the GPU, such as std::atomic::wait and std::atomic::notify_all/notify_one, most of the C++ concurrency primitives are available and readily useful for message passing between GPU and CPU threads.
Before HMM CPU←→GPU message passing
int main() {
  // Variables allocated with cudaMallocManaged
  std::atomic<int>* flag;
  int* msg;
  cudaMallocManaged(&flag, sizeof(std::atomic<int>));
  cudaMallocManaged(&msg, sizeof(int));
  new (flag) std::atomic<int>(0);
  *msg = 0;
  // Start a different CPU thread…
  auto t = std::jthread([&] {
    // … that launches and waits
    // on a GPU kernel completing
    std::for_each_n(
      std::execution::par,
      &msg, 1, [&](int* msg) {
        // GPU thread writes message…
        *msg = 42;      // all accesses via ptrs
        // …and signals completion…
        flag->store(1); // all accesses via ptrs
      });
  });
  // CPU thread waits on GPU thread
  while (flag->load() == 0); // all accesses via ptrs
  // …and reads the message:
  std::cout << *msg << std::endl;
}
After HMM CPU←→GPU message passing
int main() {
  // Variables on CPU thread stack:
  std::atomic<int> flag = 0; // Atomic
  int msg = 0;               // Message
  // Start a different CPU thread…
  auto t = std::jthread([&] {
    // … that launches and waits
    // on a GPU kernel completing
    std::for_each_n(
      std::execution::par,
      &msg, 1, [&](int& msg) {
        // GPU thread writes message…
        msg = 42;
        // …and signals completion…
        flag.store(1);
      });
  });
  // CPU thread waits on GPU thread
  while (flag.load() == 0);
  // …and reads the message:
  std::cout << msg << std::endl;
}
Before HMM CPU←→GPU locks
int main() {
  // Variables allocated with cudaMallocManaged
  ticket_lock* lock; // Lock (ticket_lock is a user-defined lock type)
  int* msg;          // Message
  cudaMallocManaged(&lock, sizeof(ticket_lock));
  cudaMallocManaged(&msg, sizeof(int));
  new (lock) ticket_lock();
  *msg = 0;
  // Start a different CPU thread…
  auto t = std::jthread([&] {
    // … that launches and waits
    // on a GPU kernel completing
    std::for_each_n(
      std::execution::par,
      &msg, 1, [&](int* msg) {
        // GPU thread takes lock…
        auto g = lock->guard();
        // … and sets message (no atomics)
        *msg += 1;
      }); // GPU thread releases lock here
  });
  { // Concurrently with GPU thread
    // … CPU thread takes lock…
    auto g = lock->guard();
    // … and sets message (no atomics)
    *msg += 1;
  } // CPU thread releases lock here
  t.join(); // Wait on GPU kernel completion
  std::cout << *msg << std::endl;
}
After HMM CPU←→GPU locks
int main() {
  // Variables on CPU thread stack:
  ticket_lock lock; // Lock (ticket_lock is a user-defined lock type)
  int msg = 0;      // Message
  // Start a different CPU thread…
  auto t = std::jthread([&] {
    // … that launches and waits
    // on a GPU kernel completing
    std::for_each_n(
      std::execution::par,
      &msg, 1, [&](int& msg) {
        // GPU thread takes lock…
        auto g = lock.guard();
        // … and sets message (no atomics)
        msg += 1;
      }); // GPU thread releases lock here
  });
  { // Concurrently with GPU thread
    // … CPU thread takes lock…
    auto g = lock.guard();
    // … and sets message (no atomics)
    msg += 1;
  } // CPU thread releases lock here
  t.join(); // Wait on GPU kernel completion
  std::cout << msg << std::endl;
}
Accelerate complex HPC workloads with HMM
Research groups working on large and long-lived HPC applications have yearned for years for more productive and portable programming models for heterogeneous platforms. m-AIA is a multi-physics solver spanning almost 300,000 lines of code developed at the Institute of Aerodynamics at RWTH Aachen, Germany. See Accelerating a C++ CFD Code with OpenACC for more information. Instead of using OpenACC for the initial prototype, it is now partially accelerated on GPUs using the ISO C++ programming model described above, which was not available when the prototype work was done.
HMM enabled our team to accelerate new m-AIA workloads that interface with GPU-agnostic third-party libraries such as FFTW and pnetcdf, which are used for initial conditions and I/O and are oblivious to the GPU directly accessing the same memory.
Leverage memory-mapped I/O for fast development
One of the interesting features that HMM provides is memory-mapped file I/O directly from the GPU. It enables developers to directly read files from supported storage or /disk without staging them in system memory and without copying the data to the high bandwidth GPU memory. This also enables application developers to easily process input data larger than the available physical system memory, without constructing an iterative data ingestion and computation workflow.
To demonstrate this functionality, our team wrote a sample application that builds a histogram of hourly total precipitation for every day of the year from the ERA5 reanalysis dataset. For more details, see The ERA5 global reanalysis.
The ERA5 dataset consists of hourly estimates of several atmospheric variables. In the dataset, total precipitation data for each month is stored in a separate file. We used 40 years of total precipitation data from 1981–2020, which sum to 480 input files aggregating to ~1.3 TB total input data size. See Figure 1 for example results.
Figure 1.Average monthly rainfall over the northern hemisphere (left) and normalized average hourly rainfall for the months of February and August (right)
Using the Unix mmap API, input files can be mapped to a contiguous virtual address space. With HMM, this virtual address can be passed as input to a CUDA kernel which can then directly access the values to build a histogram of total precipitation for each hour for all the days in a year.
The resulting histogram will reside in GPU memory and can be used to easily compute interesting statistics such as average monthly precipitation over the northern hemisphere. As an example, we also computed average hourly precipitation for the months of February and August. To see the code for this application, visit HMM_sample_code on GitHub.
Before HMM Batch and pipeline memory transfers
size_t chunk_sz = 70_gb;             // 70_gb: user-defined size literal
std::vector<char> buffer(chunk_sz);  // staging buffer in system memory
for (auto& fp : files)
  for (size_t off = 0; off < N; off += chunk_sz) {
    // Stage a chunk of the file in system memory, then copy it to the GPU
    fread(buffer.data(), 1, chunk_sz, fp);
    cudaMemcpy(dev, buffer.data(), chunk_sz, cudaMemcpyHostToDevice);
    histogram<<<...>>>(dev, N, out);
    cudaDeviceSynchronize();
  }
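For comparison, the following is a hedged sketch of the post-HMM structure described above, where the input file is mmap’ed and the pointer is handed straight to the kernel; the function signature, launch configuration, and kernel declaration are illustrative rather than the actual sample code.
#include <cstddef>
#include <sys/mman.h>
// Assumed to be the same histogram kernel as in the chunked version above.
__global__ void histogram(const char* data, size_t n, unsigned* out);
void build_histogram(int fd, size_t N, unsigned* out) {
  // Map the whole input file; with HMM the GPU can read the mapping directly.
  const char* data = (const char*)mmap(NULL, N, PROT_READ, MAP_SHARED, fd, 0);
  // No staging buffer, no chunking loop, no cudaMemcpy.
  // Illustrative launch; a grid-stride loop inside the kernel is assumed.
  histogram<<<1024, 256>>>(data, N, out);
  cudaDeviceSynchronize();
  munmap((void*)data, N);
}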
Enabling and detecting HMM
The CUDA Toolkit and driver automatically enable HMM whenever they detect that your system can handle it. The requirements are documented in detail in the CUDA 12.2 Release Notes: General CUDA.
To detect systems in which GPUs may access system allocated memory, query the cudaDevAttrPageableMemoryAccess attribute.
In addition, systems such as the NVIDIA Grace Hopper Superchip support ATS, which has similar behavior to HMM. In fact, the programming model for HMM and ATS systems is the same, so merely checking for cudaDevAttrPageableMemoryAccess suffices for most programs.
However, for performance tuning and other advanced programming, it is possible to discern between HMM and ATS by also querying for cudaDevAttrPageableMemoryAccessUsesHostPageTables. Table 2 shows how to interpret the results.
| Attribute | HMM | ATS |
| --- | --- | --- |
| cudaDevAttrPageableMemoryAccess | 1 | 1 |
| cudaDevAttrPageableMemoryAccessUsesHostPageTables | 0 | 1 |
Table 2. CUDA device attributes to query HMM and ATS support
For portable applications that are only interested in querying whether the programming model exposed by HMM or ATS is available, querying the ‘pageable memory access’ property usually suffices.
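A minimal sketch of this detection logic, using the attributes from Table 2 (the output strings are illustrative):
#include <cuda_runtime.h>
#include <cstdio>
int main() {
  int pageable = 0, uses_host_page_tables = 0;
  cudaDeviceGetAttribute(&pageable,
                         cudaDevAttrPageableMemoryAccess, /*device=*/0);
  cudaDeviceGetAttribute(&uses_host_page_tables,
                         cudaDevAttrPageableMemoryAccessUsesHostPageTables, 0);
  if (!pageable)
    printf("GPU 0 cannot access system allocated memory\n");
  else if (uses_host_page_tables)
    printf("GPU 0: ATS (hardware coherence, e.g. Grace Hopper)\n");
  else
    printf("GPU 0: HMM (software-managed access to system memory)\n");
}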
Unified Memory performance hints
There are no changes to the semantics of pre-existing Unified Memory performance hints. For applications that are already using CUDA Unified Memory on hardware-coherent systems like NVIDIA Grace Hopper, the main change is that HMM enables them to run “as is” on more systems within the limitations mentioned above.
The pre-existing Unified Memory hints also work with system allocated memory on HMM systems:
A preferred location for the memory: cudaMemAdviseSetPreferredLocation, or
A device that will access the memory: cudaMemAdviseSetAccessedBy, or
A device that will be mostly reading the memory that will be infrequently modified: cudaMemAdviseSetReadMostly.
A little more advanced: there is a new CUDA 12.2 API, cudaMemAdvise_v2, that enables applications to choose which NUMA node a given memory range should prefer. This comes into play when HMM places the memory contents on the CPU side.
As always, memory management hints may either improve or degrade performance. Behavior is application and workload dependent, but none of the hints impacts the correctness of the application.
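As an illustrative sketch (not from the post), the pre-existing hints can be applied to a plain malloc allocation on an HMM system; the specific advice values below are arbitrary examples, not tuned recommendations.
#include <cuda_runtime.h>
#include <cstdlib>
int main() {
  size_t bytes = 1 << 20;
  // Plain system allocation; on an HMM system the Unified Memory hints below
  // can be applied to it as well.
  char* data = (char*)malloc(bytes);
  // Prefer to keep the data resident in CPU memory…
  cudaMemAdvise(data, bytes, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
  // …while declaring that GPU 0 will access it.
  cudaMemAdvise(data, bytes, cudaMemAdviseSetAccessedBy, 0);
  free(data);
}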
Limitations of HMM in CUDA 12.2
The initial HMM implementation in CUDA 12.2 delivers new features without regressing the performance of any pre-existing applications. The limitations of HMM in CUDA 12.2 are documented in detail in the CUDA 12.2 Release Notes: General CUDA. The main limitations are:
HMM is only available for x86_64, and other CPU architectures are not yet supported.
GPU atomic operations on file-backed memory and HugeTLBfs memory are not supported.
fork(2) without a following exec(3) is not fully supported.
Page migrations are handled at 4 KB page granularity.
Stay tuned for future CUDA driver updates that will address HMM limitations and improve performance.
Summary
HMM simplifies the programming model by removing the need for explicit memory management for GPU programs that run on common PCIe-based (x86, typically) computers. Programmers can simply use malloc, C++ new, and mmap calls directly, just as they already do for CPU programming.
HMM further boosts programmer productivity by enabling a wide variety of standard programming language features to be safely used within CUDA programs. There is no need to worry about accidentally exposing system allocated memory to a CUDA kernel.
HMM enables a seamless transition to and from the new NVIDIA Grace Hopper Superchip, and similar machines. On PCIe-based machines, HMM provides the same simplified programming model as that used on the NVIDIA Grace Hopper Superchip.
Posted by Catherine Armato, Program Manager, Google
This week, the 24th Annual Conference of the International Speech Communication Association (INTERSPEECH 2023) is being held in Dublin, Ireland, representing one of the world’s most extensive conferences on research and technology of spoken language understanding and processing. Experts in speech-related research fields gather to take part in oral presentations and poster sessions and to build collaborations across the globe.
We are excited to be a Platinum Sponsor of INTERSPEECH 2023, where we will be showcasing more than 20 research publications and supporting a number of workshops and special sessions. We welcome in-person attendees to drop by the Google Research booth to meet our researchers and participate in Q&As and demonstrations of some of our latest speech technologies, which help to improve accessibility and provide convenience in communication for billions of users. In addition, online attendees are encouraged to visit our virtual booth in Topia where you can get up-to-date information on research and opportunities at Google. Visit the @GoogleAI Twitter account to find out about Google booth activities (e.g., demos and Q&A sessions). You can also learn more about the Google research being presented at INTERSPEECH 2023 below (Google affiliations in bold).