Month: March 2021
This year at GTC, we have a new track for Game Developers, where you can attend sessions for free, covering the latest in ray tracing, optimizing game performance, and content creation in NVIDIA Omniverse.
Check out our top sessions below for those working in the gaming industry:
- Ray Tracing in Cyberpunk 2077
Learn how the developers at CD Projekt RED used extensive ray tracing techniques to create the game's visuals and bring the bustling Night City to life.
Evgeny Makarov, Developer Technology Engineer, NVIDIA
Jakub Knapik, Art Director, CD Projekt RED
- Our Sniper Elite 4 Journey – Lessons in Porting AAA Action Games to the Nintendo Switch
The Asura engine, entirely developed in-house by Rebellion, has allowed the independent developer/publisher maximum creative and technical freedom. Rebellion has overcome enormous technical challenges and built on years of Nintendo development experience to bring their flagship game, “Sniper Elite 4,” to the Switch platform. Learn how a crack team took a AAA game targeting PS4/XB1 and got it running on a Nintendo Switch. Through a journey of Switch releases, you’ll see how Rebellion optimized “Sniper Elite 4” beyond what anyone thought was possible to deliver a beautiful and smooth experience.
Arden Aspinall, Studio Head, Rebellion North
- Ray Tracing in One Weekend
This presentation will assume the audience knows nothing about ray tracing. It is a guide for the first day in country. But rather than a broad survey, it will dig deep into one way to make great-looking images (the one discussed in the free ebook Ray Tracing in One Weekend). No API or language will be discussed: all pseudocode. There will be no integrals, density functions, derivatives, or other topics inappropriate for polite company.
Pete Shirley, Distinguished Research Engineer, NVIDIA
- LEGO Builder’s Journey: Rendering Realistic LEGO Bricks Using Ray Tracing in Unity
Learn how we render realistic-looking LEGO dioramas in real time using Unity high-definition render pipeline and ray tracing. Starting from a stylized look, we upgraded the game to use realistic rendering on PC to enhance immersion in the game play and story. From lighting and materials to geometry processing and post effects, you’ll get a deep insight into what we’ve done to get as close to realism as possible with a small team in a limited time — all while still using the same assets for other versions of the game.
Mikkel Fredborg, Technical Lead, Light Brick Studio
- Introduction to Real Time Ray Tracing with Minecraft
This talk is aimed at graphics engineers who have little or no experience with ray tracing. It serves as a gentle introduction to many topics, including “What is ray tracing?”, “How many rays do you need to make an image?”, “The importance of [importance] sampling. (And more importantly, what is importance sampling?)”, “Denoising”, and “The problem with small bright things”. Along the way, you will learn about specific implementation details from Minecraft.
Oli Wright, GeForce DevTech, NVIDIA
Visit the GTC website to view the entire Game Development track and to register for the free conference.
University of Waterloo researchers are using deep learning and computer vision to develop autonomous exoskeleton legs to help users walk, climb stairs, and avoid obstacles.
The project, described in an early-access paper in IEEE Transactions on Medical Robotics and Bionics, fits users with wearable cameras. AI software processes the camera’s video stream and is being trained to recognize surrounding features such as stairs and doorways, and then determine the best movements to take.
“Our control approach wouldn’t necessarily require human thought,” said Brokoslaw Laschowski, Ph.D. candidate in systems design engineering and lead author on the project. “Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves.”
People who rely on exoskeletons for mobility typically operate the devices using smartphone apps or joysticks.
“That can be inconvenient and cognitively demanding,” said Laschowski, who works with engineering professor John McPhee, the Canada Research Chair in Biomechatronic System Dynamics. “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”
The researchers are using NVIDIA TITAN GPUs for neural network training and real-time image classification of walking environments. They collected 923,000 images of human locomotion environments to create a database dubbed ExoNet — which was used to train the initial model, developed using the TensorFlow deep learning framework.
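As a rough illustration of what such an environment classifier involves (this is not the researchers’ actual architecture or code; the directory layout, image size, and class count below are placeholders), a TensorFlow prototype might look like this:

import tensorflow as tf

# Illustrative sketch only: paths, image size, and number of environment classes are assumptions.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "exonet_images/train",            # hypothetical directory of labeled video frames
    image_size=(224, 224),
    batch_size=32,
)

# Start from a pretrained backbone and fine-tune a small classification head
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(12, activation="softmax"),   # e.g. stairs, doorways, level ground, ...
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)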
Still in development, the exoskeleton system must learn to operate on uneven terrain and avoid obstacles before becoming fully functional. To boost battery life, the team plans to use human motion to help charge the devices.
The recent paper analyzed how the power a person exerts when moving from sitting to standing could be harvested as biomechanical energy to charge the robotic exoskeletons.
Read the University of Waterloo news release for more >>
The researchers’ latest paper is available here. The original paper, published in 2019 at the IEEE International Conference on Rehabilitation Robotics, was a finalist for a best paper award.
Lurked Reddit for a while but need some help with something I’m programming. I’m trying to create a multilayer perceptron in TensorFlow. From what I can understand, an MLP is almost like a basic form of neural network that can be built upon to become other networks (adding convolution layers turns it into a CNN). In TensorFlow/Keras I am creating a Sequential object and then adding layers to it. Is this how an MLP is meant to be created with those libraries, or is there a more direct way?
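For reference, here’s roughly what I’m doing (a minimal sketch; the layer sizes and input shape are just placeholders):

import tensorflow as tf

# minimal MLP via the Sequential API; layer sizes and input shape are placeholders
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])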
Also, I know that whenever my model is compiled it generates random weight distributions from a seed – is there a way I can extract the seed used from a trained model so I can keep the one that produces the smallest loss value?
submitted by /u/Greedy-Snow808
Hello everybody,
that’s my first post here, so please be nice 🙂 I’m totally new to TensorFlow, so this is a beginner’s guide and not a deep dive.
As you may know, the new free MIT Intro to Deep Learning course is online. Some of the models given there are kinda memory hungry, so here’s the solution:
CAUTION: think while copying from online tutorials!
First of all, it is a blessing to work with the tensorflow/tensorflow:latest-gpu Docker container, so yeah, just do it.
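Something along these lines works (the published port and the mounted path are just examples, adjust to taste):
docker run --gpus all -it -p 8888:8888 -v "$PWD":/workspace tensorflow/tensorflow:latest-gpu bash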
First some dependencies: the notebooks need python3-opencv, and lab 1 needs abcmidi and timidity
apt install python3-opencv abcmidi timidity
To edit the code in a personal directory and not in the container, you need a non-root user
adduser nonroot
Log in as that user
su - nonroot
Install your editor; for me, that’s JupyterLab
pip install jupyterlab
Start JupyterLab on 0.0.0.0 in the bound directory
jupyter lab --ip 0.0.0.0
Add these lines at the top, before importing TensorFlow
import os
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'
And these after importing tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
try:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
except:
    # Invalid device or cannot modify virtual devices once initialized.
    pass
Tip: add
%config Completer.use_jedi = False
if you have problems with autocomplete.
I hope that helps somebody!
submitted by /u/deep-and-learning
Dask is an accessible and powerful solution for natively scaling Python analytics. It allows data scientists who already know PyData tools to scale big data workloads easily through familiar interfaces. Dask is powerful enough that we have adopted it across a variety of projects at NVIDIA. When paired with RAPIDS, data practitioners can distribute big data workloads across massive NVIDIA GPU clusters.
To make it easier to leverage NVIDIA accelerated compute, we’ve added support for launching RAPIDS + Dask on the latest NVIDIA A100 GPUs in the cloud, allowing users and enterprises to get the most out of their data.
Spin Up NVIDIA GPU Clusters Quickly with Dask Cloud Provider
While Dask makes scaling analytics workloads easy, distributing workloads in Cloud environments can be tricky. Dask-CloudProvider is a package that provides native Cloud integration, making it simple to get started on Amazon Web Services, Google Cloud Platform, or Microsoft Azure. Using native Cloud tools, data scientists, machine learning engineers, and DevOps engineers can stand up infrastructure and start running workloads in no time.
RAPIDS builds upon Dask-CloudProvider to make spinning up the most powerful NVIDIA GPU instances on raw virtual machines easy. While AWS, GCP, and Azure have great managed services for data scientists, those services can take time to adopt new GPU architectures. With Dask-CloudProvider and RAPIDS, users and enterprises can leverage the latest NVIDIA A100 GPUs, providing up to 20x more performance than the previous generation. With 40GB of GPU memory each and 600GB/s NVLink connectivity, NVIDIA A100 GPUs are a supercharged workhorse for enterprise-scale data science workloads. Dask-CloudProvider and RAPIDS provide an easy way to get started with A100s without having to configure raw VMs from scratch.
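As a rough sketch of what this looks like in practice (not an official recipe: the instance type, container tag, column names, and data path below are illustrative, and exact parameters vary by provider and dask-cloudprovider version), spinning up an A100 cluster on AWS and connecting a Dask client might look like this:

from dask_cloudprovider.aws import EC2Cluster   # GCP and Azure have analogous cluster classes
from dask.distributed import Client
import dask_cudf                                 # run this from a RAPIDS-enabled environment

# Example only: instance type, image tag, and worker count depend on your account and region
cluster = EC2Cluster(
    instance_type="p4d.24xlarge",                # A100-based instance (example)
    n_workers=2,
    docker_image="rapidsai/rapidsai:latest",     # RAPIDS container tag is illustrative
    worker_class="dask_cuda.CUDAWorker",         # one CUDA worker per GPU
)
client = Client(cluster)

# With the cluster up, RAPIDS + Dask can read and process data across the GPUs
df = dask_cudf.read_csv("s3://my-bucket/data/*.csv")   # hypothetical path and columns
print(df.groupby("key")["value"].mean().compute())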
RAPIDS strives to make NVIDIA accelerated data science accessible to a broader data-driven audience. With Dask, RAPIDS allows data scientists to solve enterprise-scale problems in less time and with less pain. For a deeper understanding of the latest RAPIDS features and integrations, read more here.
Over the years, online multiplayer games have exploded in popularity, captivating millions of players across the world. This popularity has also exponentially increased demands on game designers, as players expect games to be well-crafted and balanced — after all, it’s no fun to play a game where a single strategy beats all the rest.
In order to create a positive gameplay experience, game designers typically tune the balance of a game iteratively:
- Stress-test through thousands of play-testing sessions from test users
- Incorporate feedback and re-design the game
- Repeat 1 & 2 until both the play-testers and game designers are satisfied
This process is not only time-consuming but also imperfect — the more complex the game, the easier it is for subtle flaws to slip through the cracks. When a game has many different playable roles, each with dozens of interconnecting skills, it becomes all the more difficult to hit the right balance.
Today, we present an approach that leverages machine learning (ML) to adjust game balance by training models to serve as play-testers, and demonstrate this approach on the digital card game prototype Chimera, which we’ve previously shown as a testbed for ML-generated art. By running millions of simulations using trained agents to collect data, this ML-based game testing approach enables game designers to more efficiently make a game more fun, balanced, and aligned with their original vision.
Chimera
We developed Chimera as a game prototype that would heavily lean on machine learning during its development process. For the game itself, we purposefully designed the rules to expand the possibility space, making it difficult to build a traditional hand-crafted AI to play the game.
The gameplay of Chimera revolves around the titular chimeras, creature mash-ups that players aim to strengthen and evolve. The objective of the game is to defeat the opponent’s chimera. These are the key points in the game design:
- Players may play:
- creatures, which can attack (through their attack stat) or be attacked (against their health stat), or
- spells, which produce special effects.
- Creatures are summoned into limited-capacity biomes, which are placed physically on the board space. Each creature has a preferred biome and will take repeated damage if placed on an incorrect biome or a biome that is over capacity.
- A player controls a single chimera, which starts off in a basic “egg” state and can be evolved and strengthened by absorbing creatures. To do this, the player must also acquire a certain amount of link energy, which is generated from various gameplay mechanics.
- The game ends when a player has successfully brought the health of the opponent’s chimera to 0.
Learning to Play Chimera
As an imperfect-information card game with a large state space, Chimera was a game we expected to be difficult for an ML model to learn, especially as we were aiming for a relatively simple model. We used an approach inspired by earlier game-playing agents like AlphaGo, in which a convolutional neural network (CNN) is trained to predict the probability of a win when given an arbitrary game state. After training an initial model on games where random moves were chosen, we set the agent to play against itself, iteratively collecting game data that was then used to train a new agent. With each iteration, the quality of the training data improved, as did the agent’s ability to play the game.
The ML agent’s performance against our best hand-crafted AI as training progressed. The initial ML agent (version 0) picked moves randomly.
For the actual game state representation that the model would receive as input, we found that passing an “image” encoding to the CNN resulted in the best performance, beating all benchmark procedural agents and other types of networks (e.g. fully connected). The chosen model architecture is small enough to run on a CPU in reasonable time, which allowed us to download the model weights and run the agent live in a Chimera game client using Unity Barracuda.
An example game state representation used to train the neural network.
In addition to making decisions for the game AI, we also used the model to display the estimated win probability for a player over the course of the game.
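To make the training loop concrete, here is a schematic sketch of this kind of self-play procedure. It is not the team’s actual implementation: the ChimeraEnv simulator, the state encoding, and all hyperparameters below are stand-ins.

import numpy as np
import tensorflow as tf

def build_value_net(state_shape):
    # small CNN mapping an "image" encoding of the game state to P(win)
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=state_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def pick_move(model, env):
    # greedy one-ply search: evaluate the state after each legal move and
    # pick the move whose resulting state has the highest predicted win probability
    moves = env.legal_moves()
    states = np.stack([env.preview(m) for m in moves])
    return moves[int(np.argmax(model.predict(states, verbose=0)))]

def self_play_iteration(model, n_games):
    states, outcomes = [], []
    for _ in range(n_games):
        env = ChimeraEnv()                       # hypothetical game simulator
        game_states = []
        while not env.done():
            env.apply(pick_move(model, env))
            game_states.append(env.encode())     # image-like state tensor
        # every state in the game is labeled with the final outcome for the acting player
        states.extend(game_states)
        outcomes.extend([env.outcome()] * len(game_states))
    model.fit(np.stack(states), np.array(outcomes), epochs=1)
    return model

# an untrained network stands in for the random version-0 agent;
# each iteration trains on data generated by the previous agent
model = build_value_net(state_shape=(16, 16, 8))   # placeholder encoding shape
model.compile(optimizer="adam", loss="binary_crossentropy")
for _ in range(10):
    model = self_play_iteration(model, n_games=1000)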
Balancing Chimera
This approach enabled us to simulate millions more games than real players would be capable of playing in the same time span. After collecting data from the games played by the best-performing agents, we analyzed the results to find imbalances between two of the player decks we had designed.
First, the Evasion Link Gen deck was composed of spells and creatures with abilities that generated extra link energy used to evolve a player’s chimera. It also contained spells that enabled creatures to evade attacks. In contrast, the Damage-Heal deck contained creatures of variable strength with spells that focused on healing and inflicting minor damage. Although we had designed these decks to be of equal strength, the Evasion Link Gen deck was winning 60% of the time when played against the Damage-Heal deck.
When we collected various stats related to biomes, creatures, spells, and chimera evolutions, two things immediately jumped out at us:
- There was a clear advantage in evolving a chimera — the agent won a majority of the games where it evolved its chimera more than the opponent did. Yet, the average number of evolves per game did not meet our expectations. To make it more of a core game mechanic, we wanted to increase the overall average number of evolves while keeping its usage strategic.
- The T-Rex creature was overpowered. Its appearances correlated strongly with wins, and the model would always play the T-Rex regardless of penalties for summoning into an incorrect or overcrowded biome.
From these insights, we made some adjustments to the game. To emphasize chimera evolution as a core mechanism in the game, we decreased the amount of link energy required to evolve a chimera from 3 to 1. We also added a “cool-off” period to the T-Rex creature, doubling the time it took to recover from any of its actions.
Repeating our ‘self-play’ training procedure with the updated rules, we observed that these changes pushed the game in the desired direction — the average number of evolves per game increased, and the T-Rex’s dominance faded.
By weakening the T-Rex, we successfully reduced the Evasion Link Gen deck’s reliance on an overpowered creature. Even so, the win ratio between the decks remained at 60/40 rather than 50/50. A closer look at the individual game logs revealed that the gameplay was often less strategic than we would have liked. Searching through our gathered data again, we found several more areas to introduce changes in.
To start, we increased the starting health of both players as well as the amount of health that healing spells could replenish. This was to encourage longer games that would allow a more diverse set of strategies to flourish. In particular, this enabled the Damage-Heal deck to survive long enough to take advantage of its healing strategy. To encourage proper summoning and strategic biome placement, we increased the existing penalties on playing creatures into incorrect or overcrowded biomes. And finally, we decreased the gap between the strongest and weakest creatures through minor attribute adjustments.
New adjustments in place, we arrived at the final game balance stats for these two decks:
| Deck | Avg # evolves per game (before → after) | Win % over 1M games (before → after) |
| --- | --- | --- |
| Evasion Link Gen | 1.54 → 2.16 | 59.1% → 49.8% |
| Damage Heal | 0.86 → 1.76 | 40.9% → 50.2% |
Conclusion
Normally, identifying imbalances in a newly prototyped game can take months of playtesting. With this approach, we were able to not only discover potential imbalances but also introduce tweaks to mitigate them in a span of days. We found that a relatively simple neural network was sufficient to reach high level performance against humans and traditional game AI. These agents could be leveraged in further ways, such as for coaching new players or discovering unexpected strategies. We hope this work will inspire more exploration in the possibilities of machine learning for game development.
Acknowledgements
This project was conducted in collaboration with many people. Thanks to Ryan Poplin, Maxwell Hannaman, Taylor Steil, Adam Prins, Michal Todorovic, Xuefan Zhou, Aaron Cammarata, Andeep Toor, Trung Le, Erin Hoffman-John, and Colin Boswell. Thanks to everyone who contributed through playtesting, advising on game design, and giving valuable feedback.
Build Your Own AI-Powered Q&A Service
Conversational AI, the ability for machines to understand and respond to human queries, is being widely adopted across industries. Enterprises see its value in solutions like chatbots and virtual assistants, which better support customers while lowering the cost of customer service.
You can now build your own AI-powered Q&A service with the step-by-step instructions provided in this four-part blog series. All the software resources you will need, from deep learning frameworks to pre-trained models to inference engines, are available from the NVIDIA NGC catalog, a hub of GPU-optimized software.
The blog series walks through:
- Part 1: Leveraging pre-trained models to build custom models with your training dataset
- Part 2: Optimizing the custom model to provide lower latency and higher throughput
- Part 3: Running inference on your custom models
- Part 4: Deploying the virtual assistant in the cloud
While the blog series uses GPU-powered cloud instances, the instructions work for on-prem systems as well.
Build your virtual assistant today with these instructions or join us at NVIDIA GTC for free on April 13th for our session “Accelerating AI Workflows at GTC” to learn step-by-step how to build a conversational AI solution using artifacts from the NGC catalog.
torch time series, final episode: Attention
This is the final post in a four-part introduction to time-series forecasting with torch. These posts have been the story of a quest for multiple-step prediction, and by now, we’ve seen three different approaches: forecasting in a loop, incorporating a multi-layer perceptron (MLP), and sequence-to-sequence models. Here’s a quick recap.
- As one should when one sets out for an adventurous journey, we started with an in-depth study of the tools at our disposal: recurrent neural networks (RNNs). We trained a model to predict the very next observation in line, and then thought of a clever hack: How about we use this for multi-step prediction, feeding back individual predictions in a loop? The result, it turned out, was quite acceptable.
- Then, the adventure really started. We built our first model “natively” for multi-step prediction, relieving the RNN a bit of its workload and involving a second player, a tiny-ish MLP. Now, it was the MLP’s task to project RNN output to several time points in the future. Although results were pretty satisfactory, we didn’t stop there.
- Instead, we applied to numerical time series a technique commonly used in natural language processing (NLP): sequence-to-sequence (seq2seq) prediction. While forecast performance was not much different from the previous case, we found the technique to be more intuitively appealing, since it reflects the causal relationship between successive forecasts.
Today we’ll enrich the seq2seq approach by adding a new component: the attention module. Originally introduced around 2014 (Bahdanau, Cho, and Bengio 2014), attention mechanisms have gained enormous traction, so much so that a recent paper title starts out “Attention is Not All You Need” (Dong, Cordonnier, and Loukas 2021).
The idea is the following.
In the classic encoder-decoder setup, the decoder gets “primed” with an encoder summary just a single time: the time it starts its forecasting loop. From then on, it’s on its own. With attention, however, it gets to see the complete sequence of encoder outputs again every time it forecasts a new value. What’s more, every time, it gets to zoom in on those outputs that seem relevant for the current prediction step.
This is a particularly useful strategy in translation: In generating the next word, a model will need to know what part of the source sentence to focus on. How much the technique helps with numerical sequences, in contrast, will likely depend on the features of the series in question.
Data input
As before, we work with vic_elec, but this time, we partly deviate from the way we used to employ it. With the original, bi-hourly dataset, training the current model takes a long time, longer than readers will want to wait when experimenting. So instead, we aggregate observations by day. In order to have enough data, we train on years 2012 and 2013, reserving 2014 for validation as well as post-training inspection.
vic_elec_daily <- vic_elec %>%
  select(Time, Demand) %>%
  index_by(Date = date(Time)) %>%
  summarise(Demand = sum(Demand) / 1e3)

elec_train <- vic_elec_daily %>%
  filter(year(Date) %in% c(2012, 2013)) %>%
  as_tibble() %>%
  select(Demand) %>%
  as.matrix()

elec_valid <- vic_elec_daily %>%
  filter(year(Date) == 2014) %>%
  as_tibble() %>%
  select(Demand) %>%
  as.matrix()

elec_test <- vic_elec_daily %>%
  filter(year(Date) %in% c(2014), month(Date) %in% 1:4) %>%
  as_tibble() %>%
  select(Demand) %>%
  as.matrix()

train_mean <- mean(elec_train)
train_sd <- sd(elec_train)
We’ll attempt to forecast demand up to fourteen days ahead. How long, then, should the input sequences be? This is a matter of experimentation; all the more so now that we’re adding in the attention mechanism. (I suspect that it might not handle very long sequences so well.)
Below, we go with fourteen days for input length, too, but that may not necessarily be the best possible choice for this series.
n_timesteps <- 7 * 2
n_forecast <- 7 * 2

elec_dataset <- dataset(
  name = "elec_dataset",

  initialize = function(x, n_timesteps, sample_frac = 1) {
    self$n_timesteps <- n_timesteps
    self$x <- torch_tensor((x - train_mean) / train_sd)

    n <- length(self$x) - self$n_timesteps - 1

    self$starts <- sort(sample.int(
      n = n,
      size = n * sample_frac
    ))
  },

  .getitem = function(i) {
    start <- self$starts[i]
    end <- start + self$n_timesteps - 1
    lag <- 1

    list(
      x = self$x[start:end],
      y = self$x[(start + lag):(end + lag)]$squeeze(2)
    )
  },

  .length = function() {
    length(self$starts)
  }
)

batch_size <- 32

train_ds <- elec_dataset(elec_train, n_timesteps, sample_frac = 0.5)
train_dl <- train_ds %>% dataloader(batch_size = batch_size, shuffle = TRUE)

valid_ds <- elec_dataset(elec_valid, n_timesteps, sample_frac = 0.5)
valid_dl <- valid_ds %>% dataloader(batch_size = batch_size)

test_ds <- elec_dataset(elec_test, n_timesteps)
test_dl <- test_ds %>% dataloader(batch_size = 1)
Model
Model-wise, we again encounter the three modules familiar from the previous post: encoder, decoder, and top-level seq2seq module. However, there is an additional component: the attention module, used by the decoder to obtain attention weights.
Encoder
The encoder still wraps an RNN. However, it now returns the complete sequence of outputs together with the final state: the decoder’s attention module needs the former, and the decoder’s RNN needs the latter.
encoder_module <- nn_module(
  initialize = function(type, input_size, hidden_size, num_layers = 1, dropout = 0) {
    self$type <- type
    self$rnn <- if (self$type == "gru") {
      nn_gru(
        input_size = input_size,
        hidden_size = hidden_size,
        num_layers = num_layers,
        dropout = dropout,
        batch_first = TRUE
      )
    } else {
      nn_lstm(
        input_size = input_size,
        hidden_size = hidden_size,
        num_layers = num_layers,
        dropout = dropout,
        batch_first = TRUE
      )
    }
  },
  forward = function(x) {
    # return the complete RNN result: outputs for all timesteps (needed by the
    # attention module), plus the final states for all layers
    # (per layer, a single tensor for GRU, a list of 2 tensors for LSTM)
    self$rnn(x)
  }
)
Attention module
In basic seq2seq, whenever it had to generate a new value, the decoder took into account two things: its prior state, and the previous output generated. In an attention-enriched setup, the decoder additionally receives the complete output from the encoder. In deciding what subset of that output should matter, it gets help from a new agent, the attention module.
This, then, is the attention module’s raison d’être: Given the current decoder state as well as the complete encoder outputs, obtain a weighting of those outputs indicative of how relevant they are to what the decoder is currently up to. This procedure results in the so-called attention weights: normalized scores, one for each time step in the encoding, that quantify their respective importance.
Attention may be implemented in a number of different ways. Here, we show two implementation options, one additive, and one multiplicative.
Additive attention
In additive attention, encoder outputs and decoder state are commonly either added or concatenated (we choose to do the latter, below). The resulting tensor is run through a linear layer, and a softmax is applied for normalization.
attention_module_additive <- nn_module(
  initialize = function(hidden_dim, attention_size) {
    self$attention <- nn_linear(2 * hidden_dim, attention_size)
  },
  forward = function(state, encoder_outputs) {
    # function argument shapes
    # encoder_outputs: (bs, timesteps, hidden_dim)
    # state: (1, bs, hidden_dim)

    # multiplex state to allow for concatenation (dimensions 1 and 2 must agree)
    seq_len <- dim(encoder_outputs)[2]
    # resulting shape: (bs, timesteps, hidden_dim)
    state_rep <- state$permute(c(2, 1, 3))$repeat_interleave(seq_len, 2)

    # concatenate along feature dimension
    concat <- torch_cat(list(state_rep, encoder_outputs), dim = 3)

    # run through linear layer with tanh
    # resulting shape: (bs, timesteps, attention_size)
    scores <- self$attention(concat) %>%
      torch_tanh()

    # sum over attention dimension and normalize
    # resulting shape: (bs, timesteps)
    attention_weights <- scores %>%
      torch_sum(dim = 3) %>%
      nnf_softmax(dim = 2)

    # a normalized score for every source token
    attention_weights
  }
)
Multiplicative attention
In multiplicative attention, scores are obtained by computing dot products between decoder state and all of the encoder outputs. Here too, a softmax is then used for normalization.
attention_module_multiplicative <- nn_module(
  initialize = function() {
    NULL
  },
  forward = function(state, encoder_outputs) {
    # function argument shapes
    # encoder_outputs: (bs, timesteps, hidden_dim)
    # state: (1, bs, hidden_dim)

    # allow for matrix multiplication with encoder_outputs
    state <- state$permute(c(2, 3, 1))

    # prepare for scaling by number of features
    d <- torch_tensor(dim(encoder_outputs)[3], dtype = torch_float())

    # scaled dot products between state and outputs
    # resulting shape: (bs, timesteps, 1)
    scores <- torch_bmm(encoder_outputs, state) %>%
      torch_div(torch_sqrt(d))

    # normalize
    # resulting shape: (bs, timesteps)
    attention_weights <- scores$squeeze(3) %>%
      nnf_softmax(dim = 2)

    # a normalized score for every source token
    attention_weights
  }
)
Decoder
Once attention weights have been computed, their actual application is handled by the decoder. Concretely, the method in question, weighted_encoder_outputs(), computes a product of weights and encoder outputs, making sure that each output will have appropriate impact.
The rest of the action then happens in forward(). A concatenation of weighted encoder outputs (often called “context”) and current input is run through an RNN. Then, an ensemble of RNN output, context, and input is passed to an MLP. Finally, both RNN state and current prediction are returned.
decoder_module <- nn_module(
  initialize = function(type, input_size, hidden_size, attention_type, attention_size = 8, num_layers = 1) {
    self$type <- type

    self$rnn <- if (self$type == "gru") {
      nn_gru(
        input_size = input_size,
        hidden_size = hidden_size,
        num_layers = num_layers,
        batch_first = TRUE
      )
    } else {
      nn_lstm(
        input_size = input_size,
        hidden_size = hidden_size,
        num_layers = num_layers,
        batch_first = TRUE
      )
    }

    self$linear <- nn_linear(2 * hidden_size + 1, 1)

    self$attention <- if (attention_type == "multiplicative") attention_module_multiplicative() else attention_module_additive(hidden_size, attention_size)
  },
  weighted_encoder_outputs = function(state, encoder_outputs) {
    # encoder_outputs is (bs, timesteps, hidden_dim)
    # state is (1, bs, hidden_dim)

    # resulting shape: (bs, timesteps)
    attention_weights <- self$attention(state, encoder_outputs)

    # resulting shape: (bs, 1, seq_len)
    attention_weights <- attention_weights$unsqueeze(2)

    # resulting shape: (bs, 1, hidden_size)
    weighted_encoder_outputs <- torch_bmm(attention_weights, encoder_outputs)

    weighted_encoder_outputs
  },
  forward = function(x, state, encoder_outputs) {
    # encoder_outputs is (bs, timesteps, hidden_dim)
    # state is (1, bs, hidden_dim)

    # resulting shape: (bs, 1, hidden_size)
    context <- self$weighted_encoder_outputs(state, encoder_outputs)

    # concatenate input and context
    # NOTE: this repeating is done to compensate for the absence of an embedding module
    # that, in NLP, would give x a higher proportion in the concatenation
    x_rep <- x$repeat_interleave(dim(context)[3], 3)
    rnn_input <- torch_cat(list(x_rep, context), dim = 3)

    # resulting shapes: (bs, 1, hidden_size) and (1, bs, hidden_size)
    rnn_out <- self$rnn(rnn_input, state)
    rnn_output <- rnn_out[[1]]
    next_hidden <- rnn_out[[2]]

    mlp_input <- torch_cat(list(rnn_output$squeeze(2), context$squeeze(2), x$squeeze(2)), dim = 2)

    output <- self$linear(mlp_input)

    # shapes: (bs, 1) and (1, bs, hidden_size)
    list(output, next_hidden)
  }
)
seq2seq module
The seq2seq module is basically unchanged (apart from the fact that now, it allows for attention module configuration). For a detailed explanation of what happens here, please consult the previous post.
seq2seq_module <- nn_module(
  initialize = function(type, input_size, hidden_size, attention_type, attention_size, n_forecast,
                        num_layers = 1, encoder_dropout = 0) {
    self$encoder <- encoder_module(
      type = type,
      input_size = input_size,
      hidden_size = hidden_size,
      num_layers,
      encoder_dropout
    )
    self$decoder <- decoder_module(
      type = type,
      input_size = 2 * hidden_size,
      hidden_size = hidden_size,
      attention_type = attention_type,
      attention_size = attention_size,
      num_layers
    )
    self$n_forecast <- n_forecast
  },
  forward = function(x, y, teacher_forcing_ratio) {
    outputs <- torch_zeros(dim(x)[1], self$n_forecast)$to(device = device)

    encoded <- self$encoder(x)
    encoder_outputs <- encoded[[1]]
    hidden <- encoded[[2]]

    # list of (batch_size, 1), (1, batch_size, hidden_size)
    out <- self$decoder(x[ , n_timesteps, , drop = FALSE], hidden, encoder_outputs)

    # (batch_size, 1)
    pred <- out[[1]]
    # (1, batch_size, hidden_size)
    state <- out[[2]]
    outputs[ , 1] <- pred$squeeze(2)

    for (t in 2:self$n_forecast) {
      teacher_forcing <- runif(1) < teacher_forcing_ratio
      # with teacher forcing, feed in the ground truth; otherwise, feed back the prediction
      input <- if (teacher_forcing == TRUE) y[ , t - 1]$unsqueeze(2)$unsqueeze(3) else pred$unsqueeze(3)
      out <- self$decoder(input, state, encoder_outputs)
      pred <- out[[1]]
      state <- out[[2]]
      outputs[ , t] <- pred$squeeze(2)
    }

    outputs
  }
)
When instantiating the top-level model, we now have an additional choice: that between additive and multiplicative attention. In the “accuracy” sense of performance, my tests did not show any differences. However, the multiplicative variant is a lot faster.
net <- seq2seq_module("gru", input_size = 1, hidden_size = 32, attention_type = "multiplicative", attention_size = 8, n_forecast = n_timesteps) # training RNNs on the GPU currently prints a warning that may clutter # the console # see https://github.com/mlverse/torch/issues/461 # alternatively, use # device <- "cpu" device <- torch_device(if (cuda_is_available()) "cuda" else "cpu") net <- net$to(device = device)
Training
Just like last time, in model training, we get to choose the degree of teacher forcing. Below, we go with a fraction of 0.0, that is, no forcing at all.
optimizer <- optim_adam(net$parameters, lr = 0.001)

num_epochs <- 100

train_batch <- function(b, teacher_forcing_ratio) {
  optimizer$zero_grad()
  output <- net(b$x$to(device = device), b$y$to(device = device), teacher_forcing_ratio)
  target <- b$y$to(device = device)

  loss <- nnf_mse_loss(output, target)
  loss$backward()
  optimizer$step()

  loss$item()
}

valid_batch <- function(b, teacher_forcing_ratio = 0) {
  output <- net(b$x$to(device = device), b$y$to(device = device), teacher_forcing_ratio)
  target <- b$y$to(device = device)

  loss <- nnf_mse_loss(output, target)
  loss$item()
}

for (epoch in 1:num_epochs) {

  net$train()
  train_loss <- c()

  coro::loop(for (b in train_dl) {
    loss <- train_batch(b, teacher_forcing_ratio = 0)
    train_loss <- c(train_loss, loss)
  })

  cat(sprintf("\nEpoch %d, training: loss: %3.5f \n", epoch, mean(train_loss)))

  net$eval()
  valid_loss <- c()

  coro::loop(for (b in valid_dl) {
    loss <- valid_batch(b)
    valid_loss <- c(valid_loss, loss)
  })

  cat(sprintf("\nEpoch %d, validation: loss: %3.5f \n", epoch, mean(valid_loss)))
}
# Epoch 1, training: loss: 0.83752
# Epoch 1, validation: loss: 0.83167
# Epoch 2, training: loss: 0.72803
# Epoch 2, validation: loss: 0.80804
# ...
# ...
# Epoch 99, training: loss: 0.10385
# Epoch 99, validation: loss: 0.21259
# Epoch 100, training: loss: 0.10396
# Epoch 100, validation: loss: 0.20975
Evaluation
For visual inspection, we pick a few forecasts from the test set.
net$eval()

test_preds <- vector(mode = "list", length = length(test_dl))

i <- 1

vic_elec_test <- vic_elec_daily %>%
  filter(year(Date) == 2014, month(Date) %in% 1:4)

coro::loop(for (b in test_dl) {
  input <- b$x
  output <- net(b$x$to(device = device), b$y$to(device = device), teacher_forcing_ratio = 0)
  preds <- as.numeric(output)

  test_preds[[i]] <- preds
  i <<- i + 1
})

test_pred1 <- test_preds[[1]]
test_pred1 <- c(rep(NA, n_timesteps), test_pred1, rep(NA, nrow(vic_elec_test) - n_timesteps - n_forecast))

test_pred2 <- test_preds[[21]]
test_pred2 <- c(rep(NA, n_timesteps + 20), test_pred2, rep(NA, nrow(vic_elec_test) - 20 - n_timesteps - n_forecast))

test_pred3 <- test_preds[[41]]
test_pred3 <- c(rep(NA, n_timesteps + 40), test_pred3, rep(NA, nrow(vic_elec_test) - 40 - n_timesteps - n_forecast))

test_pred4 <- test_preds[[61]]
test_pred4 <- c(rep(NA, n_timesteps + 60), test_pred4, rep(NA, nrow(vic_elec_test) - 60 - n_timesteps - n_forecast))

test_pred5 <- test_preds[[81]]
test_pred5 <- c(rep(NA, n_timesteps + 80), test_pred5, rep(NA, nrow(vic_elec_test) - 80 - n_timesteps - n_forecast))

preds_ts <- vic_elec_test %>%
  select(Demand, Date) %>%
  add_column(
    ex_1 = test_pred1 * train_sd + train_mean,
    ex_2 = test_pred2 * train_sd + train_mean,
    ex_3 = test_pred3 * train_sd + train_mean,
    ex_4 = test_pred4 * train_sd + train_mean,
    ex_5 = test_pred5 * train_sd + train_mean) %>%
  pivot_longer(-Date) %>%
  update_tsibble(key = name)

preds_ts %>%
  autoplot() +
  scale_color_hue(h = c(80, 300), l = 70) +
  theme_minimal()
A sample of two-weeks-ahead predictions for the test set, 2014.
We can’t directly compare performance here to that of previous models in our series, as we’ve pragmatically redefined the task. The main goal, however, has been to introduce the concept of attention. Specifically, how to manually implement the technique – something that, once you’ve understood the concept, you may never have to do in practice. Instead, you would likely make use of existing tools that come with torch (multi-head attention and transformer modules), tools we may introduce in a future “season” of this series.
Thanks for reading!
Photo by David Clode on Unsplash
Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. “Neural Machine Translation by Jointly Learning to Align and Translate.” CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.
Dong, Yihe, Jean-Baptiste Cordonnier, and Andreas Loukas. 2021. “Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth.” arXiv E-Prints, March, arXiv:2103.03404. http://arxiv.org/abs/2103.03404.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” arXiv E-Prints, June, arXiv:1706.03762. http://arxiv.org/abs/1706.03762.
Vinyals, Oriol, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2014. “Grammar as a Foreign Language.” CoRR abs/1412.7449. http://arxiv.org/abs/1412.7449.
Xu, Kelvin, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.” CoRR abs/1502.03044. http://arxiv.org/abs/1502.03044.