I have to handle a huge number of samples, where each sample contains a unique time series. The goal is to feed this data into a TensorFlow LSTM model and predict some features. I have used the tf timeseries_dataset_from_array generator function to feed the data to the TF model, but I haven’t figured out how to create a generator function when I have multiple samples. If I use the usual pipeline, timeseries_dataset_from_array creates windows that overlap the time series of two individual samples.
Does anyone have an idea how to effectively pass a time series of multiple samples to the TF model?
E.g. the Human Activity Recognition dataset is one such dataset, where each person has a separate long time series, and each user’s time series can be further split with the sliding/rolling-window-like timeseries_dataset_from_array function.
Here is a simpler example: I want to use timeseries_dataset_from_array to generate windows for the TF model, where sample 1 starts where column 0 has value 0 and sample 2 starts where column 0 has value 100. Windows should never mix rows from the two samples.
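A minimal sketch of one workaround, assuming TF ≥ 2.6 (for tf.keras.utils.timeseries_dataset_from_array) and made-up arrays and window length: window each sample separately, then concatenate the resulting datasets so no window ever spans two samples.

import numpy as np
import tensorflow as tf

# Hypothetical data: one array per sample, each a separate time series.
samples = [np.arange(0, 100, dtype="float32").reshape(-1, 1),
           np.arange(100, 200, dtype="float32").reshape(-1, 1)]

SEQ_LEN = 10

def window_one_sample(series):
    # Windows are built inside a single sample only, so they can
    # never overlap the boundary between two samples.
    return tf.keras.utils.timeseries_dataset_from_array(
        data=series[:-SEQ_LEN],       # inputs
        targets=series[SEQ_LEN:, 0],  # value SEQ_LEN steps ahead
        sequence_length=SEQ_LEN,
        batch_size=None,              # yield individual windows
    )

# Concatenate the per-sample datasets, then shuffle and batch once.
ds = window_one_sample(samples[0])
for s in samples[1:]:
    ds = ds.concatenate(window_one_sample(s))
ds = ds.shuffle(1000).batch(32)

An LSTM model can then consume ds directly via model.fit(ds).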
So I was using my GPU for a while and I’m not sure when it started but now when I check I realize my GPU is not being picked up. I’m definitely sure I was on GPU before and I went through the pain of installing it so I know most of my setup should be right…
standard tensorflow uninstalled, only tensorflow-gpu installed
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin is on my path for deps like cusolver64_10.dll
C:\tools\cuda\bin is also on my path for cudnn64_7.dll
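As a first diagnostic (these calls exist in tensorflow-gpu 1.x; the device string shown is illustrative), check what TensorFlow actually sees and watch the console for missing-DLL messages:

import tensorflow as tf

print(tf.test.is_gpu_available())  # True when TF can actually use a GPU
print(tf.test.gpu_device_name())   # e.g. '/device:GPU:0' when detected

# If this prints False / an empty string, run from a console and look for
# "Could not load dynamic library ..." log lines naming the exact DLL
# (e.g. a cudart/cudnn version mismatch with the installed CUDA 10.1).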
GPU Operator 1.9 includes support for NVIDIA DGX A100 systems with DGX OS and a streamlined installation process.
NVIDIA GPU Operator allows organizations to easily scale NVIDIA GPUs on Kubernetes.
By simplifying the deployment and management of GPUs with Kubernetes, the GPU Operator enables infrastructure teams to scale GPU applications error-free, within minutes, automatically.
GPU Operator 1.9 is now available and includes several key features, among other updates, that allow users to get started faster and maintain uninterrupted service.
GPU Operator 1.9 includes:
Support for NVIDIA DGX A100 systems with DGX OS
Streamlined installation process
Support for DGX A100 with DGX OS
With 1.9, the GPU Operator automatically deploys the software required for initializing the fabric on NVIDIA NVSwitch systems, including the DGX A100 when used with DGX OS. Once initialized, all GPUs can communicate with one another at full NVLink bandwidth to create an end-to-end scalable computing platform.
The DGX A100 features the world’s most advanced accelerator, enabling enterprises to consolidate training, inference, and analytics into a unified, easy-to-deploy AI infrastructure. And now, with GPU Operator support, organizations can take their applications from training to scale with the world’s most advanced systems.
Streamlined installation process
With previous versions of GPU Operator, organizations using GPU Operator with OpenShift needed to apply additional entitlements from Red Hat in order to successfully use the GPU Operator. As entitlement keys expired, users would need to re-apply them to ensure that their workflow was not interrupted.
GPU Operator 1.9 now supports entitlement-free driver containers for OpenShift. This is done by leveraging Driver-Toolkit images provided by Red Hat with the necessary kernel packages preinstalled for building NVIDIA kernel modules. Users no longer need to ensure that valid certificates with an RHEL subscription are always applied for running the GPU Operator. More importantly, for disconnected clusters, it eliminates dependencies on private package repositories.
Version 1.9 also includes support for preinstalled drivers with the MIG Manager, support for preinstalled MOFED to use GPUDirect RDMA, automatic detection of the container runtime, and automatic disabling of Nouveau – all designed to make it easier for users to get started with and keep running GPU-accelerated Kubernetes.
Additionally, GPU Operator 1.9 automatically detects the container runtime installed on the worker node. There is no need to specify the container runtime at install time.
The GPU Operator requires Nouveau to be disabled. With previous GPU Operator versions, the Kubernetes admin had to disable Nouveau manually. GPU Operator 1.9 automatically detects whether Nouveau is enabled and disables it for you.
AI can design chips no human could, said Bill Dally in a virtual keynote today at the Design Automation Conference (DAC), one of the world’s largest gatherings of semiconductor engineers. The chief scientist of NVIDIA discussed research in accelerated computing and machine learning that’s making chips smaller, faster and better. “Our work shows you can…
There’s an old axiom that the best businesses thrive during periods of uncertainty. No doubt, that will be tested to the limits as 2022 portends upheaval on a grand scale. Pandemic-related supply chain disruptions are affecting everything from production of cars and electronics to toys and toilet paper. At the same time, global food prices…
I’m at my wits’ end: every single classification tutorial I find for TensorFlow 1.x is a goddamn MNIST tutorial. They always skip the basics, and I need the basics.
I have a test set with 6 numerical features and a label that is binary 1 or 0.
With TensorFlow 2.0 I can easily just use something like this:
model = tf.keras.Sequential([
    tf.keras.layers.Dense(21, activation="tanh"),
    tf.keras.layers.Dense(10, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="sigmoid")
])
model.compile(
    loss=tf.keras.losses.binary_crossentropy,
    optimizer=tf.keras.optimizers.Adam(0.001),
    metrics=['accuracy']
)
history = model.fit(X_train, y_train, epochs=25)
For the life of me I don’t know how to go about doing this in Tensorflow 1.x.
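For what it’s worth, here is a minimal sketch of the same three-layer binary classifier written against the TF 1.x graph API, assuming X_train is an (n, 6) float array and y_train is shaped (n, 1); this is one plausible translation, not the only one:

import tensorflow as tf  # TF 1.x

X = tf.placeholder(tf.float32, [None, 6])   # 6 numerical features
y = tf.placeholder(tf.float32, [None, 1])   # binary 0/1 label

h1 = tf.layers.dense(X, 21, activation=tf.nn.tanh)
h2 = tf.layers.dense(h1, 10, activation=tf.nn.sigmoid)
logits = tf.layers.dense(h2, 1)  # raw logits; the sigmoid lives in the loss

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
predictions = tf.cast(tf.nn.sigmoid(logits) > 0.5, tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predictions, y), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(25):  # full-batch for simplicity; mini-batch in practice
        _, l, acc = sess.run([train_op, loss, accuracy],
                             feed_dict={X: X_train, y: y_train})
        print(epoch, l, acc)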
At the forefront of AI innovation, NVIDIA continues to push the boundaries of technology in machine learning, self-driving cars, robotics, graphics, and more. NVIDIA researchers will present 20 papers at the thirty-fifth annual conference on Neural Information Processing Systems (NeurIPS) from December 6 to December 14, 2021.
Here are some of the featured papers:
Alias-Free Generative Adversarial Networks (StyleGAN3) Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, Timo Aila | Paper | GitHub | Blog
StyleGAN3, a model developed by NVIDIA Research that will be presented on Tuesday, December 7 from 12:40 AM – 12:55 AM PST, advances the state of the art in generative adversarial networks used to synthesize realistic images. The breakthrough brings principles from signal processing and image processing to GANs to avoid aliasing: a kind of image corruption often visible when images are rotated, scaled, or translated.
Video 1. Results from the StyleGAN3 model
EditGAN: High-Precision Semantic Image Editing Huan Ling*, Karsten Kreis*, Daiqing Li, Seung Wook Kim, Antonio Torralba, Sanja Fidler | Paper | GitHub
EditGAN is a novel method for high-quality, high-precision semantic image editing that allows users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car. EditGAN builds on a GAN framework that jointly models images and their semantic segmentations and requires only a handful of labeled examples, making it a scalable editing tool. The poster session will be held on Thursday, December 9 from 8:30 AM – 10:00 AM PST.
Video 2. The video showcases EditGAN in an interactive demo tool.
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo | Paper | GitHub
SegFormer is a simple, efficient, yet powerful semantic segmentation framework that unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) it comprises a novel hierarchically structured Transformer encoder that outputs multiscale features and needs no positional encoding, thereby avoiding the interpolation of positional codes that degrades performance when the testing resolution differs from the training resolution; 2) it avoids complex decoders. The poster will be presented on Tuesday, December 7 from 8:30 AM – 10:00 AM PST.
Video 3. The video shows the excellent zero-shot robustness of SegFormer on the Cityscapes-C dataset.
DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler | Paper
DIB-R++ is a deferred, image-based renderer that supports photorealistic lighting and material effects by combining rasterization and ray tracing, taking advantage of their respective strengths: speed and realism. The poster session is on Thursday, December 9 from 4:30 PM – 6:00 PM PST.
Image 1. DIB-R++ is a hybrid renderer that combines rasterization and ray tracing. Given a 3D mesh M, we employ (a) a rasterization-based renderer to obtain diffuse albedo, surface normals, and mask maps. In the shading pass (b), we then use these buffers to compute the incident radiance, either by sampling or by representing lighting and the specular BRDF using a spherical Gaussian basis. Depending on the representation used in (c), we can render with advanced lighting and material effects (d).
In addition to the papers at NeurIPS 2021, researchers and developers can accelerate 3D deep learning research with new Kaolin features:
Kaolin is launching new features to accelerate 3D deep learning research. Updates to the NVIDIA Omniverse Kaolin app will bring robust visualization of massive point clouds. Updates to the Kaolin library will include support for tetrahedral meshes, rays management functionality, and a strong speedup to DIB-R. To learn more about Kaolin, watch the recent GTC session.
Image 2. Results from NVIDIA Kaolin
To view the complete list of NVIDIA Research accepted papers, workshops and tutorials, and demos, and to explore job opportunities at NVIDIA, visit the NVIDIA at NeurIPS 2021 website.
I am trying to increase the training speed of my model by using mixed precision and the NVIDIA GPU Tensor Cores. For this I just use Keras mixed precision, but the speed increase is only about 10%. Then I found the NVIDIA NGC container, which is optimized for their GPUs, and with mixed precision in it I can increase the training speed by about 60%, although with float32 the speed is lower than native. I would like to get at least the NGC container’s speedup natively. What do I need to do?
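For what it’s worth, a minimal sketch of two settings that likely account for part of the NGC containers’ speedup, assuming TF ≥ 2.4 (the layer sizes and model are illustrative): the mixed_float16 policy plus XLA JIT compilation, with layer and batch dimensions kept at multiples of 8 so Tensor Cores engage.

import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy('mixed_float16')  # fp16 compute, fp32 variables
tf.config.optimizer.set_jit(True)  # enable XLA JIT, which NGC images also use

model = tf.keras.Sequential([
    # Dimensions that are multiples of 8 let Tensor Cores engage.
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dense(4096, activation='relu'),
    # Keep the final layer in float32 for numerical stability.
    tf.keras.layers.Dense(10, activation='softmax', dtype='float32'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# Train with a batch size that is also a multiple of 8, e.g.:
# model.fit(x, y, batch_size=256, epochs=5)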