
Good resource for data sets?

Hey, all.

I’m quite new to TensorFlow and machine learning in general, and I would like to know if there are any wonderful resources out there holding large data sets to train on.

My end goal is to train an algorithm to identify dead pixels in images, so if there are any resources that specifically contain image sets or, if I’m incredibly lucky, contain image sets with dead pixels, those would be ideal.
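One practical note: since clean image sets are much easier to find than sets with labeled dead pixels, a common trick is to synthesize the defect yourself from any clean dataset. A toy sketch (function name and parameters are illustrative, not from any particular library):

    import numpy as np

    def add_dead_pixels(image, n_dead=5, rng=np.random.default_rng()):
        """Zero out a few random pixels to simulate dead-pixel defects.

        Returns the corrupted image plus a mask usable as the training label.
        """
        out = image.copy()
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        ys = rng.integers(0, image.shape[0], size=n_dead)
        xs = rng.integers(0, image.shape[1], size=n_dead)
        out[ys, xs] = 0   # dead pixels read as black
        mask[ys, xs] = 1
        return out, mask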

Thanks in advance.

submitted by /u/Mongdoman


Custom Dynamic Loss function: No gradients provided for any variable:

Hey all!

I am using an RGB dataset for my x train and the loss is calculated in a dynamic loss function that gets the distances of pairs and compares them against the ideal distance dist_train. Here is the model:

    import tensorflow as tf
    from tensorflow.keras import Model
    from tensorflow.keras.layers import Dense, Flatten

    class MyModel(Model):
        def __init__(self):
            super(MyModel, self).__init__()
            self.d1 = Dense(3, activation='relu')
            self.flatten = Flatten()
            self.d2 = Dense(3, activation='relu')
            self.d3 = Dense(2)

        def call(self, x):
            x = self.d1(x)
            x = self.flatten(x)
            x = self.d2(x)
            return self.d3(x)

    # Create an instance of the model
    model = MyModel()
    optimizer = tf.keras.optimizers.Adam()

    train_loss = tf.keras.metrics.Mean(name='train_loss')
    test_loss = tf.keras.metrics.Mean(name='test_loss')

    @tf.function
    def train_step(rgb):
        with tf.GradientTape() as tape:
            predictions = model(rgb, training=True)
            loss = tf_function(predictions)  # loss function defined below
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        train_loss(loss)

Here is the loss function and the tf.function wrapping it:

    import numpy as np
    import scipy as sp
    import scipy.spatial  # makes sp.spatial available

    def mahal_loss(output):
        mahal = sp.spatial.distance.pdist(output, metric='mahalanobis')
        mahal = sp.spatial.distance.squareform(mahal, force='no', checks=True)
        new_distance = []
        mahal = np.ma.masked_array(mahal, mask=mahal == 0)
        for i in range(len(mahal)):
            pw_dist = mahal[i, indices_train[i]]
            new_distance.append(pw_dist)
        mahal_loss = np.mean((dist_train - new_distance)**2)
        return mahal_loss

    @tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])
    def tf_function(pred):
        y = tf.numpy_function(mahal_loss, [pred], tf.float32)
        return y

Running the model:

    EPOCHS = 5

    for epoch in range(EPOCHS):
        train_loss.reset_states()
        test_loss.reset_states()

        for i in x_train:
            train_step(i)

        print(
            f'Epoch {epoch + 1}, '
            f'Loss: {train_loss.result()}, '
            f'Test Loss: {test_loss.result()}, '
        )

I believe the problem lies in the dynamic loss function, as I need to calculate the distance between certain pairs to get the results I expect. This means that inside the loss function I have to calculate the Mahalanobis distance of each pair to get the ones I will compare against the correct distances. The error I get is the following:

    in user code:

        <ipython-input-23-0e975da5cbc2>:15 train_step  *
            optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        C:\Anaconda3\envs\colour_env\lib\site-packages\keras\optimizer_v2\optimizer_v2.py:622 apply_gradients  **
            grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
        C:\Anaconda3\envs\colour_env\lib\site-packages\keras\optimizer_v2\utils.py:72 filter_empty_gradients
            raise ValueError("No gradients provided for any variable: %s." %

        ValueError: No gradients provided for any variable: ['my_model/dense/kernel:0', 'my_model/dense/bias:0', 'my_model/dense_1/kernel:0', 'my_model/dense_1/bias:0', 'my_model/dense_2/kernel:0', 'my_model/dense_2/bias:0'].
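For what it's worth, the traceback points at the core issue: tf.numpy_function is a black box to autodiff, so no gradient flows from the NumPy/SciPy loss back to the model weights. A minimal sketch of the same idea in pure, differentiable TF ops, using Euclidean distance as a stand-in for Mahalanobis and assuming indices_train (shape (n, k)) and dist_train are tensors:

    import tensorflow as tf

    def pairwise_loss(pred, indices_train, dist_train):
        # All ops are TF ops, so GradientTape can differentiate through them.
        sq = tf.reduce_sum(tf.square(pred), axis=1)                               # (n,)
        d2 = sq[:, None] - 2.0 * tf.matmul(pred, pred, transpose_b=True) + sq[None, :]
        dists = tf.sqrt(tf.maximum(d2, 1e-12))                                    # (n, n)
        # Pick the partner distances of interest for each row.
        selected = tf.gather(dists, indices_train, axis=1, batch_dims=1)          # (n, k)
        return tf.reduce_mean(tf.square(dist_train - selected))

Inside train_step, loss = pairwise_loss(predictions, indices_train, dist_train) would then produce usable gradients.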

submitted by /u/Acusee


ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found following input: RaggedTensor

I have the following inputs to train a CNN on:

x = np.array(Images)

y = [ [[0]], [[76., 5., 9., 1., 0., 0.], [54., 4., 10., 51.]] ]

Since the 'y' input is an n-dimensional array with non-uniform sizes, I used a RaggedTensor to represent it and fed it to the network.

y = tf.ragged.constant(y)

cnn_model.fit(x, y, epochs = 10, batch_size=32, validation_split=0.30)

I am receiving the following error:

ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found following types in the input: [<class 'tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor'>]

If I convert 'y' to a numpy.ndarray and fit it to the model, I get the following error:

cnn_model.fit(x, y.numpy(), epochs = 10, batch_size=32, validation_split=0.30)

ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).

I want to train the model on this n-dimensional 'y' input; kindly suggest which datatype representation would be suitable for this.
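For reference, validation_split only knows how to slice plain tensors and arrays, so one workaround is to carve out the validation set manually and pass validation_data instead. A minimal sketch, assuming x and the ragged y from above (a RaggedTensor does support basic slicing on its first dimension):

    import tensorflow as tf

    y = tf.ragged.constant(y)

    # validation_split cannot slice a RaggedTensor, so split manually.
    n = x.shape[0]
    split = int(n * 0.7)

    cnn_model.fit(
        x[:split], y[:split],
        epochs=10, batch_size=32,
        validation_data=(x[split:], y[split:]),
    )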

submitted by /u/sarvna


Compile+train Keras model without REDUCE_PROD operation?

I’m trying to build a model and convert it to use in TensorFlow Lite for Microcontrollers. I’m having an issue where every Keras model I generate contains a REDUCE_PROD operator (even a completely basic model consisting of a single Dense(1) layer). However, the TF Lite for Microcontrollers runtime doesn’t support the REDUCE_PROD operator and flags an error upon attempting to load the model.

Is there a way I can exclude this operator when generating a model? Am I missing something?
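For reference, recent TF releases ship an analyzer that lists every builtin operator in a converted flatbuffer, which makes it easy to confirm whether REDUCE_PROD is actually present. A sketch, assuming tf.lite.experimental.Analyzer is available in your TF version:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1, input_shape=(4,)),  # fully static input shape
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    tflite_model = converter.convert()

    # Prints each operator in the model so REDUCE_PROD can be spotted.
    tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)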

Thanks!

submitted by /u/maha9000


load_model doesn’t work when using RSquare from tensorflow_addons as metric

I have a model that uses R2 as a metric. Since AFAIK there isn’t one natively implemented in TF, I use the one from the tensorflow-addons package. However, when I try to load this model after saving, it fails with the error:

type of argument "y_shape" must be a tuple; got list instead

Here is a minimal working example that produces this error:

    from tensorflow.keras.models import load_model, Sequential
    from tensorflow.keras.layers import Dense, Input
    import tensorflow as tf
    import tensorflow_addons as tfa

    model = Sequential()
    model.add(Input(5))
    model.add(Dense(5))
    model.add(Dense(5))
    model.compile(metrics=[tfa.metrics.RSquare(y_shape=(5,))])
    model.save('test_model.h5')
    model = load_model('test_model.h5')

RSquare works fine during training but I need to be able to load the model later (and load models I have already saved). I have tried using the custom_objects argument to load_model but this makes no difference. Any suggestions?
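One workaround that may be worth trying is to skip metric deserialization entirely by loading with compile=False and re-attaching the metric by hand; a sketch, not a confirmed fix:

    from tensorflow.keras.models import load_model
    import tensorflow_addons as tfa

    # compile=False skips restoring the saved optimizer/metric config,
    # which is where the tuple-vs-list y_shape mismatch is raised.
    model = load_model('test_model.h5', compile=False)
    model.compile(metrics=[tfa.metrics.RSquare(y_shape=(5,))])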

Thanks in advance!

submitted by /u/DustinBraddock


Individual Losses Per Node?

I am attempting to train a 3-layer neural network that predicts maximum and minimum survival duration. The final layer has two outputs (corresponding to predictions of maximum and minimum survival) and I have written a custom loss function. However, I have realised that I need to apply the loss differently depending on which node I am evaluating.

What would be the best way of approaching this? Would I be better off training two separate models to predict maximum and minimum survival?
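For reference, one common pattern is to give the network two named outputs and assign each its own loss in compile, rather than training two models. A minimal sketch; the input width and the two loss definitions are hypothetical placeholders:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def max_loss(y_true, y_pred):
        # hypothetical: penalize predictions below the true maximum more heavily
        err = y_true - y_pred
        return tf.reduce_mean(tf.where(err > 0, 2.0 * tf.square(err), tf.square(err)))

    def min_loss(y_true, y_pred):
        # hypothetical: penalize predictions above the true minimum more heavily
        err = y_pred - y_true
        return tf.reduce_mean(tf.where(err > 0, 2.0 * tf.square(err), tf.square(err)))

    inputs = tf.keras.Input(shape=(10,))   # 10 input features, illustrative
    x = layers.Dense(32, activation='relu')(inputs)
    x = layers.Dense(32, activation='relu')(x)
    max_out = layers.Dense(1, name='max_survival')(x)
    min_out = layers.Dense(1, name='min_survival')(x)

    model = Model(inputs, [max_out, min_out])
    model.compile(optimizer='adam',
                  loss={'max_survival': max_loss, 'min_survival': min_loss})
    # fit with y as a dict:
    # model.fit(x_train, {'max_survival': y_max, 'min_survival': y_min})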

Thank you

submitted by /u/Disastrous-Buy-6645


New NVIDIA Kaolin Library Release Streamlines 3D Deep Learning Research Workflows

3D deep learning researchers can build on the latest algorithms to simplify and accelerate workflows using the Kaolin PyTorch Library, available now.

The NVIDIA Kaolin library, first released in November 2019, was originally written in the NVIDIA Toronto AI Lab as an internship project. After writing repetitive boilerplate code and copying algorithmic components across several projects, the researchers began developing a PyTorch library that brings common 3D deep learning (3D DL) functionality into one place. Since its first release, the Kaolin library has grown into a mature codebase with robust, optimized utilities and algorithms for 3D deep learning.

The Kaolin library gives 3D deep learning researchers utilities to accelerate their workflows, along with reusable research components to provide a basis for future innovations. For example, Kaolin simplifies handling and processing of the complex 3D datasets used for training. It also includes writers for 3D checkpoints that can be visualized in the companion Omniverse Kaolin App with the latest NVIDIA RTX technology. And it provides building blocks like conversions between 3D representations, useful 3D loss functions for training, and differentiable rendering. The Kaolin team is dedicated to delivering continuous improvements and shipping new algorithmic building blocks to power 3D DL innovation.

The latest Kaolin library release includes a new representation: structured point clouds (SPC), a sparse octree-based acceleration data structure with highly efficient convolution and ray tracing capabilities. SPCs are useful for scaling up and accelerating neural implicit representations, which are popular in 3D DL research today. SPC also powers the latest version of NeuralLOD training, delivering up to a 30x reduction in memory and a 3x speedup in training time.

Real-time volume rendering with Kaolin's SPC. Colors represent the number of "hits" per ray, efficiently computed through the sparse SPC structure. Visualization by Charles Loop; model courtesy of Qianyi Zhou, Stanford.

It also includes a new lightweight Tensorboard-style web dashboard called Dash3D. Users can leverage this tool to inspect checkpoints of 3D predictions produced by DL models during training, even on remote hardware configurations.

Lightweight visualization of 3D model predictions that evolve during training in the new Kaolin Dash3D.

The release also improves support for 3D datasets, adding new datasets (SHREC, ModelNet) and additional formats (.off), and delivering speedups for the USD 3D file format that result in a 10x improvement in load time during training over the popular OBJ format. In addition, new tutorials for differentiable rendering and 3D checkpoints are included.

See the official change log for additional details on the Kaolin library release. Researchers can download the Kaolin library on GitHub today.

The library’s companion  Omniverse Kaolin App is available through NVIDIA Omniverse. Download the NVIDIA Omniverse open beta today to get started. For additional support, join the Omniverse Discord server or the Omniverse forums to chat with the community.


Announcing Latest Nsight Graphics 2021.4 – Download Now

Nsight Graphics 2021.4 is an all-in-one graphics debugger and profiler to help game developers get the most out of NVIDIA hardware. From analyzing API setup, to solving nasty bugs, to providing deep insight into how applications use the GPU for better performance, Nsight Graphics is the ultimate tool.

The latest release is available to download now >>

Key features include:

  • GPU Trace with One-shot capture feature
  • GPU Trace now supports applications that utilize Vulkan-CUDA interop
  • Analysis view for GPU Trace
  • Resizable BAR capabilities

GPU Trace

GPU Trace introduces a new capture type called One-shot. One-shot captures support profiling applications that do not have a specific frame beginning and end, which makes it easier to profile and optimize tools that rely on compute workloads, such as generating normal maps or optimizing geometry/LODs. One-shot captures are supported for D3D12 and Vulkan applications using compute or ray tracing features. Ray tracing with DirectML and WinML is also supported.

Figure 1. GPU Trace

Trace Analysis helps identify work regimes with the most potential for performance improvement. Select the “Analyze” button after taking a GPU Trace, and the advanced analysis engine will provide a new report with explanations and suggestions on how to improve GPU utilization. 

Figure 2. Trace Analysis

In March 2021, NVIDIA introduced new Resizable BAR capabilities with Game Ready GeForce drivers. On a compatible motherboard and GPU, Resizable BAR lets the CPU access all of the GPU memory at once. GPU Trace also reveals whether BAR memory transfers are happening efficiently. View more information >>

Figure 3. Resizable BAR

Using VK_NV_cuda_kernel_launch, it is now possible to launch CUDA kernels from a Vulkan graphics application without the overhead of a context switch. GPU Trace now supports this capability.

Figure 4. CUDA kernels

C++ Captures

When working with C++ Captures, it can be useful to open an integrated development environment with a project that allows code browsing and modification. In this release, a new button in the C++ Capture document opens a Visual Studio environment with the associated project, taking advantage of Visual Studio's native CMake support.

Figure 5. Added C++ Capture button

Read the Nsight Graphics 2021.4 release notes >>
Check out the GDC session on DevTools for Harnessing Ray Tracing in Games >>

Please continue to use the integrated feedback button that lets you send comments, feature requests, and bugs directly. You can send feedback anonymously, or provide an email, for follow up. 

Just click on the little speech bubble at the top right of the window. 

Figure 6. Feedback form



New Machine Learning Model Taps into the Problem-Solving Potential of Satellite Data

New research creates a low-cost and easy-to-use machine learning model to analyze streams of data from earth-imaging satellites.

New research from a group of scientists at UC Berkeley is giving data-poor regions across the globe the power to analyze data-rich satellite imagery. The study, published in Nature Communications, develops a machine learning model that resource-constrained organizations and researchers can use to draw out regional socioeconomic and environmental information. Being able to evaluate local resources remotely could help guide effective interventions and benefit communities globally.

“We saw that many researchers—ourselves included—were passing up on this valuable data source because of the complexities and upfront costs associated with building computer vision pipelines to translate raw pixel values into useful information. We thought that there might be a way to make this information more accessible while maintaining the predictive skill offered by state-of-the-art approaches. So, we set about constructing a way to do this,” said coauthor Ian Bolliger, who worked on the study while pursuing a PhD in Energy and Resources at UC Berkeley.

At any given time, hundreds of image-collecting satellites circle the earth, sending massive amounts of information to databases daily. This data holds valuable insight into global challenges, including health, economic, and environmental conditions—even offering a look into data-poor and remote regions.

Combining satellite imagery with machine learning (SIML) has become an effective way to turn these raw data streams into usable information. Researchers have used SIML in a broad range of studies, from calculating poverty rates to estimating water availability and educational access. However, most SIML projects capture information on a narrow topic, producing data tailored to a specific study and location.

The researchers sought to create an accessible system capable of analyzing and organizing satellite images from multiple sources while lowering compute requirements. The tool they created, called Multi-Task Observation using Satellite Imagery & Kitchen Sinks (MOSAIKS), does this with a relatively simple and efficient unsupervised machine learning algorithm.

“We designed MOSAIKS keeping in mind that a single satellite image simultaneously holds information about many different prediction variables (like forest cover or population density). We chose to use an unsupervised embedding of the imagery to create a statistical summary of each image. The unsupervised nature of the featurization step makes the learning and prediction steps of the pipeline very fast, while the specifics of how those features are computed from imagery are well suited to satellite image data,” said coauthor Esther Rolf, a Ph.D. student in computer science at Berkeley.
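To make the featurization idea concrete, here is a toy sketch of random convolutional features ("kitchen sinks") followed by a cheap linear model; the patch size, filter count, and pooling choices are illustrative, not the exact MOSAIKS pipeline:

    import numpy as np

    def mosaiks_style_features(images, n_filters=256, patch=3, seed=0):
        # images: (n, h, w, c) float array. Random patches of weights act as
        # fixed, untrained convolution filters ("kitchen sinks").
        rng = np.random.default_rng(seed)
        n, h, w, c = images.shape
        filters = rng.normal(size=(n_filters, patch * patch * c))

        # im2col: every patch of every image, flattened to a row vector.
        cols = np.lib.stride_tricks.sliding_window_view(
            images, (patch, patch), axis=(1, 2))
        cols = cols.transpose(0, 1, 2, 4, 5, 3).reshape(n, -1, patch * patch * c)

        # ReLU response of each filter, average-pooled over all positions.
        return np.maximum(cols @ filters.T, 0.0).mean(axis=1)   # (n, n_filters)

    # Features are computed once per image; any label of interest can then
    # be fit with a cheap linear model, e.g.:
    #   from sklearn.linear_model import Ridge
    #   Ridge().fit(mosaiks_style_features(train_imgs), train_labels)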

To develop the model, the researchers used CUDA-accelerated NVIDIA V100 Tensor Core GPUs on AWS. The publicly available CodeOcean capsule, which provides code, compute, and storage for anyone to run interactively, also uses NVIDIA GPUs.

Figure 1. Training data (left) and predictions using a single featurization of daytime imagery (right). Insets (far right) are marked by black squares in the global maps. The training sample is a uniform random sampling of 1,000,000 land grid cells, 498,063 of which had imagery available that could be matched to task labels.

“We want policymakers in resource-constrained settings and without specialized computational expertise to be able to painlessly gather satellite imagery, build a model of a variable they care about (say, the presence of adequate sanitation systems), and test whether this model is actually performing well. If they can do this, it will dramatically improve the usefulness of this information in implementing policy objectives,” Bolliger said.

Currently, the team is developing and testing a public-facing web interface, making it easy to query MOSAIKS features at user-specified locations. Interested researchers are encouraged to sign up for the beta version.


Read the full article in Nature Communications >>
Read more >>   


Discovering Anomalous Data with Self-Supervised Learning

Anomaly detection (sometimes called outlier detection or out-of-distribution detection) is one of the most common machine learning applications across many domains, from defect detection in manufacturing to fraudulent transaction detection in finance. It is most often used when it is easy to collect a large amount of known-normal examples but where anomalous data is rare and difficult to find. As such, one-class classification, such as one-class support vector machine (OC-SVM) or support vector data description (SVDD), is particularly relevant to anomaly detection because it assumes the training data are all normal examples, and aims to identify whether an example belongs to the same distribution as the training data. Unfortunately, these classical algorithms do not benefit from the representation learning that makes machine learning so powerful. On the other hand, substantial progress has been made in learning visual representations from unlabeled data via self-supervised learning, including rotation prediction and contrastive learning. As such, combining one-class classifiers with these recent successes in deep representation learning is an under-explored opportunity for the detection of anomalous data.

In “Learning and Evaluating Representations for Deep One-class Classification”, presented at ICLR 2021, we outline a 2-stage framework that makes use of recent progress on self-supervised representation learning and classic one-class algorithms. The algorithm is simple to train and results in state-of-the-art performance on various benchmarks, including CIFAR, f-MNIST, Cat vs Dog and CelebA. We then follow up on this in “CutPaste: Self-Supervised Learning for Anomaly Detection and Localization”, presented at CVPR 2021, in which we propose a new representation learning algorithm under the same framework for a realistic industrial defect detection problem. The framework achieves a new state-of-the-art on the MVTec benchmark.

A Two-Stage Framework for Deep One-Class Classification
While end-to-end learning has demonstrated success in many machine learning problems, including deep learning algorithm designs, such an approach for deep one-class classifiers often suffers from degeneration, in which the model outputs the same results regardless of the input.

To combat this, we apply a 2-stage framework. In the first stage, the model learns deep representations with self-supervision. In the second stage, we adopt a one-class classification algorithm, such as OC-SVM or a kernel density estimator, using the learned representations from the first stage. This 2-stage algorithm is not only robust to degeneration, but also enables one to build more accurate one-class classifiers. Furthermore, the framework is not limited to specific representation learning and one-class classification algorithms; that is, one can easily plug in different algorithms, which is useful as more advanced approaches are developed.

A deep neural network is trained to generate the representations of input images via self-supervision. We then train one-class classifiers on the learned representations.
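As an illustration of the recipe, here is a minimal sketch using a generic frozen encoder for stage one and scikit-learn's OC-SVM for stage two; the ImageNet encoder and the normal_images/test_images arrays are stand-ins for the paper's self-supervised setup, not its exact implementation:

    import tensorflow as tf
    from sklearn.svm import OneClassSVM

    # Stage 1: a frozen ImageNet encoder stands in for the self-supervised
    # representation learner (rotation prediction or contrastive with DA).
    encoder = tf.keras.applications.ResNet50(include_top=False, pooling='avg')

    def embed(images):
        # images: float32 batch of shape (n, 224, 224, 3)
        x = tf.keras.applications.resnet50.preprocess_input(images)
        return encoder(x, training=False).numpy()

    # Stage 2: fit a one-class classifier on embeddings of normal data only.
    ocsvm = OneClassSVM(nu=0.1, kernel='rbf').fit(embed(normal_images))

    # Lower score_samples values indicate likely anomalies.
    anomaly_scores = -ocsvm.score_samples(embed(test_images))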

Semantic Anomaly Detection
We test the efficacy of our 2-stage framework for anomaly detection by experimenting with two representative self-supervised representation learning algorithms, rotation prediction and contrastive learning.

Rotation prediction refers to a model’s ability to predict the rotated angles of an input image. Due to its promising performance in other computer vision applications, the end-to-end trained rotation prediction network has been widely adopted for one-class classification research. The existing approach typically reuses the built-in rotation prediction classifier for learning representations to conduct anomaly detection, which is suboptimal because those built-in classifiers are not trained for one-class classification.
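A toy sketch of how a rotation-prediction training batch can be built from unlabeled images (illustrative, not the exact training code):

    import tensorflow as tf

    def rotation_batch(images):
        """Make a 4-way rotation-prediction batch from unlabeled images.

        Each image is rotated by 0/90/180/270 degrees; the rotation index
        serves as the self-supervised label for a 4-class classifier.
        """
        rotated = [tf.image.rot90(images, k=k) for k in range(4)]
        labels = [tf.fill([tf.shape(images)[0]], k) for k in range(4)]
        return tf.concat(rotated, axis=0), tf.concat(labels, axis=0)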

In contrastive learning, a model learns to pull together representations from transformed versions of the same image, while pushing representations of different images away. During training, as images are drawn from the dataset, each is transformed twice with simple augmentations (e.g., random cropping or color changing). We minimize the distance of the representations from the same image to encourage consistency and maximize the distance between different images. However, usual contrastive learning converges to a solution where all the representations of normal examples are uniformly spread out on a sphere. This is problematic because most of the one-class algorithms determine the outliers by checking the proximity of a tested example to the normal training examples, but when all the normal examples are uniformly distributed in an entire space, outliers will always appear close to some normal examples.

To resolve this, we propose distribution augmentation (DA) for one-class contrastive learning. The idea is that instead of learning representations from the training data only, the model learns from the union of the training data plus augmented training examples, where the augmented examples are considered to be different from the original training data. We employ geometric transformations, such as rotation or horizontal flip, for distribution augmentation. With DA, the training data is no longer uniformly distributed in the representation space because some areas are occupied by the augmented data.

Left: Illustrated examples of perfect uniformity from the standard contrastive learning. Right: The reduced uniformity by the proposed distribution augmentation (DA), where the augmented data occupy the space to avoid the uniform distribution of the inlier examples (blue) throughout the whole sphere.
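A toy sketch of distribution augmentation for one-class contrastive learning, where geometrically transformed copies are treated as distinct examples (the particular transformations shown are illustrative):

    import tensorflow as tf

    def distribution_augment(batch):
        """Union of the original batch and geometrically transformed copies.

        Rotated/flipped copies are treated as *distinct* examples, so the
        contrastive loss pushes them away from the originals and the
        inliers no longer spread uniformly over the sphere.
        """
        rot90 = tf.image.rot90(batch, k=1)
        rot180 = tf.image.rot90(batch, k=2)
        flipped = tf.image.flip_left_right(batch)
        return tf.concat([batch, rot90, rot180, flipped], axis=0)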

We evaluate the performance of one-class classification in terms of the area under the receiver operating characteristic curve (AUC) on commonly used computer vision datasets, including CIFAR-10, CIFAR-100, Fashion MNIST (f-MNIST), and Cat vs Dog. Images from one class are given as inliers and those from the remaining classes are given as outliers. For example, we see how well cat images are detected as anomalies when dog images are inliers.

Method                                    CIFAR-10    CIFAR-100   f-MNIST     Cat vs. Dog
Ruff et al. (2018)                        64.8        –           –           –
Golan and El-Yaniv (2018)                 86.0        78.7        93.5        88.8
Bergman and Hoshen (2020)                 88.2        –           94.1        –
Hendrycks et al. (2019)                   90.1        –           –           –
Huang et al. (2019)                       86.6        78.8        93.9        –
2-stage framework: rotation prediction    91.3±0.3    84.1±0.6    95.8±0.3    86.4±0.6
2-stage framework: contrastive (DA)       92.5±0.6    86.5±0.7    94.8±0.3    89.6±0.5
Performance comparison of one-class classification methods. Values are the mean AUCs and their standard deviation over 5 runs. AUC ranges from 0 to 100, where 100 is perfect detection.

Given that the built-in rotation prediction classifiers typically used by rotation prediction approaches are not trained for one-class classification, it is notable that simply replacing them with a second-stage one-class classifier on the learned representations significantly boosts performance, from 86.0 to 91.3 AUC. More generally, the 2-stage framework achieves state-of-the-art performance on all of the above benchmarks.

With classic OC-SVM, which learns the area boundary of representations of normal examples, the 2-stage framework results in higher performance than existing works as measured by image-level AUC.

Texture Anomaly Detection for Industrial Defect Detection
In many real-world applications of anomaly detection, the anomaly is often defined by localized defects instead of entirely different semantics (i.e., being different in general). For example, the detection of texture anomalies is useful for detecting various kinds of industrial defects.

The examples of semantic anomaly detection and defect detection. In semantic anomaly detection, the inlier and outlier are different in general, (e.g., one is a dog, the other a cat). In defect detection, the semantics for inlier and outlier are the same (e.g., they are both tiles), but the outlier has a local anomaly.

While learning representations with rotation prediction and distribution-augmented contrastive learning have demonstrated state-of-the-art performance on semantic anomaly detection, those algorithms do not perform well on texture anomaly detection. Instead, we explored different representation learning algorithms that better fit the application.

In our second paper, we propose a new self-supervised learning algorithm for texture anomaly detection. The overall anomaly detection follows the 2-stage framework, but the first stage, in which the model learns deep image representations, is specifically trained to predict whether the image is augmented via a simple CutPaste data augmentation. The idea of CutPaste augmentation is simple — a given image is augmented by randomly cutting a local patch and pasting it back to a different location of the same image. Learning to distinguish normal examples from CutPaste-augmented examples encourages representations to be sensitive to local irregularity of an image.

The illustration of learning representations by predicting CutPaste augmentations. Given an example, the CutPaste augmentation crops a local patch, then pastes it to a randomly selected area of the same image. We then train a binary classifier to distinguish the original image and the CutPaste-augmented image.
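A toy NumPy sketch of the augmentation itself (the patch-size fractions are illustrative):

    import numpy as np

    def cutpaste(image, rng=np.random.default_rng(), min_frac=0.05, max_frac=0.15):
        """Toy CutPaste: cut a random patch, paste it elsewhere in the image."""
        h, w = image.shape[:2]
        ph = int(rng.uniform(min_frac, max_frac) * h)
        pw = int(rng.uniform(min_frac, max_frac) * w)
        sy, sx = rng.integers(0, h - ph), rng.integers(0, w - pw)   # source corner
        dy, dx = rng.integers(0, h - ph), rng.integers(0, w - pw)   # paste corner
        out = image.copy()
        out[dy:dy + ph, dx:dx + pw] = image[sy:sy + ph, sx:sx + pw]
        return out

    # A binary classifier is then trained to tell original images (label 0)
    # from CutPaste-augmented ones (label 1); its representations feed stage two.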

We use MVTec, a real-world defect detection dataset with 15 object categories, to evaluate the approach above.

Method                                 Image-level AUC
DOCC (Ruff et al., 2020)               87.9
U-Student (Bergmann et al., 2020)      92.5
Rotation Prediction                    86.3
Contrastive (DA)                       86.5
CutPaste                               95.2
Image-level anomaly detection performance (in AUC) on the MVTec benchmark.

Besides image-level anomaly detection, we use the CutPaste method to locate where the anomaly is, i.e., “patch-level” anomaly detection. We aggregate the patch anomaly scores via upsampling with Gaussian smoothing and visualize them in heatmaps that show where the anomaly is. Interestingly, this provides decently improved localization of anomalies. The below table shows the pixel-level AUC for localization evaluation.

Method                                 Pixel-level AUC
Autoencoder (Bergmann et al., 2019)    86.0
FCDD (Ruff et al., 2020)               92.0
Rotation Prediction                    93.0
Contrastive (DA)                       90.4
CutPaste                               96.0
Pixel-level anomaly localization performance (in AUC) comparison between different algorithms on the MVTec benchmark.

Conclusion
In this work we introduce a novel 2-stage deep one-class classification framework and emphasize the importance of decoupling classifier building from representation learning, so that the classifier can be consistent with the target task of one-class classification. Moreover, this approach permits applications of various self-supervised representation learning methods, attaining state-of-the-art performance on a range of visual one-class classification applications, from semantic anomaly detection to texture defect detection. We are extending our efforts to build more realistic anomaly detection methods for the scenario where training data is truly unlabeled.

Acknowledgements
We gratefully acknowledge the contribution from other co-authors, including Jinsung Yoon, Minho Jin and Tomas Pfister. We release the code in our GitHub repository.