Categories
Misc

Trash Talk: Startup’s AI-Driven Detection System Primed to Take a Bite Out of Global Waste

Despite decades of efforts to reduce the amount that ends up in landfills, only about 9 percent of the 8.3 billion tons of virgin plastic waste created each year gets recycled. London-based computer vision startup Recycleye looks to give those recycling numbers a big boost with its AI-driven system for identifying waste materials. By automating …

The post Trash Talk: Startup’s AI-Driven Detection System Primed to Take a Bite Out of Global Waste appeared first on The Official NVIDIA Blog.

Categories
Misc

Architecture Firm Brings New Structure to Design Workflows With Real-Time Rendering and Virtual Collaboration

When working on future skyscrapers, bridges or other projects, Kohn Pedersen Fox looks beyond traditional processes. The global architecture firm aims to find the most creative and optimal designs using advanced technologies like generative design, deep learning and immersive visualization. And during design reviews, KPF relies on collaborative sessions so its teams, clients and stakeholders …

The post Architecture Firm Brings New Structure to Design Workflows With Real-Time Rendering and Virtual Collaboration appeared first on The Official NVIDIA Blog.

Categories
Misc

Find the Love We Shared in September: NVIDIA Canvas Update Paints With New Styles

NVIDIA Canvas, the AI-powered painting app that lets artists paint by material and turns doodles into beautiful artwork, today released an update introducing custom styles. Users can now apply the look and feel, or “style,” of their own images to their final Canvas painting. Supporting the new Canvas update is the September …

The post Find the Love We Shared in September: NVIDIA Canvas Update Paints With New Styles appeared first on The Official NVIDIA Blog.

Categories
Misc

Second partial derivative of ANN with respect to model input returns NoneType

This post is a follow-up to this one: https://www.reddit.com/r/tensorflow/comments/pk5dqj/custom_loss_function_error_attributeerror/

Basically, I need to compute three derivatives of the ANN I’m training with respect to (wrt) some input variables. I need those derivatives for a custom loss function.

I finally managed to calculate the two first-order partial derivatives. The problem is the second-order derivative: it returns None and I don’t know why. I’ve already tried different approaches to no avail, for example the Jacobian (https://www.tensorflow.org/api_docs/python/tf/GradientTape#jacobian).

import pandas as pd
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import layers, losses
import numpy as np

# Hyperparameters
n_hidden_layers = 2  # Number of hidden layers.
n_units = 128        # Number of neurons of the hidden layers.
n_batch = 64         # Number of observations used per gradient update.
n_epochs = 30

# Sample data
x_train = {'strike': [200, 2925],
           'Time to Maturity': [0.312329, 0.0356164],
           "RF Rate": [0.08, 2.97],
           "Sigma 20 Days Annualized": [0.123251, 0.0837898],
           "Underlying Price": [1494.82, 2840.69]}
call_X_train = pd.DataFrame(x_train, columns=['strike', "Time to Maturity", "RF Rate",
                                              "Sigma 20 Days Annualized", "Underlying Price"])

x_test = {'strike': [200],
          'Time to Maturity': [0.0356164],
          "RF Rate": [2.97],
          "Sigma 20 Days Annualized": [0.0837898],
          "Underlying Price": [2840.69]}
call_X_test = pd.DataFrame(x_test, columns=['strike', "Time to Maturity", "RF Rate",
                                            "Sigma 20 Days Annualized", "Underlying Price"])

y_train = np.array([1285.25, 0.8])
call_y_train = pd.Series(y_train)
y_test = np.array([0.8])
call_y_test = pd.Series(y_test)

# Creates hidden layers
def hl(tensor, n_units):
    # alpha = 1 makes the function LeakyReLU C^inf
    hl_output = layers.Dense(n_units, activation=layers.LeakyReLU(alpha=1))(tensor)
    return hl_output

# Create model using Keras' Functional API
def mlp3_call(n_hidden_layers, n_units):
    # Create input layer
    inputs = keras.Input(shape=(call_X_train.shape[1],))
    x = layers.LeakyReLU(alpha=1)(inputs)
    # Create hidden layers
    for _ in range(n_hidden_layers):
        x = hl(x, n_units)
    # Create output layer
    outputs = layers.Dense(1, activation=keras.activations.softplus)(x)
    # Actually create the model
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model

# Custom loss function
def constrained_mse(y_true, y_pred):
    mse = losses.mse(y_true, y_pred)
    x = tf.convert_to_tensor(call_X_train, np.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        with tf.GradientTape(persistent=True) as tape2:
            tape2.watch(x)
            y = model(x)
        grad_y = tape2.gradient(y, x)
        dy_dstrike = grad_y[0, 0]
        dy_dttm = grad_y[0, 1]
    d2y_dstrike2 = tape.gradient(dy_dstrike, x[:, 0])  # <- this comes back as None
    loss = mse + dy_dstrike + dy_dttm + d2y_dstrike2
    return loss

model = mlp3_call(n_hidden_layers, n_units)
model.compile(loss=constrained_mse, optimizer=keras.optimizers.Adam())
history = model.fit(call_X_train, call_y_train,
                    batch_size=n_batch, epochs=n_epochs,
                    validation_split=0.01, verbose=1)
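For what it’s worth, the usual cause of the None here is the last gradient call: x[:, 0] is a new tensor produced by the slice, and the tape never recorded it as an input to the forward pass, so there is no differentiable path to it. Taking the second gradient with respect to the watched tensor x and slicing afterwards avoids this. A minimal self-contained sketch of that pattern (a toy model stands in for the MLP above):

import tensorflow as tf

# Toy stand-in for the poster's MLP: any smooth Keras model behaves the same.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='softplus', input_shape=(5,)),
    tf.keras.layers.Dense(1, activation='softplus'),
])

x = tf.random.uniform((2, 5))

with tf.GradientTape() as tape:
    tape.watch(x)
    with tf.GradientTape() as tape2:
        tape2.watch(x)
        y = model(x)
    grad_y = tape2.gradient(y, x)   # first-order gradients, shape (2, 5)
    dy_dstrike = grad_y[:, 0]       # dy/dx0 per row, still recorded on the outer tape

# Differentiate w.r.t. the *watched* tensor x, then slice. Asking for the
# gradient w.r.t. the slice x[:, 0] instead would return None, because that
# slice is a brand-new tensor the tape never saw.
d2y_dx = tape.gradient(dy_dstrike, x)
d2y_dstrike2 = d2y_dx[:, 0]
print(d2y_dstrike2)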

submitted by /u/Snoo37084

Categories
Misc

Confused about tf.keras.layers.Flatten

The following example

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(2, activation='relu', input_shape=(2, 2,)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1))

tx = np.random.rand(2, 2)
res = model(tx)
print(res)

gives the error

ValueError: Input 0 of layer dense_1 is incompatible with the layer: expected axis -1 of input shape to have value 4 but received input with shape (2, 2) 

But if I comment out the line with the Flatten layer, everything works fine.

What is wrong with this code, and how do I properly flatten the output?
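A likely explanation, for what it’s worth: input_shape=(2, 2) describes one sample, so the model expects inputs of shape (batch, 2, 2), while tx has shape (2, 2) and gets treated as a batch of two 2-vectors. Flatten then yields 2 features per row where Dense(1) was built to expect 2 × 2 = 4, which is exactly the error; without Flatten the shapes happen to line up, since Dense only acts on the last axis. Adding the batch axis fixes it (a minimal sketch):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(2, activation='relu', input_shape=(2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1))

tx = np.random.rand(1, 2, 2)  # one sample of shape (2, 2), plus the batch axis
res = model(tx)               # Dense -> (1, 2, 2), Flatten -> (1, 4), Dense -> (1, 1)
print(res)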

submitted by /u/warpod

Categories
Misc

What is the best way to recalculate a recommendation system if the dataset changes?
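One common answer is to avoid recomputing from scratch: keep the learned parameters and warm-start a few update passes on the new interactions. A toy matrix-factorization sketch of that idea (all names, sizes and data here are hypothetical, purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 8
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors

def sgd_update(ratings, U, V, lr=0.01, reg=0.1, epochs=5):
    # A few SGD passes over (user, item, rating) triples.
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            u_old = U[u].copy()
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_old - reg * V[i])

# Initial fit on the full dataset.
initial = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
           for _ in range(1000)]
sgd_update(initial, U, V, epochs=20)

# When new data arrives, keep the learned factors and run a few passes
# over just the new (and perhaps a sample of the old) interactions,
# instead of refitting everything from a cold start.
new_data = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
            for _ in range(50)]
sgd_update(new_data, U, V, epochs=5)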

submitted by /u/uvcrtok

Categories
Misc

How do you create a model which takes as input a string and passes it to a tokenizer?

As I asked here on StackOverflow, I’m having problems building a model with strings as input: the input layer is tf.keras.Input(shape=(1,), dtype=tf.string, name='text'), which yields a symbolic tensor, but the BERT tokenizer expects an actual Python string. How do you extract the input string from the Keras input?
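You generally cannot extract a Python string from a symbolic Keras tensor; either tokenize before the data enters the model, or use a tokenizer that runs in-graph on tf.string tensors. A minimal sketch of the in-graph route using the TF Hub BERT preprocessing model (an assumption on my part: tensorflow_hub is installed and these particular hub handles suit the task; they are not from the original post):

import tensorflow as tf
import tensorflow_hub as hub

# In-graph tokenization: the preprocessing model accepts a tf.string tensor,
# so no Python string ever needs to be extracted from the symbolic input.
text_input = tf.keras.Input(shape=(), dtype=tf.string, name='text')
preprocessor = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder_inputs = preprocessor(text_input)  # input_word_ids, input_mask, input_type_ids
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",
    trainable=False)
outputs = encoder(encoder_inputs)
pooled = outputs["pooled_output"]          # (batch, 768) sentence embedding
model = tf.keras.Model(text_input, pooled)

print(model(tf.constant(["hello world"])).shape)  # (1, 768)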

submitted by /u/childintime9

Categories
Misc

What is the Yolov4 MakeFile Config for 3080 GPU?

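For the AlexeyAB darknet build, the Makefile settings usually suggested for an RTX 3080 (Ampere, compute capability 8.6, which needs CUDA 11.1 or newer) look like the following; treat this as a sketch to adapt, not a verified drop-in:

GPU=1
CUDNN=1
CUDNN_HALF=1   # Ampere tensor cores handle FP16 well
OPENCV=1
ARCH= -gencode arch=compute_86,code=[sm_86,compute_86]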

submitted by /u/-JuliusSeizure

Categories
Misc

Classification predictions completely different based on data size, though data doesn’t change

Hello, I’ve just started learning and messing around with neural networks. I’m not sure if this is a problem or just how neural networks work, but I’ve noticed that whenever I try to predict a binary classification outcome with my model, the predictions vary completely based on the size of the data I pass in.

For example, if I try to predict a single outcome with one row of data, I get something like 0.4. Then if I add another row of data and predict again, the prediction for row 1 becomes 0.9, even though the data in row 1 did not change; I only added an additional row for an additional prediction.

My training data consists of 1,266 entries with 54 features. I’ve tried reducing the batch_size to 1 and varying the optimizer, the number of layers and the number of neurons, but the result is mostly the same. Is this normal behavior?
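For a plain feed-forward network this is not normal: the prediction for one row must not depend on which other rows share the batch. A quick sanity check worth running (the model below is a hypothetical stand-in with the post’s 54 features); if the two printed values differ in a real pipeline, the usual culprits are preprocessing fitted on the prediction batch (e.g. a scaler) or inference run with training=True, which keeps Dropout/BatchNorm active:

import numpy as np
import tensorflow as tf

# Hypothetical stand-in: 54 features, binary output, mirroring the post.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(54,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

row1 = np.random.rand(1, 54)
row2 = np.random.rand(1, 54)

p_single = model.predict(row1)                      # row1 alone
p_batched = model.predict(np.vstack([row1, row2]))  # row1 plus another row

# For a plain feed-forward net these must match regardless of batch size.
print(np.allclose(p_single[0], p_batched[0]))       # True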

submitted by /u/CandyPoper

Categories
Misc

Pushing Forward the Frontiers of Natural Language Processing

Idea generation, not hardware or software, needs to be the bottleneck to the advancement of AI, Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, said this week at the AI Hardware Summit. “We want the inventors, the researchers and the engineers that are coming up with future AI to be limited only …

The post Pushing Forward the Frontiers of Natural Language Processing appeared first on The Official NVIDIA Blog.