submitted by /u/insanetech_
TensorFlow 2 Pocket Reference ebook

submitted by /u/insanetech_
Hi, I am just starting with TensorFlow for my AI and I ran into an error I don't know how to solve:
```
2021-09-06 21:55:50.461476: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-06 21:55:51.050032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 2781 MB memory: -> device: 0, name: NVIDIA GeForce GTX 970, pci bus id: 0000:01:00.0, compute capability: 5.2
C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\optimizer_v2\optimizer_v2.py:355: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  warnings.warn(
2021-09-06 21:55:51.508673: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/200
Traceback (most recent call last):
  File "C:\Users\Gamer\eclipse-workspace\AI\training_jarvis.py", line 69, in <module>
    model.fit(np.array(training_1), np.array(training_2), epochs=200, batch_size=5, verbose=2)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\training.py", line 1184, in fit
    tmp_logs = self.train_function(iterator)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py", line 885, in __call__
    result = self._call(*args, **kwds)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py", line 933, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py", line 759, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py", line 3066, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py", line 3463, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py", line 3298, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1007, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py", line 668, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py", line 994, in wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\training.py:853 train_function  *
        return step_function(self, iterator)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\training.py:842 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1286 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2849 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3632 _call_for_each_replica
        return fn(*args, **kwargs)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\training.py:835 run_step  **
        outputs = model.train_step(data)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\training.py:787 train_step
        y_pred = self(x, training=True)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py:1037 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\sequential.py:369 call
        return super(Sequential, self).call(inputs, training=training, mask=mask)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\functional.py:414 call
        return self._run_internal_graph(
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\functional.py:550 _run_internal_graph
        outputs = node.layer(*args, **kwargs)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py:1037 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\core.py:212 call
        output = control_flow_util.smart_cond(training, dropped_inputs,
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\utils\control_flow_util.py:105 smart_cond
        return tf.__internal__.smart_cond.smart_cond(
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\smart_cond.py:56 smart_cond
        return true_fn()
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\core.py:208 dropped_inputs
        noise_shape=self._get_noise_shape(inputs),
    C:\Users\Gamer\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\core.py:197 _get_noise_shape
        for i, value in enumerate(self.noise_shape):

    TypeError: 'int' object is not iterable
```
I guess it's about the model.fit line, but I am not sure. For reference, here is a bit of my code:
```python
training_1 = list(training_ai[:,0])
training_2 = list(training_ai[:,1])

model = Sequential()
model.add(Dense(128, input_shape=(len(training_1[0]),), activation='relu'))
model.add(Dropout(0,5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0,5))
model.add(Dense(len(training_2[0]), activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossenropy', optimizer=sgd, metrics=['accuracy'])

model.fit(np.array(training_1), np.array(training_2), epochs=200, batch_size=5, verbose=2)
```
I would be happy if you could help me with this error.
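For reference, the Keras Dropout layer takes a single float rate as its first argument and an optional noise_shape as its second, so Dropout(0,5) passes 5 as noise_shape; iterating over that int is exactly where the `'int' object is not iterable` in _get_noise_shape comes from. A minimal sketch of the same model with those calls adjusted (assuming a 50% dropout rate was intended; training_1 and training_2 are the variables from the snippet above, and the spelling of 'categorical_crossentropy' and the deprecated lr argument are also corrected):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD

# Sketch only: keeps the original architecture, assumes a 50% dropout rate.
model = Sequential()
model.add(Dense(128, input_shape=(len(training_1[0]),), activation='relu'))
model.add(Dropout(0.5))            # single float rate, not Dropout(0, 5)
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(training_2[0]), activation='softmax'))

# learning_rate replaces the deprecated lr argument flagged in the warning.
sgd = SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

model.fit(np.array(training_1), np.array(training_2), epochs=200, batch_size=5, verbose=2)
```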
submitted by /u/HeroOfComputers
Hey, all.
I’m quite new to TensorFlow and machine learning in general, and I would like to know if there are any wonderful resources out there holding large data sets to train on.
My end goal is to train an algorithm to identify dead pixels in images, so if there are any resources that specifically contain image sets or, if I’m incredibly lucky, contain image sets with dead pixels, those would be ideal.
Thanks in advance.
submitted by /u/Mongdoman
Hey all!
I am using an RGB dataset for my x_train, and the loss is calculated in a dynamic loss function that gets the distances of pairs and compares them against the ideal distances dist_train. Here is the model:
```python
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.d1 = Dense(3, activation='relu')
        self.flatten = Flatten()
        self.d2 = Dense(3, activation='relu')
        self.d3 = Dense(2)

    def call(self, x):
        x = self.d1(x)
        x = self.flatten(x)
        x = self.d2(x)
        return self.d3(x)

# Create an instance of the model
model = MyModel()

optimizer = tf.keras.optimizers.Adam()

train_loss = tf.keras.metrics.Mean(name='train_loss')
test_loss = tf.keras.metrics.Mean(name='test_loss')

@tf.function
def train_step(rgb):
    with tf.GradientTape() as tape:
        predictions = model(rgb, training=True)
        loss = tf_function(predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
```
Here is the loss function and the tf.function wrapping it:
```python
def mahal_loss(output):
    mahal = sp.spatial.distance.pdist(output, metric='mahalanobis')
    mahal = sp.spatial.distance.squareform(mahal, force='no', checks=True)
    new_distance = []
    mahal = np.ma.masked_array(mahal, mask=mahal==0)
    for i in range(len(mahal)):
        pw_dist = mahal[i, indices_train[i]]
        new_distance.append(pw_dist)
    mahal_loss = np.mean((dist_train - new_distance)**2)
    return mahal_loss

@tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])
def tf_function(pred):
    y = tf.numpy_function(mahal_loss, [pred], tf.float32)
    return y
```
Running the model:
```python
EPOCHS = 5

for epoch in range(EPOCHS):
    train_loss.reset_states()
    test_loss.reset_states()

    for i in x_train:
        train_step(i)

    print(
        f'Epoch {epoch + 1}, '
        f'Loss: {train_loss.result()}, '
        f'Test Loss: {test_loss.result()}, '
    )
```
I believe the reason I am running into problems lies in the dynamic loss function, as I need to calculate the distance between certain pairs to get the results I expect. This means that inside the loss function I have to calculate the Mahalanobis distance of each pair to get the ones I will compare against the correct distances. The error I get is the following:
```
in user code:

    <ipython-input-23-0e975da5cbc2>:15 train_step  *
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    C:\Anaconda3\envs\colour_env\lib\site-packages\keras\optimizer_v2\optimizer_v2.py:622 apply_gradients  **
        grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
    C:\Anaconda3\envs\colour_env\lib\site-packages\keras\optimizer_v2\utils.py:72 filter_empty_gradients
        raise ValueError("No gradients provided for any variable: %s." %

    ValueError: No gradients provided for any variable: ['my_model/dense/kernel:0', 'my_model/dense/bias:0', 'my_model/dense_1/kernel:0', 'my_model/dense_1/bias:0', 'my_model/dense_2/kernel:0', 'my_model/dense_2/bias:0'].
```
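A likely factor here (hedged reading): tf.numpy_function has no registered gradient, so everything computed inside mahal_loss is invisible to the GradientTape, tape.gradient returns None for every variable, and filter_empty_gradients raises exactly this error. A minimal sketch of a loss built purely from TensorFlow ops, using squared Euclidean pairwise distances in place of Mahalanobis and treating indices_train and dist_train as precomputed constant tensors (both of those substitutions are assumptions, not the original setup):

```python
import tensorflow as tf

def pairwise_sq_dists(x):
    # ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2 * xi . xj, fully differentiable
    sq = tf.reduce_sum(tf.square(x), axis=1, keepdims=True)
    return sq - 2.0 * tf.matmul(x, x, transpose_b=True) + tf.transpose(sq)

def distance_loss(predictions, indices_train, dist_train):
    # indices_train: int32 tensor, one paired index per sample
    # dist_train: float32 tensor of target distances
    d = pairwise_sq_dists(predictions)
    idx = tf.stack([tf.range(tf.shape(d)[0]), indices_train], axis=1)
    paired = tf.gather_nd(d, idx)          # d[i, indices_train[i]]
    return tf.reduce_mean(tf.square(dist_train - paired))
```

Inside train_step, calling something like distance_loss(predictions, indices_train, dist_train) instead of tf_function(predictions) should then produce non-None gradients.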
submitted by /u/Acusee
I have the following inputs to be trained on a CNN.
```python
x = np.array(Images)
y = [ [[0]], [[76., 5., 9., 1., 0., 0.], [54., 4., 10., 51.]] ]
```
Since the 'y' input is an n-dimensional array of non-uniform sizes, I used a RaggedTensor to represent it and fed it to the network.
```python
y = tf.ragged.constant(y)
cnn_model.fit(x, y, epochs=10, batch_size=32, validation_split=0.30)
```
I am receiving the following error:
ValueError: validation_split is only supported for Tensors or NumPy arrays, found following types in the input: [<class 'tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor'>]
If I convert 'y' to a numpy.ndarray and fit it to the model, I get the following error:
```python
cnn_model.fit(x, y.numpy(), epochs=10, batch_size=32, validation_split=0.30)
```
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).
I want to train the model on this n-dimensional 'y' input; kindly suggest which data type representation would be suitable for this.
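One possible workaround, sketched under assumptions rather than a definitive fix: validation_split only accepts dense Tensors or NumPy arrays, but a tf.data.Dataset built from the ragged labels can be split manually and passed through validation_data instead. This assumes x and the ragged y are aligned along the first dimension and that the model's loss can actually consume ragged labels; the variable names follow the snippet above.

```python
import tensorflow as tf

# Sketch: manual train/validation split in place of validation_split.
y = tf.ragged.constant(y)
split = int(0.7 * x.shape[0])

train_ds = tf.data.Dataset.from_tensor_slices((x[:split], y[:split])).batch(32)
val_ds = tf.data.Dataset.from_tensor_slices((x[split:], y[split:])).batch(32)

cnn_model.fit(train_ds, validation_data=val_ds, epochs=10)
```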
submitted by /u/sarvna
I’m trying to build a model and convert it to use in TensorFlow Lite for Microcontrollers. I’m having an issue where every Keras model I generate contains a REDUCE_PROD operator (even a completely basic model consisting of a single Dense(1) layer). However, the TF Lite for Microcontrollers runtime doesn’t support the REDUCE_PROD operator and flags an error upon attempting to load the model.
Is there a way I can exclude this operator when generating a model? Am I missing something?
Thanks!
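One thing that is sometimes behind extra shape-computation operators like REDUCE_PROD is a dynamic (unspecified) batch dimension, which forces the flattened size to be computed at runtime. A minimal sketch worth trying, with the assumption that pinning the full input shape before conversion removes that computation (hypothetical layer sizes):

```python
import tensorflow as tf

# Sketch only: convert from a concrete function whose input shape is fully
# fixed, including the batch dimension, so no runtime shape math is needed.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

concrete_fn = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([1, 4], tf.float32))  # batch size pinned to 1

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn])
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```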
submitted by /u/maha9000
I have a model that uses R2 as a metric. Since AFAIK there isn’t one natively implemented in TF, I use the one from the tensorflow-addons package. However, when I try to load this model after saving, it fails with the error:
type of argument “y_shape” must be a tuple; got list instead
Here is a minimal working example that produces this error:
```python
from tensorflow.keras.models import load_model, Sequential
from tensorflow.keras.layers import Dense, Input
import tensorflow as tf
import tensorflow_addons as tfa

model = Sequential()
model.add(Input(5))
model.add(Dense(5))
model.add(Dense(5))
model.compile(metrics=[tfa.metrics.RSquare(y_shape=(5,))])

model.save('test_model.h5')
model = load_model('test_model.h5')
```
RSquare works fine during training but I need to be able to load the model later (and load models I have already saved). I have tried using the custom_objects argument to load_model but this makes no difference. Any suggestions?
Thanks in advance!
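One workaround that may help, sketched under the assumption that the metric itself does not need to be restored from the saved file: load with compile=False, which skips deserializing the saved compile configuration (where the y_shape tuple comes back as a list), and then recompile with a freshly constructed RSquare.

```python
from tensorflow.keras.models import load_model
import tensorflow_addons as tfa

# Sketch: bypass metric deserialization, then recompile manually.
model = load_model('test_model.h5', compile=False)
model.compile(metrics=[tfa.metrics.RSquare(y_shape=(5,))])
```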
submitted by /u/DustinBraddock
I am attempting to train a 3-layer neural network that predicts maximum and minimum survival duration. The final layer has two outputs (corresponding to a prediction of maximum/minimum survival), and I have written a custom loss function. However, I have realised that I need to apply the loss differently depending on which node I am evaluating.
What would be the best way of approaching this? Would I be better off training two separate models to predict maximum and minimum survival?
Thank you
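One pattern that keeps a single model is to slice the two output nodes inside the custom loss and apply a different term to each. A minimal sketch, with hypothetical squared-error terms standing in for whatever the real per-node losses are:

```python
import tensorflow as tf

# Sketch only: y_true and y_pred are assumed to have shape (batch, 2), with
# column 0 = maximum survival and column 1 = minimum survival.
def survival_loss(y_true, y_pred):
    max_true, min_true = y_true[:, 0], y_true[:, 1]
    max_pred, min_pred = y_pred[:, 0], y_pred[:, 1]
    max_term = tf.reduce_mean(tf.square(max_true - max_pred))  # loss for the "max" node
    min_term = tf.reduce_mean(tf.square(min_true - min_pred))  # loss for the "min" node
    return max_term + min_term

# model.compile(optimizer='adam', loss=survival_loss)
```

An alternative is a functional model with two named one-unit heads and a per-output loss dict passed to compile, which avoids training two separate models.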
submitted by /u/Disastrous-Buy-6645
3D deep learning researchers can build on the latest algorithms to simplify and accelerate workflows using the Kaolin PyTorch Library, available now.
The NVIDIA Kaolin library, first released in November 2019, was originally written in the NVIDIA Toronto AI lab as an internship project. After writing repetitive boilerplate code and copying algorithmic components across several projects, the researchers started developing a PyTorch library that brings common functionality for 3D deep learning (3D DL) into one place. Since its first release, the Kaolin library has grown into a mature codebase with robust, optimized utilities and algorithms for 3D deep learning.
The Kaolin library gives 3D deep learning researchers utilities to accelerate their workflows, as well as reusable research components to provide a basis for future innovations. For example, Kaolin simplifies handling and processing of complex 3D datasets used for training. It also includes writers for 3D checkpoints that can be visualized in the companion Omniverse Kaolin App with the latest NVIDIA RTX technology. And it provides building blocks such as conversions between 3D representations, useful 3D loss functions for training, and differentiable rendering. The Kaolin team is dedicated to delivering continuous improvements and shipping new algorithmic building blocks to power 3D DL innovation.
The latest Kaolin library release includes a new representation, structured point clouds (SPC), a sparse octree-based acceleration data structure with highly efficient convolution and ray-tracing capabilities. SPCs are useful for scaling up and accelerating neural implicit representations, which are popular in 3D DL research today. It also powers the latest version of NeuralLOD training, delivering up to a 30x reduction in memory and a 3x speedup in training time.
It also includes a new lightweight Tensorboard-style web dashboard called Dash3D. Users can leverage this tool to inspect checkpoints of 3D predictions produced by DL models during training, even on remote hardware configurations.
The release also improves support for 3D datasets, adding new datasets (SHREC, ModelNet), additional formats (.off), and speedups for the USD 3D file format, resulting in a 10x improvement in load times during training compared to the popular OBJ format. In addition, new tutorials for differentiable rendering and 3D checkpoints are included.
See the official changelog for additional details on the Kaolin library release. Researchers can download the Kaolin library on GitHub today.
The library’s companion Omniverse Kaolin App is available through NVIDIA Omniverse. Download the NVIDIA Omniverse open beta today to get started. For additional support, join the Omniverse Discord server or the Omniverse forums to chat with the community.
Nsight Graphics 2021.4 is an all-in-one graphics debugger and profiler to help game developers get the most out of NVIDIA hardware. From analyzing API setup, to solving nasty bugs, to providing deep insight into how applications use the GPU for better performance, Nsight Graphics is the ultimate tool.
The latest release is available to download now >>
Key features include:
GPU Trace introduces a new capture type called One-shot. The One-shot capture type supports profiling applications that do not have a specific frame beginning and ending. This makes it easier to profile and optimize tools that rely on compute workloads, such as generating normal maps or optimizing geometry/LODs. One-shot captures are supported for D3D12 and Vulkan applications using compute or ray tracing features. Ray tracing with DirectML and WinML is also supported.
Trace Analysis helps identify work regimes with the most potential for performance improvement. Select the “Analyze” button after taking a GPU Trace, and the advanced analysis engine will provide a new report with explanations and suggestions on how to improve GPU utilization.
In March 2021, NVIDIA introduced new Resizable BAR capabilities with Game Ready GeForce drivers. Users with a compatible motherboard and GPU can enable all of the GPU memory to be accessed by the CPU at once. GPU Trace also reveals if BAR memory transfers are happening efficiently. View more information >>
Using VK_NV_cuda_kernel_launch, it is now possible to launch CUDA kernels from a Vulkan graphics application without the overhead of the context switch. GPU Trace now supports this capability.
When working with C++ Captures, it can be useful to open an integrated development environment with a project that allows for code browsing or modification. In this release, a new button in the C++ Capture document opens a Visual Studio environment with the associated project, taking advantage of Visual Studio's native CMake support.
Read the Nsight Graphics 2021.4 release notes >>
Check out the GDC session on DevTools for Harnessing Ray Tracing in Games >>
Please continue to use the integrated feedback button that lets you send comments, feature requests, and bug reports directly. You can send feedback anonymously or provide an email address for follow-up.
Just click on the little speech bubble at the top right of the window.