AI on the Aisles: Startup’s Jetson-powered Inventory Management Boosts Revenue

Penn State University pals Brad Bogolea and Mirza Shah were living in Silicon Valley when they pitched Jeff Gee on their robotics concepts. Fortunately for them, the star designer was working at the soon-to-shutter Willow Garage robotics lab. So the three of them — Shah was also a software engineer at Willow — joined together.


(Windows) TensorFlow not detecting the cudart64_110.dll file

Yesterday, I installed the latest CUDA toolkit (11.2), but
TensorFlow said there was no cudart64_110.dll file. So, I then
installed CUDA toolkit 11.0, which has this file, but TensorFlow
still cannot find the file.

I am running Windows 10 Home Edition.
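
For what it's worth: cudart64_110.dll ships with CUDA 11.0, and on Windows TensorFlow locates it through the DLL search path, so the CUDA 11.0 bin directory must be on PATH or registered explicitly. A minimal sketch, assuming the default install location and Python 3.8+:

import os

# Make CUDA 11.0's DLLs visible to this process before TensorFlow loads.
os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin")

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # should list a GPU once the DLLs resolve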

submitted by /u/Comprehensive-Ad3963

How to write code that computes and displays the loss and accuracy of a trained model on the test set?

I'm rather embarrassed about flooding this forum with mostly novice questions recently. I'm still a newbie, still struggling to figure out how the code works in TensorFlow; pardon me for that. Is there any template code with which I can compute and display the loss and accuracy of the trained model on the test set?
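
A minimal template (assuming a Keras model compiled with an accuracy metric, plus x_test and y_test arrays): model.evaluate returns the loss followed by each compiled metric.

# Compute and display loss and accuracy on the held-out test set.
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f'Test loss: {loss:.4f}')
print(f'Test accuracy: {accuracy:.2%}')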

submitted by /u/edmondoh001

My AI model doesn't give me an 'accuracy' value; it always says it's 0. Why is that?

My AI model doesn't give me an 'accuracy' value; it always says it's 0. Why is that?

My code:

(The data I use to train is just a list of a few thousand BTC high prices.)

# imports used by the functions below
import numpy as np
import pandas as pd
from tqdm import tqdm

# split a univariate sequence into samples
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in tqdm(range(len(sequence)), desc='Creating training array'):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence) - 1:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)

def create_training_sequence(price_type='high', n_steps=5):
    df = pd.read_csv('candlesticks.csv')
    df = df.drop(['Unnamed: 0', 'open', 'close', 'volume'], axis=1)
    if price_type == 'high':
        sequence = df.drop(['closeTime', 'low'], axis=1)
    if price_type == 'low':
        sequence = df.drop(['closeTime', 'high'], axis=1)
    print(sequence.head())

    X, Y = split_sequence(sequence[price_type], n_steps)
    print(X[1])
    # X gets reshaped from [samples, timesteps] into [samples, timesteps, features] below
    return X, Y

import tensorflow as TheFuckImDoingHere
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from sklearn.model_selection import train_test_split

use_CPU = False
epochs = 100
n_steps = 60
batch_size = n_steps
n_features = 1

x, y = create_training_sequence(n_steps=n_steps)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], n_features))
x_test = x_test.reshape((x_test.shape[0], x_train.shape[1], n_features))
print('-------------------------')
print('x_train shape: ' + str(x_train.shape) + ', x_test shape: ' + str(x_test.shape))
print('-------------------------')

if use_CPU == True:
    # limit RAM and CPU usage
    TheFuckImDoingHere.config.threading.set_intra_op_parallelism_threads(2)
    TheFuckImDoingHere.config.threading.set_inter_op_parallelism_threads(2)

# define model
model = Sequential()
model.add(LSTM(64, activation='relu', return_sequences=True, input_shape=(n_steps, n_features)))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(256, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(64, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['acc', 'mse'])
model.summary()

callback = TheFuckImDoingHere.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0, patience=15, verbose=1,
    mode='min', baseline=None, restore_best_weights=False)

# fit model
if use_CPU == True:
    # run training on the CPU
    with TheFuckImDoingHere.device('/CPU:0'):
        hist = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
                         validation_data=(x_test, y_test), callbacks=[callback])
elif use_CPU == False:
    # run training on the GPU
    with TheFuckImDoingHere.device('/GPU:0'):
        hist = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
                         validation_data=(x_test, y_test), callbacks=[callback])

prediction = model.predict(x_test[0].reshape((1, x_train.shape[1], n_features)), verbose=0)
print('Prediction is: ' + str(prediction))
print('Real value is: ' + str(y_test[0]))

# print evaluation
loss1, acc1, mse1 = model.evaluate(x_test, y_test)
print(f"Loss is {loss1:.20E},\nAccuracy is {float(acc1)*100},\nMSE is {mse1:.8E}")
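
A plausible explanation, offered tentatively: this is a regression model (a single continuous output trained with MSE), and the Keras 'acc' metric on continuous targets only counts exact matches between prediction and label, which essentially never happen with floats, so it reports 0. A regression metric such as mean absolute error is more informative. A minimal sketch of the changed calls, reusing the names from the post:

# Same model as above; only the metrics change ('mae' instead of 'acc').
model.compile(optimizer='adam', loss='mse', metrics=['mae', 'mse'])
# ... fit exactly as before ...
loss1, mae1, mse1 = model.evaluate(x_test, y_test, verbose=0)
print(f"Loss (MSE) is {loss1:.8E},\nMAE is {mae1:.8E}")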

submitted by /u/Chris-hsr

I have to say this is my biggest nightmare for this project.


I was implementing an MLP neural network architecture and had just started working on the Flatten and Dense layers, with the final layer having a 10-way softmax output. The problem cropped up even without using the to_categorical function.

I ran into the error when my loss function was 'categorical_crossentropy'.


https://preview.redd.it/ai5jmj3qbs761.png?width=1148&format=png&auto=webp&s=aec4e8667d7c7ca2a7fee41eec4e1616c0a38471

Then I changed my loss function to 'sparse_categorical_crossentropy' and ran into this problem instead, as shown below:


https://preview.redd.it/ysedtfy4fs761.png?width=1094&format=png&auto=webp&s=78e64df51c7cceb9eb9a4b9860d4ca46fc4ba056

I am stuck and don't know where the error comes from. Can someone enlighten me? I'd really appreciate it. It's been quite a tough journey for me with TensorFlow.

Just some extra info: I'm currently working on the SVHN dataset, which contains over 600,000 digit images in all and is harder than MNIST because the numbers appear in the context of natural scene images. SVHN is obtained from house numbers in Google Street View images.

I set

X_train = train['X']
y_train = train['y']
X_test = test['X']
y_test = test['y']

The shape of X_train is (73257, 32, 32, 3) and y_train is (73257, 1).

After which, I do this step,

X_train = X_train.mean(axis=-1, keepdims=True)
X_test = X_test.mean(axis=-1, keepdims=True)

So the shape of X_train will be (73257, 32, 32, 1) and X_test is (26032, 32, 32, 1).

Next, I did this

X_train = X_train.astype(np.float32)/255
X_test = X_test.astype(np.float32)/255
list_labels = np.unique(y_train)
list_labels

This gives me an output of: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=uint8)

Then, I did this

y_train_one_hot = to_categorical(y_train - 1, num_classes=10)
y_test_one_hot = to_categorical(y_test - 1, num_classes=10)

For my model architecture, it’s quite simple:


https://preview.redd.it/f5q1274jds761.png?width=1028&format=png&auto=webp&s=a1363d9ec7863f4a31c93e3d6f829dd0b9d69979


https://preview.redd.it/9xs6654les761.png?width=1218&format=png&auto=webp&s=b269e6b4d467304881162580c7a1f73196e3e31b

That’s where I get this error box:

Train on 62268 samples, validate on 10989 samples
Epoch 1/30
 128/62268 [..............................] - ETA: 44s
WARNING:tensorflow:Can save best model only with loss available, skipping.
WARNING:tensorflow:Early stopping conditioned on metric `loss` which is not available. Available metrics are:
 128/62268 [..............................] - ETA: 1:29
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-14-b1b279107f36> in <module>
     10 early_stopping = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
     11
---> 12 history = model.fit(X_train, y_train, batch_size=128, epochs=30, validation_split=0.15, callbacks=[checkpoint_best, early_stopping])

[... TensorFlow-internal frames in training.py, training_v2.py, training_v2_utils.py, def_function.py, function.py, execute.py and six.py elided ...]

InvalidArgumentError: Received a label value of 10 which is outside the valid range of [0, 10).  Label values: 2 4 10 8 7 4 1 7 3 2 9 3 1 1 5 10 3 1 7 2 3 4 10 5 2 5 1 5 8 9 10 9 7 5 6 2 9 5 10 2 ...
	 [[node loss/dense_2_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_757]

Function call stack:
distributed_function
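
For what it's worth, the final error message points at the labels rather than the architecture: sparse_categorical_crossentropy expects integer labels in [0, 10), yet the values listed include 10, which suggests the raw y_train (SVHN labels run 1 through 10) was passed to fit instead of the shifted or one-hot labels prepared above. A minimal sketch of the two consistent pairings, reusing the post's variable names:

# Option A: one-hot labels with categorical_crossentropy
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train_one_hot, batch_size=128, epochs=30, validation_split=0.15)

# Option B: integer labels shifted into 0..9 with sparse_categorical_crossentropy
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train - 1, batch_size=128, epochs=30, validation_split=0.15)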

Thank you for taking the time to read such a lengthy post; I'm sorry about the length.

submitted by /u/edmondoh001

Reading the tutorials — When to use two `GradientTape`?

I am reading the advanced tutorials of TF 2.4, and I am confused about the need to use two instances of GradientTape. This is the case in the Pix2Pix and Deep Convolutional GAN examples, while the CycleGAN example uses a single, persistent GradientTape.

It seems to me that the first approach makes both GradientTapes record the operations of both networks, which sounds wasteful. Intuitively, the second approach makes far more sense to me and should use half as much memory as the first.

When should one use the first approach, and when the second?
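
For reference, a minimal sketch of the two patterns on a toy function (my own illustration, not the tutorials' GAN code): separate tapes are queried once each, while a persistent tape can be queried repeatedly and is released explicitly.

import tensorflow as tf

x = tf.constant(3.0)
w = tf.Variable(2.0)
b = tf.Variable(1.0)

# Pattern 1: two tapes, as in the Pix2Pix/DCGAN tutorials. A non-persistent
# tape allows only a single gradient() call, so each quantity gets its own tape.
with tf.GradientTape() as tape_w, tf.GradientTape() as tape_b:
    y = w * x + b
dw = tape_w.gradient(y, w)
db = tape_b.gradient(y, b)

# Pattern 2: one persistent tape, as in the CycleGAN tutorial. gradient() may
# be called several times, and the tape is deleted afterwards to free resources.
with tf.GradientTape(persistent=True) as tape:
    y = w * x + b
dw = tape.gradient(y, w)
db = tape.gradient(y, b)
del tape

As far as I can tell, both tapes in the first pattern do record everything executed inside the shared with block, so the persistent variant mainly trades that duplication for holding its resources until it is explicitly deleted.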

submitted by /u/rmk236

Installing tensorflow-gpu is making me want to cry

Why, Google?

submitted by /u/cereal_final

TensorFlow tutorial on neural machine translation: difficulty understanding the code

In the TensorFlow tutorial on neural machine translation, the loss_function() masks the loss on padded tokens. My question: won't the cross-entropy function itself cancel out the padded-token loss terms, so why do the masking?
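
For context, a sketch of a masked loss along the lines of the tutorial (paraphrased, so treat details as approximate). Cross-entropy does not cancel padded positions on its own: the padding id (0) is a valid class, and the model still emits a full distribution at every timestep, so each padded position contributes a nonzero loss term unless it is masked out.

import tensorflow as tf

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')

def loss_function(real, pred):
    # per-position losses, including positions whose target is padding
    loss_ = loss_object(real, pred)
    # zero out the terms where the target token is the padding id 0
    mask = tf.cast(tf.math.logical_not(tf.math.equal(real, 0)), loss_.dtype)
    loss_ *= mask
    return tf.reduce_mean(loss_)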

submitted by /u/AI_Astronaut9852

Can someone explain what the error "AttributeError: 'NoneType' object has no attribute 'endswith'" is trying to say?

My code is

def get_checkpoint_every_epoch():
    checkpoint_every_epoch = 'model_checkpoints_every_epoch'
    checkpoints = ModelCheckpoint(filepath=checkpoint_every_epoch,
                                  frequency='epoch',
                                  save_weights_only=True,
                                  verbose=1)
    return checkpoints

def get_checkpoint_best_only():
    checkpoint_best_path = 'model_checkpoints_best_only/checkpoint'
    checkpoint_best = ModelCheckpoint(filepath=checkpoint_best_path,
                                      save_weights_only=True,
                                      monitor='val_accuracy',
                                      save_best_only=True,
                                      verbose=1)
    return checkpoint_best

def get_early_stopping():
    early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=3)
    return early_stopping

checkpoint_every_epoch = get_checkpoint_every_epoch()
checkpoint_best_only = get_checkpoint_best_only()
early_stopping = get_early_stopping()

Followed by this,

def get_model_last_epoch(model):
    model_last_epoch_file = tf.train.latest_checkpoint("checkpoints_every_epoch")
    model.load_weights(model_last_epoch_file)
    return model

def get_model_best_epoch(model):
    model_best_epoch_file = tf.train.latest_checkpoint("checkpoints_best_only")
    model.load_weights(model_best_epoch_file)
    return model

model_last_epoch = get_model_last_epoch(get_new_model(x_train[0].shape))
model_best_epoch = get_model_best_epoch(get_new_model(x_train[0].shape))
print('Model with last epoch weights:')
get_test_accuracy(model_last_epoch, x_test, y_test)
print('')
print('Model with best epoch weights:')
get_test_accuracy(model_best_epoch, x_test, y_test)

This is where, I get this error:

AttributeError                            Traceback (most recent call last)
<ipython-input-18-b6d169507ca4> in <module>
      3 # Verify that the second has a higher validation (testing) accuarcy.
      4
----> 5 model_last_epoch = get_model_last_epoch(get_new_model(x_train[0].shape))
      6 model_best_epoch = get_model_best_epoch(get_new_model(x_train[0].shape))
      7 print('Model with last epoch weights:')

<ipython-input-17-4c8cba016afe> in get_model_last_epoch(model)
     12 model_last_epoch_file = tf.train.latest_checkpoint("checkpoints_every_epoch")
     13
---> 14 model.load_weights(model_last_epoch_file)
     15
     16 return model

[... Keras-internal frames in training.py and network.py elided ...]

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in _is_hdf5_filepath(filepath)
   1447
   1448 def _is_hdf5_filepath(filepath):
-> 1449     return (filepath.endswith('.h5') or filepath.endswith('.keras') or
   1450             filepath.endswith('.hdf5'))
   1451

AttributeError: 'NoneType' object has no attribute 'endswith'

What does it mean? Sorry, I'm just a newbie and need some enlightenment. I wish you a Merry Christmas. Thanks a lot!
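
A likely reading (hedged, since only these snippets are visible): tf.train.latest_checkpoint returns None when it finds no checkpoint in the directory it is given, and model.load_weights(None) then fails inside Keras's _is_hdf5_filepath check, producing exactly this AttributeError. Note that the callbacks save under 'model_checkpoints_every_epoch' and 'model_checkpoints_best_only', while the loaders look in "checkpoints_every_epoch" and "checkpoints_best_only"; those names likely need to match. A minimal guard:

# Use the same directory the ModelCheckpoint callback actually saved to.
ckpt = tf.train.latest_checkpoint('model_checkpoints_every_epoch')
if ckpt is None:
    raise FileNotFoundError('No checkpoint found - check that the load path '
                            'matches the save path.')
model.load_weights(ckpt)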

submitted by /u/edmondoh001

How Come I Get Different Results From a TF Tutorial on my Machine?

So I am following a YouTube series here: https://www.youtube.com/watch?v=CA0PQS1Rj_4

And this person also posted their code on GitHub:
https://github.com/musikalkemist/Deep-Learning-Audio-Application-From-Design-to-Deployment/tree/master/4-%20Making%20Predictions%20with%20the%20Speech%20Recognition%20System

(I removed the model.h5 and data.json since I wanted to use a
model generated on my own PC)

I run the train.py which trains the model and get this as a
result: https://pastebin.com/mZSXK25v

When I test "down.wav", it predicts "right": https://pastebin.com/Up8EvNyc

When I test "left.wav", it predicts "down": https://pastebin.com/vzWzTV4X

How come I get different results, in fact completely wrong results no matter what I test, despite the model reporting an accuracy of 0.9358?

submitted by /u/TuckleBuck88