Categories
Misc

Have a Holly, Jolly Gaming Season on GeForce NOW

Happy holidays, members. This GFN Thursday is packed with winter sales for several games streaming on GeForce NOW, as well as seasonal in-game events. Plus, for those needing a last-minute gift for a gamer in their life, we’ve got you covered with digital gift cards for Priority memberships. To top it all off, six Read article >

The post Have a Holly, Jolly Gaming Season on GeForce NOW appeared first on The Official NVIDIA Blog.

Categories
Misc

Advent of Code 2021 in pure TensorFlow – day 5. A bit of computer vision inside :)

Advent of Code 2021 in pure TensorFlow - day 5. A bit of computer vision inside :) submitted by /u/pgaleone
[visit reddit] [comments]
Categories
Misc

TensorFlow Tutorial 1/6 – Setup PC to recognize playing cards (Windows 10 – Anaconda – Python 3.7)

TensorFlow Tutorial 1/6 – Setup PC to recognize playing cards (Windows 10 – Anaconda – Python 3.7) submitted by /u/aliza-kelly
[visit reddit] [comments]
Categories
Misc

DQN agent good for stochastic game? What other technique would be better?

Browsing through freely available sources, I find both statements: DQN is good / is not good for stochastic environments.

As far as I understand it, the Q-network predicts the expected return of an action in a state, which can then be used to choose actions, e.g. greedily, and training makes that prediction better. If the environment is stochastic, repeated learning should nudge the prediction toward the center of the return distribution, since that is the loss minimum.

So it should work, but it might need a lot of time to get there (law of large numbers), especially since the game is played by two agents suffering from the same problem, each being part of the stochastic “environment” from the opponent’s point of view!

Is there another technique in deep learning / reinforcement learning that is much better suited to such a strongly stochastic environment? Any advice?
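For concreteness, a minimal sketch of the mechanism described above: a Q-network that predicts expected returns, greedy action selection, and a TD update whose squared-error loss pulls Q(s, a) toward the mean of the noisy targets. The layer sizes, gamma, and the lack of a replay buffer or target network are illustrative simplifications, not a recommendation for any particular game.

    import tensorflow as tf

    num_actions = 4   # illustrative
    gamma = 0.99      # discount factor

    q_net = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_actions),  # Q(s, a) for every action
    ])
    optimizer = tf.keras.optimizers.Adam(1e-3)

    def greedy_action(state):
        # Act greedily on the predicted expected returns.
        return int(tf.argmax(q_net(state[None, :]), axis=-1)[0])

    @tf.function
    def train_step(states, actions, rewards, next_states, dones):
        # TD target r + gamma * max_a' Q(s', a'); in a stochastic game the
        # MSE loss below pulls Q(s, a) toward the mean of these noisy targets.
        targets = rewards + gamma * (1.0 - dones) * tf.reduce_max(
            q_net(next_states), axis=-1)
        with tf.GradientTape() as tape:
            q_sa = tf.reduce_sum(
                q_net(states) * tf.one_hot(actions, num_actions), axis=-1)
            loss = tf.reduce_mean(tf.square(tf.stop_gradient(targets) - q_sa))
        grads = tape.gradient(loss, q_net.trainable_variables)
        optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
        return loss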

submitted by /u/JJhome2
[visit reddit] [comments]

Categories
Misc

Grab your Digital Copy of The TensorFlow Workshop – HURRY

Packt has published “The TensorFlow Workshop”.

Grab your digital copy now if you’re interested.

As part of our marketing activities, we are offering free digital copies of the book in return for unbiased feedback in the form of a reader review.

Get started with TensorFlow fundamentals to build and train deep learning models with real-world data, practical exercises, and challenging activities.

Here is what you will learn from the book:

  1. Get to grips with TensorFlow’s mathematical operations
  2. Pre-process a wide variety of tabular, sequential, and image data
  3. Understand the purpose and usage of different deep learning layers
  4. Perform hyperparameter tuning to prevent overfitting to the training data
  5. Use pre-trained models to speed up the development of learning models
  6. Generate new data based on existing patterns using generative models

Key Features

  • Understand the fundamentals of tensors, neural networks, and deep learning
  • Discover how to implement and fine-tune deep learning models for real-world datasets
  • Build your experience and confidence with hands-on exercises and activities
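As a small, purely illustrative taste of the tensor-math fundamentals listed above (not an excerpt from the book), basic operations in TensorFlow 2 look like this:

    import tensorflow as tf

    # Illustrative tensor operations only; not taken from the book.
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.ones((2, 2))

    print(tf.matmul(a, b))     # matrix multiplication
    print(tf.reduce_mean(a))   # mean over all elements
    print(a @ b + 2 * b)       # operator overloading works too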

Please comment below or DM me for more details

submitted by /u/RoyluisRodrigues
[visit reddit] [comments]

Categories
Misc

NVIDIA BlueField Sets New World Record for DPU Performance

Data centers need extremely fast storage access, and no DPU is faster than NVIDIA’s BlueField-2. Recent testing by NVIDIA shows that a single BlueField-2 data processing unit reaches 41.5 million input/output operations per second (IOPS) — more than 4x the IOPS of any other DPU. The BlueField-2 DPU delivered record-breaking performance using standard networking protocols Read article >

The post NVIDIA BlueField Sets New World Record for DPU Performance appeared first on The Official NVIDIA Blog.

Categories
Misc

3D Artist Turns Hobby Into Career, Using Omniverse to Turn Sketches Into Masterpieces

It was memories of playing Pac-Man and Super Mario Bros while growing up in Colombia’s sprawling capital of Bogotá that inspired Yenifer Macias’s award-winning submission for the #CreateYourRetroverse contest, featured above. The contest asked NVIDIA Omniverse users to share scenes that visualize where their love for graphics began. For Macias, that passion goes back to Read article >

The post 3D Artist Turns Hobby Into Career, Using Omniverse to Turn Sketches Into Masterpieces appeared first on The Official NVIDIA Blog.

Categories
Misc

How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC

It could only happen in NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows. And it happened during an interview with a virtual toy model of NVIDIA’s CEO, Jensen Huang. “What are the greatest …” one of Toy Jensen’s creators asked, stumbling, then stopping before completing his scripted question. Unfazed, Read article >

The post How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC appeared first on The Official NVIDIA Blog.

Categories
Misc

Quantized conversion from TF to TFLite

Hi,
I’m working on a project that uses the Edge TPU, and I need to use appropriate models [1] converted from TensorFlow. I need face recognition and decided to use a FaceNet implementation, taking the model from [2]. I have it working on my PC, but when I tried to convert the model to TFLite and compile it for the Edge TPU (using the steps presented in [3]), I ended up with all of the resulting embeddings (output tensors) being the same. They are in the proper form (a 128-D vector with uint8 values, as opposed to the float32 values of the TF model), but they are all identical.

Does anyone have any idea what the reason for that might be? Is such a conversion impossible on an already pre-trained model, or am I missing something obvious?

References:

[1] https://coral.ai/docs/edgetpu/models-intro/

[2] https://github.com/nyoki-mtl/keras-facenet

[3] https://www.tensorflow.org/lite/performance/post_training_integer_quant
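For reference, a sketch of the full-integer post-training quantization flow described in [3]. The model filename, the 160x160x3 input size, and the random calibration data are placeholder assumptions; in practice the representative dataset should be real, correctly preprocessed face crops, since a calibration set that does not match the model’s expected input range is one common way to end up with degenerate (identical) outputs.

    import numpy as np
    import tensorflow as tf

    # Placeholder: load the Keras FaceNet model from [2]; the filename and
    # the 160x160x3 input size are assumptions, adjust to your checkpoint.
    model = tf.keras.models.load_model("facenet_keras.h5")

    def representative_dataset():
        # Should yield real, preprocessed face crops (same scaling the float
        # model was trained with); random data here is only a placeholder.
        for _ in range(100):
            yield [np.random.rand(1, 160, 160, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8   # required for the Edge TPU
    converter.inference_output_type = tf.uint8
    tflite_model = converter.convert()

    with open("facenet_int8.tflite", "wb") as f:
        f.write(tflite_model)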

submitted by /u/surprajs
[visit reddit] [comments]

Categories
Misc

Manipulating batches prior to sending them to the model

I have a somewhat unusual issue that I cannot solve, because nothing relevant comes up on Google.

My data are one-hot encoded DNA sequences of VARYING length. This is easily stored in a jagged NumPy array (4 x n x m), where n = number of samples and m = sequence length (which varies). However, the size requirement after zero-padding the entire array to the maximum sequence length is insane, and I need to avoid doing that.

The solution I have thought up is as follows (a rough code sketch follows the list):

  1. Generate a jagged NumPy array (varying input lengths)
  2. Extract k sequences from this large array, where k = batch size
  3. Zero-pad the batch
  4. Pass it to the model
  5. Repeat from step 2
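A minimal sketch of that per-batch padding idea using tf.data; the generator, shapes, batch size, and the randomly generated sequences are illustrative assumptions, not part of the original question:

    import numpy as np
    import tensorflow as tf

    # Hypothetical jagged data: each sample is a (4, m_i) one-hot matrix,
    # where the sequence length m_i varies per sample.
    rng = np.random.default_rng(0)
    sequences = [
        np.eye(4, dtype=np.float32)[rng.integers(0, 4, size=m)].T  # (4, m)
        for m in rng.integers(50, 200, size=1000)
    ]

    def gen():
        for seq in sequences:
            yield seq

    dataset = (
        tf.data.Dataset.from_generator(
            gen,
            output_signature=tf.TensorSpec(shape=(4, None), dtype=tf.float32),
        )
        # padded_batch zero-pads only up to the longest sequence in each
        # batch, so the array never has to be padded to the global maximum.
        .padded_batch(32, padded_shapes=(4, None))
        .prefetch(tf.data.AUTOTUNE)
    )

    # model.fit(dataset, ...)  # batches can go straight to a Keras model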

Any help would be greatly appreciated. Thanks!

submitted by /u/RAiD78
[visit reddit] [comments]