Packt has published “The TensorFlow Workshop”.
Grab your digital copy now if you’re interested.
As part of our marketing activities, we are offering free digital copies of the book in return for unbiased feedback in the form of a reader review.
Get started with TensorFlow fundamentals to build and train deep learning models with real-world data, practical exercises, and challenging activities.
Here is what you will learn from the book:
Key Features
Please comment below or DM me for more details
submitted by /u/RoyluisRodrigues
It was memories of playing Pac-Man and Super Mario Bros while growing up in Colombia’s sprawling capital of Bogotá that inspired Yenifer Macias’s award-winning submission for the #CreateYourRetroverse contest. The contest asked NVIDIA Omniverse users to share scenes that visualize where their love for graphics began. For Macias, that passion goes back to…
The post 3D Artist Turns Hobby Into Career, Using Omniverse to Turn Sketches Into Masterpieces appeared first on The Official NVIDIA Blog.
Data centers need extremely fast storage access, and no DPU is faster than NVIDIA’s BlueField-2. Recent testing by NVIDIA shows that a single BlueField-2 data processing unit reaches 41.5 million input/output operations per second (IOPS), more than 4x the IOPS of any other DPU. The BlueField-2 DPU delivered record-breaking performance using standard networking protocols…
The post NVIDIA BlueField Sets New World Record for DPU Performance appeared first on The Official NVIDIA Blog.
It could only happen in NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows. And it happened during an interview with a virtual toy model of NVIDIA’s CEO, Jensen Huang. “What are the greatest …” one of Toy Jensen’s creators asked, stumbling, then stopping before completing his scripted question. Unfazed, …
The post How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC appeared first on The Official NVIDIA Blog.
I have an unusual issue that I can’t solve because nothing comes up on Google.
My data are one-hot encoded DNA sequences of varying length. They are easily stored in a jagged NumPy array (4 x n x m), where n = number of samples and m = sequence length (which varies). However, the size requirement after zero-padding the entire array to the max sequence length is insane, and I need to avoid doing that.
The solution I have thought up is as follows:
Any help would be greatly appreciated. Thanks!
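For what it’s worth, a common way around global zero-padding is to keep the sequences as a Python list (or a tf.RaggedTensor) and let tf.data pad each batch only to that batch’s longest sequence. A minimal sketch, assuming each sequence is stored as a (length, 4) array rather than (4, length):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data: one-hot DNA sequences of varying length, each (length, 4).
rng = np.random.default_rng(0)
sequences = [np.eye(4, dtype=np.float32)[rng.integers(0, 4, size=n)]
             for n in (100, 250, 180, 90)]

# Stream the jagged data through a generator; no globally padded array is built.
ds = tf.data.Dataset.from_generator(
    lambda: iter(sequences),
    output_signature=tf.TensorSpec(shape=(None, 4), dtype=tf.float32))

# Each batch is padded only to the longest sequence in that batch.
ds = ds.padded_batch(batch_size=2, padded_shapes=[None, 4])

for batch in ds:
    print(batch.shape)  # e.g. (2, 250, 4); varies per batch
```

Since the padded rows are all zeros, a tf.keras.layers.Masking(mask_value=0.0) layer (or a mask-aware RNN) can ignore them downstream.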
submitted by /u/RAiD78
Hi,
I’m working on a project that uses the Edge TPU, and I need to use appropriate models [1] converted from TensorFlow. I need face recognition and decided to use a FaceNet implementation, taking the model from [2]. I have it working on my PC, but when I tried to convert the model to tflite and compile it for the Edge TPU (using the steps presented in [3]), all of the resulting embeddings (output tensors) came out identical. They are in the proper form (128-D vectors with uint8 values, as opposed to the float32 values of the TF model), but they are all the same.
Does anyone have any idea what might be causing this? Is such a conversion impossible on an already pre-trained model, or am I missing something obvious?
References:
[1] https://coral.ai/docs/edgetpu/models-intro/
[2] https://github.com/nyoki-mtl/keras-facenet
[3] https://www.tensorflow.org/lite/performance/post_training_integer_quant
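A frequent cause of constant outputs after full-integer quantization is a representative dataset that doesn’t match the model’s real input distribution (e.g. random data, or face crops without the prewhitening the FaceNet model expects), which can make the calibrated activation ranges collapse. A minimal sketch of the post-training integer quantization path described in [3]; the model path and calibration_images are hypothetical placeholders:

```python
import tensorflow as tf

model = tf.keras.models.load_model("facenet_keras.h5")  # hypothetical path

# Representative dataset: real, preprocessed face crops, not random data.
# If these samples aren't preprocessed exactly like the training inputs,
# the calibration ranges can be wrong and outputs may collapse to a constant.
def representative_dataset():
    for img in calibration_images[:200]:  # hypothetical array of face crops
        yield [img[None].astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```

It can also help to check the tflite model’s outputs on the CPU before running edgetpu_compiler, to tell whether the quantization step or the Edge TPU compilation introduced the problem.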
submitted by /u/surprajs
Meet the electric vehicle that’s truly future-proof. Electric automaker NIO took the wraps off its fifth mass-production model, the ET5, during NIO Day 2021 last week. The mid-size sedan borrows from its luxury and performance predecessors for an intelligent vehicle that’s as agile as it is comfortable. Its AI features are powered by the NIO Adam…
The post Living in the Future: NIO ET5 Sedan Designed for the Autonomous Era With NVIDIA DRIVE Orin appeared first on The Official NVIDIA Blog.
Imagine picking out a brand new car — only to find a chip in the paint, rip in the seat fabric or mark in the glass. AI can help prevent such moments of disappointment for manufacturers and potential buyers. Mariner, an NVIDIA Metropolis partner based in Charlotte, North Carolina, offers an AI-enabled video analytics system…
The post Detect That Defect: Mariner Speeds Up Manufacturing Workflows With AI-Based Visual Inspection appeared first on The Official NVIDIA Blog.
In this example, Using pre-trained word embeddings | Keras, we can see that initializing the embedding layer with pre-trained word embeddings boosts the model’s performance. But before doing that, they remove the tokens that are not available in the current dataset, and I wonder whether that is helpful. If we kept all the tokens, wouldn’t that be more useful for classifying text? In that case the unknown words would also get some representation, which would help in the downstream classification. Why aren’t we taking advantage of this? Correct me if I am mistaken.
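For reference, here is a minimal sketch of the alternative the question proposes: keep every token in the embedding matrix and randomly initialize the ones missing from the pre-trained index, instead of dropping them. The embeddings_index and vocab stand-ins below are hypothetical placeholders for the objects built in the Keras example:

```python
import numpy as np

# Hypothetical stand-ins for the Keras example's objects:
embeddings_index = {"the": np.ones(100, dtype=np.float32)}  # token -> vector
vocab = ["the", "unseenword"]  # vocabulary from the TextVectorization layer

embedding_dim = 100
num_tokens = len(vocab) + 2  # indices 0 and 1 reserved for padding and OOV
embedding_matrix = np.zeros((num_tokens, embedding_dim), dtype=np.float32)

rng = np.random.default_rng(0)
for i, word in enumerate(vocab, start=2):
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector
    else:
        # Keep the token with a small random vector instead of dropping it.
        embedding_matrix[i] = rng.normal(scale=0.1, size=embedding_dim)
```

One caveat: this only pays off if the Embedding layer is trainable. The Keras example freezes the layer (trainable=False), so randomly initialized rows would stay noise forever, which may be why unmatched tokens get no special treatment there.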
submitted by /u/hafizcse031