Categories
Misc

Help setting up tf-GPU and cuDNN.

I am trying to get my GPU to train. GTX 1660 Ti, TF 2.4.1, CUDA 11.2, Python 3.8.7.

My NN was taking 15 minutes per epoch on some dummy data, so I am setting up GPU training. At one point I got through 13 epochs before it got stuck (maybe it ran out of memory?). Many GitHub resolutions later, I am stuck at one of two errors:

CUBLAS_STATUS_ALLOC_FAILED
CUDNN_STATUS_EXECUTION_FAILED

The only tickets I have found online were resolved by setting a memory limit or setting “allow_growth” to true. Twice this has gotten me past the first error, but it isn’t working consistently. Either way, I ultimately end up at the second error.

Has anyone encountered this and not had the widely reported solution work? Thanks in advance if anyone can help me. I’ve spent way too long trying to get this going and have finally run out of ways to google.
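For reference, the widely reported fix mentioned above looks roughly like this — a minimal sketch assuming TF 2.x, where memory growth must be configured before any GPU is first used:

```python
import tensorflow as tf

# List the GPUs visible to TensorFlow (empty list on a CPU-only machine).
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    # Allocate GPU memory incrementally instead of grabbing it all up front.
    tf.config.experimental.set_memory_growth(gpu, True)

# Alternatively, hard-cap the memory TensorFlow may use (hypothetical 4 GB):
# tf.config.set_logical_device_configuration(
#     gpus[0], [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
```

Note that growth alone does not prevent out-of-memory failures if the model or batch size genuinely exceeds the card's 6 GB; reducing batch size is the other common lever.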

submitted by /u/skeerp


Can anybody help me with running the Python version of rwightman’s PoseNet (TensorFlow.js) port?

GitHub link: https://github.com/ArimaValanImmanuel/posenet-python

submitted by /u/Section_Disastrous


Upgrading to TF2 modifies all my .py files

I’m upgrading my TensorFlow version using their tf_upgrade_v2 script. I don’t run into any issues, but all my Python files register as “modified” in git, even when there are no changes. I poked around Google and some people mentioned that my file permissions may be changing, so I set core.filemode to false in my .git/config and retried the upgrade, but I am still seeing file changes. I diffed the files and see zero changes. I believe it could be the EOL, but I tried setting core.autocrlf to false as well, and that still shows all these files as modified. Has anyone encountered this? Running Ubuntu 20.04.1.

submitted by /u/Woodhouse_20


How can I convert a TensorFlow Dataset into a Pandas DataFrame?

I have a tf dataset with images and labels and want to convert it to a Pandas DataFrame, since that’s the object required by the AzureML pipeline designer.

I’m a beginner working with tensorflow and after googling for a
couple of hours I haven’t found anything.

I’d appreciate any tips on how to do this.
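One common approach is simply to iterate the dataset eagerly and collect one dict per example — a minimal sketch, using a small random tensor dataset as a stand-in for the real images:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Hypothetical stand-in for a dataset of (image, label) pairs.
images = np.random.rand(10, 4, 4, 1).astype("float32")
labels = np.arange(10)
ds = tf.data.Dataset.from_tensor_slices((images, labels))

# One row per example; images are flattened to lists because a
# DataFrame cell holds a scalar or a (nested) list, not a tensor.
rows = [{"image": img.numpy().flatten().tolist(), "label": int(lbl.numpy())}
        for img, lbl in ds]
df = pd.DataFrame(rows)
```

If the dataset comes from the tensorflow_datasets package, `tfds.as_dataframe(ds)` offers a one-line alternative. Beware that materializing a large image dataset into a DataFrame can exhaust memory; take a `.take(n)` slice first if needed.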

submitted by /u/juliansorel


I published part 1 of a tutorial that shows how to transform vanilla autoencoders into variational autoencoders

Autoencoders have a number of limitations for generative tasks.
That’s why they need a power-up to become Variational
Autoencoders. In my new video, I explain the first step to
transform an autoencoder into a VAE. Specifically, I discuss how
VAEs use multivariate normal distributions to encode input data
into a latent space and why this is awesome for generative tasks.
Don’t worry – I also explain what multivariate normal
distributions are!
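The sampling step described above — encoding each input as a multivariate normal rather than a single point — boils down to the reparameterization trick. A minimal NumPy sketch (variable names are illustrative, not taken from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * epsilon, epsilon ~ N(0, I)."""
    epsilon = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * epsilon

# Encoder outputs for one input: mean and log-variance of a 2-D latent.
mu, log_var = np.zeros(2), np.zeros(2)
z = sample_latent(mu, log_var)  # one draw from N(mu, diag(exp(log_var)))
```

Writing the randomness as an external epsilon is what keeps the sampling step differentiable with respect to mu and log_var, which is why VAEs can be trained end to end.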

This video is part of a series called “Generating Sound with
Neural Networks”. In this series, you’ll learn how to generate
sound from audio files and spectrograms 🎧 🎧 using Variational
Autoencoders 🤖 🤖

Here’s the video:


https://www.youtube.com/watch?v=b8AzCgY1gZI&list=PL-wATfeyAMNpEyENTc-tVH5tfLGKtSWPp&index=9

submitted by /u/diabulusInMusica


What does the shape of a spectrogram really mean?


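In short, a spectrogram's shape is (time_frames, frequency_bins): one FFT per overlapping window of the signal. A toy short-time Fourier transform in NumPy makes the two axes concrete (the frame and hop sizes here are illustrative):

```python
import numpy as np

# 1 second of audio at 16 kHz, windowed into overlapping frames.
signal = np.random.randn(16000)
frame_len, hop = 512, 256

n_frames = 1 + (len(signal) - frame_len) // hop
frames = np.stack([signal[i * hop : i * hop + frame_len]
                   for i in range(n_frames)])

# One real-input FFT per frame: rows = time frames, columns = freq bins.
spectrogram = np.abs(np.fft.rfft(frames, axis=1))

# frequency_bins = frame_len // 2 + 1 for a real-input FFT
print(spectrogram.shape)  # (61, 257)
```

So the first axis counts hops through time and the second counts frequencies from 0 Hz up to the Nyquist frequency (8 kHz here).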
submitted by /u/Metecko


Can’t use TensorFlow 2 (I need TF2, can’t use TF1) because no protobuf version works with it.

If I have any version of protobuf except 3.6.0, I get “ImportError: DLL load failed: The specified procedure could not be found”, but if I use protobuf 3.6.0 I get “AttributeError: ‘google.protobuf.pyext._message.RepeatedCompositeCo’ object has no attribute ‘append’”. This error occurs when I try to build the model.

I have tried every 2.x version of tensorflow, have reinstalled Python 3.6, and have made sure my path variables are correct. I can find no useful information on the internet. I have tried countless versions of protobuf. Please help! I have no clue what is going on.

Maybe upgrade Python 3.6 to 3.7? I have previously had tensorflow 2.x working on Python 3.7, but I don’t know.
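When juggling versions like this, one low-tech sanity check is to confirm which builds are actually active in the current environment, which is where most protobuf/TensorFlow mismatches hide. A sketch (requires Python 3.8+; on the poster's Python 3.6, the backported `importlib_metadata` package would be needed instead):

```python
from importlib.metadata import PackageNotFoundError, version

# Report the versions actually installed in this interpreter's environment.
for pkg in ("protobuf", "tensorflow"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```

A stale protobuf DLL left over from a previous install (e.g. in a different site-packages on PATH) can produce exactly this kind of DLL-load error even when pip reports the expected version.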

submitted by /u/FunnyForWrongReason


Any tutorials that you can recommend?

So I understood the attention mechanism (the Bahdanau attention paper) and I was looking for an implementation of the paper, and then I landed on the TensorFlow website, which has a tutorial on the attention mechanism. But frankly speaking, I found the code very hard to understand. Are there any tutorials you can share that will help me understand the code of the attention mechanism?
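Stripped of the framework plumbing, Bahdanau (additive) attention is only a few lines. A minimal NumPy sketch, with illustrative shapes and randomly initialized weights (not the tutorial's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def bahdanau_attention(query, keys, W1, W2, v):
    # Additive score: score_i = v^T tanh(W1 q + W2 k_i)
    scores = np.tanh(query @ W1 + keys @ W2) @ v
    # Softmax over the T source positions -> attention weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of the encoder states
    return weights, weights @ keys

d, units, T = 8, 16, 5                 # hidden size, attention units, source length
query = rng.standard_normal(d)         # decoder state at one step
keys = rng.standard_normal((T, d))     # encoder states, one per source position
W1 = rng.standard_normal((d, units))
W2 = rng.standard_normal((d, units))
v = rng.standard_normal(units)
weights, context = bahdanau_attention(query, keys, W1, W2, v)
```

The TensorFlow tutorial's `W1`, `W2`, and `V` Dense layers correspond directly to the three matrices here; the rest of its code is batching and the decoder loop.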

submitted by /u/Consistent_Ad767


Fastest way to develop a custom translation model with RNN?

I’m a Python web developer, so I have some professional coding
experience, but I’m a complete novice when it comes to machine
learning.

In short, I have a dataset (CSV form) with 65,000 sentences in two languages. One of the languages is real; the other is not. I’d love to quickly dive into an RNN example online so that I can train a model on this dataset, but all of the examples seem to prefer that I use existing, binary datasets (that I can’t read).

My laptop is relatively old, and processing a dataset properly
can take a week, so every example I’ve attempted to adapt to my
needs has cost lots of time and lots of heartache when I discover
that I can’t use it.

Is there an RNN translation tutorial that anyone would recommend
for the purpose of translating between an existing corpus and a
constructed language? I can do research on any terms listed below,
but the topic of machine learning has so regularly stumped me that,
even though I know easy examples for what I want to do probably
already exist, I don’t even know where to start.
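For what it's worth, getting a two-column CSV of sentence pairs into the integer sequences an RNN tutorial expects is only a few lines of plain Python — a sketch with a hypothetical in-memory CSV standing in for the real file:

```python
import csv
import io

# Hypothetical two-column CSV: source sentence, target sentence.
raw = io.StringIO("hello world,olleh dlrow\ngood night,doog thgin\n")
pairs = [(src, tgt) for src, tgt in csv.reader(raw)]

# Word-level vocabularies -- the usual first step before an RNN encoder.
src_vocab = sorted({tok for src, _ in pairs for tok in src.split()})
tgt_vocab = sorted({tok for _, tgt in pairs for tok in tgt.split()})

# Map tokens to integer ids (0 reserved for padding), then encode.
src_index = {tok: i + 1 for i, tok in enumerate(src_vocab)}
encoded = [[src_index[t] for t in src.split()] for src, _ in pairs]
```

From there, most seq2seq tutorials (e.g. the TensorFlow NMT-with-attention one) can be pointed at these lists instead of their bundled dataset, since the dataset-specific part is usually just this loading step.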

Thank you for your time!

submitted by /u/ehowardhill


How to Optimize Self-Driving DNNs with TensorRT

Register for our upcoming webinar to learn how to use TensorRT to optimize autonomous driving DNNs for robust AV development.

When it comes to autonomous vehicle development, to ensure the highest level of safety, one of the most important areas of evaluation is performance.

High-performance, energy-efficient compute enables developers to balance the complexity, accuracy and resource consumption of the deep neural networks (DNN) that run in the vehicle. Getting the most out of hardware computing power requires optimized software.

NVIDIA TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high-throughput for deep learning inference applications, such as autonomous driving.

You can register for our upcoming webinar on Feb. 3 to learn how to use TensorRT to optimize autonomous driving DNNs for robust autonomous vehicle development.

Manage Massive Workloads

DNN-based workloads in autonomous driving are incredibly complex, with a variety of computation-intensive layer operations just to perform computer vision tasks. 

Managing these types of operations requires optimized compute performance; however, the theoretical peak performance of hardware doesn’t always translate into performance achievable in software. TensorRT ensures developers can tackle these massive workloads without leaving any performance on the table.

By performing optimization at every stage of processing — from tooling, to ingesting DNNs, to inference — TensorRT ensures the most efficient operations possible.

The SDK is also seamless to use, allowing developers to toggle different settings depending on the platform. For example, lower precision (FP16 or INT8) can be used to achieve higher compute throughput and lower memory bandwidth on Tensor Cores. In addition, workloads can be shifted from the GPU to the deep learning accelerator (DLA).
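The precision trade-off behind that throughput gain can be seen without any GPU at all. A NumPy illustration (not TensorRT code):

```python
import numpy as np

# Near 1.0, float32 resolves increments of ~1.2e-7, while float16 only
# resolves ~9.8e-4. Lower precision buys throughput at the cost of
# resolution like this, which is why calibration matters for INT8/FP16.
fp32 = np.float32(1.0) + np.float32(1e-4)  # the small increment survives
fp16 = np.float16(1.0) + np.float16(1e-4)  # the increment is rounded away
print(float(fp32) > 1.0, float(fp16) == 1.0)  # True True
```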

Master the Model Backbone

This webinar will show how TensorRT for AV development works in action, tackling one of the chunkiest portions in the inference pipeline — the model backbone.

Many developers use off-the-shelf model backbones (for example, ResNets or EfficientNets) to get started on solving computer vision tasks such as object detection or semantic segmentation. However, these backbones aren’t always performance-optimized, creating bottlenecks down the line. TensorRT addresses these problems by optimizing trained neural networks to generate deployment-ready inference engines that maximize GPU inference performance and power efficiency.

Learn from NVIDIA experts how to leverage these tools in autonomous vehicle development. Register today for the Feb. 3 webinar, plus catch up on past TensorRT and DriveWorks webinars.