Categories
Misc

Tensorflow-gpu makes everything slow

Hello,

I hope it is okay to post this question here. I have installed tensorflow-gpu through the Anaconda Navigator because I have an RTX 3090 that I would like to use. However, when using the environment where I have tensorflow-gpu installed, everything is super slow. Just executing the model without training takes forever. Even something as simple as

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()

model.add(Dense(10, activation="relu"))

model.add(Dense(1, activation="sigmoid"))

model.compile(optimizer="rmsprop", loss="binary_crossentropy")
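For anyone hitting the same symptom: a quick, generic sanity check (a sketch, not part of the original setup) is to confirm that TensorFlow actually sees the GPU at all. With an Ampere card like the RTX 3090, a tensorflow-gpu build compiled against CUDA 10 has to JIT-compile every kernel from PTX, which can make even trivial models appear to hang.

import tensorflow as tf

# An empty list here means TensorFlow silently fell back to the CPU.
print(tf.config.list_physical_devices('GPU'))

# Optionally log which device each op is actually placed on.
tf.debugging.set_log_device_placement(True)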

Does anyone have a clue what might be the issue? Thank you in advance!

submitted by /u/0stkreutz

Categories
Misc

Jetson Project of the Month: Creating Intelligent Music with the Neurorack Deep AI-based Synthesizer

An image of the Neurorack synthesizer with NVIDIA Jetson Nano.

This Jetson Project of the Month enhances synthesizer-based music by applying deep generative models to a classic Eurorack machine.

Are you a fan of synthesizer-driven bands like Depeche Mode, Erasure, or Kraftwerk? Did you ever think of how cool it would be to create your own music with a synthesizer at home? And what if that process could be enhanced with the help of NVIDIA Jetson Nano?  

The latest Jetson Project of the Month has found a way to do just that, bringing together a Eurorack synthesizer with a Jetson Nano to create the Neurorack. This musical audio synthesizer is the first to combine the power of deep generative models and the compactness of a Eurorack machine.

“The goal of this project is to design the next generation of musical instruments, providing a new tool for musicians while enhancing the musician’s creativity. It proposes a novel approach to think [about] and compose music,” noted the app developers, who are members of the Artificial Creative Intelligence and Data Science (ACIDS) group, based at the IRCAM Laboratory in Paris, France. “We deeply think that AI can be used to achieve this quest.”

The real-time capabilities of the Neurorack rely on Jetson Nano’s processing power and Ninon Devis’ research into crafting trained models that are lightweight in both computation and memory footprint.

“Our original dream was to find a way to miniaturize deep models and allow them inside embedded audio hardware and synthesizers. As we are passionate about all forms of synthesizers, and especially Eurorack, we thought that it would make sense to go directly for this format as it was more fun! The Jetson Nano was our go-to choice right at the onset … It allowed us to rely on deep models without losing audio quality, while maintaining real-time constraints,” said Devis.

Watch a demo of the project in action here:

The developers had several key design considerations as they approached this project, including:

  • Musicality: the generative model chosen can produce sounds that are impossible to synthesize without using samples.
  • Controllability: the interface they picked is handy and easy to manipulate.
  • Real-time: the hardware behaves like a traditional synthesizer and is equally reactive.
  • Standalone operation: it can be played without a computer.

As the developers note in their NVIDIA Developer Forum post about this project: “The model is based on a modified Neural-Source Filter architecture, which allows real-time descriptor-based synthesis of percussive sounds.”

The Neurorack uses PyTorch deep audio synthesis models (see Figure 1) to produce sounds that would typically require samples, is easy to manipulate, and doesn’t require a separate computer.

A diagram showing how the Neurorack is architected with its own hardware and NVIDIA Jetson Nano.
Figure 1: The diagram shows the overall structure of the module and the relations between the hardware and software (green) components.

The hardware features four control voltage (CV) inputs and two gates (along with a screen, rotary, and button for handling the menus), which all communicate with specific Python libraries. The behavior of these controls (and of the module itself) is highly dependent on the type of deep model embedded. For this first version of the Neurorack, the developers implemented a descriptor-based impact sound generator, described in their GitHub documentation.

The Eurorack hardware and software were developed with equal contributions from Ninon Devis, Philippe Esling, and Martin Vert on the ACIDS team. According to their website, ACIDS is “hell-bent on trying to model musical creativity by developing innovative artificial intelligence models.”

The project code and hardware design are free, open-source, and available in their GitHub repository.

The team hopes to make the project accessible to musicians and people interested in AI/embedded computing as well.

“We hope that this project will raise the interest of both communities! Right now reproducing the project is slightly technical, but we will be working on simplifying the deployment and hopefully finding other weird people like us,” Devis said. “We strongly believe that one of the key aspects in developing machine learning models for music will lead to the empowerment of creative expression, even for nonexperts.”

More detail on the science behind this project is available on their website and in their academic paper.

Two of the team members, Devis and Esling, have formed a band using the instruments they developed. They are currently working on a full-length live act featuring the Neurorack, and they plan to perform at the next SynthFest in France this April.

Sign up now for Jetson Developer Day, taking place at NVIDIA GTC on March 21. This full-day event, led by world-renowned experts in robotics, edge AI, and deep learning, will give you a unique deep dive into building next-generation AI-powered applications and autonomous machines.

Categories
Misc

How would you make an intentionally bad CNN?

Hey folks,

I’m a data science lecturer, and for one of my assignments this year I want to challenge my students to fix and optimise a CNN coded in Keras/TF. The gist is that I need to code up a model that is BAD: something full of processing bottlenecks to slow it down, and hyperparameters that hamper the model’s ability to learn anything. The students will get the model and will be tasked with “fixing” it: tidying up the input pipeline so that it runs efficiently and adjusting the model parameters so that it actually fits properly.

I have a few ideas already, mostly setting up the input pipeline in a convoluted order, using suboptimal activations, etc. But I’m curious to hear other suggestions!
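As a rough illustration only (a hypothetical sketch, not from the post), a deliberately bad Keras CNN along those lines might combine sigmoid activations everywhere, no pooling or normalization, a huge learning rate, and an input pipeline with a batch size of 1 and no parallelism, caching, shuffling, or prefetching:

import tensorflow as tf
from tensorflow.keras import layers, models

# Deliberately bad input pipeline: batch size of 1, no shuffle/cache/prefetch,
# and pointless per-image work on the critical path.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()

def slow_preprocess(image, label):
    image = tf.image.resize(tf.cast(image, tf.float32), (128, 128))
    image = tf.image.resize(image, (32, 32))  # resize up and back down for no reason
    return image, label

ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
      .map(slow_preprocess)   # no num_parallel_calls
      .batch(1))              # tiny batches, no shuffle, no prefetch

# Deliberately bad model: sigmoid activations invite vanishing gradients,
# and there is no pooling, normalization, or dropout anywhere.
model = models.Sequential([
    layers.Conv2D(8, 3, activation="sigmoid", input_shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="sigmoid"),
    layers.Flatten(),
    layers.Dense(512, activation="sigmoid"),
    layers.Dense(10, activation="softmax"),
])

# Deliberately bad hyperparameters: enormous learning rate with plain SGD.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=5.0),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(ds, epochs=1)

The students’ job would then be the inverse of each comment: parallel, shuffled, prefetched batches, ReLU activations with pooling, and a sane optimizer.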

submitted by /u/Novasry

Categories
Misc

TF2 Source code for Custom training loop with "Custom layers", "XLA compiling", "Distributed learning", and "Gradient accumulator"

Hi, guys 🤗

I just want to share my GitHub repository for a custom training loop with “custom layers,” “XLA compiling,” “distributed learning,” and a “gradient accumulator.”

As you know, TF2 operates better on a static graph, so TF2 with XLA compiling is easy and powerful. However, to my knowledge, there is no source code or tutorial for XLA compiling with distributed learning. Also, TF2 doesn’t natively provide a gradient accumulator, which is a well-known strategy for users with small hardware.

My source code provides all of these and makes it possible to train ResNet-50 with a mini-batch size of 512 on two 1080 Ti GPUs. All parts are XLA compiled, so the training loop is reasonably fast considering the old-fashioned GPUs.
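The combination is easiest to see in a stripped-down, single-GPU form. The sketch below is not the repository’s code, just a minimal illustration of pairing an XLA-compiled train step (tf.function(jit_compile=True)) with manual gradient accumulation; it assumes TF 2.5 or later, and the optimizer settings and ACCUM_STEPS value are placeholders:

import tensorflow as tf

ACCUM_STEPS = 4  # hypothetical: accumulate 4 micro-batches per weight update

model = tf.keras.applications.ResNet50(weights=None, classes=1000,
                                       classifier_activation=None)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# One accumulator variable per trainable weight.
accum = [tf.Variable(tf.zeros_like(v), trainable=False)
         for v in model.trainable_variables]

@tf.function(jit_compile=True)  # XLA-compile the forward/backward pass
def accumulate_step(images, labels):
    with tf.GradientTape() as tape:
        logits = model(images, training=True)
        loss = loss_fn(labels, logits) / ACCUM_STEPS
    grads = tape.gradient(loss, model.trainable_variables)
    for acc, grad in zip(accum, grads):
        acc.assign_add(grad)
    return loss

@tf.function  # apply the accumulated gradients, then reset the accumulators
def apply_step():
    grads = [acc.value() for acc in accum]  # read the accumulated sums
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    for acc in accum:
        acc.assign(tf.zeros_like(acc))

# Training loop: accumulate ACCUM_STEPS micro-batches, then update once.
# for step, (images, labels) in enumerate(dataset):
#     accumulate_step(images, labels)
#     if (step + 1) % ACCUM_STEPS == 0:
#         apply_step()

Distributing this over two GPUs would normally go through tf.distribute.MirroredStrategy, with the variables created under the strategy scope and the steps invoked via strategy.run.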

Actually, this repository is the source code for a search-based filter pruning algorithm, so if you want to know more about it, please look through the README and the paper.

https://github.com/sseung0703/EKG

submitted by /u/sseung0703

Categories
Misc

Problems with version

submitted by /u/TxiskoAlonso

Categories
Misc

Problem with Tensorflow version

Hey there, I need to rewrite this code for my project but I don’t know how to do it. Can someone help me?

from tensorflow.contrib.layers import flatten

I am trying to run this code on jupyter notebooks

https://github.com/PooyaAlamirpour/TrafficSignClassifier
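For context: tf.contrib was removed in TensorFlow 2.x, so that import has no direct equivalent. A minimal sketch of the usual TF2 replacement for that one line (not a full port of the notebook) is the Keras Flatten layer or a reshape that keeps the batch dimension:

import tensorflow as tf

# TF1: from tensorflow.contrib.layers import flatten
x = tf.random.normal([32, 5, 5, 16])         # e.g. a conv feature map

flat = tf.keras.layers.Flatten()(x)          # shape (32, 400)
flat2 = tf.reshape(x, [tf.shape(x)[0], -1])  # same result without a layer object

print(flat.shape, flat2.shape)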

submitted by /u/TxiskoAlonso

Categories
Misc

Best Overall Training for TensorFlow2 Cert Prep

My interest in Reinforcement Learning is quickly turning into an obsession. That said, the video training around TensorFlow 2 Google cert prep seems to vary widely in content and quality.

I’ve been following along with Jose Portilla on Udemy and have begun going through the Packt Master AI books, and I’ve looked into the DeepLearning.AI TensorFlow Developer Professional Certificate course, but it doesn’t look appealing.

Can anyone recommend a course that helped them learn TensorFlow 2 and RL? I keep going down rabbit holes.

submitted by /u/Comfortable-Tale2992

Categories
Misc

Slow TF dataset generator

Hi All,

I’m facing a weird slowness issue when trying to use generators to create a dataset. Details: https://stackoverflow.com/questions/71459793/tensorflow-slow-processing-with-generator

Can someone from the community take a look at this generator code and help me understand what I’m doing wrong?

import time

import pandas as pd
import tensorflow as tf

def getSplit(original_list, n):
    return [original_list[i:i + n] for i in range(0, len(original_list), n)]

#
# 200 files -> 48 Mb (1 file)
# 15 files in memory at a time
# 5 generators
# 3 files per generator
#
def pandasGenerator(s3files, n=3):
    print(f"Processing: {s3files} to : {tf.get_static_value(s3files)}")
    s3files = tf.get_static_value(s3files)
    s3files = [str(s3file)[2:-1] for s3file in s3files]
    batches = getSplit(s3files, n)
    for batch in batches:
        t = time.process_time()
        print(f"Processing Batch: {batch}")
        panda_ds = pd.concat([pd.read_parquet(s3file) for s3file in batch], ignore_index=True)
        elapsed_time = time.process_time() - t
        print(f"base_read_time: {elapsed_time}")
        for row in panda_ds.itertuples(index=False):
            pan_row = dict(row._asdict())
            labels = pan_row.pop('label')
            yield dict(pan_row), labels
    return

def createDS(s3bucket, s3prefix):
    s3files = getFileLists(bucket=s3bucket, prefix=s3prefix)  # helper that lists the parquet files on S3
    dataset = (tf.data.Dataset.from_tensor_slices(getSplit(s3files, 40))
               .interleave(
                   lambda files: tf.data.Dataset.from_generator(
                       pandasGenerator,
                       output_signature=(
                           {},  # dict of per-feature TensorSpecs (elided here)
                           tf.TensorSpec(shape=(), dtype=tf.float64)),
                       args=(files, 3)),
                   num_parallel_calls=tf.data.AUTOTUNE
               )).prefetch(tf.data.AUTOTUNE)
    return dataset
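Not part of the post, but a small generic timing loop like the one below can help isolate whether the time is going into the pandas/parquet reads inside the generator or into the tf.data plumbing around them; the bucket and prefix are placeholders:

import time

def benchmark(dataset, num_batches=100):
    # Pull a fixed number of elements and report throughput.
    start = time.perf_counter()
    for _ in dataset.take(num_batches):
        pass
    elapsed = time.perf_counter() - start
    print(f"{num_batches / elapsed:.1f} elements/sec over {elapsed:.1f}s")

# benchmark(createDS("my-bucket", "my/prefix"))  # hypothetical bucket/prefix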

submitted by /u/h1t35hv1

Categories
Misc

Solving Indentation on VSCode with Ctrl+Alt+Down button

submitted by /u/g00phy
Categories
Misc

Try-On Tattoos

What model should I use, or what would anyone suggest, for try-on tattoos? I want the size of the try-ons to be adjustable.

submitted by /u/codamanicac