Categories
Misc

How would you make an intentionally bad CNN?

Hey folks,

I’m a data science lecturer, and for one of my assignments this year I want to challenge my students to fix and optimise a CNN coded in Keras/TF. The gist is that I need to code up a model that is BAD: something full of processing bottlenecks to slow it down, and hyperparameters that hamper the model’s ability to learn anything. The students will get the model and will be tasked with “fixing” it: tidying up the input pipeline so that it runs efficiently, and adjusting the model parameters so that it actually fits properly.

I have a few ideas already, mostly setting up the input pipeline in a convoluted order, using suboptimal activations, etc. But I’m curious to hear other suggestions!
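To show the kind of thing I mean on the hyperparameter side, here’s a toy illustration (plain Python, not from any particular model): an oversized learning rate is one of the simplest ways to stop a network learning, and the failure mode is visible even in 1-D gradient descent on f(x) = x², where any learning rate above 1.0 makes each step overshoot further than the last:

```python
# Minimal sketch: gradient descent on f(x) = x^2 (gradient 2x).
# With lr < 1.0 the iterate shrinks towards 0; with lr > 1.0 it
# overshoots further on every step and diverges -- the same failure
# mode a badly tuned optimizer shows on a real CNN loss surface.
def descend(lr, x=1.0, steps=20):
    for _ in range(steps):
        x = x - lr * 2 * x  # one gradient-descent update
    return x

print(abs(descend(0.1)))   # converges towards 0
print(abs(descend(1.5)))   # diverges: |x| doubles each step
```

In the same spirit on the pipeline side: batch size 1, shuffling after batching instead of before, no prefetch/cache on the tf.data pipeline, and sigmoid activations in deep conv stacks are all easy ways to bury problems for students to find.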

submitted by /u/Novasry

TF2 Source code for Custom training loop with "Custom layers", "XLA compiling", "Distributed learning", and "Gradient accumulator"

Hi, guys 🤗

I just want to share my GitHub repository for a custom training loop with “Custom layers,” “XLA compiling,” “Distributed learning,” and “Gradient accumulator.”

As you know, TF2 operates better on a static graph, so TF2 with XLA compiling is easy and powerful. However, to my knowledge, there is no source code or tutorial for XLA compiling with distributed learning. Also, TF2 doesn’t natively provide a gradient accumulator, which is a well-known strategy for users with small hardware.

My source code provides all of them and makes it possible to train ResNet-50 with a mini-batch size of 512 on two 1080 Tis. All parts are XLA-compiled, so the training loop is quite fast considering how old-fashioned these GPUs are.
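For anyone unfamiliar with the idea, gradient accumulation simulates a large batch on small hardware by summing gradients over k micro-batches and applying one averaged update. A framework-free sketch of the bookkeeping (plain Python, illustrative only; not the repo’s actual TF2 implementation):

```python
# Sketch of gradient accumulation: sum gradients over k micro-batches,
# then apply one averaged update, simulating a batch k times larger.
def accumulate_and_apply(param, micro_batch_grads, k, lr=0.1):
    accum = 0.0
    for step, g in enumerate(micro_batch_grads, start=1):
        accum += g                     # accumulate instead of applying
        if step % k == 0:              # every k-th micro-batch...
            param -= lr * (accum / k)  # ...apply the averaged gradient
            accum = 0.0                # ...and reset the accumulator
    return param

# Four micro-batch gradients with k=4 behave like a single update
# with their mean gradient (0.5 here).
print(accumulate_and_apply(1.0, [0.4, 0.6, 0.2, 0.8], k=4))
```

In TF2 the same idea is usually implemented by adding each tf.GradientTape gradient into persistent variables and only calling the optimizer’s apply_gradients every k steps.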

Actually, this repository is the source code for a search-based filter pruning algorithm, so if you want to know more about it, please take a look at the README and the paper.

https://github.com/sseung0703/EKG

submitted by /u/sseung0703

Problems with version

submitted by /u/TxiskoAlonso

Problem with Tensorflow version

Hey there, I need to rewrite this code for my project but I don’t know how to do it. Can someone help me?

from tensorflow.contrib.layers import flatten

I am trying to run this code in a Jupyter notebook:

https://github.com/PooyaAlamirpour/TrafficSignClassifier
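From what I’ve read, tf.contrib was removed in TF 2.x, so that import can’t work there; the usual replacement for contrib’s flatten is tf.keras.layers.Flatten() (or an explicit tf.reshape(x, (tf.shape(x)[0], -1))), which flattens every dimension except the leading batch dimension. Here’s a plain-Python sketch of that behaviour, just to show the semantics (illustrative only, not TF code):

```python
# Batch-preserving flatten: keep the leading (batch) dimension and
# flatten everything else -- the behaviour of contrib.layers.flatten
# in TF1 and of tf.keras.layers.Flatten in TF2.
def flatten_batch(batch):
    def flat(x):  # recursively flatten one sample to a 1-D list
        if not isinstance(x, list):
            return [x]
        out = []
        for item in x:
            out.extend(flat(item))
        return out
    return [flat(sample) for sample in batch]

# A batch of two 2x2 "feature maps" becomes two length-4 vectors.
print(flatten_batch([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]))
```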

submitted by /u/TxiskoAlonso

Slow TF dataset generator

Hi All,

I’m facing a weird slowness issue when trying to use generators for creating dataset. Details : https://stackoverflow.com/questions/71459793/tensorflow-slow-processing-with-generator

Can someone from the community take a look at this generator code and help me understand what I’m doing wrong?

    import time

    import pandas as pd
    import tensorflow as tf

    def getSplit(original_list, n):
        return [original_list[i:i + n] for i in range(0, len(original_list), n)]

    # 200 files -> 48 Mb (1 file)
    # 15 files in memory at a time
    # 5 generators
    # 3 files per generator
    def pandasGenerator(s3files, n=3):
        print(f"Processing: {s3files} to : {tf.get_static_value(s3files)}")
        s3files = tf.get_static_value(s3files)
        s3files = [str(s3file)[2:-1] for s3file in s3files]
        batches = getSplit(s3files, n)
        for batch in batches:
            t = time.process_time()
            print(f"Processing Batch: {batch}")
            panda_ds = pd.concat([pd.read_parquet(s3file) for s3file in batch],
                                 ignore_index=True)
            elapsed_time = time.process_time() - t
            print(f"base_read_time: {elapsed_time}")
            for row in panda_ds.itertuples(index=False):
                pan_row = dict(row._asdict())
                labels = pan_row.pop('label')
                yield dict(pan_row), labels
        return

    def createDS(s3bucket, s3prefix):
        s3files = getFileLists(bucket=s3bucket, prefix=s3prefix)
        dataset = (tf.data.Dataset.from_tensor_slices(getSplit(s3files, 40))
                   .interleave(
                       lambda files: tf.data.Dataset.from_generator(
                           pandasGenerator,
                           output_signature=({}, tf.TensorSpec(shape=(), dtype=tf.float64)),
                           args=(files, 3)),
                       num_parallel_calls=tf.data.AUTOTUNE)
                   .prefetch(tf.data.AUTOTUNE))
        return dataset

submitted by /u/h1t35hv1

Best Overall Training for TensorFlow2 Cert Prep

My interest in Reinforcement Learning is quickly turning into an obsession; that being said, the video training around TensorFlow2 Google Cert Prep seems to vary widely in content and quality.

I’ve been following along with Jose Portilla on Udemy and have begun going through the Packt Master AI books, and I’ve looked into the DeepLearning.AI TensorFlow Developer Professional Certificate course, but it doesn’t look appealing.

Can anyone recommend a course that helped them learn TensorFlow 2 and RL? I keep going down rabbit holes.

submitted by /u/Comfortable-Tale2992

Solving Indentation on VSCode with Ctrl+Alt+Down button

submitted by /u/g00phy

Try-On Tattoos

What model would anyone suggest for try-on tattoos? I want the size of the try-ons to be adjustable.

submitted by /u/codamanicac

Hello! Can anybody tell me how I can extract eye landmarks, like in dlib, using TensorFlow?

Hi! I am making a drowsiness-detection application and have to extract eye landmarks. Can anybody explain how I can extract them? Kindly help! Thank you!

submitted by /u/znoman09

Purpose of scope in tf.keras.LSTM layers?

I have a piece of code that I believe was written in TF2, but based on a repo written in TF1.

I am trying to run it in TF 1.15.

It expressly invokes a piece of LSTM code that causes an error (unknown parameter "scope"):

net = tf.keras.layers.LSTM(32, return_sequences=True, dropout=0.4, recurrent_dropout=0.4)(net, scope='lstm1', training=is_training)

net = tf.keras.layers.LSTM(32, dropout=0.4, recurrent_dropout=0.4)(net, scope='lstm2', training=is_training)

All of the other layers have their scope parameter defined as part of a custom layer definition (with tf.variable_scope(scope, reuse=reuse_weights) as sc).

Without the scope param in the LSTM layers, the kernel fails. I believe the problem is the lack of a custom layer definition for the LSTM layers, with the scope defined accordingly, but I’m not totally sure.

submitted by /u/dxjustice