Categories
Misc

Tensorflow Image Resize, Crop and Centering Advice

Hi there!

Looking for a way of manipulating something like this:


From image (3456px x 5184px)

To something like this:


To image (1500px x 1500px)

There are a decent number of variations on this, e.g. more or
less zoomed in (depending on garment length), front and back sides
of the garment, two different mannequins, some images without a
mannequin, etc. I have around 2500 garments, so around 5000 images
in total, front and back.

I’ve got some basic experience with TensorFlow and Keras, having
completed a traffic flow prediction project for uni, which fed past
traffic data into a stacked autoencoder network. I'm pretty
inexperienced in this area, though.

I have a few questions:

  1. Is it even something that I’d want to be doing with TensorFlow?
    It feels like something I could hack together by adding extra info
    to the image filenames and using a library like Pillow, but there
    are some variations which mean it may not work great in all
    circumstances; plus, using ML would be a more interesting
    project.
  2. If yes, I saw that TensorFlow has an image processing library
    which seems like what I need, but I’m unsure where to get
    started with it (see the sketch after this list).
  3. Are there any good examples/tutorials/videos focused on image
    manipulation like this?
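
To make the tf.image route concrete, here is a minimal sketch of a plain centre crop plus resize (the helper name and file path are made up, and it does nothing clever like detecting the garment before cropping):

import tensorflow as tf

def center_crop_and_resize(path, out_size=1500):
    # Hypothetical helper: naive centre crop to a square, then resize.
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    h, w = tf.shape(img)[0], tf.shape(img)[1]
    side = tf.minimum(h, w)                            # shorter edge of the photo
    top, left = (h - side) // 2, (w - side) // 2
    img = img[top:top + side, left:left + side, :]     # centre square crop
    img = tf.image.resize(img, [out_size, out_size])   # e.g. 1500 x 1500
    return tf.cast(img, tf.uint8)

# cropped = center_crop_and_resize("front_0001.jpg")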

I’ve done a bit of research but haven’t had much luck, so
feel free to call me an idiot if I’ve missed an obvious,
preexisting project or solution.

Any and all help would be greatly appreciated!

submitted by /u/ljackmanl

Categories
Misc

Predict from loaded BERT model

I was trying to make a prediction from a loaded TensorFlow
model, though I’m not sure whether I saved it correctly;
specifically, I have doubts about the code inside the
serving_input_fn() function (MAX_SEQ_LENGTH=128):

def serving_input_fn():
    feature_spec = {
        "input_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "input_mask": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "segment_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "label_ids": tf.FixedLenFeature([], tf.int64)
    }
    serialized_tf_example = tf.placeholder(
        dtype=tf.string, shape=[None], name='input_example_tensor')
    receiver_tensors = {'example': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

estimator.export_saved_model('gs://bucket/trained_model',
                             serving_input_receiver_fn=serving_input_fn)

When I try to predict from loaded model:

from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(LOAD_PATH)
input_features_test = convert_examples_to_features(
    test_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predictions = predict_fn({'example': input_features_test[0]})

it returns this error:

ValueError: Cannot feed value of shape () for Tensor
‘input_example_tensor:0’, which has shape ‘(?,)’

How should I change the serving_input_fn() method?
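
For what it’s worth, the serving signature above expects a 1-D batch of serialized tf.train.Example strings at input_example_tensor, so the client has to serialize each example and pass a list of those strings rather than a single InputFeatures object. A rough sketch of that (assuming the InputFeatures class from the BERT repo, with input_ids, input_mask, segment_ids and label_id attributes):

import tensorflow as tf

def serialize_example(f):
    # f is assumed to be a BERT InputFeatures object.
    feature = {
        "input_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=f.input_ids)),
        "input_mask": tf.train.Feature(int64_list=tf.train.Int64List(value=f.input_mask)),
        "segment_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=f.segment_ids)),
        "label_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=[f.label_id])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

# Feed a batch (shape (?,)) of serialized strings, not a single object (shape ()).
serialized = [serialize_example(f) for f in input_features_test]
predictions = predict_fn({'example': serialized})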

If you want to reproduce it: github_repo (you
should download the variables from
here
and put them in the trained_model/1608370941/ folder).


This is the tutorial I followed to fine-tune the BERT model on a
Google Cloud TPU.

submitted by /u/spaceape__

Categories
Misc

TensorFlow could not load library cudnn_ops_infer64_8.dll, error code 126

First of all, it was a “tensorflow-gpu test is false” issue for me,
but I managed to run the GitHub repo below, which is my goal.

https://github.com/cysmith/neural-style-tf

Then I ran into the error “Could not load library
cudnn_ops_infer64_8.dll. Error code 126. Please make sure
cudnn_ops_infer64_8.dll is in your library path!”

I have “cudnn_ops_infer64_8.dll” in my Downloads folder, because I
tried to match the right CUDA and TensorFlow versions for my GPU.

[cmd pic shows github repo works until error][1]

[1]: https://i.stack.imgur.com/Gw1Yl.png

tensorflow:2.3.0

python:3.7.9

CUDA:v10.1

cudnn:cudnn-10.1-windows10-x64-v7.5.0.56

gpu:nvidia 840m

I’m stuck at this point. I’m new to ML and TensorFlow and just
want to try a simple project. Thanks! ^^

Yes, I added “cudnn_ops_infer64_8.dll” to PATH as
c:downloads…bin. Nothing changed.
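
For reference, a quick sanity check along these lines shows whether TensorFlow sees the GPU at all and which CUDA/cuDNN versions the installed build was compiled against (assuming tf.sysconfig.get_build_info(), which should be available in TF 2.3):

import tensorflow as tf

# Does this TensorFlow build see the GPU, and what does it expect?
print(tf.config.list_physical_devices('GPU'))
info = tf.sysconfig.get_build_info()
print(info.get('cuda_version'), info.get('cudnn_version'))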

submitted by /u/elyakubu

Categories
Misc

Export fine-tuned BERT Model trained on Cloud TPU to HDF5 format

I’m using a Colab environment to fine-tune a BERT model (for
reference, this is the
Notebook
_with_Cloud_TPU_Sentence_Classification_Tasks.ipynb).
How can I export the fine-tuned model (it’s a TPUEstimator object) to
HDF5 format? I need to use the trained model locally on a CPU.

submitted by /u/spaceape__

Categories
Misc

Possibly serious issue with tf.image.per_image_standardization

I came across this issue in my own projects and found the
issue
linked here
on the TensorFlow GitHub, but I feel like it isn’t
getting much traction given the potential severity of the
problem.

Basically, there was a non-release push to TF between 1.14 and
1.15 that broke some functionality of the
tf.image.per_image_standardization routine when used on unsigned
integer inputs. The majority of the information content in images
ends up getting lost because of the naïve type conversions done in
per_image_standardization after 1.14. This isn’t addressed in the
documentation, and it is pretty clearly a change in behavior big
enough to warrant a major release, yet it was introduced outside of
one, which likely points to an untested edge case.

I’m concerned that the issue isn’t getting much traction even
though it could potentially impact labs all over the place. The
simple solution is to convert your unsigned int images to float
before calling per_image_standardization, but that isn’t obvious
from any of the documentation, and it used to be handled naturally
by the method.
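
A minimal sketch of that workaround (the image here is just a stand-in for whatever your decoder returns):

import tensorflow as tf

# Cast unsigned-integer images to float *before* standardizing, so the
# mean subtraction isn't truncated by integer arithmetic.
image_uint8 = tf.zeros([224, 224, 3], dtype=tf.uint8)  # stand-in for a decoded image
image_float = tf.cast(image_uint8, tf.float32)
standardized = tf.image.per_image_standardization(image_float)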

Thoughts?

Edit: formatting.

submitted by /u/DrSparkle713

Categories
Misc

[100% OFF] Object Detection Web App with TensorFlow, OpenCV and Flask
submitted by /u/codeeuler1

Categories
Misc

Model training stalls forever after just a few batches.

I posted
this as an issue on GitHub
; maybe someone here will have a
magic solution:

  • TensorFlow version: 2.4.0-rc4 (also tried with stable
    2.4.0)
  • TensorFlow Git version: v2.4.0-rc3-20-g97c3fef64ba
  • Python version: 3.8.5
  • CUDA/cuDNN version: CUDA 11.0, cuDNN 8.0.4
  • GPU model and memory: Nvidia RTX 3090, 24GB RAM

Model training regularly freezes for large models.

Sometimes the first batch or so works, but just a few batches
later training seems stuck in a loop. In my activity monitor I see
GPU CUDA usage hovering around 100%. This goes on for minutes or
more, with no more batches being trained.

I don’t see an OOM error, nor does it seem like I’m hitting
memory limits in activity monitor or nvidia-smi.

I would expect the first batch to take a bit longer and any
subsequent batch to take less than a second, never a random
batch that takes minutes or stalls forever.

Run through all the cells in the notebook shared below to
initialize the model, then run the final cell just a few times.
Eventually it will hang and never finish.


https://github.com/not-Ian/tensorflow-bug-example/blob/main/tensorflow%20error%20example.ipynb

Smaller models train quickly, as expected; however, I think even
then they eventually stall out after training many, many batches. I
had another similar small VAE, like the one in my example, that
trained for 5k-10k batches overnight before stalling.

Someone suggested I set a hard memory limit on the GPU like
this:

gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_virtual_device_configuration(
    gpus[0],
    [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024 * 23)])
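
A related knob from the same experimental config API, just as something else to try, is letting the allocator grow on demand instead of setting a hard cap (this has to run before the GPU is first used):

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Allocate GPU memory on demand rather than reserving it all up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)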

And finally, I’ve tried using the hacky ptxas.exe file from CUDA
11.1 in my CUDA 11.0 installation. This seems to remove a warning?
But still no change.

Open to any other ideas, thanks.

submitted by /u/Deinos_Mousike

Categories
Misc

Newbie here ^^; trying to build TensorFlow for an old GPU

I have a GeForce 840M, which is CUDA compute capability 5.0. My
project (https://github.com/dvschultz/neural-style-tf) depends on
TensorFlow, OpenCV, CUDA 7.5+ and cuDNN 5.0+.

I keep getting this error:

“W tensorflow/stream_executor/platform/default/dso_loader.cc:59]
Could not load dynamic library ‘cudart64_101.dll’; dlerror:
cudart64_101.dll not found”

TensorFlow doesn’t see my GPU.

1. Is it because I have a higher CUDA version installed than my GPU supports?

2. Is it because my TensorFlow version is 2.3.1?

Thanks.

submitted by /u/elyakubu

Categories
Misc

Inception to the Rule: AI Startups Thrive Amid Tough 2020

2020 served up a global pandemic that roiled the economy. Yet the startup ecosystem has managed to thrive and even flourish amid the tumult. That may be no coincidence. Crisis breeds opportunity. And nowhere has that been more prevalent than with startups using AI, machine learning and data science to address a worldwide medical emergency Read article >

The post Inception to the Rule: AI Startups Thrive Amid Tough 2020 appeared first on The Official NVIDIA Blog.

Categories
Misc

Shifting Paradigms, Not Gears: How the Auto Industry Will Solve the Robotaxi Problem

A giant toaster with windows. That’s the image for many when they hear the term “robotaxi.” But there’s much more to these futuristic, driverless vehicles than meets the eye. They could be, in fact, the next generation of transportation. Automakers, suppliers and startups have been dedicated to developing fully autonomous vehicles for the past decade, Read article >

The post Shifting Paradigms, Not Gears: How the Auto Industry Will Solve the Robotaxi Problem appeared first on The Official NVIDIA Blog.