Categories
Misc

Face-trained model works but detects all faces, not just the faces it was supposed to detect

Hey all!

I have trained a TensorFlow model with faces of people I want to detect.

It detects the people I trained the model on and gives their faces the correct labels. But if I point the webcam at a face that I did not train the model with, it still gives it the label of one of the people I trained the model with.

I’ve tried many things to stop this, but nothing has worked.

I can share all the code and faces I am trying to detect if needed, but is there any way to stop this?
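One common mitigation for this (an editor's illustration, not something the post describes) is to treat recognition as open-set: only accept a prediction when the model's top softmax probability clears a threshold, and otherwise report the face as unknown. A minimal sketch, with hypothetical class names and probabilities:

```python
def label_with_unknown(probs, class_names, threshold=0.8):
    """Return the predicted class name, or 'unknown' if the top
    softmax probability falls below the threshold."""
    top = max(range(len(probs)), key=lambda i: probs[i])
    if probs[top] < threshold:
        return "unknown"
    return class_names[top]

names = ["alice", "bob"]
print(label_with_unknown([0.95, 0.05], names))  # confident -> alice
print(label_with_unknown([0.55, 0.45], names))  # uncertain -> unknown
```

The threshold value is a tuning knob; faces the model was never trained on tend to produce flatter probability distributions, so a suitable cutoff can reject many (though not all) of them.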

Any advice is greatly appreciated! I’m still learning TensorFlow, and while I’m a little better than in my previous posts, I’m still learning!

Thanks!

submitted by /u/Adhesive_Hooks
[visit reddit] [comments]

Categories
Misc

HELP! Persisting CUDA error with tensorflow

Hi everyone. I’m trying to make TensorFlow use the NVIDIA GTX 1060 GPU in my laptop. I created a Python environment and installed tensorflow, python, pip, etc. I am using Ubuntu on Windows (WSL). In CMD, the nvidia-smi command shows my GPU, but with TensorFlow I get the following error:

2022-01-26 21:45:36.677191: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2022-01-26 21:45:36.678074: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (DESKTOP-P8QAQC0): /proc/driver/nvidia/version does not exist
Num GPUs Available: 0

I have CUDA 11.5 and 11.6 installed, with cuDNN 8.3.2.44. I manually copied the cuDNN files into the CUDA directory and also ran the installer .exe (though it didn’t seem to install any files). I am not sure what else to do. Help would be really appreciated!

submitted by /u/AryInd

Categories
Misc

New on NGC: Security Reports, Latest Containers for PyTorch, TensorFlow, HPC and More

This month the NGC catalog added new containers, model resumes, container security scan reports, and more to help identify and deploy AI software faster.

The NVIDIA NGC catalog is a hub for GPU-optimized deep learning, machine learning, and HPC applications. With highly performant software containers, pretrained models, industry-specific SDKs, and Jupyter Notebooks, this content helps simplify and accelerate end-to-end workflows.

New features, software, and updates to help you streamline your workflow and build your solutions faster on NGC include:

Model resumes

The NGC catalog offers state-of-the-art pretrained models that help you build your custom models faster with just a fraction of the training data.

Now, every model comes with a resume that provides information on model architecture, training parameters, training datasets, performance, and limitations to help you make informed decisions before downloading the model. They also include instructions on how to use the model so you can focus on AI development.

View the demo video and explore models for applications like speech and computer vision in various industries, including retail, healthcare, smart cities, and manufacturing.

Container security scan reports

All the container images in the NGC catalog are scanned for CVEs, malware, crypto keys, open ports, and more.

Now, the containers come with a security scan report, which provides a security rating of that image, breakdown of CVE severity by package, and links to detailed information on CVEs. 

Scan reports are available for the latest as well as previous versions of the images, and the entire NGC catalog is rescanned every 30 days. If you’re using an older version with high- or critical-severity issues, the scan report will flag the vulnerabilities and suggest remedies.

View the demo video for more details and explore application containers for deep learning, machine learning, and HPC.

TAO Toolkit

The latest version of the TAO Toolkit is now available for download. The TAO Toolkit, a CLI- and Jupyter-notebook-based version of TAO, brings together several new capabilities to help you speed up your model-creation process.

Key highlights include:

Deep learning software

The most popular deep learning frameworks for training and inference are updated monthly. Pull the latest version (v22.01) of:

M-Star CFD

M-Star CFD is a multiphysics modeling package used to simulate fluid flow, heat transfer, species transport, chemical reactions, particle transport, and rigid-body dynamics. 

M-Star CFD contains M-Star Build (to prepare models and specify simulation parameters), M-Star Solve (to run simulations), and M-Star Post (to render and plot data).

HPC applications

Latest versions of the popular HPC applications are also available in the NGC catalog.

Visit the NGC catalog to see how GPU-optimized software can help simplify workflows and speed up time to solution.

Categories
Misc

tensorflow_datasets … OverflowError? 😭

Hello. Although I have searched online, I don’t understand what’s wrong. Is my laptop not strong enough?? Is it because I am using Anaconda?? I was just trying to follow along with this tutorial: “Tensorflow – Convolutional Neural Networks: Evaluating the Model | Learn | freeCodeCamp.org” 🤷‍♀️

  • 1.) It is installed: Requirement already satisfied: colorama in c:\users\glass\anaconda3\lib\site-packages (from tqdm->tensorflow-datasets) (0.4.4)
  • 2.) Reset the kernel & tried to import: import tensorflow_datasets as tfds
  • 3.) The error:
    ~\anaconda3\lib\site-packages\tensorflow_datasets\vision_language\wit\wit.py in <module>
         23 import tensorflow_datasets.public_api as tfds
         24
    ---> 25 csv.field_size_limit(sys.maxsize)
         26
         27 _DESCRIPTION = """
    OverflowError: Python int too large to convert to C long
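For context (an editor's note, not part of the original post): this error is Windows-specific. `csv.field_size_limit` stores its argument in a C `long`, which is 32 bits on Windows, so `sys.maxsize` (2**63 − 1 on 64-bit Python) overflows it. A minimal sketch of the failure and a common workaround:

```python
import csv
import sys

# On Windows, a C long is 32 bits, so sys.maxsize overflows it;
# on Linux/macOS this call succeeds.
try:
    csv.field_size_limit(sys.maxsize)
except OverflowError:
    # Fall back to the largest value that fits in a 32-bit C long.
    csv.field_size_limit(2**31 - 1)

print(csv.field_size_limit())
```

It isn't the laptop or Anaconda; newer versions of `tensorflow_datasets` guard this call the same way.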

submitted by /u/spinach_pi

Categories
Offsites

Resolving High-Energy Impacts on Quantum Processors

Quantum processors are made of superconducting quantum bits (qubits) that — being quantum objects — are highly susceptible to even tiny amounts of environmental noise. This noise can cause errors in quantum computation that need to be addressed to continue advancing quantum computers. Our Sycamore processors are installed in specially designed cryostats, where they are sealed away from stray light and electromagnetic fields and are cooled down to very low temperatures to reduce thermal noise.

However, the world is full of high-energy radiation. In fact, there’s a tiny background of high-energy gamma rays and muons that pass through everything around us all the time. While these particles interact so weakly that they don’t cause any harm in our day-to-day lives, qubits are sensitive enough that even weak particle interactions can cause significant interference.

In “Resolving Catastrophic Error Bursts from Cosmic Rays in Large Arrays of Superconducting Qubits”, published in Nature Physics, we identify the effects of these high-energy particles when they impact the quantum processor. To detect and study individual impact events, we use new techniques in rapid, repetitive measurement to operate our processor like a particle detector. This allows us to characterize the resulting burst of errors as they spread through the chip, helping to better understand this important source of correlated errors.

The Dynamics of a High-Energy Impact
The Sycamore quantum processor is constructed with a very thin layer of superconducting aluminum on a silicon substrate, onto which a pattern is etched to define the qubits. At the center of each qubit is the Josephson junction, a superconducting component that defines the distinct energy levels of the qubit, which are used for computation. In a superconducting metal, electrons bind together into a macroscopic quantum state, which allows electrons to flow as a current with zero resistance (a supercurrent). In superconducting qubits, information is encoded in different patterns of oscillating supercurrent going back and forth through the Josephson junction.

If enough energy is added to the system, the superconducting state can be broken up to produce quasiparticles. These quasiparticles are a problem, as they can absorb energy from the oscillating supercurrent and jump across the Josephson junction, which changes the qubit state and produces errors. To prevent any energy from being absorbed by the chip and producing quasiparticles, we use extensive shielding for electric and magnetic fields, and powerful cryogenic refrigerators to keep the chip near absolute zero temperature, thus minimizing the thermal energy.

A source of energy that we can’t effectively shield against is high-energy radiation, which includes charged particles and photons that can pass straight through most materials. One source of these particles is the tiny amounts of radioactive elements that can be found everywhere, e.g., in building materials, the metal that makes up our cryostats, and even in the air. Another source is cosmic rays, which are extremely energetic particles produced by supernovae and black holes. When cosmic rays impact the upper atmosphere, they create a shower of high-energy particles that can travel all the way down to the surface and through our chip. Between radioactive impurities and cosmic ray showers, we expect a high-energy particle to pass through a quantum chip every few seconds.

When a high-energy impact event occurs, energy spreads through the chip in the form of phonons. When these arrive at the superconducting qubit layer, they break up the superconducting state and produce quasiparticles, which cause the qubit errors we observe.

When one of these particles impinges on the chip, it passes straight through and deposits a small amount of its energy along its path through the substrate. Even a small amount of energy from these particles is a very large amount of energy for the qubits. Regardless of where the impact occurs, the energy quickly spreads throughout the entire chip through quantum vibrations called phonons. When these phonons hit the aluminum layer that makes up the qubits, they have more than enough energy to break the superconducting state and produce quasiparticles. So many quasiparticles are produced that the probability of the qubits interacting with one becomes very high. We see this as a sudden and significant increase in errors over the whole chip as those quasiparticles absorb energy from the qubits. Eventually, as phonons escape and the chip cools, these quasiparticles recombine back into the superconducting state, and the qubit error rates slowly return to normal.

A high-energy particle impact (at time = 0 ms) on a patch of the quantum processor, showing error rates for each qubit over time. The event starts by rapidly spreading error over the whole chip, before saturating and then slowly returning to equilibrium.

Detecting Particles with a Computer
The Sycamore processor is designed to perform quantum error correction (QEC) to improve the error rates and enable it to execute a variety of quantum algorithms. QEC provides an effective way of identifying and mitigating errors, provided they are sufficiently rare and independent. However, in the case of a high-energy particle going through the chip, all of the qubits will experience high error rates until the event cools off, producing a correlated error burst that QEC won’t be able to correct. In order to successfully perform QEC, we first have to understand what these impact events look like on the processor, which requires operating it like a particle detector.

To do so, we take advantage of recent advances in qubit state preparation and measurement to quickly prepare each qubit in their excited state, similar to flipping a classical bit from 0 to 1. We then wait for a short idle time and measure whether they are still excited. If the qubits are behaving normally, almost all of them will be. Further, the qubits that experience a decay out of their excited state won’t be correlated, meaning the qubits that have errors will be randomly distributed over the chip.

However, during the experiment we occasionally observe large error bursts, where all the qubits on the chip suddenly become more error prone all at once. This correlated error burst is a clear signature of a high-energy impact event. We also see that, while all qubits on the chip are affected by the event, the qubits with the highest error rates are all concentrated in a “hotspot” around the impact site, where slightly more energy is deposited into the qubit layer by the spreading phonons.

To detect high-energy impacts, we rapidly prepare the qubits in an excited state, wait a little time, and then check if they’ve maintained their state. An impact produces a correlated error burst, where all the qubits show a significantly elevated error rate, as shown around time = 8 seconds above.
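The detection logic described above can be sketched in a few lines (an editor's illustration, not the team's actual analysis code): given per-qubit error outcomes for each measurement round, flag the rounds where an anomalously large fraction of qubits decayed at once, since independent errors are sparse and uncorrelated while an impact affects the whole chip.

```python
def detect_bursts(error_matrix, threshold=0.5):
    """Flag measurement rounds where the fraction of qubits that
    decayed out of the excited state exceeds `threshold` --
    the signature of a correlated error burst."""
    bursts = []
    for t, row in enumerate(error_matrix):
        if sum(row) / len(row) > threshold:
            bursts.append(t)
    return bursts

# 4 qubits, 5 rounds: round 2 shows errors on every qubit at once.
rounds = [
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],  # correlated burst
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(detect_bursts(rounds))  # -> [2]
```

The threshold and the 0/1 error encoding are simplifying assumptions; the real experiment works with per-qubit error rates estimated from rapid repeated measurements.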

Next Steps
Because these error bursts are severe and quickly cover the whole chip, they are a type of correlated error that QEC is unable to correct. Therefore, it’s very important to find a solution to mitigate these events in future processors that are expected to rely on QEC.

Shielding against these particles is very difficult and typically requires careful engineering and design of the cryostat and many meters of shielding, which becomes more impractical as processors grow in size. Another approach is to modify the chip, allowing it to tolerate impacts without causing widespread correlated errors. This is an approach taken in other complex superconducting devices like detectors for astronomical telescopes, where it’s not possible to use shielding. Examples of such mitigation strategies include adding additional metal layers to the chip to absorb phonons and prevent them from getting to the qubit, adding barriers in the chip to prevent phonons spreading over long distances, and adding traps for quasiparticles in the qubits themselves. By employing these techniques, future processors will be much more robust to these high-energy impact events.

As the error rates of quantum processors continue to decrease, and as we make progress in building a prototype of an error-corrected logical qubit, we’re increasingly pushed to study more exotic sources of error. While QEC is a powerful tool for correcting many kinds of errors, understanding and correcting more difficult sources of correlated errors will become increasingly important. We’re looking forward to future processor designs that can handle high energy impacts and enable the first experimental demonstrations of working quantum error correction.

Acknowledgements
This work wouldn’t have been possible without the contributions of the entire Google Quantum AI Team, especially those who worked to design, fabricate, install and calibrate the Sycamore processors used for this experiment. Special thanks to Rami Barends and Lev Ioffe, who led this project.

Categories
Misc

Hatch Me If You Can: Startup’s Sorting Machines Use AI to Protect Healthy Fish Eggs

Fisheries collect millions upon millions of fish eggs, protecting them from predators to increase fish yield and support the propagation of endangered species — but an issue with gathering so many eggs at once is that those infected with parasites can put healthy ones at risk. Jensorter, an Oregon-based startup, has created AI-powered fish egg…

The post Hatch Me If You Can: Startup’s Sorting Machines Use AI to Protect Healthy Fish Eggs appeared first on The Official NVIDIA Blog.

Categories
Misc

EarlyStopping: ‘patience’ count is reset when tuning in Keras

I’m using keras-tuner to perform a hyperparameter optimization of a neural network.

I’m using a Hyperband optimization, and I call the search method as:

import keras_tuner as kt

tuner = kt.Hyperband(
    ann_model,
    objective=kt.Objective("val_loss", direction="min"),
    max_epochs=100,
    factor=2,
    directory="/path/to/folder",
    project_name="project_name",
    seed=0,
)
tuner.search(
    training_gen(),
    epochs=50,
    validation_data=valid_gen(),
    callbacks=[stop_early],
    steps_per_epoch=1000,
    validation_freq=1,
    validation_steps=100,
)

where the EarlyStopping callback is defined as:

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.1, mode='min', patience=15) 

Hyperband initially trains many models (each with a different combination of the chosen hyperparameters) for only 2 epochs; then it discards poorly performing models and trains only the most promising ones, step by step, with an increasing number of epochs at each step (the final goal is to discard all models except the best-performing one).

So the training of a specific model is not performed in one shot; it is performed in steps, and at the end of each step Keras saves the state of the training.

By setting max_epochs=100, I noticed that the training of a model is performed in these steps (called “Running trials”):

  1. firstly, from epoch 1 to epoch 3;
  2. secondly, from 4 to 7;
  3. then, from 8 to 13;
  4. then, from 14 to 25;
  5. then, from 26 to 50;
  6. and finally, from 51 to 100.

So, at the end of each “Running trial”, Keras saves the state of the training, in order to continue, at the next “Running trial”, the training from that state.

By setting patience=15: during “Running trials” 1), 2), 3), and 4) of the list above, EarlyStopping could not operate because the number of training epochs is less than patience; thus, EarlyStopping could only operate during “Running trials” 5) and 6).

Initially I thought that the patience count started at epoch 1 and was never reset when a new “Running trial” began, but I noticed that the EarlyStopping callback stops the training at epoch 41, i.e., during “Running trial” 5), which goes from epoch 26 to 50.
Thus it seems that the patience count is reset at the beginning of each “Running trial”. Indeed, EarlyStopping stops the training at epoch 41, the first epoch at which it is able to act, because start_epoch + patience = 26 + 15 = 41.
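This is consistent with how Keras callbacks work: EarlyStopping resets its internal state in on_train_begin, which runs every time fit() starts, and the tuner calls fit() once per trial. A standalone sketch of the patience logic (an editor's simplification that ignores min_delta, not Keras's actual source):

```python
class PatienceCounter:
    """Minimal sketch of EarlyStopping's patience bookkeeping."""

    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def on_train_begin(self):
        # Keras resets this state every time fit() starts --
        # i.e., once per Hyperband trial.
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
        return self.wait >= self.patience

pc = PatienceCounter(patience=2)
pc.on_train_begin()
print(pc.should_stop(1.0))  # False: new best
print(pc.should_stop(1.0))  # False: wait = 1
print(pc.should_stop(1.0))  # True: wait = 2 >= patience
pc.on_train_begin()         # a new trial starts: wait is back to 0
print(pc.wait)              # 0
```

So yes, this is the expected behavior: the patience count is per call to fit(), not global across trials.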

Is it normal/expected behavior that patience is automatically reset at the beginning of each “Running trial” while using Keras Hyperband tuning?

submitted by /u/RainbowRedditForum

Categories
Misc

UK Biobank Advances Genomics Research with NVIDIA Clara Parabricks

UK Biobank is broadening scientists’ access to high-quality genomic data and analysis by making its massive dataset available in the cloud alongside NVIDIA GPU-accelerated analysis tools. Used by more than 25,000 registered researchers around the world, UK Biobank is a large-scale biomedical database and research resource with deidentified genetic datasets, along with medical imaging and…

The post UK Biobank Advances Genomics Research with NVIDIA Clara Parabricks appeared first on The Official NVIDIA Blog.

Categories
Misc

A beginner question

tf.flags.DEFINE_string('config', '', 'Path to the file with configurations') 

What does this mean, and what would be a good resource for learning the basics of TF?
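For context (an editor's note): tf.flags was TensorFlow 1.x's thin wrapper around command-line flag parsing (removed in TF 2.x in favor of absl.flags). The line declares a string flag named --config, with default '' and a help string. A rough standard-library analogue using argparse:

```python
import argparse

# tf.flags.DEFINE_string('config', '', 'Path to the file with configurations')
# declares a string command-line flag. An equivalent argparse setup:
parser = argparse.ArgumentParser()
parser.add_argument('--config', type=str, default='',
                    help='Path to the file with configurations')

args = parser.parse_args(['--config', 'settings.yaml'])
print(args.config)  # -> settings.yaml

args = parser.parse_args([])  # no arguments: the default applies
print(repr(args.config))      # -> ''
```

The 'settings.yaml' value is a hypothetical example; in the original code the flag's value would be read elsewhere as FLAGS.config.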

submitted by /u/Admirable-Study-626

Categories
Misc

Text prediction project – Finally managed to break the plateau by increasing the prob. of keeping weights. But I’m damn sure it will plateau again at some point; maybe the only thing that could be changed at this point is the learning rate?

submitted by /u/Smsm1998
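On breaking plateaus with the learning rate: one widely used tactic (the idea behind Keras's ReduceLROnPlateau callback; this sketch is an editor's simplification, not the poster's code) is to shrink the learning rate whenever the loss stops improving:

```python
def reduce_lr_on_plateau(lr, history, factor=0.5, patience=3, min_lr=1e-6):
    """Multiply the learning rate by `factor` if the loss has not
    improved over the last `patience` epochs; never go below min_lr."""
    if len(history) > patience:
        recent_best = min(history[-patience:])
        earlier_best = min(history[:-patience])
        if recent_best >= earlier_best:  # no improvement: reduce
            return max(lr * factor, min_lr)
    return lr

# Loss stuck at 0.5 for the last 3 epochs -> LR is halved.
print(reduce_lr_on_plateau(0.01, [0.9, 0.7, 0.5, 0.5, 0.5, 0.5]))  # -> 0.005
# Loss still improving -> LR unchanged.
print(reduce_lr_on_plateau(0.01, [0.9, 0.7, 0.5, 0.4]))  # -> 0.01
```

The factor, patience, and min_lr values here are illustrative defaults, not recommendations for this particular model.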