SANTA CLARA, Calif., Dec. 29, 2021 — NVIDIA will present at the following event for the financial community:
24th Annual Needham Virtual Growth Conference
Monday, Jan. 10, 2022, at 9:30 a.m….
Hey guys, I know this may not be the perfect place, but I thought some of you might have the skills and the interest to apply to some recent job openings. If you’re not interested in these jobs, just ignore or downvote them, but there may be others who not only find them useful but for whom they could make a real difference. If you’re interested, apply directly through the link. We are searching for North America-based individuals only (for tax reasons). Thanks!
submitted by /u/saffeergunx
Hi all, I’m new to TensorFlow.
I’m interested in incorporating one of these models into my application (after converting it to a tflite file), but none of the download links are working. Any idea why? I’m specifically interested in the SSD models.
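For reference, a minimal sketch of the conversion step once a download does work, assuming the model ships as a TensorFlow SavedModel (the directory and file names here are placeholders, not from the post):

import tensorflow as tf

# Hypothetical path to a downloaded SSD model's SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model('ssd_mobilenet_v2/saved_model')
tflite_model = converter.convert()

# Write the converted flatbuffer to disk for use on device
with open('ssd_mobilenet_v2.tflite', 'wb') as f:
    f.write(tflite_model)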
submitted by /u/BuckyOFair
Recognized as one of tech’s top podcasts, the NVIDIA AI Podcast is approaching 3 million listens in five years, as it sweeps across topics like robots, data science, computer graphics and renewable energy. Its 150+ episodes reinforce the extraordinary capabilities of AI — from diagnosing disease to boosting creativity to helping save the Earth.
The post AI Podcast Wrapped: Top Five Episodes of 2021 appeared first on The Official NVIDIA Blog.
I’m wondering how the pre-trained models actually handle the variety of inputs. Do the original layer weights stay exactly the same?
Many thanks in advance for your care & time.
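For context, the original weights only stay the same if you freeze them; otherwise fine-tuning updates them. A minimal Keras transfer-learning sketch, with an illustrative backbone and head (not from the original post):

import tensorflow as tf

# Load a pre-trained backbone without its classification head
base = tf.keras.applications.MobileNetV2(include_top=False,
                                         input_shape=(224, 224, 3),
                                         weights='imagenet')
base.trainable = False  # the original layer weights stay exactly the same

# Varied inputs are handled by resizing/preprocessing, not by changing weights
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10)(x)  # only this new head is trained
model = tf.keras.Model(inputs, outputs)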
submitted by /u/talhak
I have been looking around and haven’t been able to find the answer to this one. I am having trouble resuming training on a CGAN I am working on after loading the h5 files. When I restart training after loading them, the generator loss begins to move toward zero very quickly, within 3-4 epochs.
Below is some of the code for loading the models and resuming training. Any help or suggestions would be greatly appreciated!
Loading Models:
from tensorflow.keras.models import load_model, Model
from tensorflow.keras.optimizers import Adam

# Load the saved discriminator and generator
d_model = load_model('Aeon5/cgan_model/discriminator_0_to_83.h5')
g_model = load_model('Aeon5/cgan_model/generator_0_to_83.h5')

# Rebuild the combined model: the generator's output feeds the discriminator
gen_noise, gen_label = g_model.input
gen_output = g_model.output
gan_output = d_model([gen_output, gen_label])
combined = Model([gen_noise, gen_label], gan_output)

opt = Adam(lr=0.0002, beta_1=0.5)
combined.compile(loss=['binary_crossentropy'], optimizer=opt)
Resuming Training:
import numpy as np

def resume_train(epochs, start, generator, discriminator, combined_model,
                 latent_dim, data_loader, name_append, batch_size=50):
    for epoch in range(start, epochs):
        random = np.random.randint(0, 11)
        for index in range(int(50000 / batch_size)):
            valid = np.ones((batch_size, 1))
            fake = np.zeros((batch_size, 1))

            # Sample a random batch of real images/labels and rescale to [-1, 1]
            idx = np.random.randint(0, 50000, batch_size)
            x_train = data_loader.get_img_batch(idx)
            y_train = data_loader.get_label_batch(idx)
            x_train = (x_train.astype(np.float32) - 127.5) / 127.5

            # Occasionally soften and flip the real/fake labels
            if index % 100 == random:
                valid = np.zeros((batch_size, 1)) + (np.random.random() * 0.1)
                fake = np.ones((batch_size, 1)) - (np.random.random() * 0.1)

            # Train the discriminator on a real batch and a generated batch
            noise = np.random.randn(batch_size, latent_dim)
            gen_img = generator.predict([noise, y_train])
            d_loss_real, _ = discriminator.train_on_batch([x_train, y_train], valid)
            d_loss_fake, _ = discriminator.train_on_batch([gen_img, y_train], fake)
            d_loss = 0.5 * (np.add(d_loss_real, d_loss_fake))

            # Train the generator through the combined model
            sample_label = np.random.randint(0, 10, batch_size).reshape(-1, 1)
            valid = np.ones((batch_size, 1))
            g_loss = combined_model.train_on_batch([noise, sample_label], valid)

            if index % (batch_size) == 0:
                sample_images(epoch, latent_dim, generator, data_loader)
                print("%d [D loss: %f] [G loss: %f]" % (epoch, d_loss, g_loss))

    # Save the combined model and the generator
    name = './cgan_model/combined_' + name_append + '.h5'
    combined_model.save(name)
    name = './cgan_model/generator_' + name_append + '.h5'
    generator.save(name)
    name = './cgan_model/discriminator_' + name_append + '.h5'
    discriminator.save(name)
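One hedged guess, not a confirmed diagnosis: the loading snippet above never freezes the discriminator before compiling the combined model. If the original training script set the discriminator non-trainable inside the combined model and that setup was lost across save/load, then training the combined model on "valid" labels also drags the discriminator toward calling fakes real, which would send the generator loss toward zero within a few epochs. A minimal sketch of the freeze, reusing the file names from the post:

d_model = load_model('Aeon5/cgan_model/discriminator_0_to_83.h5')
g_model = load_model('Aeon5/cgan_model/generator_0_to_83.h5')

# Freeze the discriminator inside the combined model only; the standalone
# d_model still learns through its own train_on_batch calls because its
# trainable weights were captured when it was compiled (at load time)
d_model.trainable = False

gen_noise, gen_label = g_model.input
gan_output = d_model([g_model.output, gen_label])
combined = Model([gen_noise, gen_label], gan_output)
combined.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5))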
submitted by /u/Wrathnut
What better way to look back at NVIDIA’s top five videos of 2021 than to hop into the cockpit of a virtual plane flying over Taipei? That was how NVIDIA’s Jeff Fisher and Manuvir Das invited viewers into their COMPUTEX keynote on May 31. Their aircraft sailed over the city’s green hills and banked around…
The post It Was a Really Virtual Year: Top Five NVIDIA Videos of 2021 appeared first on The Official NVIDIA Blog.
Where I work, we need to quantize our models to run them quickly enough, and we found that Quantization Aware Training (QAT) is the only approach with a chance of retaining the desired accuracy; post-training quantization loses too much accuracy.
However, QAT is incredibly difficult and cumbersome in TF 2 because it only applies to models defined through the functional API, whereas many interesting models are defined with, for example, the object-oriented (subclassing) approach.
Does anyone know if there are plans to make QAT easier to use in the future?
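For what it’s worth, a minimal sketch of the path that does work today, i.e. wrapping a functional-API model with the TensorFlow Model Optimization toolkit (the toy architecture is illustrative, not from the post):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# A small functional-API model; tfmot's quantize_model wrapper supports
# functional/Sequential models but not subclassed ones
inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(16, 3, activation='relu')(inputs)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)

# Insert fake-quantization nodes so training learns quantization-friendly weights
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
# then qat_model.fit(...) as usual, and convert with tf.lite.TFLiteConverter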
submitted by /u/wattnurt
I have been searching for quality TensorFlow tutorials for quite a while and I can’t find any good ones.
Can anyone suggest a tutorial (I would prefer video) for learning TensorFlow hands-on, actually using it in practice rather than just going through the docs?
submitted by /u/outofthisworld420
First of all, nice to meet you, I’m new here. I have already read a lot about the theory and even bought courses, but I feel the examples were outdated or not useful. I have also been reading some posts in this sub and have noticed that TensorFlow and Keras have their inefficiencies, so I don’t know which tools and resources to use to get started.
Thanks in advance.
submitted by /u/Cextremo