Categories
Misc

How to encrypt a tflite model

Hi, I am trying to run a tflite model in the browser. It runs client side; I have converted the model to wasm format and am able to run it successfully in the browser.

Since it runs client side, the tflite model would be accessible to everyone. Is it possible to encrypt the model in any way, so that not everyone has access to it?

The application is built using the mediapipe framework; not sure if that changes the solution.
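
(A minimal sketch of what encrypting the model file could look like, assuming a Python build step with the `cryptography` package; file names are illustrative. Note that the decryption key must still reach the browser in some form, so this obfuscates the model rather than truly securing it.)

from cryptography.fernet import Fernet

# Hypothetical build step: generate a key and encrypt the serialized model.
key = Fernet.generate_key()   # keep this server-side
fernet = Fernet(key)

with open("model.tflite", "rb") as f:
    encrypted = fernet.encrypt(f.read())

with open("model.tflite.enc", "wb") as f:
    f.write(encrypted)

# The browser would fetch model.tflite.enc and decrypt it in memory before
# handing the bytes to the tflite/wasm runtime -- which means the key ships
# to the client, and a determined user can still recover the model.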

Thanks!

submitted by /u/cvmldlengineer
[visit reddit] [comments]

Categories
Misc

New Public Workshops Now Available from the NVIDIA Deep Learning Institute

For the first time ever, the NVIDIA Deep Learning Institute (DLI) is making its popular instructor-led workshops available to the general public.

With the launch of public workshops this week, enrollment will be open to individual developers, data scientists, researchers, and students. NVIDIA is increasing accessibility and the number of courses available to participants around the world. Now anyone can learn from world-class NVIDIA instructors in courses on AI, accelerated computing, and data science. 

Previously, DLI workshops were available only to large organizations that wanted dedicated, specialized training for their in-house developers, or to individuals attending NVIDIA GTC.

Boost Your Skills with Industry-Leading Training

Job growth in the tech industry continues, and advanced software development skills in deep learning, data science, and accelerated computing are highly sought after. DLI workshops offer a comprehensive learning experience that includes hands-on exercises and guidance from expert instructors certified by DLI. Courses are delivered virtually, across many time zones, to reach developers worldwide. In addition to English, many courses are offered in other languages, including Chinese and Japanese.

With the introduction of DLI workshops for individuals, NVIDIA is making it easier for anyone to access world-class training. Registration fees cover learning materials, instructors, and access to fully configured, GPU-accelerated development servers for hands-on exercises.

The current lineup of DLI workshops for individuals includes:

March 2021

  • Fundamentals of Accelerated Computing with CUDA Python
  • Applications of AI for Predictive Maintenance

April 2021

  • Fundamentals of Deep Learning
  • Applications of AI for Anomaly Detection
  • Fundamentals of Accelerated Computing with CUDA C/C++
  • Building Transformer-Based Natural Language Processing Applications
  • Deep Learning for Autonomous Vehicles – Perception
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Accelerating CUDA C++ Applications with Multiple GPUs
  • Fundamentals of Deep Learning for Multi-GPUs

May 2021

  • Building Intelligent Recommender Systems
  • Fundamentals of Accelerated Data Science with RAPIDS
  • Deep Learning for Industrial Inspection
  • Building Transformer-Based Natural Language Processing Applications
  • Applications of AI for Anomaly Detection

Visit the DLI website for details on each course and the full schedule of upcoming workshops, which is regularly updated with new training opportunities.

A complete list of DLI courses is available in the DLI course catalog.

Register today for a DLI instructor-led workshop for individuals. Space is limited, so sign up early. For more information, email nvdli@nvidia.com.

Categories
Misc

TensorFlow practice vs. mathematics

Hello there,

I often see the following code for regression problems (here, a linear regression):

import tensorflow.compat.v1 as tf
import numpy as np
import matplotlib.pyplot as plt

tf.disable_v2_behavior()  # required for placeholders/Session under TF 2.x

learning_rate = 0.01
training_epochs = 100

# Synthetic data: y = 2x plus Gaussian noise
x_train = np.linspace(-1, 1, 101)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33

X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

def model(X, w):
    return tf.multiply(X, w)

w = tf.Variable(0.0, name="weights")
y_model = model(X, w)
cost = tf.square(Y - y_model)  # squared error for a single sample
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

for epoch in range(training_epochs):
    for (x, y) in zip(x_train, y_train):
        sess.run(train_op, feed_dict={X: x, Y: y})

w_val = sess.run(w)
sess.close()

plt.scatter(x_train, y_train)
y_learned = x_train * w_val
plt.plot(x_train, y_learned, 'r')
plt.show()

But isn't that wrong? My problem is with these lines:

for epoch in range(training_epochs):
    for (x, y) in zip(x_train, y_train):
        sess.run(train_op, feed_dict={X: x, Y: y})

Why is it a problem? Because if you look at how we do it in pure mathematics, it doesn't fit. In math we have the MSE function and we do gradient descent over the whole function. But here it seems that they are doing gradient descent over just one part of the MSE function at a time, in the line

"for (x, y) in zip(x_train, y_train): sess.run(train_op, feed_dict={X: x, Y: y})"

What do I mean by that? MSE = g1(x) + g2(x) + … + gn(x), and it seems like they do gradient descent on g1(x), then on g2(x), and so on. How exactly does TensorFlow compute this behind the scenes?
My problem is that through feed_dict={X: x, Y: y} only one of those functions is evaluated. Let's say x=1 and y=2: TensorFlow will go to X and Y, then to model, and will only evaluate one part of the MSE, say g1(x). But don't you need to do gradient descent over the whole MSE?
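
(For contrast, a minimal sketch of full-batch gradient descent over the entire MSE, in the same compat.v1 style as the snippet above; the per-sample loop in the original is stochastic gradient descent, optimizing one gi at a time, while this version averages all per-sample squared errors before each update. Values mirror the original example.)

import tensorflow.compat.v1 as tf
import numpy as np

tf.disable_v2_behavior()

x_train = np.linspace(-1, 1, 101).astype(np.float32)
y_train = 2 * x_train + np.random.randn(*x_train.shape).astype(np.float32) * 0.33

X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0, name="weights")

# Full-batch MSE: average the per-sample squared errors g_i before differentiating
cost = tf.reduce_mean(tf.square(Y - tf.multiply(X, w)))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(100):
        # One update per epoch, computed over the whole dataset at once
        sess.run(train_op, feed_dict={X: x_train, Y: y_train})
    print(sess.run(w))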

submitted by /u/te357
[visit reddit] [comments]

Categories
Misc

[D] : Any good resources for object detection using TensorFlow? Stuck on it!

submitted by /u/anotsohypocritesoul
[visit reddit] [comments]

Categories
Misc

Why getting NaN values for custom Dice loss in Keras?

I am using Keras for boundary/contour detection with a U-Net. When I use binary cross-entropy as the loss, the loss decreases over time as expected and the predicted boundaries look reasonable.

However, I have tried a custom Dice loss with varying LRs, and none of them work well.

from tensorflow.keras import backend as K

smooth = 1e-6

def dice_coef(y_true, y_pred):
    # Flatten both masks and compute the soft Dice overlap
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice(y_true, y_pred):
    # Loss = 1 - Dice coefficient
    return 1 - dice_coef(y_true, y_pred)

The loss values don't improve. That is, training shows something like:

loss: nan - dice: .9607 - val_loss: nan - val_dice: .9631 

I get NaNs for the losses, and values for dice and val_dice that barely change across epochs. This happens regardless of the LR, anywhere from 0.01 down to 1e-6.

The dimensions of the train images/labels are N x H x W x 1, where N is the number of images and H/W are the height/width of each image.
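
(One hedged guess, since the post doesn't show the network's final activation: NaNs often appear when y_pred is not constrained to [0, 1], e.g. when the model outputs raw logits. A minimal sketch that clips predictions before the Dice computation; the function names and clipping bounds are illustrative, not a confirmed fix.)

from tensorflow.keras import backend as K

def dice_coef_stable(y_true, y_pred, smooth=1e-6):
    # Clip predictions so the soft Dice overlap stays numerically safe
    y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return 1 - dice_coef_stable(y_true, y_pred)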

can anyone help?

submitted by /u/74throwaway
[visit reddit] [comments]

Categories
Misc

Creating a tensor using linspace function

Hello Guys,

I'm challenging myself to create a simple 1-dimensional tensor consisting of integers in the range 1-10, using the linspace function, with a shape of 6. However, I haven't been successful. How do I fix this?

My code:

[1,2,3,4,5,6,7,8,9,10]

torch.linspace(1, 1, 10)
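
(For reference, torch.linspace(start, end, steps) returns `steps` evenly spaced values from `start` to `end`, so the call above produces ten copies of 1. A short sketch of the signature in use; note that six evenly spaced points between 1 and 10 are not all integers, so the last example picks endpoints that make the spacing integral.)

import torch

# linspace(start, end, steps): ten evenly spaced values from 1 to 10
print(torch.linspace(1, 10, 10))  # tensor([ 1.,  2., ...,  10.])

# With steps=6 the spacing between 1 and 10 is 1.8, so values are not integers:
print(torch.linspace(1, 10, 6))   # tensor([ 1.0, 2.8, 4.6, 6.4, 8.2, 10.0])

# Endpoints 0 and 10 give six evenly spaced integer values:
print(torch.linspace(0, 10, 6).long())  # tensor([ 0,  2,  4,  6,  8, 10])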

submitted by /u/destin95
[visit reddit] [comments]

Categories
Misc

Feelin’ Like a Million MBUX: AI Cockpit Featured in Popular Mercedes-Benz C-Class

It’s hard not to feel your best when your car makes every commute a VIP experience. This week, Mercedes-Benz launched the redesigned C-Class sedan and C-Class wagon, packed with new features for the next generation of driving. Both models prominently feature the latest MBUX AI cockpit, powered by NVIDIA, delivering an intelligent user interface…

The post Feelin’ Like a Million MBUX: AI Cockpit Featured in Popular Mercedes-Benz C-Class appeared first on The Official NVIDIA Blog.

Categories
Misc

Omniverse Assets Available for Download on TurboSquid

TurboSquid and NVIDIA are collaborating to curate thousands of USD models that are available today and ready to use with NVIDIA Omniverse.

Many developers using Omniverse are experiencing enhanced workflows with virtual collaboration and photorealistic simulation. The open platform, which is available now in open beta, enables teams around the world to simultaneously collaborate in real time, using their favorite 3D applications. 

TurboSquid has an extensive library of 3D models that users can easily drag and drop into Omniverse, allowing them to immediately start collaborating with others. This helps developers save time as they can immediately start exploring Omniverse without worrying about importing or exporting content, model preparation, or polycounts. Users can load TurboSquid’s USD models in Omniverse connectors, and Omniverse ensures consistent quality between teams, contractors, and ecosystems. 

To get started, download the NVIDIA Omniverse Launcher from nvidia.com/omniverse. Run the Omniverse Launcher and install Omniverse Create or Omniverse View apps, then import TurboSquid 3D content and start creating.

Learn more by visiting TurboSquid’s Omniverse page, and check out the 3D tool sets now available.

Categories
Misc

Meet the Researcher: Lorenzo Baraldi, Artificial Intelligence for Vision, Language and Embodied AI

‘Meet the Researcher’ is a monthly series in which we spotlight different researchers in academia who are using NVIDIA technologies to accelerate their work. This month, we spotlight Lorenzo Baraldi, Assistant Professor at the University of Modena and Reggio Emilia in Italy.

Before working as a professor, Baraldi was a research intern at Facebook AI Research. He serves as an Associate Editor of the Pattern Recognition Letters journal and works on the integration of vision, language, and embodied AI.

What are your research areas of focus?

I work within the AimageLab research group on Computer Vision and Deep Learning. I focus mainly on the integration of vision, language, and action. The final goal of our research is to develop agents that can perceive and act in our world while being capable of communicating with humans.

What motivated you to pursue this research area of focus?

Combining the ability to perceive the visual world around us, with that of acting and that of expressing in natural language is something that humans do quite naturally and is one of the keys to human intelligence. In the last few years, we have witnessed tremendous achievements in areas that consider only one of those abilities: Computer Vision, Natural Language Processing, and Robotics. How to combine these abilities, instead, still needs to be understood and is a thrilling field of research.

Tell us about your current research projects.

We are mainly working in three directions: (1) integrating vision and language, for example by developing algorithms that can describe images in natural language; a recent paper from this line of work, a Transformer-based model for image captioning, was presented at CVPR. (2) Integrating vision and action, by developing agents for autonomous navigation; we are interested in agents moving in indoor and outdoor scenarios, possibly interacting with people, including in crowded situations. (3) Integrating all of this with the ability to understand language, for instance by training agents that can follow an instruction as they move, or curiosity-driven agents that can describe what they see along their path.

Overview of Baraldi’s image captioning approach. Building on a Transformer-like encoder-decoder architecture, the approach includes a memory-aware region encoder that augments self-attention with memory vectors.

What problems or challenges does your research address?

I think one of the main challenges we need to solve is finding the right way of integrating multi-modal information, which can come from visual, textual, or motor perception. In other words, we need to find the right architecture for dealing with this information, which is why a lot of our research involves the design of new architectures. Secondly, most of the approaches we design are generative and sequential: we generate sentences, we generate actions or paths for robots, and so on. Again, how to generate sequences conditioned on multi-modal information is still a challenge.

Sentences generated on the ACVR Robotic Vision Challenge dataset.

What is the (expected) impact of your work on the field/community/world?

If the research efforts that the community is devoting to this area are successful, we will have algorithms that can understand us and help us in our daily lives, seeing with us and acting in the world on our behalf. I think in the long run this might also change the way we interact with computers, which might become a lot easier and more language-based.

How have you used NVIDIA technology either in your current or previous research?

Performing large-scale training on NVIDIA GPUs is one of the most important ingredients powering our research, and I am sure this will become even more important in the near future. We do that locally, with a distributed GPU cluster in our lab, and at a bigger scale in conjunction with CINECA, the Italian supercomputing center, and the NVIDIA AI Technical Centre (NVAITC) of Modena. The partnership we have with NVAITC and CINECA has not only increased our computational capacity, but has also provided us with the knowledge and support we needed to exploit the technologies NVIDIA provides to their fullest. I would say this collaboration is having a really important impact on our research capabilities.

Did you achieve any breakthroughs in that research or any interesting results using NVIDIA technology?

Most, if not all, of the research works we carry out are somehow powered by NVIDIA technologies. Apart from the results on the integration of vision, language, and action, we also have a few other research lines of which I am particularly proud. One is related to video understanding: detecting people and objects, understanding their relationships, and finding the best way of extracting spatio-temporal features is an important challenge. Sometimes we also like to apply our research to cultural heritage: using NVIDIA GPUs we have developed algorithms for retrieving paintings from natural-language queries, and generative networks for translating artworks to reality.

What is next for your research?

Even though things are evolving rapidly in our area, there are still a lot of key issues that need to be addressed, and that is what our lab is concentrating on. One is going beyond the limitations of traditional supervised learning and fighting dataset bias: in the end, we would like our algorithms to describe and understand any connection between images and text, not just those that are annotated in current datasets. To this end, we are working towards algorithms that can describe objects which are not present in the training dataset, and we constantly explore the new possibilities offered by self-supervised and weakly-supervised learning. How to properly manage the temporal dimension is another key issue that has been central in our research, and it has brought advancements in terms of new architectural designs, not only for managing sequences of words, but also for understanding video streams.

Any advice for new researchers?

There are at least three capabilities I would recommend pursuing. One is learning to code well and elegantly, because translating ideas into reality always involves implementation. The second is learning to have good ideas: that is potentially the trickiest part, but it is even more important, because every valuable piece of research needs to start from a good idea. I think reading papers, especially from the past, and thinking openly, freely, and on a large scale is of great help in this sense. The third is time management: always focus on what is impactful.

Baraldi’s colleague, Matteo Tomei, will be presenting their lab’s recent work at NVIDIA GTC in April, “More Efficient and Accurate Video Networks: A New Approach to Maximize the Accuracy/Computation Trade-off”.

Categories
Misc

I saw some Tesla K80 graphics acceleration cards. They have no display port; they're for helping with compute workloads. Are these any good for TensorFlow AI building?

Price: $140

Specs:

  • CUDA cores: 4,992
  • Core speed: 562-875 MHz per card
  • RAM: 24 GB
  • RAM bandwidth: 480 GB/s

This is a 2-PCI-slot card, basically 2 cards in 1. No cooling included (I've got a plan for that).

The display card will be my old GTX 950.
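
(For anyone weighing this up: a quick sanity check, assuming a TensorFlow build with GPU support is installed, is to confirm TensorFlow sees both of the K80's GPUs and reports their compute capability, which is 3.7 for the K80; check your TensorFlow version's minimum supported compute capability against that.)

import tensorflow as tf

# The K80 is one board with two GK210 GPUs, so two devices should show up.
gpus = tf.config.list_physical_devices('GPU')
print(f"Found {len(gpus)} GPU(s)")

for gpu in gpus:
    # Reported as a (major, minor) tuple, e.g. (3, 7) for the K80
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name, details.get('compute_capability'))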

submitted by /u/isaiahii10
[visit reddit] [comments]