
New DRIVE OS and DriveWorks Updates Enable Streamlined AV Software Development

DRIVE OS and DriveWorks releases are now available on NVIDIA DRIVE Developer, providing DRIVE OS users access to DriveWorks middleware and even more updates.

You asked, we listened: DRIVE OS and DriveWorks releases are now available on NVIDIA DRIVE Developer, providing DRIVE OS users access to DriveWorks middleware and even more updates.

With these releases, developers have access to the latest DRIVE OS and DriveWorks software for autonomous vehicle development, including new features, without having to wait for DRIVE Software updates.

The foundation of the NVIDIA DRIVE software stack, NVIDIA DRIVE OS is designed specifically for accelerated computing and artificial intelligence. It includes NvMedia for sensor input processing, NVIDIA CUDA for efficient parallel computing implementations, NVIDIA TensorRT™ for real-time AI inference, and specialized developer tools and modules for accessing the accelerated hardware engines.

The NVIDIA DriveWorks SDK provides functionality fundamental to autonomous vehicle development, consisting of a sensor abstraction layer (SAL), sensor plugins, data recorder, vehicle I/O support, and a deep neural network (DNN) framework. It’s modular, open and designed to be compliant with automotive industry software standards.

And now, these key components for autonomous vehicle software development are even more accessible to developers, with frequent updates that unlock performance on the NVIDIA DRIVE AGX platform and deliver greater flexibility and capability.

Laying a Foundation with DRIVE OS

DRIVE OS is a robust operating system for autonomous vehicle development, providing access to the underlying compute accelerators in DRIVE AGX Xavier.

New for this release is the NvMedia Sensor Input Processing Library (SIPL), an image processing API designed for safety use cases. SIPL adds sensor device and query block sources, as well as source files and libraries for an expanded range of sensor modules. It also delivers safety proxy support for Linux, a mechanism that makes it easier to develop safety applications on non-safety platforms.

Updated for this release, NvStreams enables efficient allocation, sharing and synchronization of data buffers across the SoC, dGPU and CPU engines, making it easy for developers to move large data buffers for processing.

Also included is the latest TensorRT with dynamic shape, reformat-free I/O, explicit precision, pointwise layer fusion and shuffle elimination, as well as new plugins and samples to help developers take advantage of the platform.
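
As a rough sketch of how dynamic shapes are exposed through the TensorRT Python API, the example below declares an input with a runtime-variable batch dimension and registers an optimization profile for it. The input name and dimensions are placeholders, and the rest of the network (or an ONNX parse step) is omitted.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Dynamic shapes require an explicit-batch network.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
config = builder.create_builder_config()

# Hypothetical input: the name and channel/height/width values are placeholders.
network.add_input("input", trt.float32, (-1, 3, 544, 960))

# ... layers would be added here, or the network parsed from ONNX ...

# An optimization profile tells TensorRT the min/opt/max shapes
# that the dynamic (-1) dimension may take at runtime.
profile = builder.create_optimization_profile()
profile.set_shape("input",
                  min=(1, 3, 544, 960),
                  opt=(4, 3, 544, 960),
                  max=(8, 3, 544, 960))
config.add_optimization_profile(profile)
```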

Going Further with DriveWorks

On top of DRIVE OS, DriveWorks provides the functionality needed to incorporate software into the vehicle: integrating automotive sensors into the software stack, accelerating camera and lidar data processing, interfacing with the vehicle, accelerating inference for perception, and calibrating multiple sensor modalities with precision.

Key DriveWorks highlights include the integration of the DriveWorks SAL with NvMedia SIPL, enabling recording from additional GMSL cameras such as the Sony IMX390 and the ON Semi AR0820. Additionally, the DriveWorks SAL now supports even more sensors out of the box, such as the Luminar H3 and Ouster OS2-128 lidars as well as the u-blox ZED-F9P GNSS module.

As always, developers can integrate their own sensors into DriveWorks using the Sensor Plugin Framework. 

Finally, the DriveWorks SAL also now includes a new Time Sensor module for synchronizing timestamps of supported sensors. This module maintains time-correlation data and supports conversion between the different clocks used to timestamp sensor data.
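
DriveWorks itself is a C/C++ SDK, so the following is only a hypothetical, language-agnostic illustration of the underlying idea rather than the DriveWorks Time Sensor API: keep a correlation between two clocks and use it to convert sensor timestamps to the host clock.

```python
class ClockCorrelation:
    """Minimal illustration of converting timestamps between two clocks,
    given one correlation sample (the same instant observed on both clocks)."""

    def __init__(self, host_time_us: int, sensor_time_us: int):
        # Offset measured when both clocks were sampled together.
        self.offset_us = host_time_us - sensor_time_us

    def sensor_to_host(self, sensor_time_us: int) -> int:
        return sensor_time_us + self.offset_us

    def host_to_sensor(self, host_time_us: int) -> int:
        return host_time_us - self.offset_us


# Usage: correlate once, then convert sensor timestamps to the host clock.
corr = ClockCorrelation(host_time_us=1_700_000_000_000, sensor_time_us=250_000)
host_ts = corr.sensor_to_host(251_500)  # 1_700_000_001_500
```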

For image processing, the release adds new algorithms to run on the NVIDIA DRIVE AGX Programmable Vision Accelerator (PVA) in addition to the GPU. A new DNN tensor module wraps raw tensor data into a structure, allowing the user to define dimensions and layouts. It also supports the traversal of complex layouts as well as the ability to lock/unlock the data to prevent simultaneous operations. 
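
Again purely as a hypothetical illustration of the wrapping concept, not the DriveWorks DNN tensor API, a structure that pairs raw data with explicit dimensions, a layout tag, and a lock to prevent simultaneous operations might look like this:

```python
import threading
import numpy as np


class TensorWrapper:
    """Hypothetical sketch: wrap raw tensor data with explicit dimensions,
    a layout tag, and a lock guarding concurrent access."""

    def __init__(self, raw: np.ndarray, dims, layout="NCHW"):
        self.dims = tuple(dims)           # e.g. (1, 3, 544, 960)
        self.layout = layout              # how the dims should be interpreted
        self._data = raw.reshape(self.dims)
        self._lock = threading.Lock()

    def lock(self):
        self._lock.acquire()              # prevent simultaneous operations
        return self._data

    def unlock(self):
        self._lock.release()


# Usage: lock before reading/writing the data, unlock when done.
t = TensorWrapper(np.zeros(1 * 3 * 544 * 960, dtype=np.float32), (1, 3, 544, 960))
data = t.lock()
data[0, 0, 0, 0] = 1.0
t.unlock()
```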

By making DRIVE OS and DriveWorks releases available together, developers now have the latest DRIVE OS features and performance alongside the vehicle integration and utilities provided by the DriveWorks SDK.

DRIVE AGX developers can access the latest release on NVIDIA DRIVE Developer.


How to Optimize Self-Driving DNNs with TensorRT

Register for our upcoming webinar to learn how to use TensorRT to optimize autonomous driving DNNs for robust AV development.

When it comes to autonomous vehicle development, performance is one of the most important areas of evaluation for ensuring the highest level of safety.

High-performance, energy-efficient compute enables developers to balance the complexity, accuracy and resource consumption of the deep neural networks (DNNs) that run in the vehicle. Getting the most out of hardware computing power requires optimized software.

NVIDIA TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications, such as autonomous driving.

You can register for our upcoming webinar on Feb. 3 to learn how to use TensorRT to optimize autonomous driving DNNs for robust autonomous vehicle development.

Manage Massive Workloads

DNN-based workloads in autonomous driving are incredibly complex, with a variety of computation-intensive layer operations just to perform computer vision tasks. 

Managing these types of operations requires optimized compute performance. However, the theoretical peak performance of the hardware doesn't always translate into what software can actually achieve. TensorRT ensures developers can tackle these massive workloads without leaving any performance on the table.

By performing optimization at every stage of processing — from tooling, to ingesting DNNs, to inference — TensorRT ensures the most efficient operations possible.

The SDK is also seamless to use, allowing developers to toggle different settings depending on the platform. For example, lower precision (FP16 or INT8) can be used to enable higher compute throughput and lower memory bandwidth on Tensor Cores. In addition, workloads can be shifted from the GPU to the deep learning accelerator (DLA).
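
With the TensorRT Python API, these toggles are settings on the builder configuration. A minimal sketch, with network construction and the INT8 calibrator omitted, might look like this:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
config = builder.create_builder_config()

# Lower precision for higher throughput and lower memory bandwidth on Tensor Cores.
config.set_flag(trt.BuilderFlag.FP16)
# INT8 additionally requires a calibrator or per-tensor dynamic ranges (omitted here).
# config.set_flag(trt.BuilderFlag.INT8)

# Shift eligible layers from the GPU to the deep learning accelerator (DLA),
# falling back to the GPU for layers the DLA cannot run.
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
```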

Master the Model Backbone

This webinar will show TensorRT for AV development in action, tackling one of the most compute-intensive portions of the inference pipeline: the model backbone.

Many developers use off-the-shelf model backbones (for example, ResNets or EfficientNets) to get started on solving computer vision tasks such as object detection or semantic segmentation. However, these backbones aren’t always performance-optimized, creating bottlenecks down the line. TensorRT addresses these problems by optimizing trained neural networks to generate deployment-ready inference engines that maximize GPU inference performance and power efficiency.
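
As a sketch of that workflow using the TensorRT 7.x-style Python API (the ONNX file name is a placeholder), a trained backbone exported to ONNX can be parsed and built into a deployment-ready inference engine:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)


def build_engine(onnx_path="backbone.onnx"):  # placeholder file name
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the trained backbone exported to ONNX.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30      # 1 GiB of build-time workspace
    config.set_flag(trt.BuilderFlag.FP16)    # enable reduced precision

    # Returns an engine optimized for the target GPU.
    return builder.build_engine(network, config)
```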

Learn from NVIDIA experts how to leverage these tools in autonomous vehicle development. Register today for the Feb. 3 webinar, plus catch up on past TensorRT and DriveWorks webinars.


On the Road Again: GeForce NOW Alliance Expanding to Turkey, Saudi Arabia and Australia

Bringing more games to more gamers, our GeForce NOW game-streaming service is coming soon to Turkey, Saudi Arabia and Australia. Turkcell, Zain KSA and Pentanet are the latest telcos to join the GeForce NOW Alliance. By placing NVIDIA RTX Servers on the edge, GeForce NOW Alliance partners deliver even lower latency gaming experiences.



Take Note: Otter.ai CEO Sam Liang on Bringing Live Captions to a Meeting Near You

Sam Liang is making things easier for the creators of the NVIDIA AI Podcast, and just about every remote worker. He's the CEO and co-founder of Otter.ai, which uses AI to produce speech-to-text transcriptions in real time or from recording uploads. The platform has a range of capabilities, including differentiating between multiple people.



Batch training in tf 2.0

When performing custom batch training in the training loop, which one should be used: tf.GradientTape or train_on_batch? What is the difference?
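
For reference, here is a minimal sketch of both approaches on a dummy model and batch; tf.GradientTape gives you full control over the forward pass, loss and gradient application, while train_on_batch lets the compiled Keras model do that work for a single batch:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()


# Option 1: custom training step with tf.GradientTape -- you write the
# forward pass, loss computation, and gradient application yourself.
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss


# Option 2: Keras train_on_batch -- the compiled model handles the
# gradient computation and optimizer update for one batch.
model.compile(optimizer=optimizer, loss=loss_fn)

x = tf.random.normal((32, 20))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)

loss_custom = train_step(x, y)          # tf.GradientTape path
loss_keras = model.train_on_batch(x, y)  # Keras path
```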

submitted by /u/SuccMyStrangerThings



I got this error while trying to run the webcam_demo.py example in the Posenet library from TensorFlow. How do I resolve this? #46575

I got this error/warning while trying to run the webcam_demo.py example in the Posenet library from TensorFlow. How do I resolve this?

This is the Git repo from which I forked this code: posenet-python

And this is my output:

>>>
RESTART: A:\Python\Scripts\Posenet-Forked — OG\Code\posenet-python-master\webcam_demo.py

Cannot find model file ./_models\model-mobilenet_v1_101.pb, converting from tfjs...

WARNING:tensorflow:From A:\Python\lib\site-packages\tensorflow\python\tools\freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.

Traceback (most recent call last):
  File "A:\Python\Scripts\Posenet-Forked — OG\Code\posenet-python-master\webcam_demo.py", line 66, in <module>
    main()
  File "A:\Python\Scripts\Posenet-Forked — OG\Code\posenet-python-master\webcam_demo.py", line 20, in main
    model_cfg, model_outputs = posenet.load_model(args.model, sess)
  File "A:\Python\Scripts\Posenet-Forked — OG\Code\posenet-python-master\posenet\model.py", line 42, in load_model
    convert(model_ord, model_dir, check=False)
  File "A:\Python\Scripts\Posenet-Forked — OG\Code\posenet-python-master\posenet\converter\tfjs2python.py", line 198, in convert
    initializer_nodes="")
  File "A:\Python\lib\site-packages\tensorflow\python\tools\freeze_graph.py", line 361, in freeze_graph
    checkpoint_version=checkpoint_version)
  File "A:\Python\lib\site-packages\tensorflow\python\tools\freeze_graph.py", line 190, in freeze_graph_with_def_protos
    var_list=var_list, write_version=checkpoint_version)
  File "A:\Python\lib\site-packages\tensorflow\python\training\saver.py", line 835, in __init__
    self.build()
  File "A:\Python\lib\site-packages\tensorflow\python\training\saver.py", line 847, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "A:\Python\lib\site-packages\tensorflow\python\training\saver.py", line 885, in _build
    build_restore=build_restore)
  File "A:\Python\lib\site-packages\tensorflow\python\training\saver.py", line 489, in _build_internal
    names_to_saveables)
  File "A:\Python\lib\site-packages\tensorflow\python\training\saving\saveable_object_util.py", line 362, in validate_and_slice_inputs
    for converted_saveable_object in saveable_objects_for_op(op, name):
  File "A:\Python\lib\site-packages\tensorflow\python\training\saving\saveable_object_util.py", line 223, in saveable_objects_for_op
    yield ResourceVariableSaveable(variable, "", name)
  File "A:\Python\lib\site-packages\tensorflow\python\training\saving\saveable_object_util.py", line 95, in __init__
    self.handle_op = var.op.inputs[0]
IndexError: tuple index out of range
>>>

My GitHub issue link

submitted by /u/Section_Disastrous



TensorFlow implementation of BLEU score

I'm looking for a TensorFlow implementation of BLEU score similar to the NLTK implementation. The reason I can't use NLTK is that I need to calculate the BLEU score for each TPU replica's results; I cannot append predictions across replicas and then use NLTK to calculate BLEU for the entire corpus, as I would prefer. The reason is described in this Stack Overflow post: https://stackoverflow.com/questions/60842868/how-can-i-merge-the-results-from-strategy-in-tensorflow-2
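
A full BLEU implementation in pure TensorFlow ops is fairly involved. As a starting point only, here is a sketch of clipped unigram precision (one ingredient of BLEU, without higher-order n-grams or the brevity penalty) written entirely with graph ops, so it could run inside a per-replica step:

```python
import tensorflow as tf


def unigram_precision(reference, candidate):
    """Clipped unigram precision for a single sentence pair.

    reference, candidate: 1-D int32 tensors of token ids.
    """
    # Vocabulary of tokens appearing in either sentence.
    vocab, _ = tf.unique(tf.concat([reference, candidate], axis=0))

    def clipped_count(tok):
        in_cand = tf.reduce_sum(tf.cast(tf.equal(candidate, tok), tf.int32))
        in_ref = tf.reduce_sum(tf.cast(tf.equal(reference, tok), tf.int32))
        return tf.minimum(in_cand, in_ref)  # clip matches by reference count

    matches = tf.reduce_sum(
        tf.map_fn(clipped_count, vocab, fn_output_signature=tf.int32))
    return tf.cast(matches, tf.float32) / tf.cast(tf.size(candidate), tf.float32)


# Usage example:
ref = tf.constant([5, 7, 7, 9], dtype=tf.int32)
cand = tf.constant([7, 7, 7, 9, 2], dtype=tf.int32)
print(unigram_precision(ref, cand).numpy())  # 3 clipped matches / 5 tokens = 0.6
```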

submitted by /u/International_Fix_94



Need help with intent based personal assistant / chatbot

Hello all! I have spent some time working on my chatbot and it's working pretty well. I have a JSON file that stores all my intents, but I have come across a problem that I don't know how to solve. I want to have an "other" tag. This tag should be called whenever the input doesn't match any other patterns or tags, so that if no tags are matched, I have a separate set of instructions for my program to follow. Does anyone have any idea how I can go about this? Is there a certain pattern I should have? Also, another question: what if a certain pattern has variables in it, for example, "Play Clocks by Coldplay"? In the case of "Play {songName} by {artist}", a constant pattern cannot be used, since the user can come up with any combination of song names and artists. Any help is appreciated. Thank you in advance!
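
One common approach, independent of any particular framework, is to threshold the classifier's confidence and fall back to an "other" handler, and to match slot-style patterns such as "Play {songName} by {artist}" with a regular expression. A minimal hypothetical sketch:

```python
import re
import numpy as np

CONFIDENCE_THRESHOLD = 0.6  # tune on held-out examples


def route_intent(probabilities, tags):
    """probabilities: softmax output of the intent classifier (1-D array);
    tags: tag names in the same order as the model's outputs."""
    best = int(np.argmax(probabilities))
    if probabilities[best] < CONFIDENCE_THRESHOLD:
        return "other"            # nothing matched confidently -> fallback
    return tags[best]


# Slot extraction for patterns with variables, e.g. "Play {songName} by {artist}".
PLAY_PATTERN = re.compile(r"^play\s+(?P<songName>.+?)\s+by\s+(?P<artist>.+)$",
                          re.IGNORECASE)


def parse_play_command(text):
    match = PLAY_PATTERN.match(text.strip())
    if match:
        return {"intent": "play_song", **match.groupdict()}
    return None


print(parse_play_command("Play Clocks by Coldplay"))
# {'intent': 'play_song', 'songName': 'Clocks', 'artist': 'Coldplay'}
print(route_intent(np.array([0.3, 0.35, 0.35]), ["greeting", "weather", "play_song"]))
# 'other'
```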

submitted by /u/Rafhay101



TensorFlow: Confusion on how JavaScript bundling with Rollup affects exports/namespaces/etc.

submitted by /u/ApproximateIdentity



A Trusted Companion: AI Software Keeps Drivers Safe and Focused on the Road Ahead

NVIDIA DRIVE IX is an open, scalable cockpit software platform that provides AI functions to enable a full range of in-cabin experiences, including intelligent visualization with augmented reality and virtual reality, conversational AI and interior sensing. 
