
AI-Powered Video Analytics at GTC: Making Physical Spaces Smarter And Safer

Find out how to make our important physical spaces smarter using the most widely deployed IoT devices – video cameras.

NVIDIA GTC will be held April 12-16. With over 1,400 breakthrough sessions for all technical levels, registrants get access to topic experts, networking events, and a front-row seat to NVIDIA CEO Jensen Huang’s keynote.

There’s a deep lineup of Intelligent Video Analytics (IVA) sessions covering applications in smart spaces such as airports, railway transit hubs, smart traffic systems, and autonomous machines, along with developer sessions on vision-AI optimization with pre-trained models, the DeepStream SDK, and the Transfer Learning Toolkit.

Here are a few spotlight sessions to look out for:

  • [S32797] Train Smarter not Harder with NVIDIA Pre-trained models and Transfer Learning Toolkit 3.0
    Learn how the world’s top AI teams combine pre-trained models and transfer learning to supercharge their AI vision development.
  • [S32798] Bringing Scale and Optimization to Video Analytics Pipelines with NVIDIA DeepStream SDK
    This talk provides a sneak peek at the next version of DeepStream. With an all-new intuitive GUI and development tools, it offers a zero-coding paradigm that further simplifies application development.
  • [CWES1127] Transfer Learning Toolkit and DeepStream SDK for Vision AI/Intelligent Video Analytics
    Get your questions answered on how to build and deploy vision AI applications for traffic engineering, parking management, sports analytics, retail, or smart workspaces for occupancy analytics and more.
  • [S31869] How Cities are Turning AI into Cost Savings
    Learn how the City of Raleigh, North Carolina, is building new AI-powered video analytics capabilities with ESRI’s ArcGIS into their traffic operations and turning real-time roadway insights into cost savings.
  • [S32032] Accelerating Azure Edge AI Vision Deployments
    Explore how GPU-accelerated model training and inference can span from the cloud to the edge, and how to leverage Azure Machine Learning and Live Video Analytics to create compelling solutions.
  • [S31845] AI-Enabled Video Analytics Improves Airline Operational Efficiency
    Get insights on how Seattle-Tacoma International Airport (SEA-TAC) is implementing AI video analytics to help improve overall airport operations.
  • [E31902] How AI Enabled Video Analytics Saves Lives and Money at Metropolitan Rail Networks
    Learn how AI-based video analytics solutions can be used to save money and increase safety and operational efficiency in metro rail networks, with a case study from the UK rail industry.
  • [SS32770] Driving Operational Efficiency with NVIDIA Transfer Learning Toolkit, Pre-trained Models, and DeepStream SDK
    Learn how to build business value from vision AI deployments using NVIDIA TLT, pre-trained models, and DeepStream SDK with ADLINK, including examples such as detecting loitering and intrusion.
  • [SS33151] Designing AI Enabled Real-time Video Analytics at Scale
    Join experts from Quantiphi to learn how to address several engineering and costing challenges faced when going from an intelligent video analytics pilot to large-scale implementation.
  • [SS33127] Building Efficient and Intelligent Networks Using Network Edge AI Platform
    Lanner will partner with Tensor Network to discuss how NVIDIA AI can be structured in a networked approach where AI workloads are distributed across edge networks.

Check out additional speakers and sessions on the Intelligent Video Analytics topic page. Or, if you’re already registered, check out the pre-packaged playlists to get your schedule started.

>> Register for free on the GTC website

Image credit: Datafromsky


Sweden’s AI Catalyst: 300-Petaflops Supercomputer Fuels Nordic Research

A Swedish physician who helped pioneer chemistry 200 years ago just got another opportunity to innovate. A supercomputer officially christened in honor of Jöns Jacob Berzelius aims to establish AI as a core technology of the next century. Berzelius (pronounced behr-zeh-LEE-us) invented chemistry’s shorthand (think H2O) and discovered a handful of elements, including silicon.


Flower Identifier

Hi folks, so I have followed this tutorial and I’m pretty new to TensorFlow, but basically what I need to know is: are there any tutorials similar to this that teach you how to make the model run in an app, but instead of live detection through the camera, it detects images from the user’s gallery/camera roll? Any links/advice would be great, thanks: https://codelabs.developers.google.com/codelabs/recognize-flowers-with-tensorflow-on-android/#0
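
For reference, here is a minimal Python sketch of the single-image approach, assuming a TFLite flower model like the one the codelab exports (the model path, image file name, and 224x224 input size are assumptions):

import numpy as np
import tensorflow as tf

# Load the TFLite model (path is hypothetical) and allocate buffers.
interpreter = tf.lite.Interpreter(model_path="flowers.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Read one image from disk instead of grabbing a live camera frame.
img = tf.io.decode_image(tf.io.read_file("photo.jpg"), channels=3)
img = tf.image.resize(img, (224, 224)) / 255.0  # assumed input size and scaling

interpreter.set_tensor(inp["index"], np.expand_dims(img.numpy().astype(np.float32), 0))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # per-class scores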

submitted by /u/Lostcause89


John Snow Labs Spark-NLP 3.0.0: Supporting Spark 3.x, Scala 2.12, more Databricks runtimes, more EMR versions, performance improvements & lots more

submitted by /u/dark-night-rises

GTC 21: Top 5 Game Development Technical Sessions

This year at GTC, we have a new track for Game Developers, where you can attend sessions for free, covering the latest in ray tracing, optimizing game performance, and content creation in NVIDIA Omniverse.

Check out our top sessions below for those working in the gaming industry:

  1. Ray Tracing in Cyberpunk 2077

    Learn how the developers at CD Projekt RED used extensive ray-tracing techniques to create the game’s visuals and bring the bustling Night City to life.

    Evgeny Makarov, Developer Technology Engineer, NVIDIA
    Jakub Knapik, Art Director, CD Projekt RED

  2. Our Sniper Elite 4 Journey – Lessons in Porting AAA Action Games to the Nintendo Switch

    The Asura engine, entirely developed in-house by Rebellion, has given the independent developer/publisher maximum creative and technical freedom. Rebellion has overcome enormous technical challenges and built on years of Nintendo development experience to bring their flagship game, “Sniper Elite 4,” to the Switch platform. Learn how a crack team took a AAA game targeting PS4/XB1 and got it running on a Nintendo Switch. Through a journey of Switch releases, you’ll see how Rebellion optimized “Sniper Elite 4” beyond what anyone thought was possible to deliver a beautiful and smooth experience.

    Arden Aspinall, Studio Head, Rebellion North

  3. Ray Tracing in One Weekend

    This presentation assumes the audience knows nothing about ray tracing. It is a guide for the first day in country. But rather than a broad survey, it digs deep on one way to make great-looking images (the one discussed in the free ebook Ray Tracing in One Weekend). There will be no API or language discussed: all pseudocode. There will be no integrals, density functions, derivatives, or other topics inappropriate for polite company.

    Pete Shirley, Distinguished Research Engineer, NVIDIA

  4. LEGO Builder’s Journey: Rendering Realistic LEGO Bricks Using Ray Tracing in Unity

    Learn how we render realistic-looking LEGO dioramas in real time using Unity’s High Definition Render Pipeline and ray tracing. Starting from a stylized look, we upgraded the game to use realistic rendering on PC to enhance immersion in the gameplay and story. From lighting and materials to geometry processing and post effects, you’ll get a deep insight into what we’ve done to get as close to realism as possible with a small team in a limited time, all while still using the same assets for other versions of the game.

    Mikkel Fredborg, Technical Lead, Light Brick Studio

  5. Introduction to Real Time Ray Tracing with Minecraft

    This talk is aimed at graphics engineers who have little or no experience with ray tracing. It serves as a gentle introduction to many topics, including “What is ray tracing?”, “How many rays do you need to make an image?”, “The importance of [importance] sampling (and more importantly, what is importance sampling?)”, “Denoising”, and “The problem with small bright things”. Along the way, you will learn about specific implementation details from Minecraft.

    Oli Wright, GeForce DevTech, NVIDIA

Visit the GTC website to view the entire Game Development track and to register for the free conference.


Researchers Take Steps Towards Autonomous AI-Powered Exoskeleton Legs

University of Waterloo researchers are using deep learning and computer vision to develop autonomous exoskeleton legs to help users walk, climb stairs, and avoid obstacles.

The project, described in an early-access paper in IEEE Transactions on Medical Robotics and Bionics, fits users with wearable cameras. AI software processes the camera’s video stream and is being trained to recognize surrounding features such as stairs and doorways, then determine the best movements to take.

“Our control approach wouldn’t necessarily require human thought,” said Brokoslaw Laschowski, Ph.D. candidate in systems design engineering and lead author on the project. “Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves.”

People who rely on exoskeletons for mobility typically operate the devices using smartphone apps or joysticks. 

“That can be inconvenient and cognitively demanding,” said Laschowski, who works with engineering professor John McPhee, the Canada Research Chair in Biomechatronic System Dynamics. “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”

The researchers are using NVIDIA TITAN GPUs for neural network training and real-time image classification of walking environments. They collected 923,000 images of human locomotion environments to create a database dubbed ExoNet, which was used to train the initial model, developed using the TensorFlow deep learning framework.
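
As a rough illustration of that setup, here is a minimal TensorFlow sketch of an environment classifier along these lines (the backbone, input size, and class names are assumptions for illustration, not details from ExoNet):

import tensorflow as tf

# Hypothetical walking-environment classifier: a pre-trained backbone
# with a small classification head over illustrative classes
# (e.g. stairs / doorway / level ground).
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", weights="imagenet",
    input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=10)  # train_ds: a tf.data pipeline over labeled frames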


Still in development, the exoskeleton system must learn to operate on uneven terrain and avoid obstacles before becoming fully functional. To boost battery life, the team plans to use human motion to help charge the devices.

The recent paper analyzed how the power a person uses to go from a sitting to a standing position could create biomechanical energy usable for charging the robotic exoskeletons.

Read the University of Waterloo news release for more >> 

The researchers’ latest paper is available here. The original paper, published in 2019 at the IEEE International Conference on Rehabilitation Robotics, was a finalist for a best paper award.


Creating an MLP in TF, and extracting a single run’s seed

Lurked Reddit for a while but need some help with something I’m programming. I’m trying to create a multilayer perceptron in TensorFlow – from what I understand, an MLP is almost like a basic form of neural network that can be built upon to become other networks (adding convolution layers turns it into a CNN). In TensorFlow/Keras I am creating a Sequential object and then adding layers to it – is this how an MLP is meant to be created with those libraries, or is there a more direct way?

Also, I know that whenever my model is compiled it generates random weight distributions from a seed – is there a way I can extract the seed used from a trained model so I can keep the one that produces the smallest loss value?
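
For what it’s worth, here is a minimal sketch of the Sequential pattern described above. Note that TensorFlow doesn’t expose a model’s initialization seed after the fact, so the usual workaround is to fix the global seed up front and record it yourself (the layer sizes are illustrative):

import tensorflow as tf

SEED = 42  # record the seed yourself; TF won't hand it back after training
tf.random.set_seed(SEED)

# A plain MLP: Dense layers stacked on a Sequential model, no convolutions.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])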

submitted by /u/Greedy-Snow808


MIT Intro to Deep Learning: how to run the exercises locally on 4GB or less of GPU memory

Hello everybody,

that’s my first post here, so please be nice 🙂 I’m totally new to TensorFlow, so this is a beginner’s guide and not a deep dive.

As you may know, the new free MIT Intro to Deep Learning course is online. Some of the models given there are kinda memory hungry, so here’s the solution:

CAUTION: think while copying from online tutorials!

First of all, it is a blessing to work with the tensorflow/tensorflow:latest-gpu Docker container, so yeah, just do it.

First, some dependencies: the notebooks need python3-opencv, and lab 1 needs abcmidi and timidity.

apt install python3-opencv abcmidi timidity 

To edit the code in a personal directory and not in the container, you need a non-root user:

adduser nonroot 

Log in as that user:

su - nonroot 

Install your editor; it’s JupyterLab for me:

pip install jupyterlab 

Start JupyterLab on 0.0.0.0 in the bound directory:

jupyter lab --ip 0.0.0.0 

Add these lines at the top, before importing TensorFlow:

import os
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

And these after importing tensorflow as tf:

physical_devices = tf.config.list_physical_devices('GPU')
try:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
except:
    # Invalid device or cannot modify virtual devices once initialized.
    pass

Tip: add

%config Completer.use_jedi = False 

if you have problems with autocomplete.

I hope that helps somebody!

submitted by /u/deep-and-learning


Power Your Big Data Analytics with the Latest NVIDIA GPUs in the Cloud

Dask is an accessible and powerful solution for natively scaling Python analytics. Using familiar interfaces, it lets data scientists who know PyData tools scale big data workloads easily. Dask is such a powerful tool that we have adopted it throughout a variety of projects at NVIDIA. When paired with RAPIDS, data practitioners can distribute big data workloads across massive NVIDIA GPU clusters.
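
As a small illustration, a RAPIDS + Dask workload can look almost identical to the pandas code it replaces; this sketch uses dask_cudf with a hypothetical dataset path and column names:

import dask_cudf

# Read a partitioned dataset into GPU memory across the cluster
# (the bucket path and column names are placeholders).
df = dask_cudf.read_csv("s3://my-bucket/transactions-*.csv")
result = df.groupby("customer_id").amount.mean().compute()
print(result.head())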

To make it easier to leverage NVIDIA accelerated compute, we’ve added support for launching RAPIDS + Dask on the latest NVIDIA A100 GPUs in the cloud, allowing users and enterprises to get the most out of their data.

Spin Up NVIDIA GPU Clusters Quickly with Dask Cloud Provider

While Dask makes scaling analytics workloads easy, distributing workloads in cloud environments can be tricky. Dask-CloudProvider is a package that provides native cloud integration, making it simple to get started on Amazon Web Services, Google Cloud Platform, or Microsoft Azure. Using native cloud tools, data scientists, machine learning engineers, and DevOps engineers can stand up infrastructure and start running workloads in no time.

RAPIDS builds upon Dask-CloudProvider to make spinning up the most powerful NVIDIA GPU instances easy with raw virtual machines. While AWS, GCP, and Azure have great managed services for data scientists, these implementations can take time to adopt new GPU architectures. With Dask-CloudProvider and RAPIDS, users and enterprises can leverage the latest NVIDIA A100 GPUs, providing 20x more performance than the previous generation. With 40GB of GPU memory each and 600GB/s NVLink connectivity, NVIDIA A100 GPUs are a supercharged workhorse for enterprise-scale data science workloads. Dask-CloudProvider and RAPIDS provide an easy way to get started with A100s without having to configure raw VMs from scratch.
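
A minimal sketch of what that spin-up can look like on AWS, assuming dask-cloudprovider and dask-cuda are installed (the instance type, container image, and worker count are illustrative):

from dask.distributed import Client
from dask_cloudprovider.aws import EC2Cluster

# Launch GPU VMs running a RAPIDS container as Dask workers
# (instance type and image tag are placeholders).
cluster = EC2Cluster(
    instance_type="p4d.24xlarge",            # A100-based instances
    docker_image="rapidsai/rapidsai:latest",
    worker_class="dask_cuda.CUDAWorker",
    n_workers=2,
)
client = Client(cluster)  # work submitted through this client runs on the GPU workers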

RAPIDS strives to make NVIDIA accelerated data science accessible to a broader data-driven audience. With Dask, RAPIDS allows data scientists to solve enterprise-scale problems in less time and with less pain. For a deeper understanding of the latest RAPIDS features and integrations, read more here.


Build Your Own AI-Powered Q&A Service

Conversational AI, the ability for machines to understand and respond to human queries, is being widely adopted across industries. Enterprises see the value of this technology in solutions like chatbots and virtual assistants, which better support their customers while lowering the cost of customer service.

You can now build your own AI-powered Q&A service with the step-by-step instructions provided in this four-part blog series. All the software resources you will need, from deep learning frameworks to pre-trained models to inference engines, are available from the NVIDIA NGC catalog, a hub of GPU-optimized software.

The blog series walks through:

  1. Part 1: Leveraging pre-trained models to build custom models with your training dataset 
  2. Part 2: Optimizing the custom model to provide lower latency and higher throughput 
  3. Part 3: Running inference on your custom models 
  4. Part 4: Deploying the virtual assistant in the cloud 

While the blog series uses GPU-powered cloud instances, the instructions will work for on-prem systems as well.
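
Once such a service is deployed, querying it is a simple HTTP request. The endpoint URL and JSON schema in this sketch are hypothetical stand-ins, not the actual API from the series:

import requests

# Hypothetical request to a deployed Q&A endpoint.
resp = requests.post(
    "http://localhost:8000/qa",
    json={
        "question": "What is conversational AI?",
        "context": "Conversational AI is the ability for machines to "
                   "understand and respond to human queries.",
    },
)
print(resp.json())  # e.g. the extracted answer span and a confidence score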

Build your virtual assistant today with these instructions, or join us at NVIDIA GTC for free on April 13th for our session “Accelerating AI Workflows at GTC” to learn step-by-step how to build a conversational AI solution using artifacts from the NGC catalog.