Categories
Misc

NVIDIA Launches Storefront in AWS Marketplace to Accelerate and Simplify AI Workflows

To help data scientists and developers simplify their AI workflows, we have collaborated with Amazon Web Services (AWS) to bring NVIDIA NGC software resources directly to the AWS Marketplace.

Enterprises across industries are adopting AI to drive business growth and they’re relying on cloud infrastructure to develop and deploy their solutions.

The AWS Marketplace is where customers find, buy and immediately start using software and services that run on AWS.

The NVIDIA NGC catalog provides GPU-optimized AI software for data engineers, data scientists, developers, and DevOps teams so they can focus on building and deploying their AI solutions faster.

More than 250,000 unique users have now downloaded more than 1 million AI containers, pretrained models, application frameworks, Helm charts and other machine learning resources from the NGC catalog.

Available free of charge, the software from the NGC catalog is optimized to run on NVIDIA GPU cloud instances, such as the Amazon EC2 P4d instance featuring the record-breaking performance of NVIDIA A100 Tensor Core GPUs.

Instant Access to Performance-Optimized AI Software

NGC software in AWS Marketplace provides a number of benefits to help data scientists and developers build AI solutions.

  • Faster software discovery: Through the AWS Marketplace, developers and data scientists can access the latest versions of NVIDIA’s AI software with a single click.
  • The latest NVIDIA software: The NGC software in AWS Marketplace is automatically updated to the latest versions as soon as they’re available in the NGC catalog. The software is constantly optimized, and the monthly releases give users access to the latest features and performance improvements.
  • Simplified software deployment: Users of Amazon EC2, Amazon SageMaker, Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS) can quickly subscribe, pull and run NGC software on NVIDIA GPU instances, all within the AWS console. Additionally, SageMaker users can simplify their workflows by eliminating the need to first store a container in Amazon Elastic Container Registry (ECR).
  • Continuous integration and development: NGC Helm charts are also available in AWS Marketplace to help DevOps teams quickly and consistently deploy their services.

Here’s a step-by-step guide to quickly discover the NGC software and run an object detection service on Amazon EC2 instances.

Accelerate your AI development on NVIDIA GPU-powered AWS services today with the NGC catalog in AWS Marketplace.

Categories
Misc

A WebAssembly Powered Augmented Reality Sudoku Solver


submitted by /u/SpatialComputing

Categories
Misc

How to implement recursive neural networks in TensorFlow?

I am trying to implement a very basic recursive neural network into my linear regression analysis project in TensorFlow. It takes two inputs, plus a third value: what it previously calculated. My project tries to calculate something across the next x number of years, and after the first year I want it to keep taking the value of the last year. Currently, my training data has two inputs, not three, predicting one output, so how could I make it recursive, so it keeps on passing in the value from the last year to calculate the next? To explain slightly further, if it were to calculate across the next 5 years:

1st year:

Input 1: 10

Input 2: 20

(Maybe need Input 3 here, but with a value that has no effect on the linear regression model)

Output: 30

2nd year:

Input 1: 11

Input 2: 22

Input 3: 30 (1st year output)

Output: 35

3rd Year:

Input 1: 12

Input 2: 24

Input 3: 35 (2nd year output)

Output: 40
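A minimal sketch of the feedback loop described above, assuming the trained model can be wrapped as a callable taking three inputs (the model here is a stand-in linear function, not a real network):

```python
def predict_year(input1, input2, prev_output):
    # Stand-in for a trained model; a real network would replace this formula.
    return 0.5 * input1 + 0.5 * input2 + 0.5 * prev_output

def predict_years(inputs, initial_prev=0.0):
    """Feed each year's prediction back in as the third input for the next year."""
    outputs = []
    prev = initial_prev
    for input1, input2 in inputs:
        prev = predict_year(input1, input2, prev)
        outputs.append(prev)
    return outputs

# Five years of (input1, input2) pairs:
years = [(10, 20), (11, 22), (12, 24), (13, 26), (14, 28)]
print(predict_years(years))
```

With a real Keras model, predict_year would call model.predict on the three inputs instead of the hand-written formula; the loop itself stays the same.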

submitted by /u/HexadecimalHero


Categories
Misc

Keras model predicts correctly, but always at 100% confidence

I’m trying to create my own model to classify a face as either wearing a mask or not, and by what ratio. This is my Colab notebook, with prediction outputs at the end.

The question is:

How do I make the model predict with confidence, for example:
[0.966 0.034]?

Note: I didn’t use binary_crossentropy with a one-neuron dense layer on purpose for this model, as I am planning on adding a 3rd class (mask worn incorrectly) as soon as I have a better dataset.
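One frequent cause of saturated [1. 0.] predictions is a mismatch between the final activation and the loss (for example, training with from_logits=True while also applying a softmax layer, so softmax is effectively applied twice), or simply an overconfident network; temperature scaling is a common way to soften outputs. A pure-Python sketch of softmax with temperature, assuming the raw logits are available (names here are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature -> softer distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 0.0]            # raw scores from the last dense layer
print(softmax(logits))          # sharp distribution
print(softmax(logits, 4.0))     # temperature-softened distribution
```

If the outputs are already pinned at exactly 0 and 1, the fix is usually in the loss/activation wiring rather than post-hoc scaling.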

submitted by /u/LGariv


Categories
Misc

I have a problem downloading tensorflow

I have tried to install tensorflow with pip install tensorflow but only get “ERROR: Could not find a version that satisfies the requirement tensorflow” and “ERROR: No matching distribution found for tensorflow”. I have updated pip to 20.3.3 and I have Python 3.9.1. I’m running it in PyCharm, cmd, and Visual Studio. How can I fix this?
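At the time of this post, TensorFlow wheels were only published for Python 3.6–3.8, so pip on Python 3.9 finds no matching distribution (Python 3.9 support arrived later, with TensorFlow 2.5). A small sketch of the version guard that explains the error; the supported range below reflects TF 2.4 and is an assumption that changes with newer releases:

```python
import sys

# Python versions with published TensorFlow 2.4 wheels (assumption; check the
# official install guide for the current range).
SUPPORTED = [(3, 6), (3, 7), (3, 8)]

def has_tensorflow_wheel(major, minor):
    """True if pip could find a TensorFlow 2.4 wheel for this interpreter version."""
    return (major, minor) in SUPPORTED

print(has_tensorflow_wheel(3, 8))   # a wheel exists for Python 3.8
print(has_tensorflow_wheel(3, 9))   # no wheel -> pip's "no matching distribution"
print(sys.version_info[:2])         # the interpreter actually running pip
```

The practical fix in this situation is to install a supported Python version (or a newer TensorFlow once it supports 3.9) rather than changing pip settings.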

submitted by /u/Creeperhaten1


Categories
Offsites

ToTTo: A Controlled Table-to-Text Generation Dataset

In the last few years, research in natural language generation, used for tasks like text summarization, has made tremendous progress. Yet, despite achieving high levels of fluency, neural systems can still be prone to hallucination (i.e., generating text that is understandable, but not faithful to the source), which can prohibit these systems from being used in many applications that require high degrees of accuracy. Consider an example from the Wikibio dataset, where the neural baseline model tasked with summarizing a Wikipedia infobox entry for Belgian football player Constant Vanden Stock summarizes incorrectly that he is an American figure skater.

While the process of assessing the faithfulness of generated text to the source content can be challenging, it is often easier when the source content is structured (e.g., in tabular format). Moreover, structured data can also test a model’s ability for reasoning and numerical inference. However, existing large scale structured datasets are often noisy (i.e., the reference sentence cannot be fully inferred from the tabular data), making them unreliable for the measurement of hallucination in model development.

In “ToTTo: A Controlled Table-To-Text Generation Dataset”, we present an open domain table-to-text generation dataset created using a novel annotation process (via sentence revision) along with a controlled text generation task that can be used to assess model hallucination. ToTTo (shorthand for “Table-To-Text”) consists of 121,000 training examples, along with 7,500 examples each for development and test. Due to the accuracy of annotations, this dataset is suitable as a challenging benchmark for research in high precision text generation. The dataset and code are open-sourced on our GitHub repo.

Table-to-Text Generation
ToTTo introduces a controlled generation task in which a given Wikipedia table with a set of selected cells is used as the source material for the task of producing a single sentence description that summarizes the cell contents in the context of the table. The example below demonstrates some of the many challenges posed by the task, such as numerical reasoning, a large open-domain vocabulary, and varied table structure.

Example in the ToTTo dataset, where given the source table and set of highlighted cells (left), the goal is to generate a one sentence description, such as the “target sentence” (right). Note that generating the target sentence would require numerical inference (eleven NFL seasons) and understanding of the NFL domain.
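For illustration, a single ToTTo record can be sketched roughly like the JSON examples in the released dataset. The field names below follow the public release, but treat them as an approximation, and the table content here is invented:

```python
# A sketch of one ToTTo record (field names per the public release; the
# table content is invented for illustration).
example = {
    "table_page_title": "Example Player",
    "table_section_title": "Career statistics",
    "table": [
        [{"value": "Season", "is_header": True},
         {"value": "Games", "is_header": True}],
        [{"value": "2012", "is_header": False},
         {"value": "16", "is_header": False}],
    ],
    # (row, column) indices of the cells the target sentence must describe.
    "highlighted_cells": [[1, 0], [1, 1]],
    "sentence_annotations": [{
        "final_sentence": "Example Player appeared in 16 games in 2012."
    }],
}

# The controlled task: generate final_sentence from the highlighted cells,
# using the page and section titles as context.
highlighted = [example["table"][r][c]["value"]
               for r, c in example["highlighted_cells"]]
print(highlighted)
```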

Annotation Process
Designing an annotation process to obtain natural but also clean target sentences from tabular data is a significant challenge. Many datasets like Wikibio and RotoWire pair naturally occurring text heuristically with tables, a noisy process that makes it difficult to disentangle whether hallucination is primarily caused by data noise or model shortcomings. On the other hand, one can elicit annotators to write sentence targets from scratch, which are faithful to the table, but the resulting targets often lack variety in terms of structure and style.

In contrast, ToTTo is constructed using a novel data annotation strategy in which annotators revise existing Wikipedia sentences in stages. This results in target sentences that are clean, as well as natural, containing interesting and varied linguistic properties. The data collection and annotation process begins by collecting tables from Wikipedia, where a given table is paired with a summary sentence collected from the supporting page context according to heuristics, such as word overlap between the page text and the table and hyperlinks referencing tabular data. This summary sentence may contain information not supported by the table and may contain pronouns with antecedents found in the table only, not the sentence itself.

The annotator then highlights the cells in the table that support the sentence and deletes phrases in the sentence that are not supported by the table. They also decontextualize the sentence so that it is standalone (e.g., with correct pronoun resolution) and correct grammar, where necessary.

We show that annotators obtain high agreement on the above task: 0.856 Fleiss Kappa for cell highlighting, and 67.0 BLEU for the final target sentence.
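For reference, Fleiss' kappa measures chance-corrected agreement when a fixed number of raters categorize each item. A compact sketch of the computation (a generic implementation, not the authors' code):

```python
def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters who put item i into category j;
    every item must be rated by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Per-item agreement: fraction of rater pairs that agree on the item.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement from the marginal category proportions.
    n_categories = len(ratings[0])
    p_e = sum(
        (sum(row[j] for row in ratings) / (n_items * n_raters)) ** 2
        for j in range(n_categories)
    )
    return (p_bar - p_e) / (1 - p_e)

# Two items, three raters, two categories, perfect agreement:
print(fleiss_kappa([[3, 0], [0, 3]]))  # -> 1.0
```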

Dataset Analysis
We conducted a topic analysis on the ToTTo dataset over 44 categories and found that the Sports and Countries topics, each of which consists of a range of fine-grained topics, e.g., football/olympics for sports and population/buildings for countries, together comprise 56.4% of the dataset. The remaining 43.6% is composed of a much broader set of topics, including Performing Arts, Transportation, and Entertainment.

Furthermore, we conducted a manual analysis of the different types of linguistic phenomena in the dataset over 100 randomly chosen examples. The table below summarizes the fraction of examples that require reference to the page and section titles, as well as some of the linguistic phenomena in the dataset that potentially pose new challenges to current systems.

Linguistic Phenomena Percentage
Require reference to page title 82%
Require reference to section title 19%
Require reference to table description 3%
Reasoning (logical, numerical, temporal etc.) 21%
Comparison across rows/columns/cells 13%
Require background information 12%

Baseline Results
We present some baseline results of three state-of-the-art models from the literature (BERT-to-BERT, Pointer Generator, and the Puduppully 2019 model) on two evaluation metrics, BLEU and PARENT. In addition to reporting the score on the overall test set, we also evaluate each model on a more challenging subset consisting of out-of-domain examples. As the table below shows, the BERT-to-BERT model performs best in terms of both BLEU and PARENT. Moreover, all models achieve considerably lower performance on the challenge set, indicating the difficulty of out-of-domain generalization.

Model BLEU (overall) PARENT (overall) BLEU (challenge) PARENT (challenge)
BERT-to-BERT 43.9 52.6 34.8 46.7
Pointer Generator 41.6 51.6 32.2 45.2
Puduppully et al. 2019 19.2 29.2 13.9 25.8

While automatic metrics can give some indication of performance, they are not currently sufficient for evaluating hallucination in text generation systems. To better understand hallucination, we manually evaluate the top performing baseline, to determine how faithful it is to the content in the source table, under the assumption that discrepancies indicate hallucination. To compute the “Expert” performance, for each example in our multi-reference test set, we held out one reference and asked annotators to compare it with the other references for faithfulness. As the results show, the top performing baseline appears to hallucinate information ~20% of the time.

Model Faithfulness (overall) Faithfulness (challenge)
Expert 93.6 91.4
BERT-to-BERT 76.2 74.2

Model Errors and Challenges
In the table below, we present a selection of the observed model errors to highlight some of the more challenging aspects of the ToTTo dataset. We find that state-of-the-art models struggle with hallucination, numerical reasoning, and rare topics, even when using cleaned references (errors in red). The last example shows that even when the model output is correct it is sometimes not as informative as the original reference which contains more reasoning about the table (shown in blue).

Reference: in the 1939 currie cup, western province lost to transvaal by 17–6 in cape town.
Model prediction: the first currie cup was played in 1939 in transvaal1 at newlands, with western province winning 17–6.

Reference: a second generation of microdrive was announced by ibm in 2000 with increased capacities at 512 mb and 1 gb.
Model prediction: there were 512 microdrive models in 2000: 1 gigabyte.

Reference: the 1956 grand prix motorcycle racing season consisted of six grand prix races in five classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc.
Model prediction: the 1956 grand prix motorcycle racing season consisted of eight grand prix races in five classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc.

Reference: in travis kelce’s last collegiate season, he set personal career highs in receptions (45), receiving yards (722), yards per reception (16.0) and receiving touchdowns (8).
Model prediction: travis kelce finished the 2012 season with 45 receptions for 722 yards (16.0 avg.) and eight touchdowns.

Conclusion
In this work, we presented ToTTo, a large, English table-to-text dataset that presents both a controlled generation task and a data annotation process based on iterative sentence revision. We also provided several state-of-the-art baselines, and demonstrated ToTTo could be a useful dataset for modeling research as well as for developing evaluation metrics that can better detect model improvements.

In addition to the proposed task, we hope our dataset can also be helpful for other tasks such as table understanding and sentence revision. ToTTo is available at our GitHub repo.

Acknowledgements
The authors wish to thank Ming-Wei Chang, Jonathan H. Clark, Kenton Lee, and Jennimaria Palomaki for their insightful discussions and support. Many thanks also to Ashwin Kakarla and his team for help with the annotations.

Categories
Misc

Electric Avenue: NVIDIA Engineer Revs Up Classic Car to Sport AI

Arman Toorians isn’t your average classic car restoration hobbyist. The NVIDIA engineer recently transformed a 1974 Triumph TR6 roadster at his home workshop into an EV featuring AI. Toorians built the vehicle to show that a classic car can be recycled into an electric ride that taps NVIDIA Jetson AI for safety, security and vehicle management.

The post Electric Avenue: NVIDIA Engineer Revs Up Classic Car to Sport AI appeared first on The Official NVIDIA Blog.

Categories
Misc

How XSplit Delivers Rich Content for Live Streaming with NVIDIA Broadcast

In this interview, Miguel Molina, Director of Developer Relations at SplitmediaLabs, the makers of XSplit, discussed how they were able to easily integrate NVIDIA Broadcast into their vastly popular streaming service.


For those who may not know, tell us about yourself.

My name is Miguel Molina, currently the Director of Developer Relations at SplitmediaLabs, the makers of XSplit. I’ve been with the company since before its inception, starting out as a software engineer, moving onto product management, and finally landing in business development where I work with our industry partners to find integrations and opportunities that bring value to our customers.

Tell us about XSplit and the success of the company thus far.

XSplit is the brand that got us to where we are now and XSplit Broadcaster is the hero product behind it all. It’s a simple yet powerful live streaming and recording software for producing and delivering rich video content that powers countless live streams and recordings around the world.




What excited you most about NVIDIA Broadcast Engine?

Being able to add value to our products is a priority for us and the NVIDIA Broadcast Engine gives us just that in a straightforward package. With features that improve video, audio, and augmented reality, the SDK has the potential to massively improve the output of different types of media, vastly improving the user experience for various use cases.

Why were you interested in integrating the Audio Effects SDK?

We were looking for an alternative to CPU-based background noise removal, and NVIDIA’s demo videos showing off the noise removal feature sold us on the idea. After receiving a sample, we decided to commit to integrating it into XSplit Broadcaster.

How was the experience integrating the SDK?

It was as simple as looking at the sample code, putting the relevant code segments in their proper places, and hitting compile. The initial integration itself just took a few hours and a working build was available the same day we started on it.

Any surprises or unexpected challenges?

We were initially seeing massive CUDA utilization in an early alpha build of the SDK, but NVIDIA engineers were very responsive: they quickly isolated the issue on their end and provided an updated build that fixed the problem.

How have your users responded to the improved experience?

Our users love the fact that they are able to utilize NVIDIA’s noise removal natively within XSplit Broadcaster. It’s as simple as turning it on and it just works.

What new features or SDKs from NVIDIA are you looking forward to now?

We are looking to update our NVIDIA Video Codec SDK implementation so we can provide better granular preset control over quality versus performance on NVENC.

Which of the NBX SDKs are you most interested in beyond Audio?

Definitely the Video Effects SDK as their Virtual Background and Super Resolution features would be quite useful with people mostly staying at home these days.


Developers can download XSplit Broadcaster here.

To learn more about NVIDIA Broadcast, or to get started, visit our page here.

Categories
Misc

How do I identify matching objects in a pair of stereo images?


(Figure: left and right images)

So, for instance, I have a pair of stereo images (as an example, here I have duplicated the photo to represent left and right images) of certain objects (in this case dogs and cats). I want to match the dogs in the 2 images, i.e., the network should identify that if there’s a ‘Dog 1’ in the left image, then which dog in the right image is the corresponding match for ‘Dog 1’. And similarly for other objects as well.

I can perform instance segmentation on the images and get the object boundaries and the masks for both left and right images, but how do I match the objects in the stereo image pair?

I was thinking of using Siamese Networks to get a similarity score, but I’m pretty clueless on how to proceed with that.
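One way to make the Siamese idea concrete: embed each segmented object crop with a shared network, then pair left objects with right objects by highest cosine similarity. A pure-Python sketch with made-up embedding vectors standing in for the Siamese network's outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_objects(left_embs, right_embs):
    """Greedily pair each left object with its most similar unused right object."""
    matches = {}
    used = set()
    for li, lv in enumerate(left_embs):
        best, best_sim = None, -2.0
        for ri, rv in enumerate(right_embs):
            if ri in used:
                continue
            sim = cosine(lv, rv)
            if sim > best_sim:
                best, best_sim = ri, sim
        matches[li] = best
        used.add(best)
    return matches

# Made-up 3-d embeddings: dog1, dog2, cat on the left; shuffled on the right.
left = [[1.0, 0.1, 0.0], [0.9, 0.8, 0.0], [0.0, 0.1, 1.0]]
right = [[0.0, 0.2, 0.9], [0.95, 0.15, 0.0], [0.85, 0.75, 0.1]]
print(match_objects(left, right))  # left index -> matched right index
```

For more robust matching than the greedy loop, the similarity matrix can be fed to an optimal assignment solver (e.g., the Hungarian algorithm), and epipolar constraints from the stereo geometry can prune implausible pairs.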

Any help would be great! TIA!

submitted by /u/chinmaygrg


Categories
Misc

Amid CES, NVIDIA Packs Flying, Driving, Gaming Tech News into a Single Week

Flying, driving, gaming, racing… amid the first-ever virtual Consumer Electronics Show this week, NVIDIA-powered technologies spilled out in all directions. In automotive, Chinese automakers SAIC and NIO announced they’ll use NVIDIA DRIVE in future vehicles. In gaming, NVIDIA on Tuesday led off a slew of gaming announcements by revealing the affordable new RTX 3060 GPU.

The post Amid CES, NVIDIA Packs Flying, Driving, Gaming Tech News into a Single Week appeared first on The Official NVIDIA Blog.