
Issue generating tfrecord

So I have started to try my hand at tensor flow to learn how it works. While going through the steps I came across an error that I have not seen before. I can’t seem to figure out what is going on. Any help is appreciated

Traceback (most recent call last):
  File "generate_tfrecord.py", line 27, in <module>
    from object_detection.utils import dataset_util, label_map_util
  File "C:\Users\natha\OneDrive\Desktop\Project\RealTimeObjectDetection-main\Tensorflow\scripts\object_detection\utils\label_map_util.py", line 59, in <module>
    label_map = label_map_util.load_labelmap(args.labels_path)
AttributeError: partially initialized module 'object_detection.utils.label_map_util' has no attribute 'load_labelmap' (most likely due to a circular import)
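This error usually means a module runs code at import time that calls back into itself before it has finished loading; here the traceback shows label_map_util.py itself calling label_map_util.load_labelmap at line 59, which suggests script code was pasted into the library file. A minimal sketch of the usual fix (argument names are illustrative), keeping the call in the script rather than inside label_map_util.py:

```python
# generate_tfrecord.py -- sketch of the usual fix (names are illustrative).
# Keep the load_labelmap call in the *script*, not inside label_map_util.py,
# so the module never calls itself while it is only partially initialized.
import argparse

from object_detection.utils import dataset_util, label_map_util

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--labels_path", required=True)
    args = parser.parse_args()
    # Safe here: label_map_util is fully imported before this line runs.
    label_map = label_map_util.load_labelmap(args.labels_path)
```

If a tutorial had you paste lines into label_map_util.py, restoring the stock file from the Object Detection API should have the same effect.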

submitted by /u/Simshaffer


Giving Virtual Dressing Rooms a Makeover with Computer Vision

With the help of AI, a new fashion startup offers online retailers a scalable virtual dressing room, capable of cataloging over a million garment images weekly.

Combining a deep learning model with computer vision, Revery.ai is improving the online dressing room experience for both retailers and consumers. Its tool uses existing shop catalog images to build a scalable virtual dressing room, giving shoppers the power to try on a store’s entire inventory without leaving the house.

“The inspiration for creating Revery was really to tackle a problem that everyone faces when shopping online—how does this outfit actually look in person? The idea of a virtual dressing room is not new—from the movie Clueless to the spectacular failure of Boo.com—people have wanted virtual try-on since they could shop online,” said cofounder Jeffrey Zhang, a PhD candidate in fashion AI and computer vision at the University of Illinois.

Advised by David Forsyth—a Computer Science professor at the University of Illinois—Revery.ai cofounders also include two additional PhD candidates in fashion AI and computer vision, Kedan Li and Min Jin Chong. 

According to Zhang, Revery overcomes the biggest virtual dressing room obstacle for most retail giants—scalability. The technology offers a comprehensive tool capable of processing over a million garment images weekly.

Revery makes this possible with a newly developed AI algorithm that employs the cuDNN-accelerated deep learning framework PyTorch, with NVIDIA RTX 3090 and RTX A6000 GPUs used to both train and run the models. As the framework learns from millions of images, the system captures and processes nuances such as how garments fall, their texture, logos, and even shading, producing realistic online versions of the garments.

“We have been privileged to get our hands on some of the latest generation GPUs, which have sped up our training substantially compared to previous generations. Furthermore, the increased memory size allows us to generate image resolutions of up to 1.5k,” Zhang said.

The technology not only saves time. It also stands to reduce the millions of dollars it would take to integrate a complete inventory, while offering retailers the ability to update stock quickly.

Revery.ai’s virtual dressing room. Credit: Revery.ai

Online shopping has been on the rise, with consumers spending $861.12 billion with U.S. merchants in 2020. By year’s end, U.S. apparel e-commerce is projected to hit about $100 billion, and the team is looking to expand to more online retailers.

They are also focused on creating more inclusive and diverse offerings for customers—something the fashion industry often lacks. The group is working on increasing personalization, by offering different body shapes, and adding mix and match options for bags and shoes. The current product offers shoppers the ability to customize gender, skin tone, hair, and even change poses of the models.

“Our long-term goal is to digitize every garment from any store and integrate with shoppers’ wardrobes to create an immersive online shopping experience,” Zhang said.

Read the study >>
Learn more about Revery.AI >>


1,200+ Interns From Around the World Join NVIDIA’s Green Team

I wasn’t sure what to expect when I started my internship at NVIDIA. For a journalism student, joining a company full of engineers pioneering the technology behind AI, virtual reality and high-performance computing isn’t the first thing that comes to mind when thinking of the typical internship. But there are stories to tell. Stories about Read article >



An AI for Fine Art: Attorney Trains NVIDIA RTX 2070 to Authenticate Masterpieces

What’s the difference between art created by Leonardo da Vinci and a knockoff? In the case of the Salvator Mundi, the answer is nearly half a billion dollars. Drawing on a convolutional neural network — a deep learning algorithm that’s led to breakthroughs in the analysis of a vast array of visual imagery — intellectual Read article >



Bringing Scale to the Edge with Multi-Access Edge Computing

Multi-access edge computing (MEC) is the telco-centric approach to delivering edge computing by integrating it with fixed and mobile access networks. The term is often used interchangeably with edge computing. But is this appropriate? And how does MEC relate to edge computing?

Setting the context

Every few decades, the computing world likes to swing back and forth between centralized and decentralized architectures. While the differences are waning, there is still much discussion around whether the data center—today’s unit of computing—should be located at the edge or in the cloud for AI applications (Figure 1).

Figure 1. Where should the data center be located: in the cloud or at the edge?

The choice to decentralize and locate the datacenter at the edge is growing in importance today because it allows the capture and processing of data as close to the source of the data as possible. Just like donuts, it promises that the closer the box is to the consumer, the happier everyone is. Send data to an AI application running in the cloud, and it delays answers. Process that data on an edge device, and it’s like grabbing directly from that pink box of glazed donuts.

Edge computing is big business. IDC expects overall worldwide spending on edge computing (including all hardware, software, and services around edge computing) to reach $251 billion by 2025 (IDC webinar, Future of Operations – Edge and IoT, July 2021). All this spending should stimulate a massive ecosystem. When AI applications are deployed over 5G and edge computing, this ecosystem could be worth in excess of $10 trillion, according to NVIDIA estimates.

Challenge to scale edge computing

Most implementations of edge computing today are standalone, as any user can define, design, and deploy their own bespoke edge computing network. While these implementations deliver benefits to their users, exchanging data across different edge computing networks or porting applications from one edge network to another remains difficult, and that difficulty is a challenge to scaling.

Imagine a hypothetical scenario where each of the 8+ million cellular base stations from over 750 mobile operators is an edge computing node. How do you write code that can work across these gargantuan configurations? Given that most mobile operators control only a small fraction of those base stations, typically within a single national market, no individual operator can offer developers anything close to global reach.

In comparison, in cloud computing, most developers have only a handful of supersized providers to write code for. Each of these hyperscale cloud providers, in turn, is well positioned to serve 100% of the global market, competition permitting.

In general, for most successful IT/tech innovations, scale comes from standardization and interoperability, such as the internet or 4G/5G; from market leadership by a few pace-setting companies, such as cloud computing or mobile operating systems; or from a combination of both. Crucially, edge computing has yet to fully develop either.

MEC brings some standardization to edge computing

The early days of edge computing coincided with the early phase of 4G in the early 2010s. For the first time in history, the opportunity to have a fast and reliable internet service anywhere and at any time was becoming a reality.

This association, even though it was coincidental and not preplanned, made the edge of the cellular network the assumed default location for edge computing, with the cellular network providers as its gatekeepers. Accordingly, several companies in the telecommunications sector came together in 2014, under the auspices of the European Telecommunications Standards Institute (ETSI), to found the MEC industry initiative.

The goal was for MEC to become the standard for edge computing under certain conditions:

  • It is located near a mobile access network.
  • It is integrated in some way with the mobile network.
  • It is reachable or usable by third parties, through APIs.

This informed the vision, as outlined in the group’s September 2014 whitepaper, Mobile Edge Computing (the original term): to develop favorable market conditions that would enable IT and cloud-computing capabilities within the radio access network (RAN), in close proximity to mobile subscribers. According to the paper, “…the RAN edge offers a service environment with ultralow latency and high-bandwidth as well as direct access to real-time radio network information (such as subscriber location, cell load, etc.) that can be used by applications and services to offer context-related services.”
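To give a concrete sense of what such API access could look like to a developer, here is a minimal, hedged sketch of an application polling a Radio Network Information service in the style of ETSI MEC 012; the platform host name and the response fields are illustrative assumptions, not a definitive contract:

```python
# Hedged sketch: querying a Radio Network Information (RNI) service in the
# style of ETSI MEC 012. The base URL and response schema are illustrative
# assumptions; a real platform publishes its own endpoints and data model.
import requests

MEC_HOST = "https://mec-platform.example.net"  # hypothetical platform address

def cell_load(cell_id: str) -> dict:
    # Fetch layer-2 measurements for one cell, e.g., to lower a video
    # stream's bitrate when the serving cell is congested.
    resp = requests.get(
        f"{MEC_HOST}/rni/v2/queries/layer2_meas",
        params={"cell_id": cell_id},
        timeout=2.0,  # edge calls should fail fast; low latency is the point
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(cell_load("0x8000A1"))  # illustrative cell identifier
```

The promise of standardization is that a call like this would work unchanged against any compliant operator’s platform.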

The task of ETSI’s MEC Industry Specification Group (ISG) is “…to create a standardized, open environment, which will allow the efficient and seamless integration of applications from vendors, service providers, and third-parties across multi-vendor, multi-access edge computing platforms.” The full list of related specifications and publications can be found on the MEC committee page.

From mobile to multi-access

As it soon became evident that edge computing was not restricted to the cellular network edge alone, ETSI changed the name in 2017 from mobile edge computing to multi-access edge computing. But the cellular-centric standardization of edge computing remains, with ETSI MEC, 3GPP SA6 and SA2, and GSMA’s Operator Platform Group all working towards standards and market initiatives for edge computing. For more information, see the Harmonizing standards for edge computing – A synergized architecture leveraging ETSI ISG MEC and 3GPP specifications whitepaper.

Figure 2. The different interpretations of the edge, showing how a telco-centric view of edge computing differs from the non-telco-centric perspective
Source: Over the Edge: The Opportunities and Challenges of the Coming Edge Computing Era, ABI Research

While this telco-centric view is unlikely to change, other stakeholders often see edge computing differently (Figure 2), and other bodies are working to incubate a non-telco-centric vision of it. The Linux Foundation’s LF Edge, the Industrial Internet Consortium, the Open Compute Project, and the Open19 edge data center project are a few examples.

Ultimately, whether edge computing stays cellular-centric or not, and however it scales, its benefits in the age of AI remain an attractive draw for all stakeholders.


Is my dataset flawed? Trying to do object detection with masks but my masks contain a lot of black.

Here is just a tiny subset of the data:

I am only using 4 labels/classes: _background_, license plate ny, license plate tlc, and license plate nj.

https://drive.google.com/file/d/1K8fWWEKRMeYfjDoNyXlUiMNXOuKDmqCO/view?usp=sharing

Here is the Colab with TensorBoard; the output just goes to black, which makes me think the _background_ class is causing the issue. I originally tagged these images at full resolution, since they are mostly user submissions and I wanted to use field images. I previously used Mask R-CNN on this dataset and had promising results, but it is not mobile friendly and I need that.

https://colab.research.google.com/drive/16CjoAoXQfsGD4TBCenbNupNTX9rfTrcS?usp=sharing
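One quick sanity check is to measure how much of each mask is actually background. A minimal sketch (assuming single-channel PNG masks with class indices 0–3 in a hypothetical masks/ directory):

```python
# Minimal sketch: measure class balance across mask PNGs.
# Assumes single-channel masks where 0 = _background_ and 1-3 are the
# license plate classes; adjust the glob pattern to the real dataset.
import glob

import numpy as np
from PIL import Image

counts = np.zeros(4, dtype=np.int64)
for path in glob.glob("masks/*.png"):  # hypothetical location
    mask = np.array(Image.open(path))
    counts += np.bincount(mask.ravel(), minlength=4)[:4]

freqs = counts / counts.sum()
print(dict(enumerate(freqs)))  # e.g., {0: 0.99, ...} means heavy imbalance
```

If class 0 accounts for nearly all pixels, a class-weighted loss or cropping around the plates is usually the first thing to try.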

Any help would be appreciated, thank you.

submitted by /u/ryangravener


Can RNNs be distributed while training?

Hi everyone. I kept reading online that RNNs cannot be trained in parallel because of their inherent sequential nature, but today common sense finally kicked in and I began to wonder why.

So if we consider the case of data parallelism, I can see that any map-reduce step can easily aggregate the gradients from the workers and average them, which is what would have happened anyway if the model were trained sequentially.
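That intuition can be checked directly. A minimal sketch (assuming PyTorch, with two in-process replicas standing in for distributed workers) of data-parallel gradient averaging on an RNN:

```python
# Minimal sketch (assumes PyTorch): data-parallel gradient averaging for an
# RNN. Each replica runs the sequential recurrence on its own batch; only
# the gradients are combined, so the sequential part never blocks scaling.
import copy

import torch
import torch.nn as nn

model = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
replica = copy.deepcopy(model)  # stand-in for a second worker

batches = [torch.randn(4, 10, 8), torch.randn(4, 10, 8)]  # one batch per worker
for m, x in zip((model, replica), batches):
    out, _ = m(x)                 # the sequential recurrence stays inside each worker
    out.pow(2).mean().backward()  # toy loss

# "all-reduce": average gradients across replicas, then update one copy
with torch.no_grad():
    for p, q in zip(model.parameters(), replica.parameters()):
        p.grad = (p.grad + q.grad) / 2
torch.optim.SGD(model.parameters(), lr=0.1).step()
```

With equal batch sizes, the averaged gradient equals the gradient of the mean loss over both batches, which is exactly what a single sequential pass over the combined batch would produce.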

In the case of model parallelism as well, it makes sense for the gradients to flow back through each part of the model, as long as the RNNs remain stateless.
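And for the model-parallel case, a similar sketch splitting a stacked RNN into two stages; each stage could sit on its own device (placement omitted here), and autograd carries gradients back across the boundary:

```python
# Minimal sketch: pipeline-style model parallelism for a two-stage RNN.
# Each stage could live on a different GPU (e.g., .to("cuda:0"), .to("cuda:1"));
# autograd propagates gradients back across the stage boundary either way.
import torch
import torch.nn as nn

stage1 = nn.RNN(input_size=8, hidden_size=16, batch_first=True)   # worker 1
stage2 = nn.RNN(input_size=16, hidden_size=16, batch_first=True)  # worker 2

x = torch.randn(4, 10, 8)
h1, _ = stage1(x)      # stage 1 output feeds stage 2
out, _ = stage2(h1)
out.mean().backward()  # gradients flow back through both stages
print(stage1.weight_ih_l0.grad is not None)  # True: upstream stage received grads
```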

Are my assertions incorrect? If yes, can anyone please share resources for this?

submitted by /u/aloha_microbenis


Upcoming Webinar: Accelerate AI Model Development with PyTorch Lightning

The NGC team is hosting a webinar with live Q&A to dive into how to build AI models using PyTorch Lightning, an AI framework built on top of PyTorch, from the NGC catalog.

Simplify and Accelerate AI Model Development with PyTorch Lightning, NGC, and AWS
September 2 at 10 a.m. PT

Organizations across industries are using AI to help build better products, streamline operations, and increase customer satisfaction. 

Today, speech recognition services are deployed in financial organizations to transcribe earnings calls, in hospitals to assist doctors writing patient notes, and in video broadcasting for live captioning.

Under the hood, researchers and data scientists are building hundreds of AI models to experiment and identify the most impactful models to deploy for their use cases.

PyTorch Lightning, an AI framework built on top of PyTorch, simplifies coding, so researchers can focus on building models and reduce time spent on the engineering process. It also speeds up the development of hundreds of models by easily scaling on GPUs within and across nodes.
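As a flavor of that simplification, here is a minimal sketch of a LightningModule (assuming a recent pytorch_lightning release; the model, data, and hyperparameters are illustrative, not from the webinar). Lightning supplies the training loop and device placement, and multi-GPU scaling becomes a Trainer flag:

```python
# Minimal sketch of a LightningModule; Lightning supplies the training loop,
# device placement, and multi-node/multi-GPU scaling via Trainer arguments.
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        # Lightning handles backward(), optimizer.step(), and zero_grad()
        return nn.functional.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
# Scaling out is a flag change, e.g., accelerator="gpu", devices=4
pl.Trainer(max_epochs=1, accelerator="auto", devices="auto").fit(TinyModel(), data)
```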

By joining this webinar, you will learn:

  • About the benefit of using PyTorch Lightning and how it simplifies building complex models
  • How NGC helps accelerate AI development by simplifying software deployment
  • How researchers can quickly build models using PyTorch Lightning from NGC on AWS

Register now >>>


Field of AI: Startup Helps Farmers Reduce Chemicals and Costs

Brad Janzen, a farmer based in Henderson, Neb., with more than 4,000 acres of corn and soybeans, knew he had to do something about herbicide-resistant weeds. Farms everywhere faced the rising costs of switching to new herbicides and the increased soil contamination. “Invasive weed species have inherited the ability to resist herbicides used in row-crop Read article >



Step into Omniverse – The Inaugural NVIDIA Omniverse User Group


The inaugural NVIDIA Omniverse User Group at SIGGRAPH 2021 was a huge success, thanks to a strong showing from the Omniverse community, which participated in presentations, discussions, and Q&A breakout sessions covering Omniverse apps, connectors, and workflows.

Highlights from the User Group include:

  • Hearing the vision and future of Omniverse from Rev Lebaredian, VP of Omniverse and Simulation Technology, and Richard Kerris, VP of Omniverse Developer Platform 
  • Getting a sneak peek of what’s to come from Frank DeLise, Product Management, Omniverse, and Damien Fagnou, Senior Director of Software, Omniverse
  • Learning about robotics from Liila Torabi, Sr. Project Manager, who spoke about NVIDIA Isaac Sim, our scalable robotics simulation application and synthetic data generation tool
  • Celebrating the winners of the Create with Marbles: Marvelous Machines contest with Michelle Lu, Director of Simulation Technology
  • Exploring the resources available to developers from our Community Manager, Wendy Gram, including over 100 tutorials, our forums, Twitch channel, and Discord server
  • Meeting and chatting with the Omniverse experts about various capabilities of the platform

The support and excitement on display promise a bright future for Omniverse and its users. Watch the event recording to learn more about Omniverse from some of the Omniverse teams and users, and start imagining what it can do for you and your ideas.

Watch the recording: the inaugural NVIDIA Omniverse User Group at SIGGRAPH 2021

We are already planning and looking forward to the next User Group at GTC 2021, November 8-11.