
Edge Computing in Ethiopia – A Quest for an AI Solution


This is a guest-submitted post by Natnael Kebede, Co-founder and Chief NERD at New Era Research and Development Center.

It was one year ago, in a random conversation, that a friend excitedly told me about a piece of hardware. At that point I never imagined how that conversation would have the potential to impact my life. My name is Natnael Kebede, Co-founder and Chief NERD at the New Era Research and Development (NERD) Center in Ethiopia. NERD is a center that provides a hacker space, educational content, and research for Ethiopian youth to create a better Ethiopia, Africa, and world.

That piece of hardware was the Jetson Nano. I went home and started researching the Jetson Nano that very night. From that point onward, I could not stop following and researching NVIDIA's edge computing concept. This is a story about how a single conversation helped me build a career and a community around an idea.

The conversation with my friend was about edge computing disrupting the AI-development environment. For a country like Ethiopia, AI is usually considered a luxury rather than a necessity. This is due to the perception universities, investors, startups, and the government have of AI: all of them think of expensive high-performance computers and a lack of experienced professionals in the field. Very soon, I realized how edge computing could be a peaceful weapon for changing attitudes toward AI in Ethiopia. My partner and I decided to buy the Jetson Nano and put our name on the map.

The learning process became much easier and more efficient once we started experimenting with the Jetson hands-on. Coincidentally, around that time we were invited to the first Ethiopian AI Summit to showcase any project related to AI, and we decided to build an edge solution. Our research team's part of the problem was to build a system that reads a streamed video and uses the Jetson to run any desired inference at the edge. The applications could range from counting cars at connected traffic junctions and detecting license plates, to an agricultural solution where a flying or ground vehicle feeds video of a farm to detect plant health issues or count fruits. We started with the traffic management system and organized a team of three engineering interns plus myself to build a prototype in less than eight weeks.
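
To make the idea concrete, here is a minimal sketch of that kind of edge pipeline, written with the open-source jetson-inference Python bindings. The post does not say which stack the team actually used, so the library choice, the detection model, and the stream URL below are all illustrative assumptions:

    import jetson.inference
    import jetson.utils

    # Load a pretrained detection model; it runs through TensorRT on the Jetson.
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

    # Open a streamed video source (camera, file, or RTSP feed).
    source = jetson.utils.videoSource("rtsp://example.local/traffic")  # hypothetical stream

    while True:
        img = source.Capture()        # grab the next frame as a CUDA image
        detections = net.Detect(img)  # run object detection at the edge
        cars = [d for d in detections if net.GetClassDesc(d.ClassID) == "car"]
        print("cars in frame:", len(cars))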

The NERD team behind the traffic management system.

We got the whole thing figured out after a lot of reading. Finding answers was not easy, as publications on the Nano were rare. Meanwhile, a friend passed our contact to one of the NVIDIA Emerging Chapters Program leads, and we had a conversation about the program. It was the best timing. Although we wish we had known about the program earlier to access the NVIDIA Deep Learning Institute (DLI) courses, we kept going with the hope that the program would enable future projects with hands-on experience and technical training. The project was finally presented at the Summit, and we had the chance to pitch the idea of edge computing to the Prime Minister of Ethiopia, His Excellency Dr. Abiy Ahmed. We received promising feedback from him about taking the project forward into implementation in the years to come.

Presenting the smart traffic light control system to Prime Minister Dr. Abiy Ahmed at the Ethiopian AI Summit

After the Summit, we knew we needed to recruit and inspire more talent, so we launched a short-term training on the Jetson Nano. It took our team eight weeks to build the traffic management system, and we made sure the training runs for the same duration, with students building a final project at the end. We value open sourcing: the outputs of these trainings are open-source projects, so we can grow our community. Since we have limited resources, we can only accommodate 25 students in three groups. So far we have registered 13 students, and the first batch is currently in training. We are using the free DLI courses we were granted as part of the Emerging Chapters Program as a guideline for our curriculum, and we will soon provide free course vouchers to the most consistent training participants.

First group of Mechatronics at the Jetson Nano training

Our goal is to create a community of enthusiasts, hobbyists, engineers, and developers passionate about edge computing solutions for agriculture and smart-city projects. We do this by consistently engaging with the tech community in Addis Ababa, Ethiopia. We organized an open seminar last week where we showcased our projects and talked about edge computing, and we were able to inspire more students to join our program. Our team members are all enjoying the free DLI courses, and we will be coming up with something much larger very soon. I want to personally thank the NVIDIA Emerging Chapters Program for all the support and resources. On behalf of our team, we are grateful to be part of the program, and we would love to present our work to other partners very soon!

The AI-based traffic control system is now on GitHub.

To learn more about NERD and to stay up-to-date, follow us on Instagram @nerds_center.


How to use max_steps in train with boosted trees?


I am a little bit confused about the usage of steps in train with boosted trees. I get the meaning of a step when I use it with an incremental model like a neural net, but I don't get what it means in combination with n_trees for boosted trees. I guess that one step means that I train the whole n_trees once. Am I correct?

https://preview.redd.it/z8rb5zwud5e71.png?width=1486&format=png&auto=webp&s=a81de0fc536387db29385cb8e2fca6cc507db8a2
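
For context, here is a minimal sketch of the estimator API being asked about. The feature column, toy dataset, and numbers are placeholders, and the step semantics in the comments are my reading of the tf.estimator docs rather than an answer from the thread:

    import tensorflow as tf

    def train_input_fn():
        # Toy in-memory dataset; replace with real features and labels.
        x = tf.random.uniform([64, 10])
        y = tf.random.uniform([64], maxval=2, dtype=tf.int32)
        return tf.data.Dataset.from_tensor_slices(({"x": x}, y)).batch(32).repeat()

    feature_columns = [tf.feature_column.numeric_column("x", shape=(10,))]

    # One training step consumes one batch from input_fn; a tree layer is grown
    # every n_batches_per_layer steps, and training stops once n_trees trees are
    # built or max_steps batches are processed, whichever comes first.
    est = tf.estimator.BoostedTreesClassifier(
        feature_columns=feature_columns,
        n_batches_per_layer=1,
        n_trees=100,
        max_depth=6,
    )

    est.train(input_fn=train_input_fn, max_steps=1000)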

submitted by /u/Tokukawa


Why does the prediction from a converted TensorFlow model in JavaScript keep returning the same broken result when it performed great in Python (the original)?

I made a transfer learning model in TensorFlow Python with the pretrained MobileNetV2 model, and it performed very well in Python, including at prediction time. After that, I saved the model to Keras H5 format and converted it to a TensorFlow.js model. Then I created a static page that implements the TensorFlow.js model and served it with Web Server for Chrome in the Chrome browser. The prediction result I got is really confusing: it is always the same one, no matter which image I predict on. Any input, suggestions, or solutions on this problem are highly appreciated. Thanks in advance!
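
One frequent culprit for symptoms like this (a guess on my part, not a confirmed resolution from the thread) is a preprocessing mismatch: MobileNetV2 expects pixels scaled to [-1, 1], so whatever the Python code does before predict() must be reproduced exactly in the browser. On the Python side the expected preprocessing looks roughly like this:

    import numpy as np
    import tensorflow as tf

    # Placeholder image path; MobileNetV2 defaults to 224x224 inputs.
    img = tf.keras.preprocessing.image.load_img("example.jpg", target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)          # float32 in [0, 255]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)  # scale to [-1, 1]
    x = np.expand_dims(x, axis=0)                               # add the batch axis
    # model.predict(x); the tf.js side needs the same resize, the same
    # [-1, 1] scaling (e.g. div by 127.5 then subtract 1), and a batch axis.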

Full Description and Code I had made so far on this issue : https://stackoverflow.com/questions/68574893/my-converted-tensorflow-transfer-learning-model-always-returns-same-results-in-t

submitted by /u/TechnologyOk9486


Setting the Virtual Stage: ‘Deathtrap Dungeon’ Gets Interactive Thanks to NVIDIA RTX

Deathtrap Dungeon: The Golden Room is a gripping choose-your-own-adventure story, but it's no page-turner. Based on the best-selling book of the same name, it's an interactive film in which viewers become the player on their quest to find The Golden Room while facing down dungeon masters and avoiding traps. NVIDIA RTX technology powers the real-time …



Accelerating Billion Vector Similarity Searches with GPUs


Relying on the capabilities of GPUs, a team from Facebook AI Research has developed a faster, more efficient way for AI to run similarity searches. The study, published in IEEE Transactions on Big Data, presents a deep learning algorithm for handling and comparing high-dimensional data from media that is notably faster than, yet just as accurate as, previous techniques.

In a world with an ever-growing supply of data, the work promises to ease both the compute power and time needed for processing large libraries.

“The most straightforward technique for searching and indexing [high-dimensional data] is by brute-force comparison, whereby you need to check [each image] against every other image in the database. This is impractical for collections containing billions of vectors,” Jeff Johnson, study colead and a research engineer at Facebook, said in a press release.

Containing millions of pixels and data points, every image and video gives rise to billions of vectors. This wealth of data is valuable for analyzing, detecting, indexing, and comparing vectors. It is also problematic: calculating similarities across large libraries with traditional CPU algorithms requires many supercomputer components, slowing down overall computing time.

Using only four GPUs with CUDA, the researchers designed an algorithm that both hosts and analyzes the library's image data points on the GPUs. The method also compresses the data, making it easier, and thus faster, to analyze.

An example of how the algorithm computes the smoothest path between images where only the first and the last image are given. Credit: Facebook/Johnson et al

The new algorithm processed over 95 million high-dimensional images in 35 minutes. A graph of a billion vectors took less than 12 hours to compute. According to a comparison test in the study, handling the same database with a cluster of 128 CPU servers took 108.7 hours, about 8.5x longer.

“By keeping computations purely on a GPU, we can take advantage of the much faster memory available on the accelerator, instead of dealing with the slower memories of CPU servers and even slower machine-to-machine network interconnects within a traditional supercomputer cluster,” said Johnson. 

The researchers state the methods are already being applied to a wide variety of tasks, including a language processing search for translations. Known as the Facebook AI Similarity Search library, the approach is open source for implementation, testing, and comparison.
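
For a sense of the library's shape, here is a minimal sketch (my illustration with random data, requiring the faiss-gpu build) of a brute-force L2 index hosted on a single GPU. The billion-scale results above rely on Faiss's compressed index types rather than this flat index:

    import numpy as np
    import faiss  # pip install faiss-gpu

    d = 128                                               # vector dimensionality
    xb = np.random.random((100000, d)).astype("float32")  # database vectors
    xq = np.random.random((5, d)).astype("float32")       # query vectors

    index = faiss.IndexFlatL2(d)                       # exact (brute-force) L2 index
    res = faiss.StandardGpuResources()                 # allocate GPU resources
    gpu_index = faiss.index_cpu_to_gpu(res, 0, index)  # move the index to GPU 0

    gpu_index.add(xb)                          # host the database on the GPU
    distances, ids = gpu_index.search(xq, 5)   # 5 nearest neighbors per query
    print(ids)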

 

Read more >>>
Read the full article in IEEE Transactions on Big Data >>>


NVIDIA Showcases the Latest in Graphics, AI, and Virtual Collaboration at SIGGRAPH


Developers, researchers, graphics professionals, and others from around the world will get a sneak peek at the latest innovations in computer graphics at the SIGGRAPH 2021 virtual conference, taking place August 9-13.

NVIDIA will be presenting the breakthroughs that NVIDIA RTX technology delivers, from real-time ray tracing to AI-enhanced workflows.

Watch the NVIDIA special address on Tuesday, August 10 at 8:00 a.m. PDT, where we will showcase the latest tools and solutions that are driving graphics, AI, and the emergence of shared worlds.

And on Wednesday, August 11, catch the global premiere of “Connecting in the Metaverse: The Making of the GTC Keynote” at 11:00 a.m. PDT. The new documentary highlights the creative minds and groundbreaking technologies behind the NVIDIA GTC 2021 keynote. See how a small team of artists used NVIDIA Omniverse to blur the line between real and rendered.

Explore the Latest from NVIDIA Research

At SIGGRAPH, the NVIDIA Research team will be presenting the following papers:

Don’t miss our Real-Time Live! demo on August 10 at 4:30 p.m. PDT to see how NVIDIA Research creates AI-driven digital avatars.

Dive into Technical Training with NVIDIA Deep Learning Institute

Here’s a preview of some DLI sessions you don’t want to miss:

Omniverse 101: Getting Started with Universal Scene Description for Collaborative 3D Workflows

This free self-paced training provides an introduction to USD. Go through a series of hands-on exercises consisting of training videos accompanied by live scripted examples, and learn about concepts like layer composition, references, and variants.
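
As a small taste of what those exercises cover, here is a minimal sketch using the pxr Python bindings that ship with USD; the file name and prim paths are arbitrary examples:

    from pxr import Usd, UsdGeom

    stage = Usd.Stage.CreateNew("hello.usda")    # author a new root layer
    UsdGeom.Xform.Define(stage, "/Hello")        # define a transform prim
    sphere = UsdGeom.Sphere.Define(stage, "/Hello/Sphere")
    sphere.GetRadiusAttr().Set(2.0)              # author an attribute opinion
    stage.GetRootLayer().Save()                  # write hello.usda to disk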

Fundamentals of Ray Tracing Development using NVIDIA Nsight Graphics and NVIDIA Nsight Systems

With NVIDIA RTX and real-time ray-tracing APIs like DXR and Vulkan Ray Tracing, see how it's now easier than ever to create stunning visuals at interactive frame rates. This instructor-led workshop will show audiences how to use NVIDIA Nsight Graphics and NVIDIA Nsight Systems to profile and optimize 3D applications that use ray tracing. Space is limited.

Graphics and Omniverse Teaching Kit

Designed for college and university educators looking to bring graphics and NVIDIA Omniverse into the classroom, this teaching kit includes downloadable teaching materials and online courses that provide the foundation for understanding and building hands-on expertise in graphics and Omniverse.

Discover the Latest Tools and Solutions in Our Virtual Demos

We’ll be showcasing how NVIDIA technology is transforming workflows in some of our exciting demos, including:

  • Factory of the Future: Explore the next era of manufacturing with this demo, which showcases BMW Group’s factory of the future – designed, simulated, operated, and maintained entirely in NVIDIA Omniverse.
  • Multiple Artists, One Server: See how teams can accelerate visual effects production with the NVIDIA EGX Platform, which enables multiple artists to work together on a powerful, secure server from anywhere.
  • 3D Photogrammetry on an RTX Mobile Workstation: Watch how NVIDIA RTX-powered mobile workstations help drive the process of 3D scanning using photogrammetry, whether in a studio or in a remote location.
  • Interactive volumes with NanoVDB in Blender Cycles: Learn how NanoVDB makes volume rendering more GPU memory efficient, meaning larger and more complex scenes can be interactively adjusted and rendered with NVIDIA RTX-accelerated ray tracing and AI denoising.

Enter for a Chance to Win Some Gems

Attendees can win a limited-edition hard copy of Ray Tracing Gems II, the follow-up to 2019's Ray Tracing Gems.

Ray Tracing Gems II brings the community of rendering experts back together to share their knowledge. The book covers everything in ray tracing and rendering, from basic concepts geared toward beginners to full ray tracing deployment in shipping AAA games.

Learn more about the sweepstakes and enter for a chance to win.

Join NVIDIA at SIGGRAPH and learn more about the latest tools and technologies driving real-time graphics, AI-enhanced workflows, and virtual collaboration.


GFN Thursday Brings ‘Evil Genius 2: World Domination,’ ‘Escape From Naraka’ with RTX, and More This Week on GeForce NOW

This GFN Thursday shines a spotlight on the latest games joining the collection of over 1,000 titles in the GeForce NOW library from the many publishers that have opted in to stream their games on our open cloud-gaming service. Members can look forward to 14 games, including Evil Genius 2: World Domination from Rebellion …



RTX for Indies: Stunning Ray-Traced Lighting Achieved with RTXGI in Action-Platformer Escape from Naraka


Developed by XeloGames, an indie studio of just three, and published by Headup Games, Escape from Naraka achieves eye-catching ray-traced lighting using RTX Global Illumination (RTXGI) and significant performance boosts from Deep Learning Super Sampling (DLSS). NVIDIA had the opportunity to speak with the XeloGames team about their experience using NVIDIA’s SDKs while developing their debut title.  

“We believe that, sooner or later, everyone will have ray tracing,” XeloGames said, discussing their motivation to use RTXGI, “so it’s really good for us to start earlier, especially in Indonesia.”

Starting early, in this case, is an understatement for XeloGames. Escape from Naraka is actually the first-ever ray tracing title from Indonesia; the team used ray-traced reflections, shadows, and global illumination to paint a dramatic labyrinth for the player to explore. 

Such a feat, executed by such a small studio, speaks to the usefulness of RTXGI as a tool for development. Escape from Naraka was made in Unreal Engine 4, using NVIDIA’s NvRTX branch to bring ray tracing and DLSS into production. Once ray-traced global illumination was integrated into the engine, XeloGames reported benefits they immediately experienced:

“RTXGI really helps with how quick you can set up a light in a scene. Instead of the old ways where you have to manually adjust every light, you can put in a Dynamic Diffuse Global Illumination (DDGI) volume and immediately see a difference.”

Rapid in-engine updates expedited lighting design in Escape from Naraka, alongside the ability to make any object emissive for "cost-free performance lighting," XeloGames added. Of course, implementing RTXGI in their title came with its challenges as well. For Escape from Naraka specifically, a unique obstacle presented itself: the abundance of rocks in the level design often made it challenging to find opportunities to make full use of ray-traced lighting. "Rocks are not really that great at bouncing lights around," XeloGames developers remarked.

RTXGI is undoubtedly a powerful tool to have in a game developer's toolkit, but the mileage you get from its features can vary case by case. An important step before using ray-traced global illumination is deciding whether its features are the right fit for your game.

Regardless of the rock conflict (mitigated by making certain textures emissive to brighten darker areas) and a couple of bugs that had to be squashed along the way, XeloGames' three-person team achieved a beautiful integration of RTXGI in Escape from Naraka. Check out the Escape from Naraka Official RTX Reveal Trailer for a look at how RTX Global Illumination enhances the game's visual appeal:

“It definitely made scenes look more natural,” said a XeloGames developer of the enhancements RTXGI brought to the game. “Lights bounce around more naturally instead of just directly.”

The results of global illumination speak for themselves, pairing excellently with ray-traced reflections and shadows for stunning results.

RTXGI is not the only NVIDIA feature XeloGames packed into their newest release. Deep Learning Super Sampling (DLSS) is implemented as well to bring an AI-powered frame rate boost.

“Adding NVIDIA DLSS to the game was fast and easy with the UE4 plugin, providing our players maximum performance as they take on all the challenges Escape From Naraka has to offer.” 

XeloGames reported a swift implementation of DLSS with NvRTX, emphasizing the importance of using DLSS as a frame booster as well as an enabler to turn ray tracing on with the performance headroom it provides. In concert, RTXGI and DLSS empower a rich and fully-immersive experience in Escape from Naraka.  

Escape from Naraka is available now on Steam.

Check out XeloGames at their official website

Learn more and download RTXGI here

Learn more and download DLSS here

Explore and use NVIDIA’s NvRTX branch for Unreal Engine 4 here.


Getting attribute error in Windows but not in Linux

AttributeError: ‘google.protobuf.pyext._message.RepeatedCompositeCo’ object has no attribute ‘_values’

Protobuf version = 3.15, MediaPipe version = 0.8.6, TensorFlow version = 2.5.0. I have tried installing all the versions in a virtual environment, but the error won't go away.
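
One workaround often suggested for this class of protobuf error on Windows (an assumption on my part, not a confirmed fix from this thread) is to force the pure-Python protobuf implementation before protobuf is first imported:

    import os

    # Must be set before google.protobuf (or anything that imports it,
    # such as mediapipe or tensorflow) is loaded in the process.
    os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

    import mediapipe as mp  # imported only after the variable is set

Pinning protobuf to an earlier 3.x release is another commonly reported mitigation for similar version mismatches.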

submitted by /u/singh_prateek


Free Ray Tracing Gems II Chapter Covers Ray Tracing in Remedy’s Control


Next week, Ray Tracing Gems II will finally be available in its entirety as a free download, or for purchase as a physical release from Apress or Amazon. Since the start of July, we've been providing early access to a new chapter every week. Today's chapter, by Juha Sjöholm, Paula Jukarainen, and Tatu Aalto, presents how all ray-tracing-based effects were implemented in Remedy Entertainment's Control. This includes opaque and transparent reflections, near-field indirect diffuse illumination, contact shadows, and the denoisers tailored for these effects.

You can download the full chapter for free here.

You can also learn more about Game of the Year Winner Control here

We’ve collaborated with our partners to make limited edition versions of the book, including custom covers that highlight real-time ray tracing in Fortnite, Control, and Watch Dogs: Legion.

To win a limited edition print copy of Ray Tracing Gems II, enter the giveaway contest here: https://developer.nvidia.com/ray-tracing-gems-ii