I recently founded an organization called Pythonics that specializes in providing students with free Python-related courses. If you are interested in creating a TensorFlow course, feel free to fill out the following form to indicate what course you would like to create: https://forms.gle/mrtwqqVsswSjzSQQ7
If you have any questions at all, send me a DM and I will gladly answer them, thank you!
Note: I am NOT profiting off of this, this is simply a service project that I created.
Hey, I have a dataset of brain tumor MRI images with three types of tumors. When I trained a U-Net model, the Dice coefficient came out at about 79%. The problem I am facing is that when I add healthy brain images to the dataset, the results stay almost the same (about 79%), but when I plot the predictions for the healthy images, the output should be completely blank (dark), yet it looks like this: https://imgur.com/a/jXbLUMt
How can I fix this problem?
And as a general question, how do you train a model to handle images it was not trained on? For example, if a model is trained to classify images of cats and dogs and is presented with an image of a tree, shouldn't the model output "unknown" or something similar?
SoftBank is a global technology player that aspires to drive the Information Revolution. The company operates in broadband, fixed-line telecommunications, e-commerce, information technology, finance, media, and marketing. To improve their users' communication experience and overcome 5G capacity and coverage issues, SoftBank has used NVIDIA Maxine GPU-accelerated SDKs with state-of-the-art AI features to build virtual collaboration and content creation applications.
In this post, you learn how SoftBank used the Maxine SuperResolution and hardware-accelerated encode-decode operations to reduce the amount of data that must be uplinked to the multi-access edge computing (MEC) servers. Besides solving the challenge of limited bandwidth, Maxine features such as noise removal and virtual background enabled SoftBank to deliver the best video conferencing solution for their users.
Benefits of using MEC
Edge computing enables providers to deploy their technology closer to users. Simply put, edge computing reduces bandwidth and latency budgets for mission-critical, high-throughput, low-latency applications. This is achieved using MEC network technology to move the computing from a remote cloud server to a node closer to the consumption source. Edge computing relies heavily on network technologies such as 4G, and more recently 5G, to provide connectivity.
Figure 1. Simplified overview of a pipeline involving MEC servers
5G features such as ultra-high speed, ultra-low latency, and multiple simultaneous connections enable new use cases, such as telemedicine and smart factories, that were previously unfeasible with wireless connectivity. MEC is the key to supporting these low-latency, high-throughput use cases. MEC reduces response delays by processing as much as possible at the edge, deploying regional MEC servers and sending only the minimum necessary data to the cloud. MEC servers often use the GPU's massively parallel computing power to process large amounts of data at high speed.
Challenges with the 5G network
Current 5G networks operate in a configuration called non-standalone (NSA). This configuration combines a 4G LTE network with a 5G base station, and some 5G features (such as network slicing) are not available. The 5G standalone (SA) configuration has both a 5G core and a 5G base station. With end-to-end 5G support, 5G SA speeds up services, reduces costs, improves the quality of service, and provides a better platform for deploying services.
Once the 5G SA configuration is in the market, the full 5G network will be complete. In other words, 5G evolves in two steps, 5G NSA and 5G SA, and capital investment is required for each step.
On the other hand, some telecom carriers, including SoftBank, have started using low-band 4G LTE frequencies for both 4G LTE and 5G NR. Theoretically, capacity and coverage are a trade-off in wireless communication. To ensure high-quality, wide-area coverage in the 5G SA configuration, SoftBank uses MEC to reduce service delays as much as possible.
Figure 2. The trade-off between capacity and coverage in 5G frequencies
In addition, there are some technical challenges. Mobile networks are generally designed to accommodate a higher downlink speed than uplink. This design philosophy works for general applications such as streaming videos on a smartphone, as most of the traffic is the downlink. However, some critical applications require a strong uplink connection. One of these is video conferencing, where the user needs considerable uplink bandwidth to stream high-resolution video and audio.
The current 5G uplink capacity is insufficient, and carrier aggregation and MIMO antennas are needed to provide more uplink allocation. As more and more devices connect to 5G, saving bandwidth, especially in the uplink, is a common challenge for all global telecom carriers.
Uplink bandwidth-intensive applications, such as video conferencing, can be served with the same quality of service at a reduced uplink bandwidth (for example, 500 Kbps) as with ample bandwidth (100 Mbps). Since roughly 200 streams at 500 Kbps fit in the uplink budget of a single 100-Mbps stream, many more devices can be connected while high-quality services are provided at the same time.
Video conferencing solution on MEC with NVIDIA Maxine
NVIDIA Maxine is a GPU-accelerated SDK platform that enables developers of video conferencing services to build and deploy AI-powered features that use state-of-the-art models in the cloud. Maxine includes APIs using the latest innovations from NVIDIA research, such as artifact reduction, body pose estimation, super resolution, and noise removal. Maxine also uses other products, like NVIDIA Riva, to provide features like closed captioning and access to virtual assistants. These capabilities are fully accelerated on NVIDIA GPUs to run real-time video streaming applications in the cloud.
Figure 3. Overview of Maxine super resolution
Maxine applications enable service providers to offer the same features to every user on any device, including computers, tablets, and phones. The key point is that all the processing happens on the cloud so that the application running on any device requires minimal resources. Applications built with Maxine are easily deployed as microservices and scale to hundreds of thousands of streams in a Kubernetes environment.
The idea is to offload the computationally intensive processing involved in video conferencing systems and reduce the amount of data that must be uplinked to the MEC servers. This is done through a combination of video effects like super resolution and hardware-accelerated encode-decode operations. Maxine also adds quality-of-life features like noise removal, virtual background, room echo cancelation, and more.
What does this mean for end users? Essentially, an end user with a low-bandwidth connection, working onsite amid a wide range of background noise, can connect with clean audio and high-definition video. For instance, a plant manager on a noisy production floor in a remote location with a 180p stream connection can appear to be in a quiet conference room with a 720p stream. Offloading compute to the MEC also translates to longer battery life and more free memory for multitasking on resource-constrained devices like mobile phones and laptops.
The features mentioned earlier are housed in the Maxine Video Effects SDK, Audio Effects SDK, and AR SDK. In addition, the NVIDIA Video Codec SDK provides hardware-accelerated encoding and decoding to aid the infrastructure around video conferencing.
Figure 4. Overview of Maxine AI Face codec
How SoftBank used NVIDIA Maxine
Typically, if you want to use a video conferencing solution on your mobile phone, you must first install a client application. In SoftBank's case, the Zoom client is installed on the MEC server in the carrier network instead of on the mobile phone. The phone's camera and microphone output is sent over the 5G network to the MEC server, which recognizes it as a virtual camera and microphone and uses it as input to the Zoom client.
Figure 5. SoftBank and Maxine POC: Overview diagram
The SoftBank proof-of-concept implementation runs on SoftBank's MEC servers (Windows) and uses a modified version of "WebRTC Client Momo," a C++-based open source WebRTC client, together with an application that calls the Video Effects SDK and Audio Effects SDK APIs.
The NvVFX_Run API (with the NVVFX_FX_SUPER_RES effect) in the Video Effects SDK and the NvAFX_Run API (with the NVAFX_EFFECT_DENOISER effect) in the Audio Effects SDK perform video super resolution and noise removal, respectively.
Figure 6. Sample code for Video Effects SDK API
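The original sample code appears only as a screenshot in Figure 6. As a rough sketch of what that call sequence typically looks like with the Video Effects SDK C API (the model directory, buffer sizes, and mode value below are illustrative assumptions rather than the PoC's actual settings; check the exact signatures against your SDK headers):

```cpp
// Hedged sketch: 180p -> 720p SuperResolution with the Maxine Video Effects SDK.
// Error handling is omitted; paths and dimensions are illustrative assumptions.
#include "nvVideoEffects.h"
#include "nvCVImage.h"

int main() {
  NvVFX_Handle effect = nullptr;
  NvVFX_CreateEffect(NVVFX_FX_SUPER_RES, &effect);             // create the SuperRes effect
  NvVFX_SetString(effect, NVVFX_MODEL_DIRECTORY, "./models");  // assumed model directory
  NvVFX_SetU32(effect, NVVFX_MODE, 1);                         // 1 = higher-quality mode

  // GPU buffers: 320x180 input and 1280x720 output (4x scale), planar float BGR,
  // the layout the SuperRes effect typically expects.
  NvCVImage srcGpu, dstGpu;
  NvCVImage_Alloc(&srcGpu, 320, 180, NVCV_BGR, NVCV_F32, NVCV_PLANAR, NVCV_GPU, 1);
  NvCVImage_Alloc(&dstGpu, 1280, 720, NVCV_BGR, NVCV_F32, NVCV_PLANAR, NVCV_GPU, 1);

  NvVFX_SetImage(effect, NVVFX_INPUT_IMAGE, &srcGpu);
  NvVFX_SetImage(effect, NVVFX_OUTPUT_IMAGE, &dstGpu);
  NvVFX_Load(effect);                                          // load the model

  // For each decoded frame: copy the frame into srcGpu, then run the effect;
  // dstGpu then holds the upscaled 720p frame ready for re-encoding.
  NvVFX_Run(effect, 0);

  NvCVImage_Dealloc(&srcGpu);
  NvCVImage_Dealloc(&dstGpu);
  NvVFX_DestroyEffect(effect);
  return 0;
}
```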
Figure 7. Sample code for Audio Effects SDK API
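The Audio Effects SDK sample is likewise shown only as a screenshot in Figure 7. A comparable hedged sketch of the denoiser call sequence follows; the model path, sample rate, and frame size are assumptions for illustration, and some parameter selector names differ between SDK versions:

```cpp
// Hedged sketch: noise removal with the Maxine Audio Effects SDK denoiser.
// Paths, sample rate, and frame size are illustrative assumptions; check the
// parameter selectors in your SDK version's headers.
#include <vector>
#include "nvAudioEffects.h"

int main() {
  NvAFX_Handle effect = nullptr;
  NvAFX_CreateEffect(NVAFX_EFFECT_DENOISER, &effect);
  NvAFX_SetString(effect, NVAFX_PARAM_MODEL_PATH, "./models/denoiser_48k.trtpkg");  // assumed path
  NvAFX_SetU32(effect, NVAFX_PARAM_INPUT_SAMPLE_RATE, 48000);  // selector name varies by version
  NvAFX_Load(effect);

  // Process one 10 ms mono frame (480 samples at 48 kHz); in the PoC this would be
  // fed continuously from the virtual microphone stream.
  const unsigned numSamples = 480, numChannels = 1;
  std::vector<float> input(numSamples), output(numSamples);
  const float* inPtr = input.data();
  float* outPtr = output.data();
  NvAFX_Run(effect, &inPtr, &outPtr, numSamples, numChannels);

  NvAFX_DestroyEffect(effect);
  return 0;
}
```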
The video stream sent from the 5G user equipment using the WebRTC protocol is uploaded to the MEC at a low bit rate (in this verification, H.264 (CBR) 180p) to conserve uplink bandwidth. MEC receives degraded audio and video at low bit rates and improves quality using Maxine SDKs. For video, the MEC server uses the Maxine SuperResolution function to resize the video sent from the user equipment at 180p to 720p. SuperResolution reduces noise and restores high-frequency components, resulting in high-quality video.
Figure 8 shows the results of SuperResolution.
Figure 8. The original blocky image (the left half) vs. image after applying Maxine AI features (the right half)
In Figure 8, the left side is the original data before applying SuperResolution, and the right side is the upscaled image. The blocky artifacts in the facial details are replaced with more pixels, leading to a high-quality image. You can replicate these results using the sample application provided with the Video Effects SDK.
As with the SuperResolution results, the noise removal results are shown in the following video.
Video 1. Video showcasing the output of Noise Removal
The video shows the results of testing the Maxine noise removal feature in a scenario where the user is talking while typing on a keyboard. Keyboard sounds were selected as a sample here, but noise removal also proved useful in various situations throughout the development of SoftBank's PoC. SoftBank believes noise removal makes meetings possible in noisy environments, such as outdoors or in a car.
You can replicate these results using the sample application provided with the Audio Effects SDK.
Improve the quality of your video stream
By deploying Maxine on their MEC servers, SoftBank now provides not only low latency but also a high-quality video and audio experience to all end users. The improved experience comes with substantial savings in uplink bandwidth, and no additional hardware or user equipment is needed. To improve video quality further, SoftBank plans to use the Maxine AI Face Codec.
Award-winning research, stunning demos, a sweeping vision for how NVIDIA Omniverse will accelerate the work of millions more professionals, and a new pro RTX GPU were the highlights at this week's SIGGRAPH pro graphics conference. Kicking off the week, NVIDIA's SIGGRAPH special address featured Richard Kerris, vice president, Omniverse, and Sanja Fidler, senior director, AI.
I want to run a minimal TF server on my machine to do inference on a GAN. So it needs to send the image over localhost. I can’t find any comparable examples. Any help?
We have two servers: server A with a CPU and high storage capacity, and server B, an NVIDIA DGX with 4 GPUs. The tutorial code doesn't say how to start the server on the worker (here, the DGX). Is there any fully working demo explaining it step by step?
With NVIDIA DriveWorks SDK, autonomous vehicles can bring their understanding of the world to a new dimension.
The SDK enables autonomous vehicle developers to easily process three-dimensional lidar data and apply it to specific tasks, such as perception or localization. You can learn how to implement this critical toolkit in our expert-led webinar, Point Cloud Processing on DriveWorks, Aug. 25.
Lidar sensors enhance an autonomous vehicle’s sensing capabilities, detecting the depth of surrounding objects that may not be picked up by camera or radar.
Lidar does so by bouncing invisible lasers off the vehicle's surrounding environment and building a 3D image based on the time it takes for those lasers to return. However, processing and extracting contextual meaning from lidar data efficiently and quickly is not as straightforward.
Lidar point cloud processing must be performed in real time and in tight coordination with other sensing modalities to deliver the full benefits of enhanced perception, a difficult feat to accomplish when working with third-party open source modules.
A Streamlined Solution
With DriveWorks, efficient and accelerated lidar point cloud processing can be performed right out of the gate.
The SDK provides middleware functions that are fundamental to autonomous vehicle development. These consist of the sensor abstraction layer (SAL) and sensor plugins, data recorder, vehicle I/O support and a deep neural network framework. It’s modular, open, and designed to be compliant with automotive industry software standards.
These development tools include a point cloud processing module, which works with the SAL and sensor plugin framework to provide a solid basis for developers to implement a lidar-based perception pipeline with little effort and quick results.
The module is CUDA-accelerated and straightforward to implement. It’s the same toolkit the NVIDIA autonomous driving team uses to develop our own self-driving systems, making it purpose-built for production solutions rather than purely research and development.
Register now to learn more from NVIDIA experts about the DriveWorks point cloud processing module and how to use it in your autonomous vehicle development process.
Hey all, I'm trying to take a trained model and move it to a local deployment setup (OpenFrameworks with the ofxTensorFlow2 library), but the library only accepts .pb format models. Is there a way to convert the model from .pkl to .pb? It is a TF model, so I feel like maybe it isn't so hard, but I have no idea how.