Categories
Misc

whats poppin my dudes ( git diff tfjs tf.py)

anyone got any tips on the disparity between Python and the JavaScript (transpiled) support? for example, I made a model in Python, transpiled it, and then it turns out, after all that and lots of digging into bugs, that some transformative layers we can use in Python are not supported in the JavaScript version. any other things worth noting? what are the gaps?

submitted by /u/doctor_slimm

Categories
Misc

How To Build Custom Object Detector In Live Video With TensorFlow | Introduction | #AI

submitted by /u/Minayafl
Categories
Offsites

MURAL: Multimodal, Multi-task Retrieval Across Languages

For many concepts, there is no direct one-to-one translation from one language to another, and even when there is, such translations often carry different associations and connotations that are easily lost for a non-native speaker. In such cases, however, the meaning may be more obvious when grounded in visual examples. Take, for instance, the word “wedding”. In English, one often associates a bride in a white dress and a groom in a tuxedo, but when translated into Hindi (शादी), a more appropriate association may be a bride wearing vibrant colors and a groom wearing a sherwani. What each person associates with the word may vary considerably, but if they are shown an image of the intended concept, the meaning becomes more clear.

The word “wedding” in English and Hindi conveys different mental images. Images are taken from Wikipedia, credited to Psoni2402 (left) and David McCandless (right), under the CC BY-SA 4.0 license.

With current advances in neural machine translation and image recognition, it is possible to reduce this sort of ambiguity in translation by presenting a text paired with a supporting image. Prior research has made much progress in learning image–text joint representations for high-resource languages, such as English. These representation models strive to encode the image and text into vectors in a shared embedding space, such that the image and the text describing it are close to each other in that space. For example, ALIGN and CLIP have shown that training a dual-encoder model (i.e., one trained with two separate encoders) on image–text pairs using a contrastive learning loss works remarkably well when provided with ample training data.
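
To make the dual-encoder idea concrete, here is a minimal PyTorch sketch of a symmetric contrastive (InfoNCE-style) loss over a batch of paired image and text embeddings; the embedding size, batch size, and temperature are illustrative assumptions rather than the actual ALIGN or CLIP configuration.

import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE loss for a batch of paired image/text embeddings.
    Matching pairs share the same row index; all other rows act as negatives."""
    # L2-normalize so the dot product becomes a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix, scaled by a temperature.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0))

    # Cross-entropy in both retrieval directions, averaged.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Random embeddings stand in for the outputs of the two encoders.
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))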

Unfortunately, such image–text pair data does not exist at the same scale for the majority of languages. In fact, more than 90% of this type of web data belongs to the top-10 highly-resourced languages, such as English and Chinese, with much less data for under-resourced languages. To overcome this issue, one could either try to manually collect image–text pair data for under-resourced languages, which would be prohibitively difficult due to the scale of the undertaking, or one could seek to leverage pre-existing datasets (e.g., translation pairs) that could inform the necessary learned representations for multiple languages.

In “MURAL: Multimodal, Multitask Representations Across Languages”, presented at Findings of EMNLP 2021, we describe a representation model for image–text matching that uses multitask learning applied to image–text pairs in combination with translation pairs covering 100+ languages. This technology could allow users to express words that may not have a direct translation into a target language using images instead. For example, the word “valiha” refers to a type of tube zither played by the Malagasy people; it lacks a direct translation into most languages but can be easily described using images. Empirically, MURAL shows consistent improvements over state-of-the-art models and competitive baselines across a range of benchmarks. Moreover, MURAL does remarkably well for the majority of the under-resourced languages on which it was tested. Additionally, we discover interesting linguistic correlations learned by MURAL representations.

MURAL Architecture
The MURAL architecture is based on the structure of ALIGN, but employed in a multitask fashion. Whereas ALIGN uses a dual-encoder architecture to draw together representations of images and associated text descriptions, MURAL employs the dual-encoder structure for the same purpose while also extending it across languages by incorporating translation pairs. The dataset of image–text pairs is the same as that used for ALIGN, and the translation pairs are those used for LaBSE.

MURAL solves two contrastive learning tasks: 1) image–text matching and 2) text–text (bitext) matching, with both tasks sharing the text encoder module. The model learns associations between images and text from the image–text data, and learns the representations of hundreds of diverse languages from the translation pairs. The idea is that a shared encoder will transfer the image–text association learned from high-resource languages to under-resourced languages. We find that the best model employs an EfficientNet-B7 image encoder and a BERT-large text encoder, both trained from scratch. The learned representation can be used for downstream visual and vision-language tasks.

The MURAL architecture: dual encoders with a text encoder shared between the two tasks, trained using a contrastive learning loss.
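
To illustrate the multitask structure in code, the sketch below wires an image encoder and a single shared text encoder so that both the image–text task and the text–text (bitext) task pass through the same text encoder; the linear stand-ins, feature sizes, and loss weighting are assumptions for illustration, not MURAL's published configuration.

import torch
import torch.nn as nn

class DualEncoderMultitask(nn.Module):
    """Toy multitask dual encoder: one image encoder, one shared text encoder."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.image_encoder = nn.Linear(2048, dim)  # stand-in for an EfficientNet image tower
        self.text_encoder = nn.Linear(768, dim)    # stand-in for a BERT-style text tower (shared)

    def forward(self, image_feats, caption_feats, src_text_feats, tgt_text_feats):
        # Task 1: image-text matching uses the image encoder and the shared text encoder.
        img_emb = self.image_encoder(image_feats)
        cap_emb = self.text_encoder(caption_feats)
        # Task 2: text-text (bitext) matching reuses the same text encoder on both sides.
        src_emb = self.text_encoder(src_text_feats)
        tgt_emb = self.text_encoder(tgt_text_feats)
        return img_emb, cap_emb, src_emb, tgt_emb

# The overall objective would then be a weighted sum of two contrastive losses
# (weights are illustrative), e.g.:
# loss = w_it * contrastive_loss(img_emb, cap_emb) + w_tt * contrastive_loss(src_emb, tgt_emb)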

Multilingual Image-to-Text and Text-to-Image Retrieval
To demonstrate MURAL’s capabilities, we choose the task of cross-modal retrieval (i.e., retrieving relevant images given a text and vice versa) and report the scores on various academic image–text datasets covering well-resourced languages, such as MS-COCO (and its Japanese variant, STAIR), Flickr30K (in English) and Multi30K (extended to German, French, and Czech), and XTD (a test-only set with seven well-resourced languages: Italian, Spanish, Russian, Chinese, Polish, Turkish, and Korean). In addition to well-resourced languages, we also evaluate MURAL on the recently published Wikipedia Image–Text (WIT) dataset, which covers 108 languages, with a broad range of both well-resourced (English, French, Chinese, etc.) and under-resourced (Swahili, Hindi, etc.) languages.

MURAL consistently outperforms prior state-of-the-art models, including M3P, UC2, and ALIGN, in both zero-shot and fine-tuned settings evaluated on well-resourced and under-resourced languages. We see remarkable performance gains for under-resourced languages when compared to the state-of-the-art model, ALIGN.

Mean recall on various multilingual image–text retrieval benchmarks. Mean recall is a common metric used to evaluate cross-modal retrieval performance on image–text datasets (higher is better). It measures the Recall@N (i.e., the chance that the ground truth image appears in the first N retrieved images) averaged over six measurements: Image→Text and Text→Image retrieval for N=[1, 5, 10]. Note that XTD scores report Recall@10 for Text→Image retrieval.
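
For reference, a small NumPy sketch of how Recall@N and this six-way mean recall can be computed from a query-candidate similarity matrix is shown below; the random scores and shapes are purely illustrative.

import numpy as np

def recall_at_n(scores: np.ndarray, n: int) -> float:
    """scores[i, j] = similarity between query i and candidate j; the ground-truth
    candidate for query i is assumed to be candidate i."""
    top_n = np.argsort(-scores, axis=1)[:, :n]  # top-N candidates per query
    hits = (top_n == np.arange(scores.shape[0])[:, None]).any(axis=1)
    return float(hits.mean())

def mean_recall(image_to_text_scores: np.ndarray) -> float:
    """Average of Recall@1/5/10 over both retrieval directions."""
    text_to_image_scores = image_to_text_scores.T
    values = [recall_at_n(s, n)
              for s in (image_to_text_scores, text_to_image_scores)
              for n in (1, 5, 10)]
    return float(np.mean(values))

# Example with a random 100x100 similarity matrix standing in for model scores.
print(mean_recall(np.random.rand(100, 100)))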

Retrieval Analysis
We also analyzed zero-shot retrieved examples on the WIT dataset comparing ALIGN and MURAL for English (en) and Hindi (hi). For under-resourced languages like Hindi, MURAL shows improved retrieval performance compared to ALIGN that reflects a better grasp of the text semantics.

Comparison of the top-5 images retrieved by ALIGN and by MURAL for the Text→Image retrieval task on the WIT dataset for the Hindi text, “एक तश्तरी पर बिना मसाले या सब्ज़ी के रखी हुई सादी स्पगॅत्ती”, which translates to the English, “A bowl containing plain noodles without any spices or vegetables”.

Even for Image→Text retrieval in a well-resourced language, like French, MURAL shows better understanding for some words. For example, MURAL returns better results for the query “cadran solaire” (“sundial”, in French) than ALIGN, which doesn’t retrieve any text describing sundials (below).

Comparison of the top-5 text results from ALIGN and from MURAL on the Image→Text retrieval task for the same image of a sundial.

Embeddings Visualization
Previously, researchers have shown that visualizing model embeddings can reveal interesting connections among languages — for instance, representations learned by a neural machine translation (NMT) model have been shown to form clusters based on their membership to a language family. We perform a similar visualization for a subset of languages belonging to the Germanic, Romance, Slavic, Uralic, Finnic, Celtic, and Finno-Ugric language families (widely spoken in Europe and Western Asia). We compare MURAL’s text embeddings with LaBSE’s, which is a text-only encoder.
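
The post does not say which projection method was used for these plots; as a rough sketch of the general approach, one could reduce per-language text embeddings to 2D with t-SNE (scikit-learn assumed here) and label or color each point by language:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Illustrative stand-in: one mean text embedding per language (35 languages, 512-dim).
languages = [f"lang_{i}" for i in range(35)]
embeddings = np.random.randn(35, 512)

# Project the embeddings to 2D for visualization.
coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), name in zip(coords, languages):
    plt.annotate(name, (x, y), fontsize=7)
plt.title("2D projection of per-language text embeddings")
plt.show()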

A plot of LaBSE’s embeddings shows distinct clusters of languages influenced by language families. For instance, Romance languages (in purple, below) fall into a different region than Slavic languages (in brown, below). This finding is consistent with prior work that investigates intermediate representations learned by an NMT system.

Visualization of text representations of LaBSE for 35 languages. Languages are color coded based on their genealogical association. Representative languages include: Germanic (red) — German, English, Dutch; Uralic (orange) — Finnish, Estonian; Slavic (brown) — Polish, Russian; Romance (purple) — Italian, Portuguese, Spanish; Gaelic (blue) — Welsh, Irish.

In contrast to LaBSE’s visualization, MURAL’s embeddings, which are learned with a multimodal objective, show some clusters that are in line with areal linguistics (where elements are shared by languages or dialects in a geographic area) and contact linguistics (where languages or dialects interact and influence each other). Notably, in the MURAL embedding space, Romanian (ro) is closer to Slavic languages like Bulgarian (bg) and Macedonian (mk) than it is in LaBSE, which is in line with the Balkan sprachbund. Another possible language contact brings the Finnic languages, Estonian (et) and Finnish (fi), closer to the Slavic languages cluster. The fact that MURAL pivots on images as well as translations appears to add an additional view on language relatedness as learned in deep representations, beyond the language family clustering observed in a text-only setting.

Visualization of text representations of MURAL for 35 languages. Color coding is the same as the figure above.

Final Remarks
Our findings show that training jointly using translation pairs helps overcome the scarcity of image–text pairs for many under-resourced languages and improves cross-modal performance. Additionally, it is interesting to observe hints of areal linguistics and contact linguistics in the text representations learned by using a multimodal model. This warrants more probing into different connections learned implicitly by multimodal models, such as MURAL. Finally, we hope this work promotes further research in the multimodal, multilingual space where models learn representations of and connections between languages (expressed via images and text), beyond well-resourced languages.

Acknowledgements
This research is in collaboration with Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, and Jason Baldridge. We thank Zarana Parekh, Orhan Firat, Yuqing Chen, Apu Shah, Anosh Raj, Daphne Luong, and others who provided feedback for the project. We are also grateful for general support from Google Research teams.

Categories
Misc

Creating Smarter Spaces with NVIDIA Metropolis and Edge AI

Learn how AI-enabled video analytics is helping companies and employees work smarter and safer.

What do a factory floor, retail store, and major roadway have in common? They are a few examples of valuable and constrained infrastructure that need to be optimized. Manufacturers aim for early detection of defects in the assembly process. Retailers seek to better understand their customer journey and deliver more frictionless checkout experiences. Traffic planners look to reduce traffic gridlock.  

Over one billion cameras are deployed worldwide in nearly all of our important spaces, generating tremendous amounts of data. Without a system for analyzing this data, valuable insights are lost. Enter AI-powered computer vision, which unlocks the insights hidden in video, enabling cities and companies to improve their safety and operational efficiency.

Optimizing AI-enabled video analytics solutions streamlines tasks across industries, from healthcare to manufacturing, helping companies and their employees to work smarter and safer.   

NVIDIA Metropolis is an application framework, set of developer tools, and partner ecosystem that unites visual data and AI to enable greater functionality and efficiency across a range of physical spaces and environments. 

Transit hubs, retail stores, and factories use vision AI applications for more efficient, accessible, and safe operations. The following examples illustrate vision AI applications transforming how we use and manage our most critical spaces. 

Airports: With terminals serving and moving millions of passengers a year, airports are small cities, industrial sites, and transportation hubs. AI-enabled video analytics solutions identify and manage incidents in real time to minimize disruptions to passengers and airport operations. These solutions help airlines accelerate airplane turnarounds, deliver safer airport operations, and provide parking management to passengers. 

Factories: Companies are increasingly automating their manufacturing processes with IoT sensors, the most common of which are video cameras. These cameras capture vast amounts of data that, when combined with the power of AI, produce valuable insights that manufacturers can use to improve operational efficiency. Real-time understanding and responses are critical, such as identifying product defects on assembly lines, scanning for workplace hazards and signaling when machines require maintenance.

Farms: Farmers around the world are turning to vision AI applications to automate and improve their operations and yield quality. These applications help in a wide range of use cases, from counting cows to detecting weeds to the robotic pollination of tomatoes. These computer vision applications help farmers revolutionize food production by improving yield and using fewer resources.

Stadiums: Millions of people around the world visit stadiums to enjoy live sporting and cultural events. AI-enabled video analytics solutions are used to automate perimeter protection, weapons detection, crowd analytics, parking management, and suspicious behavior monitoring to provide a safer and more cohesive experience for visitors.

Hospitals: AI-enabled video analytics solutions help keep track of operating room procedures, ultimately improving patient care and surgical outcomes. By using accurate action logging, hospital staff can monitor surgical procedures, enforce disinfecting protocols, and check medical supply inventory levels in real time. AI-enabled video analytics reduces the need for human input on certain routine tasks, giving doctors and nurses more time with their patients.

Universities: AI vision helps university administrators better understand how physical spaces, like offices, gyms, and halls, are used. AI applications can also analyze real-time video footage and generate insights that inform better campus management, from detecting crowd flow patterns to creating immediate alerts for abnormal activities like fires, accidents, or water leakage.



A new generation of AI applications at the edge is driving incredible operational efficiency and safety gains across a broad range of spaces. Download a free e-book to learn how Metropolis and edge AI are helping build smarter and safer spaces around the world.

Categories
Misc

How to get reproducible results in TensorFlow?

I’m working on a project based on a conda environment, using:

  • tensorflow-gpu=2.4.0,
  • cudatoolkit=10.2.89,
  • cudnn=7.6.5.

I’d like to have reproducible results, so I tried with:

import os
import random
import numpy as np
from numpy.random import default_rng
import tensorflow as tf

random.seed(0)
rng = default_rng(0)
tf.random.set_seed(0)

And launching the python script from the terminal as:

PYTHONHASHSEED=0 python /path/to/main.py 

But my results are not reproducible.

Without posting my code (because it is long and includes many files), what other aspects should I consider in order to get reproducibility?

PS: the artificial neural network is a CNN and is created by adding layers such as, e.g., tf.keras.layers.Convolution2D(…)
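
For reference, one commonly cited set of additional settings for this TF 2.x range is sketched below: seed the legacy NumPy global RNG as well, and request deterministic GPU kernels via environment variables set before TensorFlow is used. Whether this is sufficient depends on the ops involved (some GPU kernels have no deterministic implementation), and tf.data shuffling or multi-threaded data loading can add further nondeterminism, so treat it as a starting point rather than a guarantee.

# Set seeds and determinism flags before any TensorFlow ops run.
import os
os.environ["PYTHONHASHSEED"] = "0"
os.environ["TF_DETERMINISTIC_OPS"] = "1"    # request deterministic GPU kernels (TF 2.1+)
os.environ["TF_CUDNN_DETERMINISTIC"] = "1"  # older cuDNN determinism flag; harmless to set as well

import random
import numpy as np
import tensorflow as tf

random.seed(0)
np.random.seed(0)       # legacy NumPy global RNG, still used by many libraries
tf.random.set_seed(0)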

submitted by /u/RainbowRedditForum

Categories
Misc

whats poppin my dudes

any suggestions on how I could avoid the ‘loading’ aspect of a model in a server that serves client requests to a web API endpoint? such that the model is permanently ‘loaded’ and only has to make predictions?

# to save compute time that is (duh)

beep bop
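
One common pattern is to load the model once at server startup and reuse it for every request; below is a minimal sketch assuming Flask and a Keras SavedModel (the framework choice, model path, and endpoint name are hypothetical). Dedicated model servers such as TensorFlow Serving address the same problem by keeping the model resident in memory.

from flask import Flask, request, jsonify
import numpy as np
import tensorflow as tf

app = Flask(__name__)

# Loaded once when the server process starts, then reused across requests.
MODEL = tf.keras.models.load_model("/path/to/saved_model")  # hypothetical path

@app.route("/predict", methods=["POST"])
def predict():
    features = np.array(request.get_json()["inputs"])
    preds = MODEL.predict(features)
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)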

submitted by /u/doctor_slimm

Categories
Misc

NVIDIA BlueField DPU Ecosystem Expands as Partners Introduce Joint Solutions

Learn how these industry leaders have started to integrate their solutions using the DPU/DOCA architecture, as key partners showcased these solutions at the recent NVIDIA GTC.

NVIDIA recently introduced the NVIDIA DOCA 1.2 software framework for NVIDIA BlueField DPUs, the world’s most advanced data processing units (DPUs). This latest release builds on the momentum of the DOCA early access program to enable partners and customers to accelerate the development of applications and holistic zero trust solutions on the DPU.

NVIDIA is working with leading platform vendors and partners to integrate and expand DOCA support for commercial distributions on NVIDIA BlueField DPUs. Learn how these industry leaders have started to integrate their solutions using the DPU/DOCA architecture, as key partners showcased these solutions at the recent NVIDIA GTC.

Red Hat – “Sensitive Information Detection using the NVIDIA Morpheus AI framework”
Red Hat and NVIDIA have been working together to bring the security analytics capabilities of the NVIDIA Morpheus AI application framework to the Red Hat infrastructure platforms for cybersecurity developers. This post provides a set of configuration instructions to Red Hat developers working on applications that use the NVIDIA Morpheus AI application framework and NVIDIA BlueField DPUs to secure interservice communication.  

Figure 1. Red Hat and NVIDIA High-level architecture

Juniper Networks – “Extending the Edge of the Network with Juniper Edge Services Platform (JESP)”
Earlier this year, Juniper discussed the value of extending the network all the way to the server through DPUs, such as NVIDIA BlueField DPU-powered SmartNICs, and how these devices can be used to provide L2-L7 networking and security services. At NVIDIA GTC, Juniper provided a sneak preview of an internal project, Juniper Edge Services Platform (JESP), which enables the extension of the network all the way to the SmartNIC.

Figure 2. Juniper Edge Services Platform (JESP)

F5 – “Redefining Cybersecurity at the Distributed Cloud Edge with AI and Real-time Telemetry”
Augmenting well-established security measures for web, application, firewall, and fraud mitigation, F5 is researching techniques to detect advanced threats that require contextual analysis of many data points via large-scale telemetry and near real-time analysis. This is where NVIDIA BlueField-2 DPU-based real-time telemetry and the NVIDIA GPU-powered Morpheus cybersecurity framework come into play.

Figure 3. F5 Advanced Threats Classification

Excelero – “Storage Horsepower for Critical Application Performance”
NVMesh is low-latency, distributed storage software that is deployed across machines with very high-speed local drives (NVMe SSDs, to be exact), enabling high-speed compute and data throughput that far exceed anything achievable with other storage alternatives, at a significantly lower cost. Network performance is also critical, which is why Excelero is working with NVIDIA and its BlueField DPU, plus the NVIDIA DOCA software framework.

DDN – “DDN Supercharges AI Security with NVIDIA”
Along with NVIDIA, DDN is helping customers choose a data strategy that supports enterprise-scale AI workloads with a “Storage-as-a-Service” approach. This solution delivers cost-effective centralized infrastructure that meets the performance and scalability needs of complex AI applications and datasets.  

Early access to the DOCA software framework is available now.

To experience accelerated software-defined management services today, click here to register and download the BlueField DPU software package that includes DOCA runtime accelerated libraries for networking, security, and storage.

Additional Resources:
Web: DOCA Home Page
Web: BlueField DPU Home Page
DLI Course: Introduction to NVIDIA DOCA for BlueField DPUs
Whitepaper: DPU-Based Hardware Acceleration: A Software Perspective
NVIDIA Corporate Blog: NVIDIA Creates Zero-Trust Cybersecurity Platform
NVIDIA Developer Blog: NVIDIA Introduces BlueField DPU as a Platform for Zero Trust Security with DOCA 1.2

Categories
Misc

Creating Robust and Generalizable AI Models with NVIDIA FLARE

NVIDIA FLARE v2.0 is an open-source federated learning SDK that is making it easier for data scientists to collaborate to develop more generalizable, robust AI models by sharing model weights rather than private data.

Federated learning (FL) has become a reality for many real-world applications. It enables multinational collaborations on a global scale to build more robust and generalizable machine learning and AI models. For more information, see Federated learning for predicting clinical outcomes in patients with COVID-19.

NVIDIA FLARE v2.0 is an open-source FL SDK that is making it easier for data scientists to collaborate to develop more generalizable, robust AI models by sharing model weights rather than private data.

For healthcare applications, this is particularly beneficial where data is patient protected, data may be sparse for certain patient types and diseases, or data lacks diversity across instrument types, genders, and geographies.

NVIDIA FLARE

NVIDIA FLARE stands for Federated Learning Application Runtime Environment. It is the engine underlying the NVIDIA Clara Train FL software, which has been used for AI applications in medical imaging, genetic analysis, oncology, and COVID-19 research. The SDK enables researchers and data scientists to adapt their existing machine learning and deep learning workflows to a distributed paradigm and enables platform developers to build a secure, privacy-preserving offering for distributed multiparty collaboration.

NVIDIA FLARE is a lightweight, flexible, and scalable distributed learning framework implemented in Python that is agnostic to your underlying training library. You can bring your own data science workflows implemented in PyTorch, TensorFlow, or even just NumPy, and apply them in a federated setting.

Maybe you’d like to implement the popular federated averaging (FedAvg) algorithm. Starting from an initial global model, each FL client trains the model on their local data for a certain amount of time and sends model updates to the server for aggregation. The server then uses the aggregated updates to update the global model for the next round of training. This process is iterated many times until the model converges.
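
As a concrete illustration of the aggregation step, here is a minimal NumPy sketch of federated averaging in which each client's locally trained weights are weighted by its number of training examples; this is a simplified sketch of the general FedAvg idea, not NVIDIA FLARE's aggregator implementation.

import numpy as np

def fed_avg(client_weights, client_num_samples):
    """Sample-weighted average of client model weights.

    client_weights:     list of dicts mapping layer name -> np.ndarray
    client_num_samples: list of ints, training examples per client
    """
    total = float(sum(client_num_samples))
    new_global = {}
    for name in client_weights[0]:
        # Each layer of the new global model is the weighted mean of the clients' layers.
        new_global[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_num_samples)
        )
    return new_global

# Example round with two clients and a single 2x2 "layer".
updates = [{"w": np.ones((2, 2))}, {"w": 3 * np.ones((2, 2))}]
print(fed_avg(updates, [100, 300]))  # -> 2.5 everywhere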

NVIDIA FLARE provides customizable controller workflows to help you implement FedAvg and other FL algorithms, for example, cyclic weight transfer. It schedules different tasks, such as deep learning training, to be executed on the participating FL clients. The workflows enable you to gather the results, such as model updates, from each client and aggregate them to update the global model and send back the updated global models for continued training. Figure 1 shows the principle.

Each FL client acts as a worker requesting the next task to be executed, such as model training. After the controller provides the task, the worker executes it and returns the results to the controller. At each communication, there can be optional filters that process the task data or results, for example, homomorphic encryption and decryption or differential privacy.

Figure 1. NVIDIA FLARE workflow

Your task for implementing FedAvg could be a simple PyTorch program that trains a classification model for CIFAR-10. Your local trainer could look something like the following code example. For this post, I skip the full training loop for simplicity.

import torch
import torch.nn as nn
import torch.nn.functional as F

from nvflare.apis.dxo import DXO, DataKind, MetaKey, from_shareable
from nvflare.apis.executor import Executor
from nvflare.apis.fl_constant import ReturnCode
from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable, make_reply
from nvflare.apis.signal import Signal
from nvflare.app_common.app_constant import AppConstants


class SimpleNetwork(nn.Module):
    def __init__(self):
        super(SimpleNetwork, self).__init__()

        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


class SimpleTrainer(Executor):
    def __init__(self, train_task_name: str = AppConstants.TASK_TRAIN):
        super().__init__()
        self._train_task_name = train_task_name
        self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        self.model = SimpleNetwork()
        self.model.to(self.device)
        self.optimizer = torch.optim.SGD(self.model.parameters(), lr=0.001, momentum=0.9)
        self.criterion = nn.CrossEntropyLoss()

    def execute(self, task_name: str, shareable: Shareable, fl_ctx: FLContext, abort_signal: Signal) -> Shareable:
        """
        This function is an extended function from the superclass.
        As a supervised learning-based trainer, the train function will run
        training based on model weights from `shareable`.
        After finishing training, a new `Shareable` object will be submitted
        to server for aggregation."""

        if task_name == self._train_task_name:
            epoch_len = 1

            # Get current global model weights
            dxo = from_shareable(shareable)

            # Ensure data kind is weights.
            if not dxo.data_kind == DataKind.WEIGHTS:
                self.log_exception(fl_ctx, f"data_kind expected WEIGHTS but got {dxo.data_kind} instead.")
                return make_reply(ReturnCode.EXECUTION_EXCEPTION)  # creates an empty Shareable with the return code

            # Convert weights to tensor and run training
            torch_weights = {k: torch.as_tensor(v) for k, v in dxo.data.items()}
            self.local_train(fl_ctx, torch_weights, epoch_len, abort_signal)

            # compute the differences between torch_weights and the now locally trained model
            model_diff = ...

            # build the shareable using a Data Exchange Object (DXO)
            dxo = DXO(data_kind=DataKind.WEIGHT_DIFF, data=model_diff)
            dxo.set_meta_prop(MetaKey.NUM_STEPS_CURRENT_ROUND, epoch_len)

            self.log_info(fl_ctx, "Local training finished. Returning shareable")
            return dxo.to_shareable()
        else:
            return make_reply(ReturnCode.TASK_UNKNOWN)

    def local_train(self, fl_ctx, weights, epoch_len, abort_signal):
        # Your training routine should respect the abort_signal.
        ...
        # Your local training loop ...
        for e in range(epoch_len):
            ...
            if abort_signal.triggered:
                self._abort_execution()
            ...

    def _abort_execution(self, return_code=ReturnCode.ERROR) -> Shareable:
        return make_reply(return_code)

You can see that your task implementations could be doing many different tasks. You could compute summary statistics on each client and share with the server (keeping privacy constraints in mind), perform preprocessing of the local data, or evaluate already trained models.

During FL training, you can plot the performance of the global model at the beginning of each training round. For this example, we ran with eight clients on a heterogeneous data split of CIFAR-10. In the following plot (Figure 2), I show the different configurations that are available in NVIDIA FLARE 2.0 by default:

  • FedAvg
  • FedProx
  • FedOpt
  • FedAvg with secure aggregation using homomorphic encryption (FedAvg HE)
Figure 2. Validation accuracy of the global models for different FL algorithms during training

While FedAvg, FedAvg HE, and FedProx perform comparably for this task, you can observe an improved convergence using the FedOpt setting that uses SGD with momentum to update the global model on the server.
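
For intuition on that setting, the sketch below shows the kind of server-side SGD-with-momentum update a FedOpt-style method can apply, treating the aggregated client weight difference as a pseudo-gradient; the update form and hyperparameters are illustrative assumptions, not the exact configuration used in NVIDIA FLARE.

import numpy as np

def server_momentum_update(global_w, aggregated_diff, velocity, lr=1.0, momentum=0.6):
    """Apply SGD with momentum on the server, using the aggregated
    client weight difference as a pseudo-gradient (illustrative FedOpt-style step)."""
    new_velocity, new_global = {}, {}
    for name in global_w:
        new_velocity[name] = momentum * velocity.get(name, 0.0) + aggregated_diff[name]
        new_global[name] = global_w[name] + lr * new_velocity[name]
    return new_global, new_velocity

# One server round: start from zeros, move toward an aggregated difference of ones.
g = {"w": np.zeros((2, 2))}
g, v = server_momentum_update(g, {"w": np.ones((2, 2))}, velocity={})
print(g["w"])  # -> 1.0 everywhere after the first round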

The whole FL system can be controlled using the admin API to automatically start and operate differently configured tasks and workflows. NVIDIA also provides a comprehensive provisioning system that enables not only the easy and secure deployment of FL applications in the real world, but also proof-of-concept studies for running local FL simulations.

Figure 3. NVIDIA FLARE Provision, start, operate (PSO) components, and their APIs

Get started

NVIDIA FLARE makes FL accessible to a wider range of applications. Potential use cases include helping energy companies analyze seismic and wellbore data, manufacturers optimize factory operations, and financial firms improve fraud detection models.

For more information and step-by-step examples, see NVIDIA/NVFlare on GitHub.

Categories
Misc

NVIDIA Announces Upcoming Events for Financial Community

SANTA CLARA, Calif., Nov. 29, 2021 (GLOBE NEWSWIRE) — NVIDIA will present at the following events for the financial community: Deutsche Bank’s Virtual AutoTech Conference, Thursday, Dec. 9, at …

Categories
Misc

AWS Launches First NVIDIA GPU-Accelerated Graviton-Based Instance with Amazon EC2 G5g

The new Amazon EC2 G5g instances feature AWS Graviton2 processors and NVIDIA T4G Tensor Core GPUs to power rich Android game streaming for mobile devices.

Today at AWS re:Invent 2021, AWS announced the general availability of Amazon EC2 G5g instances—bringing the first NVIDIA GPU-accelerated Arm-based instance to the AWS cloud. The new EC2 G5g instance features AWS Graviton2 processors, based on the 64-bit Arm Neoverse cores, and NVIDIA T4G Tensor Core GPUs, enhanced for graphics-intensive applications. 

This powerful combination creates an optimal development environment for Android game content. It also allows a richer Android gaming experience to be streamed to a diverse set of mobile devices anywhere. 

Unlocking enhanced Android game streaming for mobile devices

EC2 G5g instances enable game developers to support and optimize games for high-quality streaming on a wide range of mobile devices. You can develop Android games natively on Arm-based Graviton2 processors, accelerate graphics rendering and encoding with NVIDIA T4G GPUs, and stream games to mobile devices, eliminating the need for emulation software and cross-compilation. 

This brings together breakthrough graphics performance powered by NVIDIA RTX technology, the price performance of AWS Graviton2 processors, and the elastic scaling of the AWS cloud for Android-in-the-Cloud gaming services.

A number of customers are already building cloud game development and gaming platforms on AWS, and are up and running on the new G5g instance.

Genymobile

Initially a simple, fast Android emulator favored by developers, Genymotion has evolved into a full-fledged Android platform, available across multiple channels, both in the cloud and on the desktop. NVIDIA has worked closely with Genymobile to accelerate its platform on the G5g instances, improving the performance and density of its solution in the cloud.

now.gg

now.gg offers a mobile cloud gaming platform that enables game developers to publish games directly to the cloud. By leveraging the power of the new G5g instances, now.gg enables gamers to access and stream high-performance games on mobile devices anywhere, without lag or compromising on the gaming experience.

Canonical 

The company has launched its Anbox Cloud Appliance, a small-scale version of Canonical’s Anbox Cloud, built for rapid prototyping of Android-in-the-Cloud solutions on the new G5g instance. Additionally, AWS Marketplace makes Anbox Cloud readily available with access to a more extensive set of instance types, including support for Arm CPUs and NVIDIA GPUs. Developers can upload their Android apps, configure and virtualize Android devices, and stream graphical output in real time to any web or mobile client. This development environment allows you to unleash your creativity to invent new user experiences.

Accelerating Arm-based HPC and AI 

In addition to being a great gaming and game development platform, AWS’ new G5g instance also brings the NVIDIA Arm HPC SDK to cloud computing. With support for the NVIDIA T4G GPU and the Arm-based Graviton CPU, the NVIDIA Arm HPC SDK provides the tools you need to build NVIDIA GPU-accelerated HPC applications in the cloud.

EC2 G5g instances can also be used to build and deploy high-performance, cost-effective AI-powered applications at scale. Developers can use the NVIDIA Deep Learning Amazon Machine Image on AWS Marketplace. This comes preconfigured with all the necessary NVIDIA drivers, libraries, and dependencies to run Arm-enabled software from the NVIDIA NGC catalog.

Learn more about the G5g instances and get started