Categories
Misc

how to visualize predictive model with weights

Hello

I imported a model with TensorFlow 2.8's C API, and it outputs different predictions for the same test data set than the original Keras model does in Python. The model was exported in Python with:

model.save('models/model1')

I import it later on in C with:

TF_LoadSessionFromSavedModel

Do you know how I could visualize the model with its weights in both Python and C, to confirm that I am using exactly the same model with the same weights in both cases?
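On the Python side, one way to check is to reload the exported SavedModel and print a per-tensor fingerprint of the weights (a minimal sketch, assuming the model in models/model1 reloads cleanly with Keras):

```python
import numpy as np
import tensorflow as tf

# Reload the exported SavedModel (same directory as in model.save above)
# and print a small fingerprint of every weight tensor.
model = tf.keras.models.load_model("models/model1")
for w in model.weights:
    vals = w.numpy()
    print(w.name, vals.shape, vals.dtype, float(np.abs(vals).sum()))
```

If the weights match the exported model, the discrepancy is more likely to come from input preprocessing or from which signature and output tensors the C code runs; on the C side, comparing outputs for a fixed, known input tensor is usually easier than dumping weights.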

Thanks

submitted by /u/goahead97

Categories
Misc

No source available for "tensorflow::TF_TensorToTensor() at 0x7ffff52a9bdc"

I am trying to run predictions in C/C++ with a model previously trained in Keras with Python, and the statement

`TF_SessionRun(Session, NULL, Input, InputValues, NumInputs, Output, OutputValues, NumOutputs, NULL, 0, NULL, Status);`

outputs

No source available for “tensorflow::TF_TensorToTensor() at 0x7ffff52a9bdc”

Do you have any idea about a possible way to overcome this error?
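One thing worth checking first (a Python-side sketch, assuming the model was exported with Keras's model.save to models/model1) is that the operation names, shapes, and dtypes used to build Input/Output and InputValues/OutputValues in C exactly match the SavedModel's serving signature; a mismatch there is one possible cause of a crash inside TF_SessionRun:

```python
import tensorflow as tf

# Path is an assumption -- use the directory passed to TF_LoadSessionFromSavedModel.
loaded = tf.saved_model.load("models/model1")
infer = loaded.signatures["serving_default"]

# The exact input/output tensor specs the C code must match.
print(infer.structured_input_signature)
print(infer.structured_outputs)
```

Running `saved_model_cli show --dir models/model1 --all` prints similar information from the shell.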

Thanks

submitted by /u/goahead97

Categories
Misc

Scooping up Customers: Startup’s No-Code AI Gains Traction for Industrial Inspection

Bill Kish founded Ruckus Wireless two decades ago to make Wi-Fi networking easier. Now, he’s doing the same for computer vision in industrial AI. In 2015, Kish started Cogniac, a company that offers a self-service computer vision platform and development support. Like in the early days of Wi-Fi deployment, the rollout of AI is challenging, Read article >

The post Scooping up Customers: Startup’s No-Code AI Gains Traction for Industrial Inspection appeared first on The Official NVIDIA Blog.

Categories
Misc

Maximize Network Automation Efficiency with Digital Twins on NVIDIA AIR

NVIDIA Air automates your network through a digital twin to increase efficiency, along with other benefits.

Automation is the key to increasing operational efficiency and lowering OpEx, but it does not guarantee a successful data center deployment. While automation can confirm configuration integrity and prevent human errors in repetitive changes, it can’t validate intent and network requirements. Therefore, automation must be tested and validated before deployment, and the NVIDIA way of doing this is with a data center digital twin.

What is a data center digital twin network?

A data center digital twin network is a 1:1 simulation of a physical network environment, with logical instances of every switch, server, and cable. This enables it to be used for validating routing (BGP, EVPN), security policy compliance, automation, monitoring tools, and upgrade procedures.

This digital twin is hosted in the cloud, enabling teams to test their configuration at scale without the overhead of physical infrastructure. Data center digital twins offer a number of benefits:

  • Decreases time to deployment
  • Decreases network downtime
  • Decreases lab costs
  • Decreases the need for network hardware, and removes the wait for hardware to arrive before building
  • Increases creativity and collaboration (design, monitoring, change management)
  • Enhances the value of physical infrastructure by adding capabilities
  • Simulates true-to-reality infrastructure
  • Supports continuous integration: fixes and changes can be implemented and tested on an ongoing basis

How do I create a data center digital twin?

NVIDIA Air is a free platform for creating network digital twins. These digital twins can be clones of existing topologies, prebuilt topologies, or custom-designed networks that scale to thousands of switches and servers. Each server and switch in the digital twin can be spun up in the NVIDIA Air cloud-hosted environment, so IT teams can extract the full value of testing.

Prebuilt network automation

Every developer values reusable sample code, and NVIDIA offers Production Ready Network Automation: working Ansible playbooks for complete leaf/spine topologies with BGP and EVPN already set up for you. These playbooks are built for the NetDevOps approach of continuous integration and are the same playbooks our professional services team uses. They are constantly updated based on learnings and best practices from actual customer deployments, and we have made our Production Ready Automation assets available free of charge.

Test your automation

Testing is a tradeoff between risk and cost. On one hand, to fully validate network functionality and reduce the risk associated with change management, the test network needs to be similar to the production network. On the other hand, creating a physical replica of the production environment is expensive both in CapEx and OpEx.

Using a virtual replica via a data center digital twin can significantly reduce the costs associated with such testing.

IT teams can integrate the data center digital twin into their CI/CD pipeline, deploy new changes, validate the configuration using NetQ and deploy to production confidently. This level of integration helps drive down the cost of validation even further.

Automate your testing

To shorten the time to deployment and decrease the risk of downtime, IT teams use NVIDIA Air to automate their testing process.

In addition to the ad hoc test for the change itself, every change goes through a set of regression tests to catch degradation of current functionality. Once both the regression and ad hoc tests pass, the ad hoc test is added to the regression test suite and validated in future deployments.

Get started

Help your team learn best practices by testing changes in a risk-free environment: build your own data center digital twin. It is easy to work with and free to use. Get started at NVIDIA Air.


Categories
Misc

Conjugating verbs using NN?

Hey,

I wanted to know if it is possible to conjugate verbs, assuming a regular pattern, using some form of NN. Essentially, I want to input a verb in Arabic and get the root form of the verb as output; e.g., “running” → “run”, or

“كَتَبَ” ← “يَكْتُب”.

I think I have a few challenges in this:

  1. Identify the current form of the verb; this can be done using labels, I think…
  2. Transform that form into a different form, e.g. from present to past.

The second part is the one I am not sure about. I couldn’t find any information about relating text or words to each other under different labels. All I found was sentiment labeling and image captioning, which I don’t think necessarily solve my issue.
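For the transformation step, one common approach is a character-level sequence-to-sequence model: an encoder reads the inflected form and a decoder emits the target form. A minimal Keras sketch (vocabulary size, sequence length, and layer sizes below are made-up placeholders):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

num_chars = 64     # placeholder character vocabulary size (including padding/start/end tokens)
latent_dim = 256   # placeholder LSTM state size
max_len = 16       # placeholder maximum word length in characters

# Encoder: reads the inflected form character by character.
encoder_inputs = layers.Input(shape=(max_len,), name="inflected_form")
enc_emb = layers.Embedding(num_chars, 64, mask_zero=True)(encoder_inputs)
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generates the target (e.g., root/past) form, conditioned on the encoder state.
decoder_inputs = layers.Input(shape=(max_len,), name="target_form_shifted")
dec_emb = layers.Embedding(num_chars, 64, mask_zero=True)(decoder_inputs)
dec_out = layers.LSTM(latent_dim, return_sequences=True)(dec_emb, initial_state=[state_h, state_c])
outputs = layers.Dense(num_chars, activation="softmax")(dec_out)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Training pairs would be (inflected form, target form) encoded as character IDs, with the decoder input shifted by one position (teacher forcing); the form label from step 1 can be prepended to the encoder input as an extra token.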

Any resources about something like this? Or anyone has any insight?

I am fairly new to TensorFlow and don’t know where to look for answers, so any advice on where I should begin researching future ideas would be greatly appreciated!

submitted by /u/Muscle_Man1993

Categories
Misc

Question about information loss when reducing image size.

submitted by /u/Senior1292

Categories
Misc

Battery charge prediction

Hey guys, ML beginner here.

I’m looking into creating an ML model that predicts how long it will take to fully charge a battery (similar to what Android smartphones show on their lock screens when charging).

I basically want to give the model the current charging power (in watts), the current battery charge (in mAh), and the full battery capacity (in mAh), and get the remaining time as the output.

How would I go about that? What kind of data / how much of it would I have to get?
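As a rough baseline (a sketch only; the data below are made-up placeholders for real charging logs), a small Keras regression model over those three inputs could look like this:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder rows: (charging power in W, current charge in mAh, full capacity in mAh),
# labels = observed minutes until fully charged. Replace with real charging logs.
X = np.array([[18.0, 1500.0, 4000.0],
              [10.0, 3000.0, 4000.0],
              [18.0, 3500.0, 4000.0]], dtype="float32")
y = np.array([75.0, 55.0, 20.0], dtype="float32")

norm = layers.Normalization()   # scales each feature to zero mean / unit variance
norm.adapt(X)

model = tf.keras.Sequential([
    norm,
    layers.Dense(32, activation="relu"),
    layers.Dense(1)             # predicted minutes remaining
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=200, verbose=0)

print(model.predict(np.array([[15.0, 2000.0, 4000.0]], dtype="float32")))
```

For training data, logging the charging power, current charge, and the time until charging actually finished at regular intervals during real charging sessions would give you both the inputs and the labels.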

submitted by /u/LGariv

Categories
Misc

What is the fastest way to learn Tensorflow?

I don’t expect to become an expert. Not even close.

  1. Can you learn TF in 5-7 days to create an image classification or computer vision app (for instance, something like the minimal classifier sketched after this list)?
  2. Is going through the tutorials and guides on https://www.tensorflow.org/ a good idea to accomplish that?
  3. What is the fastest way to learn TF?
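For scale, the "image classification app" in question 1 can start as small as the beginner tutorial on tensorflow.org; here is a minimal sketch along those lines (Fashion-MNIST, illustrative hyperparameters):

```python
import tensorflow as tf

# Minimal image classifier, close to the beginner tutorial on tensorflow.org.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10)                        # one logit per clothing class
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```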

submitted by /u/margot309

Categories
Misc

RAPIDS Accelerator for Apache Spark Release v21.10

This post details the latest functionality of RAPIDS Accelerator for Apache Spark.

RAPIDS Accelerator for Apache Spark v21.10 is now available! As an open source project, we value our community, their voice, and their requests. This release incorporates community-requested operations that are ideally suited for GPU acceleration.

Important callouts for this release:

  • Speed up – performance improvements and cost savings.
  • New Functionality – new I/O and nested datatype Qualification and Profiling tool features. 
  • Community Updates – updates to the spark-examples repository.

Speed up

RAPIDS Accelerator for Apache Spark is growing at a great pace in both functionality and performance. Standard industry benchmarks are a great way to measure performance over time, but another useful barometer is the performance of the common operators used in data preprocessing and data analytics.

We used four such queries shown in the chart below:

  • Count Distinct: a function used to estimate the number of unique page views or unique customers visiting an e-commerce site.
  • Window: a critical operator for preprocessing timestamped event data in marketing or financial analytics.
  • Intersect: an operator used to remove duplicates across DataFrames.
  • Cross-join: an operator commonly used to obtain all combinations of items from two DataFrames.

These queries were run on a Google Cloud Platform (GCP) machine with two T4 GPUs and 104 GB of RAM. The dataset was 3 TB in size with multiple different data types. More information about the setup and the queries can be found in the spark-rapids-examples repository on GitHub. These four queries show not only performance and cost benefits, but also that the speed-up (from 1.5x to 27x) varies with compute intensity. The queries differ in compute and network utilization, much like practical data preprocessing use cases.
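For illustration only (a sketch with hypothetical dataset and column names, not the benchmark queries themselves), these operators look like the following in PySpark:

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("operator-examples").getOrCreate()
events = spark.read.parquet("events.parquet")        # hypothetical event dataset
other = spark.read.parquet("other_events.parquet")   # hypothetical second dataset

# Count distinct: unique visitors per page.
unique_visitors = events.groupBy("page_id").agg(
    F.countDistinct("user_id").alias("unique_visitors"))

# Window: running total of spend per user, ordered by event time.
w = Window.partitionBy("user_id").orderBy("event_ts")
running_spend = events.withColumn("running_spend", F.sum("amount").over(w))

# Intersect: rows present in both datasets, with duplicates removed.
common_users = events.select("user_id").intersect(other.select("user_id"))

# Cross-join: all combinations of items from two small dimension tables.
colors = spark.createDataFrame([("red",), ("blue",)], ["color"])
sizes = spark.createDataFrame([("S",), ("M",)], ["size"])
combos = colors.crossJoin(sizes)
```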

Figure 1: Microbenchmark query runtimes on a Google Cloud Platform Dataproc cluster, GPU vs. CPU, for four Apache Spark operators: cross-join, intersect, windowing (with and without data skew), and count distinct.

The preceding graph is a sneak peek into the speed-up one can expect while using RAPIDS Accelerator for Apache Spark. A detailed performance analysis will be provided in the next release blog.

New functionality

Plug-in

Most Apache Spark users are aware that Spark 3.2 was released this October. The v21.10 release has support for Spark 3.2 and CUDA 11.4. In this release, we focused on expanding support for I/O, nested data processing and machine learning functionality. RAPIDS Accelerator for Apache Spark v21.10 released a new plug-in jar to support machine learning in Spark. 

Currently, this jar supports training for the Principal Component Analysis (PCA) algorithm. The ETL jar extends input type support for Parquet and ORC, and now also lets users run HashAggregate, Sort, and Join (both shuffled hash join and broadcast hash join) on nested data. In addition to adding support for nested datatypes, we also ran a performance test.

The figure below shows the speed-up observed for two queries using nested datatype input. Other interesting features added in v21.10 include pos_explode, create_map, and more. Refer to the RAPIDS Accelerator for Apache Spark documentation for a detailed list of new features.

Figure 2: Microbenchmark query runtimes for nested datatypes on a Google Cloud Platform Dataproc cluster, GPU vs. CPU, for two Apache Spark operators: count distinct and windowing.
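As a reference for enabling the plug-in from PySpark (the configuration keys come from the RAPIDS Accelerator documentation; the jar names, paths, and resource amounts below are illustrative assumptions):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-accelerator-example")
    # Illustrative jar names -- point these at your rapids-4-spark and cudf jars.
    .config("spark.jars", "rapids-4-spark_2.12-21.10.0.jar,cudf-21.10.0-cuda11.jar")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  # loads the RAPIDS SQL plug-in
    .config("spark.rapids.sql.enabled", "true")
    # Illustrative GPU resource settings for one GPU per executor.
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.25")
    .getOrCreate()
)

# DataFrame/SQL work from here on is a candidate for GPU execution.
spark.range(0, 1000).selectExpr("sum(id)").show()
```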

Profiling & qualification tool

In addition to the plug-in, multiple new features were added to the RAPIDS Accelerator for Apache Spark Qualification and Profiling tools. The Qualification tool can now report the nested datatypes and write data formats present. It also adds support for conjunction and disjunction filters, and for filtering based on regular expressions and usernames.

The Qualification tool is not the only one with new tricks: the Profiling tool now provides a structured output format and support for scaling to a large number of event logs.

Community updates

We are excited to announce that we are in public preview on Azure, and we welcome Azure users to try RAPIDS Accelerator for Apache Spark on Azure Synapse.

We invite you to view our talks presented at NVIDIA’s flagship event, GTC, held from Nov. 8-11, to learn how AI is transforming the world. The RAPIDS Accelerator team presented two talks: Accelerating Apache Spark gives an overview of new functionality and other upcoming features, and Discover Common Apache Spark Operations Turbocharged with RAPIDS and NVIDIA GPUs covers many microbenchmarks on Apache Spark.

Coming soon

Upcoming versions will introduce support for the 128-bit decimal datatype, inference support for the Principal Component Analysis algorithm, and additional nested datatype support for multi-level structs and maps.

In addition, look out for MIG support for NVIDIA Ampere Architecture GPUs (A100/A30), which can help improve throughput when running multiple Spark jobs on an A100. As always, we want to thank all of you for using RAPIDS Accelerator for Apache Spark, and we look forward to hearing from you. Reach out to us on GitHub and let us know how we can continue to improve your experience with RAPIDS Accelerator for Apache Spark.

Categories
Misc

Prepare for Genshin Impact, Coming to GeForce NOW in Limited Beta

GeForce NOW is charging into the new year at full force. This GFN Thursday comes with the news that Genshin Impact, the popular open-world action role-playing game, will be coming to the cloud this year, arriving in a limited beta. Plus, this year’s CES announcements were packed with news for GeForce NOW. Battlefield 4: Premium Read article >

The post Prepare for Genshin Impact, Coming to GeForce NOW in Limited Beta appeared first on The Official NVIDIA Blog.