Categories
Misc

Convert Tensorflow model to CoreML for iOS app

I’m trying to convert my tensorflow image classification model to a CoreML model that I can integrate in an iOS app. The tensorflow model takes in an image of shape (1,28,28,1) and outputs a softmax array. When I try to convert it to CoreML I get the following error:

raise TypeError("Unsupported numpy type: %s" % (nptype))

TypeError: Unsupported numpy type: float32

Does anyone have any experience with this?
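In case it helps: the unified converter in newer coremltools (4+) accepts float32 inputs directly, and declaring the input type explicitly often avoids dtype errors raised by older converters such as tfcoreml. A minimal sketch, assuming a Keras model saved at a hypothetical path:

import numpy as np
import tensorflow as tf
import coremltools as ct

# Load the trained classifier ("mnist_model" is a hypothetical SavedModel path).
model = tf.keras.models.load_model("mnist_model")

# Declare the input shape and dtype explicitly; this tends to sidestep
# "Unsupported numpy type" errors from older conversion paths.
mlmodel = ct.convert(
    model,
    inputs=[ct.TensorType(shape=(1, 28, 28, 1), dtype=np.float32)],
)
mlmodel.save("ImageClassifier.mlmodel")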

submitted by /u/berimbolo21

Categories
Misc

What is the current industry standard in distributed tensorflow setup?

In MapReduce, everything is handled by YARN and Hadoop for resource management. How is TensorFlow set up in a distributed environment?

Is Horovod recommended?
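One built-in option, for reference, is tf.distribute: TensorFlow workers read cluster membership from a TF_CONFIG environment variable rather than relying on YARN. A minimal sketch (hostnames and ports are hypothetical); Horovod remains a popular MPI-style alternative, typically launched with horovodrun:

import json
import os
import tensorflow as tf

# Each worker sets TF_CONFIG describing the full cluster (hypothetical hosts).
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},
    "task": {"type": "worker", "index": 0},  # use index 1 on the second worker
})

# Synchronous data-parallel training across all workers.
strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
# Run the same script on every worker; model.fit() then trains in sync.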

submitted by /u/Rough_Source_123

Categories
Misc

Updating the CUDA Linux GPG Repository Key


To best ensure the security and reliability of our RPM and Debian package repositories, NVIDIA is updating and rotating the signing keys used by the apt, dnf/yum, and zypper package managers beginning April 27, 2022.

If you don’t update your repository signing keys, expect package management errors when attempting to access or install packages from CUDA repositories.

To ensure continued access to the latest NVIDIA software, complete the following steps.

Remove the outdated signing key

Debian, Ubuntu, WSL

$ sudo apt-key del 7fa2af80

Fedora, RHEL, openSUSE, SLES

$ sudo rpm --erase gpg-pubkey-7fa2af80*

Install the new key

For Debian-based distributions, including Ubuntu, after removing the old key you must either install the new cuda-keyring package or manually install the new signing key.

Install the new cuda-keyring package

To avoid the need for manual key installation steps, NVIDIA is providing a new helper package to automate the installation of new signing keys for NVIDIA repositories. 

Replace $distro/$arch in the following commands with values appropriate for your OS; for example:

  • debian10/x86_64
  • debian11/x86_64
  • ubuntu1604/x86_64
  • ubuntu1804/cross-linux-sbsa
  • ubuntu1804/ppc64el
  • ubuntu1804/sbsa
  • ubuntu1804/x86_64
  • ubuntu2004/cross-linux-sbsa
  • ubuntu2004/sbsa
  • ubuntu2004/x86_64
  • ubuntu2204/sbsa
  • ubuntu2204/x86_64
  • wsl-ubuntu/x86_64

Debian, Ubuntu, WSL

$ wget https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/cuda-keyring_1.0-1_all.deb
$ sudo dpkg -i cuda-keyring_1.0-1_all.deb

Alternate method: Manually install the new signing key

If you can’t install the cuda-keyring package, you can install the new signing key manually (not the recommended method).

Debian, Ubuntu, WSL

$ sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/3bf863cc.pub

RPM distros

On a fresh installation of Fedora, RHEL, openSUSE, or SLES, dnf/yum/zypper prompts you to accept new keys when the repository signing key changes. Accept the change when prompted.

Replace $distro/$arch in the following commands with values appropriate for your OS; for example:

  • fedora32/x86_64
  • fedora33/x86_64
  • fedora34/x86_64
  • fedora35/x86_64
  • opensuse15/x86_64
  • rhel7/ppc64le
  • rhel7/x86_64
  • rhel8/cross-linux-sbsa
  • rhel8/ppc64le
  • rhel8/sbsa
  • rhel8/x86_64
  • sles15/cross-linux-sbsa
  • sles15/sbsa
  • sles15/x86_64

For upgrades on RPM-based distros including Fedora, RHEL, and SUSE, you must also run the following command.

Fedora and RHEL 8

$ sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/cuda-$distro.repo

RHEL 7

$ sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/$arch/cuda-rhel7.repo

openSUSE and SLES

$ sudo zypper removerepo cuda-$distro-$arch
$ sudo zypper addrepo https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/cuda-$distro.repo

Working with containers

CUDA applications built using older NGC base containers may contain outdated repository keys. If you build Docker containers using these images as a base and update the package manager or install additional NVIDIA packages as part of your Dockerfile, these commands may fail as they would on a non-container system. To work around this, integrate the earlier commands into the Dockerfile you use to build the container.
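For example, a sketch of such a Dockerfile fragment for an Ubuntu 20.04-based image (the base image tag is illustrative, and wget is assumed to be available in the image; adjust $distro/$arch as described earlier):

# Base image tag is illustrative; use your actual NGC base image.
FROM nvidia/cuda:11.4.2-base-ubuntu20.04

# Remove the outdated key and install the new cuda-keyring package
# before any apt-get operations that touch the CUDA repository.
RUN apt-key del 7fa2af80 && \
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb && \
    dpkg -i cuda-keyring_1.0-1_all.deb && \
    rm cuda-keyring_1.0-1_all.deb && \
    apt-get update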

Existing containers in which the package manager is not used to install updates are not affected by this key rotation.

Categories
Misc

How DNEG Helped Win Another Visual-Effects Oscar by Bringing ‘Dune’ to Life With NVIDIA RTX

Featuring stunning visuals from futuristic interstellar worlds, including colossal sand creatures, Dune captivated audiences around the world. The sci-fi film picked up six Oscars last month at the 94th Academy Awards, including for Best Sound and Visual Effects. Adapted from Frank Herbert's 1965 novel of the same name, Dune tells the story of Paul Atreides…


Categories
Misc

Coursera TF Developer Certification Course Certification worth the price?

Hello,
So I am about to take https://www.coursera.org/professional-certificates/tensorflow-in-practice#instructors to prepare for the real TF certificate. My company pays for the certificate, but for the course I'd have to use a shared account, so the granted Coursera certificate would be addressed to another name. My question: is it worth paying for Coursera and the course to get the cert in my name (approx. 44€ × 4 months), or does it not matter since I am going to do the TF Developer certificate anyway?

submitted by /u/sickTheBest

Categories
Misc

Ranking within groups

New to tensorflow. I have used ML.NET in the past.

I am trying to build a model that will rank a group of volunteers' ability to sign up new subscribers for a service in a given day. Let's say that for a year I hire 10 new volunteers each day (features are age, gender, education, etc., and my label is number-of-subscription-signups). After a year I have 3,650 rows of data that I want to train on, using the date as a group designation so that ranking consideration does not span different days. How can I specify a group id? Or what is the proper terminology for what I'm trying to achieve (so I can do my own research)?

I want to use 'group id' as it's being used in this ML.NET example.
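For terminology: in learning-to-rank this grouping is usually called a "query" or "list" — all rows sharing a date form one list, which plays the role of ML.NET's group id. A rough sketch using the TensorFlow Ranking library (feature count and shapes are hypothetical):

import tensorflow as tf
import tensorflow_ranking as tfr  # pip install tensorflow_ranking

NUM_FEATURES = 8  # age, gender, education, ... (hypothetical count)
LIST_SIZE = 10    # volunteers hired per day; one day = one ranking group

# x: [num_days, LIST_SIZE, NUM_FEATURES]; y: [num_days, LIST_SIZE] signup counts
inputs = tf.keras.Input(shape=(LIST_SIZE, NUM_FEATURES))
scores = tf.squeeze(tf.keras.layers.Dense(1)(inputs), axis=-1)  # score per volunteer
model = tf.keras.Model(inputs, scores)

# A listwise loss compares scores only within each day's list, so ranking
# never spans across different days.
model.compile(optimizer="adam", loss=tfr.keras.losses.SoftmaxLoss())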

Thanks

submitted by /u/MemphisRay

Categories
Misc

Your Odyssey Awaits: Stream ‘Lost Ark’ to Nearly Any Device This GFN Thursday

It's a jam-packed GFN Thursday. This week brings the popular, free-to-play, action role-playing game Lost Ark to gamers across nearly all their devices, streaming on GeForce NOW. And that's not all. GFN Thursday also delivers an upgraded experience in the 2.0.40 update. M1-based MacBooks, iMacs and Mac Minis are now supported natively. Plus, membership gift…


Categories
Misc

MoViNets: Mobile Video Networks for Efficient Video Recognition

Anyone try using MoViNets for a custom dataset?

submitted by /u/InternalStorm133

Categories
Misc

Trying to understand TensorFlow and its users better

People who are using TensorFlow still, why are you using it? What is your reason for not selecting the other existing frameworks?

submitted by /u/user01052018

Categories
Misc

Improving Player Performance with Low Latency as Evident from FPS Aim Trainer Experiments

In two aim training experiments, results show that lower latency improves player aiming performance, and gives skilled players a better chance to stand out.

We’ve been collaborating with The Meta, makers of the popular KovaaK’s FPS aim trainer game, for some time now to distribute experiments to their players. Our most recent set of experiments was designed to test a player’s aiming ability under changing latency, and to give players a chance to compete for top spot on the leaderboards, at both lower and higher latencies.

During the week-long promotional period in December 2021, players could get rewards for participating, and over 12,000 players tried one of these new latency experiments. This post uses the data provided by over 15,000 players, including results from the promotional period through April 17, 2022. We'd like to thank all of the players for their enthusiastic participation, and we hope you'll enjoy seeing the results as much as we do.

To tease our most interesting result up front, Figure 1 shows how the top 10% most skilled players moved their entire score distribution higher and were better able to display their skill at the lowest latency.

Figure 1. Latency Flicking top 10% score distribution (medians: 31 at 25 ms, 22 at 55 ms, and 15 at 85 ms). On this difficult task, the most skilled players increased their median score by more than 2x at 25 ms compared to 85 ms.

The Experiments

We designed two experiments for this release, one meant to be fun and exciting and the other designed to be challenging and test the limits of the most skilled and capable players.

The purpose of these experiments was to highlight the importance of computer system latency, and give players a chance to experience it for themselves at home without complicated equipment. To this end, both experiments vary the latency among a low, middle, and high latency value.

You can find the description and short videos of these experiments later in this post, and you can go try them out in the game if you want to experience them for yourself. We plan to keep them available in the game for some time to come.

All participants in the NVIDIA experiments mode in KovaaK’s are required to submit informed consent before voluntarily participating and are welcome to stop their participation at any point.

Both experiments were structured to have a 15-second warm-up period followed by a 45-second experiment stage for each of the latency conditions that we tested. Only the 45-second experiment stage scores were used for entry on the leaderboards, and we only consider those results in our analysis. This is based on the well-known principle that people new to a task take some time to learn it. Thus, the warm-up period was intended to serve as the training period for the players.

Ideally, there would be much longer training and experiment periods, but the durations were selected to balance the quality of the data we collected against the enjoyment of the players. One minute of gameplay per condition felt good in our testing, and we believe it has worked well for the players.

Controlling latency

The three latency conditions that we settled on were 25 ms, 55 ms, and 85 ms (Figure 2). These were selected to mirror the latency settings tested in our prior SIGGRAPH Asia publication, though the aiming tasks we used were different from that in the prior work. For more information, see Latency of 30 ms Benefits First Person Targeting Tasks More Than Refresh Rate Above 60 Hz.

Figure 2. Conditions used depending on the GPU: a known 25 ms baseline with Reflex, an unknown baseline without. For non-Reflex PCs, the baseline latency was unknown, so those results were omitted from leaderboards and analysis.

In this experiment, we used the Reflex integration in KovaaK's to measure and control the latency for each of the conditions. This means that full latency control was only available on systems with Reflex-capable GPUs; thus, non-Reflex results were omitted from the leaderboards as well as from the majority of our analysis.

For these non-Reflex systems, we still did our best to give the players a similar experience, instead treating their system's default latency as the baseline (displayed to the player as 25 ms), with the other two conditions being effectively base+30 ms and base+60 ms.

We can’t be sure whether the baseline from one computer to another is similar without the markers we get from the Reflex integration. We also did a best-effort estimate of external latency contributions, including the mouse and monitor.

Experiment 1: Latency Frenzy

The first experiment was designed to be fun and accessible for nearly anyone, and we placed it at the top of the list. The majority of players (95%) tried it.

This experiment is based on popular frenzy modes where a set of targets spawn in a grid against a wall, and the user has to plan the order in which to shoot at each target. After a target is killed with a single click, a new target spawns somewhere else on the grid after a small pause.

This frenzy mode was set to have three simultaneous targets visible. The player's score was equal to the number of targets that they destroyed within 45 seconds multiplied by their accuracy (hits/shots); we used that as the primary measure.
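For example (illustrative numbers only): a player who destroyed 120 targets with 90% accuracy would score 120 × 0.9 = 108.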

Leaderboard placement was determined based on the combined score across all three phases (25 ms, 55 ms, and 85 ms).


Figure 3. Latency Frenzy experiment. The task was completed in three phases with a randomized phase order per attempt.

As this experiment combines accuracy and planning, we expected latency to not be the only factor affecting the number of targets that a player can hit in quick succession. Skill level is obviously important, but so is the strategy that players employ to achieve the fastest, most accurate path through the targets.

Many players develop their aiming strategy over time. Thus they may quickly improve as they learn how to plan their path. The hope is that the warm-up period gives them a chance to select a strategy, though players who repeated the experiment may have adjusted their strategies. 

Experiment 2: Latency Flicking

The second experiment was designed to be much more challenging. It highlighted a situation where computer system latency had a large impact on aiming performance.

As you can see from the results, we succeeded in crafting a challenging task, especially when playing with high latency. About 60% of the players who tried Latency Frenzy or Latency Flicking participated in the latter.

The flicking task is to start with the player’s aim at the center of the screen, where a dummy target is placed. When the player clicks on that target, a second target is spawned at a random place away from the center, and the player is given 600 ms in which to aim at that target and eliminate it.

The primary metric of success for this task was the number of these center-aim-kill target loops that the player was able to complete in the 45-second duration. Again, we used the number of targets killed as the score and placed players on the leaderboard based on their combined score across all three latency levels.


Figure 4. Latency Flicking experiment. This task was completed in three phases with a randomized order per attempt.

While this was a fair task, as everyone had to play by the same rules, the actual number of attempts on target varied from person to person, given that the 45-second timer continued to run even while the player reset their aim to the dummy target at the center. As a result, a player who got used to the 600 ms cadence and was skilled at returning to center got more attempts and had a higher possible maximum score.

In our initial analysis of these results, we haven’t looked at how many attempts each player could make, but we may run that analysis in the future.

Results

Since we first released our experiment mode in February 2021, over 45,000 people have tried one or more of our experiments, completing more than 470,000 experiment sessions. Between the release of these new latency experiments in December 2021 and April 17, 2022, over 18,000 players completed at least one of these new experiments.

We focus on these results in our analysis, though players like you can continue to play and contribute data for any future analysis. In any case, the experiments are available for anyone to try and compare results. 

Because the control of latency was fully valid only on systems with a Reflex-enabled GPU, only Reflex-enabled results were allowed on the leaderboards. We excluded the 15-second warm-up sessions, as they were intended to let players get familiar with the task.

Players were allowed to complete each experiment as many times as they wanted, and we included these repeat attempts in the analysis. This means that players who played more than one time were likely able to refine their strategies and improve their skill over time.

For the results analysis, we also excluded all results that showed indications of failing to reach the targeted latency values. For the remaining data, we have relatively high confidence that latency was controlled to within 500 microseconds of the target.

The remaining confounding factors include latency of mice and monitors. Such latency was only estimated in many cases, which is almost unavoidable on an open platform like PC gaming when conducting a large-scale distributed study like this one. 

Skill levels

In addition to a general analysis of all participants, we also classified players by their skill level for each experiment. This is done by averaging each player’s total score across all runs, then ranking all of the players by this combined mean score.
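As an illustration of that cohort split (a sketch with hypothetical column and file names, not the actual analysis code):

import pandas as pd

runs = pd.read_csv("runs.csv")  # one row per 45-second session (hypothetical file)
mean_scores = runs.groupby("player_id")["score"].mean()
pct = mean_scores.rank(pct=True)  # percentile rank of each player's mean score

top10_ids = pct[pct >= 0.90].index  # highly skilled enthusiasts
top1_ids = pct[pct >= 0.99].index   # the best of the best
top10_runs = runs[runs["player_id"].isin(top10_ids)]  # all runs by top-10% players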

While looking at various skill levels may be interesting, we decided to focus solely on the top 10% and top 1% player cohorts in the detailed skill-level analysis.  You can think of these two cohorts as the highly skilled enthusiasts (top 10%) and the best of the best (top 1%) who are effectively the “esports professionals” of these KovaaK’s tasks.

Latency Frenzy results

The Latency Frenzy experiment results analyzed here include 27,032 complete experiments from 12,168 players, equaling 81,096 sessions of 45 seconds each.

The biggest result is that, across all attempts, both of the lower-latency conditions (25 ms and 55 ms) improved the number of targets eliminated (Figure 5), a difference found to be statistically significant in pairwise t-tests.
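One plausible form of that comparison, sketched with synthetic data (the real analysis used per-player scores at each latency condition):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative samples loosely matching the reported means; not real data.
scores_25ms = rng.normal(112.6, 30.0, size=1000)
scores_85ms = rng.normal(90.6, 25.0, size=1000)

t, p = stats.ttest_rel(scores_25ms, scores_85ms)  # paired t-test per player
print(f"t = {t:.2f}, p = {p:.3g}")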

Figure 5. Latency Frenzy mean scores: 112.62 at 25 ms, 103.75 at 55 ms, and 90.58 at 85 ms.

Figure 6 shows a quadratic fit to the raw data on a scatter plot. The fit line shows the likely mean score as the latency varies, crossing the clusters of scattered points at the mean of those distributions. Because there are so many points, they look like vertical lines in this plot.

Figure 6. Box and whisker plot and quadratic fit for Latency Frenzy as latency increases.

Looking at the distributions of scores in Figure 7, you can see even more interesting trends in the data. In particular, every percentile line moves up the score axis as the latency decreases. What's even more exciting is that the entire distribution expands, making it easier to distinguish between players of similar skill levels.

Figure 7. Latency Frenzy overall score distributions

We believe these summary results show a clear (though somewhat small) benefit to frenzy-type aiming tasks from reduced computer system latency. On average, players hit over 20% more targets in 45 seconds at 25 ms than at 85 ms.

Latency Flicking results

As described earlier, the flicking experiment is challenging. In fact, in our final data set, 595 runs (7.20%) and 421 players (7.55%) hit 0 targets at 85 ms. We often exclude 0 scores from analysis because they could indicate that a player walked away from the computer and their score may not be useful. However, these 0 scores are an important part of the player performance for this particular task.

Fortunately, with the latency reduced to 25 ms, far fewer runs (327, or 3.96%) and players (230, or 4.12%) hit 0 targets. In other words, reduced system latency made an impossibly hard task possible for 3.4% of these players.

Fewer players completed this task than the frenzy task, probably in part because frenzy is more fun and less difficult than this flicking task. Yet 5,576 players completed 8,265 experiments comprising 24,795 sessions.

As in the frenzy results, the lower-latency conditions improved the average number of targets destroyed in 45 seconds, but with a greater magnitude of improvement (Figure 8). Again, pairwise t-tests showed that these differences were statistically significant.

Figure 8. Latency Flicking mean scores: 15.15 at 25 ms, 11.20 at 55 ms, and 7.74 at 85 ms.

A quadratic fit to the flicking results (Figure 9) suggests that this flicking task would become impossible for even the most skilled players with only a little more latency. This makes sense because the 600 ms of aiming time is reduced by the computer system latency: the displayed location of the target isn't seen by the player until the full system latency has elapsed, leaving less time to adjust aim and make sure it hits the target.
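Concretely, the effective aiming window is roughly 600 ms minus the system latency: about 575 ms at 25 ms of latency, 545 ms at 55 ms, and 515 ms at 85 ms.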

In testing during the design of this task, we found that 450 ms was barely doable for highly skilled players, even at the minimum latency possible.

Figure 9. Box and whisker plot and quadratic fit for the Latency Flicking results.

Another exciting aspect of this particular experiment is highlighted by the histogram plots in Figure 10. As with the frenzy results, we found that all percentiles increased their scores at lower latencies, with the exception of the bottom 5–10%, who still weren't able to complete such a difficult aiming task.

At the higher skill levels, the difference between scores becomes amplified even more. For instance, at 25 ms of latency, the top 25% of scores were above the top 10% line at 85 ms. The top 1% at 25 ms were higher than any score achieved at 55 ms. 

Figure 10. Latency Flicking overall score distribution

Figure 11 shows the distribution of results for the top 10% most skilled players in this experiment. As a reminder, this includes only players whose average score fell in the top 10% of scores, but we plotted all scores from those players. These players were more skilled than the general population, so there’s a fairly clear separation in the distributions between different latency conditions. In fact, the median score at 25 ms (31) was more than 2x as high as at 85 ms (15)!

Figure 11. Latency Flicking top 10% score distribution (medians: 31 at 25 ms, 22 at 55 ms, and 15 at 85 ms). Players hit twice as many targets at low latency.

The top 1% of players shows an even more telling change in score. There remains some overlap between latency levels, but scores at 85 ms only reach as high as the bottom 25% of scores at 25 ms.

Figure 12. Latency Flicking top 1% score distribution. For the most skilled players, every 30 ms reduction in latency produces a big improvement to their score.

Conclusion

We’re grateful to our friends at The Meta for helping us put this experiment mode in their game, and enabling us to run experiments with players at home.

Prior research has shown that minimizing computer latency is important for many types of aiming tasks. However, the bulk of prior research has relied on small numbers of players under carefully controlled experimental conditions. This is the first study conducted in the wild where latency was controlled well enough to be useful in analysis. Because the trends in these results reinforce prior findings, we have greater confidence in the importance of latency for competitive FPS gamers.

Perhaps the biggest new result is that lower latency matters most for the highest-skilled players. Skill often makes the difference between winning and losing, but at the highest skill levels, latency plays an increasingly essential role in who wins and loses.

We encourage all players to use technology like NVIDIA Reflex to have the best conditions for playing competitively. For players who are particularly interested in optimizing their PC and game settings for latency, G-SYNC monitors with a Reflex Latency Analyzer give you the chance to measure your latency directly.

The NVIDIA Reflex SDK is a tool for game developers looking to implement a low-latency mode that enables just-in-time rendering and optimizes system latency.

For more information, see the following research papers: