CES—NVIDIA today set out the next direction of the ultimate platform for gamers and creators, unveiling more than 160 gaming and Studio GeForce®-based laptop designs, as well as new desktop and laptop GeForce RTX® GPUs and technologies.
Competitive gamers prefer to play at the highest refresh rate possible, but the new wave of higher-resolution monitors can also improve aiming performance on small targets.
The majority of esports professionals today play on high-refresh-rate 240 or 360 Hz monitors, most of which come in 24 to 25 inch diagonal sizes at 1080p resolution. In the past, we’ve seen that higher refresh rates help aiming performance, particularly because of the lower system latency that comes with them. Many productivity users, on the other hand, prefer higher resolutions and larger screens, gravitating toward 27 to 32 inch diagonals and 1440p to 4K resolution.
With the latest products announced at CES, 27″ 1440p 360 Hz monitors will soon be available, so we set out to find a situation where the hypothesized advantage of this monitor could be seen in first person shooter (FPS) gameplay (Figure 1).

Experiment design
The experiment we designed focuses on very small targets (Figure 1), which can be thought of as proxies for mid-to-long-distance headshots in games like Valorant, CS:GO, or PUBG. For such small targets, we might hypothesize that the higher-resolution monitor shows the edges of the target more accurately than a lower-resolution one. In fact, based on the higher total pixel count at 1440p (2560×1440) compared to 1080p (1920×1080), we can estimate that any given target will be drawn with nearly twice as many pixels at 1440p (about 1.78×). In addition to these extra pixels, the larger screen makes the target itself cover a larger physical area, thanks to the extra 2.5 inches of diagonal (27″ vs 24.5″).
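As a rough sanity check on those numbers, here is a quick back-of-the-envelope calculation (ours, not part of the study) comparing the two panels used below:

```python
# Back-of-the-envelope comparison of the two displays used in this experiment.
# An in-game target has the same angular size on both monitors, so the number
# of pixels it covers scales with total pixel count, while its physical size
# on screen scales with the panel dimensions.

res_1080p = (1920, 1080)   # 24.5" display
res_1440p = (2560, 1440)   # 27" display

pixel_ratio = (res_1440p[0] * res_1440p[1]) / (res_1080p[0] * res_1080p[1])
print(f"Pixel-count ratio (1440p / 1080p): {pixel_ratio:.2f}x")    # ~1.78x

# Both panels are 16:9, so physical screen area scales with the diagonal squared.
area_ratio = (27.0 / 24.5) ** 2
print(f"Physical-area ratio (27 in / 24.5 in): {area_ratio:.2f}x")  # ~1.21x
```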
The aiming task we selected has each user complete a series of random target eliminations: a group of four targets appears simultaneously at the start of each trial, and the trial is considered complete once the player has clicked each of the four targets once. This makes task completion time, or “aiming time,” a lower-is-better metric.
As a secondary measure, we can consider the player’s accuracy: the number of hits divided by the total number of shots. Because each shot takes time, we would expect these measures to be related; lower aiming time should correspond with higher accuracy and fewer shots taken.
To test our hypothesis that a bigger, higher-resolution display helps players click on a series of targets faster, we implemented the task above in our FPS research platform, First Person Science. This program is similar to commercially available aim trainers, but allows careful low-level control of the game loop and hardware configuration. If you’re curious to try the task yourself, you can download this particular experiment from GitHub.
We had 13 NVIDIA employees complete this task 75 times each (as 5 blocks of 15 trials) on each of two monitors. The first was a 24.5″ 1080p 360 Hz Alienware 25 and the second was a 27″ 1440p 360 Hz Asus ROG Swift PG27AQN. It took roughly 20 minutes for each user to complete all 150 trials (75 per monitor). Display order was randomized and counterbalanced, and each participant could decline or stop participating at any point. One participant had difficulty with the task, so we exclude that user’s results from the following analysis. The remaining 12 participants had counterbalanced display ordering.
Task completion time
As a primary metric of success, we measured the time it took to complete each trial. The histogram in Figure 2 shows the spread of these times, colored by display. While the distribution of times follows roughly the same shape for both displays, there are slightly more trials at the low end of completion time for the larger, higher-resolution display and slightly more at the high end for the smaller, lower-resolution display. Remember that both displays were set to update at 360 Hz.

Figure 2 shows that while the 24.5 inch and 27 inch monitors led to similar task completion times, the overall distribution shifted slightly toward faster completion times for the 27 inch, 1440p, 360 Hz monitor.
The mean completion time was 3.75 seconds for the 24.5″ 1080p trials and 3.64 seconds for the 27″ 1440p display, an improvement of 111 ms in mean completion time. A paired t-test indicates that the difference in these means is statistically significant for the 900 trials included in the analysis (p = 0.000325).
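For readers who want to run the same kind of analysis on their own data, here is a minimal sketch of a paired t-test in SciPy. The arrays below are synthetic placeholders centered on the means reported above, not the study’s measurements, and the variable names are ours:

```python
# Paired t-test on per-trial completion times, one array per display condition.
# Element i of each array is assumed to be the i-th trial of the same user.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 900 measured completion times (seconds).
times_1080p = rng.normal(loc=3.75, scale=0.8, size=900)
times_1440p = rng.normal(loc=3.64, scale=0.8, size=900)

t_stat, p_value = stats.ttest_rel(times_1080p, times_1440p)
mean_diff_ms = (times_1080p.mean() - times_1440p.mean()) * 1000
print(f"mean improvement: {mean_diff_ms:.0f} ms, t = {t_stat:.2f}, p = {p_value:.6f}")
```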
We also considered per-user completion time (Figure 3). Four users showed a small to medium increase in task completion time when moving to the larger monitor, while the majority (8 of 12) showed a reduction on the larger display. Although the average improvement is well within the range of normal per-user variation in aiming time for this task, there is still a clear trend of improvement: most users showed reduced completion time on the 27 inch, 1440p, 360 Hz monitor.

Accuracy
I’d like to state up front that none of the paired t-tests on accuracy reached significance, so the differences in accuracy discussed here should not be considered statistically significant. It’s also important to note that per-trial accuracy is entirely a function of how many shots were taken to eliminate the targets.
As you can see from the Accuracy and Shots histograms in Figure 4, each additional shot reduces the accuracy for that trial proportionally, since accuracy is computed as the ratio of hits to shots. Given that there are four targets in each trial, hits is always 4, so per-trial accuracy can only take a small set of values (4/4, 4/5, 4/6, and so on).
If we instead sum all hits and all shots across all trials per user, we can consider per-user accuracy. The plot below (Figure 5) pairs the 24.5″ 1080p and 27″ 1440p accuracy results per user in our study. Across all users, the mean accuracy was 81.78% for the 1080p display and 82.34% for the 1440p display, a 0.56 percentage point increase. Once again, this change in mean accuracy was not statistically significant in a paired t-test, though it may have been a minor contributing factor to the statistically significant difference in task completion time.
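To make the aggregation concrete, here is a small sketch (with hypothetical column names and toy data, not the study’s) of per-user accuracy computed as total hits over total shots rather than as an average of per-trial accuracies:

```python
# Per-user accuracy: sum hits and shots across all trials before dividing.
# Column names ('user', 'display', 'hits', 'shots') are illustrative.
import pandas as pd

def per_user_accuracy(trials: pd.DataFrame) -> pd.Series:
    totals = trials.groupby(["user", "display"])[["hits", "shots"]].sum()
    return totals["hits"] / totals["shots"]

# Toy example: two users, four trials each (hits is always 4 in this task).
trials = pd.DataFrame({
    "user":    [1, 1, 1, 1, 2, 2, 2, 2],
    "display": ["1080p", "1080p", "1440p", "1440p"] * 2,
    "hits":    [4] * 8,
    "shots":   [5, 4, 4, 5, 6, 5, 4, 4],
})
print(per_user_accuracy(trials))
```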

Figure 5 shows that the per-user accuracy results are more mixed than the task completion times shown in Figure 3. While players completed the task faster on the bigger display, the gain appears to come from faster aiming rather than from any significant change in accuracy.
Conclusion
Because this experiment coupled a change in physical display size with a change in display resolution, it’s not clear from the results how much size and resolution each contributed to the difference in task completion time. Further study is needed to isolate these factors.
Our task was also deliberately designed to present an aiming challenge where display size and resolution were likely to make a difference. Any given FPS game may contain more or less of this style of task in practice, and the value of these results varies with game, role, skill level, and numerous other factors.
We conclude from these results that for players who regularly aim at small targets and want to hit them as quickly as possible, there is a small but practical benefit to upgrading from 24.5 inch, 1080p, 360 Hz monitors to the latest and greatest 27 inch, 1440p, 360 Hz displays coming later this year.
Check out the new G-SYNC monitors that were just announced and stay tuned for more experiment results when they’re ready to share. We intend to continue these types of investigations to help gamers and esports tournaments know which PC hardware to use to unlock maximum human performance.
String as input to a TensorFlow NN
Hello! I am trying to train a model to recognize plural and singular nouns; the input is a noun and the output is either 1 or 2: 1 for singular and 2 for plural. Truth be told, I am not entirely sure how to tackle this… I saw a few tutorials about TF NNs and image processing, but I don’t see how that relates. Every time I try to run model.fit(nouns, labels, epochs=N) it either doesn’t do anything or it fails due to bad input.
The challenges I am facing are as follows:

* Can I have a variable-sized input?
* How can I get the text, stored in a CSV, into a form that can be fed to the NN model?
The code I have so far is something like this:

```python
model = keras.models.Sequential()
model.add(keras.layers.Input(INPUT_LENGTH,))  ## I am padding the string to have this length
model.add(keras.layers.Dense(10, activation='relu', name="First_Layer"))
model.add(keras.layers.Dense(2, activation='relu', name="Output_Layer"))

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.summary()
model.fit(nouns_array, labels_array, epochs=10)
```
I couldn’t find any tutorials or documentation that I could clearly understand about feeding strings into a NN. Any advice or links would be appreciated.
Addendum:
I followed the linked YouTube tutorial to turn the text into tokens and it worked great. I didn’t use the suggested embedding layer and just stuck with the ordinary Input + Dense + Dense model. Thanks everyone!
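For anyone landing on this thread with the same question, here is a rough sketch of the kind of tokenize-and-pad pipeline described above: character-level integer encoding feeding a plain dense model. It is an illustration under our own assumptions (character-level tokens, labels remapped to 0/1 to match two softmax outputs), not the tutorial’s exact code:

```python
# Sketch: encode each noun as a padded sequence of character IDs, then feed it
# to an ordinary Input -> Dense -> Dense model (no embedding layer).
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

INPUT_LENGTH = 12  # assumed maximum word length after padding

nouns = ["cat", "cats", "dog", "dogs", "mouse", "mice"]
labels = np.array([0, 1, 0, 1, 0, 1])  # 0 = singular, 1 = plural

tokenizer = Tokenizer(char_level=True)            # one integer per character
tokenizer.fit_on_texts(nouns)
sequences = tokenizer.texts_to_sequences(nouns)
nouns_array = pad_sequences(sequences, maxlen=INPUT_LENGTH, padding="post")

model = keras.models.Sequential([
    keras.layers.Input(shape=(INPUT_LENGTH,)),
    keras.layers.Dense(10, activation="relu", name="First_Layer"),
    keras.layers.Dense(2, activation="softmax", name="Output_Layer"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(nouns_array, labels, epochs=10, verbose=0)
```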
submitted by /u/Muscle_Man1993
I am using an ImageDataGenerator with the method flow_from_directory() to grab the images for my CNN. According to the documentation flow_from_directory() returns a tuple (x, y) where x is the data and y the labels for every item in the batch.
I tried to get the labels of every batch with the next() method and a loop but received the ValueError: too many values to unpack (expected 2).
What’s the recommended way to get all the matching labels for every image? I couldn’t find anything online except the approach with next(), which only worked for a single batch without a loop.
```python
test_datagen = ImageDataGenerator(rescale=1./255)
test_df = test_datagen.flow_from_directory(
    path,
    target_size=(512, 512),
    batch_size=32,
    class_mode='categorical')

y = []
steps = test_df.n // 32

# My approach that wasn't working
for i in range(steps):
    a, b = test_df.next()
    y.extend(b)
```
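One approach that avoids looping over batches entirely: the iterator returned by flow_from_directory() already exposes the label of every image through its classes and class_indices attributes. A sketch under the same setup (path is a placeholder directory here), with shuffle=False so the label order stays aligned with any later predictions:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

path = "data/test"  # placeholder; point this at your image directory

test_datagen = ImageDataGenerator(rescale=1./255)
test_df = test_datagen.flow_from_directory(
    path,
    target_size=(512, 512),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)                # keeps labels aligned with prediction order

y = test_df.classes                  # integer class index for every image
class_names = test_df.class_indices  # {class name: index} mapping
```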
submitted by /u/obskure_Gestalt
/tensorflow Subreddit Statistics
submitted by /u/_kiminara
Hi everyone. I’m deploying a ResNet-based 928×928 UNet on an Android device. Performance is suboptimal even on the GPU. Currently I’m only optimizing the model using the tf.lite.Optimize.DEFAULT flag. I was wondering if any of you have experience with more intricate optimization techniques aimed specifically at latency and not necessarily size reduction.
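Not a definitive answer, but two latency-oriented conversions that go beyond the plain Optimize.DEFAULT flag are float16 quantization (which pairs well with the TFLite GPU delegate) and full integer quantization with a representative dataset. A sketch, where model and sample_inputs are placeholders for your own UNet and calibration data:

```python
import tensorflow as tf

def convert_float16(model):
    # Float16 quantization: weights stored in fp16, usually a good fit for the
    # TFLite GPU delegate on Android.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    return converter.convert()

def convert_int8(model, sample_inputs):
    # Full integer quantization: needs a representative dataset for calibration;
    # mainly helps latency on CPU / NNAPI backends.
    def representative_dataset():
        for x in sample_inputs:                      # a few hundred examples
            yield [tf.cast(x[None, ...], tf.float32)]
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    return converter.convert()
```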
submitted by /u/ronsap123
Best way to improve inference throughput
I see multiple options on the internet for optimizing inference, and I don’t know which would be the best fit for me. My goal is to maximize throughput on the GPU, and preferably reduce GPU memory usage.
I have a reinforcement learning project where multiple CPU processes generate input data in batches and send it to a single GPU for inference. Each process loads the same ResNet model with two different weight configurations at a time. The weights are updated about every 30 minutes and distributed between the processes. I use Python and TensorFlow 2.7 on Windows (don’t judge), and the only optimization I use right now is the built-in XLA optimization. My GPU does not support FP16.
I have seen TensorRT suggested for optimizing inference; I have also seen TensorFlow Lite, Intel has an optimization tool too, and then there is TensorFlow Serving. Which option do you think would fit my needs best?
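Whichever tool the thread settles on, one low-effort step that often helps throughput on a setup like this is wrapping inference in a tf.function with a fixed input signature and XLA enabled, so identically shaped batches from the worker processes never trigger retracing. A sketch with assumed shapes and a stand-in ResNet:

```python
import tensorflow as tf

BATCH, H, W, C = 64, 224, 224, 3  # assumed shapes; match your real batches
model = tf.keras.applications.ResNet50(weights=None)  # stand-in for your model

@tf.function(jit_compile=True,
             input_signature=[tf.TensorSpec([BATCH, H, W, C], tf.float32)])
def infer(batch):
    # training=False keeps batch norm in inference mode; the fixed signature
    # means every identically shaped batch reuses one compiled XLA program.
    return model(batch, training=False)

# Warm up once so the XLA compilation cost is paid before serving real requests.
_ = infer(tf.zeros([BATCH, H, W, C]))
```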
submitted by /u/LikvidJozsi

submitted by /u/fullerhouse570
Detecting CUDA libraries on Windows

So I’ve had things working fine on Linux, and now I’m trying to set up the same on Windows so I can use the newer GPU in a new machine. The problem is that after installing both the CUDA toolkit and cuDNN, the libraries are never picked up, even after several restarts. I’ve searched quite a bit and haven’t turned up anything that works, and I don’t know of a way to get Windows to look in the new PATH variable that the installer did properly set up. These are the offending libraries. One thing to note is that when I copy the offending dynamic libs to System32, my run on the command line picks up the libraries and detects my GPU as it should. So something’s wrong with how they’re searched for via PATH; I just don’t know how to fix it. This sort of thing rarely happens on Linux, and even when it does, ldconfig is usually the answer.

Update: I tried something I didn’t think would work, but it turns out it did. I originally downloaded Python from the Windows app store because I figured it would save some time, and installing Python with scoop wasn’t working. I uninstalled the Windows app store version of Python 3.8 and installed the same version from the Python organization’s website, and now everything is working. I’m not sure what the issue is with Windows app store downloads, but I’ve had an incident with Slack installed the same way. The issue with Slack was completely different, but from what I can tell, Python from the Windows store was installed in a different location than it normally would have been, and I think that contributed somehow. On Linux, there are four folders anything could ever be installed to automatically by convention, so we don’t run into this specific problem on that platform. That’s why troubleshooting this was so tiresome.

submitted by /u/CrashOverride332
Hi all,
I’ve got a Lenovo Legion laptop with an onboard GeForce GTX 1660 GPU. Here’s some setup details:
– Ubuntu 21.10
– Python 3.9.7
– using pip (not Conda)
– Tensorflow 2.7.0 (from Python: “tf.__version__” returns 2.7.0)
– TF doesn’t yet recognize the GPU: tf.config.list_physical_devices('GPU') returns []
– I think I have CUDA installed: (cat /proc/driver/nvidia/version):
NVRM version: NVIDIA UNIX x86_64 Kernel Module 495.29.05 Thu Sep 30 16:00:29 UTC 2021
GCC version: gcc version 11.2.0 (Ubuntu 11.2.0-7ubuntu2)
I’m doing a TensorFlow tutorial (with PyTorch to come) & have reached a point where I need the GPU. How can I get TF to recognize it?
Before you ask: yes, I *could* download a Docker container or use Colab. I’m going this route because it seems dumb to have a GPU at my fingertips and not use it.
Thanks all & HNY…
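One quick diagnostic (a sketch, not a guaranteed fix): the line in /proc shows the kernel driver, but TensorFlow also needs the CUDA and cuDNN runtime libraries it was built against to be on the library path. The build info below tells you which versions this wheel expects, and the device list confirms whether they were found at import time:

```python
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print(tf.__version__)                          # 2.7.0
print(info.get("cuda_version"))                # CUDA version this wheel expects
print(info.get("cudnn_version"))               # cuDNN version this wheel expects
print(tf.config.list_physical_devices("GPU"))  # [] means the libs weren't found
```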
submitted by /u/PullThisFinger