Interesting issue that I can’t quite wrap my head around.
We have a working Python project using Tensorflow to create and then use a model. This works great when we output the model as a directory, but if we output the model as an .h5 file, we run into the following error whenever we try to use the model:
ValueError: All `axis` values to be kept must have known shape. Got axis: (-1,), input shape: [None, None], with unknown axis at index: 1
Here is how we were saving the model, and how we are currently saving it:
# this technique works (saves model to a directory)
tf.keras.models.save_model(
    dnn_model,
    filepath='./true_overall',
    overwrite=True,
    include_optimizer=True,
    save_format=None,
    signatures=None,
    options=None,
    save_traces=True
)

# this saves the file, but throws an error when the file is used
tf.keras.models.save_model(
    dnn_model,
    filepath='./true_overall.h5',
    overwrite=True,
    include_optimizer=True,
    save_format=None,
    signatures=None,
    options=None,
    save_traces=True
)
This is how we’re importing the model for use:
dnn_model = tf.keras.models.load_model('./neural_network/true_overall')  # works
dnn_model = tf.keras.models.load_model('./neural_network/true_overall.h5')  # doesn't work
What would cause a model to work when saved as a directory but have issues when saved as an h5 file?
I am working on a project in which I am using layer-wise relevance propagation (LRP) to get the relevance of each input. But the output of LRP is a Keras tensor. Is there any way to convert it to a NumPy array?
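A minimal sketch of the usual conversions, assuming TensorFlow 2.x (the names relevances, model, and x_batch are illustrative, not from the original post):

import tensorflow as tf

# In eager mode (the TF 2.x default), an eager tensor converts directly:
relevances_np = relevances.numpy()

# If the relevance tensor is symbolic (built with the Keras functional API),
# wrap the computation in a model, evaluate it on concrete inputs, then convert:
lrp_model = tf.keras.Model(inputs=model.inputs, outputs=relevances)
relevances_np = lrp_model(x_batch).numpy()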
Hi there, I am a couple of weeks into learning ML and trying to get a decent image classifier. I have 60 or so labels and only about 175-300 images each. I found that augmentation via flips and rotations suits the data and has helped bump up the accuracy a bit (maybe 7-10%).
The images have mostly white backgrounds, but some do not (greys, some darker), and this is not evenly distributed. I think it was causing issues when making predictions from test photos: some incorrect labels came up frequently despite little visual similarity. I suspected the background was involved, as the darker backgrounds/shadows matched my photos. I figured adding contrast/brightness variation would nullify this behavior, so I followed a guide that adds a layer to randomize contrast and brightness of images in the training dataset. Snippet below:
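(The snippet itself did not survive the post; the following is a representative sketch using the built-in Keras preprocessing layers, with illustrative factor values rather than the poster's settings.)

import tensorflow as tf

# Illustrative augmentation stack: flips and rotations plus randomized
# contrast and brightness (the factor values are placeholders).
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomContrast(0.2),
    tf.keras.layers.RandomBrightness(0.2),
])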
I made slight adjustments to contrast and brightness, reviewed the output, and it looks exactly how I wanted. I figured it would at least help, but it appears to cause a trainwreck! Does this make sense? Where should I look to improve on this?
As well, most tutorials focus on two labels. For 60 or so labels with 200-300 images each, in projects that deal with plants/nature/geology for example, what accuracy is typically attainable?
I looked the problem up but didn't find any solutions; the only threads I found were from people who wanted to use TensorFlow with a GPU. So here I post:
My situation:
I know the basics of Python, know a little bit about virtual environments, and I'm using the TensorFlow Object Detection API without a GPU on Ubuntu 18.04.
I installed the TensorFlow Object Detection API with this Anaconda guide, https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/, though I'm not sure if I activated the TensorFlow environment ("conda activate tensorflow") while doing this. It worked fine, and I wrote various programs with Spyder 5.2.3 using TensorFlow and object detection.
Then I made a terrible rookie mistake and updated Anaconda, and I believe conda too, because I was pretty much mindlessly copying some pip commands, and everything stopped working because of dependency chaos.
I tried to revert the update with conda revisions, but it wasn't working, so I tried deleting Anaconda with
conda install anaconda-clean
anaconda-clean --yes
rm -rf ~/anaconda3
and uninstalling TensorFlow with
pip uninstall tensorflow
and tried reinstalling the whole thing twice. But since then I get the classic error (or hint) about not using a GPU, plus an additional error message like "kernel driver does not appear to be running on this host" and UNKNOWN ERROR: 303, with some luda (presumably libcuda) files missing that are associated with CUDA, but I don't use CUDA since I have no GPU.
Does it have something to do with a virtual environment I don't use, or did I not uninstall TensorFlow or Anaconda properly, or is it something else?
Four NVIDIA Inception members have been selected as the first cohort of startups to access Cambridge-1, the U.K.'s most powerful supercomputer. The system will help British companies Alchemab Therapeutics, InstaDeep, Peptone and Relation Therapeutics enable breakthroughs in digital biology. Officially launched in July, Cambridge-1 is an NVIDIA DGX SuperPOD cluster powered by NVIDIA DGX A100 systems.
Enriching its game developer ecosystem, NVIDIA today announced the launch of new NVIDIA Omniverse™ features that make it easier for developers to share assets, sort asset libraries, collaborate and deploy AI to animate characters’ facial expressions in a new game development pipeline.
Digital artists and creative professionals have plenty to be excited about at NVIDIA GTC. Impressive NVIDIA Studio laptop offerings from ASUS and MSI launch with upgraded RTX GPUs, providing more options for professional content creators to elevate and expand creative possibilities. NVIDIA Omniverse gets a significant upgrade, including updates to the Omniverse Create and Machinima apps.
At GTC, NVIDIA announced significant updates for millions of creators using the NVIDIA Omniverse real-time 3D design collaboration platform. The announcements kicked off with updates to the Omniverse apps Create, Machinima and Showroom, with an imminent View release. Powered by GeForce RTX and NVIDIA RTX GPUs, they dramatically accelerate 3D creative workflows. New Omniverse Connections were also announced.
Sionna is a GPU-accelerated open-source library for link-level simulations.
Even while 5G wireless networks are being installed and used worldwide, researchers in academia and industry have already started defining visions and critical technologies for 6G. Although nobody knows what 6G will be, a recurring vision is that 6G must enable the creation of digital twins and distributed machine learning (ML) applications at an unprecedented scale. 6G research requires new tools.
Figure 1. 6G key technologies
Some of the key technologies underpinning the 6G vision include the following:
Communication at high frequencies, known as the terahertz band, where orders of magnitude more spectrum is available.
Reconfigurable intelligent surfaces (RIS) to control how electromagnetic waves are reflected and achieve the best coverage.
Integrated sensing and communications (ISAC) to turn 6G networks into sensors, which offers many exciting applications for autonomous vehicles, road safety, robotics, and logistics.
Machine learning is expected to play a defining role for the entire 6G protocol stack, which may revolutionize how we design and standardize communication systems.
Addressing the research challenges of these revolutionary technologies requires a new generation of tools to achieve the breakthroughs that will define communications in the 6G era. Here is why:
Many 6G technologies require the simulation of a specific environment, such as a factory or cell site, with a spatially consistent correspondence between physical location, wireless channel impulse response, and visual input. This can currently be achieved only by costly measurement campaigns or by efficient simulation based on a combination of scene rendering and ray tracing.
As machine learning and neural networks become increasingly important, researchers would benefit tremendously from a link-level simulator with native ML integration and automatic gradient computation.
6G simulations need unprecedented modeling accuracy and scale. The full potential of ML-enhanced algorithms will only be realized through physically based simulations that account for reality at a level of detail that has been impossible in the past.
Introducing NVIDIA Sionna
To address these needs, NVIDIA developed Sionna, a GPU-accelerated open-source library for link-level simulations.
Sionna enables rapid prototyping of complex communication system architectures. It’s the world’s first framework that natively enables the use of neural networks in the physical layer and eliminates the need for separate toolchains for data generation, training, and performance evaluation.
Sionna implements a wide range of carefully tested, state-of-the-art algorithms that can be used for benchmarking and end-to-end performance evaluation. This lets you focus on your research, making it more impactful and reproducible while you spend less time implementing components outside your area of expertise.
Sionna is written in Python and based on TensorFlow and Keras. All components are implemented as Keras layers, which lets you build sophisticated system architectures by connecting the desired layers in the same way you would build a neural network.
Apart from a few exceptions, all components are differentiable so that gradients can be back-propagated through an entire system. This is the key enabler for system optimization and machine learning, especially the integration of neural networks.
NVIDIA GPU acceleration provides orders-of-magnitude faster simulations and scaling to large multi-GPU setups, enabling the interactive exploration of such systems. If no GPU is available, Sionna even runs on the CPU, though more slowly.
Sionna comes with rich documentation and a wide range of tutorials that make it easy to get started.
Figure 2. Features of Sionna’s first release
The first release of Sionna includes a wide range of features (Figure 2), among them MIMO channel estimation, equalization, and precoding.
Sionna is released under the Apache 2.0 license, and we welcome contributions from external parties.
Hello, Sionna!
The following "Hello, Sionna!" example simulates the transmission of a batch of LDPC codewords over an AWGN channel using 16QAM modulation. It shows how Sionna layers are instantiated and applied to a previously defined tensor. The coding style follows the functional API of Keras. You can open this example directly in a Jupyter notebook on Google Colaboratory.
# Sionna imports (module paths as in Sionna's initial releases)
from sionna.utils import BinarySource
from sionna.channel import AWGN
from sionna.mapping import Constellation, Mapper, Demapper
from sionna.fec.ldpc import LDPC5GEncoder, LDPC5GDecoder

batch_size = 1024
n = 1000  # codeword length
k = 500   # information bits per codeword
m = 4     # bits per symbol
snr = 10  # signal-to-noise ratio

c = Constellation("qam", m)
b = BinarySource()([batch_size, k])
u = LDPC5GEncoder(k, n)(b)
x = Mapper(constellation=c)(u)
y = AWGN()([x, 1/snr])
llr = Demapper("app", constellation=c)([y, 1/snr])
b_hat = LDPC5GDecoder(LDPC5GEncoder(k, n))(llr)
One of the key advantages of Sionna is that components can be made trainable or replaced by neural networks. NVIDIA made the Constellation trainable and replaced the Demapper with a NeuralDemapper, which is simply a neural network defined with Keras.
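The post does not show the NeuralDemapper definition; a minimal sketch of what such a Keras-defined demapper might look like follows (the architecture, feature construction, and layer sizes are assumptions, not Sionna's implementation):

import tensorflow as tf

class NeuralDemapper(tf.keras.layers.Layer):
    """Hypothetical neural demapper: a small MLP that maps each received
    symbol (plus the noise power) to m per-bit LLRs."""
    def __init__(self, m=4, hidden_units=64):
        super().__init__()
        self._net = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_units, activation="relu"),
            tf.keras.layers.Dense(hidden_units, activation="relu"),
            tf.keras.layers.Dense(m),  # one LLR per bit of each symbol
        ])

    def call(self, inputs):
        y, no = inputs  # y: complex symbols [batch, num_symbols]; no: noise power
        no_feat = tf.fill(tf.shape(tf.math.real(y)),
                          tf.cast(no, y.dtype.real_dtype))
        # Stack real part, imaginary part, and noise power as input features
        features = tf.stack([tf.math.real(y), tf.math.imag(y), no_feat], axis=-1)
        llr = self._net(features)                     # [batch, num_symbols, m]
        return tf.reshape(llr, [tf.shape(y)[0], -1])  # [batch, num_symbols * m]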
c = Constellation("qam", m, trainable=True)
b = BinarySource()([batch_size, k])
u = LDPC5GEncoder(k, n)(b)
x = Mapper(constellation=c)(u)
y = AWGN()([x, 1/snr])
llr = NeuralDemapper()([y, 1/snr])
b_hat = LDPC5GDecoder(LDPC5GEncoder(k, n))(llr)
What happens under the hood is that the tensor defining the constellation points has become a trainable TensorFlow variable and can be tracked, together with the weights of the NeuralDemapper, by TensorFlow's automatic differentiation feature. For these reasons, Sionna can be seen as a differentiable link-level simulator.
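To make this concrete, here is a minimal training sketch under the setup above (the loss choice, learning rate, and step count are assumptions; it treats the LLRs as logits, which matches the convention LLR = log p(b=1)/p(b=0)):

import tensorflow as tf

# One layer instance each, so that the variables persist across training steps.
source = BinarySource()
encoder = LDPC5GEncoder(k, n)
constellation = Constellation("qam", m, trainable=True)
mapper = Mapper(constellation=constellation)
channel = AWGN()
demapper = NeuralDemapper()

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

for step in range(100):  # illustrative step count
    with tf.GradientTape() as tape:
        b = source([batch_size, k])
        u = encoder(b)
        x = mapper(u)
        y = channel([x, 1/snr])
        llr = demapper([y, 1/snr])
        # Binary cross-entropy between coded bits and LLRs-as-logits
        loss = bce(u, llr)
    # Gradients flow into both the constellation points and the demapper weights
    variables = tape.watched_variables()
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))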
Looking ahead
Soon, Sionna will allow for integrated ray tracing to replace stochastic channel models, enabling many new fields of research. Ultra-fast ray tracing is a crucial technology for digital twins of communication systems. For example, this enables the co-design of a building’s architecture and the communication infrastructure to achieve unprecedented levels of throughput and reliability.
Figure 3. Access the power of hardware-accelerated ray tracing from within a Jupyter notebook
Sionna takes advantage of the computing (NVIDIA CUDA cores), AI (NVIDIA Tensor Cores), and ray tracing cores of NVIDIA GPUs for lightning-fast simulations of 6G systems.
We hope you share our excitement about Sionna, and we look forward to hearing about your success stories!
For more information, see the following resources: