
NVIDIA Launches Morpheus to Bring AI-Driven Automation to Cybersecurity Industry

New Framework Powered by NVIDIA GPUs, BlueField DPUs Enables Cybersecurity Providers to Develop AI Solutions That Can Instantly Detect Cyber Breaches. SANTA CLARA, Calif., April 12, 2021 (GLOBE …


Fast Track to Enterprise AI: New NVIDIA Workflow Lets Any User Choose, Adapt, Deploy Models Easily

AI is the most powerful new technology of our time, but it’s been a force that’s hard to harness for many enterprises — until now. Many companies lack the specialized skills, access to large datasets or accelerated computing that deep learning requires. Others are realizing the benefits of AI and want to spread them quickly. Read article >

The post Fast Track to Enterprise AI: New NVIDIA Workflow Lets Any User Choose, Adapt, Deploy Models Easily appeared first on The Official NVIDIA Blog.


NVIDIA Announces Availability of Jarvis Interactive Conversational AI Framework

Pre-Trained Deep Learning Models and Software Tools Enable Developers to Adapt Jarvis for All Industries; Easily Deployed from Any Cloud to Edge. SANTA CLARA, Calif., April 12, 2021 (GLOBE …


NVIDIA Launches Omniverse Design Collaboration and Simulation Platform for Enterprises

Leading Computer Makers Launch Workstations and NVIDIA-Certified Systems for Omniverse; BMW Group, Ericsson, Foster + Partners, WPP Among Early Adopters. SANTA CLARA, Calif., April 12, 2021 …


Model was constructed with shape (1, 16, 1), but it was called on an input with incompatible shape (1, 1, 1)

I’m new to deep learning, and I’m trying to model a univariate time series using the sliding-window approach with an LSTM model. My training dataset takes 16 values to predict the next 16. My code is written in R.
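For reference, create_dataset itself isn’t shown in the post. Below is a hedged sketch of what a sliding-window helper like it might do (an assumption about its behavior, not the actual code):

# Hypothetical reconstruction of a sliding-window helper: pair each
# window of n_input values with the n_out values that follow it.
create_dataset_sketch <- function(data, n_input, n_out) {
  n = length(data) - n_input - n_out + 1
  X = t(sapply(1:n, function(i) data[i:(i + n_input - 1)]))
  y = t(sapply(1:n, function(i) data[(i + n_input):(i + n_input + n_out - 1)]))
  list(X, y)  # X: n x n_input, y: n x n_out
}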

I am getting a warning and I cannot understand what I am doing wrong. I think the problem is in how I specify the model.

I am totally new to this, so any help would be great.

Below is the whole code. I get the warning at the very end, after predicting.

train_sliding = create_dataset(data = kt_train_male_scaled, n_input = 16, n_out = 16)
X_train = train_sliding[[1]] # 97, 16
y_train = train_sliding[[2]] # 97, 16

# Array transformation for the Keras LSTM
dim(X_train) = c(dim(X_train), 1)
dim(X_train) # 97, 16, 1

I think the problem is in this chunk of code; I may be building the model wrong.

# Model in Keras
X_shape2 = dim(X_train)[2] # 16
X_shape3 = dim(X_train)[3] # 1
batch_size = 1

model <- keras_model_sequential()
model %>%
  layer_lstm(units = 64, activation = "relu", batch_size = batch_size,
             input_shape = c(X_shape2, X_shape3), stateful = TRUE) %>%
  # layer_lstm(units = 5, activation = "relu", stateful = TRUE) %>%
  layer_dense(units = 1)
summary(model)

model %>% compile(
  loss = 'mse',
  optimizer = optimizer_adam(lr = 0.01, decay = 1e-6),
  metrics = c('mae')
)

Epochs = 100
for (i in 1:Epochs) {
  model %>% fit(X_train, y_train, epochs = 1, batch_size = batch_size,
                verbose = 1, shuffle = FALSE)
  model %>% reset_states()
}

L = length(kt_test_male_scaled)
scaler = Scaled$scaler
predictions = numeric(L)

I get the warning after running this part. Also, all 16 of my predictions have the same value. I also tried dim(X) = c(1, 16, 1), but it did not work.

for (i in 1:L) {
  X = kt_test_male_scaled[i]
  dim(X) = c(1, 1, 1)
  yhat = model %>% predict(X, batch_size = batch_size)
  # invert scaling
  yhat = invert_scaling(yhat, scaler, c(-1, 1))
  # invert differencing
  # yhat = yhat + kt_male[(n + i)]
  # store
  predictions[i] <- yhat
}

Model was constructed with shape (1, 16, 1) for input KerasTensor(type_spec=TensorSpec(shape=(1, 16, 1), dtype=tf.float32, name='lstm_107_input'), name='lstm_107_input', description="created by layer 'lstm_107_input'"), but it was called on an input with incompatible shape (1, 1, 1)
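The warning reflects a genuine shape mismatch: the model was built for inputs of shape (batch = 1, timesteps = 16, features = 1), but the prediction loop feeds one value at a time as (1, 1, 1). Reshaping a single value with dim(X) = c(1, 16, 1) cannot work either, because that vector has length 1 rather than 16; the input itself has to contain a 16-step window. A minimal sketch of a consistent prediction loop (assuming kt_test_male_scaled holds at least 16 values, and reusing the invert_scaling helper from above):

n_input = 16
n_windows = L - n_input + 1
predictions = numeric(n_windows)
for (i in 1:n_windows) {
  X = kt_test_male_scaled[i:(i + n_input - 1)]  # 16 consecutive test values
  dim(X) = c(1, n_input, 1)                     # (batch, timesteps, features) = (1, 16, 1)
  yhat = model %>% predict(X, batch_size = batch_size)
  predictions[i] <- invert_scaling(yhat, scaler, c(-1, 1))
}

Note also that y_train has 16 target columns while the final layer_dense(units = 1) emits a single value per window; predicting 16 steps at once would need units = 16 (or a sequence-to-sequence setup) in the output layer.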

submitted by /u/rods2292


Deep Learning with TensorFlow – Free Course from Udemy

submitted by /u/Ordinary_Craft

Am I the only person deeply irritated by the TF logo with the shadow?

It doesn’t make sense. The shadow of the corner of the T looks like it is 5 cube lengths long, while the physical figure’s T is only 1 cube length on the top right.

submitted by /u/Silver4R4449


How many objects can be detected using TensorFlow?

submitted by /u/NickLRealtor


Inception Spotlight: Deepset collaborates with NVIDIA and AWS on BERT Optimization

Language models are essential for modern NLP, and building a new language model from scratch can be beneficial for many domains. NVIDIA Inception member deepset bridges the gap between NLP research and industry: its core product, Haystack, is an open-source framework that enables developers to use the latest NLP models for semantic search and question answering at scale. Haystack Hub, its software-as-a-service (SaaS) platform, is used by developers from various industries, including finance, legal, and automotive, to find answers in all kinds of text documents.

In a collaborative effort with NVIDIA and AWS, deepset used NVIDIA V100 GPUs to train its language model, capturing GPU performance profiles with NVIDIA Nsight Systems.

The collaboration was a product of the partnership between NVIDIA Inception and AWS Activate, an initiative to support AI startups by providing access to the benefits of both acceleration programs. For NVIDIA Inception startups joining AWS Activate, those benefits include business and marketing support as well as AWS Cloud credits, which can be used to access NVIDIA’s latest-generation GPUs in Amazon EC2 P3 instances. AWS Activate members working in AI and machine learning are referred to NVIDIA Inception and benefit from immediate preferred pricing on NVIDIA GPUs and Deep Learning Institute credits.

As the deepset team describes the starting point: “A considerable amount of manual development is required to create the training data and vocabulary, configure hyperparameters, start and monitor training jobs, and run periodical evaluation of different model checkpoints. In our first training runs, we also found several bugs only after multiple hours of training, resulting in a slow development cycle. In summary, language model training can be a painful job for a developer and easily consumes multiple days of work.”

And on the results: “The increased efficiency of training jobs reduces our energy usage and lowers our carbon footprint. By tackling different areas of FARM’s training pipeline, we were able to significantly optimize the resource utilization. In the end, we achieved a 3.9x speedup in training time and a 12.8x reduction in training cost, and reduced the developer effort required from days to hours.”

Through this collaboration with NVIDIA and AWS, NVIDIA Inception member deepset achieved a 3.9x speedup and a 12.8x cost reduction in training its NLP models, significantly reducing developer effort.

Read more about the technologies used in training and their impact on improving BERT training performance.


Monster Mash: A Sketch-Based Tool for Casual 3D Modeling and Animation

3D computer animation is a time-consuming and highly technical medium — to complete even a single animated scene requires numerous steps, like modeling, rigging and animating, each of which is itself a sub-discipline that can take years to master. Because of its complexity, 3D animation is generally practiced by teams of skilled specialists and is inaccessible to almost everyone else, despite decades of advances in technology and tools. With the recent development of tools that facilitate game character creation and game balance, a natural question arises: is it possible to democratize the 3D animation process so it’s accessible to everyone?

To explore this concept, we start with the observation that most forms of artistic expression have a casual mode: a classical guitarist might jam without any written music, a trained actor could ad-lib a line or two while rehearsing, and an oil painter can jot down a quick gesture drawing. What these casual modes have in common is that they allow an artist to express a complete thought quickly and intuitively without fear of making a mistake. This turns out to be essential to the creative process — when each sketch is nearly effortless, it is possible to iteratively explore the space of possibilities far more effectively.

In this post, we describe Monster Mash, an open source tool presented at SIGGRAPH Asia 2020 that allows experts and amateurs alike to create rich, expressive, deformable 3D models from scratch — and to animate them — all in a casual mode, without ever having to leave the 2D plane. With Monster Mash, the user sketches out a character, and the software automatically converts it to a soft, deformable 3D model that the user can immediately animate by grabbing parts of it and moving them around in real time. There is also an online demo, where you can try it out for yourself.

Creating a walk cycle using Monster Mash. Step 1: Draw a character. Step 2: Animate it.

Creating a 2D Sketch
The insight that makes this casual sketching approach possible is that many 3D models, particularly those of organic forms, can be described by an ordered set of overlapping 2D regions. This abstraction makes the complex task of 3D modeling much easier: the user creates 2D regions by drawing their outlines, then the algorithm creates a 3D model by stitching the regions together and inflating them. The result is a simple and intuitive user interface for sketching 3D figures.

For example, suppose the user wants to create a 3D model of an elephant. The first step is to draw the body as a closed stroke (a). Then the user adds strokes to depict other body parts such as legs (b). Drawing those additional strokes as open curves provides a hint to the system that they are meant to be smoothly connected with the regions they overlap. The user can also specify that some new parts should go behind the existing ones by drawing them with the right mouse button (c), and mark other parts as symmetrical by double-clicking on them (d). The result is an ordered list of 2D regions.

Steps in creating a 2D sketch of an elephant.

Stitching and Inflation
To understand how a 3D model is created from these 2D regions, let’s look more closely at one part of the elephant. First, the system identifies where the leg must be connected to the body (a) by finding the segment (red) that completes the open curve. The system cuts the body’s front surface along that segment, and then stitches the front of the leg together with the body (b). It then inflates the model into 3D by solving a modified form of Poisson’s equation to produce a surface with a rounded cross-section (c). The resulting model (d) is smooth and well-shaped, but because all of the 3D parts are rooted in the drawing plane, they may intersect each other, resulting in a somewhat odd-looking “elephant”. These intersections will be resolved by the deformation system.

Illustration of the details of the stitching and inflation process. The schematic illustrations (b, c) are cross-sections viewed from the elephant’s front.
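For intuition, the classic inflation step can be written as a small boundary-value problem (a simplified sketch of the standard formulation; Monster Mash solves a modified form precisely to obtain rounded rather than parabolic cross-sections). Solving for a height field $h$ over the sketched region $\Omega$:

$$\Delta h = -C \ \text{ in } \Omega, \qquad h = 0 \ \text{ on } \partial\Omega$$

The constant $C > 0$ controls how much the surface puffs up, and mirroring $h$ across the drawing plane closes the inflated front into a full 3D surface.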

Layered Deformation
At this point we just have a static model — we need to give the user an easy way to pose the model, and also separate the intersecting parts somehow. Monster Mash’s layered deformation system, based on the well-known smooth deformation method as-rigid-as-possible (ARAP), solves both of these problems at once. What’s novel about our layered “ARAP-L” approach is that it combines deformation and other constraints into a single optimization framework, allowing these processes to run in parallel at interactive speed, so that the user can manipulate the model in real time.
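As background, the ARAP energy this builds on (in Sorkine and Alexa's standard formulation) measures how far each deformed vertex neighborhood is from a rigid motion. For rest positions $\mathbf{p}_i$, deformed positions $\mathbf{p}'_i$, and per-vertex rotations $\mathbf{R}_i$:

$$E = \sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij} \left\| (\mathbf{p}'_i - \mathbf{p}'_j) - \mathbf{R}_i (\mathbf{p}_i - \mathbf{p}_j) \right\|^2$$

where $\mathcal{N}(i)$ is the neighbor set of vertex $i$ and the $w_{ij}$ are edge weights (commonly cotangent weights). ARAP-L minimizes this energy together with the additional constraints described next.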

The framework incorporates a set of layering and equality constraints, which move body parts along the z axis to prevent them from visibly intersecting each other. These constraints are applied only at the silhouettes of overlapping parts, and are dynamically updated each frame.
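At those silhouette samples, a layering constraint can be pictured as a depth inequality (an illustrative form, not necessarily the paper's exact notation): where the leg must stay in front of the body at a sample $s$,

$$z_{\text{leg}}(s) \geq z_{\text{body}}(s) + \varepsilon, \qquad \varepsilon > 0,$$

while an equality constraint pins two matching boundary points to the same depth.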

In steps (d) through (h) above, ARAP-L transforms a model from one with intersecting 3D parts to one with the depth ordering specified by the user. The layering constraints force the leg’s silhouette to stay in front of the body (green), and the body’s silhouette to stay behind the leg (yellow). Equality constraints (red) seal together the loose boundaries between the leg and the body.

Meanwhile, in a separate thread of the framework, we satisfy point constraints to make the model follow user-defined control points (described in the section below) in the xy-plane. This ARAP-L method allows us to combine modeling, rigging, deformation, and animation all into a single process that is much more approachable to the non-specialist user.

The model deforms to match the point constraints (red dots) while the layering constraints prevent the parts from visibly intersecting.

Animation
To pose the model, the user can create control points anywhere on the model’s surface and move them. The deformation system converges over multiple frames, which gives the model’s movement a soft and floppy quality, allowing the user to intuitively grasp its dynamic properties — an essential prerequisite for kinesthetic learning.

Because the effect of deformations converges over multiple frames, our system lends 3D models a soft and dynamic quality.
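One simple way to picture this multi-frame convergence (an illustrative assumption, not the paper's exact solver schedule) is the surface moving only a fraction of the way toward its constraint targets each frame:

$$\mathbf{p}_{t+1} = \mathbf{p}_t + \alpha\,(\mathbf{p}^{*} - \mathbf{p}_t), \qquad 0 < \alpha < 1$$

so a sudden jump in a control point plays out as smooth, springy motion over several frames.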

To create animation, the system records the user’s movements in real time. The user can animate one control point, then play back that movement while recording additional control points. In this way, the user can build up a complex action like a walk by layering animation, one body part at a time. At every stage of the animation process, the only task required of the user is to move points around in 2D, a low-risk workflow meant to encourage experimentation and play.

Conclusion
We believe this new way of creating animation is intuitive and can thus help democratize the field of computer animation, encouraging novices who would normally be unable to try it on their own as well as experts who often require fast iteration under tight deadlines. Here you can see a few of the animated characters that have been created using Monster Mash. Most of these were created in a matter of minutes.

A selection of animated characters created using Monster Mash. The original hand-drawn outline used to create each 3D model is visible as an inset above each character.

All of the code for Monster Mash is available as open source, and you can watch our presentation and read our paper from SIGGRAPH Asia 2020 to learn more. We hope this software will make creating 3D animations more broadly accessible. Try out the online demo and see for yourself!

Acknowledgements
Monster Mash is the result of a collaboration between Google Research, Czech Technical University in Prague, ETH Zürich, and the University of Washington. Key contributors include Marek Dvorožňák, Daniel Sýkora, Cassidy Curtis, Brian Curless, Olga Sorkine-Hornung, and David Salesin. We are also grateful to Hélène Leroux, Neth Nom, David Murphy, Samuel Leather, Pavla Sýkorová, and Jakub Javora for participating in the early interactive sessions.