Categories
Misc

Neural Network Pinpoints Artist by Examining a Painting’s Brushstrokes

Researchers developed a new AI algorithm that can identify a painter based on brushstrokes, with precision down to a single bristle.

Spotting painting forgeries just became a bit easier with a newly developed AI tool that picks up style differences with precision down to a single paintbrush bristle. The research, from a team at Case Western Reserve University (CWRU), trained convolutional neural networks to learn and identify a painter based on the 3D topography of a painting. This work could help historians and art experts distinguish between artists in collaborative pieces, and find fraudulent copies.

There are several methods of authenticating antique paintings. Experts often evaluate the style and condition of materials and use scientific methods such as microscopic analysis, infrared spectroscopy, and reflectography.

However, these exhaustive methods are time-consuming and can result in errors. They also cannot identify multiple painters of one piece of art. According to the study, painters such as El Greco and Rembrandt often employed workshops of artists to paint parts of a canvas in the same style as their own, making individual contributions unclear.

While analyzing artwork with machine learning is a relatively new field, recent studies have focused on combining AI methods with high-resolution images of paintings to learn a painter’s style and identify an artist. The researchers hypothesized that 3D analysis could hold even more data than an image, where features such as brushwork patterns along with paint deposition and drying methods could serve as an artist’s unique fingerprint. 

“3D topography is a new way for AI to ‘see’ a painting,” senior author Kenneth Singer, the Ambrose Swasey Professor of Physics at CWRU, said in a press release.

Using an optical profiler to extract topographical data from each surface, the researchers scanned 12 paintings of the same scene, painted with identical materials but by four different artists. Sampling small square patches of the art, approximately 5 to 15 mm on a side, the optical profiler detects and logs minute changes on a surface, which can be attributed to how someone holds and uses a paintbrush.

They then trained an ensemble of convolutional neural networks to find patterns in the small patches, sampling between 160 and 1,440 patches for each artist. Using NVIDIA GPUs with cuDNN-accelerated deep learning frameworks, the algorithm matches the samples back to a single painter.

The team tested the algorithm against 180 patches of an artist’s painting, matching the samples back to a painter at about 95% accuracy. 
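The ensemble's patch-level predictions can be combined into a single attribution. A minimal sketch in plain Python, assuming each network in the ensemble emits a probability distribution over candidate painters for one patch (painter names and scores below are illustrative, not from the study):

```python
# Hypothetical patch-vote aggregation: average per-patch probability
# vectors from the ensemble, then attribute the sample to the painter
# with the highest mean probability.

def attribute_painter(patch_probs, painters):
    """Average per-patch probability vectors and return the most likely painter."""
    n = len(patch_probs)
    avg = [sum(p[i] for p in patch_probs) / n for i in range(len(painters))]
    return painters[max(range(len(painters)), key=avg.__getitem__)], avg

painters = ["A", "B", "C", "D"]
# Three patches, each scored by the ensemble (rows sum to 1).
patch_probs = [
    [0.70, 0.10, 0.10, 0.10],
    [0.55, 0.25, 0.10, 0.10],
    [0.20, 0.40, 0.20, 0.20],
]
best, avg = attribute_painter(patch_probs, painters)
print(best)  # the averaged votes favor painter "A"
```

Averaging distributions rather than taking hard per-patch votes keeps information about how confident each network was, which matters with small training sets like these.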

According to coauthor Michael Hinczewski, the Warren E. Rupp Associate Professor of Physics at CWRU, the ability to work with such small training sets is promising for later art historical applications with limited training datasets.

Figure 1: Overview of the data acquisition workflow and an ensemble of convolutional neural networks used to assign artist attribution probabilities to each patch. Credit: Ji, F., McMaster, M.S., Schwab, S. et al./Herit Sci

“Most of the other studies using AI for art attribution are focused on photos of entire paintings,” said Hinczewski. “We broke the painting down into virtual patches ranging from one-half millimeter to a few centimeters square. So we no longer even have information about the subject matter—but we can accurately predict who painted it from an individual patch. That’s amazing.”

Based on their findings, the researchers view surface topography as an additional tool for attribution and forgery detection, offering an unbiased and quantitative analysis. In collaboration with Factum Arte, a Madrid-based art conservation company, the team is working on artist workshop attribution and conservation studies of several works by the Spanish Renaissance painter El Greco.

The data and code associated with the research are available through GitHub. The work is a joint effort between researchers from the CWRU Department of Art History and Art, Cleveland Institute of Art, and the Cleveland Museum of Art.


Read the published research in Heritage Science.
Read the press release.


NVIDIA Metropolis Partners Showcase Vision AI Traffic Optimization at CES 2022

Explore NVIDIA Metropolis partners showcasing new technologies to improve city mobility at CES 2022.

The Consumer Electronics Show (CES), an annual trade show organized by the Consumer Technology Association, brings together thought leaders, products, and technologies working to transform traffic and roadways, an important cross-section of daily life.

With limited roadways and growing populations, cities increasingly look to automation and simulation to manage traffic and constrained infrastructure. NVIDIA partners worldwide are deploying the NVIDIA Metropolis video analytics platform, leveraging real-time sensors and AI to design more efficient roadways and optimize traffic safety and operations.

The following NVIDIA Metropolis partners are showcasing how they help manage traffic with AI at CES. 

Asilla: Asilla develops behavior recognition AI solutions that use posture estimation technology to enhance public safety. Asilla is helping cities and a wide range of industries improve safety and security by detecting abnormal behavior in real-time and enabling prompt response to events. Check out Asilla at booth #51127 in Sands Hall.

Bitsensing: Bitsensing uses GPU and radar technology to connect cities, roads, buildings, and individuals, building a complete autonomous, connected environment. With cutting-edge imaging radar technology, Bitsensing accelerates the creation of smart cities, bringing a new level of reliability and convenience. Visit Bitsensing at booth #61059 in Eureka Park.

Ekin: Ekin develops the next generation of smart city solutions to optimize safety and security for cities. A forward-thinking provider of quantitative data based on cutting-edge artificial intelligence technology, Ekin focuses on traffic management, smart parking, smart city living, and public safety. Visit Ekin at booth #9136 in LVCC North Hall.

Nota: Nota produces an AI software optimization platform that automates and optimizes customers’ AI applications. Nota creates lightweight AI models that are low latency, energy-efficient, and accurate, optimizing processor usage and enabling deployment on lower-end edge devices. Check out Nota at booth #9646 in LVCC North Hall.

NoTraffic: NoTraffic’s real-time, plug-and-play autonomous traffic management platform uses AI and cloud computing to reinvent how cities run their transport networks. The NoTraffic platform is an end-to-end hardware and software solution installed at intersections, transforming roadways to optimize traffic flows and reduce accidents. Check out NoTraffic at booth #9130 in LVCC North Hall.

Ouster: Cities are using Ouster digital lidar solutions to capture the environment in minute detail and detect vehicles, vulnerable road users, and incidents in real time to improve safety and traffic efficiency. Ouster lidar’s 3D spatial awareness and 24/7 performance combine the high-resolution imagery of cameras with the all-weather reliability of radar. Check out Ouster and live demos at booth #3843 in LVCC West Hall.

Velodyne Lidar: Velodyne’s lidar-based Intelligent Infrastructure Solution (IIS) is a complete end-to-end Smart City solution. IIS creates a real-time 3D map of roads and intersections, providing precise traffic and pedestrian safety analytics, road user classification, and smart signal actuation. The solution is deployed in the US, Canada and across EMEA and APAC. Check out Velodyne Lidar at booth #6005 in LVCC West Hall.

Register for CES, happening Jan. 5-8 in Las Vegas.


NVIDIA Builds Isaac AMR Platform to Aid $9 Trillion Logistics Industry

Manufacturing and fulfillment centers are profoundly complex. Whenever new earbuds or socks land at your doorstep in hours or a vehicle rolls off an assembly line, a maze of magic happens with AI-driven logistics. Massive facilities like these are constantly in flux. Robots travel miles of aisles to roll up millions of products to assist Read article >

The post NVIDIA Builds Isaac AMR Platform to Aid $9 Trillion Logistics Industry appeared first on The Official NVIDIA Blog.


Gamers, Creators, Drivers Feel GeForce RTX, NVIDIA AI Everywhere

Putting the power of graphics and AI at the fingertips of more users than ever, NVIDIA announced today new laptops and autonomous vehicles using GeForce RTX and NVIDIA AI platforms and expanded reach for GeForce NOW cloud gaming across Samsung TVs and the AT&T network. A virtual address prior to CES showed next-gen games, new Read article >

The post Gamers, Creators, Drivers Feel GeForce RTX, NVIDIA AI Everywhere appeared first on The Official NVIDIA Blog.


NVIDIA Canvas Updated With New AI Model Delivering 4x Resolution and More Materials

As art evolves and artists’ abilities grow, so must their creative tools. NVIDIA Canvas, the free beta app and part of the NVIDIA Studio suite of creative apps and tools, has brought the real-time painting tool GauGAN to anyone with an NVIDIA RTX GPU. Artists use advanced AI to quickly turn simple brushstrokes into realistic Read article >

The post NVIDIA Canvas Updated With New AI Model Delivering 4x Resolution and More Materials appeared first on The Official NVIDIA Blog.


Groundbreaking Updates to NVIDIA Studio Power the 3D Virtual Worlds of Tomorrow, Today

We’re at the dawn of the next digital frontier. Creativity is fueling new developments in design, innovation and virtual worlds. For the creators driving this future, we’ve built NVIDIA Studio, a fully accelerated platform with high-performance GPUs as the heartbeat for laptops and desktops. This hardware is paired with exclusive NVIDIA RTX-accelerated software optimizations in Read article >

The post Groundbreaking Updates to NVIDIA Studio Power the 3D Virtual Worlds of Tomorrow, Today appeared first on The Official NVIDIA Blog.


NVIDIA Makes Free Version of Omniverse Available to Millions of Individual Creators and Artists Worldwide

Designed to be the foundation that connects virtual worlds, NVIDIA Omniverse is now available to millions of individual NVIDIA Studio creators using GeForce RTX and NVIDIA RTX GPUs. In a special address at CES, NVIDIA also announced new platform developments for Omniverse Machinima and Omniverse Audio2Face, new platform features like Nucleus Cloud and 3D marketplaces, Read article >

The post NVIDIA Makes Free Version of Omniverse Available to Millions of Individual Creators and Artists Worldwide appeared first on The Official NVIDIA Blog.


Autonomous Era Arrives at CES 2022 With NVIDIA DRIVE Hyperion and Omniverse Avatar

CES has long been a showcase of what’s coming down the technology pipeline. This year, NVIDIA is showing the radical innovation happening now. During a special virtual address at the show, Ali Kani, vice president and general manager of Automotive at NVIDIA, detailed the capabilities of DRIVE Hyperion and the many ways the industry is Read article >

The post Autonomous Era Arrives at CES 2022 With NVIDIA DRIVE Hyperion and Omniverse Avatar appeared first on The Official NVIDIA Blog.


GeForce NOW Delivers Legendary GeForce Gaming With More Games on More Networks to More Devices

GeForce NOW is kicking off the new year by bringing more games, more devices and more networks to our cloud gaming ecosystem. The next pair of Electronic Arts games, Battlefield 4 and Battlefield V, is streaming on GeForce NOW. We’re also working closely with a few titans in their respective industries: AT&T and Samsung. AT&T Read article >

The post GeForce NOW Delivers Legendary GeForce Gaming With More Games on More Networks to More Devices appeared first on The Official NVIDIA Blog.


Simplifying Realistic Character Creation with NVIDIA Omniverse Reallusion Connector

Combining Omniverse and Reallusion software accelerates the creation of realistic and stylized characters with a library of high-quality character assets and motions.

Character creation and animation are two distinct disciplines that demand the skill of well-trained artists with specialized background knowledge. These domains can be difficult and frustrating for artists who come from unrelated backgrounds with different skill sets, and it’s a pain point that Character Creator and iClone were specifically created to resolve.

Character Creator is well-positioned as the go-to solution for creating realistic and stylized characters with robust pipelines for mainstream tools like ZBrush, Substance, and Blender. Combined with the ability to export FBX with LODs (levels of detail), digital human shaders, and a rich collection of motion assets, iClone stands out as an animation editor without a steep learning curve.

The launch of NVIDIA Omniverse in 2020 was a momentous occasion that attracted our attention. Omniverse is a next-generation 3D virtual collaboration and real-time simulation platform that connects people and applications for broad-based collaboration. The Reallusion software suite, combined with a massive library of high-quality character assets and motions, can play a crucial role in this ecosystem, while Omniverse provides the path-traced rendering and AI technology, making for a powerful synergy.

Where to start?

To build the connector, you start by referencing the Connect Sample code available for download on the NVIDIA launcher application.

Figure 1. Build your own Omniverse Connector with sample code

On the Omniverse YouTube channel, there is a great beginner tutorial: Create an Omniverse USD App from the Connect Sample.

Scene and character animation

An iClone and Character Creator 3D scene consists of nodes with basic transforms: translation, rotation, and scale values. Characters, meshes, lights, and cameras are all attached under these nodes.

The bones of a character skeleton are also represented by these nodes. Nodes that only represent transforms are exported as USD XForm and nodes that represent the body and facial bones are exported as USD skeleton joints. Additional bones and skinning are added to accessory nodes that are attached to bones before converting to USD format.

Figure 2. Scene graph of transforms (Xforms) and joints

USD Xform scaling works in a fundamentally different way from iClone. In iClone, a node can be made to inherit or ignore the parent scale, whereas in Omniverse the node scale is always inherited from its parent node. Consequently, bone node scale inheritance must be removed and its values reset before exporting to Omniverse, so that the scale values match.
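One way to picture this fixup, as a minimal sketch (the function name is illustrative, not the actual exporter API): divide the desired world scale by the parent's world scale to get the compensating local scale that USD's mandatory inheritance will then restore.

```python
# Hypothetical scale-baking helper: since a USD child always multiplies
# its local scale by the parent's scale, a node that ignored the parent
# scale in iClone needs a compensating local scale baked in on export.

def bake_local_scale(desired_world_scale, parent_world_scale):
    """Return the local scale a USD child needs so that, after inheriting
    the parent scale, its world scale matches the iClone value."""
    return tuple(d / p for d, p in zip(desired_world_scale, parent_world_scale))

# A bone that should stay at world scale 1.0 under a parent scaled by 2:
print(bake_local_scale((1.0, 1.0, 1.0), (2.0, 2.0, 2.0)))
# (0.5, 0.5, 0.5)
```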

Most of iClone’s facial expressions are composed of morph animations that are exported as USD blend-shapes. In contrast to FBX blend-shapes, which are stored in local-space positions, USD blend-shapes store positional offsets.

Because iClone also stores positional offsets (in combination with strength multipliers), it is completely compatible with Omniverse, enabling direct conversion to USD format. It should be noted that Omniverse requires a skeleton root for props with blend-shapes attached and additional processing may be required.
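The relationship between the two blend-shape conventions can be sketched in a few lines. Assuming FBX-style absolute target positions as input (the helper name is hypothetical, not the actual exporter code), the USD offset for each point is simply the target position minus the neutral position:

```python
# Hypothetical conversion from FBX-style absolute blend-shape targets
# to USD positional offsets: offset = target point - base (neutral) point.

def fbx_target_to_usd_offsets(base_points, target_points):
    """Convert absolute blend-shape target positions to USD positional offsets."""
    return [tuple(t - b for t, b in zip(tp, bp))
            for bp, tp in zip(base_points, target_points)]

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 0.5, 0.0), (1.0, 0.2, 0.0)]
print(fbx_target_to_usd_offsets(base, target))
# [(0.0, 0.5, 0.0), (0.0, 0.2, 0.0)]
```

Because iClone already stores offsets in this form, no subtraction step is needed for its own morph data.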

Material conversion

The following section contains MDL code excerpts for iClone’s USD exporter. For more information, see USD Shader Attributes.

Within the USD file, MDLs are specified using info:mdl:sourceAsset with info:mdl:sourceAsset:subIdentifier as the entry function. The subIdentifier attribute was introduced by NVIDIA and Pixar. inputs: is later used to feed in texture and material parameters. OmniPBR.mdl and OmniSurface(Base).mdl, provided with Omniverse, were used as starting points.

OmniPBR.mdl was chosen from the start because it works well in both NVIDIA RTX Real-time and Path-traced mode. On the other hand, OmniSurface and OmniHair are photo-realistic MDLs for RTX Path-traced mode. The existing PBR, Skin, Head, and SSS shaders were then rewritten from HLSL to MDL.

Figure 3. Omniverse using OmniSurface and OmniHair MDLs for a photorealistic character
Figure 4. Creating realistic floating water in the swimming pool with Omniverse

Flowing water in the swimming pool is another good example:

float2 inputs:texture_translate.timeSamples = {
                    0: (0, 0),
                    4000: (4, 8),
                }

Besides the previously mentioned built-in MDLs, there is also a base.mdl on GitHub with some reusable functions that can be deployed in a jiffy.

Light conversion

Point lights and spotlights use UsdLuxSphereLight with adjusted cone angles. Tube and rectangle lights use UsdLuxCylinderLight and UsdLuxRectLight, respectively. An IES light profile file can also be supplied as one of the shaping attributes. Light intensity in USD is similar to luminous intensity per unit surface area. The USD intensity of a spherical light with radius (r in meters) can be approximated with the following formula:

USD intensity = candela * 1000 / (4PI r*r)

The following formula is used when radius is in centimeters:

USD intensity = candela * 1000 / (4PI(0.01r)*(0.01r))

Radius is a significant attribute in Omniverse Renderer. We recommend imposing a minimum radius of 2 cm. For more examples, see iClone Omniverse Connector Tutorial – Light Settings in iClone & Omniverse.
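The two intensity formulas above can be written as a small sketch (function names are illustrative, not part of any API); with the recommended minimum radius of 2 cm, the meter and centimeter forms agree:

```python
import math

# Sketch of the sphere-light intensity conversion described above.
# The candela input and the factor of 1000 follow the article's
# approximation for converting to USD light intensity.

def usd_sphere_light_intensity(candela, radius_m):
    """Approximate USD intensity for a sphere light with radius in meters."""
    return candela * 1000.0 / (4.0 * math.pi * radius_m * radius_m)

def usd_sphere_light_intensity_cm(candela, radius_cm):
    """Same formula with the radius given in centimeters."""
    return usd_sphere_light_intensity(candela, 0.01 * radius_cm)

# A radius of 2 cm equals 0.02 m, so both forms give the same intensity:
print(usd_sphere_light_intensity(100, 0.02))
print(usd_sphere_light_intensity_cm(100, 2))
```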

Figure 5. Displaying lighting realism within Omniverse Renderer

What’s next

A prototype of a one-way live sync connector is under development. Because iClone’s undo/redo system is similar to the memento pattern, we use a table to keep track of live objects by universal ID. This table is updated after each undo or redo operation.
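As a rough illustration of such a table (class and method names are hypothetical, not Reallusion's implementation), a memento-style snapshot history keyed by universal IDs might look like:

```python
import copy

# Hypothetical live-object table: objects are keyed by universal ID so a
# live-sync connector can re-resolve them after undo/redo restores an
# earlier scene state (memento-style snapshots).

class LiveObjectTable:
    def __init__(self):
        self._objects = {}      # universal ID -> object state
        self._history = []      # saved snapshots for undo
        self._redo_stack = []   # snapshots replayable by redo

    def set(self, uid, state):
        self._history.append(copy.deepcopy(self._objects))
        self._redo_stack.clear()
        self._objects[uid] = state

    def undo(self):
        if self._history:
            self._redo_stack.append(self._objects)
            self._objects = self._history.pop()

    def redo(self):
        if self._redo_stack:
            self._history.append(self._objects)
            self._objects = self._redo_stack.pop()

    def get(self, uid):
        return self._objects.get(uid)

table = LiveObjectTable()
table.set("char-001", {"translate": (0, 0, 0)})
table.set("char-001", {"translate": (1, 2, 3)})
table.undo()  # the table now reflects the earlier state again
print(table.get("char-001"))  # {'translate': (0, 0, 0)}
```

Keying by universal ID rather than by object reference is what lets the connector match scene objects back up after the undo system has swapped the underlying state.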

To get an overview of how this solution works, check out our character creation and character animation solution or watch our 2021 GTC Fall session video recording.

For more information and to join the discussion, visit the NVIDIA Omniverse forum and NVIDIA Omniverse Discord channel. For more resources on USD, see the USD GitHub repository and prebuilt libraries and tools from NVIDIA.