A depth map is an image channel in computer graphics and computer vision that records, for each pixel, the distance from the viewpoint to the nearest object surface. Because depth underpins applications such as augmented reality, portrait mode, and 3D reconstruction, research into depth sensing continues to advance, particularly since the release of the ARCore Depth API. There is also growing interest in the web community in bringing depth capabilities to JavaScript so that existing web applications can be enhanced with real-time AR effects. Despite this progress, the scarcity of photographs paired with ground-truth depth maps remains a concern.

To drive the next generation of web applications, TensorFlow has released its first depth estimation API for TensorFlow.js, called the Depth API, along with ARPortraitDepth, a model that estimates a depth map for a portrait image. They also published 3D Photo, a computational photography application that uses the predicted depth to create a 3D parallax effect on a given portrait image, further demonstrating the potential of depth information. TensorFlow has also launched a live demo that lets people convert their own photographs into 3D versions.

Github: https://github.com/tensorflow/tfjs-models/blob/master/depth-estimation/README.md

submitted by /u/No_Coffee_4638
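As a rough illustration, the sketch below shows how the depth-estimation package might be loaded and run on a portrait image in the browser with TensorFlow.js. It follows the usage pattern described in the linked README; exact package names, configuration options (for example the minDepth/maxDepth range), and the element ids used here are assumptions that should be checked against that README.

```typescript
// Minimal sketch of estimating a portrait depth map with TensorFlow.js.
// Assumes the @tensorflow-models/depth-estimation package and a WebGL backend,
// per the linked README; verify options and versions against the README.
import '@tensorflow/tfjs-backend-webgl';
import * as depthEstimation from '@tensorflow-models/depth-estimation';

async function estimatePortraitDepth(
  image: HTMLImageElement
): Promise<CanvasImageSource> {
  // Create an estimator backed by the ARPortraitDepth model.
  const model = depthEstimation.SupportedModels.ARPortraitDepth;
  const estimator = await depthEstimation.createEstimator(model);

  // Run inference; minDepth/maxDepth define the depth normalization range.
  const depthMap = await estimator.estimateDepth(image, {
    minDepth: 0,
    maxDepth: 1,
  });

  // The result can be read back as an array, a tensor, or a canvas image source.
  return depthMap.toCanvasImageSource();
}

// Example usage with <img id="portrait"> and <canvas id="output"> elements
// on the page (hypothetical ids for this sketch).
const img = document.getElementById('portrait') as HTMLImageElement;
const canvas = document.getElementById('output') as HTMLCanvasElement;

estimatePortraitDepth(img).then((depthImage) => {
  // Draw the grayscale depth image onto the canvas for display.
  canvas.getContext('2d')?.drawImage(depthImage, 0, 0);
});
```

The returned depth map is normalized to the requested range, so downstream effects such as the 3D parallax in 3D Photo can treat it as a per-pixel distance field.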