Art is a fascinating but demanding discipline: creating artwork is not only time-consuming but also requires considerable skill. If this already holds for 2D artworks, consider extending the problem to dimensions beyond the image plane, such as time (in animated content) or 3D space (sculptures or virtual environments). This introduces new limitations and challenges, which are addressed by this paper.
Previous work on 2D stylization processes video content frame by frame. Each frame is stylized well on its own, but the generated videos often suffer from flickering artifacts, because no temporal consistency is enforced between frames. In addition, these methods do not extend to 3D environments, which further increases the complexity of the task. Other works focused on 3D stylization operate on point clouds or triangle meshes and suffer from geometrically inaccurate reconstructions and a lack of fine style details, since the style is applied to reconstructed geometry whose properties differ from those of the original scene.
The proposed method, called Artistic Radiance Fields (ARF), can transfer the artistic style of a 2D image onto a real-world 3D scene, yielding novel-view renderings that are faithful to the input artwork (Figure 1).
For this purpose, the researchers transform a photo-realistic radiance field, reconstructed from many real-world photographs, into a new, stylized radiance field that supports high-quality stylized renderings from novel viewpoints. The results are shown in Figure 1.
As an example, given a set of real-world pictures of an excavator together with Van Gogh's famous “Starry Night” as the style image, the result is a colorful excavator with a smooth, faithfully preserved shape.
The ARF pipeline is presented in the diagram below (Figure 2).
The core of this architecture is the combination of the proposed Nearest Neighbor Feature Matching (NNFM) loss and a color transfer step.
NNFM compares feature maps of the rendered and style images, extracted with the well-known VGG-16 Convolutional Neural Network (CNN). In this way, high-frequency style details can be transferred while maintaining a consistent appearance across multiple viewpoints.
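To make the idea concrete, here is a minimal PyTorch sketch of a nearest-neighbor feature matching loss. It is an illustrative reimplementation, not the authors' code: it assumes the VGG-16 feature maps have already been extracted and flattened into per-location feature vectors, and it uses cosine distance to the closest style feature.

```python
import torch
import torch.nn.functional as F

def nnfm_loss(render_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Nearest Neighbor Feature Matching (NNFM) loss sketch.

    render_feats: (N, C) feature vectors from the rendered image (e.g. VGG-16).
    style_feats:  (M, C) feature vectors from the style image.

    For each rendered feature, find its nearest style feature under cosine
    distance and penalize that distance.
    """
    # Normalize so that dot products become cosine similarities.
    r = F.normalize(render_feats, dim=-1)   # (N, C)
    s = F.normalize(style_feats, dim=-1)    # (M, C)
    cos_sim = r @ s.t()                     # (N, M) pairwise similarities
    # Cosine distance to the closest style feature, averaged over pixels.
    d_min = (1.0 - cos_sim).min(dim=1).values
    return d_min.mean()
```

Minimizing this loss pulls every rendered feature toward *some* style feature, which is what lets local brushstroke-like details appear without requiring a global match between the two images.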
Color transfer is a technique used to avoid color mismatches between the rendered views and the style image. It applies a linear transformation to the pixels of the input images so that their mean and covariance match those of the pixels in the style image.
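A common way to realize such a linear color transform is a whitening-coloring step that matches first- and second-order pixel statistics. The sketch below is an assumed, generic formulation (function name and details are illustrative, not taken from the paper):

```python
import torch

def match_color(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Linear color transfer sketch: recolor `content` pixels so that their
    mean and covariance match those of `style` pixels.

    content: (N, 3) RGB pixels of the input image.
    style:   (M, 3) RGB pixels of the style image.
    """
    mu_c, mu_s = content.mean(0), style.mean(0)
    xc, xs = content - mu_c, style - mu_s
    # Regularized 3x3 color covariance matrices.
    cov_c = xc.t() @ xc / xc.shape[0] + eps * torch.eye(3)
    cov_s = xs.t() @ xs / xs.shape[0] + eps * torch.eye(3)

    def sqrt_m(m: torch.Tensor) -> torch.Tensor:
        # Symmetric matrix square root via eigendecomposition.
        vals, vecs = torch.linalg.eigh(m)
        return vecs @ torch.diag(vals.clamp(min=0).sqrt()) @ vecs.t()

    # Whitening-coloring transform: A = cov_s^(1/2) @ cov_c^(-1/2).
    a = sqrt_m(cov_s) @ torch.linalg.inv(sqrt_m(cov_c))
    return xc @ a.t() + mu_s
```

After this transform, the recolored pixels share the style image's mean color and color covariance, so the subsequent feature-matching step no longer has to compensate for a global color shift.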
In addition, the architecture uses a deferred back-propagation method that allows computing the loss on full-resolution images while reducing GPU memory consumption. The first step renders the image at full resolution and computes the loss and its gradient with respect to the pixel colors, producing a cached gradient image. Then, the cached gradients are back-propagated patch-wise to accumulate the gradients of the scene parameters.
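The two-step procedure can be sketched as follows. This is a toy illustration of the memory-saving idea, with assumed helper signatures (`render_patch`, `loss_fn`) rather than the paper's actual API:

```python
import torch

def deferred_backprop(render_patch, loss_fn, params, H, W, patch=32):
    """Deferred back-propagation sketch (assumed helper names).

    render_patch(y0, y1, x0, x1) -> (h, w, 3) colors for that image crop,
    differentiable w.r.t. `params`.
    loss_fn(image) -> scalar style loss on a full (H, W, 3) image.
    """
    # Step 1: render the full image WITHOUT tracking the scene parameters,
    # then compute the loss gradient w.r.t. the pixel colors only.
    with torch.no_grad():
        full = render_patch(0, H, 0, W)
    full = full.detach().requires_grad_(True)
    loss = loss_fn(full)
    loss.backward()
    grad_cache = full.grad  # cached per-pixel gradient image

    # Step 2: re-render patch by patch and back-propagate the cached
    # gradients into the scene parameters, accumulating in params.grad.
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            y1, x1 = min(y + patch, H), min(x + patch, W)
            out = render_patch(y, y1, x, x1)
            out.backward(grad_cache[y:y1, x:x1])
    return loss.item()
```

Only one patch's computation graph is alive at a time, so peak GPU memory scales with the patch size instead of the full image, while the accumulated parameter gradients equal those of a single full-image backward pass.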
The approach presented in this paper, ARF, brings several advantages. First, it produces striking stylized renderings with almost no artifacts. Second, stylized views can be generated from novel perspectives given only a few captured images, yielding convincing 3D reconstructions. Finally, thanks to the deferred back-propagation method, the design minimizes the GPU memory footprint.
This Article is written as a research summary article by Marktechpost Staff based on the research paper 'ARF: Artistic Radiance Fields'. All Credit For This Research Goes To Researchers on This Project. Check out the paper, github link and project.
Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. Candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He is currently working in the Christian Doppler Laboratory ATHENA and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.