Colorization can give a film a whole new artistic appearance, and AI-based colorization is now both quantitatively and qualitatively superior to earlier approaches. In a recent paper published on the preprint server Arxiv.org, "Deep Exemplar-based Video Colorization," scientists at Microsoft's AI perception and mixed reality division, Hamad Bin Khalifa University, and USC's Institute for Creative Technologies describe an end-to-end system for automatic, exemplar-based video colorization. The paper reports that the system makes colorized footage look realistic, outperforming prior work in both quantitative and qualitative evaluations.
The co-authors say that "achieving temporal consistency without disturbing the reference style" is the main challenge, and they report that all of the components, trained end to end, help the system produce realistic videos with good temporal stability.
As the authors note, AI that can convert monochrome clips into color is nothing new. Researchers at Nvidia last September detailed a framework that infers colors from just one colorized and annotated video frame, and Google AI in June introduced an algorithm that colorizes grayscale videos without any manual supervision. But the results of these and other models often contain artifacts and errors, and those errors accumulate the longer the input video runs.
To overcome these flaws, the scientists' system takes the colorized result of a previous video frame as input, which preserves temporal consistency, and performs colorization using a reference image, letting that image guide the colorization frame by frame and cut down on accumulated error.
(If the reference is a colorized frame from the video itself, the system performs the same function as most other color-propagation methods, but in a much more robust way.) As a result, the system can predict "natural" colors based on the semantics of the input grayscale images, even when there is no exact match in the given reference image or in a previous frame.
The system relies on end-to-end convolutional networks, a type of AI model commonly used to analyze visual imagery, arranged recurrently so that historical information is retained. Each state consists of two modules: a correspondence model that aligns the reference image to the input frame based on dense semantic correspondences, and a colorization model that colorizes a frame guided by both the colorized result of the previous frame and the aligned reference.
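The recurrent structure described above can be sketched in a few lines. The `align_reference` and `colorize` functions below are hypothetical stand-ins for the paper's correspondence and colorization networks, reduced to trivial array operations so the frame-by-frame loop, in which each output depends on both the aligned reference and the previous colorized frame, is visible; this is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def align_reference(gray_frame, reference_rgb):
    """Stand-in for the correspondence network: warps the reference
    toward the input frame. Here, a naive nearest-neighbor resize
    replaces dense semantic matching."""
    h, w = gray_frame.shape
    rh, rw, _ = reference_rgb.shape
    ys = np.arange(h) * rh // h
    xs = np.arange(w) * rw // w
    return reference_rgb[ys][:, xs]

def colorize(gray_frame, aligned_ref, prev_colorized):
    """Stand-in for the colorization network: blends the aligned
    reference colors with the previous frame's colors, then mixes in
    the input frame's intensity as a rough luminance constraint."""
    blended = 0.5 * aligned_ref + 0.5 * prev_colorized
    lum = gray_frame[..., None]  # broadcast grayscale over channels
    return np.clip(0.5 * blended + 0.5 * lum, 0.0, 1.0)

def colorize_video(gray_frames, reference_rgb):
    """Recurrent loop: each frame is guided by the aligned reference
    and by the previous colorized result."""
    prev = align_reference(gray_frames[0], reference_rgb)
    outputs = []
    for frame in gray_frames:
        aligned = align_reference(frame, reference_rgb)
        prev = colorize(frame, aligned, prev)
        outputs.append(prev)
    return outputs

# toy data: three 4x4 grayscale frames and an 8x8 RGB reference
frames = [np.full((4, 4), 0.5) for _ in range(3)]
reference = np.random.rand(8, 8, 3)
result = colorize_video(frames, reference)
```

Because the previous output feeds into each new frame, reference colors propagate smoothly through time rather than being re-estimated independently per frame.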