Abstract from the 2012 paper "Multiscale image fusion using an adaptive similarity-based sensor weighting scheme and human visual system-inspired contrast measure"
Tufts researchers have developed a novel image fusion algorithm with superior performance.
The goal of image fusion is to combine multiple source images, obtained using different capture techniques, into a single image that provides an effective contextual enhancement of a scene for human or machine perception. In practice, considerable value can be gained from fusing images that are dissimilar or complementary in nature. In such cases, however, global weighting schemes may not sufficiently weigh the contribution of the pertinent information in each source image, while existing adaptive schemes calculate weights based on the relative amounts of salient features, which can introduce severe artifacts or inadequate local luminance in the fusion result.
Accordingly, a new multiscale image fusion algorithm is proposed. The algorithm's approximation coefficient fusion rule is based on a novel similarity-based weighting scheme that yields improved fusion results whether the input source images are similar or dissimilar to each other. Moreover, the algorithm employs a new detail coefficient fusion rule built on a parametric multiscale contrast measure. The parametric nature of the contrast measure allows the degree to which the psychophysical laws of human vision are enforced to be tuned according to image-dependent characteristics. Experimental results illustrate the superior performance of the proposed algorithm both qualitatively and quantitatively.
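To make the two fusion rules concrete, the sketch below implements a single-level Haar-wavelet fusion in NumPy. It is an illustrative stand-in, not the paper's actual method: the similarity measure (global correlation of the approximation bands), the salience term (band variance), and the Weber-like contrast with tunable exponent `p` are all simplified assumptions chosen to mirror the abstract's description of a similarity-steered approximation rule and a parametric contrast-driven detail rule.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar transform: approximation LL and details LH, HL, HH."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0
    LH = (a + b - c - d) / 2.0
    HL = (a - b + c - d) / 2.0
    HH = (a - b - c + d) / 2.0
    return LL, LH, HL, HH

def haar_reconstruct(LL, LH, HL, HH):
    """Exact inverse of haar_decompose."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    img[0::2, 1::2] = (LL + LH - HL - HH) / 2.0
    img[1::2, 0::2] = (LL - LH + HL - HH) / 2.0
    img[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return img

def fuse(img1, img2, p=0.8):
    """Toy multiscale fusion: similarity-steered approximation rule,
    contrast-driven detail rule with a tunable exponent p."""
    LL1, LH1, HL1, HH1 = haar_decompose(img1)
    LL2, LH2, HL2, HH2 = haar_decompose(img2)

    # Approximation rule: when the sources are similar (high correlation s),
    # weights tend toward equal averaging; when dissimilar, the weight shifts
    # toward the more salient (higher-variance) source.
    s = np.corrcoef(LL1.ravel(), LL2.ravel())[0, 1]
    s = 1.0 if np.isnan(s) else float(np.clip(s, 0.0, 1.0))
    v1, v2 = LL1.var(), LL2.var()
    w_sal = v1 / (v1 + v2 + 1e-12)
    w = s * 0.5 + (1.0 - s) * w_sal
    LLf = w * LL1 + (1.0 - w) * LL2

    # Detail rule: keep the coefficient with the larger Weber-like contrast
    # |D| / (|LL| + eps), raised to the tunable exponent p (the "parametric"
    # knob standing in for the paper's human-vision-inspired measure).
    def pick(D1, D2):
        c1 = (np.abs(D1) / (np.abs(LL1) + 1e-6)) ** p
        c2 = (np.abs(D2) / (np.abs(LL2) + 1e-6)) ** p
        return np.where(c1 >= c2, D1, D2)

    return haar_reconstruct(LLf, pick(LH1, LH2), pick(HL1, HL2), pick(HH1, HH2))
```

A useful sanity check of any fusion rule is idempotence: fusing an image with itself should return the image unchanged, which this sketch satisfies because equal bands make both rules degenerate to identity.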