Open Access Journal

ISSN: 2394-2320 (Online)

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE)

Monthly Journal for Computer Science and Engineering


Perceptual Image Fusion of CT and MR Images Using Wavelets

Authors: Ch. N. S. Hanumantha Rao¹, Sai Chaitanya A.², Saiteja G.³, Dr. P. Venkatesan⁴

Date of Publication: 20th April 2017

Abstract: The aim of this image fusion approach is to employ explicit luminance and contrast masking models. In this paper, the Dual-Tree Complex Wavelet Transform (DT-CWT) is used to decompose each input image so that its subbands can be analysed carefully; the coefficients retained from this decomposition preserve the information from each source image in the most effective way. The Discrete Wavelet Transform is then used to carry out the fusion. In this way, the complexity of identifying disease from MR and CT images is reduced.
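
The abstract describes a wavelet-based pipeline: decompose the registered CT and MR slices, combine the subband coefficients, and invert the transform to obtain the fused image. As a minimal sketch only (not the authors' implementation), the Python code below uses the PyWavelets discrete wavelet transform with an assumed fusion rule: average the approximation band and keep the larger-magnitude detail coefficient. The paper's DT-CWT decomposition and perceptual luminance/contrast masking models are not reproduced here.

    import numpy as np
    import pywt  # PyWavelets

    def fuse_dwt(ct, mr, wavelet="db2", level=3):
        """Fuse two registered, same-size grayscale slices with a simple DWT rule.

        Assumed rule (not from the paper): average the approximation band and
        keep the larger-magnitude detail coefficient from either source.
        """
        c_ct = pywt.wavedec2(ct, wavelet, level=level)
        c_mr = pywt.wavedec2(mr, wavelet, level=level)

        fused = [0.5 * (c_ct[0] + c_mr[0])]             # approximation band: average
        for d_ct, d_mr in zip(c_ct[1:], c_mr[1:]):      # (cH, cV, cD) details per level
            fused.append(tuple(
                np.where(np.abs(a) >= np.abs(b), a, b)  # keep the stronger response
                for a, b in zip(d_ct, d_mr)))
        return pywt.waverec2(fused, wavelet)

    if __name__ == "__main__":
        ct = np.random.rand(256, 256)   # stand-in for a registered CT slice
        mr = np.random.rand(256, 256)   # stand-in for the matching MR slice
        print(fuse_dwt(ct, mr).shape)   # fused slice, same size as the inputs here

A perceptual variant along the lines of the abstract would weight the detail coefficients with contrast- and luminance-masking models, and would replace the DWT with a shift-invariant dual-tree complex wavelet transform (for example via the third-party dtcwt package).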

References:

    1. P. Hill, M. E. Al-Mualla, and D. R. Bull, “Perceptual image fusion using wavelets,” IEEE Trans. Image Process., vol. 26, no. 3, Mar. 2017.
    2. P. R. Hill, C. N. Canagarajah, and D. R. Bull, “Image fusion using complex wavelets,” in Proc. 13th Brit. Mach. Vis. Conf., 2002, pp. 487–496.
    3. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” in Proc. IEEE Int. Conf. Image Process., vol. 1, Nov. 1994, pp. 51–55.
    4. N. Kingsbury, “The dual-tree complex wavelet transform: A new technique for shift invariance and directional filters,” in Proc. IEEE Digit. Signal Process. Workshop, Aug. 1998, pp. 319–322.
    5. V. S. Petrovic and C. S. Xydeas, “Gradient-based multiresolution image fusion,” IEEE Trans. Image Process., vol. 13, no. 2, pp. 228–237, Feb. 2004.
    6. O. Rockinger, “Pixel-level fusion of image sequences using wavelet frames,” in Proc. 16th Leeds Appl. Shape Res. Workshop, Jul. 1996, pp. 149–154.
    7. A. Toet, “Hierarchical image fusion,” Mach. Vis. Appl., vol. 3, no. 1, pp. 3–11, 1990.
    8. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah, “Pixel and region-based image fusion with complex wavelets,” Inf. Fusion, vol. 8, no. 2, pp. 119–130, 2007.
    9. S. Nercessian, K. Panetta, and S. Agaian, “Human visual system-based image fusion for surveillance applications,” in Proc. IEEE Int. Conf. Syst., Man Cybern., Oct. 2011, pp. 2687–2691.
    10. G. Bhatnagar, Q. M. J. Wu, and Z. Liu, “Human visual system inspired multi-modal medical image fusion framework,” Expert Syst. Appl., vol. 40, no. 5, pp. 1708–1720, 2013.
    11. M. Li, W. Cai, and Z. Tan, “A region-based multisensor image fusion scheme using pulse-coupled neural network,” Pattern Recognit. Lett., vol. 27, no. 16, pp. 1948–1956, 2006.
    12. J. Huang, Y. Shi, and X. Dai, “A segmentation-based image coding algorithm using the features of human vision system,” J. Image Graph., vol. 4, no. 5, pp. 400–404, 1999.
    13. C. Wang and Z.-F. Ye, “Perceptual contrast-based image fusion: A variational approach,” Acta Autom. Sinica, vol. 33, no. 2, pp. 132–137, 2007.
    14. J. A. Ferwerda, S. N. Pattanaik, P. Shirley, and D. P. Greenberg, “A model of visual masking for computer graphics,” in Proc. SIGGRAPH, 1997, pp. 173–182.
    15. S. J. Daly, “Visible differences predictor: An algorithm for the assessment of image fidelity,” Digit. Images Human Vis., vol. 1666, pp. 179–206, Aug. 1993.
