
Real and Complex Representation of Image in Curvelet Domain

Rikin Nayak, Jignesh Bhavsar, J.P. Chaudhari

Abstract


In this paper we describe the curvelet representation of images. Many applications, such as image compression, require a sparse representation of the image. The curvelet transform is localized not only in position (the spatial domain) and scale (the frequency domain) but also in orientation, so it handles curved discontinuities more effectively than the wavelet transform. We compare real and complex curvelet coefficients under different specifications, and we also compare real and complex curvelets for object tracking using an energy-based searching algorithm.
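The energy-based searching idea described above can be sketched as follows: compute the transform-coefficient energy of a template, then slide a window over the frame and pick the window whose coefficient energy is closest to the template's. This is a minimal illustration, not the authors' implementation; `np.fft.fft2` stands in for a real or complex curvelet transform, and the function names are hypothetical.

```python
import numpy as np

def coeff_energy(coeffs):
    """Sum of squared magnitudes of (possibly complex) transform coefficients."""
    return float(np.sum(np.abs(coeffs) ** 2))

def energy_based_search(frame, template, transform=np.fft.fft2):
    """Slide a window over `frame` and return the top-left corner of the
    window whose coefficient energy is closest to the template's energy.
    `transform` is a placeholder for a real or complex curvelet transform."""
    th, tw = template.shape
    target = coeff_energy(transform(template))
    best_diff, best_pos = np.inf, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            e = coeff_energy(transform(frame[i:i + th, j:j + tw]))
            if abs(e - target) < best_diff:
                best_diff, best_pos = abs(e - target), (i, j)
    return best_pos

# Toy example: embed the template in an otherwise empty frame and locate it.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
frame = np.zeros((32, 32))
frame[10:18, 5:13] = template
print(energy_based_search(frame, template))  # prints (10, 5)
```

With a genuine curvelet transform, the same search can be restricted to coefficients at selected scales and orientations, which is where the real-versus-complex comparison in the paper becomes relevant.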


Keywords


Curvelet Transform, Ridgelet Transform


References


N.T. Binh, A. Khare, "Object Tracking of Video Sequences in Curvelet Domain", International Journal of Image and Graphics, Vol. 11, No. 1, pp. 1–20, World Scientific Publishing.

E.J. Candès and D.L. Donoho, "Curvelets: A Surprisingly Effective Nonadaptive Representation for Objects with Edges", in Curve and Surface Fitting: Saint-Malo 99, A. Cohen, J.-L. Merrien, and L.L. Schumaker, Eds. Nashville, TN: Vanderbilt Univ. Press, 2000, pp. 105–120.

E.J. Candès, L. Demanet, D.L. Donoho, and L. Ying, "Fast Discrete Curvelet Transforms", Multiscale Model. Simul., Vol. 5, No. 3, pp. 861–899, 2006.

Yan Shang, Yan-Hua Diao, Chun-Ming Li, "Rotation Invariant Texture Classification Algorithm Based on Curvelet Transform and SVM", Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, 12–15 July 2008.

David L. Donoho, Mark R. Duncan, "Digital Curvelet Transform: Strategy, Implementation and Experiments", Nov. 1999.

Frames for tracking: http://www.inf.ed.ac.uk/teaching/courses/av/MATLAB/TASK6/DATA/

Curvelet toolbox: www.curvelet.org




Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.