Mayura Manawadu
Master's Student
Room 415, IT College Building 1, Kyungpook National University
Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu
Tel: +82-53-940-8598
Fax: +82-53-957-4846
Email: mmayurapavan@gmail.com
Education
2023.03 ~ Present: M.S. in Engineering, School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University
Research
 
Projects
 
Publications
International Journals
  • [DOI] M. Manawadu and S. Park, “6DoF Object Pose and Focal Length Estimation from Single RGB Images in Uncontrolled Environments,” Sensors, vol. 24, iss. 17, 2024.
    [Bibtex]
    @Article{s24175474,
    AUTHOR = {Manawadu, Mayura and Park, Soon-Yong},
    TITLE = {6DoF Object Pose and Focal Length Estimation from Single RGB Images in Uncontrolled Environments},
    JOURNAL = {Sensors},
    VOLUME = {24},
    YEAR = {2024},
    NUMBER = {17},
    ARTICLE-NUMBER = {5474},
    URL = {https://www.mdpi.com/1424-8220/24/17/5474},
    PubMedID = {39275384},
    ISSN = {1424-8220},
    ABSTRACT = {Accurate 6DoF (degrees of freedom) pose and focal length estimation are important in extended reality (XR) applications, enabling precise object alignment and projection scaling, thereby enhancing user experiences. This study focuses on improving 6DoF pose estimation using single RGB images of unknown camera metadata. Estimating the 6DoF pose and focal length from an uncontrolled RGB image, obtained from the internet, is challenging because it often lacks crucial metadata. Existing methods such as FocalPose and Focalpose++ have made progress in this domain but still face challenges due to the projection scale ambiguity between the translation of an object along the z-axis (tz) and the camera’s focal length. To overcome this, we propose a two-stage strategy that decouples the projection scaling ambiguity in the estimation of z-axis translation and focal length. In the first stage, tz is set arbitrarily, and we predict all the other pose parameters and focal length relative to the fixed tz. In the second stage, we predict the true value of tz while scaling the focal length based on the tz update. The proposed two-stage method reduces projection scale ambiguity in RGB images and improves pose estimation accuracy. The iterative update rules constrained to the first stage and tailored loss functions including Huber loss in the second stage enhance the accuracy in both 6DoF pose and focal length estimation. Experimental results using benchmark datasets show significant improvements in terms of median rotation and translation errors, as well as better projection accuracy compared to the existing state-of-the-art methods. In an evaluation across the Pix3D datasets (chair, sofa, table, and bed), the proposed two-stage method improves projection accuracy by approximately 7.19%. Additionally, the incorporation of Huber loss resulted in a significant reduction in translation and focal length errors by 20.27% and 6.65%, respectively, in comparison to the Focalpose++ method.},
    DOI = {10.3390/s24175474}
    }

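The abstract above turns on the projection-scale ambiguity between the object's z-axis translation (tz) and the camera focal length. The following is a minimal Python sketch (not the paper's implementation; all numbers are hypothetical) of why the second stage can rescale the focal length together with the predicted tz without changing the 2D projection, assuming a simple pinhole model.

    def project_x(X, Z, f):
        """Pinhole projection of a 3D x-coordinate at depth Z with focal length f."""
        return f * X / Z

    # Stage 1: pose and focal length are estimated with tz fixed to an arbitrary value.
    tz_fixed = 1.0    # arbitrary fixed depth used in stage 1 (hypothetical)
    f_stage1 = 600.0  # focal length predicted relative to tz_fixed (hypothetical)

    # Stage 2: the true tz is predicted, and the focal length is rescaled so that
    # the projection scale f / tz is preserved.
    tz_true = 2.5                             # hypothetical stage-2 depth estimate
    f_stage2 = f_stage1 * tz_true / tz_fixed  # rescaled focal length

    X = 0.2  # object-frame x offset in metres (hypothetical)
    print(project_x(X, tz_fixed, f_stage1))   # 120.0 before the update
    print(project_x(X, tz_true, f_stage2))    # 120.0 after the joint update

The identical projections illustrate why tz and the focal length cannot be recovered independently from a single image, which is exactly the ambiguity the two-stage strategy decouples.
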
International Conferences
  • [URL] M. Manawadu and S. Park, “Enhancing 6DoF Pose and Focal Length Estimation from Uncontrolled RGB Images for Robotics Vision,” in ICRA 2024 Workshop on 3D Visual Representations for Robot Manipulation, 2024.
    [Bibtex]
    @inproceedings{manawadu2024enhancing,
    title={Enhancing 6DoF Pose and Focal Length Estimation from Uncontrolled RGB Images for Robotics Vision},
    author={Mayura Manawadu and Soon-Yong Park},
    booktitle={ICRA 2024 Workshop on 3D Visual Representations for Robot Manipulation},
    year={2024},
    url={https://openreview.net/forum?id=5QDK8nd6NP}
    }

Domestic Journals

Domestic Conferences
  • M. Manawadu, S. Park, and S. Park, “Advancing 6D Pose Estimation in Augmented Reality – Overcoming Projection Ambiguity with Uncontrolled Imagery,” IPIU, 2024.
    [Bibtex]
    @inproceedings{Mayura2024advancing,
    title={Advancing 6D Pose Estimation in Augmented Reality - Overcoming Projection Ambiguity with Uncontrolled Imagery},
    author={Manawadu, Mayura and Park, Sieun and Park, Soon-Yong},
    booktitle={IPIU},
    year={2024}
    }