Please use this identifier to cite or link to this item: http://20.198.91.3:8080/jspui/handle/123456789/8845
Full metadata record
dc.contributor.advisor: Bhattacharjee, Debotosh
dc.contributor.author: Ghara, Sahasradal Kishor
dc.date.accessioned: 2025-10-10T05:33:53Z
dc.date.available: 2025-10-10T05:33:53Z
dc.date.issued: 2022
dc.date.submitted: 2022
dc.identifier.other: DC3466
dc.identifier.uri: http://20.198.91.3:8080/jspui/handle/123456789/8845
dc.description.abstract: Image-based 3D reconstruction is a very challenging problem in computer vision and deep learning. Since 2015, image-based 3D reconstruction using convolutional neural networks has attracted considerable attention and demonstrated impressive performance. We focus on work that uses deep-learning techniques to reconstruct the 3D shape of generic objects from single or multiple RGB images. However, unlike 2D images, 3D shapes have no single canonical representation that is both computationally lean and memory-efficient. This work proposes grid/voxel-based 3D object reconstruction from a single 2D image, for better accuracy, using an autoencoder (AE) model. The encoder part of the model learns a suitable compressed-domain representation of a single 2D image, and the decoder generates the corresponding 3D object. We also provide a comprehensive, structured review of recent advances in 3D object reconstruction using deep-learning techniques.
dc.format.extent: iv, 41 p.
dc.language.iso: en
dc.publisher: Jadavpur University, Kolkata, West Bengal
dc.subject: Deep learning
dc.subject: Autoencoders (AE) model
dc.title: Image-based 3D object reconstruction: state-of-the-art and trends in the deep learning era
dc.type: Text
dc.department: Jadavpur University, Dept. of Computer Science and Engineering
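The abstract's pipeline (encoder compresses a single 2D image into a latent vector; decoder expands it into a voxel occupancy grid) can be sketched as follows. This is a minimal illustrative sketch only: the layer types, image size (64x64 RGB), latent width (256), and grid resolution (32^3) are assumptions for demonstration, not the dissertation's actual architecture, and random linear weights stand in for trained convolutional layers.

```python
import numpy as np

# Illustrative sizes (assumptions, not taken from the dissertation).
IMG_SIDE = 64       # single RGB input image, 64 x 64 x 3
LATENT_DIM = 256    # compressed-domain representation
VOXEL_SIDE = 32     # output occupancy grid resolution, 32^3 voxels

rng = np.random.default_rng(0)

# Randomly initialised linear weights stand in for trained layers.
W_enc = rng.standard_normal((IMG_SIDE * IMG_SIDE * 3, LATENT_DIM)) * 0.01
W_dec = rng.standard_normal((LATENT_DIM, VOXEL_SIDE ** 3)) * 0.01

def encode(image: np.ndarray) -> np.ndarray:
    """Map a 2D RGB image to a compact latent vector."""
    return np.tanh(image.reshape(-1) @ W_enc)

def decode(z: np.ndarray) -> np.ndarray:
    """Map a latent vector to per-voxel occupancy probabilities."""
    logits = z @ W_dec
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid squashes to (0, 1)
    return probs.reshape(VOXEL_SIDE, VOXEL_SIDE, VOXEL_SIDE)

image = rng.random((IMG_SIDE, IMG_SIDE, 3))   # dummy input image
voxels = decode(encode(image))
print(voxels.shape)   # (32, 32, 32)
```

In a trained model the two weight matrices would be learned end to end (typically with a per-voxel binary cross-entropy loss against ground-truth occupancy grids), but the shape flow — image in, latent code in between, voxel grid out — is the same.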
Appears in Collections: Dissertations

Files in This Item:
File: MCA (Dept.of Computer Science and Engineering) Sahasradal Kishor Ghara.pdf
Size: 1.18 MB
Format: Adobe PDF


Items in IR@JU are protected by copyright, with all rights reserved, unless otherwise indicated.