Point-cloud decomposition for scene analysis and understanding
Photography -- Digital techniques
University of Malta. Faculty of ICT
Spina, S. (2013). Point-cloud decomposition for scene analysis and understanding. Computer Science Annual Workshop CSAW'13, Msida, 5-6.
Over the past decade digital photography has taken over traditional film-based photography. The same can be said for video production. A practice traditionally reserved for the few has nowadays become commonplace, leading to the creation of massive repositories of digital photographs and videos in various formats. Recently, another digital representation has started gaining traction, namely one that captures the geometry of real-world objects. Here, instead of using light sensors to store per-pixel colour values of visible objects, depth sensors (and additional hardware) are used to record the distance (depth) to the visible objects in a scene. This depth information can be used to create virtual reconstructions of the objects and scenes captured. Various technologies have been proposed and successfully used to acquire this information, ranging from very expensive equipment (e.g. long-range 3D scanners) to commodity hardware (e.g. Microsoft Kinect and Asus Xtion). A considerable amount of research has also looked into the extraction of accurate depth information from multi-view photographs of objects using specialised software (e.g. Microsoft PhotoSynth amongst many others). Recently, rapid advances in ubiquitous computing have also brought to the masses the possibility of capturing the world around them in 3D using smartphones and tablets (e.g. http://structure.io/).
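The depth-to-geometry step described above can be sketched in a few lines. The following is an illustrative back-projection of a per-pixel depth map into a point cloud under an assumed pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) and the tiny depth image are hypothetical values for illustration, not taken from the paper or any specific sensor:

```python
# Illustrative sketch: turning per-pixel depth readings into 3D points
# under an assumed pinhole camera model. Intrinsics are hypothetical.

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth map (rows of per-pixel depths in metres)
    into a list of (x, y, z) points in camera coordinates."""
    points = []
    for v, row in enumerate(depth):          # v: pixel row index
        for u, z in enumerate(row):          # u: pixel column index
            if z <= 0:                       # skip missing/invalid readings
                continue
            x = (u - cx) * z / fx            # back-project along x
            y = (v - cy) * z / fy            # back-project along y
            points.append((x, y, z))
    return points

# Tiny 2x2 depth image; a zero marks a missing depth reading.
depth = [[1.0, 0.0],
         [2.0, 1.5]]
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(len(cloud))  # 3 valid points recovered
```

Commodity depth cameras such as the Kinect produce exactly this kind of per-pixel depth image; real pipelines would additionally fuse colour, filter noise, and register multiple views, which is where scene decomposition and analysis begin.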
Appears in Collections:
Scholarly Works - FacICTCS
Files in This Item:
Proceedings of CSAW'13 - A10.pdf
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.