Seminar Announcement: Dr. Sudipta Sinha (Microsoft) and Dr. Alessio Del Bue (Italian Institute of Technology), Friday, October 25, Osaka University Suita Campus
Yasuyuki Matsushita
yasumat @ ist.osaka-u.ac.jp
Tue, 8 Oct 2019 10:45:44 JST
Dear Image-ML members,

This is Matsushita from Osaka University.
We will host a seminar with Dr. Sudipta Sinha of Microsoft and Dr. Alessio Del Bue of the Italian Institute of Technology, as detailed below. No registration is required and admission is free.
We look forward to your participation.
Time: Friday, October 25th, 14:00-16:00
Place: B101, Building B, Graduate School of Information Science, Osaka U. (Suita campus)
— Talk 1 —
Speaker: Dr. Sudipta Sinha
Title: Privacy-Preserving Image-based Localization
Abstract:
Image-based localization methods register query images to 3D point cloud maps to estimate the camera pose within a scene. Such 3D maps are computed using structure from motion (SfM), after which the images are typically discarded for the sake of privacy. However, we showed recently that surprisingly detailed images of the scene can be reconstructed by inverting even sparse 3D point clouds and feature descriptors. This suggests that persistent point cloud storage poses potential privacy issues that will become increasingly relevant as augmented reality and robotics applications are widely adopted.
To address this issue, we pose a new question -- How can we avoid disclosing confidential information about the 3D scene map, and yet allow reliable camera pose estimation? We propose the first solution to this problem. It is based on transforming conventional 3D point clouds into 3D line clouds, where we replace each 3D point with a randomly oriented 3D line passing through the point. This novel map representation obfuscates the underlying scene geometry while providing sufficient geometric constraints to enable robust and accurate 6-DOF camera pose estimation. We also investigate the related problem of keeping the query image confidential, which is important when camera localization is performed in the cloud. Even when only image features are uploaded, the query images can be recovered by inverting the features which can compromise privacy. To address this issue, we propose a new feature representation which conceals the query image content from an adversary on the server but provides sufficient constraints to recover the camera pose.
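The core map transform described above — replacing each 3D point with a randomly oriented 3D line through it — can be illustrated with a minimal sketch. This is not the speakers' implementation; the representation (an anchor point plus a unit direction per line, with the anchor slid to a random position along the line so the original point is not stored) and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def points_to_line_cloud(points, rng, scale=10.0):
    """Replace each 3D point with a randomly oriented 3D line through it.

    Each line is stored as (anchor, direction): the anchor is an arbitrary
    point on the line (not the original map point), so the stored map no
    longer reveals where on the line the true point lies.
    """
    n = len(points)
    # Sample line directions uniformly on the unit sphere.
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    # Slide the stored anchor to a random position along the line.
    t0 = rng.uniform(-scale, scale, size=(n, 1))
    anchors = points + t0 * d
    return anchors, d

# A toy "scene map" of 100 random points.
points = rng.uniform(-1.0, 1.0, size=(100, 3))
anchors, dirs = points_to_line_cloud(points, rng)

# Geometric consistency check: every original point still lies on its line
# (zero perpendicular distance), which is what preserves the constraints
# needed for camera pose estimation.
v = points - anchors
perp = v - np.sum(v * dirs, axis=1, keepdims=True) * dirs
dist = np.linalg.norm(perp, axis=1)
```

The obfuscation comes from the fact that `anchors` and `dirs` determine only one-dimensional loci: any point along each line is equally plausible to an adversary, while the point-on-line constraint still pins down the 6-DOF camera pose when enough lines are matched.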
Bio:
Sudipta Sinha is a principal researcher at Microsoft Research Redmond. His research interests lie in computer vision, robotics and computer graphics. He works on 3D computer vision problems related to 3D scene reconstruction from images and video (structure from motion, visual odometry, dense stereo, optical flow, image-based localization, object detection and pose estimation). He is interested in applications such as 3D scanning, augmented reality (AR) and UAV-based aerial photogrammetry. He received his M.S. and Ph.D. from the University of North Carolina at Chapel Hill in 2005 and 2009 respectively. As a member of the UNC Chapel Hill team, he received the best demo award at CVPR 2007 for one of the first scalable, real-time, vision-based urban 3D reconstruction systems. He has served as an area chair for 3DV 2016, ICCV 2017, 3DV 2018 and 3DV 2019, was a program co-chair for 3DV 2017 and serves as an area editor for the Computer Vision and Image Understanding (CVIU) Journal.
— Talk 2 —
Speaker: Dr. Alessio Del Bue
Title: Dynamic illumination understanding: Modelling, measuring and controlling light in the invisible
Abstract:
Lighting design and modelling rely heavily on time-consuming manual measurements or on physically coherent computational simulations. Regarding the latter, standard approaches are based on CAD modelling simulations and offline rendering, with long processing times and therefore inflexible workflows. In this talk I will show that a single RGBD camera can provide an approximate solution for measuring lighting, even in real time as illumination conditions in the environment change. This solution requires a new lighting model based on radiosity that can account for real, non-pointwise illumination sources (bulbs, LEDs) and light perception models for estimating the correct luminance values in the scene. The new model is tested on both synthetic and real environments, demonstrating better performance than commercial state-of-the-art systems.
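For context, the classic radiosity balance that such models build on can be sketched in a few lines. This is the textbook diffuse formulation, not the speaker's extended model; the patch count, reflectances, and form factors below are toy values chosen only to make the example self-contained.

```python
import numpy as np

# Classic radiosity balance for diffuse patches: B = E + diag(rho) @ F @ B,
# where B is the radiosity of each patch, E the emitted radiosity (light
# sources), rho the diffuse reflectance, and F the form-factor matrix
# (fraction of light leaving patch i that reaches patch j).
E = np.array([1.0, 0.0, 0.0])           # patch 0 is the only emitter
rho = np.array([0.0, 0.7, 0.5])         # per-patch reflectances (toy values)
F = np.array([[0.0, 0.4, 0.4],
              [0.4, 0.0, 0.3],
              [0.4, 0.3, 0.0]])         # toy form factors, rows sum to <= 1

# Rearranged as a linear system: (I - diag(rho) F) B = E.
A = np.eye(3) - rho[:, None] * F
B = np.linalg.solve(A, E)
```

Solving this system gives the steady-state light exchange between surfaces; a real-time variant driven by RGBD measurements would re-estimate the geometry (and hence `F`) and the source terms `E` as the scene changes.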
Bio:
Alessio Del Bue is a Tenured Senior Researcher leading the PAVIS (Pattern Analysis and computer VISion) and Visual Geometry and Modelling (VGM) research lines at the Italian Institute of Technology (IIT). Previously, he was a researcher in the Institute for Systems and Robotics at the Instituto Superior Técnico (IST) in Lisbon, Portugal. Before that, he obtained his Ph.D. under the supervision of Dr. Lourdes Agapito in the Department of Computer Science at Queen Mary University of London.
--------
Yasuyuki Matsushita
Professor
Graduate School of Information Science and Technology
Osaka University
yasumat @ ist.osaka-u.ac.jp
Information about the image mailing list