Augmenting Aerial Earth Maps with Dynamic Information
Kihwan Kim, Sangmin Oh, Jeonggyu Lee, and Irfan Essa
College of Computing, School of Interactive Computing
GVU Center, Georgia Institute of Technology
* Virtual Reality Journal, Springer, 2011
[ Paper: PDF | BibTex ]
* IEEE/ACM ISMAR 2009 (International Symposium on Mixed and Augmented Reality), Orlando, Florida, USA
[ Paper: PDF | Presentation (talk at ISMAR): PPT | PDF | BibTex ]
** This work was funded in part by a Google Research Award
(http://research.google.com/university/relations/research_awards.html).
Abstract
We introduce methods for augmenting aerial visualizations of Earth (from tools such as Google Earth or Microsoft Virtual Earth) with dynamic information obtained from videos. Our goal is to make Augmented Earth Maps that visualize plausible live views of dynamic scenes in a city. We propose different approaches to analyze videos of pedestrians and cars in real situations, under differing conditions, to extract dynamic information. We then augment Aerial Earth Maps (AEMs) with the extracted live and dynamic content. We also analyze natural phenomena (skies, clouds) and project information from these onto the AEMs to add to the visual reality.
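As a rough illustration of the video analysis step, the sketch below uses OpenCV background subtraction and blob extraction to recover per-frame ground-contact points of moving objects (cars or pedestrians) from a single fixed camera. The file name, thresholds, and kernel size are illustrative assumptions; this is a minimal sketch of one common approach, not the pipeline used in the paper.

import cv2

# Hypothetical input: a single fixed camera looking at a street scene.
cap = cv2.VideoCapture("street_cam.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

per_frame_points = []                     # ground-contact points of moving blobs
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)        # foreground (moving) pixels
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow labels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        if cv2.contourArea(c) < 100:      # assumed minimum blob size, in pixels
            continue
        x, y, w, h = cv2.boundingRect(c)
        points.append((x + w / 2.0, y + h))   # bottom-center of box ~ contact with ground
    per_frame_points.append(points)
cap.release()

The bottom-center of each bounding box is kept because it approximates where the object touches the ground plane, which is what gets transferred onto the aerial map.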
Our primary contributions are: (1) analyzing videos with different viewpoints, coverage, and overlaps to extract relevant information about view geometry and movements, with limited user input; (2) projecting this information appropriately to the viewpoint of the AEMs and modeling the dynamics in the scene from observations to allow inference (in case of missing data) and synthesis, which we demonstrate over a variety of camera configurations and conditions; and (3) registering the modeled information from the videos to the AEMs to render appropriate movements and related dynamics, which we demonstrate with traffic flow, people movements, and cloud motions. All of these approaches are brought together as a prototype system for a real-time visualization of a city that is alive and engaging.
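To make the projection in contribution (2) concrete under a simple assumption (scene motion on a roughly planar ground), the sketch below estimates a ground-plane homography from a handful of user-supplied video-to-map correspondences and re-projects tracked ground-contact points into aerial-map coordinates. The correspondence values, the to_map helper, and the use of OpenCV are illustrative assumptions rather than the paper's actual method, which additionally covers view blending and dynamics modeling.

import numpy as np
import cv2

# Four hand-picked landmark correspondences (values are made up for illustration):
# pixel positions in the video frame vs. the same points in aerial-map pixels.
video_pts = np.array([[120, 460], [610, 450], [580, 210], [160, 220]], dtype=np.float32)
map_pts   = np.array([[1024, 2048], [1180, 2040], [1172, 1900], [1030, 1908]], dtype=np.float32)

H, _ = cv2.findHomography(video_pts, map_pts, cv2.RANSAC)

def to_map(points_xy):
    """Re-project ground-contact points from the video frame into aerial-map coordinates."""
    pts = np.asarray(points_xy, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: a car footprint detected at image pixel (300, 400).
print(to_map([(300, 400)]))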
System Overview
Result Images
(Note that all rendered footage is generated in real time)
Direct Mapping
Multiple View Blending
Unobservable region estimation and simulation
Video-driven cloud synthesis
More results
More results from our prototype system using 36 videos: (1) OCCM (view blending): (a) five cameras for a soccer game, (b) two broadcast feeds of an NCAA football game, (c) three surveillance cameras. (2) SCSM (traffic): (d) merging lanes, (e) a rendered traffic scene and the corresponding simulated scene, (f) eight cameras for a larger-scale traffic simulation including merging and splitting. (3) DM (pedestrians): (g) direct mapping of a pedestrian with simple motion. (4) SCCM (clouds): (h) four videos for cloud and sky generation. (5) (i) placing an event at a different location.