Dane Webster, associate professor in the School of Visual Arts and studio head for the Institute for Creativity, Arts, and Technology, is working with Rolf Mueller, associate professor in Mechanical Engineering and an affiliated faculty member with the Institute for Critical Technology and Applied Science, to find different ways to use visualization and rendering software to better understand biosonar in bats.
This project addresses the need to prepare for a large-scale emergency crowd evacuation of Lane Stadium. A real-time crowd simulation system based on an accurate 3D model of Lane Stadium will be implemented, along with an emergency evacuation plan that uses real-time crowd simulation for different emergency scenarios, such as lightning or a bomb threat.
For Dane Webster, associate professor of animation and 3-D modeling and area coordinator of creative technologies in the College of Architecture and Urban Studies’ School of Visual Arts, one of the most important parts of his job is cultivating s
The Orb is a data visualization showcasing research projects associated with the Institute for Creativity, Arts, and Technology (ICAT) at Virginia Tech. Descriptions and keywords for the projects are analyzed for their “connectedness” to other projects. Arcing lines radiate out from any selected project and show weighted connections of up to 10 related projects. Viewers can also click on titles of the projects to read a short description or watch short summary videos.
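How the "connectedness" weighting might work can be sketched with a simple keyword-overlap (Jaccard) score; the project names and keywords below are illustrative placeholders, not the actual ICAT analysis or data.

```python
# Hypothetical sketch of keyword-based "connectedness" scoring.
# Project names and keywords are illustrative, not real ICAT data.

def jaccard(a, b):
    """Similarity of two keyword sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def related_projects(selected, projects, limit=10):
    """Return up to `limit` other projects, weighted by keyword overlap."""
    scores = [
        (name, jaccard(projects[selected], keywords))
        for name, keywords in projects.items()
        if name != selected
    ]
    scores = [(name, s) for name, s in scores if s > 0]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:limit]

projects = {
    "Biosonar Visualization": ["bats", "visualization", "rendering"],
    "Crowd Evacuation": ["simulation", "3d-model", "stadium"],
    "StockBubbler": ["visualization", "finance", "real-time"],
}
print(related_projects("Biosonar Visualization", projects))
# → [('StockBubbler', 0.2)]
```

A selected project radiates weighted edges only to projects with a nonzero score, capped at ten, matching the behavior described above.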
StockBubbler is an interactive visualization tool for exploring stock market data and anticipating the burst of a stock market bubble. Financial regulators can study market capitalization data through its circle-packed design, using the geometric mean return and the percentage change in performance to detect a potential bubble forming in the system.
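The two indicators named above can be computed directly from a price series. The sketch below shows one common formulation of each; the price series is made up, and this is not StockBubbler's actual implementation.

```python
# Illustrative computation of the geometric mean return and overall
# percentage change used as bubble indicators; prices are made up.
import math

def geometric_mean_return(prices):
    """Geometric mean of period-over-period returns, as a fraction."""
    ratios = [b / a for a, b in zip(prices, prices[1:])]
    return math.prod(ratios) ** (1 / len(ratios)) - 1

def percent_change(prices):
    """Overall percentage change from first to last price."""
    return (prices[-1] - prices[0]) / prices[0] * 100

prices = [100.0, 110.0, 121.0, 133.1]  # steady 10% growth per period
print(round(geometric_mean_return(prices), 4))  # → 0.1
print(round(percent_change(prices), 2))         # → 33.1
```

A sustained high geometric mean return combined with a large cumulative percentage change is the kind of signal a regulator might flag as a potential bubble.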
Education and learning are going through a massive transformation due to the emergence of the Internet and social media. The relationships among learners, and between learners and experts, have been redefined by massive open online courses (MOOCs) and other online learning forums. To understand such new forms of learning structures, it is critical to develop new ways of examining how learning takes place in these environments.
Rendering massive 3D models in real time has long been recognized as a very challenging problem because of the limited computational power and memory available on a workstation. Most existing rendering techniques, especially level-of-detail (LOD) processing, suffer from their inherently sequential execution. We present a GPU-based approach that enables interactive rendering of large 3D models with hundreds of millions of triangles.
Rendering massive 3D models at high frame rates has been recognized as a challenging task. In this work, we present a device-independent parallel algorithm that achieves high-performance rendering on a multi-GPU system. To optimize GPU utilization, our algorithm seamlessly integrates parallel LOD and CPU-GPU data streaming, and solves the load-balancing issues across GPU devices.
Interactive 3D animation of human figures is very common in video games, animation studios, and virtual environments. However, it is difficult to produce full-body animation that looks realistic enough to be comparable to studio-quality human motion data. Commercial motion capture systems are expensive and not suited to capture in everyday environments, and real-time requirements tend to reduce animation quality.
Speech-driven facial motion synthesis is a well-explored research topic. However, little has been done to model expressive visual behavior during speech. We address this issue using a machine learning approach that relies on a database of speech-related, high-fidelity facial motions. From this training set, we derive a generative model of expressive facial motion that incorporates emotion control while maintaining accurate lip-synching.
We introduce AVIST, a GPU-accelerated visual analytics tool. By adopting an in-situ visualization architecture on the GPUs, AVIST supports real-time analysis and visualization of massive datasets, such as the VAST 2012 Challenge dataset. The design objective of the tool is to identify temporal patterns in large and complex data. To achieve this goal, we introduce three unique features: automatic animation, disjunctive data filters, and time-synced visualization of multiple datasets.
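Of the three features, the disjunctive data filter has the simplest shape: records are kept if they match any one of several conditions (a logical OR), rather than all of them. The sketch below illustrates the idea on hypothetical event records; it is not AVIST's GPU implementation.

```python
# Minimal sketch of a disjunctive (OR-combined) data filter.
# The event records and fields below are hypothetical.

def disjunctive_filter(records, predicates):
    """Keep records matching ANY of the predicates (logical OR)."""
    return [r for r in records if any(p(r) for p in predicates)]

events = [
    {"time": 1, "type": "login", "host": "a"},
    {"time": 2, "type": "scan",  "host": "b"},
    {"time": 3, "type": "login", "host": "c"},
]
suspicious = disjunctive_filter(
    events,
    [lambda r: r["type"] == "scan", lambda r: r["host"] == "c"],
)
print([e["time"] for e in suspicious])  # → [2, 3]
```

A conjunctive (AND) filter would narrow the data with each added condition; the disjunctive form instead lets an analyst union several patterns of interest into one view.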