The planned project with these collaborators is to employ safe, noninvasive motion capture and 3D visualization tools to study canine puppy socialization behaviors in a newly developed portable motion capture unit. While individual interactions have been widely described in the veterinary literature, few recorded observations of group dynamics are available. An array of synchronized structured light sensors offers a feasible technique for recording puppy social interactions as a collection of moving objects and for quantifying them. Structured light sensors, such as the Microsoft Kinect, can capture the depth of a scene over time. This technique for capturing depth is faster than a laser scanner because it covers the entire field of view in every frame, making it better suited to moving objects. Several of these sensors can be combined to provide a more complete image of a 3D scene. Adding tiny vibrations to each sensor reduces the interference between their projected patterns. With the array of sensors calibrated to a common environment, their depth measurements can be combined into an aggregate point cloud. The point cloud scene can then be segmented to uniquely identify and track separate objects. In future studies, these data can be paired with machine learning algorithms to computationally identify social behaviors.
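The aggregation step described above can be sketched in a few lines: once each sensor's extrinsic calibration (a 4x4 homogeneous transform from its local coordinates into the shared room frame) is known, every sensor's depth points are mapped into that frame and concatenated. The matrices, point values, and function names below are purely illustrative assumptions, not part of the project's actual pipeline.

```python
# Minimal sketch (hypothetical values): merging depth points from two
# calibrated sensors into one aggregate point cloud.

def transform(point, extrinsic):
    """Apply a 4x4 homogeneous transform to an (x, y, z) point."""
    x, y, z = point
    p = (x, y, z, 1.0)
    return tuple(sum(extrinsic[r][c] * p[c] for c in range(4))
                 for r in range(3))

def aggregate(clouds, extrinsics):
    """Map each sensor's cloud into the shared frame and merge them."""
    merged = []
    for cloud, ext in zip(clouds, extrinsics):
        merged.extend(transform(p, ext) for p in cloud)
    return merged

# Illustrative calibration: sensor A is the reference (identity);
# sensor B is mounted 2 m away along the x axis.
ext_a = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
ext_b = [[1, 0, 0, 2.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

cloud_a = [(0.5, 0.2, 1.0)]
cloud_b = [(0.5, 0.2, 1.0)]  # same local reading, different sensor

print(aggregate([cloud_a, cloud_b], [ext_a, ext_b]))
# -> [(0.5, 0.2, 1.0), (2.5, 0.2, 1.0)]
```

In practice the transforms would come from a calibration procedure against a shared target, and the merged cloud would then feed the segmentation and tracking stages.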
Thomas Tucker; Creative Technology, School of Visual Arts; College of Architecture and Urban Studies
Bess J. Pierce; Department of Small Animal Clinical Sciences; Virginia-Maryland Regional College of Veterinary Medicine
Jeryl C. Jones; Division of Animal and Nutritional Sciences; Davis College, West Virginia University
Matthew Swarts; Digital Building Lab; SimTigrate Design Lab, Georgia Tech