Augmented Reality (AR) is a technology for displaying computer graphics overlaid on the real world. Different mechanisms can be used to display the graphics. One option, called optical see-through, uses head-mounted transparent displays: the graphics are projected onto clear lenses using mirrors or other optics, allowing the user to view the world and the graphics at the same time. However, accurately registering the graphics with the world requires eye tracking, which can be difficult. Another option is called video see-through. In this case, a camera image is digitized, overlaid with graphics, and then displayed to the user. The display could also be head mounted in this case, but hand-held displays are another attractive option. For example, a PDA or tablet PC with an integrated camera is a natural hand-held AR platform, and a more likely candidate than head-mounted displays for the future proliferation of this technology.
At CSM we have developed a hand-held AR system that uses 3D fiducials (orange cones) for registration, supplemented by MEMS-based gyros that supply rotational velocity information. The fiducials are segmented using color and shape. A fast absolute orientation algorithm determines the camera position relative to the default triangular model of the cones. An Extended Kalman Filter fuses the pose information obtained from the fiducials with the gyro data to produce a final pose estimate, which is used to correctly place the virtual object in the scene. The system runs at frame rate (30 Hz) on a PC with a dual-core processor and a mid-range graphics card.
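The absolute orientation step above recovers the camera pose from matched 3D model and camera-frame points. The sketch below is a standard SVD-based (Kabsch/Horn-style) least-squares rigid fit, shown for illustration only; it is not necessarily the group's exact algorithm, and the function name is our own.

```python
import numpy as np

def absolute_orientation(model_pts, cam_pts):
    """Least-squares rigid fit: find R, t with cam_pts ~ R @ model_pts + t.
    Inputs are 3xN arrays of corresponding points (SVD-based solution)."""
    mu_m = model_pts.mean(axis=1, keepdims=True)
    mu_c = cam_pts.mean(axis=1, keepdims=True)
    # 3x3 cross-covariance of the centered point sets
    H = (cam_pts - mu_c) @ (model_pts - mu_m).T
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) solution
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_c - R @ mu_m
    return R, t
```

With three non-collinear cone fiducials (the triangular model mentioned above), this closed-form solve is fast enough to run every frame.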
- Bao Nguyen, Graduate Student
- John Steinbis, Graduate Student
- Bill Hoff, Faculty
- Tyrone Vincent, Faculty
Interested? Augmented Reality brings together technologies from computer vision, computer graphics, signal processing, and tracking and estimation. Key courses for students interested in AR are:
- EGGN510 Image Processing
- EGGN512 Computer Vision
- EGGN515 Mathematical Methods for Signals and Systems
- EGGN519 Estimation Theory and Kalman Filtering
- CSCI441 Computer Graphics
as well as solid C++ programming skills!
New Extensions of the 3-Simplex for Exterior Orientation
John M. Steinbis, Tyrone Vincent, and Bill Hoff
Submitted to ICPR 2008, April 2008
Abstract: Object pose may be determined from a set of 2D image points and corresponding 3D model points, given the camera's intrinsic parameters. In this paper, two new exterior orientation algorithms are proposed and compared against the Efficient PnP Method and the Orthogonal Iteration Algorithm. As an alternative to the homogeneous transformation, both algorithms use the 3-simplex as a pose parameterization. One algorithm uses a semidefinite program and the other a Gauss-Newton iteration.
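For readers unfamiliar with exterior orientation, the sketch below shows the generic problem the paper addresses: refining a camera pose so that projected 3D model points match observed 2D image points. It uses a plain axis-angle parameterization with a finite-difference Gauss-Newton step, not the paper's 3-simplex parameterization or semidefinite program; all function names are our own.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector w -> 3x3 rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def project(pose, X, K):
    """pose = [rotation w; translation t] (6-vector), X = 3xN model points,
    K = 3x3 intrinsics. Returns a flattened 2N vector of pixel coordinates."""
    R = rodrigues(pose[:3])
    t = pose[3:].reshape(3, 1)
    Xc = R @ X + t                 # model points in the camera frame
    uv = K @ (Xc / Xc[2])          # perspective division, then intrinsics
    return uv[:2].T.ravel()

def gauss_newton_pose(X, uv_obs, K, pose0, iters=15):
    """Refine a pose guess by minimizing reprojection error.
    Jacobian is approximated by central finite differences."""
    pose = np.asarray(pose0, dtype=float).copy()
    eps = 1e-6
    for _ in range(iters):
        r = project(pose, X, K) - uv_obs
        J = np.empty((r.size, 6))
        for j in range(6):
            dp = np.zeros(6)
            dp[j] = eps
            J[:, j] = (project(pose + dp, X, K) - project(pose - dp, X, K)) / (2 * eps)
        pose -= np.linalg.lstsq(J, r, rcond=None)[0]
    return pose
```

Like the Orthogonal Iteration Algorithm, this is an iterative local method: it needs a reasonable initial pose and at least three non-collinear point correspondences to constrain the six degrees of freedom.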
3D Fiducials for Scalable AR Visual Tracking
John M. Steinbis, Tyrone Vincent, and Bill Hoff
Provisionally Accepted as Poster to ISMAR 2008
Abstract: A new vision and inertial pose estimation system was implemented for real-time handheld augmented reality (AR). A sparse set of 3D cone fiducials is used for scalable indoor/outdoor tracking, as opposed to traditional planar patterns. The cones are easy to segment and have a large working volume, which makes them more suitable for many applications. The pose estimation system receives measurements from the camera and IMU at 30 Hz and 100 Hz, respectively. With a dual-core workstation, all measurements can be processed in real time to update the pose of virtual graphics within the AR display. To simulate systems with less available computational resources, such as PDAs, the effect of lower camera update rate and resolution on overlay accuracy is studied. To quantify performance, a method was developed to measure overlay error from the misalignment of real and virtual dot patterns. Results show that update rate and resolution may be decreased without ill effect when gyros are also used.
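The overlay-error metric mentioned in the abstract can be illustrated with a minimal sketch: given detected centers of the real dots and the rendered centers of the virtual dots (in pixels), report how far apart corresponding dots are. The nearest-neighbor matching and function name below are our assumptions, not the paper's exact procedure.

```python
import numpy as np

def overlay_error(real_dots, virtual_dots):
    """Pixel misalignment between a rendered (virtual) dot pattern and the
    detected (real) pattern. Inputs are Nx2 arrays of dot centers; each
    virtual dot is matched to its nearest real dot."""
    # pairwise distances: virtual (rows) vs. real (columns)
    d = np.linalg.norm(virtual_dots[:, None, :] - real_dots[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.max()
```

Running this metric while varying camera update rate and resolution gives a scalar registration-quality curve of the kind the study reports.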
3-D motion and structure estimation using inertial sensors and computer vision for augmented reality
Lin Chai, William A. Hoff and Tyrone L. Vincent
Presence: Teleoperators and Virtual Environments, Vol. 11, No. 5, pp. 474-491, 2002.
Abstract: A new method for registration in augmented reality (AR) was developed that simultaneously tracks the position, orientation, and motion of the user's head, and estimates the three-dimensional (3-D) structure of the scene. The method fuses data from head-mounted cameras and head-mounted inertial sensors. Two Extended Kalman Filters (EKFs) are used: one estimates the motion of the user's head and the other estimates the 3-D locations of points in the scene. A recursive loop connects the two EKFs. The algorithm was tested using a combination of synthetic and real data, and in general was found to perform well. A further test showed that a system using two cameras performed much better than a system using a single camera, although improving the accuracy of the inertial sensors can partially compensate for the loss of one camera. The method is suitable for use in completely unstructured and unprepared environments. Unlike previous work in this area, this method requires no a priori knowledge about the scene, and can work in environments where the objects of interest are close to the user.
Analysis of Head Pose Accuracy in Augmented Reality
William A. Hoff and Tyrone L. Vincent
IEEE Trans. Visualization and Computer Graphics, Vol. 6, No. 4, 2000.
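The vision/inertial fusion pattern running through these papers (predict pose by integrating fast inertial rates, correct with slower vision fixes) can be sketched with a one-axis Kalman filter. This is a deliberately simplified scalar illustration of the idea, not the papers' full EKF formulations; the function name and noise values are our own.

```python
def kf_fuse(angles_vision, rates_gyro, dt, q=1e-4, r_vis=1e-2):
    """One-axis Kalman filter: predict orientation by integrating the gyro
    rate, correct with the (noisier, slower) vision measurement.
    angles_vision entries may be None when no vision fix is available.
    q: process noise variance, r_vis: vision measurement variance."""
    x, P = 0.0, 1.0              # state (angle, rad) and its variance
    out = []
    for z, w in zip(angles_vision, rates_gyro):
        # predict: integrate the gyro rate over one time step
        x = x + w * dt
        P = P + q
        if z is not None:        # vision fix available this step
            K = P / (P + r_vis)  # Kalman gain
            x = x + K * (z - x)
            P = (1.0 - K) * P
        out.append(x)
    return out
```

Because the gyro carries the estimate between vision fixes, the filter tolerates a reduced camera update rate, which is the effect studied quantitatively in the ISMAR 2008 paper above.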
An Adaptive Estimator for Registration in Augmented Reality
Lin Chai, Khoi Nguyen, Bill Hoff, Tyrone Vincent
2nd International Workshop on Augmented Reality, 1999
AR videos: Here