Anselm Grundhöfer

Campbell, California, United States
1K followers 500+ connections

About

Focused on making tech products that people love, from consumer electronics to theme park…

Experience

  • Apple

    San Francisco Bay Area

  • -

    San Francisco Bay Area

  • -

    Zürich area, Switzerland

  • -

    Zürich

  • -

    Zürich

  • -

    Zürich

Education

  • Bauhaus-Universität Weimar
  • -

    Faculty of Media, Augmented Reality Department. Main areas:
    Projector-Camera Systems, Computer Vision, Real-Time Image Processing.

  • -

    Main areas:
    Real-Time Computer Graphics, Display Technology

  • -

    Fulfilling the requirements of BGV B2

  • -

    Half-year remote course in personnel management

Publications

  • Color Display Using 365 nm UV Light Projection onto an Optimized Fluorescent Pattern Surface

    Journal of Imaging Science and Technology, Volume 63, Number 4

    We propose a method to generate a display with enhanced color appearance by projecting monochromatic 365 nm ultraviolet (UV) light onto an optimized pattern surface containing red, green, and blue fluorescent pigments. Our method is based on the Paxel framework proposed by Pjanic and Grundhöfer [1] describing the color formulation process when projecting spatially varying illumination onto a printed surface containing pigments with spatially varying properties, in our case fluorescent emission. In this article, we extend this approach by optimizing the spatial arrangement and surface ratios of the used red, green, and blue transparent fluorescent pigments within the patterns. These ratios are optimized in order to obtain a color gamut with maximized volume. Furthermore, we propose a novel setup where the fluorescent inks are printed onto a transparent film which is then placed onto a black, light-absorbing material. This projection setup, consisting of an emissive and a black absorbing layer, enables us to produce high-contrast imagery even if a high amount of ambient light is present. Finally, to reduce the visually unpleasant appearance of the printed patterns, we introduce an alternative hexagon-shaped pattern structure.

  • Magic Prints: Image-Changing Prints Observed under Visible and 365 nm UV Light

    Journal of Imaging Science and Technology, Volume 63, Number 2

    In this paper we propose a novel layered-printing method consisting of superposed visible cmy and invisible fluorescent ultraviolet (UV) rgb inks. Our approach can be used to generate a variety of visual color-alteration effects such as revealing two completely distinct images when the print is illuminated with either standard visible or 365 nm ultraviolet (UV) light (Figure 1). This is achieved by computing the maximum achievable color gamuts for both illumination conditions, generating accurate estimates, and applying a spatial-varying gamut mapping to minimize potential ghosting artifacts and calculate the optimal ink surface coverages that, when printed, generate the desired image-alteration effect. Our method uses invisible UV-rgb fluorescent inks which are printed onto a transparent film. It is placed on top of a visible print consisting of standard cmy inks. By separating the UV and the visible inks using the transparent film, physical mixing of the two different ink types is avoided. This significantly increases the intensity of the fluorescent emission resulting in stronger and more vivid color-alteration effects. Besides the revealing of two different images, the same method can be applied for other use cases as well, such as enhancing or adding specific parts to an image under one illumination condition, generating personalized document security features, or aiding color-blind people in color distinction.

  • Seamless Multi Projection Revisited

    IEEE Transactions on Visualization and Computer Graphics / ISMAR / Siggraph Asia TVCG VR session

    This paper introduces a novel photometric compensation technique for inter-projector luminance and chrominance variations. Although this sounds like a classical technical issue, to the best of our knowledge there is no existing solution that alleviates the spatial non-uniformity among strongly heterogeneous projectors at perceptually acceptable quality. The primary goal of our method is to increase the perceived seamlessness of the projection system by automatically generating an improved and consistent visual quality. It builds upon existing research on multi-projection systems, but instead of working with perceptually non-uniform color spaces such as CIEXYZ, the overall computation is carried out using the RLab [10, pp. 243-254] color appearance model, which models color processing in an adaptive, perceptual manner. In addition, we propose an adaptive color gamut acquisition, spatially varying gamut mapping, and an optimization framework for edge blending. The paper describes the overall workflow and the detailed algorithm of each component, followed by an evaluation validating the proposed method. The experimental results show, both qualitatively and quantitatively, that the proposed method significantly improved the visual quality of the projected results of a multi-projection display with severely heterogeneous projector color processing.

  • Recent Advances in Projection Mapping Algorithms, Hardware and Applications

    Eurographics - State of the Art Report 2018

    This State-of-the-Art Report covers the recent advances in research fields related to projection mapping applications. We summarize the novel enhancements to simplify the 3D geometric calibration task, which can now be reliably carried out either interactively or automatically using self-calibration methods. Furthermore, improvements regarding radiometric calibration and compensation as well as the neutralization of global illumination effects are summarized. We then introduce computational display approaches to overcome technical limitations of current projection hardware in terms of dynamic range, refresh rate, spatial resolution, depth-of-field, view dependency, and color space. These technologies contribute towards creating new application domains related to projection-based spatial augmentations. We summarize these emerging applications and discuss new directions for industries.

  • Paxel: A Generic Framework to Superimpose High-Frequency Print Patterns using Projected Light

    IEEE Transactions on Image Processing

    In this paper, we propose Paxel, a generic framework for modeling the interaction between a projector and a high-frequency pattern surface. Using this framework, we present two different application setups. The first is a novel color-changing effect, created with a single projected image and revealed only when the projection surface is changed from a pattern surface to a uniform white surface. The observed effect relies on the spatially different reflectance properties of these two surfaces. Using this approach, one can alter color properties of the projected image such as hue or chroma. Furthermore, for a specific color range, defined by a full color-changing sub-gamut, one can embed two completely different images within a single static projection, from which either one will be revealed depending on the surface. The second application allows the creation of color images using a single-channel projector. For this application, we present a full-color projection created using a 365 nm ultraviolet (UV) projector in combination with fluorescent pigments (cf. Fig. 1b), enabling new display possibilities, such as projection through participating media, e.g. fog, while hiding the scattering of the projection light outside of the visible spectrum. Both presented approaches create effects that might be striking to the observer, making this framework useful for art exhibitions, advertisements, entertainment, and visual cryptography. Finally, in Sec. VI, we provide an in-depth analysis of the reproducible colors based on input parameters used in the presented algorithm, such as the pattern layout, the dot size of the pattern, and the number of clusters formed by the k-means algorithm.

  • Camera-Specific Image Quality Enhancement using a Convolutional Neural Network

    IEEE International Conference on Image Processing

    We propose a simple method to enhance the image quality of modern Bayer-pattern-based cameras that offer an optional sub-pixel-accurate sensor shift to capture full rgb images of static scenes. By capturing a series of image pairs of the same, unaltered scene, once with the Bayer pattern and once with the full rgb image data, a database of corresponding images can be generated in which the former contains artifacts resulting from the spatial interpolation that are absent in the latter. Using this data, we train a convolutional neural network (CNN) to generate a camera-dependent image processing operation which reduces the image artifacts and enhances the image quality to approximate the quality of the full rgb image within a single exposure, even if moving scenes are captured. We present a simple, do-it-yourself method to capture and pre-process the data, train the network, and enhance the images. An evaluation using several image quality assessment methods shows the effectiveness of the proposed method.

  • Robust Geometric Self-Calibration of Generic Multi-Projector Camera Systems

    IEEE ISMAR

    In this paper, we propose a fully automated self-calibration method for arbitrarily complex multi-projector-camera systems (MPCS). Combining the advantages of known, sub-pixel-accurate correspondences using robust projected structured light patterns with state-of-the-art camera self-calibration algorithms enables a reliable and accurate intrinsic and extrinsic calibration without any required human parameter tuning. We evaluated the proposed methods using more than ten multi-projection datasets, ranging from a toy castle setup consisting of three cameras and one projector up to a half-dome display system with more than 30 devices. Comparisons to reference calibrations, which were generated using the standard checkerboard calibration approach, show the reliability of our proposed pipeline. Besides being fully automatic without the necessity of parameter fine-tuning, the proposed method also significantly reduces the installation time of MPCS compared to checkerboard-based methods and makes it more suitable for real-world applications.

  • Geometric and Photometric Consistency in a Mixed Video and Galvanoscopic Scanning Laser Projection Mapping System

    IEEE Transactions on Visualization and Computer Graphics

    We present a geometric calibration method to accurately register a galvanoscopic scanning laser projection system (GLP) based on 2D vector input data onto an arbitrarily complex 3D-shaped projection surface. This method allows for accurate merging of 3D vertex data displayed on the laser projector with geometrically calibrated standard rasterization-based video projectors that are registered to the same geometry. Because laser projectors send out a laser light beam via galvanoscopic mirrors, a standard pinhole model calibration procedure that is normally used for pixel raster displays projecting structured light patterns, such as Gray codes, cannot be carried out directly with sufficient accuracy as the rays do not converge into a single point. To overcome the complications of accurately registering the GLP while still enabling a treatment equivalent to a standard pinhole device, an adapted version is applied to enable straightforward content generation. Besides the geometrical calibration, we also present a photometric calibration to unify the color appearance of GLPs and standard video projectors maximizing the advantages of the large color gamut of the GLP and optimizing its color appearance to smoothly fade into the significantly smaller gamut of the video projector. The proposed algorithms were evaluated on a prototypical mixed video projector and GLP projection mapping setup.

  • A Practical Method for Fully Automatic Intrinsic Camera Calibration Using Directionally Encoded Light

    IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017 (Spotlight paper)

    Calibrating the intrinsic properties of a camera is one of the fundamental tasks required for a variety of computer vision and image processing tasks. The precise measurement of focal length, location of the principal point as well as distortion parameters of the lens is crucial, for example, for 3D reconstruction. Although a variety of methods exist to achieve this goal, they are often cumbersome to carry out, require substantial manual interaction, expert knowledge, and a significant operating volume. We propose a novel calibration method based on the usage of directionally encoded light rays for estimating the intrinsic parameters. It enables a fully automatic calibration with a small device mounted close to the front lens element and still enables an accuracy comparable to standard methods even when the lens is focused up to infinity. Our method overcomes the mentioned limitations since it guarantees an accurate calibration without any human intervention while requiring only a limited amount of space. Besides that, the approach also allows estimating the distance of the focal plane as well as the size of the aperture. We demonstrate the advantages of the proposed method by evaluating several camera/lens configurations using prototypical devices.

  • Makeup Lamps: Live Augmentation of Human Faces via Projection

    Eurographics 2017

    We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency — an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first method which fully supports dynamic facial projection mapping without the requirement of any physical tracking markers and incorporates facial expressions.

    Video:

    https://youtu.be/Ilgu3aFCphs

  • Spatio-Temporal Point Path Analysis and Optimization of a Galvanoscopic Scanning Laser Projector

    IEEE Transactions on Visualization and Computer Graphics

    Galvanoscopic scanning laser projectors are powerful vector graphic devices offering a tremendous local brightness advantage compared to standard video projection systems. However, such devices have inherent problems, such as temporal flicker and spatially inaccurate rendering. We propose a method to generate an accurate point-based projection with such devices. To overcome the mentioned problems, we present a camera-based method to automatically analyze the laser projector's motion behavior. With this information, a model database is generated that is used to optimize the scanning path of projected point sequences. The optimization considers the overall path length, its angular shape, acceleration behavior, and the spatio-temporal point neighborhood. The method minimizes perceived visual flickering while guaranteeing an accurate spatial point projection at the same time. Comparisons and timing measurements prove the effectiveness of our method. An informal user evaluation shows substantial visual quality improvement as well.

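
    The scan-path optimization described above weighs overall path length against angular shape, acceleration, and flicker. As a minimal illustration of the path-length term only, a greedy nearest-neighbour reordering of the projected points already shortens the mirror travel considerably. This is a hypothetical sketch, not the paper's optimizer:

```python
import math

def greedy_path(points):
    """Order 2D points by repeatedly jumping to the nearest unvisited one.
    A crude stand-in for the path-length term of a scan-path optimizer."""
    remaining = list(points[1:])
    path = [points[0]]
    while remaining:
        last = path[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        path.append(nxt)
    return path

def path_length(path):
    """Total Euclidean travel of the galvo mirrors along the point order."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

# Two clusters visited in an alternating (worst-case) order:
pts = [(0, 0), (5, 5), (0, 1), (5, 6), (0, 2)]
print(path_length(pts), ">", path_length(greedy_path(pts)))
```

    A real optimizer would also penalize sharp direction changes, since the mirrors cannot accelerate instantly; the greedy ordering only addresses total travel.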
  • A LED-Based IR/RGB End-to-End Latency Measurement Device

    IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)

    Achieving a minimal latency within augmented reality (AR) systems is one of the most important factors to achieve a convincing visual impression. It is even more crucial for non-video augmentations such as dynamic projection mappings because in that case the superimposed imagery has to exactly match the dynamic real surface, which obviously cannot be directly influenced or delayed in its movement. In those cases, the inevitable latency is usually compensated for using prediction and extrapolation operations, which require accurate information about the occurring overall latency to exactly predict the right time frame for the augmentation. Different strategies have been applied to accurately compute this latency. Since some of these AR systems operate within different spectral bands for input and output, it is not possible to apply latency measurement methods encoding time stamps directly into the presented output images, as these might not be sensed by the used input device. We present a generic latency measurement device which can be used to accurately measure the overall end-to-end latency of camera-based AR systems with an accuracy below one millisecond. It comprises an LED-based time stamp generator displaying the time as a gray code at multiple spatial and spectral locations. It is controlled by a micro-controller and sensed by an external camera device observing the output display as well as the LED device at the same time.

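
    The gray-code time stamp mentioned above is what makes the read-out robust: consecutive ticks differ in exactly one bit, so a camera that samples mid-transition of the LEDs is off by at most one tick. A minimal sketch of the encoding (standard reflected Gray code, not code from the paper):

```python
def to_gray(n: int) -> int:
    """Binary -> reflected Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Reflected Gray code -> binary (fold the shifted bits back in)."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Consecutive time stamps differ in exactly one bit, i.e. one LED toggles:
for t in range(255):
    assert bin(to_gray(t) ^ to_gray(t + 1)).count("1") == 1
    assert from_gray(to_gray(t)) == t
```

    With a plain binary counter, a tick boundary such as 0111 -> 1000 flips four LEDs at once, and a camera catching the transition could read an arbitrary value; the single-bit property removes that failure mode.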
  • Computational thermoforming

    ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH

    We propose a method to fabricate textured 3D models using thermoforming. Differently from industrial techniques, which target mass production of a specific shape, we propose a combined hardware and software solution to manufacture customized, unique objects. Our method simulates the forming process and converts the texture of a given digital 3D model into a pre-distorted image that we transfer onto a plastic sheet. During thermoforming, the sheet deforms to create a faithful physical replica of the digital model. Our hardware setup uses off-the-shelf components and can be calibrated with an automatic algorithm that extracts the simulation parameters from a single calibration object produced by the same process.

  • Robust, error-tolerant photometric projector compensation

    IEEE Transactions on Image Processing

    We propose a novel error tolerant optimization approach to generate a high-quality photometric compensated projection. The application of a non-linear color mapping function does not require radiometric pre-calibration of cameras or projectors. This characteristic improves the compensation quality compared with related linear methods if this approach is used with devices that apply complex color processing, such as single-chip digital light processing projectors. Our approach consists of a sparse sampling of the projector's color gamut and non-linear scattered data interpolation to generate the per-pixel mapping from the projector to camera colors in real time. To avoid out-of-gamut artifacts, the input image's luminance is automatically adjusted locally in an optional offline optimization step that maximizes the achievable contrast while preserving smooth input gradients without significant clipping errors. To minimize the appearance of color artifacts at high-frequency reflectance changes of the surface due to usually unavoidable slight projector vibrations and movement (drift), we show that a drift measurement and analysis step, when combined with per-pixel compensation image optimization, significantly decreases the visibility of such artifacts.

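
    The sparse gamut sampling plus scattered data interpolation step can be illustrated with a simple inverse-distance-weighting interpolant. The paper's actual interpolant differs, and the sample colors below are made up for illustration:

```python
def idw_interpolate(samples, query, power=2.0, eps=1e-12):
    """Predict the camera color for a projector color `query` from sparse
    (projector_rgb, camera_rgb) sample pairs via inverse-distance weighting."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for p, c in samples:
        d2 = sum((pi - qi) ** 2 for pi, qi in zip(p, query))
        if d2 < eps:                 # query coincides with a measured sample
            return c
        w = d2 ** (-power / 2.0)     # weight falls off with distance
        den += w
        num = [n + w * ci for n, ci in zip(num, c)]
    return tuple(n / den for n in num)

# Made-up gamut samples: projector input color -> measured camera color
samples = [
    ((0.0, 0.0, 0.0), (0.05, 0.05, 0.05)),
    ((1.0, 0.0, 0.0), (0.80, 0.10, 0.08)),
    ((0.0, 1.0, 0.0), (0.12, 0.75, 0.10)),
    ((0.0, 0.0, 1.0), (0.06, 0.09, 0.70)),
    ((1.0, 1.0, 1.0), (0.90, 0.88, 0.85)),
]
mid_gray = idw_interpolate(samples, (0.5, 0.5, 0.5))
```

    Inverting such a mapping per pixel (camera color -> required projector input) is what produces the compensation image; the sparse sampling keeps the measurement effort far below a dense look-up table.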
  • Chromatic calibration of an HDR display using 3D octree forests

    IEEE International Conference on Image Processing (ICIP)

    High dynamic range (HDR) display prototypes have been built and used for scientific studies for nearly a decade, and they are now on the verge of entering the consumer market. However, problems exist regarding the accurate color reproduction capabilities of these displays. In this paper, we first characterize the image reproduction capability of a state-of-the-art HDR display through a set of measurements, and present a novel calibration method that takes into account the variation of the chrominance error over the HDR display's wide luminance range. Our proposed 3D octree forest data structure for representing and querying the calibration function successfully addresses the challenges in calibrating HDR displays: (i) high computational complexity due to nonlinear chromatic distortions; (ii) huge storage space demand for a look-up table. We show that our method achieves high color reproduction accuracy through both objective metrics and a controlled subjective study.

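
    A point-region octree that buckets sparse calibration samples gives the flavor of why such a structure beats a dense look-up table: storage follows the samples instead of the full color volume. This is a hypothetical, much-simplified sketch, not the paper's octree forest:

```python
class Octree:
    """Point-region octree over the unit color cube; leaves bucket samples.
    Toy stand-in for an octree-based calibration look-up structure."""

    def __init__(self, lo=(0.0, 0.0, 0.0), size=1.0, cap=4):
        self.lo, self.size, self.cap = lo, size, cap
        self.samples = []      # [(point, value)] while this node is a leaf
        self.children = None   # list of 8 subtrees once the node is split

    def _child_index(self, p):
        half = self.size / 2.0
        return sum(1 << d for d in range(3) if p[d] >= self.lo[d] + half)

    def insert(self, p, value):
        if self.children is not None:
            self.children[self._child_index(p)].insert(p, value)
            return
        self.samples.append((p, value))
        if len(self.samples) > self.cap and self.size > 1e-3:
            self._split()

    def _split(self):
        half = self.size / 2.0
        self.children = [
            Octree(tuple(self.lo[d] + (half if (i >> d) & 1 else 0.0)
                         for d in range(3)), half, self.cap)
            for i in range(8)
        ]
        for p, v in self.samples:     # redistribute bucketed samples
            self.children[self._child_index(p)].insert(p, v)
        self.samples = []

    def query(self, p):
        """Average the calibration values bucketed in the leaf containing p."""
        node = self
        while node.children is not None:
            node = node.children[node._child_index(p)]
        if not node.samples:
            return None
        n = len(node.samples)
        return tuple(sum(v[d] for _, v in node.samples) / n for d in range(3))
```

    Because nodes only subdivide where samples accumulate, regions of the luminance range with strong chromatic distortion get fine resolution while well-behaved regions stay coarse.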
  • Challenges of Projection-Based Augmented Reality Systems in Theme Park Attractions

    Journal of VRSJ

    The last decade has witnessed tremendous improvements in display technology. Today's smartphones and laptops offer screen resolutions far beyond what the human eye is able to resolve. TV systems are also in the process of changing to resolutions far beyond "full HD", and cinemas have almost completely transitioned from analog film to digital projection. The high-quality projectors of today's cinema are also well suited for non-standard setups such as, for example, in theme park environments. Generating a scenic lighting environment is one of the most important factors in the overall immersion quality of such attractions, and the usage of digital projectors can help to raise the bar in this regard. It is to be expected that more and more projection systems will be used in the future not only in theme parks, but also for general entertainment, advertisements, and art installations. While the tools have already been developed to enable sufficiently accurate geometric and photometric calibration for content generation of such installations, the question still remains as to whether non-experts will be able to successfully apply and judge calibration, as well as know how to maintain such a complex system without specific knowledge as to its underlying algorithms.

  • Augmenting Physical Avatars Using Projector-Based Illumination

    ACM Transactions on Graphics (Siggraph Asia)

    In this project, we propose a complete process for augmenting physical avatars using projector based illumination, significantly increasing their expressiveness. Given an input animation, the system decomposes the motion into low-frequency motion that can be physically reproduced by the animatronic head and high-frequency details that are added using projected shading. At the core is a spatio-temporal optimization process that compresses the motion in gradient space, ensuring faithful motion replay while respecting the physical limitations of the system. We also propose a complete multi-camera and projection system, including a novel defocused projection and subsurface scattering compensation scheme. The result of our system is a highly expressive physical avatar that features facial details and motion otherwise unreachable due to physical constraints.

    Video:

    https://www.youtube.com/watch?v=O1qV6glZkOQ&feature=youtu.be

    Other authors
    • Amit Bermano
    • Philipp Brüschweiler
    • Daisuke Iwai
    • Bernd Bickel
    • Markus Gross
    See publication
  • Practical Non-linear Photometric Projector Compensation

    IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

    We propose a novel approach to generate a high-quality photometrically compensated projection which, to our knowledge, is the first that does not require a radiometric pre-calibration of cameras or projectors. This improves the compensation quality for devices which cannot be easily linearized, such as single-chip DLP projectors with complex color processing. In addition, the simple workflow significantly simplifies the compensation image generation. Our approach consists of a sparse sampling of the projector's color gamut and a scattered data interpolation to generate the per-pixel mapping from projector to camera colors in real time. To avoid out-of-gamut artifacts, the input image is automatically scaled locally in an optional off-line optimization step, maximizing the achievable luminance and contrast while still preserving smooth input gradients without significant clipping errors.
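The scattered-data interpolation step can be sketched with a simple k-nearest-neighbor inverse-distance weighting: sparse (projector input, camera response) pairs are sampled, and the mapping is inverted by interpolating in camera color space. This is an illustrative stand-in, not the paper's actual interpolation scheme; the toy "device" response and all parameter values are assumptions.

```python
import numpy as np

def idw_lookup(samples_cam, samples_prj, target_cam, k=20, power=4):
    """Shepard-style inverse-distance interpolation from camera color space
    back to projector inputs, built from sparse sample pairs."""
    d = np.linalg.norm(samples_cam - target_cam, axis=1)
    idx = np.argsort(d)[:k]                 # restrict to nearest samples
    w = 1.0 / (d[idx] ** power + 1e-12)
    return (w[:, None] * samples_prj[idx]).sum(axis=0) / w.sum()

# Toy non-linear device: camera response = projector input squared per channel.
prj = np.random.default_rng(0).random((500, 3))
cam = prj ** 2
target = np.array([0.25, 0.49, 0.64])       # desired camera color
comp = idw_lookup(cam, prj, target)         # projector input to send
# comp approximates the per-channel square root of the target.
```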

    See publication
  • Projection-Based Augmented Reality in Disney Theme Parks

    IEEE

    "Walt Disney Imagineering and Disney Research Zürich are building a projector- camera toolbox to help create spatially augmented 3D objects and dynamic, interactive spaces that enhance the theme park experience by immersing guests in magical worlds."

    Other authors
    See publication
  • Adaptive Coded Aperture Photography

    International Symposium on Visual Computing ISVC 2011: Advances in Visual Computing pp 54-65

    We show how the intrinsically performed JPEG compression of many digital still cameras leaves margin for deriving and applying image-adapted coded apertures that support retention of the most important frequencies after compression. These coded apertures, together with subsequently applied image processing, enable a higher light throughput than corresponding circular apertures, while preserving adjusted focus, depth of field, and bokeh. Higher light throughput leads to proportionally higher signal-to-noise ratios and reduced compression noise, or, alternatively, to shorter shutter times. We explain how adaptive coded apertures can be computed quickly, how they can be applied in lenses by using binary spatial light modulators, and how a resulting coded bokeh can be transformed into a common radial one.

    Other authors
    See publication
  • Display Pixel Caching

    International Symposium on Visual Computing ISVC 2011: Advances in Visual Computing pp 66-77

    We present a new video mode for television sets that we refer to as display pixel caching (DPC). It fills empty borders with spatially and temporally consistent information while preserving the original video format. Unlike related video modes, such as stretching, zooming, and video retargeting, DPC does not scale or stretch individual frames. Instead, it merges the motion information from many subsequent frames to generate screen-filling panoramas in a consistent manner. In contrast to state-of-the-art video mosaicing, DPC achieves real-time rates for high-resolution video content while processing more complex motion patterns fully automatically. We compare DPC to related video modes in the context of a user evaluation.

    Other authors
    See publication
  • Closed-Loop Feedback Illumination for Optical Inverse Tone-Mapping in Light Microscopy

    IEEE Transactions on Visualization and Computer Graphics, Volume: 17 , Issue: 6

    In this paper, we show that optical inverse tone-mapping (OITM) in light microscopy can improve the visibility of specimens, both when observed directly through the oculars and when imaged with a camera. In contrast to previous microscopy techniques, we premodulate the illumination based on the local modulation properties of the specimen itself. We explain how the modulation of uniform white light by a specimen can be estimated in real time, even though the specimen is continuously but not uniformly illuminated. This information is processed and back-projected constantly, allowing the illumination to be adjusted on the fly if the specimen is moved or the focus or magnification of the microscope is changed. The contrast of the specimen's optical image can be enhanced, and high-intensity highlights can be suppressed. A formal pilot study with users indicates that this optimizes the visibility of spatial structures when observed through the oculars. We also demonstrate that the signal-to-noise (S/N) ratio in digital images of the specimen is higher if captured under an optimized rather than a uniform illumination. In contrast to advanced scanning techniques that maximize the S/N ratio using multiple measurements, our approach is fast because it requires only two images. This can improve image analysis in digital microscopy applications with real-time capturing requirements.

    Other authors
    See publication
  • Color invariant chroma keying and color spill neutralization for dynamic scenes and cameras

    CGI, The Visual Computer volume 26, pages 1167–1176(2010)

    In this article we show how temporal backdrops that alternately change their color rapidly at recording rate can aid chroma keying by transforming color spill into a neutral background illumination. Since the chosen colors sum up to white, the chromatic (color) spill component is neutralized when integrating over both backdrop states. The ability to separate both states additionally makes it possible to compute high-quality alpha mattes. Besides the neutralization of color spill, our method is invariant to foreground colors and supports applications with real-time demands. In this article, we explain different realizations of temporal backdrops and describe how keying and color spill neutralization are carried out, how artifacts resulting from rapid motion can be reduced, and how our approach can be implemented to be compatible with common real-time post-production pipelines.
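The matting principle behind alternating backdrops can be sketched per pixel: each observation is foreground plus `(1 - alpha)` times the backdrop, so subtracting the two frames isolates `(1 - alpha)`, while averaging them neutralizes the spill to gray. A minimal sketch, assuming premultiplied foreground color and exact frame alignment; the function name is illustrative.

```python
import numpy as np

# Two backdrop colors that sum to white (their average is neutral gray).
B1 = np.array([1.0, 0.0, 1.0])   # magenta
B2 = np.array([0.0, 1.0, 0.0])   # green

def matte_from_temporal_backdrops(obs1, obs2):
    """Recover alpha from two frames shot against alternating backdrops:
    obs_i = F + (1 - a) * B_i, so (obs1 - obs2) = (1 - a) * (B1 - B2)."""
    denom = B1 - B2
    mask = np.abs(denom) > 1e-6             # channels where backdrops differ
    one_minus_a = ((obs1 - obs2)[..., mask] / denom[mask]).mean(axis=-1)
    return np.clip(1.0 - one_minus_a, 0.0, 1.0)

# Toy pixel: premultiplied foreground F with alpha 0.6 over each backdrop.
F, a = np.array([0.3, 0.2, 0.1]), 0.6
obs1 = F + (1 - a) * B1
obs2 = F + (1 - a) * B2
alpha = matte_from_temporal_backdrops(obs1, obs2)
assert np.isclose(alpha, 0.6)
# Averaging both frames yields F plus neutral (gray) spill: (obs1+obs2)/2.
```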

    Other authors
    See publication
  • Coded aperture projection

    ACM Transactions on Graphics (TOG)

    Coding a projector's aperture plane with adaptive patterns together with inverse filtering allows the depth-of-field of projected imagery to be increased. We present two prototypes and corresponding algorithms for static and programmable apertures. We also explain how these patterns can be computed at interactive rates, by taking into account the image content and limitations of the human visual system. Applications such as projector defocus compensation, high-quality projector depixelation, and increased temporal contrast of projected video sequences can be supported. Coded apertures are a step towards next-generation auto-iris projector lenses.
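The inverse-filtering step can be sketched as a regularized (Wiener-style) division by the aperture's transfer function in the frequency domain. This is a generic deconvolution sketch, not the paper's specific algorithm; the SNR parameter and PSF are illustrative assumptions.

```python
import numpy as np

def wiener_deblur(blurred, psf, snr=100.0):
    """Invert a known defocus blur: divide by the aperture's transfer
    function in the Fourier domain, regularized to avoid noise blow-up."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Sanity check with a delta PSF (no blur): the image passes through intact.
img = np.random.default_rng(1).random((32, 32))
psf = np.zeros((5, 5)); psf[0, 0] = 1.0
out = wiener_deblur(img, psf, snr=1e6)
assert np.allclose(out, img, atol=1e-3)
```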

    Other authors
    • Max Grosse
    • Gordon Wetzstein
    • Oliver Bimber
    See publication
  • Fast and robust CAMShift tracking

    Computer Vision and Pattern Recognition Workshops (CVPRW)

    CAMShift is a well-established and fundamental algorithm for kernel-based visual object tracking. While it performs well with objects that have a simple and constant appearance, it is not robust in more complex cases. As it solely relies on back-projected probabilities, it can fail when the object's appearance changes (e.g., due to object or camera movement, or due to lighting changes), when similarly colored objects have to be re-detected, or when they cross their trajectories. We propose low-cost extensions to CAMShift that address and resolve all of these problems. They allow the accumulation of multiple histograms to model more complex object appearances and the continuous monitoring of object identities to handle ambiguous cases of partial or full occlusion. Most steps of our method are carried out on the GPU to achieve real-time tracking of multiple targets simultaneously. We explain efficient GPU implementations of histogram generation, probability back-projection, computation of image moments, and histogram intersection. All of these techniques make full use of a GPU's high parallelization capabilities.
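The back-projection and mean-shift core that CAMShift builds on can be sketched in a few lines: each pixel's hue is replaced by its probability under the target's histogram, and a search window is iteratively shifted to the centroid of that probability mass. A simplified fixed-window-size sketch (real CAMShift also adapts the window size and orientation); all names and the toy scene are illustrative.

```python
import numpy as np

def back_project(img_hue, hist, bins=16):
    """Replace each pixel's hue by that hue's probability in the target's
    histogram -- the map that mean shift climbs."""
    idx = np.minimum((img_hue * bins).astype(int), bins - 1)
    return hist[idx]

def mean_shift(prob, window, iters=10):
    """Shift an (x, y, w, h) window toward the centroid of back-projected
    probability mass inside it."""
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        if roi.sum() == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round((xs * roi).sum() / roi.sum()))
        cy = int(round((ys * roi).sum() / roi.sum()))
        x = np.clip(x + cx - w // 2, 0, prob.shape[1] - w)
        y = np.clip(y + cy - h // 2, 0, prob.shape[0] - h)
    return x, y, w, h

# Toy scene: a uniform-hue 10x10 square at (40, 25) in a 100x100 hue image.
hue = np.zeros((100, 100)); hue[25:35, 40:50] = 0.5
hist = np.zeros(16); hist[8] = 1.0          # target's hue histogram
prob = back_project(hue, hist)
x, y, w, h = mean_shift(prob, (35, 20, 10, 10))
# The window converges onto the square (within rounding of the centroid).
assert abs(x - 40) <= 2 and abs(y - 25) <= 2
```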

    Other authors
    • David Exner
    • Erich Bruns
    • Daniel Kurz
    • Oliver Bimber
    See publication
  • VirtualStudio2Go: digital video composition for real environments

    ACM Transactions on Graphics (Siggraph Asia)

    We synchronize film cameras and LED lighting with off-the-shelf video projectors. Radiometric compensation allows displaying keying patterns and other spatial codes on arbitrary real world surfaces. A fast temporal multiplexing of coded projection and flash illumination enables professional keying, environment matting, displaying moderator information, scene reconstruction, and camera tracking for non-studio film sets without being limited to the constraints of a virtual studio. This makes digital video composition more flexible, since static studio equipment, such as blue screens, teleprompters, or tracking devices, is not required. Authentic film locations can be supported with our portable system without causing a lot of installation effort.

    Other authors
    • Oliver Bimber
    See publication
  • Real-time adaptive radiometric compensation

    IEEE Transactions on Visualization and Computer Graphics

    Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces. This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. Using the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and the contrast of the input image are reduced compared to a conventional projection onto a white canvas. If the input image is not manipulated in its intensities, the compensation image can contain values that are outside the dynamic range of the projector. These will lead to clipping errors and to visible artifacts on the surface. In this article, we present an innovative algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while simultaneously preserving a maximum of luminance and contrast. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real time.
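The basic per-pixel compensation, and the clipping problem the adaptive algorithm addresses, can be sketched with a simplified linear model `camera = reflectance * projected + ambient`. This is a textbook-style simplification (single channel, no projector non-linearity); all variable names and values are illustrative.

```python
import numpy as np

def compensation_image(target, reflectance, ambient):
    """Solve C = F * P + E per pixel for the projector image P that makes
    the camera see the target T.  Values outside [0, 1] must be clipped --
    exactly the artifacts that adaptive content scaling is meant to avoid."""
    P = (target - ambient) / np.maximum(reflectance, 1e-6)
    clipped = (P < 0) | (P > 1)             # out-of-gamut pixels
    return np.clip(P, 0.0, 1.0), clipped

T = np.full((4, 4), 0.4)                    # desired appearance
F = np.full((4, 4), 0.5); F[0, 0] = 0.2     # one dark surface patch
E = np.full((4, 4), 0.05)                   # ambient light
P, clipped = compensation_image(T, F, E)
assert clipped[0, 0] and not clipped[1, 1]  # only the dark patch clips
# Where no clipping occurs, the surface reproduces the target exactly.
assert np.isclose(F[1, 1] * P[1, 1] + E[1, 1], T[1, 1])
```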

    Other authors
    • Oliver Bimber
    See publication
  • Spatial Augmented Reality for Architecture–Designing and planning with and within existing buildings

    International Journal of Architectural Computing

    At present, more than half of all building activity in the German building sector is undertaken within existing built contexts. The development of a conceptual and technological basis for the digital support of design directly on site, within an existing building context is the focus of the research project "Spatial Augmented Reality for Architecture" (SAR). This paper describes the goals achieved in one aspect of the project: the sampling of colors and materials at a scale of 1:1 using Augmented Reality (AR) technologies. We present initial results from the project; the development of an ad-hoc visualization of interactive data on arbitrary surfaces in real-world indoor environments using a mobile hardware setup. With this, it was possible to project the color and material qualities of a design directly onto almost all surfaces within a geometrically corrected, existing building. Initially, a software prototype "Spatial Augmented Reality for Architecture-Colored Architecture" (SAR-CA) was developed and then assessed based on evaluation results from a user study.

    Other authors
    • Christian Tonn
    • Frank Petzold
    • Oliver Bimber
    • Dirk Donath
    See publication
  • The Visual Computing of Projector‐Camera Systems

    Computer Graphics Forum (Eurographics State of the Art Report)

    This article focuses on real-time image correction techniques that enable projector-camera systems to display images onto screens that are not optimized for projections, such as geometrically complex, coloured and textured surfaces. It reviews hardware-accelerated methods like pixel-precise geometric warping, radiometric compensation, multi-focal projection and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained. Novel attempts in super-resolution, high-dynamic range and high-speed projection are discussed. These techniques open a variety of new applications for projection displays. Some of them will also be presented in this report.

    Other authors
    • Oliver Bimber
    • Daisuke Iwai
    • Gordon Wetzstein
    See publication
  • Dynamic adaptation of projected imperceptible codes

    IEEE ISMAR

    In this paper we present an innovative adaptive imperceptible pattern projection technique that takes into account parameters of human visual perception. A coded image is temporally integrated into the projected image, which is invisible to the human eye but can be reconstructed by a synchronized camera. The embedded code is dynamically adjusted on the fly to guarantee its imperceptibility and to adapt it to the current camera pose. Linked with real-time flash keying, for instance, this enables in-shot optical tracking using a dynamic multi-resolution marker technique. A sample prototype has been realized that demonstrates the application of our method in the context of augmentations in television studios.
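The temporal embedding principle can be sketched directly: the code is added to one frame and subtracted from the next, so the eye's temporal integration averages it away while a synchronized camera recovers it by subtraction. A minimal sketch of the general principle, ignoring the perceptual adaptation that is the paper's contribution; amplitudes and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((8, 8)) * 0.5 + 0.25     # keep headroom for embedding
code = rng.integers(0, 2, (8, 8)) * 0.1     # binary pattern, small amplitude

frame_a = image + code / 2                  # shown in even frames
frame_b = image - code / 2                  # shown in odd frames

perceived = (frame_a + frame_b) / 2         # the eye integrates over time
recovered = frame_a - frame_b               # a synchronized camera subtracts

assert np.allclose(perceived, image)        # code invisible to the viewer
assert np.allclose(recovered, code)         # but fully recoverable
```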

    Other authors
    • Manja Seeger
    • Ferry Hantsch
    • Oliver Bimber
    See publication
  • Compensating indirect scattering for immersive and semi-immersive projection displays

    IEEE Virtual Reality (VR)

    We present a real-time reverse radiosity method for compensating indirect scattering effects that occur with immersive and semi-immersive projection displays. It computes a numerical solution directly on the GPU and is implemented with pixel shading and multi-pass rendering, which together realize a Jacobi solver for sparse matrix linear equation systems. Our method is validated and evaluated based on a stereoscopic two-sided wall display. The images appear more brilliant and uniform when compensating the scattering contribution.
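The Jacobi iteration mentioned above maps well onto shaders because each update reads only the previous iterate. A minimal dense-matrix sketch on a toy system (scattering matrices are diagonally dominant, which guarantees convergence); the matrix values are illustrative.

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration x <- D^-1 (b - R x), where D is the diagonal of A
    and R the off-diagonal rest.  Each step uses only the previous x."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Diagonally dominant toy system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
assert np.allclose(A @ x, b, atol=1e-8)     # converged to the true solution
```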

    Other authors
    • Oliver Bimber
    • Thomas Zeidler
    • Daniel Danch
    • Pedro Kapakos
    See publication
  • Interacting with augmented holograms

    SPIE Proceedings Practical Holography

    Holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Up until today, however, these tools have been applied separately. Our intention is to combine both technologies to create a powerful tool for science, industry and education. We are currently investigating the possibility of integrating computer generated graphics and holograms. This paper gives an overview of our latest results. It presents several applications of interaction techniques to graphically enhanced holograms and gives a first glance at a novel method that reconstructs depth from optical holograms.

    Other authors
    • Oliver Bimber
    • Thomas Zeidler
    • Gordon Wetzstein
    • Mathias Moehring
    • Sebastian Knödel
    • Uwe Hahne
    See publication
  • Level of detail based occlusion culling for dynamic scenes

    ACM Graphite

    This paper presents a non-conservative occlusion culling technique for dynamic scenes with animated or user-manipulated objects. We use a multi-pass algorithm, which decides the visibility based on low level-of-detail representations of the geometric models. Our approach makes efficient use of hardware support for occlusion queries and avoids stalling the graphics pipeline. We have tested our approach for large real-world models from different areas. Our results show that the algorithm performs well for moderately complex scenes with 5 to 20 million triangles, with a very low number of hardly noticeable pixel errors, typically in the range of 0.02 percent of the total number of visible pixels.

    Other authors
    • Benjamin Brombach
    • Robert Scheibe
    • Bernd Fröhlich
    See publication
  • Occlusion Culling for Sub-Surface Models in Geo-Scientific Applications

    VisSym04: Joint Eurographics - IEEE TCVG Symposium on Visualization

    Modern graphics cards support occlusion culling in hardware. We present a three-pass algorithm which makes efficient use of this feature. Our geo-scientific sub-surface data sets typically consist of a set of high-resolution height fields, polygonal objects, and volume slices and lenses. For each height field, we compute a low and high resolution version in a pre-process and divide both into sets of corresponding tiles. For each tile and for the polygonal objects, the first rendering pass computes a z-buffer image using the low resolution tiles, the polygonal objects and the non-transparent volume objects. During the second pass, we render the same objects against the z-buffer of the first pass while submitting an occlusion query with each object. The third pass reads this occlusion information back from the graphics hardware and renders only those high resolution objects for which the corresponding low resolution objects were not completely occluded. To avoid fill rate bottlenecks, the first two passes may be rendered to a low resolution window. Our implementation shows frame rate improvements for all test cases while introducing only a small overhead and no or hardly noticeable errors. Our non-conservative approach does not require front to back sorting and it works for dynamic scenes.
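The occlusion decision at the heart of the second and third passes can be sketched in software: a tile is culled if every z-buffer sample it covers is already nearer than the tile's closest depth. A software stand-in for the hardware occlusion query; smaller z means nearer, and all names and values are illustrative.

```python
import numpy as np

def occlusion_pass(zbuffer, tile_rect, tile_depth):
    """Return True if the tile (x0, y0, x1, y1) at its nearest depth is
    fully hidden by the z-buffer built from low-resolution proxies."""
    x0, y0, x1, y1 = tile_rect
    return bool(np.all(zbuffer[y0:y1, x0:x1] < tile_depth))

# Pass 1: z-buffer from low-resolution proxies (smaller z = nearer).
zbuf = np.full((64, 64), np.inf)
zbuf[16:48, 16:48] = 2.0                          # a near occluder

assert occlusion_pass(zbuf, (20, 20, 40, 40), 5.0)      # behind it: cull
assert not occlusion_pass(zbuf, (0, 0, 10, 10), 5.0)    # uncovered: draw
```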

    Other authors
    • John Plate
    • Benjamin Schmidt
    • Bernd Fröhlich
    See publication
  • Consistent illumination within optical see-through augmented environments

    IEEE ISMAR

    We present techniques which create a consistent illumination between real and virtual objects inside an application specific optical see-through display: the Virtual Showcase. We use projectors and cameras to capture reflectance information from diffuse real objects and to illuminate them under new synthetic lighting conditions. Matching direct and indirect lighting effects, such as shading, shadows, reflections and color bleeding, can be approximated at interactive rates in such a controlled mixed environment.

    Other authors
    • Oliver Bimber
    • Gordon Wetzstein
    • Sebastian Knödel
    See publication

Patents

  • Method and system for projector calibration

    Issued US11115633

    The present disclosure relates to a method for calibrating a projector. The method includes projecting a test pattern onto a scene or an object within the scene and capturing the test pattern using a camera. Once the test pattern image has been captured by the camera, the method further includes estimating by a processing element a perspective projection matrix, warping the estimated projection matrix based on a non-linear distortion function, and modifying the projector to project light based on the distortion function. The present disclosure also relates to a presentation or projection system including two types of projectors for projecting an output presentation having a second image overlaid on a first image.
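The linear estimation step (before non-linear distortion refinement) can be sketched with the standard Direct Linear Transform, here for a 2D homography between projected pattern points and their camera observations. A generic textbook DLT, not the patent's specific method; the point set and transform are illustrative.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 perspective mapping H with dst ~ H @ src via SVD:
    each correspondence contributes two rows of the homogeneous system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)                # null vector = solution
    return H / H[2, 2]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
H_true = np.array([[1.2, 0.1, 3.0], [0.0, 0.9, -1.0], [0.01, 0.02, 1.0]])
pts = np.c_[src, np.ones(len(src))] @ H_true.T
dst = pts[:, :2] / pts[:, 2:]               # perspective divide
H = dlt_homography(src, dst)
assert np.allclose(H, H_true, atol=1e-6)    # recovered up to scale
```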

    See patent
  • Channel based projector calibration

    Issued US 10,694,160

    The present disclosure is related to methods and systems for calibrating a projector. The method includes displaying a plurality of calibration patterns, where the calibration patterns are displayed separately by different wavelength channel bands, e.g., red channel, blue channel, green channel. The method then includes receiving by a processing element one or more calibration images. The method then includes determining one or more displacements corresponding to a plurality of pixels in the calibration image with respect to a plurality of pixels in the calibration pattern to adjust for distortions on a per wavelength channel basis.

    Other inventors
    See patent
  • Light Ray Based Calibration System and Method

    Issued US 10,609,365

    The present disclosure relates to a system and method for calibrating an optical device, such as a camera. In one example, the system includes a light-emitting device that generates light patterns and a ray generator that is positioned between the light-emitting device and the optical device. The ray generator separates the light emitted as part of the light patterns into a plurality of directional rays. The optical device then captures the directional rays, and the captured data, along with data corresponding to the light pattern and the ray generator, are used to calibrate the optical device.

    Other inventors
    See patent
  • Optimizing Emissive and Color Changing Projection Surfaces

    Issued US 10511815B2

    A method and system for optimizing projection surfaces for generating visible images. The projection system may include a projector that emits light in the ultraviolet range and a screen in optical communication with the projector. The screen includes a visible light absorbing layer, a transparent layer positioned over the visible light absorbing layer, and a plurality of fluorescent colorants printed on the transparent layer in a predetermined pattern, where the light emitted by the projector excites the fluorescent colorants to emit visible light forming the visible images. The predetermined pattern can be optimized to increase a color gamut of the formed images by varying surface coverage ratios of the fluorescent colorants.

    Other inventors
    See patent
  • Generating prints with multiple appearances

    Issued US 10491784B2

    The present disclosure generally relates to printed structures and methods for generating the same. The printed structure includes a first surface including a first ink type arranged in a first image and a second surface at least partially aligned with at least a portion of the first surface, the second surface including a second type of ink arranged in a second image. The first and second surfaces are physically separated from one another, the first image is visible under a first set of light wavelengths and the second image is visible under a second set of light wavelengths.

    Other inventors
    See patent
  • Projecting augmentation images onto moving objects

    Issued US 10380802B2

    A method of augmenting a human face with projected light is disclosed. The method includes determining a blend of component attributes to define visual characteristics of the human face, modifying an input image based, at least in part, on an image of the human face, wherein the modified input image defines an augmented visual characteristic of the human face, determining a present location of one or more landmarks on the human face based, at least in part, on the image of the human face, predicting a future location of the one or more landmarks, deforming a model of the human face based on the future location of the one or more landmarks, generating an augmentation image based on the deformed model and the modified input image, and transmitting for projection the augmentation image.

    Other inventors
    See patent
  • Projector optimization method and system

    Issued EU EP 3 200 451 B1

    Other inventors
    See patent
  • Method and system for projector calibration

    Issued CN 105592310B

    The present invention relates to a method and systems for projector calibration. The method includes projecting a test pattern onto a scene or onto an object within the scene and capturing the test pattern using a camera. Once the camera has captured the test pattern image, the method further includes estimating, by a processing element, a perspective projection matrix, warping the estimated projection matrix based on a non-linear distortion function, and modifying the projector to project light based on the distortion function. The present disclosure also relates to a presentation or projection system including two types of projectors for projecting an output presentation having a second image overlaid on a first image.

    See patent
  • Method and system for projector calibration

    Issued US 10080004

    The present disclosure relates to a method for calibrating a projector. The method includes projecting a test pattern onto a scene or an object within the scene and capturing the test pattern using a camera. Once the test pattern image has been captured by the camera, the method further includes estimating by a processing element a perspective projection matrix, warping the estimated projection matrix based on a non-linear distortion function, and modifying the projector to project light based on the distortion function. The present disclosure also relates to a presentation or projection system including two types of projectors for projecting an output presentation having a second image overlaid on a first image.
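    The matrix-estimation step corresponds to the classic direct linear transform (DLT). The sketch below is the generic textbook formulation, not the patented pipeline, and it omits the subsequent non-linear distortion warp.

    ```python
    # Direct linear transform: recover a 3x4 perspective projection matrix P
    # with x ~ P [X; 1] from N >= 6 scene points X (N,3) and image points x (N,2).
    import numpy as np

    def estimate_projection_matrix(X, x):
        """Stack two linear constraints per correspondence and solve the
        homogeneous system A p = 0 via SVD."""
        A = []
        for (u, v), Xw in zip(x, X):
            Xh = [*Xw, 1.0]
            A.append([0.0] * 4 + [-p for p in Xh] + [v * p for p in Xh])
            A.append(Xh + [0.0] * 4 + [-u * p for p in Xh])
        # The solution is the right singular vector with smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(A))
        return Vt[-1].reshape(3, 4)

    def project(P, X):
        """Apply P to 3D points and dehomogenize to pixel coordinates."""
        Xh = np.hstack([X, np.ones((len(X), 1))])
        xh = Xh @ P.T
        return xh[:, :2] / xh[:, 2:3]
    ```

    Given at least six projected/captured correspondences in general position, the recovered matrix reprojects the calibration points exactly up to noise; a distortion model would then be fit to the remaining residuals.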

    See patent
  • Projector optimization method and system

    Issued US 10057556B2

    The present disclosure relates to a method and system for optimizing a projector for projection of content. In one embodiment the method includes receiving by a processing element a plurality of test images corresponding to test patterns projected by the projector on a projection surface, where each of the test patterns includes at least two points, comparing by the processing element the plurality of test images to assess one or more projector characteristics related to a distance between the two points, generating by the processing element a projector model representing the one or more projector characteristics, and utilizing the model to determine a projection path of the projector for the content.

    Other inventors
    See patent
  • Image capture device calibration

    Issued US 10,038,895

    A three-dimensional coordinate position of a calibration device is determined. Further, a code is emitted to an image capture device. The code indicates the three-dimensional coordinate position of the calibration device. In addition, an image of light emitted from the calibration device is captured. The light includes the code. An image capture device three-dimensional coordinate position of the calibration device is calibrated according to the real world three-dimensional coordinate position of the calibration device indicated by the code.

    Other inventors
    See patent
  • Chromatic Calibration of an HDR display using 3D octree forests

    Issued US 9,997,134

    Methods and systems for calibrating devices reproducing high dimensional data, such as calibrating High Dynamic Range (HDR) displays that reproduce chromatic data. Methods include mapping input data into calibrated data using calibration information retrieved from spatial data structures that encode a calibration function. The calibration function may be represented by any multidimensional scattered data interpolation methods such as Thin-Plate Splines. To efficiently represent and access the calibration information in runtime, the calibration function is recursively sampled based on guidance dataset. In an embodiment, an HDR display may be adaptively calibrated using a dynamic color guidance dataset and dynamic spatial data structures.
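    The recursive-sampling idea can be sketched as an adaptive octree over the color cube: subdivide only where a cheap cell-centre sample poorly predicts the calibration at the cell corners. The calibration function, tolerance, and depth limit below are toy stand-ins for the measured thin-plate-spline calibration described above.

    ```python
    # Adaptive octree sampling sketch: refine RGB-cube cells where the
    # cell-centre value is a poor stand-in for the corner values.
    import numpy as np

    def calibrate(c):
        """Toy nonlinear calibration: a gamma-like response per channel."""
        return np.asarray(c, float) ** 2.2

    def build_octree(lo, hi, max_depth=4, tol=0.05, depth=0):
        """Return leaf cells as (lo, hi, value-at-centre) triples."""
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        centre = (lo + hi) / 2
        # All 8 corners of the axis-aligned cell.
        corners = np.array(np.meshgrid(*zip(lo, hi))).T.reshape(-1, 3)
        err = np.abs(calibrate(corners) - calibrate(centre)).max()
        if depth >= max_depth or err <= tol:
            return [(lo, hi, calibrate(centre))]
        leaves = []
        for corner in corners:  # one octant per corner/centre pair
            leaves += build_octree(np.minimum(corner, centre),
                                   np.maximum(corner, centre),
                                   max_depth, tol, depth + 1)
        return leaves
    ```

    At lookup time, an input color is mapped to its leaf cell and the stored calibration value is applied; refinement concentrates leaves where the calibration function curves most.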

    Other inventors
    See patent
  • Selectively activated color changing hairpiece

    Issued US 9,977,267

    A color-changing hairpiece is disclosed. The hairpiece may include a plurality of elongated light sources that emit light when activated and a photochromic layer provided in association with at least one light source such that the light emitted from the at least one light source impinges on a portion of the photochromic layer, the photochromic layer configured to change color responsive to the light emitted from the at least one light source.

    See patent
  • Mapping for three dimensional surfaces

    Issued US 9,956,717

    The present disclosure includes a method for modifying an input image to map to a three dimensional shape prior to forming the three dimensional shape. The method includes receiving by a processor at least two pre-forming images of a locally unique non-repeating pattern printed on an input material. The at least two pre-forming images are captured prior to the input material being formed into the three dimensional shape. The method further includes receiving by the processor at least two post-forming images of the pattern printed on the input material, wherein the post-forming images are captured after the input material has been formed into the three dimensional shape, and analyzing by the processor the at least two pre-forming images and the at least two post-forming images to determine a translation table.

    Other inventors
    See patent
  • Real time surface augmentation using projected light

    Issued US 9,940,753

    A method of augmenting a target object with projected light is disclosed. The method includes determining a blend of component attributes to define visual characteristics of the target object, modifying an input image based, at least in part, on an image of the target object, wherein the modified input image defines an augmented visual characteristic of the target object, determining a present location of one or more landmarks on the target object based, at least in part, on the image of the target object, predicting a future location of the one or more landmarks, deforming a model of the target object based on the future location of the one or more landmarks, generating an augmentation image based on the deformed model and the modified input image, and transmitting for projection the augmentation image.

    Other inventors
    See patent
  • Projection system for enhancing and modifying the appearance of a projection surface

    Issued US 9632404B2

    The present disclosure includes a projection system for enhancing color saturation and contrast for a projected image. The projection surface includes a transparent coat containing active materials, and the projection system includes a first light source emitting light onto the surface and defining a first image on the surface, and a second light source emitting light onto the surface and activating the active materials within the transparent coat to emit visible light of one or more wavelengths. The first light source defines the first image, and the visible light emitted from the active materials enhances one or more characteristics of the first image. In another embodiment, the second light source may be activated when the first light source fails, to define a backup or static image that may be the same as or different from the image produced by the first light source.

    Other inventors
    See patent
  • Camera calibration

    Issued US 9560345

    Embodiments herein include systems, methods and articles of manufacture for calibrating a camera. In one embodiment, a computing system may be coupled to a calibration apparatus and the camera to programmatically identify the intrinsic properties of the camera. The calibration apparatus includes a plurality of light sources which are controlled by the computing system. By selectively activating one or more of the light sources, the computing system identifies correspondences that relate the 3D position of the light sources to the 2D image captured by the camera. The computing system then calculates the intrinsics of the camera using the 3D to 2D correspondences. After the intrinsics are measured, cameras may be further calibrated in order to identify 3D locations of objects within their field of view in a presentation area. Using passive markers in the presentation area, the computing system may use an iterative process that estimates the actual pose of the camera.

    See patent
  • Non-linear photometric projector compensation

    Issued US 9325956

    The present disclosure is related to a method for calibrating a projector. The method includes receiving by a processing element one or more mapping images. After receiving the images, the method includes defining by the processing element a non-linear color mapping function, the color mapping function mapping pixels between the projector and a camera used to capture the mapping images. The method then includes determining by the processing element a compensation image using the color mapping function. The compensation image corresponds to an input image and takes into account variations in the surface onto which the input image is to be projected.
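    A minimal sketch of the compensation idea, under an assumed and much simpler per-pixel model cam = (albedo · proj)^g rather than the full non-linear color mapping function of the patent:

    ```python
    # Toy photometric compensation: fit a per-channel response from a mapping
    # sequence, then invert the model to obtain the compensation image.
    # The power-law model and all values here are illustrative assumptions.
    import numpy as np

    def fit_gamma(proj_levels, cam_levels):
        """Fit g in cam = proj**g from a grey-ramp mapping sequence
        (least squares in log-log space)."""
        p = np.log(np.asarray(proj_levels, float))
        c = np.log(np.asarray(cam_levels, float))
        return float(np.sum(p * c) / np.sum(p * p))

    def compensation_image(target, albedo, g):
        """Projector input making the surface show `target` under the model
        cam = (albedo * proj)**g, i.e. proj = target**(1/g) / albedo."""
        proj = np.asarray(target, float) ** (1.0 / g) / np.asarray(albedo, float)
        return np.clip(proj, 0.0, 1.0)  # clipped where the target is unachievable
    ```

    Projecting the compensation image onto the non-uniform surface reproduces the target wherever it lies within the achievable range; the patented method replaces the power law with a mapping function learned from captured mapping images.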

    See patent
  • Projector drift corrected compensated projection

    Issued US 9319649

    The present invention relates generally to adjusting projected images from a projector. A projected image is compensated to neutralize effects of a projection surface and optimized based on anticipated pixel drift movement.

    See patent
  • Augmenting physical appearance using illumination

    Issued US 9300901

    A system for augmenting the appearance of an object including a plurality of projectors. Each projector includes a light source and a lens in optical communication with the light source, where the lens focuses light emitted by the light source on the object. The system also includes a computer in communication with the plurality of projectors, the computer including a memory component and a processing element in communication with the memory component and the plurality of projectors. The processing element determines a plurality of images to create an augmented appearance of the object and provides the plurality of images to the plurality of projectors to project light corresponding to the plurality of images onto the object to create the augmented appearance of the object. After the images are projected onto the object, the augmented appearance of the object is substantially the same regardless of a viewing angle for the object.

    Other inventors
    See patent
  • Light-based caustic surface calibration

    Issued US 9148658

    A method for performing light-based calibration of optics with caustic surfaces. The method includes mapping a light detecting device to a programmable light source. Then, the method includes operating a calibration light source to direct light onto one or more caustic surfaces of an optical assembly, e.g., an assembly of one or more lenses, facets, lenticules, and lenslets. The method may then involve, with the light detecting device, capturing an image of a projection surface of the optical assembly, which is opposite the one or more caustic surfaces in the optical assembly, as the projection surface is illuminated by the light from the light source. Further, the method includes processing the captured image, along with the mapping of the light detecting device to the programmable light source, to generate a calibration map of the optical assembly including the caustic surfaces.

    Other inventors
    See patent
  • Chromakeyverfahren und Chromakeyvorrichtung zur Aufnahme und Bildbearbeitung von Kamerabildern

    Issued DE 102010014733

    Chroma-key method for capturing and processing camera images (Ca, Cb), in which at least one object is recorded in front of a single-colored image background at an image capture frequency and the background color of the image background is changed while the camera images (Ca, Cb) are being recorded,
    characterized in that
    – the background color is switched during recording of the camera images (Ca, Cb), alternately and synchronized with the image capture frequency, between two complementary colors, so that every two consecutive camera images (Ca, Cb) are recorded with mutually complementary background colors,
    – for each camera image (Ca, Cb) a cut-out mask (Ma, Mb) for the image of the at least one object is created, and from the cut-out masks (Ma, Mb) of every two consecutive camera images (Ca, Cb) a mask maximum (Mmax) is determined,
    – and from the two consecutive camera images (Ca, Cb) an averaged camera image (Cab) is computed.

    Other inventors
    See patent
  • Digitalprojektor und Verfahren zur Erhöhung einer Schärfentiefe eines projizierten Bildes

    Issued DE 102009035870

    Digital projector (1) with a projector lens (2) comprising a light modulation unit (3) with a plurality of independently controllable modulation components (4) for the spatial and temporal modulation of the intensity of light emitted by the digital projector (1),
    characterized by
    a control unit by means of which imaging-relevant properties of an image to be projected (Iin) can be determined and the modulation components (4) can be driven as a function of the determined properties,
    wherein a spatial-frequency-resolved luminance spectrum (L(fx, fy)) of the image to be projected (Iin) is determined as the imaging-relevant property,
    wherein, to determine the spatial-frequency-resolved luminance spectrum (L(fx, fy)), a position-dependent luminance (ILum(x, y)) of the image to be projected (Iin) is determined and a Fourier decomposition of the determined position-dependent luminance (ILum(x, y)) with respect to the spatial frequencies (fx, fy) is carried out.

    Other inventors
    • Oliver Bimber
    • Max Grosse
    • Gordon Wetzstein
    See patent
  • Verfahren zur Erzeugung erweiterter Realität in einem Raum

    Issued DE 102007041719

    Method for creating augmented reality in a room (1), in which images (I, I') of an image sequence (BP) are projected by means of at least one digital video projector (3) onto at least part of the surfaces bounding the room (1) and/or located within the room (1), wherein the images (I, I') are spatially and/or temporally modulated in luminance and/or chrominance, wherein at least one camera (2) synchronized with at least one of the video projectors (3) captures at least part of the room (1), and wherein the images (I, I') are modulated pixel by pixel, characterized in that a just-noticeable difference (Δ) for the human eye is computed for at least one pixel of at least one original image (O) from an original image sequence, and that, in order to embed a structure (C) imperceptible to a viewer or to one of the cameras (2), a pixel at the same position in an image (I) derived from the original image (O) is changed, at least in a red and/or blue and/or green color channel, by at most the just-noticeable difference (Δ) and…

    Other inventors
    See patent

Projects

  • Augmenting Physical Avatars Using Projector-Based Illumination

    One of my most recent projects focused on augmenting physical avatars using projector-based illumination to significantly increase their expressiveness and to enhance projection quality by compensating for various global illumination effects. The results were presented at the ACM SIGGRAPH Asia 2013 conference. For more information, please refer to the URL.

    See project
  • Makeup Lamps: Live Augmentation of Human Faces via Projection

    -

    We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency — an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first method which fully supports dynamic facial projection mapping without the requirement of any physical tracking markers and incorporates facial expressions.
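    The latency-compensation step can be sketched with a generic constant-velocity Kalman filter that predicts each landmark one frame ahead. This is the textbook filter, not the adaptive variant described above, and the frame time and noise levels are illustrative assumptions.

    ```python
    # Constant-velocity Kalman filter predicting a 2D landmark one frame ahead,
    # so the projected image matches where the face will be, not where it was.
    import numpy as np

    DT = 1.0 / 60.0  # assumed frame time

    F = np.array([[1, 0, DT, 0],
                  [0, 1, 0, DT],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)   # state: (x, y, vx, vy)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)    # only position is observed
    Q = 1e-4 * np.eye(4)                    # process noise (assumed)
    R = 1e-2 * np.eye(2)                    # measurement noise (assumed)

    def kalman_step(x, P, z):
        """One predict/update cycle for measurement z; returns the filtered
        state, covariance, and a one-frame-ahead position prediction."""
        # predict
        x, P = F @ x, F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P, (F @ x)[:2]            # position expected at projection time
    ```

    Feeding the predicted (rather than measured) landmark positions into the warp hides one frame of pipeline latency at the cost of small overshoot during fast, non-linear motion, which is why the paper smooths and adapts the filter.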

    See project

Honors & Awards

  • Official Nominee "Best of Disney" Technology Award 2016

    The Walt Disney Company

    TWDC nominated the projection-mapping software tools developed by my team.

  • University Award

    Bauhaus-Universität Weimar, Germany

    https://www.uni-weimar.de/fileadmin/user/uni/zentrale_einrichtungen/uk_kommunikation/Import/files/aktuelles/bogen/2007_4/der_bogen_4_2007.pdf

  • 1st Place ACM Student Research Competition “Real-Time Adaptive Radiometric Compensation", A. Grundhöfer and O. Bimber

    ACM

    Undergrad category
    https://src.acm.org/candidates/2007

  • 1st Place ACM Siggraph Student Research Competition “Real-Time Adaptive Radiometric Compensation", A. Grundhöfer and O. Bimber

    ACM

    Undergrad category
    https://src.acm.org/winners/2007
