Issue 12.2015

GI VR/AR 2014

Advanced luminance control and black offset correction for multi-projector display systems

  1. Timon Zietlow, University of Technology Chemnitz, Germany
  2. Marcel Heinz, University of Technology Chemnitz, Germany
  3. Guido Brunnett, University of Technology Chemnitz, Germany

Abstract

In order to display a homogeneous image using multiple projectors, differences in the projected intensities must be compensated. In this paper, we present novel approaches to combine and extend existing techniques for edge blending and luminance harmonization to achieve a detailed luminance control. Furthermore, we apply techniques for improving the contrast ratio of multi-segmented displays also to the black offset correction. We also present a simple scheme to involve the displayed context in the correction process to dynamically improve the contrast in brighter images. In addition, we present a metric to evaluate the different methods and their influence on the visual quality.

Published: 2016-03-18


1.  Introduction

Multi-projector display systems use multiple projectors on one screen to enhance the resolution and brightness of the display. Since these projectors cannot be perfectly aligned, the state-of-the-art approach is to calibrate the display geometrically and photometrically ( [ RGM03 ], [ MI06 ], [ Hei13 ]). The photometric calibration deals with the determination of projector-internal attributes and the compensation of differences in the projected intensities. To achieve specific changes in the projected intensities, the projector's response to a specified input value must be known. These color transfer functions are then inverted and used to apply the correction needed to create a homogeneous image.

The goal is to create a display that appears homogeneous while minimizing the negative effects on the display's quality, such as loss in contrast or shifted colors due to imperfections in the measured projector transfer function. Since the images of neighboring projectors overlap, a smooth transition between the different projectors is desirable. This is achieved with edge blending techniques which compensate for the accumulation of the luminance in these zones by gradually fading out each projector.

For technical reasons, most projectors are not able to reproduce a perfect black. The residual light projected by a projector for the input of zero is called the black offset. Especially in the aforementioned overlapping areas, the accumulating black offsets become perceptible when the displayed content is relatively dark. In existing approaches for photometric calibration, differences in the black offset are often neglected. However, with the application of multi-display solutions in a multimedia context, the importance of the black offset correction becomes obvious. To conceal the overlap zones, the displayed image must be modified to adjust for the differences in the black offsets throughout the whole display, significantly reducing the contrast.

In this work, we first present a generalized model combining different approaches for blending and luminance control, extending the existing techniques to allow for better control of the transition in the overlap zones. Second, we propose new approaches to decrease the negative side effects of the black offset correction by correlating the correction with the human contrast sensitivity and the displayed content.

1.1.  Preconditions

For this work, we presume that the geometric registration between each projector's pixel coordinates and some global wall space is known and that mapping functions transforming between these spaces (and between the image spaces of different projectors) are established. We will use the notation x = (x,y) for a two-dimensional vector representing a point, with xi denoting that point in projector i's image space and X the corresponding wall space location.

Furthermore, we assume that the color transfer function fi of each projector is known, so that we can work in a linear color space. Applying a function g in that linear space to some color value c conceptually implies calculating $f_i^{-1}(g(f_i(c)))$.
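As a minimal sketch of this convention (our own illustration, not the authors' implementation: a pure gamma curve stands in for the measured transfer function):

```python
import numpy as np

# Sketch: wrapping a correction g so that it operates in linear color space.
# The transfer function f_i is assumed here to be a simple gamma curve purely
# for illustration; a measured per-channel curve would be used in practice.

def make_transfer(gamma=2.2):
    f = lambda c: np.power(c, gamma)          # input value -> linear intensity
    f_inv = lambda c: np.power(c, 1.0 / gamma)
    return f, f_inv

def apply_in_linear_space(g, c, f, f_inv):
    """Conceptually computes f_inv(g(f(c)))."""
    return f_inv(g(f(c)))
```

For example, halving the linear intensity of a full-white input with gamma 2.0 yields the input value 1/sqrt(2), not 0.5.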

2.  Previous Work

2.1.  Edge blending and luminance correction

The most noticeable non-uniformities present in multi-projector displays are the areas where the images of multiple projectors overlap. Without correction, the luminances of the projectors accumulate. Soft edge blending is a technique to conceal those overlap areas by smoothly fading out the luminance towards the edges, so that the resulting luminance in the overlap area adds up to approximately the same as that of a single projector. If the segments are arranged as a uniform grid, the simplest form of blending can be achieved by applying a linear attenuation along the horizontal and vertical axes [ RWF98 ]. Fluke et al. suggest a nonlinear transition controlled by an exponent [ FBO06 ].

Raskar et al. address the blending problem for arbitrarily arranged projectors and calculate blend masks with a per-pixel attenuation factor [ RWF98 ] for each pixel xm of projector m :

   $A_m(x_m) = \frac{\alpha_m(X)}{\sum_i \alpha_i(X)}$    (1)

with

   $\alpha_i(X) = w_i(x_i)\, d_i(x_i)$    (2)

where wi denotes a weight function which yields 1 if the pixel location lies inside the image of projector i and 0 if it lies outside. The function di represents the Euclidean distance of the pixel to the nearest edge of projector i's image.
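The blend-mask computation of eq. (1)-(2) can be sketched as follows (our own 1-D toy setup with two projectors overlapping along x; all names are ours):

```python
import numpy as np

# For each wall-space point, every projector's weight is its distance to the
# nearest image edge (zero outside the image), normalized over all projectors.

def edge_distance(x, lo, hi):
    """Distance to the nearest edge of the interval [lo, hi]; 0 outside."""
    inside = (x >= lo) & (x <= hi)
    return np.where(inside, np.minimum(x - lo, hi - x), 0.0)

def blend_masks(x, extents):
    """extents: list of (lo, hi) image intervals, one per projector."""
    d = np.stack([edge_distance(x, lo, hi) for lo, hi in extents])
    total = d.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        masks = np.where(total > 0, d / total, 0.0)
    return masks
```

By construction the masks of all projectors sum to 1 wherever at least one projector illuminates the point, which is exactly the property the overlap correction relies on.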

Different enhancements of this general blend scheme have been suggested. Yang et al. propose using an attenuation factor to account for different pixel densities [ YGH01 ]. Jaynes and Webb address visible color banding artifacts by introducing a noise component [ JW09 ].

The luminance of projectors varies greatly, even if identical devices are used. Edge blending results in smooth transitions between the projectors, but does not establish photometric uniformity. The segmented structure of the display is still noticeable due to variations in color and brightness between the segments. In [ MS02 ], a method to achieve uniform luminance across the display is proposed. An image of the display with all projectors showing maximum intensity is captured with a CCD camera, resulting in a map L(X). The lowest luminance Lmin within the display is used as the target luminance for all other pixels. A luminance attenuation mask (LAM) is calculated, yielding an attenuation factor for each pixel of projector i:

   $A_i(x_i) = \frac{L_{min}}{L(X)}$    (3)
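Eq. (3) is a one-liner in practice; as a minimal sketch (assuming `luminance_map` holds the measured L(X) on a wall-space grid):

```python
import numpy as np

# Luminance attenuation mask: scale every pixel so that the display's dimmest
# point sets the common target luminance.
def lam(luminance_map):
    return luminance_map.min() / luminance_map
```

Multiplying the mask back onto the measured luminance yields a uniform surface at L_min, which illustrates why this approach sacrifices so much brightness.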

2.2.  Incorporating the black offset

The black offset of the projectors is often neglected. This is not a problem when relatively bright content is displayed. However, when showing darker content, this leads to noticeable artifacts. In order to achieve a homogeneous output, the black offsets must be taken into account ( [ RGM03 ], [ MI06 ]). As the black offset cannot be reduced, the black level must be artificially increased in some areas to achieve uniformity, so a per-pixel black offset β is added. Thus, a linear model can be used to apply the complete luminance correction to an image:

   $\tilde{I}(x) = \alpha(x) \cdot I(x) + \beta(x)$    (4)

In [ RGM03 ], Raij et al. presented the basic techniques for photometric correction in multi-display systems. This approach achieves homogeneous output by raising the black offset for each pixel to the maximum value Kmax measured throughout the display. Additionally, the α map must be adjusted:

   $\beta(X) = \frac{K_{max} - K(X)}{L(X) - K(X)}$    (5)

   $\alpha(X) = \frac{L_{min} - K_{max}}{L(X) - K(X)}$    (6)

With this β-mask and a correct representation of the transfer function, we can create a corrected image (fig. 1). This calibration has a strong impact on the remaining dynamic range (fig. 9).

Majumder et al. proposed using the human eye's limited capability to identify differences in contrast to reduce the impact of the luminance calibration [ MS05, Maj05, MI06 ]. In this paper, we will refer to this approach as perceptual LAM or simply pLAM. The key idea of pLAM is that, instead of using a constant target luminance throughout the whole display, the target luminance may vary spatially and can be described by the function or mask L'(X).

   $A_i(x_i) = \frac{L'(X)}{L(X)}$    (7)

To achieve a calibration such that the human eye is unable to spot any differences, the computed target values have to fulfill three constraints:

  • The target luminance must not exceed the maximum possible luminance: L'(X) ≤ L(X).

  • The gradient in the displayed intensities must be undetectable by the human eye as

        $\frac{|L'(X) - L'(X')|}{L'(X)} \le \Delta$ for adjacent wall space points $X$ and $X'$    (8)

    with Δ denoting the minimal change in brightness the human eye can detect. Majumder and Stevens propose to choose Δ as

       (9)

    with τ being the brightness threshold that humans can tolerate per degree of visual angle [ MI06 ], d is the maximal viewing distance and r is the resolution in pixels per unit area. According to Majumder and Stevens, τ = 0.01 is a well chosen value due to the contrast sensitivity function of the human eye.

  • The overall luminance shall be maximized:

       $\sum_X L'(X) \rightarrow \max$    (10)

In [ Maj05 ], an algorithm to find L' in linear time is presented. It is also argued that using the same technique for the black masks would be "overkill", since the black offset is several orders of magnitude smaller than the white levels.
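The gradient-limiting idea can be sketched in one dimension (our own simplification; the published algorithm operates on the full 2-D luminance surface, but the same forward/backward sweeps convey the linear-time principle):

```python
import numpy as np

# Start from the measured maximum luminances and clamp the target so that the
# relative change between neighboring pixels never exceeds delta. Each sweep
# only ever lowers values, so the result stays below the measured luminance.
def limit_gradient(L, delta):
    Lp = np.asarray(L, dtype=float).copy()
    for i in range(1, len(Lp)):                 # forward sweep
        Lp[i] = min(Lp[i], Lp[i - 1] * (1.0 + delta))
    for i in range(len(Lp) - 2, -1, -1):        # backward sweep
        Lp[i] = min(Lp[i], Lp[i + 1] * (1.0 + delta))
    return Lp
```

After both sweeps the target satisfies all three constraints on the 1-D domain: it never exceeds the input, its relative gradient is bounded by delta, and it is the pointwise maximum among all feasible targets.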

Figure 1.  The measured black offset for a segment of the display (top) combined with the calculated β-mask (middle) results in a smooth black output (bottom). The hot spot is ignored due to the selection of Kmax inside the blending areas.


3.  Generalized edge blending

We propose a generalization by combining and extending the different blend approaches discussed in section 2.1. This generalization is implemented by modifying the approach of Raskar et al. from eq. (2) to

   $\alpha_i(X) = w_i(X)\, d_i(x_i)^{v_i}\, p_i(X)^{n_i}\, r_i\, z_i(x_i)$    (11)

Contrary to Raskar et al., we do not use the camera space, but a wall space established by a geometrical calibration, which eliminates any perspective distortion introduced by the camera standpoint.

To allow for better control of the transition in the overlap areas, we modify the Euclidean distance term d with an exponent v specified for each projector. This represents the extension of the one-dimensional model proposed in [ FBO06 ] to the two-dimensional case with arbitrarily arranged projectors. Choosing v above 1 results in a slower transition near the edges of the blend zones and helps conceal the blending. Values in the range [1,2] are typically a good choice.

Support for stacking of multiple blended displays (i.e. for multi-segmented passive stereo setups) is achieved by assigning each projector to a layer q; blending shall only occur within the layers, never between projectors across different layers. Thus the weight function is modified to

   $w_i(X) = 1$ if $X \in X_i$ and $q_i = q_m$, and $w_i(X) = 0$ otherwise    (12)

where Xi is the set of all wall space points which are illuminated by projector i. The term p with parameter n represents an extension of the pixel density coefficient proposed in [ YGH01 ]: instead of using a constant value based on the average pixel density of the whole projector, the local pixel density at X is taken into account. Furthermore, the coefficient r allows a weighting factor to be specified for each projector, if desired.

The term z incorporates a noise function similar to the approach proposed in [ JW09 ] as

   $z_i(x_i) = s_i + t_i \cdot \mathrm{noise}(u_i\, x_i + o_i)$    (13)

where si and ti control the range and ui the frequency of the noise data. The vector oi introduces an additional coordinate offset. The resulting noise in the blend mask will depend on the ratio of the zi's of all involved projectors at the same wall space point, so oi should be set differently for each projector, while the noise range and frequency might be set identically for all projectors. We use a two-dimensional Perlin noise as the noise function [ Per85 ].
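The distance exponent and noise terms of the generalized weight can be sketched as follows (our own 1-D illustration: a cheap hash-based value noise stands in for Perlin noise, and the default values of s, t, u and o are purely illustrative):

```python
import numpy as np

# Per-projector blend weight: edge distance raised to an exponent v, perturbed
# by a band of noise whose range (s, t), frequency (u) and offset (o) play the
# roles described in the text.

def value_noise(x, o=0.0):
    """Deterministic stand-in for Perlin noise (1-D, values in (-1, 1))."""
    xi = np.floor(x + o)
    frac = (x + o) - xi
    def h(n):
        return np.modf(np.sin(n * 12.9898) * 43758.5453)[0]
    return (1 - frac) * h(xi) + frac * h(xi + 1)

def blend_weight(dist, v=2.0, s=0.9, t=0.1, u=8.0, o=0.0):
    return np.power(dist, v) * (s + t * value_noise(u * dist, o))
```

Using a different offset o per projector decorrelates the noise between the projectors, so the ratio of the z terms (and thus the dithering of the transition) varies across the blend zone.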

3.1.  Combination of blending and luminance control

LAM and the advanced methods based on that approach (like pLAM and [ SLMG09 ]) do not take the exact geometry of the overlap zones into account. Instead, the luminance data of each projector is artificially attenuated towards the edges with some parameter chosen by the user. In contrast, our model combines the edge blending approach with the perceptual luminance model of eq. (7) by applying the blend masks to the target luminance maps for each projector as

   $A_i(x_i) = \frac{b_i(x_i)\, L'(X)}{l_i(x_i)}$    (14)

where $b_i$ denotes projector $i$'s blend mask.

Note that the denominator does not contain the maximum luminance L of the combined projectors at wall space point X any more, but only projector i's contribution li at the corresponding pixel location xi. This overcomes a shortcoming of the original approach: The ratio of each projector's contribution to the total luminance only depends on the ratio of the luminances of the unattenuated projectors. Using our model, the blending defines the exact ratio each projector should contribute to the target luminance L'. Any method for determining the target luminance can be used as long as the constraint

   $b_i(x_i)\, L'(X) \le l_i(x_i)$ for all $i$    (15)

is fulfilled. This in effect imposes an upper limit onto the luminance in the overlap areas.
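The per-projector attenuation of eq. (14) can be sketched as follows (our own names: `blend` for the blend value b_i, `contribution` for the unattenuated contribution l_i):

```python
import numpy as np

# Each projector's attenuation at a wall-space point is its blend share of the
# target luminance, divided by its own (unattenuated) contribution l_i rather
# than by the total luminance L.
def attenuation(blend, target, contribution):
    A = blend * target / contribution
    return np.clip(A, 0.0, 1.0)   # the constraint of eq. (15) keeps A <= 1
```

When the constraint holds, the attenuated contributions of all projectors add up exactly to the target luminance at that point.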

Figure 2.  Total luminance and each projector's contribution in an overlap area of two projectors. The dashed lines show the transition using pLAM (with λ and an edge attenuation of 50 pixels), the solid lines the combination of pLAM with the generalized blending (v = 2).


In figure 2, the luminance transition between two horizontally overlapping projectors is compared for the pLAM method and the proposed combination with the blending. The limit imposed by the blending has the strongest impact near the edges of the overlap zones and has the effect of smoothing the resulting luminance gradient.

3.1.1.  Idealized blending

In classical edge blending strategies, the image content is modified only in the overlap areas. Assuming ideal projectors with no spatial variance of the luminance, blending will result in a smooth transition of the inter-projector luminance variations. In practice, the intra-projector luminance variations cannot be ignored. The fall-off from the brightest spot towards the corners can be as high as 50% [ Maj02 ]. Without correction, the overlap zones appear too dark.

In [ RBY99 ], the projector's intensity transfer function is presumed to follow a standard RGB model with gamma correction. Using this color space linearization, applying the alpha mask to color values can be simplified to

   $f_i^{-1}\left(\alpha \cdot f_i(c)\right) = \alpha^{1/\gamma} \cdot c$    (16)

so Raskar et al. directly apply the inverse gamma correction to the blend mask. Note that this model is only applicable if the black offset is neglected. In such a scenario, the appearance of the transition can be improved by using a gamma value differing from the real one of the device (fig. 3). A higher value results in a brighter blend zone. However, this approach leads only to a very rough approximation of the luminance correction, as the brightness falloff is implicitly modeled by an exponential function. Furthermore, it is not suitable for modern projectors using intensity transfer functions which cannot be adequately described by a gamma value.
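The simplification of eq. (16) can be verified directly (a minimal sketch assuming a pure gamma transfer f(c) = c**gamma with no black offset):

```python
import numpy as np

# With a pure gamma transfer and no black offset, attenuating by alpha in
# linear space reduces to scaling the input value by alpha**(1/gamma), so the
# inverse gamma can be baked into the blend mask once.
def gamma_blend_mask(alpha, gamma=2.4):
    return np.power(alpha, 1.0 / gamma)
```

Applying the mask in input space and then the gamma curve reproduces exactly alpha times the linear intensity, which is why the shortcut works for gamma devices only.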

To overcome these limitations, we propose an improved approach based on idealized blending. Assuming no intra-projector luminance variations, ideal blending will generate a monotonic luminance transition in the overlap zones. To achieve this effect with real projectors, the actual (measured) luminance of the projectors must be considered. As the blend factors of all projectors add up to 1 at each point of the display, we can use them to interpolate the luminances at the edges of the overlap areas across the blend zone. Thus, we can define a new target luminance function as

   $\tilde{L}(X) = \sum_i \tilde{b}_i(x_i)\, \hat{l}_i(x_i)$    (17)

with

   $\hat{l}_i(x_i) = l_i\left(N_i(x_i)\right)$    (18)

where Ni(xi) denotes the nearest point to xi in the image space of projector i which lies outside of any overlap area, and the ideal blend mask is the blend mask to be applied as if i were an ideal projector. The target luminance can then be used in eq. (14) to generate the final masks. Note that the ideal blend mask and α do not necessarily have to be the same function: different parameters can be used for the ideal blending that defines the target luminance and for actually blending the projectors. In particular, the model from eq. (11) can be used with differing distance exponents to independently control the total luminance transition and the transition in the contributions of each projector.
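The idealized target luminance can be sketched in one dimension (our own simplification: two projectors overlapping on an interval, with the luminance at each projector's nearest non-overlapped point given directly):

```python
import numpy as np

# Inside an overlap zone on [lo, hi], the target luminance interpolates the
# projectors' luminances just outside the overlap, weighted by the ideal blend
# factors (here a normalized power transition with exponent v).
def ideal_target(x, lo, hi, lum_left, lum_right, v=1.0):
    """lum_left / lum_right: luminance just outside the overlap on each side."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    b_right = t ** v / (t ** v + (1 - t) ** v)
    return (1 - b_right) * lum_left + b_right * lum_right
```

At the overlap borders the target matches the adjacent non-overlapped luminance exactly, which is what prevents the discontinuities that plain pLAM produces there.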

Figure 3.  Idealized blending. Left: Theoretical blending ([ RBY99 ]) assuming ideal projectors, using the correct gamma value of 2.4 (dashed line) and a modified gamma value of 2.72 (solid line). Right: The results of the blending with real projectors. The gamma value of 2.72 was empirically determined to find the visually most appealing result in that situation. Additionally, the idealized blending with distance exponents 1 (red) and 2 (green) is shown.

3.1.2.  Combination of idealized blending and LAM

The approach described in section 3.1.1 is suitable for situations where only moderate inter-projector luminance variations are present. In such cases, the maximum brightness of the display is achieved, as the luminance is only modified in the overlap areas. To guarantee a perceptually uniform appearance, the idealized blending can be combined with the LAM-based approaches: instead of replacing L' by the idealized target luminance $\tilde{L}$ in eq. (14), $\tilde{L}$ is used as an additional constraint: $L'(X) \le \tilde{L}(X)$.

4.  Black offset correction using human contrast sensitivity

As already mentioned, in [ Maj05 ] a technique is proposed that incorporates the human contrast sensitivity into the calculation of the target luminance (α-masks). We adapted this approach to the black offset correction (β-masks). This is achieved by measuring the black level K(X) for each wall space location X and applying the gradient-limiting algorithm to the resulting surface to calculate a per-pixel target black level K'(X). In order to work with black levels instead of white levels as in the original approach, two of the three constraints must be modified:

  • The target value K'(X) must be greater than or equal to K(X), since the projector is not able to produce a darker value.

  • We need to minimize the target values in order to maximize the contrast:

       $\sum_X K'(X) \rightarrow \min$    (19)

Apart from these modifications, no changes to the general algorithm are required, and it still runs in linear time. The β-mask is then calculated with the new per-pixel target values:

   $\beta(X) = K'(X) - K(X)$    (20)
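The adapted gradient limiting can again be sketched in one dimension (our own simplification; the published algorithm works on the 2-D black-level surface). Compared to the white-level version, the sweeps must raise values instead of lowering them, since the target has to stay above the measured offsets while being minimized:

```python
import numpy as np

# Black-level variant: the target must stay ABOVE the measured black offsets
# and be as small as possible, so each sweep takes maxima and the relative
# gradient between neighbors is bounded by delta from below.
def limit_black_gradient(K, delta):
    Kp = np.asarray(K, dtype=float).copy()
    for i in range(1, len(Kp)):                 # forward sweep
        Kp[i] = max(Kp[i], Kp[i - 1] / (1.0 + delta))
    for i in range(len(Kp) - 2, -1, -1):        # backward sweep
        Kp[i] = max(Kp[i], Kp[i + 1] / (1.0 + delta))
    return Kp
```

The β-mask then follows from eq. (20) as the difference between the limited target and the measured black level.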

4.1.  Context-dependent correction

According to the Weber-Fechner law, the smallest difference in brightness the human eye can perceive grows with the absolute brightness perceived. We can use this relation to improve the contrast of the projected image. If, for example, a bright image is projected, small differences in darker areas will not be recognized. This makes a correction of the black offset unnecessary there.

To take the projected content into account, we need to analyze it and decide in which areas the projected image is bright enough to require less offset correction. This analysis must be performed for every frame, so a fast approach is necessary.

In order to control the strength of the applied black offset correction, we introduce the correction influence map C(X). This map is inversely related to the brightness of the projected image; we obtain it from the inverted greyscale version of the content to be displayed. A second requirement of the context-dependent correction is that the correction must also be suppressed in a region around bright areas, because the saturating effect of perception extends beyond the bright areas themselves.

Our approach is to sample a lower-resolution version of the image. We create n mipmap layers (LOD1,...,LODn) of our source image. By sampling these low-resolution images, fine details disappear and large areas become blurred (fig. 4). In order to control the size of the blurred areas, we combine different levels of detail of the same image, resulting in a smooth fading without any perceptible distortions. Due to the bilinear filtering used to sample the mipmap, the core area of a bright spot is smaller than the original bright area. We compensate for this effect by applying a gamma correction, which sharpens the transition between bright and dark areas.

During our experiments, we discovered that sampling the mipmap at levels LODn-6.5 and LOD3.5 (i.e., linearly combining four mipmap levels), combined with a gamma correction of 2.5, results in a correction influence map that is not detectable.
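The construction of the influence map can be sketched as follows (our own simplification: repeated box downsampling stands in for GPU mipmap sampling, and the level choice and gamma merely follow the spirit of the values reported above):

```python
import numpy as np

# Build a correction influence map: downsample the greyscale frame to blur it,
# combine two coarse levels, sharpen the bright core with a gamma, and invert,
# so that bright regions receive little black offset correction.

def downsample(img, times):
    out = img
    for _ in range(times):
        h, w = out.shape[0] // 2 * 2, out.shape[1] // 2 * 2
        o = out[:h, :w]
        out = 0.25 * (o[0::2, 0::2] + o[1::2, 0::2] + o[0::2, 1::2] + o[1::2, 1::2])
    return out

def influence_map(grey, levels=(3, 4), gamma=2.5):
    h, w = grey.shape
    acc = np.zeros_like(grey)
    for lv in levels:
        small = downsample(grey, lv)
        # nearest-neighbor upsample back to full resolution
        ys = np.arange(h) * small.shape[0] // h
        xs = np.arange(w) * small.shape[1] // w
        acc += small[np.ix_(ys, xs)]
    bright = np.clip(acc / len(levels), 0.0, 1.0) ** (1.0 / gamma)
    return 1.0 - bright     # inverse of brightness: bright content -> low C
```

The resulting map would then scale the β-mask per pixel, so dark frames receive the full correction while bright frames keep more of their contrast.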

Figure 4.  Different stages in the creation of the influence map: (a) The input image. (b) The linear combination of the third and the fourth mipmap levels. (c) The combination of the sixth- and the seventh-last mipmap levels. (d) The resulting influence map combining both (b) and (c).

5.  Experimental results

We tested our algorithms on a segmented display with a screen size of 3.55 m x 2.20 m, lit by 2 x 2 "ultra short throw" Sanyo PDG DWL-2500 projectors in a front-projection setup. The images of the projectors overlap by about 30 cm horizontally and vertically. The projectors use DLP technology and a color wheel with 6 segments. The geometric registration and color space linearization were done using the methods from [ Hei13 ].

To measure the display output, we used a Nikon D3S CMOS DSLR camera. It is generally understood that the intensity transfer function of CMOS and CCD sensors is nearly linear [ HKT07 ], so the data from the raw files represents values which are approximately proportional to the luminance. We use a modified high dynamic range method based on [ DM97 ] to capture the data; the sensor values are corrected for the exposure parameters used during the different exposures. The data gathered this way is determined up to an arbitrary scale factor. We used an X-Rite i1pro spectrophotometer as a reference to determine an RGB-to-XYZ color transform matrix and to define that scale factor so that the values obtained from the camera roughly correspond to the luminance measured with the spectrometer (in cd/m²).

5.1.  Luminance control

We applied different blending and luminance control techniques to a white image and measured the output of the display. In figure 5 the results are compared for pLAM, the idealized blending and the combination of both.

Figure 5.  Comparison of different luminance control techniques, from top to bottom: pLAM with λ = 650, pLAM with λ = 900, idealized blending with distance exponent 1, and pLAM with λ = 900 combined with idealized blending with distance exponent 1.


Figure 6.  Comparison of overall luminance across two projectors using different luminance control techniques. LAM achieves uniformity but reduces brightness drastically. pLAM improves overall brightness, but leads to perceptible discontinuities in the overlap areas. The smoothing suggested in [ SLMG09 ] reduced luminance by 15%. Our idealized blending achieves a smooth transition by only constraining the brightness in the areas of overlap.


The pLAM approach only limits the absolute value of the luminance gradient, but not its sign. In a typical powerwall setup, projectors overlap near their borders, where there is a steep luminance falloff, so the gradient towards the overlap zone is negative. Since the achievable brightness is much higher in the overlap regions, pLAM immediately tries to increase brightness inside the area, limited only by the maximum (positive) gradient. This results in discontinuities in the target luminance, which are perceptible by human vision. In [ SLMG09 ], this problem is addressed by smoothing the luminance surface using higher order Bézier surfaces. The fitting does not take the luminance constraints into account, so that the resulting masks need to be renormalized to prevent clipping. Unfortunately, this further decreases the overall brightness. In contrast, our method does not introduce such a loss of brightness. Discontinuities at the border of the overlap areas are prevented by the blending constraints and the overall brightness in the blend zones is limited by the idealized blending, so that a smooth transition between the projectors is achieved and the pLAM-typical spikes are avoided (see fig. 6). We found the results visually appealing as long as the overall brightness of neighboring segments does not vary too much. In such cases, we suggest the combination with pLAM.

In figure 7, the horizontal overlap between the two projectors in the bottom row is shown in detail. Using the combination of blending and pLAM further restricts luminance gradients near the edges of the blend zone, resulting in a smoother transition, and leads to marginally lower brightness only in the overlap area. Incorporating blend masks based on the exact geometric registration constitutes an improvement over artificially fading out the luminance towards the edges as suggested by the LAM-based methods. Differences in chromaticity are smoothly blended throughout the whole overlap area, so that the segmented structure of the display is more effectively concealed. Furthermore, the approach is better suited for arbitrarily aligned projectors.

Figure 7.  Horizontal luminance transition in the overlap area of two projectors using different luminance control techniques in combination with blending. Blending constrains the brightness at the borders of the overlap zones and ensures a smooth transition. Luminance control by idealized blending further constrains the brightness across the whole blend zone and prevents pLAM-typical luminance spikes.


5.2.  Black offset correction

In order to compare the results of different approaches for black offset correction, we decided to use two different characteristics: The gradient G(X) of the image to measure how smooth the resulting projection is, and the dynamic range D(X) to determine the negative influence of the correction on the projected image.

5.2.1.  Quality of the correction

In the discrete case, we can define the gradient as

   $G(X) = \max_{X'} \frac{|L(X) - L(X')|}{d(X, X')}$    (21)

with function d representing the Euclidean distance between two pixels and X' representing the pixels adjacent to X. By calculating the gradient for each pixel, the homogeneity of the display can be measured (fig. 8).

5.2.2.  Quality of the calibrated result

We examine the dynamic range of the display to compare the negative side effect caused by black offset correction. The dynamic range can also be calculated per pixel:

   $D(X) = \frac{L(X)}{K(X)}$    (22)

With the dynamic range computed for each pixel, we can evaluate its spatial distribution (fig. 9).
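Both per-pixel metrics can be sketched as follows (our own implementation: the gradient takes the maximum luminance difference to the 4-neighborhood, where adjacent pixels have distance 1, and the dynamic range is taken as the ratio of white level to black level):

```python
import numpy as np

# Per-pixel gradient metric: maximum absolute luminance difference to the four
# direct neighbors (distance 1), with the wrap-around rows/columns introduced
# by np.roll invalidated at the borders.
def gradient_map(L):
    G = np.zeros_like(L)
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        shifted = np.roll(L, (dy, dx), axis=(0, 1))
        diff = np.abs(L - shifted)
        if dy: diff[0 if dy > 0 else -1, :] = 0   # drop wrapped row
        if dx: diff[:, 0 if dx > 0 else -1] = 0   # drop wrapped column
        G = np.maximum(G, diff)
    return G

# Per-pixel dynamic range: ratio of the maximum (white) to the black level.
def dynamic_range_map(white, black):
    return white / np.maximum(black, 1e-9)
```

A low gradient map indicates a homogeneous display, while a high dynamic range map indicates that the correction preserved the projectors' contrast.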

Figure 8.  The measured gradients on the bottom two segments of the display projecting black: no correction (a), [ RGM03 ] (b), and gradient limitation (c). Note that the strong lines marking the border of the overlapping area are fainter in (c) than in (a). Since these borders are easily detectable by the human eye, this represents an improvement. The noise in (b) and (c) is caused by the dithering of the DLP projectors used.

Figure 9.  The measured dynamic range of the projectors at the bottom row of the display: (a) Without any black offset correction. The dynamic range in the overlap zone is significantly reduced due to the restricted brightness caused by the luminance correction. (The circular spots were caused by dust particles on the projector's lens.) (b) The correction according to [ RGM03 ] results in a dramatic reduction of the dynamic range across the whole display. (c) Our method of gradient limitation better preserves the dynamic range the projectors are capable of.

Figure 10.  The photos show three different ways to treat the black offsets: untreated (top), the physically accurate correction (center), and our approach (bottom). All three photos were taken with the same camera settings (F/3.5, 1/6 s, ISO 500).

Figure 11.  The photo shows a campfire: the top right without, the bottom left with the context influence mask enabled. The underlying black offset correction is the physically accurate one. The photo is intentionally overexposed to enhance the visibility of the effect; this also enhances the visibility of the overlap zones. Camera settings: F/3.5, 1/2 s, ISO 500.

Table 1.  The resulting dynamic range of the display for two different black offset correction methods in comparison. The correction according to [ RGM03 ] reduces the dynamic range to less than a third of what the projectors are capable of. In contrast, our method is able to preserve more than two thirds of the dynamic range.

          Uncorr.     [ RGM03 ]   our approach
  min.    328 : 1     114 : 1     242 : 1
  max.    1532 : 1    518 : 1     1240 : 1
  avg.    935 : 1     273 : 1     666 : 1


5.2.3.  Discussion

The results of the contrast sensitivity limited blending are satisfying (fig. 10 and 11). The overlap zones between different segments cannot be perceived from the chosen viewing distance.

In comparison to [ RGM03 ], our approach has a smaller impact on the dynamic range of the display. Fig. 9 shows that the correction according to [ RGM03 ] affects the whole display, whereas our approach primarily affects the areas around the hard edges of the blending areas.

Our method depends on the viewer's distance to the screen. However, in many situations (especially in VR caves), the viewer's distance is known or bounded by the size of the room, so this approach is an adequate solution in those cases.

The use of the correction influence map achieves a better internal contrast ratio, but the difference is barely recognizable. This is due to the Weber-Fechner law mentioned above: the improvement in brightness and contrast exploits the human eye's inability to detect small differences in brightness, which also limits the effect of this approach, since the contrast gained through this technique lies within these small differences. The greatest benefit of the procedure is a visible reduction of color disturbances caused by imperfections in the representation used for the projectors' transfer functions (fig. 11).

6.  Conclusions

In this paper, we presented novel approaches for edge blending, luminance control and black offset calibration for multi-projector display systems. We consider our model as a toolbox for working with such displays. Different blending and luminance control approaches can be used and combined depending on the requirements of the users and specific characteristics of the projectors. Our unified and extensible model is controlled by only a few parameters and proved very useful in practice.

We showed that the existing techniques for luminance control, which did not take the actual geometric registration into account, can be combined with blending techniques. This combination allows us to directly control the exact contribution each projector makes to the overall target luminance at each point of the display. Since blending slowly fades in one projector while fading out the other, it also constrains the brightness at the borders of the blend zones, so that the C1 discontinuities typical for pLAM are implicitly prevented.

The luminance control by the idealized blending model makes it possible to retain the maximum brightness of the device outside of the areas of overlap. It is suitable as long as the brightness differences between the projectors are moderate. If stronger variations occur, it can be combined with pLAM. In both cases, the overall brightness is higher than what is achieved by the luminance smoothing suggested in [ SLMG09 ].

Applying the methods of pLAM also to the black level correction improves the image quality when displaying dark image content. The contrast ratio outside of the overlap zones is significantly increased compared to the naive approach of choosing a constant black level across the whole display.
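The gradient limitation idea, raising the black level only near hard edges instead of to a constant across the whole display, could be sketched as follows (a simplified illustration under assumed names, not the exact implementation: the black level target is lifted iteratively so that no two adjacent pixels differ by more than max_step):

```python
import numpy as np

def limit_gradient(black, max_step, iterations=200):
    """Compute a black level target that stays at or above the measured
    black offset while limiting the difference between adjacent pixels
    to max_step; away from hard edges it decays back to each pixel's
    own black level instead of a global constant."""
    target = black.astype(float).copy()
    for _ in range(iterations):
        # Each pixel must reach at least (brightest 4-neighbor - max_step)
        padded = np.pad(target, 1, mode="edge")
        neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                              padded[1:-1, :-2], padded[1:-1, 2:]])
        required = neighbors.max(axis=0) - max_step
        updated = np.maximum(target, required)
        if np.allclose(updated, target):
            break
        target = updated
    return target

# A bright border pixel (black offset 10) raises only its neighborhood,
# with the lift decaying by max_step per pixel of distance
print(limit_gradient(np.array([[0.0, 0.0, 0.0, 10.0]]), max_step=4))
```

In contrast to lifting the entire display to the brightest black offset, only a narrow band next to each hard edge loses dynamic range, which is the behavior the gradient limitation aims for.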

The dynamic adaptation to the displayed content achieves a better internal contrast ratio, but the difference is barely recognizable. Therefore, it is questionable whether this slight benefit is worth the effort, especially because this correction cannot be pre-calculated offline.

6.1.  Future work

We think that a content-adaptive correction might be applied more successfully to the general target luminance instead of the black offset. Luminance differences throughout the display are most easily spotted when large areas with uniform colors are displayed; when displaying highly structured content, such luminance variations are not noticeable at all. Blending between alpha masks with a totally uniform target luminance (standard LAM) and less restricted variants, depending on the image content, seems feasible. In doing so, overall brightness and contrast could be increased whenever suitable image content is displayed. To implement that approach, reliable methods for deriving such a blend factor from the image content must be researched.

Although our improved black offset correction works well, there are still some problems with the robustness of our system. Since the black offset represents the lowest possible value the projector can display, the transition from one segment to another cannot be smoothed in software. In the corrected segment, there is a hard transition to the overlapping area. Due to the discrete pixel raster this hard edge becomes visible, especially when the screen and projectors move, for example due to changes in temperature. Hard edge blending (like [ CSB03 ]) is an alternative approach to the blending problem. By installing an aperture in the optical system of each projector, a smooth transition from one segment to another is generated, including the black offset. However, appropriately positioning the apertures is difficult, which limits the practicality of the approach. We think that using hard edge blending to only roughly fade out each segment, together with our techniques, could unite the advantages of both approaches.

Bibliography

[CSB03] Robert M. Clodfelter David Sadler Johan Blondelle Large high resolution display systems via tiling of projectors 2003 Barco Simulation Products.

[DM97] Paul E. Debevec Jitendra Malik Recovering High Dynamic Range Radiance Maps from Photographs Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques,  1997 ACM Press/Addison-Wesley Publishing Co. New York, NY, USA pp. 369—378 DOI 10.1145/258734.258884 0-89791-896-7

[FBO06] Christopher J. Fluke Paul D. Bourke David O'Donovan Future Directions in Astronomy Visualization Publications of the Astronomical Society of Australia,  23 2006 1 12—24 DOI 10.1071/AS05034 1448-6083

[HKT07] Rainer Hain Christian J. Kähler Cam Tropea Comparison of CCD, CMOS and intensified cameras Experiments in Fluids,  42 2007 3 403—411 DOI 10.1007/s00348-006-0247-1 0723-4864

[JW09] Christopher O. Jaynes Stephen B. Webb Multiple-Display Systems and Methods of Generating Multiple-Display Images 2009.

[Maj02] Aditi Majumder Properties of Color Variation Across a Multi-Projector Display Conference proceedings - the 22nd International Display Research Conference: Acropolis Convention Centre Nice (France), October 2-4, 2002,  2002 pp. 807—810 2-9507804-3-1

[Maj05] Aditi Majumder Contrast Enhancement of Multi-Displays Using Human Contrast Sensitivity Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05),  2005 IEEE Computer Society Washington, DC, USA Vol. 2 pp. 377—382 DOI 10.1109/CVPR.2005.111 0-7695-2372-2

[MI06] Aditi Majumder Sandy Irani Contrast Enhancement of Images Using Human Contrast Sensitivity APGV '06 Proceedings of the 3rd symposium on Applied perception in graphics and visualization,  2006 ACM New York, NY, USA pp. 69—76 DOI 10.1145/1140491.1140506 1-59593-429-4

[MS02] Aditi Majumder Rick Stevens LAM: luminance attenuation map for photometric uniformity in projection based displays Proceedings of the ACM symposium on Virtual reality software and technology VRST '02,  2002 ACM New York, NY, USA pp. 147—154 DOI 10.1145/585740.585765 1-58113-530-0

[MS05] Aditi Majumder Rick Stevens Perceptual photometric seamlessness in projection-based tiled displays ACM Transactions on Graphics,  24 2005 1 118—139 DOI 10.1145/1037957.1037964 0730-0301

[Per85] Ken Perlin An image synthesizer Proceedings of the 12th annual conference on Computer graphics and interactive techniques SIGGRAPH '85,  1985 ACM New York, NY, USA pp. 287—296 DOI 10.1145/325334.325247 0-89791-166-0

[RBY99] Ramesh Raskar Michael S. Brown Ruigang Yang Wei-Chao Chen Greg Welch Herman Towles Brent Seales Henry Fuchs Multi-projector Displays Using Camera-based Registration Proceedings of the Conference on Visualization '99: Celebrating Ten Years,  1999 IEEE Computer Society Press Los Alamitos, CA, USA pp. 161—168;522 DOI 10.1109/VISUAL.1999.809883 0-7803-5897-X

[RGM03] Andrew Raij Gennette Gill Aditi Majumder Herman Towles Henry Fuchs PixelFlex2: A Comprehensive, Automatic, Casually-Aligned Multi-Projector Display Proceedings of IEEE International Workshop on Projector-Camera Systems (PROCAMS), held in conjunction with the 9th IEEE International Conference on Computer Vision (ICCV), Nice, France, October 2003,  2003.

[RWF98] Ramesh Raskar Greg Welch Henry Fuchs Hal Thwaites Seamless Projection Overlaps using Image Warping And Intensity Blending Future Fusion: Application Realities for the Virtual Age - VSMM'98 Fourth International Conference on Virtual Systems and Multimedia, Gifu, Japan. November 1998,  1998 IOS Press Amsterdam Vol. 2 pp. 517—521 90-5199-470-2

[SLMG09] Behzad Sajadi Maxim Lazarov Aditi Majumder M. Gopi Color Seamlessness in Multi-Projector Displays Using Constrained Gamut Morphing IEEE Transactions on Visualization and Computer Graphics,  15 2009 6 1317—1326 DOI 10.1109/TVCG.2009.124 1077-2626

[YGH01] Ruigang Yang David Gotz Justin Hensley Herman Towles Michael S. Brown PixelFlex: a reconfigurable multi-projector display system Proceedings of the conference on Visualization '01,  2001 IEEE Computer Society Washington, DC, USA pp. 167—174 DOI 10.1109/VISUAL.2001.964508 0-7803-7200-X

Fulltext

License

Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.