ABSTRACT
Imaging in poor weather is often severely degraded by scattering due to suspended particles in the atmosphere such as haze and fog.
In this paper, we propose a novel fast method for defogging a single image of a scene, based on the atmospheric scattering model.
In the inference process of the atmospheric veil, the coarser estimate is refined using a fast edge-preserving smoothing approach.
The complexity of the proposed method is only a linear function of the number of image pixels, which allows a very fast implementation.
Results on a variety of outdoor foggy images demonstrate that the proposed method achieves good restoration for contrast and color fidelity, resulting in a great improvement in image visibility.
Index Terms — defog, atmospheric scattering model, edge-preserving smoothing, atmospheric veil
1. INTRODUCTION
Poor visibility becomes a major problem for most outdoor vision applications.
Bad weather caused by atmospheric particles, such as fog, haze, etc., may significantly reduce the visibility and distort the colors of the scene.
This is due to two scattering processes: (i) light reflected from the object surface is attenuated by particle scattering; and (ii) some of the direct light flux is scattered toward the camera.
These effects result in a contrast reduction that increases with distance.
In computer vision, the atmospheric scattering model is usually used to describe the formation of a foggy or hazy image.
Almost all established methods are based on this model.
Some of them require multiple input images of a scene; e.g., images taken either under different atmospheric conditions [1], or with different degrees of polarization [2].
Other methods attempt to remove the effects of fog from a single image using some form of depth information, either from terrain models [3] or from user input [4].
In practical applications, these conditions are difficult to satisfy, so such approaches are of limited use.
The very latest defogging methods [5-9] are able to defog single images by making various assumptions about the depth or colors in the scene.
In this paper, we propose a novel fast method for defogging a single image based on this scattering model.
The white balance is performed and the atmospheric scattering model is simplified prior to visibility restoration.
In the inference process of the atmospheric veil, the atmospheric veil is first roughly estimated using the minimal component of the color-corrected image and the coarser estimate is then refined using a fast edge-preserving smoothing approach.
Finally, the scene albedo is recovered by inverting this simplified model.
The complexity of the proposed method is only a linear function of the number of input image pixels.
The remainder of this paper is organized as follows.
Section 2 describes the atmospheric scattering model and briefly reviews established constraint-based single image defogging methods.
In Section 3, we present a detailed description of the proposed method.
Section 4 provides a performance comparison with He's method [8].
Finally, concluding remarks are made in Section 5.
2. BACKGROUND
Consider an image taken under foggy or hazy conditions.
The intensity at spatial coordinate x recorded by a monochrome (narrow spectral band) camera is modeled as follows [1]:
$I(x) = A\,\rho(x)\,e^{-\beta d(x)} + A\,(1 - e^{-\beta d(x)})$ (1)
where I(x) is the observed image, A denotes the skylight, ρ(x) is the scene albedo, d(x) is the depth and β denotes the extinction coefficient of the atmosphere.
The former term $A\rho(x)e^{-\beta d(x)}$ on the right-hand side of Eq. (1) is called direct attenuation, and the latter term $A(1 - e^{-\beta d(x)})$ is called airlight.
Since the unknown parameters in Eq.(1) include A, β and d(x), fog removal from a single image is an inherently ill-posed problem.
Recent work on single image defogging imposes constraints upon either the albedo or depth, or on both.
Tan [5] imposes a locally constant constraint on the airlight as a function of the depth to maximize the local contrast of the image.
However, the results tend to be over-saturated because this method does not physically recover the albedo or depth but merely enhances visibility.
In addition, the results contain halo artifacts along depth discontinuities.
Fattal [6] imposes a locally constant constraint on the albedo together with decorrelation of the transmission in a local patch under the assumption that the surface shading and the transmission are locally statistically uncorrelated.
This method requires sufficient color information and variation, since its performance depends heavily on the statistics of the input data.
Kratz et al. [7] impose natural statistics priors on both the depth and albedo values and jointly estimate the depth and albedo through a canonical probabilistic formulation.
Determining scene-specific albedo priors and setting these parameters empirically is tedious, which makes the method unsuitable for practical needs.
He et al. [8] impose constraints on the depth structure based on the empirical observation that, within a local patch, the scene albedo tends to zero in at least one color channel.
Tarel et al. [9] impose constraints on the depth variation by maximizing the atmospheric veil assuming that it must be smooth most of the time.
Most constraint-based defogging methods from a single image are computationally too demanding to fulfill the requirement of a wide range of practical applications.
3. VISIBILITY RESTORATION
Recovering the scene albedo is an inversion process of the formation model of a foggy or hazy image.
The proposed method can be decomposed into three steps: estimation of the skylight A, inference of the atmospheric veil V(x) from the observed image I(x), and recovery of the scene albedo ρ(x) by inverting the scattering model.
3.1. Estimating Skylight
In most previous single-image methods, the skylight A is estimated from the pixel with the highest intensity.
However, white objects in the scene can lead to incorrect skylight estimation.
In [8], a min filter with a large kernel size is used to filter out small white objects, but it may also mistakenly remove a small sky region.
Since the accuracy of the skylight A plays an important part in the restoration process, we present here a more robust approach to search for the sky region.
The min filter is first applied to the minimal color component of I(x) in order to filter out trivial noise and small white objects; the output of the filter at a pixel x is denoted by $I_{\min}(x)$.
Then, we adopt the Canny operator to detect edges in the grayscale version of the image and obtain an edge map.
For every pixel, we compute the ratio of edge pixels to the total number of pixels within a small neighborhood, yielding a percentage map $N_{\mathrm{edge}}(x)$.
Pixels that satisfy both $I_{\min}(x) > T_v$ and $N_{\mathrm{edge}}(x) < T_p$ are selected as candidates for the sky region.
We fix the brightness threshold $T_v$ to 95% of the maximum value of $I_{\min}(x)$ and the flatness threshold $T_p$ to 0.001.
Finally, scanning from top to bottom, the first connected component found is taken as the sky region.
The skylight A is estimated as the maximum value of the corresponding region in the input image I(x).
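As an illustration, the following Python/OpenCV sketch implements this search; the min-filter window size and the Canny thresholds are not specified above and are assumptions chosen for illustration:

```python
import cv2
import numpy as np

def estimate_skylight(img, patch=15, Tp=0.001):
    """Sketch of the sky-region search in Sec. 3.1.
    img: float32 RGB image scaled to [0, 1]; patch size is an assumption."""
    # Minimal color component, then a min filter (grayscale erosion)
    # to suppress trivial noise and small white objects.
    i_min = cv2.erode(img.min(axis=2), np.ones((patch, patch), np.uint8))

    # Edge map of the gray version via the Canny operator
    # (thresholds 100/200 are assumptions).
    gray = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200) / 255.0

    # N_edge(x): local ratio of edge pixels within a small neighborhood.
    n_edge = cv2.boxFilter(edges, -1, (patch, patch))

    # Candidate sky pixels: bright (I_min > T_v) and flat (N_edge < T_p).
    Tv = 0.95 * i_min.max()
    candidates = ((i_min > Tv) & (n_edge < Tp)).astype(np.uint8)

    # Topmost connected component is taken as the sky region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
    if n < 2:                                   # no candidate found:
        return img.reshape(-1, 3).max(axis=0)   # crude per-channel fallback
    sky = labels == (1 + np.argmin(stats[1:, cv2.CC_STAT_TOP]))

    # Skylight A: per-channel maximum of the input over the sky region.
    return np.array([img[..., c][sky].max() for c in range(3)])
```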
3.2. White Balance
The first effect of atmospheric particles is that the scene radiance is attenuated exponentially with the scene depth d(x).
To simplify the description, the medium transmission t(x) can be expressed by the exponential decay $e^{-\beta d(x)}$:
$t(x) = e^{-\beta d(x)}$ (2)
The second effect is the addition of an atmospheric veil:
$V(x) = 1 - t(x)$ (3)
which is an increasing function of the scene depth d(x).
The white balance is first performed to correct the color of the airlight prior to visibility restoration, and this scattering model is thus simplified as:
$I(x)/A = \rho(x)\,t(x) + V(x)$ (4)
Next we restrict the color-corrected image I’(x) between 0 and 1 as:
$I'(x) = \min\{I(x)/A,\ 1\}$ (5)
With this formula, the formation model of a foggy or hazy scene can thus be rewritten as:
$I'(x) = \rho(x)\,t(x) + V(x)$ (6)
This implies that the skylight is set to pure white, $(1,1,1)^{\mathsf{T}}$.
Figure 1(a) shows an image of a hazy scene where the haze appears brownish.
Figure 1(b) shows the result of white balance correction for the airlight.
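A minimal sketch of this color correction, assuming a NumPy image in [0, 1] and the skylight A estimated as in Sec. 3.1:

```python
import numpy as np

def white_balance(img, A):
    """Eqs. (4)-(5): divide out the estimated skylight per channel and
    clip to [0, 1], so the skylight of I'(x) becomes pure white."""
    return np.minimum(img / A, 1.0)
```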
3.3. Atmospheric Veil
3.3.1. Coarse Estimation
The effects of atmospheric particles increase with the distances of scene points from the observer.
The presence of fog in an image thus serves as a strong clue to scene depth.
According to Eq. (6), the atmospheric veil is subject to two constraints: for each pixel, it is non-negative, $V(x) \ge 0$, and it does not exceed the minimal color component of $I'(x)$, i.e., $V(x) \le \min_c I'_c(x)$.
Based on the maximization assumption [9], we thus take the minimum over the three color channels to obtain a rough estimate of the atmospheric veil:
$\tilde{V}(x) = \min_{c \in \{r,g,b\}} I'_c(x)$ (7)
This is consistent with the observation that the intensity of a dark pixel in a foggy image is contributed mainly by the airlight [8].
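The coarse estimate of Eq. (7) then reduces to a per-pixel minimum over the color channels, e.g.:

```python
def coarse_veil(img_wb):
    """Eq. (7): rough veil estimate, the per-pixel minimum over the three
    color channels of the white-balanced NumPy image I'(x)."""
    return img_wb.min(axis=2)
```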
3.3.2. Refinement with WLS-based Smoothing
Note that the variation of the atmospheric veil depends solely on the depth d(x) of the objects, implying that objects at the same depth have the same value of V(x), regardless of their albedo ρ(x).
Therefore, a smoothing operator must be applied to force V(x) to vary smoothly across small neighboring areas.
In [8], the first step in estimating the transmission is in fact a min filtering of $\tilde{V}(x)$, and the second step is a refinement using an image matting approach.
The alpha channel in image matting is introduced for anti-aliasing purposes.
For this reason, it is inappropriate for refining the coarse transmission.
In [9], a variant of the median filter is used to estimate the atmospheric veil.
Since the median filter is not a good edge-preserving filter, its parameters are hard to tune, and improper parameters tend to induce halo artifacts.
In this paper, we apply an edge-preserving smoothing approach based on the weighted least squares (WLS) optimization framework [10] to refine the coarse $\tilde{V}(x)$ and produce the finer estimate V(x).
With WLS-based smoothing, the shape of the signal is not significantly distorted, while achieving stronger smoothing in the regions bounded by edges.
Further, substituting the atmospheric veil V(x) into Eq.(3) yields the medium transmission t(x):
$t(x) = 1 - V(x)$ (8)
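For reference, a sketch of WLS-based smoothing in the spirit of [10] is given below; it uses a direct sparse solve instead of the multi-resolution PCG solver discussed in Section 4, and the values of lam and alpha follow common WLS defaults rather than anything specified here:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_smooth(v, guide, lam=1.0, alpha=1.2, eps=1e-4):
    """Refine the coarse veil v with WLS edge-preserving smoothing [10].
    guide: log-luminance of the input image; lam/alpha are assumed defaults."""
    h, w = v.shape
    n = h * w

    # Negative smoothness weights; small in magnitude across strong edges
    # of the guide, so smoothing stops at depth discontinuities.
    wx = -lam / (np.abs(np.diff(guide, axis=1)) ** alpha + eps)
    wy = -lam / (np.abs(np.diff(guide, axis=0)) ** alpha + eps)
    wx = np.hstack([wx, np.zeros((h, 1))]).ravel()   # pad to full grid
    wy = np.vstack([wy, np.zeros((1, w))]).ravel()

    # Off-diagonals couple each pixel to its right (+1) and lower (+w)
    # neighbor; the diagonal completes M = I + lam * Laplacian(guide).
    off = sp.diags(wx[:-1], -1, shape=(n, n)) + sp.diags(wy[:-w], -w, shape=(n, n))
    off = off + off.T
    M = off + sp.diags(1.0 - np.asarray(off.sum(axis=1)).ravel())

    return spsolve(M.tocsc(), v.ravel()).reshape(h, w)
```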
3.4. Recovering Scene Albedo
Now that the atmospheric veil V(x) and the medium transmission t(x) have been inferred, we can recover the scene albedo by solving Eq.(6) with respect to ρ(x).
The sky is at infinite depth and tends to have zero transmission; meanwhile, according to Eq. (7), the difference between the color-corrected image I'(x) and the atmospheric veil V(x) can be very close to zero.
In this case, the directly recovered scene albedo exhibits large shifts in the sky colors.
To avoid dividing zero by zero (or by very small numbers), we introduce a constant factor K (0 < K < 1).
The scene albedo ρ(x) is recovered by inverting the simplified model:
$\rho(x) = \dfrac{I'(x) - K\,V(x)}{1 - K\,V(x)}$ (9)
where the parameter K is simply fixed to 0.95, which forces distant regions toward white.
For scenes with depth discontinuities and severe occlusions, we truncate albedo values outside the range [0, 1] to remove outliers.
For scenes with gradual depth trends, however, a non-linear mapping function [9] is used to obtain the final tone-mapped image.
According to the statistical observation on a variety of images, the ρ(x) histogram shape allows us to discriminate between these two cases.
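A minimal sketch of this recovery, assuming the reconstruction of Eq. (9) given above and the helper functions sketched in the previous subsections:

```python
import numpy as np

def recover_albedo(img_wb, veil, K=0.95):
    """Eq. (9) as reconstructed above: invert I'(x) = rho(x)t(x) + V(x)
    with t(x) = 1 - V(x); K < 1 keeps the denominator positive and
    pushes distant (veil ~ 1) regions toward white."""
    v = K * veil[..., np.newaxis]          # broadcast over color channels
    # Clipping handles the depth-discontinuity case; scenes with gradual
    # depth trends would use a non-linear tone mapping instead [9].
    return np.clip((img_wb - v) / (1.0 - v), 0.0, 1.0)

# Hypothetical end-to-end use of the sketches above:
# A    = estimate_skylight(img)
# iwb  = white_balance(img, A)
# veil = wls_smooth(coarse_veil(iwb), np.log1p(iwb.mean(axis=2)))
# rho  = recover_albedo(iwb, veil)
```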
4. COMPARISON EXPERIMENTS
The multi-resolution preconditioned conjugate gradient (PCG) is used to solve the WLS optimization.
The performance of this solver is linear in the number of pixels [10]. Therefore, the complexity of the proposed method is also a linear function of the number of input image pixels.
A. Comparison
We present here a comparison with He's work [8].
Figure 2 illustrates a comparison between results obtained by He’s method and our method. As shown in Figure 1(a), the input image shows poor visibility under hazy conditions.
Figure 2(a) shows He's transmission map and haze removal result with λ = 0.0001 and $t_0$ = 0.1.
Figure 2(b) is our result.
As can be seen, our method can achieve better restoration for the fidelity of the colors.
Furthermore, the left image illustrates that edges surrounding smaller features are eroded faster than those around larger features, while strong edges are not blurred.
Therefore, the estimated transmission map is as smooth as possible everywhere, except along the depth discontinuities.
B. More Results
We apply our method to a wide variety of outdoor foggy images.
Figure 3 provides the fog removal results of typical urban and natural scenes.
In the left column are input images of outdoor foggy scenes.
The middle column displays the recovered transmission maps.
The right images are the unveiled results by our method.
As can be seen, our method removes the “veiling” effect caused by atmospheric scattering, effectively restores actual scene colors and contrast, and provides a clearer view of the scene.
Furthermore, since the WLS-based smoothing is able to consistently preserve edges, our results contain few halo artifacts.
5. CONCLUSIONS
In this paper, we have presented a new fast scheme for inferring the atmospheric veil of a foggy scene using an edge-preserving smoothing approach, e.g., WLS-based smoothing.
This scheme relies only on the assumptions that the restored image has higher contrast and that the depth map tends to be smooth except along edges with large depth jumps.
Our method automatically defogs a single image of a scene without requiring any additional information about the atmosphere or scene depths, or any user interactions.
Finally, our method achieves linear complexity, and experimental results demonstrate its effectiveness.