The rendered pages' layout doesn't match main.tex

Time: 2019-12-01 18:17:10

Tags: latex tex

Below is the main.tex code of the document I am rendering.

\documentclass{article}
\usepackage{graphicx}
\usepackage[utf8]{inputenc}

\begin{titlepage}
    \title{Compte rendu de TP traitement image}
    \author{Quentin Tixeront, Maxime Michel}
    \date{November 2019}
\end{titlepage}

\begin{document}

\maketitle

\tableofcontents

\section{Part 1: basic operations on images}
\subsection{Image Histogram}
The muscle and key images are both encoded on 8 bits, so they can show 256 grey values (0 being black and 255 white). The histogram depicts the proportion of each value present in the picture. 

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{muscle_key}
    \caption{Muscle and key images with their histograms}
    \label{fig:muscle}
\end{figure}

We see in figure \ref{fig:muscle} that the key image's contrast is very narrow: there is no great difference between the grey levels, and there are no extreme values such as a bright white or a deep black. The muscle image in figure \ref{fig:muscle}, on the other hand, shows far more contrast; its values are less tightly clustered. 
Accordingly, the histogram of the key shows that the values composing the picture are very close together, while the histogram of the muscle has a larger spectral spread. 


When we look at the muscle's histogram, we see a much higher spike for low grey values; from this we know that white muscle fibres are denser than red ones. Red fibres tend to be more homogeneous, as the spectral spread around high values (dark grey) shows us. 
\newpage
\subsection{Arithmetic operations}
Both images are encoded on 8 bits and hold colour values in that range. But if we add one image to the other, we may oversaturate the result by producing values out of bounds, especially given the spot's histogram, which has two peaks, one at 0 and one at 255. We therefore use a scalar multiplication to avoid oversaturation.
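Written out with equal weights (one common choice), the scaled addition is
\[
I_{\mathrm{add}}(x,y) = \frac{1}{2}\left(I_1(x,y) + I_2(x,y)\right),
\]
which keeps the result within the $[0,255]$ range of an 8-bit image.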

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.5\textwidth]{quantification}
    \caption{Addition with a scalar multiplication}
    \label{fig:quantif}
\end{figure}

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.5\textwidth]{add}
    \caption{Oversaturated addition}
    \label{fig:ad}
\end{figure}


As said previously, removing the scalar multiplication oversaturates the picture, and we lose the effect of the addition. The histogram of such an addition shows that white is predominant and other values are faded. 

If we know both images' histograms, but not the images themselves, we cannot predict the histogram of the resulting image. Indeed, the histograms alone tell us nothing about the spatial distribution of the grey values. As such, we don't know which values will be added to which, and we cannot predict which value will be predominant or which new grey values will appear.

\newpage
\subsection{Negative Image}
A negative image can be obtained with the simple equation $I_{\mathrm{neg}} = 255 - I_{\mathrm{init}}$.
Thus, the histogram resulting from this operation is the mirror image of the original one along the grey-value axis.
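In terms of histograms, if $h(v)$ counts the pixels of value $v$, the negative satisfies
\[
h_{\mathrm{neg}}(v) = h(255 - v).
\]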
\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{negative}
    \caption{Image and its negative, with symmetrical histogram}
    \label{fig:neg}
\end{figure}

\newpage
\subsection{Quantification}   

\begin{figure}[ht]
    \centering
    \includegraphics[width=1\textwidth]{qtfct}
    \caption{Quantified images, respectively 8, 5, 3 and 1 bit}
    \label{fig:qtf}
\end{figure}

We conclude that we lose quality by losing details. Indeed, between the first and last images (encoded on 8 and 1 bit respectively) only two colour values remain, so details have been diluted.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{hist_3b}
    \caption{3bit quantified image histogram}
    \label{fig:h3b}
\end{figure}

We know that the image this histogram comes from has been quantified on 3 bits. Quantifying on 3 bits means we can have up to 8 grey values.
The histogram therefore shows the 8 values composing this image. The range [0-255] has been divided equally into eight parts, and each grey value of the original image has been mapped to the closest quantized value.
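One common way to write this mapping for a quantification on $b$ bits (using truncation rather than rounding) is
\[
I_q = \left\lfloor \frac{I}{2^{8-b}} \right\rfloor \cdot 2^{8-b},
\]
so for $b = 3$ the step between consecutive grey levels is $2^{5} = 32$.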

The least significant bit (LSB) shows only the slightest variations composing the image and, as such, reveals the noise: small variations on a flat colour. Indeed, take the desk. Its colours (in grey scale) should change according to the light, and we can assume rather linearly. Yet we can see many small variations, mostly invisible at first sight. Those variations (+1/-1) are due to additive noise, present mostly in the well-lit areas.

On the most significant bit (MSB), we see the most important variations present in the picture.

\newpage
\section{Part 2: histogram transformations}
\subsection{Image enhancement with histogram modification}
Here we enhance the image of a key. The aim is to distribute the initial values over a larger range with a linear equation, in order to enhance the contrast. The formula giving such a histogram from the picture of the key is \[(I - 50)\cdot\frac{255}{160-50}\]

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{enhaced}
    \caption{Enhanced picture and its original}
    \label{fig:enh}
\end{figure}
We increase the grayscale range of the initial image to obtain the QUITO contrast-enhanced image. QUITO log is what we obtain when applying the log transform to the original image. The log transform brings distant values closer together; hence we are reducing the contrast between various regions of the image.

\newpage
\subsection{Histogram equalization}

We initially have an image, pictured in figure \ref{fig:aq}, whose values are all very close together, centered around 64. The equalization we apply spreads out the most frequent intensity values to enhance the contrast of the image. We can also see that when we equalize the image a second time, the center of the histogram doesn't seem to change (it may have, but the resolution of the histogram doesn't let us see it), while the extremities around 0 and 255 do seem more spread out. It is important to note that the change is very small, and we can wonder whether, in this particular case, it is useful to equalize twice.
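Equalization maps each grey level through the cumulative histogram of the image: for an image of $N$ pixels with histogram $h$, level $r$ is sent to
\[
s = \mathrm{round}\left( \frac{255}{N} \sum_{v=0}^{r} h(v) \right).
\]
After the first pass the cumulative histogram is already nearly linear, so the second mapping is close to the identity, which is consistent with the minimal change observed.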

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.5\textwidth]{equal}
    \caption{Image equalized once and twice, with the original}
    \label{fig:aq}
\end{figure}

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.5\textwidth]{unif_equal}
    \caption{Equalization of a uniform image}
    \label{fig:unif}
\end{figure}

In figure \ref{fig:unif} we can also see how the equalization has affected the original image, but as with the previous one, equalizing the image twice has no noticeable effect, and we can wonder whether doing it twice is useful on any image.

\newpage
\subsection{Thresholding}


\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{threshold}
    \caption{Image of a muscle, filtered with a threshold T=90}
    \label{fig:thresh}
\end{figure}
We see on the histogram that there is a valley next to the peak depicting the dark fibres, which are the most contrasted, and that the bump depicting the grey fibres crosses this valley at a value around 90. We deduce that the threshold between white and grey fibres can be found in this area.
When we set the threshold to this value, we obtain the second picture and histogram. Thresholding sets all values under the threshold to 0 and all values above it to 255. The peak at 255 is higher because of the composition of the initial image.
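Formally, thresholding with $T = 90$ computes
\[
I_T(x,y) = \left\{ \begin{array}{ll} 255 & \mathrm{if}\ I(x,y) > T, \\ 0 & \mathrm{otherwise.} \end{array} \right.
\]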

\newpage
\section{Part 3: linear and non-linear filters}
\subsection{Mean (moving average) filter}

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{blurred}
    \caption{5-bit encoded image before and after applying a blur filter}
    \label{fig:bl}
\end{figure}

By counting the number of values appearing on the histogram, we deduce the minimal number of bits needed to code such an image.
Here, the image is encoded on 5 bits.

Using a 3x3 mean filter (blur) is equivalent to computing, at each pixel, the convolution with a normalized ones(3,3) kernel. In doing so we create new grey values, hence the new histogram. 
When we compute the difference, we see that edges are the parts most sensitive to the blur filter. Indeed, those edges are located in highly contrasted areas, and by computing the mean of nearby pixels we smooth them, fading most of the hard edges.
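The 3x3 mean filter used here corresponds to the normalized kernel
\[
K = \frac{1}{9} \left(\begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array}\right),
\]
and the filtered image is the convolution $I * K$.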

\begin{table}[ht!]
\centering
\begin{tabular}{ |c c c c c| }
\hline
 1 & 2 & 3 & 2 & 1 \\ 
 2 & 4 & 6 & 4 & 2 \\  
 3 & 6 & 9 & 6 & 3 \\
 2 & 4 & 6 & 4 & 2 \\
 1 & 2 & 3 & 2 & 1 \\
 \hline
\end{tabular}
\caption{Matrix resulting from the convolution of two 3x3 mean filters}
\label{table:data}
\end{table}

The resulting images are similar but not identical. Indeed, applying two consecutive 3x3 convolutions amounts to using a single filter of height and width 5. But if we compute the convolution, with Matlab or by hand, we find that the filter resulting from two 3x3 convolutions is not equal to a 5x5 mean filter. 
If we look closely at each matrix, we see that the 5x5 filter removes high frequencies more efficiently because it averages each of the 25 pixels equally, whereas the two 3x3 filters give a higher weight to the central pixel, due to the coefficients composing the combined filter.
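In kernel form, with $B_n$ the $n \times n$ matrix of ones, the comparison reads
\[
\frac{1}{9}B_3 * \frac{1}{9}B_3 = \frac{1}{81} \left(\begin{array}{ccccc}
1 & 2 & 3 & 2 & 1 \\
2 & 4 & 6 & 4 & 2 \\
3 & 6 & 9 & 6 & 3 \\
2 & 4 & 6 & 4 & 2 \\
1 & 2 & 3 & 2 & 1
\end{array}\right)
\neq \frac{1}{25}B_5.
\]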

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{5x5}
    \caption{Comparison between a 5x5 filtered image and an image filtered twice with a 3x3 filter}
    \label{fig:5}
\end{figure}

\newpage
\subsection{Non-linear filter}

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{noise}
    \caption{Comparison between two filters in the presence of noise}
    \label{fig:nois}
\end{figure}

Depending on the noise affecting the picture, we won't use the same filter to recover the information. In fact, salt-and-pepper noise affects particular pixels of the picture, whereas uniform noise affects every pixel. 
When facing localized noise, the corrupted pixel is surrounded by uncorrupted pixels, so when we select the median we are likely to correct the error by picking the value of one of the nearby pixels. Since uniform noise affects every pixel, though not to the same degree, we instead use a blur filter to compute the mean and fade the noise into an approximate value. 

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{noise2}
    \caption{Another example of the use of filters}
    \label{fig:n2}
\end{figure}

As previously said, we see in figure \ref{fig:n2} that, to correct errors due to salt-and-pepper noise, we use a median filter, whereas a blur filter is more effective against uniform noise.

\newpage
\subsection{Edge detection linear filters}

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{Edges}
    \caption{Using the gradient to detect edges}
    \label{fig:grad}
\end{figure}

Computing the X and Y gradient norms of this image lets us see the shape of each object. Indeed, the gradient captures large variations of values along a given direction, and thus detects shapes, since objects and background are strongly contrasted. The gradient norm uses those values and shows us the shapes along both directions.
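The gradient norm combines the two directional responses $G_x$ and $G_y$:
\[
\|\nabla I\| = \sqrt{G_x^{2} + G_y^{2}}.
\]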
\newpage
\subsection{Canny operator}
Edges are more clearly visible on the second image, after the use of the Gaussian filter. Indeed, we smoothed the background and the shapes by correcting the errors. With the noise-induced variations now corrected, they are not detected by the Sobel operator, or at least less often, and thus we obtain a more precise image.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{Sobel}
    \caption{Use of Sobel to detect edges}
    \label{fig:Sob}
\end{figure}

\newpage
\subsection{Laplace operator}
We saw previously that a blur filter smooths the edges of a shape. Since the edges lose their sharpness, the Laplace operator becomes less effective, as the second derivative is also strongly affected by noise.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{Laplacian}
    \caption{Laplacian detection of edges}
    \label{fig:Lapla}
\end{figure}


Here, the Laplacian is not accurate since there is no great difference between the shapes and the background, and noise is present as well. The second derivative is too sensitive to such disturbances to be useful.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{nL}
    \caption{Use of the derivative facing noise}
    \label{fig:n}
\end{figure}

Adding an image to its Laplacian-filtered version enhances the edges of the objects.
The variations associated with the contours are identified thanks to the Laplacian and to the change in the derivative's value when approaching a contour.
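With the standard discrete Laplacian kernel
\[
L = \left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{array}\right),
\]
the enhanced image is $I - (I * L)$ (the sign depends on the kernel convention used).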

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{Combined_Lpl}
    \caption{Enhancing the edges using Laplacian filter}
    \label{fig:Combined}
\end{figure}


The second derivative brings more information when the change in value between two zones is strong.


\newpage
\section{Part 4: mathematical morphology}

The Erode and Dilate functions affect the pixels surrounding those considered border pixels of a certain value. Erode sets them to 0 and Dilate sets them to 255. It is important to note that these operations occur over a specific, modifiable region size and shape. Here a 3x3 rectangle (in fact a square) was used.

When we apply a negative filter to the image, we are essentially inverting the contour pixels, therefore, when we now erode the image, it is the same as dilating the original image and applying the negative filter. This means that both filters are interchangeable depending on how the image is defined, black on a white background or white on a black background.

Opening is the process of eroding an image and then dilating the resulting image. This process aims to remove small objects from the foreground of an image and place them in the background. It is also used to identify specific shapes in an image in the sense that opening finds things into which a structuring element can fit. In our examples the structuring element is a 3x3 square.

Closing is the process of dilating then eroding an image; it is used to remove holes in the foreground, i.e. removing small background islands and turning them into foreground.
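In the usual notation, with a structuring element $B$, erosion $\ominus$ and dilation $\oplus$:
\[
A \circ B = (A \ominus B) \oplus B \quad \mbox{(opening)}, \qquad
A \bullet B = (A \oplus B) \ominus B \quad \mbox{(closing)}.
\]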

This is visible when we look at the resulting images after the Opening and Closing operations.


\newpage
\section{Part 5: image segmentation}
\subsection{Segmentation based on automatic thresholding }

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{segment}
    \caption{Segmenting a picture}
    \label{fig:seg}
\end{figure}

Otsu’s method: In the simplest form, the algorithm returns a single intensity threshold that separates pixels into two classes, foreground and background. The main idea is to either minimize intra-class variance or maximize inter-class variance which means finding a value for which points of both classes are the most separated and points within a class are closest together.
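For a candidate threshold $t$, the inter-class variance maximized by Otsu's method is
\[
\sigma_b^{2}(t) = \omega_0(t)\,\omega_1(t)\left(\mu_0(t) - \mu_1(t)\right)^{2},
\]
where $\omega_i$ are the class probabilities and $\mu_i$ the class means on either side of $t$.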

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{Otsu}
    \caption{OTSU method applied for segmentation}
    \label{fig:Ot}
\end{figure}

White areas are what Otsu considers as the foreground of the image and black areas are the background.

Here, in figure \ref{fig:Ot}, Otsu does the same thing, except that the two regions, background and foreground, aren't very well separated. 

\newpage
\subsection{Split and merge segmentation}

The split segmentation, as the name implies, splits the image into square regions. The size of the regions depends on the variation within it. Small regions indicate that there are a lot of small variations in that area and large regions indicate that the area is very similar all around. Small regions indicate that there is more work required on said region since more is “happening”.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{splitted}
    \caption{Representation of the splitting method}
    \label{fig:spliiit}
\end{figure}

Changing the value of the standard deviation of the split segmentation allows us to define how similar values have to be for them to be considered as belonging to the same region. A higher standard deviation indicates that we consider that larger differences in values are still similar, on the other hand, a smaller standard deviation means that values have to be very close for them to be considered as belonging to the same region. This is what we call the homogeneity criterion.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{derivation}
    \caption{We need to choose a good value in order to use splitting}
    \label{fig:deriv}
\end{figure}

In our example, a value between 10 and 20 seems to be the most appropriate for figure \ref{fig:deriv}.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{merged}
    \caption{Merging an image using the split method}
    \label{fig:erged}
\end{figure}


Here we can see again how the split has divided the image; after applying the merge to it, we obtain the segmented image. Although the merge colouring seems random, each colour indicates a different region, and depending on the deviation given to the merge, regions will be differentiated more or less from one another. Here we used the value 3 to merge the regions, and we can see that it has broadly considered all regions to be different from one another, meaning each region will have a specific grayscale value associated with it. Note that the ordering of colours has no importance, but two regions coloured the same are considered the same region and have the same final grayscale value.

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.75\textwidth]{wellmerged}
    \caption{Merging an image with a higher standard deviation}
    \label{fig:wel}
\end{figure}


Here, with a value of 10, we can see that the merge has combined more regions, determining that most of the torus has the same grayscale value, while the background is split into two separate regions (shown by the two different colours). If the distance between two regions is below the standard deviation (here 10), they are considered the same and merged together. Increasing the standard deviation means increasing the distance tolerance between regions.

The segmented image is obtained by setting each pixel in a region to the mean value of the region.


\section{Final exercises}
\subsection{Final exercise 1}
\newpage
\subsection{Final exercise 2}

\end{document}

However, when I compile the code, the images end up misplaced, and the \newpage tag fails to correctly create a new page every time.
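I wonder whether this is related to how [ht] floats are handled: LaTeX is allowed to move a figure declared with [ht] away from where it is written, which can push it past a \newpage. A variant I could try (assuming the standard float and placeins packages), for reference:

```latex
\usepackage{float}     % provides the [H] placement specifier
\usepackage{placeins}  % provides \FloatBarrier

% [H] forces the figure to appear exactly where it is written
\begin{figure}[H]
    \centering
    \includegraphics[width=0.75\textwidth]{muscle_key}
    \caption{Addition without scalar multiplication}
    \label{fig:muscle}
\end{figure}

% flush all pending floats before forcing the page break
\FloatBarrier
\newpage
```

I haven't confirmed this is the cause, though.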

Here are three images showing what I mean.

Page 11 rendering

Page 12 rendering

Page 18 rendering

If you check the code against the renderings, the result is clearly incorrect, and I can't figure out what is wrong or why it is happening.

Thank you.
