Image Segmentation: Extract the Finest Details from Your Images in 2023

Image Segmentation: Definition

In digital image processing, image segmentation is the process of partitioning a digital image into multiple segments or regions. 

In simple words, an image is usually a collection of different objects, and we want to separate those objects from one another. Image segmentation lets us do exactly that. 

An image I can be broken into a set of regions R1, R2, …, Rn whose union makes up the whole image.

For example, an image of a cup and a plate can be separated into two meaningful images of a cup and a plate. 

So, dividing a digital image into multiple regions in order to extract a region of interest is known as image segmentation.

Two Main Principles of Image Segmentation 

A pixel is the smallest programmable element of color in a digital image.

As photons are to light and electrons are to beta rays, pixels are to digital images.

Similarity Principle or Region-Based Approach

Extraction is done by grouping pixels that share common properties, such as intensity, color, or texture.

Techniques used in the Similarity Principle

Threshold method

Threshold Method: Each pixel's intensity is compared with a threshold value, and the image is partitioned accordingly. This is the most basic method and is generally used when the background intensity is low, so the object stands out clearly.

Used for white noise removal.
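As a minimal illustration, thresholding can be sketched in a few lines of numpy (the 5x5 image and the threshold value of 128 are made up for the example):

```python
import numpy as np

# Made-up 5x5 grayscale image: a bright object on a dark background.
image = np.array([
    [ 10,  12,  11,  10,  13],
    [ 12, 200, 210, 205,  11],
    [ 10, 215, 220, 208,  12],
    [ 11, 202, 211, 199,  10],
    [ 13,  12,  10,  11,  12],
], dtype=np.uint8)

threshold = 128               # assumed cut-off between object and background
mask = image > threshold      # True where a pixel belongs to the object
```

Every pixel brighter than the threshold is assigned to the object region and everything else to the background; here `mask` marks the nine bright pixels in the centre.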

Region-based segmentation method

a) Region Growing: a procedure in which pixels are grouped into larger regions, starting from a seed pixel (the point where the procedure begins) and absorbing neighbouring pixels with similar properties. 

b) Region Splitting: In this technique, we start from a region and split it further and further until each sub-region satisfies a grey-level homogeneity condition.

c) Region Merging: This technique is the reverse of region splitting. Here, we combine adjacent regions until they satisfy a certain grey-level condition.

d) Split and Merge: We use a combination of region splitting and merging.
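Techniques (a)-(d) can be sketched with two small numpy helpers, one for region growing from a seed and one for the quadtree split step (the similarity tolerance, the grey-level range criterion, and the toy image are assumptions made for the example; the merge step is omitted for brevity):

```python
from collections import deque

import numpy as np

def region_grow(image, seed, tol=10):
    """(a) Region growing: starting from `seed`, absorb 4-connected
    neighbours whose intensity differs from the seed's by at most `tol`."""
    h, w = image.shape
    seed_val = int(image[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(image[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

def quad_split(image, y, x, size, max_range, blocks):
    """(b) Region splitting: recursively quarter a square block until its
    grey-level range (max - min) is within `max_range`.  Steps (c)/(d)
    would then merge adjacent homogeneous blocks (omitted here)."""
    block = image[y:y + size, x:x + size]
    if size == 1 or int(block.max()) - int(block.min()) <= max_range:
        blocks.append((y, x, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quad_split(image, y + dy, x + dx, half, max_range, blocks)

# Toy 8x8 image: a bright 4x4 patch in the top-left, dark elsewhere.
img = np.full((8, 8), 10, dtype=np.uint8)
img[:4, :4] = 200

region = region_grow(img, (0, 0), tol=30)   # grows over the bright patch
blocks = []
quad_split(img, 0, 0, 8, max_range=10, blocks=blocks)
```

On this toy image the grown region covers exactly the bright patch, and the split step stops after one level because each quadrant is already homogeneous.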

Discontinuity Principle or Boundary Based Approach

Extraction is done by locating a boundary at the points where image characteristics such as pixel intensity, grey level, histogram, texture, or color change abruptly.

Methods for Boundary-based Segmentation

Edge Detection

  • In this method, edges are the key feature. Edges are an important characteristic of an image because they separate objects at the pixel level, much as our eyes perceive object boundaries. 
  • Edges are marked by differences in pixel intensity, grey level, color, texture, etc. 
  • Grey levels change abruptly across an edge. 
  • This type of segmentation often gives very good results.
  • First, candidate edge points are found with a derivative (gradient) operator; further processing is then carried out.
  • Detected edge points are usually joined by edge linking.
  • The linked edges are combined to outline whole objects. 

Local Processing and Edge Detection Operators

A) Local processing

Each edge pixel is characterized by its gradient magnitude and direction; neighbouring edge pixels whose magnitudes and directions are similar can be linked.  

Gradient-based operators, which compute first-order derivatives

Prewitt operator: Detects edges in the horizontal and vertical directions using masks with uniform weights.

Sobel Operator: Also detects edges in the horizontal and vertical directions, but gives extra weight to the pixels nearest the mask centre, which adds a degree of noise smoothing.
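A minimal numpy sketch of the Sobel operator (the step-edge test image is made up; production code would more likely call a library routine such as `scipy.ndimage.sobel`):

```python
import numpy as np

# Sobel masks: horizontal (GX) and vertical (GY) edge responses.
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
GY = GX.T

def sobel_magnitude(image):
    """Gradient magnitude via a 'valid' 3x3 convolution (no padding)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = image[y:y + 3, x:x + 3]
            gx = (patch * GX).sum()
            gy = (patch * GY).sum()
            out[y, x] = np.hypot(gx, gy)
    return out

# Vertical step edge: dark on the left, bright on the right.
img = np.zeros((5, 5))
img[:, 2:] = 100.0
mag = sobel_magnitude(img)
```

The response is large along the step edge and zero in the flat regions.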

Robinson Compass Masks/Direction Masks: In this operator, we take one mask and rotate it through all 8 major compass directions to compute the edge response in each direction.

Kirsch Compass Masks: This is also used for calculating edges in all directions.
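One way to sketch the "rotate one mask through all 8 directions" idea, using the Kirsch north mask as the starting point (the ring-rotation helper is an illustrative assumption, not a library routine):

```python
import numpy as np

# Kirsch mask for the north direction; the other 7 compass masks are
# obtained by rotating the 8 outer coefficients around the centre.
KIRSCH_N = np.array([[ 5,  5,  5],
                     [-3,  0, -3],
                     [-3, -3, -3]])

def rotate_mask(mask):
    """Shift the 8 outer coefficients of a 3x3 mask one ring position."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2),
            (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [mask[r, c] for r, c in ring]
    vals = vals[-1:] + vals[:-1]          # rotate the ring by one position
    out = mask.copy()
    for (r, c), v in zip(ring, vals):
        out[r, c] = v
    return out

masks = [KIRSCH_N]
for _ in range(7):
    masks.append(rotate_mask(masks[-1]))  # all 8 compass directions
```

Applying all 8 masks to a pixel neighbourhood and taking the maximum response gives both the edge strength and its direction.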

Second-order derivative and Gaussian-based operators

Laplacian Operator: This is also a derivative operator used to find edges in an image. The Laplacian is a second-order derivative mask, and it can be further divided into the positive Laplacian and the negative Laplacian.
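The positive Laplacian mask can be sketched directly (the step-edge test image is a made-up example):

```python
import numpy as np

# Positive Laplacian mask: a second-order derivative that responds where
# intensity changes abruptly and is zero in flat regions.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def laplacian(image):
    """'Valid' 3x3 convolution with the positive Laplacian mask."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = (image[y:y + 3, x:x + 3] * LAPLACIAN).sum()
    return out

flat = np.full((4, 4), 50.0)      # uniform region -> zero response
step = np.zeros((4, 5))
step[:, 2:] = 100.0               # vertical step edge
resp = laplacian(step)
```

The response changes sign across the edge; this zero-crossing is what Laplacian-based detectors look for.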

Canny Operator: A Gaussian-based edge detector: the image is first smoothed with a Gaussian filter, which makes the operator relatively insensitive to noise, and edges are then located from the smoothed gradient. It localizes edges well without distorting other image features.

B) Global processing: Edge points are linked globally, typically by means of the Hough transform.
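A minimal sketch of Hough-transform voting (the accumulator resolution and the toy edge points are assumptions made for the example):

```python
import numpy as np

def hough_accumulate(edge_points, shape, n_theta=180):
    """Each edge point (y, x) votes for every line rho = x*cos(theta) +
    y*sin(theta) passing through it; peaks in the accumulator correspond
    to lines shared by many edge points."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))        # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))    # angles 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# Four edge points lying on the vertical line x = 3 in a 5x5 image:
points = [(0, 3), (1, 3), (2, 3), (3, 3)]
acc, diag = hough_accumulate(points, shape=(5, 5))
```

All four points vote into the same (rho = 3, theta = 0) cell, so the accumulator peak recovers the line even though the edge points were processed independently.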

Applications of image segmentation:

1. Artificial intelligence.

2. Medical imaging.

3. Face reconstruction.

4. Satellite image analysis.

5. Robotics.

6. Graphics software.

7. Pattern/face recognition.
