
Convolutional Neural Networks for Signal Processing: From Deep Learning Architectures to Very Deep Learning Architectures (Tutorial)

ResNet contains residual blocks connected by shortcut (skip) connections. These connections let the input propagate directly to later layers, which mitigates the vanishing- and exploding-gradient problems. By preserving gradient magnitude, they make the network easier to train and ensure better, more stable training for very deep networks. A skip connection does not mean the layer is skipped entirely; rather, through training the layer learns to modify its input only minimally, or to apply a transformation when doing so improves the result.
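As a concrete illustration, below is a minimal sketch of a residual block in Keras. The function name residual_block and the layer sizes are illustrative assumptions, not code from the original ResNet; the key point is that the block's input is added back to its output, so gradients can flow directly through the addition.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Shortcut branch: keep a reference to the block's input
    # (assumes the input already has `filters` channels; otherwise a
    # 1x1 convolution would be needed to project the shortcut)
    shortcut = x

    # Main branch: two 3x3 convolutions with batch normalisation
    x = layers.Conv2D(filters, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, (3, 3), padding='same')(x)
    x = layers.BatchNormalization()(x)

    # Skip connection: add the input back before the final activation
    x = layers.Add()([x, shortcut])
    return layers.ReLU()(x)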

Skip connection decisions

Skip connections are an inherent property of these convolutional networks: during training, the network effectively decides which layers should only minimally affect the feature maps, and whether a skip connection takes effect depends on the variant. Some of the advantages of ResNet are as follows:
1. Improved optimisation and convergence speed, because the shortcuts allow stable gradients to flow and the network does not need to learn an entirely new representation inside the feature maps at every layer.
2. The residual approach in ResNet has been adapted into much more complex architectures, such as UNet++, DenseNet, and ResNeXt; the residual connection has become a standard component of deeper architectures.

 

Variants of ResNet

UNet++: a UNet variant with improved skip connection techniques

Figure 1: UNet and UNet++ variant with encoder-decoder architecture, downsampling, and upsampling layers, including enhanced skip connections. Black indicates the original UNet structure; green represents convolutional layers (Zhou et al., 2019).

Network Connectivity

$$
x^{i,j} =
\begin{cases}
H\big(D(x^{i-1,j})\big), & j = 0 \\
H\big(\big[\,[x^{i,k}]_{k=0}^{j-1},\; U(x^{i+1,j-1})\,\big]\big), & j > 0
\end{cases}
$$

The function H(·) represents a convolution followed by an activation function. D(·) and U(·) denote down-sampling and up-sampling layers, respectively, while [ ] denotes concatenation.

Where:

x^{i,j} - Output feature map of the node at down-sampling level i and position j along the skip pathway.

j - Index of the convolution node along the skip pathway. For nodes where j = 0, there is only one input; as j increases, more feature maps are concatenated.

H(·) - Convolution operation followed by an activation function.

D(·) - Down-sampling layer that reduces spatial resolution.

U(·) - Up-sampling layer that increases spatial resolution.

[ ] - Concatenation of multiple inputs.
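To make the connectivity rule concrete, here is a minimal sketch in Keras of how a single node x^{i,j} with j > 0 could be computed. The function names node_conv and unetpp_node, the kernel size, and the up-sampling factor are illustrative assumptions, not code from the UNet++ authors.

import tensorflow as tf
from tensorflow.keras import layers

def node_conv(x, filters):
    # H(.): convolution followed by an activation function
    x = layers.Conv2D(filters, (3, 3), padding='same')(x)
    return layers.ReLU()(x)

def unetpp_node(same_level_outputs, lower_level_output, filters):
    # same_level_outputs: list [x^{i,0}, ..., x^{i,j-1}] from level i
    # lower_level_output: x^{i+1,j-1} from the level below
    # U(.): up-sample the node from the level below (assumes levels differ by a factor of 2)
    upsampled = layers.UpSampling2D(size=(2, 2))(lower_level_output)
    # [ ]: concatenate all inputs along the channel axis
    concatenated = layers.Concatenate()(same_level_outputs + [upsampled])
    # H(.): convolution + activation applied to the concatenation
    return node_conv(concatenated, filters)

For example, x^{1,2} would be computed as unetpp_node([x^{1,0}, x^{1,1}], x^{2,1}, filters).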

DenseNet

DenseNet is a Convolutional Neural Network (CNN) with a unique, densely connected architecture. Unlike conventional CNNs where each layer only receives input from the previous layer, DenseNet uses concatenation layers to allow each layer to access all preceding feature maps. This structure provides several advantages:

Advantages of Feature Map Concatenation

  • Feature Reuse: DenseNet enables each layer to reuse features from preceding layers, enhancing representation learning and minimizing redundancy.

  • Reduction of Redundant Features: The concatenation of feature maps helps DenseNet learn more comprehensive representations and reduces redundant information.

  • Fewer Parameters and Improved Efficiency: Traditional CNNs tend to increase in parameter count and computational cost with each layer; DenseNet's architecture minimizes parameters, making it more computationally efficient.

Implementation of feature map concatenation using Python

Explanation of Feature Map Concatenation

DenseNet Dense Block Example

In a dense block, each layer's input is the concatenation of the original input with all preceding feature maps: L_1 receives X and produces F_1; L_2 receives [X, F_1] and produces F_2; L_3 receives [X, F_1, F_2].
import tensorflow as tf
from tensorflow.keras import layers

# Define a single layer in a dense block
def dense_layer(x, growth_rate):
    # Batch normalization
    x = layers.BatchNormalization()(x)
    # ReLU activation
    x = layers.ReLU()(x)
    # Convolutional layer with growth rate filters
    x = layers.Conv2D(growth_rate, (3, 3), padding='same')(x)
    return x
# Then define the dense block, which concatenates the feature maps

def dense_block(x, num_layers, growth_rate):
    feature_maps = [x]  # Initialize list to keep track of all feature maps in this block

    for _ in range(num_layers):
        # Compute a new feature map from the current (concatenated) input
        layer_output = dense_layer(x, growth_rate)

        # Concatenate all stored feature maps with the new output;
        # this concatenation becomes the input to the next layer
        x = layers.Concatenate()(feature_maps + [layer_output])

        # Update the feature maps list to include the output of the new layer
        feature_maps.append(layer_output)
    
    return x

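As a quick usage check (illustrative only, using the functions defined above with an arbitrary input shape), a four-layer dense block with a growth rate of 12 applied to a 64-channel input should produce 64 + 4 × 12 = 112 output channels:

# Illustrative usage of dense_block (input shape and hyperparameters are arbitrary)
inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = dense_block(inputs, num_layers=4, growth_rate=12)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 32, 32, 112): 64 input channels + 4 * 12 new ones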

Conclusion

In summary, the advancements in DenseNet architecture demonstrate the potential for deeper feature reuse and parameter efficiency. This unique concatenation strategy not only enhances the network's performance but also reduces computational cost, paving the way for more robust deep-learning applications. DenseNet's impact on feature learning and efficiency makes it a valuable tool for modern AI systems. We introduced two very deep learning architectures, their unique layer mechanisms, and the advantages of utilising them. Next, we will go in-depth into their applications in signal processing, how efficient they are to utilise, and their computational expense.

References

(1) Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., & Liang, J. (2019). UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Transactions on Medical Imaging.
