Deep Learning Semester 7 – MU AI-DS [viva QnA]

Module 1: Fundamentals of Neural Networks

Q1: What are Multilayer Perceptrons (MLPs), and why are they important in neural networks?
A1: MLPs are a type of neural network that consists of multiple layers of nodes with activation functions. They are important because they can model complex non-linear relationships by learning from the data.

Q2: How do sigmoid neurons work, and what are their limitations?
A2: Sigmoid neurons squash their input to a value between 0 and 1 using the sigmoid function. Because the sigmoid saturates for large positive or negative inputs, its gradient approaches zero there, which causes the vanishing gradient problem during backpropagation.

Q3: What is the representation power of feedforward neural networks?
A3: Feedforward neural networks can approximate any continuous function, given sufficient neurons and layers, making them powerful in solving complex problems.

Q4: Describe the process of gradient descent in neural networks.
A4: Gradient descent is an optimization algorithm that minimizes the loss function by updating weights iteratively in the direction of the steepest descent.
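A minimal NumPy sketch of the update rule w ← w − η·∇L(w), shown on a small least-squares problem; the data, learning rate, and iteration count here are illustrative choices, not part of the syllabus:

    import numpy as np

    # Gradient descent on the least-squares loss L(w) = ||Xw - y||^2 / (2n)
    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # inputs with a bias column
    y = np.array([2.0, 3.0, 4.0])                        # targets (y = 1 + x)
    w = np.zeros(2)                                      # initial weights
    lr = 0.1                                             # learning rate (step size)

    for _ in range(1000):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the loss w.r.t. w
        w -= lr * grad                     # step in the direction of steepest descent

    print(w)  # converges toward [1, 1], recovering y = 1 + x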

Q5: Can you explain the concept of deep networks and the three classes of deep learning?
A5: Deep networks consist of many stacked layers of neurons. The three classes of deep learning architectures are deep networks for supervised (discriminative) learning, deep networks for unsupervised (generative) learning, and hybrid deep networks that combine both.


Module 2: Training, Optimization, and Regularization of Deep Neural Networks

Q6: What is the difference between feedforward and backpropagation in training a neural network?
A6: Feedforward is the process of passing inputs through the network to get outputs, while backpropagation updates the weights using the gradients of the loss function.

Q7: Explain the importance of different activation functions like ReLU and Softmax in neural networks.
A7: Activation functions introduce non-linearity into the model. ReLU is used for hidden layers as it mitigates the vanishing gradient problem, while Softmax is used in the output layer for classification problems.
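A short NumPy sketch of both functions (the example scores are illustrative):

    import numpy as np

    def relu(z):
        # ReLU: keeps positive values, zeroes out negatives (does not saturate for z > 0)
        return np.maximum(0.0, z)

    def softmax(z):
        # Softmax: turns a vector of scores into a probability distribution
        e = np.exp(z - np.max(z))  # subtract the max for numerical stability
        return e / e.sum()

    scores = np.array([2.0, 1.0, -1.0])
    print(relu(scores))     # [2. 1. 0.]
    print(softmax(scores))  # probabilities summing to 1, largest for the highest score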

Q8: What are the differences between Gradient Descent, Stochastic Gradient Descent, and Mini-batch Gradient Descent?
A8: Gradient Descent uses all data for each update, while Stochastic uses one sample, and Mini-batch uses a subset of the data, providing a balance between computation and convergence speed.
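The three variants differ only in how much data feeds each update; below is a sketch of the generic mini-batch loop (batch_size = n recovers batch Gradient Descent, batch_size = 1 recovers SGD), with a hypothetical least-squares gradient used as the example:

    import numpy as np

    def minibatch_sgd(X, y, grad_fn, w, lr=0.01, batch_size=32, epochs=10):
        n = len(y)
        for _ in range(epochs):
            idx = np.random.permutation(n)                # reshuffle every epoch
            for start in range(0, n, batch_size):
                batch = idx[start:start + batch_size]
                w -= lr * grad_fn(X[batch], y[batch], w)  # update from this subset only
        return w

    # Illustrative usage with the least-squares gradient:
    X = np.random.randn(100, 3)
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w
    grad = lambda Xb, yb, w: Xb.T @ (Xb @ w - yb) / len(yb)
    print(minibatch_sgd(X, y, grad, np.zeros(3), lr=0.1, epochs=50))  # close to true_w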

Q9: How does the Adam optimization algorithm work, and why is it popular?
A9: Adam combines the benefits of momentum-based optimization and adaptive learning rates, making it efficient and widely used in practice.
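A sketch of a single Adam update in NumPy, following the standard published update rule (the default hyperparameters are shown):

    import numpy as np

    def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * grad           # first moment: moving average of gradients (momentum)
        v = beta2 * v + (1 - beta2) * grad ** 2      # second moment: moving average of squared gradients
        m_hat = m / (1 - beta1 ** t)                 # bias correction for the zero-initialised moments
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step adapts to the gradient scale
        return w, m, v

In practice the optimizer keeps m, v, and the step counter t as running state across updates.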

Q10: What is overfitting, and how do regularization techniques like L1 and L2 help prevent it?
A10: Overfitting occurs when a model performs well on training data but poorly on new data. L1 and L2 regularization penalize large weights to prevent overfitting by adding a regularization term to the loss function.
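A sketch of how the penalty enters the training objective (the weighting coefficients l1 and l2 are illustrative hyperparameters):

    import numpy as np

    def regularized_loss(data_loss, weights, l1=0.0, l2=0.0):
        # total loss = data loss + l1 * sum(|w|)  (L1: encourages sparse weights)
        #                        + l2 * sum(w^2)  (L2: shrinks weights toward zero)
        w = np.concatenate([wi.ravel() for wi in weights])
        return data_loss + l1 * np.abs(w).sum() + l2 * np.square(w).sum()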

Q11: Describe the process of data augmentation and its significance in training deep neural networks.
A11: Data augmentation generates new training samples by applying transformations (like rotation, cropping) to existing data, which helps to increase the diversity of the dataset and prevent overfitting.
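A sketch using Keras' ImageDataGenerator; the transformation ranges are illustrative and should suit the task (e.g., horizontal flips make sense for natural photos but not for digits). The model and training arrays referenced in the commented line are assumed to exist already:

    import tensorflow as tf

    # Random transformations applied on the fly while training
    datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=15,       # rotate up to ±15 degrees
        width_shift_range=0.1,   # small horizontal shifts
        height_shift_range=0.1,  # small vertical shifts
        zoom_range=0.1,          # random zoom in/out
        horizontal_flip=True,    # mirror images left-right
    )
    # model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)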

Q12: What is the role of batch normalization in deep learning?
A12: Batch normalization normalizes activations across a mini-batch, speeding up training and making the model more robust to initialization issues.
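The core computation, sketched in NumPy for a batch of feature vectors (gamma and beta are the learnable scale and shift parameters):

    import numpy as np

    def batch_norm(x, gamma, beta, eps=1e-5):
        mu = x.mean(axis=0)                    # per-feature mean over the mini-batch
        var = x.var(axis=0)                    # per-feature variance over the mini-batch
        x_hat = (x - mu) / np.sqrt(var + eps)  # normalise to zero mean, unit variance
        return gamma * x_hat + beta            # let the network rescale and shift as needed

At inference time the layer uses running estimates of the mean and variance collected during training.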


Module 3: Autoencoders – Unsupervised Learning

Q13: What is an autoencoder, and how does it differ from supervised learning models?
A13: Autoencoders are unsupervised models used for learning compressed representations of data. Unlike supervised models, they don’t rely on labeled outputs and instead reconstruct input data.

Q14: What is the difference between a linear autoencoder and an overcomplete autoencoder?
A14: A linear autoencoder uses only linear activations, so it can learn at most a linear projection of the data (closely related to PCA), while an overcomplete autoencoder has more hidden units than input units, allowing a richer representation but risking that the network simply copies its input unless it is regularized.

Q15: Explain the significance of denoising autoencoders.
A15: Denoising autoencoders learn to reconstruct original data from noisy inputs, improving robustness and generalization capabilities.

Q16: How can autoencoders be used for image compression?
A16: Autoencoders can reduce the dimensionality of image data by learning compact representations, which can then be decoded back into images, effectively compressing the data.
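A minimal dense autoencoder sketch in Keras, assuming 28×28 images flattened to 784 values and a 32-dimensional code (roughly a 24× reduction); the layer sizes are illustrative:

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(784,))
    code = layers.Dense(32, activation="relu")(inputs)       # encoder: compress to 32 numbers
    outputs = layers.Dense(784, activation="sigmoid")(code)  # decoder: reconstruct the image
    autoencoder = tf.keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

    # Training uses the input as its own target (unsupervised), e.g. on MNIST:
    # (x_train, _), _ = tf.keras.datasets.mnist.load_data()
    # x_train = x_train.reshape(-1, 784) / 255.0
    # autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)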


Module 4: Convolutional Neural Networks (CNN) – Supervised Learning

Q17: What is a convolution operation in CNNs, and why is it important?
A17: The convolution operation involves applying a filter to input data to detect features like edges. It’s important because it reduces the number of parameters and computation in the network.
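A sketch of the valid (no padding) operation in NumPy; note that deep learning frameworks actually compute cross-correlation, i.e. the filter is slid over the image without being flipped:

    import numpy as np

    def conv2d(image, kernel):
        H, W = image.shape
        kH, kW = kernel.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                # dot product between the filter and the local patch it covers
                out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
        return out

    edge_filter = np.array([[1.0, -1.0]])  # responds to horizontal intensity changes
    print(conv2d(np.eye(4), edge_filter))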

Q18: How do padding and stride affect the output of a CNN?
A18: Padding adds zeros around the input to control the output size, while stride determines the step size of the convolution filter, affecting the spatial resolution of the output.
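The standard output-size formula makes the effect concrete; a quick check, assuming square inputs and filters:

    def conv_output_size(W, F, P, S):
        # output width = floor((W - F + 2P) / S) + 1
        return (W - F + 2 * P) // S + 1

    print(conv_output_size(W=28, F=3, P=1, S=1))  # 28: "same" padding keeps the size
    print(conv_output_size(W=28, F=3, P=0, S=1))  # 26: no padding shrinks the output
    print(conv_output_size(W=28, F=3, P=1, S=2))  # 14: stride 2 halves the resolution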

Q19: Compare a fully connected neural network to a CNN.
A19: Fully connected networks treat all inputs equally, whereas CNNs take into account the spatial structure of the input, making them better suited for image-related tasks.

Q20: What are pooling layers in CNNs, and how do they contribute to the network?
A20: Pooling layers reduce the spatial dimensions of the feature maps by summarizing neighboring values (e.g., taking the maximum or the average over small windows), making the network more efficient and more invariant to small translations of the input.

Q21: Explain the architecture of ResNet and its contribution to solving the vanishing gradient problem.
A21: ResNet introduces skip connections that allow gradients to flow through the network more effectively, addressing the vanishing gradient problem in very deep networks.
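A sketch of a basic residual block in Keras functional style (batch normalisation omitted for brevity; it assumes the input already has `filters` channels, otherwise real ResNets apply a 1×1 convolution to the shortcut):

    import tensorflow as tf
    from tensorflow.keras import layers

    def residual_block(x, filters):
        # y = F(x) + x : the identity skip connection gives gradients a direct path backward
        shortcut = x
        y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        y = layers.Conv2D(filters, 3, padding="same")(y)
        y = layers.Add()([y, shortcut])
        return layers.Activation("relu")(y)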


Module 5: Recurrent Neural Networks (RNN)

Q22: What is the sequence learning problem, and how do RNNs address it?
A22: The sequence learning problem involves modeling data that has a temporal or sequential order. RNNs address this by using hidden states to capture information from previous time steps.

Q23: Explain the backpropagation through time (BPTT) algorithm in RNNs.
A23: BPTT is an extension of backpropagation that unfolds the RNN through time and computes gradients at each time step to update weights.

Q24: What are the main limitations of vanilla RNNs, and how do LSTMs address them?
A24: Vanilla RNNs struggle with long-term dependencies due to vanishing gradients. LSTMs use gating mechanisms to control the flow of information, enabling them to capture long-term dependencies.
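The gating mechanism, sketched for a single time step in NumPy (W, U, and b hold the four gates' parameters stacked together; providing correctly shaped parameters is the caller's responsibility):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, W, U, b):
        z = W @ x + U @ h + b        # all four gate pre-activations at once
        n = len(h)
        f = sigmoid(z[0:n])          # forget gate: what to erase from the cell state
        i = sigmoid(z[n:2*n])        # input gate: what new information to write
        o = sigmoid(z[2*n:3*n])      # output gate: what to expose as the hidden state
        g = np.tanh(z[3*n:4*n])      # candidate values to write
        c = f * c + i * g            # additive cell update -> gradients survive long sequences
        h = o * np.tanh(c)
        return h, c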

Q25: How does the Gated Recurrent Unit (GRU) differ from LSTM?
A25: GRUs are a simplified version of LSTMs with fewer gates, making them faster to train while still addressing the vanishing gradient problem.

Q26: What is the vanishing gradient problem, and how does it affect RNN training?
A26: The vanishing gradient problem occurs when gradients become too small during backpropagation, slowing down learning or making it impossible. This is particularly problematic in RNNs over long sequences.


Module 6: Recent Trends and Applications

Q27: What is a Generative Adversarial Network (GAN), and how does it work?
A27: A GAN consists of two networks, a generator and a discriminator, competing against each other. The generator tries to create fake data, while the discriminator tries to distinguish between real and fake data.
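A structural sketch of the two players in Keras for flattened 28×28 images; the layer sizes are illustrative, and the training loop, which alternates discriminator and generator updates on the adversarial loss, is omitted:

    import tensorflow as tf
    from tensorflow.keras import layers

    LATENT_DIM = 100  # size of the random noise vector fed to the generator

    generator = tf.keras.Sequential([
        tf.keras.Input(shape=(LATENT_DIM,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(784, activation="sigmoid"),  # a fake image, flattened 28x28
    ])

    discriminator = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),    # probability that the input is real
    ])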

Q28: How are GANs used in image generation?
A28: GANs generate new images by learning the underlying distribution of a training dataset and using that knowledge to create realistic images.

Q29: What are DeepFakes, and what role do GANs play in creating them?
A29: DeepFakes are synthetic media, typically created with GAN-based models, in which one person’s face or voice is superimposed onto another’s; they can be used for benign purposes as well as malicious ones such as impersonation and misinformation.


Practical Questions

Q30: Implement a Multilayer Perceptron algorithm to simulate an XOR gate. Explain how it works.

A30:
The XOR problem is not linearly separable, which means that a simple perceptron cannot solve it. A Multilayer Perceptron (MLP) with at least one hidden layer can learn the XOR function due to its ability to model non-linear decision boundaries.

Working of MLP for XOR Gate:

  1. Input Layer: The input layer has two neurons (for two binary inputs: x1 and x2).
  2. Hidden Layer: The hidden layer typically has two neurons (minimum required for XOR), each applying a non-linear activation function (such as ReLU or Sigmoid).
  3. Output Layer: The output layer has one neuron, representing the XOR output. The final output is passed through a non-linear activation function (often Sigmoid) to produce a value between 0 and 1.

The MLP learns the weights of connections between the layers using backpropagation and gradient descent to minimize the loss (e.g., mean squared error) between predicted and actual outputs. The XOR outputs are:

  • Input (0, 0) → Output 0
  • Input (0, 1) → Output 1
  • Input (1, 0) → Output 1
  • Input (1, 1) → Output 0

During training, the MLP adjusts its weights to fit this mapping.
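A compact Keras sketch of such an MLP; it uses a slightly wider hidden layer than the theoretical minimum of two neurons so that training converges reliably, and the hyperparameters are illustrative:

    import numpy as np
    import tensorflow as tf

    # XOR truth table
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
    y = np.array([[0], [1], [1], [0]], dtype=np.float32)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(8, activation="tanh"),     # hidden layer bends the decision boundary
        tf.keras.layers.Dense(1, activation="sigmoid"),  # output in (0, 1)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.05), loss="binary_crossentropy")
    model.fit(X, y, epochs=500, verbose=0)    # tiny problem; rerun if a run gets stuck
    print(model.predict(X).round().ravel())   # expected: [0. 1. 1. 0.]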


Q31: Implement a backpropagation algorithm to train a deep neural network with two hidden layers.

A31:
Backpropagation is an algorithm used to train deep neural networks by minimizing the error between predicted and actual outputs.

Steps to Implement:

  1. Initialize Weights and Biases: Randomly initialize the weights and biases for all neurons in the network.
  2. Forward Pass: For each input, compute the output by passing it through the network, layer by layer:
  • Multiply the input by the weights, add bias, and apply an activation function (e.g., ReLU for hidden layers, softmax or sigmoid for output layer).
  3. Compute Loss: Calculate the error or loss using a suitable loss function (e.g., cross-entropy for classification).
  4. Backpropagation:
  • Compute the gradient of the loss with respect to the weights of the output layer by applying the chain rule of calculus.
  • For hidden layers, propagate the error backward, adjusting the weights using the gradient of the loss with respect to each weight (via chain rule).
  5. Update Weights: Use an optimization algorithm (e.g., Stochastic Gradient Descent or Adam) to update the weights in the direction that minimizes the loss.
  6. Repeat: Repeat the forward and backward passes for all training samples for several epochs until the loss converges.

The two hidden layers allow the network to learn complex patterns in the data, and backpropagation ensures that all layers learn effectively.
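A NumPy sketch that follows the six steps above on a small, hypothetical binary-classification dataset (label 1 when the two inputs have the same sign); layer widths, learning rate, and epoch count are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy data
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

    # 1. Initialise weights and biases (two hidden layers + output layer)
    W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
    W2, b2 = rng.normal(scale=0.5, size=(8, 8)), np.zeros((1, 8))
    W3, b3 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
    lr = 0.5

    for epoch in range(2000):
        # 2. Forward pass
        h1 = np.tanh(X @ W1 + b1)
        h2 = np.tanh(h1 @ W2 + b2)
        out = sigmoid(h2 @ W3 + b3)

        # 3. Loss: binary cross-entropy; its gradient at the output logits is (out - y)
        d_out = (out - y) / len(y)

        # 4. Backpropagation: apply the chain rule layer by layer
        dW3, db3 = h2.T @ d_out, d_out.sum(axis=0, keepdims=True)
        d_h2 = (d_out @ W3.T) * (1 - h2 ** 2)   # tanh'(z) = 1 - tanh(z)^2
        dW2, db2 = h1.T @ d_h2, d_h2.sum(axis=0, keepdims=True)
        d_h1 = (d_h2 @ W2.T) * (1 - h1 ** 2)
        dW1, db1 = X.T @ d_h1, d_h1.sum(axis=0, keepdims=True)

        # 5. Update weights (plain gradient descent); 6. repeat every epoch
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2), (W3, dW3), (b3, db3)):
            param -= lr * grad

    print((out.round() == y).mean())  # training accuracy, typically close to 1.0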


Q32: Design and implement a CNN model for digit recognition. How does it handle image data differently than MLPs?

A32:
Convolutional Neural Networks (CNNs) are specifically designed to work with image data, unlike MLPs, which treat all input data equally without considering spatial relationships.

Steps to Implement:

  1. Input Layer: The input layer takes in the image (e.g., a 28×28 grayscale image of a digit from the MNIST dataset).
  2. Convolutional Layers: These layers apply filters (small matrices) that slide over the input image to detect local features like edges, textures, or patterns. The same filter is applied across the image, ensuring parameter sharing, which reduces the number of parameters.
  3. Pooling Layers: Pooling layers (e.g., MaxPooling) reduce the spatial dimensions of the feature maps while preserving important features, making the model more efficient and reducing the risk of overfitting.
  4. Fully Connected Layer: After a series of convolutional and pooling layers, the features are flattened and fed into a fully connected layer for classification.
  5. Output Layer: The output layer has 10 neurons, each representing a digit (0–9), and uses the softmax function to output the probability for each class.

Difference from MLP:

  • MLPs do not account for the spatial structure of image data, treating each pixel independently. CNNs, however, exploit the 2D structure of images, detecting patterns with convolutional layers and reducing dimensions with pooling.
  • CNNs are more computationally efficient and accurate for image tasks than MLPs due to local receptive fields, parameter sharing, and down-sampling.
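A Keras sketch of such a CNN trained on MNIST; the layer sizes and epoch count are illustrative:

    import tensorflow as tf
    from tensorflow.keras import layers

    # MNIST: 28x28 grayscale digits, labels 0-9
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0  # add a channel dimension, scale to [0, 1]
    x_test = x_test[..., None] / 255.0

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),   # local feature detectors
        layers.MaxPooling2D(),                     # downsample the feature maps
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),    # one probability per digit
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, batch_size=128, validation_data=(x_test, y_test))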

Q33: Implement LSTM for handwriting recognition. How does LSTM help in sequence learning?

A33:
Long Short-Term Memory (LSTM) networks are a type of Recurrent Neural Network (RNN) that excel at learning from sequential data, such as handwriting.

Steps to Implement:

  1. Input Layer: The input consists of sequences of pen strokes (time steps), each containing the pen’s x, y coordinates and possibly pressure data.
  2. LSTM Layers: LSTM layers are used to process the sequence of pen strokes over time. LSTMs have memory cells and gates (input, forget, output) that decide what information to store or discard, making them well-suited for sequence learning.
  3. Fully Connected Layer: After processing the sequence, the output is passed to a fully connected layer for classification (e.g., predicting a character or word).
  4. Output Layer: The softmax function is applied to predict the most likely character or sequence of characters from the handwriting.

How LSTM Helps in Sequence Learning:

  • LSTMs retain important information from earlier time steps while selectively forgetting irrelevant details, making them ideal for handwriting recognition, where each stroke depends on the previous ones.
  • LSTMs solve the vanishing gradient problem faced by traditional RNNs, allowing them to learn long-term dependencies in the data.
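A Keras sketch of the model side only; the stroke preprocessing is assumed (sequences zero-padded to 100 time steps of (x, y, pressure), classified into 26 characters — all of these sizes are hypothetical):

    import tensorflow as tf
    from tensorflow.keras import layers

    TIME_STEPS, FEATURES, NUM_CLASSES = 100, 3, 26  # hypothetical dataset dimensions

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(TIME_STEPS, FEATURES)),
        layers.Masking(mask_value=0.0),         # ignore zero-padded time steps
        layers.LSTM(128),                       # memory cells + gates summarise the stroke sequence
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(stroke_sequences, labels, epochs=20, validation_split=0.1)  # with real stroke data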

Q34: Implement a GRU for a chatbot application. Explain the architecture and why GRU is chosen over LSTM.

A34:
Gated Recurrent Units (GRUs) are a simplified version of LSTMs that can also handle sequential data efficiently, making them suitable for chatbot applications.

Steps to Implement:

  1. Input Layer: The input consists of sequences of user messages, which are tokenized and converted into word embeddings (e.g., using pre-trained embeddings like Word2Vec or GloVe).
  2. GRU Layers: The GRU processes the sequence of words. Unlike LSTM, which has three gates (input, forget, output), the GRU has only two gates (update and reset). These gates control the flow of information, deciding which part of the input to pass to the next time step.
  3. Fully Connected Layer: The output from the GRU is passed to a fully connected layer to predict the response.
  4. Output Layer: The output is a sequence of words that form the chatbot’s response, generated using a softmax function.

Why Choose GRU Over LSTM:

  • Simplicity: GRUs have fewer parameters than LSTMs, making them faster to train and more computationally efficient, which is beneficial for real-time applications like chatbots.
  • Performance: In many cases, GRUs perform on par with LSTMs while being simpler, particularly when training data is limited or the model needs to be deployed on resource-constrained devices.
  • Efficiency: GRUs are less prone to overfitting due to their simpler architecture, and they can still handle sequence learning effectively by retaining relevant information over time.
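A minimal Keras sketch of the encoder side, framed as intent classification (each predicted intent maps to a canned reply); a full generative chatbot would pair this GRU encoder with a GRU decoder. The vocabulary size, sequence length, and intent count are hypothetical:

    import tensorflow as tf
    from tensorflow.keras import layers

    VOCAB_SIZE, EMBED_DIM, MAX_LEN, NUM_INTENTS = 10000, 100, 30, 20  # hypothetical sizes

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),  # token ids -> dense word vectors
        layers.GRU(128),                          # update + reset gates; fewer parameters than an LSTM
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_INTENTS, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(padded_token_ids, intent_labels, epochs=10, validation_split=0.1)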