Questions from the Deep Learning question papers (Dec & May) – Semester 7
Module 01: Fundamentals of Neural Networks
- What are Feed Forward Neural Networks?
- Explain Gradient Descent in Deep Learning.
- What are the Three Classes of Deep Learning, explain each?
- Design AND gate using Perceptron.
- Suppose we have N input-output pairs. Our goal is to find the parameter that predicts the output y from the input x according to the function y = xw. Write down the sum-of-squares error function E between the predictions and the target outputs y. The parameter w can be determined iteratively using gradient descent; for this error function E, derive the gradient descent update rule w ← w − α·∂E/∂w.
Module 02: Training, Optimization, and Regularization of Deep Neural Networks
- Explain the dropout method and its advantages.
- What are L1 and L2 regularization methods?
- What is the significance of Activation Functions in Neural Networks, explain different types of Activation functions used in NN.
- What are the different types of Gradient Descent methods, explain any three of them.
- Explain early stopping, batch normalization, and data augmentation.
- Explain Stochastic Gradient Descent and momentum-based gradient descent optimization techniques.
- What is an activation function? Describe any four activation functions.
Module 04: Convolutional Neural Networks (CNN)
- Explain Pooling operation in CNN.
- Explain the architecture of CNN with the help of a diagram.
- Explain CNN architecture in detail. Suppose a CNN layer has an input volume of 32×32×3 and ten 5×5 filters with stride 1 and padding 2; calculate the number of parameters in this layer.
- Describe LeNET architecture.
This list includes all the relevant questions.
Sure! Here are simple, basic answers to all of these questions. Let's get started:
**Module 01: Fundamentals of Neural Networks**
1. **What are Feed Forward Neural Networks?**
This is a simple type of neural network in which information travels in one direction only – from the input towards the output. There are no loops or feedback connections.
2. **What is Gradient Descent in Deep Learning?**
It is an optimization method used to train a neural network. By repeatedly adjusting the parameters in the direction that lowers the error, the network reduces its mistakes as it learns.
3. **What are the three classes of Deep Learning?**
– Supervised Learning: we tell the network what the correct output is
– Unsupervised Learning: the network finds patterns on its own
– Reinforcement Learning: the network learns by trial and error
4. **How do we build an AND gate with a Perceptron?**
A perceptron is the simplest neural network. For an AND gate, we take two inputs and set a threshold on their weighted sum. The output fires only when both inputs are on – for example, weights w1 = w2 = 1 with threshold 1.5.
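The thresholding above can be sketched in a few lines of Python (the weights 1, 1 and threshold 1.5 are one valid choice, not the only one):

```python
def and_gate(x1, x2):
    """Perceptron AND gate: fires only when the weighted sum crosses the threshold."""
    w1, w2, threshold = 1, 1, 1.5  # one valid choice of weights and threshold
    return 1 if (w1 * x1 + w2 * x2) >= threshold else 0

# Truth table of AND
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))
```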
5. **Gradient Descent update rule calculation**
In simple words, we calculate the error and then adjust the parameter so the error shrinks. For predictions ŷ = xw, the sum-of-squares error is E = ½ Σᵢ (yᵢ − wxᵢ)². Differentiating gives ∂E/∂w = −Σᵢ xᵢ(yᵢ − wxᵢ), so the update rule is w ← w − α·∂E/∂w.
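A minimal numeric sketch of this update loop in plain Python (the data and learning rate are made-up illustration values):

```python
# Fit y = x * w by gradient descent on E = 0.5 * sum((y - x*w)^2).
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # generated with true w = 2

w, alpha = 0.0, 0.05
for _ in range(200):
    grad = -sum(x * (y - x * w) for x, y in zip(xs, ys))  # dE/dw
    w = w - alpha * grad                                  # w <- w - alpha * dE/dw

print(round(w, 3))  # converges towards 2.0
```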
**Module 02: Training and Optimization of Neural Networks**
1. **What is the dropout method and what are its advantages?**
In dropout, we randomly switch off some neurons during training. This reduces overfitting and makes the network more robust, because no single neuron can be relied on too heavily.
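A sketch of (inverted) dropout on an activation vector in plain Python; the drop probability here is an assumed example value:

```python
import random

def dropout(activations, p_drop=0.2):
    """Inverted dropout: zero each unit with prob p_drop, scale survivors by 1/(1-p_drop)."""
    keep = 1.0 - p_drop
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
print(dropout([1.0, 2.0, 3.0, 4.0]))  # some units zeroed, survivors scaled up
```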
2. **What are L1 and L2 regularization?**
Both methods help protect the network from overfitting by adding a penalty on the weights to the loss. L1 (sum of absolute weights) yields sparse solutions; L2 (sum of squared weights) keeps the weights small.
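The two penalties can be written out directly; the regularization strength `lam` below is an assumed example value:

```python
def l1_penalty(weights, lam=0.01):
    """L1: lam * sum(|w|) -- pushes weights towards exactly zero (sparsity)."""
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam=0.01):
    """L2 (weight decay): lam * sum(w^2) -- keeps all weights small."""
    return lam * sum(w * w for w in weights)

ws = [0.5, -1.0, 2.0]
print(round(l1_penalty(ws), 6))  # 0.01 * 3.5  = 0.035
print(round(l2_penalty(ws), 6))  # 0.01 * 5.25 = 0.0525
```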
3. **What is the role of Activation Functions?**
Activation functions decide whether a neuron activates, and they give the network its non-linearity. Some common types:
– ReLU: sets negative values to zero
– Sigmoid: squashes values into the range 0 to 1
– Tanh: squashes values into the range −1 to 1
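The three functions listed above are only a few lines each in Python:

```python
import math

def relu(x):
    return max(0.0, x)                 # negative values become zero

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # output lies in (0, 1)

def tanh(x):
    return math.tanh(x)                # output lies in (-1, 1)

for f in (relu, sigmoid, tanh):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])
```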
4. **Different types of Gradient Descent**
– Batch Gradient Descent: one update computed over the whole dataset
– Stochastic Gradient Descent: a separate update for every single example
– Mini-batch Gradient Descent: updates computed over small batches of examples
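The three variants differ only in how many examples feed each parameter update; a sketch for the y = xw model (the data, learning rate, and batch size are assumed illustration values):

```python
def epoch(w, xs, ys, lr, batch_size):
    """One pass over the data. batch_size = len(xs) -> batch GD, 1 -> SGD, else mini-batch."""
    for i in range(0, len(xs), batch_size):
        bx, by = xs[i:i + batch_size], ys[i:i + batch_size]
        grad = -sum(x * (y - x * w) for x, y in zip(bx, by))  # dE/dw on this batch
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]  # true w = 2
w = 0.0
for _ in range(100):
    w = epoch(w, xs, ys, lr=0.02, batch_size=2)  # mini-batches of 2
print(round(w, 3))  # converges towards 2.0
```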
5. **Early stopping, batch normalization, and data augmentation**
– Early stopping: stop training early once performance stops improving
– Batch normalization: normalize the data after each layer
– Data augmentation: artificially enlarge the training data (e.g. by flipping or cropping images)
6. **Stochastic Gradient Descent and momentum-based gradient descent**
SGD learns from one example at a time. The momentum method also takes past updates into account through a velocity term, which makes the learning smoother and faster.
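A sketch of the momentum update on a toy quadratic loss E(w) = w², with assumed example values for the learning rate and momentum coefficient:

```python
def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """Momentum: velocity blends past updates with the current gradient."""
    v = beta * v - lr * grad  # accumulate a smoothed direction of descent
    return w + v, v

# Walk down E(w) = w^2 (gradient 2w) starting from w = 5.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, v, grad=2 * w)
print(round(w, 4))  # approaches the minimum at w = 0
```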
**Module 04: Convolutional Neural Networks (CNN)**
1. **What is Pooling in a CNN?**
In pooling, we shrink the image (feature map) while keeping the important features – for example, max pooling keeps only the largest value in each window. This reduces computation.
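A sketch of 2×2 max pooling with stride 2 on a small 4×4 "image" (plain Python lists, no framework assumed):

```python
def max_pool_2x2(image):
    """2x2 max pooling with stride 2: keep the largest value in each window."""
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

img = [[1, 3, 2, 4],
       [5, 6, 7, 8],
       [3, 2, 1, 0],
       [1, 2, 3, 4]]
print(max_pool_2x2(img))  # [[6, 8], [3, 4]] -- the 4x4 input shrinks to 2x2
```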
2. **CNN architecture**
A CNN usually has these layers:
Input Layer → Convolutional Layer → Activation Function → Pooling Layer → Fully Connected Layer → Output
3. **Parameter calculation for a CNN layer**
Here we work out how many parameters a particular layer has, based on the input depth and filter size. Each filter spans the full input depth, so a 5×5 filter on a 32×32×3 input has 5×5×3 = 75 weights plus 1 bias; ten such filters give 10 × 76 = 760 parameters. Stride and padding affect the output size, not the parameter count.
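The count can be checked with a tiny helper (the function name is just for illustration):

```python
def conv_params(filter_h, filter_w, in_depth, num_filters):
    """Parameters of a conv layer: each filter spans the full input depth, plus one bias."""
    return num_filters * (filter_h * filter_w * in_depth + 1)

# 32x32x3 input, ten 5x5 filters (stride and padding don't change the count)
print(conv_params(5, 5, 3, 10))  # 10 * (5*5*3 + 1) = 760
```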
4. **LeNet architecture**
LeNet is a famous CNN architecture that was built to recognize handwritten digits. It stacks multiple convolutional and pooling layers, followed by fully connected layers.
Those were the basic explanations. If you want more detail on any specific point, just ask!