Neural Networks Syllabus
This page contains the syllabus of the Neural Networks course of CSIT.
| Title | Neural Networks |
| --- | --- |
| Short Name | |
| Course Code | CSC372 |
| Nature of Course | Theory + Lab |
| Semester | Sixth |
| Full Marks | 60 + 20 + 20 |
| Pass Marks | 24 + 8 + 8 |
| Credit Hrs | 3 |
| Elective/Compulsory | Elective |
Course Description
Course Description:
The course introduces the underlying principles and design of neural networks. It covers the basic concepts of neural networks, including network architectures, learning processes, and single-layer and multilayer perceptrons, followed by recurrent neural networks.
Course Objective:
The course objective is to demonstrate the concepts of supervised and unsupervised learning in conjunction with different neural network architectures.
Units and Unit Content
- 1. Introduction to Neural Networks
- teaching hours: 4 hrs
Basics of neural networks and the human brain, Models of a Neuron, Neural Networks Viewed as Directed Graphs, Feedback, Network Architectures, Knowledge Representation, Learning Processes, Learning Tasks
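As a minimal illustration of the neuron model in this unit, the sketch below computes a weighted sum of inputs plus a bias and passes it through an activation function (the weights, bias, and choice of `tanh` are arbitrary examples, not prescribed by the syllabus):

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """Model of a neuron: weighted sum of inputs plus bias (the
    induced local field v), passed through an activation function."""
    v = np.dot(w, x) + b      # v = sum_i w_i * x_i + b
    return activation(v)      # output y = phi(v)

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.1, 0.4, -0.2])   # synaptic weights
print(neuron(x, w, b=0.3))
```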
- 2. Rosenblatt’s Perceptron
- teaching hours: 3 hrs
Introduction, Perceptron, The Perceptron Convergence Theorem, Relation between the Perceptron and Bayes Classifier for a Gaussian Environment, The Batch Perceptron Algorithm
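A minimal sketch of Rosenblatt's error-driven perceptron rule (NumPy assumed; `eta` and the epoch cap are illustrative choices). Per the perceptron convergence theorem, the loop terminates if the classes are linearly separable:

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, epochs=100):
    """Rosenblatt's rule: update weights only on misclassified
    samples; targets are assumed to be in {-1, +1}."""
    w = np.zeros(X.shape[1] + 1)               # weights plus bias
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb bias as an extra input
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w >= 0 else -1
            if pred != target:
                w += eta * target * xi         # w <- w + eta * d * x
                errors += 1
        if errors == 0:                        # converged: all samples correct
            break
    return w
```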
- 3. Model Building through Regression
- teaching hours: 5 hrs
Introduction, Linear Regression Model: Preliminary Considerations, Maximum a Posteriori Estimation of the Parameter Vector, Relationship Between Regularized Least-Squares Estimation and MAP Estimation, Computer Experiment: Pattern Classification, The Minimum-Description-Length Principle, Finite Sample-Size Considerations, The Instrumental-Variables Method
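The relationship between regularized least squares and MAP estimation can be made concrete with a short sketch (assuming a zero-mean Gaussian prior on the weights; `lam` plays the role of the noise-to-prior variance ratio):

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Regularized least-squares estimate
    w = (X^T X + lam*I)^{-1} X^T y, which coincides with the MAP
    estimate of the parameter vector under a zero-mean Gaussian prior."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Setting `lam=0` recovers the ordinary least-squares solution, i.e. maximum-likelihood estimation without a prior.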
- 4. The Least-Mean-Square Algorithm
- teaching hours: 5 hrs
Introduction, Filtering Structure of the LMS Algorithm, Unconstrained Optimization: A Review, The Wiener Filter, The Least-Mean-Square Algorithm, Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter, The Langevin Equation: Characterization of Brownian Motion, Kushner's Direct-Averaging Method, Statistical LMS Learning Theory for Small Learning-Rate Parameter, Virtues and Limitations of the LMS Algorithm, Learning-Rate Annealing Schedules
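A sketch of the on-line LMS (Widrow-Hoff) update, which stochastically approximates the Wiener solution (a fixed learning rate is used here for simplicity, although the unit also covers annealing schedules):

```python
import numpy as np

def lms(X, d, eta=0.01, epochs=50):
    """Least-mean-square algorithm: for each sample, compute the
    error e(n) = d(n) - w^T x(n) and update w <- w + eta * e(n) * x(n)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_n, d_n in zip(X, d):
            e = d_n - w @ x_n     # instantaneous error signal
            w += eta * e * x_n    # LMS weight update
    return w
```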
- 5. Multilayer Perceptron
- teaching hours: 8 hrs
Introduction, Batch Learning and On-Line Learning, The Back-Propagation Algorithm, XOR Problem, Heuristics for Making the Back-Propagation Algorithm Perform Better, Back-Propagation and Differentiation, The Hessian and Its Role in On-Line Learning, Optimal Annealing and Adaptive Control of the Learning Rate, Generalization, Approximations of Functions, Cross-Validation, Complexity Regularization and Network Pruning, Virtues and Limitations of Back-Propagation Learning, Supervised Learning Viewed as an Optimization Problem, Convolutional Networks, Nonlinear Filtering, Small-Scale Versus Large-Scale Learning Problems
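An illustrative back-propagation sketch on the XOR problem (sigmoid units, batch gradient descent; the seed, hidden-layer size, learning rate, and epoch count are arbitrary choices and may need tuning for convergence):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)   # hidden layer (2 units)
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)   # output layer

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
eta = 0.5
for _ in range(10000):                            # batch learning
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backward pass: output deltas
    d_h = (d_out @ W2.T) * h * (1 - h)            # hidden deltas
    W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(0)
    W1 -= eta * X.T @ d_h;  b1 -= eta * d_h.sum(0)
print(out.round(3))                               # should approach [0, 1, 1, 0]
```

The XOR problem is the classic demonstration that a hidden layer is necessary: no single-layer perceptron can separate these targets.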
- 6. Kernel Methods and Radial-Basis Function Networks
- teaching hours: 7 hrs
Introduction, Cover's Theorem on the Separability of Patterns, The Interpolation Problem, Radial-Basis-Function Networks, K-Means Clustering, Recursive Least-Squares Estimation of the Weight Vector, Hybrid Learning Procedure for RBF Networks, Kernel Regression and Its Relation to RBF Networks
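A sketch of the hybrid learning procedure for RBF networks: unsupervised K-means placement of the centers, followed by a linear least-squares fit of the output weights (Gaussian basis functions assumed; `k` and `sigma` are illustrative parameters):

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBFs: phi_j(x) = exp(-||x - c_j||^2 / (2*sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means used to place the RBF centers."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[labels == j].mean(0) if (labels == j).any() else C[j]
                      for j in range(k)])
    return C

def rbf_fit(X, y, k=5, sigma=1.0):
    """Hybrid procedure: unsupervised centers, supervised linear weights."""
    C = kmeans(X, k)
    Phi = rbf_design(X, C, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return C, w
```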
- 7. Self-Organizing Maps
- teaching hours: 6 hrs
Introduction, Two Basic Feature-Mapping Models, Self-Organizing Map, Properties of the Feature Map, Contextual Maps, Hierarchical Vector Quantization, Kernel Self-Organizing Map, Relationship between Kernel SOM and Kullback-Leibler Divergence
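A sketch of the basic SOM training loop: find the best-matching unit (BMU), then pull neighboring units on the lattice toward the input. The Gaussian neighborhood with exponentially decaying width and learning rate is one common choice, not the only one:

```python
import numpy as np

def train_som(X, grid=(10, 10), epochs=1000, eta0=0.5, sigma0=3.0, seed=0):
    """Kohonen self-organizing map over a 2-D lattice of units."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.random((rows, cols, X.shape[1]))          # weight lattice
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        x = X[rng.integers(len(X))]                   # random sample
        eta = eta0 * np.exp(-t / epochs)              # learning-rate decay
        sigma = sigma0 * np.exp(-t / epochs)          # neighborhood decay
        bmu = np.unravel_index(((W - x) ** 2).sum(-1).argmin(), grid)
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)  # squared lattice distance
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood function
        W += eta * h[..., None] * (x - W)             # cooperative update
    return W
```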
- 8. Dynamically Driven Recurrent Networks
- teaching hours: 7 hrs
Introduction, Recurrent Network Architectures, Universal Approximation Theorem, Controllability and Observability, Computational Power of Recurrent Networks, Learning Algorithms, Back Propagation Through Time, Real-Time Recurrent Learning, Vanishing Gradients in Recurrent Networks, Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators, Adaptivity Considerations, Case Study: Model Reference Applied to Neurocontrol
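A sketch of the forward pass of a simple (Elman-type) recurrent network; training would unfold this recurrence in time and apply back-propagation through time (BPTT) to the cached states. All weight shapes here are the caller's choice:

```python
import numpy as np

def rnn_forward(x_seq, W_in, W_rec, W_out, h0=None):
    """Simple recurrent network: h(t) = tanh(W_in x(t) + W_rec h(t-1)),
    y(t) = W_out h(t). States are cached as BPTT would require."""
    h = np.zeros(W_rec.shape[0]) if h0 is None else h0
    outputs, states = [], []
    for x_t in x_seq:
        h = np.tanh(W_in @ x_t + W_rec @ h)   # recurrent state update
        states.append(h)                      # cache for the backward pass
        outputs.append(W_out @ h)
    return outputs, states
```

The vanishing-gradient topic in this unit shows up directly in this recurrence: repeated multiplication by the Jacobian of the state update shrinks gradients over long time lags.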
Lab and Practical works
Laboratory works:
Practical work should focus on the single-layer perceptron, multilayer perceptron, supervised learning, unsupervised learning, recurrent neural networks, linear prediction, and pattern classification.