Natural Language Processing (DLOC-III) Question papers

Paper 01 – Natural Language Processing (DLOC-III)

Duration: 3 hours | Max Marks: 80
Instructions:
  1. Question No. 1 is compulsory
  2. Assume suitable data if necessary
  3. Attempt any three questions from the remaining questions

Q.1 Solve any Four out of Five (5 marks each)

a) Explain the challenges of Natural Language Processing.

b) Explain how the N-gram model is used in spelling correction.

c) Explain three types of referents that complicate the reference resolution problem.

d) Explain Machine Translation Approaches used in NLP.

e) Explain the various stages of Natural Language Processing.

Q.2 (10 marks each)

a) What is Word Sense Disambiguation (WSD)? Explain the dictionary based approach to Word Sense Disambiguation.

b) Represent the output of morphological analysis for a regular verb, an irregular verb, a singular noun, and a plural noun. Also explain the role of FST (Finite State Transducer) in morphological parsing, with an example.

Q.3 (10 marks each)

a) Explain the ambiguities that arise at each level of Natural Language Processing, with examples.

b) Explain Discourse reference resolution in detail.

Q.4 (10 marks each)

a) For the given corpus:

<S> Martin Justin can watch Will <E>
<S> Spot will watch Martin <E>
<S> Will Justin spot Martin <E>
<S> Martin will pat Spot <E>
  • N: Noun [Martin, Justin, Will, Spot, Pat]
  • M: Modal verb [can, will]
  • V: Verb [watch, spot, pat]

Create the transition matrix and the emission probability matrix.
For the statement “Justin will spot Will”, apply the Hidden Markov Model and perform POS tagging.
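The counting and Viterbi decoding asked for above can be sketched in code. Two assumptions are made: the training sentences are given the gold tags shown below (the usual worked version of this corpus), and all words are lowercased so that “Will” and “will” share one surface form.

```python
from collections import defaultdict

# Hand-tagged corpus (assumed gold tags; words lowercased).
tagged = [
    [("martin", "N"), ("justin", "N"), ("can", "M"), ("watch", "V"), ("will", "N")],
    [("spot", "N"), ("will", "M"), ("watch", "V"), ("martin", "N")],
    [("will", "M"), ("justin", "N"), ("spot", "V"), ("martin", "N")],
    [("martin", "N"), ("will", "M"), ("pat", "V"), ("spot", "N")],
]

# Count transitions (including <S> and <E>) and emissions.
trans, emit, tag_count = defaultdict(int), defaultdict(int), defaultdict(int)
for sent in tagged:
    prev = "<S>"
    tag_count["<S>"] += 1
    for word, tag in sent:
        trans[(prev, tag)] += 1
        emit[(tag, word)] += 1
        tag_count[tag] += 1
        prev = tag
    trans[(prev, "<E>")] += 1

def p_trans(a, b):
    return trans[(a, b)] / tag_count[a]

def p_emit(tag, word):
    return emit[(tag, word)] / tag_count[tag]

def viterbi(words, tags=("N", "M", "V")):
    # best[t] = (probability, path) of the best tag sequence ending in t
    best = {t: (p_trans("<S>", t) * p_emit(t, words[0]), [t]) for t in tags}
    for w in words[1:]:
        best = {
            t: max(
                ((best[s][0] * p_trans(s, t) * p_emit(t, w), best[s][1] + [t])
                 for s in tags),
                key=lambda x: x[0],
            )
            for t in tags
        }
    # Close each candidate path with the transition into <E>.
    return max(
        ((best[t][0] * p_trans(t, "<E>"), best[t][1]) for t in tags),
        key=lambda x: x[0],
    )

prob, path = viterbi("justin will spot will".split())
print(path)   # -> ['N', 'M', 'V', 'N']
```

Under these counts the best path tags the sentence as Justin/N will/M spot/V Will/N, matching the hand computation.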

b) Describe in detail Centering Algorithm for reference resolution.

Q.5 (10 marks each)

a) For the given grammar, parse the statement “The man read this book” using the CYK (Cocke-Younger-Kasami) algorithm:

S → NP VP
S → Aux NP VP
S → VP
NP → Det NOM
NOM → Noun
NOM → Noun NOM
VP → Verb
VP → Verb NP
Det → that | this | a | the
Noun → book | flight | meal | man
Verb → book | include | read
Aux → does
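A CYK table for this sentence can be checked mechanically with a small recognizer. The grammar above is not in Chomsky Normal Form, so two assumptions are made here: the ternary rule S → Aux NP VP is binarized with an invented helper symbol S1, and unit productions (S → VP, NOM → Noun, VP → Verb) are handled by taking a closure in each cell.

```python
# Lexical rules taken directly from the grammar above.
lexicon = {
    "that": {"Det"}, "this": {"Det"}, "a": {"Det"}, "the": {"Det"},
    "book": {"Noun", "Verb"}, "flight": {"Noun"}, "meal": {"Noun"},
    "man": {"Noun"}, "include": {"Verb"}, "read": {"Verb"}, "does": {"Aux"},
}
# Binary rules; S1 is an assumed helper for binarizing S -> Aux NP VP.
binary = [("S", "NP", "VP"), ("S1", "NP", "VP"), ("S", "Aux", "S1"),
          ("NP", "Det", "NOM"), ("NOM", "Noun", "NOM"), ("VP", "Verb", "NP")]
unit = [("S", "VP"), ("NOM", "Noun"), ("VP", "Verb")]

def closure(symbols):
    # Repeatedly apply unit productions A -> B whenever B is present.
    syms, changed = set(symbols), True
    while changed:
        changed = False
        for a, b in unit:
            if b in syms and a not in syms:
                syms.add(a)
                changed = True
    return syms

def cky(words):
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i + 1] = closure(lexicon[w.lower()])
    for span in range(2, n + 1):          # fill longer spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            cell = set()
            for k in range(i + 1, j):     # try every split point
                for a, b, c in binary:
                    if b in table[i][k] and c in table[k][j]:
                        cell.add(a)
            table[i][j] = closure(cell)
    return table

words = "The man read this book".split()
table = cky(words)
print("S" in table[0][len(words)])  # accepted iff S covers the whole span
```

Running this confirms that S spans [0, 5], i.e., the sentence is derivable from the grammar.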

b) Explain the Porter Stemmer algorithm with its rules.
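The rule-matching style of the Porter stemmer can be illustrated with its Step 1a suffix rules. This is only a sketch of one step; the full algorithm has five steps whose rules are guarded by measure-based conditions on the stem.

```python
# Porter stemmer Step 1a: plural suffix rules, tried longest-first.
def porter_step1a(word):
    if word.endswith("sses"):
        return word[:-2]          # SSES -> SS   (caresses -> caress)
    if word.endswith("ies"):
        return word[:-2]          # IES  -> I    (ponies -> poni)
    if word.endswith("ss"):
        return word               # SS   -> SS   (caress -> caress)
    if word.endswith("s"):
        return word[:-1]          # S    ->      (cats -> cat)
    return word

for w in ["caresses", "ponies", "caress", "cats"]:
    print(w, "->", porter_step1a(w))
```

Note that the rules are mutually exclusive and ordered: only the first (longest) matching suffix fires, which is why “caress” is left untouched rather than losing its final “s”.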

Q.6 (10 marks each)

a) Explain information retrieval versus information extraction systems.

b) Explain the Maximum Entropy Model for POS tagging.

Paper 02 – Natural Language Processing (DLOC-III)

Duration: 3 hours | Max Marks: 80
Instructions:
  1. Question No. 1 is compulsory
  2. Attempt any three questions out of the remaining five
  3. All questions carry equal marks
  4. Assume suitable data, if required, and state it clearly

Q.1 (5 marks each)

a) Explain the applications of Natural Language Processing.

b) Illustrate the concepts of tokenization and stemming in Natural Language Processing.

c) Discuss the challenges in part of speech tagging.

d) Describe semantic analysis in Natural Language Processing.

Q.2 (10 marks each)

a) Explain inflectional and derivational morphology with examples.

b) Illustrate the working of the Porter Stemmer algorithm.

Q.3 (10 marks each)

a) Explain the Hidden Markov Model for POS tagging.

b) Demonstrate the concept of Conditional Random Fields (CRF) in NLP.

Q.4 (10 marks each)

a) Explain the Lesk algorithm for Word Sense Disambiguation.
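The core of the Lesk algorithm, choosing the sense whose dictionary gloss overlaps most with the context, can be sketched in a few lines. The sense inventory below is an invented toy example, not real dictionary data; practical systems draw glosses from a resource such as WordNet.

```python
# Toy sense inventory: sense id -> gloss (illustrative assumption).
senses = {
    "bank#1": "sloping land beside a body of water such as a river",
    "bank#2": "financial institution that accepts deposits and lends money",
}

def simplified_lesk(context, senses):
    # Pick the sense whose gloss shares the most words with the context.
    ctx = set(context.lower().split())
    return max(senses, key=lambda s: len(ctx & set(senses[s].split())))

print(simplified_lesk("he sat on the bank of the river", senses))  # -> bank#1
```

Here “river” and “of” overlap with the first gloss and nothing overlaps with the second, so the water-side sense wins; real implementations typically remove stop words and may extend glosses with related senses.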

b) Demonstrate lexical semantic analysis using an example.

Q.5 (10 marks each)

a) Illustrate the reference phenomena involved in solving the pronoun-resolution problem.

b) Explain Anaphora Resolution using the Hobbs and Centering algorithms.

Q.6 (10 marks each)

a) Demonstrate the working of machine translation systems.

b) Explain Information Retrieval systems.
