Software Engineering Viva Exam Questions and Answers

Module 1: Introduction To Software Engineering and Process Models

Q1: What is the Capability Maturity Model (CMM) in software engineering?
A1: The Capability Maturity Model (CMM) is a framework used to assess and improve an organization’s software development processes. It defines five maturity levels: Initial, Repeatable, Defined, Managed, and Optimizing. Each level represents a stage in the organization’s software process improvement journey.

Q2: Compare and contrast the Waterfall and Incremental process models.
A2: The Waterfall model is a linear sequential approach where each phase must be completed before the next begins. The Incremental model, on the other hand, divides the project into smaller, functional increments that are developed and delivered iteratively. Waterfall is more rigid and suitable for well-defined projects, while Incremental is more flexible and allows for changes between increments.

Q3: Explain the key principles of the Extreme Programming (XP) Agile methodology.
A3: Key principles of Extreme Programming include: continuous feedback, assuming simplicity, incremental changes, embracing change, and quality work. XP practices include pair programming, test-driven development, continuous integration, and frequent small releases.

Q4: How does the Scrum framework differ from traditional project management approaches?
A4: Scrum is an Agile framework that emphasizes flexibility, collaboration, and rapid iteration. Unlike traditional approaches, Scrum uses short, time-boxed sprints to deliver working software incrementally. It includes roles like Scrum Master and Product Owner, and ceremonies like daily stand-ups and sprint retrospectives.

Q5: What are the main components of the process framework in software engineering?
A5: The main components of the software engineering process framework are: communication, planning, modeling, construction, and deployment. These components provide a foundation for effective software development regardless of the specific process model used.

Module 2: Software Requirements Analysis and Modeling

Q6: What is the purpose of a Software Requirements Specification (SRS) document?
A6: The Software Requirements Specification (SRS) document serves to clearly and precisely define the functional and non-functional requirements of a software system. It acts as a contract between stakeholders and developers, providing a basis for design, implementation, and testing activities.

Q7: Describe the key components of a Data Flow Diagram (DFD).
A7: The key components of a Data Flow Diagram are: processes (represented by circles), data flows (arrows), data stores (parallel lines), and external entities (rectangles). DFDs show how data moves through a system, helping to visualize the system’s processes and data interactions.

Q8: What is the difference between functional and non-functional requirements?
A8: Functional requirements define specific behaviors or functions that a system must perform, such as “The system shall allow users to log in.” Non-functional requirements specify criteria that can be used to judge the operation of a system, rather than specific behaviors. Examples include performance, security, and usability requirements.

Q9: Explain the concept of use case modeling in requirements analysis.
A9: Use case modeling is a technique for capturing functional requirements by describing interactions between users (actors) and the system. Each use case represents a specific goal that an actor wants to achieve using the system. Use cases help in understanding user needs and system functionalities from an external perspective.

Q10: What are the key sections typically included in an IEEE format SRS document?
A10: Key sections in an IEEE format SRS document typically include: Introduction, Overall Description, Specific Requirements (including functional, non-functional, and interface requirements), Appendices, and Index. The document also includes a purpose, scope, definitions, references, and an overview of the system.

Module 3: Software Estimation Metrics

Q11: What is the purpose of Function Point Analysis (FPA) in software estimation?
A11: Function Point Analysis is used to measure the size and complexity of a software system based on its functionality from the user’s perspective. It helps in estimating development effort, cost, and duration, and can be used to compare productivity across different projects and technologies.

Q12: Explain the basic COCOMO model for software cost estimation.
A12: The basic COCOMO (Constructive Cost Model) uses a simple regression formula to estimate software development effort based on the size of the software measured in thousands of lines of code (KLOC). The formula is: Effort = a * (KLOC)^b, where ‘a’ and ‘b’ are constants that depend on the project type (organic, semi-detached, or embedded).
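
To make the formula concrete, here is a minimal Python sketch using the widely quoted basic-COCOMO constants for the three project types; treat it as an illustration rather than a calibrated estimator.

```python
# Basic COCOMO: Effort (person-months) = a * (KLOC)^b.
# The (a, b) pairs are the commonly cited constants for each project type.
COCOMO_CONSTANTS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc: float, mode: str) -> float:
    """Estimated development effort in person-months."""
    a, b = COCOMO_CONSTANTS[mode]
    return a * kloc ** b

print(f"{basic_cocomo_effort(32, 'organic'):.1f} PM")  # ~91.3 person-months
```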

Q13: What are the key differences between COCOMO I and COCOMO II models?
A13: COCOMO II is an updated version of COCOMO I that addresses modern software development practices. Key differences include: COCOMO II considers reuse and reengineering, uses function points in addition to lines of code for size estimation, and includes more cost drivers and scale factors to account for various project and organizational characteristics.

Q14: How does Lines of Code (LOC) estimation differ from Function Point (FP) estimation?
A14: Lines of Code (LOC) estimation is based on the physical size of the software, counting the number of code lines. Function Point (FP) estimation measures the logical size of the software based on its functionality from the user’s perspective. LOC is language-dependent and easier to count, while FP is language-independent and more closely related to user requirements.

Q15: What is the purpose of project tracking in software development?
A15: Project tracking involves monitoring the progress of a software project against the planned schedule, budget, and deliverables. Its purpose is to identify any deviations from the plan early, allowing for timely corrective actions. Tracking helps in managing risks, ensuring effective resource allocation, and maintaining project transparency.

Module 4: Software Design

Q16: What are the key principles of effective modular design in software engineering?
A16: Key principles of effective modular design include: high cohesion (modules should have a single, well-defined purpose), low coupling (minimize dependencies between modules), information hiding (encapsulate implementation details), and separation of concerns (divide the system into distinct features with minimal overlap).

Q17: Explain the difference between cohesion and coupling in software design.
A17: Cohesion refers to the degree to which elements within a module are related to each other and work together to perform a single, well-defined task. High cohesion is desirable. Coupling, on the other hand, refers to the degree of interdependence between modules. Low coupling is preferable as it makes the system more maintainable and less prone to ripple effects when changes are made.
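
As a concrete (and deliberately hypothetical) Python illustration, the first class below is tightly coupled to a database object’s internals, while the second depends only on plain data passed through a narrow interface, which lowers coupling and keeps each module cohesive.

```python
# Tightly coupled: the printer reaches into the database object's internals,
# so any change to the database layer ripples into the reporting module.
class TightlyCoupledReportPrinter:
    def print_report(self, db):
        rows = db.connection.cursor().execute("SELECT item, total FROM sales")
        for item, total in rows:
            print(f"{item}: {total}")

# Loosely coupled: the printer has one cohesive job and depends only on
# the plain data handed to it.
class LooselyCoupledReportPrinter:
    def print_report(self, rows: list[tuple[str, float]]) -> None:
        for item, total in rows:
            print(f"{item}: {total}")
```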

Q18: What is the importance of architectural design in software development?
A18: Architectural design is crucial as it defines the overall structure of the software system. It helps in managing complexity, facilitates communication among stakeholders, supports quality attributes (like scalability and maintainability), guides detailed design and implementation, and provides a basis for reuse and evolution of the system.

Q19: Describe the Model-View-Controller (MVC) architectural pattern.
A19: The Model-View-Controller (MVC) pattern separates an application into three interconnected components: Model (data and business logic), View (user interface), and Controller (handles user input and updates Model and View). This separation of concerns improves maintainability, allows for parallel development, and facilitates code reuse.
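
A minimal Python sketch of the three roles (all class and method names here are illustrative, not a standard API):

```python
class CounterModel:                      # Model: data and business logic
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1

class CounterView:                       # View: presentation only
    def render(self, count: int) -> None:
        print(f"Count is {count}")

class CounterController:                 # Controller: routes input, updates Model and View
    def __init__(self, model: CounterModel, view: CounterView):
        self.model, self.view = model, view
    def on_click(self) -> None:
        self.model.increment()
        self.view.render(self.model.count)

controller = CounterController(CounterModel(), CounterView())
controller.on_click()  # prints "Count is 1"
```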

Q20: What are design patterns and why are they important in software design?
A20: Design patterns are reusable solutions to common problems in software design. They provide tested, proven development paradigms that can speed up the development process, improve code readability, and reduce the likelihood of subtle issues. Examples include Singleton, Factory, and Observer patterns. They are important because they encapsulate best practices and promote code reuse and extensibility.
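
For example, a compact sketch of the Observer pattern in Python (the Subject and Logger names are illustrative):

```python
class Subject:
    """Holds a list of observers and notifies them of events (Observer pattern)."""
    def __init__(self):
        self._observers = []
    def attach(self, observer) -> None:
        self._observers.append(observer)
    def notify(self, event: str) -> None:
        for observer in self._observers:
            observer.update(event)

class Logger:
    def update(self, event: str) -> None:
        print(f"logged: {event}")

subject = Subject()
subject.attach(Logger())
subject.notify("order placed")  # -> "logged: order placed"
```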

Module 5: Software Testing

Q21: What is the difference between unit testing and integration testing?
A21: Unit testing focuses on testing individual components or modules of a system in isolation. Integration testing, on the other hand, verifies that different modules or components work correctly together when combined. Unit testing is typically done by developers, while integration testing often involves a dedicated testing team.
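
A small sketch using Python’s built-in unittest module: the first test exercises one function in isolation, while the second checks that two components (a hypothetical Cart plus the discount function) work correctly together.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

class Cart:
    def __init__(self):
        self.items = []
    def add(self, price: float) -> None:
        self.items.append(price)
    def total(self, discount_percent: float = 0) -> float:
        return apply_discount(sum(self.items), discount_percent)

class UnitTestExample(unittest.TestCase):
    def test_apply_discount_alone(self):          # unit test: one function, isolated
        self.assertEqual(apply_discount(100, 10), 90.0)

class IntegrationTestExample(unittest.TestCase):
    def test_cart_and_discount_together(self):    # integration test: components combined
        cart = Cart()
        cart.add(60)
        cart.add(40)
        self.assertEqual(cart.total(discount_percent=10), 90.0)

if __name__ == "__main__":
    unittest.main()
```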

Q22: Explain the concept of basis path testing in white-box testing.
A22: Basis path testing is a white-box testing technique that uses the control flow graph of the code to identify a set of linearly independent paths through the program. The number of such paths equals the cyclomatic complexity, and executing all of them guarantees that every statement and branch is exercised at least once. It helps in achieving thorough code coverage and identifying logical errors.

Q23: What is boundary value analysis in black-box testing?
A23: Boundary value analysis is a black-box testing technique that focuses on testing at and around the edges of equivalent partitions. It’s based on the observation that errors often occur at the boundaries of input domains. For example, if a valid input range is 1-100, boundary value analysis would test values like 0, 1, 100, and 101.
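
Using the 1-100 range from the answer above, a sketch in Python’s unittest (the accepts function is a stand-in for the real system under test):

```python
import unittest

def accepts(value: int) -> bool:
    """Stand-in for the system under test: valid input range is 1..100."""
    return 1 <= value <= 100

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        # Just below, on, and just above each boundary of the 1-100 partition.
        cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
        for value, expected in cases.items():
            self.assertEqual(accepts(value), expected, msg=f"value={value}")

if __name__ == "__main__":
    unittest.main()
```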

Q24: Describe the process of regression testing and its importance.
A24: Regression testing involves re-running functional and non-functional tests to ensure that previously developed and tested software still performs correctly after a change. It’s important because it helps detect if changes have introduced new faults or regressions in existing functionality, ensuring that fixes and new features don’t break existing features.

Q25: What is the difference between verification and validation in software testing?
A25: Verification is the process of evaluating work-products (such as requirements, design, and code) to determine whether they meet specified requirements. It answers the question “Are we building the product right?” Validation, on the other hand, is the process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements. It answers the question “Are we building the right product?”

Module 6: Software Configuration Management, Quality Assurance and Maintenance

Q26: What is the purpose of a Risk Mitigation, Monitoring and Management Plan (RMMM)?
A26: The RMMM plan is a document that outlines strategies to identify, analyze, and manage potential risks in a software project. It includes steps to mitigate risks (reduce their likelihood or impact), monitor for risk occurrences, and manage risks when they materialize. The purpose is to proactively address potential issues that could affect project success.

Q27: Explain the concept of software quality assurance metrics.
A27: Software quality assurance metrics are quantitative measures used to assess the quality of software products or processes. They can include measures like defect density, code coverage, customer satisfaction scores, and mean time between failures. These metrics help in objectively evaluating software quality, identifying areas for improvement, and tracking progress over time.

Q28: What is the role of formal technical reviews in software quality assurance?
A28: Formal technical reviews are structured examinations of software artifacts (like requirements, design, or code) by a group of peers. Their role is to identify defects early in the development process, verify technical conformance to specifications, and ensure adherence to standards. Reviews help improve software quality by catching issues before they propagate to later stages of development.

Q29: Describe the process of version control in software configuration management.
A29: Version control is a system that records changes to files over time, allowing you to recall specific versions later. In software configuration management, it involves tracking and managing changes to software artifacts (code, documentation, etc.). The process typically includes creating branches for development, merging changes, tagging releases, and maintaining a history of all modifications. Tools like Git are commonly used for version control.

Q30: What is the difference between corrective, adaptive, and perfective maintenance?
A30: Corrective maintenance involves fixing defects found after the software is in use. Adaptive maintenance modifies the software to adapt it to changes in the environment (like new hardware or operating systems). Perfective maintenance improves the software beyond its original specifications, often to enhance performance or add new features. All these types of maintenance are crucial for keeping software relevant and functional over time.

Experiment-related Questions:

Q31: In the context of traditional process models, compare and contrast the Waterfall and V-Model approaches.
A31: The Waterfall model is a linear sequential approach where each phase must be completed before the next begins. The V-Model, while similar, emphasizes the relationship between each development stage and its associated testing phase. In the V-Model, unit testing corresponds to implementation, integration testing to design, and acceptance testing to requirements analysis. This makes the V-Model more focused on verification and validation throughout the development process.

Q32: How does the Kanban methodology differ from Scrum in Agile project management?
A32: While both Kanban and Scrum are Agile methodologies, they differ in several ways. Scrum uses fixed-length sprints and has specific roles like Scrum Master and Product Owner. Kanban, on the other hand, focuses on visualizing workflow, limiting work in progress, and continuous delivery. Kanban doesn’t prescribe specific timeboxed iterations or roles, allowing for more flexibility in workflow and release cycles.

Q33: What are the key components of a Software Requirement Specification (SRS) document in IEEE format?
A33: The key components of an SRS document in IEEE format typically include:

  1. Introduction (purpose, scope, definitions, references, overview)
  2. Overall Description (product perspective, functions, user characteristics, constraints, assumptions and dependencies)
  3. Specific Requirements (external interfaces, functional requirements, performance requirements, design constraints, software system attributes)
  4. Appendices
  5. Index

Q34: Explain the process of creating a Data Flow Diagram (DFD) for structured data flow analysis.
A34: Creating a Data Flow Diagram involves the following steps:

  1. Identify external entities that interact with the system
  2. Identify key processes within the system
  3. Identify data stores (databases, files)
  4. Draw data flows between entities, processes, and data stores
  5. Start with a context diagram (Level 0) showing the entire system as a single process
  6. Decompose into more detailed levels (Level 1, 2, etc.) as needed
  7. Ensure consistency between levels
  8. Validate the DFD with stakeholders

Q35: How is the COCOMO II model used to estimate the cost of a software project?
A35: COCOMO II is used for cost estimation as follows:

  1. Determine the size of the software in KLOC or function points
  2. Determine the scale factors (such as precedentedness, architecture/risk resolution, and process maturity) that set the exponent B
  3. Calculate the nominal effort using the basic equation: PM = A * (Size)^B
    Where PM is Person-Months, A is a calibrated constant, and B is derived from the scale factors
  4. Adjust the nominal effort using cost drivers (like required reliability, team capability)
  5. Use the adjusted effort to estimate schedule and staffing needs
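
A rough Python sketch of steps 3 and 4, using the published COCOMO II.2000 calibration constants (A = 2.94, B = 0.91) with illustrative scale-factor and cost-driver values; a real estimate would rate all five scale factors and the full set of cost drivers.

```python
# COCOMO II (post-architecture) sketch:
#   PM = A * Size^E * product(effort multipliers),  E = B + 0.01 * sum(scale factors)
# A and B are the COCOMO II.2000 calibration constants; the inputs below are illustrative.
A, B = 2.94, 0.91

def cocomo2_effort(size_kloc, scale_factors, effort_multipliers):
    exponent = B + 0.01 * sum(scale_factors)
    effort = A * size_kloc ** exponent
    for multiplier in effort_multipliers:   # step 4: adjust with cost drivers
        effort *= multiplier
    return effort

# 50 KLOC, roughly nominal scale factors, one cost driver (high required reliability ~1.10).
print(f"{cocomo2_effort(50, [3.72, 3.04, 4.24, 3.29, 4.68], [1.10]):.0f} PM")  # ~239
```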

Q36: Describe the process of creating a Gantt chart for project scheduling and tracking.
A36: Creating a Gantt chart involves these steps:

  1. List all project tasks and their durations
  2. Determine task dependencies
  3. Create a timeline with appropriate time units (days, weeks, etc.)
  4. Plot tasks as horizontal bars on the timeline
  5. Show dependencies with arrows between tasks
  6. Assign resources to each task
  7. Mark milestones and deadlines
  8. Update the chart regularly to reflect actual progress

Gantt charts help visualize the project schedule, track progress, and identify potential delays or resource conflicts.
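
For a quick visual, a hypothetical matplotlib sketch covering steps 1-4 (task names, start days, and durations are invented):

```python
import matplotlib.pyplot as plt

# Hypothetical tasks: (name, start day, duration in days); dependencies
# are reflected in the chosen start days.
tasks = [
    ("Requirements", 0, 5),
    ("Design",       5, 7),
    ("Coding",      12, 10),
    ("Testing",     22, 6),
]

fig, ax = plt.subplots()
for i, (name, start, duration) in enumerate(tasks):
    ax.barh(y=i, width=duration, left=start)   # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([t[0] for t in tasks])
ax.invert_yaxis()                              # first task on top, Gantt style
ax.set_xlabel("Project day")
plt.show()
```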

Q37: What are the key principles of writing effective test cases for black box testing?
A37: Key principles for writing effective black box test cases include:

  1. Each test case should have a unique identifier
  2. Clearly specify the test objective
  3. List all preconditions and assumptions
  4. Provide detailed steps to execute the test
  5. Specify the expected results
  6. Cover both positive and negative scenarios
  7. Use boundary value analysis and equivalence partitioning
  8. Ensure test cases are traceable to requirements
  9. Keep test cases simple, clear, and concise
  10. Make test cases repeatable and independent of each other
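
To illustrate principles 1-5 and 8, here is one black box test case captured as a structured Python record; the field names follow common practice and the values are hypothetical.

```python
# A black box test case as a structured record; fields mirror principles 1-5 and 8 above.
test_case = {
    "id": "TC-LOGIN-001",                                   # unique identifier
    "objective": "Verify login succeeds with valid credentials",
    "preconditions": ["User account 'alice' exists and is active"],
    "steps": [
        "Open the login page",
        "Enter username 'alice' and a valid password",
        "Click the 'Log in' button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "traces_to": ["REQ-AUTH-3"],                            # traceability to requirements
}
```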

Q38: Explain the concept of statement coverage in white box testing.
A38: Statement coverage is a white box testing technique that measures the percentage of executable statements in the source code that have been exercised by a test suite. The goal is to execute each statement at least once. Steps to achieve statement coverage:

  1. Analyze the source code to identify all executable statements
  2. Design test cases to execute each statement
  3. Run the tests and track which statements are executed
  4. Calculate coverage: (Executed Statements / Total Statements) * 100

While 100% statement coverage doesn’t guarantee bug-free code, it’s a useful metric for assessing test thoroughness.
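
A tiny illustration of the idea; in practice a tool such as coverage.py does the tracking automatically, but the step-4 arithmetic looks like this (the function and counts are illustrative):

```python
def classify(n):
    if n < 0:                  # statement 1
        return "negative"      # statement 2
    return "non-negative"      # statement 3

# classify(5) alone executes statements 1 and 3 -> 2/3 ≈ 67% coverage;
# adding classify(-1) also executes statement 2 -> 3/3 = 100%.
executed, total = 3, 3
print(f"Statement coverage: {executed / total * 100:.0f}%")   # step-4 arithmetic
```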

Q39: What are the key components of a Risk Mitigation, Monitoring and Management Plan (RMMM)?
A39: A comprehensive RMMM plan typically includes:

  1. Risk identification: List of potential risks
  2. Risk analysis: Probability and impact assessment for each risk
  3. Risk prioritization: Ranking risks based on their severity
  4. Mitigation strategies: Plans to reduce the likelihood or impact of each risk
  5. Monitoring procedures: How risks will be tracked throughout the project
  6. Management actions: Steps to be taken if a risk occurs
  7. Roles and responsibilities: Who is responsible for each aspect of risk management
  8. Contingency plans: Backup strategies for high-priority risks
  9. Risk thresholds: Defining when to escalate or take action on risks
  10. Reporting and communication: How risk information will be shared with stakeholders

Q40: Describe the process of implementing version control for a software project using Git.
A40: Implementing version control with Git involves these steps:

  1. Initialize a Git repository in the project directory: git init
  2. Create a .gitignore file to exclude unnecessary files
  3. Stage the project files for commit: git add .
  4. Make the initial commit: git commit -m "Initial commit"
  5. Create a remote repository (e.g., on GitHub or GitLab)
  6. Add the remote to your local repository: git remote add origin [URL]
  7. Push the initial commit to the remote: git push -u origin master
  8. Create branches for new features or bug fixes: git branch [branch-name]
  9. Switch between branches: git checkout [branch-name]
  10. Merge changes: git merge [branch-name]
  11. Pull updates from the remote: git pull
  12. Use tags for versioning: git tag -a v1.0 -m "Version 1.0"

Q41: How does the Incremental process model differ from the Spiral model?
A41: The Incremental model delivers the software in small, functional increments, with each increment building on previous ones. It follows a linear sequence for each increment. The Spiral model, however, is risk-driven and combines elements of both iterative development and systematic planning. It uses spiraling cycles, each containing planning, risk analysis, engineering, and evaluation phases. The Spiral model emphasizes risk assessment more heavily and is typically used for large, complex projects.

Q42: Explain the concept of user stories in Agile methodologies and how they differ from traditional requirements.
A42: User stories are short, simple descriptions of a feature told from the perspective of the user. They typically follow the format: “As a [type of user], I want [some goal] so that [some reason].” User stories differ from traditional requirements in that they:

  1. Are more informal and focus on user needs rather than system features
  2. Encourage conversation and collaboration between developers and stakeholders
  3. Are typically smaller in scope and can be completed in a single iteration
  4. Are more flexible and can easily adapt to changing project needs
  5. Don’t include technical details, which are discussed during implementation

Q43: Describe the process of creating a use case diagram as part of requirements modeling.
A43: Creating a use case diagram involves these steps:

  1. Identify the system boundary (what’s in and out of scope)
  2. Identify actors (users or external systems that interact with your system)
  3. Identify main use cases (key functionalities of the system)
  4. Draw the use case diagram:
  • Represent the system as a rectangle
  • Place actors outside the system boundary
  • Represent use cases as ovals inside the system boundary
  • Connect actors to their associated use cases with lines
  5. Identify and draw relationships between use cases (include, extend, generalization)
  6. Review and refine the diagram with stakeholders

Q44: What is the difference between the Lines of Code (LOC) and Function Point (FP) metrics in software estimation?
A44:

  • LOC counts the number of lines in the program’s source code. It’s easy to measure but language-dependent.
  • FP measures the amount of functionality in a system based on user requirements. It’s language-independent but more complex to calculate.

Key differences:

  1. LOC is a physical measure, while FP is a logical measure of software size.
  2. LOC varies with programming language; FP is language-independent.
  3. LOC can only be accurately measured after coding; FP can be estimated from requirements.
  4. FP considers user functionality; LOC doesn’t directly reflect functionality.
  5. FP is more useful for productivity comparisons across different languages or platforms.
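
To make the FP side concrete, a worked sketch of an unadjusted function point count using the five standard component types with average-complexity weights (the component counts themselves are hypothetical):

```python
# Unadjusted function points: sum of component counts times the standard IFPUG
# average-complexity weights. The counts below are hypothetical.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}
counts = {
    "external_inputs": 6,
    "external_outputs": 4,
    "external_inquiries": 3,
    "internal_logical_files": 2,
    "external_interface_files": 1,
}
ufp = sum(counts[k] * AVERAGE_WEIGHTS[k] for k in counts)
print(ufp)  # 6*4 + 4*5 + 3*4 + 2*10 + 1*7 = 83 unadjusted function points
```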

Q45: Explain the concept of cyclomatic complexity in software testing and how it’s calculated.
A45: Cyclomatic complexity is a quantitative measure of the logical complexity of a program. It defines the number of linearly independent paths through a program’s source code. Key points:

  1. It’s used in white-box testing to determine the complexity of a program and the number of test cases needed for thorough testing.
  2. Calculation: V(G) = E - N + 2P, where:
    E = number of edges in the control flow graph
    N = number of nodes in the control flow graph
    P = number of connected components (usually 1 for a single program)
  3. Alternatively, V(G) = number of decision points + 1
  4. Higher cyclomatic complexity indicates more complex code, which may be harder to understand and maintain.
  5. Generally, a cyclomatic complexity of 10 is considered the upper threshold for a single function or method.
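
A worked example on a small Python function, counting each if and each short-circuit boolean operator as a decision point:

```python
def grade(score):
    if score < 0 or score > 100:   # two decision points: the `if` plus the `or`
        raise ValueError("out of range")
    if score >= 50:                # third decision point
        return "pass"
    return "fail"

# Decision points = 3, so V(G) = 3 + 1 = 4 linearly independent paths:
# invalid-low, invalid-high (each side of the `or`), pass, and fail.
# Thorough basis path testing therefore needs at least 4 test cases.
```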

Q46: Describe the key steps in performing a Formal Technical Review (FTR) of a software artifact.
A46: The key steps in a Formal Technical Review are:

  1. Planning: Select reviewers, schedule the review, and distribute materials.
  2. Preparation: Reviewers individually examine the artifact before the meeting.
  3. Review Meeting:
  • Overview: The author briefly introduces the artifact.
  • Discussion: Reviewers raise issues and questions.
  • Decision Making: The team decides on necessary actions.
  4. Rework: The author addresses the issues identified.
  5. Follow-up: Ensure that all required changes have been made.
  6. Reporting: Document the review process and outcomes.

Throughout the process, it’s important to focus on the artifact, not the author, and to maintain a constructive atmosphere.

Q47: What is the difference between alpha and beta testing in the software development lifecycle?
A47: Alpha and beta testing are both forms of user acceptance testing, but they differ in several ways:

Alpha Testing:

  1. Conducted in-house by the development team or dedicated testers
  2. Simulates real-world usage scenarios
  3. Occurs before the software is released to external users
  4. Aims to identify major bugs and usability issues

Beta Testing:

  1. Conducted by real users in their own environments
  2. Users report bugs and provide feedback on their experience
  3. Occurs after alpha testing, with a nearly finished product
  4. Aims to gather real-world usage data and identify any remaining issues

Alpha testing is more controlled and focuses on functionality, while beta testing provides insights into real-world usage and user satisfaction.

Q48: Explain the concept of refactoring in software maintenance and its benefits.
A48: Refactoring is the process of restructuring existing code without changing its external behavior. Key points:

  1. Goal: Improve code quality, readability, and maintainability
  2. Does not add new features or fix bugs
  3. Often performed as part of regular maintenance or before adding new features
  4. Common refactoring techniques: extracting methods, renaming variables, simplifying conditional expressions

Benefits:

  1. Improves code readability and understandability
  2. Reduces complexity and technical debt
  3. Makes the code easier to maintain and extend
  4. Can improve performance in some cases
  5. Helps in identifying and fixing bugs
  6. Facilitates better code reuse
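
A tiny before/after sketch of the “extracting methods” technique mentioned above (hypothetical code; external behavior is unchanged):

```python
# Before: one function mixes calculating the total with formatting the output.
def print_invoice(items):
    total = 0
    for price, qty in items:
        total += price * qty
    print(f"TOTAL: {total:.2f}")

# After: the calculation is extracted into its own well-named function,
# so each function now has a single, clear purpose.
def invoice_total(items):
    return sum(price * qty for price, qty in items)

def print_invoice_refactored(items):
    print(f"TOTAL: {invoice_total(items):.2f}")
```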

Q49: Describe the key components of a software configuration management (SCM) plan.
A49: A comprehensive SCM plan typically includes:

  1. Introduction: Purpose and scope of the SCM plan
  2. Management: Roles, responsibilities, and organizational structure
  3. SCM Activities: Identification, control, status accounting, and auditing
  4. Configuration Identification: Naming conventions, version numbering scheme
  5. Change Control: Process for requesting, evaluating, and approving changes
  6. Configuration Status Accounting: Tracking and reporting on configuration items
  7. Configuration Audits: Verifying consistency and completeness of items
  8. Release Management: Process for creating and deploying releases
  9. Tools and Infrastructure: SCM tools and environments to be used
  10. Training: Required training for team members
  11. Supplier Control: Managing third-party components and libraries
  12. Schedules: Timelines for SCM activities

Q50: What are the key differences between verification and validation in software quality assurance?
A50: While both verification and validation are crucial for software quality assurance, they serve different purposes:

Verification:

  1. Focuses on whether the software is built correctly
  2. Checks if the software meets specified requirements
  3. Usually performed throughout the development process
  4. Involves activities like reviews, inspections, and walkthroughs
  5. Asks the question: “Are we building the product right?”

Validation:

  1. Focuses on whether the correct software is built
  2. Checks if the software meets user needs and expectations
  3. Usually performed at the end of development or in later stages
  4. Involves activities like acceptance testing and beta testing
  5. Asks the question: “Are we building the right product?”

In essence, verification ensures the software is consistent with its specifications, while validation ensures the software fulfills its intended use in the user’s environment.
