Measurement and Analysis of Data MCQ Quiz in Marathi - Objective Questions with Answers for Measurement and Analysis of Data - Download Free PDF

Last updated on Mar 16, 2025

Get Measurement and Analysis of Data multiple-choice questions (MCQ quiz) with answers and detailed solutions. Download this free Measurement and Analysis of Data MCQ quiz PDF and prepare for your upcoming exams such as Banking, SSC, Railway, UPSC, and State PSC.

Top Measurement and Analysis of Data MCQ Objective Questions

Measurement and Analysis of Data Question 1:

In a testing survey of class X students with N = 500, the mean was observed as 80 and the standard deviation as 8. Assuming a normal distribution of their scores, how many students would fall below the score of a student getting 88?

  1. 171
  2. 421
  3. 84
  4. 16

Answer (Detailed Solution Below)

Option 2 : 421

Measurement and Analysis of Data Question 1 Detailed Solution

We are given:

N = 500,

M = 80

\(σ\) = 8

X = 88 ( ∵ We have to find the number of students below the score of the student getting 88)


We need to find the standard score (z-score). (Important Point: a z-score tells you how many standard deviations a value lies from the mean.)

\(z\ score = \frac {X - M} {σ}\)

\(z\ score = \frac {88 - 80} {8} \)

\(z\ score = \frac {8} {8} = 1\)

We need to locate the area corresponding to +1σ on the normal distribution curve.


⇒ Area from the mean to +1σ = 34.13%

⇒ 50% + 34.13% = 84.13%

Required number of students = \(\frac {84.13}{100} \times 500\)

Required number of students ≈ 421
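The table lookup above can be cross-checked with a short stdlib-only Python sketch: the standard normal CDF is built from `math.erf`, and Φ(1) ≈ 0.8413 matches the 50% + 34.13% figure used in the solution.

```python
import math

def normal_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

N, mean, sd = 500, 80, 8
score = 88

z = (score - mean) / sd            # (88 - 80) / 8 = 1.0
proportion_below = normal_cdf(z)   # ≈ 0.8413, i.e. 50% + 34.13%
students_below = round(N * proportion_below)
print(students_below)              # 421
```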

Measurement and Analysis of Data Question 2:

A weighing machine consistently overestimates the weight of individuals. This would be indicative of:

  1. Random Error
  2. Systematic Error
  3. Standard Error
  4. Probable Error

Answer (Detailed Solution Below)

Option 2 : Systematic Error

Measurement and Analysis of Data Question 2 Detailed Solution

1) Standard Error

  • The standard deviation of the sampling distribution of a statistic is known as its 'standard error', abbreviated as S.E. The standard errors of some well-known statistics, for large samples, are given below, where 'n' is the sample size, σ² is the population variance, P is the population proportion, and Q = 1 − P. n1 and n2 represent the sizes of two independent random samples drawn from the given population(s).
  • The utility of Standard Error
    • S.E plays a significant role in the large sample theory and forms the basis of the testing of the hypothesis.
    • The magnitude of the standard error gives an index of the precision of the estimate of the parameter. The reciprocal of the standard error is taken as the measure of reliability or precision of the statistic.
    • S.E. enables us to determine the probable limits within which the population parameter may be expected to lie. 

2) Systematic Error

  • The systematic errors, also called ‘determinable’ errors, arise due to identifiable causes. For this reason, these can, in principle, be eliminated or corrected. These errors result in measured values being either consistently high or low, i.e., different from the true value.
  • For example, a weighing machine that consistently overestimates the weight of individuals.
  • Zero Error arises due to wear and tear caused by extensive use. The zero of the vernier scale may not coincide with the zero of the main scale when the jaws are put in contact.
  • Backlash Error in a screw gauge, a traveling microscope or a spherometer, can arise due to wear and tear or defective fitting in the instrument. In this case, a forward or backward rotation may not produce the same result.

3) Random error 

  • Observational random errors arise due to errors of judgment of the observer while reading the smallest division on the scale (like the coincident vernier division with the main scale division). To minimize such random errors, one should always take more readings and calculate their mean, or draw the best-fit graph.
  • Random error can also be induced by a careless experimenter who does not concentrate on his/her work in the laboratory. The errors arising out of this situation cannot be determined in any way.
  • For example, the length of a piece of steel rod is measured by several students in a laboratory.

4) Probable error

  • The probable deviation P is an older measure of precision and is now only rarely used. It is defined as the deviation of such magnitude that equal numbers of deviations are greater and smaller than it. In a set of a large number of observations, it is also known as the probable error.
  • Some people prefer to use 0.6745 times the standard error, which is called the 'probable error' of the statistic. 

Hence, a weighing machine that consistently overestimates the weight of individuals is indicative of a systematic error.

Measurement and Analysis of Data Question 3:

What percent of IQ scores lie between 85 and 130 in a normal distribution with a mean of 100 and SD of 15 points?

  1. 79.85
  2. 80.85
  3. 82.85
  4. 81.85 

Answer (Detailed Solution Below)

Option 4 : 81.85 

Measurement and Analysis of Data Question 3 Detailed Solution

Mean: The mean (average) is the sum of all quantities divided by the number of quantities.

Standard deviation: It measures how deviated or dispersed the values are with respect to the mean in the same data set.

Percentile Rank: A measure of how many values fall below a given score; it essentially ranks the score.

Z score: A value, calculated through the formula below, used to interpret the percentile of a given score or value.

Normal Probability Curve: A bell-shaped curve with its highest point at the mean, symmetrical about the vertical line drawn at the mean. It is used to interpret the percentile or the percentage of cases for the respective values of z scores.

To find the percentage of IQ scores lying between 85 and 130, we need to calculate the z scores.

Formula:  Z = \((X - \bar{X}) \over σ\)

Calculation:

Step 1:

Mean (x̄) = 100, Standard deviation (σ) = 15

Calculate z scores

At x=85, Z = (85−100)/15 = -1 

At x= 130, Z = (130−100)/15 = 2 

Step 2:

Calculate the percentile rank

From the Normal Probability Curve, the percentage of cases from z = -1 to the mean is 34.13, from the mean to z = 1 is 34.13, and from z = 1 to z = 2 is 47.72 − 34.13 = 13.59.


From the Normal Probability Curve, the percentage of cases from z = -1 to z = 2 = 34.13% + 34.13% + 13.59%

Percentage of IQ scores lying between 85 and 130 = 81.85%
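The same stdlib-only check works here; note that the exact normal CDF gives 81.86%, while the rounded table values (34.13 + 34.13 + 13.59) give 81.85%.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mean, sd = 100, 15
z_low = (85 - mean) / sd      # -1.0
z_high = (130 - mean) / sd    #  2.0

pct = (normal_cdf(z_high) - normal_cdf(z_low)) * 100
print(round(pct, 2))          # 81.86 (table values round to 81.85)
```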

Measurement and Analysis of Data Question 4:

Given below are two statements:

Statement I: Level of significance is the probability of rejecting null hypothesis (H0) when it is true.

Statement II: Accepting null hypothesis (H0) when it is not true is called Type-I error.

In the light of the above statements, choose the correct answer from the options given below

  1. Both Statement I and Statement II are true
  2. Both Statement I and Statement II are false
  3. Statement I is true but Statement II is false
  4. Statement I is false but Statement II is true

Answer (Detailed Solution Below)

Option 3 : Statement I is true but Statement II is false

Measurement and Analysis of Data Question 4 Detailed Solution

Key Points 

Statement I: Level of significance is the probability of rejecting null hypothesis (H0) when it is true.

  • This statement is true.
  • Level of significance is the probability of committing a Type I error, which is the error of rejecting the null hypothesis when it is true.
  • The level of significance is typically set at 0.05 or 0.01, which means that there is a 5% or 1% chance of committing a Type I error.

Statement II: Accepting null hypothesis (H0) when it is not true is called Type-I error.

  • This statement is false.
  • Accepting the null hypothesis when it is not true is called a Type II error. A Type II error is the error of failing to reject the null hypothesis when it is false.
  • The probability of committing a Type II error is known as beta. Beta is typically difficult to calculate, and it is often estimated using power analysis.

 

Therefore, Statement I is true but Statement II is false.

Measurement and Analysis of Data Question 5:

The method of least squares is used on time-series data for:  

  1. deseasonalising the data 
  2. obtaining the trend equation 
  3. exponentially smoothing a series 
  4. eliminating irregular movements 

Answer (Detailed Solution Below)

Option 2 : obtaining the trend equation 

Measurement and Analysis of Data Question 5 Detailed Solution

The correct answer is obtaining the trend equation.
Key Points
  • The method of least squares is commonly employed in the analysis of time-series data to obtain the trend equation.
  • This method aims to find the best-fitting line that minimizes the sum of the squared differences between the observed data points and the corresponding values predicted by the trend equation.
  • By fitting a trend line to the time-series data, we can estimate and model the underlying trend or pattern present in the data, which helps in forecasting future values or understanding the overall direction of change. 
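As a sketch of the method, a linear trend y = a + bt can be fitted to a small series using the textbook least-squares formulas (the `sales` values below are made up purely for illustration):

```python
def least_squares_trend(y):
    """Fit y = a + b*t (t = 1..n) by minimizing the sum of squared residuals."""
    n = len(y)
    t = list(range(1, n + 1))
    t_mean = sum(t) / n
    y_mean = sum(y) / n
    b = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
         / sum((ti - t_mean) ** 2 for ti in t))
    a = y_mean - b * t_mean
    return a, b

sales = [10, 12, 13, 15, 17]      # hypothetical yearly observations
a, b = least_squares_trend(sales)
print(round(a, 2), round(b, 2))   # 8.3 1.7 -> trend equation y = 8.3 + 1.7t
```

Once the trend equation is known, projecting it forward (e.g. t = 6 gives 8.3 + 1.7 × 6 = 18.5) is how forecasts of future values are obtained.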

Measurement and Analysis of Data Question 6:

Arrange the following researches in an increasing order in terms of generalization of their respective findings:

A. Ethnographic research

B. Survey research

C. Experimental research

D. Action research

E. Case study research

Choose the correct answer from the options given below:

  1. A, D, E, B, C
  2. B, D, C, E, A  
  3. D, E, A, C, B  
  4. C, B, E, A, D

Answer (Detailed Solution Below)

Option 3 : D, E, A, C, B  

Measurement and Analysis of Data Question 6 Detailed Solution

Key Points

The correct order of research methods in terms of generalization of their respective findings is as follows:

  1. Action research
  2. Case study research
  3. Ethnographic research
  4. Experimental research
  5. Survey research
     
  • Action research is the least generalizable because it focuses on solving a specific problem in a particular setting; its results only apply to the specific setting in which the research was conducted.
  • Case study research is also of very limited generalizability because it focuses on a single case or a small number of cases, so the results may not be applicable to other cases.
  • Ethnographic research is also not generalizable because it involves an in-depth study of a particular culture or group. This means that the results may not be applicable to other cultures or groups of people.
  • Experimental research is more generalizable than case studies or ethnographic research because it randomly assigns participants to different groups and manipulates an independent variable. This allows researchers to draw conclusions about cause-and-effect relationships.
  • Survey research is the most generalizable type because it collects data from a large number of participants, usually sampled to represent a broader population, so its findings can be extended well beyond the specific respondents studied.

 

Therefore, the correct order of research methods in terms of generalising their respective findings is D, E, A, C, B.  

Measurement and Analysis of Data Question 7:

Parametric and non-parametric analyses commonly share the following:

  1. Testing of null hypotheses only
  2. Chain of reasoning based on inferential statistics
  3. Statistics as means and frequencies
  4. Ordinal and interval scale data

Answer (Detailed Solution Below)

Option 2 : Chain of reasoning based on inferential statistics

Measurement and Analysis of Data Question 7 Detailed Solution

Key Points 
  • Parametric and non-parametric analyses are two broad categories of statistical analyses used in research and data analysis. While they differ in their underlying assumptions and techniques, they share some common aspects. One common aspect is the use of inferential statistics, which involves making inferences or conclusions about a population based on sample data.

1. Testing of null hypotheses only:

  • This option is not correct because neither parametric nor non-parametric analysis is limited to testing null hypotheses. In hypothesis testing, researchers formulate a null hypothesis and an alternative hypothesis, and statistical tests assess the evidence against the null hypothesis; but both families of methods can also be used for estimation, for example to construct confidence intervals.

2. Chain of reasoning based on inferential statistics:

  • This option is correct. Both parametric and non-parametric analyses follow a chain of reasoning based on inferential statistics. In both cases, researchers use statistical techniques to analyze the data, calculate test statistics, and make inferences about the population based on the sample data.

3. Statistics as means and frequencies:

  • This option is not entirely accurate. Although both parametric and non-parametric analyses report statistical measures, parametric tests typically summarize data with means, whereas non-parametric tests rely on ranks or frequencies, so these particular measures are not something they commonly share. The choice of statistical measures depends on the nature of the data and the specific research question.

4. Ordinal and interval scale data:

  • This option is not accurate in terms of what parametric and non-parametric analyses commonly share. Parametric analyses typically assume that the data follow a specific distribution and require interval or ratio scale data. Non-parametric analyses, on the other hand, do not rely on these assumptions and can be used with various types of data, including nominal and ordinal scale data.

 

Hence option 2 is the correct answer.

Measurement and Analysis of Data Question 8:

What are the properties on the basis of which various measurement scales are differentiated?

  1. Identity, magnitude, equal interval and ordered relationship
  2. Identity, magnitude, equal interval and ratio
  3. Identity, magnitude, equal interval and value of zero
  4. Ordered relationship, magnitude, equal interval and value of zero

Answer (Detailed Solution Below)

Option 3 : Identity, magnitude, equal interval and value of zero

Measurement and Analysis of Data Question 8 Detailed Solution

Key Points

Measurement Scales

  • Qualitative data is used to define the information and can also be further broken down into sub-categories through the four scales of measurement.
  • Psychologist Stanley Stevens developed the four common scales of measurement. Each scale of measurement has properties that determine how to properly analyse the data. The four scales are- Nominal, Ordinal, Interval, and Ratio.
  • The properties evaluated are identity, magnitude, equal interval, and value of zero:
  1. Identity - Identity refers to each value having a unique meaning.
  2. Magnitude - Magnitude means that the values have an ordered relationship to one another, so there is a specific order to the variables.
  3. Equal interval - Equal intervals mean that data points along the scale are equal, so the difference between data points one and two will be the same as the difference between data points five and six.
  4. Value of zero - A minimum value of zero means the scale has a true zero point.

Hence, we can conclude that measurement scales are differentiated on the basis of identity, magnitude, equal interval, and value of zero.

Measurement and Analysis of Data Question 9:

Grounded theory is a systematic methodology in the social sciences involving:

I. Construction of theory

II. Inductive reasoning

III. Hypothetico-deductive model

IV. Initiation with a question

  1. I, III and IV
  2. I, II and III
  3. II, III and IV
  4. I, II and IV

Answer (Detailed Solution Below)

Option 4 : I, II and IV

Measurement and Analysis of Data Question 9 Detailed Solution

Key Points

Grounded theory:
  • Grounded theory is a research methodology that was developed in the field of sociology. The goal of grounded theory is to generate a theory or explanation of a social phenomenon that is grounded in the data that is collected from research participants. Grounded theory is an inductive approach to research, which means that it starts with the data and develops a theory or explanation based on that data.
  • Grounded theory is a systematic methodology in the social sciences that involves inductive reasoning and the construction of theory. It is typically initiated with a research question and involves the following steps:
    • Construction of theory: Grounded theory involves the construction of theory based on the data that is collected during the research process. This theory is developed through an iterative process of data collection, coding, and analysis, which helps to refine and develop the theory over time.
    • Inductive reasoning: Grounded theory is based on inductive reasoning, which involves starting with specific observations and data and then developing broader generalizations and theories based on that data.
    •  Initiation with a question: Grounded theory typically begins with a research question, which is used to guide the data collection and analysis process. The research question is not considered to be a hypothesis that is tested, but rather a starting point for exploring the data and developing theory.
  • In summary, grounded theory is a systematic methodology in the social sciences that involves inductive reasoning, the construction of theory based on data, and the initiation of research with a question. It is a valuable approach for developing new theories and understanding complex social phenomena.

 

Hence option 4 is the correct answer.

Measurement and Analysis of Data Question 10:

Given below are two statements:

Statement I: As the alpha level becomes more stringent (goes from 0.05 to 0.01), the power of a statistical test decreases

Statement II: A directional hypothesis leads to more power than a non-directional hypothesis

In the light of the above Statements, choose the most appropriate answer from the options given below:

  1. Both Statement I and Statement II are true
  2. Both Statement I and Statement II are false
  3. Statement I is correct but Statement II is false
  4. Statement I is incorrect but Statement II is true

Answer (Detailed Solution Below)

Option 1 : Both Statement I and Statement II are true

Measurement and Analysis of Data Question 10 Detailed Solution

Hypothesis testing

  • Hypothesis testing is a procedure that assesses two mutually exclusive theories about the properties of a population.
  •  For Hypothesis testing, the two hypotheses are as follows:
  1. Null Hypothesis
  2. Alternative hypothesis
  • There are two types of errors defined, both with respect to the null hypothesis.
  • A Type-I error corresponds to rejecting H0 (the null hypothesis) when H0 is actually true, and a Type-II error corresponds to accepting H0 (the null hypothesis) when H0 is false. Hence four possibilities may arise:

  • Reject H0 when H0 is true → Type-I error (α)
  • Reject H0 when H0 is false → correct decision
  • Accept H0 when H0 is true → correct decision
  • Accept H0 when H0 is false → Type-II error (β)
  • Decreasing alpha from 0.05 to 0.01 increases the chance of a Type II error (makes it harder to reject the null hypothesis).
  • Statistical power or the power of a hypothesis test is the probability that the test correctly rejects the null hypothesis.
  • The higher the statistical power for a given experiment, the lower the probability of making a Type II (false negative) error. That is, the higher the power, the higher the probability of detecting an effect when there is one. In fact, the power is precisely the complement of the probability of a Type II error.
    • Low Statistical Power: Large risk of committing Type II errors, e.g. a false negative.
    • High Statistical Power: Small risk of committing Type II errors. 
  • Power is defined as 1 minus the probability of a Type II error (β), i.e., power = 1 − β. In other words, it is the probability of detecting a difference between the groups when the difference actually exists (i.e., the probability of correctly rejecting the null hypothesis). Therefore, as we increase the power of a statistical test, we increase its ability to detect a significant difference between the groups.

Hence, Statement I "As the alpha level becomes more stringent (goes from 0.05 to 0.01), the power of a statistical test decreases" is true.
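The effect can be demonstrated with a stdlib-only Python sketch of a one-sided z-test; `effect_z` (the true mean shift in standard-error units) is a made-up value for illustration, and the inverse CDF is computed by simple bisection:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_ppf(p):
    """Inverse of normal_cdf by bisection (rough stdlib-only sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power_one_sided(alpha, effect_z):
    """Power of a one-sided z-test when the true mean sits effect_z SEs above H0."""
    z_crit = normal_ppf(1 - alpha)
    return 1 - normal_cdf(z_crit - effect_z)

# Same effect size, stricter alpha -> lower power (Statement I).
print(power_one_sided(0.05, 2.0) > power_one_sided(0.01, 2.0))  # True
```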

Hypothesis:

  • A hypothesis is a formal affirmative statement predicting a single research outcome, a tentative explanation of the relationship between two or more variables.
  • To give an example, “Discussion method gives better academic scores than lecture method of teaching” or " There is no significant difference between teaching aptitude of male and female teachers".
  • In hypothesis-generating research, the researcher explores a set of data searching for relationships and patterns and then proposes hypotheses that may then be tested in some subsequent study.

Types of Research Hypotheses

Alternative Hypothesis

  • The alternative hypothesis states that there is a relationship between the two variables i.e. one variable has an effect on the other.
  • For e.g. There is a significant difference in the aptitude of urban and rural students.

Null Hypothesis:

  • The null hypothesis states that there is no relationship between the two variables i.e. one variable does not have an effect on another variable.
  • For e.g. There is no significant difference in the aptitude of urban and rural students.

Nondirectional Hypothesis:

  • A two-tailed non-directional hypothesis predicts that the independent variable will have an effect on the dependent variable, but the direction of the effect is not specified.
  • E.g., There is a difference in vocabulary between males and females in some numbers.

Directional Hypothesis

  • A one-tailed directional hypothesis predicts the nature of the effect of the independent variable on the dependent variable. Here direction is specified.
  • E.g., females have a better vocabulary than males.

Directional tests are more powerful than non-directional tests, as they specify the direction of the effect and the critical region is located in one tail. Whenever we are certain about the direction, a directional hypothesis is a better choice than a non-directional one.

Hence, Statement II "A directional hypothesis leads to more power than a non-directional hypothesis" is true.
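Statement II can also be verified in a stdlib-only sketch: for the same (hypothetical) effect size and alpha, a one-tailed test places the whole critical region in one tail and so has more power than a two-tailed test, which splits alpha across both tails.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_ppf(p):
    # Inverse CDF by bisection (stdlib-only sketch).
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, effect_z = 0.05, 2.0  # hypothetical: true mean 2 standard errors above H0

# One-tailed (directional): the entire alpha sits in one tail.
power_one = 1 - normal_cdf(normal_ppf(1 - alpha) - effect_z)
# Two-tailed (non-directional): alpha/2 per tail (the lower-tail rejection
# region is negligible for a positive effect, so it is ignored here).
power_two = 1 - normal_cdf(normal_ppf(1 - alpha / 2) - effect_z)

print(power_one > power_two)  # True: the directional test is more powerful
```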
