
1. Word count is 250 or more words. Everything must be in OWN words. There need to be two scholarly resources. Also, please be sure to answer the question thoroughly and completely. There will be two attachments: one is the assignment and the other one is the reading.

Book: Bethel University. (2013). Human Resource Management: An Experiential Approach (6th ed.). Retrieved from https://www.betheluniversityonline.net

Pre-employment tests are used by many employers as part of the selection process. What steps should employers take to assure that the tests used are legal and valid?


Copyright © 2013 The McGraw-Hill Companies. All rights reserved.


OBJECTIVES

After reading this chapter, you should be able to

1. Understand the concepts of reliability, validity, and utility.

2. Understand the validity evidence for various selection methods.

3. Discuss approaches to the more effective use of application blanks, reference checks, biographical data, testing, and various other selection methods in order to increase the validity and legal defensibility of each.

4. Discuss the approaches available for drug testing.

5. Describe the validity of different approaches to interviewing.

6. Explain how the various types of job candidate information should be integrated and evaluated.

Chapter 6: Personnel Selection

OVERVIEW

It sounds simple: Match employees with jobs. Researchers have made this task easier by developing selection methods that successfully predict employee effectiveness. Still, there is a void between what research indicates and how organizations actually do personnel selection. Real-world personnel selection is replete with examples of methods that have been proven to be ineffective or inferior.

Personnel selection (and retention) is key to organizational effectiveness. The most successful firms use methods that accurately predict future performance. The use of validated selection models is another of the High-Performance Work Practices linking this HR process to corporate financial performance. Organizations are, or should be, interested in selecting employees who not only will be effective but who will work as long as the organization needs them and, of course, will not engage in counterproductive behaviors such as violence, substance abuse, avoidable accidents, and employee theft.

Use of validated selection models: an HPWS characteristic

A multiple-hurdle process involving an application, reference and background checks, various forms of standardized testing, and some form of interview is the typical chronology of events for selection, particularly for external hiring decisions. Internal decisions, such as promotions, are typically made with less formality. Personnel selection is the process

ber29163_ch06_185-236.indd 185 17/02/12 2:38 PM


2 / Acquiring Human Resource Capability

of gathering and assessing information about job candidates in order to make decisions about personnel. The process applies to entry-level personnel and promotions, transfers, and even job retention in the context of corporate downsizing efforts. This chapter introduces you to personnel selection, describes some of the most popular types of hiring/screening procedures, reviews the research evidence on each, and discusses the social and legal implications of the various options.

The chapter begins with an overview of measurement issues related to personnel selection and staffing. Next the various selection methods are introduced in their usual order of use. Application blanks, background checks, and reference checks are discussed first. Then the various forms of standardized tests that purport to assess applicants' suitability or KASOCs are reviewed. The use, validity, and possible adverse impact of various types of selection methods are considered, including general mental ability tests and personality tests. The final sections of the chapter discuss employment interviews and methods that have been shown to increase their validity, the use of more sophisticated (and expensive) selection procedures such as assessment centers, performance testing and work samples, and drug and medical tests in the preemployment selection process. The discussion throughout addresses the legal implications of the various personnel practices and points out where there are clear discrepancies between what typically happens in practice and what academic research indicates should happen. This is one chapter where the distance between academic research findings and recommendations and actual selection practices is great. The good news is that the gap is closing.

Wackenhut Security (recently acquired by G4S) had its share of selection challenges. Although recruitment efforts and a sluggish economy attracted a large number of applicants for its entry-level armed and unarmed security guard positions, there was concern about the quality of those hired and high voluntary employee turnover. The turnover rate for some positions exceeded 100 percent, meaning that the quit rate in 1 year exceeded the number of available positions. Wackenhut Security also was dissatisfied with the quality of its supervisory personnel.

The company contracted with BA&C (Behavioral Analysts and Consultants), a Florida psychological consulting firm that specializes in staffing problems and personnel selection. Wackenhut asked BA&C to develop a new personnel selection system for entry-level guards and supervisors. Underlying this request was a need for Wackenhut to improve its competitive position in this highly competitive industry by increasing sales and contracts, decreasing costs, and, most important, making certain its security personnel do the job.

The company, which already compensated its guards and supervisors more than others in the industry, wanted to avoid any increase in compensation. The company estimated that the cost of training a new armed guard was about $1,800. With several hundred guards quitting in less than a year, the company often failed to even recover training costs in sales. Wackenhut needed new selection methods that could increase the effectiveness of the guards and supervisors and identify those guard applicants who not only performed well but would be most likely to stay with the company.

You will recall from Chapter 4 that work analysis should identify the knowledge, abilities, skills, and other characteristics (KASOCs) or competencies that are necessary for successful performance and retention on the job. In this case, BA&C first conducted a job analysis of the various guard jobs to get better information on the KASOCs required for the work. After identifying the critical KASOCs, BA&C developed a reliable, valid, and job-related weighted application blank, screening test, and interview format.

The process of selection varies substantially within this industry. While Wackenhut initially used only a high school diploma as a job specification, an application blank, a background check, and an interview by someone in personnel, competitors used more complex methods to select employees. American Protective Services, for example, the company that handled security for the Atlanta Olympics, used a battery of psychological and aptitude tests along with a structured interview. Wackenhut wanted selection systems that were even more valid and useful than what their major competitors were using. Their marketing strategy would then emphasize their more sophisticated screening methods.

First step: work analysis

As with the job analysis and the recruitment process, personnel selection should be directly linked to the HR planning function and the strategic objectives of the company. For





example, the mission of the Marriott Corporation is to be the hotel chain of choice of frequent travelers. As part of this strategy, the company developed a successful selection system to identify people who could be particularly attentive to customer demands. Wackenhut Security also had a major marketing strategy aimed at new contracts for armed security guards who would be extremely vigilant. The new selection system would be designed to identify people more likely to perform well in this capacity.

Figure 6-1 presents a chronology of our recommended strategy for selection system development and the major options available for personnel selection. The previous chapters on work analysis, planning, and recruitment have gotten us to the point of selecting job candidates based on relevant and job-related information from one or more selection methods. Each of these methods is reviewed in this chapter. But keep in mind that the focus should be on selecting or developing tools that will provide valid assessments of the critical KASOCs, competencies, and job specifications most important for strategy execution. The work analysis should identify the strategically important KASOCs or competencies from which the job specifications will be derived. Then particular selection methods (selection tools) should be adopted to assess people in terms of these particular job specifications.

Figure 6-1: Steps in the Development and Evaluation of a Selection Procedure

1. JOB ANALYSIS/HUMAN RESOURCE PLANNING. Identify knowledge, abilities, skills, and other characteristics (KASOCs), also known as competencies. Use a competency model tied to organizational objectives.

2. RECRUITMENT STRATEGY; SELECT/DEVELOP SELECTION PROCEDURES. Review options for assessing applicants on each of the KASOCs: standardized tests (cognitive, personality, motivational, psychomotor); application blanks, biographical data, background and reference checks, accomplishment record; performance tests, assessment centers, interviews.

3. DETERMINE VALIDITY FOR SELECTION METHODS. Criterion-related validation or validity generalization; expert judgment (content validity).

4. DETERMINE WEIGHTING SYSTEM FOR DATA FROM SELECTION METHODS.

SELECTION METHODS: ARE THEY EFFECTIVE?

This review includes a summary of the validity of each major approach to selection and an assessment of the relative cost to develop and administer each method. Three key terms related to effectiveness are reliability, validity, and utility. While these terms are strongly related to one another, the most important criterion for a selection method is validity. Remember the discussion of the research on High-Performance Work Practices. One of the HR practices shown to be related to corporate financial performance was the percentage of employees hired using "validated selection methods." 1 The essence of the term validity is the extent to which scores on a selection method predict one or more important criteria. While the most typical criterion of interest to selection and staffing specialists is job performance, companies also may be interested in other criteria such as how long an employee may stay on the job or whether the employee will steal from the organization, be violent, or be more likely to be involved in work-related accidents. But before addressing the validity of a method, let's look at one of the necessary conditions for validity: the reliability of measurement.

What Is Reliability?

The primary purpose of personnel selection is measuring the attributes of job candidates. A necessary condition for a selection method to be valid is that it first be reliable. Reliability concerns the degree of consistency or the agreement between two sets of scores





on some measurement device. Reliability refers to freedom from unsystematic errors of measurement. The consistency in measurement applies to the scores that derive from the selection method. These scores can come from a paper-and-pencil test, a job interview, a performance appraisal, or any other method that is used to measure characteristics and make decisions about people. The CIA uses a very long multiple-choice test as an initial screening device for job applicants to be agents. If applicants were to take the test twice, 3 weeks apart, their scores on the test would stay pretty much the same (the same thing can be said for SAT scores). These tests can be considered reliable. The level of reliability can be represented by a correlation coefficient. Correlations from 0 to 1.0 show the extent of the reliability. Generally, reliable methods have reliability coefficients that are .8 or higher, indicating a high degree of consistency in scores. No selection method achieves perfect reliability, but the goal should be to reduce error in measurement as much as possible and achieve high reliability. If raters are a part of the selection method, such as job interviewers or on-the-job performance evaluators, the extent to which different raters agree also can represent the reliability (or unreliability) of the method.
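The test-retest idea above can be sketched in code. This is a minimal illustration, not material from the text: the `pearson` helper and the applicant scores are invented, and the .8 benchmark comes from the chapter.

```python
# Sketch: test-retest reliability as a Pearson correlation between two
# administrations of the same test. All scores below are invented.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The same eight applicants tested twice, 3 weeks apart (hypothetical scores).
first = [78, 85, 62, 90, 70, 88, 75, 66]
second = [80, 83, 65, 92, 68, 90, 77, 63]

r = pearson(first, second)
print(f"test-retest reliability: {r:.2f}")
```

For these invented scores the coefficient comes out around .98, comfortably above the .8 benchmark; scores that jumped around between administrations would drive it well below that.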

Remember our criticism about the use of graphology (or handwriting analysis) for personnel selection we discussed in Chapter 1? Handwriting analysis is used by some U.S. companies and even more European firms as a method of selection. But this method is first of all not even reliable, much less valid. If the same handwriting sample were given to two graphologists, they would not necessarily agree on the levels or scores on various employment-related attributes (e.g., drive, judgment, creativity, intelligence) supposedly measured based on a handwriting sample. Thus the method has low reliability as an assessment of these attributes. (But even if the two graphologists did agree on relative levels of some attribute, this agreement would not necessarily mean that their assessments are valid.)

Reliable methods tend to be long. One of the reasons the SAT, the GRE, the GMAT, and the LSAT seem to take forever to complete is so these tests will have very high levels of reliability (and they do). Reliabilities for "high stakes" tests such as the GMAT, the SAT, and the LSAT are quite high. For example, the average reliability estimates are .92, .90, and .89 for the GMAT total score, the Verbal score, and the Quantitative score, respectively. 2 But while high reliability is a necessary condition for high validity, high reliability does not ensure that a method is valid. The GMAT may be highly reliable, but do scores on the GMAT actually predict success in business school? This question addresses the validity of the method.

The objective of the Wackenhut Security consultants was to develop a reliable, valid, legally defensible, user-friendly, and inexpensive test that could predict both job performance and long job tenure for security guards. The extent to which the test was able to predict an important criterion such as performance was an indication of the test's validity. The term validity is close in meaning but not synonymous with the critical legal term job relatedness, which is discussed in Chapters 3 and 4. Empirical or criterion-related validity involves the statistical relationship between scores on some predictor or selection method (e.g., a test or an interview) and performance on some criterion measure such as on-the-job effectiveness (e.g., sales, supervisory ratings, job turnover, employee theft). At Wackenhut, a study was conducted in which scores on the new screening test were correlated with job performance and job tenure. Given a certain level of correlation, such a study would support a legal argument of job relatedness.

The statistical relationship is usually reported as a correlation coefficient. This describes the relationship between scores on the predictor and measures of effectiveness (also called criteria). Correlations from −1 to +1 show the direction and strength of the relationship. Higher correlations indicate stronger validity. Assuming that the study was conducted properly, a significant correlation between the scores on a method and scores (or data) on some important criterion could be offered as a strong argument for the job relatedness of the method. Under certain circumstances, correlation coefficients even in the .20s can signify a useful method. However, higher correlations are clearly better. In general, an increase in the validity of a selection method will translate into a proportional increase in the average dollar value of the annual output from employees who are selected with this method.

What Is Validity?

Good reliability: .8 or higher

Validity is close in meaning to “job relatedness”

Criterion-related validity





While higher correlations are generally better, the size of the sample (among other factors) is very important for achieving statistical significance. Validity studies with small sample sizes will often not achieve significance, mainly because of the error in the study. Many selection methods have average validities between .20 and .40. Samples of a minimum of 100 scores are strongly recommended in order to empirically validate in a particular setting. 3 So, do scores on the GMAT predict success in business school? Clearly, they do, with an average validity of about .5 across hundreds of studies.
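The sample-size point can be made concrete with the standard significance test for a correlation. This is an illustration, not from the text's endnote: the t statistic for a correlation is t = r * sqrt(n - 2) / sqrt(1 - r^2), and for moderate samples |t| must exceed roughly 2 to reach significance at the .05 level.

```python
# Sketch: why a typical validity of .30 rarely reaches statistical
# significance in a small sample but does in a sample of 100 or more.

def t_stat(r, n):
    """t statistic for testing whether a correlation r from n pairs is nonzero."""
    return r * (n - 2) ** 0.5 / (1 - r ** 2) ** 0.5

for n in (30, 100):
    print(f"r = .30 with n = {n:3d} pairs: t = {t_stat(0.30, n):.2f}")
```

With n = 30 the statistic is about 1.66 (not significant), while with n = 100 it is about 3.11 (significant), which is consistent with the chapter's recommendation of at least 100 pairs of scores.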

Another key issue that will have an impact on the results and interpretation of empirical studies is the conceptual match between a particular criterion of interest (e.g., some element of job performance) and any particular predictor. Cognitively loaded predictors (those correlated with general mental ability [GMA]) are the strongest predictors of task performance, while so-called noncognitive predictors such as personality and motivational measures are better predictors of contextual performance/citizenship behavior (e.g., effects on co-workers) and counterproductive behavior (e.g., employee theft).

A critical concept related to validity is generalizability. This term refers to the extent to which the validity of a selection method can generalize to other employment settings or situations. At the most basic level, generalizability concerns whether the validity of a selection method, established based on a study or studies in other situations, can be inferred for a new situation in which no new correlational data are collected. Validity generalization (VG) invokes evidence from past studies on a selection method that is then applied to a new and similar setting. Many studies have used appropriate scientific methods to establish the validity and generalizability of constructs, such as cognitive or general mental ability and emotional intelligence, and also particular instruments and methods developed to measure these constructs. Meta-analytic techniques are used to establish VG for a method. Meta-analysis is a methodology for quantitatively accumulating results across studies. Meta-analytic findings are generally more reliable than results obtained from an individual study and help researchers draw conclusions. Like other areas of scientific inquiry, meta-analytic methods have evolved and new refinements continue to emerge. These improvements have increased the accuracy of meta-analytic methods and estimates of the validity of these particular selection tests and methods. 4
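The core meta-analytic step can be sketched as a sample-size-weighted average of validity coefficients from past studies. This is a simplified illustration with invented study values; real meta-analyses also correct for statistical artifacts such as measurement unreliability and range restriction.

```python
# Sketch: the basic accumulation step behind validity generalization, an
# N-weighted mean of observed validities. All study values are hypothetical.

studies = [  # (sample size, observed validity r)
    (120, 0.25),
    (85, 0.34),
    (300, 0.29),
    (60, 0.18),
]

total_n = sum(n for n, _ in studies)
mean_r = sum(n * r for n, r in studies) / total_n
print(f"N-weighted mean validity across {len(studies)} studies: {mean_r:.2f}")
```

Weighting by sample size lets large, low-error studies count for more than small ones, which is one reason meta-analytic estimates are more stable than any single validity study.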

VG is an excellent alternative to empirical validation for selection methods when a criterion-related validation study cannot be done because of inadequate sample sizes or other reasons. Employers could invoke an appropriate VG study to argue that a particular test or method is valid for their setting as well. This approach is recommended if there are insufficient data to allow for an empirical study by this employer (i.e., fewer than the minimum of 100 pairs of scores on an instrument correlated with performance data on the same individuals).

A VG argument for validity can be invoked if an organization can first locate previously conducted empirical studies showing that the same or similar methods (e.g., tests) are valid for a particular job or purpose. The organization should then produce an analysis showing that the job for which the method is used (or will be used) for selection is the same as, or very similar to, the job(s) that were involved in the empirical studies of the VG study and that the criterion measures used in the VG studies are also important for the organization. Does an accredited MBA program need to do another study showing the validity of the GMAT for that particular program? Almost certainly not; there is plenty of evidence documenting the VG of this test for predicting business school success.

Figure 6-2 presents a summary of the meta-analytic evidence for the most popular selection tools, plus the relative cost of their development and administration. An obvious and critical question is "How large must a correlation be?" Correlations between .20 and .30 are often discounted because they account for less than 10 percent of the variance in performance. However, as a matter of fact, a correlation of, say, .30 for a selection method is sufficiently large that hiring applicants who score better on this particular measure can actually double the rate of successful performance. For example, with validity at .30, 67 percent of individuals who score in the top 20 percent on a measure would have above-average performance versus only 33 percent of individuals who score in the bottom 20 percent.
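The 67 percent versus 33 percent figures can be checked by simulation. The sketch below is illustrative (the data are randomly generated, not from any study): it draws predictor and performance scores correlated at about .30 and compares the share of above-average performers among the top and bottom 20 percent of scorers.

```python
# Sketch: simulate a predictor with validity .30 and compare success rates
# for top-20% vs bottom-20% scorers, as described in the chapter.
import random

random.seed(7)
r = 0.30        # assumed validity coefficient
n = 100_000     # simulated applicants

pairs = []
for _ in range(n):
    z = random.gauss(0, 1)  # standardized predictor (test) score
    # performance correlated with z at about r, unit variance overall
    perf = r * z + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
    pairs.append((z, perf))

pairs.sort(key=lambda p: p[0])   # order applicants by predictor score
bottom = pairs[: n // 5]         # bottom 20% of scorers
top = pairs[-(n // 5):]          # top 20% of scorers

def above_average_rate(group):
    """Share of a group whose performance exceeds the overall mean (0)."""
    return sum(perf > 0 for _, perf in group) / len(group)

print(f"top 20% of scorers:    {above_average_rate(top):.0%} above average")
print(f"bottom 20% of scorers: {above_average_rate(bottom):.0%} above average")
```

Under these assumptions the two rates land near 67 percent and 33 percent, matching the text's example even though the correlation explains less than 10 percent of the variance.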

Validity Generalization






Figure 6-2: Selection Tools, and Cost for Development and Administration

Tool | Validity(1) | Costs (Development/Administration)(2)
General mental ability (GMA) tests measure mental abilities such as reading comprehension, verbal or math skills. | .5–.7(3) | Low/low
Structured interviews measure a variety of skills and abilities using a standard set of questions. | .4–.45 | High/high
Unstructured interviews measure a variety of skills using questions that vary from candidate to candidate and interviewer to interviewer. | .2–.3 | Low/high
Work samples/performance tests measure job skills using the actual performance of tasks as on the job. | .3–.4 | High/high
Job knowledge tests measure bodies of knowledge required by a job. | .4–.5 | High/low
Personality testing(4):
  Conscientiousness | .25–.3 | Low/low
  Extraversion | .15–.35(5) | Low/low
  Emotional stability | .1–.3 | Low/low
  Agreeableness(6) | .1–.2 | Low/low
  Openness to experience | .1–.2 | Low/low
Biographical information measures a variety of skills and personal characteristics through questions about education, training, work experience, and interests. | .3–.4 | High/low
Measures of work experience (e.g., "behavioral consistency") | .3–.4 | High/low
Situational judgment tests measure a variety of skills with short scenarios (either in written or video format) asking test takers what their most likely response would be. | .3–.4 | High/low
Integrity tests measure attitudes and experiences related to a person's honesty, dependability, and trustworthiness. | .3–.4 | Low/low
Assessment centers measure KASOCs through a series of work samples/exercises with trained assessors (may include GMA and other tests). | .3–.45 | High/high
Reference checks provide information about an applicant's past performance or measure the accuracy of applicants' statements on their résumés. | .2–.3 | Low/low

Notes:
(1) Validities range from 0 to 1.0; higher numbers indicate better prediction of job performance. Ranges are reported here.
(2) References to high or low are based on relative comparisons to other methods.
(3) Validities for more complex jobs tend to be higher for GMA.
(4) Validities for personality measures tend to vary with the job. FFM self-report validity ranges are reported here. Much stronger validities (.5–.6 range) for peer-based (versus self-reported) measures of personality.
(5) Stronger validity in predicting managerial and/or leadership performance; weak validities for jobs involving less interaction.
(6) Low validity for managerial jobs (.10); higher validities for team-based settings.

Sources: Adapted from W. F. Cascio and H. Aguinis (2011), Applied Psychology in Human Resource Management, Upper Saddle River, NJ: Prentice Hall; and A. M. Ryan and N. T. Tippins (2004), Attracting and selecting: What psychological research tells us, Human Resource Management, 43, 307–308.

Content validity assesses the degree to which the contents of a selection method (i.e., the actual test or instrument items or components) represent (or assess) the requirements of the job. This approach to validation is of course ideal when the employer lacks an adequate sample size to be able to empirically validate a method. Subject matter experts are typically used to evaluate the compatibility of the content of the method with the actual requirements of a job (e.g., is the knowledge or skill assessed on the test compatible with the knowledge or skill required on the actual job?). Such a study or evaluation by experts also can be offered as evidence of job relatedness, but the study should follow the directions provided by the Supreme Court in Albemarle v. Moody (see Chapter 3) and, just to be safe, comply with the Uniform Guidelines on Employee Selection Procedures (UGESP). (See www.eeoc.gov for details on the UGESP.)

A knowledge-based test for Certified Public Accountant could be considered to have content validity for an accounting job. Many organizations now use job simulations or work samples where an applicant is instructed to play the role of a job incumbent and perform tasks judged to be directly related to the job. Content validation is ideal for these types of methods. Of course, with this approach to validation, it is assumed that job candidates have the essential KASOCs at the time of assessment. Another possible problem is





that content validation relies on the judgments of humans regarding "job relatedness" or the validity of these methods and the underlying items of the method. This approach is also inappropriate for tests of basic constructs such as cognitive or general mental ability or personality characteristics.

The validity correlation coefficient can also be used to calculate the financial value of a selection method, using a utility formula, which can convert correlations into dollar savings or profits that can be credited to a particular selection method. A method's utility depends not only on its validity but on other issues as well. For example, recall the discussion of selection ratio in Chapter 5. Selection ratio is the number of positions divided by the number of applicants for those positions. A test with perfect validity will have no utility if the selection ratio is 1.0 (one applicant per position). This is why an organization's reputation, its recruitment programs, and other HR issues such as compensation are so important for personnel selection. Valid selection methods have great utility for an organization only when that organization can be selective based on the scores on that method.

Utility (U), or expected return based on using a particular selection method, is typically derived from the formula

U = (Ns)(rxy)(SDy)(Zx) − (NT)(C)

where Ns = number of job applicants selected; rxy = the validity coefficient for the method; SDy = standard deviation of job performance in dollars; Zx = average score on the selection method for those hired (a measure of the quality of recruitment); NT = number of applicants assessed with the selection method; and C = cost of assessing each job candidate with the selection method. In general, the higher the validity of a method, the higher its utility. Any increase in the validity of a selection method translates into an increase in the average dollar value of the annual productivity of employees who are selected with the method. Even a small percentage increase can translate into a substantial annual output per employee and thus large financial gains.
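The utility formula above can be sketched as follows. The input numbers are invented for illustration and loosely patterned on the security-guard example; they are not figures from the case.

```python
# Sketch of the chapter's utility formula:
#   U = Ns * rxy * SDy * Zx - NT * C
# All inputs below are hypothetical.

def utility(n_selected, validity, sd_dollars, mean_z_hired, n_tested,
            cost_per_candidate):
    """Expected annual dollar return from using a selection method."""
    return (n_selected * validity * sd_dollars * mean_z_hired
            - n_tested * cost_per_candidate)

# Hypothetical: hire 50 guards from 400 applicants using a test with
# validity .30, SDy of $10,000, average hired z score of 1.0, $25 per test.
u = utility(n_selected=50, validity=0.30, sd_dollars=10_000,
            mean_z_hired=1.0, n_tested=400, cost_per_candidate=25)
print(f"estimated annual utility: ${u:,.0f}")
```

With these assumed inputs the gain term is 50 * .30 * $10,000 * 1.0 = $150,000 and the testing cost is 400 * $25 = $10,000, for a utility of about $140,000 per year. Note also that if the selection ratio were 1.0, everyone would be hired, Zx would be 0, and utility would be negative by exactly the testing cost, which is the chapter's point about selectivity.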

Selection methods with high validity that are relatively in
