Personnel Selection

The goal of many selection procedures (e.g., structured interviews, personality tests, assessment centers) is to find out how applicants would typically behave in work-relevant situations. During the selection situation, however, applicants often try to make the best possible impression. Our research addresses this phenomenon by combining different streams of research (e.g., on candidates' ability to identify criteria, and on typical versus maximum work performance).

Previous publications in this line of research include:

  • Klehe, U.-C., Kleinmann, M., Hartstein, T., Melchers, K., König, C., Heslin, P., & Lievens, F. (in press).
    Responding to Personality Tests in a Selection Context: The Role of the Ability to Identify Criteria and the Ideal-Employee Factor.
    Human Performance.
Abstract
Personality assessments are often distorted during personnel selection, resulting in a common “ideal-employee factor” (IEF) underlying ratings of theoretically unrelated constructs. However, this seems not to affect the personality measures’ criterion-related validity. The current study attempts to explain this set of findings by combining the literature on response distortion with the literatures on cognitive schemata and on candidates’ ability to identify criteria (ATIC). During a simulated selection process, 149 participants filled out Big Five personality measures and participated in several high- and low-fidelity work simulations to estimate their managerial performance. Structural equation modeling showed that the IEF represents an indicator of response distortion and that ATIC accounted for shared variance between the IEF and performance during the work simulations, even after controlling for self-monitoring and general mental ability.
Availability
Journal’s website: available soon
Open access download: -
Further access:
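
The “ideal-employee factor” (IEF) discussed above is, statistically, a general factor running through ratings of theoretically unrelated trait scales. As a rough illustration of that idea only (not the paper's structural equation model), the following Python sketch extracts the first principal component from a hypothetical matrix of Big Five scale intercorrelations under applicant conditions; every number in it is invented:

    # Sketch: a general "ideal-employee factor" as the first principal
    # component of Big Five scale intercorrelations. Hypothetical data only.
    import numpy as np

    traits = ["E", "A", "C", "ES", "O"]
    R = np.array([  # invented applicant-condition intercorrelations
        [1.00, 0.45, 0.50, 0.40, 0.35],
        [0.45, 1.00, 0.48, 0.42, 0.30],
        [0.50, 0.48, 1.00, 0.44, 0.33],
        [0.40, 0.42, 0.44, 1.00, 0.28],
        [0.35, 0.30, 0.33, 0.28, 1.00],
    ])

    eigvals, eigvecs = np.linalg.eigh(R)       # eigendecomposition of R
    first = eigvecs[:, np.argmax(eigvals)]     # first principal component
    if first.sum() < 0:                        # fix arbitrary eigenvector sign
        first = -first
    loadings = first * np.sqrt(eigvals.max())  # rescale to factor loadings

    for trait, loading in zip(traits, loadings):
        print(f"{trait}: loading on general factor = {loading:.2f}")
    print(f"variance explained: {eigvals.max() / len(traits):.0%}")

The larger and more uniform these loadings, the more plausibly a single “look like a good employee” response tendency runs through all scales.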

 

  • Lievens, F., Klehe, U.-C., & Libbrecht, N. (2011).
    Applicant versus employee scores on self-report emotional intelligence measures.
    Journal of Personnel Psychology, 10, 89-95. (Note. Order of authorship was based on a completely random yet skillfully executed flip of a perfectly normal coin.)
Abstract
There is growing interest in assessing applicants’ emotional intelligence (EI) via self-report trait-based measures of EI as part of the selection process. However, some studies that experimentally manipulated applicant conditions have cautioned that in these conditions the use of self-report measures for assessing EI might lead to considerably higher scores than current norm scores suggest. So far, no studies have scrutinized self-reported EI scores among a sample of actual job applicants. Therefore, this study compares the scores of actual applicants at a large ICT organization (n = 109) on a well-known self-report measure of EI to the scores of employees already working in the organization (n = 239). The current study is the first to show that applicants’ scores on a self-report measure of EI during the selection process are indeed higher (d = 1.12) and have less variance (SD ratio = 0.86/1) than incumbents’ scores. Finally, a meta-analytic combination of our results with those of earlier research showed that a score increase of about 1 SD in applicant conditions seems to be the rule, regardless of the type of setting, self-report EI measure, and within- versus between-subjects design employed.
Availability
Journal’s website: http://www.psycontent.com/content/920w040u0r19624n/?p=b4d2e842a7274e208d9a1f4bb2b5d5b6&pi=0
Open access download: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-88689
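
The d = 1.12 and SD ratio = 0.86 reported above are two-group summary statistics. As a minimal sketch of one standard way to compute such values from group means and standard deviations (the inputs below are invented, chosen only to land near the reported effect sizes):

    # Cohen's d (pooled SD) and the applicant/incumbent SD ratio.
    # Summary statistics are hypothetical, not the study's data.
    import math

    n_app, mean_app, sd_app = 109, 4.1, 0.43   # applicants (invented)
    n_inc, mean_inc, sd_inc = 239, 3.6, 0.50   # incumbents (invented)

    # Pooled standard deviation of two independent groups
    sd_pooled = math.sqrt(((n_app - 1) * sd_app**2 + (n_inc - 1) * sd_inc**2)
                          / (n_app + n_inc - 2))

    d = (mean_app - mean_inc) / sd_pooled  # standardized mean difference
    sd_ratio = sd_app / sd_inc             # < 1: less variance among applicants

    print(f"Cohen's d = {d:.2f}, SD ratio = {sd_ratio:.2f}")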

 

  • Kleinmann, M. & Klehe, U.-C. (2011).
    Selling oneself: Construct and criterion validity of impression management in structured interviews.
    Human Performance, 24, 29-46.
Abstract
Interviewee impression management has been a long-standing concern in the interview literature. Yet recent insights into the impact of impression management on interviewee performance in structured interviews suggest that interviewee impression management may be more than just a source of bias and a nuisance. Rather, impression management should possess construct-related validity and contribute to the interviews’ criterion-related validity. These hypotheses were tested with 129 participants using a simulated selection interview aimed at university graduates. Results confirmed most of the hypotheses. In particular, interviewee impression management behavior showed construct-related validity across different structured interview types and correlated positively with interviewees’ performance on subsequent typical and maximum performance proxy criteria. Implications and directions for future research are discussed.
Availability
Journal’s website: http://www.tandfonline.com/doi/abs/10.1080/08959285.2010.530634
Open access download: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-88699

 

  • Melchers, K. G., Klehe, U.-C., Richter, G. M., Kleinmann, M., König, C. J., & Lievens, F. (2009).
    “I know what you want to know”: The impact of interviewees’ ability to identify criteria on interview performance and construct-related validity.
    Human Performance, 22, 355-374.
Abstract
The current study tested whether candidates' ability to identify the targeted interview dimensions fosters their interview success as well as the interviews' convergent and discriminant validity. Ninety-two interviewees participated in a simulated structured interview developed to measure three different dimensions. In line with the hypotheses, interviewees who were more proficient at identifying the targeted dimensions received better evaluations. Furthermore, interviewees' ability to identify these evaluation criteria accounted for substantial variance in predicting their performance even after controlling for cognitive ability. Finally, the interviewer ratings showed poor discriminant and convergent validity. However, we found some support for the hypothesis that the quality of the interviewer ratings improves when one only considers ratings from questions for which interviewees had correctly identified the intended dimensions.
Availability
Journal’s website: http://www.tandfonline.com/doi/pdf/10.1080/08959280903120295
Open access download: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-85161
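
The incremental-validity logic in this abstract (does ATIC still predict interview performance once cognitive ability is controlled?) can be illustrated with a hierarchical regression. A minimal sketch on simulated data; variable names and effect sizes are illustrative assumptions, not the study's:

    # Hierarchical regression: Delta R^2 for ATIC over cognitive ability.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 92
    gma = rng.normal(size=n)                      # general mental ability
    atic = 0.3 * gma + rng.normal(size=n)         # ATIC, overlapping with gma
    perf = 0.3 * gma + 0.4 * atic + rng.normal(size=n)  # interview rating

    def r_squared(X, y):
        """R^2 of an OLS regression of y on X (intercept included)."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    r2_step1 = r_squared(gma.reshape(-1, 1), perf)            # ability only
    r2_step2 = r_squared(np.column_stack([gma, atic]), perf)  # + ATIC
    print(f"Delta R^2 for ATIC: {r2_step2 - r2_step1:.3f}")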

 

  • Klehe, U.-C., König, C. J., Kleinmann, M., Richter, G. M., & Melchers, K. G. (2008).
    Transparency in Structured Selection Interviews: Consequences for Construct and Criterion-Related Validity.
    Human Performance, 21, 107-137.
Abstract
Although researchers agree on the use of structured interviews in personnel selection, past research has been undecided on whether these interviews need to be conducted nontransparently (i.e., without giving interviewees any indication of the evaluated criteria) or transparently (i.e., by revealing to interviewees the dimensions assessed in the interview). This article presents two independent studies examining the effects of interview transparency on interviewees’ performance and on the interview’s construct and criterion-related validity in the context of an application training program. Results from both Study 1 (N = 123) and Study 2 (N = 269) indicate an improvement in interviewees’ performance under transparent interview conditions. Both studies further support the assumption that transparent interviews show satisfactory construct validity, whereas nontransparent interviews do not. Moreover, Study 2 showed no significant difference between the interview’s criterion-related validity under transparent versus nontransparent conditions. Implications and directions for future research are discussed.
Availability
Journal’s website: http://www.tandfonline.com/doi/pdf/10.1080/08959280801917636
Open access download: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-85140

 

  • König, C. J., Melchers, K. G., Kleinmann, M., Richter, G. M., & Klehe, U.-C. (2007).
    Candidates' ability to identify criteria in nontransparent selection procedures: Evidence from an assessment center and a structured interview.
    International Journal of Selection and Assessment, 15, 283-292. 
Abstract
In selection procedures like assessment centers (ACs) and structured interviews, candidates are often not informed about the targeted criteria. Previous studies have shown that candidates' ability to identify these criteria (ATIC) is related to their performance in the respective selection procedure. However, past research has studied ATIC in only one selection procedure at a time, even though it has been assumed that ATIC is consistent across situations, which is a prerequisite for ATIC to contribute to selection procedures' criterion-related validity. In this study, 95 candidates participated in an AC and a structured interview. ATIC scores showed cross-situational consistency across the two procedures and accounted for part of the relationship between performance in the two selection procedures. Furthermore, ATIC scores in one procedure predicted performance in the other procedure even after controlling for cognitive ability. Implications and directions for future research are discussed.
Availability
Journal’s website: http://onlinelibrary.wiley.com/doi/10.1111/j.1468-2389.2007.00388.x/full
Open access download: -
Further access: please contact  to receive a copy of the manuscript.

  

  • Klehe, U.-C. & Latham, G. (2006).
    What would you do – really or ideally? The constructs underlying the behaviour description interview and situational interview in predicting typical versus maximum performance.
    Human Performance, 19, 357-382. 
Abstract
A predictive validation study was conducted to determine the extent to which the behavior description (BDI) and situational (SI) interviews predict typical versus maximum performance. Incoming MBA students (n = 79) were interviewed regarding teamplaying behavior. Four months later, peers within study groups anonymously evaluated each person’s typical teamplaying behavior, whereas other peers within project groups anonymously evaluated each person’s maximum teamplaying behavior. Both the BDI and the SI predicted typical performance. The SI also predicted maximum performance. Implications and directions for future research are discussed.
Availability
Journal’s website: http://www.tandfonline.com/doi/pdf/10.1207/s15327043hup1904_3
Open access download: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-85155

 

  • König, C., Melchers, K., Kleinmann, M., Richter, G., & Klehe, U.-C. (2006).
    The relationship between the ability to identify evaluation criteria and integrity test scores.
    Psychology Science, 48, 369-377. 
Abstract
-
Availability
Journal’s website: http://www.journaldatabase.org/articles/relationship_between_ability_identify.html
Open access download: http://www.journaldatabase.org/articles/relationship_between_ability_identify.html

 

  • Klehe, U.-C. & Latham, G. (2005).
    The predictive and incremental validity of the situational and patterned behavior description interviews for teamplaying behavior.
    International Journal of Selection and Assessment, 13, 108-115. 
Abstract
A predictive validation study of the situational interview (SI) and the patterned behavioral description interview (PBDI) was conducted for the criterion of teamplaying behavior. Both the SI and PBDI were valid predictors (r = .41 and .34, respectively). Only the SI, however, had incremental validity. Reasons for this finding are explained in terms of the development and scoring of SI items, conducting a pilot study, use of a panel who took notes, and the method of including/excluding interview applicants and questions for a validation study.
Availability
Journal’s website: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=739925
Open access download: -
Further access:

 

  • Huffcutt, A. I., Conway, J. M., Roth, P. L., & Klehe, U.-C. (2004).
    The impact of job complexity and study design on situational and behavior description interview validity.
    International Journal of Selection and Assessment, 12, 262-273. 
Abstract
The primary purpose of this investigation was to test two key characteristics hypothesized to influence the validity of situational (SI) and behavior description (BDI) structured interviews. A meta-analysis of 54 studies with a total sample size of 5,536 suggested that job complexity influences the validity of SIs, with decreased validity for high-complexity jobs, but does not influence the validity of BDIs. Results also indicated a main effect for study design across both SIs and BDIs, with predictive studies having 0.10 lower validity on average than concurrent studies. Directions for future research are discussed.
Availability
Journal’s website: http://onlinelibrary.wiley.com/doi/10.1111/j.0965-075X.2004.280_1.x/abstract
Open access download: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-85295
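
At its core, a subgroup comparison like the one above rests on weighted mean validities computed within subsets of studies. A bare-bones sketch that weights by sample size and ignores the artifact corrections a full psychometric meta-analysis would apply; the study-level values are invented:

    # Bare-bones subgroup meta-analysis: N-weighted mean validity by design.
    studies = [
        # (observed validity r, sample size N, study design) -- invented
        (0.30, 120, "predictive"), (0.22, 85, "predictive"), (0.18, 200, "predictive"),
        (0.41, 150, "concurrent"), (0.35, 95, "concurrent"), (0.29, 240, "concurrent"),
    ]

    def weighted_mean_r(subset):
        """Sample-size-weighted mean correlation (bare-bones approach)."""
        total_n = sum(n for _, n, _ in subset)
        return sum(r * n for r, n, _ in subset) / total_n

    for design in ("predictive", "concurrent"):
        subset = [s for s in studies if s[2] == design]
        print(f"{design}: weighted mean r = {weighted_mean_r(subset):.2f}")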

 

  • Melchers, K. G., Kleinmann, M., Richter, G. M., König, C. J., & Klehe, U.-C. (2004).
    Messen Einstellungsinterviews das, was sie messen sollen? Zur Bedeutung der Bewerberkognitionen über bewertetes Verhalten. [Do selection interviews measure what they are supposed to measure? The role of applicant cognitions about observed behavior].
    Zeitschrift für Personalpsychologie, 3, 159-169. 
Abstract
Structured selection interviews frequently attempt to assess several requirement dimensions that are necessary for successful job performance. However, it remains largely unclear to what extent these interviews actually succeed in capturing the targeted dimensions. To examine this question, a structured interview consisting of three components (self-presentation, biographical questions, and situational questions) was conducted as part of an application training program. An analysis of the multitrait-multimethod matrix revealed that the interview showed little construct validity. However, the overall interview evaluation correlated with the number of questions for which interviewees correctly identified the targeted requirement dimension. Moreover, participants were also rated better, intraindividually, on questions for which they identified the relevant dimension than on questions for which they did not. It is argued that the ability to identify relevant requirement and evaluation dimensions can explain part of the predictive validity of structured interviews.
Availability
Journal’s website: http://www.psycontent.com/content/r31g92rk22v637kr/
Open access download: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-85273
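
The multitrait-multimethod (MTMM) analysis mentioned above compares convergent correlations (same dimension, different interview component) with heterotrait-monomethod correlations (different dimensions, same component). A minimal sketch on an invented 2-dimension x 2-component matrix, deliberately constructed so that same-method correlations dominate, mirroring the poor construct validity reported in the paper:

    # MTMM check: convergent vs. heterotrait-monomethod correlations.
    import numpy as np

    # Row order: (dim1, comp1), (dim2, comp1), (dim1, comp2), (dim2, comp2)
    R = np.array([  # invented correlations
        [1.00, 0.55, 0.25, 0.20],
        [0.55, 1.00, 0.15, 0.30],
        [0.25, 0.15, 1.00, 0.50],
        [0.20, 0.30, 0.50, 1.00],
    ])
    dims = [1, 2, 1, 2]
    comps = [1, 1, 2, 2]

    convergent, same_method = [], []
    for i in range(4):
        for j in range(i + 1, 4):
            if dims[i] == dims[j] and comps[i] != comps[j]:
                convergent.append(R[i, j])   # monotrait-heteromethod
            elif dims[i] != dims[j] and comps[i] == comps[j]:
                same_method.append(R[i, j])  # heterotrait-monomethod

    print(f"mean convergent r           = {np.mean(convergent):.2f}")
    print(f"mean heterotrait-monomethod = {np.mean(same_method):.2f}")

Good construct validity would require the first value to clearly exceed the second; in this invented matrix (as in the study) it does not.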

 

  • Klehe, U.-C. (2007).
    Biographische Fragebögen [Biographical Inventories].
    In H. Schuler & K. Sonntag (Eds.), Handbuch der Arbeits- und Organisationspsychologie, p. 497-502. Göttingen, Germany: Hogrefe. 
Abstract
-
Availability
Journal’s website: -
Open access download: -
Further access:

 

  • Kleinmann, M., Melchers, K. G., König, C. J., & Klehe, U.-C. (2007).
    Transparenz der Anforderungsdimensionen: Ein Moderator der Konstrukt- und Kriteriumsvalidität des Assessment Centers [Transparency of the observed dimensions. A moderator of assessment center construct- and criterion related validity].
    In H. Schuler (Ed.), Assessment Center zur Potentialanalyse [Assessment centers for analyzing potential], p. 70-80. Göttingen, Germany: Hogrefe. 
Abstract
-
Availability
Journal’s website: -
Open access download: -
Further access: please contact  to receive a copy of the manuscript.

 

  • Klehe, U.-C., & Anderson, N. (2005).
    The prediction of typical and maximum performance.
    In A. Evers, O. Smit-Voskuijl, & N. Anderson (Eds.), Handbook of Personnel Selection. Blackwell, U.K.
Abstract
In any selection process, organizations wish to distinguish between what applicants can do (i.e., maximum performance) and what they will do (i.e., typical performance) in terms of their likely job performance. Our objectives for the current chapter are to outline the distinction between typical and maximum performance and to demonstrate how it can add valuable information for both practitioners and researchers in personnel selection: while this distinction fits well with current models of job performance and has received considerable attention in theoretical accounts, it is frequently overlooked by both scientists and practitioners in personnel selection. Researchers run extensive validation studies, and organizations make huge financial investments in the selection of new employees, without knowing which of these two aspects of performance they are predicting, or even trying to predict (Guion, 1991). Finally, we will propose areas of future research, such as moderators and boundary conditions, and we will outline potential pitfalls in the study of typical and maximum performance.
Availability
Journal’s website: -
Open access download: -
Further access:

 

  • Latham, G. P. & Klehe, U.-C. (2002).
    Towards an understanding of the underlying constructs of the situational and patterned behavior description interview in predicting typical versus maximum performance.
    In W. Auer-Rizzi, C. Innreiter-Moser, & E. Szabo (Eds.), Management in einer Welt der Globalisierung und Diversität: Europäische und nordamerikanische Sichtweisen [Management in a global, yet diverse world: Perspectives across Europe and North America]. Stuttgart, Germany: Schäffer.
Abstract
-
Availability
Journal’s website: -
Open access download: -
Further access: