HurtzLab encompasses the basic and applied research activities of Dr. Greg Hurtz, in collaboration with his students and colleagues. Dr. Hurtz is currently a Professor of Psychology at California State University, Sacramento, with a background in industrial/organizational psychology, psychometrics, and statistical methods. He has been on the faculty at Sacramento State since 2002, regularly teaching industrial psychology and various courses in research methods, statistics, and psychological testing/measurement at both the undergraduate and graduate levels. He has maintained an active research program and has chaired over 20 master's theses and supervised many other student projects, mostly within the areas of industrial and quantitative psychology. Before joining Sacramento State, he worked for two years as a full-time measurement statistician at Excelsior College, and before that he held several internship-level positions in industrial psychology and psychometrics while completing his master's and doctoral degrees. A listing of his research publications and presentations can be viewed by clicking the "Recent Research" link above.
Also visit Greg Hurtz on:
- LinkedIn: http://www.linkedin.com/in/greghurtz
- Google Scholar Citations: http://scholar.google.com/citations?user=7Zz4uhkAAAAJ&hl=en
- CSUS I-O Program Website: http://www.csus.edu/psyc/graduate/industrial-organizational-psychology.html
Areas of Expertise for HurtzLab
In this area of research we focus primarily on the development and use of psychological measures in the context of employee selection and training. Recent and current projects fall into the following topic areas:
Cognitive Ability Testing
Cognitive ability tests are consistently strong predictors of job performance across a wide variety of jobs. We carry out research developing measures of different cognitive abilities, following taxonomies such as the CHC and O*NET models. We have developed and pilot tested some of these tests with undergraduate research participants, and others in police academies throughout the State of California. We evaluate the dimensionality of the tests and calibrate items using both Rasch and item response theory models.
Job Knowledge/Skill Testing
Job knowledge and skill tests measure the specific knowledge and skills required for a particular job or occupation, based on findings from a work or practice analysis of that job or occupation. Our work includes the development and analysis of licensure and certification tests as well as more targeted employee selection tests. We also conduct research on setting performance standards (cutoff scores) on these tests for use in decision-making. We apply classical test theory methods as well as Rasch and item response theory models in analyzing these tests.
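As a concrete illustration of one widely used standard-setting approach, the modified Angoff method derives a cut score from judges' item-level ratings: each judge estimates the probability that a minimally competent candidate answers each item correctly, and the cut score is the sum of the item means. A minimal sketch with made-up ratings (not data from any actual study):

```python
# Hypothetical modified-Angoff computation. The judges, items, and
# ratings below are invented for illustration only.
ratings = {  # judge -> per-item probability estimates for a borderline candidate
    "judge_1": [0.6, 0.8, 0.5, 0.9],
    "judge_2": [0.7, 0.7, 0.6, 0.8],
    "judge_3": [0.5, 0.9, 0.4, 0.9],
}

n_items = len(next(iter(ratings.values())))
# Average the judges' estimates item by item:
item_means = [
    sum(r[i] for r in ratings.values()) / len(ratings) for i in range(n_items)
]
# The cut score is the expected raw score of a borderline candidate:
cut_score = sum(item_means)
print(round(cut_score, 2))  # -> 2.77
```

In practice the panel ratings would be collected over multiple rounds with discussion and impact data, but the arithmetic of the cut score is as simple as shown.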
Training Evaluation Research
Training involves the development of work-related knowledge and skills among new hires or an existing workforce. We apply our expertise in measurement to develop tests of knowledge and skills acquired during training, as measures of training success. We also apply our expertise in experimental and quasi-experimental design to plan training evaluation research studies and evaluate the results of such studies. In this respect we capitalize on the full "research trinity" of design, measurement, and analysis.
In this area of research we focus on exploring and evaluating the measurement and statistical models used in psychological research and practice. Recent and current projects fall into the following topic areas:
Psychological Measurement
Measurement is crucial to any science, and psychological science is no exception. We research measurement methods for psychological constructs, along with models for evaluating psychological measures and scaling people. We research and apply Rasch measurement and item response theory, including methods for testing their assumptions and for developing high-quality psychological measures.
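For readers unfamiliar with these models, the dichotomous Rasch model expresses the probability of a correct response as a logistic function of the difference between a person's ability and an item's difficulty, both on the same logit scale. A minimal sketch:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model:
    P(X = 1) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability is exactly one half:
print(rasch_prob(0.0, 0.0))  # -> 0.5
# A person one logit above the item's difficulty:
print(round(rasch_prob(1.0, 0.0), 3))  # -> 0.731
```

Item calibration then amounts to estimating the `b` parameters (and person `theta` values) from observed response patterns; specialized software is used for that step in practice.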
Linear Models
Linear models are prevalent in psychological research for testing hypotheses, estimating population parameters, and making statistical predictions. Even nonlinear effects are often modeled within the linear framework, using polynomial regression terms or generalized linear models. We investigate the use of such models for modeling psychological, behavioral, and performance data.
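As an illustration of the point about nonlinear effects, a quadratic trend can be estimated with an ordinary linear model simply by adding a squared predictor to the design matrix. A minimal sketch with simulated (hypothetical) data:

```python
import numpy as np

# Simulate data from a known quadratic trend plus noise
# (true coefficients: intercept 1.0, linear 0.5, quadratic 2.0):
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
y = 1.0 + 0.5 * x + 2.0 * x**2 + rng.normal(0, 0.5, x.size)

# The model stays linear in its parameters; only the design matrix
# carries the nonlinearity (intercept, linear, and quadratic columns):
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # estimates should land near the true coefficients
```

The key point is that "linear model" refers to linearity in the coefficients, so curvilinear relationships remain within reach of ordinary least squares.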
Monte Carlo Analysis
Monte Carlo analysis uses data simulation to evaluate the behavior and performance of statistical tests and indices. We develop methods for conducting Monte Carlo analysis in SPSS and other statistical software, and we apply those methods to compare alternative formulas for statistical procedures and to evaluate how statistical tests perform under varied conditions, such as violations of their assumptions.
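As a small illustration of the general idea (a pure-Python sketch, not the lab's SPSS implementation): a Monte Carlo study can estimate the Type I error rate of a two-sample t test by simulating many pairs of samples from the same population and counting rejections at the nominal alpha level.

```python
import math
import random
import statistics

def t_stat(a, b):
    """Pooled-variance independent-samples t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

random.seed(42)
n_reps, n, crit = 2000, 30, 2.0017  # critical t for df = 58, two-tailed alpha = .05

# Both samples come from the same N(0, 1) population, so every
# rejection is a Type I error:
rejections = sum(
    abs(t_stat([random.gauss(0, 1) for _ in range(n)],
               [random.gauss(0, 1) for _ in range(n)])) > crit
    for _ in range(n_reps)
)
print(rejections / n_reps)  # empirical rate, expected near the nominal .05
```

Varying the population shape or the two sample sizes in this loop is exactly how assumption violations (e.g., non-normality, unequal variances) are studied in this kind of work.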