About HurtzLab


HurtzLab encompasses the basic and applied research activities of Dr. Greg Hurtz, in collaboration with his students and colleagues. Dr. Hurtz is currently a Professor of Psychology at California State University, Sacramento in the industrial-organizational psychology program, and also serves as an expert consultant in the areas of industrial psychology and quantitative psychology described below. 

Dr. Hurtz and his graduate students can work as independent freelance consultants, or through a grant or contract (e.g., an interagency agreement for State of California agencies) established through the Research Administration and Contract Administration division at Sacramento State. Interested parties should contact Dr. Hurtz at Greg@HurtzLab.com.


Areas of expertise for HurtzLab

Industrial Psychology


In this area of research we focus primarily on the development and use of psychological measures in the context of employee selection and training. Recent and current projects fall into the following topic areas:


Cognitive Ability Testing
Cognitive ability tests are consistently strong predictors of job performance across a variety of jobs. We carry out research developing measures of different cognitive abilities following taxonomies such as the CHC and O*NET models. We have developed and pilot tested some of these tests with undergraduate research participants, and others in police academies throughout the State of California. We evaluate the dimensionality of the tests and calibrate items using both Rasch and item response theory models.
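At the heart of this calibration work is the dichotomous Rasch model, in which the probability of a correct response depends only on the difference between person ability and item difficulty. The sketch below is a generic illustration of that formula (the function name and parameter values are ours, not drawn from any HurtzLab instrument):

```python
import math

def rasch_probability(theta, b):
    """Probability that a person with ability theta answers an item of
    difficulty b correctly, under the dichotomous Rasch model:
    P(X = 1) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability matches difficulty, the probability of success is .50;
# it rises as ability exceeds difficulty and falls as it lags behind.
p_match = rasch_probability(0.0, 0.0)   # ability equals difficulty
p_above = rasch_probability(1.5, 0.0)   # ability well above difficulty
```

Calibration estimates the item difficulties (and person abilities) that make this model best reproduce the observed response patterns.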

Job Knowledge/Skill Testing
Job knowledge and skill tests measure specific knowledge and skills required for particular jobs or occupations, based on work or practice analysis findings for the job or occupation in question. Our work includes the development and analysis of licensure and certification tests as well as more targeted employee selection tests. In addition, we carry out research into setting performance standards (cutoff scores) on the tests for use in decision-making. We apply classical test theory methods as well as Rasch and item response theory models in the analysis of these tests.
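Two staples of the classical test theory analyses mentioned above are item difficulty (the proportion answering correctly) and item-total discrimination (how well an item separates high from low scorers). A minimal, generic sketch with made-up 0/1 scored data (not from any actual HurtzLab test):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def item_analysis(responses):
    """Classical item analysis for dichotomously (0/1) scored data.
    responses: one row per examinee, one column per item.
    Returns (difficulty, corrected item-total correlation) per item."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    stats = []
    for i in range(n_items):
        scores = [row[i] for row in responses]
        p = sum(scores) / len(scores)              # proportion correct
        rest = [t - s for t, s in zip(totals, scores)]  # total minus this item
        stats.append((p, pearson(scores, rest)))
    return stats

# Hypothetical responses from four examinees on three items:
data = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]]
stats = item_analysis(data)
```

Statistics like these, alongside Rasch and IRT estimates, inform item revision and standard-setting decisions.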

Training Evaluation Research
Training involves the development of work-related knowledge and skills among new hires or an existing workforce. We apply our expertise in measurement to develop tests of knowledge and skills acquired during training, as measures of training success. We also apply our expertise in experimental and quasi-experimental design to plan training evaluation research studies and evaluate the results of such studies. In this respect we capitalize on the full "research trinity" of design, measurement, and analysis.

Quantitative Psychology


In this area of research we focus on exploring and evaluating measurement models and statistical models that are used in psychological research and practice. Recent and current projects fall into the following topic areas:


Psychometric Theory
Measurement is crucial to any science, and psychological science is no exception. We research measurement methods for psychological constructs and models for evaluating psychological measures and scaling people. We research and apply Rasch measurement and item response theory, including methods for testing their assumptions and developing high quality psychological measures.

Linear Models
Linear models are prevalent in psychological research for testing hypotheses, estimating population parameters, and making statistical predictions. Even nonlinear effects are often modeled with linear regression using polynomial terms or with generalized linear models. We investigate the use of such models for modeling psychological, behavioral, and performance data.
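The point that nonlinear effects can be captured by a model that is linear in its parameters is easy to demonstrate: a quadratic trend is fit by ordinary least squares once a polynomial term is added. The sketch below uses made-up, noise-free data purely for illustration:

```python
import numpy as np

# Hypothetical quadratic relationship: y = 2 + 0.5*x - 0.3*x^2
x = np.linspace(-3, 3, 25)
y = 2 + 0.5 * x - 0.3 * x ** 2

# Fitting a degree-2 polynomial by least squares: the relationship is
# nonlinear in x, but the model is linear in the coefficients, so
# ordinary linear-model machinery recovers them exactly here.
coefs = np.polyfit(x, y, deg=2)  # returns [b2, b1, b0]
```

With real, noisy data the same machinery yields estimates and standard errors for each polynomial term, which is what makes these models so flexible in practice.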

Monte Carlo Analysis
Monte Carlo analysis involves the use of data simulation to evaluate the behavior and performance of statistical tests and indices. We develop methods for conducting Monte Carlo analysis in SPSS and other statistical software, and apply those methods to compare alternative formulas for statistical procedures and to evaluate the performance of statistical tests under varied conditions such as assumption violations.
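The basic logic of such a simulation can be sketched in a few lines. The example below (in Python rather than SPSS, as a generic illustration) checks whether the one-sample t test rejects a true null hypothesis at roughly its nominal .05 rate when its normality assumption actually holds; swapping in a skewed generating distribution is how one would study assumption violations:

```python
import random
import statistics

def t_statistic(sample, mu0=0.0):
    """One-sample t statistic against the null mean mu0."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample SD (n - 1 denominator)
    return (m - mu0) / (s / n ** 0.5)

def type_i_error_rate(n=30, reps=2000, crit=2.045, seed=1):
    """Simulate the t test under a true null (normal data, mean 0) and
    count how often |t| exceeds the two-tailed critical value.
    crit = 2.045 is the .975 quantile of t with df = 29, so the
    empirical rejection rate should land near the nominal .05."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if abs(t_statistic(sample)) > crit:
            rejections += 1
    return rejections / reps

rate = type_i_error_rate()
```

Comparing the empirical rejection rate to the nominal alpha, across generating distributions and sample sizes, is the core of this style of research.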