Greetings!

My current academic interests center on topics that serve my business: job seeker-job fit, automatic item generation, and psychometrics for complex tests. If I were to pursue an academic career again, I would like to work more on model evaluation and research planning.

Job Seeker-Job Fit

I founded a business named MindAnalytica, established to match people with the jobs that suit them best. Accordingly, our goal is to develop standardized tests of psychological attributes such as intelligence, personality, and motivation. The resulting profiles are validated with rigorous statistical methodology, and the reports help job seekers express their authentic potential: they can attach the reports to their resumes and discuss them with recruiters during job interviews.

Our team is developing tests that measure the characteristics of job seekers, jobs, workgroups, and organizations. We are investigating which characteristics indicate "fit," and in what ways. Through these studies, we aim to build a test kit whose predictions are tailored to each type of job, team, and organization.

Model Evaluation

In structural equation modeling, analysts usually ask whether the hypothesized model fits the obtained data. Model fit can be addressed by the chi-square test of absolute fit. However, the chi-square test evaluates exact fit, so as sample size increases, even a minor deviation from the perfect-fitting model (such as an omitted error correlation of 0.2) leads to rejection. In practice, researchers wish to check whether the hypothesized model approximates the obtained data; trivial details (e.g., minor cross-factor loadings) should be ignored. Model fit indices, such as RMSEA, SRMR, CFI, TLI, and more than 30 others, were developed for this purpose. Researchers usually compare a fit index against a cutoff (e.g., RMSEA < .05) to decide whether the model approximately fits the data.
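As a minimal illustration, here is a sketch assuming the lavaan package for R (not mentioned above, but widely used for this purpose) and its bundled HolzingerSwineford1939 example data; it shows how the exact-fit chi-square test and the common fit indices are obtained:

    library(lavaan)

    # Classic three-factor confirmatory factor analysis model
    model <- '
      visual  =~ x1 + x2 + x3
      textual =~ x4 + x5 + x6
      speed   =~ x7 + x8 + x9
    '
    fit <- cfa(model, data = HolzingerSwineford1939)

    # Exact-fit chi-square test and common approximate-fit indices
    fitMeasures(fit, c("chisq", "df", "pvalue", "rmsea", "srmr", "cfi", "tli"))

    # One conventional (but debated) decision rule
    fitMeasures(fit, "rmsea") < .05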

However, no consensus has been established on which cutoffs researchers should use. I have therefore studied alternative methods, besides fixed fit-index cutoffs, for evaluating approximate fit. In my dissertation, I developed a unified approach to model fit evaluation consisting of two methods. First, simulation studies of models with trivial misspecifications are used to derive fit-index cutoffs tailored to the analysis at hand. Second, modification indices are used to calculate confidence intervals of expected parameter changes: a model is deemed to fit approximately when these confidence intervals stay within the thresholds that define trivial misspecification. I also investigated the performance of other alternative methods, such as Monte Carlo and Bayesian approaches. In addition, I develop the R packages simsem and semTools, one objective of which is to implement these alternative approaches to model fit evaluation. See the software page for further details.
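A rough sketch of both methods follows, assuming simsem's lavaan-syntax interface; the population values, sample size, and the 0.1 "trivial" band are illustrative choices, not recommended defaults:

    library(lavaan)
    library(simsem)

    # --- Method 1: simulation-based fit index cutoffs -----------------------
    # Population model containing one trivial misspecification: a small
    # residual covariance (0.1) that the analysis model below omits
    popModel <- '
      f1 =~ 0.7*y1 + 0.7*y2 + 0.7*y3
      f2 =~ 0.7*y4 + 0.7*y5 + 0.7*y6
      f1 ~~ 1*f1
      f2 ~~ 1*f2
      f1 ~~ 0.3*f2
      y1 ~~ 0.51*y1
      y2 ~~ 0.51*y2
      y3 ~~ 0.51*y3
      y4 ~~ 0.51*y4
      y5 ~~ 0.51*y5
      y6 ~~ 0.51*y6
      y1 ~~ 0.1*y4
    '
    analyzeModel <- '
      f1 =~ y1 + y2 + y3
      f2 =~ y4 + y5 + y6
    '
    out <- sim(nRep = 500, model = analyzeModel, n = 200,
               generate = popModel, lavaanfun = "cfa", seed = 123)
    getCutoff(out, alpha = 0.05)  # cutoffs tailored to this model and n

    # --- Method 2: confidence intervals of expected parameter changes ------
    # Fit the analysis model to one data set (simulated here for illustration)
    dat <- simulateData(popModel, sample.nobs = 200, seed = 456)
    fit <- cfa(analyzeModel, data = dat)

    # Each modification index satisfies mi = (epc / se)^2, so the standard
    # error of an expected parameter change is |epc| / sqrt(mi)
    ind <- modindices(fit)
    ind <- ind[!is.na(ind$mi) & ind$mi > 0, ]
    se <- abs(ind$epc) / sqrt(ind$mi)
    ind$ci.lower <- ind$epc - 1.96 * se
    ind$ci.upper <- ind$epc + 1.96 * se

    # Approximate fit requires every interval to stay inside the trivial
    # band, illustrated here as (-0.1, 0.1); rows below are the violations
    ind[ind$ci.lower < -0.1 | ind$ci.upper > 0.1,
        c("lhs", "op", "rhs", "epc", "ci.lower", "ci.upper")]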

Research Planning

I am interested in research planning before any data are collected, particularly sample size estimation and planned missing data designs. These methods help researchers save money and resources while still achieving desired properties, such as sufficient statistical power or adequate accuracy in parameter estimation. In my master's thesis, I developed the PAWS program for sample size estimation in cluster randomized designs, and I have contributed several functions to the MBESS package. The simsem package for R was also developed to help researchers plan their designs in structural equation modeling more easily. See the software page for further details.
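As a flavor of the two planning goals, here is a minimal sketch in base R and MBESS; the specific arguments are illustrative, and the MBESS function shown is not necessarily one of my contributions:

    # Per-group sample size for 80% power to detect a medium effect (d = 0.5)
    power.t.test(delta = 0.5, sd = 1, power = 0.80, sig.level = 0.05)

    # Accuracy in parameter estimation (AIPE): per-group sample size so that
    # the 95% confidence interval for a standardized mean difference of 0.5
    # is no wider than 0.3
    library(MBESS)
    ss.aipe.smd(delta = 0.5, conf.level = 0.95, width = 0.3)

    # For structural equation models, power across candidate sample sizes
    # can be estimated by simulation, e.g., with simsem's sim() and getPower()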

Miscellaneous

I have also studied Bayesian analysis, multilevel structural equation modeling, sampling theory, and the modeling of latent variable interactions in structural equation modeling.