I worked as a post-master researcher at the Center for Neuroscience Imaging Research, where I focused on developing models that predict the intensity of human pain from brain activity using functional magnetic resonance imaging (fMRI) and machine learning techniques.
Through a literature review and benchmark analysis, this study identified model targets and characteristics that should be considered when developing brain imaging-based pain prediction models. We found that the level of data averaging, the spatial scale of the brain, and the sample size significantly impacted model performance. These results highlight factors to consider when developing and evaluating brain imaging-based biomarkers and point to the need for more precise model development strategies.
This is a commentary paper on Hoeppli et al. (2022). We found that a multiple regression model combining brain imaging, sociodemographic, and psychological measures could predict individual differences in self-reported pain. We also identified brain regions associated with these differences using fMRI data, demonstrating and validating the predictability of individual pain experience.
This study identified brain regions whose pain processing is highly variable across individuals. We found that higher-level transmodal regions show greater variability across participants, whereas unimodal regions have more stable pain representations. These regional individual differences provide insights into pain diagnosis and treatment from a precision medicine perspective.
This study used capsaicin to stimulate nociception of the tongue and explored changes in functional brain networks over the course of this oral pain. We found that as pain decreased, orofacial regions integrated with subcortical and frontoparietal regions in the early stages and dissociated from them in the later stages. We also showed that machine learning models can effectively predict these pain changes. This study provides new insights into how dynamic interactions between brain systems organize and modulate the experience of pain.
Side projects
Automated heart sound segmentation: a deep learning web platform for cardiac diagnostics
Dong Hee Lee (Team Leader), Myungjun Lee, Junghyun Kim
2024.1.–2024.2. (7 weeks)
Code / Presentation (Korean)
The goal of this project was to develop a deep learning model for heart sound segmentation and a web app service to assist cardiac checkups. We built a heart sound (S1, S2) segmentation model based on U-Net++ using stethoscope sound data provided by the PhysioNet Challenge and deployed it as a web app service. We preprocessed the audio data, converted it into spectrogram images, and then trained the deep learning models and evaluated their segmentation performance.
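As a minimal sketch of this pipeline (not the project's actual code), the snippet below shows the audio-to-spectrogram preprocessing and one U-Net++ training step. The use of librosa, the segmentation_models_pytorch implementation of U-Net++, the resnet18 encoder, the sampling rate and spectrogram hyperparameters, and the background/S1/S2 label layout are all illustrative assumptions; only the overall approach (spectrogram images fed to a U-Net++ segmentation model trained on PhysioNet stethoscope data) comes from the project description.

```python
# Illustrative sketch of the heart sound segmentation pipeline (assumptions noted above).
import librosa
import numpy as np
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

def wav_to_logmel(path, sr=2000, n_mels=64, n_fft=256, hop_length=64):
    """Load a stethoscope recording and convert it to a normalized log-mel spectrogram image."""
    audio, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    logmel = librosa.power_to_db(mel, ref=np.max)
    # Scale to [0, 1] so the network sees a consistent input range.
    logmel = (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-8)
    return logmel.astype(np.float32)  # shape: (n_mels, n_frames)

# U-Net++ with a single-channel spectrogram input and 3 output classes
# (background, S1, S2). Encoder and class layout are assumptions, not the project's settings.
model = smp.UnetPlusPlus(encoder_name="resnet18", encoder_weights=None, in_channels=1, classes=3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(spectrogram, mask):
    """One optimization step on a (n_mels, n_frames) spectrogram and its pixel-wise class mask.

    Inputs are assumed to be padded or cropped so both dimensions are divisible by 32,
    as required by the encoder's downsampling path.
    """
    x = torch.from_numpy(spectrogram)[None, None]  # (batch, channel, H, W)
    y = torch.from_numpy(mask).long()[None]        # (batch, H, W) class indices
    optimizer.zero_grad()
    logits = model(x)                              # (batch, 3, H, W)
    loss = criterion(logits, y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Predicted per-pixel classes can then be collapsed along the frequency axis to recover frame-level S1/S2 segments, which is one simple way to map the image segmentation output back onto the audio timeline.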