2024 - Present
Pancreatic cancer has a five-year survival rate below 11%, and only 9.7% of cases are detected early because of the disease's subtle presentation in imaging. Pancreatic carcinoma typically appears as a poorly enhancing, hypodense mass in the arterial phase, making accurate segmentation and early diagnosis challenging. Traditional radiological methods rely on expert interpretation, which is time-intensive, subject to variability, and often limited in availability. This research provides an alternate approach to pancreatic cancer segmentation using a U-Net, a convolutional neural network (CNN) architecture. Leveraging publicly available data from Memorial Sloan Kettering Cancer Center, consisting of 420 fully annotated 3D volumes, the model learned to segment pancreatic cancer from a CT scan alone. The model consists of 34 layers and 1,940,817 trainable parameters and was trained on 282 3D CT volumes capturing diverse tumor morphologies. Eight trials were performed to fine-tune hyperparameters, including learning rates, batch sizes, and augmentation strategies, for optimal performance. A Hounsfield filter with a -50 to 200 HU range was used to increase accuracy. Multi-stage training with RAdam and batch normalization improved convergence and accuracy by stabilizing learning and enhancing generalization. The final model slightly outperformed trained radiologists in segmentation accuracy and significantly exceeded them in segmentation speed. This model has the potential to increase the five-year survival rate, improve accessibility, and reduce variability in the diagnosis and treatment of pancreatic cancer. Future work will focus on integrating multi-modal imaging techniques and optimizing real-time clinical deployment to further improve early detection.
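The Hounsfield filtering step mentioned above can be sketched as a simple clip-and-normalize preprocessing pass. This is a minimal sketch, not the project's actual pipeline: the function name `window_hu` and the normalization to [0, 1] are my assumptions; only the -50 to 200 HU range comes from the text.

```python
import numpy as np

def window_hu(volume, lo=-50.0, hi=200.0):
    """Clip a CT volume to a Hounsfield window and rescale to [0, 1].

    Values below `lo` (e.g. air at -1000 HU) saturate to 0 and values
    above `hi` (e.g. dense bone) saturate to 1, so the remaining dynamic
    range is spent on the soft-tissue contrast the model cares about.
    (Hypothetical helper; the range -50..200 HU is from the project.)
    """
    clipped = np.clip(np.asarray(volume, dtype=np.float64), lo, hi)
    return (clipped - lo) / (hi - lo)

# Tiny fake slice: air (-1000 HU), soft tissue (40, 120 HU), bone (700 HU)
slice_hu = np.array([[-1000.0, 40.0],
                     [120.0, 700.0]])
windowed = window_hu(slice_hu)
# air -> 0.0, bone -> 1.0, soft tissue lands in between
```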
That was a lot so here is the TLDR:
Pancreatic cancer is really bad. A large part of the reason is that it usually goes undetected until it's too late. This project greatly automates detection of the cancer, reducing the time and money it takes to get regular pancreas scans.
The heart of this project is a U-Net model. Using this architecture (which is really simple if you look into it), I was able to make a program that takes in 128x128 images and outputs 128x128 segmentation masks. Medical segmentation accuracy (and maybe other things tbh) is measured with the DICE coefficient, which is just the amount of overlap between two blobs. My model got a DICE of 0.7451, better than most radiologists and most programs trained on CT scans.
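That "amount of overlap between two blobs" idea has a standard formula: Dice = 2|A∩B| / (|A| + |B|). Here is a minimal NumPy sketch of it on binary masks; the function name and the epsilon term (to avoid dividing by zero on empty masks) are my own choices, not from the project.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A & B| / (|A| + |B|).

    1.0 means perfect overlap, 0.0 means none. `eps` guards against
    division by zero when both masks are empty (an assumption, not
    necessarily how the project handled that edge case).
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example: two 2x3 masks that agree on 2 of their foreground pixels
a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
score = dice(a, b)  # 2*2 / (3 + 3) = 0.666...
```

A mask compared with itself scores (essentially) 1.0, which is a quick sanity check when wiring this into an evaluation loop.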
People who thought this was cool for some reason:
Good Reads:
NOW
I am currently looking into advanced IC packaging technologies and using chiplets to improve yield, performance, all that good stuff. I am also working at the UVM INTERACT lab and will hopefully have some cool creepy crawly robots soon.