I wanted to take some time to look into a brief history of medical image segmentation before moving on to what I consider the more modern method of segmentation. (Be warned, the video is rather long.) First, to be clear, the goal of segmentation is to separate the bones or anatomy of interest from 3D scan data. This is easiest when there is a sharp contrast between the anatomy of interest and what surrounds it. If you have a CT scan of an engine block this is pretty straightforward; the density contrast of metal to air is hard to beat. But for anatomy, and especially MRI scans, it is a whole other story. Anatomical boundaries are often gradients rather than sharp edges. Over the years there have been many approaches to make the process of segmenting anatomy faster, easier, and less subjective than the dreaded 'manual segmentation'.

When I first started working with medical images back around 2003, the group I was with was trying an alternative to their previous method, which involved using ImageJ to separate each bone of the foot by applying a threshold and then going in and 'fixing' the result by painting. Segmenting the bones of one foot took around 10 hours of tedious labor... fortunately that was before my time. I was tasked with figuring out how to get 3DViewNix to work. It was basically a research program that ran on Linux (which I hadn't used before). It had a special algorithm called 'live-wire' which let you click a few points around the edge of a bone on each slice to get a closed contour matching the bone edge, then repeat that for each scan slice for each bone. This got us down to about 3 hours per foot of still rather mind-numbing effort. After a while, a radiologist working with a PhD student in electrical engineering let us know that there were much better ways. The student had software written in IDL that let us place 'seeds' in each bone that would then grow out in 3D to the edges of the bones.
After some time getting set up, we were able to segment a foot in less than an hour, with a good portion of that being computer time. My background is as an ME, so I don't pretend to fully understand the image processing algorithms, but I have used them in various forms. This year I got more familiar with 3DSlicer, which I have found to be the best open source medical imaging program yet. It is built on VTK and ITK and has a very nice GUI that makes seeding far more convenient (other programs I've used didn't really allow zooming). It took me a while to find something similar to what I had used before, but eventually I found that the extension 'FastGrowCut' gives very good results, enough to move away from the special software I had been using before that wasn't free.

My basic explanation of 'FastGrowCut' and similar region growing algorithms is: you start with 'seeds', which are labeled voxels for the different anatomy of interest. The algorithm then grows the seeds until it reaches the edge of the bone/anatomy or a different growing seed. There is then a back and forth until it stabilizes on what the edge really is. The result is a 'label' file in which every voxel is labeled as background or as one of the entities of interest. Once everything is segmented to the level that you like, I prefer to do volumetric smoothing of each entity (bone) before creating the surface models.

These algorithms are an active area of research, typically in image processing groups within university electrical engineering departments. They are not a silver bullet that works in all situations; there are a variety of other methods (some available as extensions to 3DSlicer) for specific situations. Thin features, long tubular features, noisy data (metal artifacts), and low quality scans (scouts) will still take more time and effort to get good results. No algorithm can take a low resolution, low quality scan and give you a nice model... garbage in = garbage out.
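To make the seed-growing idea concrete, here is a minimal sketch in Python/NumPy. This is not the actual FastGrowCut algorithm from 3DSlicer (which uses a more sophisticated competitive fast-marching growth); it is just a toy illustration of the basic concept: labeled seed voxels expand to neighboring voxels of similar intensity, and a label volume comes out. The function name, the 6-connected neighborhood, and the intensity tolerance are all choices made for this example.

```python
from collections import deque
import numpy as np

def grow_seeds(volume, seeds, tol=100):
    """Toy multi-label region growing (illustration only, not FastGrowCut).

    volume : 3D integer array of voxel intensities.
    seeds  : dict mapping (z, y, x) voxel coordinates -> label number (1, 2, ...).
    tol    : a neighbor joins a region if its intensity is within `tol`
             of that region's seed intensity.

    Returns a label volume the same shape as `volume`, with 0 = background.
    """
    labels = np.zeros(volume.shape, dtype=np.int32)
    queue = deque()
    for (z, y, x), lab in seeds.items():
        labels[z, y, x] = lab
        # Carry the seed's intensity along as the reference for its region.
        queue.append((z, y, x, int(volume[z, y, x])))

    # 6-connected neighbors in 3D.
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x, ref = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and labels[nz, ny, nx] == 0            # not yet claimed
                    and abs(int(volume[nz, ny, nx]) - ref) <= tol):
                labels[nz, ny, nx] = labels[z, y, x]       # claim for this region
                queue.append((nz, ny, nx, ref))
    return labels
```

With two bright "bones" in a synthetic volume and one seed placed in each, the two regions grow out to their edges and stop at the dark background between them, which is what the seeded IDL tool and FastGrowCut do on real scans (with far better edge handling).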
Now, I have been surprised, and a bit dismayed, to find thresholding still used as a common segmentation technique, often as the main tool even in expensive commercial programs. That style typically involves applying a threshold and then going in and cleaning up the model until you get something close to what you want. To me this seems rather antiquated, but for quickly viewing data or creating a quick and rough model it really can't be beat... for creating high-quality models to be printed, though, there are better ways.
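For comparison, thresholding itself is a one-liner, which is exactly why it is both so popular and so limited: every voxel above a cutoff is kept, regardless of whether it belongs to the bone you care about. The tiny volume and the cutoff value below are made up for illustration; real bone thresholds vary by scanner and protocol.

```python
import numpy as np

# A made-up 3D volume in CT-like intensity units (values are illustrative only).
volume = np.array([[[-1000, -1000, -1000],
                    [-1000,   400, -1000],
                    [ -500,   700,   300]]])

CUTOFF = 250  # hypothetical cutoff; not a recommended clinical value
mask = volume >= CUTOFF  # boolean mask: True wherever intensity clears the cutoff
```

The resulting mask is all-or-nothing: it cannot tell two touching bones apart and happily includes any bright voxel anywhere in the scan, which is why the threshold-then-clean-up workflow needs so much manual painting afterward.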