
About this blog

A blog focused on medical imaging technology and 3D printing. My interests are in using medical imaging data to create anatomical models for instruction, surgical planning, prostheses, and other related projects.



Entries in this blog


So I have seen some questions here on embodi3D asking how to work with MRI data.  I believe the main issue is attempting to segment the data using a threshold method.  The democratiz3D feature of the website simplifies the segmentation process, but as far as I can tell it relies on thresholding, which can work reasonably well for CT scans but is almost certain to fail for MRI.  Using 3DSlicer, I show the advantage of a region growing method (FastGrowCut) over thresholding.


The scan I am using is of a middle-aged woman's foot, available here



The scan was optimized for segmenting bone and was performed on a 1.5T scanner.  A patient doesn't really have control over scan settings, but if you are a physician or researcher who does, picking the right settings is critical.  Some of these different settings can be found in one of Dr. Mike's blog entries.


For comparison purposes, I first show the kind of results achievable when segmenting an MRI using thresholds.



Since the goal is to separate out the bones, the result is obviously pretty worthless.  Getting the bones out of that resultant clump would take a ridiculous amount of effort in Blender or similar software:
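To make the failure mode concrete, here is a toy sketch (in Python with NumPy, with made-up intensity values) of what a bare threshold does:

```python
import numpy as np

# Synthetic "scan": a tiny 3D array of intensities (values invented for illustration).
volume = np.zeros((4, 4, 4))
volume[1:3, 1:3, 1:3] = 100.0   # a bright blob standing in for bone
volume[0, 0, 0] = 100.0         # an unrelated bright voxel (noise or another tissue)

# Thresholding keeps every voxel above a cutoff, with no regard for anatomy:
threshold = 50.0
mask = volume > threshold

# Every bright voxel is selected, connected to the target or not --
# which is why MRI, where intensities overlap between tissues, comes out
# as one clump.
print(int(mask.sum()))  # 9 voxels: the 8-voxel blob plus the stray voxel
```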



As I explained in a previous blog entry of mine on region growing, I really don't like using thresholding for segmenting anatomy.  Once again, a region growing method (FastGrowCut in this case) gives decent results even from an MRI scan.




Now this was a relatively quick and rough segmentation of just the hindfoot, but it is already much closer to having bones that could be printed.  A further step of label map smoothing can improve the rough results.



The above shows just the calcaneus volume smoothed, with its associated surface generated.  I had done a more thorough segmentation of this foot in the past, where I spent more time to get the result below.



If the volume above is smoothed (in my case I used some of my MATLAB code) I get the result below.
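The MATLAB code itself isn't reproduced here, but as a rough stand-in, here is a toy Laplacian smoothing pass in Python (the vertices and connectivity are invented for illustration; this is not my actual smoothing code):

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, iterations=10, lam=0.5):
    """Move each vertex a fraction lam toward the average of its neighbors.

    vertices:  (N, 3) array of vertex positions
    neighbors: list of index lists, neighbors[i] = vertices adjacent to i
    """
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        # Compute all averages from the old positions, then update at once.
        avg = np.array([v[nbrs].mean(axis=0) for nbrs in neighbors])
        v += lam * (avg - v)
    return v

# Tiny example: three points with a bump in the middle; one smoothing pass
# pulls the middle vertex halfway toward the average of its two neighbors.
verts = np.array([[0.0, 0.0, 0.0], [0.5, 1.0, 0.0], [1.0, 0.0, 0.0]])
nbrs = [[1], [0, 2], [1]]
smoothed = laplacian_smooth(verts, nbrs, iterations=1)
```

On a real bone surface the same idea, run over thousands of vertices, knocks the voxel "staircase" off the mesh.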



The result looks much better.  Segmenting a CT scan will still give better results for bone, since cortical bone doesn't show up well in MRIs (which is why the metatarsals and phalanges get a bit skinny), but CT scans are not always an option.


So if you have been trying to segment an MRI scan and only get a messy clump, I would encourage you to try a method a bit more modern than thresholding.  However, keep in mind there are limits to what can be done with bad data.  If the image is really noisy, has large voxels, or is optimized for the wrong type of anatomy, there may be no way to get the results you want.


3D Printed Wrist Brace

So I began to develop some pain in my right wrist, which was later diagnosed as tendinitis. At the same time, I had been looking at the CT scan of my abdomen and noticed it also captured my right hand, as it was resting on my stomach during the scan (I had injured my right shoulder again).



I recalled a concept project I had seen a while back: the CORTEX brace. It presented the idea of replacing the typical plaster cast with a 3D printed one, which would prevent the issues of sweating and itchiness… as well as being much more stylish (though not allowing people to sign your cast).

I had initially wanted to apply this to prosthetic sockets but never got past the idea stage. Looking around for how to create the 'webbing' style, I found that meshmixer had the necessary capabilities. So I now had all the tools needed to make my own brace to partially immobilize my wrist.


Once the surface model is created and loaded into meshmixer, the first step is to cut off any anatomy that you don't want in the model using 'plane cut'.



Once the general shape of the brace is created, the next step is to consider how the brace will be taken on and off.  For my design I wanted a single piece flexible enough to slide my wrist into.  To create the 'slot' I ended up doing a boolean in Blender, as meshmixer would crash when I tried to create the slot there.




With the brace model and slot in place, the next step was to offset the surface, since creating the Voronoi mesh generates tubes on both sides of the surface.  This is done back in meshmixer and is fairly computationally intensive, so partially reducing the mesh density first is a good idea.




The next step is to further decimate the mesh to get the desired Voronoi mesh pattern.  This takes a bit of playing around to get the desired style.  Too dense, and the resulting web structure will not have many openings, making it stronger but not as breathable.  Too rough, and the model may not conform to the surface well, causing pressure points.




The final step is to create the web-like structure from the reduced mesh using the 'make pattern' feature within meshmixer.  There are various settings to be applied within this feature, but setting 'Dual Edges' and then adjusting the pipe size to double your offset will make the inner edge of the webbing just touch the skin of the initial model.




Having never made a brace/cast before, it took me a few iterations to get a design I could easily don and doff (put on and take off). I also found that I could make a brace that held my wrist very rigidly, but it would have been too restrictive.




Material selection also became important.  Initially I used ABS, which is more flexible than PLA, and I had it in a nice pink skin color. It turned out to be too rigid for the style I was designing.  I found PETT (Taulman T-Glase) to work well, as its lower modulus of elasticity made it more flexible than ABS.



After using the brace on and off for a few weeks, I have found that it fits well and is surprisingly comfortable. I have taken a shower with it on as well as slept with it on.  It doesn't seem to smell as bad as the cheap and common cloth-type braces.  The main downsides are that taking it on and off is still a bit challenging, and that it is more restrictive of my motion, as it behaves somewhere between a brace and a cast. There is definitely a great deal of potential for this type of cast, though widespread adoption would require further technical development to simplify the process.


I wanted to take some time to look into a brief history of medical image segmentation before moving on to what I consider the more modern method of segmentation.  (Be warned: the video is rather long.)



First, to be clear, the goal of segmentation is to separate the bones or anatomy of interest from 3D scan data.  This is easiest when there is a sharp contrast between the anatomy of interest and what surrounds it.  If you have a CT scan of an engine block this is pretty straightforward; the density contrast of metal to air is hard to beat.  But for anatomy, and especially for MRI scans, it is a whole other story.  Anatomical boundaries are often gradients rather than sharp edges.


Over the years there have been many approaches to make the process of segmenting anatomy faster, easier, and less subjective than the dreaded 'manual segmentation'.  When I first started working with medical images back around 2003, the group I was with was trying an alternative to their previous method, which involved using ImageJ to separate each bone of the foot by applying a threshold and then going in and 'fixing' the result by painting...  Segmenting the bones of one foot would take around 10 hours of tedious labor... fortunately that was before my time.

I was tasked with figuring out how to get 3DViewNix to work.  It was basically a research project that ran on Linux (which I hadn't used before).  It had a special algorithm called 'live-wire' which allowed clicking a few points around the edge of a bone on each slice to get a closed contour matching the bone edge, then doing that for each scan slice for each bone.  This brought it down to about 3 hours per foot of still rather mind-numbing effort.




After a while, a radiologist let us know that there were much better ways; his PhD student in electrical engineering had some software written in IDL that allowed placing 'seeds' in each bone, which would then grow out in 3D to the edges of the bones.  After some time to get set up, we were able to segment a foot in less than an hour, with a good portion of that being computer time.


My background is in mechanical engineering, so I don't pretend to fully understand the image processing algorithms, but I have used them in various forms.  This year I got more familiar with 3DSlicer, which I have found to be the best open source medical imaging program yet.  It is built on VTK and ITK and has a very nice GUI, making seeding far more convenient (other programs I've used didn't really allow zooming).  It took me a while to find something similar to what I had used before, but eventually I found that the 'FastGrowCut' extension gives very good results, good enough to move away from the special, non-free software I had been using.


My basic explanation of 'FastGrowCut' and similar region growing algorithms is this: you start with 'seeds', which are labeled voxels for the different anatomy of interest.



The algorithm then grows the seeds until they reach the edge of the bone/anatomy or a different growing seed.  There is then a back and forth until it stabilizes on where the edge really is.  The result is a 'label' file which has every voxel labeled as background or as one of the entities of interest.
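As a toy illustration of the idea (a simplified, first-come-first-served version in Python, not the actual FastGrowCut algorithm, which resolves contested voxels iteratively):

```python
from collections import deque
import numpy as np

def grow_seeds(image, seeds, tol=10.0):
    """Toy competitive region growing on a 2D image.

    seeds: {(row, col): label}.  Each seed front claims neighboring pixels
    whose intensity is close to the pixel it is growing from, breadth-first,
    until the fronts meet or hit an intensity edge.
    """
    labels = np.zeros(image.shape, dtype=int)   # 0 = background/unclaimed
    queue = deque()
    for (r, c), lab in seeds.items():
        labels[r, c] = lab
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and labels[nr, nc] == 0
                    and abs(image[nr, nc] - image[r, c]) <= tol):
                labels[nr, nc] = labels[r, c]   # claimed by this growing seed
                queue.append((nr, nc))
    return labels

# Two "bones" (bright regions) separated by a dark gap, one seed in each:
img = np.array([[100, 100, 0, 90, 90],
                [100, 100, 0, 90, 90]], dtype=float)
out = grow_seeds(img, {(0, 0): 1, (0, 4): 2})
# Each label fills its own bright region; the dark gap stays background.
```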




Once everything is segmented to the level you like, I prefer to do volumetric smoothing of each entity (bone) before creating the surface models.
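As a rough sketch of what volumetric (label map) smoothing does (Slicer's actual smoothing filters are more sophisticated than this), here is a toy majority-vote pass on one binary label:

```python
import numpy as np

def smooth_binary_label(mask, rounds=1):
    """Toy volumetric smoothing of one binary label: each pass replaces a
    voxel by the majority vote of itself and its 6 face neighbors, knocking
    off single-voxel spikes and filling single-voxel holes.
    (np.roll wraps at the borders, which is fine for this padded toy volume.)"""
    m = mask.astype(int)
    for _ in range(rounds):
        votes = m.copy()
        for axis in range(m.ndim):
            votes += np.roll(m, 1, axis) + np.roll(m, -1, axis)
        m = (votes >= 4).astype(int)   # majority of 7 voters (self + 6 neighbors)
    return m

# A solid block with one stray voxel attached: smoothing removes the spike
# but leaves the block itself intact.
label = np.zeros((5, 5, 5), dtype=int)
label[1:4, 1:4, 1:4] = 1
label[0, 2, 2] = 1                     # single-voxel spike
smoothed = smooth_binary_label(label)
```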




These algorithms are an active area of research, typically in image processing groups within university electrical engineering departments.  They are not a silver bullet that works in all situations; there are a variety of other methods (some available as extensions to 3DSlicer) for specific cases.  Thin features, long tubular features, noisy data (metal artifacts), and low quality scans (scouts) will still take more time and effort to get good results.  No algorithm can take a low resolution, low quality scan and give you a nice model... garbage in = garbage out.


Now I have been surprised and dismayed to find thresholding used as a common segmentation technique, often as the main tool even in expensive commercial programs.  That style typically involves applying a threshold and then going in and cleaning up the model until you get something close to what you want.  To me this seems rather antiquated; for quickly viewing data or creating a quick and rough model it really can't be beat... but for creating high quality models to be printed there are better ways.


In this entry we look at registering one scan to another from the same subject, pre- and post-op.



There may come a time when you have multiple scans of the same subject which you want to compare to each other.  This could be a CT and an MRI, or a pre-op and post-op scan as in this example.  Since the scans were taken at different times, and possibly at different places, they will not line up with each other when they are loaded.  Registration is the process of finding the transformation that moves one volume to line up with the other.


The first step after loading the data is to perform an initial alignment.  If the two volumes are far apart, the difference will likely be too much for the registration algorithm to work properly.  The initial alignment is done using the 'Transforms' module within 3DSlicer.  After creating a new transform, pick which volume you will be moving, as the other one (the fixed image) will stay stationary through the whole process.  Now adjust the 3 translation and 3 rotation sliders until you get a decent alignment by eye.  It can help to center each volume first if they have a large origin offset.  Also, changing the way one of the volumes is colored can make visual alignment easier.
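Under the hood, those six sliders define a rigid transform.  A sketch in Python of how the six numbers combine into a single 4x4 matrix (the rotation order used here is an assumption for illustration; Slicer's internal convention may differ):

```python
import numpy as np

def rigid_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid transform from three rotation angles (radians)
    and a translation: the same six numbers the transform sliders set."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx       # rotation order is an assumption
    T[:3, 3] = [tx, ty, tz]
    return T

# Move a point with a 90-degree rotation about z plus a 10 mm translation:
T = rigid_transform(0, 0, np.pi / 2, 10, 0, 0)
p = T @ np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous coordinates
# p is now approximately (10, 1, 0)
```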


With the two volumes roughly lined up, find the BRAINS registration within the Registration group in the main menu.  Before performing the registration, set the:

  • Fixed Image
  • Moving Image
  • Output Image Volume
  • Initialization Transform
  • Registration Phases, set to Rigid (6 Degrees Of Freedom).  


When you click 'Apply' the registration will run until it finds the best match between the scans.  Registration quality is typically measured in terms of 'mutual information', which is basically a measure of how much the intensities of one volume tell you about the intensities of the other; it peaks when the two volumes are well aligned.
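A sketch in Python of how mutual information can be computed from a joint intensity histogram (a simplified illustration, not BRAINS's actual implementation):

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information between two images from their joint histogram:
    MI = sum over bins of p(x,y) * log( p(x,y) / (p(x) * p(y)) ).
    High MI means an intensity in one scan strongly predicts the intensity
    at the same location in the other, i.e. the scans are well aligned."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of image b
    nz = pxy > 0                           # skip empty bins to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noise = rng.random((32, 32))
# An image shares far more information with itself than with unrelated noise:
mi_self = mutual_information(img, img)
mi_noise = mutual_information(img, noise)
```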


Full volume rigid registration will not work for all situations, such as two scans of a foot flexed in different ways.  Rigid registration works best when the two scans are from the same person and the volume in question doesn't change shape (such as head scans).  Other types of registration allow the 'moving' volume to be distorted until it matches the 'fixed' volume.  This can range from simple affine (scaling) all the way to template matching (warping).


Find the scans used at:





And of course

3DSlicer - https://www.slicer.org/


Dicom Primer

In this tutorial I will cover some of the basics of working with DICOM data, with a focus on anonymizing it and reading it into medical imaging software, as well as how to potentially fix problematic scans.



So first of all, what is DICOM data?  It is the standard file type for basically all medical imaging devices (CT, MRI, US, PET, X-ray, etc.).  DICOM stands for Digital Imaging and Communications in Medicine, and along with the file format and the tags, it is designed to be transferred and stored with PACS.  The DICOM standard can be found at their homepage.


The bits useful for creating anatomical models, particularly the values that define the volume geometry, can be found in 'tags'.  These live in the header (metadata) of each image/slice.  Each tag is a pair of 4-digit hexadecimal values assigned to a particular type of value, like:


(0018, 0088) Spacing Between Slices


To find the official library of these tags, go to the standard on the DICOM home page and go down to "Part 6: Data Dictionary".  When opened, scrolling down will reveal just how immense the DICOM standard is.  Now, this library just gives you the tag and the name but not much information about the tag itself.  To get a bit more of a description, use Dicom Lookup and type in the tag or name to find more information.


Before looking at data, a word on anonymizing it.  The goal is to remove any information that can be traced back to the original person, without removing other important information like modality, etc.  For an official list of these values, go to HIPAA and find their de-identification guidance document.  In general (pages 7 and 8), remove all names, dates, addresses, times, and other sensitive information like SSNs.


Now, to actually look at the data, I have for years used ImageJ, which has been updated to Fiji.  Open an image from the scan CD and press 'CTRL+I' to open the header and see what is in there.




Fiji (ImageJ) is a very simple and useful program for looking at data.  It is mostly made for working in 2D, so in that way it is kind of outdated compared to modern medical imaging software like 3DSlicer, but it still has its place.  Fiji can save a stack of images as an NRRD file, so if for some reason 3DSlicer doesn't want to load a scan correctly, Fiji gives you another option.


As useful as Fiji is, for anonymizing and changing the values of tags I would suggest Dicom Browser.  I personally use some code in MATLAB to automate the process, but that is an expensive and cumbersome tool for the average user.  Open the folder with the data in Dicom Browser; when the main folder is selected, the values from each slice are stacked on top of each other.  To anonymize the data, select a value and set it to 'clear'.
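To illustrate the idea, here is a toy sketch in Python that treats a header as a plain dictionary of tag/value pairs (real DICOM files need a proper tool like Dicom Browser or a library such as pydicom; the tag list below is just a small sample of the PHI tags, not the full de-identification list):

```python
# A few well-known patient-identifying tags (group, element), as examples:
PHI_TAGS = {
    (0x0010, 0x0010): "PatientName",
    (0x0010, 0x0020): "PatientID",
    (0x0010, 0x0030): "PatientBirthDate",
}

def anonymize(header):
    """Clear PHI tags but keep everything else (modality, geometry, ...)."""
    cleaned = dict(header)
    for tag in PHI_TAGS:
        if tag in cleaned:
            cleaned[tag] = ""                 # 'clear' the value
    cleaned[(0x0010, 0x0020)] = "A001"        # untraceable replacement ID
    return cleaned

# Hypothetical header for one slice (values invented for illustration):
header = {(0x0010, 0x0010): "DOE^JANE",
          (0x0010, 0x0020): "12345",
          (0x0018, 0x0088): "1.0"}            # Spacing Between Slices: keep!
clean = anonymize(header)
# clean has the name blanked and the ID replaced, but the geometry intact.
```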




Find all relevant information and clear it, or change its value to something that can't be traced to the person (like 'patient A001').  This is also where geometrical values like slice thickness can be changed, if that is necessary to get a scan to load properly.  Once all the values are changed, save the new DICOM files and open them again in ImageJ, just to check that it all worked and that no PID (patient identifier data) was missed.


As for fixing data, the most common issue I have come across is an incorrect slice spacing, which causes the scan to be squashed or stretched.  There are a few values that control this, and different programs will use different ones.  'SliceThickness' is sometimes used, which is bad.  The best is to use 'ImagePositionPatient', which changes for each slice/image.  'SliceSpacing' is often used as well, which is better than 'SliceThickness'.  If you suspect your slice spacing value is wrong, calculate the difference between two consecutive 'ImagePositionPatient' values and check it against the slice spacing; if they are not equal, something is amiss.
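The check described above can be sketched in a few lines of Python (the position and spacing values are made up for illustration):

```python
import numpy as np

def check_slice_spacing(positions, stated_spacing, tol=1e-3):
    """Compare a stated slice spacing against the actual distances between
    consecutive ImagePositionPatient values (one xyz triple per slice)."""
    pos = np.asarray(positions, dtype=float)
    gaps = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    return bool(np.all(np.abs(gaps - stated_spacing) < tol)), gaps

# Made-up example: slices actually 2.0 mm apart, but the header claims 1.0.
positions = [[0, 0, 0.0], [0, 0, 2.0], [0, 0, 4.0]]
ok, gaps = check_slice_spacing(positions, stated_spacing=1.0)
# ok is False: loaded as-is, the scan would be squashed to half its length.
```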


Now you have anonymized, and potentially fixed, data that you can send to a friend, share here on embodi3D, or load up in medical imaging software like my favorite, 3DSlicer.


When DICOM data (anonymized or not) is loaded into 3DSlicer and saved to an NRRD file (see Dr. Mike's tutorial), you will have a single volume file which is inherently anonymized.  If you open the *.nrrd file in a text editor like Notepad++, there are a few lines at the top which are basically your new header.  It is very minimal and doesn't include a great deal of the information that was in the original DICOM files, like modality, scan type, and settings.  This is fine if all you want to do is create a model, but it can be helpful to have more information than an NRRD file holds, so anonymized DICOM will be better in some situations.
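For example, those header lines follow a simple 'field: value' text format that is easy to inspect programmatically (the header below is invented for illustration, not from a real scan):

```python
# A minimal NRRD header of the kind 3DSlicer writes (values invented):
header_text = """NRRD0004
type: short
dimension: 3
space: left-posterior-superior
sizes: 512 512 120
space directions: (0.7,0,0) (0,0.7,0) (0,0,1.5)
encoding: gzip
"""

def parse_nrrd_header(text):
    """Parse the 'field: value' lines at the top of an .nrrd file."""
    fields = {}
    for line in text.splitlines()[1:]:        # first line is the magic number
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    return fields

fields = parse_nrrd_header(header_text)
# A handful of geometry fields is all you get -- far less than DICOM stores.
```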