mikefazz last won the day on December 29 2016

mikefazz had the most liked content!

About mikefazz

  • Rank
    Advanced Member
  • Birthday 07/28/1980

  1. While I haven't done this before, from similar projects I can think of a fairly simple procedure:
     1. Create a 3D model of the anatomy to do surgery on.
     2. Create cylinders to represent the drill bits and place/orient them where you want.
     3. Create a block that will represent the jig and orient it so it overlaps the anatomy.
     4. Do boolean subtractions to remove the cylinders (creating the holes) and the intersecting anatomy (creating the reference surface).
     Mike
  2. I would concur. I mentioned thresholding for viewing data, but that doesn't work well for MRI, which has a larger variety of 'styles' compared to CT. I would definitely start with orthogonal views, but that may not be all that novel.
  3. 3D Printed Wrist Brace

    So I began to develop some pain in my right wrist, which was later diagnosed as tendinitis. At the same time I had been looking at the CT scan of my abdomen and noticed it also captured my right hand, which was resting on my stomach during the scan (I had injured my right shoulder again). I recalled a concept project I had seen a while back: the CORTEX brace. It presented the idea of replacing the typical plaster cast with a 3D printed one, which would prevent the issues of sweating and itchiness… as well as be much more stylish (though not allowing people to sign your cast). I had initially wanted to apply this to prosthesis sockets but never got past the idea stage. Looking around for how to create the 'webbing' style, I found that Meshmixer had the necessary capabilities. So I now had all the tools needed to make my own brace to partially immobilize my wrist.

Once the surface model is created and loaded into Meshmixer, the first step is to cut off anatomy that you don't want in the model using 'plane cut'. Once the general shape of the brace is created, the next step is to consider how the brace will be taken on and off. For my design I wanted a single piece that is flexible enough to slide my wrist into. To create the 'slot' I ended up doing a boolean in Blender, as Meshmixer would crash when I tried to create the slot there. With the brace model and slot in place, the next step was to offset the surface, since creating the Voronoi mesh would generate the tubes on both sides of the surface. This is done back in Meshmixer and is fairly computationally intensive, so partially reducing the mesh density first is a good idea. The next step is to further decimate the mesh to get the desired Voronoi mesh pattern. This takes a bit of playing around to get the desired style. Too dense, and the resulting web structure will not have many openings, which will be stronger but not as breathable. Too rough, and the model may not conform to the surface well, causing pressure points.
The final step is to take the reduced mesh and create the web-like structure using the 'make pattern' feature within Meshmixer. There are various settings to be applied within this feature, but setting 'Dual Edges' and then adjusting the pipe size to double your offset will result in the inner edge of the webbing just touching the skin of the initial model. Having never made a brace/cast before, it took me a few iterations to get a design which I could easily don and doff (put on and take off). I also found that I could make a brace that held my wrist very rigidly but would be too restrictive. Material selection also became important. Initially I used ABS, which is more flexible than PLA, and I had it in a nice pink skin color. It turned out to be too rigid for the style I was designing. I found PETT (Taulman T-Glase) to work well, as it has a lower modulus of elasticity, meaning it is more flexible than ABS.

After using the brace on and off for a few weeks I have found that it fits well and is surprisingly comfortable. I have taken a shower with it on as well as slept with it on. It doesn't seem to smell as bad as the cheap and common cloth-type braces. The main downsides have been that taking it on and off is still a bit challenging, and it is more restrictive of my motion, as it behaves somewhere between a brace and a cast. There is definitely a great deal of potential for this type of cast, though widespread adoption would require further technical development to simplify the process.
  4. At my rate for the 3D printing part, it would cost under $100 and take less than a day to print (calculated at around 13 hours).
  5. I would add that the voxel-to-real-world dimensions come from the DICOM tags 'Pixel Spacing' and 'Spacing Between Slices', which give the voxel dimensions. Only if one of these values (or other related ones) is incorrect will the dimensions of the resulting model be wrong. Also, if printing, make sure all software uses mm when importing the mesh.
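The tag arithmetic above is simple enough to sanity-check by hand: the physical extent of the volume is just the voxel counts multiplied by the spacing values. A minimal sketch in plain Python (the function name and argument layout are my own, not from any DICOM library):

```python
def volume_extent_mm(rows, cols, n_slices, pixel_spacing, slice_spacing):
    """Physical extent of a scan volume in mm.

    pixel_spacing:  (row_mm, col_mm) from the 'Pixel Spacing' tag.
    slice_spacing:  out-of-plane spacing in mm ('Spacing Between Slices').
    """
    row_mm, col_mm = pixel_spacing
    return (rows * row_mm, cols * col_mm, n_slices * slice_spacing)

# A 512x512 CT with 0.7 mm pixels, 100 slices at 1.25 mm spacing:
extent = volume_extent_mm(512, 512, 100, (0.7, 0.7), 1.25)
# roughly (358.4, 358.4, 125.0) mm
```

If the model comes out, say, three times too short in one direction, comparing a calculation like this against the printed model usually points straight at the bad tag.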
  6. Hi zee, This sounds like an interesting project. I would not have thought a cell phone could handle viewing 3D scan data, but they do keep getting faster. I guess I am not clear on what the goals of the software would be. If you want an app that a provider and patient can use, I would think the 'volume rendering' (which is basically volume thresholding) feature in 3DSlicer is the kind of thing you would want, but that taxes my desktop computer, let alone a cell phone. Scrolling through the slices in the 3 orthogonal planes I would expect to be more realistic, and it is still what a lot of medical practitioners seem to be used to doing.
  7. I wanted to take some time to look into a brief history of medical image segmentation before moving into what I consider the more modern method of segmentation. (Be warned, the video is rather long.) First, to be clear, the goal of segmentation is to separate the bones or anatomy of interest from 3D scan data. This is done most easily when there is a sharp contrast between the anatomy of interest and what surrounds it. If you have a CT scan of an engine block this is pretty straightforward; the density contrast of metal to air is hard to beat. But for anatomy, and especially MRI scans, it is a whole other story. Anatomical boundaries are often more gradients than sharp edges. Over the years there have been many approaches to make the process of segmenting anatomy faster, easier, and less subjective than the dreaded 'manual segmentation'.

When I first started working with medical images, back around 2003, the group I was at was trying an alternative to their previous method, which involved using ImageJ to separate each bone of the foot by applying a threshold and then going in and 'fixing' that by painting... Segmenting the bones of one foot that way would take around 10 hours of tedious labor... fortunately that was before my time. I was tasked with figuring out how to get 3DViewNix to work. It was basically a research project that ran on Linux (which I hadn't used before). It had a special algorithm called 'live-wire' which allowed clicking a few points around the edge of the bone on each slice to get a closed contour that matched the bone edge, then doing that for each scan slice for each bone. This got it down to about 3 hours per foot of still rather mind-numbing effort. After a while a radiologist and his PhD electrical engineering student let us know that there were much better ways. The student had some software written in IDL that allowed placing 'seeds' in each bone that would then grow out in 3D to the edges of the bones.
After some time getting set up we were able to segment a foot in less than an hour, with a good portion of that being computer time. My background is as an ME, so I don't pretend to fully understand the image processing algorithms, but I have used them in various forms. This year I got more familiar with 3DSlicer, which I have found to be the best open source medical imaging program yet. It is built on VTK and ITK and has a very nice GUI, making seeding far more convenient (other programs I've used didn't really allow zooming). It took me a while to find something similar to what I had used before, but eventually I found that the extension 'FastGrowCut' gives very good results, enough to move away from the special software I had been using before, which wasn't free. My basic explanation of 'FastGrowCut' and similar region growing algorithms is: you start with 'seeds', which are labeled voxels for the different anatomy of interest. The algorithm then grows the seeds until they reach the edge of the bone/anatomy or a different growing seed. There is then a back and forth until it stabilizes on what the edge really is. The result is a 'label' file which has every voxel labeled as background or as one of the entities of interest. Once everything is segmented to the level that you like, I prefer to do volumetric smoothing of each entity (bone) before creating the surface models. These algorithms are an active area of research, typically in image processing groups within university electrical engineering departments. They are not a silver bullet that works in all situations; there are a variety of other methods (some as extensions to 3DSlicer) for specific situations. Thin features, long tubular features, noisy data (metal artifacts), and low quality scans (scouts) will still take more time and effort to get good results. No algorithm can take a low resolution, low quality scan and give you a nice model... garbage in = garbage out.
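A stripped-down 2D illustration of that seeded growth, written as a plain flood fill with an intensity tolerance (FastGrowCut itself is considerably more sophisticated, with the competing-fronts behavior described above; everything here is my own toy sketch):

```python
from collections import deque

def grow_seeds(image, seeds, tol):
    """Toy seeded region growing on a 2D intensity grid.

    `seeds` maps label -> (row, col). Each seed floods into 4-connected
    neighbours whose intensity is within `tol` of the seed's own
    intensity; anything left at 0 is background."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    queue = deque()
    for lab, (r, c) in seeds.items():
        labels[r][c] = lab
        queue.append((r, c, lab, image[r][c]))
    while queue:
        r, c, lab, ref = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and labels[nr][nc] == 0
                    and abs(image[nr][nc] - ref) <= tol):
                labels[nr][nc] = lab
                queue.append((nr, nc, lab, ref))
    return labels

# Two 'bones' (intensity ~10 and ~50) separated by background (0):
image = [[10, 10, 0, 50, 50],
         [10, 12, 0, 48, 50]]
labels = grow_seeds(image, {1: (0, 0), 2: (0, 4)}, tol=5)
# labels == [[1, 1, 0, 2, 2], [1, 1, 0, 2, 2]]
```

Each seed claims the contiguous region whose intensities resemble its own, and the background column stays unlabeled, which is the essence of the seeding workflow described above.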
Now, I have been surprised and somewhat dismayed to find thresholding used as a common segmentation technique, often as the main tool even in expensive commercial programs. That style typically involves applying a threshold and then going in and cleaning up the model until you get something close to what you want. To me this seems rather antiquated, but for quickly viewing data or creating a quick and rough model it really can't be beat... for creating high quality models to be printed, though, there are better ways.
  8. Wow, looks like a 7-extruder printer! I'm guessing they use something like the Diamond hot end, except it can handle 5 inputs, along with 2 other printer heads. Interesting implementation of the blending hot end concept combined with separate hot ends...
  9. Registration with 3DSlicer

    In this entry we look at registering one scan to another from the same subject, pre and post op. There may come a time when you have multiple scans of the same subject which you want to compare to each other. This could be a CT and an MRI, or a pre- and post-op scan as in this example. Since the scans were taken at different times, and possibly at different places, they will not line up with each other when they are loaded. Registration is the process that finds the transformation that moves one volume to line up with the other.

The first step after loading the data is to perform an initial alignment. If the two volumes are far apart, the difference will likely be too much for the registration algorithm to work properly. The initial alignment is done using the 'Transforms' menu within 3DSlicer. After creating a new transform, pick which volume you will be moving, as the other one (the fixed image) will stay stationary through the whole process. Now adjust the 3 translation and 3 rotation sliders until you get a decent alignment by eye. It can help to center each volume first if they have a large origin offset. Also, changing the way one of the volumes is colored can make visual alignment easier.

With the two volumes roughly lined up, find the BRAINS registration within the Registration group under the main menu. Before performing the registration, set the Fixed Image, the Moving Image, the Output Image Volume, the Initialization Transform, and the Registration Phase (set to Rigid (6 Degrees Of Freedom)). When you click 'Apply' the registration will run until it finds the best match between the scans. Registration quality is typically measured in terms of 'mutual information', which is roughly a measure of how well the intensities of one volume predict the intensities of the other. Full-volume rigid registration will not work in all situations, such as two scans of a foot flexed in different ways. Rigid registration works best when the two scans are from the same person and the volume in question doesn't change shape (such as head scans).
Other types of registration will allow the 'moving' volume to be distorted until it matches the 'fixed' volume. This can range from a simple affine transform (scaling) all the way to template matching (warping). Find the scans used at: PreOp PostOp And of course 3DSlicer - https://www.slicer.org/
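For intuition about the mutual information metric mentioned above, it can be estimated from a joint histogram of corresponding voxel intensities. A toy pure-Python version (the function and its crude binning are illustrative only; BRAINS uses its own, far more robust implementation):

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Toy mutual-information estimate between two equal-length
    intensity lists, via a joint histogram with `bins` levels."""
    lo_a, hi_a, lo_b, hi_b = min(a), max(a), min(b), max(b)

    def quantize(v, lo, hi):
        if hi == lo:
            return 0
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))

    pairs = [(quantize(x, lo_a, hi_a), quantize(y, lo_b, hi_b))
             for x, y in zip(a, b)]
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(i for i, _ in pairs)
    p_y = Counter(j for _, j in pairs)
    # MI = sum over bins of p(x,y) * log( p(x,y) / (p(x) * p(y)) )
    return sum((c / n) * math.log(c * n / (p_x[i] * p_y[j]))
               for (i, j), c in p_xy.items())

# Perfectly aligned identical volumes share maximal information...
aligned = mutual_information([0, 0, 1, 1, 2, 2, 3, 3],
                             [0, 0, 1, 1, 2, 2, 3, 3])
# ...while a constant image tells you nothing (MI == 0).
flat = mutual_information([0, 0, 1, 1, 2, 2, 3, 3], [5] * 8)
```

The optimizer inside the registration nudges the transform to maximize a score like this, which is why a decent manual initialization matters: it keeps the search from getting stuck far from the true alignment.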
  10. Dicom Primer

    In this tutorial I will cover some of the basics of working with DICOM data, with a focus on anonymizing it and reading it into medical imaging software, as well as how to potentially fix problematic scans. So first of all, what is DICOM data? It is a standard file type for basically all medical imaging devices (CT, MRI, US, PET, X-ray, etc.). DICOM stands for Digital Imaging and Communications in Medicine, and along with the file format and its tags it is designed to be transferred and stored with PACS. The DICOM standard can be found at their homepage.

The useful bits for the purpose of creating anatomical models, and particularly the values that define the volume geometry, can be found in 'tags'. These live in each image/slice header file (metadata). They are two 4-digit hexadecimal values assigned to a particular type of value, like: (0018, 0088) Spacing Between Slices. To find the official library of these tags, go to the standard on the DICOM home page and go down to "Part 6: Data Dictionary". When opened, scrolling down will reveal just how immense the DICOM standard is. Now, this library just gives you the tag and the name, but not much information about that tag. To get a bit more of a description, use Dicom Lookup and type in the tag or name to find more information.

Before looking at data, a mention on anonymizing data. The goal is to remove any information that can be traced back to the original person without removing other important information like modality, etc... To get an official list of these values, go to HIPAA and find their de-identification guidance document. In general (pages 7 and 8), remove all names, dates, addresses, times, and other sensitive information like SSNs.

Now, to actually look at the data: I have for years used ImageJ, which has been updated to Fiji. Open an image from the scan CD and press 'Ctrl+I' to open the header file and see what is in there. Fiji (ImageJ) is a very simple and useful program for looking at data.
It is mostly made for working in 2D, so in that way it is somewhat outdated compared to modern medical imaging software like 3DSlicer, but it still has its place. Fiji can save a stack of images as an nrrd file, so if for some reason 3DSlicer doesn't want to load a scan correctly, Fiji gives you another option. As useful as Fiji is, for anonymizing and changing the values of tags I would suggest Dicom Browser. I personally use some code in Matlab to automate the process, but that is an expensive and cumbersome tool for the average user. Open the folder with the data in Dicom Browser; when the main folder is selected, the values from each slice are stacked on top of each other. To anonymize the data, select a value and set it to 'clear'. Find all relevant information and clear it, or change its value to something that can't be traced to the person (like patient A001). This is also where geometrical values like slice thickness can be changed, if that is necessary to get a scan to load properly. Once all the values are changed, save the new DICOM files and open them again in ImageJ just to check that it all worked and that no PID (Patient Identifier Data) was missed.

As to fixing data, the most common issue I have come across is an incorrect slice spacing, which causes the scan to be shrunk or stretched. There are a few values that control this, and different programs will use different ones. 'SliceThickness' is sometimes used, which is bad. The best is to use 'ImagePositionPatient', which changes for each slice/image. 'SliceSpacing' is often used as well, which is better than 'SliceThickness'. If you suspect your slice spacing value is wrong, calculate the difference between two consecutive 'ImagePositionPatient' values and check it against the slice spacing; if they are not equal, something is amiss. Now you have anonymized, and potentially fixed, data that you can send to a friend, share here on embodi3D, or load up in medical imaging software like my favorite, 3DSlicer.
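The clear-or-replace workflow described above can be sketched as a tiny dict-based toy. The field names and helper below are purely illustrative; real DICOM addresses fields by (group, element) tags, e.g. PatientName is (0010,0010), and tools like Dicom Browser do this properly:

```python
# Illustrative PHI field names (real DICOM uses (group,element) tags).
PHI_FIELDS = [
    "PatientName", "PatientBirthDate", "PatientAddress",
    "InstitutionName", "ReferringPhysicianName",
    "StudyDate", "StudyTime",
]

def anonymize_header(header, replacement_id="A001"):
    """Return a copy of a dict-style header with PHI blanked and the
    patient ID swapped for an untraceable placeholder. Fields needed
    for modeling (Modality, PixelSpacing, ...) are left alone."""
    out = dict(header)
    for field in PHI_FIELDS:
        if field in out:
            out[field] = ""
    if "PatientID" in out:
        out["PatientID"] = replacement_id
    return out

header = {"PatientName": "Doe^John", "PatientID": "12345",
          "Modality": "CT", "PixelSpacing": "0.7\\0.7"}
clean = anonymize_header(header)
# clean["PatientName"] == "" and clean["PatientID"] == "A001",
# while Modality and PixelSpacing survive untouched.
```

The point of the sketch is the split: sensitive values are cleared or replaced, while the geometric and modality values that the modeling software needs are deliberately preserved.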
When DICOM data (anonymized or not) is loaded into 3DSlicer and saved to an nrrd file (see Dr. Mike's tutorial) you will have a single volume file which is inherently anonymized. Opening the *.nrrd file in a text editor like Notepad++ shows a few lines at the top which are basically your new header file. It is very minimal and doesn't include a great deal of the information that was in the original DICOM files, like modality, scan type, and settings. This is fine if all you want to do is create a model from it, but it can be helpful to have more information than what an nrrd file holds, so anonymized DICOM will be better in some situations.
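Those few lines at the top follow the NRRD layout: a magic line, then plain ASCII 'field: value' lines, then a blank line separating the header from the raw data. A minimal parser sketch (my own helper, not part of 3DSlicer or any nrrd library; it ignores the less common ':=' key-value form):

```python
import tempfile

def read_nrrd_header(path):
    """Read the ASCII header of a .nrrd file: everything before the
    blank line that separates the header from the raw voxel data."""
    fields = {}
    with open(path, "rb") as f:
        magic = f.readline().decode("ascii").strip()  # e.g. NRRD0004
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if not line:
                break                     # blank line ends the header
            if line.startswith("#"):
                continue                  # comment line
            if ": " in line:
                key, _, val = line.partition(": ")
                fields[key] = val
    return magic, fields

# Demo: write a tiny header plus fake voxel bytes, then parse it back.
with tempfile.NamedTemporaryFile(suffix=".nrrd", delete=False) as f:
    f.write(b"NRRD0004\n# a comment\ntype: short\ndimension: 3\n"
            b"sizes: 2 2 2\nencoding: raw\n\n" + b"\x00" * 16)
    demo_path = f.name

magic, fields = read_nrrd_header(demo_path)
# magic == "NRRD0004"; fields["sizes"] == "2 2 2"
```

Even this much is enough to check the 'sizes' and spacing fields of a volume without opening it in a full imaging program.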
  11. From the two images, the first scan shows some mis-ordered images (some posterior slices connect to anterior slices). This suggests the slices are in the coronal plane but not ordered correctly. The second scan shows that the slices are in the transverse plane with large slice spacing. This can be confirmed by looking at the volume dimensions in the Volumes section (see picture). My guess is the second scan has about a 1 mm pixel size and the out-of-plane spacing is 3+ mm (look at "Image Spacing" as in the attached image). Often clinical scans have large slice spacing, as they are for a radiologist to look at, not for making 3D models. Sometimes the original scan data will be of higher resolution and you can go back and ask for higher quality (lower slice spacing) scan data. As to the first scan, getting the images ordered correctly can be difficult. I'm not sure what 3DSlicer uses to order images (ImageNumber, ImagePositionPatient, file name, etc...). A separate program like Fiji (ImageJ) can make an nrrd file, but when I have tried, it doesn't load into 3DSlicer since Fiji doesn't include all the header info that 3DSlicer is looking for. I use some Matlab code I wrote to fix scan data and make an nrrd file for 3DSlicer, but not everyone has Matlab...
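When the order really is scrambled, re-sorting by 'ImagePositionPatient' along the slice normal usually fixes it. A toy sketch (dicts stand in for parsed slice headers; the axial normal here is an assumption, since in practice it comes from the 'ImageOrientationPatient' direction cosines):

```python
def sort_slices(slices, normal=(0.0, 0.0, 1.0)):
    """Order slice headers by projecting 'ImagePositionPatient' onto
    the slice normal. `slices` is a list of dicts standing in for
    parsed headers; the default normal assumes an axial stack."""
    def along_normal(s):
        return sum(p * n for p, n in zip(s["ImagePositionPatient"], normal))
    return sorted(slices, key=along_normal)

# Slices that arrived shuffled on disk:
shuffled = [
    {"InstanceNumber": 2, "ImagePositionPatient": (0.0, 0.0, 2.5)},
    {"InstanceNumber": 0, "ImagePositionPatient": (0.0, 0.0, 0.0)},
    {"InstanceNumber": 1, "ImagePositionPatient": (0.0, 0.0, 1.25)},
]
ordered = sort_slices(shuffled)
# InstanceNumbers now come out 0, 1, 2
```

Sorting on the physical position rather than the file name or instance number sidesteps export quirks where the filenames don't reflect the spatial order.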
  12. For segmentation I would try a region growing method. The airway segmentation extension may work if you want to get connected thin parts. GrowCut or FastGrowCut may work, but I think they don't do as well with thin features.
  13. This is an all too common issue: some software uses 'slice thickness' when 'slice spacing' is a much better option. Thickness can be larger than spacing when slices overlap, or spacing can be more than thickness when there are gaps in the data (an unfortunate situation). The difference between consecutive 'ImagePositionPatient' values is typically a good measure as well. I have come across data with incorrect values before, probably the result of a mistake during image reformatting. Calculating the change between two consecutive 'ImagePositionPatient' values and checking it against the slice spacing is a good check, as sometimes the error is not as visible as in your example.
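That consecutive-position check is easy to automate; a small sketch (the helper names are mine, and the positions are assumed to come from the 'ImagePositionPatient' triplet of each slice):

```python
import math

def gaps_from_positions(positions):
    """Gaps between consecutive slices, computed as the 3D distance
    between their 'ImagePositionPatient' (x, y, z) triplets."""
    return [math.dist(a, b) for a, b in zip(positions, positions[1:])]

def spacing_tag_consistent(positions, tagged_spacing, tol=0.01):
    """True when every inter-slice gap matches the tagged spacing."""
    return all(abs(g - tagged_spacing) <= tol
               for g in gaps_from_positions(positions))

# Four slices 1.25 mm apart:
positions = [(0, 0, 0.0), (0, 0, 1.25), (0, 0, 2.5), (0, 0, 3.75)]
ok = spacing_tag_consistent(positions, 1.25)       # True
bad = spacing_tag_consistent(positions, 3.0)       # False: tag is wrong
```

Running a check like this over a whole series also catches the subtler case of a single missing slice, which shows up as one gap twice the size of the others.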
  14. Well 3DSlicer has an extension to work with DTI data but just being able to view it doesn't mean a surface model can be made. Definitely would be a challenging part to make.
  15. I typically use KISSlicer, which handles multi-material parts (if you use the paid version); Slic3r also works. If the two parts are segmented from the same coordinate system (i.e. they line up when loaded on top of each other) then they can be loaded together and each set to its own extruder for printing. So if one model is the bone, even where it overlaps the implant, the implant can be given priority and that region will print as 'metal' instead of 'bone'. As to segmenting multiple entities in 3DSlicer, I use the extension 'FastGrowCut' to segment multiple parts... I don't really bother with thresholding, which is too 'global' for my needs.