mikefazz last won the day on February 3

mikefazz had the most liked content!

About mikefazz

  • Rank
    Advanced Member
  • Birthday 07/28/1980

  1. Version 1.0.0

    Here is my right hand, segmented and smoothed. All bones are included separately as well as combined together. The radius, ulna, and soft tissue have also been cut to make printing easier. The combined-bones STL uses the cut arm bones. Segmentation of the wrist bones wasn't ideal due to the somewhat low resolution of the scan, so the smoothing there may be a bit excessive to compensate. Refer to the shared scan volume to see the original data.


  2. Mikes Hand

    Version 1.0.0

    1 download

    This is a subvolume from an abdomen CT scan of a 32-year-old male. I happened to have my hand in the field of view (probably because I had just re-injured my shoulder in a skiing fall). Voxel size: 0.835 mm in-plane, 1.6 mm out-of-plane.


  3. Looks like good quality. Is this with PVA, and is the viewed side the top or the support side? My experience using PVA as a support is good, but the interface is a bit rough. I know the UM3 uses a different PVA than I have used, which may do a better job with the interface.
  4. It seems you're in luck: I have a few scans ranging from max inversion, internal rotation, and plantar flexion to max eversion, external rotation, and dorsiflexion... part of my master's thesis, actually. They are MRIs, so creating 3D models is not as easy as with CT scans. Visually, the foot model you made looks good considering it came from 2D images. BTW, the terms pronation and supination are better suited to the upper extremity than the foot; the group I worked with, which studies foot biomechanics, doesn't use them, as they are not well defined for the lower extremity.
  5. Version 1.0.0


    MRI of a middle-aged female held in maximum external rotation, eversion, and dorsiflexion. Not weight-bearing beyond what was necessary to hold the foot in position. Relatively low-quality scan: 1.5T MRI, 0.7 mm slice spacing, 0.566 × 0.566 mm pixel spacing.


  6. Version 1.0.0


    MRI of a middle-aged female held in a neutral orientation. Not weight-bearing beyond what was necessary to hold the foot in position. Relatively high-quality scan: 1.5T MRI, 0.5 mm slice spacing, 0.566 × 0.566 mm pixel spacing.


  7. Version 1.0.0


    MRI of a middle-aged female held in maximum internal rotation, inversion, and plantar flexion. Not weight-bearing beyond what was necessary to hold the foot in position. Relatively low-quality scan: 1.5T MRI, 0.7 mm slice spacing, 0.566 × 0.566 mm pixel spacing.


  8. Well, for my particular case it's hard to say, as I don't wear it 24/7; I swap between it and the cloth type. The printed brace is rigid and restrictive, which matters for a broken bone but not so much for a soft-tissue injury like tendonitis, where you only need limited mobility. I may tweak the design some more to give a bit more comfort and flexibility. Yes, the early designs were far from usable/comfortable; it took some trial and error to get it right.
  9. While I haven't done this before, from similar projects I can think of a fairly simple procedure:
     1. Create a 3D model of the anatomy to operate on.
     2. Create cylinders to represent the drill bits and place/orient them where you want.
     3. Create a block to represent the jig and orient it so it overlaps the anatomy.
     4. Do boolean subtractions to remove the cylinders (creating the holes) and the intersecting anatomy (creating the reference surface).
     Mike
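    The boolean logic of that procedure can be sketched on a voxel grid (real work would use surface meshes in a CAD tool, and all the shapes and sizes below are made up for illustration):

    ```python
    import numpy as np

    def make_cylinder(shape, center_xy, radius):
        """Boolean mask of a z-axis cylinder inside a voxel grid."""
        zs, ys, xs = np.indices(shape)
        cy, cx = center_xy
        return (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2

    shape = (20, 40, 40)                     # (z, y, x) voxels
    jig = np.ones(shape, dtype=bool)         # step 3: solid block for the jig
    anatomy = np.zeros(shape, dtype=bool)    # step 1: placeholder anatomy
    anatomy[:8, :, :] = True                 # pretend the bone occupies the bottom slab

    drill_a = make_cylinder(shape, (12, 12), 3)  # step 2: drill-bit cylinders
    drill_b = make_cylinder(shape, (28, 28), 3)

    # Step 4: boolean subtractions -- holes for the bits, plus the
    # anatomy's negative imprint as the reference surface.
    jig = jig & ~drill_a & ~drill_b & ~anatomy
    ```

    The same subtractions done on meshes (rather than voxels) give a printable jig that seats on the bone and guides the bits.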
  10. I would concur. I mentioned thresholding for viewing data, but that doesn't work well for MRI, which has a larger variety of 'styles' than CT. I would definitely start with orthogonal views, but that may not be all that novel.
  11. 3D Printed Wrist Brace

    So I began to develop some pain in my right wrist which was later diagnosed as tendinitis. At the same time I had been looking at the CT scan of my abdomen and noticed they also captured my right hand as it was resting on my stomach during the scan (I had injured my right shoulder again). I recalled a concept project a while back I had seen: the CORTEX brace. It presented the idea of replacing the typical plaster cast with a 3D printed one which would prevent the issues of sweating and itchiness… as well as be much more stylish (though not allowing people to sign your cast). I had wanted to apply this to prosthesis sockets initially but never got past the idea stage. Looking around for how to create the ‘webbing’ style I found that meshmixer had the necessary capabilities. So I now had all the tools needed to make my own brace to partially immobilize my wrist. Once the surface model is created and loaded into meshmixer the first step is to cut off anatomy that you don't want in the model using 'plane cut'. Once the general shape of the brace is created the next step is to consider how the brace will be taken on and off. For my design I wanted to have one piece that is flexible enough to slide my wrist in. To create the 'slot' I found that I did a boolean in blender as meshmixer would crash when I tried to create the slot. With the brace model and slot in place the next step was to offset the surface since creating the voroni mesh would generate the tubes on both sides of the surface. This is done back in meshmixer and is fairly computationally intensive so partially reducing the mesh density first is a good idea. The next step is to further decimate the mesh to get the desired voroni mesh pattern. This takes a bit of playing around to get the desired style. Too dense and the resulting web structure will not have many openings which will be stronger but not as breathable. Too rough and the model may not conform to the surface well causing pressure points. 
The final step is to take the reduced mesh and web like structure using the 'make pattern' feature within meshmixer. There are various settings to be applied within this feature but setting 'Dual Edges' then adjusting the pipe size to double your offset will result in the inner edge of the webbing to just touch the skin of the initial model. Having never made a brace/cast before it took me a few iterations to get a design which I could easily don and doff (put on and take off). I also found that I could make a brace that held my wrist very rigidly but would be too restrictive. Also material selection became important. Initially I used ABS which is more flexible than PLA and I had it in a nice pink skin color. It turned out to be too rigid for the style I was designing. I found PETT (taulman t-glass) to work well as it had a lower modulus of elasticity meaning it was more flexible than ABS. After using the brace on and off for a few weeks I have found that it fits well and is surprisingly comfortable. I have taken a shower with it on as well as slept with it on. It doesn’t seem to smell as bad as the cheap and common cloth type braces. The main downsides have been taking it on and off is a bit challenging still and it is more restrictive of my motion as it behaves somewhere between a brace and a cast. There is definitely a great deal of potential for this type of cast though widespread adoption would require further technical development to simplify the process.
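    My reading of the offset/pipe-size relation (not Meshmixer's actual internals): the tubes are centered on the offset surface, so their inner edge sits at offset minus pipe radius from the skin, and choosing a pipe diameter of twice the offset makes that gap zero:

    ```python
    # Toy check of the 'Dual Edges' pipe-size rule, with a made-up offset.
    offset_mm = 2.0                    # outward shell offset from the skin
    pipe_diameter_mm = 2 * offset_mm   # 'make pattern' pipe size, per the rule above
    inner_gap_mm = offset_mm - pipe_diameter_mm / 2
    print(inner_gap_mm)  # 0.0 -> webbing inner edge just touches the skin
    ```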
  12. At my rate, the 3D printing part would cost under $100 and take less than a day to print (calculated at around 13 hours).
  13. I would add that the voxel-to-real-world conversion comes from the DICOM tags 'pixel spacing' and 'slice spacing', which give the voxel dimensions. Only if one of these values (or another related one) is incorrect will the dimensions of the resulting model be wrong. Also, if printing, make sure all software uses mm when importing the mesh.
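    The bookkeeping is just voxel counts times spacings. A sketch using the spacings quoted for the foot MRIs above (the slice/row/column counts are made up; with pydicom these values would come from the dataset's PixelSpacing and SliceThickness/SpacingBetweenSlices attributes):

    ```python
    # Spacings from the scan description; counts are hypothetical.
    pixel_spacing = (0.566, 0.566)   # row, column spacing in mm
    slice_spacing = 0.7              # mm between slices
    volume_shape = (160, 256, 256)   # (slices, rows, cols)

    # Physical extent of the scanned block in mm along each axis.
    extent_mm = (
        volume_shape[0] * slice_spacing,
        volume_shape[1] * pixel_spacing[0],
        volume_shape[2] * pixel_spacing[1],
    )
    print(extent_mm)
    ```

    If any spacing tag is wrong, every model built from the volume is scaled wrong by the same factor along that axis.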
  14. Hi zee, this sounds like an interesting project. I would not have thought a cell phone could handle viewing 3D scan data, but they do keep getting faster. I guess I am not clear on what the goals of the software would be. If you want an app that a provider and patient can use, I would think the 'volume rendering' feature (which is basically volume thresholding) in 3DSlicer is the kind of thing you would want, but that taxes my desktop computer, let alone a cell phone. Scrolling through the slices in the three orthogonal planes I would expect to be more realistic, and it is still what a lot of medical practitioners seem to be used to doing.
  15. I wanted to take some time to look into a brief history of medical image segmentation before moving into what I consider the more modern method of segmentation (be warned, the video is rather long). First, to be clear, the goal of segmentation is to separate the bones or anatomy of interest from the 3D scan data. This is easiest when there is a sharp contrast between the anatomy of interest and what surrounds it. If you have a CT scan of an engine block, this is pretty straightforward; the density difference between metal and air is hard to beat. But for anatomy, and especially for MRI scans, it is a whole other story: anatomical boundaries are often gradients rather than sharp edges. Over the years there have been many approaches to make the process of segmenting anatomy faster, easier, and less subjective than the dreaded 'manual segmentation'. When I first started working with medical images back around 2003, the group I was at was trying an alternative to their previous method, which involved using ImageJ to separate each bone of the foot by applying a threshold and then going in and 'fixing' the result by painting... They wanted to segment the bones of the foot, and it would take something like 10 hours of tedious labor... fortunately that was before my time. I was tasked with figuring out how to get 3DViewNix to work. It was basically a research project that ran on Linux (which I hadn't used before). It had a special algorithm called 'live-wire' which allowed clicking a few points around the edge of a bone on each slice to get a closed contour matching the bone edge, then doing that for each scan slice for each bone. This got it down to about 3 hours a foot of still rather mind-numbing effort. After a while a radiologist, along with his PhD student in electrical engineering, let us know that there were much better ways. The student had some software written in IDL that allowed placing 'seeds' in each bone that would then grow out in 3D to the edges of the bones.
After some time getting set up, we were able to segment a foot in less than an hour, with a good portion of that being computer time. My background is as an ME, so I don't pretend to fully understand the image processing algorithms, but I have used them in various forms. This year I got more familiar with 3DSlicer, which I have found to be the best open-source medical imaging program yet. It is built on VTK and ITK and has a very nice GUI, making seeding far more convenient (other programs I've used didn't really allow zooming). It took me a while to find something similar to what I had used before, but eventually I found that the extension 'FastGrowCut' gives very good results... enough to move away from the special, non-free software I had been using. My basic explanation of 'FastGrowCut' and similar region-growing algorithms is: you start with 'seeds', which are labeled voxels for the different anatomy of interest. The algorithm then grows the seeds until each reaches the edge of the bone/anatomy or a different growing seed. There is then some back and forth until it stabilizes on what the edge really is. The result is a 'label' file in which every voxel is labeled as background or as one of the entities of interest. Once everything is segmented to the level I like, I prefer to do volumetric smoothing of each entity (bone) before creating the surface models. These algorithms are an active area of research, typically in image processing groups within university electrical engineering departments. They are not a silver bullet that works in all situations; there are a variety of other methods (some available as extensions to 3DSlicer) for specific cases. Thin features, long tubular features, noisy data (metal artifacts), and low-quality scans (scouts) will still take more time and effort to get good results. No algorithm can take a low-resolution, low-quality scan and give you a nice model... garbage in = garbage out.
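The grow-from-seeds idea can be sketched in a few lines. This is a minimal toy in the spirit of that description, not the actual FastGrowCut algorithm (which also iterates to refine contested edges): each seed floods outward through voxels whose intensity stays close to the seed's, and the first wave to reach a voxel claims it.

```python
import numpy as np
from collections import deque

def grow_from_seeds(image, seeds, tol=20):
    """image: 2D int array. seeds: {label: (row, col)}.
    Each seed claims connected pixels within `tol` of its seed intensity;
    the first wave to reach a pixel wins. Returns labels (0 = background)."""
    labels = np.zeros(image.shape, dtype=int)
    queue = deque()
    for lab, (r, c) in seeds.items():
        labels[r, c] = lab
        queue.append((r, c, lab, image[r, c]))
    while queue:
        r, c, lab, ref = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and labels[nr, nc] == 0
                    and abs(int(image[nr, nc]) - int(ref)) <= tol):
                labels[nr, nc] = lab          # claim the pixel
                queue.append((nr, nc, lab, ref))
    return labels

# Two bright 'bones' on a dark background, one seed in each:
img = np.zeros((10, 10), dtype=int)
img[1:4, 1:4] = 100   # bone 1
img[6:9, 6:9] = 100   # bone 2
out = grow_from_seeds(img, {1: (2, 2), 2: (7, 7)})
```

The growth stops at the dark background, so each bone gets its own label while the background stays 0; the real algorithms work in 3D on the full voxel grid.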
Now, I have been surprised (and have bemoaned) to find thresholding used as a common segmentation technique, often as the main tool, even in expensive commercial programs. That style typically involves applying a threshold and then going in and cleaning up the model until you get something close to what you want. To me this seems rather antiquated; for quickly viewing data or creating a quick and rough model it really can't be beat, but for creating high-quality models to be printed there are better ways.
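For contrast, the thresholding style is literally a one-liner (toy intensity values below, not real Hounsfield data), which is why it is so tempting and why it needs the clean-up pass:

```python
import numpy as np

# A single global cutoff: everything at or above it becomes 'bone'.
# Fast, but it cannot separate touching bones and fails on MRI's
# uncalibrated intensities -- hence the manual clean-up afterwards.
volume = np.array([[0, 300, 700],
                   [100, 800, 650],
                   [0, 50, 900]])   # made-up intensities
bone_mask = volume >= 600
print(bone_mask.sum())              # number of voxels labeled as bone
```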