Here is a tutorial for the Grayscale Model Maker module in the free program 3D Slicer, specifically for modeling pubic bones, since they are used in anthropology for age and sex estimation. The Grayscale Model Maker is very quick and easy, and besides, I can't stand the "flashing" in the Editor!
For this example, I am using a scan from The Cancer Imaging Archive (TCIA), specifically from the CT Lymph Node collection.
Slicer Functions used:
Load Data/Load DICOM
Grayscale Model Maker
Load a DICOM directory or .nrrd file.
Make sure your volume loads into the red, yellow, and green slice views. Then select Volume Rendering from the module drop-down.
Select a bone preset, such as CT-AAA. Then click on the eye next to "Volume."
...Give it a minute...
Use the centering button in the top left of the 3D window to center the volume if needed. Since we only want the pubic bones, we will use the ROI box and Crop Volume tools to isolate that area.
To set up the crop, check the "Enable" box next to "Crop" and click on the eye next to "Display ROI" to show the ROI box. The box appears in all four views. The spheres on the box can be grabbed and dragged in any view to adjust its size. The 3D view is pretty handy for this, since you can rotate the model around to get exactly the area you want.
The ROI doesn't have to be perfectly symmetrical, because you can always edit the model later. Once you like the ROI, we can crop the volume.
To crop the volume, go to the drop-down in the top toolbar, select "All Modules" and navigate to "Crop Volume."
Once the Crop Volume workspace opens, just hit the big Crop button and wait. You won't see a change in the 3D window, but you will see your slice views adjust to the cropped area. At this point, you can Save your subvolume that you worked so hard to isolate in case your software crashes! Select the Save button from the top left of the toolbar and select the .nrrd with "subvolume" in the file name to save.
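Conceptually, all Crop Volume is doing is keeping the voxels that fall inside the ROI box (Slicer also handles spacing and resampling internally). Here is a minimal stdlib Python sketch of that idea, using a made-up nested-list volume rather than real scan data:

```python
# Toy illustration of cropping a volume to an ROI: keep only the voxels
# inside the given index ranges. The volume here is fabricated, not a scan.

def crop_volume(volume, i_range, j_range, k_range):
    """Return the subvolume inside the index ranges (inclusive start, exclusive stop)."""
    i0, i1 = i_range
    j0, j1 = j_range
    k0, k1 = k_range
    return [[row[k0:k1] for row in sl[j0:j1]] for sl in volume[i0:i1]]

# 4x4x4 toy volume where each voxel value encodes its own (i, j, k) position.
vol = [[[i * 100 + j * 10 + k for k in range(4)] for j in range(4)] for i in range(4)]
sub = crop_volume(vol, (1, 3), (0, 2), (2, 4))
print(len(sub), len(sub[0]), len(sub[0][0]))  # 2 2 2 -- the cropped extent
print(sub[0][0][0])  # 102 -- the voxel that was at i=1, j=0, k=2
```

The original volume is untouched, which is why saving the subvolume as its own .nrrd is worthwhile.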
Now we will use the All Modules dropdown to open the Grayscale Model Maker. If you want to clear the 3D window of the volume rendering and ROI box, you can just go back to Volume Rendering, uncheck the Enable box and close the eyes for the Volume and ROI.
When using the Grayscale Model Maker, the only tricky part is selecting your "subvolume" from the "Input Volume" list; otherwise, your original uncropped volume will be used.
Click on the "Output Geometry" box and select "Create a new Model as..." and type in a name for your model.
Now move down to "Grayscale Model Maker Parameters" in the workspace. I like to enter the same name for my Output Geometry into the "Model Name" field. Enter a threshold value: 200 works well for bone, but for lower-density bone you might need to adjust it down. Since the Grayscale Model Maker is so fast, I usually start with 200 and make additional models at lower values to see which works best for the current volume. ***Here is where I adjust settings for pubic bones in order to retain the irregular surfaces of the symphyseal faces.*** The default values for the Smoothing and Decimate parameters work well for other bones, but for the pubic symphyses they tend to smooth out all the relevant features, so I slide them both all the way down. Then hit Apply and wait for the model to appear in the 3D window (it will be gray).
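The threshold is just the intensity cutoff for the isosurface: voxels at or above it end up inside the model. This toy sketch (the intensity values are fabricated, not from this scan) shows why lowering the threshold captures more low-density bone:

```python
# Illustrative only: made-up intensity values, not data from the tutorial's CT.
# Voxels at or above the threshold end up inside the extracted surface.

def inside_surface(values, threshold):
    """Return, for each sample value, whether it falls inside the model."""
    return [v >= threshold for v in values]

samples = [50, 150, 180, 220, 400, 900]  # soft tissue -> dense cortical bone (fabricated)
print(sum(inside_surface(samples, 200)))  # 3 voxels survive at the bone default of 200
print(sum(inside_surface(samples, 150)))  # 5 -- a lower threshold keeps lower-density bone
```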
You can see from the image above that my model is gray but still has the beige from the Volume Render on it, since I didn't close the Volume Rendering. If for some reason you don't see your model: 1) check your Input Volume to make sure your subvolume is selected, 2) click on that tiny centering button at the top left of your 3D window, or 3) go to the main drop-down and open "Models." If the model actually generated, it will be listed there with the name you specified, but sometimes the eye will be closed, so open it to see your model. Now we can save the subvolume and model using the Save button in the top left of the main toolbar. You can uncheck all the other options, save just the subvolume .nrrd, and change the file type of your model to .stl. Click on "Change Directory" to specify where you want to save your files, and Save!
This model still needs some editing to be printable, so stay tuned for Pt. 2 where I will discuss functions in Meshlab and Meshmixer.
Thanks for reading and please comment if you have any issues with these steps!
Here is another tutorial on hollowing meshes, specifically head meshes to obtain a face shell, but I use this method to hollow out bones as well.
Dr. Mike recently posted a great video tutorial on hollowing a head using Meshmixer: https://www.embodi3d.com/blogs/entry/359-how-to-create-a-hollow-shell-from-a-medical-stl-file-using-meshmixer/.
I tend to go back and forth between Meshmixer and Meshlab for different functions to prep a print, but I like to use Meshlab for hollowing because it's quick and you can easily control how much "external" surface is selected, which is especially handy for models that have highly complex internal structures.
Note that this workflow is also useful if you simply want a smaller 3D model file (for viewing/interacting in software or on Sketchfab) where you don't need the internal structures and don't want to decimate the model to shrink the file size.
Here are the steps to hollow a head model in Meshlab. I will post screenshots below, which you can also find in the Gallery: https://www.embodi3d.com/gallery/album/73-hollowing-skin-model-with-meshlab/.
Import a model into Meshlab.
Go to Filters --> Color Creation and Processing --> Ambient Occlusion per Vertex.
When the new box opens, check the "Use GPU Acceleration" box and click "Apply." The default settings are fine for a first pass.
Once you become comfortable with the workflow, you can play around with applying the light from different axes: "Lighting Direction" and "Directional Bias".
You will notice that your model is now colorized from light to dark, with "deeper" areas shaded darker.
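The shading Meshlab stores per vertex is essentially the fraction of sample directions from which that vertex is exposed to light, so buried internal vertices end up near 0 and fully exposed skin near 1. A toy sketch of that normalization (the ray-hit counts here are fabricated, since real AO casts actual rays against the mesh):

```python
# Toy ambient-occlusion quality: fraction of sample rays that escape the mesh
# without being blocked. The hit counts below are made up for illustration.

def ao_quality(unoccluded_rays, total_samples):
    """Quality in [0, 1]: 0 = fully occluded (internal), 1 = fully exposed."""
    return unoccluded_rays / total_samples

print(ao_quality(0, 64))   # 0.0  -> internal vertex, shaded black
print(ao_quality(64, 64))  # 1.0  -> fully exposed skin vertex
print(ao_quality(16, 64))  # 0.25 -> deep crease, e.g. inside a nostril
```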
On the main toolbar, select the "transparent wireframe" view.
You can now see the internal structures that are shaded completely black.
We can now use the shading values to select the areas we want to remove.
Go to Filters --> Selection --> Select Faces by Vertex Quality. The shading values are stored in the Vertex Quality field of your 3D model, with values from 0 (black) to 1 (white), so we can use these values to select the dark (internal or deep) areas we want to remove.
When the Selection box opens up, slide the "Min Quality" value all the way to 0 (to the left). Check the "Preview" box so that you can see which areas are selected in red.
Adjust the "Max Quality" slider left and right until you see that no external surfaces are selected in red. In the image below, you can see that the bottom edges of the eyelids are still red and some skin below the nostrils is also red. When you find a good value, click "Apply" and Close.
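Under the hood, this selection is just a range test against each vertex's stored quality value. A minimal sketch with fabricated quality values shows why nudging Max Quality too high starts grabbing external features:

```python
# Fabricated per-vertex AO quality values (0 = black/internal, 1 = white/external).
# "Select Faces by Vertex Quality" with Min at 0 selects everything up to Max.

def select_by_quality(qualities, min_q=0.0, max_q=0.1):
    """Return indices of vertices whose quality falls within [min_q, max_q]."""
    return [i for i, q in enumerate(qualities) if min_q <= q <= max_q]

quality = [0.0, 0.02, 0.08, 0.15, 0.6, 0.95]  # internal -> fully lit (made up)
print(select_by_quality(quality, 0.0, 0.1))  # [0, 1, 2] -> internal vertices to delete
print(select_by_quality(quality, 0.0, 0.2))  # [0, 1, 2, 3] -> too high: an eyelid-like vertex at 0.15 gets caught
```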
**Depending on the model, it may be difficult to adjust the Max slider to a value that doesn't include parts of the eyelids or nose, but I will explain in Step 6 how you can recover these features. Instead of deleting the selection in Step 5, skip to Step 6.**
Once you are happy with your selection from Step 4, you can delete everything selected in red by clicking the button shown in the image below. You can see that the model is now hollow, although there may be some disconnected pieces which we will remove in multiple cleaning steps.
If you think you may have selected some external features in Step 4 that you don't want deleted, instead of deleting (Step 5), you can move the selected (red) areas to another layer. Sometimes with overhanging eyelids or very deeply set eyes, these areas might have the same shading values as some internal structures and can't be excluded from the red.
Go to Filters --> Mesh Layer --> Move selected faces to another layer (if your layer dialog is already open, you can right-click on the model name to access the Mesh Layer menu as well). The layer dialog will open up on the right and you will see the name of your original model as well as the new layer. Use the eye icons to toggle visibility.
The Meshlab selection tools can be used to select the areas from the red you want to keep, then move them to another layer. Right-clicking on a mesh name will open the Mesh Layer menu, from which you can "Flatten Visible Layers"--the layers you want to keep can be kept visible and merged into a new mesh.
This image shows the view from the bottom. The head is empty except for that big flat piece at the top of the head.
As an initial cleaning step to remove small pieces, go to Filters --> Cleaning and Repairing --> Remove Isolated pieces (wrt Diameter). The default size works well, but you can adjust it up to 40% or so to remove larger pieces. This is a deletion function, so the floating pieces will be removed and gone forever! Try not to adjust the size too high--we'll remove large pieces in Step 9.
Step 8 will usually not remove large pieces, especially if you're being cautious and only removing small ones. To remove larger pieces, go to Filters --> Mesh Layer --> Split in Connected Components. The pieces will drop into separate layers in the layer dialog box on the right, named CC 0, CC 1, etc. Don't apply this filter until you've removed the small pieces, or you might crash the program because too many pieces separate out! As mentioned above, the Mesh Layer menu can also be accessed by right-clicking on the mesh name in the right-hand layer dialog box.
The largest layer is usually CC 0. Toggle visibility to figure out which layer is the one you want. Left-click on it to highlight it in yellow and then export using File --> Export Mesh as...
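"Split in Connected Components" is standard graph traversal: faces that share edges get grouped together, and the shell you want to export is simply the largest group. A toy sketch with a fabricated adjacency graph (real meshes have thousands of faces, of course):

```python
# Toy connected-components split: group faces that share edges, then keep
# the largest group (the "CC 0" you'd export). The adjacency is made up.
from collections import deque

def connected_components(n_faces, adjacency):
    """adjacency: dict of face -> neighboring faces. Returns components as sets."""
    seen, components = set(), []
    for start in range(n_faces):
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            f = queue.popleft()
            if f in comp:
                continue
            comp.add(f)
            queue.extend(adjacency.get(f, []))
        seen |= comp
        components.append(comp)
    return components

# Faces 0-3 form the head shell; faces 4-5 are a floating internal fragment.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
comps = connected_components(6, adj)
largest = max(comps, key=len)  # the component to keep and export
print(sorted(largest))  # [0, 1, 2, 3]
```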
I prefer to fill holes (Inspector) and create internal walls (Extrude or Offset) in Meshmixer, so you can now import the hollowed model to Meshmixer to fix it up for printing if needed. You can also use the plane cut tool in Meshmixer to remove the flattened edge at the top of the skin model, or apply Ambient Occlusion again in only the z-direction (see Step 1--"Lighting Direction").
This can be an iterative process depending on the complexity of the model you're trying to hollow, but it can save on printing time as well as $$ if you're only interested in the external surface. Play around with lighting directions to select the surfaces you want, and as always, SAVE meshes along the way in case the program crashes or you make a mistake!
I thought I'd do a quick post on why anthropologists need 3D printed bones in case anybody's interested.
Real bones are expensive! Although we have real skeletons for teaching osteology, we are often limited to teaching the identification and examination of whole bones. For both forensic and archaeological contexts, osteologists need to be able to identify bones that are incomplete, scavenged, weathered, burned, or damaged in some other way. In such situations, the first question is whether or not the bone is human. In order to teach this advanced level of identification, we need bone fragments. We can't go around smashing bones to create the fragments, and if you're at an institution without a large archaeological collection of bones, 3D printing, especially of CT scans, can provide some fragments. Because CT scans contain internal structures (as opposed to laser scans of bones), we can digitally slice long bones to create cross-sections or cut models in ways that bone frequently fragments. We can potentially simulate trauma as well, although scans of bones with trauma or pathology would be even better.
I've recently started working with the Virtual Curation Laboratory (https://vcuarchaeolo....wordpress.com/) to 3D print bone fragments, whole bones, and bones with pathology or trauma. All of these things can be used to create "case studies" of single individuals or commingled individuals as well, and since they're plastic, we would have no problem using them outside for field exercises and excavations. Having age and/or sex is also important since higher quality 3D printed bones could be analyzed for those traits as well.
I've added some pictures from a recent conference at VCU where we presented our preliminary work and displayed a few printed bones. Some of them still have some support structures, but you can see what we're going for.
Thanks for reading!