Sunday 25 October 2015

Finite element analysis: the importance of being valid

On the back of finally getting another paper out:


it required another blog post. I won't over-blog it, as it is open access and apparently far clearer to my parents than some of my other papers (my metric for how overly complicated I've made things). This one is about the importance of validating finite element analyses (see FEA for "dummies"), but it will also touch on the joys of trying to publish negative results (i.e. when experiments don't match computer models).

A quick background for those who don't want to read the previous post: finite element analysis (FEA) is a method for analysing how complex structures deform under loads, by simplifying them into a series of finite interconnected units (be they bricks, tetrahedra or triangles: the elements) that have been given material properties appropriate for the structure (e.g. if it is a steel beam, the elements are given the structural properties of steel). The method is known to work incredibly well on man-made objects, and it is indeed the engineering tool used for everything from designing cars (and crashing them virtually) and planes, to bridges and buildings. Basically, for anything an engineer might build, there is probably a finite element model out there somewhere.

You may see where I am going with this: the method works with varying degrees of success on biological structures when replicating strain magnitudes and orientations. Most recent work on mammals (monkeys, pigs) and reptiles (particularly alligators) manages to get very close replication of strain patterns across the models, but to date few studies have looked at birds. Birds are important because they have very mobile skulls (with loads of extra little joints compared to most mammal and reptile skulls), and, in a palaeontological context, because they are the nearest living relatives of the dinosaurs (being descended from them).
Many studies have looked at how dinosaur skulls perform under feeding loads, but what does that really mean if we don't know how accurate the models are even for their living relatives?
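To make the "interconnected units" idea above concrete, here is a minimal, purely illustrative FEA sketch: a steel bar chopped into simple two-node elements, fixed at one end and pulled at the other. Real cranial models use millions of 3D tetrahedra or bricks rather than 1D bars, but the assemble-the-stiffness-matrix-and-solve logic is the same.

```python
# Minimal 1D finite element sketch (illustrative only, not the method from the
# paper): a steel bar split into two-node "elements", fixed at one end and
# pulled at the other.
import numpy as np

E = 200e9       # Young's modulus of steel (Pa) - the "material property"
A = 1e-4        # cross-sectional area (m^2)
L = 1.0         # total bar length (m)
n_elem = 4      # number of finite elements
le = L / n_elem # length of each element

# Each two-node bar element contributes a 2x2 stiffness block k*[[1,-1],[-1,1]];
# overlapping blocks are summed into the global stiffness matrix K.
k = E * A / le
K = np.zeros((n_elem + 1, n_elem + 1))
for e in range(n_elem):
    K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.zeros(n_elem + 1)
F[-1] = 1000.0  # 1 kN tensile load on the free end

# The fixed constraint at node 0 is applied by solving the reduced system
u = np.zeros(n_elem + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print(u[-1])    # tip displacement; analytical answer is F*L/(E*A) = 5e-5 m
```

For a uniform bar the finite element answer matches the analytical one exactly; the method's power (and its potential for mismatch with experiments) appears once geometry and materials get as complicated as a skull.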

So, building on the previous limited work on ostrich mandibles (Rayfield 2011) and finch beaks (Soons et al., 2012a,b,c), and in preparation for trying to understand ornithomimosaur (ostrich-mimic dinosaur) skull function, I started working on validating an ostrich cranium (n.b. the skull is the cranium plus the jaws). We had some frozen ostrich skulls from an ostrich farm in the UK, and I used several over the course of the project: first for a practice dissection, then for a practice experiment, one for the actual experiment/validation, and one more for material property testing. The one used for the validation was sent frozen to Hull York Medical School for CT scanning prior to any other work, so we had a full digital copy and could use it to build the computer models.

Labelled ostrich crania, showing the ‘average’ ten month old ostrich crania. From Cuff, 2014.

Myological reconstructions of an ostrich skull. A) M. depressor mandibulae, B) M. adductor mandibulae externus, C) M. adductor mandibulae posterior, D) M. pseudotemporalis profundus, E) M. pseudotemporalis superficialis, F) M. pterygoideus. From Cuff, 2014.
From the initial work, it was decided that the M. pseudotemporalis superficialis (see E in the above figure) was the best muscle to use for loading. I dissected the muscles of the experimental specimen, and from the dissection I was able to measure muscle mass, fibre lengths and fibre angles; from these metrics you can estimate the force a muscle can produce. I actually measured a higher potential force production than we used, but this kept us well within the safety factor of the experimental set-up and the cranium whilst still producing visible bending. For the experiment, Jen Bright (now at Sheffield) and I first had to devise a way of loading the cranium that would replicate a muscle. Previous work has either used the original muscle, or screwed a metal attachment to the skull. We tried something in between: screwing on an artificial tendon (made of layers of fibreglass, resin and a carbon fibre loop) that would allow flexible load application (a design by Colin Palmer, an engineer and now a part-time PhD student at Bristol).
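Going from muscle mass, fibre length and fibre angle to a force estimate is usually done via the muscle's physiological cross-sectional area (PCSA), multiplied by a specific tension. A sketch of that standard calculation is below; the input numbers are hypothetical placeholders, not the values measured for this ostrich, and the density and specific tension constants are typical literature values rather than the ones used in the paper.

```python
# Estimating maximum muscle force from dissection measurements via the standard
# PCSA approach. All numbers here are illustrative, NOT the paper's values.
from math import cos, radians

def muscle_force(mass_g, fibre_length_mm, pennation_deg,
                 density=1.06e-3, specific_tension=0.3):
    """Return an estimated maximum isometric force in newtons.

    density: muscle density in g/mm^3 (~1.06e-3 is a common literature value)
    specific_tension: muscle stress in N/mm^2 (~0.2-0.3 is commonly assumed)
    """
    # PCSA = mass * cos(pennation angle) / (density * fibre length)
    pcsa_mm2 = mass_g * cos(radians(pennation_deg)) / (density * fibre_length_mm)
    return pcsa_mm2 * specific_tension

# Hypothetical example: a 20 g muscle with 15 mm fibres at 20 degrees pennation
print(round(muscle_force(20.0, 15.0, 20.0), 1))
```

Note the trade-off the paragraph above describes: having estimated the muscle's maximum force this way, we deliberately applied less than it in the experiment, to stay within the rig's safety factor.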

From Cuff et al., 2015. Artificial tendon. (A) Schematic of the artificial tendon construction showing the carbon fibre loop sandwiched between layers of fibreglass. (B) The artificial tendon screwed into place on the M. pseudotemporalis superficialis. Screws highlighted in black circles.
Once the tendon was attached, 13 strain gauges were applied to the dissected ostrich cranium, and the cranium was then placed on the rig. Anyone who follows the field may have noticed this is the same rig as seen in some of Jen Bright's earlier work on pigs (hence the affectionate name "pig rig", which has now become the "ostretch").

From Cuff et al., 2015. Ex-vivo experimental set up. (A) Experimental testing of ostrich with gauges attached, under loading of the artificial tendons. (B) Schematic of experimental rig showing load and constraints.
From there we applied the loads and measured the strains with the gauges. Unfortunately, for reasons we do not know, gauge 6 was not functional during the experiment. Then came the fun part: building a computer model that was, to the best of our abilities, identical to the experimental set-up. This involved first isolating the bone of the cranium (two types: the surface cortical bone, and the deeper, honeycomb-like trabecular bone), the beak, and the sutures.

From Cuff et al., 2015. Digital reconstruction of the ostrich skull. Red triangles represent the constraints, black arrows show orientation and location of loads, red rectangles are membrane elements that mirror the strain gauges. Gauge 6 was non-functional so was not included in the model, but its location is marked. The blue lines are sutures, and the yellow material is the keratinous rhamphotheca. The trabecular bone is not visible. Gauges labelled with an asterisk (*) are sites where nanoindentation was performed. Direction from grid one is labelled as the white arrow from which strain orientation were measured.
And here is what the skull more or less looks like under loading, to give an idea of the areas where strains will be highest (NB this is only an example: a model with cortical bone only, no beak, and loaded with all the muscles).

From Cuff 2014. Ostrich cortical bone, and muscle model showing strain patterns.
As you can see from the two images showing the ostrich models, losing gauge 6 is a shame, as it sat in one of the high-strain areas. This becomes important when considering strain magnitudes (effectively the amount of deformation), which don't match particularly well:

From Cuff et al., 2015. Maximum and minimum principal strains for both ex-vivo experiments, and finite element models in microstrain. (A) Maximum, and (B) minimum principal strain for models with material properties from the literature; (C) Maximum, and (D) minimum principal strain for models with posthoc material properties; (E) Maximum, and (F) minimum principal strain for models with material properties from nanoindentation. Material properties for each model are listed in Table 1. Note that both experimental trials are shown.
This is particularly true for absolute magnitudes of maximum principal strain, where gauge 7 far exceeds anything we could reasonably produce, but from here you can see the recurring theme for the other metrics we measured. Strain magnitudes, ratios (maximum:|minimum|) and strain orientations are similar in certain places, but don't match as well as we would expect in others. Generally the patterns are correct (where strains are high or low), but that is the best we could achieve no matter what material properties we used (and we included some novel ostrich property measurements).
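For anyone unfamiliar with where these numbers come from: each gauge site yields maximum and minimum principal strains (and an orientation) from the readings of a multi-arm rosette. Below is a sketch of the textbook data reduction for a 0/45/90-degree rectangular rosette, with made-up readings; I don't claim this is the exact gauge type or reduction used in the paper, just an illustration of how the principal strains and the max:|min| ratio are obtained.

```python
# Textbook reduction of a 0/45/90 degree strain gauge rosette to principal
# strains, orientation, and the max:|min| ratio. Readings are made up.
from math import sqrt, atan2, degrees

def principal_strains(ea, eb, ec):
    """Principal strains (same units as inputs) and orientation in degrees
    from three rosette arm readings at 0, 45 and 90 degrees."""
    centre = (ea + ec) / 2.0
    radius = sqrt(((ea - ec) / 2.0) ** 2 + ((2 * eb - ea - ec) / 2.0) ** 2)
    e_max, e_min = centre + radius, centre - radius
    # angle of the maximum principal strain measured from arm a
    theta = 0.5 * degrees(atan2(2 * eb - ea - ec, ea - ec))
    return e_max, e_min, theta

# Hypothetical rosette readings in microstrain
e_max, e_min, theta = principal_strains(300.0, 150.0, -100.0)
ratio = e_max / abs(e_min)   # the max:|min| ratio compared across gauge sites
print(e_max, e_min, theta, ratio)
```

Comparing these quantities gauge by gauge between experiment and model is exactly where the mismatches described above show up.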

These results are particularly interesting as similar methods have worked on mammals and alligators, producing models that closely match the experiments. Why the results are so far off in our models is unknown, and something that needs further investigation. It may come down to how we modelled the materials of the cranium, or the joints in the skull being far more complex than we modelled them, or our new tendons behaving worse than previous approaches, or a myriad of other factors that I've not discussed here or in the paper. However, the data in the paper are all interesting, and this is the first full attempted cranium validation of a bird.

As a spin-off issue, the paper showed me how difficult it is to publish negative results. Negative results are where a study shows no match between models and experiments, or, in medical science, where the medicine is no better than a placebo. These results are really poorly represented in publishing, as they don't make sexy stories. This leads to the potential for experiments that don't work being repeated again and again over time:

From: http://theupturnedmicroscope.com/comic/negative-data/
My paper went through a round of major corrections at one of the "traditional" journals before being rejected when we added more data showing that the model doesn't match. So we sent it to PeerJ (a new open access journal more welcoming to all result types), which put it through a round of major revisions before accepting it. Most of the biggest problems stemmed from reviewers believing our results were wrong through some fault in the methodology and telling us to do more experiments (I accept some of the corrections were things we needed to clarify, tidy, or explain further). This is problematic because 1) the specimen quickly dries out during testing, so it would require completely redoing the whole thing, which took me almost a year, and 2) it perpetuates the trend of not publishing negative results. If the method doesn't work, why shouldn't we tell people it doesn't work, either to warn them off trying it again or to prompt modifications that might improve it? I believe that if our results had matched closely with no issues, the paper would have been published rapidly in the "traditional" journal rather than taking 2.5 years. It is something I would love to test, but the ethics of sending out for review papers with the same methods but differing results is a bit dubious and would require some thought. If anyone has any ideas or willingness to get involved in this, please let me know.

References
Cuff AR, 2014. Functional mechanics of ornithomimosaurs. Thesis. University of Bristol.
Rayfield EJ. 2011. Strain in the ostrich mandible during simulated pecking and validation of specimen-specific finite element models. Journal of Anatomy 218:47-58.
Soons J, Herrel A, Aerts P, Dirckx J. 2012a. Determination and validation of the elastic moduli of small and complex biological samples: bone and keratin in bird beaks. Journal of the Royal Society Interface 9:1381-1388.
Soons J, Herrel A, Genbrugge A, Adriaens D, Aerts P, Dirckx J. 2012b. Multi-layered bird beaks: a finite-element approach towards the role of keratin in stress dissipation. Journal of the Royal Society Interface 9:1787-1796.
Soons J, Lava P, Debruyne D, Dirckx J. 2012c. Full-field optical deformation measurement in biomechanics: digital speckle pattern interferometry and 3D digital image correlation applied to bird beaks. Journal of the Mechanical Behavior of Biomedical Materials 14:186-191.