FIRST v1.2

FMRIB's Integrated Registration and Segmentation Tool

Subcortical brain segmentation using Bayesian shape & appearance models.


Contents

Introduction - References - Segmentation - Vertex Analysis - Volumetric Analysis - Models - Advanced Usage

Introduction

FIRST is a model-based segmentation/registration tool. The shape/appearance models used in FIRST are constructed from manually segmented images provided by the Center for Morphometric Analysis (CMA), MGH, Boston. The manual labels are parameterized as surface meshes and modelled as a point distribution model. Deformable surfaces are used to automatically parameterize the volumetric labels in terms of meshes; the deformable surfaces are constrained to preserve vertex correspondence across the training data. Furthermore, normalized intensities along the surface normals are sampled and modelled. The shape and appearance model is based on multivariate Gaussian assumptions. Shape is then expressed as a mean with modes of variation (principal components). Based on our learned models, FIRST searches through linear combinations of shape modes of variation for the most probable shape instance given the observed intensities in your T1 image.
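As a rough sketch of this last point (the notation here is illustrative rather than taken verbatim from the paper), a shape instance is generated from the mean mesh plus a weighted sum of the learned modes of variation:

  x = \bar{x} + \Phi b

where x is the vector of concatenated mesh vertex coordinates, \bar{x} is the mean shape, the columns of \Phi are the principal modes of variation estimated from the training meshes, and b contains the mode parameters that FIRST estimates by maximising the posterior probability of the shape given the observed intensities.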

For more information on FIRST, see the D.Phil. thesis or the FMRIB technical report. The thesis provides a more thorough and complete description.


References

The following reference is the main journal paper describing FIRST:

Patenaude, B., Smith, S.M., Kennedy, D.N., and Jenkinson, M.
A Bayesian Model of Shape and Appearance for Subcortical Brain Segmentation.
NeuroImage, 56(3):907-922, 2011.

There is also a thesis relating to FIRST that contains some more technical details:

Brian Patenaude. Bayesian Statistical Models of Shape and Appearance for Subcortical Brain Segmentation. D.Phil. Thesis. University of Oxford. 2007.

FIRST Training Data Contributors

We are very grateful for the training data for FIRST, particularly to David Kennedy at the CMA, and also to: Christian Haselgrove, Centre for Morphometric Analysis, Harvard; Bruce Fischl, Martinos Center for Biomedical Imaging, MGH; Janis Breeze and Jean Frazier, Child and Adolescent Neuropsychiatric Research Program, Cambridge Health Alliance; Larry Seidman and Jill Goldstein, Department of Psychiatry of Harvard Medical School; Barry Kosofsky, Weill Cornell Medical Center.


Segmentation using FIRST

The simplest way to perform segmentation with FIRST is to use the run_first_all script, which segments all of the subcortical structures, producing mesh and volumetric outputs (with boundary correction applied). It uses default settings for each structure that have been optimised empirically.

run_first_all
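A typical invocation looks like the following (image and output names are placeholders; run run_first_all with no arguments to see the full usage and option list):

  # Segment all structures from a whole-head T1 image; registration to
  # standard space is run automatically, outputs use the basename sub001_first
  run_first_all -i sub001_t1.nii.gz -o sub001_first

  # Input already brain-extracted (-b), segmenting only the hippocampi (-s)
  run_first_all -b -s L_Hipp,R_Hipp -i sub001_t1_brain.nii.gz -o sub001_hipp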

first_roi_slicesdir
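This script produces a slicesdir-style set of summary images, cropped around the segmented structures, so that results from many subjects can be checked quickly in a browser. A sketch of its use, assuming the default run_first_all output naming (filenames are placeholders; check the script's usage message for the expected argument order):

  # Summary pictures of the FIRST label images overlaid on the T1s
  first_roi_slicesdir *_t1.nii.gz *_all_fast_firstseg.nii.gz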

General Advice and Workflow

Below is a recommended way of running FIRST systematically. It is only a recommendation, but organizing your data differently from the suggested layout can lead to complications later on, especially when moving files.


Vertex Analysis

concat_bvars
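concat_bvars joins the per-subject .bvars files (the shape/mode parameters written by run_first_all) into a single file for vertex analysis; the subjects must be listed in the same order as the rows of the design matrix. A minimal sketch (filenames are placeholders):

  # Concatenate the left-hippocampus mode parameters of all subjects,
  # in the same order as the rows of design.mat
  concat_bvars all_L_Hipp.bvars sub01_L_Hipp.bvars sub02_L_Hipp.bvars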

Multiple Comparison Correction
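Significance testing with multiple comparison correction on the vertex analysis output is typically done with randomise. A sketch, assuming first_utils has already produced a 4D image of vertex-wise data and a corresponding mask (names are placeholders):

  # Permutation-based inference on the vertex-wise output, demeaning the
  # data (-D) and using 2D TFCE (--T2), which suits data lying on a surface
  randomise -i L_Hipp_vertex.nii.gz -m L_Hipp_vertex_mask.nii.gz \
            -d design.mat -t design.con -o L_Hipp_rand -D --T2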

Important Note

General Advice and Workflow


Volumetric Analysis
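Structure volumes for a simple volumetric analysis can be extracted from the summary label image written by run_first_all using fslstats, by thresholding around the label value of the structure of interest. A sketch, assuming the default output naming and the standard label value of 17 for the left hippocampus:

  # Report voxel count and volume (mm^3) of the left hippocampus (label 17)
  fslstats sub001_first_all_fast_firstseg -l 16.5 -u 17.5 -V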


Models



Advanced Usage

The following sections detail the more fundamental commands that the script run_first_all calls. If problems are encountered when running run_first_all, it is recommended that each of the individual stages described below be run separately in order to identify and fix the problem.

Registration

FIRST segmentation involves two stages: first run first_flirt to find the affine transformation to standard space, then run run_first to segment a single structure (re-running it for each additional structure that you require). Both stages are run automatically by run_first_all, which also produces a summary segmentation image covering all structures.

Please note that if the registration stage fails, the model fitting will not work, even though run_first_all will continue to run and may still produce outputs.
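A sketch of running the two stages by hand (filenames, the number of modes and the model file are placeholders; the models shipped with FSL live under ${FSLDIR}/data/first):

  # Stage 1: affine registration of the whole-head T1 to standard space,
  # producing the transform sub001_to_std_sub.mat
  first_flirt sub001_t1 sub001_to_std_sub

  # Stage 2: fit the shape/appearance model for one structure using that
  # transform, a chosen number of modes (-n) and the model for that structure
  run_first -i sub001_t1 -t sub001_to_std_sub.mat -n 40 \
            -o sub001_L_Hipp -m <model_file_for_L_Hipp>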


Segmentation


Boundary Correction


first_utils

This command can be used to fill meshes (converting them to volumetric images), as well as to run vertex analysis.
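A sketch of the vertex analysis call, using the concatenated .bvars file from concat_bvars and a design matrix whose rows match the subject order (option names as recalled from the FIRST documentation; run first_utils --help to confirm):

  # Vertex analysis on the concatenated mode parameters, reconstructing the
  # surfaces in MNI space (--useReconMNI) before testing vertex locations
  first_utils --vertexAnalysis --usebvars -i all_L_Hipp.bvars \
              -d design.mat -o L_Hipp_vertex --useReconMNI -v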


Copyright © 2006-2009, University of Oxford. Documentation written by Brian Patenaude, Aaron Trachtenberg and Mark Jenkinson.