Correction Phase

Localization microscopy experiments generate super-resolution images by temporally resolving the positions of subsets of fluorophores within the sample. During data acquisition, the microscope stage may exhibit lateral drift. Stage drift introduces positional errors for each localized fluorophore and should be corrected to obtain the most accurate position for each fluorophore within the sample.

DeltaVision Localization Microscopy includes two methods for adjusting fluorophore positions to account for lateral stage movement during data acquisition. The first method uses image correlation algorithms to measure the stage drift at specific time points in the data. The advantage of this method is that the drift is measured by image registration, so it is not necessary to include fiducial markers in the sample to track the drift.

In some cases, image correlation drift correction may not give accurate results. This can occur with sparsely labeled samples or when lateral drift occurs on very short time scales. To correct for drift in these cases, it is necessary to add fiducial markers to the sample. Fiducial markers increase the probability of accurate drift measurement with the image correlation algorithm, and they also allow the user to track the markers in every frame of the data acquisition using the Fiducial Drift Correction procedure (see Tracking Phase).
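The idea behind fiducial-based correction can be sketched as follows: the displacement of each fiducial marker relative to a reference frame is averaged to give the per-frame drift, which is then subtracted from every localization. This is only an illustrative sketch, not the DeltaVision implementation; the array layout (`fid_xy` as frames x fiducials x 2) and the function names are assumptions for the example.

```python
import numpy as np

def fiducial_drift(fid_xy, ref_frame=0):
    """Estimate per-frame lateral drift from tracked fiducial positions.

    fid_xy : array of shape (n_frames, n_fiducials, 2) holding the
        (x, y) position of each fiducial in each frame (hypothetical
        layout for this sketch).
    Returns an (n_frames, 2) array: the mean displacement of the
    fiducials relative to the reference frame.
    """
    return (fid_xy - fid_xy[ref_frame]).mean(axis=1)

def apply_correction(loc_xy, loc_frame, drift):
    """Subtract the measured drift from each localization.

    loc_xy : (n_locs, 2) localized positions.
    loc_frame : (n_locs,) frame index of each localization.
    """
    return loc_xy - drift[loc_frame]
```

Because every fiducial in a frame shares the same stage motion, averaging over fiducials suppresses the localization noise of any single marker.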

Image Correlation Drift Correction

After all of the fluorophores within the collected image frames have been detected and localized and a _LOC.txt file has been generated, the drift correction algorithm shifts each fluorophore’s position to account for lateral stage movement. The algorithm divides the data into smaller, user-specified “time” windows. The detected fluorophores in each time window are used to generate a super-resolution image of user-defined pixel size. Using image correlation algorithms, the image produced from the nth time window is compared to the image from the first time window. The algorithm detects persistent structures between the two images and aligns them; the offset between the two images is the measured x and y drift. The x and y drift can be measured using either a Gaussian fit or a weighted center-of-mass calculation applied to the image correlation function. Linear interpolation is performed to determine the drift between time points, and each fluorophore’s position is shifted accordingly.
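The steps above can be sketched in outline: render each time window as a 2D histogram, locate the cross-correlation peak against the first window (refined here with a weighted center of mass, one of the two peak-fitting options described above), and interpolate the per-window drift onto every frame. This is an illustrative sketch, not the DeltaVision implementation; the function names, the FFT-based correlation, and the 3x3 centroid window are assumptions for the example, and the sketch assumes the correlation peak does not fall on the image border.

```python
import numpy as np

def render(xy, pixel, shape):
    """Bin localizations into a super-resolution image (2D histogram)."""
    ix = np.clip((xy[:, 0] / pixel).astype(int), 0, shape[1] - 1)
    iy = np.clip((xy[:, 1] / pixel).astype(int), 0, shape[0] - 1)
    img = np.zeros(shape)
    np.add.at(img, (iy, ix), 1.0)
    return img

def xcorr_shift(ref, img, pixel):
    """Measure the (dx, dy) drift of img relative to ref.

    Computes the cross-correlation via FFT, takes the peak, and
    refines it with a weighted center of mass over the 3x3
    neighborhood around the peak (sub-pixel precision).
    """
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    corr = np.fft.fftshift(corr)          # zero shift now at the center
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    win = corr[py - 1:py + 2, px - 1:px + 2]
    gy, gx = np.mgrid[-1:2, -1:2]
    cy = (win * gy).sum() / win.sum()     # sub-pixel refinement
    cx = (win * gx).sum() / win.sum()
    dy = (py + cy - ref.shape[0] // 2) * pixel
    dx = (px + cx - ref.shape[1] // 2) * pixel
    return dx, dy

def interpolate_drift(window_times, drifts, frame_times):
    """Linearly interpolate per-window drift onto every frame time."""
    dx = np.interp(frame_times, window_times, drifts[:, 0])
    dy = np.interp(frame_times, window_times, drifts[:, 1])
    return np.stack([dx, dy], axis=1)
```

In this scheme each window's drift is measured against the first window (rather than the previous one), so measurement errors do not accumulate over the acquisition.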

Related Topics

Setting Up Image Correction

Setting Up Tracking

Reconstruction Phase

Setting Up Image Reconstruction

Using the Localization Results Viewer