⟨Δx²(t)⟩₂ ≡ (2R/a) D_rr(R, t)    (5)

If the material is homogeneous and isotropic on length scales significantly smaller than the tracer, incompressible, and connected to the tracers by uniform no-slip boundary conditions over their entire surfaces, the two MSDs will be equal: ⟨Δx²(t)⟩₂ = ⟨Δx²(t)⟩. If these boundary and homogeneity conditions are not satisfied, the two MSDs will be unequal. In this case, applying the two-point MSD in the GSER will still yield the "bulk" rheology of the material (on the long length scale R), while the conventional single-particle MSD will report a rheology that is a complicated superposition of the bulk rheology and the rheology of the material at the tracer boundary (Levine and Lubensky, 2001).

Fig. 8 Schematic of displacements used to compute the two-point MSD.

As mentioned earlier, two-point measurements will readily detect sample vibration and drift, as these effects lead to completely correlated tracer motion. When this artifactual motion is significant compared to the Brownian signal, we fit the Drr(R, t) function to a form (A(t)/R) + B(t). By using the A(t) component exclusively to compute the 2P-MSD and rheology, we can reliably remove artifacts due to sample vibration and drift.
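The fit described above can be sketched as follows. This is an illustrative Python implementation (not the IDL routine itself), assuming we have samples of D_rr at a single lag time over a range of separations R; the A(t)/R term carries the true two-point signal, while the R-independent offset B(t) absorbs common-mode drift and vibration:

```python
import numpy as np

def drift_corrected_amplitude(R, Drr):
    """Fit Drr(R) at a fixed lag time to A/R + B by linear least
    squares. A carries the two-point signal; B absorbs the fully
    correlated motion produced by drift and vibration."""
    X = np.column_stack([1.0 / R, np.ones_like(R)])
    (A, B), *_ = np.linalg.lstsq(X, Drr, rcond=None)
    return A, B

# Synthetic check: a 1/R signal with amplitude 0.05 plus a
# drift-induced constant offset of 0.002 (arbitrary units).
R = np.linspace(2.0, 8.0, 25)        # separations, microns
Drr = 0.05 / R + 0.002
A, B = drift_corrected_amplitude(R, Drr)
```

The recovered A is then used in place of the raw D_rr when computing the 2P-MSD.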

In IDL: the two-point routine generates Drr from trajectory data. As with the routines described earlier, it requires the pixel size and the frame rate of the camera to produce data in physical units. It also requires the user to input a range of separations over which to correlate particles. The images of closely spaced particles can overlap, adding a spurious cross-correlation to their motion and confounding the TPM measurement. We find that a 2-µm minimum separation is sufficient to overcome this problem (Lau et al., 2003). The upper limit is determined by the cell's finite thickness, which is about 4 µm for the cells we study. Because the cell is a thick slab rather than an infinite three-dimensional solid, Drr deviates from the expected 1/R decay; however, we find these deviations become negligible when R is less than about twice the cell thickness, so we typically use an upper limit of 8 µm, and a smaller value for thinner cell types. A second routine converts Drr into ⟨Δx²(t)⟩₂ and can further delimit the minimum and maximum separations used in the calculation. Since the first routine can take hours to run while the second takes seconds, it is more efficient to set the R limits wide in the first and then reduce them to different extents in the second, in order to test the effect of the R limits on the final rheology. The "lfit" keyword causes the routine to perform the vibration/drift correction described above.
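For concreteness, here is a minimal Python sketch of how the pair products entering D_rr can be accumulated with the separation cuts described above. The function name and array layout are our own, for illustration, not those of the IDL suite:

```python
import numpy as np

def two_point_products(pos0, pos1, rmin=2.0, rmax=8.0):
    """Given positions of N tracers at two times (N x 2 arrays, in
    microns), accumulate the product of the two tracers' displacements
    projected onto their line of centers, for all pairs whose initial
    separation lies in [rmin, rmax]."""
    disp = pos1 - pos0                    # per-tracer displacements
    seps, products = [], []
    n = len(pos0)
    for i in range(n):
        for j in range(i + 1, n):
            sep_vec = pos0[j] - pos0[i]
            R = np.hypot(*sep_vec)
            if rmin <= R <= rmax:
                rhat = sep_vec / R        # unit separation vector
                products.append((disp[i] @ rhat) * (disp[j] @ rhat))
                seps.append(R)
    return np.array(seps), np.array(products)
```

Averaging the products within bins of R (and over many frame pairs at each lag time) yields D_rr(R, t).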

C. Applying Automated Image Analysis for Statistics

The statistical error in particle MSDs is readily estimated. If we approximate the distribution of tracer displacements (as in Fig. 7B) as a Gaussian, the standard error for the variance is simply √2⟨Δx²(t)⟩/√N_eff, where N_eff is the number of statistically uncorrelated measurements in the distribution. If an image series contains N_t tracers and spans a time interval T, then N_eff ≈ N_t T/t. That is, if we image a single particle for 10 sec at 50 frames per second, we have roughly 500 independent samples of the displacement for a lag time of 1/50 sec, but only 10 independent samples for a lag time of 1 sec. This t dependence causes the statistical errors to increase dramatically at longer lag times. As an example, if we were imaging a sample containing 100 tracers at 50 frames per second, and we wanted no more than 1% statistical error in the MSD over lag times from 1/50 to 1 sec, then we need N_eff = 10⁴ independent samples at t = 1 sec. Since we are pooling the results of 100 tracers, we need T = 100 sec of data. Obviously, analyzing the corresponding 5000 images and 500,000 tracer positions for this modest example requires efficient, automated image analysis. Algorithms that require the user to manually select particles to be tracked or to estimate their locations are not practical for this application.

In general, two-point correlation functions have much higher statistical noise, requiring the acquisition of significantly higher statistical power, a higher tolerance for noisy data, or both. The origin of this is straightforward to understand. The value of D_rr is the mean of a distribution of products Δr₁Δr₂; since both Δr₁ and Δr₂ are single-particle displacements, the width of the distribution of Δr₁Δr₂ is roughly ⟨Δx²(t)⟩, the conventional MSD, in the limit of weak correlation. In general, the two-point correlated motion D_rr is much smaller than the single-particle MSD ⟨Δx²(t)⟩. Indeed, in the most favorable case, the ratio of these two quantities according to Eq. (5) is 2R/a, which typically has a value of 10-20. We then expect that reliable measurement of the two-point MSD requires averaging at least (2R/a)², or several hundred [i.e., (10-20)²], times more Δr₁Δr₂ measurements than the Δx² measurements needed for a conventional MSD of similar statistical noise. This simple estimate is consistent with our experience.

Does this mean that rather than the 5000 images in our example to measure the conventional MSD, we now need to collect 500,000 images? Fortunately, that is not the case. In a field of view containing 100 randomly located tracers, each tracer might have 10 or more neighbors within the proper distance range for computing two-point correlations. Thus, each image gives us not 100 samples of Δr₁Δr₂ but more like several thousand. For this reason, the statistical noise of two-point measurements is highly sensitive to the number of tracers in the field of view. In general, if there are 100 or more tracers in a microscope field of view, then about 10 times as many images are required to accurately compute a two-point MSD as a conventional MSD. Alternatively, the statistical noise of the two-point measurement will be about √10, or just a few, times higher than that of the conventional MSD computed from the same data. It should be noted, however, that more statistical power is required for materials where the conventional MSD is much larger than the two-point MSD, scaling as the square of the MSD/2P-MSD ratio. In highly porous materials, the two-point signal can be so small compared to the "background noise" of uncorrelated tracer motion that it becomes hopelessly impractical to measure from a statistical point of view.

D. Converting MSDs to Rheology

Earlier we defined mathematical relationships that should allow analytical calculation of rheological properties from the MSD, a process called inversion, typically performed in Fourier space. However, notice that the Fourier transform integral spans all times from zero to infinity. This means we would need data sets spanning this same time interval in order to do the conversion analytically. While this cannot be achieved in practice, many numerical transform methods have been proposed to provide an approximation (Waigh, 2005), many of them stressing the importance of collecting data at high frequency over a very wide time range.

Here we describe a very simple method (Mason, 2000). At each lag time point, we estimate the logarithmic derivative (or slope on a log-log plot):

α(t) ≡ d ln⟨Δx²(t)⟩ / d ln t    (6)

One way to do this is to first take the logarithm of both the MSD and t values, and then, for each t point, fit a line to the logarithms of the MSD at a few points surrounding the chosen t. The value of this fit at the chosen t is a smoothed approximation of the MSD, and its slope is the logarithmic derivative. We then use an approximate, algebraic form of the GSER:

|G*(ω)| ≈ k_BT / [πa ⟨Δx²(1/ω)⟩ Γ(1 + α(1/ω))]    (7)
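The local log-log fit just described can be sketched in a few lines of Python (an illustrative version, not the IDL code); the window half-width is a free smoothing parameter:

```python
import numpy as np

def log_slope(t, msd, half_window=2):
    """Smoothed logarithmic derivative alpha(t) = d ln(MSD)/d ln(t),
    from a local linear fit in log-log space around each lag time."""
    lt, lm = np.log(t), np.log(msd)
    alpha = np.empty_like(lt)
    for i in range(len(lt)):
        lo = max(0, i - half_window)
        hi = min(len(lt), i + half_window + 1)
        slope, _ = np.polyfit(lt[lo:hi], lm[lo:hi], 1)
        alpha[i] = slope
    return alpha

# Sanity check on a pure power law, where alpha is exactly 0.5.
t = np.logspace(-2, 0, 20)
alpha = log_slope(t, 0.3 * t ** 0.5)
```

Widening the window smooths statistical noise at the cost of blurring genuine curvature in the MSD.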

where Γ represents the Gamma function. In using this expression, one first computes a set of ω values that are the reciprocals of the measured lag times. At each frequency point, the value of |G*(ω)| is then computed using only the values of the MSD and its logarithmic derivative at t = 1/ω. Formally, the value of |G*(ω)| at each frequency would require the numerical Fourier integration of the MSD over all lag times in the interval (0, ∞), but in practice that integral is dominated by the value of the integrand at ωt ≈ 1. Equation (7) is exact in the limit that the MSD has a purely power-law form. For other, more general forms, Eq. (7) is an excellent approximation at lag times where the MSD is locally well approximated by a power law, and is seldom more than 15% in error otherwise.
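As a sketch, evaluating the algebraic GSER at ω = 1/t might look like the following; the k_BT/(πa⟨Δx²⟩Γ[1+α]) form, SI units, and default temperature are our assumptions for illustration:

```python
import math
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def gser_modulus(t, msd, alpha, radius, temperature=298.0):
    """Algebraic GSER in the spirit of Mason (2000): |G*(omega)| at
    omega = 1/t from the MSD (m^2) and its log-slope alpha, for a
    tracer of the given radius (m)."""
    omega = 1.0 / np.asarray(t)
    gamma_vals = np.vectorize(math.gamma)(1.0 + np.asarray(alpha))
    gmag = kB * temperature / (math.pi * radius * np.asarray(msd) * gamma_vals)
    return omega, gmag

# Elastic plateau example: alpha = 0, MSD = 1e-14 m^2, a = 0.5 um,
# T = 300 K gives |G*| = kT / (pi * a * msd), a fraction of a pascal.
omega, gmag = gser_modulus([1.0], [1e-14], [0.0], 5e-7, 300.0)
```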

To make contact with more standard rheological representations, one can compute the following from |G*(ω)|:

δ(ω) = (π/2) α(1/ω),  G′(ω) = |G*(ω)| cos δ(ω),  G″(ω) = |G*(ω)| sin δ(ω)    (8)

where δ(ω) is the phase angle (δ = 0 indicates solid-like behavior and δ = π/2 liquid-like behavior), and G′ and G″ are the storage and loss moduli satisfying G* = G′ + iG″. In practice, we compute δ(ω) using a procedure identical to that for computing α(t). Note that when the phase angle is near 0 or π/2, small errors in δ(ω) [due either to statistical noise in the MSD or to systematic uncertainties in Eq. (7)] are amplified tremendously in the smaller modulus, G″ or G′, respectively. We have developed more accurate (but also more complicated) versions of Eqs. (7) and (8), which rely also on second logarithmic derivatives (Dasgupta et al., 2002). While these give more accurate results in general, for cell rheological data the difference is negligible.
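The split into storage and loss moduli is then a one-liner. Here we take the phase angle as δ = (π/2)α, an approximation valid for locally power-law MSDs (our assumption, consistent with the algebraic method above):

```python
import numpy as np

def storage_loss(gmag, alpha):
    """Split |G*| into storage and loss moduli using the phase angle
    delta = (pi/2)*alpha: G' = |G*|cos(delta), G'' = |G*|sin(delta)."""
    delta = 0.5 * np.pi * np.asarray(alpha)
    return gmag * np.cos(delta), gmag * np.sin(delta)

# alpha = 0 gives a purely elastic response (G'' = 0);
# alpha = 1 gives a purely viscous one (G' = 0).
```

The amplification of small δ errors near these limits is visible here directly: for δ near 0, G″ = |G*| sin δ ≈ |G*| δ, so the fractional error of G″ equals the absolute error of δ divided by δ itself.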

Additionally, there are some artifacts that cannot be corrected for, no matter what algorithms are used. Because the shear modulus formally depends on the MSD over a finite range of frequencies, shear moduli at the extrema of the frequency range are subject to additional "truncation" errors. The algorithm above implicitly assumes that the power-law behavior at the extrema of the data set extends indefinitely to higher or lower frequencies. Any curvature, even the slightest ripple, at the extremal lag times can cause a disproportionate change in the shear moduli. Consequently, unless there is physical motivation for believing this extrapolation is justified, confidence in this part of the data should be low. Strictly speaking, it may be best to compute the rheology from the entire lag time range and then discard a decade (one order of magnitude) of frequency at each end of the curve.

MSDs themselves are often subject to artifacts at their frequency extrema as well, as described in Section V.E. Any systematic deviations of the MSD due to these effects will be further amplified in the shear moduli. Therefore, in some situations it might be wise to truncate the lag time range of the MSD prior to computing the rheology. In general, as the lag time range of the MSD being used is restricted, the ends of the curve will tend to "wiggle." This is a sign that the shear moduli being produced are subject to strong truncation errors.

One exercise we have found invaluable when utilizing this algorithm is to generate simulated MSD curves that resemble the actual data and to calculate rheological properties from them. Changing the dynamic range of such simulated data allows the identification of truncation effects. Furthermore, adding Gaussian-distributed noise to the data helps determine the artifacts associated with it. A final warning: a "reasonable" appearance of the calculated rheology (e.g., resembling something from a rheology textbook) does not mean the inversion is physical. The emergence of a noise floor in data from a particle diffusing in a liquid will lead to results that look almost exactly like the rheology of a Maxwell fluid, a common example in those texts.
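The simulated-MSD exercise can be set up in a few lines. Here (with arbitrary illustrative parameters) a purely diffusive MSD plus a constant static-error "noise floor" reproduces the spurious Maxwell-like signature: the log-slope rolls from near 0 (apparently elastic) to 1 (viscous) as lag time increases:

```python
import numpy as np

def synthetic_msd(t, diffusivity, noise_floor):
    """Diffusive 1D MSD (2*D*t) plus a constant static-error offset,
    mimicking the noise floor the text warns about."""
    return 2.0 * diffusivity * t + 2.0 * noise_floor ** 2

t = np.logspace(-3, 1, 50)                       # lag times, s
msd = synthetic_msd(t, diffusivity=0.01, noise_floor=0.05)

# Logarithmic slope versus lag time: ~0 where the floor dominates,
# ~1 where diffusion dominates -- exactly the crossover that can be
# mistaken for Maxwell-fluid rheology after inversion.
slope = np.gradient(np.log(msd), np.log(t))
```

Re-running with different dynamic ranges and added Gaussian noise shows how truncation and noise artifacts propagate into the computed moduli.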

In IDL: the program in our IDL suite that performs these calculations has built-in smoothing to help reduce statistical noise. In addition, it uses second-order formulae for smaller systematic errors (Dasgupta et al., 2002) and provides warnings when the computed shear moduli are numerically unreliable. Its input is either an MSD or a 2P-MSD.

V. Error Sources in Multiple-Particle Tracking

In this section, we describe several common sources of error in particle-tracking instruments. While some of these error sources can be mitigated by the use of high-quality equipment, most are due to irreducible physical limitations on imaging detector performance and illumination brightness. In practice, a good understanding of the origin of the different errors, followed by careful adjustment of the imaging system to optimal settings, can lead to significant performance improvements. We will describe three classes of error: random error, systematic error, and dynamic error.

A. Random Error (Camera Noise)

The more accurately a given particle can be located, the higher the quality of measurements of cellular rheology. In fact, with optimal particles, illumination, imaging, and software, it is possible to measure the position of a 400-nm particle to within nanometers. While it may seem remarkable to determine a particle's location this well with optical methods, the limiting precision of particle localization is quite different from the optical resolution, the limit below which structural details in complicated specimens cannot be discerned (typically about a quarter of the wavelength of light used). Instead, the situation is analogous to a familiar problem in curve fitting: finding the position of a local maximum, or peak, in a curve. If we have a reasonable number of evaluations of a peaked function and the values are very precise, a least-squares fitter can locate the peak center to arbitrarily high precision (relative to the width of the peak). In this analogy, the peaked function is the light intensity distribution of a tracer, with a width set by the tracer size or the optical resolution.

While some noise in a camera depends on the details of its construction and electronics, there are ultimate physical limits on the performance of all cameras due to the discrete nature of light itself. Imagine that we had a "perfect" camera that recorded the precise two-dimensional coordinates of all incoming photons arriving from a microscope. An "image" of a single small tracer might resemble a round cloud of points in a two-dimensional scatter plot (Fig. 9). From elementary statistics, we know that the standard errors σ_x, σ_y when computing the mean x and y center positions of the cloud (corresponding to the particle location) are:
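This shot-noise limit is easy to verify numerically. A Monte Carlo sketch, with assumed (illustrative) values for the cloud width and photon count, shows the centroid's standard error matching w/√N:

```python
import numpy as np

rng = np.random.default_rng(0)

# N photons drawn from a Gaussian "image" of rms width w; the
# standard error of the centroid (mean position) should be w/sqrt(N).
w, n_photons, n_trials = 0.15, 10_000, 200   # w in microns (assumed)
centroids = [rng.normal(0.0, w, n_photons).mean() for _ in range(n_trials)]
measured = np.std(centroids)                  # empirical scatter
predicted = w / np.sqrt(n_photons)            # shot-noise prediction
```

With these numbers the predicted localization error is 1.5 nm, far below the 150-nm width of the photon cloud, illustrating why localization precision can beat optical resolution.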

σ_x = σ_y = w/√N,

where w is the rms width of the photon cloud and N is the number of photons collected.
