@page tracking Tracking in a nutshell

Track reconstruction is the process of recovering the properties of a charged
particle from a set of measurements caused by its interaction with some form of
sensitive detector. The goal is to find which measurements are likely to have
been caused by which particle, to group them accordingly, and to estimate the
associated trajectory. Such charged particle trajectories form the basic input
to the majority of higher-level reconstruction procedures.

![Illustration of a track reconstruction chain starting from space points to fully formed tracks.](tracking/tracking.svg) {width=60%}

This section provides a high-level view of a track reconstruction chain, and is
largely based on @cite gessinger_befurt_2021_m23d2-xsq75. It gives a conceptual
overview of the basic building blocks, and connects these building blocks with
the concrete implementations in the core ACTS library, where available.
# Charged particle detection {#charged-particle-detection}

The first step in the chain to reconstruct charged particles is their
detection using sensitive elements. Charged particle detection can be
achieved in a variety of ways with very different technologies, all of which
measure the interaction of charged particles with matter. When such an
interaction occurs, the particle typically ionizes the surrounding
material. Particle detectors make use of this fact by converting the
resulting charge into a measurable signal in various ways.


@anchor segmentation

![Illustration of a one-dimensional (a) and a two-dimensional segmentation (b) of a silicon sensor.](tracking/segmentation.svg) {width=70%}

A very common electronic detection approach is the use of semiconducting
particle detectors, often made of silicon. When a charged particle traverses
such a sensor, it ionizes the material in the depletion zone that forms at
the junction of two differently doped semiconductor regions. The result is
pairs of opposite charge carriers. These charge pairs are separated by an
electric field and drift toward the electrodes. At this point, an electric
signal is created which can be amplified and read out. By means of
segmentation, the measured signal can be associated with a location on the
sensor. Silicon sensors are usually segmented in one dimension (*strips*) or
in two dimensions (*pixels*) (see Figure @ref segmentation).

# Track parametrization {#track-parametrization}

To express the properties of a particle's trajectory, a choice of parameters
has to be made. The parameters need to express all the relevant quantities of
interest. In the presence of a magnetic field, which affects the trajectories
of charged particles, the global position and momentum, as well as the charge,
are needed to fully specify the particle properties. In addition, a time
parameter can be included. Apart from the global reference frame, track
quantities often need to be represented with respect to a surface. This can be
achieved with a parametrization like

@f[
  \vec x = \left(l_0, l_1, \phi, \theta, q/p, t\right)^T
@f]

although other parameter conventions exist as well.
Figure @ref parameters illustrates this choice of parameters. @f$l_0@f$, @f$l_1@f$ are
the local coordinates on the corresponding surface, while @f$\phi \in [-\pi,\pi)@f$ and
@f$\theta \in [0,\pi]@f$ are the angles in the transverse and longitudinal
direction of the global frame, expressed with respect to the current location
along the trajectory, as indicated in Figure @ref parameters (b). @f$\theta@f$ is
the polar angle relative to the positive @f$z@f$-axis, and @f$\phi@f$ is the azimuthal
angle in the transverse plane. Finally, @f$q/p@f$ combines the charge of the
particle with the inverse momentum. In Figure @ref parameters (a), the global
momentum vector @f$\vec p@f$ is shown, which can be recovered from the parameters
@f$\vec x@f$ using @f$\phi@f$, @f$\theta@f$ and @f$q/p@f$.

@anchor parameters

![Illustration of the parametrization of a particle track with respect to a two-dimensional surface. (a) shows the local position, global momentum and their corresponding uncertainties. (b) displays the angles $\\phi$ and $\\theta$ in the transverse and longitudinal planes.](tracking/parameters.svg) {width=70%}

@anchor perigee

![Illustration of the perigee parametrization which uses the point of closest approach relative to a reference point. The impact parameter $d_0$, the position $l$ and the momentum vector $\\vec p$ are shown.](tracking/perigee.svg) {width=40%}

Aside from the nominal quantities captured in @f$\vec x@f$, the related
uncertainties and correlations need to be taken into account as well. They
can be expressed as a @f$6\times 6@f$ covariance matrix like

@f[
  C =
  \begin{bmatrix}
   \sigma^2(l_0)& \text{cov}(l_0,l_1) & \text{cov}(l_0, \phi) & \text{cov}(l_0, \theta) & \text{cov}(l_0, q/p) & \text{cov}(l_0, t) \\
   . & \sigma^2(l_1) & \text{cov}(l_1, \phi) & \text{cov}(l_1, \theta) & \text{cov}(l_1, q/p) & \text{cov}(l_1, t) \\
   . & . &  \sigma^2(\phi) & \text{cov}(\phi,\theta) & \text{cov}(\phi, q/p) & \text{cov}(\phi, t) \\
   . & . & . & \sigma^2(\theta) & \text{cov}(\theta, q/p) & \text{cov}(\theta, t) \\
   . & . & . & . & \sigma^2(q/p) & \text{cov}(q/p, t) \\
   . & . & . & . & . & \sigma^2(t)
  \end{bmatrix}
@f]

Here, @f$\text{cov}(X,Y)@f$ is the covariance of variables @f$X@f$ and @f$Y@f$, while
@f$\sigma^2(X)@f$ denotes the variance. As the covariance matrix @f$C@f$ is
symmetric, only the upper right half is shown above. The uncertainties
associated with the local position and the momentum direction are indicated in
Figure @ref parameters (a) as an ellipse and a cone around the momentum vector
@f$\vec p@f$, respectively.

# Particle propagation {#particle-propagation}

> [!tip]
> A dedicated description of the ACTS implementation of particle propagation
> can be found @ref propagation "here".

A central part of track reconstruction is the ability to calculate the
trajectory of a charged particle, given its properties at a given point. This
process, called *particle propagation* or *extrapolation*, is used to predict a
particle's properties after it has travelled a certain distance. In many cases,
the projected intersection with various types of surfaces is desired. The
trajectory of a charged particle is governed by the @ref magnetic_field
"magnetic field" through which it travels, as well as any material effects (see
@ref material). In the case of a homogeneous magnetic field, and in the absence
of material interaction, the particle follows a helical trajectory. Such a
helix can be calculated purely analytically, although surface intersections
still require numerical methods.

Often, however, magnetic fields are not homogeneous. In the presence of such
changing fields, the corresponding differential equations of motion need to be
solved using numerical integration techniques.

## Numerical integration {#numerical-integration}

In ACTS, numerical integration is done using the *Runge-Kutta-Nyström* (RKN) method.
Commonly used in its fourth-order variant, the RKN method calculates a
solution to an initial value problem that can be formulated generically as

@f[
\frac{dy}{dt} = f(t,y), \qquad y(t_0) = y_0,
@f]

where @f$y_0@f$ refers to the initial value of @f$y@f$ at @f$t_0@f$, and
@f$f(t,y)@f$ is the functional form describing the dynamics. The method then
successively approximates the analytical solution @f$y(t)@f$ in a stepwise
fashion. At each step @f$(t_n, y_n)@f$, the goal is effectively to approximate
the next value @f$y(t_{n+1})@f$. Using a step size @f$h@f$, the algorithm
evaluates the function @f$f@f$ at four points @f$k_{1-4}@f$:

@f[
\begin{aligned}
    k_1 &= f(t_n, y_n) \\
    k_2 &= f\left( t_n + \frac h 2, y_n + h \frac{k_1} 2 \right) \\
    k_3 &= f\left( t_n + \frac h 2, y_n + h \frac{k_2} 2 \right)\\
    k_4 &= f\left( t_n + h, y_n + hk_3 \right).
\end{aligned}
@f]

@anchor rk

![Illustration of the RKN method approximating a first order differential equation. Shown is the calculation of an estimate $y_{n+1}$ at $t_{n+1} = t_n + h$, based on the current step $(t_n,y_n)$. Shown are the four distinct points at which function $y(t)$ is evaluated, and which are blended to form the estimate.](tracking/rk.svg) {width=40%}

Figure @ref rk illustrates the meaning of these four points in relation to the
step size @f$h@f$. @f$k_1@f$ is the derivative at the current location,
@f$k_{2,3}@f$ use @f$k_1@f$ and @f$k_2@f$ respectively to calculate two
envelope derivatives at @f$h/2@f$, and @f$k_4@f$ uses @f$k_3@f$ to estimate
the derivative at @f$h@f$. Combining @f$k_{1-4}@f$, @f$(t_{n+1},y_{n+1})@f$
can be calculated as the approximation of @f$y(t_{n+1})@f$ via

@f[
\begin{aligned}
    y_{n+1} &= y_n + \frac 1 6 h ( k_1 + 2 k_2 + 2 k_3 + k_4)\\
    t_{n+1} &= t_n + h
\end{aligned}
@f]

by effectively averaging the four derivatives. It is apparent that
the step size crucially influences the accuracy of the approximation. A large
step size degrades the approximation, especially if the magnetic field changes
strongly. On the other hand, too small a step size will negatively affect the
execution time of the algorithm.

The Runge-Kutta-Nyström method from above can be adapted to handle the second
order differential equation needed for the equations of motion in question,

@f[
\frac{d^2 \vec r}{ds^2} = \frac q p \left( \frac{d\vec r}{ds} \times \vec B (\vec r) \right) = f(s, \vec r, \vec T), \qquad \vec T \equiv \frac{d \vec r}{ds},
@f]

with the global position @f$\vec r@f$, the path element @f$s@f$, the
normalized tangent vector @f$\vec T@f$ and the magnetic field @f$\vec B(\vec r)@f$ at
the global position. A slight modification of @f$k_{1-4}@f$ is also required,
incorporating the first derivative @f$\vec T@f$ into the evaluations of
@f$f(s, \vec r, \vec T)@f$, finally leading to

@f[
\begin{aligned}
  \vec T_{n+1} &= \vec T_n + \frac h 6 (k_1 + 2k_2 + 2k_3 + k_4) \\
  \vec r_{n+1} &= \vec r_n + h \vec T_n + \frac{h^2}{6} (k_1 + k_2 + k_3).
\end{aligned}
@f]


A strategy exists to dynamically adapt the step size according to the magnetic
field strength, based on a target accuracy that the algorithm tries to
achieve. Here, the step size @f$h@f$ is successively decreased and the
approximation recalculated until the accuracy goal is reached. Even with
these additional calculations, the approach is still preferable to a
consistently small step size.

## Covariance transport {#covariance-transport}

Aside from the prediction of the track parameters at a given path length, a key
ingredient for many downstream applications is the uncertainty information in
the form of the associated covariance matrix @f$C@f$. Conversions between
covariance matrices @f$C^i\to C^f@f$ can generally be achieved as

@f[
C^f = J \cdot C^i \cdot J^T,
@f]

using the Jacobian matrix

@f[
J = \begin{bmatrix}
    \frac{\partial l_0^f}{\partial l_0^i} & \cdots  & \frac{\partial l_0^f}{\partial t^i} \\
    \vdots & \ddots & \vdots \\
    \frac{\partial t^f}{\partial l_0^i} & \cdots  & \frac{\partial t^f}{\partial t^i}
  \end{bmatrix},
@f]

between initial and final parameters @f$\vec x^i@f$ and @f$\vec x^f@f$. The
task therefore becomes calculating the necessary Jacobians to achieve the
correct transformation.

One part is the transformation between different coordinate systems at the
same location along the trajectory. For this purpose, generic Jacobians can be
calculated between each coordinate system type and a common coordinate system.
The common coordinate system used for this purpose is the curvilinear frame,
which consists of the global direction angles and a plane surface located at
the track position, with the normal of the plane aligned with the track
momentum. By using Jacobians to the curvilinear frame and the corresponding
inverse matrices, conversions between any two coordinate systems can be
performed.

The second part is the calculation of the evolution of the covariance matrix
during the propagation between surfaces. To this end, a semi-analytical
method which calculates the effective derivatives between two consecutive
RKN steps can be used. By accumulating the Jacobian
matrices calculated for each step, the effective Jacobian between the
starting point and the destination can be obtained.

# Material effects {#material-eff}

Charged particles interact with matter as they pass through it. Since
particle detectors inevitably consist of some form of material, this effect
cannot be completely avoided. By building tracking detectors as light as
possible, and by arranging passive components such as services and support
structures carefully, the material a particle encounters before being
measured can be reduced. Charged particles traversing any form of matter
undergo elastic and inelastic interactions with the atomic structure of the
material, depending on the particle properties.

@anchor multiple_scattering

![Illustration of the effect of multiple scattering on the trajectory of a charged particle passing through a block of material. Entering from the left, it undergoes a series of scattering events, deflecting the trajectory statistically, before exiting on the right.](tracking/multiple_scattering.svg) {width=200px}

In elastic interactions, the particle does not lose a significant amount of
energy, while its trajectory is affected. Figure @ref multiple_scattering shows a
sketch of the way multiple Coulomb scattering affects the direction of a
particle trajectory. In addition, a shift in the transverse plane relative to
the incident direction can occur. As the scattering events occur in
statistically independent directions, the means of both the deflection and the
offset tend toward zero as the number of scatters increases. Therefore, in the
numerical particle propagation, this can be accounted for by simply increasing
the uncertainties associated with the direction, depending on the amount of
material encountered.
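The size of this added directional uncertainty is commonly estimated with the Highland parametrization of the multiple scattering angle (a standard expression from the PDG review, quoted here for illustration rather than taken from the source above): for a particle of momentum @f$p@f$, velocity @f$\beta c@f$ and charge number @f$z@f$ traversing a thickness @f$x@f$ of material with radiation length @f$X_0@f$,

@f[
\theta_0 = \frac{13.6~\mathrm{MeV}}{\beta c p} \, z \, \sqrt{\frac{x}{X_0}} \left[ 1 + 0.038 \ln\left(\frac{x}{X_0}\right) \right],
@f]

where @f$\theta_0@f$ is the width of the approximately Gaussian core of the projected scattering-angle distribution.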

On the other hand, there are interactions in which the particle loses some
of its energy. The relevant processes here are ionization, as well as
bremsstrahlung for light particles like electrons. For hadronic particles,
hadronic interactions with the nuclei of the surrounding material are another
process of interest. In such hadronic interactions, the incoming particle often
disintegrates, and does not propagate further. Since the size of ionization
losses fluctuates only to a small degree for thin layers of material, they can
usually be accounted for by reducing the trajectory energy correspondingly. For
bremsstrahlung, where fluctuations are much larger and the effect cannot be
modelled adequately with a Gaussian distribution, dedicated techniques are
needed (see @ref Acts::GaussianSumFitter).

Two main approaches are implemented in ACTS. The first approximates the
material interaction by using a description that averages the real material
onto thin surfaces across the detector (more on this in
@ref geometry-and-material-modelling). When the propagation encounters such a
surface, it retrieves the material properties, and executes parametrized
modifications of the particle properties and uncertainties. In the second
approach, material effects are continuously incorporated during propagation,
rather than at discrete locations. The latter approach is especially suited for
propagation through volumes of dense material, where the discretization of the
material distribution does not work as well.

# Geometry and material modelling {#geometry-and-material-modelling}

> [!tip]
> A dedicated description of the ACTS implementation of the tracking geometry
> model can be found @ref geometry "here".

A detailed model of the geometry of an experiment is required for tracking. In
many cases, external information is needed to associate a sensitive element
with a position and rotation in the laboratory frame. In the case of silicon
sensors, the intrinsic information captured by the sensor is restricted to the
measurement plane. Using a transformation matrix, this local measurement can be
turned into a global one.

Full simulation with tools like Geant4 is frequently used in HEP, and Geant4
includes its own geometry description framework. For the precise simulation of
particle interactions with the detector, this geometry modelling is highly
detailed. Even very small details of the physical hardware can be crucial, and
are often included in the geometry description. Examples of this are readout
chips on silicon sensors, or cooling elements. Figure @ref geometry_detail (a)
shows a sketch of such a detailed geometry description. Shown as an example is
a *layer* of silicon sensors in a barrel configuration. The green rectangles
represent the actual sensitive surfaces, while other elements include cooling,
readout and other components.

@anchor geometry_detail

![Sketch of the way a fully detailed simulation geometry (a) models passive elements, in addition to the sensitive elements shown in green. (b) shows a simplified version, where all non-sensitive elements are approximated.](tracking/geometry_detail.svg) {width=90%}

In the majority of cases in track reconstruction, this detailed geometry is
unnecessary. During track reconstruction, the aforementioned associated
information needs to be accessible for measurements, so all sensitive
elements need to be included in some form. Passive elements, on the other
hand, are only required to factor in material interaction effects (see
@ref particle-propagation). Moreover, the fully detailed geometry has the
disadvantage of introducing significant overhead during navigation. In this
process, an algorithm attempts to determine which elements the particle
propagation needs to target, as the trajectory is likely to intersect them.
With a geometry description this precise, the navigation process becomes a
significant performance bottleneck.

@anchor layer_barrel

![Sketch of the way sensitive elements are grouped into layers. Shown is an $xy$-view of a number of sensors, arranged as in e.g. the ATLAS silicon detector barrels. The grouping is based on their mounting radius. The layers are indicated in different colors.](tracking/layer_barrel.svg) {width=40%}

As a compromise between modelling accuracy and performance, ACTS uses a
simplified geometry model. It focusses on the sensitive elements, which are
strictly needed, while passive elements are discarded from the explicit
description and approximated. Figure @ref geometry_detail (b) shows such a
simplified geometry. Here, the sensitive elements are still shown in green,
and other elements are greyed out, indicating that they are discarded. The
sensitive elements are then grouped into layers, as sketched in
Figure @ref layer_barrel. How exactly the grouping occurs depends on the
concrete experiment geometry. In some cases, the layers have the shape of
cylinder surfaces with increasing radii. This example is shown in the figure
in the transverse plane at radii @f$r_{1,2,3}@f$. In the endcaps, where
modules are often arranged on disks, these are used as the layer shape. An
illustration of endcap disk layers can be found in Figure @ref layer_ec,
where six disks are located at six distinct positions @f$\pm z_{1,2,3}@f$,
and are shown in different colors.

@anchor layer_ec

![Sketch of the way sensitive elements are grouped into layers. Shown is a view of a number of sensors, arranged as in e.g. the ATLAS silicon detector endcaps. They are grouped into disks based on their mounting position in $z$. The layers are indicated in different colors.](tracking/layer_ec.svg) {width=80%}

During particle propagation, the navigation makes use of this layer
system. Each layer contains a binned structure, which maps a bin to a set
of sensitive surfaces that overlap with the bin area. This is illustrated in
Figure @ref geo_binning, where the left picture shows the sensitive surface
structure of an exemplary endcap disk. The picture on the right overlays the
binning structure that can be used to enable fast retrieval of compatible
sensitive surfaces. By performing a simple bin lookup, the navigation can
ascertain which sensors it needs to attempt propagation to.


@anchor geo_binning

![Illustration of the binning structure that is used to subdivide layer surfaces. (a) shows two sensor rings of different radii grouped into one disk layer. (b) overlays the binning structure that the navigation queries for compatible surfaces.](tracking/surface_array.svg) {width=80%}

Furthermore, layers are grouped into volumes. Each volume loosely corresponds
to a region of the detector.
Volumes are set up such that their boundary surfaces always touch another
volume. An exception to this is the outermost volume. Each volume's boundary
surfaces store which volume is located on their other side, essentially
forming portals between the volumes. This glueing enables the geometry
navigation between volumes. When the propagation has finished processing a
set of layers, it attempts to target the boundary surfaces. Once a boundary
surface is reached, the active volume is switched, and the next set of layers
is processed.

Care has to be taken to correctly model the passive material that is
initially discarded with the non-sensitive elements. For the material effects
to be correctly taken into account during particle propagation, the material
is projected onto dedicated material surfaces. These material surfaces are
spread across the detector geometry. Each layer is created with two
*approach surfaces* on either side. Their distance can be interpreted as
the thickness of the layer in question. Examples of these approach surfaces
can be found in Figure @ref geometry_detail, at the inner and outer radius.
Approach surfaces, and the boundary surfaces between volumes mentioned before,
are candidates to receive a projection of the surrounding material.
Additional artificial material layers can also be inserted to receive
projected material.

The projection procedure (see @ref material and @ref material_mapping) works
by extrapolating test particles through the fully detailed simulation
geometry. During the extrapolation, the material properties of the geometry
are sampled in small intervals. Subsequently, the same test particle is
extrapolated through the tracking geometry. All material samples are then
assigned and projected onto the closest material surface. Finally, the
projection is averaged. The exact number and placement of the material
surfaces has to be optimized to yield a sufficiently accurate representation
of the inactive material in the detector.

The numerical integration uses these projected material surfaces. Whenever
such a surface is encountered in the propagation, the material properties are
retrieved, and the corresponding modifications to the trajectory are
executed. In case material is supposed to be integrated in a continuous way
(as mentioned in @ref particle-propagation), volumes can also store an
effective volumetric material composition, which is queried by the numerical
integration when needed. As the actual physical location of the detection
hardware can vary over time, possible misalignment of the sensors needs to be
handled correctly.

# Clustering {#clustering}

> [!tip]
> See @ref clustering for information on the implementation of clustering
> in the core library.

The actual track reconstruction procedure starts with the conversion of
raw inputs that have been read out from the detector. In the case of silicon
detectors, the readout can either be performed in a binary way, only recording
which segments fired, or the amount of charge measured in each segment can be
recorded, e.g. via *time-over-threshold* readout. In all cases, the readout is
attached to an identifier uniquely locating the segment on the corresponding
sensor.

As a next step, these raw readouts need to be *clustered*, in order to
extract an estimate of where particles intersected the sensor. The general
strategy of clustering algorithms follows the Connected Component Analysis (CCA)
approach, where subsets of segments are successively grouped into clusters.
In the case of pixel sensors, this clustering occurs in two dimensions,
corresponding to their segmentation. Here, the CCA can
either consider all eight surrounding pixels as neighboring a central one, or
only consider the four non-diagonal ones, as shown in
Figure @ref clustering_cca. The figure only shows the simplest possible
cluster starting from the central pixel. In reality, the CCA will iteratively
continue from the pixels on the cluster edges.

@anchor clustering_cca

![Illustration of both eight and four cell connectivity.](tracking/cca.svg) {width=60%}

Subsequently, the effective cluster position needs to be estimated. Multiple
factors play a role here. First of all, the average position of the cluster
can be calculated either using only the geometric positions of the segments,

@f[
\vec r = \frac{1}{N} \sum_{i=1}^N \vec l_i,
@f]

or weighted by the charge collected in each segment:

@f[
\vec r = \frac{1}{\sum_{i=1}^N q_i} \sum_{i=1}^N q_i \vec l_i.
@f]

Here, @f$\vec l_i@f$ is the local position of the @f$i@f$-th segment while
@f$q_i@f$ is its charge.

An illustration of the clustering can be found in Figure @ref clustering_image,
where a pixel sensor is shown being intersected by a charged particle,
entering on the lower left and exiting on the top right. Three cells, shown
with a red frame, receive energy from the particle, but the amount is under
the readout threshold. Four other cells receive energy above the threshold
and are read out. The clustering will then group these four cells into a
cluster, and subsequently estimate the cluster position based on the energy
deposited in the cells. In case no charge information is available
for a given detector, the calculation is purely geometric.


@anchor clustering_image

![Illustration of the clustering of multiple pixels into a cluster, in a three-dimensional view on the left and a projection onto the $xy$-plane on the right. A particle enters the sensor in the lower left, crosses several segments before exiting the sensor on the top right. The cell colors indicate how far along the trajectory they are encountered.](tracking/clustering.svg) {width=50%}

Another factor that needs to be accounted for is the drift direction of the
created charges. In addition to the collection field of the sensor itself,
the surrounding magnetic field modifies the drift direction by the
*Lorentz angle* @f$\theta_\text{L}@f$. Depending on the field strength, this
additional angle can cause segments to be activated that would otherwise not
be geometrically within reach of the charges. Other effects, such as the fact
that the modules are not perfectly flat, as the geometry description assumes,
or cross-talk between readout channels, also play a role at this stage.

In the presence of high event activity, particle intersections on single
sensors can be close enough to one another to result in clusters that are not
clearly separated from each other. This circumstance can be somewhat
mitigated by allowing tracks to share clusters with other particles, which
comes at the price of allowing duplicated tracks to some extent.
Additionally, merged clusters typically feature worse position resolution,
which negatively affects the final fit of the track.

# Space point formation {#space-point-formation}

The basic input to most forms of pattern recognition algorithms for tracking
are space points, which need to be assembled from the raw measurements. To this
end, the raw measurements are combined with information provided by the
geometry description, such as the location and rotation of the sensors. In this
way, the measurement locations, which are intrinsically local to the sensor
surfaces, can be converted into three-dimensional points in space. See
@ref sp_formation for a description of the implementation of
space point formation in the core library.

The @ref fig_sensor "figure" below shows an illustration of the information that is consumed for
a pixel measurement. Shown are three clusters on a sensor, which are caused by
three tracks intersecting it. The corresponding cluster positions are indicated
as well, and can be converted to global positions using the inverse of the
global-to-local transformation matrix that is provided by the geometry
description.

@anchor fig_sensor

![Illustration of a pixel sensor and its local coordinate system in relation to the global laboratory frame. A transformation allows conversion between the two systems. Shown are three tracks intersecting the sensor, alongside clusters that they produce.](tracking/sp_l2g.svg) {width=50%}
0536 
0537 In strip detectors, on the other hand, only a single
0538 dimension is segmented, and an individual measurement is therefore only
0539 constrained in one direction on the surface. However, usually the
0540 strip modules are mounted in pairs, with a stereo angle rotation
0541 between the pairs. To form global space points, measurements from both
0542 sensors of a pair need to be combined.
0543 Due to the stereo angle, a two-dimensional
0544 location can be found on the projection plane shared by the two parallel
0545 sensors. Using the global transformation of the pair, the
0546 combined measurement location can be converted to global coordinates. If
0547 multiple measurements are located on a stereo pair of strip sensors, there
0548 exists an ambiguity on how to combine strips to form space points, which has to be resolved.
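
The combination of a stereo pair can be sketched as follows (toy code, not the ACTS implementation): each strip is modeled as a straight line @f$p + t\,d@f$ in the module plane, and the crossing point of the two lines is the combined two-dimensional measurement.

```python
# Toy combination of a stereo strip pair (not the ACTS implementation):
# each strip is a line p + t*d in the module plane; the crossing point
# is the combined two-dimensional measurement location.
def strip_crossing(p1, d1, p2, d2):
    # Solve p1 + t*d1 = p2 + u*d2 for t (a 2x2 linear system).
    det = d2[0] * d1[1] - d2[1] * d1[0]
    if abs(det) < 1e-12:
        raise ValueError("strips are parallel, no unique crossing")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (d2[0] * ry - d2[1] * rx) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```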
0549 
0550 
0551 # Seeding {#sec_seeding}
0552 
0553 The next step after space point formation is pattern recognition, which can be
0554 implemented in various ways. Global methods exist which attempt to cluster
0555 space points, such as conformal mapping. In this approach, the space points are
0556 transformed into a feature parameter space that reveals patterns for hits
0557 belonging to the same track. In the specific example of a Hough transform, a
0558 parameter space @f$\left(\phi, q/p_\mathrm{T}\right)@f$ is used. As a result, each
0559 space point is effectively transformed into a line, as a series of combinations
0560 of these parameters would lead to the same space point. The lines from a set of
0561 space points of a single track will intersect in one common area. Such an
0562 intersection can be used to identify which space points originate from the same
0563 track. However, this task grows in complexity as detector activity increases
0564 and is susceptible to material effects. See @ref seeding for a description
0565 of the seeding implementation in the core library.
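
The Hough transform idea can be illustrated with a toy sketch (hypothetical binning and field constant, not the ACTS implementation): each space point votes for all parameter combinations it is compatible with, and the most populated bin identifies the track.

```python
from collections import Counter

# Toy Hough transform (not the ACTS implementation): each space point
# (r, phi) votes for the (phi0, q/pT) combinations compatible with a
# first-order helix, phi ~ phi0 + A * r * (q/pT). The constant A and
# the binning choices are purely illustrative.
A = 3e-4

def hough_votes(points, qpt_bins, phi0_bin_width=0.01):
    votes = Counter()
    for r, phi in points:
        for qpt in qpt_bins:
            phi0 = phi - A * r * qpt
            votes[(round(phi0 / phi0_bin_width), qpt)] += 1
    return votes

# Three hits of a toy track with phi0 = 1.0 and q/pT = 50: their vote
# lines intersect in one common bin.
hits = [(r, 1.0 + A * r * 50) for r in (30.0, 60.0, 90.0)]
votes = hough_votes(hits, qpt_bins=range(-100, 101, 10))
best = max(votes, key=votes.get)
```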
0566 
0567 Another group of approaches is the one of seeding and track following. These
0568 algorithms differ from the global ones in that they evaluate individual
0569 combinations of space points, and successively explore the events. One
0570 algorithm from this group is the cellular automaton that iteratively forms
0571 chains of space points going from one layer to the next.
0572 
0573 The main approach in ACTS is an algorithm that operates on coarse
0574 subdivisions of the detector. This seeding algorithm attempts to find
0575 triplets of space points at increasing radii which are likely to belong to
0576 the same track. It achieves this by iterating over the combinatorial triplets and
0577 successively filtering them. Filtering is performed based on the momentum and
0578 impact parameters, which the algorithm attempts to estimate for each triplet.
0579 
0580 Under the assumption of a homogeneous magnetic field along the @f$z@f$-axis,
0581 charged particles should follow helical trajectories. In the transverse plane,
0582 the motion is circular, while it is a straight line in the @f$rz@f$-plane.  The
0583 transverse impact parameter and momentum can be estimated from the radius of
0584 the circle in the transverse plane like
0585 
0586 
0587 @f[
0588 d_0 = \sqrt{c_x^2 + c_y^2} - \rho,
0589 @f]
0590 
0591 
0592 with the circle center @f$(c_x, c_y)@f$ and radius @f$\rho@f$. The
0593 transverse momentum can be related to available quantities like
0594 
0595 
0596 @f[
0597 p_\mathrm{T} \propto q B \rho
0598 @f]
0599 
0600 
0601 with the charge @f$q@f$ and the magnetic field @f$B@f$. An intersection
0602 between the straight line in the @f$rz@f$-plane with the @f$z@f$-axis gives an
0603 estimate of the longitudinal impact parameter.
0604 An illustration of seeds in the transverse plane is found in
0605 the @ref seeding_figure "figure" below. Note that seeds can combine hits from any of
0606 the layers shown, although which layers are considered is configurable.
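
These estimates can be sketched with a toy calculation (not the ACTS seed filter): fit the circumcircle through three transverse space points, then derive @f$d_0@f$ and @f$p_\mathrm{T}@f$. The constant in @f$p_\mathrm{T} \approx 0.3\,B\rho@f$ (GeV, T, m) is the usual rule of thumb for unit charge.

```python
import math

# Toy seed parameter estimation (not the ACTS implementation): the
# circumcircle of three transverse-plane space points yields the circle
# center and radius, from which d0 and pT are estimated.
def circle_through(p1, p2, p3):
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def seed_estimates(p1, p2, p3, b_field=2.0):
    (cx, cy), rho = circle_through(p1, p2, p3)
    d0 = math.hypot(cx, cy) - rho      # distance of closest approach
    pt = 0.3 * b_field * rho * 1e-3    # GeV, for rho in mm and B in T
    return d0, pt
```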
0607 
0608 
0609 @anchor seeding_figure
0610 
0611 ![Sketch of seeds in the transverse plane for a number of tracks on four layers. Seeds can combine hits on any three of these layers. The shown seeds appear compatible with having originated in the center of the detector, which is also drawn.](tracking/seeding.svg) {width=50%}
0612 
0613 # Track finding and track fitting {#track-finding-and-track-fitting}
0614 
0615 In the track seeding and following approach, track candidates are built from
0616 the initial seeds. One method implemented in ACTS, the @ref
0617 Acts::CombinatorialKalmanFilter "Combinatorial Kalman Filter" (CKF), uses the
0618 *Kalman formalism*. Originally developed for monitoring and steering mechanical
0619 systems, it can also be used to iteratively calculate a track estimate. After a
0620 set of track candidates has been assembled and filtered (see @ref
0621 ambiguity-resolution), an additional track fit is usually performed to extract
0622 the best estimate of the track. The Kalman formalism can also be used for this,
0623 with the addition of a smoothing step that has certain benefits.  Other fit
0624 strategies exist, such as a global @f$\chi^2@f$ fit that minimizes the
0625 distances between track-sensor intersections and measurements on all sensors at
0626 the same time. One drawback of this method is the necessity to invert very
0627 large matrices, which is computationally expensive.
0628 
0629 In a track fit, the Kalman formalism can be shown to yield optimal estimates
0630 for Gaussian uncertainties. This assumption breaks down when effects like
0631 bremsstrahlung come into play. An extension of the Kalman Filter (KF) exists
0632 that relies on the individual propagation of a set of trajectories, instead of
0633 a single one, to model these biased uncertainties by a sum of Gaussian
0634 components. This @ref Acts::GaussianSumFitter "Gaussian Sum Filter" (GSF) achieves better results when
0635 fitting particles such as electrons, which are likely to undergo bremsstrahlung, and is
0636 deployed in e.g. the ATLAS tracking chain.
0637 
0638 
0639 ## Kalman formalism and Kalman track fitter {#kalman-formalism}
0640 
0641 > [!tip]
0642 > See @ref Acts::KalmanFitter for documentation of the implementation of the
0643 > Kalman Filter in the core library.
0644 
0645 The basis of the Kalman formalism is a state vector that can be identified
0646 with the set of track parameters @f$\vec x@f$. Note that the concrete
0647 parametrization plays a subordinate role in this context. Rather than building
0648 an estimate of the state of a system in real time, a Kalman track fit can be
0649 understood as estimating the parameters iteratively in steps. In the track
0650 fitting application, each step is defined by a measurement to be included.
0651 The evolution of the state vector is described by
0652 
0653 
0654 @f[
0655   \vec x_k = \mathbf F_{k-1} \vec x_{k-1} + \vec w_{k-1},
0656 @f]
0657 
0658 
0659 where the linear function @f$\mathbf F_{k-1}@f$ transports the state vector at
0660 step @f$k-1@f$ to step @f$k@f$. @f$\vec w_{k-1}@f$ is additional so-called process noise
0661 that affects the transport additively. Each step has an associated
0662 measurement, with the fixed relationship between the measurement and the state vector
0663 
0664 
0665 @f[
0666   \vec m_k = \mathbf H_k \vec x_k + \epsilon_k.
0667 @f]
0668 
0669 
0670 Here, @f$\mathbf H_k@f$ is the *measurement mapping function*, which
0671 transforms the state vector into the measurement space. In the ideal case,
0672 this purpose can be achieved by a simple projection matrix, which extracts a
0673 subspace of the state vector. Additionally, @f$\epsilon_k@f$ represents the
0674 measurement uncertainty.
0675 
0676 The Kalman fit process is divided into different phases:
0677 
0678 1. **Prediction** of the state vector at the next step @f$k+1@f$ based on the information at the current step @f$k@f$.
0679 2. **Filtering** of the prediction by incorporating the measurement associated with the step.
0680 3. **Smoothing** of the state vector by walking back the steps and using information for the subsequent step @f$k+1@f$ to improve the estimate at the current step @f$k@f$.
0681 
0682 An illustration of these concepts is found in the @ref fig_kalman_filter "figure" below. Here,
0683 a series of three sensors is shown with measurements on them. The KF
0684 then predicts the track parameters at an intersection, shown in blue.
0685 Subsequently, a filtered set of parameters is calculated as a mixture between
0686 the measurement and the prediction. Not shown in this picture is the
0687 smoothing step.
0688 
0689 @anchor fig_kalman_filter
0690 
0691 ![Illustration of the KF. Two of the three stages, the prediction and the filtering are shown. The filtering updates the prediction with information from the measurement.](tracking/kalman.svg) {width=70%}
0692 
0693 
0694 In many cases, the first two phases run in tandem, with prediction and
0695 filtering happening alternatingly at each step. The smoothing phase is
0696 launched once the last measurement has been encountered.
0697 Starting from a state @f$k@f$, first, a prediction of the state vector at the
0698 next measurement location is obtained via
0699 
0700 @anchor kf_pred
0701 
0702 @f[
0703   \vec x_k^{k-1} = \mathbf F_{k-1} \vec x_{k-1},
0704 @f]
0705 
0706 
0707 with the linear transport function from above. @f$\vec x_k^{k-1}@f$ is
0708 the prediction of the state vector at step @f$k@f$ based on step @f$k-1@f$. The next
0709 stage is the filtering. Here, the state vector is refined by taking into
0710 account the measurement at the current step. Following one of two variants of
0711 filtering from @cite Fruhwirth:1987fm, the gain matrix formalism, the state
0712 vector is updated like
0713 
0714 
0715 @f[
0716   \vec x_k = \vec x_k^{k-1} + \mathbf K_k \left( \vec m_k - \mathbf H_k \vec x_k^{k-1} \right),
0717 @f]
0718 
0719 
0720 with the *Kalman gain matrix*
0721 
0722 
0723 @f[
0724   \mathbf K_k = \mathbf C_k^{k-1} \mathbf H_k^\mathrm{T}
0725     \left(
0726       \mathbf V_k + \mathbf H_k \mathbf C_k^{k-1} \mathbf H_k^\mathrm{T}
0727     \right)^{-1}
0728     .
0729 @f]
0730 
0731 
0732 Note that @f$\vec x_k@f$ is the filtered state vector at step @f$k@f$,
0733 based on information from previous steps and step @f$k@f$ itself. This is in
0734 contrast to @f$\vec x_k^{k-1}@f$, which is the prediction of the state vector at
0735 step @f$k@f$ based on @f$k-1@f$, and is used to calculate the filtered state vector.
0736 One input to these equations is the covariance matrix prediction @f$\mathbf
0737 C_k^{k-1}@f$ at step @f$k@f$ based on step @f$k-1@f$, which can be written as
0738 
0739 
0740 @anchor kf_cov_pred
0741 
0742 @f[
0743   \mathbf C_k^{k-1}  = \mathbf F_{k-1} \mathbf C_{k-1} \mathbf F_{k-1}^\mathrm{T} + \mathbf Q_{k-1}
0744 @f]
0745 
0746 
0747 in the linear version from @cite Fruhwirth:1987fm, with the
0748 covariance @f$\mathbf C_{k-1}@f$ at step @f$k-1@f$, and the covariance @f$\mathbf
0749 Q_{k-1}@f$ associated with @f$\vec w_{k-1}@f$ from above. Also needed is @f$\mathbf
0750 V_k@f$, which is the covariance associated with @f$\epsilon_k@f$, effectively
0751 representing the measurement uncertainty.
0752 
0753 Similar to the state vector itself, the corresponding covariance matrix is
0754 also filtered using
0755 
0756 @f[
0757   \mathbf C_k = \left( \mathbb 1 - \mathbf K_k \mathbf H_k \right) \mathbf C_k^{k-1}.
0758 @f]
0759 
0760 
0761 In the smoothing phase, the state vector at step @f$k@f$ is improved using the
0762 information from the subsequent step @f$k+1@f$ using
0763 
0764 
0765 @f[
0766   \vec x_k^n = \vec x_k + \mathbf A_k \left( \vec x_{k+1}^n - \vec x_{k+1}^k \right).
0767 @f]
0768 
0769 
0770 Here, @f$\vec x_{k+1}^n@f$ is the smoothed state vector and @f$\vec
0771 x_{k+1}^k@f$ the predicted state vector at the subsequent step @f$k+1@f$. Also
0772 needed is the *smoother gain matrix*
0773 
0774 
0775 @f[
0776   \mathbf A_k = \mathbf C_k \mathbf F_k^\mathrm{T} \left( \mathbf C^k_{k+1} \right)^{-1},
0777 @f]
0778 
0779 
0780 with the predicted covariance at step @f$k+1@f$, @f$\mathbf C^k_{k+1}@f$.
0781 Finally, the covariance at the current step @f$k@f$ can also be smoothed with
0782 
0783 
0784 @f[
0785   \mathbf C_k^n = \mathbf C_k + \mathbf A_k \left(\mathbf C_{k+1}^n - \mathbf C_{k+1}^k \right) \mathbf A_k^\mathrm{T}.
0786 @f]
0787 
0788 
0789 After smoothing, the parameter estimate at the first step contains information
0790 from all other measurement states. As mentioned above, in case the
0791 uncertainties entering the Kalman fit are Gaussian distributions without
0792 biases, the KF can be shown to be the optimal solution minimizing mean
0793 square estimation error. However, certain caveats exist. The KF assumes
0794 that a linear transport function @f$\mathbf F@f$ exists that can propagate the
0795 state vector. In the presence of inhomogeneous magnetic fields this is not
0796 the case. Instead of explicitly applying @f$\mathbf F@f$ to the state vector for
0797 the prediction, the ACTS KF turns to numerical integration,
0798 discussed in @ref numerical-integration. With it, the prediction from
0799 @ref kf_pred "this equation" is simply the intersection of the extrapolated trajectory
0800 with the next sensitive surface. Aside from this, @f$\mathbf F@f$ is also used to
0801 transport the covariance between steps (see @ref kf_cov_pred "here"). Here, the
0802 semi-analytical method for covariance transport in the numerical integration
0803 can be used. @f$\mathbf F@f$ can then be identified with the transport
0804 Jacobian accumulated between surfaces.
0805 
0806 For smoothing, two possibilities exist to obtain the needed covariances from
0807 subsequent measurement steps. Either, the inverse transport Jacobian is used
0808 and applied, in a way similar to @ref kf_cov_pred "this equation", or the numerical
0809 integration is executed again in an inverse fashion, propagating from the
0810 subsequent step to the current one.
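
The full predict-filter-smooth sequence can be illustrated with a one-dimensional toy version of the equations above, in which @f$\mathbf F = \mathbf H = 1@f$ and all quantities are scalars. This is a sketch with illustrative noise values, not ACTS code.

```python
# One-dimensional toy Kalman fit (F = H = 1), mirroring the prediction,
# filtering and smoothing equations above. Noise values are illustrative.
def kalman_1d(measurements, meas_var, process_var, x0, var0):
    xs_pred, vs_pred, xs_filt, vs_filt = [], [], [], []
    x, v = x0, var0
    for m in measurements:
        xp, vp = x, v + process_var        # prediction: x_k^{k-1}, C_k^{k-1}
        k = vp / (vp + meas_var)           # Kalman gain K_k
        x = xp + k * (m - xp)              # filtered state x_k
        v = (1.0 - k) * vp                 # filtered covariance C_k
        xs_pred.append(xp); vs_pred.append(vp)
        xs_filt.append(x); vs_filt.append(v)
    # Smoothing: walk back, improving step k with information from k+1.
    xs_smooth = xs_filt[:]
    for i in range(len(measurements) - 2, -1, -1):
        a = vs_filt[i] / vs_pred[i + 1]    # smoother gain A_k
        xs_smooth[i] = xs_filt[i] + a * (xs_smooth[i + 1] - xs_pred[i + 1])
    return xs_filt, xs_smooth
```

With zero process noise the smoothed estimates all coincide with the last filtered estimate, since every measurement then constrains the same constant state.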
0811 
0812 ## Combinatorial Kalman Filter {#combinatorial-kalman-filter}
0813 
0814 > [!tip]
0815 > See @ref Acts::CombinatorialKalmanFilter for information on the CKF
0816 > implementation found in the core library.
0817 
0818 As mentioned above, the Kalman formalism can be used for track finding. In this
0819 case, the smoothing step can be dropped, as the resulting track candidates are
0820 likely to be refit regardless, therefore saving some time. The CKF explores the
0821 event starting from an initial track seed. It does this by considering not only
0822 a single sequence of measurements, but allowing the branching of the fit at
0823 each sensitive surface that is encountered. To this end, all or a subset of
0824 measurements that are found on each surface are considered. Measurements are
0825 selected based on their compatibility with the current state estimate, by using
0826 their residuals. A predicted residual
0827 
0828 
0829 @f[
0830   \vec r_k^{k-1} = \vec m_k - \mathbf H_k \vec x_k^{k-1},
0831 @f]
0832 
0833 
0834 and a filtered residual
0835 
0836 
0837 @f[
0838   \vec r_k = \vec m_k - \mathbf H_k \vec x_k,
0839 @f]
0840 
0841 
0842 can be defined, depending on which state estimate is compared with
0843 the measurement @f$\vec m_k@f$. Using the filtered residual, an effective
0844 @f$\chi^2@f$ increment
0845 
0846 
0847 @f[
0848   \chi^2_+ = \vec r_k^\mathrm{T}
0849   \left[ \left( \mathbb 1 - \mathbf H_k  \mathbf K_k \right)  \mathbf V_k \right]^{-1}
0850   \vec r_k
0851 @f]
0852 
0853 
0854 can be calculated. The global @f$\chi^2@f$ of the track candidate can
0855 be calculated as the sum of all @f$\chi^2_+@f$ across the steps. Measurements
0856 with a large @f$\chi^2_+@f$ are considered as outliers, which have low
0857 compatibility with the trajectory. By branching out for measurements below a
0858 certain @f$\chi^2_+@f$, and following the branches, a tree-like structure of
0859 compatible track candidates originating from a track seed is assembled. This
0860 feature is shown in the @ref fig_tracking_ckf "figure" below, which displays a circular
0861 trajectory, and a set of iteratively assembled track candidates. Basic
0862 quality criteria can be applied at this stage, to remove bad candidates. A
0863 dedicated @ref ambiguity-resolution "ambiguity resolution" step
0864 selects the candidates most likely to belong to real particle tracks.
0865 
0866 @anchor fig_tracking_ckf
0867 
0868 ![Illustration of the way the CKF iteratively explores measurements from a seed outwards. Measurements are added successively, and can be shared between the resulting track candidates. Shown in green is a circular *real* trajectory.](tracking/finding.svg) {width=50%}
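
The per-surface branching logic can be sketched as follows (toy scalar code with @f$\mathbf H = 1@f$, using the predicted residual and an illustrative @f$\chi^2@f$ cut; not the ACTS measurement selector):

```python
# Toy per-surface measurement selection (H = 1, scalar): keep the
# measurements whose chi2 w.r.t. the predicted state is below a cut.
# Each surviving measurement would spawn one branch of the CKF.
def select_measurements(prediction, pred_var, measurements, meas_var, chi2_cut):
    branches = []
    for m in measurements:
        residual = m - prediction                       # predicted residual
        chi2 = residual * residual / (pred_var + meas_var)
        if chi2 < chi2_cut:
            branches.append((m, chi2))
    return branches
```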
0869 
0870 # Ambiguity resolution {#ambiguity-resolution}
0871 
0872 Due to the combinatorial nature of track finding, and to achieve high
0873 efficiencies, this set of candidates is often large, and contains a
0874 non-negligible fraction of *fake* candidates. These fake candidates are either
0875 completely combinatorial, or arise from real particle measurements with
0876 combinatorial additions. Track candidates coming from a single seed necessarily
0877 share a common stem of measurements. Measurements can potentially also be
0878 shared between candidates from different seeds.
0879 
0880 One possibility to resolve this (as is done in e.g. ATLAS tracking) is an
0881 ambiguity resolution algorithm, that attempts to filter out as many undesirable
0882 tracks as possible. This is implemented by means of a scoring function, that
0883 combines properties of the track parameters. Higher scores are correlated with
0884 a larger probability to be a desirable track candidate. A larger number of hits
0885 results in an increase in the score, as longer compatible hit chains are less
0886 likely to be random combinations. On the other hand, missing hits in sensors
0887 where a hit was expected negatively impact the score.  Experiment specific
0888 scoring of hits from different subsystems is also implemented. The overall
0889 @f$\chi^2@f$ value computed for the track candidate also plays a role. Candidates
0890 that share hits with other candidates are penalized. Another quantity is the
0891 measured particle @f$p_\mathrm{T}@f$, which enters the score, to give preference to
0892 tracks with large momenta. For tracks containing measurements with a
0893 substantial local @f$\chi^2_+@f$ at the start or end of the trajectory, the
0894 ambiguity resolution step can also attempt to remove these hits, and determine
0895 whether a refit without them yields a more favorable global @f$\chi^2@f$.
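
A scoring function of this kind could look as follows. The weights here are purely hypothetical illustrations; the actual values are experiment specific.

```python
# Hypothetical ambiguity-resolution score: reward hits, penalize holes,
# shared hits and a poor chi2, and prefer high-pT tracks. All weights
# are illustrative, not taken from any experiment configuration.
def track_score(n_hits, n_holes, n_shared, chi2_per_ndf, pt_gev):
    score = 10.0 * n_hits
    score -= 5.0 * n_holes        # missing hits where one was expected
    score -= 2.0 * n_shared       # hits shared with other candidates
    score -= 1.0 * chi2_per_ndf   # overall fit quality
    score += 1.0 * pt_gev         # preference for large momenta
    return score
```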
0896 
0897 Finally, the output of the ambiguity resolution step is a set of track candidates
0898 that contain an enhanced fraction of tracks from actual particles, while fake
0899 tracks are suppressed. They are passed into the final precision fit outlined
0900 in @ref track-finding-and-track-fitting, to extract the parameter estimate, and used
0901 for further aspects of reconstruction.
0902 
0903 # Vertex reconstruction {#vertex-reconstruction}
0904 
0905 > [!tip]
0906 > See @ref vertexing for more information on the vertexing as implemented in ACTS.
0907 
0908 A vertex is a point within the detector, where an interaction or a
0909 decay occurred. We distinguish between primary vertices (from
0910 collisions/interactions) and secondary vertices (from subsequent particle
0911 decays), see the @ref vertexing_illust "figure" below. Primary vertices are further divided
0912 into hard-scatter and pile-up vertices. While primary vertices are located in
0913 the luminous region, secondary vertices are slightly displaced due to the finite
0914 lifetime of the decaying particle.
0915 
0916 @anchor vertexing_illust
0917 
0918 ![Illustration of a set of three vertices in a proton-proton collision. We distinguish between primary hard-scatter, primary pile-up, and secondary vertices.](tracking/vertexing.svg) {width=60%}
0919 
0920 Vertices play an important role in higher-level reconstruction algorithms. For
0921 example, secondary vertices can help with the identification of particles:
0922 During *b-tagging*, a displaced vertex located inside a jet is a sign for the
0923 decay of a @f$b@f$-hadron.
0924 
0925 In analogy to track reconstruction, vertex reconstruction can be divided into
0926 two stages: vertex finding and vertex fitting. As a first step of vertex
0927 finding, we compute a rough estimate of the vertex position from a set of
0928 tracks. This first estimate can be calculated in many different ways, and is
0929 referred to as "vertex seed". Seeding algorithms differ for primary and
0930 secondary vertexing. For primary vertex seeding, one option is to use a
0931 histogram approach to cluster tracks on the @f$z@f$-axis @cite Piacquadio_2010.
0932 This is based on the assumption that primary vertices will be close to the
0933 beamline. Other approaches model tracks as multivariate Gaussian distributions
0934 and identify regions of high track density as vertex seeds @cite schlag_2022.
0935 For secondary vertexing, seeds are formed from pairs of reconstructed tracks as
0936 the beamline constraint does not apply.
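
The histogram approach for primary vertex seeding can be sketched like this (toy code with a hypothetical bin width; the inputs are the tracks' @f$z@f$ positions at their closest approach to the beamline):

```python
from collections import Counter

# Toy z-histogram vertex seeding: bin the tracks' z at the point of
# closest approach to the beamline and take the most populated bin.
# The bin width is a hypothetical choice.
def z_vertex_seed(track_z0s, bin_width=1.0):
    bins = Counter(round(z / bin_width) for z in track_z0s)
    best_bin, _ = bins.most_common(1)[0]
    return best_bin * bin_width
```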
0937 
0938 Once a vertex seed is determined, tracks that are compatible with it are
0939 selected as part of the vertex finding.
0940 
0941 Before the vertex fit, we linearize tracks in the vicinity of the vertex seed
0942 assuming that they follow a helical (for constant magnetic field) or
0943 straight (for no magnetic field) trajectory @cite Piacquadio_2010. The vertex
0944 fitter then uses this linearization to improve the position of the vertex seed.
0945 Furthermore, the track momenta are refitted under the assumption that the tracks
0946 originate at the vertex @cite Fruhwirth:1987fm @cite Billoir:1992yq .
0947 
0948 One issue with an approach like this is that the assignment of tracks to
0949 vertices is ambiguous. As an improvement, one can perform a multi-vertex fit,
0950 where vertices compete for tracks. This means that one track can be assigned to
0951 several vertices. Their contribution to each vertex fit is determined by a
0952 weight factor, which, in turn, depends on the tracks' compatibility with respect
0953 to all vertices @cite Fruhwirth:2005.
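
The weight assignment can be sketched as a normalized Boltzmann factor of each track-vertex @f$\chi^2@f$ compatibility, a simplified form of the adaptive weights of @cite Fruhwirth:2005 without the outlier cut-off term; the temperature is a hypothetical annealing parameter.

```python
import math

# Toy multi-vertex track weights: a track's weight for each vertex is a
# normalized Boltzmann factor of its chi2 compatibility. Simplified
# adaptive weights (no cut-off term); temperature is hypothetical.
def vertex_weights(chi2_per_vertex, temperature=1.0):
    factors = [math.exp(-c / (2.0 * temperature)) for c in chi2_per_vertex]
    total = sum(factors)
    return [f / total for f in factors]
```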
0954 
0955 A flowchart of a multi-vertex reconstruction chain is shown in
0956 the @ref vertexing_flowchart "figure" below.
0957 
0958 @anchor vertexing_flowchart
0959 
0960 ![Simplified flowchart of multi-vertex reconstruction. From a set of seed tracks, we first compute a rough estimate of the vertex position, i.e., the vertex seed. Then, we evaluate the compatibility of all tracks with the latter. If a track is deemed compatible, it is assigned a weight and attached to the vertex seed. Next, the vertex seed and all previously found vertices that share tracks with it are (re-)fitted. Finally, after convergence of the fit, we check whether the vertex candidate is merged with other vertices and discard it if that is the case. For the next iteration, all tracks that were assigned to the vertex seed and that have a weight above a certain threshold are removed from the seed tracks.](tracking/vertexing_flowchart.svg)