Guide for advanced processing of the grism spectra

Various topics are discussed. This is a work in progress; more in-depth examples are planned, perhaps including examples by video.

Observing strategies


To start an observation, the spacecraft executes a slew to the desired sky location. The accuracy of the slew depends on its size and is usually of the order of 1-2 arc minutes. When observing with the target in the centre of the field of view, that accuracy is usually sufficient. However, if multiple spectra are required, or when using an offset (see Using an offset to place the spectrum somewhere else on the detector), better positioning is obtained by executing a second slew after the first one, called a "slew-in-place", which refines the positioning of the source to a few (5-10) arc seconds.

Early grism observations did not use a slew-in-place, and the spectra were often found at offsets on the detector large enough that they could not be calibrated using the early calibration, which was valid for the centre of the detector only.

Using an offset to place the spectrum somewhere else on the detector

The Swift operations were designed to place the target in the centre of the field of view. This has obvious advantages. During the year, the spacecraft orientation with respect to the Sun is adjusted, leading to changes in the optimal roll-angle range. Since changes in roll angle rotate the observed field on the sky around the boresight, sources near the boresight are least affected.

The roll angle affects where a source falls on the detector when there is an offset in pointing: a source that would fall on the upper right of the detector in April will fall on the lower left six months later, in October.

This has implications for the planning of offset observations. For a given offset, spacecraft roll angle, and target sky coordinates, the sky coordinates of the centre of the field of view (the boresight) have to be calculated. These are then assigned an observation ID, and the observation is commanded using the coordinates calculated for the centre of the field of view. At present this is done by hand by the planners, which means that this kind of observation needs some lead time.
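As a rough illustration of that calculation, here is a small flat-sky sketch. The function, its sign conventions, and the numbers are illustrative assumptions only, not the planners' actual tool:

```python
import math

def boresight_pointing(ra_t, dec_t, offset_x, offset_y, roll_deg):
    """Approximate sky coordinates (deg) of the boresight so that the target
    at (ra_t, dec_t) lands at detector offset (offset_x, offset_y) in arc
    minutes, for the given roll angle. Small-angle, flat-sky approximation."""
    th = math.radians(roll_deg)
    # rotate the detector offset into an offset on the sky
    dra = offset_x * math.cos(th) - offset_y * math.sin(th)   # arcmin
    ddec = offset_x * math.sin(th) + offset_y * math.cos(th)  # arcmin
    # point the boresight away from the target by the rotated offset;
    # the RA step is stretched by 1/cos(dec)
    ra_b = ra_t - dra / 60.0 / math.cos(math.radians(dec_t))
    dec_b = dec_t - ddec / 60.0
    return ra_b, dec_b
```

Note how the same detector offset maps to opposite sky directions for roll angles 180 degrees apart, which is the seasonal flip described above.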

For the grism, the first offset observations were done for the calibration, for nova U Sco, and for SN2011fe in M101. In the case of the calibration observations the offsets were used to determine the instrumental characteristics enabling the anchor point, wavelength, and effective area calibrations. For the supernova the offset was chosen because the source was very bright: an offset gives less contamination of the first order, and also provides some indication of the emission lines in second order.

Faint sources

For this discussion, faint sources are those whose count rates are at most comparable to the count rate in the background.

As the noise in faint-source observations is background limited, it is natural to use the clocked mode of the grism, which has a lower background in the upper left of the detector. Usually this benefits the longer wavelengths most, as the blue part of the spectrum still sees a higher background.
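A quick Poisson estimate shows why the background level matters so much for faint sources; `snr` below is just an illustrative helper, not part of UVOTPY:

```python
import math

def snr(source_rate, bkg_rate, exposure):
    """Signal-to-noise of a source in the extraction slit, counting statistics
    only: S*t / sqrt((S+B)*t). For faint sources (S << B) this tends to
    S * sqrt(t / B), so halving the background gains about sqrt(2) in SNR."""
    return source_rate * exposure / math.sqrt((source_rate + bkg_rate) * exposure)
```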

One possibility is observing with an offset (see Using an offset to place the spectrum somewhere else on the detector), placing the spectrum closer to the area of lower background, but this carries the risk of the spectrum falling in the area of reduced sensitivity, or even being lost off the detector.

Spectral Extraction

Centering the extraction slit on a different spectrum

Sometimes the target is fainter than a nearby spectrum, and the optimisation of the extraction slit picks the wrong spectrum. The reason is that the automated extraction searches a slightly larger region for a spectrum, in order to allow for anchor errors. It therefore pays to examine the position of the slit on the image and make sure that you got the desired target.

In some cases I have had to blink the detector image in DS9 against a catalogue of source positions. Lately my preference is the GSC2.3 catalog, set with a filter:

"$Fmag >2 && $Fmag < 15"

Once you know which spectrum in the rotated image is your target, you can re-extract with an offset range for the optimisation, given as the centre pixel y-coordinate and a pixel range. The default position is centred on y=100, so a spectrum higher up, say around y=115, can be extracted by passing the offsetlimit parameter:

offsetlimit=[115,5]

which will search the rotated image between y=110 and y=119.

To force the extraction to centre on a certain y value, like 98 (the value is always a whole number), use a range smaller than 1 (not zero), for example:

offsetlimit=[98,0.1]
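The [centre, range] convention can be summarised with a tiny helper (hypothetical, for illustration only): the first element is the centre y-coordinate of the search band in the rotated image, the second its half-width.

```python
def offset_search_band(offsetlimit):
    """Return the (lower, upper) y-range searched for the spectrum,
    given the [centre, range] pair described above."""
    centre, half_range = offsetlimit
    return centre - half_range, centre + half_range

# [115, 5] searches roughly y=110..120 for the spectrum; a sub-pixel
# range such as [98, 0.1] effectively pins the slit centre at y=98.
```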
Extraction with a smaller aperture

Because the coincidence loss is calibrated for the default aperture, using a smaller aperture than the default is only recommended for faint sources: coincidence loss is seen to change the width and shape of the point spread function. The default aperture is therefore set to a fixed size of 2.5 times the width parameter (sigma) of the gaussian fitted to the spectrum profile normal to the dispersion.

In order to change the aperture, you need to have imported the uvotgetspec module when running the extraction from ipython. This sets a number of parameters, one of which controls the width of the extraction slit. The width in pixels is determined by fitting the profile of the spectrum to derive the gaussian width, which is then multiplied by the trackwidth parameter. The default is:

trackwidth = 2.5

Change the 2.5 value to, for example, 1.5 to get a smaller extraction aperture, and then run the spectral extraction again.
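For a gaussian cross-dispersion profile, the fraction of the flux falling inside a slit of half-width trackwidth*sigma follows from the error function. The little calculation below only illustrates the trade-off (remember that the flux calibration itself assumes the default aperture):

```python
import math

def enclosed_fraction(trackwidth):
    """Fraction of a gaussian profile within +/- trackwidth * sigma."""
    return math.erf(trackwidth / math.sqrt(2.0))

# trackwidth = 2.5 encloses about 98.8% of the flux; trackwidth = 1.5
# keeps about 86.6% while letting much less background into the slit.
```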

Post-extraction processing

Correcting wavelengths errors

Errors in the predicted anchor position cause shifts of the wavelength scale at the anchor. These can get quite large when the anchor position was determined using uvotgraspcorr only. Since the dispersion relation is not completely linear, large wavelength errors affect the wavelengths at the ends of the wavelength range the most. Once the main wavelength error has been corrected by a linear term (a shift), the dispersion relation has to be reapplied and the wavelength scale recalculated.

To do this, I now provide code to post-process the extracted spectrum. Since version 2.0.3 the UVOTPY distribution includes a module uvotspec. The adjust_wavelength_manually program allows interactive adjustment of the wavelengths by sliding the spectrum until the wavelengths are better. The wavelength scale is then recalculated using the dispersion equation with the new anchor position, and the new solution is overplotted. Sometimes the process needs to be repeated, because the main parameter for adjustment is the wavelength shift at the anchor.
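The reason a simple shift of the wavelengths is not enough can be seen with a toy quadratic dispersion relation; the coefficients below are made up for illustration and are not the calibration values:

```python
def wavelengths(pix, anchor_pix, coef=(2600.0, 3.2, 1.0e-4)):
    """Toy dispersion: wavelength (Angstrom) versus pixel distance from the
    anchor, wavelength = c0 + c1*d + c2*d**2 with d = pix - anchor_pix."""
    c0, c1, c2 = coef
    d = pix - anchor_pix
    return c0 + c1 * d + c2 * d * d

# moving the anchor by 10 pixels and recomputing (correct) ...
shifted = [wavelengths(p, 510.0) for p in (300, 500, 700)]
# ... differs from simply adding a constant to the old scale (incorrect),
# and the difference grows away from the anchor
linear = [wavelengths(p, 500.0) - wavelengths(510.0, 500.0) + 2600.0
          for p in (300, 500, 700)]
```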

Summing spectra

There are two possible approaches. If the spectra were taken at the same roll angle, the orientation of the field stars relative to the spectrum will be the same, and one can consider summing the images, followed by extracting the spectrum. A more general method is to extract a spectrum from each exposure and then sum these spectra, weighting by the errors, shifting the spectra so their wavelength scales match, and removing bad parts of each input spectrum from the sum.


The more general method is recommended, and is implemented in the uvotspec.sum_PHAfiles program.
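The steps of the general method can be sketched in a few lines of numpy. `sum_spectra` is a hypothetical helper, not the actual sum_PHAfiles implementation; any wavelength shifts would be applied to the input wavelength arrays beforehand:

```python
import numpy as np

def sum_spectra(wave_grid, spectra):
    """Inverse-variance weighted sum of spectra on a common wavelength grid.
    Each spectrum is (wavelength, flux, error, good_mask): bad parts are
    dropped via the mask, the rest interpolated onto wave_grid and weighted
    by 1/error**2."""
    num = np.zeros_like(wave_grid)
    den = np.zeros_like(wave_grid)
    for wave, flux, err, good in spectra:
        f = np.interp(wave_grid, wave[good], flux[good])
        e = np.interp(wave_grid, wave[good], err[good])
        w = 1.0 / e ** 2
        num += w * f
        den += w
    return num / den
```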

First the spectrum from each exposure needs to be extracted, which leaves a number of pha files (the naming dates back to XSPEC usage). These are then put into a list to feed to the program:

phafiles = ['sw00010511002ugu_1ord_1_f.pha', 'sw00010511002ugu_1ord_2_f.pha',
            'sw00010511004ugu_1ord_1_g.pha', 'sw00010511004ugu_1ord_2_f.pha']

It may be helpful to first apply a wavelength shift using a well-known feature in the spectrum, like the Mg II 2800 line, with the uvotspec.adjust_wavelength_manually or uvotspec.apply_shift programs (see uvotspec), and to mask out undesirable parts of each spectrum while running sum_PHAfiles. (Note that on some graphics backends the plot does not update for each spectrum; wiggling the plot then forces it to update.)

Run interactively, the program will (1) let you select bad regions and (2) ask for shifts of the wavelengths of each spectrum relative to the one chosen as reference.

Alternatively, the wavelength shifts and a list of regions to exclude can be given directly, following this example:

uvotspec.sum_PHAfiles(phafiles,
    wave_shifts=[0,0,0,0,0],exclude_wave=[[[1600,1730],[3350,3450]],[[1600,1730],
    [3370,3420]],[[1600,1730],[3370,3415]],[[1600,1712],[3380,3430]],[[1600,1730],
    [3370,3430]]],objectname='Nova KT Eri 2009',chatter=5,ignore_flags=True)

Here the wavelength shifts are set to zero and the exclusion regions are given explicitly, one entry per input spectrum, while no exclusion regions are taken from the quality flags in the files (ignore_flags=True).


For very faint sources an observation of 5 ks will be spread over several orbits and images, since the maximum exposure time on Swift is not much more than 1500 s. For some objects the spectra cannot be extracted from the individual exposures, and one can attempt to first sum the aligned images, followed by a spectral extraction. This is implemented in the uvotpy.uvotgetspec.sum_Extimage program. Input is a list of the filenames to be processed, just like before.

To prepare the extracted data, run the default extraction on all the files, ignoring where the program thinks the spectrum is located; usually the program will lock on the same alternate source in all the exposures, which guarantees that the anchors will match. The pha files include a small image containing the spectrum.

Next call uvotgetspec.sum_Extimage. The program automatically runs a correlation which lines up the extracted images further. This is usually dominated by the zeroth orders in the images, so it is essential that all inputs were taken at the same roll angle.
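The alignment step amounts to finding the shift at which the images correlate best. A minimal FFT-based sketch, with `find_shift` as a hypothetical helper handling integer shifts only:

```python
import numpy as np

def find_shift(img_a, img_b):
    """(dy, dx) shift that best aligns img_b with img_a, from the peak of
    their circular cross-correlation. Bright zeroth orders dominate the
    peak, hence the requirement of a common roll angle."""
    corr = np.fft.ifft2(np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return int(dy), int(dx)
```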

Transients: Using a later observation as background subtraction

Type II supernovae often show early UV emission, which is often affected by zeroth orders of field stars. A method was developed by Smitka et al. in which one obtains, about a year later and at the same roll angle, a spectral exposure of the same supernova pointing. uvotgetspec.getSpec can be supplied the later observation as a background input; the program will then calculate the required corrections for coincidence loss and also subtract the provided background image to obtain a cleaner net spectrum. The input background image needs special preparation, as described in the Smitka et al. paper; software for this process has been made available by Mike Smitka on GitHub.
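The subtraction itself is conceptually simple once the later-epoch image has been prepared. The helper below is only a sketch and leaves out the coincidence-loss and registration corrections described by Smitka et al.:

```python
import numpy as np

def subtract_template(image, template, exposure, template_exposure):
    """Subtract a later-epoch grism image (same pointing and roll), scaled
    to the transient image's exposure time, to remove field-star orders."""
    return image - template * (exposure / template_exposure)
```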