MMT/Hectospec: data reduction and common problems
Benjamin Weiner, MMT Observatory
The Hectospec fiber spectrograph on the MMT telescope obtains 300
fiber spectra at a time over a 1 degree field of view. This page is
intended to compile some notes on data reduction and issues that
occasionally cause problems in reduction.
Resources:
How to run HSRED
See the HSRED page for
how to run HSRED v2 using a wrapper that performs all the reduction
steps. I also strongly recommend reading the older HSRED
instructions, which explain what the individual steps are doing.
For formatting reasons, I write the names of HSRED IDL procedures in
capitals here, such as HS_EXTRACT, but you don't need to type them
in all caps. These routines can usually be found under the hsred/idl
directory in filenames corresponding to the procedure name, such as
hsred/idl/spec2d/hs_extract.pro. You need to have the hsred directory
in your IDL_PATH environment variable
with a '+' plus sign prefix so that IDL will search
all the subdirectories, like '+/Users/bjw/idl/hsred'.
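You can also set this from inside IDL with the PREF_SET routine
instead of the environment variable (a minimal sketch, reusing the
example path above; '<IDL_DEFAULT>' keeps IDL's own library
directories on the search path):

; store the search path in IDL's preferences so it persists across sessions
pref_set, 'IDL_PATH', '+/Users/bjw/idl/hsred:<IDL_DEFAULT>', /commit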
Common problems
- FITS files that don't belong
- Mix of 270 and 600-line grating data
- Missing skyflats
- No standard stars on config
- Restarting a reduction halfway through
- Detecting completed but problematic data
- Combining data from multiple nights
- Inspecting your spectra
FITS files that don't belong
HSRED will usually attempt to reduce all the *.fits files in your
directory. If there are FITS files that it should not touch, it will
run into problems. Generally, every *.fits data file should have an
accompanying *_map file, a text file that describes the fiber
assignments. If there is no map file, an error will occur in
HS_MAPTOPLUG.
Actions:
- Move all skycam fits files to a subdirectory. See the
prune_skycam_600.sh file linked below.
- Move all extraneous files away. In particular, if you have any
"ring250*.fits" files, remove those.
- You should usually have only bias, comp, dark, domeflat, sflat,
and science images. The science images will have the name of your
configuration. (There may occasionally be images of other fields,
such as a standard star, but typically just your data.) See the IDL
sketch after this list for a way to find files without map files.
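A minimal sketch of that map-file check, assuming (per the naming
convention above) that a data file foo.fits has a map file named
foo_map; tmp_nomap is just a name I chose here, and you should look
over what gets moved before reducing:

; move any *.fits file lacking an accompanying *_map file into a
; tmp_nomap subdirectory so HSRED will not try to reduce it
files = file_search('*.fits', count=nf)
if ~file_test('tmp_nomap', /directory) then file_mkdir, 'tmp_nomap'
for i = 0, nf-1 do begin
   base = strmid(files[i], 0, strpos(files[i], '.fits'))
   if ~file_test(base + '_map') then file_move, files[i], 'tmp_nomap/'
endfor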
Mix of 270 and 600-line grating data
Hectospec has two gratings, 270-line and 600-line. The 270 is more
commonly used. If your directory has a mix of 270 and 600 data, the
reduction will run and may complete, but the data will look
terrible. The most common issue is that there are some 600-line comps
(arc lamps) or flats mixed in with your comps and flats.
Action:
- Run the script prune_skycam_600.sh (linked here) in your data
directory: sh prune_skycam_600.sh. This will find all *skycam.fits
files and move them to a tmp_skycam subdirectory, then look in the
image headers for any files with DISPERSE = 600_gpm and move those
to a tmp_600grating subdirectory. It uses pyraf to list the image
headers to a file.
If you don't have pyraf, you can use anything that looks at the FITS
header of extension 1 or 2 for the DISPERSE keyword to find the files
with 600_gpm; one way to do this in IDL is sketched below. If you are
reducing 600-line data and need to get rid of the 270-line files, just
change all the 600 references to 270.
Note that if there are data with multiple wavelength settings (only
likely with the 600-line grating), you also need to sort those into
one wavelength setting per reduction directory.
Missing skyflats
If there are no skyflats, HSRED will balk and issue an error partway
through the reduction. Sometimes there are no skyflats named
sflat.*.fits in your data directory, for example if the weather was
bad at sunset.
No standard stars on config
If you try to reduce a configuration with the /uberextract option to get flux
calibration, and you don't have any standard stars on the config,
HSRED will complain. If you did put F stars on the config, you need to
add them and their magnitudes to HSRED's list of standard stars in
$HSRED_DIR/etc/standardstars.dat, as described in the older HSRED
instructions.
Action:
- Add the standard stars to standardstars.dat and rerun. If you
didn't have any F stars, don't use /uberextract. Next time, select
some standard stars, e.g. from SDSS use the
SPECTROPHOTOMETRIC_STANDARD flag and some magnitude criteria (about
17th mag?).
Restarting a reduction halfway through
I find that if the pipeline crashes for some reason, you fix the
problem, and then run it again without making other changes, the end
data are often garbled: this may show up as bad sky subtraction, or a
SNR vs magnitude plot with a lot of scatter. One issue is that HSRED
detects whether files have been processed and doesn't redo them; for
example, if you've run HS_CALIBPROC and made all of the files in the
calibration subdirectory, running HS_CALIBPROC again will not
reprocess them. To avoid this, start over from the beginning with a
new rerun number. Your first run will probably be number 0000. You
can start over by giving the optional keyword rerun='0100' (or
'0200', '0300', etc.) to commands such as HS_PIPELINE_WRAP,
HS_CALIBPROC, HS_EXTRACT, and HS_REDUCE1D.
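For example, assuming HSRED's reduction/NNNN output layout (as in
the spHect path in the next section), a quick way in IDL to construct
a rerun number you haven't used yet:

; list existing rerun directories and pick the next rerun number
reruns = file_search('reduction/????', /test_directory, count=nr)
newrerun = (nr gt 0) ? string(max(long(file_basename(reruns))) + 100, format='(I4.4)') : '0000'
print, newrerun   ; e.g. '0100' if only reduction/0000 exists so far
; then pass rerun=newrerun to HS_CALIBPROC, HS_EXTRACT, etc.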
Detecting completed but problematic data
One tool for checking whether your data are sensible is to make a SNR
vs magnitude plot. You can do this by running HS_SNR on the spHect
output file. In IDL, run commands like these:
fname = 'reduction/0000/spHect-2016.1001_1.fits'
lam = mrdfits(fname, 0)       ; wavelength array (extension 0)
object = mrdfits(fname, 1)    ; extracted object spectra (extension 1)
plugmap = mrdfits(fname, 5)   ; plugmap with fiber/target info (extension 5)
hs_snr, lam, object, plugmap, snr=snr
This should pop up a plot with continuum SNR measured from the spectra vs.
magnitudes from your input catalog. If your magnitudes are good,
there should be a decent correlation with modest scatter. For galaxies
with SDSS photometry, I typically see a number like SNR ~ 2-3 at r=21
in about 1-2 hours of Hectospec exposure. If there are
a few outliers, it may indicate problems with some of your targets -
for example, low surface brightness objects will be low SNR outliers
for their magnitude. If it's a total scatterplot with no correlation,
then something may have gone wrong in the reduction. This has happened
to me a couple of times with the mixed 270 and 600-line files, or with
a reduction that crashed halfway through and was restarted without a
new rerun.
Combining data from multiple nights
Sometimes data will be taken for the same configuration across multiple
nights. Generally, I've found that putting all the data together and
reducing it in one chunk does not work well and gives poor results.
The problem may be that the calibration changes
enough that combining the calibrations or applying one calibration
to another night's data works poorly. The better way to proceed is
to reduce each night separately up to and including the HS_EXTRACT
step, which produces the "spHect*.fits" file of extracted spectra.
You should then be able to use the HS_COADD routine to coadd
multiple spHect files into an output file of combined spectra. This
routine is in hsred/idl/spec2d/hs_coadd.pro. Then you
can run HS_TOIRAF to convert it into IRAF multispec format, or
HS_REDUCE1D to find redshifts (see "Inspecting your spectra" below).
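A sketch of this workflow in IDL, with hypothetical night1/night2
directory names; I have left the HS_COADD call itself as a comment,
since you should check hs_coadd.pro for its actual calling sequence:

; collect the extracted spectra from each night's separate reduction
files = file_search(['night1', 'night2'] + '/reduction/0000/spHect-*.fits')
print, files
; then coadd them into one output file -- check hs_coadd.pro for the
; real arguments before running, e.g.:
; hs_coadd, files, ...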
Inspecting your spectra
Now that you have completed a reduction, what to do with it?
The data products from HSRED are described at the HSRED pipeline page.
The reduced spectra are stored in a file named something like
"spHect-configname_2019.1024_1.fits" where configname is the
name of the input catalog file, and the other numbers are observation
date and the configuration number. The format of the spHect file
is similar to SDSS spectra. HSRED v2 also makes an output file
called [something].ms.fits, which is in IRAF multispec format, and a
directory with individual *ms.fits files, one file per fiber. You can
look at these with IRAF's splot or any other spectrum-visualizing
tool. These ms.fits files are created by the pipeline's hs_toiraf.pro
procedure.
The pipeline also tries to fit redshifts to spectra, using code
derived from the SDSS algorithms, which is run by the HS_REDUCE1D
procedure. The results of this fitting
are stored in a file called something like
"spZall-configname_2019.1024_1.fits". These include candidate
redshift fits for multiple templates in a format similar to SDSS.
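You can take a quick look at these in IDL (a sketch; the filename
follows the example above, and since I am only assuming the table
columns follow the SDSS convention, check the actual field names
first):

zall = mrdfits('reduction/0000/spZall-configname_2019.1024_1.fits', 1)
help, zall, /structure    ; list the actual field names in the table
; if the fields follow the SDSS convention (an assumption), then e.g.:
; print, zall.z, zall.class, zall.rchi2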
To plot the spectra and overplot the candidate redshifts for quality
inspection, there is a piece of IDL code called "qplot2," based on
code from Ben Weiner, Christopher Willmer, and Casey Papovich.
You can retrieve this code from qplot2 on github. Read
the Readme.qplot_hotwo file. qplot2 was written to take output from
HSRED v1.x, and it expects a catalog text file made during the setup
of an HSRED v1.x reduction. This means it may complain when run on
data reduced with HSRED v2, but you should be able to run it either
by omitting the catalog file argument or by creating the catalog
file yourself.
Other problems?
If you have a Hectospec data reduction issue that isn't covered here,
please contact me at bjw@mmto.org. If you have suggestions to add to
this page, contributions are welcome!
Benjamin Weiner, bjw@mmto.org
Last modified: Thu May 28 16:37:01 MST 2020