The IRAF package is old.
I've been looking around for more modern software to replace it for CCD data reduction and photometry, but haven't been able to find any.
The closest I've found is the PyRAF tool, but this seems more like a Python wrapper around IRAF rather than a replacement for it.
Is there some new software I might have missed, or is IRAF really the only option even today?
I forgot to mention this, but I'm looking for tools that work under Linux and are free (open source + no charge), if possible. I will not pay (neither for a Windows license nor for a software package) to get rid of IRAF.
I suspect that everything you want and more is available and written in python or has python wrappers.
Astropy, ccdproc, photutils.
AIP4Win has a comprehensive photometry package, and I believe MaxIm DL (though more expensive) has some as well.
I am having the same issue. I've been considering using Python with NumPy, SciPy, and Astropy, supplemented with GNU Octave and PyRAF. I've heard some younger students opt for the proprietary MATLAB in lieu of GNU Octave. My supervisor belongs to the older generation, so I will also have to learn IRAF to communicate effectively with him. A calibration scientist recommended that I use the Herschel Interactive Processing Environment (HIPE), which was developed with the ESA Herschel pipelines (such as PACS and SPIRE) in mind and uses Jython (a hybrid of Python and Java). I am learning to use that too, until I either become proficient in Python or develop some software with a proper GUI of my own…
There is also AstroImageJ, available on Windows, Mac, and Linux: https://www.astro.louisville.edu/software/astroimagej/
Data reduction and photometry without IRAF? - Astronomy
The realization of an automated data reduction pipeline in IRAF: the PhotMate system
Stephen O'Driscoll and Niall J. Smith (Cork Institute of Technology, Ireland)
Developments in imaging technology over the past decade have provided impetus toward the realization of automated data reduction systems within the astronomical community. These developments, in particular advances in CCD technology, have meant that the data envelope associated with even modest observing programmes can reach gigabyte volumes. We describe the development of an automated data reduction system for differential photometry called PhotMate. For issues of reuse and interoperability the system was developed entirely within the IRAF environment and now forms the backbone of our data analysis procedure. We discuss the methodologies behind its implementation and the use of IRAF scripts for the realization of an automated process. Finally, we place the effectiveness of such a system in context by reference to two recent observing runs at the Calar Alto observatory where we tested a new low light level (L3) CCD. It is our belief that this observing campaign is an important indicator of future trends in observational optical astronomy. As the cost of such devices decreases, their usage will increase and with it the volume of data collectively generated, toppling large astronomical projects as the primary data generators.
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
A number of good tutorials for reducing data have been developed over the years, some of which contain links to demonstration data. This Cookbook draws upon many of them. Here are several:
- PowerPoint presentations from the Gemini Data Workshop in Tucson (2010 July, all in PDF) that are applicable to GMOS longslit or MOS spectra:
- User’s Guide to the CCDRED Package by F. Valdes  ,
- The IRAF CCD Reduction Package – CCDRED by F. Valdes  .
- A User’s Guide to CCD Reductions with IRAF by P. Massey  ,
- Rectifying and Registering Images Using IRAF by L.A. Wells  ,
- Cleaning Images of Bad Pixels and Cosmic Rays Using IRAF by L.A. Wells and D.J. Bell.
- Latest version of Gemini IRAF: v1.14 (July 2017)
- Latest version of DRAGONS: v2.1.1 (April 2020)
- Python 2.7 (for PyRAF)
- 32-bit compatibility libraries
- Linux, equivalent to CentOS 6 and above
- Or Mac OS X 10.10 to 10.14.0
- Python 2.7 or 3.6
- Linux, equivalent to CentOS 6 and above
- Or Mac OS X 10.10 and above
- 3.1 You have a question? Start with our FAQ.
- 3.2 You are running into an issue? Please check the Known Scisoft Issues and the Known Problems with the Gemini IRAF package.
- 3.3 You may find useful info in past Data Workshops that include data reduction tutorials.
2018 Volume 35 Issue 03
With the transition from IRAF to a more mainstream programming language (Python) it is important to realize that there has also been a major change in workflow. The traditional IRAF workflow consists of individual tasks that take a FITS file as input, run a data processing algorithm on the input file, and return an updated output file for the user. However, in Python data manipulation is more a flow of operations expressed in regular code than individual tasks that operate on files. You can read a bit more about this in the Introduction section of the STAK notebooks, along with an introduction to FITS I/O.
While the STAK notebook effort was primarily aimed at removing IRAF dependencies in the internal STScI workflow, within the next few years STScI will also slowly phase out active support of its IRAF/PyRAF-based software. We will continue to distribute the AstroConda Python 2.7 environment, which includes PyRAF, but it will be frozen: there will be no dependency updates, bug fixes, or user support.
The end of IRAF support in the community is a challenge best overcome by teamwork. Contributions from the community can be made in various ways. As stated above, additions to the STAK notebooks would be very welcome, but there are other ways to help. Astropy has been the hub of astronomy-tool development in Python for quite a few years now, but there is always more work to be done. The perspective of newer Astropy users is especially valuable, and specific feedback can be communicated through issues on the GitHub repository. Even more helpful are contributions (via GitHub pull requests) from the community, even if it's a small change to make the documentation more clear. There is currently active development on tools for spectroscopy, photometry, and world-coordinate systems underway within the Astropy community, and now is a good time to test early releases and request features or suggest changes.
We would like to thank all the staff members of the institute who participated in this effort and provided feedback on the STAK notebooks. This project could not have been a success without the teamwork shown by the institute staff. We would also like to thank Cristina Oliveira for acting as our INS liaison during this project.
The previous section listed some of the instrumental effects which must be corrected during the reduction of CCD data. The reduction procedure is non-trivial and it must be carried out carefully if it is not to go awry. The various stages typically involved in CCD data reduction are:
(1) read your data from disk or tape,
(2) convert your data to a format compatible with your software,
(3) inspect the original images and discard those that are faulty,
(4) flag all the known faulty pixels as 'bad' or replace them with invented, reasonable values,
(5) create master bias and dark images for subsequent use in removing the dark and bias signal from raw images of target astronomical objects,
(6) for each filter, create a master flat-field frame defining the pixel-to-pixel sensitivity variations and then flat-field each of the images,
(7) for each filter, align and add the individual images of each target astronomical object to produce a master image of the object.
You would normally carry out the stages in the order listed, as you progress from copies of the raw images to reduced images. However, in some cases some of the stages can be omitted. The recipes given in Part II together constitute an example of working through these stages.
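As an illustration only, stages (5) to (7) can be expressed in a few lines of NumPy. The frame sizes and signal levels below are invented, and real frames would need registering before stacking:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (32, 32)

# Stand-in frames: bias level ~100 ADU, flats ~5000 ADU, sky ~200 ADU.
biases = [rng.normal(100.0, 2.0, shape) for _ in range(10)]
flats  = [rng.normal(5000.0, 50.0, shape) + 100.0 for _ in range(5)]
raws   = [rng.normal(100.0, 2.0, shape) + 200.0 for _ in range(3)]

# Stage (5): master bias by median combination.
master_bias = np.median(biases, axis=0)

# Stage (6): master flat from bias-subtracted flats, normalised to unit mean.
master_flat = np.median([f - master_bias for f in flats], axis=0)
master_flat /= master_flat.mean()
calibrated = [(r - master_bias) / master_flat for r in raws]

# Stage (7): stack the (already aligned) frames by averaging.
stacked = np.mean(calibrated, axis=0)
```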
6.1 Software available
The principal Starlink software for reducing CCD images is CCDPACK (see SUN/139). It provides extensive specialised facilities for reducing CCD data. A considerable advantage of CCDPACK is that it is optionally able to estimate and propagate an error estimate for each individual pixel in the CCD frames through the data reduction process. CCDPACK should be used in conjunction with the KAPPA package (see SUN/95), which provides general image display, examination and processing facilities. KAPPA and CCDPACK are completely and seamlessly inter-operable and, indeed, intended to be used together. It is possible to make a reasonable attempt at reducing CCD data using KAPPA alone, though it is less convenient and gives poorer results than CCDPACK.
The image processing and spectroscopy package Figaro (see SUN/86  ) includes some facilities for reducing CCD data, though these are less comprehensive than CCDPACK. Again, Figaro is inter-operable with KAPPA and CCDPACK. In addition to CCDPACK, KAPPA and Figaro there are various other Starlink packages which are relevant to some aspects of CCD data reduction and CCD photometry. The various packages available, their uses and their inter-relations, are conveniently summarised in the ‘Road-Map for CCD Photometry’  which has appeared in the Starlink Bulletin.
The early stages of reducing spectra recorded with CCD detectors are similar to those for direct images, though the later stages differ. There is a thorough and accessible introduction to reducing spectroscopic data in the cookbook SC/7: Simple Spectroscopy Reductions  .
6.2 Data formats
Virtually any format might be used to initially write observations to magnetic disk following their acquisition at the telescope. The choice will simply be whatever is practical and convenient for the observatory concerned. Similarly, most software packages for reducing astronomical data have a preferred or ‘native’ format on which they operate. For most Starlink software it is the NDF ( n -dimensional Data Format see SUN/33  ) and for IRAF it is the OIF (Old Image Format). However, most well-established packages are able to import data in various different formats and, in some cases, may be able to process data which are not in their native format, albeit with some loss of efficiency.
This proliferation of different and incompatible data formats is no longer a substantial problem. The FITS format is ubiquitous in astronomy for transferring data between institutions and between software packages. Howsoever the data were originally written when they were acquired at the telescope they will almost invariably be exported from the observatory in the FITS format. That is, the magnetic tape cartridge with which you return from your observing run will almost always contain FITS files. Similarly, observations extracted from data archives are likely to be in FITS format. All the major packages for reducing astronomical data can import files in the FITS format.
The original FITS (Flexible Image Transport System) format was proposed by Wells et al. in 1981. However, it has been developed and enhanced over the years. The FITS standard is now maintained and documented by the FITS Support Office of the Astrophysics Data Facility at the NASA Goddard Space Flight Center (see http://fits.gsfc.nasa.gov/fits_home.html). Though FITS is basically an astronomical format it is sometimes mentioned in books about standard image formats. See, for example, Graphics File Formats by Kay and Levine. The development of the FITS standard since its inception has recently been reviewed by Wells.
FITS was originally a standard for files on magnetic tape. However, nowadays it is just as often used as a format for files on magnetic disk. Its primary rôle is the interchange of data between different institutions and software packages, though some packages can process data in the FITS format directly.
Even a brief description of the FITS format is not appropriate here (if you are interested you can retrieve a document prescribing the standard for the FITS format from the FITS Support Office). However, a few comments might be useful. A FITS file is a sequence of records, each of which must be exactly 2880 bytes long. Two types of information are included in a FITS file: the basic data (comprising the image or spectra read from the CCD or whatever) and header information describing and annotating it. Typical header information for an observation might be the instrument and telescope used, the date and time of observation, details of the instrumental set up etc. In the jargon of computer science such header information is often called metadata, though this term is rarely used in astronomy.
A given record may contain header information or data but not both. A record of header information is divided into thirty-six eighty-byte ‘logical records’ (older readers will recognise these as card images). The data are often stored as binary numbers, but the header records always comprise ASCII characters. Header records can occur throughout the file, though there is always at least one at its start.
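The record layout just described can be demonstrated in a few lines of Python. The cards below are schematic (a real FITS value field is right-justified to fixed columns, which is glossed over here):

```python
# One 2880-byte FITS header record = 36 "card images" of 80 ASCII chars each.
def card(text):
    return text.ljust(80)

cards = [
    card("SIMPLE  =                    T / conforms to FITS standard"),
    card("BITPIX  =                    8"),
    card("NAXIS   =                    0"),
    card("COMMENT A free-text annotation intended for human readers."),
    card("END"),
]
record = "".join(cards).ljust(2880).encode("ascii")   # pad record with spaces

# Reading it back: slice into 80-byte cards, split keyword from value.
for i in range(0, 2880, 80):
    c = record[i:i + 80].decode("ascii")
    if c.startswith("END"):
        break
    if "=" in c and not c.startswith("COMMENT"):
        keyword, value = c.split("=", 1)
        print(keyword.strip(), "->", value.split("/")[0].strip())
```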
The figure lists the first few header records from a FITS file. The details are not germane here (and, indeed, the example is not typical: it is for a FITS file which contains no array of data). However, it illustrates the important point that there are two types of FITS header records: keywords and comments.

Keywords: a keyword record consists of a named keyword, an equals sign, the value of the keyword and, optionally, a comment. For example, in the figure the keyword SIMPLE has the value 'T' (for true in this instance) and the keyword BITPIX has the value 8. There are some additional rules about the position and length of these items, but they are not important here. Keywords are the principal mechanism used to associate auxiliary information with a dataset. Programs which process FITS files will often search the file for appropriately named keywords to give them the information that they need. The keywords in the figure are mandatory (their meanings are not important here). Others, if present, have a specified meaning.

Comments: a comment header record starts with the string 'COMMENT' and the rest of the record consists of free text which is intended to be read by a human. Typically it is used to annotate the dataset.
A consequence of the header information always being ASCII characters, and some always occurring at the start of the file, is that it is possible to examine it with the Unix command more. The resulting display is perfectly readable, though perhaps not very æsthetic. This technique works best with a window which is eighty characters wide. A disadvantage of using more is that it is usually not practical to examine any header information which does not occur at the start of the file. Most data reduction packages have more sophisticated means for examining FITS header information: see, for example, the recipes and scripts later in this cookbook. These mechanisms usually allow you to examine header information which is embedded in the file as well as that at the front.
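With modern tools the same examination is a one-liner. A sketch using astropy.io.fits, with invented keyword values:

```python
from astropy.io import fits

# Build a small header in memory; with a real file you would use
# fits.getheader("myimage.fits") instead.
hdu = fits.PrimaryHDU()
hdu.header["OBJECT"] = ("M51", "name of the target")        # invented values
hdu.header["EXPTIME"] = (120.0, "exposure time in seconds")

print(repr(hdu.header))        # shows every card, not just the first record
print(hdu.header["EXPTIME"])
```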
6.3 Illustration of data reduction
Despite marked improvements in CCD manufacturing techniques in recent years, the bias and pixel-to-pixel sensitivity variations found in raw CCD images are not small effects. This section briefly illustrates the effects and shows the improvement which careful calibration can yield. The images used here were generated by Matthew Trewhella using the WIRO CCD operating in the K band in the near infrared. The images are all of 128 × 128 pixels and were taken as part of a project to mosaic the M51 galaxy and its companion NGC 5195. Several hundred images were taken and they were all reduced, aligned and stacked using CCDPACK.
Figure 5 shows a raw, unprocessed target image, as read out from the CCD. An astronomical object is visible in the middle of the right-hand edge of the image, but any other features are swamped by the instrumental signature.
Figures 6 and 7 are respectively a bias and a flat-field frame. They clearly show the origin of the instrumental effects seen in Figure 5. For the sake of clarity the contrast of the flat-field frame has been enhanced using a histogram-equalisation technique in order to make subtle changes in intensity easier to see.
Figure 8 shows the final mosaic of M51. The boxed area indicates the part of the mosaic to which the raw image shown in Figure 5 contributed. Over 2 Gb of raw images, darks, flats and biases were used to construct this mosaic.
Copyright © Council for the Central Laboratory of the Research Councils
The 2-D CCD Data Reduction Cookbook
This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct un-processed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites.
The topics covered include: creating and applying bias and flat-field corrections, registering frames and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images and calculating image statistics are also presented.
In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
Who Should Read this Cookbook?
This cookbook is aimed firmly at people who are new to reducing CCD observations. Typical readers might have a set of CCD observations to reduce (perhaps for the first time) or be planning to observe with a CCD camera. No prior knowledge of CCD data reduction is assumed.
We are grateful to Mike Lawden, Peter Draper, Rodney Warren-Smith and, particularly, Malcolm Currie, who all provided many useful comments on the draft version of this cookbook. Additional thanks go to Peter Draper for the examples, text and diagrams borrowed from SUN/139, the CCDPACK manual.
Thanks are due to Mr I. Morgan, Mr Gareth Leyshon and Dr Steve Eales for providing a number of scripts and to Dr Gerry Luppino of the Hawaii IfA for allowing us to use Figure 1. The data from Walter Jaffe's CD-ROM Astronomical Images are used with the permission of the author and publisher. Karen Moran kindly unearthed the contact details for Twin Press.
Any mistakes, of course, are our own.
 D.S. Berry, 1996, SUN/183.3: ARD — A Textual Language for Describing Regions within a Data Array (Starlink).
 C. Buil, 1991, CCD Astronomy – Construction and Use of an Astronomical CCD Camera (Willmann-Bell: Richmond, Virginia). Translated by E. Davoust. Originally published in French as Astronomie CCD, Construction et Utilisation des Cameras CCD en Astronomie Amateur in 1989.
 M.J. Clayton, 1998, SC/7.2: Simple Spectroscopy Reductions (Starlink).
 M.J. Currie, G.J. Privett, A.J. Chipperfield, D.S. Berry and A.C. Davenhall, 2000, SUN/55.14: CONVERT — A Format-conversion Package (Starlink).
 A.C. Davenhall, 1998, Starlink Bulletin, No. 20, pp17-19. In the first instance see your site manager for back issues of the Starlink Bulletin. Alternatively, a version of this article is available by anonymous ftp from Edinburgh. The details are: ftp site ftp.roe.ac.uk , directory /pub/acd/misc , file roadmap.ps . The file is in PostScript format.
 P.W. Draper, 1995, Starlink Bulletin, No. 16, pp6-7. In the first instance see your site manager for back issues of the Starlink Bulletin.
 P.W. Draper, M.B. Taylor and A. Allan, 2000, SUN/139.13: CCDPACK — CCD Data Reduction Package (Starlink).
 P.W. Draper and N. Gray, 2000, SUN/214.8: GAIA — Graphical Astronomy and Image Analysis Tool (Starlink).
 S.B. Howell (ed), 1992, Astronomical CCD Observing and Reduction Techniques, Astronomical Society of the Pacific Conference Series, 23.
 W. Jaffe, 1998, Astronomical Images (Twin Press: Vledder) CD-ROM. To contact Twin Press send an electronic mail message to G. Kiers ( [email protected] ).
 D.C. Kay and J.R. Levine, 1995, Graphics File Formats, second edition
(Windcrest/McGraw-Hill: New York). See in particular Chapter 18, pp235-244.
 C.R. Kitchin, 1998, Astrophysical Techniques, third edition (Institute of Physics: Bristol and Philadelphia). First edition published in 1984.
 P. Massey, 1997, A User’s Guide to CCD Reductions with IRAF (National Optical Astronomy Observatories: Tucson). See SG/12 op. cit. (  ) for details of obtaining IRAF manuals. Retrieve file ccduser3.ps.Z
 I.S. McLean, 1989, Electronic and Computer-Aided Astronomy – From Eyes to Electronic Sensors (Ellis Horwood: Chichester).
 I.S. McLean, 1997, Electronic Imaging in Astronomy – Detectors and Instrumentation (Wiley: Chichester and New York). Published in association with Praxis in the Wiley-Praxis series in Astronomy and Astrophysics.
 R. Morris and G.J. Privett, 1996, SUN/166.4: SAOIMAGE — Astronomical Image Display (Starlink).
 R. Morris, G.J. Privett and A.C. Davenhall, 1999, SG/12.2: IRAF — Image Reduction Analysis Facility (Starlink).
 M.V. Newberry, 1995, CCD Astronomy, 2, No. 1, pp18-21.
 M.V. Newberry, 1995, CCD Astronomy, 2, No. 3, pp12-14.
 M.V. Newberry, 1996, CCD Astronomy, 3, No. 1, pp18-21.
 J. Palmer and A.C. Davenhall, 2001, SC/6.4: The CCD Photometric Calibration Cookbook (Starlink).
 K.T. Shortridge, H. Meyerdierks, M.J. Currie, M.J. Clayton, J. Lockley, A.C. Charles, A.C. Davenhall, M.B. Taylor, T. Ash, T. Wilkins, D. Axon, J. Palmer and A. Holloway, 2001, SUN/86.18: FIGARO — A General Data Reduction System (Starlink).
 F. Valdes, 1990, User’s Guide to the CCDRED Package (National Optical Astronomy Observatories: Tucson). See SG/12 op. cit. (  ) for details of obtaining IRAF manuals. Retrieve file ccdguide.ps.Z
 F. Valdes, 1990, The IRAF CCD Reduction Package – CCDRED (National Optical Astronomy Observatories: Tucson). See SG/12 op. cit. (  ) for details of obtaining IRAF manuals. Retrieve file ccdred.ps.Z
 G. Walker, 1987, Astronomical Observations – An Optical Perspective (Cambridge University Press: Cambridge).
This manual deals with the procedures involved in extracting stellar magnitudes from CCD images obtained at the Gettysburg College Observatory (GCO) or the National Undergraduate Research Observatory (NURO) using the IRAF (Image Reduction and Analysis Facility) command language, the standard image processing and analysis program used by professional astronomers. IRAF was developed at the National Optical Astronomy Observatories in Tucson, Arizona, and is available over the web at the IRAF website.
From Raw Images to Magnitudes
Images taken with the Gettysburg College telescope are 1k x 1k FITS images produced with the PMIS operating system and a Photometrics CH250 thermoelectrically-cooled camera. Images taken at NURO are 512 x 512 images from a Photometrics liquid-nitrogen-cooled camera. There are two ways to get images with the NURO telescopes. If they were taken with the PMIS operating system at NURO, they will, like the Gettysburg images, be in FITS format. If they were taken using the IRAF Control Language interface to the NURO camera, they will be in IRAF format, which consists of two separate files, one containing image headers (.imh) and the other (invisible to the user) containing the pixels.
There are two elements to the data reduction. The first is processing the images. The second is measuring the processed images to extract magnitudes.
Processing the images follows this logic. During the course of the evening, one takes images of the objects one wants to study. These are called "object" files, but they need to be processed to remove spurious patterns introduced in the CCD camera by electronic readout noise, thermal electrons, and pixel-to-pixel sensitivity variations. To remove these effects, one takes a large number of calibration images during the course of the evening. There are three types of calibration images. (1) Zero (or "bias") frames: typically 20 to 40 of these are taken each evening to establish the pattern of readout noise across the chip. They are taken with zero exposure time; the chip is just flushed and read out. (2) Dark frames: typically 20 to 40 images are taken each evening, with 120-second integration times, but with the shutter closed. This establishes the pattern and rate at which thermal electrons accumulate in the CCD chip. Since dark current is negligible with the nitrogen-cooled camera at NURO, we seldom take dark frames there, but we always take them at GCO. (3) "Flat field" images: usually 5 or 10 images through each filter (U, B, V, R, and/or I) are taken while pointing at blank sky near sunset or sunrise. Alternately, a uniformly illuminated screen may be used. The purpose is to determine the pixel-to-pixel pattern caused by sensitivity variations of the pixels or shadowing ("vignetting") by telescope optics or dust particles.
We want to apply these calibrations to each object file. To do this, we use an IRAF command called zerocombine to create a master Zero frame by averaging together all the individual zero frames. Then (only at GCO, where we take dark frames) we use a program called darkcombine to create a master Dark frame by averaging together all the individual dark frames; the program automatically subtracts the master Zero frame from each raw dark frame before averaging. We next use a program called flatcombine to create master Flat frames for each filter, again by averaging together the individual flat-field images, after first subtracting the master Zero frame and the Dark frame (scaled to the exposure time of the Flat). At NURO we also trim the images to a smaller size than that created by the camera, and use the trimmed-off region, called the "overscan", to remove large-scale patterns from the readout background. Once this is set at the startup of IRAF, however, the "trim" and "overscan" functions are automatic and need not concern us under ordinary circumstances.
When the master Zero, Dark and Flat frames are ready, we then run a program called ccdproc that automatically subtracts the Zero and Dark (scaled to exposure time) from each object file, and divides by the appropriate Flat for the filter used. Voila! The object files, zero-, dark-, and flat-corrected, are ready to be measured. The overall logic of this is shown in Figure 1, at the end of this document.
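The arithmetic performed by zerocombine, darkcombine, flatcombine, and ccdproc reduces, in essence, to the following NumPy sketch. All frame values, exposure times, and counts here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (32, 32)

# Synthetic calibration sets: bias level ~100 ADU, dark ~12 ADU per 120 s.
zeros = [rng.normal(100.0, 2.0, shape) for _ in range(20)]
darks = [rng.normal(100.0, 2.0, shape) + 12.0 for _ in range(20)]      # 120 s each
flats = [rng.normal(8000.0, 80.0, shape) + 106.0 for _ in range(10)]   # 60 s flats

master_zero = np.mean(zeros, axis=0)                               # zerocombine
master_dark = np.mean([d - master_zero for d in darks], axis=0)    # darkcombine
dark_rate = master_dark / 120.0                                    # ADU per second

# flatcombine: zero- and dark-subtract each flat, average, then normalise.
master_flat = np.mean([f - master_zero - dark_rate * 60.0 for f in flats], axis=0)
master_flat /= master_flat.mean()

# The ccdproc step for one 300 s object frame containing a ~500 ADU source:
obj = rng.normal(100.0, 2.0, shape) + dark_rate * 300.0 + 500.0
reduced = (obj - master_zero - dark_rate * 300.0) / master_flat
```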
There are many ways to extract magnitudes from a CCD image. Let us first be clear what we mean by magnitudes. The images of stars on our processed frames, in profile, look like a Gaussian or bell-shaped curve (see Figure 2). We extract magnitudes from the images by measuring the area under the curve, which represents light from both the star and the sky background, subtracting out the sky background, dividing by the exposure time, and taking the log of the result. The final product is a number that represents the brightness of the star---it's a magnitude. (Technically speaking it's an "instrumental magnitude", since it's not referred to a standard star in the sky, but just calculated from the counts you get in your CCD.)
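That recipe for an instrumental magnitude can be written out directly. Every number below is made up for illustration:

```python
import math

# One aperture measurement; all values are invented.
total_counts = 125000.0     # counts inside the aperture (star + sky)
sky_per_pixel = 50.0        # mean sky level per pixel
n_pixels = 300              # pixels inside the aperture
exptime = 120.0             # seconds

star_counts = total_counts - sky_per_pixel * n_pixels   # subtract the sky
mag_inst = -2.5 * math.log10(star_counts / exptime)     # instrumental magnitude
print(round(mag_inst, 2))
```

Note the result is only an instrumental magnitude; tying it to a standard system requires observing standard stars.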
Two IRAF commands we commonly use to extract magnitudes are imexamine and apphot . Both commands essentially let you point the cursor at a star, push a mouse button, and print out an instrumental magnitude. For quick-look, moderate-precision measurements we use imexamine , but if we want to ensure the best results we use apphot , which allows the user to set the measuring parameters interactively, ensuring that the proper sky background is chosen, that adjacent stars don't contribute to the measured brightness, that a sufficient amount of the star image is sampled, etc. We describe the use of imexamine and apphot in a write-up by Bentley Laaksonen, included as Appendix IX.
IRAF is started differently on different machines, but once started you should see the "command language" prompt.
You enter commands just as you do on any Unix or DOS machine. A list of IRAF commands we find useful is given in Appendix I, but we will assume you know a bit about computers and can find information on specific IRAF commands by typing help <command> . IRAF commands always have a "parameter" file associated with them, which sets the operating parameters for the command. You need to learn how to set these parameters for your particular needs, using the epar (edit parameters) command. We will give some hints for what parameters to set for good results, but you may find your own particular recipe that works better.
Initially, when you start IRAF, you should run the command setinstrument, which will set up some of the fundamental parameters you need for your observing site. At GCO, the computer responds to this command with:
Instrument ID (type ? for a list) (gco):
To accept the default, gco, you just press return. At NURO you can type ? to see the list, and pick the obvious choice, the NURO CCD camera.
You will find yourself immediately in the editor for several of the parameter files needed for data reduction. The first is the parameter file for the CCD reduction system, ccdred . Usually you don't have to edit this (See Appendix II). To accept the choices, just push the ESC key, type a colon (:), and then type q (or wq). This is the standard procedure for exiting the editor.
You will find yourself next in the parameter editor for the program ccdproc , which is the master program that trims your images to size, applies zero, dark, and flatfield corrections if needed, etc. Make sure all the parameters are set to the proper values for your site and your purpose. Sample ccdproc parameter settings for GCO and NURO are shown in Appendix III and IV. The important things to note are that you need to specify the names (and directories) you will use for your Zero, Dark, and Flat field images, and you need to tell the program which processing steps (trim, zero, dark, flat, etc) it should perform on your object files.
When you have edited all the ccdproc commands that need setting, you can then type Esc :q to get back to the cl> prompt. You are ready to go.
STEPS IN PROCESSING IMAGES
0) Conversion to standard FITS format
This step is needed only at NURO, and only when taking images using PMIS, to make sure images are in standard FITS format. Simply type in :
!fitscon -nuro <FITS image file names>
to convert the raw data files to standard FITS format. (The ! is needed only if you're executing the fitscon command from IRAF. If you've got a separate unix command shell window open, the ! can be omitted when issuing the command from unix.)
1) Conversion from FITS to IRAF internal format
This step is not needed if the images were taken using the IRAF Control Language interface at NURO, which produces images directly in IRAF format. Otherwise, your images will be in FITS format, with headers (containing image information like exposure time) and pixels together in one file. To work on the images we need to split the image headers from the pixels, using the IRAF command rfits . All FITS files to be processed by IRAF must first be converted to IRAF format as described below.
Using the command is simple. Just type rfits . The computer will ask for an input file name. You type the name in, e.g. myimage.fts . The computer then asks for an output name. Since IRAF will add a .imh extension (meaning "image header") to whatever you type, you simply type myimage for the output file name. The computer will then extract two files from the original FITS file: the image header, named myimage.imh , and the pixel file, myimage.pix . You probably won't see the .pix file on your directory, since IRAF usually stores it elsewhere on the disk. The header knows where the pixels are, and you don't have to worry about it.
If you're converting lots of files, which is usually the case, IRAF lets you take lists from a data file. You can make two files, one containing the FITS file names and one containing the output names. I usually call them fitlist and imlist. At the top of the next page we list two sample files:
When IRAF prompts you for an input name you type
@fitlist
And when it prompts you for an output name you type
@imlist
The result will be the extraction of headers and pixels from the five files named myimage?.fit , and the appearance on your directory of five files named myimage?.imh.
Note that the "at" sign, @, is essential. That indicates that input comes from the file. Otherwise IRAF thinks that fitlist is the name of the file you want converted, and that imlist.imh is to be the output name.
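If you have many files, the two list files can also be generated with a short script instead of typing them by hand. The following Python sketch is only an illustration (the file names myimage*.fit are hypothetical, and the scratch directory stands in for your data directory): it writes a fitlist of input FITS names and an imlist of output root names, letting IRAF append .imh itself.

```python
import tempfile
from pathlib import Path

# Work in a scratch directory with a few sample (empty) FITS files;
# the names are hypothetical stand-ins for your real data.
workdir = Path(tempfile.mkdtemp())
for name in ["myimage1.fit", "myimage2.fit", "myimage3.fit"]:
    (workdir / name).touch()

fits_files = sorted(workdir.glob("*.fit"))

# fitlist holds the input FITS names; imlist holds the output roots
# (IRAF adds the .imh extension itself).
with open(workdir / "fitlist", "w") as f_in, open(workdir / "imlist", "w") as f_out:
    for path in fits_files:
        f_in.write(path.name + "\n")   # e.g. myimage1.fit
        f_out.write(path.stem + "\n")  # e.g. myimage1
```

You would then answer the rfits prompts with @fitlist and @imlist as above.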
2) Making a master zero frame
First make sure you have all your "bias" or "zero" frames in IRAF format. Say they're named bias01.imh , bias02.imh , etc. Then make sure the parameters for the zerocombine command are set (see Appendix V for a sample), using the command epar zerocombine . You can name the output zero file whatever you want, but make sure it's the same name you set earlier in the ccdproc parameter file, or edit the ccdproc "zero level calibration image" parameter later to match this name. When the parameters are set properly, exit the editor using Esc :wq and then simply type
zerocombine
The computer will prompt you for your input file names. You could type bias*.imh in this case, or give it the name of a file that has the list of zero images on it, e.g. @imlist .
The computer now reads all your zero frames, averages them, and outputs the results in the file Zero.imh, which is now your master Zero image.
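The arithmetic behind zerocombine is just a pixel-by-pixel average. The Python sketch below is not IRAF code, only an illustration with synthetic data (the bias level, read noise, and frame size are invented); it shows why the combined frame is smoother: averaging N frames beats down the random read noise by roughly the square root of N.

```python
import numpy as np

# Simulate 10 bias frames: a constant bias level plus random read noise.
# All numbers here are made up for illustration.
rng = np.random.default_rng(0)
bias_level = 100.0
read_noise = 5.0
bias_frames = [bias_level + rng.normal(0.0, read_noise, (32, 32))
               for _ in range(10)]

# zerocombine's core step: pixel-by-pixel average -> the master Zero image.
master_zero = np.mean(bias_frames, axis=0)
```

The pixel-to-pixel scatter of master_zero comes out well below that of any single bias frame, which is exactly what the "looks smoother" check in the next paragraph verifies by eye.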
If you want to make sure the image came out OK, type
imstat *.imh
to check the statistics of each image. Or display the image on the screen to make sure that Zero.imh looks smoother than any of the individual bias*.imh files.
3) Making a master Dark Frame (only at GCO)
The procedure is exactly the same as for zerocombine . Use rfits to make sure all your dark frames are in IRAF format. Edit the darkcombine parameters to meet your needs (see Appendix VI for a sample), making sure to give the master output a name, say Dark.imh , that matches your ccdproc dark image name. Then just type
darkcombine
and specify the file names or a list.
The computer reads all the dark frames, subtracts the master zero frame you created in the last step, and averages them together into a master dark frame, e.g. Dark.imh.
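Again as an arithmetic sketch only (not IRAF code, and with all values invented), darkcombine amounts to subtracting the master zero from each dark frame and then averaging:

```python
import numpy as np

# Synthetic stand-ins: a flat master zero and 5 dark frames containing
# the zero level plus dark current plus noise. All numbers are made up.
rng = np.random.default_rng(1)
master_zero = np.full((32, 32), 100.0)
dark_current = 20.0
dark_frames = [master_zero + dark_current + rng.normal(0.0, 5.0, (32, 32))
               for _ in range(5)]

# darkcombine's core step: subtract the master zero, then average.
master_dark = np.mean([d - master_zero for d in dark_frames], axis=0)
```

The result is a master dark frame containing only the dark current, ready to be subtracted from object frames.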
4) Making master Flat fields
Again, the procedure is the same as for zerocombine and darkcombine . Use rfits to make sure all your flats are in IRAF format. Edit the flatcombine parameters to meet your needs (see Appendix VII). Note that the IRAF program reads your headers to determine the filters, and will only combine flats taken through the same filter. Thus the name you give as an output file name is a "root" name, to which the filter name will be added when the master flat is created. For instance, when you specify a root name "Flat", IRAF will name the output files FlatB, FlatV, FlatR, FlatI, etc., depending on the filter.
Then just type
flatcombine
and then, when asked, give the input file names or a list.
IRAF then reads all the flatfield images, subtracts Zero and Dark from each, averages together those taken in the same filter, and produces a master flat for each filter. A single flatcombine command, wonder of wonders, can produce a whole set of master flats, e.g. FlatB.imh, FlatV.imh, FlatR.imh, and FlatI.imh .
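The grouping logic can be sketched as follows; this is an illustration only, with the image headers mocked as plain Python dicts and every pixel value invented:

```python
import numpy as np

# Synthetic flats: each "image" is a dict with a FILTER keyword (as read
# from the header) and a data array. All values are made up.
rng = np.random.default_rng(2)
master_zero = np.full((8, 8), 100.0)
flats = [
    {"FILTER": "V", "data": master_zero + 5000 + rng.normal(0, 50, (8, 8))},
    {"FILTER": "V", "data": master_zero + 5000 + rng.normal(0, 50, (8, 8))},
    {"FILTER": "R", "data": master_zero + 7000 + rng.normal(0, 50, (8, 8))},
]

# flatcombine's logic: subtract the zero, group by filter, average each
# group, and name the result with the root "Flat" plus the filter name.
master_flats = {}
for filt in {f["FILTER"] for f in flats}:
    group = [f["data"] - master_zero for f in flats if f["FILTER"] == filt]
    master_flats["Flat" + filt] = np.mean(group, axis=0)
```

One pass over the input list thus yields one master flat per filter present in the headers.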
Check these images out using the display command to make sure your flats look reasonable. This is a matter of judgment, and as you get more experience in processing you'll learn what to look for.
5) Processing your object files
The last step is easy. Make sure any object files you want to process are in IRAF format. Assuming that the ccdproc parameters are set properly (see the section on starting IRAF), all you have to do is type
ccdproc *.imh
and IRAF will completely process any images on your directory. It knows which ones have already been processed, so you simply have to type the general command above to keep up with your data as it comes in. If you get any errors saying it can't find a flat-field, dark, or zero image, make sure that the names for these files in the ccdproc parameter file match the actual master files you have on your directory. You can edit the ccdproc parameter file with the command epar ccdproc. The learning curve here is short, and it's easy to process raw images now with just a few keystrokes.
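The core correction that this step applies to each object frame can be sketched in a few lines of Python with synthetic arrays (all values invented, and trimming omitted): subtract zero and dark, then divide by the flat scaled to unit mean, so that a less-sensitive pixel is boosted back to the true signal level.

```python
import numpy as np

# Synthetic master calibration frames; all numbers are made up.
master_zero = np.full((8, 8), 100.0)
master_dark = np.full((8, 8), 20.0)
master_flat = np.full((8, 8), 5000.0)
master_flat[0, 0] = 4000.0  # one less-sensitive pixel

# A fake raw object frame: zero + dark + a uniform 1000-count sky/source
# signal modulated by the pixel-to-pixel sensitivity pattern.
flat_norm = master_flat / master_flat.mean()  # flat scaled to unit mean
raw = master_zero + master_dark + 1000.0 * flat_norm

# The calibration applied to each object frame:
corrected = (raw - master_zero - master_dark) / flat_norm
```

After the division the sensitivity pattern cancels and every pixel recovers the true 1000-count signal, which is why a well-processed frame shows a smooth background.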
You can display the images to check if the pattern noise has been removed and the background looks relatively smooth. And you are now ready to measure magnitudes using imexamine or apphot , as described in the attached paper (Appendix IX) by Ben Laaksonen.
6) Converting your processed files to FITS format for storage or transfer (optional)
Because IRAF format consists of separate image headers and pixel files (with a pointer in the header pointing to the pixel files), you may want to convert your images back to FITS format (which has the header and pixel information all together in one file) before you send them over the internet or put them on tape. To do this, just use the wfits command, which works just like rfits . You can convert files one by one, or use lists as input and output. The parameter file for wfits that we use is shown in Appendix VIII. At GCO we distinguish between raw FITS images and processed images by using a .fit extension for all raw images from the CCD and a .fts extension for all processed images.
You may want to compress the images before storing them or ftp'ing them over the internet. The command compress will do this, creating a file with an added extension .Z , which takes up considerably less disk space.
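On systems without the classic compress utility, gzip does the same job, producing .gz rather than .Z files. A Python sketch, with an invented file name and fake file contents standing in for a real processed image:

```python
import gzip
import shutil
import tempfile
from pathlib import Path

# Create a fake "processed image" file to compress; the name and
# contents are made up for illustration.
src = Path(tempfile.mkdtemp()) / "myimage.fts"
src.write_bytes(b"SIMPLE  =                    T" + b" " * 2850)

# Compress it to myimage.fts.gz, analogous to compress producing a .Z file.
dst = src.with_suffix(src.suffix + ".gz")
with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)
```

The compressed copy can be expanded back to an identical file on the other end before reading it into IRAF.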
Data Processing Software
Gemini offers data reduction software for its facility instruments. The Gemini IRAF package and the DRAGONS platform are the official data reduction software supported by Gemini.
For the next several years we will be transitioning from the IRAF platform to our new Python-based DRAGONS platform. As time goes by, more and more instruments and modes will be supported by DRAGONS. During this transition, it is possible that users will require both platforms depending on the data they have obtained.
Which platform should I use?
Gemini IRAF currently still supports all the instruments and modes. However, for any imaging data from current instruments, we recommend using DRAGONS. DRAGONS currently supports imaging data reduction, but does not yet support spectroscopy.
|Mode||Recommended Software|
|GMOS Imaging||DRAGONS (recommended), Gemini IRAF|
|NIRI Imaging||DRAGONS (recommended), Gemini IRAF|
|GNIRS Keyhole Imaging||DRAGONS|
|Flamingos-2 Imaging||DRAGONS (recommended), Gemini IRAF|
|GSAOI Imaging||DRAGONS (recommended), Gemini IRAF, plus Disco-Stu|
|Any spectroscopy||Gemini IRAF|
|Decommissioned Instruments||Gemini IRAF|
Download and Install
DRAGONS and Gemini IRAF, along with support software and dependencies, are all installed the same way using the Anaconda distribution system. We also make use of the STScI Astroconda suite to install common astronomy tools, like ds9.
Please note that it is becoming more and more difficult to package DRAGONS for Python 2.7 and the day is approaching when we will have to drop support for Python 2.7 in DRAGONS. Also, the spectroscopy support under development is Python 3-compatible only. We encourage users to start using Python 3 for anything Python to smooth the transition later.
See the installation instructions here:
Virtual machine image for running IRAF under recent MacOS releases
May 19, 2020
A CentOS 7 virtual machine image (OVA file) is now available to facilitate running Astroconda IRAF under MacOS 10.15+, which no longer supports running the necessary 32-bit binaries natively. This comes with Anaconda 2019.10, Gemini IRAF 1.14, DRAGONS 2.1.0 and other packages from Astroconda pre-installed. Users of MacOS 10.14 affected by the Tk bug that causes a desktop session logout when displaying graphics may also want to install this guest distribution as a workaround.
New DRAGONS Patch Release
April 20, 2020
A bug fix release of DRAGONS is now available. In release 2.1.1, we have fixed bugs and typos found by the users and ourselves since the initial release. We have also added compatibility with astropy v4. If you already have DRAGONS installed, you can update by doing conda install dragons=2.1.1 . If you need to install DRAGONS for the first time, please see the Download and Installation Instructions.
This update also contains an update of disco_stu .
For information and tutorials on DRAGONS, see the "DRAGONS Information" section.
DRAGONS First Public Release!
October 31, 2019
It is with great delight that we announce the first public release of Gemini's new Python-based data reduction platform, DRAGONS (Data Reduction for Astronomy from Gemini Observatory North and South). This project has been many years in the making. DRAGONS offers a more streamlined approach to the reduction of Gemini data, compared to the Gemini IRAF package.
This release, version 2.1.0, supports imaging reduction only, for the current facility instruments. For spectroscopy data, please continue to use Gemini IRAF for the time being. Work is ongoing regarding spectroscopy support in DRAGONS, but it will be a while before it is publicly available for science-quality reduction.
For information and tutorials on DRAGONS, see the "DRAGONS Information" section.
IMPORTANT: MacOS 10.14.6 and 10.15 incompatibilities with data reduction software
October 11, 2019
As of this week's v10.15 release, MacOS is no longer capable of running the 32-bit Astroconda IRAF distribution needed by Gemini IRAF. For the time being, Gemini IRAF users on Apple machines are advised to continue using MacOS 10.14 or earlier, or to install Astroconda in a virtual machine with a compatible OS. Gemini will look into providing a ready-made VM image to help with this while we are migrating our data reduction tools to Python. Furthermore, MacOS 10.14.6 suffers from a bug that can cause a desktop session logout when attempting to display plots or images with PyRAF, DS9, Matplotlib or other software that uses Tk. We suggest that PyRAF users on 10.14 avoid updating their OS until such time as this problem is resolved by Apple and/or we can determine a reliable workaround (check here for further announcements). IRAF CL is unaffected, but note that we are no longer testing it routinely and are aware of occasional failures with Gemini IRAF.
For older announcements, see the Announcements page.
Gemini IRAF Information
Use Gemini IRAF to reduce Gemini facility instrument data, including data from decommissioned instruments. See the Gemini IRAF page for a description of the package, details of its contents, and a list of releases and revision history.
However, for imaging data from currently active instruments, we recommend the use of DRAGONS (below) instead.
DRAGONS Information
Use DRAGONS to reduce imaging data from the currently active Gemini facility instruments: GMOS, NIRI, Flamingos-2, GSAOI, and GNIRS (keyhole). For spectroscopy data, you must use Gemini IRAF.
The DRAGONS documentation is hosted on readthedocs.org .
Science quality verification for DRAGONS imaging modes is discussed in this report.
The primary reference to be cited by users of DRAGONS is:
Other Gemini Data Reduction Software
- Disco-Stu - Distortion Correction and Stacking Utility
Disco-Stu is a software package for GSAOI images. This standalone package, written in Python, will align and stack images that have already been processed by DRAGONS or the Gemini IRAF gareduce task. The current release is v1.3.7. To install:
conda install disco_stu
The Gemini Local Calibration Manager, gemini_calmgr , is a Python package that DRAGONS uses to associate the data being reduced with the best processed calibration available. The Local Calibration Manager handles a lightweight database to which users can upload information about the processed calibrations they have produced. DRAGONS can then use the calibration association rules to automatically identify and retrieve the master bias, the master flat, etc. that the data require. The calibration association rules are the same as those used in the Gemini Observatory Archive.
The package gemini_calmgr is installed automatically as part of a DRAGONS installation.
To get help with Gemini IRAF or DRAGONS, please use the Gemini Helpdesk system. Use the Gemini IRAF category even for DRAGONS questions.
If you find a bug in DRAGONS, please consider reporting it on the DRAGONS GitHub issues portal. You will need a GitHub account.
For comments, suggestions, and general feedback on DRAGONS, please add a comment to this dedicated post on the Gemini Data Reduction User Forum.
Gemini Observatory Participants
The Gemini Observatory provides the astronomical communities in six participant countries with state-of-the-art astronomical facilities that allocate observing time in proportion to each country's contribution. In addition to financial support, each country also contributes significant scientific and technical resources. The national research agencies that form the Gemini partnership include: the US National Science Foundation (NSF), the Canadian National Research Council (NRC), the Chilean Comisión Nacional de Investigación Científica y Tecnológica (CONICYT), the Brazilian Ministério da Ciência, Tecnologia e Inovação, the Argentinean Ministerio de Ciencia, Tecnología e Innovación, and the Korea Astronomy and Space Science Institute (KASI). The observatory is managed by the Association of Universities for Research in Astronomy, Inc. (AURA) under a cooperative agreement with the NSF. The NSF also serves as the executive agency for the international partnership.
3. Getting support
Data Reduction User Forum: User-supported location for trading ideas, scripts and best practices, and taking part in user-driven public discussions of data reduction processes and strategies.
US NGO data reduction portal: Discussion of and links to data reduction procedures for all current Gemini instruments.
The measured magnitudes of stars will vary from night to night due to changes in weather and observing conditions. To find the flux of a star free of this variation, photometry must also be taken of reference stars. Reference stars are stars with a steady, known flux. Knowing the catalog magnitudes of these stars, the offset for each frame can be computed and subtracted from the measured magnitude of the source. Therefore, if we measure variability in the flux of the source, we know it is not due to changing observing conditions. We used qphot ("quick photometry") to determine the magnitudes from the CCD images. It is located in the package noao.digiphot.apphot .
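The offset computation can be sketched in a few lines of Python; all magnitudes below are invented for illustration:

```python
import numpy as np

# Differential photometry with reference (comparison) stars: the frame's
# zero-point offset is the mean difference between the measured and
# catalog magnitudes of the reference stars. Subtracting it from the
# source's measured magnitude removes night-to-night changes in
# conditions. All magnitudes here are made up.
ref_catalog  = np.array([12.30, 13.10, 14.05])  # known magnitudes
ref_measured = np.array([12.85, 13.65, 14.60])  # same stars, tonight's frame

offset = np.mean(ref_measured - ref_catalog)    # tonight's zero-point offset

source_measured = 15.75
source_calibrated = source_measured - offset
```

Any remaining variation in source_calibrated from night to night then reflects real variability of the source rather than the atmosphere.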