Opening VLA Export Format data on Windows


I recently downloaded some data from the NRAO archive.

It came as a file in the "VLA export format" with the extension .xp1. From what I gather I can open this using some software called CASA or a Python library called casatools, using the importvla command.

The trouble is that I can't seem to find any way to install CASA on Windows 10. In fact, as far as I can tell, CASA isn't supported on Windows at all.

How do I open this file on Windows? I just want a big matrix of data that I can process with my own tools and astropy. Is there some reason that the NRAO distributes data using this (as far as I can tell) arcane and undocumented file format (I couldn't find any documentation after about half an hour of looking) instead of something like NetCDF or HDF5?

The VLA data downloaded from the archive is raw data. It contains the correlator output in its own format, along with weather data, atmospheric water vapor measurements, antenna pointing information, the health status of each antenna, all calibrators, flags indicating which parts of the data might be bad and why, comments by any staff who examined the data, etc.

CASA allows the user to run the data through the data reduction pipeline, which is designed to process these outputs and deliver “the product”, i.e. calibrated data.

The product is a large file containing the UV data (Fourier transform data) on calibrators and targets. To get images, this data has to be imaged and cleaned. Imaging means performing the inverse Fourier transform. Cleaning means applying the CLEAN algorithm, which accounts for the fact that the telescope beam has a non-trivial shape.

Unless one knows everything about the telescope hardware and software, it is impossible to make any use of the raw data from the archive. The purpose of CASA is to remove the need for knowledge of the telescope's inner workings, as well as the need to write one's own imaging code.

CASA is an essential tool. Try NRAO's Helpdesk; they might know a workaround for the Windows issue.

If you are determined to work with the data yourself, ask the Helpdesk to point you to the documentation on the VLA's correlator and its output format.

Have you tried using the Windows Subsystem for Linux (WSL)? That gives you a Linux environment inside Windows from which you can install the Linux version of casatools.
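Assuming the CASA Python packages (casatools and casatasks, installable with pip on Linux, including under WSL) are available, a sketch of converting the export file to a Measurement Set and pulling the raw visibilities out as an array might look like this. The file names are placeholders, not from the question:

```python
# Hypothetical sketch: fill a VLA export (.xp1) file into a Measurement Set,
# then read the visibility data column as an array. Requires the CASA
# packages (e.g. `pip install casatools casatasks` inside WSL); file names
# here are placeholders.
params = dict(archivefiles=["mydata.xp1"], vis="mydata.ms")

try:
    from casatasks import importvla   # only present in a CASA environment
    from casatools import table

    importvla(**params)               # convert the export file into an MS

    tb = table()
    tb.open(params["vis"])
    data = tb.getcol("DATA")          # complex visibility matrix
    tb.close()
except ImportError:
    data = None                       # CASA not installed; nothing to read
```

The Measurement Set is itself a directory of tables, so once it exists, the casatools `table` tool is one route to the "big matrix of data" the question asks for.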

Exporting your map

Once you have created your map, you have a number of choices for sharing it. This topic provides details of the various map export options, along with a brief overview of other ways to share your maps with others.

Sometimes the term export can mean exporting individual map layers to other data formats. That type of export is referred to as exporting data. This help topic discusses exporting maps, that is, exporting the full map image to a graphics file using the ArcMap Export Map command ( File > Export Map ).

For more information on exporting data, see the topics referenced below:

What is GPXSee?

GPXSee is a GPS log file viewer and analyzer that supports all common GPS log file formats.

Key features

  • Opens GPX, TCX, FIT, KML, NMEA, IGC, CUP, SIGMA SLF, Suunto SML, LOC, GeoJSON, OziExplorer (PLT, RTE, WPT), Garmin GPI & CSV, TomTom OV2 & ITN and geotagged JPEG files.
  • User-definable online maps (OpenStreetMap/Google tiles, WMTS, WMS, TMS, QuadTiles).
  • Offline maps (OziExplorer maps, TrekBuddy maps/atlases, Garmin IMG/GMAP & JNX maps, TwoNav RMaps, GeoTIFF images, MBTiles, BSB charts, KMZ maps, AlpineQuest maps, Locus/OsmAnd/RMaps SQLite maps, Mapsforge maps).
  • Elevation, speed, heart rate, cadence, power, temperature and gear ratio/shifts graphs.
  • Support for DEM files (SRTM HGT).
  • Support for POI files.
  • Print and export to PNG and PDF.
  • Multiple tracks in one view.
  • Full-screen mode.
  • HiDPI/Retina displays and maps support.
  • Native GUI (Qt) for Windows, Mac OS X and Linux.
  • Free software (GPLv3 open-source license).

GPXSee is designed as a small (no dependencies except Qt), fast and uncomplicated GPS data/map viewer, not full-featured GIS software. However, the spectrum of supported data files/map sources is relatively rich; see the Documentation section for details.

How to format USB flash drive with Disk Management

The Disk Management tool offers at least two ways to format a USB flash drive. You can format the storage to rebuild the file system table and erase the content, or you can delete the partition, which comes in handy for fixing storage problems.

Format flash drive

To use Disk Management to format a USB drive, use these steps:

  1. Open Start.
  2. Search for Create and format hard disk partitions and click the top result to open the Disk Management tool.

  3. Right-click the removable drive and select the Format option.

Source: Windows Central

  4. Check the Perform a quick format option.


After you complete the steps, the drive will be erased and reformatted to store data again.

Clean and format flash drive

If you are dealing with errors or other problems, you can clean the USB drive and start again from scratch with a new partition and file system with Disk Management.

To clean and format a USB flash drive, use these steps:

  1. Open Start.
  2. Search for Create and format hard disk partitions and click the top result to open the Disk Management tool.

  3. Right-click the removable storage and select the Delete volume option.


Right-click the "Unallocated" space and choose the New Simple Volume option.


  5. Use the drop-down menu to select one of the available letters.


Quick tip: If you assign a letter manually, it is best to select a letter in reverse order (Z, Y, X, and so on).

  6. Use the File system drop-down menu and select the NTFS option.

Quick note: Using this method, you can only use "NTFS" or "FAT32." If you need to format the drive using "exFAT," you will need to use Command Prompt or PowerShell.

  7. Check the Perform a quick format option. (If you clear the option, a full format will take place, which can take some time depending on the storage size.)


Once you complete the steps, the process will create a new partition and set up the file system, fixing common problems with the flash drive, including data corruption.

If you cannot perform a format, the removable drive is probably broken. If this is the case, you can always purchase another USB flash drive, such as the SanDisk Ultra Fit USB 3.1 flash drive, which comes in 16GB up to 512GB variants with enough space to save large files and small backups. You can find even more great options in our roundup of best flash drives.

Reliable storage

SanDisk Ultra Fit

If you're in the market for a reliable thumb drive with enough storage for large projects and fast transfer speeds, the SanDisk Ultra Fit USB 3.1 flash drive is an excellent option. The removable drive offers up to 512GB of storage with transfer speeds up to 130MB/s, it's backed by a strong brand, and it even includes some nifty features like password protection, recovery, and encryption. It's also affordable, at around $6.32 for 16GB.

Why does my exported CSV data get converted to weird formats?

When exporting CSV data from your store, you may sometimes notice that large numbers are converted into scientific or exponential notation in the cells of your spreadsheet program.

For example, when exporting the customer records CSV file, if a customer's phone number is listed as 8008286650, then upon opening the file in Excel you'll see the number shown as 8.01E+09 or something similar.

A similar issue occurs when numerical data in your store contains zeroes at the start of the number, but upon opening the file in your spreadsheet program, the leading zeroes are removed.

An example of this can be seen if your products have part or MFGID numbers like 000986543219 but upon opening the exported file in the spreadsheet program, the numbers are shown as 986543219.

Unfortunately, saving the file in your spreadsheet program and re-uploading the CSV data to your store will then permanently change the record(s) to those displayed values.

To be clear, the reason these problems occur is not due to the way Shift4Shop creates your export data. Instead, this is due to the way Excel and other spreadsheet programs open a CSV file and display the data therein.

Basically, spreadsheet programs are designed to be used for calculation purposes, so they tend to apply mathematical formats to numbers when the CSV file is opened directly into the program. A CSV file is nothing more than a text file with its data values separated by commas (hence the name "comma separated values" or CSV). These text files can be difficult to read as just straight text. Fortunately, CSV files have the advantage of being read in a spreadsheet program, which allows the file to be read in organized columns and rows.

Opening the CSV file in a Spreadsheet

Unfortunately, when opening a CSV text file directly in a spreadsheet program like Excel, numerical data is converted by the program to a mathematical format. So numbers with leading zeroes are converted to whole numbers, and long integers such as phone numbers and UPC codes are converted by the spreadsheet application into exponential numbers like 8.01E+09. It's not that the CSV file exported from your store is formatted incorrectly, but rather that the spreadsheet program applied numerical formatting to those cells when opening it directly.

To verify that your exported data is in fact formatted correctly, try opening the CSV file in a text editor like Notepad. You will see the exported data exactly as it is formatted by the Shift4Shop software, before the format is changed by your spreadsheet program.
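The same check can be done programmatically: a plain CSV parser, like Python's standard csv module, treats every field as text, so the long numbers and leading zeroes from the examples above survive intact:

```python
import csv
import io

# A CSV snippet like the one the store exports, using the values
# from the examples above (phone number and MFGID).
raw = "phone,mfgid\n8008286650,000986543219\n"

rows = list(csv.reader(io.StringIO(raw)))
# Every field comes back as an unmodified string:
print(rows[1])   # ['8008286650', '000986543219']
```

Nothing here is converted to scientific notation or stripped of zeroes, which shows the damage happens at display time in the spreadsheet, not in the file.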

To work around this problem, the best approach is not to open the file directly in the spreadsheet program, but rather to import the text (CSV) data into the spreadsheet program while it is already open. During the import, your spreadsheet program will give you the option to format the data as "text" so that mathematical formatting is not applied to your CSV file's numbers.

The exact steps differ depending on the spreadsheet program being used but the basic process is as follows:

You will have a CSV file on your computer. Most CSV files will open directly in your computer's installed spreadsheet program. However, do NOT double-click the file to open it. Instead, proceed with the following steps.

  1. Open up your spreadsheet program independently
  2. Create a new workbook or blank spreadsheet in the program.
  3. Look for your spreadsheet program's import functions *

* Note
This is the part that will differ depending on the specific spreadsheet program you are using. Some versions of Microsoft Excel will have an "Import Wizard" located in the "Data" menu. Other versions will have the import functions located directly in the program's main screen. Please refer to your spreadsheet program's documentation for information on importing data into the spreadsheet.

When importing the data you may be given the option to select the data type. If so, select comma separated. Furthermore, as part of the import you should be given the option to select the formatting that will be used for the imported data's display.

Before selecting the format type, be sure to select all of the columns in the file, then proceed:

Your workbook/spreadsheet file will now be formatted as text only and preserve your numerical data as it was generated by the store.

In the previous section you learned about reading and writing tabular data files using direct Python I/O and simple parsing and type-conversion rules. This is important base knowledge, but in real-world use, when dealing with ASCII data files, there are many complications and we recommend using the astropy.io.ascii module. It natively understands (and can automatically detect) most of the formats which astronomers typically encounter. As of Astropy 1.0 it includes C-based read and write methods which are much faster than any pure-Python implementations.

The module supports these formats:

  • aastex: AASTeX deluxetable used for AAS journals
  • basic: basic table with customizable delimiters and header configurations
  • cds: CDS format table (also Vizier and ApJ machine readable tables)
  • commented_header: column names given in a line that begins with the comment character
  • daophot: table from the IRAF DAOphot package
  • ecsv: Enhanced Character-Separated-Values
  • fixed_width: table with fixed-width columns (see also fixed_width_gallery)
  • ipac: IPAC format table
  • html: HTML format table contained in a <table> tag
  • latex: LaTeX table with data values in the tabular environment
  • no_header: basic table with no header where columns are auto-named
  • rdb: tab-separated values with an extra line after the column definition line
  • sextractor: SExtractor format table
  • tab: tab-separated values

Reading tables

The first and most important argument to the ascii.read() function is the table input. There is some flexibility here and you can supply any of the following:

  • Name of a file (string)
  • Single string containing all table lines separated by newlines
  • File-like object with a callable read() method
  • List of strings where each list element is a table line


Even though it seems obvious to a human, parsing this table to get the right column names, data values and data types is not trivial. ascii.read() needed to figure out (or guess):

  • Overall table format (DAOphot, CDS, RDB, Basic, etc)
  • Column delimiter, e.g. space, comma, tab, vertical bar, etc.
  • Column names (which row, maybe preceded by #)
  • Quote character (single or double quote)
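As a toy, stdlib-only illustration of the sort of guessing involved (this is my own sketch, not astropy's actual algorithm), one could try each candidate delimiter and keep the first that splits every line into a consistent number of columns:

```python
def guess_delimiter(lines, candidates=(" ", ",", "\t", "|")):
    """Toy version of delimiter guessing: return the first candidate
    delimiter that splits every line into the same number (>1) of
    columns, or None if nothing works."""
    for delim in candidates:
        counts = {len(line.split(delim)) for line in lines}
        if len(counts) == 1 and counts.pop() > 1:
            return delim
    return None

print(guess_delimiter(["a|b|c", "1|2|3"]))   # '|'
```

The real guessing in astropy.io.ascii is far more involved (it also tries whole formats, header conventions, and quote characters), but the flavor is the same: try possibilities until one yields a "reasonable" table.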

By default ascii.read() will try each format it knows and use the first one that gives a “reasonable” answer. The details are in the Guess table format section. Sometimes it will fail, e.g.:

This gives an ominous-looking stack trace, but actually all that happened is that ascii.read() guessed every format it knows and nothing worked. The standard set of column delimiters is space, comma, tab, and the vertical bar. In this case you simply need to give it some help:

The full list of parameters for reading includes common options like format , delimiter , quote_char , and comment .

No guessing

For some tricky tables you will want to disable guessing and explicitly provide the relevant table format information to the ascii.read() function. A big advantage of this strategy is that ascii.read() can then provide more detailed information if it still fails to parse the table, e.g.:

This produces a message (after the stack trace) that should be a pretty good clue that ascii.read() is using the wrong column delimiter:


You can write ASCII tables using the ascii.write() function. There is a lot of flexibility in the format of the input data to be written:

  • NumPy structured array or record array
  • astropy Table object
  • Sequence of sequences
  • Dict of sequences

As a first simple example, read a comma-delimited table and then write it out as space-delimited:

We can use a different column delimiter:

or a different table writer class:

As a final example, imagine you’ve gathered basic information about 5 galaxies which you want to write as an ASCII table. You could just use pure Python file I/O as shown earlier, but then you would need to be careful about quoting and formatting (and why rewrite the same code every time when it is already done!). Instead just use ascii.write().
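For contrast, here is a stdlib-only sketch of what hand-rolling the space-delimited output for a dict of sequences looks like (the galaxy names and redshifts are invented for illustration); ascii.write() does this, plus quoting and formatting edge cases, for you:

```python
# Stdlib-only contrast: writing a dict of sequences as a simple
# space-delimited table by hand (values invented for illustration).
data = {"name": ["NGC1365", "M87"], "redshift": [0.0055, 0.0042]}

cols = list(data)
lines = [" ".join(cols)]
for row in zip(*data.values()):
    lines.append(" ".join(str(v) for v in row))

print("\n".join(lines))
# name redshift
# NGC1365 0.0055
# M87 0.0042
```

This works for clean data, but as soon as values contain the delimiter or need quoting, the hand-rolled version breaks, which is exactly the argument for using the library.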

Exercise: scraping table data from the web

Note: this exercise only works on Python 2 due to BeautifulSoup doing something differently in Python 3. Five cheers to the person who can fix this!

To do this exercise you must first install the BeautifulSoup package which will parse HTML pages into nice data structures. QUIT your IPython session and from the command line do:

Now start IPython again. The exercise is to grab the table data from the XJET catalog page into a Python data structure. First we’ll show you the really easy answer using the HTML reading capability of astropy.io.ascii:

But in order to understand the guts of how this works, start by defining the following function, which converts an HTML table to a list of lines with tab-separated values (this will become clearer in the next part):

Now examine what you got in the table variables and use ascii.read() to parse the right one into a table. Then plot a histogram of the redshift distribution in this sample.

HINT: the table has missing values, so include fill_values=('', '-1') in the call to ascii.read(), which has robust functionality to replace bad or missing values.

Click to Show/Hide Solution

The data are in the second table, so do:

All versions of Stata under Microsoft Windows are supported. Stata files have a .dta file extension.

Import of all Stata versions under Microsoft Windows and UNIX is supported. Export of Stata version 8 and later is supported.

Stata supports missing values. SAS missing values are written as Stata missing values.

When importing, Stata variable names can be up to 32 characters in length. The first character in a variable name can be any lowercase letter (a-z) or uppercase letter (A-Z), or an underscore ( _ ). Subsequent characters can be any of these characters, plus numerals (0-9). No other characters are permitted. Stata reserves these 19 words, which are not allowed to stand alone as variable names:

_all, long, _N, in, _skip, _weight, _pred, _cons, float, _b, _n, pi, int, using, with, _rc, double, if
If the program encounters any of these reserved words as variable names, it appends an underscore to the variable name to distinguish it from the reserved word. For example, _N becomes _N_ .

When exporting, variable names greater than 32 characters are truncated. The first character in a variable name can be any lowercase letter (a-z) or uppercase letter (A-Z), or an underscore ( _ ). Subsequent characters can be any of these characters, plus numerals (0-9). No other characters are permitted. Invalid characters are converted to underscores ( _ ).
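The renaming rules above can be sketched in a few lines (this is my own illustration of the described behavior, not SAS's actual code, and the handling of names starting with a digit is an assumption the text doesn't specify):

```python
import re

# Reserved words listed in the text above.
RESERVED = {"_all", "long", "_N", "in", "_skip", "_weight", "_pred", "_cons",
            "float", "_b", "_n", "pi", "int", "using", "with", "_rc",
            "double", "if"}

def to_stata_name(name):
    """Sketch of the export rules described above: replace invalid
    characters with underscores, truncate to 32 characters, and append
    an underscore to reserved words."""
    name = re.sub(r"[^A-Za-z0-9_]", "_", name)[:32]
    if not re.match(r"[A-Za-z_]", name):     # assumed: prefix digit-led names
        name = "_" + name[:31]
    if name in RESERVED:
        name += "_"                          # e.g. _N becomes _N_
    return name

print(to_stata_name("_N"))       # '_N_'
print(to_stata_name("my-var"))   # 'my_var'
```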

Stata supports variable labels when using the IMPORT procedure. When exporting, if the variable name is not a valid Stata name and there is no label, the EXPORT procedure writes the variable name as the label.

Stata stores value labels within the data file. The value labels are converted to format library entries as they are read with the IMPORT procedure. The name of the format includes its associated variable name modified to meet the requirements of format names. The name of the format is also associated with a variable in the SAS data set. You can use the FMTLIB=libref.format-catalog statement to save the formats catalog in a specified SAS library.

When writing SAS data to a Stata file, the EXPORT procedure saves the value labels that are associated with the variables. The procedure uses the formats that are associated with the variables to retrieve the value entries. You can use the FMTLIB=libref.format-catalog statement to tell SAS where to locate the formats catalog.

Restriction: Numeric formats only.

Stata supports numeric field types that map directly to SAS numeric fields.

Stata date variables become numerics with a date format.

When writing SAS data to a Stata file, the EXPORT procedure converts data into variable type double. A SAS date format becomes a Stata date variable.

This IMPORT|EXPORT method uses the client/server model to access data in Stata files on Microsoft Windows from Linux, UNIX, or Microsoft Windows 64-bit operating environments. This method requires running the PC Files Server on Microsoft Windows.

Requirement: A filename with a .dta extension is required.
IMPORT and EXPORT Procedures Supported Syntax

When importing a Stata file, SAS saves value labels to the specified SAS format catalog. When exporting a SAS data set to a Stata file, SAS uses formats that are associated with the variables to retrieve the value entries.

[ASIAIR GUIDE] How to transfer images from ASIAIR (UPDATED)

About half a year ago, we posted a tutorial on how to transfer images from ASIAIR, proposing three solutions. As time goes by, the ASIAIR application has gone through many iterations. Also, considering the comprehensive hardware upgrade in the ASIAIR PRO, some of those three ways are no longer recommended.

So here we rewrite this tutorial. Hope it can help you a bit.

Method One: External Storage Device

The first is the most recommended: using an external storage device.

There is a 64GB USB memory stick in the package box of the ASIAIR PRO. It will be auto-recognized when you plug it into the ASIAIR PRO.

The USB flash disk that comes with the package

Let’s go to “Storage Settings” in the application. You can see clearly how much memory space you have here.

Choose the SD card and hit “Image Management”. All images saved on the SD card are listed here, so it’s very easy for you to choose the target files.

(P.S. A new function called “Image Preview” was added in version 1.4.2, which allows you to preview the saved images in the ASIAIR application without using other software on the computer.)

In “Image Management”, select the image files and copy them to the USB memory stick. Then unplug the memory stick and connect it to the computer – I’m sure you know what to do next.

You can also move the image files or delete them in “Image Management”. It’s all your choice.

Method Two: WiFi Station Mode

Connect the ASIAIR to your WiFi router via WiFi Station Mode, then tap the exclamation point to obtain the IP address.

WiFi Station Mode – IP Address

Open the File Manager on your computer and type in the IP address you just obtained.

Done! You are free to visit the shared directory of ASIAIR now. Just copy the ones you like to your computer hard drives ^^

There are only three points you might need to take care of:

1. For different operating systems, the ways to access the shared directory are different.

Windows: Type the IP address in the address bar of the File Manager.

MacOS: In the Finder on your Mac, choose Go > Connect to Server. Then type the network address.

2. Your computers should be equipped with wireless network cards. Same with Method 3 and Method 4.

3. You will find an ASIAIR shared directory which is set to read-only for security. You can only copy files from this location to your computer. We advise you to delete the files from the ASIAIR application after you are done transferring.

Method Three: ASIAIR WiFi Connection

Connect your computer to the ASIAIR Wi-Fi network, then open the computer's File Manager and type the address in the address bar.

Then copy the read-only files as you did in Method Two.

Method Four: Wired LAN Connection

This method is similar to using the Wi-Fi connection but this time utilizes the wired LAN connection.

Connect the ASIAIR to your network via the LAN cable port, check “Wired Ethernet”, and get the IP address from the details page.

Open the computer File Manager and type the IP address in the address bar.

You may find Method Two and Method Three quite similar at first glance. However, they have quite distinct differences.

The WiFi Station Mode is made to greatly expand the transmission range of the ASIAIR, which benefits remote operation for sure. You can leave your setup outside the house and control it via the ASIAIR application from anywhere in your house. Stay warm and transfer the image files in a very relaxing way! But it has one necessary precondition: you need a wireless LAN at your shooting place. That's not a big deal when you are at home, but out in the wild it can be a problem.

The ASIAIR WiFi network, in contrast, does not need a wireless LAN to work, but its signal transmission range is very limited.

As for Method Four, the wired LAN connection, it also has its advantages and disadvantages. The advantage is that its transmission speed is relatively fast compared with the other methods. The disadvantage is that it can be really troublesome and messy at your shooting place: you'll have to use a long cable to connect the WiFi router to your ASIAIR PRO. So we do not recommend this method.


Thus, in conclusion, the most recommended method to transfer image files from ASIAIR is still using the external storage device.

Switched-power (EVLA)

The EVLA is equipped with noise diodes that synchronously inject a nominally constant and known power contribution, appropriate for tracking electronic gain changes with time resolution as short as 1 second. The total power in both the ON and OFF states of the noise diodes is continuously recorded, enabling a gain calibration derived from their difference (as a fraction of the mean total power), scaled by the approximately known contributed power (nominally in K). Including this calibration will render the data in units of (nominal) K, and also calibrate the data weights to units of inverse K². To generate a switched-power calibration table for use in subsequent processing, run gencal as follows:
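The gencal call itself is not reproduced in this excerpt; a hedged sketch of what it typically looks like follows, with the MS and table names as placeholders ('swpow' is the gencal caltype for EVLA switched power):

```python
# Sketch of generating an EVLA switched-power calibration table.
# The vis and caltable names are placeholders; caltype='swpow'
# selects the switched-power calibration described above.
params = dict(vis="mydata.ms", caltable="switched_power.gcal", caltype="swpow")

try:
    from casatasks import gencal   # only available in a CASA environment
    gencal(**params)
except ImportError:
    pass                           # CASA not installed; parameters shown only
```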

The resulting calibration table should then be used in all subsequent processing that requires the specification of prior calibration.

To ensure that the weight calibration by this table works correctly, it is important that the raw data weights are proportional to integration time and channel bandwidth. This can be guaranteed by the use of initweights as described above.

1.6 From Loading Data to Images

The subsections below provide a brief overview of the steps you will need to load data into CASA and obtain a final, calibrated image. Each subject is covered in more detail in Chapters  2 through 6 .

An end-to-end workflow diagram for CASA data reduction for interferometry data is shown in Figure  1.8 . This might help you chart your course through the package. In the following sub-sections, we will chart a rough course through this process, with the later chapters filling in the individual boxes.

Figure 1.8: Flow chart of the data processing operations that a general user will carry out in an end-to-end CASA reduction session.

Note that single-dish data reduction (for example with the ALMA single-dish system) follows a similar course. This is detailed in Chapter  8 .

1.6.1  Loading Data into CASA

The key data and image import tasks are:

  • importuvfits — import visibility data in UVFITS format (§  2.2.7 )
  • importvla — import data from VLA that is in export format (§  2.2.3 )
  • importasdm — import ALMA data in ASDM format (§  2.2.1 )
  • importevla — import JVLA/EVLA data in SDM format (§  2.2.2 )
  • importfits — import a FITS image into a CASA image format table (§  6.27 ).

These are used to bring in your interferometer data, to be stored as a CASA Measurement Set (MS), and any previously made images or models (to be stored as CASA image tables).

The data import tasks will create a MS with a path and name specified by the vis parameter. See §  1.5.3 for more information on MS in CASA. The Measurement Set is the internal data format used by CASA, and conversion from any other native format is necessary for most of the data reduction tasks.

Once data is imported, there are other operations you can use to manipulate the datasets:

Data import, export, concatenation, and selection are detailed in Chapter 2.

VLA: Filling data from VLA archive format

VLA data in “archive” format are read into CASA from disk using the importvla task (see §  2.2.3 ). This filler supports the new naming conventions of EVLA antennas when incorporated into the old VLA system.

Note that future data from the EVLA in ASDM format will use a different filler. This will be made available in a later release.

Filling data from UVFITS format

For UVFITS format, use the importuvfits task. A subset of popular flavors of UVFITS (in particular UVFITS as written by AIPS) is supported by the CASA filler. See §  2.2.7 for details.

Loading FITS images

For FITS format images, such as those to be used as calibration models, use the importfits task. Most, though not all, types of FITS images written by astronomical software packages can be read in.

See §  6.27 for more information.

Concatenation of multiple MS

Once you have loaded data into Measurement Sets on disk, you can use the tasks concat or virtualconcat to combine them.

1.6.2 Data Examination, Editing, and Flagging

The main data examination and flagging tasks are:

  • listobs — summarize the contents of a MS (§  2.2.9 )
  • flagmanager — save and manage versions of the flagging entries in the Measurement Set (§  3.2 )
  • plotms — interactive X-Y plotting and flagging of visibility data (§  3.3.1 )
  • ( plotxy — interactive X-Y plotting and flagging of visibility data (§  3.3.2 ), note: plotxy is slower than plotms and will eventually be phased out, plotxy is still useful to create scripted hardcopy output, this functionality will likely be available in plotms in the next release)
  • flagdata — flagging (and unflagging) of specified data (§  3.4 )
  • viewer — the CASA viewer can display (as a raster image) MS data, with some editing capabilities (§  7 )

These tasks allow you to list, plot, and/or flag data in a CASA MS.

There will eventually be tasks for “automatic” flagging of data based upon statistical criteria. Stay tuned.

Examination and editing of synthesis data is described in Chapter  3 .

Visualization and editing of an MS using the casaviewer is described in Chapter 7.

Interactive X-Y Plotting and Flagging

The principal tool for making X-Y plots of visibility data is plotms (see §  3.3.1 ). Amplitudes and phases (among other things) can be plotted against several x-axis options.

Interactive flagging (i.e., “see it – flag it”) is possible on the plotms X-Y displays of the data. Since flags are inserted into the Measurement Set, it is useful to back up (or make a copy of) the current flags before further flagging is done, using flagmanager (§  3.2 ). Copies of the flag table can also be restored to the MS in this way.

Flag the Data Non-interactively

The flagdata task (§  3.4 ) will flag the visibility data set based on the specified data selections. The listobs task (§  2.2.9 ) may be run (e.g. with verbose=True ) to provide some of the information needed to specify the flagging scope. flagdata also contains autoflagging routines.

Viewing and Flagging the MS

The CASA viewer can be used to display the data in the MS as a (grayscale or color) raster image. The MS can also be edited. Use of the viewer on an MS is detailed in §  7.5 .

1.6.3 Calibration

The major calibration tasks are:

  • setjy — Computes the model visibilities for a specified source given a flux density or model image, knows about standard calibrator sources (§  4.3.5 )
  • initweights — if necessary, supports (re-)initialization of the data weights, including an option for enabling spectral weight accounting (§  4.3.1 )
  • gencal — Creates a calibration table for known delay and antenna position offsets (§  4.3.6 )
  • bandpass — Solves for frequency-dependent (bandpass) complex gains (§  4.4.2 )
  • gaincal — Solves for time-dependent (frequency-independent) complex gains (§  4.4.3 )
  • fluxscale — Bootstraps the flux density scale from standard calibrators (§  4.4.4 )
  • polcal — polarization calibration (§  4.4.5 )
  • applycal — Applies calculated calibration solutions (§  4.6.1 )
  • clearcal — Re-initializes calibrated visibility data in a given Measurement Set (§  4.6.3 )
  • listcal — Lists calibration solutions (§  4.5.3 )
  • plotcal — Plots (and optionally flags) calibration solutions (§  4.5.1 )
  • uvcontsub — carry out uv-plane continuum subtraction for spectral-line data (§  4.7.6 )
  • split — write out a new (calibrated) MS for specified sources (§  4.7.1 )
  • cvel — Regrid a spectral MS onto a new frequency channel system (§  4.7.7 ).

During the course of calibration, the user will specify a set of calibrations to pre-apply before solving for a particular type of effect, for example gain or bandpass or polarization. The solutions are stored in a calibration table (subdirectory) which is specified by the user, not by the task: care must be taken in naming the table for future use. The user then has the option, as the calibration process proceeds, to accumulate the current state of calibration in a new cumulative table. Finally, the calibration can be applied to the dataset.
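The heart of this machinery is the antenna-based calibration equation: the observed visibility on baseline (i, j) is the true visibility multiplied by gain g_i times the complex conjugate of g_j. The following is a minimal numpy sketch of the equation only, not CASA’s implementation; the gains and visibilities are invented for the example, and real solutions also carry time, spectral-window, and polarization axes.

```python
import numpy as np

# Sketch of the calibration equation: for baseline (i, j),
#   DATA_ij = g_i * conj(g_j) * V_true_ij
# so correcting the data amounts to dividing by the gain product.
def apply_gains(data, g, ant1, ant2):
    """Divide each visibility row by the product of its antenna gains."""
    return data / (g[ant1] * np.conj(g[ant2]))

# Invented gains and a 1 Jy point-source model on three baselines.
g = np.array([1.0 + 0.1j, 0.9 - 0.05j, 1.1 + 0.0j])
ant1 = np.array([0, 0, 1])
ant2 = np.array([1, 2, 2])
true_vis = np.ones(3, dtype=complex)
data = true_vis * g[ant1] * np.conj(g[ant2])   # corrupted "DATA column"

print(np.allclose(apply_gains(data, g, ant1, ant2), true_vis))  # True
```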

Synthesis data calibration is described in detail in Chapter 4.

Prior Calibration

The setjy task calculates absolute flux densities for a Measurement Set based on known calibrator sources. These can then be used in later calibration tasks. Currently, setjy knows the flux density as a function of frequency for several standard EVLA flux calibrators and solar system objects, and the value of the flux density can be manually inserted for any other source. If the source is not well-modeled as a point source, then a model image of that source structure can be used (with the total flux density scaled by the values given or calculated above for the flux density). Models are provided for the standard VLA calibrators.

Antenna gain-elevation curves (e.g. for the EVLA antennas) and atmospheric optical depth corrections (applied as an elevation-dependent function) may be pre-applied before solving for the bandpass and gains. CASA v4.1 was the last version where these specialized calibrations were supported by explicit parameters in the calibration tasks (gaincurve and opacity). As of v4.2, these parameters have been removed, and gain curves and opacity are supported via gencal, which will generate standard calibration tables describing these effects, much as other a priori effects (Tsys, switched power, etc.) are supported.

Bandpass Calibration

The bandpass task calculates a bandpass calibration solution: that is, it solves for gain variations in frequency as well as in time. Since the bandpass (relative gain as a function of frequency) generally varies much more slowly than the changes in overall (mean) gain solved for by gaincal, one generally uses a long time scale when solving for the bandpass. The default ’B’ solution mode solves for the gains in frequency slots consisting of channels or averages of channels.
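The idea behind a ’B’-style solution can be sketched in a few lines of numpy. This is a deliberately simplified single-baseline illustration, assuming a time-stable bandpass and a flat 1 Jy model; it is not the bandpass solver itself, and all values are invented.

```python
import numpy as np

# 'B'-style sketch: average the observed/model ratio over the (long)
# time axis, assuming the bandpass shape is stable in time.
def solve_bandpass(v_obs, v_model):
    """v_obs: (n_times, n_channels) visibilities; v_model: model value."""
    return np.mean(v_obs / v_model, axis=0)

rng = np.random.default_rng(0)
n_t, n_ch = 50, 8
true_bp = 1.0 + 0.2 * np.sin(np.linspace(0.0, np.pi, n_ch))  # invented ripple
noise = 0.01 * (rng.standard_normal((n_t, n_ch))
                + 1j * rng.standard_normal((n_t, n_ch)))
v_obs = true_bp[None, :] * (1.0 + 0j) + noise   # flat 1 Jy model, corrupted

bp = solve_bandpass(v_obs, 1.0 + 0j)
print(np.allclose(bp.real, true_bp, atol=0.05))  # ripple recovered → True
```

Averaging over the time axis is what the long solution interval buys you: the noise beats down while the stable frequency structure survives.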

A polynomial fit for the solution (solution type ’BPOLY’) may be carried out instead of the default frequency-slot based ’B’ solutions. This single solution will span (combine) multiple spectral windows.

Bandpass calibration is discussed in detail in §  4.4.2 .

If the gains of the system are changing over the time that the bandpass calibrator is observed, then you may need to do an initial gain calibration (see next step).

Gain Calibration

The gaincal task determines solutions for the time-based complex antenna gains, for each spectral window, from the specified calibration sources. A solution interval may be specified. The default ’G’ solution mode solves for antenna-based gains in each polarization in specified time solution intervals. The ’T’ solution mode is the same as ’G’ except that it solves for a single solution shared by both polarizations.
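To see why antenna-based gains are solvable at all, consider the smallest interesting case: with three antennas and a point-source model, the three baseline amplitudes determine the three antenna gain amplitudes exactly. A toy numpy illustration of that algebra (not CASA’s least-squares solver; the gain values are invented):

```python
import numpy as np

# With V_ij = g_i * conj(g_j) * M_ij and |M_ij| = 1 (point source),
# the baseline amplitudes a_ij = |g_i||g_j| give, by elimination:
#   |g_1| = sqrt(a_12 * a_13 / a_23), and cyclic permutations.
def gain_amplitudes(a12, a13, a23):
    g1 = np.sqrt(a12 * a13 / a23)
    g2 = np.sqrt(a12 * a23 / a13)
    g3 = np.sqrt(a13 * a23 / a12)
    return g1, g2, g3

# Simulate baseline amplitudes from invented gains and recover them.
g = np.array([1.1, 0.9, 1.05])
a12, a13, a23 = g[0] * g[1], g[0] * g[2], g[1] * g[2]
print(gain_amplitudes(a12, a13, a23))  # ≈ (1.1, 0.9, 1.05)
```

With more antennas the system is overdetermined, which is why a least-squares solver (as in gaincal) is used in practice.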

A spline fit for the solution (solution type ’GSPLINE’ ) may be carried out instead of the default time-slot based ’G’ solutions.

See § 4.4.3 for more on gain calibration.

Polarization Calibration

The polcal task will solve for any unknown polarization leakage and cross-hand phase terms (’D’ and ’X’ solutions). The ’D’ leakage solutions will work on sources with no polarization and on sources with known (and supplied, e.g., using smodel) polarization. For a source with unknown polarization tracked through a range of parallactic angle on the sky, use poltype ’D+QU’, which will first estimate the calibrator polarization for you.

The solution for the unknown cross-hand polarization phase difference ’X’ term requires a polarized source with known linear polarization (Q,U).

Frequency-dependent (i.e., per channel) versions of all of these modes are also supported (poltypes ’Df’, ’Df+QU’, and ’Xf’).

See § 4.4.5 for more on polarization calibration.

Examining Calibration Solutions

The plotcal task (§  4.5.1 ) will plot the solutions in a calibration table. The xaxis choices include time (for gaincal solutions) and channel (e.g. for bandpass calibration). The plotcal interface and plotting surface is similar to that in plotxy . Eventually, plotcal will allow you to flag and unflag calibration solutions in the same way that data can be edited in plotxy .

The listcal task (§ 4.5.3) will print out the calibration solutions in a specified table.

Bootstrapping Flux Calibration

The fluxscale task bootstraps the flux density scale from “primary” standard calibrators to the “secondary” calibration sources. Note that the flux density scale must have been previously established on the “primary” calibrator(s), typically using setjy, and of course a calibration table containing valid solutions for all calibrators must be available.

Correcting the Data

In the final step of the calibration process, applycal is used to apply several calibration tables (e.g., from gaincal or bandpass, along with prior calibration tables). The corrections are applied to the DATA column of the visibilities, writing the CORRECTED_DATA column, which can then be plotted (e.g. in plotxy), split out as the DATA column of a new MS, or imaged (e.g. using clean). Any existing corrected data are overwritten.

Splitting the Data

After a suitable calibration is achieved, it may be desirable to create one or more new Measurement Sets containing the data for selected sources. This can be done using the split task (§  4.7.1 ).

Further imaging and calibration (e.g. self-calibration) can be carried out on these split Measurement Sets.

UV Continuum subtraction

For spectral line data, continuum subtraction can be performed in the image domain (imcontsub) or in the uv domain. For the latter, the uvcontsub task subtracts from each baseline a polynomial of the desired order, fit to the line-free channels.

Transforming the Data to a new frame

If you want to transform your dataset to a different frequency and velocity frame than the one it was observed in, then you can use the cvel task (§  4.7.7 ). Alternatively, you can do the regridding during the imaging process in clean without running cvel before.
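Conceptually, for each spectrum such a regrid reduces to interpolating channel values from the observed frequency grid onto the target grid. A one-spectrum numpy sketch (the frequencies and line parameters below are invented for the example):

```python
import numpy as np

# Regrid one spectrum from the observed frequency grid onto a target
# grid by linear interpolation; a spectral line should stay at the same
# sky frequency after the transformation.
freq_obs = np.linspace(1.4200e9, 1.4210e9, 51)                # Hz, observed frame
spectrum = np.exp(-0.5 * ((freq_obs - 1.4205e9) / 1e5) ** 2)  # Gaussian line

freq_new = np.linspace(1.42005e9, 1.42095e9, 46)              # Hz, target frame
regridded = np.interp(freq_new, freq_obs, spectrum)

peak = freq_new[np.argmax(regridded)]
print(abs(peak - 1.4205e9) < 3e4)  # line stays at its frequency → True
```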

1.6.4  Synthesis Imaging

The key synthesis imaging tasks are:

  • clean — Calculates a deconvolved image based on the visibility data, using one of several clean algorithms (§  5.3 )
  • feather — Combines a single dish and synthesis image in the Fourier plane (§  5.6 ).

Most of these tasks are used to take calibrated interferometer data, with the possible addition of a single-dish image, and reconstruct a model image of the sky. Alert: The clean task is now even more powerful and incorporates the functionality of previous specialized tasks such as mosaic and widefield .

See Chapter 5 for more on synthesis imaging.

Cleaning a single-field image or a mosaic

The CLEAN algorithm is the most popular and widely-studied method for reconstructing a model image based on interferometer data. It iteratively removes at each step a fraction of the flux in the brightest pixel in a defined region of the current “dirty” image, and places this in the model image. The clean task implements the CLEAN algorithm for single-field data. The user can choose from a number of options for the particular flavor of CLEAN to use.
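The iteration described above can be written down compactly. The following is a toy Hogbom-style minor cycle in numpy, not the clean task itself; it uses wrap-around np.roll to shift the PSF, which is acceptable for this synthetic example, and all sizes and fluxes are invented.

```python
import numpy as np

def hogbom(dirty, psf, gain=0.1, niter=500, threshold=1e-3):
    """Toy Hogbom CLEAN: psf is centered at (n//2, n//2) with peak 1."""
    residual = dirty.copy()
    model = np.zeros_like(residual)
    n = residual.shape[0]
    for _ in range(niter):
        # Find the brightest residual pixel ...
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if abs(peak) < threshold:
            break
        # ... move a fraction (the loop gain) of it into the model ...
        model[y, x] += gain * peak
        # ... and subtract the correspondingly scaled, shifted dirty beam.
        shifted = np.roll(np.roll(psf, y - n // 2, axis=0), x - n // 2, axis=1)
        residual -= gain * peak * shifted
    return model, residual

# A Gaussian "dirty beam" and a 1 Jy point source at pixel (10, 20).
n = 32
yy, xx = np.mgrid[0:n, 0:n]
psf = np.exp(-((yy - n // 2) ** 2 + (xx - n // 2) ** 2) / (2 * 2.0 ** 2))
dirty = np.roll(np.roll(psf, 10 - n // 2, axis=0), 20 - n // 2, axis=1)

model, residual = hogbom(dirty, psf)
print(round(model[10, 20], 3))  # source flux recovered: ≈ 0.999
```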

Often, the first step in imaging is to make a simple gridded Fourier inversion of the calibrated data to make a “dirty” image. This can then be examined to look for the presence of noticeable emission above the noise, and to assess the quality of the calibration by searching for artifacts in the image. This is done using clean with niter=0.
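The dirty image itself is just the Fourier inversion of the visibilities. A small numpy sketch of the ungridded direct transform, with invented baseline coordinates (real imaging grids the data and uses an FFT, which is what makes large images tractable):

```python
import numpy as np

# Dirty image as the direct Fourier inversion of a set of visibilities:
#   I(l, m) = (1/N) * Re sum_k V_k * exp(2*pi*i*(u_k*l + v_k*m))
rng = np.random.default_rng(1)
n_vis = 200
u = rng.uniform(-500, 500, n_vis)    # baseline coordinates in wavelengths
v = rng.uniform(-500, 500, n_vis)
vis = np.ones(n_vis, dtype=complex)  # 1 Jy point source at the phase center

l = np.linspace(-0.01, 0.01, 65)     # direction cosines
m = np.linspace(-0.01, 0.01, 65)
L, M = np.meshgrid(l, m)
dirty = np.zeros_like(L)
for uk, vk, Vk in zip(u, v, vis):
    dirty += np.real(Vk * np.exp(2j * np.pi * (uk * L + vk * M)))
dirty /= n_vis

cy, cx = np.unravel_index(np.argmax(dirty), dirty.shape)
print((cy, cx) == (32, 32))  # peak at the phase center → True
```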

The clean task can jointly deconvolve mosaics as well as single fields, and also has options to do wide-field and wide-band multi-frequency synthesis imaging.

See § 5.3 for an in-depth discussion of the clean task.

Feathering in a Single-Dish image

If you have a single-dish image of the large-scale emission in the field, this can be “feathered” in to the image obtained from the interferometer data. This is carried out using the feather task as the weighted sum in the uv-plane of the gridded transforms of these two images. While not as accurate as a true joint reconstruction of an image from the synthesis and single-dish data together, it is sufficient for most purposes.
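The operation can be sketched as a weighted sum of the two images’ Fourier transforms, with the single dish supplying the low spatial frequencies. The Gaussian weight and its width below are an invented stand-in for the weighting feather actually derives from the single-dish beam:

```python
import numpy as np

# Conceptual feather: low spatial frequencies from the single-dish FFT,
# high ones from the interferometer FFT, with complementary weights.
def feather(int_img, sd_img, sigma=4.0):
    n = int_img.shape[0]
    ky = np.fft.fftfreq(n)[:, None]
    kx = np.fft.fftfreq(n)[None, :]
    w_sd = np.exp(-(kx ** 2 + ky ** 2) * n ** 2 / (2 * sigma ** 2))  # low-pass
    combined = w_sd * np.fft.fft2(sd_img) + (1 - w_sd) * np.fft.fft2(int_img)
    return np.real(np.fft.ifft2(combined))

# Sanity check: feathering an image with itself returns the same image,
# since the weights sum to one at every spatial frequency.
img = np.zeros((32, 32))
img[16, 16] = 1.0
out = feather(img, img)
print(np.allclose(out, img))  # True
```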

See §  5.6 for details on the use of the feather task.

1.6.5  Self Calibration

Once a calibrated dataset is obtained, and a first deconvolved model image is computed, a “self-calibration” loop can be performed. Effectively, the model (not restored) image is passed back to another calibration process (on the target data). This refines the calibration of the target source, which up to this point has had (usually) only external calibration applied. This process follows the regular calibration procedure outlined above.

Any number of self-calibration loops can be performed. As long as the images are improving, it is usually prudent to continue the self-calibration iterations.

This process is described in §  5.11 .

1.6.6 Data and Image Analysis

The key data and image analysis tasks are:

  • imhead — summarize and manipulate the “header” information in a CASA image (§  6.2 )
  • imcontsub — perform continuum subtraction on a spectral-line image cube (§  6.5 )
  • immath — perform mathematical operations on or between images (§  6.7 )
  • immoments — compute the moments of an image cube (§  6.8 )
  • imstat — calculate statistics on an image or part of an image (§  6.10 )
  • imval — extract values of one or more pixels, as a spectrum for cubes, from an image (§  6.11 )
  • imfit — simple 2D Gaussian fitting of single components to a region of an image (§  6.6 )
  • imregrid — regrid an image onto the coordinate system of another image (§  6.14 )
  • viewer — there are useful region statistics and image cube plotting capabilities in the viewer (§ 7).

What’s in an image?

The imhead task will print out a summary of image “header” keywords and values. This task can also be used to retrieve and change the header values.

Image statistics

The imstat task will print image statistics. There are options to restrict this to a box region, and to specified channels and Stokes of the cube. This task will return the statistics in a Python dictionary return variable.

Image values

The imval task will extract values from an image. There are options to restrict this to a box region, and to return specified channels and Stokes of the cube as a spectrum. This task will return these values in a Python dictionary return variable which can then be operated on in the casa environment.

Moments of an image cube

The immoments task will compute a “moments” image of an input image cube. A number of options are available, from the traditional true moments (zero, first, second) and variations thereof, to other images such as median, minimum, or maximum along the moment axis.

Image math

The immath task will allow you to form a new image by mathematical combinations of other images (or parts of images). This is a powerful, but tricky, task to use.

Regridding an Image

It is occasionally necessary to regrid an image onto a new coordinate system. The imregrid task can be used to regrid an input image onto the coordinate system of an existing template image, creating a new output image.

See § 6.14 for a description of this task.

Displaying Images

To display an image use the viewer task. The viewer will display images in raster, contour, or vector form. Blinking and movies are available for spectral-line image cubes. To start the viewer, type viewer at the CASA prompt.

Executing the viewer task will bring up two windows: a viewer screen showing the data or image, and a file catalog list. Click on an image or ms from the file catalog list, choose the proper display, and the image should pop up on the screen. Clicking on the wrench tool (second from left on upper left) will obtain the data display options. Most functions are self-documenting.

The viewer can be run outside of casa by typing casaviewer .

See §  7 for more on viewing images.

1.6.7  Getting data and images out of CASA

The key data and image export tasks are:

  • exportuvfits — export a CASA MS in UVFITS format (§  2.2.7 )
  • exportfits — export a CASA image table as FITS (§  6.27 ).

These tasks can be used to export a CASA MS or image to UVFITS or FITS respectively. See the individual sections referred to above for more on each.