Scripts that show real-world flows

The Python scripts below are detailed, working examples of ADMIT processing as might be done in the ALMA pipeline or by a scientist with an ALMA FITS cube.

Example 1: Ingest through LineID and Moment

This script takes a FITS cube and does the following:

  1. Create an ADMIT project directory and import the FITS cube into a CASA image. [Ingest]
  2. Compute robust statistics on the input data. [CubeStats]
  3. Sum the cube along all channels of the frequency axis to make an integrated intensity map. [CubeSum]
  4. Compute a spectrum through the cube at a specified point (default: the spatial center). [CubeSpectrum]
  5. Create a position-velocity diagram by selecting an optimum slice position angle through examination of the emission properties. [PVSlice]
  6. Compute a position-velocity cross-correlation. [PVCorr]
  7. Identify any spectral lines present in the data. [LineID]
  8. Create a cutout cube for each spectral line identified. [LineCube]
  9. Compute moment 0, 1, 2 maps and spectra for each cutout cube. [Moment, CubeSpectrum]
  10. Write out the ADMIT project results to XML and a summary HTML file.

admit_example1.py

Line # Notes
1 casarun is ADMIT’s wrapper for running Python scripts with ‘casapy --quiet --nogui -c’.
21 You need only ‘import admit’ to get all ADMIT functionality.
27-45 Initial parameters for script options.
48-54 ALMA does not yet pass VLSR to its output cubes, so we have to kludge it here.
56-70 Method to create string name of output directory based on input cube name. E.g., for input cube ‘mycube.fits’, output directory will be ‘mycube.admit’
73-80 Parse command line arguments
89-91 Potentially clean up an old run
98 Create an ADMIT project or open existing one (if it wasn’t cleaned up)
100-108 If a new ADMIT project, copy the FITS file into it, otherwise print some info and exit
111 Set default plotting parameters.
114-116 Create the first task, Ingest, which reads in the FITS cube, and add the task to the project flow. Optionally mask data that are zero (useMask=True).
119-123 Create the second task, CubeStats_AT, which calculates data statistics, and add it to the project flow. Set the method of robust statistics to be used (robust) and optionally create the peak-point plot (usePPP=True).
125-129 Create the third task, CubeSum_AT, which calculates a map summed over all channels, and add it to the flow.
131-137 Create the fourth task, CubeSpectrum_AT, which calculates a spectrum through the cube, and add it to the flow.
141-146 Choose one of the hardcoded ways of specifying a slice or slit for a position-velocity diagram. See PVSlice for the difference between slices and slits.
147-151 Alternatively, let PVSlice_AT determine the best slice location.
152-153 Now create the fifth task, PVSlice, and add it to the flow.
155-156 Create the sixth task, PVCorr, and add it to the flow.
159-166 This is a workaround for the fact that ALMA data cubes do not currently include the source LSR velocity in the image header. So we look in our own internal list to see if there is an entry for the source. Future releases should not need this workaround.
168-170 Create the seventh task, LineID, and add it to the flow.
172-174 Create the eighth task, LineCube, and add it to the flow. Set the grow option to include 10 extra channels around the channel range found by LineID.
176-178 Comment explaining that the flow must be run at this point, because the tasks that follow depend on the number of lines found.
179-180 Run the tasks created so far and write the results to disk.
183-184 Informational: print the number of lines found, equal to the length of the LineCube output BDP vector.
193-195 For each spectral line found, there will be a linecube image, accessed by ImageBDP.getimagefile.
197 Make the tuple needed for the subsequent addtask call.
198 Create and add a Moment task for the i-th spectral line.
200-202 Set the parameters for the moment tasks
203-214 Choose how to compute the CubeSpectrum for each output of LineCube: use the peak of the moment map (usePeak=True), the positions listed in maxpos, or the location of the emission maximum in the original input FITS cube.
216-221 If no spectral lines were found, create moments of the whole input cube instead.
224-229 Finally, run all the un-run tasks, write the output to disk, and create a summary HTML file index.html in the output admit directory.
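
The directory-naming rule described in the notes for lines 56-70 is easy to try in isolation. The sketch below is a standalone version of the admit_dir helper that appears in the listing; the example names are illustrative only.

```python
def admit_dir(name):
    """Derive the ADMIT output directory name from an input file name.

    'mycube.fits' -> 'mycube.admit'; a name already ending in
    '.admit' is assumed to be a re-run and is returned unchanged.
    """
    ext = '.admit'
    loc = name.rfind('.')
    if loc < 0:                       # no extension at all
        return name + ext
    if name[loc:] == ext:             # already an .admit directory
        return name
    return name[:loc] + ext

print(admit_dir('mycube.fits'))       # mycube.admit
print(admit_dir('mycube.admit'))      # mycube.admit  (re-run case)
print(admit_dir('mycube'))            # mycube.admit
```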
#! /usr/bin/env casarun
#
# admit-example1 version using new package style
#
#   admit-example1.py :  a first example ADMIT pipeline/flow
#
#   Usage:      $ADMIT/admit/test/admit-example1.py  [spwcube.fits] [alias] 
#
#   This will create a spwcube.admit directory with all the
#   BDP's and associated data products inside. Linecubes will
#   have their own directory with their associates products
#   within that directory.
#
#   If you give an admit directory, it will try and re-run in that directory.
# ==============================================================================

# python system modules
import sys, os, math

# Always import admit!
import admit

version  = '29-oct-2015'


#  ===>>> set some parameters for this run <<<===========================================
#
file     = 'foobar.fits'    # the default FITS input name to be ingested
alias    = ''               # use a short alias instead of the possibly long basename?
#
useMask  = True             # use a mask where fits data == 0.0
vlsr     = None             # either set it below, or make get_vlsr() to work (else vlsr=0 will be used)
maxpos   = []               # default to the peak in the cube for CubeSpectrum
clean    = True             # clean up the previous admit tree, i.e. no re-running
plotmode = admit.PlotControl.BATCH # NOPLOT, BATCH, INTERACTIVE, SHOW_AT_END
plottype = admit.PlotControl.PNG   # output image format. PNG, JPG, etc.
loglevel = 15               # 10=DEBUG, 15=TIMING 20=INFO 30=WARNING 40=ERROR 50=FATAL
usePeak  = True             # LineCubeSpectra through peak of a Moment0 map? (else pos[] or SpwCube Peak)
pvslice  = []               # PV Slice (x0,y0,x1,y1)
pvslit   = []               # PV Slice (xc,yc,len,pa)  if none given, it will try and find out
usePPP   = True             # create and use PeakPointPlot?
useMOM   = True             # if no lines are found, do a MOM0,1,2 anyways ?
pvSmooth = [10,10]          # smooth the PVslice ?   Pos or Pos,Vel pixel numbers
robust  = ()                # Robust statistics method, () = default.
#  ====================================================================


# placeholder until we have an official way to find the VLSR of a source
# this is a remnant from how we did this in milestone2.py
# See the file $ADMIT/etc/vlsr.tab if you want to add sources
def get_vlsr(source, vlsr=None):
    if vlsr != None : return vlsr
    vq = admit.VLSR()
    return vq.vlsr(source.upper())

def admit_dir(file):
    """ Create the admit directory name from a filename.
    This filename can be a FITS file (usually with a .fits extension
    or a directory, which would be assumed to be a CASA image or
    a MIRIAD image. It can be an absolute or relative address.
    """
    loc = file.rfind('.')
    ext = '.admit'
    if loc < 0:
        return file + ext
    else:
        if file[loc:] == ext:
            print "Warning: assuming a re-run on existing ",file
            return file
        return file[:loc] + ext


# Parse command line arguments, FITS file name and alias
argv = admit.utils.casa_argv(sys.argv)
if len(argv) > 1:
    file  = argv[1]
    alias = ""
if len(argv) > 2:
    file  = argv[1]
    alias = argv[2]

#-----------------------------------------------------------------------------------------
#----------------------------------- start of script -------------------------------------

#  announce version
print 'ADMIT1: Version ',version

#  do the work in a proper ".admit" directory
adir = admit_dir(file)
#   dirty method, it really should check if adir is an admit directory
if clean and adir != file:
    print "Removing previous results from ",adir
    os.system('rm -rf %s' % adir)
    create=True
else:
    create=False

a = admit.Project(adir,name='Testing ADMIT1 style pipeline - version %s' % version,create=create,loglevel=loglevel)

if a.new:
    # copy the script into the admit directory
    print "Starting a new ADMIT using ",argv[0]
    os.system('cp -a %s %s' % (argv[0],adir))                 
else:
    print "All done, we just read an existing admit.xml and it should do nothing"
    a.fm.diagram(a.dir()+'admit.dot')
    a.show()
    a.showsetkey()
    sys.exit(0)     # doesn't work properly in IPython!

# Default ADMIT plotting environment
a.plotparams(plotmode,plottype)

# Ingest
ingest1 = a.addtask(admit.Ingest_AT(file=file,alias=alias))
a[ingest1].setkey('mask',useMask) 
bandcube1 = (ingest1,0)   


# CubeStats - will also do log(Noise),log(Peak) plot
cubestats1 = a.addtask(admit.CubeStats_AT(), [bandcube1])
a[cubestats1].setkey('robust',robust)
a[cubestats1].setkey('ppp',usePPP)
csttab1 = (cubestats1,0)

# CubeSum
moment1 = a.addtask(admit.CubeSum_AT(), [bandcube1,csttab1])
a[moment1].setkey('numsigma',4.0)   # Nsigma clip in cube
# >0 force single cuberms sigma from cubestats; <0 would use rms(freq) table
a[moment1].setkey('sigma',99.0)     
csmom0 = (moment1,0)

# CubeSpectrum 
if len(maxpos) > 0:
    cubespectrum1 = a.addtask(admit.CubeSpectrum_AT(), [bandcube1])
    a[cubespectrum1].setkey('pos',maxpos)
else:
    cubespectrum1 = a.addtask(admit.CubeSpectrum_AT(), [bandcube1,csmom0])
csptab1 = (cubespectrum1,0)


# PVSlice
if len(pvslice) == 4:
    # hardcoded with a start,end
    slice1 = a.addtask(admit.PVSlice_AT(slice=pvslice,width=5),[bandcube1,csmom0])
elif len(pvslit) == 4:
    # hardcoded with a center,len,pa
    slice1 = a.addtask(admit.PVSlice_AT(slit=pvslit,width=5),[bandcube1,csmom0])
else:
    # use cubesum's map to generate the best slice
    slice1 = a.addtask(admit.PVSlice_AT(width=5),[bandcube1,csmom0])
    a[slice1].setkey('clip',0.3)        # TODO: this is an absolute number for mom0
    a[slice1].setkey('gamma',1.0)       # SB185 works better with gamma=4
a[slice1].setkey('smooth',pvSmooth)     # smooth, in pixel numbers
pvslice1 = (slice1,0)

corr1 = a.addtask(admit.PVCorr_AT(),[pvslice1,csttab1])
pvcorr1 = (corr1,0)


#----------- Workaround because ALMA data don't include VLSR in header -----------------
# Need to run now, since LineID_AT needs the vlsr
a.run()
a.write()
source = a.summaryData.get('object')[0].getValue()[0]
vlsr = get_vlsr(source,vlsr)
print "OBJECT = %s VLSR = %s" % (source, vlsr)
#---------------------------------------------------------------------------------------

# LineID uses integer segment=<STRING>:   pick ASAP or ADMIT 
lineid1 = a.addtask(admit.LineID_AT(vlsr=vlsr,segment="ADMIT"), [csptab1,csttab1,pvcorr1]) 
lltab1 = (lineid1,0)

# LineCube
linecube1 = a.addtask(admit.LineCube_AT(), [bandcube1,lltab1])
a[linecube1].setkey('grow',10)          # +growth

# RUN_1: now we need to run the flow, since we need to 
# know the number of Lines found and produce the linecubes 
# for the next for-loop.
a.run()
a.write()


nlines = len(a[linecube1])
print "Found %d lines during runtime" % nlines

x = range(nlines)    # place holder to contain mol/line
m = {}               # task id for moment on this mol/line
sp= {}               # task id for cubespectrum on this mol/line
st= {}               # task id for cubestats

# loop over all lines  
# produce moments and spectra from the linecubes just created
for i in range(nlines):
    x[i] = a[linecube1][i].getimagefile(admit.bdp_types.CASA)
    print "LineDir:", i, x[i]
    # Moment maps from the LineCube
    linecubei = (linecube1,i)
    m[x[i]] = a.addtask(admit.Moment_AT(),[linecubei,csttab1])
    print "MOMENT_AT:",m[x[i]]
    a[m[x[i]]].setkey('moments',[0,1,2])
    a[m[x[i]]].setkey('numsigma',[2.0])
    a[m[x[i]]].setkey('mom0clip',2.0)        # TODO: this is still an absolute value
    if usePeak:
        momenti0 = (m[x[i]],0)          # this linecube mom0
        # CubeSpectrum through the line cube where the mom0 map has peak
        sp[x[i]] = a.addtask(admit.CubeSpectrum_AT(),[linecubei,momenti0])
    elif len(maxpos) > 0:
        # CubeSpectrum through the line cube at given point, 
        # use the same maxpos for all linecubes as we used before
        sp[x[i]] = a.addtask(admit.CubeSpectrum_AT(),[linecubei])
        a[sp[x[i]]].setkey('pos',maxpos)
    else:
        # CubeSpectrum last resort, where it takes the peak from cube maximum
        sp[x[i]] = a.addtask(admit.CubeSpectrum_AT(),[linecubei,csttab1])

if useMOM and nlines == 0:
    # if no lines found, take a moment (0,1,2) over the whole cube
    moment1 = a.addtask(admit.Moment_AT(),[bandcube1,csttab1])
    a[moment1].setkey('moments',[0,1,2])
    a[moment1].setkey('numsigma',[2.0])
    a[moment1].setkey('mom0clip',2.0)       # Nsigma


# final run and save
a.run()
a.write()

# symlink resources, create dot file, and write out index.html
a.updateHTML()
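
Throughout the script, tasks are wired together by tuples of the form (task_id, output_index): the second element selects which output BDP of the upstream task feeds the downstream one (e.g. bandcube1 = (ingest1, 0)). The toy model below sketches that convention without using ADMIT at all; the Flow class and its add/upstream methods are invented for illustration and are not part of the ADMIT API.

```python
class Flow(object):
    """Toy stand-in for an ADMIT project: tasks are added in order and
    connected by (task_id, output_index) tuples."""
    def __init__(self):
        self.tasks = []                     # (name, list of input tuples)

    def add(self, name, inputs=()):
        self.tasks.append((name, list(inputs)))
        return len(self.tasks) - 1          # task id, like a.addtask()

    def upstream(self, tid):
        """Names of the tasks feeding task tid."""
        return [self.tasks[t][0] for (t, idx) in self.tasks[tid][1]]

f = Flow()
ingest1 = f.add('Ingest')
bandcube1 = (ingest1, 0)                    # output 0 of Ingest
cubestats1 = f.add('CubeStats', [bandcube1])
csttab1 = (cubestats1, 0)                   # output 0 of CubeStats
moment1 = f.add('CubeSum', [bandcube1, csttab1])
print(f.upstream(moment1))                  # ['Ingest', 'CubeStats']
```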

Example 2: Adding optional continuum map input, Smooth, and OverlapIntegral

This script is identical to Example 1, with the addition of optional clipping and smoothing of the input FITS file when it is ingested, finding sources in a related continuum map using SFind2D, and creating an OverlapIntegral image of the first three spectral lines identified. The notes below highlight the differences between Example 1 and Example 2.

admit_example2.py

Line # Notes
7 The name of the continuum image can be passed in on the command line
40-44 Options to smooth the cube when ingesting (insmooth), or trim the size of the input cube (inbox or inedge). See the Ingest documentation for details of these keywords. Note that this smoothing is done in place; the output is the smoothed cube.
45 Optionally, one can smooth the output of Ingest, rather than doing it in place.
129-133 Smooth and trim options passed to Ingest.
136-152 Add a task to smooth the output of Ingest with Smooth and use the smoothed output from here on (bandcube1 = bandcube2).
157-165 If a continuum image was given, set up processing of it with Ingest, CubeStats, and SFind2D.
273-281 If more than 3 spectral lines were identified by LineID, create an OverlapIntegral task and add it to the flow.
284-289 Run the flow, write the output to disk, and create a summary HTML file index.html in the output admit directory.
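
The OverlapIntegral step simply takes the first three linecubes, and the code comments note that one "would really like to take the 3 strongest lines". That selection can be sketched as below, assuming hypothetical (line name, peak intensity) pairs rather than real LineID output:

```python
# Hypothetical LineID-style results: (line name, peak intensity).
lines = [('CO_115.27', 2.1), ('SiO_130.27', 0.4),
         ('CS_97.98', 1.3), ('HCN_88.63', 1.7)]

# Pick the three strongest lines for the R, G, B planes of an
# OverlapIntegral-style image.
strongest = sorted(lines, key=lambda l: l[1], reverse=True)[:3]
r, g, b = [name for (name, peak) in strongest]
print(r, g, b)    # CO_115.27 HCN_88.63 CS_97.98
```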
#! /usr/bin/env casarun
#
# admit-example2 version using new package style
#
#   admit-example2.py :  a second example ADMIT pipeline/flow
#
#   Usage:      $ADMIT/admit/test/admit-example2.py  [spwcube.fits [alias]] [cont.fits]
#
#   This will create a spwcube.admit directory with all the
#   BDP's and associated data products inside. Linecubes will
#   have their own directory with their associates products
#   within that directory.
#   Optionally you can give a continuum image, cont.fits
#
#   If you give an admit directory, it will try and re-run in that directory.

# ========================================================================================
# python system modules
import sys, os, math

# Always import admit!
import admit

version  = '29-oct-2015'

#  ===>>> set some parameters for this run <<<========================================================
#
file     = 'foobar.fits'    # the default FITS input name to be ingested
alias    = ''               # use a short alias instead of the possibly long basename?
cont     = ''               # add this continuum fits file to the ingestion process?
#
useMask  = True             # use a mask where fits data == 0.0
vlsr     = None             # either set it below, or make get_vlsr() to work (else vlsr=0 will be used)
maxpos   = []               # default to the peak in the cube for CubeSpectrum
clean    = True             # clean up the previous admit tree, i.e. no re-running
plotmode = admit.PlotControl.BATCH # NOPLOT, BATCH, INTERACTIVE, SHOW_AT_END
plottype = admit.PlotControl.PNG   # output image format. PNG, JPG, etc.
loglevel = 15               # 10=DEBUG, 15=TIMING 20=INFO 30=WARNING 40=ERROR 50=FATAL
insmooth = []               # smooth inside of Ingest_AT, in pixels
inbox    = []               # box to cut in Ingest_AT  [x0,y0,x1,y1] or [x0,y0,z0,x1,y1,z1]
inedge   = []               # edges to cut in Ingest_AT [zleft,zright]
                            # [0] and [1] are spatial
                            # [2] is frequency:    1=>Hanning  >1 Boxcar
smooth   = []               # if set, smooth the cube right after ingest
usePeak  = True             # LineCubeSpectra through peak of a Moment0 map? (else pos[] or SpwCube Peak)
pvslice  = []               # PV Slice (x0,y0,x1,y1)
pvslit   = []               # PV Slice (xc,yc,len,pa)  if none given, it will try and find out
usePPP   = True             # create and use PeakPointPlot?
useMOM   = True             # if no lines are found, do a MOM0,1,2 anyways ?
pvSmooth = [10,10]          # smooth the PVslice ?   Pos or Pos,Vel pixel numbers
robust  = ()                # Robust statistics method, () = default.
minOI    = 3                # If at least minOI linecubes present, use them in an OverlapIntegral

# placeholder until we have an official way to find the VLSR of a source
# this is a remnant from how we did this in milestone2.py
# See the file $ADMIT/etc/vlsr.tab if you want to add sources
def get_vlsr(source, vlsr=None):
    if vlsr != None : return vlsr
    vq = admit.VLSR()
    return vq.vlsr(source.upper())

def admit_dir(file):
    """ Create the admit directory name from a filename.
    This filename can be a FITS file (usually with a .fits extension
    or a directory, which would be assumed to be a CASA image or
    a MIRIAD image. It can be an absolute or relative address.
    """
    loc = file.rfind('.')
    ext = '.admit'
    if loc < 0:
        return file + ext
    else:
        if file[loc:] == ext:
            print "Warning: assuming a re-run on existing ",file
            return file
        return file[:loc] + ext


# allow a command line argument to be the fits file name, unless you have foobar.fits ;-)
argv = admit.utils.casa_argv(sys.argv)
if len(argv) > 1:
    file  = argv[1]
    alias = ""
if len(argv) > 2:
    file  = argv[1]
    alias = argv[2]
if len(argv) > 3:
    file  = argv[1]
    alias = argv[2]
    cont  = argv[3]

#----------------------------------------------------------------------
#--------------------------- start of script --------------------------

#  announce version
print 'ADMIT2: Version ',version

#  do the work in a proper ".admit" directory
adir = admit_dir(file)
#   dirty method, it really should check if adir is an admit directory
if clean and adir != file:
    print "Removing previous results from ",adir
    os.system('rm -rf %s' % adir)
    create=True
else:
    create=False

a = admit.Project(adir,name='Testing ADMIT2 style pipeline - version %s' % version,create=create,loglevel=loglevel)

if a.new:
    # copy the script into the admit directory
    print "Starting a new ADMIT using ",argv[0]
    os.system('cp -a %s %s' % (argv[0],adir))     
else:
    print "All done, we just read an existing admit.xml and it should do nothing"
    print "Use admit0.py to re-run inside of your admit directory"
    #
    a.fm.diagram(a.dir()+'admit.dot')
    a.show()
    a.showsetkey()
    sys.exit(0)     # doesn't work properly in IPython!

# Default ADMIT plotting environment
a.plotparams(plotmode,plottype)

# Ingest
ingest1 = a.addtask(admit.Ingest_AT(file=file,alias=alias))
a[ingest1].setkey('mask',useMask) 
a[ingest1].setkey('smooth',insmooth)
if len(inbox) > 0:
    a[ingest1].setkey('box',inbox)
if len(inedge) > 0:
    a[ingest1].setkey('edge',inedge)
bandcube1 = (ingest1,0)   

if len(smooth) > 0:
    smooth1 = a.addtask(admit.Smooth_AT(), [bandcube1])
    a[smooth1].setkey('bmaj',{'value':smooth[0], 'unit':'pixel'}) 
    a[smooth1].setkey('bmin',{'value':smooth[1], 'unit':'pixel'})
    a[smooth1].setkey('bpa',0.0)                       
    if len(smooth) > 2:
        a[smooth1].setkey('velres',{'value':smooth[2], 'unit':'pixel'})
    
    bandcube2 = (smooth1,0)
    # Forget about the original, so we can continue the flow with the smoothed cube
    # in this model, it would be better to use insmooth= for Ingest_AT, it doesn't
    # need the extra space to hold bandcube1
    #
    # Another model could be to have two flows in this script, up to LineID,
    # one with each (or more) smoothing factor and decide which one to continue with
    # or do LineID with bandcube2, but make linecube's from bandcube1
    bandcube1 = bandcube2


# If a continuum image was given, set up to ingest it and run
# source-find on it.
if cont != '':
    ingest2 = a.addtask(admit.Ingest_AT(file=cont,alias=alias+'-cont'))
    contmap = (ingest2,0)
    cubestats2 = a.addtask(admit.CubeStats_AT(), [contmap])
    csttab2 = (cubestats2,0)
    sfind2 = a.addtask(admit.SFind2D_AT(), [contmap,csttab2])
    cslist = (sfind2,0)
else:
    cslist = ()

# CubeStats - will also do log(Noise),log(Peak) plot
cubestats1 = a.addtask(admit.CubeStats_AT(), [bandcube1])
a[cubestats1].setkey('robust',robust)
a[cubestats1].setkey('ppp',usePPP)
csttab1 = (cubestats1,0)

# CubeSum
moment1 = a.addtask(admit.CubeSum_AT(), [bandcube1,csttab1])
a[moment1].setkey('numsigma',4.0)   # Nsigma clip in cube
# >0 force single cuberms sigma from cubestats; <0 would use rms(freq) table
a[moment1].setkey('sigma',99.0)     
csmom0 = (moment1,0)

# CubeSpectrum 
if len(maxpos) > 0:
    cubespectrum1 = a.addtask(admit.CubeSpectrum_AT(), [bandcube1])
    a[cubespectrum1].setkey('pos',maxpos)
elif len(cslist) > 0:
    cubespectrum1 = a.addtask(admit.CubeSpectrum_AT(), [bandcube1,cslist])    
else:
    cubespectrum1 = a.addtask(admit.CubeSpectrum_AT(), [bandcube1,csmom0])
csptab1 = (cubespectrum1,0)


# PVSlice
if len(pvslice) == 4:
    # hardcoded with a start,end
    slice1 = a.addtask(admit.PVSlice_AT(slice=pvslice,width=5),[bandcube1,csmom0])
elif len(pvslit) == 4:
    # hardcoded with a center,len,pa
    slice1 = a.addtask(admit.PVSlice_AT(slit=pvslit,width=5),[bandcube1,csmom0])
else:
    # use cubesum's map to generate the best slice
    slice1 = a.addtask(admit.PVSlice_AT(width=5),[bandcube1,csmom0])
    a[slice1].setkey('clip',0.3)        # TODO: this is an absolute number for mom0
    a[slice1].setkey('gamma',1.0)       # SB185 works better with gamma=4
a[slice1].setkey('smooth',pvSmooth)     # smooth, in pixel numbers
pvslice1 = (slice1,0)

corr1 = a.addtask(admit.PVCorr_AT(),[pvslice1,csttab1])
pvcorr1 = (corr1,0)


#----------- Workaround because ALMA data don't include VLSR in header -----------------
# Need to run now, since LineID_AT needs the vlsr
a.run()
a.write()
source = a.summaryData.get('object')[0].getValue()[0]
vlsr = get_vlsr(source,vlsr)
print "OBJECT = %s VLSR = %s" % (source, vlsr)
#---------------------------------------------------------------------------------------

# LineID uses integer segment=<STRING>:   pick ASAP or ADMIT 
lineid1 = a.addtask(admit.LineID_AT(vlsr=vlsr,segment="ADMIT"), [csptab1,csttab1,pvcorr1]) 
lltab1 = (lineid1,0)

# LineCube
linecube1 = a.addtask(admit.LineCube_AT(), [bandcube1,lltab1])
a[linecube1].setkey('grow',10)          # +growth

# RUN_1: now we need to run the flow, since we need to 
# know the number of Lines found and produce the linecubes 
# for the next for-loop.
a.run()
a.write()


nlines = len(a[linecube1])
print "Found %d lines during runtime" % nlines

x = range(nlines)    # place holder to contain mol/line
m = {}               # task id for moment on this mol/line
sp= {}               # task id for cubespectrum on this mol/line
st= {}               # task id for cubestats

# loop over all lines  
# produce moments and spectra from the linecubes just created
for i in range(nlines):
    x[i] = a[linecube1][i].getimagefile(admit.bdp_types.CASA)
    print "LineDir:", i, x[i]
    # Moment maps from the LineCube
    linecubei = (linecube1,i)
    m[x[i]] = a.addtask(admit.Moment_AT(),[linecubei,csttab1])
    print "MOMENT_AT:",m[x[i]]
    a[m[x[i]]].setkey('moments',[0,1,2])
    a[m[x[i]]].setkey('numsigma',[2.0])
    a[m[x[i]]].setkey('mom0clip',2.0)        # TODO: this is still an absolute value
    if usePeak:
        momenti0 = (m[x[i]],0)          # this linecube mom0
        # CubeSpectrum through the line cube where the mom0 map has peak
        sp[x[i]] = a.addtask(admit.CubeSpectrum_AT(),[linecubei,momenti0])
    elif len(maxpos) > 0:
        # CubeSpectrum through the line cube at given point, 
        # use the same maxpos for all linecubes as we used before
        sp[x[i]] = a.addtask(admit.CubeSpectrum_AT(),[linecubei])
        a[sp[x[i]]].setkey('pos',maxpos)
    else:
        # CubeSpectrum last resort, where it takes the peak from cube maximum
        sp[x[i]] = a.addtask(admit.CubeSpectrum_AT(),[linecubei,csttab1])

if useMOM and nlines == 0:
    # if no lines found, take a moment (0,1,2) over the whole cube
    moment1 = a.addtask(admit.Moment_AT(),[bandcube1,csttab1])
    a[moment1].setkey('moments',[0,1,2])
    a[moment1].setkey('numsigma',[2.0])
    a[moment1].setkey('mom0clip',2.0)       # Nsigma

# OverlapIntegral
if minOI > 0 and nlines >= minOI:
    #  would really like to take the 3 strongest lines
    print "Testing OverlapIntegral, just take the first 3 lines"
    r = (m[x[0]],0)
    g = (m[x[1]],0)
    b = (m[x[2]],0)
    oi1 = a.addtask(admit.OverlapIntegral_AT(),[r,g,b])
elif minOI > 0:
    print "Only found %d lines, need %d for OverlapIntegral" % (nlines,minOI)


# final run and save
a.run()
a.write()

# symlink resources, create dot file, and write out index.html
a.updateHTML()

Example 3: Multiflow

This is an example of a simple multiflow that takes the output of several Moment tasks to use as input to PrincipalComponent analysis. The initial setup is similar to Examples 1 and 2.
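
As background for what PrincipalComponent does with the collected moment-0 maps, the core of a PCA over a set of maps can be sketched with NumPy alone. The toy 4-pixel "maps" below are invented for illustration, not real ADMIT products:

```python
import numpy as np

# Toy moment-0 "maps", one flattened map per row.
maps = np.array([[1., 2., 3., 4.],     # line 1
                 [2., 4., 6., 8.],     # line 2 (perfectly correlated with line 1)
                 [0., 1., 0., 1.]])    # line 3

X = maps - maps.mean(axis=1, keepdims=True)   # center each map
cov = np.dot(X, X.T) / X.shape[1]             # map-to-map covariance
evals, evecs = np.linalg.eigh(cov)            # eigenvalues in ascending order

# A near-zero eigenvalue signals a redundant map (line 2 adds no
# information beyond line 1), which is exactly what PCA exposes.
print(evals)
```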

admit_example3.py

Line # Notes
45 At least two projects must be provided on the command line (or you can’t do PCA!)
83 In order to create a multiflow, one uses the ProjectManager.
85-94 For each input project, find output of Moment tasks to use as input to PrincipalComponent.
89 findTask will locate tasks of a specific type, in this case Moment_AT.
92 Add each Moment task that is found to the current project, in case the tasks are stale and need to be re-run.
93 Keep track of the zeroth moment output BDPs of each Moment task.
100 Create the PrincipalComponent task and add it to the flow. Its input BDPs are the zeroth moment outputs of the Moment tasks.
102-105 Run the flow, including possible re-run of the Moment tasks, write the output to disk, and create a summary HTML file index.html in the output admit directory.
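
findTask filters a project's tasks with a predicate, as in the script's lambda at: type(at) == type(admit.Moment_AT()). The sketch below mimics that pattern with stand-in classes; Ingest_AT and Moment_AT here are plain placeholders, not the real ADMIT task classes:

```python
class Ingest_AT(object):
    pass

class Moment_AT(object):
    pass

def find_task(tasks, predicate):
    """Return the tasks for which predicate(task) is True,
    mimicking ProjectManager.findTask."""
    return [at for at in tasks if predicate(at)]

project = [Ingest_AT(), Moment_AT(), Moment_AT()]
moments = find_task(project, lambda at: type(at) == type(Moment_AT()))
print("Found %d Moment_AT's" % len(moments))    # Found 2 Moment_AT's
```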
#! /usr/bin/env casarun
#
#   admit-example3.py :  process a series of admit projects with PCA
#
#   Usage:      admit-example3.py  aproject p1.admit p2.admit p3.admit ....
#
#   This will create a new aproject.admit directory and search through p1,p2,p3,....
#   for moment0 maps to enter into PCA, optional smooth to a common beam.
#
# ==================================================================================
# python system modules
import sys, os, math

import admit

version  = '29-oct-2015'

#  ===>>> set some parameters for this run <<<=========================================

clean    = True             # clean up the previous admit tree, i.e. no re-running
plotmode = admit.PlotControl.BATCH # NOPLOT, BATCH, INTERACTIVE, SHOW_AT_END
plottype = admit.PlotControl.PNG   # PNG, JPG, etc.
loglevel = 15               # 10=DEBUG, 15=TIMING 20=INFO 30=WARNING 40=ERROR 50=FATAL


def admit_dir(file):
    """ create the admit directory name from a filename 
    This filename can be a FITS file (usually with a .fits extension
    or a directory, which would be assumed to be a CASA image or
    a MIRIAD image.  It can be an absolute or relative address
    """
    loc = file.rfind('.')
    ext = '.admit'
    if loc < 0:
        return file + ext
    else:
        if file[loc:] == ext:
            print "Warning: assuming a re-run on existing ",file
            return file
        return file[:loc] + ext


# Parse command line arguments
argv = admit.utils.casa_argv(sys.argv)
if len(argv) < 3:
    raise Exception,"Need an admit project name, followed by one or more admit project directories"
file = argv[1]

#-------------------------------------------------------------------------------
#--------------- start of script -----------------------------------------------

#  announce version
print 'ADMIT3: Version ',version

#  do the work in a proper ".admit" directory
adir = admit_dir(file)
#   dirty method, it really should check if adir is an admit directory
if clean and adir != file:
    print "Removing previous results from ",adir
    os.system('rm -rf %s' % adir)
    create=True
else:
    create=False

a = admit.Project(adir,name='Testing ADMIT3 style pipeline - version %s' % version,create=create,loglevel=loglevel)

if a.new:
    print "Starting a new ADMIT using ",argv[0]
    os.system('cp -a %s %s' % (argv[0],adir))
    a.set(admit_dir=adir)
else:
    print "All done, we just read an existing admit.xml and it should do nothing"
    print "Use admit0.py to re-run inside of your admit directory"
    #
    a.fm.diagram(a.dir()+'admit.dot')
    a.show()
    a.showsetkey()
    sys.exit(0)  #doesn't work in IPython!

# Default ADMIT plotting environment
a.plotparams(plotmode,plottype)

pm = a.getManager()                            # get ProjectManager

bdps = []
for ap in argv[2:]:                            # loop over projects
    _p = pm.addProject(ap)
    # Search for Moment_AT tasks in the projects
    ats = pm.findTask(_p, lambda at: type(at) == type(admit.Moment_AT()))
    print "Found %d Moment_AT's in %s" % (len(ats),ap)
    for at in ats:
        tid = a.addtask(at)                    # add the AT to the current project
        bdps.append((tid,0))                   # for now, assume '0' was the mom0
        print "Moment_AT ",ap,tid,0

print "ADIR:",argv[2:]
print "Found %d BDP's" % len(bdps)
print "BDPs:",bdps

pca = a.addtask(admit.PrincipalComponent_AT(),bdps)

# Run the flow, write the outputs, update the web page.
a.run()
a.write()
a.updateHTML()