[LSST|dm-astrometry #4] Re: LSST Astrometry and Photometry

Pierre Astier pierre.astier at in2p3.fr
Thu Apr 23 08:21:35 PDT 2015


Le 23/04/2015 01:02, Robert Lupton the Good a écrit :
> Dear Dr. Astrometry,
>
> I'm sending you this email in response to a useful meeting with those on the CC list re the involvement of the French (IN2P3/LPNHE/CNRS/??) group in LSST astrometric and photometric calibration.  Pierre Astier presented a proposal for an astrometric solver with functionality similar to Emmanuel Bertin's SCAMP (which I'm appending without permission...).
You're welcome !
>
> At the end of the meeting we decided that the LSST folk would make a proposal on how to start our collaboration, but Les Français are of course more than welcome to take part.
>
> The point of this email is to make sure that everyone's on the mailing list (you can subscribe at https://lists.lsst.org/mailman/listinfo/dm-astrometry), and to let everyone else on the list know what's going on.
>
> So as to have *some* content, I'll also add my notes on Pierre's presentation.
Salut Robert,
> 	- I assume that clipping of outliers is included in all the solutions
It has to. Technically, one would like to avoid solving again. I know 
that cholmod
has a built-in rank-1 update (that Marc has successfully used).  For Eigen,
we might have to cook it up. I suspect there are documented ways to apply
the Woodburry identity  to sparse  matrix factorizations, because
any large-scale fit faces outliers.


>
> 	- It's nice to be able to solve for the relative astrometry without any external catalogue.  Otherwise errors in the external catalogue can drive distortions in the internal catalogue.  I realise that the system is under-determined without at least 2 externl fixed points, but it'd be good to allow a solution either adopting the mean position and scale of the internal points, or leaving the overall position and scale set to e.g. (0, 0) and 1.0
I think it is doable. The nice thing would be to try to avoid constraints
in the form of Lagrange multipliers because this drives us to QR
factorization, which is much slower.

>
> 	- I'd much rather not spend our time thinking about Calabretta-'n'-Greisen FITS WCS conventions.  Let's define the "Wcs" as a mapping from pixel to world coordinates without restriction on the representation.  I totally agree that we need a way to map this to de-factor standards such as TAN-SIP for external users, but let's decouple the problems.  For example, LSST's been looking at Starlink/Dave Berry's AST classes.
I think that as long as expressing residuals in some tangent plane is 
acceptable,
( i.e. it does not narrow the field of possible representations), the 
WCS practical
implementation should certainly *not* drive the design. Could you please 
point
us to some doc about these AST things?

> 	- In the same spirit, let's not think about I/O and FITS files.  The LSST code assumes data structures in memory, and that's what this astrometric solver should manipulate.  We do need to do I/O, of course, but that's a separate problem.
Of course. Abstract access is what we want, but it requires concrete 
implementations. Can you tell us
where to find some example of how the application code grabs e.g. the 
hour angle,
or the Julian date of a given set of observations ?

>
> 	- When you say, "Use polynomials initially" I hope you're thinking of Chebyshev polynomials.  The problem is still linear, and Chebyshev's behave much better between data points and when you might want to truncate the solutions.
I am not sure I get the point. It seems to me that the Chebychev 
polynomial span exactly the same
space as regular polynomials, i.e. that the fit to a data set will yield 
exactly the same
result, if (and it is a serious if) there are no numerical concerns. 
This is just a change of representation.
Then, polynomials are notoriously difficult to fit and choosing a 
representation that auto-magically
delivers a well-conditioned (and almost diagonal) Hessian is appealing. 
My experience is that
in the regular representation, mapping cooordinates to [-1,1] for the 
fit allows one to go to
higher orders than we usually need. I have not evaluated what it means 
in practice to use
Chebychev polynomials for the internal representation.

>
> 	- I mentioned freeing up the CCDs in the camera.  I don't think that we need this in the initial version, but I'm thinking of a model where one CCD (probably near the boresight) is fixed, and the others are connected by "springs" whose spring constants can be varied when minimising the X^2 (i.e. add a term sum k_ij (x_i - x_j)^2 to the cost function, where x_i and x_j are the positions of CCDs i and j.  I'd probably set k_ij == k initially, but I can imagine a different k between CCDs in a raft and between rafts).  If we make k very large the system becomes rigid).
The model should certainly be able to accommodate such features. We just 
have to think seriously
to what it means concerning the abstract classe(s) that connect the 
calculation of the chi² and its derivatives
to the concrete model.

>
> 	- If the matrices get too large, does it make sense to think of some sort of tiled or hierarchical fitter?
It does make sense to think about it. The point is probably to define 
precisely
how it is differs from just reducing the input data set


> 	- You're thinking of Cholesky decompositions.  Have you thought of pre-conditioned conjugate gradient solvers (as used by e.g. the CMB community)?  They are designed for Very large sparse systems, and can be easily parallelised using openMP or MPI (actually I'm not sure how easy the MPI versions are).  If you're interested, Jon Sievers is a colleague of mine (now in South Africa) who wrote the solvers for the ACT dataset.
The conjugate gradient is proposed in Eigen as a way to solve a 
(sparse)  linear system. I have not
worked a lot on figuring out what the capabilities of various 
implementations are, especially
regarding multi-whatever, that we will almost certainly want at some 
point. What I would propose at the moment
is that we are just talking to the concrete implementation  of sparse
algebra through a home-made (thin) layer of  software, at least until we 
understand
what we are doing. I don't think  it is a big piece of software to 
write, because
there are very few operations we are interested in.

>
> 	- When you say, "proper motion" I assume you mean parallax too.
Why not? My experience of these matters on the CFHTLS is that proper motions
are a nuisance ! There, finding moving stars and fitting their proper 
motions
turned out to be sufficient (over 5 years) to get rid of embarrassingly 
large residuals,
though not going as bright as  LSST will (because exposures were ~5 mn 
on a 4 m).

Pierre.

> 						R
>
>
>
>


-- 
-------------------
Pierre Astier , LPNHE, 12-22 1er étage.
4 place Jussieu, F 75252 Paris Cedex 05
tel (33) 1 44 27 76 47  ---- fax (33) 1 44 27 46 38

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://lists.lsst.org/mailman/private/dm-astrometry/attachments/20150423/9357f8b8/attachment.html>


More information about the dm-astrometry mailing list