"Let me tell you, this target sucks," Chris exclaims. Truly, this target is in a very odd spot. The flight software will happily place the APXS and MB on it, but it generates errors when you try to place the RAT or MI. Which is supposed to be impossible -- if you can get there with one tool, you can get there with all of them. Yet we've found an exception to the rule. Still stranger, the software will happily place the MI all around the target (which we did the other sol) -- just not on the target itself. I'd look into that if I had the time.
Still, our explorations here continue to go well. The scientists have gained enough confidence in the instrument placements here that they've cut down the MIs from 5-stacks to 3-stacks. That is, instead of taking a series of five images at each position, we're taking just three. The more images you take, the more likely it is one will be in focus; thus, shorter stacks indicate more confidence (or less importance, or tighter downlink). We've gone as high as eleven. Insert Spinal Tap joke here.
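The logic of the stacks can be made concrete with a toy sketch: take several frames at slightly different focus positions and keep the one that scores highest on a sharpness metric. This is only an illustration of the idea -- the image format, the gradient-energy metric, and the function names are my own inventions, not the actual MI processing.

```python
# Toy sketch of focus stacking: more frames means a better chance
# that one of them is sharp. The metric here (sum of squared
# neighbor differences) rewards strong edges, which blur smooths away.

def sharpness(image):
    """Gradient energy of a grayscale image (a list of pixel rows)."""
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += (b - a) ** 2
    return total

def best_frame(stack):
    """Return the index of the sharpest frame in the stack."""
    return max(range(len(stack)), key=lambda i: sharpness(stack[i]))

# A toy 3-stack: the middle frame has the strongest edges (best focus).
blurry   = [[10, 12, 14, 16]]
sharp    = [[10, 40, 5, 50]]
blurrier = [[12, 12, 13, 13]]
print(best_frame([blurry, sharp, blurrier]))  # → 1
```

A 5-stack or 11-stack is the same loop over more frames -- each extra frame is another draw at landing one in focus.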
There are limits to their confidence, though. Since the APXS doors recently failed to open fully on a rock target, they've asked us to open them on the CCT instead. The CCT, or Compositional Calibration Target, is a spot on the rover's own body, just above the space the arm stows into. As its name implies, it's used to help calibrate the MB and APXS -- we know what it's made of, so taking readings of it helps us interpret readings of unknown stuff. And since it's a hard and accessible surface, we can also open the APXS doors on it.
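The calibration idea itself is simple: if you know what the target is really made of, the gap between truth and measurement gives you a correction to apply to readings of unknown rocks. Here's a minimal sketch of that reasoning -- the element names, numbers, and function names are all made up for illustration, not actual APXS calibration.

```python
# Hypothetical sketch of calibration against a known target: the ratio
# of known abundance to measured abundance on the CCT gives a per-element
# correction factor, which we then apply to readings of unknown rocks.

def calibration_factor(known_truth, measured_on_cct):
    """Per-element correction factors derived from the calibration target."""
    return {el: known_truth[el] / measured_on_cct[el] for el in known_truth}

def correct(reading, factors):
    """Apply the correction factors to a raw instrument reading."""
    return {el: reading[el] * factors[el] for el in reading}

cct_truth    = {"Fe": 20.0, "Si": 45.0}   # what the CCT is known to contain
cct_measured = {"Fe": 16.0, "Si": 60.0}   # what the instrument reported
factors = calibration_factor(cct_truth, cct_measured)

raw_rock = {"Fe": 8.0, "Si": 40.0}
print(correct(raw_rock, factors))  # → {'Fe': 10.0, 'Si': 30.0}
```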
But doing so requires a more-than-usual amount of clearance under the rover. The HAZCAMs can't see that area, so Chris asks me to look back at imagery from previous sols to see if we have room. He tells me about some weird trick Frank uses for this,[2] which basically amounts to an abuse of the IDD workspace display, but it doesn't seem to work for me and I quickly lose patience. Instead, I do what I usually do in this situation -- bring up the terrain mesh built from the old imagery, and slide the rover around until the simulated camera view looks like the real one. This isn't precise, but it's easy and quick -- and in a case like this one, where there's nothing of consequence under the rover that could possibly be a problem, it's good enough.
One problem solved.
Chris is solving another one. There are basically two ways to move the arm. One way is called a joint move: tell one or more of its five joints what position you want it in. The other way is called a Cartesian move: give it the three-dimensional (i.e., Cartesian) coordinates of a point in space and tell it to put the current tool there. A very useful variant on this second way can also be used to move the arm in so-called "tool frame," a coordinate system centered on the current tool. Z is "down" in this frame, so you can place the tool 10cm above the target, then tell the arm to move 10cm "down" to place the tool on the target. (Actually, we normally tell it to move 11cm "down" in this case, to account for uncertainty in the terrain mesh and the arm's own behavior, but never mind that.)
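The approach-then-overdrive pattern can be sketched in a few lines. This is a simplified tool frame where +Z is "down," and everything here -- the names, the numbers, the contact-stop behavior -- is illustrative, not the RSVP or IDD interface. The point is just why we command 11cm from a 10cm standoff: the extra centimeter eats the uncertainty, and contact stops the motion.

```python
# Sketch of "hover above the target, then drive down past it."
# +Z is down in this toy tool frame. Commanding more motion than the
# standoff guarantees contact despite mesh and arm uncertainty; the
# real placement stops on contact rather than plowing into the rock.

STANDOFF_M  = 0.10   # hover 10 cm above the target
OVERDRIVE_M = 0.01   # extra 1 cm to absorb uncertainty

def preplace(target_z):
    """Tool-frame Z of the hover position above the target."""
    return target_z - STANDOFF_M          # +Z is down, so "above" is smaller Z

def contact_move(from_z, surface_z):
    """Drive down until the commanded depth or the surface, whichever comes first."""
    commanded = from_z + STANDOFF_M + OVERDRIVE_M
    return min(commanded, surface_z)      # contact stops the motion

target_z = 0.50                  # where the mesh says the surface is
hover = preplace(target_z)       # hover point, 10 cm up
true_surface = 0.505             # the real surface is 5 mm lower than the mesh
print(contact_move(hover, true_surface))  # → 0.505
```

If the mesh had been perfect, the arm would touch at 0.50 and the extra centimeter would simply never be used.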
We mostly use Cartesian moves, and RSVP has more facilities for using Cartesian moves than joint moves, because the original plan was to use Cartesian moves all the time. But internally, the IDD flight software converts Cartesian moves into joint moves, and there are some cases it can't convert -- cases like the one that's biting us thisol. We routinely avoid this problem by using a joint move to get near the desired position, then a Cartesian move to get to exactly the right position. (We'd just use joint moves all the time, except that RSVP doesn't make that easy to do; also, Cartesian moves result in easier-to-read sequences.) Chris tries that trick, but it doesn't work, so he just gives up and does it the hard way, computing the needed joint angles and plugging them into RSVP.
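Why a Cartesian move can fail to convert is easiest to see on a simpler arm than the real five-jointed IDD. For a two-link planar arm, the inverse-kinematics formula below is the standard one; the link lengths are made up, and this is an illustration of the failure mode, not the flight software's math. When the requested point lies outside what the links can reach, there simply is no set of joint angles to convert to.

```python
# Illustrative sketch: a Cartesian move must be converted to joint
# angles by inverse kinematics. For a 2-link planar arm, that
# conversion has no solution when the point is outside the annulus
# the links can reach -- the kind of case that forces a joint move.

import math

L1, L2 = 0.7, 0.5  # link lengths in meters (made up)

def cartesian_to_joints(x, y):
    """Inverse kinematics for a 2-link planar arm; None if unreachable."""
    r2 = x * x + y * y
    cos_elbow = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= cos_elbow <= 1.0:
        return None                      # no joint solution exists
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

print(cartesian_to_joints(1.0, 0.0) is None)   # within reach  → False
print(cartesian_to_joints(2.0, 0.0) is None)   # beyond reach → True
```

When the conversion comes back empty, the only option is what Chris ends up doing: compute the joint angles yourself and command them directly.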
Let me tell you, this target sucks.
[1] Not true, and I'm not sure now why I ever thought that.

[2] A technique now called "Hartman Localization," a clever use of a couple of RSVP features. You start with an old terrain mesh covering an area the rover has since driven into; you know the rover is now somewhere in that mesh, but you don't know exactly where. Then you load in a second mesh, a current one, which is attached to the simulated rover. Then you can slide the simulated rover around until the two meshes match up, and that tells you where the rover actually is in terms of the old mesh. Having done that, you can examine, for instance, what's near the rover that you can't see from the current position (because, say, the rover's own structure now blocks its view).
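The mesh-matching idea reduces to a search: slide one point set over the other and keep the offset where they line up best. Here's a toy two-dimensional version -- my own sketch, not the actual RSVP feature, which does this visually, in three dimensions, by hand. The score is summed nearest-neighbor distance, and the offset that minimizes it is the rover's position in the old mesh.

```python
# Toy 2-D mesh matching: grid-search an (dx, dy) offset for the
# current mesh, scoring each candidate by how closely the shifted
# points land on the old mesh. The best offset localizes the rover.

def alignment_error(old_mesh, new_mesh, dx, dy):
    """Summed squared nearest-neighbor distance after shifting new_mesh."""
    total = 0.0
    for (x, y) in new_mesh:
        sx, sy = x + dx, y + dy
        total += min((sx - ox) ** 2 + (sy - oy) ** 2 for (ox, oy) in old_mesh)
    return total

def localize(old_mesh, new_mesh, search=3):
    """Grid-search the offset: the rover's position in the old mesh."""
    candidates = [(dx * 0.5, dy * 0.5)
                  for dx in range(-search, search + 1)
                  for dy in range(-search, search + 1)]
    return min(candidates, key=lambda d: alignment_error(old_mesh, new_mesh, *d))

old = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new = [(x - 1.0, y - 0.5) for (x, y) in old]   # same terrain, offset viewpoint
print(localize(old, new))  # → (1.0, 0.5)
```

The human-in-the-loop version trades the grid search for an eyeball, but the principle -- slide until the meshes agree -- is the same.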