Robotic Camera Platform - Prototype 1 with Parallella


Following on from my previous post, where I described Prototype 1 of the camera controller, I have now, as promised, gone ahead and interfaced it to the Parallella board.

Connected how?

The Parallella issues commands to the AVR mounted in the camera controller, as well as issuing commands directly to the camera itself. At present it is a bit of a nasty bash script; I intend to clean this up and replace it with something nice and pythonic.
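At its core, the script just writes command strings to the AVR's serial port. A minimal sketch - the device path and the "PAN" command name are assumptions here, not the controller's real protocol:

```shell
#!/bin/sh
# Minimal sketch of issuing a motor command to the AVR over serial.
# The device path and the "PAN" command name are assumptions.
PORT=${PORT:-/dev/ttyUSB0}

send_cmd() {
    # The firmware is assumed to expect newline-terminated commands.
    printf '%s\n' "$1"
}

# With hardware attached this would be:
#   send_cmd "PAN 15" > "$PORT"
send_cmd "PAN 15"
```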

The current script takes a sequence of shots: take a shot, rotate the camera, take another, and so on. The result can then be stitched into a panorama.

So what does it look like now?

Like this (only without the camera in it - to see that, refer to the previous post).


I'll post more photos soon, but it'll be getting on midnight by the time I create and share a video, so this will have to wait.

To give a basic overview, I control the unit via an SSH connection (wired ethernet). I run a script on the Parallella, which issues a sequence of interleaved motor and camera commands: the motors position the camera, the camera takes a shot, the rig moves to the next position, and so on.

Once all the photos have been taken, I issue a command to retrieve them all via USB.
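The whole sequence might be sketched as follows. The PAN command and step size are assumptions about the AVR protocol; gphoto2 handles the camera side (its --capture-image and --get-all-files flags are real):

```shell
#!/bin/sh
# Sketch of the shoot/rotate/retrieve sequence, under assumed
# protocol details (PAN command, degrees per step).
PORT=${PORT:-/dev/ttyUSB0}

take_panorama() {
    shots=$1
    step=$2
    i=0
    while [ "$i" -lt "$shots" ]; do
        gphoto2 --capture-image              # trigger the camera
        printf 'PAN %s\n' "$step" > "$PORT"  # rotate to the next position
        i=$((i + 1))                         # (a real run would pause here
                                             #  to let the motors settle)
    done
    gphoto2 --get-all-files                  # pull everything back over USB
}

# Usage with hardware attached:
#   take_panorama 6 15
```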

What does it look like in action?

See the videos on YouTube. Apologies for the crudeness - this was my first time shooting/sharing video. I had planned to take this during daytime with better lighting, and in a less messy environment. I'll redo them at some later date.

Taking a panorama:

Exploring movement:

See how the movement is already much zippier than in the original version, after only a few software/hardware tweaks to the AVR board.

Current issues

These videos are very rough, and in taking them I noticed a bunch of improvements I can make in the AVR code.

resolved: in the first movement video, we saw the effects of sending too many movement commands and overflowing the UI task's mailbox. Actually, the culprit was probably the verbose responses, which I wasn't seeing due to corrupt comms in one direction - an issue that appeared after interfacing to the Parallella. This was resolved by adding a 12MHz external crystal rather than using the internal RC oscillator. I can now talk to it at 115200 baud perfectly, versus 19200 corrupt in one direction - 6x faster. In the updated video, the whole thing runs 3-4x faster.
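For reference, the Linux side of the link only needs the port set up for the new rate - a config one-liner, assuming the AVR shows up as /dev/ttyUSB0:

```shell
# Raw 8N1 at the new rate; the device path is an assumption.
stty -F /dev/ttyUSB0 115200 raw -echo
```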

Eventually, I will be issuing an absolute command and having it set by a PID loop, which should allow me to control the jerkiness to some degree, and use accelerometer feedback to compensate for outside disturbances.

update: I had said the stepper couldn't be made to go any faster after the first videos - but an unintended consequence of improving the CPU speed is that the improved timer resolution allowed me to reduce the delay between steps from 4us to 3us, speeding up the stepper motor movements by a third. Any faster and it just grinds its gears in frustration.
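The arithmetic behind that "third": shrinking the inter-step delay from 4us to 3us raises the step rate by a factor of 4/3.

```shell
# Step-rate gain from shrinking the delay between steps (4 us -> 3 us).
awk 'BEGIN { printf "%.0f%% faster\n", (4 / 3 - 1) * 100 }'
```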

resolved: in one of the first videos (the movement one) I also had to power the thing via the USB<->serial dongle, as taking power from the Porcupine appeared to cause the processor to freeze up. I assumed this was down to noise - but fitting the external crystal solved it too.

I was also going to flip the Parallella over so you could see it. I just had to verify that the Porcupine was safe to put underneath and wasn't going to short the rails on the baseplate - I didn't want to make assumptions about the potential of the different standoffs (mine are all common, at ground). But I ran out of time, so I'll get that sorted for next time.

Bonus features - robot shows attitude

I'll write a separate post on this - but the thing now nods its head (the camera dips) during reprogramming. I still don't know why - it was a completely unintended consequence, but it is super cool. It slumps down while receiving data over JTAG, then perks up on completion. I couldn't have done it better if I'd intended it as a behavioural feature. Kinda creepy the first few times - because, as I said, I still don't know why. Something is triggering a PWM shift that readjusts the servo position; if I'd wired the servos the other way around, it would have raised its head instead.

It also tossed the camera partway across the room. I think I had the stand too close to the edge - hard to tell, as it wiped out a whole stack of boxes, scattered various electronic gear over the floor, and generally threw its toys out of the cot. The camera popped out of the side beams, where the servo gears hold it securely (or so I thought) in place. It's the first time it has done anything like this - but I won't be putting my good camera in again anytime soon.

Generating a panorama

I can then take the series of photos and programmatically generate a panorama. I am not interested in using the visual tools to manually align anything - this is simply proving I can automatically stitch photos. Eventually, I'd like this done on the Parallella itself. While it works, it is much too slow to be of any use at present.

My script for generating the panorama looks like this:

$ cat mkpano.sh
# find control points across the input images
autopano-sift-c tmp.pto "$@"
# optimise positions, level the horizon, pick a projection
autooptimiser -v 50 -a -l -s -o opt.pto tmp.pto
# generate a makefile that drives the remap/blend steps
pto2mk -o final -p final_ opt.pto
make -f final

This is a time-consuming process, even on my PC (an octa-core AMD FX-8350 with 16GB RAM). Watching the CPU activity, very little of the process appears to execute in parallel - only during the stage where each image is processed do I see activity across multiple cores, and even there it is far from maxing them out.

yani@octopuss:tmp-photo$ /usr/bin/time -v sh mkpano.sh _DSC296?.JPG
    User time (seconds): 137.04
    System time (seconds): 2.84
    Percent of CPU this job got: 157%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 1:28.78

So the job used 137 seconds of CPU time but only 89 seconds of wall-clock time - roughly a 1.6x speed-up from parallelisation, utilising about 1.6 cores on average.
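The utilisation figure falls straight out of time(1)'s numbers - total CPU seconds divided by wall-clock seconds gives the average number of busy cores:

```shell
# Average cores busy = (user + sys) CPU time / wall-clock time,
# using the figures reported above (wall 1:28.78 = 88.78 s).
awk 'BEGIN { printf "%.2f\n", (137.04 + 2.84) / 88.78 }'
```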

On the Parallella, this process is excruciatingly slow.

linaro@linaro-nano:~/tmp$ /usr/bin/time -v sh mkpano.sh

enblend: info: loading next image: final_0000.tif 1/1
enblend: info: loading next image: final_0001.tif 1/1
make: [final_.tif] Killed
make: Deleting file `final_.tif'
    User time (seconds): 1029.02
    System time (seconds): 12.16
    Percent of CPU this job got: 138%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 12:29.70

This is also an incomplete result - it died processing the 2nd of 6 images. I expect it was killed for running out of RAM (no, I hadn't created a swap file on the SD card).
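For completeness, giving the board some swap on the SD card is a one-off bit of setup (run as root; the 512MB size is an arbitrary choice, and swapping to SD will be slow):

```shell
# One-off swap file on the SD card; 512MB is an arbitrary choice.
dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```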

So there are questions of both processing capability (e.g. can we offload processing tasks to the Epiphany, possibly even the FPGA?) and of the memory required to actually perform the final enblend. The latter can, I expect, be circumvented by reducing the size of the images before starting. This run was done on images from my 16.2MP Nikon D7000, compared with the 10.2MP Nikon D60 I usually have in the rig (the D60's battery was recharging for the video demo).
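Shrinking the inputs first is easy to script - a sketch using ImageMagick's convert (the 50% figure is an arbitrary starting point, and the helper name is mine):

```shell
#!/bin/sh
# Downscale the source images before stitching to cut enblend's
# memory footprint. Assumes ImageMagick's convert is installed.
resize_inputs() {
    mkdir -p small
    for f in "$@"; do
        convert "$f" -resize 50% "small/$(basename "$f")"
    done
}

# Usage:
#   resize_inputs _DSC296?.JPG
```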

At some point I will lower the image quality and try again to get a complete result, so we can see the actual difference between the Cortex-A9 and the AMD chip - but I will be surprised if the ARM is less than 10x slower.

Tagged as  camera control
