If you move just your eyes while watching a fan, for a split second you will see one of the blades. What if, while the drone is mapping, the camera tilted at a speed that creates the same standstill effect as in the fan example? This would generate much higher quality maps!
Interesting concept, but on a camera this would come down to a sufficiently fast shutter speed. You would be surprised how fast your eye’s/brain’s “shutter speed” can be at times: when you get that split-second glance of something, your brain can record it for recall.
One thing I have often wondered about is: what if we could put the focal point at the top/first third of the frame as you fly? It would be similar to a photographer reframing the camera to focus/expose on something other than the direct subject. This would serve two purposes. (1) It would let the AF anticipate the conditions the drone is about to fly over, but maybe more importantly, (2) it would allow the auto-exposure to be ahead of the game. Then again, it probably wouldn’t matter for exposure because it would have the reverse effect on the back third of the image.
Exactly my thought. Similar to photographers taking long exposures of stars on a tracking mount: the mount moves just enough with the sky that the stars stay sharp.
Sam, if I understand what you want to accomplish, it sounds like you want to move the camera to compensate for the movement of the aerial camera platform (ACP/drone). While this would seem to work in concept, it will actually only work with a flat subject. Because of the parallax effect of taller objects, you won’t actually get stopped motion; you will only shift which parts are blurred. Say you are flying at 140’ and shooting a building or trees that are 70 feet tall. The apparent shift of the tops of the objects will be twice as far as at ground level.
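To put the parallax point in numbers (a sketch using the figures from the example above; the pinhole-geometry factor AGL / (AGL − h) is my own illustration, not anything drone-specific):

```python
# Pinhole-geometry sketch: when the camera pans to "freeze" the ground
# plane, a point at height h above the ground still appears to move,
# scaled by AGL / (AGL - h). Numbers are from the example above.

def parallax_factor(agl_ft: float, obj_height_ft: float) -> float:
    """Apparent motion of an object's top relative to the ground plane."""
    return agl_ft / (agl_ft - obj_height_ft)

print(parallax_factor(140, 70))  # 2.0 -- tops shift twice as far as the ground
print(parallax_factor(140, 35))  # ~1.33 for a shorter object
```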
Michael is correct about a focus pre-set and about shutter speed. The focus setting will help determine the best range of focus, and a faster shutter speed cuts down on the blurring effect of the ACP movement. The flip side of that coin is that as you shorten the exposure (faster shutter speed), you need to let more light through the lens with a wider aperture to get the same exposure. Since the aperture affects depth of field, this can reduce image sharpness and is counter-productive. You can increase the ISO setting (making the sensor more sensitive to allow a faster shutter and/or smaller aperture), but the trade-off is an increase in digital noise, affecting the contrast and sharpness of the image. Some of these problems can be corrected to some extent in post-processing, but that is tedious and time-consuming when you have hundreds of individual images to work with.
No easy solutions, and no single solution that will be best every time.
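The shutter-speed side of that trade-off is easy to quantify: motion blur measured in ground pixels is roughly ground speed × exposure time ÷ GSD. A minimal sketch with illustrative numbers (the 7 m/s and 1 cm/px values are assumptions for the example, not figures from any particular drone):

```python
# Motion blur in ground pixels: smear = speed * exposure / GSD.
# All input values below are illustrative assumptions.

def blur_px(ground_speed_ms: float, shutter_s: float, gsd_m: float) -> float:
    return ground_speed_ms * shutter_s / gsd_m

print(blur_px(7.0, 1/320, 0.01))  # ~2.19 px of smear at 7 m/s, 1/320 s, 1 cm/px
print(blur_px(7.0, 1/640, 0.01))  # ~1.09 px -- halving the exposure halves the blur
```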
I never shoot with anything less than f/5.0. By the same token, I try to keep my ISO as low as possible, which means in the PNW I’m usually at a 1/320 shutter. Wish it wasn’t always cloudy or rainy.
I had this thought last autumn when DD support told me that a lot of my uploaded NDVI images were quite blurry. I think this method works but requires some expert programming for the gimbal timing. Everyone who has had problems with motion blur on nadir images and starts thinking about a solution will eventually arrive at this idea, as it is IMO relatively obvious. Amazingly though, AFAIK the industry has not come out with a turnkey system that works this way so far (for civil applications, at least) …
In addition to that, I regard a rolling shutter as an advantage as long as its scan direction corresponds to the way the image (the vertical flow) is projected via the lens onto the sensor. This is IMO a second innovative idea that would help reduce blurry nadir images. It would be perfect if the “sensor pixel scan-line speed” equaled the “GSD speed”, because the ground surface would effectively be “towed below a hovering scanner”. The rolling-shutter effect this way “self-compensates” (a quasi-stationary projection), resulting in further reduced motion blur.
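For what it’s worth, the matching condition can be written down directly: the ground image scrolls across the sensor at v / GSD rows per second, so the scan “rides along” with the ground flow when the full-frame readout time equals rows × GSD ÷ v. A quick sketch with assumed numbers:

```python
# Rolling-shutter "self-compensation" condition: the frame readout time
# at which the scan line keeps pace with the ground flowing through the
# frame, i.e. readout = rows * GSD / ground_speed. Inputs are assumptions.

def matching_readout_s(rows: int, gsd_m: float, ground_speed_ms: float) -> float:
    return rows * gsd_m / ground_speed_ms

# 3648-row sensor, 1 cm/px GSD, 7 m/s ground speed:
print(matching_readout_s(3648, 0.01, 7.0))  # ~5.21 s full-frame readout
```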
I was discussing the gimbal tilt timing idea with a certified drone engineer (who has access to drone and gimbal programming code), and the conclusion was: a “very busy and reliable” tilt stepper motor (for perfect re-alignment every one or two seconds) must be operated with super-precise synchronization of all parameters, which can only be achieved via fast IMU readouts and lean code.
The collaboration of “gimbal tilt sync” and “sensor line scan” could especially help hybrid copters or fixed-wing drones to deliver less blurry imagery under windy conditions (a tail wind here increases the problem due to higher ground speeds).
I will chime in here hopefully quite soon about the progress. Right now we are experimenting mainly with wing mounts on copters that might increase endurance on surveys (steady ground speed)…
Out of curiosity, what is your average airspeed?
As quick workarounds, I would try setting the mission to low-light compensation, but also try running half of a crosshatch mission with an 80-degree gimbal pitch.
In general I like your feature request, but I fear that it will put a lot of strain on the gimbal, as it will have to pitch up and down between each photo location. This could also add complications when individuals are having hardware failures. On the flip side, it might be a good thing for them to recognize that they are having gimbal issues. It should be easy enough to calculate as part of the algorithm from the airspeed, the altitude, and the overlaps.
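That calculation could look something like this (a sketch under assumed values; the 52° along-track FOV, 75% front overlap, and 7 m/s are placeholders, not figures from this thread):

```python
import math

# Photo trigger interval from altitude, along-track FOV, front overlap,
# and airspeed. Footprint along track = 2 * AGL * tan(FOV / 2); each new
# photo must cover the un-overlapped fraction of that footprint.

def photo_interval_s(agl_m: float, fov_deg: float, overlap: float,
                     speed_ms: float) -> float:
    footprint_m = 2 * agl_m * math.tan(math.radians(fov_deg) / 2)
    return footprint_m * (1 - overlap) / speed_ms

print(photo_interval_s(50, 52, 0.75, 7.0))  # ~1.74 s between triggers
```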
We fly a mean G/S of 25 km/h (7 m/s) which gives us the best endurance for our H520 from Yuneec.
Its E90 camera gives a GSD of 1 cm/px at approx. 33 m AGL, (2 cm at 66 m, 3 cm at 99 m).
Depending on the requested (or sufficient) GSD, we choose either a fixed altitude from takeoff or terrain follow with a more constant AGL. But the latter function regularly causes stitching problems (visible seams at the edges), because images from neighboring route legs flown at different heights evidently make the map processing more difficult.
As a rule of thumb we fly 1 m/s at 10 m AGL, 5 m/s at 50 m AGL, …, 10 m/s at 100 m AGL. This method counteracts the GSD vs. ground speed vs. shutter speed conflict. Nevertheless, we reduce speed by 1 to 3 m/s in low light for blur reduction, or to avoid struggling against strong wind when flying upwind.
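The reason this rule of thumb works is that GSD grows linearly with AGL, so scaling speed with altitude keeps the blur measured in pixels constant. A small check anchored to the E90 figures quoted above (1 cm/px per ~33 m AGL; the 1/500 s shutter is just an assumed example):

```python
# Pixel smear for the "1 m/s per 10 m AGL" rule. GSD scales linearly
# with AGL (anchored here to ~1 cm/px at 33 m), so keeping speed
# proportional to altitude keeps the blur in pixels constant.

def blur_pixels(speed_ms: float, agl_m: float, shutter_s: float,
                gsd_per_m: float = 0.01 / 33) -> float:
    gsd_m = gsd_per_m * agl_m  # GSD grows linearly with altitude
    return speed_ms * shutter_s / gsd_m

for agl, v in [(10, 1), (50, 5), (100, 10)]:
    print(agl, blur_pixels(v, agl, 1/500))  # same ~0.66 px smear each time
```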
The gimbal tilt “trick” surely works, as everybody can see in a slow-motion recording of a fast car passing a video camera that pans to follow the motion. The fast car shows almost no motion blur while the static background is totally “smeared”. The problem with drone imagery is that we actually use the smeared background while ignoring the tilting, or better, “pendulum” technique.
Sure, the tilt stepper motor will have a harder life than before, but its expected lifetime will be sufficient, and with the camera mounted, as intended, at its center of gravity, the forces will not be “overwhelming”.
The hardest part will be the exact timing of all parameters, as failure would produce worse results than making no effort at all.
As said above, we are experimenting with wing mounts, will try to program “gimbal tilt sync”, and can probably improve results even further by going in the mentioned “sensor line scan” (motion compensation) direction. In the end we could have a hybrid wingcopter based on standard hardware with enhanced endurance and consistently sharper images, even in high winds at differing ground speeds (shifting head/tail wind situations).
Sounds crazy? Maybe, but in fact it would mean solving core survey problems, namely the often poor efficiency.
Interesting. I am getting 1 cm at 75 m with my H520 E90 camera, normally fly at 8 m/s, and have not had any issues with blurring. Granted, I typically do not fly homogeneous subjects. The stop motion that you are talking about is more likely attributable to a larger aperture than either the P4P or E90 possesses. That said, you cannot fly the H520 with DroneDeploy, so I am not sure why this is a request? It is obviously not needed on the P4P, and it sounds like you need to dial in your camera settings in DataPilot.
I am wondering how you can get 1 cm/px GSD at 75 m AGL with a Yuneec E90?
Calculating with sensor pixel size and lens FOV it is not possible:
E90 Sensor max. resolution = 5472x3648 px (3:2, equals 6576 px diagonally)
E90 lens = 91° DFOV projection on 6576 px sensor diagonal
This 91° projection onto the 6576 px sensor diagonal equals 6576 cm of ground depiction (GSD = 1 cm/px) at approx. 32.3 m AGL. (With 90° DFOV the width of the ground diagonal equals half the flown height.)
tan(DFOV/2) * AGL * 2 = diagonal ground depiction width
tan(45.5°) * 32.3 m * 2 ≈ 66 m ground depiction at 1 cm GSD (6576 px ≈ 6600 cm)
You could have 1 cm GSD at 75 m AGL only when using a 47.5° DFOV lens:
tan(23.75°) * 75 m * 2 ≈ 66 m ground depiction at 1 cm GSD (6576 px ≈ 6600 cm)
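That arithmetic, wrapped into a small helper for cross-checking (assumes the 3:2 sensor with the 6576 px diagonal quoted above; a sketch, not Yuneec’s specification):

```python
import math

# GSD from diagonal FOV: ground diagonal = 2 * AGL * tan(DFOV / 2),
# spread over the sensor's diagonal pixel count (6576 px per the
# E90 figures quoted above).

def gsd_cm_per_px(agl_m: float, dfov_deg: float, diag_px: int = 6576) -> float:
    ground_diag_m = 2 * agl_m * math.tan(math.radians(dfov_deg) / 2)
    return ground_diag_m * 100 / diag_px

print(gsd_cm_per_px(33, 91))    # ~1.02 cm/px
print(gsd_cm_per_px(75, 91))    # ~2.32 cm/px -- not 1 cm at 75 m
print(gsd_cm_per_px(75, 47.5))  # ~1.00 cm/px with a narrower lens
```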
In addition, it is not possible in Yuneec’s DataPilot to change camera parameters in survey mode. Survey mode sets ISO, exposure, and WB to automatic.
With DroneDeploy’s processing, you change the camera parameters on the camera view screen before you swipe to start the mission. Calculate it from your map’s distance versus pixels. My last map was 3200 ft wide at 39.4k pixels.
Just for the sake of avoiding confusion, my sentence quoted above should read as follows:
With 90° DFOV, the width of the ground diagonal equals twice the flown height.