Failed map

Hi, I have run into a few issues while trying to process my first map via MapEngine.

I used some data that has been successfully stitched using two other methods, one online and one software-based.

I uploaded the 238 images and it ran through the fast stitch, but nothing showed up on the screen when I followed the link from my email. It also reported the area of interest to be 0.0 ha, but it appeared to begin the second stitching process, so I let it carry on.

The link to my finished product now states ‘Failed’ in a red box and shows nothing on the map, but claims the map is 20,000,000,000 ha.

Any ideas on where I went wrong?

Here is the link, not sure if it's public or not: https://www.dronedeploy.com/app/data?planId=1442869401_OPENPIPELINE

Hoping someone from DD can help me out with this.

Thanks


Thanks for flagging this! The link isn’t public, but it contains the info required for me to look into it.

It seems as though you may have found a bug in our Map Engine. The reason it showed as still processing when you got the email is that I had noticed the issue and started reprocessing it with some experimental code while I tried to figure out what happened.

I took a quick break for dinner, but I'm back on the job now 🙂 I'll let you know what happened and when we have a resolution.


Finally found the issue. It seems as though one of your images (G0082240.jpeg) has some strange GPS information.

The altitude is what threw us off and made our system think you were trying to map such a large area. We're building some sanity checks to prevent this from causing issues in the future. In the meantime, I've removed that image from your dataset and I'm reprocessing the job for you.
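In case it's useful for pre-flighting your own datasets, here's roughly the kind of check I mean. This is just a minimal sketch in Python using Pillow, not our actual pipeline code, and the 10,000 m altitude cap is an illustrative threshold I picked for the example:

```python
from pathlib import Path

from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD_TAG = 34853           # EXIF tag id for the GPSInfo IFD
MAX_PLAUSIBLE_ALT_M = 10_000  # illustrative cap, well above normal drone altitudes


def _to_float(value):
    # Older Pillow versions return (numerator, denominator) tuples for
    # EXIF rationals; newer ones return IFDRational, which float() accepts.
    if isinstance(value, tuple):
        return value[0] / value[1]
    return float(value)


def gps_altitude_m(path):
    """Return the EXIF GPSAltitude in metres, or None if it's missing."""
    # _getexif() is Pillow's legacy helper; it parses the GPS IFD into a dict.
    exif = Image.open(path)._getexif() or {}
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get(GPS_IFD_TAG, {}).items()}
    alt = gps.get("GPSAltitude")
    return _to_float(alt) if alt is not None else None


def flag_altitude_outliers(image_dir):
    """Yield (path, altitude) for images whose altitude is missing or implausible."""
    for path in sorted(Path(image_dir).glob("*.jp*g")):  # matches .jpg and .jpeg
        alt = gps_altitude_m(path)
        if alt is None or not 0 <= alt <= MAX_PLAUSIBLE_ALT_M:
            yield path, alt


if __name__ == "__main__":
    for path, alt in flag_altitude_outliers("images"):
        print(f"{path.name}: suspicious GPSAltitude={alt}")
```

Running something like this over a folder before upload would have flagged G0082240.jpeg straight away, and dropping the flagged images is exactly what I did for your dataset.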

Thanks, Jono, I should have known it would be a geotagging issue!

We have actually processed that imagery successfully using those tags before; I'm not sure how that worked. In general, though, we found geotagged photos to be error-prone, and we use GCPs instead now.

Thanks for your help!

Yeah, in general it works quite well, but our system had a panic attack trying to provision enough RAM to process 20,000,000,000 ha of imagery 🙂 We'll be detecting those crazy outliers in an upcoming release.