It’s been a while since my last post, and a lot has been going on, so I’m going to run a series of blogs covering events since then. The first covers mapping a vineyard in Paso Robles, California, back in April 2015. So you may ask yourself, what’s so special about mapping a vineyard, didn’t he explain how to do that back in 2014? Well yes, but this one has a twist, as I’ll be explaining autonomous fixed-wing mapping using off-the-shelf components.
Fixed-Wing versus Multirotors
So what are the pros and cons of fixed-wings compared to multirotors? In general, when we talk about sUAS, UAVs, drones etc., most people mean multirotors such as the Phantom 3, 3DR Solo etc. Fixed-wing is a much smaller sector of the commercial UAV market. Why, you ask? Well, a lot of effort has been put into multirotor technology, as this is what the public wants: they are easy to fly, and most importantly they hover. That last bit is very important, as with today’s GPS position hold an operator can let go of the controller sticks and the multirotor will stabilize in its last position. This allows multirotors to be used in small areas, and lets beginner operators learn in a relatively crash-free environment.
The downside to hovering is that most multirotors have the aerodynamics of a brick. The lift keeping the multirotor in the air is generated solely by the 3, 4, 6, 8 or more propellers pushing air down and the copter up. If the motors stop, the lift stops and it all comes crashing down. Hovering also uses a lot of power, which means there is a tradeoff between weight and endurance. The bigger the battery, the more flight time, but this is partly negated by the battery being heavier, which reduces flight time. It’s all a compromise. Note that helicopters and multirotors are roughly 40% more efficient in forward cruise than in hover, due to effective translational lift: airflow moving over the rotors generates extra lift.
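To make that weight versus endurance compromise concrete, here is a rough hover-endurance sketch using ideal momentum theory. All the numbers (airframe mass, disk area, energy density, efficiency) are assumptions for illustration, not measurements from any real copter:

```python
import math

RHO = 1.225                          # air density, kg/m^3
DISK_AREA = 4 * math.pi * 0.13**2    # four ~10" props (0.13 m radius), m^2 (assumed)
AIRFRAME_KG = 1.2                    # everything except the battery (assumed)
WH_PER_KG = 150.0                    # typical LiPo energy density (assumed)
EFFICIENCY = 0.60                    # motor/prop/ESC losses lumped together (assumed)

def hover_minutes(battery_kg: float) -> float:
    """Ideal momentum-theory hover endurance for a given battery mass."""
    mass = AIRFRAME_KG + battery_kg
    thrust = mass * 9.81                                               # N
    power = thrust**1.5 / math.sqrt(2 * RHO * DISK_AREA) / EFFICIENCY  # W
    energy = battery_kg * WH_PER_KG                                    # Wh
    return energy / power * 60.0                                       # minutes

for kg in (0.3, 0.6, 1.2, 2.4):
    print(f"{kg:.1f} kg battery -> {hover_minutes(kg):.1f} min")
```

The point is the shape of the curve: each battery doubling buys less extra hover time, because the extra mass pushes hover power up faster than it adds capacity.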
So as a summary, multirotors with today’s technology are generally good for 40-80 acres, work well in small spaces, and are easy to learn to take off, hover, fly and land. Note there are exceptions to the 80-acre multirotor limit, such as the Agribotix and the 3D Robotics FX8 (Folding X8.) These have much longer arms, with low-kV motors swinging large propellers for maximum efficiency and endurance.
So what about planes? Just like multirotors, you can fly autonomous missions. Using the 3D Robotics Pixhawk and Mission Planner software, you can plan missions for both multirotors and planes. What planes give you over a multirotor is efficiency. Lift is generated by airflow over the wings; as long as you have enough stable airflow, you will have lift. This is great, because all the motor on a plane is doing is moving the plane forward. If the motor stops, the plane can still fly by gliding, whether by catching thermals, pointing into a headwind or entering a shallow dive. No dropping like a brick as a multirotor does. Planes generally have one motor compared to 3 or more on a multirotor, so you can see straight away where the efficiency advantage comes from.
Now here’s the hard bit: planes are a lot harder to fly than multirotors, the takeoffs and landings in particular. You have to keep orientation at all times, and there is no intelligent flight mode; once that plane turns and is pointing towards you, the controls reverse. Because of this, planes are normally flown by people with experience (mostly gained by simulator time on RealFlight 7.5, or by the age-old learning process of crashing.) Landings can be extremely interesting, as you have to judge speed such that you don’t stall (not enough airflow over the wings) and crash, but not come in so fast that you, well, crash. It’s a fine line. I’d recommend anyone to join their AMA club and learn to fly a plane; it will really help with your multirotor skills. And if you’re really adventurous, learn to fly a helicopter (without stability control.) I digress. So planes are more efficient and can cover 1000 acres, but in the USA operations are at this moment limited to line of sight, so say 250 acres. Big note here: the bigger your plane, the further you can fly it away from you, and the more acres you can map! The FAA is presently working with PrecisionHawk and other UAV companies as part of the Pathfinder program to investigate safe BVLOS (Beyond Visual Line of Sight) operations in the NAS (National Airspace System.)
PrecisionHawk Pathfinder blog
Now the downside is that to keep flying you need to be moving forward above stall speed, so planes generally need wide open spaces, as they are always moving forward (unless you fly 3D.) Due to the above points, only a few fixed-wing mapping drones exist, the senseFly eBee and PrecisionHawk being the most famous, followed by AgEagle, QuestUAV and a number of smaller companies. The successful ones generally have good algorithms for takeoffs and landings, so in a sense these systems are fully autonomous and can be flown by beginners. The penalty for these easy-to-operate autonomous planes is cost, like $15k to $25k. However, in my opinion you should also be able to fly in manual mode and not rely on GPS and other technologies. So sure, fly autonomously, but get on the plane simulator and fly hobby planes to perfect your flying skills, as those skills will save you once that little flight controller has a gremlin.
So what I’m about to show you is how you can make an off-the-shelf autonomous fixed-wing mapping drone. Here are some videos of the Kextrel Hayabusa in a loiter and performing an auto landing:
Kextrel Hayabusa Loiter Video
Kextrel Hayabusa Auto-Landing Video
What makes an autonomous fixed-wing drone
The plane consists of a number of parts: the fuselage; the power system, consisting of an electronic speed controller, motor and propeller; actuators for the control surfaces; the flight controller and RC receiver; the battery; and the payload, which is normally a nadir (downward-pointing) digital camera. Additional equipment can include a telemetry link for flight control and monitoring (plus uploading missions), and a video downlink from an FPV camera or the mapping camera.
The airframe is the Zeta Science Phantom FX61; as the name suggests it has a 61” wingspan, big enough to see at a distance, small enough to throw in the back of a car or pickup, with the added advantage of removable wings. This is also an elevon model, in that pitch and roll are controlled by only two control surfaces, one on each wing, each driven by a single actuator or servo. So we have only 3 moving parts: a servo in each wing controlling a control surface, and the motor. This gives us much higher reliability, as the fewer the moving parts, the fewer things to go wrong (see my previous two posts on this topic.)
The Zeta Science FX61 Airframe
theUAVguy blog on Airframe choices
Now the interesting thing is that that nice shiny $15k autonomous mapping fixed-wing is probably using a 3D Robotics Pixhawk flight controller. It’s the fixed-wing autonomous flight controller of choice. This then means using the 3D Robotics 915MHz telemetry radios and Mission Planner or the Tower app, which are very mature now. The flight controller allows you to plan a mission by defining waypoints along which the plane flies; it also allows you to define the landing point. There are also some very useful features such as defining a mapping grid plus the mapping camera you are using, and the software automatically generates the waypoints and the camera triggers.
The 3DR Pixhawk
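Mission Planner generates the survey grid for you, but the underlying idea is simple. Here is a hypothetical sketch of a lawnmower waypoint generator; the field dimensions, camera footprint and sidelap below are made up, and a real planner also accounts for wind direction, turn radius and camera trigger distance:

```python
def survey_grid(width_m, height_m, footprint_w_m, sidelap=0.7):
    """Generate lawnmower-pattern waypoints (x, y) covering a
    width x height rectangle; track spacing is the camera footprint
    width reduced by the desired sidelap fraction."""
    spacing = footprint_w_m * (1 - sidelap)
    waypoints, x, leftward = [], 0.0, False
    while x <= width_m:
        # Alternate flight direction on each track to avoid dead legs.
        ys = [height_m, 0.0] if leftward else [0.0, height_m]
        waypoints += [(round(x, 2), y) for y in ys]
        leftward = not leftward
        x += spacing
    return waypoints

wps = survey_grid(200, 300, footprint_w_m=100, sidelap=0.7)
print(len(wps), "waypoints, track spacing 30 m")
```

With a 100 m footprint and 70% sidelap the tracks end up 30 m apart, which is exactly the kind of number the flight controller software computes for you from the camera model.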
The hard part is cramming the flight controller, telemetry gear, video downlink, cameras, battery and power system inside the plane’s fuselage. The other hard part is programming the flight controller so you get reliable, stable flight in varied environmental conditions such as high winds. This is really where the secret sauce is, and many an hour is spent tuning PID control loop parameters for different fuselage, payload and battery combinations and flight envelopes. This is what you’re really paying for in a $25k commercial autonomous plane: the blood, sweat and tears of tuning, flying, crashing and more tuning until you have an optimal setup that works in all flight cases, reliably!
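For readers who have never met a PID loop, here is a minimal sketch of the kind of controller whose gains get tuned. The gains and the toy airframe response are invented for illustration and bear no relation to real ArduPlane parameters:

```python
class PID:
    """Minimal discrete PID loop of the kind tuned for roll/pitch hold."""
    def __init__(self, kp, ki, kd, i_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit            # anti-windup clamp on the integral
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral = max(-self.i_limit,
                            min(self.i_limit, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical roll hold: recover wings-level from a 5-degree upset.
pid = PID(kp=0.8, ki=0.2, kd=0.02)
roll = 5.0
for _ in range(100):
    correction = pid.update(0.0, roll, dt=0.02)
    roll += correction * 0.02 * 10.0      # toy airframe response, not physics
print(f"roll after 2 s: {roll:.2f} deg")
```

Tuning is choosing kp, ki and kd (per axis, per airframe, per payload) so the correction converges quickly without oscillating, in calm air and in gusts. That is the blood, sweat and tears.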
3D Robotics has the APM Plane site, which is a good introduction to the Pixhawk and how to build a system and tune it. If you go this route, be prepared for long nights and crashes, but once you have it working it’s a great feeling.
APM Plane Website
Debugging the Pixhawk
The suggestion is to build up from manual flight mode, move to more basic autonomous modes such as loiter and return-to-home, then move to waypoint flying. Also debug everything on the ground: have a checklist and follow it every time. This will save your plane. I know, because once the elevons got reversed in autonomous mode, which would have been disastrous in the air, but we caught it in a ground check using our checklist. Always, always use a checklist. Manned aviation does, and so should unmanned aviation.
Now the whole point of the plane is simply to get the camera into a position where it can take pictures, which once post-processed can generate a survey map containing actionable data. Cameras are normally nadir (down-facing) and do not require a gimbal: planes are inherently less prone to vibration, plus they generally fly level when taking pictures, unlike a copter which is tipped in the direction of flight. To give an example of ground resolution, i.e. how much you can see on the ground, the Landsat 8 satellite has a 30m ground resolution:
Landsat 8 Information
A 12.1MP Canon SX260 with an Event 38 NDVI filter, a 20.1MP Sony QX1, an 18MP Sony, or a GoPro Hero 4 Black in 5MP still mode has a ground resolution of 4 to 10cm when the picture is taken from 400 feet AGL (above ground level.) That’s a little bit better than Landsat 8 can do!
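Those resolution figures come straight from the ground sample distance (GSD) formula. A quick sketch, where the SX260HS sensor width, focal length and pixel count are assumptions taken from published specs rather than measured values:

```python
def gsd_cm(sensor_w_mm, focal_mm, image_w_px, altitude_m):
    """Ground sample distance (cm per pixel) for a nadir camera:
    scale the sensor width up by altitude/focal-length, divide by pixels."""
    return (sensor_w_mm * altitude_m * 100.0) / (focal_mm * image_w_px)

# Approximate Canon SX260HS numbers (1/2.3" sensor, wide end) -- assumptions.
print(f"{gsd_cm(6.17, 4.5, 4000, 122):.1f} cm/px at 400 ft AGL")
```

That lands right at the bottom of the 4 to 10cm range quoted above; longer focal lengths or lower resolution sensors push the number toward the top of the range.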
The SX260HS with an Event 38 filter is a great NDVI camera, giving superb spectral separation, which in turn gives high-quality post-processed NDVI data. And that is a key point: your data is only as good as your camera and its NDVI filter. Better spectral separation, higher resolution and a large low-noise image sensor equate to good data. Remember the adage “garbage in, garbage out”, and get the best camera and filters you can. The latest choice for UAVs is the Sony QX range, as these cameras have 20MP resolution, low weight, no unnecessary functions and a small form factor. A number of companies such as LifePixel and MaxMax now do Sony NDVI conversions for precision-ag UAV work. And there is the trusty old GoPro, with filters from IRPro and similar companies.
Event 38 NDVI Filter in a Canon SX260HS:
MaxMax NDVI Cameras
LifePixel NDVI Cameras
IRPro NDVI Cameras
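For reference, NDVI itself is just per-pixel band arithmetic. A sketch with NumPy using the classic NIR/red formulation; note that which RGB channel actually carries NIR depends on the particular filter conversion, so treat the band names below as assumptions:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Classic NDVI = (NIR - Red) / (NIR + Red), per pixel, range -1..1."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    denom[denom == 0] = 1e-9          # avoid divide-by-zero on black pixels
    return (nir - red) / denom

# Toy 2x2 patch: healthy vegetation reflects NIR strongly and absorbs red,
# so the top row scores high and the bare-earth-like bottom row scores low.
nir_band = np.array([[200, 210], [50, 55]], dtype=np.uint8)
red_band = np.array([[ 30,  25], [45, 50]], dtype=np.uint8)
print(np.round(ndvi(nir_band, red_band), 2))
```

This is also why filter quality matters so much: if the “NIR” channel leaks visible light, the numerator shrinks and healthy and stressed vegetation blur together.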
The best cameras have GPS, as this allows GPS data to be automatically geotagged onto the images as they are taken. This reduces the workload of the post-processing software, as it knows where each picture is with respect to the other images, allowing faster, more accurate stitching of the final GeoTIFF.
On top of this is camera control. The camera is normally controlled from the flight controller, which triggers it at specific waypoints. This is done so that you have the correct forward lap and side lap (overlap) to allow the stitching software to work efficiently. Aim for greater than 65%, and more optimally 80%. The issue here is that at high plane speeds, maintaining forward lap will require the camera to trigger very often, say every 2 seconds. Make sure the camera can achieve this, and use high-data-rate storage cards. If you don’t have a GPS camera, you can trigger the camera from the flight controller and then fuse the plane’s flight log with the images in the stitching software to get geotagged images. This is another step, and one which can be prone to errors.
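The log-fusion step boils down to interpolating a GPS fix from the flight log at each image’s timestamp. A hypothetical sketch (the log rows are invented, and real logs also need camera/autopilot clock-offset correction, which is exactly where the errors creep in):

```python
from bisect import bisect_left

def interpolate_fix(log, t):
    """Linearly interpolate (lat, lon, alt) at time t from a time-sorted
    flight log of (t, lat, lon, alt) rows; clamp outside the log range."""
    times = [row[0] for row in log]
    i = bisect_left(times, t)
    if i == 0:
        return log[0][1:]
    if i == len(log):
        return log[-1][1:]
    (t0, *a), (t1, *b) = log[i - 1], log[i]
    f = (t - t0) / (t1 - t0)
    return tuple(x + f * (y - x) for x, y in zip(a, b))

# Hypothetical log rows: (seconds, lat, lon, alt_m)
log = [(0.0, 35.600, -120.700, 120.0),
       (2.0, 35.601, -120.701, 121.0)]
print(interpolate_fix(log, 1.0))   # midpoint fix between the two rows
```

Each interpolated fix then gets written into the image’s EXIF, which is the tag-from-log feature the stitching packages provide.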
A final approach is to use no trigger or GPS at all, and just put the camera in auto mode so it takes a picture every 2 seconds. Flying at 20m/s at 400 feet, a 2-second interval gives correct side and front lap for most cameras. Note that with no GPS and no flight controller triggering, the stitching software has no idea how the pictures are placed with respect to each other, so processing is much more complex. The software does at least know the pictures were taken in sequential order, which helps. The best help in this situation is to use GCPs (ground control points) which are visible in the images. Each has known GPS latitude, longitude and altitude; when you enter this data, the stitching software can cross-reference the points, allowing faster stitching as well as a georeferenced GeoTIFF (a single map image with GPS info.)
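You can sanity-check the 20m/s, 2-second claim with a bit of trigonometry. A sketch, assuming a roughly 49-degree vertical field of view (a guess in the GoPro ballpark, not a measured value):

```python
import math

def forward_overlap(speed_ms, interval_s, altitude_m, fov_deg):
    """Fraction of the along-track footprint shared by consecutive photos:
    footprint length from altitude and FOV, minus the distance flown
    between triggers."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    advance = speed_ms * interval_s
    return max(0.0, 1 - advance / footprint)

# 20 m/s, 2 s interval, 400 ft (~122 m), ~49 deg vertical FOV (assumed)
print(f"{forward_overlap(20, 2, 122, 49):.0%} forward overlap")
```

That works out to roughly the 65% minimum suggested earlier; slow down, fly higher, or trigger faster and the overlap climbs toward the 80% optimum.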
Below are camera triggers we have developed at Kextrel: a Canon trigger, a GoPro trigger and a Sony wireless trigger (which automatically adds a GPS stamp to the non-GPS Sony QX range of cameras.)
Sony QX1/QX10 Wireless Camera Trigger
Canon SX260HS Camera Trigger
Now the holy grail of cameras for ag work is the multispectral camera, but these generally fall in the $7k region. Here the red, green, blue and near-infrared bands each have their own sensor, and by combining the sensor images you can get normal RGB images, or NDVI, or ENDVI etc. The sensors are nominally 6MP and have GPS to geotag the images. Some also have ambient sunlight sensors to compensate for cloudy days versus bright sunny days. If a thermal imager could also be added, it could be useful for silage temperature measurement, crop analysis after side-dressing etc. A number of companies make UAV-ready multispectral cameras, including Slantrange, MicaSense, Event 38 and Airinov. Time will tell if others will join this small but elite group.
Another disruptive company is Propeller Aero, who have some interesting ideas for generating stitched images very fast. I’ve tried the system out, and I’d say keep an eye on this company.
Propeller Aero Website
One final point: if an IMU as well as GPS were integrated with a camera, then as well as the camera position in longitude, latitude and altitude, we would also know the roll, pitch and yaw of the camera. In a sense we would have six parameters defining the camera’s pose in space. This should in theory allow stitching software to skip much of the long point-cloud processing spent working out where each camera is and where it is pointing.
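As a sketch of why those six parameters are so valuable: with position plus attitude you can project the camera’s boresight straight onto the ground, no point cloud needed. The Euler-angle convention below is an assumption (real IMU and EXIF conventions vary), and flat ground at z = 0 is assumed:

```python
import math

def ground_point(pos, roll, pitch, yaw):
    """Where a nadir camera's boresight hits flat ground (z = 0), given the
    camera at pos = (x, y, alt) and ZYX Euler angles in degrees."""
    r, p, y = (math.radians(a) for a in (roll, pitch, yaw))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    # Body-frame boresight (0, 0, -1) rotated into the world frame by
    # Rz(yaw) @ Ry(pitch) @ Rx(roll): negate that matrix's third column.
    dx = -(cy * sp * cr + sy * sr)
    dy = -(sy * sp * cr - cy * sr)
    dz = -(cp * cr)
    t = pos[2] / -dz                   # ray length down to the z = 0 plane
    return (pos[0] + t * dx, pos[1] + t * dy)

print(ground_point((0, 0, 122), roll=0, pitch=0, yaw=0))   # directly below
```

A few degrees of unrecorded pitch at 400 feet shifts the footprint by meters, which is exactly the ambiguity the stitching software currently burns CPU time resolving.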
For this flight we used the GoPro Hero 4 with an IRPro NDVI lens, triggered from a GoPro remote driven by the Pixhawk flight controller. This was done due to time limitations, but it also gave us a wide-angle lens, thus reducing flight time. This non-GPS approach required combining the images and flight logs in the stitching software. GCPs were also used.
The MacGyver GoPro 4 Wireless Remote
Fixed Wing versus Multirotor (The Kextrel Hayabusa versus the 3D Robotics Folding X8)
So to test the theory, we test flew the mapping plane and the 3D Robotics FX8 Enduro copter at our local flying area, capturing images from both flights:
The Kextrel Hayabusa Plane
And the generated Pix4D stitched NIR orthomosaic.
The 3D Robotics FX8 Copter
The FX8 was a prototype 3DR product that was to be an Enduro copter, with a 40-minute flight time, possibly able to cover 400 acres. Sadly 3DR decided not to make this design; it truly was a massive monster, but ultra-compact when folded down.
3DR FX8 in flight
The NIR images are then stitched by Pix4D, but not yet processed for NDVI; see below. You can see how the plane and copter orthomosaics match. This shows it really doesn’t matter whether you use a plane or a copter, you get the same result; it’s just that each platform is better in different use cases.
The NDVI post-processed data from Pix4D:
At the bottom you can see the airfield landing strip, with the associated bays in red. Below the airfield is a river, which is also red. Then you have the service road that goes across the bottom, just above the airfield, in red as well. Then come the fields: starting from the right, you have a very green field, and next to it a field that is generally red. The field to the right had new crop growing, whilst the red field had just been ploughed. Interestingly, in the NDVI image you can clearly see a thin red line where the crop is growing; walking the field shows a line of crop in poor health, either from bad planting or due to some other event.
Paso Robles Vineyard Mapping Survey
Josh Metz, a geospatial professional, introduced Steve Allott (Kextrel CTO) and myself to Glenn McGourty, the owner of a vineyard in Paso Robles. Glenn is a very well-respected viticulturist and was interested to see how UAVs could help in vineyard data analysis.
The vineyard is located in a valley, with the main property at the top of a hill and the vines covering the hilly area. There are approx. 80 acres of vineyard and almond trees, with a 260-foot elevation change. To comply with the FAA 400-foot AGL limit for UAVs, the mission was planned at 130 feet AGL over the hilltop by the property, with a small designated landing area. This would give us 130 feet over the property, and 390 feet at the lowest point in the property, which happened to be a dried-up lake bed (you can clearly see this in the later NDVI images, where the lake bed vegetation is vibrant.) Another concern was a US Army helicopter base 12 miles away. Even though we weren’t expecting any low-flying helicopters terrain-hugging down in the valley below 400 feet AGL, we planned accordingly and posted observers watching both ends of the valley.
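The altitude planning is worth spelling out: the mission flies at a constant altitude relative to the hilltop launch point, so AGL grows as the terrain falls away beneath the plane. A trivial sketch with the numbers above:

```python
FAA_LIMIT_FT = 400
LAUNCH_AGL_FT = 130        # mission altitude above the hilltop launch point
ELEVATION_DROP_FT = 260    # hilltop down to the dried-up lake bed

def agl_over(terrain_drop_ft):
    """AGL over a point whose terrain sits terrain_drop_ft below launch,
    for a mission flown at constant altitude relative to launch."""
    return LAUNCH_AGL_FT + terrain_drop_ft

print(agl_over(0), agl_over(ELEVATION_DROP_FT))  # 130 over the hilltop, 390 over the lake bed
```

So the worst case, over the lake bed, sits just 10 feet under the 400-foot limit; any more elevation change and the mission altitude would have had to come down.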
The planned mission area was approx. 80 acres. As we were using an NDVI GoPro 4, the wide-angle lens meant that the spacing between flight tracks could be wider. The advantage of this is that we could map the area faster than with a narrower-angle lens/camera. A GoPro with a low-distortion NDVI lens up at 400 feet can see A LOT of acres in one picture.
The equipment was set up so we had an unobstructed view of the mapping area; we had an FPV video downlink, plus telemetry which we were monitoring on a large LCD monitor, so much easier to watch and read, particularly in bright sunlight. We powered up the plane and got GPS lock. We then did a ground check and launched. Steve was flying takeoffs and landings, as the landing area was very small, bordered by vines and the hilltop property. So off we go: I throw the plane overhand and up we go. Steve then switches to mission mapping mode, and the plane starts hunting for waypoint 1. This continues for a while, so Steve brings it back into manual mode, then back into auto mode, and again it hunts around for waypoint 1. At this point Steve pulls off a tricky landing in manual mode. We disconnect the battery and start again: GPS lock, checklist etc. Off we go again, and this time when Steve switches to auto mode the plane gracefully drops a wing and heads straight to waypoint 1.
Once at waypoint 1, the plane starts tracking down the flight path with nice sweeping turns at the end of each track. When you have put blood, sweat and tears into a mapping plane, it is the most beautiful thing to see it soaring along. The whole mission took 5 minutes from start to finish and we mapped 80 acres. Yes, I double-checked the logs to make sure: 80 acres in 5 minutes. This is the benefit of a plane; a copter just cannot match a plane for large-area mapping. The tricky bit was the landing, but Steve again managed some Top Gun flying to get the plane back down in one piece (putting it up on one wing, swinging it around a hawk post and dropping it at his feet.)
The question now: did we get any useful pictures, and did the MacGyver GoPro wireless remote work?
Pix4D and the Post-Processed Results
We had no cellular connection, so no cloud processing here. Luckily we were using Pix4D, so we could do a “Rapid Check” in the field. We pulled the SD card from the GoPro and loaded the images. Remember that we were using a GoPro, so no geotagged images here. Overall we had 103 images over a 5-minute period, which averages out to an image every 3 seconds. You can see from the rapid check report below that we had significant overlap.
As part of the survey we also took a number of 3D GCPs. Lucky that we did, as when we tried to combine the GoPro images with the plane’s flight log to get geotagged images, it failed: the log was corrupted. Luckily Pix4D allows you to input 3D GCPs, so after getting the GCP coordinates and feeding them into Pix4D we were able to get the near-infrared orthomosaic:
And a zoomed-in section of the near-infrared orthomosaic showing the vines. You can also see the lake bed to the upper right of the image. Good reference points for scale are the trees and the storage tanks:
Pix4D was then used to process the orthomosaic and generate contour lines, a digital elevation model (DEM), a 3D model, a video fly-through and an NDVI image.
Contour lines with 10’ spacing. It’s easy to see the elevation change here, with the highest point being the dwelling and the lowest point the lake bed:
The Digital Elevation Model just adds to the contour line data, showing the topography in color:
The 3D model really is quite impressive; it always amazes me how much information can be gathered from just a dataset of nadir images:
The Pix4D video fly through:
Pix4D Video Fly Through
And finally the Normalized Difference Vegetation Index (NDVI):
What we can see in the NDVI are the man-made structures in red, such as the road at the bottom and side of the image. The vines in green are highlighted by the red bare earth between the rows. In some places there is other vegetation growing, making the vine rows harder to detect. A high index is noted down around the lake bed, probably due to it being a natural drainage point and thus a spot for vegetation growth. The NDVI information clearly gives the viticulturist the ability to identify areas of interest, then walk the vineyard to determine exactly what is happening to the grapes. Combining UAV NDVI and RGB data with soil measurements and grape data could be used to identify issues and head them off in the future. This could save money by increasing yields, using fewer pesticides and reducing water usage. I’m not going to go into an analysis of the vines, but just highlight yet again the power of using drones in precision agriculture.
The next generation Kextrel Hayabusa
The progression of the Kextrel Hayabusa is the Gen 2, shown below. The Gen 2 has a larger payload capability, using Sony QX1 and QX10 NDVI cameras, and an airframe which can be broken down very fast into a small, compact space. On top of that is the inclusion of DroneDeploy (you can see the cellular antenna on the left fin), where mission planning, flight monitoring and field analysis can be done in the field as the plane is flying. DroneDeploy also has nice features such as a descending circle when the plane comes in to land, keeping it away from obstacles. Overall it’s a great system. I’ll let you know how it works.
It’s clear that when it comes to large-area mapping, fixed-wing autonomous planes are much more efficient than multirotors, but this is traded off against cost, complexity and the experience needed to fly them. Landing is particularly troublesome, so other means of landing, such as a VTOL design or a parachute, may be more appropriate. The VTOL approach has been investigated by Sony, Google’s Project Wing and Amazon Prime Air for exactly that reason: the efficiency of fixed-wing flight with the takeoff and landing ease of a multirotor. It is obvious though that planes are the perfect platform for mapping large areas such as farms and vineyards. Combined with NIR and multispectral sensors, and processed with imaging software such as Pix4D, we have a powerful tool to help viticulturists and farmers do more with fewer resources.
So you can build your own autonomous fixed-wing drone for a relatively low cost, but be prepared for lots of heartache, crashes and long nights. If you do, though, when you have it working it’s the best feeling in the world. Or wait a few months and buy the Parrot Disco, which does it all for you wrapped up in a nice package. It even won awards at CES for innovation. Now I just need to figure out how to get a nadir NDVI camera into one. I’ll let you know how I do…
Many thanks to Glenn McGourty, Josh Metz and Steve Allott for making this survey possible.