Goodbye HICO, Hello PACE – Ocean Colour’s Satellite Symmetry

HICO™ Data, image of Hong Kong from the Oregon State University HICO Sample Image Gallery, provided by the Naval Research Laboratory

Ocean colour is the acorn from which Pixalytics eventually grew, and so we were delighted to see last week’s NASA announcement that one of their next generation ocean colour satellites is now more secure, with a launch scheduled for 2022.

Unsurprisingly, the term ocean colour refers to the study of the colour of the ocean, although in reality it’s a name that covers a suite of different products, with the central one for the open oceans being the concentration of phytoplankton. Ocean colour is determined by how much of the sun’s energy the ocean scatters and absorbs, which in turn depends on the water itself alongside the substances within it, including phytoplankton and suspended sediments together with dissolved substances and chemicals. Phytoplankton can be used as a barometer of the health of the oceans: phytoplankton are found where nutrient levels are high, while oceans with low nutrients have little phytoplankton. Sam’s PhD involved the measurement of suspended sediment coming out of the Humber estuary back in 1995, and it’s remained an active field of her research for the last 20 years.

Satellite ocean colour remote sensing began with the launch of NASA’s Coastal Zone Colour Scanner (CZCS) on the 24th October 1978. It had six spectral bands, four of which were devoted to ocean colour, and a spatial resolution of around 800m. Despite only having an anticipated lifespan of one year, it operated until the 22nd June 1986 and has been used as a key dataset ever since. Sadly, CZCS’s demise marked the start of a decade gap in NASA’s ocean colour data archive.

Although there were some intermediate ocean colour missions, it was the launch of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) that brought the next significant archive of ocean colour data. SeaWiFS had eight spectral bands optimised for ocean colour and operated at a 1 km spatial resolution. One of Sam’s first jobs was developing a SeaWiFS data processor, and the satellite collected data until the end of its mission in December 2010.

Currently, global ocean colour data primarily comes from either NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS), on-board the twin Aqua and Terra satellites, or the Visible Infrared Imaging Radiometer Suite (VIIRS), which is on a joint NOAA / NASA satellite called Suomi NPP. MODIS has 36 spectral bands and a spatial resolution ranging from 250 to 1000 m, whilst VIIRS has 22 spectral bands and a resolution of 375 to 750 m.

Until recently, there was also the ONR / NRL / NASA Hyperspectral Imager for the Coastal Ocean (HICO) mission on-board the International Space Station. It collected data for selected coastal regions with a spectral range of 380 to 960 nm and a 90 m spatial resolution. It was designed to collect only one scene per orbit and acquired over 10,000 such scenes after its launch. Unfortunately, it suffered damage during a solar storm in September 2014, and its retirement was officially announced a few days ago with the confirmation that it wasn’t possible to repair the damage.

In the same week we wave goodbye to HICO, NASA announced the 2022 launch of the Pre-Aerosol and ocean Ecosystem (PACE) mission, in a form of ocean colour symmetry. PACE is part of the next generation of ocean colour satellites, and it’s intended to carry an ocean ecosystem spectrometer/radiometer, built by NASA’s Goddard Space Flight Center, that will measure spectral wavebands from the ultraviolet to the near infrared. It will also have an aerosol/cloud polarimeter to help improve our understanding of the flow, and role, of aerosols in the environment.

PACE will be preceded by several other missions with an ocean colour focus, including the European Sentinel-3 mission due within the next year; it will have an Ocean and Land Colour Instrument with 21 spectral bands and 300 m spatial resolution, building on Envisat’s Medium Resolution Imaging Spectrometer (MERIS). Sentinel-3 will also carry a Sea and Land Surface Temperature Radiometer and a polarimeter for mapping aerosols and clouds. It should help to significantly improve the quality of ocean colour data by supporting improved atmospheric correction.

Knowledge of the global phytoplankton biomass is critical to understanding the health of the oceans, which impacts on the planet’s carbon cycle and in turn affects the evolution of our planet’s climate. A continuous ocean colour time series is critical to this, and so we are already looking forward to the data from Sentinel-3 and PACE.

British Science Won’t Be Eclipsed

Hawthorn leaves opening in Plymouth on 18th March 2015

We’re celebrating science in this blog, as it’s British Science Week in the UK! Despite its name, British Science Week is actually a ten-day programme celebrating science, technology, engineering, and maths (STEM). The week is co-ordinated by the British Science Association, a charity founded in 1831.

The British Science Association, like ourselves at Pixalytics, firmly believes that science should be at the heart of society and culture, and has the desire to inform, educate, and inspire people to get interested and involved in science. It promotes these aims by supporting a variety of conferences, festivals, awards and training, and by encouraging young people to get involved in STEM subjects.

British Science Week is one of their major annual festivals, with hundreds of events running up and down the country. The website has a search facility, so you can see what events are running locally. Down here in Plymouth, the events include Ocean Science at the National Marine Aquarium; tomorrow at the Museum & Art Gallery you can learn about the science behind the headlines; and on Saturday, also at the Museum, there is an animal-themed day including some real mini-beasts from Dartmoor Zoo – the place that inspired the 2011 film ‘We Bought A Zoo’, starring Matt Damon and Scarlett Johansson.

If you can’t get to any of the events in your local area, British Science Week is also promoting two citizen science projects:

  • Nature’s Calendar, run by the Woodland Trust, asks everyone to look out for up to six common natural events to see how fast spring is arriving this year. They want to be told of your first sightings of the orange-tip butterfly, the 7-spot ladybird, frog spawn, oak leaves, Hawthorn leaves, and Hawthorn flowers. This will continue a dataset which began in 1736 – and we thought the Landsat archive was doing well!
  • Worm Watch Lab – a project to help scientists better understand how our brain works by observing the egg-laying behaviour of nematode worms. You watch a 30 second video, and click a key if you see a worm lay an egg. We’ve watched a few and are yet to see the egg-laying moment, but all the video watching is building a valuable dataset for the scientists.

If you are interested in citizen science and go to sea, why not get involved in the citizen science work we support by taking part in the Secchi Disk Project? Phytoplankton underpin the marine food chain and are particularly sensitive to changes in sea-surface temperatures, so this project aims to better understand current global phytoplankton abundance. You do this by lowering a Secchi disk, a plain white disk attached to a tape measure, over the side of a boat and then recording the depth below the surface at which it disappears from sight. This measurement is uploaded to the website and helps develop a global dataset of seawater clarity, which in turn indicates the amount of phytoplankton at the sea surface. All the details on how to get involved are on the website.

On Friday, nature is getting involved by providing a partial solar eclipse over the UK. Starting at around 8.30am, the moon will take about an hour to reach the maximum effect, and the partial eclipse will be visible across the majority of the country – although the level of cloud will determine exactly what you see. Plymouth will be amongst the first places in the country to see the maximum effect, around 9.23am – 9.25am; however, the country’s best views will be on the Isle of Lewis in Scotland, with a 98% eclipse predicted. The only two landmasses that will see a total eclipse are the Faroe Islands and the Norwegian arctic archipelago of Svalbard. The last total eclipse in the UK was on the 11th August 1999, and the next one isn’t due until 23rd September 2090!

Although the eclipse is a spectacular natural event, remember not to look directly at the sun, as this can damage your eyes. To view the eclipse wear a pair of special eclipse glasses, use a pinhole camera or watch it on the television!

We fully support British Science Week; it’s a great idea and we hope it will inspire more people to get involved in science.

Did you know remote sensing goes extra-terrestrial?

Ceres captured by NASA’s Dawn spacecraft on 19 February 2015. Image courtesy NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

If you didn’t realise remote sensing of other planets and space objects occurs, you’re not alone. Remote sensing is playing an important role in helping us understand how our planet, and our universe, was created; however, this isn’t celebrated much outside, or even within, the remote sensing community. We discussed this topic when ESA’s Rosetta arrived at Comet 67P, and it surfaced again last week when NASA’s Dawn spacecraft, approaching to within 38 000 miles, went into orbit around the dwarf planet Ceres, which lies in the asteroid belt between Mars and Jupiter.

Dawn’s mission is to study Ceres and the asteroid Vesta, which it orbited during 2011 and 2012, to develop our understanding of early solar system formation. There was a lot of media attention about Dawn’s arrival at Ceres, as it’s the first spacecraft to visit a dwarf planet and also the first to orbit two different extraterrestrial bodies. The technical and engineering feat of getting Dawn to Vesta and Ceres is amazing, but the science of acquiring, and interpreting, the data is pure remote sensing. However, you rarely see it described as such within the headlines.

Dawn carries three scientific instruments:

  1. A camera, designed by the Max Planck Institute for Solar System Research in Germany, which provides both colour and panchromatic images; when Dawn descends into a low orbit around Ceres it will offer 62 m spatial resolution. It can use 7 different colour filters, detect near-infrared energy and has 8 gigabytes of internal memory. As the camera is vital to both the navigation and science sides of the mission, Dawn carries two identical, but physically separate, versions.
  2. A Visible and Infrared Mapping Spectrometer (VIR-MS), designed and built by Galileo Avionica in Italy, to provide surface maps. The instrument covers spectral ranges of 0.25 – 1 µm in visible light and 0.95 – 5 µm in the infrared, and has 6 gigabits of internal memory. Interestingly, it was based on the VIRTIS instrument carried by Rosetta to map Comet 67P.
  3. A Gamma Ray and Neutron Detector (GRaND), developed by the Los Alamos National Laboratory in the United States. The instrument has 21 sensors and a wide field of view, and produces maps of Ceres measuring the rock-forming elements, trace elements and radioactive elements, as well as hydrogen, carbon and nitrogen. Unlike the other two instruments, it has no internal storage.

Supporting these instrument measurements will be various radiometric and navigational data to help determine the gravitational field. The fundamental principle of remote sensing – measuring the reflected energy of the planet to determine what is on the surface – is right at the heart of Dawn’s mission. So why isn’t the remote sensing community shouting more about it?

We’re probably as guilty as everyone else here; we refer to Pixalytics as either a remote sensing company and/or an Earth observation company. Is it this association with Earth that means we don’t always acknowledge the work, and achievements, beyond our planet?

Remote sensing is leading the way in enhancing our knowledge of how the universe began; it is our scientific field that is helping make this possible. So let’s make some noise for the remote sensing community, elbow the space engineers out of the way to get ourselves into the news, and let everybody else know what remote sensing can do!

Landsat Showing What The Eye Can’t See

Landsat 8 True colour composite of Paris from 11 November 2014. Courtesy NASA/USGS.

In our recent blog we described the five simple steps to select, download and view LandsatLook Natural Colour Images. However, did you know that the Natural Colour Image isn’t actually a single image? Instead, it’s a combination of three separate images!

This is because remote sensing works by exploiting the fact that the Earth’s surfaces, and the substances on them, reflect electromagnetic energy in different ways. Using different parts of the electromagnetic spectrum makes it possible to see details, features and information that aren’t obvious to the naked eye. Some remote sensing satellites carry instruments that can measure more than one part of the electromagnetic spectrum, with each different measurement known as a spectral band.

Landsat 8 currently has two instruments, measuring eleven different spectral bands:

  • Three visible light bands that approximate red, green and blue
  • One near infrared band
  • Two shortwave infrared bands
  • Two thermal bands used for sensing temperature
  • A panchromatic band with a higher spatial resolution
  • Two bands focused on coastal aerosols and cirrus clouds

Combining the red, green and blue bands produces a single image that is very similar to what your eye would see; this composite is the Natural Colour Image product that Landsat offers. However, you can also create your own colour composites using image processing software, as Landsat offers the possibility of downloading an image for each of the individual spectral bands, known as the Level 1 GeoTIFF files.

Once imported into an image processing package, it’s straightforward to create different composites by combining different variations of the spectral bands. For example, combining the red, green and blue bands creates an image like the one at the top of the blog showing the eastern edge of Paris, with the Bois de Vincennes, the largest public park in Paris, on the left hand side.

This image has the colours your eyes expect to see – trees are green, water is blue, and so on – and is known as a true colour, or RGB, composite. Combining other spectral bands produces images where the colours are different to what you would expect; these are known as false colour composites. Because surfaces reflect each part of the electromagnetic spectrum differently, features hidden in a true colour image can become far more prominent.
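
For anyone who wants to experiment, the band-combination step can be sketched in a few lines of Python with NumPy. The function below is our own illustrative helper, not a Landsat or USGS tool, and it assumes each band has already been read into a 2-D array (for example from the Level 1 GeoTIFF files).

```python
import numpy as np

def make_composite(band_r, band_g, band_b):
    """Stack three single-band arrays into an RGB composite.

    Each input is a 2-D array of pixel values; the output is a
    (rows, cols, 3) uint8 image, with each band stretched to 0-255.
    """
    def stretch(band):
        band = band.astype(np.float64)
        lo, hi = band.min(), band.max()
        if hi == lo:                      # avoid divide-by-zero on a flat band
            return np.zeros_like(band, dtype=np.uint8)
        return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])

# Passing the red, green and blue bands gives a true colour composite;
# swapping near infrared in for the first argument gives a false colour one.
```

The same function handles both cases described above: the only difference between a true and a false colour composite is which three bands you feed it.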

Landsat 8 False colour composite of Paris from 11 November 2014. Courtesy NASA/USGS.

An example of a false colour composite can be seen on the right; it uses the near infrared, red and green bands. As in the RGB image, the park is easily distinguishable from the surrounding city; but in the false colour image, the park’s water features, the Lac Daumesnil and the Lac des Minimes, have become visible as black swirls.

Landsat 8 False colour composite of Paris from 11 November 2014. Courtesy NASA/USGS.

A second example of a false colour composite is shown on the right, which this time combines the near infrared, shortwave infrared 2 and coastal aerosol bands. In this case, the vegetation of Paris appears orange and jumps out of the image when compared to the urban areas shown in blue.

Using different combinations of spectral bands is just one remote sensing technique for creating valuable information and knowledge from an image. However, every satellite measures different spectral bands, and you need to be aware of what you are looking at. For example, we’ve described Landsat 8 in this blog, but previous Landsat missions measured similar, yet slightly different, spectral bands; full details of all Landsat missions and their spectral bands can be found here.

Using the individual spectral bands, rather than relying on the set Landsat products, means you may gain new insights into the area you are looking at, and you can create some fantastic images. You can literally make things appear before your eyes!

Five Landsat Quirks You Should Know

South West England from the 8th December 2014. Landsat 7 imagery courtesy of NASA Goddard Space Flight Center and U.S. Geological Survey

If you’ve started using Landsat after our five simple steps blog last week, or perhaps you’ve used its imagery for a while, you may have come across what we’ll call quirks of Landsat. These may be things you didn’t understand, things that confused you, or places where you thought you’d done something wrong. This week we’re going to try to demystify some of the common quirks and questions around using Landsat data and imagery.

Quirk One: What do the WRS path and row numbers mean?
The Worldwide Reference System (WRS) is what Landsat uses to map its orbits around the world, and is defined by sequential path and row numbers. Despite its name, there are in fact two versions of the WRS: WRS-1, used for Landsats 1–3, and WRS-2 for the rest of the missions.

The paths are a series of near-vertical tracks, numbered from east to west, where Path 001 crosses the equator at 65.48 degrees west longitude. In WRS-1 there are 251 paths, whereas the instruments on Landsat 4 and beyond have a wider swath and only require 233 paths to cover the globe. Both WRS-1 and WRS-2 use the same 119 rows, where Row 001 starts near the North Pole at latitude 80 degrees, 1 minute and 12 seconds north; Row 060 coincides with the Equator at latitude 0; and Row 119 mirrors the start at latitude 80 degrees, 1 minute and 12 seconds south. A combination of path and row numbers gives a unique reference within Landsat; the path number always comes first, followed by the row number. For example, 204-025 is the WRS-2 path and row for Plymouth.

There are maps available of the paths and rows. However, there is also a handy website from USGS that converts path and row numbers to latitude and longitude and vice versa; it’s accompanied by a map so you can check you’ve got the area you want!

Quirk Two: My image has a minus one percent cloud cover
This one can be confusing! On the GloVis image selector you have the option to specify the maximum percentage of cloud cover on your image. Selecting 50% means up to 50% of the image could be cloud, and selecting 0% means no cloud at all.

Cloud cover is calculated using both the optical and thermal bands; as any Landsat imagery taken using the Multispectral Scanner System (MSS) does not include a thermal band, its cloud cover percentage is not easily calculated. Where the calculation does not occur, the cloud cover percentage is set to -1%.

At the bottom of the Scene Information Box there is a line for Sensor/Product. Although the title varies, it effectively displays the same information. If the sensor/product line includes TM, ETM+ or OLI-TIRS – meaning Thematic Mapper, Enhanced Thematic Mapper Plus or Operational Land Imager-Thermal InfraRed Sensor respectively – the cloud cover will usually be calculated, as all these sensors have a thermal band. Whereas, if the sensor/product is MSS, the cloud cover percentage will be -1%.

Landsat 8 uses the OLI-TIRS sensors, Landsat 7 has the ETM+ sensor, Landsats 4 & 5 have both TM and MSS sensors, and Landsats 1, 2 & 3 only have MSS.

Quirk Three: What are all the other files alongside the LandsatLook Natural Colour Image?
When you select an image from Landsat, you’re given all available Landsat products associated with it. The most common additional products you’ll be offered are:

  • LandsatLook Thermal Image – This is usually a jpeg of the thermal band, which shows the variations in temperature, where the darker areas are colder, and the lighter areas are warmer.
  • LandsatLook Quality Image – Currently only available with Landsat 8, and is a jpeg which shows the positions of the clouds and other features such as snow and ice on your image.
  • LandsatLook Images with Geographic Reference – These are a series of compressed data files which can be loaded into a Geographical Information System, allowing the application of image processing techniques. These are big files compressed, and even bigger uncompressed, so you need a lot of storage space if you start downloading them!

Quirk Four: Why do some Landsat 7 images have black stripes on them?

South West England from the 8th December 2014, showing black stripes. Landsat 7 imagery courtesy of USGS/NASA.

This is due to the failure of Landsat 7’s Scan Line Corrector on the 31st May 2003. The Scan Line Corrector’s role is to compensate for the forward movement of the satellite as it orbits, and the failure means that instead of mapping in straight lines, a zigzag ground track is followed. This causes parts of the edge of the image not to be mapped, giving the black stripe effect; it can be seen clearly to the right in a zoomed-in version of the image at the top of the blog. The location of the black stripes varies, and each stripe represents between 390 and 450 m of the image; the US Geological Survey (USGS) estimates that affected images lose about 22% of their data.

The centre of the image can still be used; however, it’s more complicated to use Landsat 7 data acquired after May 2003. It’s worth noting that the sensor/product line in the Scene Information Box uses the notation SLC-off to indicate that an image was taken after the Scan Line Corrector failed.

Quirk Five: My image has brightly coloured single pixels

Landsat 5 MSS image acquired on 16 January 1997 via ESA receiving station. Image courtesy of USGS/NASA/ESA.

Brightly coloured single pixels that don’t match the surrounding area are a phenomenon known as impulse noise, which can also appear as dark or missing pixels. An example of an image with this phenomenon is shown on the right. Technical issues during the downlink from the satellite, or during the transcription from tape to digital media, are the most frequent causes. However, small fires on the ground can also show up as bright pixels that cause the same effect, although these are less frequent. As Landsat has a 30 m spatial resolution, these aren’t campfires or barbecues, but high temperature features such as brush burning, wildfires or gas flares.

Images heavily affected by impulse noise aren’t released into the USGS archive. It’s also only visible when zoomed in, and selecting another image from a different date will most likely avoid the phenomenon.

We hope this quintet of quirks has explained some of the questions you might have about using Landsat data; and if you’ve not come across any of these yet, it should give you a heads up for when you do.

Mastering Landsat Images in 5 Simple Steps!

Landsat 8 image of South West England from the 25th July 2014. Landsat imagery courtesy of NASA Goddard Space Flight Center and U.S. Geological Survey

Always wanted to use satellite imagery, but weren’t sure where to start? This blog shows you the five simple steps to find, download and view free imagery from the United States Geological Survey (USGS) Landsat satellites. Within fifteen minutes of reading this post you could have images from Landsat’s 40-year global archive on your computer, like the one at the top of this blog showing Plymouth Hoe on the 25th July 2014. So what are we waiting for, let’s get started …

Step One: Register!
Register for a user account with the USGS who, along with NASA, manages the Landsat data archive. It’s free to create an account, although you will need an email address and to answer a few quick questions that help USGS assess their users. Once the account is activated, you’re ready to go and you can download as much data as you need.

Step Two: Selecting your data download tool
USGS offers three tools for downloading data: the LandsatLook Viewer, the Global Visualisation Viewer (GloVis) and EarthExplorer. Whilst all three offer options to view Landsat data, we’d suggest you use GloVis as it’s the easiest tool for new users to navigate. GloVis has a main screen and a left sidebar; the sidebar controls which Landsat image is displayed in the main screen.

Step Three: Selecting the image
At the top of the sidebar is a map centred on the US, and the red dot indicates the position of the displayed image. To choose another location use the map’s scroll bars to wander the world, and simply click on the area you want to see. The four arrow buttons on the sidebar allow you to fine-tune the precise location.

Finally, select the month and year you’re interested in, and the Landsat image that most closely matches your selection will appear in the main window. As Landsat carries optical sensors, it cannot see through clouds. If the chosen image has clouds obscuring the view, use the Previous Scene and Next Scene buttons to move easily around the closest images to your preferred date.

It is worth noting the Max Cloud dropdown option, which allows you to choose the maximum percentage of the image you are willing to have covered by cloud. For example, if you select 40%, GloVis will only offer images that have 40% or less cloud coverage.

Step Four: Downloading the Landsat image
Once you have an image you like, simply click on Add at the bottom of the sidebar, and then click Send to Cart. This will take you to the download screen.

Your image will have an entity ID, which was also visible in the Scene Information Box on the previous screen, consisting of 21 characters such as LC82040252014206LGN00, where:

  • The first three characters describe the Landsat satellite the image is from and LC8 refers to Landsat 8.
  • The next six (204025) are a Landsat catalogue number known as the Worldwide Reference System. If you remember the numbers for your area of interest, entering them in GloVis can be a quick way of navigating to that location.
  • The following seven characters give the year (2014) and the day of year (206) the image was taken; the day of the year is a numerical count starting with 001 on 1st January, and so 206 is 25th July.
  • A three-digit ground station identifier is next, in this case LGN indicates that the USGS Landsat Ground Network received this data.
  • Finally, the last two-digits are a version number (00).
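
If you’re handling lots of scenes, the entity ID can be unpicked programmatically. The little Python helper below is our own illustration of the character positions described above, not a USGS tool; the field slices simply follow the breakdown in the list.

```python
from datetime import date, timedelta

def parse_scene_id(scene_id):
    """Split a 21-character Landsat entity ID into its parts."""
    sensor = scene_id[0:3]                 # e.g. LC8 = Landsat 8
    path, row = scene_id[3:6], scene_id[6:9]
    year = int(scene_id[9:13])
    day_of_year = int(scene_id[13:16])     # 001 = 1st January
    acquired = date(year, 1, 1) + timedelta(days=day_of_year - 1)
    station = scene_id[16:19]              # ground station, e.g. LGN
    version = scene_id[19:21]
    return sensor, path, row, acquired, station, version

print(parse_scene_id("LC82040252014206LGN00"))
# → ('LC8', '204', '025', datetime.date(2014, 7, 25), 'LGN', '00')
```

Note how the day-of-year arithmetic confirms the example above: day 206 of 2014 falls on the 25th July.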

Clicking the download button gives you options to download any of the Landsat products available for the image you’ve selected. The LandsatLook Natural Colour Image is a jpeg version of the image you were looking at in GloVis, and is the easiest one to use. Click on download and the image you’ve chosen will be saved to your computer.

Step Five: Viewing, and using, the Landsat image

Plymouth Sound on 25th July 2014 from Landsat 8. Image courtesy of USGS/NASA Landsat

The easiest way to view the image is to use the Windows Photo Viewer tool, where you will be able to see the image and zoom in and out of it. You can also open the image in Windows Paint, and use its basic tools to resize and crop the image. For example, the image on the right is a zoomed in version of the image at the top of this post.

Landsat images are free and carry no copyright; however, NASA does request you attribute them appropriately – “Landsat imagery courtesy of NASA Goddard Space Flight Center and U.S. Geological Survey” or “USGS/NASA Landsat” – which means you can use Landsat images on your website or other materials. The full information on Landsat copyright can be found here.

Next week, we’ll talk more about the other products you can download from Landsat. We hope these five simple steps have inspired you to find, download and use some Landsat data.

Twinkle, Twinkle, Little SAR

Artist’s impression of the Seasat satellite. Copyright: NASA/JPL

Last week ESA released a new synthetic aperture radar (SAR) dataset from NASA’s Seasat mission; nothing unusual in that you might think, except that this data is over 36 years old. As part of its Long Term Data Preservation Programme, ESA has retrieved, consolidated and reprocessed the Seasat data it holds, and made this available to the Earth observation (EO) community.

Seasat was a landmark satellite in EO terms when it was launched on the 27th June 1978. Not only was it the first satellite specifically designed for remote sensing of the oceans, but it was also the first to carry a SAR instrument. Seasat was only in orbit for 106 days, as a problem with the electrical system ended the mission just over three months later, on the 10th October. Although, there is a conspiracy theory that the electrical fault was just a cover story, and that the military actually shut Seasat down once they discovered it could detect the wakes of submerged submarines!

Synthetic aperture radar (SAR) is so called because it uses a small physical antenna to imitate a large one; to achieve a useful spatial resolution at these wavelengths would require a physical antenna thousands of metres long, while the same result can be achieved with a synthetic antenna of around 10 metres in length. It is an active radar system which works in the microwave part of the electromagnetic spectrum, and uses pulses of radiation to map the surface of the Earth. Pulses are transmitted with wavelengths of between metres and millimetres; some of these pulses are absorbed by the surface, whereas others are reflected back and recorded by the SAR. As the satellite moves, the antenna’s position relative to the area it is mapping changes over time, providing multiple observations. This movement creates a large synthetic antenna aperture, because all the recorded reflections of a particular area are processed together as if they were collected by a single large physical antenna, which gives an improved spatial resolution.
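
To put some illustrative numbers on this, the standard textbook approximations say a real aperture’s along-track (azimuth) resolution is roughly the slant range times the wavelength divided by the antenna length, while a focused SAR achieves roughly half the antenna length, independent of range. The sketch below uses assumed, roughly Seasat-like figures, not the mission’s actual specification.

```python
def azimuth_resolution(wavelength_m, antenna_len_m, slant_range_m):
    """Compare real-aperture and synthetic-aperture azimuth resolution.

    Real aperture: resolution ~ R * lambda / L  (worsens with range).
    Focused SAR:   resolution ~ L / 2           (independent of range).
    """
    real = slant_range_m * wavelength_m / antenna_len_m
    synthetic = antenna_len_m / 2
    return real, synthetic

# Assumed L-band figures for illustration only
real, sar = azimuth_resolution(wavelength_m=0.235,   # ~1.275 GHz
                               antenna_len_m=10.7,
                               slant_range_m=850e3)
print(f"real aperture: {real / 1000:.1f} km, SAR: {sar:.1f} m")
```

With these figures the real aperture resolves only at the kilometre scale, while the synthetic aperture brings that down to metres, which is exactly the trick the paragraph above describes.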

SAR is extremely sensitive to small changes in surface roughness, and can provide both day and night imagery as it works independently of visible light and is generally unaffected by cloud cover. It is used for assessing changes in waves, sea ice features and ocean topography, and recent research is applying it to other fields such as flood mapping. Seasat blazed the trail for SAR instruments, and has since been followed by many other satellites including ESA’s ERS-1 and ERS-2, Envisat’s ASAR, RADARSAT, COSMO-SkyMed and TerraSAR-X; and in 2014 both the Japanese ALOS-2 and ESA’s Sentinel-1 satellites carried SAR instruments.

The potential value residing in Seasat data is demonstrated by the fact that not only has ESA reprocessed its Seasat archive, but last year NASA also released a reprocessed Seasat dataset. The use of historic data is one of EO’s most powerful tools, and one the remote sensing community needs to exploit more.

Temporal: The forgotten resolution

Time, Copyright: scanrail / 123RF Stock Photo


Temporal resolution shouldn’t be forgotten when considering satellite imagery; however, it is often neglected, with its partners, spatial and spectral resolution, getting the limelight. The reason is the trade-off between spatial and spectral resolution: limited on-board storage and transmission capacity has meant that higher spectral resolution comes at the price of lower spatial resolution, and vice-versa. Therefore, when considering imagery most people focus on whatever best suits their spatial or spectral needs, rarely giving temporal resolution a second thought beyond whether immediate data acquisition is required.

Temporal resolution is the time it takes a satellite to return and collect data for exactly the same location on Earth, also known as the revisit or repeat time, and it is expressed in hours or days. Global-coverage satellites tend to be in low Earth polar, or near-polar, orbits, travelling at around 27,000 km/h and taking around 100 minutes to circle the Earth. During each orbit the Earth rotates around twenty-five degrees on its polar axis, so on each successive orbit the ground track moves to the west, and it can take a couple of weeks for the satellite to image the whole globe; Landsat, for example, has a 16-day absolute revisit time.
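
The westward drift is straightforward to sketch (the 100-minute period and equatorial circumference below are illustrative round figures, not mission-specific values):

```python
# Rough sketch of why a polar orbiter's ground track drifts west each orbit.

EARTH_ROTATION_DEG_PER_MIN = 360.0 / (24 * 60)  # ~0.25 degrees per minute
EQUATOR_CIRCUMFERENCE_KM = 40075.0

def ground_track_shift(orbital_period_min):
    """Westward shift of the ground track per orbit, in degrees and km at the equator."""
    shift_deg = orbital_period_min * EARTH_ROTATION_DEG_PER_MIN
    shift_km = EQUATOR_CIRCUMFERENCE_KM * shift_deg / 360.0
    return shift_deg, shift_km

deg, km = ground_track_shift(100)
print(f"Each 100-minute orbit shifts the track ~{deg:.0f} degrees (~{km:.0f} km) west")
```

With the track jumping nearly 2,800 km west at the equator on every orbit, it takes many days of interleaved passes before the gaps between tracks are filled in, hence revisit times of a couple of weeks.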

Only seeing the part of the Earth you want to image once every few weeks isn’t very helpful if you want to see daily changes. Therefore, satellites use a number of techniques to improve temporal resolution:

  • Swath Width – A swath is the area of ground the satellite sees on each orbit; the wider the swath, the greater the ground coverage, although generally a wider swath means a lower spatial resolution. A satellite with a wide swath will have significant overlaps between orbits, allowing areas of the Earth to be imaged more frequently and reducing the revisit time. MODIS uses a wide swath, and it images the globe every one to two days.
  • Constellations – Two identical satellites orbiting one hundred and eighty degrees apart will halve the revisit time, and this approach is being used by ESA’s Sentinel missions. Sentinel-1A was launched in 2014, with its twin, Sentinel-1B, due to be launched in 2016. Operating together they will provide a temporal resolution of six days. Obviously, adding more satellites to a constellation reduces the revisit time further.
  • Pointing – High-resolution satellites in particular use this method, which allows a satellite to point its sensor at a particular point on Earth and so map the same area from multiple orbits. However, pointing changes the angle at which the sensor views the Earth, which can distort the imaged ground area.
  • Geostationary Orbits – Although technically not a revisit in the same sense, a geostationary satellite remains focussed on the same area of the Earth at all times, so the temporal resolution is simply how often imagery is taken, for example every fifteen minutes. The drawback is that only a restricted area can be mapped.

Hopefully this has given you a little insight into temporal resolution; whilst spectral and spatial resolution are important factors when considering what imagery you need, do spend a bit of time considering your temporal needs too!

SMAP ready to map!

Artist's rendering of the Soil Moisture Active Passive satellite.  Image credit: NASA/JPL-Caltech


On the 31st January NASA launched their Soil Moisture Active Passive satellite, generally known by the more pronounceable acronym SMAP, aboard a Delta II rocket. It will go into a near-polar sun-synchronous orbit at an altitude of 685 km.

The SMAP mission will measure the amount of water in the top five centimetres of soil, and whether the ground is frozen or not. These two measurements will be combined to produce global maps of soil moisture to improve understanding of the water, carbon and energy cycles. This data will support applications ranging from weather forecasting, drought monitoring and flood prediction to crop productivity, as well as providing valuable information to climate science.

The satellite carries two instruments: a passive L-band radiometer and an active L-band synthetic aperture radar (SAR). Once in space the satellite will deploy a spinning 6 m gold-coated mesh antenna, which will measure the backscatter of radar pulses, and the naturally occurring microwave emissions, from the Earth’s surface. Rotating 14.6 times every minute, the antenna will trace overlapping loops on the ground, giving a measurement swath 1000 km wide. This means that whilst the satellite itself has an eight-day repeat cycle, SMAP will take global measurements every two to three days.
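
A back-of-the-envelope check of that two-to-three-day figure, assuming a ~98.5-minute orbital period for SMAP’s 685 km altitude and counting ascending passes only:

```python
# Rough estimate of how a wide swath turns an 8-day orbital repeat cycle
# into 2-3 day global coverage (assumed values, ascending passes only).

EQUATOR_CIRCUMFERENCE_KM = 40075.0
MINUTES_PER_DAY = 24 * 60

def days_to_global_coverage(swath_km, orbital_period_min):
    """Days needed for successive swaths to tile the full equator."""
    orbits_per_day = MINUTES_PER_DAY / orbital_period_min
    coverage_per_day_km = orbits_per_day * swath_km
    return EQUATOR_CIRCUMFERENCE_KM / coverage_per_day_km

days = days_to_global_coverage(1000, 98.5)
print(f"~{days:.1f} days for global coverage")
```

With roughly 14.6 orbits per day, each sweeping a 1000 km swath, the equator is tiled in under three days, consistent with the mission’s stated coverage.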

Interestingly, although deployable antennas have previously been used on large communication satellites, this will be the first time one has been used for scientific measurement, and the first time one has been spun.

The radiometer has a high soil moisture measurement accuracy but a spatial resolution of only 40 km, whereas the SAR instrument has a much higher spatial resolution of 10 km but lower soil moisture measurement sensitivity. Combining the passive and active observations will give measurements of soil moisture at 10 km, and of freeze/thaw ground state at 3 km. Whilst SMAP is focussed on mapping Earth’s land surface, it is also anticipated to provide valuable data on ocean salinity.
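
Purely as an illustration of the idea, and not the actual SMAP retrieval algorithm, the fine-scale backscatter can be used to redistribute a coarse but accurate radiometer value across the fine cells it contains, while preserving the coarse-cell mean:

```python
# Illustrative sketch (NOT the real SMAP algorithm) of active/passive
# combination: spread one coarse radiometer value over fine SAR cells in
# proportion to their backscatter, preserving the coarse-cell mean.

def downscale(coarse_value, fine_backscatter):
    """Distribute a coarse-cell value over fine cells, weighted by backscatter."""
    mean_sigma = sum(fine_backscatter) / len(fine_backscatter)
    return [coarse_value * s / mean_sigma for s in fine_backscatter]

# One 40 km radiometer cell (soil moisture 0.25 m^3/m^3) over four 10 km
# SAR cells with hypothetical relative backscatter values.
fine = downscale(0.25, [0.8, 1.0, 1.2, 1.0])
print(fine)  # the mean of the fine cells stays at 0.25
```

The real mission combines the observations with a more sophisticated statistical relationship, but the principle is the same: accuracy from the radiometer, spatial detail from the SAR.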

SMAP will provide data about soil moisture content across the world, the variability of which is not currently well understood, despite being vital to understanding both the water and carbon cycles that impact our weather and climate.

Is space a good investment?

Space is an expensive, and uncertain, environment to work in, and decisions to invest in space technology and missions are frequently questioned in the current global economic climate. Headline figures of tens of millions, or billions, do little to counter the accusations that there are more appropriate things to be investing in. Is the cost of investing in space worthwhile?

Image of East Devon, UK taken by Landsat 8 on 4th November 2013.  The River Exe flows from top to bottom and the River Teign from left to right. Plumes of suspended sediment are clearly visible following periods of heavy rainfall in late October and early November 2013.  Image courtesy of the U.S. Geological Survey


Last week the Landsat Advisory Group, a sub-committee of the US Government’s National Geospatial Advisory Committee, issued a report looking at the economic value of Landsat data to America. As Landsat data is freely available, quantifying its value isn’t easy; the Group approached the problem by estimating the cost of providing alternative solutions to replace Landsat data.

They considered sixteen applications, linked to US Government departments, which use Landsat data. These ranged from flood mitigation, shoreline mapping and coastal change; through forestry management, waterfowl habitats and vineyard management; to mapping, wildfire assessment and global security support. The report estimated that these sixteen streams alone produced savings of between $350 million and $436 million to the US economy. The report concluded that the economic value of just one year of Landsat data far exceeds the multi-year total cost of building, launching, and managing Landsat satellites and sensors.

This conclusion was interesting given reports in 2014 that Landsat 8 cost around $850 million to build and launch, a figure which will increase to almost $1 billion with running costs; and that NASA was estimating Landsat 9 would cost in excess of the $650 million budget it had been given. These costs significantly exceed the savings quantified in the Advisory Group report; however, work undertaken by the US Geological Survey in 2013 estimated the economic benefit of Landsat data for the year 2011 at $1.70 billion for US users and $400 million for international users.
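
Putting the figures quoted above side by side makes the comparison concrete:

```python
# Simple arithmetic on the figures quoted in the text (all values from the
# article; the ~$1 billion total cost includes running costs).

landsat8_total_cost_usd = 1.0e9           # build, launch and running costs
advisory_savings_usd = (350e6, 436e6)     # the 16 US Government streams alone
usgs_2011_benefit_usd = 1.70e9 + 400e6    # US users plus international users

ratio = usgs_2011_benefit_usd / landsat8_total_cost_usd
print(f"One year's estimated benefit is {ratio:.1f}x Landsat 8's total cost")
```

On the USGS figures, a single year of data delivers more than double the satellite’s entire lifetime cost, even before the unquantified benefits are counted.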

The discrepancy between the two figures arises because the Advisory Group did not include private sector savings, the fact that Landsat data is also collected and disseminated by the European Space Agency, or unquantified societal benefits and contributions to scientific research. For example, the report highlighted that humanitarian groups use Landsat imagery to monitor human rights violations at low cost and without risking staff entering dangerous, and often inaccessible, regions of the world.

Last week also demonstrated the uncertain side of space, with the discovery of the Beagle 2 spacecraft on the surface of Mars. The UK-led probe was assumed to have crash-landed on Christmas Day 2003; however, recent images indicate it landed successfully but its solar panels did not unfurl fully. The Beagle 2 discovery has obvious echoes of the recent shaded landing site of the Philae comet lander, and demonstrates that space exploration is a risky business. Given that the Beagle 2 mission cost £50 million and the Rosetta mission, which carried Philae, was estimated to cost in the region of €1.4 billion, is the cost of investing in space worthwhile?

Consider satellite television, laptops, smoke detectors, tele-medicine, 3D graphics and satellite navigation – all of these developments came through the space industry; now think about the jobs and economic activity generated by those sectors. Working in space is expensive and challenging, but it is precisely because of this that the space industry is innovative and experimental. The space sector works at the technological cutting edge, and investment in space missions benefits and enhances our life on Earth. So if anyone ever asks whether space is a good investment, tell them about the financial benefits of Landsat, the development of laptops, the number of lives saved by smoke detectors, or the humanitarian support provided to groups like Amnesty International.