Earth observation satellites in space in 2016

Blue Marble image of the Earth taken by the crew of Apollo 17 on Dec. 7, 1972. Image Credit: NASA

Earth Observation (EO) satellites account for just over one quarter of all the operational satellites currently orbiting the Earth. As noted last week, there are 1 419 operational satellites, and 374 of these have a main purpose of either EO or Earth Science.

What do Earth observation satellites do?
According to the information within the Union of Concerned Scientists database, the main purposes of the current operational EO satellites are as follows (a short sketch of how to reproduce this tally appears after the list):

  • Optical imaging – 165 satellites
  • Radar imaging – 34 satellites
  • Infrared imaging – 7 satellites
  • Meteorology – 37 satellites
  • Earth Science – 53 satellites
  • Electronic Intelligence – 47 satellites
  • Other purposes – 6 satellites; and
  • Simply EO, with no further detail – 25 satellites
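
The database is published as a spreadsheet, so the tally above can be reproduced in a few lines of Python. This is a minimal sketch; the column names are assumptions based on the public database layout, so check them against the copy you download.

```python
import pandas as pd

# Spreadsheet available from the Union of Concerned Scientists website;
# the column names ("Purpose", "Users") are assumptions - check your copy.
df = pd.read_excel("UCS_Satellite_Database.xls")

eo = df[df["Purpose"].str.contains("Earth Observation|Earth Science", na=False)]
print(len(eo))                       # total EO / Earth Science satellites
print(eo["Purpose"].value_counts())  # breakdown by stated main purpose
print(eo["Users"].value_counts())    # breakdown by user type
```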

Who controls Earth observation satellites?
There are 34 countries listed as being the main controllers of EO satellites, although there are also a number of joint and multinational satellites – such as those controlled by the European Space Agency (ESA). The USA is the leading country, solely controlling one third of all EO satellites – plus it is a joint controller of others. Of course, the data from some of these satellites are widely shared across the world, such as from the Landsat, MODIS and SMAP (Soil Moisture Active Passive) missions.

The USA is followed by China with about 20%, and Japan and Russia come next with around 5% each. The UK is only listed as the controller of 4 satellites, all related to the DMC constellation, although we are also involved in the ESA satellites.

Who uses the EO satellites?
Of the 374 operational EO satellites, the main users are:

  • Government users with 164 satellites (44%)
  • Military users with 112 satellites (30%)
  • Commercial users with 80 satellites (21%)
  • Civil users with 18 satellites (5%)

It should be noted that some of these satellites do have multiple users.

Height and Orbits of Earth observation satellites
In terms of operational EO satellite altitudes (a quick check of the orbital periods these altitudes imply follows the list):

  • 88% are in a Low Earth Orbit, which generally refers to altitudes of between 160 and 2 000 kilometres (99 and 1 200 miles)
  • 10% are in a geostationary circular orbit at around 35 800 kilometres (22 200 miles)
  • The remaining 2% are described as having an elliptical orbit.
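
The periods corresponding to these altitudes follow directly from Kepler's third law, T = 2π√(a³/μ). A minimal sketch, using round illustrative values:

```python
import math

MU = 398_600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0   # mean Earth radius, km

def orbital_period_minutes(altitude_km: float) -> float:
    """Orbital period of a circular orbit from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60

print(orbital_period_minutes(705))     # typical LEO EO altitude: ~99 minutes
print(orbital_period_minutes(35_786))  # geostationary: ~1436 minutes, one sidereal day
```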

In terms of the types of orbits:

  • 218 are in a sun-synchronous orbit
  • 84 are in a non-polar inclined orbit
  • 16 are in a polar orbit
  • 17 are in other orbits, including elliptical, equatorial and Molniya orbits; and finally
  • 39 do not have an orbit recorded.

What next?

Our first blog of 2016 noted that this was going to be an exciting year for EO, and it is proving to be the case. We’ve already seen the launches of Sentinel-1B, Sentinel-3A, Jason-3, Gaofen-3 carrying a SAR instrument, and further CubeSats as part of Planet’s Flock imaging constellation.

The rest of the year looks equally exciting, with planned launches for Sentinel-2B, Japan’s Himawari-9, India’s INSAT-3DR, DigitalGlobe’s WorldView-4 and NOAA’s Geostationary Operational Environmental Satellite R-Series Program (GOES-R). We can’t wait to see all of this data in action!

Four Fantastic Forestry Applications of Remote Sensing

Landsat Images of the south-east area of Bolivia around Santa Cruz de la Sierra 27 years apart showing the changes in land use. Data courtesy of USGS/NASA.

Monitoring forest biomass is essential for understanding the global carbon cycle because:

  • Forests account for around 45% of terrestrial carbon, and deforestation accounts for 10% of greenhouse gas emissions
  • Deforestation and forest degradation release approximately the same amount of greenhouse gases as all the world’s road traffic
  • Forests sequester significant amounts of carbon every year

The United Nations (UN) intergovernmental Reducing Emissions from Deforestation and forest Degradation in developing countries (REDD+) programme was secured in 2013 during the 19th Conference of the Parties to the UN Framework Convention on Climate Change. It requires countries to map and monitor deforestation and forest degradation, together with developing a system of sustainable forest management. Remote sensing can play a great role in helping to deliver these requirements, and below are four fantastic remote sensing initiatives in this area.

Firstly, the Real Time System for Detection of Deforestation (DETER) gives monthly alerts on potential areas of deforestation within the Amazon rainforest. It uses data from MODIS, at 250 m pixel resolution, within a semi-automated classification technique: a computer model detects changes in land use and cover, such as forest clearing, which are then validated by interpreters. It has been valuable in helping Brazil reduce deforestation rates by around 80% over the last decade; however, the model output takes two weeks to produce.

Zoomed in Landsat Images of the south-east area of Bolivia around Santa Cruz de la Sierra 27 years apart showing the changes in land use. Data courtesy of USGS/NASA.

A similar initiative is FORest Monitoring for Action (FORMA), which also uses MODIS data. FORMA is a fully automated computer model that combines vegetation reflectance data from MODIS, active fire detections from NASA’s Fire Information for Resource Management System and rainfall figures to identify potential forest clearing. Like DETER it produces regular alerts – in FORMA’s case twice a month – although it works on tropical humid forests worldwide.

A third initiative aims to provide faster alerts for deforestation, building on the research by Hansen et al. published in 2013. The researchers used successive passes of the current Landsat satellites to monitor land cover, and when gaps in forest cover appear between these passes they are flagged. These flags will be displayed on an online map, and the alerts will be available through the World Resources Institute’s Global Forest Watch website, starting in March 2016. With the 30 m resolution of Landsat, smaller-scale changes in land use can be detected than is possible with sensors such as MODIS. Whilst this is hoped to help monitor deforestation, it doesn’t actually determine it, as there could be other reasons for the tree loss and further investigation will be required. Being an optical mission, Landsat has problems seeing both through clouds and beneath the forest canopy, and so its use will be limited in areas such as tropical rainforests.
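
The underlying idea of flagging change between successive passes can be illustrated with a toy example. The sketch below is not the Hansen et al. algorithm itself; it simply differences a vegetation index (NDVI) between two co-registered scenes and flags pixels whose greenness has dropped sharply, with an arbitrary threshold chosen purely for illustration.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index from red and near-infrared reflectance."""
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids divide-by-zero

def flag_possible_clearing(red_t1, nir_t1, red_t2, nir_t2, drop_threshold=0.3):
    """Flag pixels whose NDVI fell by more than the threshold between two dates.

    A flag is only a prompt for further investigation - harvesting, fire or
    cloud contamination could all produce the same signal.
    """
    change = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
    return change < -drop_threshold  # boolean mask of candidate clearing
```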

Finally, one way to combat the weather and canopy issues is to use radar to assess forests, and the current AfriSAR project in Gabon is doing just that – although with aircraft and Unmanned Aerial Vehicles (UAVs) rather than satellites. It began in 2015 with overflights during the dry season, and the recent flights in February 2016 captured the rainy season. This joint initiative of ESA, the Gabonese Space Agency and Gabon’s Agency of National Parks aims to determine the amount of biomass and carbon stored in forests by using the unique sensitivity of P-band SAR, the lowest radar frequency used in remote sensing at 432–438 MHz. NASA joined the recent February flights, adding its Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) and Land, Vegetation and Ice Sensor (LVIS) instruments, which are prototypes of sensors to be used on future NASA missions. Overall, this is producing a unique dataset on tropical forests.

These are just four example projects showing how remote sensing can contribute towards understanding what is happening in the world’s forests.

Reprocessing Data: The Challenges of Producing a Time Series

August 2009 Monthly Chlorophyll-a Composite; data courtesy of the ESA Ocean Colour Climate Change Initiative project

Being able to look back at how our planet has evolved over time is one of the greatest assets of satellite remote sensing. With Landsat, you have a forty-year archive with which to examine changes in land use and land cover. For in situ (ground-based) monitoring, records of this length exist for only a few locations, and then only for the exact spot being measured. Landsat’s continuous archive is an amazing resource, and it is hoped that the European Union’s Copernicus programme will develop another comprehensive archive. So with all of this data, producing a time series analysis is easy, isn’t it?

Well, it’s not quite that simple. There are the basic issues of different missions carrying different sensors, and so you need to know whether you’re comparing like with like. Although data continuity has been a strong element of Landsat, the sensors on Landsat 8 are very different to those on Landsat 1. Couple this with various positional, projection and datum corrections, and you have plenty to think about to produce an accurate time series. However, once you’ve sorted all of these out and got your data downloaded, then everything is great, isn’t it?

Well, not necessarily; you’ve still got to consider data archive reprocessing. The space agencies that maintain these archives regularly reprocess satellite datasets. This means that the data you downloaded two years ago isn’t necessarily the same data that could be downloaded today.

We faced this issue recently as NASA completed the reprocessing of the MODIS Aqua data, which began in 2014. The data from the MODIS instrument on the Aqua satellite has been reprocessed seven times, whilst that from its twin on Terra has been reprocessed three times.

Reprocessing the data can include changes to some, or all, of the following:

  • Updating the instrument calibration, to take account of current knowledge about sensor degradation and radiometric performance.
  • Applying new knowledge, in terms of atmospheric correction and/or derived product algorithms.
  • Changing the parallel datasets that are used as inputs to the processing; for example, meteorological conditions are used to aid the atmospheric correction.

Occasionally, they also change the output file format the data is provided in, and this is what caught us out. The MODIS output file format has changed from HDF4 to NetCDF4, the reason being that NetCDF is a more efficient, sustainable, extendable and interoperable data file format. It’s a change we’ve known about for a long time, as it resulted from community input, but until you get the new files you can’t check and update your software.

We tend to use a lot of Open Source software, enabling our clients to carry on working with remote sensing products without having to invest in expensive software. The challenge is that it takes software providers time to catch up with format changes. Hence, the software is either unable to load the new files or reads the data incorrectly, e.g. it comes in upside down. Sometimes large changes mean you may have to alter your approach and/or software.
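
As a minimal sketch of the kind of check this involves, the snippet below opens one of the new NetCDF4 files and guards against the upside-down problem by checking which way the latitude coordinate runs. The filename is just an example of NASA's Level-3 naming pattern, and the variable names assume a standard mapped chlorophyll product.

```python
from netCDF4 import Dataset  # pip install netCDF4

# Example Level-3 mapped chlorophyll file (illustrative filename)
with Dataset("A2014001.L3m_DAY_CHL_chlor_a_4km.nc") as nc:
    lat = nc.variables["lat"][:]
    chl = nc.variables["chlor_a"][:]  # masked array; fill values already handled

# NASA's mapped products store latitude from north to south; if your plotting
# code assumes south-to-north, flip both the coordinate and the data so the
# image doesn't come out upside down.
if lat[0] > lat[-1]:
    lat = lat[::-1]
    chl = chl[::-1, :]
```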

Reprocessing is important, as it improves the overall quality of the data, but you do need to keep on top of what is happening with the data to ensure that you are comparing like with like when you analyse a time series.

Ocean Colour Cubes

August 2009 Monthly Chlorophyll-a Composite; data courtesy of the ESA Ocean Colour Climate Change Initiative project

It’s an exciting time to be in ocean colour! A couple of weeks ago we highlighted the new US partnership using ocean colour as an early warning system for harmful freshwater algae blooms, and last week a new ocean colour CubeSat development was announced.

Ocean colour is something very close to our heart; it was the basis of Sam’s PhD and a field of research she remains highly active in today. When Sam began her PhD, the Coastal Zone Color Scanner (CZCS) was the main source of satellite ocean colour data, until it was superseded by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), which became the focus of her role at Plymouth Marine Laboratory.

Currently, there are a number of ocean colour instruments in orbit:

  • NASA’s twin MODIS instruments on the Terra and Aqua satellites
  • NOAA’s Visible Infrared Imager Radiometer Suite (VIIRS)
  • China’s Medium Resolution Spectral Imager (MERSI), Chinese Ocean Colour and Temperature Scanner (COCTS) and Coastal Zone Imager (CZI) onboard several satellites
  • South Korea’s Geostationary Ocean Color Imager (GOCI)
  • India’s Ocean Colour Monitor on-board Oceansat-2

Despite having these instruments in orbit, there is very limited global ocean colour data available for research applications. This is because the Chinese data is not easily accessible outside China, Oceansat-2 data isn’t of sufficient quality for climate research, and GOCI is on a geostationary satellite so its data only covers a limited geographical area focussed on South Korea. With MODIS, the Terra satellite has limited ocean colour applications due to issues with its mirror, and hence calibration; recently the calibration of MODIS on Aqua has also become unstable due to its age. Therefore, the ocean colour community is largely left with VIIRS, and the data from this instrument has only recently been proven.

With limited good-quality ocean colour data, there is significant concern over the potential loss of continuity in this valuable dataset. The next planned instrument to provide a global dataset is OLCI onboard ESA’s Sentinel-3A, due to be launched in November 2015, with everyone keeping their fingers crossed that MODIS will hang on until then.

Launching a satellite takes time and money, and satellites carrying ocean colour sensors have generally been big; for example, Sentinel-3A weighs 1250 kg, and the MODIS instrument alone 228.7 kg. This is why the project announced last week to build two ocean colour CubeSats is so exciting; they are planned to weigh only 4 kg, which reduces both the expense and the launch lead time.

The project, called SOCON (Sustained Ocean Observation from Nanosatellites), will see Clyde Space, from Glasgow in the UK, build an initial two prototype SeaHawk CubeSats carrying HawkEye ocean colour sensors, with a ground resolution of between 75 m and 150 m per pixel, to be launched in early 2017. The project consortium includes the University of North Carolina, NASA’s Goddard Space Flight Centre, the Hawk Institute for Space Sciences and Cloudland Instruments. The eventual aim is to have constellations of CubeSats providing a global view of both ocean and inland waters.

There are a number of other ocean colour satellite launches planned for the next ten years, including follow-on missions such as Oceansat-3, two missions from China, GOCI-2 and a second VIIRS mission.

With new missions, new data applications and miniaturised technology, we could be entering a purple patch for ocean colour data – although purple in ocean colour usually represents a Chlorophyll-a concentration of around 0.01 mg/m3 on the standard SeaWiFS colour palette as shown on the image at the top of the page.

We’re truly excited and looking forward to research, products and services this golden age may offer.

Goodbye HICO, Hello PACE – Ocean Colour’s Satellite Symmetry

HICO™ Data, image of Hong Kong from the Oregon State University HICO Sample Image Gallery, provided by the Naval Research Laboratory

Ocean colour is the acorn from which Pixalytics eventually grew, and so we were delighted to see last week’s NASA announcement that one of their next-generation ocean colour satellites is now more secure, with a launch scheduled for 2022.

Unsurprisingly, the term ocean colour refers to the study of the colour of the ocean, although in reality it’s a name that covers a suite of different products, the central one for the open oceans being the concentration of phytoplankton. Ocean colour is determined by how much of the sun’s energy the ocean scatters and absorbs, which in turn depends on the water itself alongside substances within the water, including phytoplankton and suspended sediments together with dissolved substances and chemicals. Phytoplankton can be used as a barometer of the health of the oceans, in that phytoplankton are found where nutrient levels are high, while oceans with low nutrients have little phytoplankton. Sam’s PhD involved the measurement of suspended sediment coming out of the Humber estuary back in 1995, and it has remained an active field of her research for the last 20 years.

Satellite ocean colour remote sensing began with the launch of NASA’s Coastal Zone Color Scanner (CZCS) on 24th October 1978. It had six spectral bands, four of which were devoted to ocean colour, and a spatial resolution of around 800 m. Despite having an anticipated lifespan of only one year, it operated until 22nd June 1986 and has been used as a key dataset ever since. Sadly, CZCS’s demise marked the start of a decade-long gap in NASA’s ocean colour data archive.

Although there were some intermediate ocean colour missions, it was the launch of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) that brought the next significant archive of ocean colour data. SeaWiFS had 8 spectral bands optimised for ocean colour and operated at a 1 km spatial resolution. One of Sam’s first jobs was developing a SeaWiFS data processor, and the satellite collected data until the end of its mission in December 2010.

Currently, global ocean colour data primarily comes from either NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) on-board the twin Aqua and Terra satellites, or the Visible Infrared Imaging Radiometer Suite (VIIRS), which is on a joint NOAA / NASA satellite called Suomi NPP. MODIS has 36 spectral bands and spatial resolutions ranging from 250 m to 1000 m, whilst VIIRS has 22 spectral bands and resolutions of 375 m and 750 m.

Until recently, there was also the ONR / NRL / NASA Hyperspectral Imager for the Coastal Ocean (HICO) mission on-board the International Space Station. It collected data for selected coastal regions with a spectral range of 380 to 960 nm and a 90 m spatial resolution. It was designed to collect only one scene per orbit, and acquired over 10,000 such scenes following its launch. Unfortunately, however, it suffered damage during a solar storm in September 2014. Its retirement was officially announced a few days ago with the confirmation that it wasn’t possible to repair the damage.

In the same week that we waved goodbye to HICO, NASA announced the 2022 launch of the Pre-Aerosol, Clouds and ocean Ecosystem (PACE) mission, in a form of ocean colour symmetry. PACE is part of the next generation of ocean colour satellites; it is intended to carry an ocean ecosystem spectrometer/radiometer built by NASA’s Goddard Space Flight Centre, which will measure spectral wavebands from the ultraviolet to the near-infrared. It will also carry an aerosol/cloud polarimeter to help improve our understanding of the flow, and role, of aerosols in the environment.

PACE will be preceded by several other missions with an ocean colour focus, including the European Sentinel-3 mission within the next year; it will carry the Ocean and Land Colour Instrument (OLCI), with 21 spectral bands and 300 m spatial resolution, building on Envisat’s Medium Resolution Imaging Spectrometer (MERIS) instrument. Sentinel-3 will also carry a Sea and Land Surface Temperature Radiometer. These missions should help to significantly improve the quality of ocean colour data, with PACE’s polarimeter in particular supporting better atmospheric correction.

Knowledge of the global phytoplankton biomass is critical to understanding the health of the oceans, which in turn impacts the planet’s carbon cycle and, ultimately, the evolution of our planet’s climate. A continuous ocean colour time series is critical to this, and so we are already looking forward to the data from Sentinel-3 and PACE.

Temporal: The forgotten resolution

Time, Copyright: scanrail / 123RF Stock Photo

Temporal resolution shouldn’t be forgotten when considering satellite imagery; however, it’s often neglected, with its partners spatial and spectral resolution getting the limelight. The reason is the special relationship spatial and spectral resolution have, where a higher spectral resolution has traditionally meant a lower spatial resolution and vice versa, because of limited satellite disk space and transmission capabilities. Therefore, when considering imagery, most people focus on whatever best suits their spatial or spectral needs, rarely giving temporal resolution a second thought, other than when immediate data acquisition is required.

Temporal resolution is the amount of time it takes a satellite to return and collect data for exactly the same location on Earth; also known as the revisit or recycle time, it is expressed in hours or days. Global-coverage satellites tend to have low Earth polar, or near-polar, orbits, travelling at around 27,000 kph and taking around 100 minutes to circle the Earth. During each orbit the Earth rotates twenty-five degrees around its polar axis, and so on each successive orbit the ground track moves to the west, meaning it can take a couple of weeks to cover the whole globe; for example, Landsat has a 16-day absolute revisit time.
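
Those round numbers can be sanity-checked with a few lines of arithmetic. This sketch uses the figures quoted above, not precise orbital elements, to work out how far the ground track shifts westward at the equator on each orbit:

```python
# How far does the ground track shift west per orbit? (rough figures)
EARTH_ROTATION_DEG_PER_MIN = 360 / (24 * 60)  # ~0.25 degrees per minute
EQUATOR_CIRCUMFERENCE_KM = 40_075

orbit_period_min = 100                                     # typical LEO period quoted above
shift_deg = orbit_period_min * EARTH_ROTATION_DEG_PER_MIN  # ~25 degrees
shift_km = shift_deg / 360 * EQUATOR_CIRCUMFERENCE_KM      # ~2,800 km at the equator

print(f"{shift_deg:.0f} degrees, i.e. about {shift_km:,.0f} km of westward shift per orbit")
```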

Only seeing the part of the Earth you want to image once every few weeks isn’t very helpful if you want to see daily changes. Therefore, there are a number of techniques satellites use to improve the temporal resolution:

  • Swath Width – A swath is the area of ground the satellite sees with each orbit; the wider the swath, the greater the ground coverage, but generally a wider swath means a lower spatial resolution. A satellite with a wide swath will have significant overlaps between orbits, which allows areas of the Earth to be imaged more frequently, reducing the revisit time. MODIS uses a wide swath and images the globe every one to two days.
  • Constellations – If you have two identical satellites orbiting one hundred and eighty degrees apart you will halve the revisit time, and this approach is being used by ESA’s Sentinel missions. Sentinel-1A was launched in 2014, with its twin Sentinel-1B due to be launched in 2016. When operating together they will provide a temporal resolution of six days. Obviously, adding more satellites to a constellation will reduce the revisit time further.
  • Pointing – High-resolution satellites in particular use this method, which allows the satellites to point their sensors at a particular point on Earth, and so map the same area from multiple orbits. However, pointing changes the angle at which the sensor looks at the Earth, which means the imaged ground area can be distorted.
  • Geostationary Orbits – Although technically not the same thing, a geostationary satellite remains focussed on the same area of the Earth at all times, and so the temporal resolution is simply how often imagery is taken, for example, every fifteen minutes. The problem is that you can only map a restricted area.

Hopefully, this has given you a little insight into temporal resolution; whilst spectral and spatial resolution are important factors when considering what imagery you need, do spend a bit of time considering your temporal needs too!

Why is understanding spatial resolution important?

Spatial resolution is a key characteristic in remote sensing, where it’s often used to refer to the size of pixels within an acquired image. However, this is a simplification, as the detector in the satellite doesn’t see the square suggested by a pixel; rather, it sees an ellipse, due to the angle through which the detector receives the signal – known as the instantaneous field of view. The ellipses are turned into square pixels by data processing when creating the image.
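
As a rough illustration of the relationship between the instantaneous field of view (IFOV) and pixel size, the small-angle sketch below multiplies altitude by IFOV to get the nadir footprint. The numbers are illustrative, approximating a MODIS-like sensor rather than quoting any specific mission's specification.

```python
def nadir_footprint_m(altitude_km: float, ifov_mrad: float) -> float:
    """Approximate ground footprint at nadir: altitude x IFOV (small-angle approximation)."""
    return altitude_km * 1_000 * ifov_mrad * 1e-3

# Illustrative MODIS-like values: ~705 km altitude, ~1.4 mrad IFOV
print(f"{nadir_footprint_m(705, 1.4):.0f} m")  # roughly a 1 km pixel at nadir

# Away from nadir the footprint grows and becomes elliptical, which is why
# the pixel size quoted for a sensor is usually the at-nadir value.
```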

The area of the port of Rotterdam shown using a Landsat image (background) at 30m resolution and MERIS full resolution image (inset image) at 300m resolution; data courtesy of the USGS and ESA. Example used within Hydrographic Academy eLearning material.

Therefore, for example, when viewing an image with 1 km resolution, not only will you be unable to see anything smaller than 1 km in size, but objects need to be significantly larger than 1 km for any detail to be discernible. Whilst this might be fine if you’re looking at changes in temperature across the Atlantic Ocean, it won’t be much use if you’re interested in suspended sediment blooms at the mouth of a small river.

Any image with a spatial resolution of between 50 m and 1 km is described as having low spatial resolution. For example, MODIS operates at low spatial resolutions ranging from 250 m to 1000 m, as its primary focus is global mapping rather than capturing detailed imagery of local regions.

If you want to look for smaller objects, you’ll need to use images with medium spatial resolutions of between 4 m and 50 m. There is quite a lot of freely available imagery within this range; for example, NASA’s Landsat 8 operates at 15 m, 30 m and 100 m resolutions, and ESA’s Sentinel-1A operates at the three resolutions of 5 m, 20 m and 100 m. If you want to go even finer, you will require high spatial resolution images, which go down to resolutions of between 4 m and 1 m, or very high spatial resolution images, which cover the 0.5 m – 1 m range. Commercial organisations tend to operate the satellites with these higher levels of resolution, and they charge for making the images available. It’s likely that military satellites offer imagery down to 0.15 m, but there are regulations in place to prevent the sale of extremely high resolution imagery, as it’s considered to be a potential danger to security.

Spatial resolution was in the headlines last week with the launch of DigitalGlobe’s WorldView-3 satellite, which can produce spectral images with a resolution down to 0.31 m. Technologies to produce images at this resolution have been around for some time but, as reported by Reuters in June, DigitalGlobe only recently received a licence from the US Commerce Department to start selling images with a resolution of up to 0.25 m; without this licence they wouldn’t be able to sell this higher-resolution imagery.

Regulatory involvement in very high resolution imagery was also demonstrated earlier this year when, in January, the UK government blocked the European Commission’s effort to set common European regulations on the sale of high-resolution satellite imagery. The UK government currently controls access to data through export licensing conditions on the satellite hardware, and it felt regulations would impact the UK’s ability to export space technology.

Therefore, spatial resolution is an important term, and one every remote sensing client should understand. Different services require different spatial resolutions, and selecting the most appropriate resolution for your needs will not only ensure that you get exactly what you want, but could also save you money as you don’t want to over-specify.

Rosetta: Extra-terrestrial Observation

Full-frame NAVCAM image taken on 9 August 2014 from a distance of about 99 km from comet 67P/Churyumov-Gerasimenko. Image: ESA/Rosetta/NAVCAM

Most people will have seen last week’s news about ESA’s Rosetta spacecraft arriving at comet 67P/Churyumov-Gerasimenko, and the animated images of the ‘rubber-duck’ shaped object taken by the Navigation Camera (NavCam), part of Rosetta’s Attitude and Orbital Control System. The arrival generated many headlines, from the 10 years it took to catch the comet, through the history-making first rendezvous and comet orbit, to the final part of the mission and the intention to land on the comet. However, there was little detail about the remote sensing aspect of the mission, which we feel is a missed opportunity, as it’s using many of the techniques and methodologies employed in Earth observation (EO).

The orbiter part of Rosetta carries eleven different remote sensing experiments, with a wide variety of sensors gathering data about the comet before the lander touches down. Amongst the instruments on-board are three separate spectrometers: a visible and infrared thermal imaging spectrometer (VIRTIS) focussing on temperature and geography; an ultraviolet imaging spectrometer (ALICE) looking at gases and the production of water and carbon dioxide/monoxide; and finally ROSINA, which has sensors for measuring the composition of the comet’s atmosphere and ionosphere.

The VIRTIS instrument has two channels: the VIRTIS-H channel offers high spectral resolution, operating from 2 to 5 µm, whereas VIRTIS-M is a mapper operating at a coarser spectral resolution, and one of its main products will be a global spectral map of the comet’s nucleus. This instrument has already been used to take measurements of Earth. In November 2009, on Rosetta’s third Earth fly-by, VIRTIS measurements were compared to those from existing EO instruments – Envisat’s AATSR and SCIAMACHY, and MODIS. Overall, there was a strong correlation with the EO data, but differences were also seen – especially in the 1.4 µm water absorption feature.

VIRTIS has a key role in supporting the selection of the November landing site, a task that has become more difficult now the comet has been imaged in detail and is seen to have a complex shape. In addition, recent VIRTIS measurements have shown the comet’s average surface temperature to be around minus seventy degrees centigrade, which means the comet is likely too warm to be ice covered and must instead have a dark, dusty crust.

Remote sensing is playing a huge part in the Rosetta mission, and it should be celebrated that these instruments will gather data over the next eighteen months to help scientists determine the role comets play in the evolution of planets. It will be amazing if the remote sensing techniques developed to explore, monitor and analyse our planet turn out to be the same techniques that help determine whether the water on Earth originally came from comets.

The Science Behind Springwatch

Last Wednesday Pixalytics made its TV debut on the BBC2 Springwatch programme, where they showed a video we’d made on phytoplankton blooms. The video was based on NASA MODIS-Aqua daily images. MODIS, or the Moderate Resolution Imaging Spectroradiometer, is an optical sensor that’s used for mapping both the land and the oceans. It can be thought of as a digital camera that operates at a number of different wavelengths of light.

Spring 2014 phytoplankton image, MODIS data from NASA with movie animation by Pixalytics Ltd.

As an ocean colour sensor, it detects the change in colour of the ocean caused by what’s both dissolved and suspended in the water, e.g. the microscopic plants of the sea that are called phytoplankton. The chlorophyll pigments in plants (both on land and in the oceans) absorb light at blue and red wavelengths, making waters high in phytoplankton appear green in colour. This colour change is picked up by chlorophyll algorithms (mathematical equations) and converted into changes in concentration, which are displayed using a rainbow colour palette that goes from purple to blue, green, yellow and red as the concentrations go from low to high values. Black on the imagery is where there’s no data, which for optical imagery is primarily due to cloud cover.
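
To give a flavour of what such a chlorophyll algorithm looks like, here is a minimal sketch of a standard band-ratio form used for MODIS (known as OC3M). The coefficients are the published NASA values to the best of our knowledge, but treat them as illustrative and check the NASA Ocean Color site for the current set before any real use.

```python
import numpy as np

# OC3M polynomial coefficients (illustrative; verify against NASA's current values)
A = (0.2424, -2.7423, 1.8017, 0.0015, -1.2280)

def chlor_a_oc3m(rrs443, rrs488, rrs547):
    """Chlorophyll-a concentration (mg/m^3) from MODIS remote-sensing reflectances.

    Uses the maximum of the two blue bands over the green band (the 'maximum
    band ratio') because the best-performing blue band shifts with concentration.
    """
    r = np.log10(np.maximum(rrs443, rrs488) / rrs547)
    log_chl = sum(a * r**i for i, a in enumerate(A))
    return 10.0 ** log_chl
```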

MODIS flies on both the Aqua (travelling south to north over the equator in the afternoon) and Terra (north to south across the equator in the morning) satellites, which orbit the Earth several times a day collecting strips of imagery 2330 km wide at a spatial resolution of around 1 km. The strips from a day are combined to create a daily composite image, and by looking at images over time we can see the changes in phytoplankton concentrations as we move out of winter, through the months, into spring. The ‘spring bloom’ is an increase in phytoplankton concentrations as the days become lighter and the phytoplankton make use of the nutrients mixed into the surface waters over the winter.