Monitoring ocean acidification from space

Enhanced pseudo-true colour composite of the United Kingdom showing coccolithophore blooms in light blue. Image acquired by MODIS-Aqua on 24th May 2016. Data courtesy of NASA.


What is ocean acidification?
Since the industrial revolution the oceans have absorbed approximately 50% of the CO2 produced by human activities (The Royal Society, 2005). Scientists previously saw this oceanic absorption as advantageous, but ocean observations in recent decades have shown that it has profoundly changed ocean chemistry, resulting in ocean acidification (OA): as CO2 dissolves into the ocean it forms carbonic acid, lowering the pH and shifting the water towards a more acidic state. According to the National Oceanic and Atmospheric Administration (NOAA), ocean acidity has already increased by about 30%, and some studies suggest that, if no changes are made, acidity could increase by around 150% by 2100.
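The percentage figures refer to hydrogen-ion concentration (i.e. acidity) rather than pH units, because pH is a logarithmic scale: a drop of about 0.1 pH units since pre-industrial times corresponds to roughly a 30% rise in acidity. A quick sketch, using the widely cited pre-industrial surface-ocean pH of about 8.2:

```python
def h_plus(ph):
    """Hydrogen-ion concentration (mol/L); by definition pH = -log10[H+]."""
    return 10.0 ** -ph

pre_industrial = 8.2   # widely cited pre-industrial surface-ocean pH
today = 8.1            # roughly 0.1 units lower now

rise = h_plus(today) / h_plus(pre_industrial) - 1
print(f"acidity increase: {rise:.0%}")  # 26%, i.e. "about 30%"

# A further drop to ~0.4 pH units below pre-industrial levels, projected
# for 2100 under business-as-usual scenarios, gives the ~150% figure:
rise_2100 = h_plus(pre_industrial - 0.4) / h_plus(pre_industrial) - 1
print(f"projected increase by 2100: {rise_2100:.0%}")
```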

Impacts of OA
OA is anticipated to impact many marine species. For example, it is expected to harm calcifying species such as corals, oysters, crustaceans, and calcareous plankton, e.g. coccolithophores.

OA can significantly reduce the ability of reef-building corals to produce their skeletons and can dissolve the protective shells of oysters and crustaceans, making them more susceptible to predation and death. This in turn would affect the entire food web and the wider environment, and would have many socio-economic impacts.

Calcifying phytoplankton, such as coccolithophores, are thought to be especially vulnerable to OA. Coccolithophores are the most abundant calcifying phytoplankton in the ocean, are important for the global biogeochemical cycling of carbon, and sit at the base of many marine food webs. It’s projected that OA may disrupt the formation, and/or cause the dissolution, of coccolithophores’ calcium carbonate (CaCO3) shells, impacting future populations. Changes in their abundance due to OA could therefore have far-reaching effects.

Unlike other phytoplankton, coccolithophores are highly effective light scatterers relative to their surroundings because they produce highly reflective calcium carbonate plates. This allows them to be easily seen in satellite imagery. The figure at the top of this page shows multiple coccolithophore blooms, in light blue, off the coast of the United Kingdom on 24th May 2016.

Current OA monitoring methods
Presently, the monitoring of OA and its effects is predominantly carried out by in situ observations from ships and moorings, using buoys and wave gliders for example. Although vital, in situ data are notoriously spatially sparse, as it is difficult to take measurements in certain parts of the world, especially hostile regions (e.g. the polar oceans). On their own, in situ observations do not provide a comprehensive and cost-effective way to monitor OA globally, which has driven the development of satellite-based sensors.

How can OA be monitored from space?
Although it is difficult to monitor changes in ocean pH directly by remote sensing, satellites can measure sea surface temperature and salinity (SST and SSS) and surface chlorophyll-a, from which ocean pH can be estimated using empirical relationships derived from in situ data. Although surface measurements may not be representative of deeper biological processes, they are important for OA because the change in pH occurs at the surface first.
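As a sketch of what such an empirical relationship can look like, the toy function below combines SST, SSS and chlorophyll-a linearly. The function name and all coefficients are illustrative placeholders, not values from any published algorithm; operational relationships are fitted regionally against in situ carbonate-chemistry measurements.

```python
def estimate_surface_ph(sst_c, sss_psu, chl_mg_m3,
                        coeffs=(8.05, -0.01, 0.02, -0.005)):
    """Toy linear model: pH ~ a0 + a1*SST + a2*(SSS - 35) + a3*chl.

    The coefficients are placeholders for illustration only; real
    algorithms derive them by regression against in situ measurements.
    """
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * sst_c + a2 * (sss_psu - 35.0) + a3 * chl_mg_m3

# Example: satellite-derived SST of 15 degC, SSS of 35.2, chl-a of 0.5 mg/m3
print(round(estimate_surface_ph(15.0, 35.2, 0.5), 4))  # 7.9015
```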

In 2015, researchers at the University of Exeter, UK became the first scientists to use remote sensing to develop a worldwide map of the ocean’s acidity, using satellite imagery from the European Space Agency’s Soil Moisture and Ocean Salinity (SMOS) satellite, launched in 2009, and NASA’s Aquarius satellite, launched in 2011; both are still currently in operation. Thermal infrared sensors on the satellites measure SST, while microwave sensors measure SSS; microwave SST sensors also exist, but they have a coarse spatial resolution.

Future Opportunities – The Copernicus Program
The European Union’s Copernicus Programme is in the process of launching a series of satellites, known as the Sentinels, which will improve understanding of large-scale global dynamics and climate change. Of all the Sentinel satellite types, Sentinels 2 and 3 are the most appropriate for assessing the marine carbonate system. The Sentinel-3 satellite was launched in February this year and will mainly focus on ocean measurements, including SST, ocean colour and chlorophyll-a.

Overall, OA is a relatively new field of research, with most studies conducted over the last decade. Remote sensing is certain to play an exciting and important role in the future monitoring of this issue and its effects on the marine environment.

Blog written by Charlie Leaman, BSc, University of Bath during work placement at Pixalytics.

Sentinel-3 Sets Sail

Artist's view of Sentinel-3. Image courtesy of ESA–Pierre Carril.


At 17.57 GMT yesterday (16th February 2016) Sentinel-3 set sail from the Plesetsk Space Centre in Russia, heading for its 814 km sun-synchronous low Earth orbit. Like all the other Sentinel launches, we were at home watching the live feed!

This is the third Sentinel launch of the European Commission’s Copernicus Programme, following Sentinel-1 and 2. Sentinel-3, like its predecessors, will be part of a twin satellite constellation with Sentinel-3B’s launch expected to be in 2017.

Sentinel-3 carries four scientific instruments:

  • Sea and Land Surface Temperature Radiometer (SLSTR) will measure temperatures of both the sea and land, to an accuracy of better than 0.3 K. This instrument has 9 spectral bands with a spatial resolution of 500 m for visible/near-infrared wavelengths and 1 km for the thermal wavelengths; and has swath widths of 1420 km at nadir and 750 km looking backwards. It’s worth noting that two thermal infrared spectral wavebands are optimised for fire detection, providing the fire radiative power measurement.
  • Ocean and Land Colour Instrument (OLCI) has 21 spectral bands (400–1020 nm) focussed on ocean colour and vegetation measurements. All bands have a spatial resolution of 300 m with a swath width of 1270 km.
  • Synthetic Aperture Radar Altimeter (SRAL) is a dual-frequency (Ku- and C-band) altimeter. It offers 300 m spatial resolution after SAR processing, and is based on the instruments from the CryoSat and Jason missions. It will be the first satellite altimeter to provide 100% coverage of the Earth’s surface in SAR mode.
  • Microwave Radiometer (MWR) is a dual-frequency (23.8 & 36.5 GHz) radiometer used to derive atmospheric column water vapour measurements for correcting the SRAL instrument.

The scientific instruments are supported by four positioning/navigation instruments to ensure the satellite maintains its precise orbit.

Sentinel-3 will mainly focus on ocean measurements, including sea-surface height (similar to the recently launched Jason-3), sea surface temperature, ocean colour, surface wind speed, sea ice thickness and ice sheets. Over land, the satellite will provide vegetation indices, measure the height of rivers and lakes, and help monitor wildfires.

Sentinel-3 is a very exciting satellite for us, as the data and products it will produce are very much within the wheelhouse of the services that Pixalytics offers. Sam’s background is in ocean colour, she’s world-renowned for her atmospheric correction research, and we offer a variety of agritech services including vegetation indices. You can probably now see why we’re so excited!

The satellite is currently in its commissioning phase, during which ESA tests the data produced by the sensors. This is undertaken in conjunction with a group of users, and Pixalytics is one of them! The phase is expected to last five months, after which the satellite will be transferred to EUMETSAT and the data should be released.

Like all the data from the Copernicus programme, it will be offered free of charge to users. This will challenge organisations, like us, to see what innovative services we can offer with this new data stream. Exciting times ahead!

Sentinel-2A dips its toe into the water

Detailed image of algal bloom in the Baltic Sea acquired by Sentinel-2A on 7 August 2015. Data courtesy of Copernicus Sentinel data (2015)/ESA.


With spectacular images of an algal bloom in the Baltic Sea, ESA’s Sentinel-2A has announced its arrival to the ocean colour community. As we highlighted in an earlier blog, Sentinel-2A was launched in June predominantly as a land monitoring mission. However, given that it offers higher resolution data than other current marine-focussed missions, it was always expected to dip its toe into ocean colour. And what a toe it has dipped!

The images show a huge bloom of cyanobacteria in the Baltic Sea, with blue-green swirls tracing eddies and currents. The image at the top of the blog shows the detail of the surface-floating bloom caught in the currents; a ship making its way through the bloom leaves a straight black line in its wake as deeper waters are brought to the surface.

Algal bloom in the Baltic Sea acquired by Sentinel-2A on 7 August 2015. Data courtesy of Copernicus Sentinel data (2015)/ESA.


To the right is a wider view of the bloom within the Baltic Sea. The images were acquired on 7th August using the Multispectral Imager, which has 13 spectral bands; the visible bands, used here, have a spatial resolution of 10 m.

The Baltic Sea has long suffered from poor water quality, and in 1974 it became the first entire sea to be subject to measures to prevent pollution, with the signing of the Helsinki Convention on the Protection of the Marine Environment of the Baltic Sea Area. Originally signed by the Baltic coastal countries, a revised version was signed by the majority of European countries in 1992. This convention came into force on 17th January 2000 and is overseen by the Helsinki Commission (Baltic Marine Environment Protection Commission), also known as HELCOM. The convention aims to protect the Baltic Sea area from harmful substances from land-based sources, ships, incineration, dumping and the exploitation of the seabed.

Despite the international agreements, the ecosystems of the Baltic Sea are still threatened by overfishing and by marine and chemical pollution. The twin drivers of the area’s algal blooms, however, are warm temperatures and excessive levels of nutrients such as phosphorus and nitrogen. The resulting low oxygen levels, which prevent marine life from thriving, also make the Baltic Sea home to seven of the world’s ten largest marine dead zones.

These images certainly whet the appetite of marine remote sensers, who also have Sentinel-3 to look forward to later this year. That mission will focus on sea-surface topography, sea surface temperature and ocean colour, and is due to be launched in the last few months of 2015. It’s an exciting time to be monitoring and researching the world’s oceans!

Lidar: From space to your garage and pocket

Lidar data overlaid on an aerial photo for Pinellas Point, Tampa Bay, USA. Data courtesy of the NASA Experimental Airborne Advanced Research Lidar (EAARL), http://gulfsci.usgs.gov/tampabay/data/1_lidar/index.html


Lidar isn’t a word most people use regularly, but recent developments in the field might see a future where it becomes part of everyday life.

Lidar, an acronym for LIght Detection And Ranging, was first developed in the 1960s and is primarily a technique for measuring distance; other applications include atmospheric Lidar, which measures clouds, particles and gases such as ozone. The system comprises a laser, a scanner and a GPS receiver, and it works by emitting a laser pulse towards a target and measuring the time the pulse takes to return.

There are two main types of Lidar used within remote sensing for measuring distance: topographic and bathymetric. Topographic Lidar uses a near-infrared laser to map land, while bathymetric Lidar uses water-penetrating green light to measure the seafloor. The image at the top of the blog shows bathymetric Lidar data overlaying an aerial photograph of Pinellas Point, Tampa Bay in the USA, with depths below sea level in metres. Airborne terrestrial Lidar applications have also expanded to include measuring forest structure and mapping tree canopies, while ground-based terrestrial laser scanners map structures such as buildings.

As a user getting freely accessible airborne Lidar data isn’t easy, but there are some places that offer datasets including:

Spaceborne terrestrial Lidar has been limited, as it has to overcome a number of challenges:

  • It’s an active remote sensing technique, which means it requires much more power to run than passive systems; for satellites, that means more cost.
  • It’s an optical system and, like all optical systems, is affected by cloud cover and poor visibility; interestingly, it works more effectively at night, as the processing doesn’t need to account for reflected sunlight.
  • Lidar performance decreases with the inverse square of the distance between the system and the target.
  • Lidar collects individual points rather than an image; images are created by combining many individual points. Whilst multiple overflights can be made quickly by a plane, a satellite orbiting the Earth effectively collects lines of points over a number of days, which takes time.
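The time-of-flight principle and the inverse-square falloff can be sketched in a few lines. The function names here are ours for illustration, and the 814 km figure is simply Sentinel-3’s orbital altitude, borrowed for scale:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range(round_trip_s):
    """Distance to the target: the pulse travels out and back,
    so range = c * t / 2."""
    return C * round_trip_s / 2.0

def relative_return_power(distance, reference=1.0):
    """Inverse-square falloff: doubling the range quarters the
    returned signal power."""
    return (reference / distance) ** 2

# A pulse returning after ~5.4 ms has travelled to an 814 km
# orbital altitude and back:
round_trip = 2 * 814_000 / C
print(f"range: {lidar_range(round_trip) / 1000:.0f} km")   # range: 814 km
print(f"power at 2x range: {relative_return_power(2.0)}")  # 0.25
```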

The only satellite to have studied the Earth’s surface using Lidar is NASA’s Ice, Cloud and land Elevation Satellite, carrying the Geoscience Laser Altimeter System (ICESat-GLAS); launched in 2003, it was decommissioned in 2010. It measured ice sheet elevations and changes, together with cloud and aerosol height profiles, land elevation, vegetation cover and sea ice thickness, and its data products are available online. ICESat-2 is scheduled for launch in 2017. The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite, part of the A-Train constellation, is a joint NASA and CNES mission launched in 2006. Originally designed as an atmosphere-focused Lidar, it has since developed marine applications, which led to the SABOR campaign we discussed in a previous blog.

Beyond remote sensing, Lidar may become part of every household if recent proofs of concept come to fruition. The Google self-driving car uses Lidar as part of its navigation system to generate 3D maps of the surrounding environment. In addition, research recently published in Optics Express by Dr. Ali Hajimiri of the California Institute of Technology describes a tiny Lidar device capable of turning mobile phones into 3D scanning devices. Using a nanophotonic coherent imager, the proof-of-concept device produced a 3D image of the front of a US penny from half a metre away, with 15-μm depth resolution and 50-μm lateral resolution.

Lidar has many remote sensing and surveying applications; in the future, however, we could all have lasers in our garages and pockets.

Current Work in Remote Sensing and Photogrammetry

Last week the annual Remote Sensing and Photogrammetry Society (RSPSoc) conference was held in Aberystwyth. Having stepped down as RSPSoc Chairman, I could relax and enjoy this year’s event as a delegate.

Arriving on Wednesday morning, the first session I attended was organised by the Technology and Operational Procedures Special Interest Group (TOPSIG) and focused on operational Earth observation. There was a great range of presentations, and I particularly enjoyed the user insights by Andy Wells on how customers are really using imagery. Recent developments in on-the-fly importing, georeferencing and autocorrelation mean that bringing data together from different sources is no longer a time-consuming chore. Users can therefore spend more time analysing data, extracting information and adding value to their organisations or research. In addition, as highlighted by other presentations, open software repositories continue to grow and now include complex algorithms that were once only available to specialists. Finally, Steve Keyworth reminded us that what we do should be seen as a component of the solution rather than the specification; the ultimate aim should be solving the customer’s problem, which in the current climate is often financially motivated.

Landsat 7 image showing features in the Baltic, data courtesy of ESA


On Thursday I co-chaired the Water and Marine Environments session alongside Professor Heiko Balzter, on behalf of the Marine Optics Special Interest Group (SIG). My presentation focused on the European Space Agency (ESA) Landsat archive acquired via the ESA ground stations. This data is being reprocessed to create a consistent high-resolution visible and infrared image dataset combining the three primary sensors flown on the Landsat series: MSS (Multispectral Scanner), TM (Thematic Mapper) and ETM+ (Enhanced Thematic Mapper Plus). Although historical Landsat missions are not ideally suited to observing the ocean, due to a low signal-to-noise ratio, features can be clearly seen, and the new processing setup means images are now being processed over the open ocean.

Mark Danson’s keynote lecture on Friday morning described the application of terrestrial laser scanners to understanding forest structure. He showcased his post-PhD research, which has led to the development of the Salford Advanced Laser Canopy Analyser, a dual-wavelength full-waveform laser scanner. The presentation also showed the importance of fieldwork in understanding what remote techniques are actually sensing; in this case it included a team of people cutting down sample trees and counting every leaf!

Mark also made me feel less guilty that I am still working on a component of my PhD: atmospheric correction. In research, your own learning curve and the scientific process mean you gain new insights as you understand more, often explaining why answers are not as simple as you might have assumed. It’s one of the reasons why I love doing research.

Overall, I had a great time at RSPSoc, catching up and seeing what’s new in the field. My next conference event is Ocean Optics, in the US, at the end of October where I’ll be discussing citizen science in a marine science context.

Report on Last Week’s Global Oceans Action Summit

Last week I attended the Global Oceans Action Summit for Food Security and Blue Growth, held at The World Forum in The Hague (the Netherlands). It brought together 500 ocean stakeholders from over 80 countries to address the three key threats to ocean health and food security: overfishing, habitat destruction and pollution. The summit also highlighted the challenges in creating integrated solutions to combat these threats, in terms of public-private partnerships, funding, and the need for good ocean governance balanced with growth, sustainability, conservation and private sector interests.

The plenary sessions of talks, including a presentation by H.E. Sharon Dijksma, Dutch Minister for Agriculture and Chair of the summit, were interspersed with some interesting performances, notably oceanic reworded renditions of The Snowman and the Circle of Life from The Lion King, accompanied by laser displays. The speakers highlighted the need for focused efforts on the oceans: over one billion people worldwide derive their food and livelihoods from them, 40% of the world’s countries have more ocean than land under their jurisdiction, and 13 of the world’s megacities lie on the coast.

We heard how the oceans are currently under pressure from multiple sources, including dead zones, disappearing ecosystems, ocean acidification and sea level rise. One third of fish stocks are overexploited, and restoring them could create an annual economic gain of $50 billion. One thing that surprised me is that 40% of the world’s fish catch is currently used to feed farmed fish (aquaculture).

In addition, I attended a variety of parallel discussion sessions. One focussed on the concept that the ocean is a complex, moving, 3D environment: we need to stop applying current land management principles to the ocean, and instead better understand oceans and manage them as oceans. Other sessions highlighted the need to engage with local communities rather than imposing outside solutions, as 80% of aquaculture production comes from SMEs.

A strong theme coming out of the conference was that greater recognition is required of the negative impact of climate change on the ocean, which local adaptations will not offset. Whilst a number of partnerships and principles were announced at the Summit, we are a long way from solutions. The next World Ocean Summit will be in June this year, but we need global, and local, action now to achieve healthier oceans and fish stocks for the future.