Hyperspectral image compression using convolutional neural networks with local spectral transforms and non-uniform sample normalisation
Mijares i Verdú, Sebastià 
(Universitat Autònoma de Barcelona. Escola d'Enginyeria)
Serra Sagristà, Joan 
(Universitat Autònoma de Barcelona. Escola d'Enginyeria)
Bartrina Rapesta, Joan 
(Universitat Autònoma de Barcelona. Escola d'Enginyeria)
Hernández Cabronero, Miguel 
(Universitat Autònoma de Barcelona. Escola d'Enginyeria)
Laparra, Valero 
(Universitat de València. Laboratori Processat Imatges)
Ballé, Johannes 
(Google Research)
| Date: |
2022 |
| Abstract: |
In recent years there has been increasing interest in the use of neural networks for image compression, with learned codecs achieving highly competitive results compared to conventional methods. Inspired by the success of these methods for natural images, a variety of designs and techniques have been proposed for neural compression of remote sensing data [1,2,3] and hyperspectral images [4,5,6]. A key challenge when compressing hyperspectral images in a band-by-band framework is that the distribution of samples may vary significantly from one channel to another, despite neighbouring channels generally being highly similar [7]. A learned codec trained to compress all channels in the image must therefore be able to adapt its transform to achieve high-performance results. We find that transposed convolution (often called deconvolution), the standard upsampling operation in convolutional neural networks, produces significant artefacts when decompressing low-variance images, and that their negative impact increases in proportion to the bit depth of the samples. To solve these issues, we propose non-uniform sample normalisation in neural-network codecs for the compression of hyperspectral images. Much of the correlation among the samples of hyperspectral images lies along the spectral axis. However, a full spectral-spatial transform of a hyperspectral image (such as KLT+JPEG 2000) is computationally costly. To exploit local correlation among the channels without incurring too high a computational cost, our proposed learned transform performs local spectral-spatial decorrelation by compressing the image in batches of channels. In this paper we show that this is an effective solution for the artefacts described above and that the resulting neural-network codecs surpass current onboard payload data compression standards such as CCSDS 122.0, JPEG 2000, and KLT+JPEG 2000 in rate-distortion performance by up to 3 dB PSNR.
Finally, we discuss how this highly competitive method could be used in on-board settings and the trade-offs involved in that deployment. |
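The abstract names two techniques without giving their details: non-uniform sample normalisation applied before the learned transform, and local spectral-spatial decorrelation obtained by compressing the image in batches of adjacent channels. As a hypothetical illustration only (the paper's exact normalisation scheme, function names, and the batch size of 4 are all assumptions, not taken from the source), a per-band rescaling and a channel-batching helper might look like this:

```python
import numpy as np

def normalise_bands(cube, eps=1e-6):
    """Hypothetical per-band normalisation sketch (not the paper's exact scheme).

    Each spectral band of a (bands, height, width) cube is rescaled to
    [0, 1] using its own minimum and range, so that low-variance bands
    reach the codec at the same scale as high-variance ones. Returns the
    normalised cube and the per-band (min, range) pairs needed to invert
    the mapping exactly after decoding.
    """
    mins = cube.min(axis=(1, 2), keepdims=True)
    ranges = cube.max(axis=(1, 2), keepdims=True) - mins
    ranges = np.maximum(ranges, eps)  # guard against constant bands
    return (cube - mins) / ranges, (mins, ranges)

def denormalise_bands(norm_cube, params):
    """Invert normalise_bands on the decoder side."""
    mins, ranges = params
    return norm_cube * ranges + mins

def channel_batches(cube, batch_size=4):
    """Yield groups of adjacent bands, to be compressed jointly so the
    learned transform can decorrelate locally along the spectral axis."""
    for start in range(0, cube.shape[0], batch_size):
        yield cube[start:start + batch_size]
```

In this sketch the per-band parameters would have to be stored as side information with each compressed batch, since the decoder needs them to undo the normalisation; the cost is a few values per band, negligible next to the coded image data.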
| Rights: |
This document is subject to a Creative Commons licence. Total or partial reproduction, distribution, public communication of the work and the creation of derivative works are permitted, even for commercial purposes, provided that the authorship of the original work is acknowledged. |
| Language: |
English |
| Document: |
Conference contribution ; research ; Published version |
| Published in: |
8th International Workshop on OnBoard Payload Data Compression. Athens, 2022 |
DOI: 10.5281/zenodo.7245233
Record created 2026-02-11, last modified 2026-02-22