
A broadband hyperspectral image sensor with high spatio-temporal resolution | Nature

Nov 09, 2024

Nature volume 635, pages 73–81 (2024)


Hyperspectral imaging provides high-dimensional spatial–temporal–spectral information showing intrinsic matter characteristics1,2,3,4,5. Here we report an on-chip computational hyperspectral imaging framework with high spatial and temporal resolution. By integrating different broadband modulation materials on the image sensor chip, the target spectral information is non-uniformly and intrinsically coupled to each pixel with high light throughput. Using intelligent reconstruction algorithms, multi-channel images can be recovered from each frame, realizing real-time hyperspectral imaging. Following this framework, we fabricated a broadband visible–near-infrared (400–1,700 nm) hyperspectral image sensor using photolithography, with an average light throughput of 74.8% and 96 wavelength channels. The demonstrated resolution is 1,024 × 1,024 pixels at 124 fps. We demonstrated its wide applications, including chlorophyll and sugar quantification for intelligent agriculture, blood oxygen and water quality monitoring for human health, textile classification and apple bruise detection for industrial automation, and remote lunar detection for astronomy. The integrated hyperspectral image sensor weighs only tens of grams and can be assembled on various resource-limited platforms or equipped with off-the-shelf optical systems. The technique transforms the challenge of high-dimensional imaging from a high-cost manufacturing and cumbersome system to one that is solvable through on-chip compression and agile computation.

Hyperspectral imaging captures spatial, temporal and spectral information of the physical world, characterizing the intrinsic optical properties of each location1. Compared with multispectral imaging, hyperspectral imaging acquires a substantially larger number of wavelength channels, ranging from tens to hundreds, and maintains a superior spatial mapping ability compared with spectrometry6. This high-dimensional information enables precise distinction of different materials with similar colours, empowering more intelligent inspection than human vision with higher spectral resolution and a wider spectral range. With these advantages, hyperspectral imaging has been widely applied in various fields such as remote sensing, machine vision, agricultural analysis, medical diagnostics and scientific monitoring2,3,4,5.

The most important challenge to realizing hyperspectral imaging is acquiring the dense spatial–spectral data cubes efficiently. Most of the existing hyperspectral imaging systems use individual optical elements (such as prisms, gratings or spectral filters) and mechanical components to scan hyperspectral cubes in the spatial or spectral dimension7. However, these systems typically suffer from drawbacks such as large size, heavy weight, high cost and time-consuming operation, which limit their widespread application. Owing to developments in compressive sensing theory and computational photography8, various computational snapshot hyperspectral imaging techniques, such as the computed-tomography imaging system (CTIS)9 and coded aperture snapshot spectral imaging (CASSI)10, have been developed, which encode multidimensional hyperspectral information into single-shot measurements and decode the data cube using compressive sensing or deep learning algorithms. Although these systems effectively improve temporal resolution, they still require individual optical elements for explicit light modulation, which imposes a heavy burden on lightweight integration11.

Numerous on-chip acquisition trials have been conducted to achieve integrated hyperspectral imaging. The most logical approach is to extend the classic Bayer pattern of red, green and blue (RGB) colour cameras by introducing more narrow-band filters, which has led to the development of commercial multispectral imaging sensors12. However, besides a substantial tradeoff between spatial and spectral resolution, this technique also sacrifices most of the light throughput owing to narrow-band filtering. Benefiting from finely tunable spectrum-filtering ability, nano-fabricated metasurfaces13,14,15,16, photonic crystal slab arrays17 and Fabry–Pérot filters18 have also been used for spectral modulation in a certain spectral range. Experimentally, most of these existing prototypes cover about 200 nm in the visible range18, with only around 20 channels. Recently, scattering media have been used for compact lensless hyperspectral imaging systems, building on their spatial multiplexing and point spread function properties19,20,21,22. Despite this progress, most of these on-chip techniques suffer from a narrow spectral range, low light throughput and an intrinsic tradeoff between spatial and spectral resolution. A comparison of the comprehensive performance of different techniques is provided in Extended Data Table 1.

In this work, we report an on-chip computational hyperspectral image sensor, termed the HyperspecI sensor, and its comprehensive framework of hardware fabrication, optical calibration and computational reconstruction. First, to acquire both spatial and spectral information effectively, we developed a broadband multispectral filter array (BMSFA) fabrication technique using photolithography. The BMSFA is composed of different broadband spectral modulation materials at different spatial locations. In contrast to the conventional narrow-band filters, the BMSFA can modulate incident light across the entire wide spectral range, resulting in much higher light throughput that benefits low-light and long-distance imaging applications. The modulated information is then intrinsically compressed and acquired by the underlying broadband monochrome sensor chip, enabling spatial–spectral compression with full temporal resolution. Second, to efficiently restore hyperspectral data cubes from the BMSFA-compressed measurements, we derived a lightweight and high-performance neural network (spectral reconstruction network (SRNet)), which has stronger feature extraction and prior modelling ability. Consequently, we can reconstruct hyperspectral images (HSIs) with high spatial and spectral resolution from each frame, realizing high-throughput real-time hyperspectral imaging.

Following the above framework, we fabricated two visible–near-infrared (VIS–NIR) hyperspectral image sensors (HyperspecI-V1 and HyperspecI-V2). The spectral response range of HyperspecI-V1 and HyperspecI-V2 is 400–1,000 nm and 400–1,700 nm, and the average light throughput is 71.8% and 74.8%, respectively. In low-light conditions, the HyperspecI sensors perform substantially better than mosaic multispectral cameras and scanning hyperspectral systems, as shown in Fig. 3. The average spectral resolution is 2.65 nm for HyperspecI-V1 and 8.53 nm for HyperspecI-V2. For hyperspectral imaging, the HyperspecI-V1 sensor produces 61 channels in the 400–1,000 nm range, each with 2,048 × 2,048 pixels at 47 fps. The HyperspecI-V2 sensor produces 96 wavelength channels with a 10-nm interval in the range of 400–1,000 nm and a 20-nm interval in the range of 1,000–1,700 nm. Each channel consists of 1,024 × 1,024 pixels at 124 fps. For more details of performance, please refer to Extended Data Table 1.

To demonstrate the practical abilities and broad application potential of the HyperspecI sensors, we conducted soil plant analysis development (SPAD) and soluble solid content (SSC) evaluation for intelligent agriculture, blood oxygen and water quality monitoring for human health, textile classification and apple bruise detection for industrial automation, and remote lunar detection for astronomy. These applications demonstrate the high signal-to-noise ratio (SNR), high-resolution, ultra-broadband and dynamic hyperspectral imaging abilities of our HyperspecI technique, providing unique benefits in low-light conditions, targeting dynamic scenes and detecting small or remote targets that are unattainable using other techniques. Furthermore, the compact size, light weight and high level of integration of HyperspecI make it suitable for use on platforms with limited payload capacity. We anticipate that this scheme may provide opportunities for the development of next-generation image sensors of higher information dimension, higher imaging resolution and higher degree of intelligence.

The HyperspecI sensor consists of two main components: a BMSFA mask and a broadband monochrome image sensor chip (Fig. 1a). The BMSFA encodes the high-dimensional hyperspectral information of the target scene in the spectral domain, and the underlying image sensor chip acquires the coupled two-dimensional measurements (Fig. 1d). Using a hybrid neural network SRNet, multi-channel HSIs can be reconstructed from each frame with high fidelity and efficiency (Extended Data Fig. 6a,b and Supplementary Information section 5).
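The per-pixel spectral coupling described above can be illustrated with a minimal numpy sketch of the forward model: each pixel integrates the scene spectrum weighted by the broadband transmission of its 4 × 4 mosaic unit. The array shapes and transmission values below are illustrative toys, not the actual BMSFA calibration data.

```python
import numpy as np

def bmsfa_measure(cube, trans):
    """Simulate BMSFA encoding: each pixel integrates the scene spectrum
    weighted by the broadband transmission of its 4x4 mosaic unit.

    cube:  (H, W, C) hyperspectral scene, H and W divisible by 4
    trans: (4, 4, C) transmission spectra of the 16 modulation materials
    """
    H, W, C = cube.shape
    # tile the 4x4 mosaic over the full sensor area
    mosaic = np.tile(trans, (H // 4, W // 4, 1))
    # per-pixel inner product over the spectral axis -> 2D measurement
    return (cube * mosaic).sum(axis=-1)

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 16))     # toy 16-channel scene
trans = rng.random((4, 4, 16))    # toy broadband transmissions
y = bmsfa_measure(cube, trans)    # single compressed frame, shape (8, 8)
```

Reconstruction then amounts to inverting this many-to-one mapping per superpixel, which is what the SRNet learns from calibrated measurement–spectrum pairs.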

a, The HyperspecI sensor consists of a BMSFA mask and a broadband monochrome image sensor chip. The BMSFA consists of a cyclic arrangement of 4 × 4 broadband materials for broadband spectral modulation, with each modulation unit 10 μm in size. The BMSFA is cured onto the bare photodiode array surface using SU-8 photoresist. b, The manufacturing process of BMSFA. We developed a low-cost fabrication strategy to produce BMSFA using photolithography. c, The transmission spectra of the 16 modulation materials and the coefficient correlation matrix. d, The imaging principle of the HyperspecI sensor. The light emitted from the target scene is modulated after passing through the BMSFA and then captured by the underlying broadband image sensor chip. The collected compressed data are then given as input to a reconstruction algorithm to decouple and output HSIs. e, The exemplar hyperspectral imaging results of the HyperspecI sensor. f, Illustration of the collected large-scale HSI image and video dataset using the HyperspecI sensor.


We developed a photolithography technique to fabricate the BMSFA. First, we prepared broadband materials based on organic materials with different spectral responses. Then, by coupling the broadband materials with the negative photoresist, we fabricated broadband spectral modulation materials suitable for lithography (Fig. 1c). The materials were selected for optimal broadband spectral modulation characteristics (Extended Data Fig. 9 and Supplementary Information section 2). Then, using an improved photolithography process, we solidified the spectral modulation materials on a high-transmission quartz substrate following the pre-designed photomask, forming the BMSFA (Fig. 1b and Extended Data Fig. 7). The photolithography process comprises photomask design, substrate preparation, photoresist coating, soft baking, UV exposure, post-exposure baking, development and hard baking (Supplementary Information section 3). To meet the demands of different spectral ranges, we designed and prepared BMSFAs with different material systems and spatial arrangements. We integrated the fabricated BMSFAs with CMOS (complementary metal oxide semiconductor) and InGaAs (indium gallium arsenide) image sensor chips, respectively (Fig. 1a and Extended Data Fig. 7b,c).

Figure 1e shows exemplar hyperspectral imaging results, demonstrating that the HyperspecI sensors can acquire rich spatial details and maintain high spectral accuracy across a wide spectral range. The comprehensive imaging results of more channels and scenes are provided in Extended Data Fig. 3. Furthermore, to demonstrate the high accuracy and efficiency of the HyperspecI sensors for hyperspectral image reconstruction, we compared the reported SRNet with the existing state-of-the-art model-based and deep-learning-based algorithms, which indicates that our SRNet model outperforms others in terms of both accuracy and efficiency (Supplementary Information section 5.3). Figure 1f shows the structure of the collected image and video dataset using the HyperspecI sensors, which might be useful for further hyperspectral imaging and sensing studies.

We conducted a series of experiments to validate the quantitative and qualitative performance of the HyperspecI sensors. First, we examined their spectral and spatial resolution. Figure 2a and Extended Data Fig. 3 show the reconstructed HSIs in the synthesized RGB format. We also compared the reconstructed spectra with the corresponding ground truth collected by commercial spectrometers (Ocean Optics USB 2000+ and NIR-Quest 512) at the locations indicated by yellow markers in the synthesized RGB images. We presented the reconstruction results of monochromatic light with an interval of 0.2 nm and compared the reconstruction results of our HyperspecI sensors with a commercial spectrometer under single-peak monochromatic light (Fig. 2b). The full width at half maximum (FWHM) of the monochromatic light is 2 nm. The average spectral resolution of the HyperspecI-V1 and HyperspecI-V2 sensors is 2.65 nm and 8.53 nm, respectively (Fig. 2b (iii) and (iv)). Moreover, we also used double-peak monochromatic light (FWHM 2 nm) to calibrate the spectral resolving ability of our sensors based on the Rayleigh criterion. The results demonstrate that the average resolvable double-peak distance of HyperspecI-V1 and HyperspecI-V2 reaches 3.23 nm and 9.76 nm, respectively. Second, to evaluate the spatial resolution, we acquired images of the USAF 1951 spatial resolution test chart using our HyperspecI sensors and corresponding monochrome cameras (with the same sensor chips and lens configuration). We present the HyperspecI-V1 results in Fig. 2c as a demonstration. The results show that the HyperspecI sensor can distinguish the fourth element of the third group, in which the width of the three lines is about 0.26 mm on the chart and occupies 9 pixels of the image, resulting in a spatial resolution of 11.31 lines per mm, which is comparable to the commercial monochrome camera.
Furthermore, a comparison of the light throughput of several representative hyperspectral imaging techniques is shown in Fig. 2d. The comparison shows that the average light throughput is 71.8% for HyperspecI-V1 and 74.8% for HyperspecI-V2, which is much higher than that of common RGB colour cameras (<30%), mosaic multispectral cameras (<10%) and CASSI systems (<50%) (Supplementary Information section 6.5).
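The FWHM figure used in the spectral-resolution calibration can be computed from a sampled spectrum as follows. This is a minimal sketch on a synthetic Gaussian line (not the paper's calibration data), using linear interpolation at the two half-maximum crossings.

```python
import numpy as np

def fwhm(wl, spec):
    """Full width at half maximum of a single-peak sampled spectrum,
    with linear interpolation at the two half-maximum crossings."""
    spec = np.asarray(spec, dtype=float)
    half = spec.max() / 2.0
    above = np.nonzero(spec >= half)[0]
    i0, i1 = above[0], above[-1]

    def crossing(a, b):
        # wavelength where the line between samples a and b hits `half`
        return wl[a] + (half - spec[a]) * (wl[b] - wl[a]) / (spec[b] - spec[a])

    return crossing(i1, i1 + 1) - crossing(i0 - 1, i0)

# synthetic monochromatic line: Gaussian centred at 520 nm, sigma = 2 nm
wl = np.linspace(500.0, 540.0, 4001)
spec = np.exp(-((wl - 520.0) ** 2) / (2 * 2.0 ** 2))
width = fwhm(wl, spec)   # analytically 2*sqrt(2*ln 2)*sigma, about 4.71 nm
```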

a, Exemplar hyperspectral imaging results. The reconstructed hyperspectral images are shown in the synthesized RGB format on the left. The spectral comparison between the reconstructed spectra (RS) and ground truth (GT), acquired by the commercial spectrometers, are shown on the right (denoted by solid and dashed lines, respectively). b, Spectral resolution calibration. (i), (ii), Spectral comparison between the HyperspecI sensors (green solid lines) and commercial spectrometer (black dashed lines). The monochromatic light (FWHM 2 nm) was produced by the commercial Omno151 monochromator. (iii), (iv), Single-peak and double-peak monochromatic light were used to analyse the spectral resolving ability of our sensors. The average FWHM of the reconstructed spectra under single-peak monochromatic light for HyperspecI-V1 and HyperspecI-V2 are 2.65 nm and 8.53 nm, respectively. The average resolvable peak distance of reconstructed spectra based on the Rayleigh criterion for HyperspecI-V1 and HyperspecI-V2 under double-peak monochromatic light are 3.23 nm and 9.76 nm, respectively. c, Spatial resolution calibration using the USAF 1951 resolution test chart. The curves of a monochrome camera (red line) and our HyperspecI-V1 sensor (blue line) for elements 1–6 of group 3 are presented. d, Light throughput calibration. a.u., arbitrary units.


We conducted an imaging experiment on small point targets of varying sizes (Fig. 3a). The results indicate that the HyperspecI sensor can achieve stable and accurate spectral reconstruction even when the radius of the targets is smaller than a superpixel. In Fig. 3b, we also compared the hyperspectral imaging performance of our HyperspecI sensor, a commercial mosaic multispectral camera (Silios, CMS-C) and a scanning hyperspectral camera (FigSpec, FS-23) in low-light conditions. The light source was a Thorlabs SLS302 at an illuminance level of 290 lux. These experiments demonstrate that our sensor exhibits superior hyperspectral imaging quality in low-light environments, attributed to its higher light throughput and SNR. The superiority is further illustrated by the remote lunar detection experiment presented in Extended Data Fig. 1. We further demonstrated the real-time imaging performance of our HyperspecI-V1 sensor at a frame rate of 47 fps (Fig. 3c). As a comparison, we presented the imaging results using the scanning hyperspectral imaging camera. This comparison validates that our HyperspecI sensor achieves the full temporal resolution of the underlying image sensor chip for dynamic imaging at a high frame rate, whereas traditional scanning hyperspectral cameras are unable to capture dynamic scenes (Supplementary Information section 6.2).

a, Hyperspectral imaging results of small targets. The raw measurements, synthesized RGB images and hyperspectral images of several exemplar bands are shown on the left. A comparison of the background spectrum, ground truth and reconstructed spectra of the small targets, which are marked in the synthesized RGB image with a blue rectangle, is shown on the right. b, Hyperspectral imaging comparison in low-light conditions. The imaging results of our HyperspecI sensor, a commercial mosaic multispectral camera and a commercial scanning hyperspectral imaging camera are compared at a fixed exposure time of 1 ms. The synthesized RGB images, exemplar spectral images at 580 nm and 690 nm and the corresponding normalized data are shown. c, Hyperspectral imaging results at video frame rate. The results at three different time points (0 s, 1 s and 2 s) while the object was undergoing translational motion at a speed of about 0.5 m s−1 are shown on the left. The results at three different time points (0 s, 0.02 s, and 0.04 s) while the object was undergoing rotational motion at a speed of around 6 rad s−1 are shown on the right. The comparison of the result is demonstrated using synthesized RGB images and spectral images at 550 nm, 700 nm and 850 nm. a.u., arbitrary units.


The above experiments demonstrate the broad spectral range, high spatial resolution, high spectral accuracy, high light throughput and real-time frame rate of our HyperspecI sensors. Furthermore, we studied the SNR (Supplementary Information section 6.1), noise resistance (Supplementary Information section 6.3), dynamic range (Supplementary Information section 6.4) and thermal stability (Extended Data Fig. 4 and Supplementary Information section 6.7) of our HyperspecI sensors.

Effective detection of target components is imperative for improving crop management strategies23. The SPAD index, highly correlated with the chlorophyll content24, is important for assessing plant physiology. Similarly, the SSC is an important indicator for fruit quality assessment and determination of harvest time25. However, conventional SPAD and SSC measurements involve destructive sampling, which is complicated and time-consuming. Advancements in molecular spectroscopy, coupled with chemometric techniques, have popularized VIS–NIR spectroscopy as a non-destructive alternative for internal quality assessment26. To demonstrate the applicability of the HyperspecI sensor in intelligent agriculture, we developed a prototype for non-destructive SPAD and SSC measurements (Fig. 4a).

a, The prototype using the HyperspecI sensor for agriculture spectra acquisition. It includes two distinct modes: the mode of leaf transmission spectra acquisition and the mode of apple reflectance spectra acquisition. b, The working principle of measuring the SPAD index, which is used to evaluate the chlorophyll content of leaves, is shown on the left. SPAD evaluation results using the HyperspecI sensor are shown on the right. c, The working principle of measuring SSC, used to evaluate apple quality, is shown on the left. The comparison between the measured SSC using a commercial product and the predicted SSC using our HyperspecI sensor and PLS regression model is shown on the right. d, The comparison between RGB images and the synthesized RGB images using the reconstructed hyperspectral images. The figure in the middle shows a comparison between the spectra acquired by a commercial spectrometer and the reconstructed spectra at exemplar randomly selected locations. a.u., arbitrary units.


Figure 4b shows the SPAD detection principles based on the Lambert–Beer law, using the HyperspecI sensor to acquire transmission spectra of 200 leaves. The values at the characteristic peaks (660 nm and 720 nm) were used to establish the regression model. Validation with an additional 20 leaves resulted in high precision, with a root mean square error of 1.0532 and a relative error of 3.73%. Figure 4c outlines the non-destructive SSC detection procedure in apples. Spectral curves show peaks and troughs indicative of various apple characteristics. Our partial least squares regression model accurately predicts SSC, with a correlation coefficient of 0.8264 and a root mean square error of 0.6132% for the training set, and 0.6162 and 0.7877% for the test set, respectively. The relative error of the prediction set is 5.30%. Figure 4d shows the RGB and reconstructed hyperspectral images of leaves and apples, highlighting the potential of the sensor for agricultural applications. These results emphasize the promise of the HyperspecI sensor for non-destructive analysis in intelligent agriculture. For more details, refer to Supplementary Information section 7.
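The two-band regression against SPAD readings can be sketched with ordinary least squares on the absorbances at the two characteristic peaks. The data and coefficients below are synthetic and purely illustrative; the paper's model was fitted to 200 measured leaves.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
a660 = rng.uniform(0.2, 1.5, n)   # absorbance at the 660 nm chlorophyll peak
a720 = rng.uniform(0.1, 1.0, n)   # absorbance at the 720 nm peak
# synthetic ground truth: SPAD assumed linear in the two absorbances
spad = 18.0 * a660 + 7.0 * a720 + 5.0 + rng.normal(0, 0.3, n)

# least-squares fit of SPAD = w1*A660 + w2*A720 + b
X = np.column_stack([a660, a720, np.ones(n)])
w, *_ = np.linalg.lstsq(X, spad, rcond=None)

pred = X @ w
rmse = np.sqrt(np.mean((pred - spad) ** 2))
```

In practice the absorbances would come from the reconstructed transmission spectra via the Lambert–Beer law, A = −log10(I/I0).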

The rising attention to health concerns has led to a proliferation of health monitoring equipment, yet its progress is hampered by limitations in resolution, real-time abilities and portability. To demonstrate the advantages of our HyperspecI in dynamic, high-resolution imaging, we conducted experiments on blood oxygen detection and water quality assessment, illustrating its potential for real-time health monitoring as an alternative to traditional bulky and complex equipment. For blood oxygen saturation monitoring, we developed a prototype device to detect changes in arterial blood absorption at specific wavelengths due to pulsation (Fig. 5a). When the finger under measurement is placed into the device, the transmission spectra are acquired using a broad-spectrum light source and the HyperspecI sensor. By reducing the effective number of pixels in the HyperspecI sensor, we can achieve a collection frame rate of up to 100 Hz. Subsequently, the acquired data are processed to obtain a series of spectral profiles at a certain area on the finger. Finally, the pulsatile component (AC) is extracted from the photoplethysmography signal at two characteristic bands (780 nm and 830 nm), from which the blood oxygen saturation is derived (Supplementary Information section 8.1). Figure 5b shows a comparison of measurements between the HyperspecI sensor and a commercial oximeter.
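The AC/DC extraction at the two characteristic bands follows the standard ratio-of-ratios approach for pulse oximetry. The sketch below uses synthetic PPG traces, and the linear calibration constants are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def ratio_of_ratios(ppg_780, ppg_830):
    """Ratio of pulsatile (AC) to steady (DC) components at two bands."""
    ac1, dc1 = ppg_780.max() - ppg_780.min(), ppg_780.mean()
    ac2, dc2 = ppg_830.max() - ppg_830.min(), ppg_830.mean()
    return (ac1 / dc1) / (ac2 / dc2)

# synthetic PPG signals over full cardiac cycles
t = np.linspace(0.0, 2 * np.pi, 1001)
ppg_780 = 1.0 + 0.05 * np.sin(t)   # stronger pulsatile modulation at 780 nm
ppg_830 = 1.0 + 0.02 * np.sin(t)

R = ratio_of_ratios(ppg_780, ppg_830)   # (0.10/1.0)/(0.04/1.0) = 2.5 here
spo2 = 110.0 - 14.0 * R                  # illustrative linear calibration only
```

A real device calibrates the linear mapping from R to SpO2 against a reference oximeter, as the comparison in Fig. 5b implies.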

a, The prototype of the HyperspecI sensor for blood oxygen saturation (SpO2) monitoring. b, i, The transmission spectra through the finger were obtained at a collection rate of 100 Hz. ii, Two photoplethysmography (PPG) signals at 780 nm and 830 nm, corresponding to two bands with different intensities of HbO2 and Hb absorption. The blood oxygen saturation can be accurately determined by analysing and calibrating the PPG signals at these two characteristic bands. iii, Comparative analysis with a commercial oximeter product demonstrates a high level of consistency in the obtained results. c, Three exemplar frames of HyperspecI measurements demonstrating the solution diffusion process, accompanied by the corresponding images captured using an RGB camera. In the petri dish, solution 1 was positioned at the top left corner, and solution 2 was placed at the bottom left corner. These two solutions were added to distilled water at the top right corner. Hyperspectral images acquired by the HyperspecI sensor are presented in the synthesized RGB format. d, Comparison of segmentation maps between an RGB camera (left) and the HyperspecI sensor (right). a.u., arbitrary units.


Furthermore, we conducted an effluent diffusion monitoring experiment to explore the ability of the HyperspecI sensor for water quality detection. During the experiment, two solutions with similar colours but different compositions were rapidly injected into distilled water; the diffusion process was simultaneously recorded using the HyperspecI sensor and an RGB camera (Fig. 5c). Distinguishing between these two solutions using RGB images is challenging. However, their differentiation becomes straightforward through the disparities in their spectral curves and spectral images at the NIR range (780 nm). Furthermore, the segmentation results of RGB images and reconstructed hyperspectral images show the superiority and potential of our HyperspecI in real-time high-resolution spectral imaging and water quality assessment (Fig. 5d and Supplementary Information section 8.2).

To demonstrate the near-infrared hyperspectral imaging ability and accuracy of our sensors, we applied them in textile classification and apple bruise detection. For textile classification, reflectance spectra of textiles were acquired using the HyperspecI sensor (Fig. 6a). Previous research27 has shown that characteristic spectral bands of cotton fabrics (at 1,220 nm, 1,320 nm and 1,480 nm) and polyester fabrics (at 1,320 nm, 1,420 nm and 1,600 nm) are distinct, facilitating their classification (Fig. 6b–d). In our experiment, we prepared 204 samples, including various cotton and polyester fabrics, divided into training (75 cotton and 75 polyester) and testing datasets (27 cotton and 27 polyester). Given the diverse appearance of these samples, their classification by visual inspection is challenging (Fig. 6b). Subsequently, we used the support vector machine (SVM) algorithm for automatic fabric category classification (Fig. 6c). For the testing phase, the overall classification accuracy reached 98.15% (Supplementary Information section 9.1).
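The SVM classification step can be sketched as follows. The spectra here are synthetic stand-ins with absorption dips placed at the characteristic bands named above; the sample counts mirror the paper's splits but the data, dip depths and noise level are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def synth_spectra(n, peaks, bands):
    """Synthetic NIR reflectance spectra with absorption dips at `peaks` (nm)."""
    wl = np.linspace(1000, 1700, bands)
    base = rng.uniform(0.6, 0.9, (n, 1))          # per-sample baseline albedo
    spec = np.repeat(base, bands, axis=1)
    for p in peaks:
        spec -= 0.2 * np.exp(-((wl - p) ** 2) / (2 * 30.0 ** 2))
    return spec + rng.normal(0, 0.01, (n, bands))  # measurement noise

bands = 96
cotton = synth_spectra(75, [1220, 1320, 1480], bands)     # cotton-like dips
polyester = synth_spectra(75, [1320, 1420, 1600], bands)  # polyester-like dips
X = np.vstack([cotton, polyester])
y = np.array([0] * 75 + [1] * 75)

clf = SVC(kernel="linear").fit(X, y)               # train on 150 spectra
X_test = np.vstack([synth_spectra(27, [1220, 1320, 1480], bands),
                    synth_spectra(27, [1320, 1420, 1600], bands)])
y_test = np.array([0] * 27 + [1] * 27)
acc = clf.score(X_test, y_test)                    # held-out accuracy
```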

a, The experiment configuration for the acquisition of fabric spectra. b, Measurements and reconstructed hyperspectral images of textile samples, together with synthesized RGB (sRGB) representations and exemplar hyperspectral images (1,220 nm, 1,320 nm and 1,480 nm for cotton fabrics and 1,320 nm, 1,420 nm and 1,600 nm for polyester fabrics). c, An SVM model for fabric classification based on spectral characteristics, achieving a high accuracy of 98.15% on the prediction set. d, The apple samples and experiment configuration. Apple samples with random bruises were constructed using the device shown on the right. e, The acquired measurement of apples and the corresponding spectral curves of bruised and normal portions. The characteristic wavelengths of apple bruises are distributed at 1,060 nm, 1,260 nm and 1,440 nm. f, Comparison of apple bruise detection between manual labelling (green bounding boxes) and model prediction (red bounding boxes). We used the pre-trained YOLOv5 network to detect bruised portions of apples. g, Quantitative results of apple bruise detection based on NIR and RGB images, respectively. a.u., arbitrary units.


Apple bruises, often located beneath the skin, are challenging to detect visually, leading to low identification accuracy and efficiency. Benefiting from the wide spectral range of our HyperspecI, bruised areas exhibit spectral characteristics near wavelengths of 1,060 nm, 1,260 nm and 1,440 nm because of water absorption of NIR light, which is crucial for invisible bruise detection. In our experiment, we prepared 224 samples of Qixia Fuji apples and used a 30-cm steel pipe to systematically create bruises on random locations of each apple (Fig. 6d). We used the HyperspecI sensor and an RGB camera to acquire hyperspectral and colour images of these apples, constructing two separate image datasets (Fig. 6e). Each dataset, comprising 224 images, was used to train a YOLOv5-based detection network, and the remaining 40 samples were used for testing (Supplementary Information section 9.2). Spectral images were processed to create synthesized colour representations, distinctly marking bruised regions for enhanced visualization (Fig. 6f). The detection precision and recall scores on the near-infrared spectral images are markedly higher than those on the RGB images (Fig. 6g). The higher mAP50 and mAP50-95 scores also indicate the effectiveness of using infrared spectral information for apple bruise detection, and further demonstrate that our HyperspecI sensor can capture crucial spectral features of subtle changes in the NIR range.
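The precision and recall figures in Fig. 6g come from matching predicted boxes to labelled bruises by intersection-over-union. A minimal sketch of that matching (toy boxes, greedy one-to-one assignment at the conventional IoU threshold of 0.5):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedy matching: each prediction may claim one unmatched ground truth."""
    matched = set()
    tp = 0
    for p in preds:
        best = max(
            ((iou(p, g), i) for i, g in enumerate(gts) if i not in matched),
            default=(0.0, -1),
        )
        if best[0] >= thr:
            matched.add(best[1])
            tp += 1
    return tp / len(preds), tp / len(gts)

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]      # two labelled bruises
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]    # one hit, one false alarm
prec, rec = precision_recall(preds, gts)       # 0.5 precision, 0.5 recall
```

mAP50 and mAP50-95 extend this by averaging precision over recall levels and (for the latter) over IoU thresholds from 0.5 to 0.95.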

This work introduces an on-chip hyperspectral image sensor technique, termed HyperspecI, which follows the computational imaging principle to realize integrated and high-throughput hyperspectral imaging. The HyperspecI sensor first acquires encoded hyperspectral information by integrating a BMSFA and a broadband monochrome sensor chip, and then reconstructs hyperspectral images using deep learning. Compared with the classic scanning scheme, the HyperspecI sensor maintains the full temporal resolution of the underlying sensor chip. Compared with the existing snapshot systems, the reported technique demonstrates enhanced integration with a lightweight and compact size. Extensive experiments demonstrate the superiority of the HyperspecI sensor in terms of high spatial–spectral–temporal resolution, wide spectral response range and high light throughput. These advantages provide great benefits in hyperspectral imaging applications such as detection under low light, targeting dynamic scenes, and detecting small or remote targets that are unattainable using existing methods. We demonstrated the broad application potential of the HyperspecI sensor in areas such as intelligent agricultural monitoring and real-time human health monitoring. The different applications validated the versatility, flexibility and robustness of the HyperspecI technique.

The HyperspecI technique can be further extended. First, by using advanced fabrication techniques such as electron beam lithography28, nanoimprinting and two-photon polymerization, higher degrees of freedom and precision for BMSFA design and HyperspecI integration can be achieved. Moreover, considering the excellent compatibility with other materials, the derived BMSFA strategy can be paired with high-performance 2D materials29, enabling more precise optical control and enhanced optical performance. Second, the generalization ability of hyperspectral reconstruction can be further enhanced by training data augmentation, transfer learning and illumination decomposition, which can help in tackling common challenges such as outlier input, metamerism and varying illumination30. Third, the real-time hyperspectral imaging ability of HyperspecI can be combined with heterogeneous detection devices, such as LIDAR and SAR, to achieve multi-source fusion detection31. This is important for realizing high-precision sensing and making high-reliability decisions in complex environments. Fourth, the highly compatible architecture of HyperspecI provides off-the-shelf solutions for easy integration with various imaging platforms, thus directly upgrading their sensing dimension and enabling multifunctional applications. For instance, the integration of vibration-coded microlens arrays into the BMSFA can enable high-resolution hyperspectral 3D photography32. The combination with ultrafast imaging systems can realize hyperspectral transient observation33. By further designing BMSFA with multidimensional multiplexing abilities (such as polarization and phase encoding), large-scale multidimensional imaging can be achieved34. When incorporated with fluorescence imaging systems, the fluorescence signals of different dyes can be effectively separated based on spectral characteristics in a snapshot manner, thus improving detection sensitivity and efficiency in biomedicine science35,36. 
Overall, we believe this work may provide opportunities for the development of next-generation image sensors of higher information dimension, higher imaging resolution and higher degree of intelligence.

We used 16 types of organic dyes covering 400–1,000 nm as spectral modulation materials for HyperspecI-V1 sensor fabrication. For the HyperspecI-V2 sensor, we used 10 types of organic dyes and 6 types of nano-metal oxides to cover 400–1,700 nm. To prepare the organic dyes for the photolithography processes, we mixed 0.2 g of each organic dye with 20 ml of photoresist (SU-8 2010) and dissolved the mixture using an ultrasonic liquid processor (NingHuai NH-1000D) at room temperature. To ensure complete dissolution and remove impurities, the mixed solution was filtered using filters with a 3 μm pore size. To prepare the nano-metal oxides for the photolithography processes, we used a dispersion solvent (PGMEA), photoresist (SU-8 2025) and nano-metal oxide powder. We mixed 20 g of each material powder with 80 g of PGMEA. Following a 48 h dispersion process using the ultrasonic liquid processor at room temperature, we obtained material dispersion fluids with a mass fraction of 20%. To obtain an appropriate concentration, we mixed 10 ml of each material dispersion with 20 ml of photoresist at a concentration ratio of 1:2. These mixtures were stirred for 15 min using the ultrasonic liquid processor. Filters with a 3 μm pore size were then used to remove impurities. Subsequently, we applied the spectral modulation photoresist onto quartz substrates (JGS3) and formed test smears at 4,000 rpm on a spin coater (Helicoater, HC220PE). We validated the spectral properties of these modulation photoresists using a spectrophotometer (PerkinElmer Lambda 950). The details of the experimental equipment, operating procedures and analysis of the results are presented in Extended Data Figs. 8 and 9 and Supplementary Information section 2.
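
As a sanity check on the mixing ratios above, the mass fraction of nano-metal oxide in the final photoresist mixture can be estimated. This is a rough back-of-the-envelope calculation, not part of the reported protocol, and it assumes the dispersion fluid and the SU-8 photoresist have comparable densities (an assumption, since densities are not given in the text):

```python
# Rough mass-fraction estimate for the nano-metal-oxide photoresist mixture.
# Assumption (not stated in the text): dispersion fluid and SU-8 photoresist
# have comparable densities, so volume ratios approximate mass ratios.

dispersion_volume_ml = 10.0   # 20% mass-fraction dispersion fluid
photoresist_volume_ml = 20.0  # SU-8 2025, mixed at a 1:2 ratio
dispersion_mass_fraction = 0.20

oxide_mass = dispersion_volume_ml * dispersion_mass_fraction
total_mass = dispersion_volume_ml + photoresist_volume_ml
final_mass_fraction = oxide_mass / total_mass
print(f"approximate oxide mass fraction: {final_mass_fraction:.1%}")  # ~6.7%
```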

The fabrication of the BMSFA includes a series of processes of photoresist dissolution, photomask design, substrate preparation, photoresist coating, soft baking, UV exposure, post-exposure baking, development and hard baking (Fig. 1b and Extended Data Fig. 7). First, we prepared the photoresists with different spectral modulation properties. Then, we dispensed one kind of photoresist onto the quartz substrate (JGS3, 4 inches), ensuring uniform distribution of the photoresist containing spectral modulation materials at 4,000 rpm on a spin coater. After soft baking at 95 °C for 5 min, we used UV photolithography (SUSS MA6 Mask Aligner, SUSS MicroTec AG) to cure the photoresist at the designed positions on the quartz substrate. The exposure dose of the UV lithography machine was 1,000 mJ cm−2. This process was conducted for the different modulation materials using a designed photomask. After post-exposure baking at 95 °C for 10 min, development removed the unexposed areas, leaving the photoresist only at the specific locations. Then, the wafers were hard baked on a hotplate at 150 °C for 5 min. We repeated the above steps to pattern all 16 types of spectral modulation photoresists onto the quartz substrate. Eventually, we poured pure SU-8 photoresist onto the finished substrate, completing the photolithography process through photoresist coating, soft baking, UV exposure, post-exposure baking and hard baking. Following these steps, the BMSFA was prepared for our HyperspecI sensors.
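
The per-material lithography cycle described above can be summarized as a parameterized process recipe. The sketch below is purely illustrative (the step names and data layout are ours); the numerical values are taken from the fabrication description:

```python
# Illustrative per-material photolithography recipe for one spectral-modulation
# photoresist; parameter values are taken from the fabrication description.

recipe = [
    ("spin_coat",          {"substrate": "quartz JGS3, 4 inch", "speed_rpm": 4000}),
    ("soft_bake",          {"temp_C": 95, "time_min": 5}),
    ("uv_exposure",        {"tool": "SUSS MA6", "dose_mJ_per_cm2": 1000}),
    ("post_exposure_bake", {"temp_C": 95, "time_min": 10}),
    ("develop",            {"removes": "unexposed areas"}),
    ("hard_bake",          {"temp_C": 150, "time_min": 5}),
]

# The full BMSFA repeats this cycle once per modulation material (16 for the
# V1 sensor), each pass exposing a different sub-pixel position of the mask,
# plus a final capping pass with pure SU-8 photoresist.
n_materials = 16
total_lithography_cycles = n_materials
```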

For sensor integration, the HyperspecI-V1 sensor was prepared by combining the BMSFA with the Sony IMX264 chip, which covers the 400–1,000 nm spectral range. The HyperspecI-V2 sensor was prepared by combining the BMSFA with the Sony IMX990 chip, covering the 400–1,700 nm spectral range. We used a laser engraving machine to remove the packaging glass from the monochrome sensor. Then, we cured the BMSFA onto the sensor surface with photoresist under ultraviolet illumination, ensuring optimal sensor integration. The details of BMSFA fabrication and sensor integration are presented in Extended Data Fig. 7 and Supplementary Information section 3.

We used a monochromator (Omno151, spectral range 200–2,000 nm) to generate monochromatic light with a FWHM of 10 nm. After passing through a collimated optical path, the monochromatic light was uniformly irradiated onto a power meter probe (Thorlabs S130VC, S132C) and the HyperspecI sensors. In the automated calibration process, we developed a program to control the wavelength of the monochromatic light, acquire the power meter value and save the corresponding measurements of the HyperspecI sensors. This process was repeated for each wavelength to automatically collect the compressive sensing matrix. For more details, refer to Extended Data Fig. 7f,g and Supplementary Information section 4.
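
Conceptually, each wavelength sweep fills one column of the sensing matrix: the sensor frame at a given band, normalized by the measured optical power, gives every pixel's response to that band. A minimal NumPy sketch, with a simulated sensor standing in for the real hardware (all function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_wavelengths, n_pixels = 96, 64  # e.g. an 8 x 8 pixel patch, 96 channels

# Simulated ground-truth per-pixel broadband transmission curves: a stand-in
# for the physical BMSFA + sensor response, which in practice is unknown and
# is exactly what calibration recovers.
true_response = rng.uniform(0.3, 1.0, size=(n_pixels, n_wavelengths))

def capture(wavelength_idx, power):
    """Stand-in for: set the monochromator wavelength, read a sensor frame."""
    return true_response[:, wavelength_idx] * power

# Automated calibration: sweep the wavelengths, normalize each frame by the
# power-meter reading to obtain one column of the sensing matrix A.
A = np.zeros((n_pixels, n_wavelengths))
for k in range(n_wavelengths):
    power = 1.0 + 0.1 * rng.random()   # per-band source power (varies)
    frame = capture(k, power)
    A[:, k] = frame / power

# A now maps a scene spectrum s to the encoded snapshot measurement m = A @ s,
# which is the forward model the reconstruction network inverts.
s = rng.random(n_wavelengths)
m = A @ s
```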

We used a data-driven method to reconstruct hyperspectral images (HSIs) from measurements (Extended Data Fig. 6a,b). SRNet is a hybrid neural network that combines the core features of Transformer and convolutional neural network architectures for efficient, high-precision reconstruction. It uses a U-Net-shaped architecture as the baseline, the basic component of which is the spectral attention module (SAM) that focuses on extracting the spectral features of HSIs. SAM applies the attention mechanism in the spectral dimension rather than in the spatial dimension to reduce running time and memory cost. Moreover, this strategy enables us to compute cross-covariance across spectral channels and create attention feature maps with implicit knowledge of spectral information and global context37,38. For more details, refer to Supplementary Information section 5.2.
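
The efficiency idea behind SAM, attention computed across spectral channels rather than spatial positions, can be sketched in a few lines of NumPy. This is a simplified single-head illustration of channel-wise attention in the style of refs. 37 and 38, not the actual SRNet implementation:

```python
import numpy as np

def spectral_attention(x, temperature=1.0):
    """Simplified channel-wise (spectral) self-attention.

    x: feature map of shape (C, H, W). Attention is computed over the C
    channel dimension, giving a C x C map instead of an (H*W) x (H*W)
    spatial map, so memory cost is independent of image size.
    """
    c, h, w = x.shape
    tokens = x.reshape(c, h * w)                       # one token per channel
    tokens = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    attn = tokens @ tokens.T / temperature             # C x C cross-covariance
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)      # softmax over channels
    out = attn @ x.reshape(c, h * w)                   # mix channels
    return out.reshape(c, h, w)

features = np.random.default_rng(1).standard_normal((8, 32, 32))
out = spectral_attention(features)
assert out.shape == features.shape
```

The 8 × 8 attention map here would be 1,024 × 1,024 if computed spatially on the same input, which is the memory saving the text describes.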

Our training dataset was collected using the commercial FigSpec-23 and GaiaField Pro-N17E-HR hyperspectral cameras, both integrated under a push-broom scanning mechanism, as shown in Extended Data Fig. 5. The training data consisted of 96 spectral channels, with 61 channels at 10 nm intervals in the 400–1,000 nm range and 35 channels at 20 nm intervals in the 1,000–1,700 nm range (see Supplementary Information section 5.1 for more details). Considering the high spatial resolution of the measurements, we randomly divided the calibrated pattern into several sub-patterns for training, which also avoids overfitting to a particular BMSFA encoding pattern. During each iteration, we randomly selected a 512 × 512 sub-pattern from the original full-resolution BMSFA pattern (2,048 × 2,048 for HyperspecI-V1 and 1,024 × 1,024 for HyperspecI-V2). The model was trained using the Adam optimizer (β1 = 0.9, β2 = 0.999) for 1 × 106 iterations. The learning rate was initialized to 4 × 10−4, and a cosine annealing scheme was adopted. We chose the root mean square error, mean relative absolute error and total variation (TV) as the components of the hybrid loss function. We trained the model on the PyTorch platform with a single NVIDIA RTX 4090 GPU. The measurements and reconstructed HSIs synthesized as RGB images are shown in Extended Data Fig. 6c,d. Exemplar reconstructed spectra are shown in Fig. 2a and Extended Data Fig. 3.
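
The hybrid loss (RMSE + MRAE + TV) can be sketched as follows in NumPy. The weighting coefficients are illustrative placeholders, since they are not given in this section, and the TV term is the common anisotropic form (an assumption):

```python
import numpy as np

def hybrid_loss(pred, target, w_rmse=1.0, w_mrae=1.0, w_tv=0.01, eps=1e-6):
    """Sketch of an RMSE + MRAE + total-variation training loss.

    pred, target: hyperspectral cubes of shape (C, H, W).
    Weights are illustrative; the paper does not specify them here.
    """
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    mrae = np.mean(np.abs(pred - target) / (np.abs(target) + eps))
    # Anisotropic TV on the prediction: penalizes spatial roughness.
    tv = np.mean(np.abs(np.diff(pred, axis=1))) + \
         np.mean(np.abs(np.diff(pred, axis=2)))
    return w_rmse * rmse + w_mrae * mrae + w_tv * tv

x = np.random.default_rng(2).random((4, 16, 16))
assert hybrid_loss(x, x, w_tv=0.0) == 0.0  # fidelity terms vanish at pred == target
```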

As shown in Extended Data Fig. 1a, we used a telescope (CELESTRON NEXSTAR 127SLT, 1,500 mm focal length, 127 mm aperture Maksutov–Cassegrain) to image the moon and compared the imaging results of our HyperspecI sensor with those of a line-scanning hyperspectral camera (FigSpec-23) and a mosaic multispectral camera (Silios CMS-C). The target scenes include the Mare Crisium (Extended Data Fig. 1c, region 2) and Mare Fecunditatis (Extended Data Fig. 1c, regions 1 and 4) regions of the moon during the crescent phase. The acquisition frame rate of our HyperspecI sensor was set to 47 fps, with an exposure time of 21 ms. The mosaic multispectral camera has an acquisition frame rate of 30 fps, with an exposure time of 33 ms. The line-scanning hyperspectral camera requires approximately 100 s to capture one HSI frame. For more details, refer to Supplementary Information section 10.

Metamerism refers to the phenomenon in which different spectra produce the same colour in the visible spectral range. To validate the ability of HyperspecI to distinguish materials with identical RGB values, we conducted two experiments (Extended Data Fig. 2). First, we tested real and fake potted plants. Points with the same colour are marked in Extended Data Fig. 2a (ii), in which the red points on the real plant and the yellow points on the fake plant have the same RGB values. Extended Data Fig. 2a (iii) shows the original measurement from our HyperspecI sensor, and the reconstructed hyperspectral image is shown in Extended Data Fig. 2a (v). We plotted the spectra of points P1 and P2 on both the real and fake plants (Extended Data Fig. 2a (iv)). The spectra show that the leaves of the real plant exhibit distinct spectral features because of variations in chlorophyll and water content (highlighted in the blue block of Extended Data Fig. 2a (iv)), whereas the fake plant shows completely different spectra. Second, we tested real and fake strawberries, which present nearly identical appearances, textures and colours (Extended Data Fig. 2b). By extracting the spectra of points P1 and P2 on both the real and fake strawberries, we observed distinct absorption peaks at 670 nm and 750 nm in the real strawberries, whereas the spectra of the fake strawberries appeared smoother.
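
Metamerism can also be illustrated numerically: with only three broad RGB-like sensitivity curves, two visibly different spectra can integrate to identical three-channel responses, whereas the 96 narrow channels of the HyperspecI sensor keep them apart. A toy construction (all curves are synthetic, not real cone or camera responses):

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands = 96                       # fine spectral sampling, as in HyperspecI

# Three broad, synthetic "RGB-like" sensitivity curves (toy stand-ins for
# human cone / colour-camera responses).
centers = np.array([20, 48, 76])
grid = np.arange(n_bands)
S = np.exp(-0.5 * ((grid[None, :] - centers[:, None]) / 10.0) ** 2)  # 3 x 96

# Start from one spectrum, then add a component from the null space of S:
# the 3-channel response cannot change, but the spectrum does.
s1 = 0.5 + 0.3 * np.sin(grid / 9.0)
_, _, vt = np.linalg.svd(S)
null_component = vt[-1]            # direction invisible to the 3-channel sensor
s2 = s1 + 0.2 * null_component

rgb1, rgb2 = S @ s1, S @ s2
print(np.allclose(rgb1, rgb2))     # True: identical colour...
print(np.allclose(s1, s2))         # False: ...from different spectra
```

A 96-channel measurement of `s1` and `s2` differs band by band, which is exactly the distinction exploited in the real-versus-fake plant and strawberry experiments.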

The experimental setup for studying the thermal stability of the BMSFA modulation mask is shown in Extended Data Fig. 4a. This setup consists of a light source (Thorlabs SLS302 with a stabilized quartz tungsten halogen lamp of 10 W output optical power), an illumination module (consisting of optical lens, aperture stop, field stop, beam splitter of Thorlabs VDFW5/M and objective lens of Olympus microscope objective A 10 PL 10 × 0.25), a heating stage (JF-956, 30–400 °C), several support components (Thorlabs CEA1400) and a fine-tune module (GCM-VC 13M). The fabricated BMSFA modulation mask was placed on the heating stage, and the temperature was sequentially increased in 10 °C steps from 20 °C (room temperature) to 200 °C. After the temperature stabilized at each step (waiting 10 min after the actual temperature reached the set temperature), we collected an image of the mask, as shown in Extended Data Fig. 4b. For more details, refer to Supplementary Information section 6.7 and Supplementary Information Video.

Next, we placed the HyperspecI sensor on the heating stage and used it to acquire hyperspectral images of the same scene at different operating temperatures, and then compared image similarity and spectral consistency. According to the manual provided by Sony, the operational temperature range of the sensor chip is 0 °C to 50 °C, with a typical operating surface temperature of around 37 °C at room temperature (20 °C). To assess the reconstruction performance of the sensor at varying temperatures, the heating-stage temperature was increased incrementally from 40 °C to 70 °C at 10 °C intervals. At each temperature, the sensor was powered on for 1 h to achieve temperature stabilization. Extended Data Fig. 4d–f shows the hyperspectral imaging performance for the same scene at different temperatures.

All data generated or analysed during this study are included in this published article and the public repository at GitHub (https://github.com/bianlab/Hyperspectral-imaging-dataset).

The demo code of this work is available from the public repository at GitHub (https://github.com/bianlab/HyperspecI).

Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Proc. Mag. 19, 17–28 (2002).

Li, S. et al. Deep learning for hyperspectral image classification: an overview. IEEE Trans. Geosci. Remote 57, 6690–6709 (2019).

Backman, V. et al. Detection of preinvasive cancer cells. Nature 406, 35–36 (2000).

Hadoux, X. et al. Non-invasive in vivo hyperspectral imaging of the retina for potential biomarker use in Alzheimer’s disease. Nat. Commun. 10, 4227 (2019).

Mehl, P. M., Chen, Y.-R., Kim, M. S. & Chan, D. E. Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations. J. Food Eng. 61, 67–81 (2004).

Yang, Z. et al. Single-nanowire spectrometers. Science 365, 1017–1020 (2019).

Green, R. O. et al. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 65, 227–248 (1998).

Pian, Q., Yao, R., Sinsuebphon, N. & Intes, X. Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging. Nat. Photonics 11, 411–414 (2017).

Descour, M. & Dereniak, E. Computed-tomography imaging spectrometer: experimental calibration and reconstruction results. Appl. Opt. 34, 4817–4826 (1995).

Wagadarikar, A., John, R., Willett, R. & Brady, D. Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt. 47, 44–51 (2008).

Arguello, H. & Arce, G. R. Colored coded aperture design by concentration of measure in compressive spectral imaging. IEEE Trans. Image Process. 23, 1896–1908 (2014).

Geelen, B., Tack, N. & Lambrechts, A. A compact snapshot multispectral imager with a monolithically integrated per-pixel filter mosaic. In Advanced Fabrication Technologies for Micro/nano Optics and Photonics VII, Vol. 8974, pp. 80–87 (SPIE, 2014).

Yesilkoy, F. et al. Ultrasensitive hyperspectral imaging and biodetection enabled by dielectric metasurfaces. Nat. Photon. 13, 390–396 (2019).

Faraji-Dana, M. et al. Hyperspectral imager with folded metasurface optics. ACS Photon. 6, 2161–2167 (2019).

Xiong, J. et al. Dynamic brain spectrum acquired by a real-time ultraspectral imaging chip with reconfigurable metasurfaces. Optica 9, 461–468 (2022).

He, H. et al. Meta-attention network based spectral reconstruction with snapshot near-infrared metasurface. Adv. Mater. 2313357 (2024).

Wang, Z. et al. Single-shot on-chip spectral sensors based on photonic crystal slabs. Nat. Commun. 10, 1020 (2019).

Yako, M. et al. Video-rate hyperspectral camera based on a CMOS-compatible random array of Fabry–Pérot filters. Nat. Photon. 17, 218–223 (2023).

Kim, T., Lee, K. C., Baek, N., Chae, H. & Lee, S. A. Aperture-encoded snapshot hyperspectral imaging with a lensless camera. APL Photon. 8, 066109 (2023).

Redding, B., Liew, S. F., Sarma, R. & Cao, H. Compact spectrometer based on a disordered photonic chip. Nat. Photon. 7, 746–751 (2013).

Monakhova, K., Yanny, K., Aggarwal, N. & Waller, L. Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array. Optica 7, 1298–1307 (2020).

Jeon, D. S. et al. Compact snapshot hyperspectral imaging with diffracted rotation. ACM Trans. Graph. 38, 117 (2019).

Cortés, V., Blasco, J., Aleixos, N., Cubero, S. & Talens, P. Monitoring strategies for quality control of agricultural products using visible and near-infrared spectroscopy: a review. Trends Food Sci. Technol. 85, 138–148 (2019).

Limantara, L. et al. Analysis on the chlorophyll content of commercial green leafy vegetables. Procedia Chem. 14, 225–231 (2015).

Li, L. et al. Calibration transfer between developed portable Vis/NIR devices for detection of soluble solids contents in apple. Postharvest Biol. Technol. 183, 111720 (2022).

Ma, T., Xia, Y., Inagaki, T. & Tsuchikawa, S. Rapid and nondestructive evaluation of soluble solids content (SSC) and firmness in apple using Vis–NIR spatially resolved spectroscopy. Postharvest Biol.Technol. 173, 111417 (2021).

Liu, Z., Li, W. & Wei, Z. Qualitative classification of waste textiles based on near infrared spectroscopy and the convolutional network. Text. Res. J. 90, 1057–1066 (2020).

Kim, S. et al. All-water-based electron-beam lithography using silk as a resist. Nat. Nanotechnol. 9, 306–310 (2014).

Yu, S., Wu, X., Wang, Y., Guo, X. & Tong, L. 2D materials for optical modulation: challenges and opportunities. Adv. Mater. 29, 1606128 (2017).

Zheng, Y., Sato, I. & Sato, Y. Illumination and reflectance spectra separation of a hyperspectral image meets low-rank matrix factorization. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1779–1787 (IEEE, 2015).

Abdar, M. et al. A review of uncertainty quantification in deep learning: techniques, applications and challenges. Inf. Fusion 76, 243–297 (2021).

Wu, J. et al. An integrated imaging sensor for aberration-corrected 3D photography. Nature 612, 62–71 (2022).

Gao, L., Liang, J., Li, C. & Wang, L. V. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature 516, 74–77 (2014).

Altaqui, A. et al. Mantis shrimp–inspired organic photodetector for simultaneous hyperspectral and polarimetric imaging. Sci. Adv. 7, 3196 (2021).

Shi, W. et al. Pre-processing visualization of hyperspectral fluorescent data with spectrally encoded enhanced representations. Nat. Commun. 11, 726 (2020).

Wu, J. et al. Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale. Cell 184, 3318–3332 (2021).

Wang, Z. et al. Uformer: a general u-shaped transformer for image restoration. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 17683–17693 (IEEE, 2022).

Zamir, S. W. et al. Restormer: efficient transformer for high-resolution image restoration. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 5728–5739 (IEEE, 2022).

Gehm, M. E., John, R., Brady, D. J., Willett, R. M. & Schulz, T. J. Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 15, 14013–14027 (2007).

Cao, X., Du, H., Tong, X., Dai, Q. & Lin, S. A prism-mask system for multispectral video acquisition. IEEE Trans. Pattern Anal. 33, 2423–2435 (2011).

Kim, M. H. et al. 3D imaging spectroscopy for measuring hyperspectral patterns on solid objects. ACM Trans. Graph. 31, 38 (2012).

Lin, X., Liu, Y., Wu, J. & Dai, Q. Spatial-spectral encoded compressive hyperspectral imaging. ACM Trans. Graph. 33, 233 (2014).

Ma, C., Cao, X., Tong, X., Dai, Q. & Lin, S. Acquisition of high spatial and spectral resolution video with a hybrid camera system. Int. J Comput. Vision 110, 141–155 (2014).

Lin, X., Wetzstein, G., Liu, Y. & Dai, Q. Dual-coded compressive hyperspectral imaging. Opt. Lett. 39, 2044–2047 (2014).

Golub, M. A. et al. Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser. Appl. Opt. 55, 432–443 (2016).

Wang, P. & Menon, R. Computational multispectral video imaging. J. Opt. Soc. Am. 35, 189–199 (2018).

Mu, T., Han, F., Bao, D., Zhang, C. & Liang, R. Compact snapshot optically replicating and remapping imaging spectrometer (ORRIS) using a focal plane continuous variable filter. Opt. Lett. 44, 1281–1284 (2019).

McClung, A., Samudrala, S., Torfeh, M., Mansouree, M. & Arbabi, A. Snapshot spectral imaging with parallel metasystems. Sci. Adv. 6, eabc7646 (2020).

Williams, C., Gordon, G. S., Wilkinson, T. D. & Bohndiek, S. E. Grayscale-to-color: scalable fabrication of custom multispectral filter arrays. ACS Photon. 6, 3132–3141 (2019).

Zhang, W. et al. Handheld snapshot multi-spectral camera at tens-of-megapixel resolution. Nat. Commun. 14, 5043 (2023).

Yuan, L., Song, Q., Liu, H., Heggarty, K. & Cai, W. Super-resolution computed tomography imaging spectrometry. Photonics Res. 11, 212–224 (2023).

This work was supported by the National Natural Science Foundation of China (62322502, 61827901, 62088101 and 61971045).

These authors contributed equally: Liheng Bian, Zhen Wang, Yuzhe Zhang

State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China

Liheng Bian, Zhen Wang, Yuzhe Zhang, Lianjie Li, Yinuo Zhang, Chen Yang, Wen Fang, Jiajun Zhao, Chunli Zhu, Qinghao Meng, Xuan Peng & Jun Zhang

L.B., Z.W. and Yuzhe Zhang conceived the idea. Z.W., Yuzhe Zhang, C.Y. and W.F. conducted the material optical performance tests and photoresist preparation. Z.W. and Yuzhe Zhang designed and fabricated the optical filter arrays. Yuzhe Zhang and Yinuo Zhang designed and implemented sensor integration. Z.W. and Yuzhe Zhang developed the reconstruction algorithms and conducted the model training. Z.W. and Yuzhe Zhang calibrated the sensors and tested their imaging performance. L.L., X.P., Yinuo Zhang and J. Zhao designed and implemented the experiments of chlorophyll detection, SSC detection, textile classification and apple bruise detection. Yuzhe Zhang, Q.M. and Yinuo Zhang conducted blood oxygen and water quality monitoring experiments. L.B., Z.W., Yuzhe Zhang, Yinuo Zhang, C.Y., L.L., C.Z. and J. Zhang prepared the figures and wrote the paper with input from all the authors. L.B. and J. Zhang supervised the project.

Correspondence to Liheng Bian or Jun Zhang.

L.B., Z.W., Yuzhe Zhang and J. Zhang hold patents on technologies related to the devices developed in this work (China patent nos. ZL 2022 1 0764166.5, ZL 2022 1 0764143.4, ZL 2022 1 0764141.5, ZL 2019 1 0441784.4, ZL 2019 1 0482098.1 and ZL 2019 1 1234638.0) and submitted the related patent applications.

Nature thanks Yidong Huang, Yunfeng Nie and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

a, The experiment configuration. A telescope (CELESTRON NEXSTAR 127SLT, 1,500 mm focal length, 127 mm aperture Maksutov–Cassegrain) was employed to image the moon in combination with different cameras. b, The lunar spectrum comparison. c, The HSI results from the different cameras. The results of the HyperspecI sensor present fine details of the lunar topography, and the reconstructed spectrum corresponds well with the ground truth. In contrast, the results of the mosaic multispectral camera contain serious measurement noise due to limited light throughput, and the topography details are buried. The results of the line-scanning hyperspectral camera suffer from similar degradation, and severe scanning overlap exists because the moon was moving during the line-scanning process. The above experiment demonstrates the unique high-light-throughput advantage of our HyperspecI sensor, which leads to a high imaging SNR that enables the acquisition of fine details of dynamic and remote targets in low-light conditions. d, The dynamic imaging results from the different cameras.

a, Hyperspectral imaging results of our HyperspecI sensor on real and fake potted plants with the same colour but different spectra. (i) RGB images of real and fake potted plants. (ii) Locations of real and fake plants of the same colour are marked with red points and yellow points, respectively. (iii) The raw measurement of the HyperspecI sensor. (iv) Reconstructed spectra of metamerism locations. (v) Synthesized RGB image of the reconstructed HSI. b, Hyperspectral imaging results of our HyperspecI sensor on real and fake strawberries with the same colour but different spectra. (i) RGB images of real and fake strawberries. (ii) Locations of real and fake strawberries of the same colour are marked with red points and yellow points, respectively. (iii) The raw measurement of the HyperspecI sensor. (iv) Reconstructed spectra of metamerism locations. (v) Synthesized RGB image of the reconstructed HSI.

a–d, HSI results of four different indoor and outdoor scenes. The measurements were acquired by our HyperspecI sensors. The hyperspectral images were reconstructed via SRNet. Synthesized RGB images and several spectral images are presented. The spectral comparison between reconstructed spectra (RS) and ground truth (GT, acquired by the commercial spectrometers Ocean Optics USB 2000+ and NIR-Quest 512) is also presented (denoted by solid and dashed lines, respectively).

a, The experimental configuration for BMSFA thermal stability test, comprising an optical system (including components of light source, illuminating system, camera, camera tube, beam splitter, objective lens, etc.) for uniform light illumination on the target, mechanical elements (featuring a manual focusing module, heating stage, translation stage, main support, etc.) for precise control of the target’s observation position and target heating, and the modulation mask. b, The visual representations of the modulation mask at different temperatures. These observations reveal that the modulation mask is stable under different temperature conditions, maintaining its structural integrity and properties. c, The experiment configuration for sensor thermal stability test. The HyperspecI sensor was fixed on a heating stage with controllable temperatures ranging from 40 °C to 70 °C. Measurements were acquired after each thermal step reached stability, with the sensor operating for 1 hour at each temperature. d, Similarity evaluation results of raw data. The SSIM and PSNR measurements consistently indicate that the camera’s performance remains unaffected across different operating temperatures. e, The acquired raw data and corresponding HSI reconstruction results at different temperatures. f, Reconstructed spectral comparison of different regions. We calculated the Pearson correlation coefficients of the spectra in the same region at different temperatures. The minimum correlation coefficient for each region is 0.99, indicating that the sensor’s spectral reconstruction performance is robust to temperature variations.

a, The system used to collect the hyperspectral image dataset. Our dataset was mainly captured using the commercial FigSpec-23 (400–1,000 nm, 960 × 1,230 pixels, 2.5 nm interval) and GaiaField Pro-N17E-HR (900–1,700 nm, 640 × 666 pixels, 5 nm interval) hyperspectral cameras, both integrated under a push-broom scanning mechanism. Measurements were acquired using our HyperspecI sensors (V1 for 400–1,000 nm and V2 for 400–1,700 nm). b, Image registration between the two commercial hyperspectral cameras. The scale-invariant feature transform (SIFT) technique was employed to align the fields of view. c, Visualization of our constructed hyperspectral image dataset. After data registration, this yielded a hyperspectral image dataset comprising 1,000 scenes (500 outdoor and 500 indoor), covering the entire 400–1,700 nm spectral range, with a spatial resolution of 640 × 666 pixels and a total of 131 spectral bands at 10 nm intervals.

a, The overall framework of SRNet. SRNet is a hybrid neural network that combines the core features of Transformer and CNN architectures for efficient, high-precision reconstruction. b, The framework of the Spectral Attention Module (SAM). SAM is the basic component of SRNet, which calculates the attention across spectral channel dimensions, extracting the spectral features of HSIs. c,d, Measurements and corresponding spectral reconstruction result (presented as the synthesized RGB form). The measurements were acquired using the HyperspecI-V1 sensor. Close-ups are provided, marked in the measurements with rectangular outlines.

a, The demonstration of BMSFA photolithography fabrication. b, Display of integrated HyperspecI-V1 sensor. c, Display of integrated and packaged HyperspecI-V2 sensor. d, Photolithography mask used for BMSFA fabrication. Multiple lithography operations can be achieved using this single mask. e, BMSFA fabrication and its microstructure, including microscopic images during the fabrication process. f, Display of the HyperspecI-V1 sensor’s sensing matrix calibrated with monochromatic light in several spectral bands (550 nm, 650 nm, 750 nm). g, Display of the HyperspecI-V2 sensor’s sensing matrix calibrated with monochromatic light in several spectral bands (600 nm, 800 nm, 1300 nm).

a, The evolutionary-optimization-based material selection method for BMSFA design. This method starts with an initially selected subset of materials and iterates through the operations of survival of the fittest, crossover, mutation and random replacement. The iterative process ends when it converges to the optimal accuracy performance on the hyperspectral image dataset. b, Preprocessing and analysis of the massive hyperspectral image data through dimensionality reduction. We analysed the distribution of the hyperspectral images using the PCA method. We calculated the information loss (reconstruction error) at different latent dimensions and compression ratios to determine the potential compressive dimension. The reconstruction error is low when the latent dimension is ten in the 400–1,000 nm range and six in the 1,000–1,700 nm range, which demonstrates the sparsity of HSIs in the spectral dimension. c, The spectral fidelity under different numbers of modulation filters selected by the material selection method. The signal-to-noise ratio of the input measurements was set to 20 dB. This further validates the number and selection of broadband filters, and shows that the current choice for our HyperspecI sensor prototypes is optimal considering the trade-off between spectral and spatial resolution. d, The organic dyes and nano-metal oxides prepared for BMSFA design. e, The correlation coefficient map of the 35 prepared materials.
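
The PCA observation in panel b, that the reconstruction error collapses once the latent dimension reaches the intrinsic spectral dimensionality, can be reproduced in miniature on synthetic smooth spectra (the data and dimensions below are illustrative, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(4)
n_spectra, n_bands = 500, 61       # e.g. 400-1,000 nm sampled at 10 nm steps

# Synthetic smooth spectra: random mixtures of 8 broad Gaussian bases,
# mimicking the low intrinsic dimensionality of natural reflectance spectra.
grid = np.arange(n_bands)
basis = np.exp(-0.5 * ((grid[None, :] - np.linspace(5, 55, 8)[:, None]) / 8.0) ** 2)
X = rng.random((n_spectra, 8)) @ basis   # (500 spectra) x (61 bands)

def pca_reconstruction_error(X, k):
    """Relative error after projecting onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    Xk = Xc @ vt[:k].T @ vt[:k]
    return np.linalg.norm(Xk - Xc) / np.linalg.norm(Xc)

errors = {k: pca_reconstruction_error(X, k) for k in (1, 4, 8, 16)}
# The error drops steadily with k and collapses to numerical zero once k
# reaches the true latent dimension (8 in this toy construction).
```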

a, Schematic diagram depicting the production of experimental smears using spectral modulation materials. This process follows the steps of weighing, mixing, filtering and spin coating. b, Schematic diagram of the optical path for transmission spectra measurements of spectral modulation materials. c, The smears of organic dyes, employing photoresist as a carrier, obtained through spin coating. d, The transmission spectra of the organic dyes. e, The smears of nano-metal oxides, utilizing photoresist and dispersant as carriers, obtained through spin coating. f, The transmission spectra of the nano-metal oxides at the optimum concentration.

Supplementary Information sections 1–11, including Supplementary Tables 1–9.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Bian, L., Wang, Z., Zhang, Y. et al. A broadband hyperspectral image sensor with high spatio-temporal resolution. Nature 635, 73–81 (2024). https://doi.org/10.1038/s41586-024-08109-1

Received: 17 July 2023

Accepted: 24 September 2024

Published: 06 November 2024

Issue Date: 07 November 2024

DOI: https://doi.org/10.1038/s41586-024-08109-1
