Optica Publishing Group

Misalignments measurement of CPV optical components through image acquisition

Open Access

Abstract

Concentrator Photovoltaics (CPV) technology relies on optical systems that concentrate sunlight on solar cells in exchange for a reduction of the permitted angular tolerance when pointing at the sun. A proper alignment between optics and photovoltaic receivers is crucial for the performance of this technology, particularly point focus CPV systems with concentration ratios above 100X that have narrow angular tolerances. This study presents the theoretical fundamentals of a method for evaluating misalignments in a CPV module. The method is based on the acquisition and analysis of images, taken by a camera, of the photovoltaic receivers magnified through the primary optics. The method has been successfully validated by empirical measurements and ray tracing simulations of a single lens-receiver unit.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Concentrator Photovoltaics (CPV) [1] is a technology that uses optical systems such as mirrors or lenses to focus sunlight onto small-area photovoltaic (PV) devices. The goal of this concept is to use highly efficient multi-junction photovoltaic solar cells based on III-V semiconductors. CPV technologies have achieved the highest PV efficiencies, reaching around 40% in modules [2] and around 46% in cells [3,4]. The most widely used optical configuration in CPV modules is formed by primary optical elements (POE), such as Fresnel lenses, and by secondary optical elements (SOE) that are optically attached to the solar cell [5]. High Concentrator Photovoltaics (HCPV) modules are characterized by very tight mechanical tolerances resulting from the narrow angular transmission of the optical system, typically below or close to ± 1° [1,5]. The angular transmission defines the angular performance of the module, quantifying the power loss that the module suffers depending on its pointing angle towards the sun. Thus, misalignments between CPV module subparts (POE, SOE, PV receivers, back-plate, lens parquet…) may degrade both electrical and angular performance [6,7].

In order to preserve the module electrical performance, control and correction of misalignments may be conducted at two levels. On the one hand, intra-module misalignment characterization provides valuable information for corrective actions during module assembly on the production line [8,9], or at least to determine the impact of such misalignments on module performance. On the other hand, an alignment characterization of modules mounted on a tracker may ensure correct installation and maintenance in CPV plants [10–12].

However, there is a lack of tools for characterizing such misalignments. The straightforward method to evaluate misalignments between the elementary units (optical system and photovoltaic receiver) comprising the CPV module is to measure the angular transmission of each unit. For this, the electrical performance of each unit should be evaluated while it is illuminated with a directional light source at different incidence angles, obtaining a curve of the power transmitted as a function of the light incidence angle. This task is very tedious and time-consuming and, in addition, it is not possible to access the electrical connections of each individual unit once the module is assembled.

An alternative to this evaluation is the MOA (Module Optical Analyzer) [13], developed at the Instituto de Energía Solar. This system measures the misalignments between the elementary units based on the luminescence inverse method, which consists of measuring the light emitted by the module while it is forward biased in dark conditions [14–16].

The MOA is conceived to be used indoors together with the Helios 3198 CPV solar simulator [17–19], as it takes advantage of the collimator mirror (typically 2 meters in diameter) provided by the simulator. Thus, the size of the measurable module is limited by the size of the mirror. Beyond this constraint, there is another limitation for the use of the MOA on a CPV production line: some CPV modules include blocking diodes with high breakdown voltages, which prevent the electroluminescence of the module.

In order to overcome these drawbacks, an alternative method for the evaluation of misalignments is proposed and described in this paper. The method is based on the analysis of the photovoltaic receiver image formed through the CPV optics, directly captured by a camera placed in front of the CPV aperture area. By varying the distance between the camera and the CPV part, the method can be applied indoors or outdoors to determine intra-module misalignments and on-tracker module misalignments, respectively. In the latter case, the method could be embedded into an unmanned aerial vehicle (with a camera as payload) to measure misalignments between modules in operation at the tracker level. Consequently, the impact of thermo-mechanical deformations and the associated misalignment of the CPV module at operating conditions and during its lifetime can be detected. This is a unique capability of the proposed method, which would provide feedback for the module design and assembly process.

The article is organized as follows: the first section introduces the concepts of angular transmission and misalignment. Second, the proposed method is described. Finally, a case study of a CPV elementary unit with misalignments measured by the proposed method is presented as a preliminary validation.

2. Angular transmission function and misalignment definition

The angular transmission function (ATF) (see Fig.  1(a)) is defined as the percentage of light power that is transmitted from the CPV optical aperture to the cell as a function of the incident angle of the light source with respect to the optical axis. This function is inherently bi-dimensional but, since many CPV optical systems have rotational symmetry, it is commonly examined along one single axis (i.e., ρ axis in Fig.  1(b)) and referred to as one-dimensional. The ATF depends on the considered light source, particularly on its spectrum and angular size (i.e., angle subtended by the light source from the CPV input aperture) [14].


Fig. 1. (a) The one-dimensional angular transmission function of a misaligned unit (B) is shifted with respect to an aligned unit (A). (b) Scheme (cross section) of two elementary units without and with misalignment (receiver displaced dφ). The maximum incident angle (at which light is transmitted) varies over the POE input aperture. A camera focusing to the receiver seen through the POE may observe the photovoltaic receiver position.


We define the misalignment between two elementary units (a single cell with its corresponding optics) as the fact that they do not share the same optical pointing vector, which causes a shift between their ATFs (see Fig. 1). There are many possible sources of misalignment in the manufacturing process of the module [6,13], namely pitch errors between POE units, rotational errors between POE parquet and rear plate, errors in the placement of the SOEs on the cell, placement errors of receivers on the rear plate, lack of coplanarity between POE parquet and rear plate, etc. All of them result in a relative misalignment between each POE unit and its cell, and consequently cause the corresponding shifts in the ATFs. The proposed method detects such shifts in the ATFs, and further analysis of the misalignment patterns may provide information about their causes.

Figure 1(a) shows the angular transmission function (for collimated and monochromatic light beams) of two elementary units based on a Fresnel lens as POE: one aligned (Unit A) and one with a displacement (dφ) between receiver and lens (Unit B). Because the optical performance of the POE varies over the optics aperture, the maximum incidence angle (α) at the POE that is transmitted to the receiver varies with the radial distance to the center of the lens (ρ). This is because the receiver cross section subtends a variable angle along the aperture of the Fresnel lens (i.e., the distance between each point of the primary lens and the receiver varies). Therefore, the central area has a larger angular tolerance than the exterior rings (see Fig. 1(b)), which means that the maximum incidence angle of light impinging on the Fresnel lens that is still focused on the receiver is larger at the central lens area.

As a result, ATFs of CPV optics are typically piecewise functions whose pieces are delimited by α(ρ=0) and α(ρ=R), the maximum transmitted incidence angles at the center (ρ=0) and at the edge (ρ=R) of the lens, respectively. The angular misalignment (φ) is identified as the displacement between the angular transmission functions, which coincides with the displacement between the maximum incidence angles (αA(ρ=0) and αB(ρ=0) in Fig. 1(a)). The latter will ultimately be used throughout this work. Moreover, it can be noticed that the misalignment angle φ is linearly related to the receiver displacement (for small misalignments, which are usually lower than 0.5 degrees).

The ATF is the convolution between the angular distribution of the incident light and the impulse-response angular transmission function of the concentrator [20], so its shape depends not only on the optical system but also on the light source. Usually, the impulse-response angular transmission function of a CPV optic, obtained with a perfectly collimated and monochromatic light source, has a trapezoidal shape. Thus, for the sake of clarity, the ATF is approximated by a trapezoidal-shaped function (as the one presented in Fig. 1(a)) to explain the theoretical fundamentals of the proposed method in the next section. Nevertheless, the proposed method also makes it possible to evaluate misalignments with round-shaped ATFs, whose implications for the method's application are also analyzed in the text.
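The convolution relation above can be illustrated with a short numerical sketch. The trapezoid breakpoints and the Gaussian source width below are assumed values chosen for illustration, not data from this work:

```python
import numpy as np

# Angular grid in degrees; the measured ATF is the convolution of the
# concentrator's impulse response with the source's angular distribution.
theta = np.linspace(-2.0, 2.0, 4001)
dtheta = theta[1] - theta[0]

# Trapezoidal impulse response: flat top up to alpha_edge, linear roll-off
# to zero at alpha_center (breakpoint values are assumptions).
alpha_edge, alpha_center = 0.6, 1.0
impulse = np.clip((alpha_center - np.abs(theta)) / (alpha_center - alpha_edge), 0, 1)

# Source angular distribution: a narrow Gaussian (width assumed),
# normalized to unit area so the convolution preserves scale.
sigma = 0.25
source = np.exp(-0.5 * (theta / sigma) ** 2)
source /= source.sum() * dtheta

# Convolution rounds the trapezoid's corners, as described in the text.
atf = np.convolve(impulse, source, mode="same") * dtheta
```

Widening `sigma` (a larger angular size of the source) rounds the ATF further, which is the effect the trapezoidal approximation deliberately sets aside.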

3. Proposed method

3.1 Description and set-up

The proposed method aims at quantifying the misalignments and consists of measuring the receiver position with respect to the optical system in each elementary unit and translating this position into a misalignment angle. To perform this, a camera is placed in front of the CPV module and focused on the receiver image magnified by the POE (e.g., a Fresnel lens in Fig. 1(b)). Since the magnification (M) is proportional to the POE-observer distance, the latter is chosen so that the edges of the whole receiver are visible through the Fresnel lens. Although CPV optical systems are non-imaging optics designed to maximize light power transmission, when used in reverse they form images of fairly good quality for this purpose. Figure 2 shows the images taken by a camera of the photovoltaic receiver magnified by the Fresnel lens for two elementary units (aligned and misaligned), as in the scheme of Fig. 1(b).


Fig. 2. (Left) Image scheme of the receiver magnified by the Fresnel lens. (Center) Receiver image taken by the camera of the aligned unit, in which the receiver position (its center) coincides with the optical axis of the lens (Image A). (Right) Receiver image taken by the camera of the misaligned unit, in which the receiver position (its center) is displaced a given distance (dφ·M) from its optimum (Image B). M represents the magnification of the receiver observed through the lens and dφ the displacement (mm) between the receiver and the lens.


Let’s assume that the camera is placed at the center of an elementary unit and normal to the CPV module aperture. If the center of the receiver is coincident with the center of the lens in the image, it means that there is no misalignment in the elementary unit (Fig.  2 center). A shift (dφ) between the center of the receiver and the optical axis of the lens causes a misalignment (φ) in the elementary unit (Fig.  2 right). A linear relation links the misalignment angle φ to the receiver position dφ for small angles. Thus, if one image is taken for every unit comprising the CPV module and dφ is determined, the value of the misalignments φ (in degrees) for every unit may be obtained provided that the relationship φ(dφ) is known.

Regarding the camera position in the measurement scheme of the method, some considerations can be made. The absolute values of the misalignments are referred to the camera axis in the set-up. Thus, if the camera is normal to the CPV aperture, the misalignments are given with respect to the normal of the CPV aperture. However, other references may be considered (for example, the backplane of the module, if the camera is normal to this plane). If the camera is not at the center of the POE, the receiver is displaced with respect to the Fresnel lens in the image; this displacement must be taken into account to correct the obtained receiver position. If the camera is not placed perpendicular to the POE optical axis, but its movement is contained in a plane (which need not be parallel to the POE aperture plane), the acquired images would provide not the absolute but the relative misalignments between elementary units in a CPV module, which is the ultimate objective of the method.

The measurement set-up must meet some requirements. First, the camera-to-POE distance must be selected so that the magnified image of the receiver does not fully cover the POE aperture area. If the magnified receiver exceeded the POE, the edges of the receiver would be lost and the detection of the receiver position would be less accurate. Furthermore, some margin is needed according to the required full-scale range for the misalignments, as shown in Fig. 2 left. Another relevant characteristic of the set-up is the focal length of the camera lens, which determines the image resolution according to the sensor size and pixel dimensions.

The method relies on the determination of the relative displacement between POE and receiver, so not only the position of the receiver but also that of the POE must be considered. The latter can be implemented in either of these two ways:

  • - The POE position relies on a very accurate 2-axis stage that holds and positions the camera in front of the POE. The uncertainty depends on the one hand on the 2-axis stage, and on the other on the determination of the receiver position through the image.
  • - The POE position is also determined through image acquisition. The best focus for the receiver is usually not the best for the POE, so two different focus settings and images may be needed. In practice, it is possible to use an intermediate focus that provides a good recognition of both the POE and the receiver. The uncertainty depends on the detection of both receiver and POE positions through the images.

3.2 Non-idealities applied to method

The proposed method depends on the evaluation of the positions of the receivers by means of images formed through the POE. However, CPV optics are anidolic: they are designed for light transmission, not for image formation. Non-idealities in this image formation produce blurring that worsens the receiver recognition.

On the one hand, the angular performance α(ρ) varies along the whole aperture of the optical system, as explained above, and constitutes a source of blurring. On the other hand, the light involved in the image formation is not monochromatic, and the chromatic aberration of a refractive POE causes the angular performance to vary also with the wavelength, α(ρ,λ), which rounds the shape of the ATF. Figure 3 shows the effect of the chromatic aberration on the image formation: the light rays exiting the edge of the receiver leave the POE at different angles depending on the wavelength.


Fig. 3. Maximum angles of the outgoing rays from the edges of the receiver through the center of the lens, α(ρ=0). The chromatic aberration of the POE causes α to depend also on the wavelength; blue and red colors are drawn to show this effect.


A consequence of these non-idealities is that the rays exiting the edge of the receiver have a different angle α(ρ,λ) at each position of the lens and for each wavelength, which may produce blurring in the receiver image acquired by a camera. The impact of chromatic aberration α(λ) can be limited by using filters (for example, one of the channels given by the Bayer filter of the camera sensor). But the blurring effect associated with α(ρ), which is equivalent to a shallow depth of field, cannot be avoided. A requisite to apply this method is that the receiver shape, particularly its edges, remains recognizable in the image despite the blurring.

3.3 Methodology for uncertainty reduction

The blurring originated by the causes mentioned above translates into an uncertainty in the determination of the receiver position (its center) in the captured image. In order to achieve a sharp image and to minimize the blurring effect on the receiver edges, proper acquisition conditions and subsequent image processing are used. Both POE and receiver have simple shapes, namely polygons or ovals. The accuracy of the method depends on an accurate positioning of such objects in the image, which can be enhanced with proper image processing and pattern recognition techniques [21].

To identify the receiver position, the camera must be focused on the plane where the CPV optics forms the receiver image. In this regard, taking images with minimum camera lens aperture allows a greater depth of field. In addition, the blurring caused by the chromatic aberration of the CPV optics is significantly reduced by wavelength filtering, for instance by using only the red channel (Bayer filter) of the camera sensor. This channel is chosen because in the wavelength range captured by the red channel (from 550 to 1000 nm) the refractive index varies to a lesser extent (Fig. 4 left), so the effect of chromatic aberration is lower. Finally, image processing helps to improve the image definition and to automate the detection of the receiver position in the photographs [22–24]. This processing increases the contrast of the acquired image and sharpens the receiver edges (Fig. 4 right).
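As an illustration of this processing chain, the sketch below segments a synthetic receiver image using its red channel and estimates the receiver center. The thresholding-plus-centroid step is a simplified stand-in for the edge-detection algorithms used in the paper, and all names and values here are hypothetical:

```python
import numpy as np

def receiver_center(rgb: np.ndarray, thresh: float = 0.5) -> tuple:
    """Estimate the receiver center (in pixels) from an RGB image.

    Only the red channel is used, since chromatic aberration blurs it
    least; a normalize-threshold-centroid pipeline stands in for the
    contrast enhancement and edge detection described in the text."""
    red = rgb[..., 0].astype(float)
    red = (red - red.min()) / (np.ptp(red) + 1e-12)  # stretch contrast to [0, 1]
    mask = red > thresh                              # segment the bright receiver
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()                      # centroid of the receiver area

# Synthetic test image: a bright square "receiver" in an 80x120 frame.
img = np.zeros((80, 120, 3))
img[30:50, 50:70, 0] = 1.0
cx, cy = receiver_center(img)
```

On a real photograph, the shift of `(cx, cy)` with respect to the lens center (in pixels) would be the quantity converted to a misalignment angle in the next subsection.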


Fig. 4. Red (R) channel of the image shows less blurring than Green (G) or Blue (B) (left). Automated image processing enhances the image and detects the edges and center of the receiver area (right). Red edges are used to determine the position in one dimension, blue edges in the other.


3.4 Approximation to image-forming optics

As mentioned in section 3.1, the proposed method relies on the determination of the relative positions of POE and receiver by means of image acquisition, focusing on the plane where the POE forms the receiver’s image.

Once the image is acquired, it is necessary to convert the receiver displacement (in pixels) to angular misalignment (in degrees). The conversion constant can be obtained from the receiver size, which must be known. The angular size of the photovoltaic receiver is the angle subtended by the receiver (r) from the center of the POE. Since the solar cell is commonly much smaller than the POE’s focal distance, it is often possible to use the small-angle approximation:

$$\alpha\;(\mathrm{rad}) \approx \tan\alpha = \frac{r}{F_{CPV}}$$
where FCPV is the distance from the POE to the receiver and r represents the radius of the aperture of the receiver (i.e., the aperture of the SOE if it exists, or of the cell if there is no SOE). This angle α is the maximum incidence angle of the light that, impinging on the POE, reaches the PV receiver, and it is related to the central area of the lens (i.e., the system's maximum angular tolerance, α = α(ρ=0) in Fig. 1(b)).
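Plugging the case-study values from Section 4 into Eq. (1) gives a quick sanity check. The aperture radius r is assumed here to be half the 2.3 mm cell diameter, since the exact aperture used is not stated:

```python
import math

# Eq. (1): alpha ~ r / F_CPV, valid because r << F_CPV.
r = 2.3 / 2    # mm, receiver aperture radius (assumption: half the cell diameter)
F_CPV = 76.3   # mm, lens-to-receiver distance (Section 4.1)

alpha_rad = r / F_CPV                              # small-angle approximation
alpha_deg = math.degrees(alpha_rad)                # about 0.86 degrees
alpha_exact = math.degrees(math.atan(r / F_CPV))   # exact value for comparison
```

The result (about 0.86°) is close to the monochromatic α530nm reported in Section 4.2, and the exact arctangent differs from the approximation by less than 10⁻⁴ degrees, confirming that the small-angle approximation is safe at these proportions.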

Hence, knowing the real angular size of the receiver (α, obtained from Eq. (1)), the size of the receiver (r), the image magnification M of the CPV lens, and the conversion factor of the camera (tp), it is possible to obtain a calibration constant K (°/pixel):

$$K\left(\frac{^\circ}{\mathrm{pixel}}\right) = \frac{\alpha\,(^\circ)}{r\,(\mathrm{mm})\cdot M\cdot t_p\left(\frac{\mathrm{pixel}}{\mathrm{mm}}\right)}$$

This constant can be used later to translate differences in the receiver position on the image (such as dφ in Fig.  2 right) to misalignments in the elementary units.
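In practice this conversion is a single multiplication. The K and shift values below are illustrative only (K is borrowed from the per-pixel resolution quoted later in Section 4.3, and the pixel shift is invented):

```python
# Hypothetical conversion of a measured receiver shift to a misalignment
# angle using the calibration constant K of Eq. (2).
K = 0.0068        # deg/pixel (per-pixel resolution quoted in Section 4.3)
shift_px = 12.0   # receiver displacement measured in the image, in pixels
phi_deg = K * shift_px   # misalignment of this elementary unit, in degrees
```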

4. Case study

In this section, a case study is used to validate the proposed method, showing that the obtained results match the expected ones. For this, an elementary unit formed by a Fresnel lens and a photovoltaic receiver, whose relative position can be precisely controlled, has been used. Moreover, ray-tracing simulations have been performed to characterize the maximum incidence angle of the unit under study and, in consequence, to obtain the calibration constant K (°/pixel).

4.1 Description

An experiment in which controlled misalignments are introduced in an elementary Fresnel lens-cell unit has been carried out. For this, a square Fresnel lens, 40 mm on a side, with a known focal distance (76.3 mm) and composed of 11 grooves, has been fixed on an optical table. The design is based on curved facets limited by a maximum depth of 500 µm. Then, a receiver consisting of a 2.3 mm diameter circular cell inscribed on a 3.5 mm diagonal square substrate has been fixed on an automated XYZ stage attached to the optical table. The stage can be programmed to introduce preset misalignments.

Before conducting the experiment, the lens-cell system has been aligned while illuminating with collimated white light, identifying the minimum light spot with the optimal position of the photovoltaic receiver [25,26].

A camera, centered in front of the Fresnel lens, has been fixed at a distance such that the magnified receiver fills about one half of the lens area. Pictures have been taken for a set of preset misalignments. These misalignments are introduced by shifting the receiver from its optimum position in the plane parallel to the lens (in the X and Y directions). The receiver displacements consisted of combinations of X and Y steps of 0.1 mm, ranging from 0.1 mm to 0.6 mm. For this, the set-up included a computer-controlled XYZ stage that allows receiver displacements with a precision of ± 0.011 mm. The introduced displacements are translated into misalignments with the relation:

$$\varphi = \arctan\left(\frac{d_\varphi}{F_{CPV}}\right)$$
where φ is the misalignment in degrees, dφ is the introduced X-Y displacement in mm and FCPV is the distance from the lens to the receiver in mm.
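Applying Eq. (3) to the stepped displacements described above (with FCPV = 76.3 mm) gives the range of preset misalignments; this is a direct numerical restatement of the experiment, with no assumptions beyond the quoted values:

```python
import math

F_CPV = 76.3                                 # mm, lens-to-receiver distance
steps_mm = [0.1 * k for k in range(1, 7)]    # 0.1 mm to 0.6 mm in 0.1 mm steps
angles_deg = [math.degrees(math.atan(d / F_CPV)) for d in steps_mm]
# The largest preset displacement (0.6 mm) corresponds to about 0.45 deg,
# consistent with the "usually lower than 0.5 degrees" remark in Section 2.
```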

4.2 Ray-tracing simulations

The performance of the elementary unit used in this case study has been evaluated by ray-tracing simulations. The ATF of the elementary unit was simulated considering two different light sources, both perfectly collimated (i.e., with an angular size of 0°) but with different spectral content: 530 nm monochromatic light (Fig. 5 left) and the reference solar spectrum AM 1.5D weighted with the spectral response of the red channel (Bayer filter) of the camera sensor (Fig. 5 right). The value of 530 nm is chosen because the focal distance at 530 nm corresponds to the focal distance of the CPV system (FCPV = 76.3 mm), defined as the distance between the receiver and the lens.


Fig. 5. ATF obtained from ray-tracing simulations using as light source 530 nm monochromatic light (left) and the reference solar spectrum AM 1.5D weighted with the spectral response of the camera's red channel (right).


Two important results emerge from this simulation. First, the maximum incidence angle (α) in the ATF of the unit under study (considering the whole lens) is different depending on the light source used in the simulation (0.86° for the monochromatic light, α530nm, and 1° for the AM 1.5D spectrum with the red channel response, αred). Moreover, it is possible to observe the variation of the angular tolerance over the Fresnel lens aperture by comparing the ATF for two different grooves (see Fig. 2) of the lens: one linked to the central ring of the lens and the other to the outer ring of the simulated lens (lens center and lens edges in Fig. 5, respectively).

4.3 Calibration process and results

To translate the receiver spatial displacement (dφ) into angular misalignment (φ, in degrees), a constant as defined in Eq. (2) must be determined. For a given CPV configuration, this constant can be deduced from ray-tracing simulations, as shown in the previous section, or obtained experimentally through a calibration process.

The calibration requires a specific set-up that includes the spare parts of the CPV architecture, namely a single POE and receiver. Controlled misalignments are introduced in the CPV unit under study (by shifting the receiver position from the center of the POE) and measured with the proposed method (by taking an image for each preset misalignment). As these introduced misalignments (in degrees) are proportional to the receiver displacements measured in the images (in pixels), the conversion constant K (°/pixel) is obtained from the best fit between the two.

The plot in Fig. 6 shows the measured points of the calibration process for the case study, and a linear regression to determine the calibration constant. It is noteworthy that the resulting non-linearity (NL) is below the pixel size, which ultimately limits the resolution of the method (to 0.0068° in the example case). Edge detection algorithms [27–29] are key to achieving such a low NL: the receiver position in each dimension is detected not with two points (pixels) but with two lines (arrays of pixels) corresponding to the edges of the square-shaped receiver.
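The fitting step can be sketched as a least-squares regression of measured pixel shifts against the preset misalignments. The data points below are synthetic, generated from an assumed K of 0.0068 °/pixel plus invented sub-pixel perturbations, not the paper's measurements:

```python
import numpy as np

# Synthetic calibration data: preset misalignments (deg, from the XY stage)
# and the corresponding receiver shifts measured in the images (pixels).
true_deg = np.array([0.075, 0.150, 0.225, 0.301, 0.376, 0.451])
noise_px = np.array([0.2, -0.1, 0.15, -0.2, 0.1, -0.05])  # assumed sub-pixel errors
pixels = true_deg / 0.0068 + noise_px

# Linear regression: the slope is 1/K; the residuals give the non-linearity.
slope, intercept = np.polyfit(true_deg, pixels, 1)
K = 1.0 / slope                               # recovered constant, deg/pixel
residuals = pixels - (slope * true_deg + intercept)
nonlinearity_px = np.abs(residuals).max()     # stays below one pixel here
```

With sub-pixel measurement errors, the recovered K is close to the generating value and the residual non-linearity remains below one pixel, mirroring the behavior reported for the experimental calibration.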


Fig. 6. Result of the calibration process for the case study. The plot shows the displacement in pixels measured in the acquired images as a function of the misalignment of the receiver (in degrees) introduced with the XY stage. The black dots correspond to the measured points, while the line is the result of a linear regression. The slope of the line is the reciprocal of the calibration constant K (°/pixel).


It must be noted that the calibration process requires components that are not always available; ray-tracing simulation can then be a valid alternative. Table 1 shows a comparison between the conversion constants K (°/pixel) obtained by means of ray-tracing simulations and by the experimental calibration. As expected, the experimental constant is very similar to the one simulated under similar spectral conditions, i.e., with red light in the band of the red channel of the Bayer filter of the camera sensor. Conversely, the simulation using monochromatic light (530 nm) shows a significant deviation.


Table 1. Conversion constants K (°/pixel) and corresponding α (°) values according to Eq. (2), obtained both by ray-tracing and experimentally. Case study: Fresnel lens 40 × 40 mm, focal distance 76.3 mm

Figure 7 presents the error, defined as the difference between the misalignment measured with the images and the true misalignment introduced with the XY stage, when the calibration constants are determined by ray-tracing simulations as in Table 1. The ray-tracing simulation with red light, similar to the red channel of the Bayer filter, gives a constant quite similar to the experimental one (about a 2% gap), corresponding to an error lower than 0.015° for the case study (see Fig. 7, red dots). On the contrary, the constant obtained with the monochromatic simulation (α530nm) deviates significantly from the experimental one (about 12%), which results in a noticeable gain error (see Fig. 7, blue dots). Even in this worst case, however, the error in the determination of the misalignment would be below 0.06°.
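A back-of-envelope check ties the two figures quoted above together: a 12% deviation in K is a gain error, so its effect scales linearly with the measured angle and is largest at the largest preset misalignment (about 0.45°, from the 0.6 mm displacement of Section 4.1):

```python
# Worst-case gain error from a miscalibrated K: the relative deviation
# multiplies the measured misalignment, so the maximum error occurs at
# the largest angle used in the experiment.
k_deviation = 0.12            # relative deviation of the monochromatic K
max_misalignment_deg = 0.45   # largest preset misalignment (deg)
worst_error_deg = k_deviation * max_misalignment_deg   # about 0.054 deg
# Consistent with the "below 0.06 deg" bound stated in the text.
```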


Fig. 7. Error (defined as the difference between true and measured misalignment) vs. true misalignment for the two conversion constants obtained through ray-tracing simulations according to Fig. 5 and Table 1.


5. Conclusions

A method for characterizing misalignments between CPV elementary units, which does not require manipulating module connections or expensive, large equipment, has been presented. The method, based on image acquisition, inspects the differences in the receiver-to-lens positions seen through the CPV elementary optics. Despite the variation of the angular tolerance along the lens aperture and the fact that CPV optics are not conceived for image formation, the restrictions associated with a blurred image are overcome by image processing and pattern recognition algorithms. In order to find an adequate constant that translates displacements (measured in images) into angular misalignments, a calibration procedure in which the system angular tolerances are evaluated was performed both experimentally and through ray-tracing simulations. Both exhibited similar results when the light spectrum is correctly considered in the simulations.

The method has been validated by misalignment measurements in a case study. An experimental calibration has shown that the measured vs. true misalignment function exhibits a linear dependency with a non-linearity error below the pixel resolution, thanks to the applied edge detection algorithms. Even when the calibration is derived from ray tracing, errors can be as low as 0.015° if proper spectral conditions are considered in the simulations.

Funding

Ministerio de Ciencia, Innovación y Universidades – Agencia Estatal de investigación (ENE2017-87825-C2-1-R); Comunidad de Madrid (MADRID-PV2 P2018/EMT-4308).

Acknowledgments

This work has been partially supported by FEDER / Ministerio de Ciencia, Innovación y Universidades – Agencia Estatal de Investigación / Project MICRO-PV ref. ENE2017-87825-C2-1-R, with co-funding from the Comunidad de Madrid Program MADRID-PV2 P2018/EMT-4308 and from the Fondo Europeo de Desarrollo Regional (FEDER) and Fondo Social Europeo (FSE) – Unión Europea.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. M. Wiesenfarth, I. Anton, and A. W. Bett, “Challenges in the design of concentrator photovoltaic (CPV) modules to achieve highest efficiencies,” Appl. Phys. Rev. 5(4), 041601 (2018). [CrossRef]  

2. S. P. Philipps, M. Baudrit, K. Hillerich, V. Moreau, R. Parmesani, E. Román, G. Sala, B. Schineller, G. Timò, and A. W. Bett, “CPVMatch - Concentrating photovoltaic modules using advanced technologies and cells for highest efficiencies,” AIP Conf. Proc. 1766, 060002 (2016).

3. M. Steiner, G. Siefer, T. Schmidt, M. Wiesenfarth, F. Dimroth, and A. W. Bett, “43% Sunlight to Electricity Conversion Efficiency using CPV,” IEEE J. Photovoltaics 6(4), 1020–1024 (2016). [CrossRef]  

4. M. A. Green, E. D. Dunlop, D. H. Levi, J. Hohl-Ebinger, M. Yoshita, and A. W. Ho-Baillie, “Solar cell efficiency tables (version 54),” Prog. Photovolt. Res. Appl. 27(7), 565–575 (2019). [CrossRef]  

5. M. Victoria, C. Domínguez, I. Antón, and G. Sala, “Comparative Analysis of Different Secondary Optical Elements for Aspheric Primary Lenses,” Opt. Express 17(8), 6487–6492 (2009). [CrossRef]  

6. R. Herrero, I. Antón, M. Victoria, C. Domínguez, S. Askins, G. Sala, D. De Nardis, and K. Araki, “Experimental Analysis and Simulation of a Production Line for CPV Modules: Impact of Defects, Misalignments, and Binning of Receivers,” Energy Sci. Eng. 5(5), 257–269 (2017). [CrossRef]  

7. K. Araki, H. Nagai, R. Herrero, I. Antón, G. Sala, K.-H. Lee, and M. Yamaguchi, “1-D and 2-D Monte Carlo Simulations for Analysis of CPV Module Characteristics Including the Acceptance Angle Impacted by Assembly Errors,” Sol. Energy 147, 448–454 (2017). [CrossRef]  

8. A. Ritou, P. Voarino, B. Goubault, N. David, S. Bernardis, O. Raccurt, and M. Baudrit, “Mechanical Tolerances Study through Simulations and Experimental Characterization for a 1000X Micro-Concentrator CPV Module,” AIP Conference Proceedings 1881, 030007 (2017).

9. C. Rapp, M. Steiner, G. Siefer, and A. W. Bett, “Stepwise Measurement Procedure for the Characterization of Large-Area Photovoltaic Modules,” Prog. Photovolt. Res. Appl. 23(12), 1867–1876 (2015). [CrossRef]  

10. A. Minuto and G. Timò, “Innovative Test Facility for a Comparative Outdoor CPV Modules Characterization,” AIP Conference Proceedings 1881, 020009 (2017).

11. A. Minuto and G. Timò, “Accurate and low cost sun pointing detector unit for concentrator photovoltaic applications,” AIP Conference Proceedings 2149, 080006 (2019).

12. P. Voarino, C. Domínguez, R. Bijl, and P. Penning, “Angular tolerance and daily performance variability of the Suncycle tracking-integrated CPV system,” AIP Conference Proceedings 1679, 130006 (2015).

13. R. Herrero, S. Askins, I. Antón, G. Sala, K. Araki, and H. Nagai, “Module optical analyzer: Identification of defects on the production line,” AIP Conference Proceedings 1616, 119–123 (2014).

14. R. Herrero, C. Domínguez, S. Askins, I. Antón, and G. Sala, “Luminescence inverse method for CPV optical characterization,” Opt. Express 21(S6), A1028–A1034 (2013). [CrossRef]  

15. R. Herrero, S. Askins, I. Antón, and G. Sala, “Evaluation of Misalignments within a Concentrator Photovoltaic Module by the Module Optical Analyzer: A Case of Study Concerning Temperature Effects on the Module Performance,” Jpn. J. Appl. Phys. 54(8S1), 08KE08 (2015). [CrossRef]  

16. R. Herrero, C. Domínguez, S. Askins, I. Antón, G. Sala, and J. Berrios, “Angular transmission characterization of CPV modules based on CCD measurements,” AIP Conference Proceedings 1277, 131–134 (2010).

17. C. Domínguez, I. Antón, and G. Sala, “Solar simulator for concentrator photovoltaic systems,” Opt. Express 16(19), 14894–14901 (2008). [CrossRef]  

18. C. Domínguez, S. Askins, I. Antón, and G. Sala, “Characterization of five CPV module technologies with the Helios 3198 solar simulator,” in Proceedings of the 34th IEEE Photovoltaic Specialists Conference (PVSC), 001004–001008 (2009).

19. C. Domínguez, S. Askins, I. Antón, and G. Sala, “Indoor characterization of CPV modules using the Helios 3198 solar simulator,” Proceedings of the 24th European Photovoltaic Solar Energy Conference (2009).

20. A. C. Bovik, Handbook of Image and Video Processing (Academic Press, 2010).

21. R. Herrero, C. Domínguez, S. Askins, I. Antón, and G. Sala, “Two-dimensional angular transmission characterization of CPV modules,” Opt. Express 18(S4), A499–A505 (2010). [CrossRef]  

22. S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: Edge-aware image processing with a Laplacian pyramid,” Commun. ACM 58(3), 81–91 (2015). [CrossRef]  

23. M. Aubry, S. Paris, S. W. Hasinoff, J. Kautz, and F. Durand, “Fast local Laplacian filters: Theory and applications,” ACM Trans. Graph. 33(5), 1–14 (2014). [CrossRef]  

24. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst., Man, Cybern. 9(1), 62–66 (1979). [CrossRef]  

25. M. Victoria, S. Askins, R. Herrero, I. Antón, and G. Sala, “Assessment of the optical efficiency of a Primary Lens to be used in a CPV system,” Sol. Energy 134, 406–415 (2016). [CrossRef]  

26. M. Victoria, S. Askins, R. Herrero, C. Domínguez, R. Nuñez, I. Antón, and G. Sala, “Measuring primary lens efficiency: A proposal for standardization,” AIP Conference Proceedings 1766 (2016).

27. M. D. Heath, S. Sarkar, T. Sanocki, and K. W. Bowyer, “A robust visual method for assessing the relative performance of edge-detection algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. 19(12), 1338–1359 (1997). [CrossRef]  

28. T. Peli and D. Malah, “A study of edge detection algorithms,” Comput. Graph. Image Process. 20(1), 1–21 (1982). [CrossRef]  

29. L. San José, R. Herrero, and I. Antón, “Relative misalignments estimation in on-tracker CPV modules through image processing,” AIP Conference Proceedings 2149 (2019).

Figures (7)

Fig. 1. (a) The one-dimensional angular transmission function of a misaligned unit (B) is shifted with respect to that of an aligned unit (A). (b) Cross-sectional scheme of two elementary units, without and with misalignment (receiver displaced by dφ). The maximum incidence angle at which light is transmitted varies over the POE input aperture. A camera focused on the receiver through the POE can observe the photovoltaic receiver position.
Fig. 2. (Left) Scheme of the receiver image magnified by the Fresnel lens. (Center) Image of the aligned unit taken by the camera, in which the receiver position (its center) coincides with the optical axis of the lens (Image A). (Right) Image of the misaligned unit taken by the camera, in which the receiver center is displaced a distance dφ·M from its optimum position (Image B). M represents the magnification of the receiver observed through the lens and dφ the displacement (mm) between the receiver and the lens.
Fig. 3. Maximum angles of the rays outgoing from the edges of the receiver through the center of the lens, α(ρ=0). Owing to the chromatic aberration of the POE, α also depends on wavelength; blue and red rays are drawn to show this effect.
Fig. 4. The red (R) channel of the image shows less blurring than the green (G) or blue (B) channels (left). Automated image processing enhances the image and detects the edges and center of the receiver area (right). Red edges are used to determine the position in one dimension, blue edges in the other.
Fig. 5. ATF obtained from ray-tracing simulations using as light source 530 nm monochromatic light (left) and the reference solar spectrum AM1.5D weighted by the spectral response of the camera’s red channel (right).
Fig. 6. Result of the calibration process for the case study. The plot shows the error in pixels, measured from the acquired images, as a function of the receiver misalignment (in degrees) introduced with the XY stage. The black dots are the measured points, while the line is a linear regression. The slope of the line is the reciprocal of the calibration constant K (°/pixel).
Fig. 7. Error (defined as the difference between true and measured misalignment) vs. true misalignment for the two conversion constants obtained through ray-tracing simulations according to Fig. 5 and Table 1.

Tables (1)

Table 1. Conversion constants K (°/pixel) and corresponding α (°) values, according to Eq. (2), obtained both by ray tracing and experimentally. Case study: Fresnel lens 40 × 40 mm, focal distance 76.3 mm.

Equations (3)

$$\alpha\,(\mathrm{rad}) \approx \tan\alpha = \frac{r}{F_{CPV}} \tag{1}$$
$$K\left(\frac{^{\circ}}{\mathrm{pixel}}\right) = \frac{\alpha\,(^{\circ})}{r\,(\mathrm{mm})\cdot M}\; t_p\left(\frac{\mathrm{mm}}{\mathrm{pixel}}\right) \tag{2}$$
$$\varphi = \arctan\left(\frac{d_{\varphi}}{F_{CPV}}\right) \tag{3}$$
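The three equations above can be checked numerically. In this minimal sketch, F_CPV is the study-case focal distance (76.3 mm, Table 1), while the receiver half-size r, magnification M, pixel pitch t_p, and displacement dφ are assumed values chosen only for illustration, not taken from the paper:

```python
import math

# Numerical sketch of Eqs. (1)-(3) for a single lens-receiver unit.
F_CPV = 76.3   # lens focal distance (mm), study case
r = 1.5        # receiver half-size (mm), assumed
M = 4.0        # magnification of the receiver seen through the lens, assumed
t_p = 0.01     # camera pixel pitch referred to the receiver plane (mm/pixel), assumed

# Eq. (1): small-angle relation alpha(rad) ~ tan(alpha) = r / F_CPV
alpha_deg = math.degrees(math.atan(r / F_CPV))

# Eq. (2): conversion constant K in degrees per pixel
K = alpha_deg / (r * M) * t_p

# Eq. (3): misalignment angle from a lateral receiver displacement d_phi (mm)
d_phi = 0.2    # assumed displacement between receiver and lens (mm)
phi_deg = math.degrees(math.atan(d_phi / F_CPV))

# Consistency check: the receiver shift observed in the image is d_phi * M (mm),
# i.e. d_phi * M / t_p pixels; multiplying by K recovers phi in the small-angle limit.
pixel_shift = d_phi * M / t_p
print(f"alpha = {alpha_deg:.4f} deg, K = {K:.6f} deg/pixel")
print(f"phi = {phi_deg:.4f} deg, K * pixels = {K * pixel_shift:.4f} deg")
```

The last two printed values agree to within the small-angle approximation, which is what makes a single calibration constant K sufficient for converting measured pixel displacements into misalignment angles.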