
Tunable reservoir computing based on iterative function systems


Abstract

In this study, a performance-tunable model of reservoir computing based on iterated function systems is proposed and its performance is investigated. Reservoir computing is a model of neuromorphic computation well suited to physical implementation owing to its ease of realization. Iterated function systems, originally devised for fractal generation, are applied to embody a reservoir that generates diverse responses for computation. Flexibility in the parameter space of the iterated function systems allows the properties of the reservoir and the performance of reservoir computation to be tuned. Computer simulations reveal the features of the proposed reservoir computing model in a chaotic signal prediction problem. An experimental system was constructed to demonstrate an optical implementation of the proposed method.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Reservoir computing is a computational model based on a recurrent neural network [1]. Unlike a feedforward neural network, a recurrent neural network has feedback connections between the neurons. Through these recurrent connections, past input signals remain in the network, and an output that reflects the sequence of input signals can be acquired. Recurrent neural networks are therefore well suited to processing dynamic data such as sequential signals.

In typical supervised training of neural networks, the weights connecting all the neurons are updated to minimize the difference between the target and calculated output values. As the number of neurons composing the network increases, the number of parameters and the calculation cost grow. Reservoir computing avoids these problems by fixing the connection weights among most of the neurons and updating only the weights of the connections to the output nodes, which allows the hardware to be simplified and the training process to be accelerated. Such a fixed-connection neural network is called a reservoir, and it functions as a generator of nonlinear responses to the input signals. Owing to this ease of implementation, many physical processes can be utilized as the reservoir.

In previous research, several optical reservoir computing systems using optical components with nonlinear responses and delayed feedback have been proposed [2]. The combination of an optical fiber and a Mach-Zehnder modulator provides nonlinear responses to input signals and realizes signal classification tasks such as discriminating square and sine waves [3,4]. Moreover, hundreds of nodes in the reservoir layer have been implemented with a nonlinear optoelectronic oscillator, achieving spoken-digit recognition [5]. Another approach is to implement the nodes using integrated photonic chips with semiconductor optical amplifiers [6] or silicon ring resonators [7]. The construction of a photonic integrated circuit with a feedback loop structure provides Boolean logic operations [8]. Photonic crystal cavities can also generate nonlinear and delayed responses and enable header recognition [9]. In each of these systems, the connection matrix, which describes the properties of the network, is determined by the structure of the optical system. In general, the prediction performance depends on this matrix and can degrade depending on the type of prediction task [10]. To predict various types of time-series data, it is essential to build the system so that the connection matrix can be adjusted through the optical setup. This limitation can be avoided by reservoir computing using free-space optics, in which the connection matrix is expressed by a light transfer matrix. A diffuser or a multimode fiber can be used to modulate the wavefront of propagating light and easily perform matrix operations [11–14]. Furthermore, phase modulation using a spatial light modulator (SLM) allows various light patterns to be generated flexibly [15]. Although these implementations provide some control flexibility, it is not sufficient to change the computational properties so that the systems can be extended to a wider range of problems. To alleviate this limitation, a new architecture based on free-space optics is considered for reservoir computing.

In this paper, a performance-tunable model of reservoir computing based on iterated function systems is proposed and its performance is investigated. In Sec. 2, the model of reservoir computing based on iterated function systems is explained, and a method for optimizing the system is presented. In Sec. 3, the performance of the proposed model is evaluated by multi-step-ahead and one-step-ahead prediction, as well as by the spectral radius of the connection matrix. Finally, Sec. 4 describes the verification results obtained with an experimental testbed based on an optical feedback system.

2. Principle

2.1 Reservoir computing

Figure 1 shows a conceptual diagram of reservoir computing. Typical models of reservoir computing include the echo state network (ESN) [16] and the liquid state machine [17]. The ESN has feedback loops in the reservoir layer, giving it memory capability, and generates dynamic responses to temporal inputs [16,18]. A variety of ESN models, such as the deep echo state network, have been proposed and their performance investigated. In this study, a mathematical model of the ESN is implemented optically, and its performance is evaluated.

Fig. 1. Model of reservoir computing.

An echo state network consists of three layers: the input, reservoir, and output layers. Neurons in the reservoir layer are connected to each other, so that they form a recurrent structure. The connection weights between the input and reservoir layers, $W_\textrm{in}$, and those within the reservoir layer, $W_\textrm{res}$, are set in advance and are not updated during processing. In contrast, the weights of the linear connections between the reservoir and output layers, $W_\textrm{out}$, are updated by linear regression.

The states of the neurons in the reservoir layer at time $t$, $X(t)$, are updated by

$$X(t+1)=f[W_\textrm{res}X(t) + W_\textrm{in}u(t)],$$
where $u(t)$ is the input signal at time $t$, and $f$ is a nonlinear function such as a hyperbolic tangent or sigmoid function. The state $X(t)$ stored in the neurons is transferred to the other neurons in the reservoir layer according to the connection weights $W_\textrm{res}$. After the input signal $W_\textrm{in}u(t)$ is added, the nonlinear function is applied, and the state of the reservoir is updated to $X(t+1)$.

The ESN is capable of handling linearly inseparable tasks by mapping the input signals onto a high-dimensional space. For an ESN to perform tasks such as sequential signal prediction, a condition called the echo state property should be satisfied. The echo state property requires that the influence of past input signals gradually vanish from the reservoir layer.
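For concreteness, the following is a minimal NumPy sketch of the update in Eq. (1), with $W_\textrm{res}$ rescaled so that its spectral radius is below unity, a common heuristic for obtaining the echo state property. The network size, weight ranges, and scaling target are illustrative assumptions, not values used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res, n_in = 100, 1                      # illustrative sizes
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))

# Rescale W_res so its spectral radius is below 1, a common heuristic
# for the echo state property.
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

def step(X, u):
    """One reservoir update following Eq. (1) with f = tanh."""
    return np.tanh(W_res @ X + W_in @ np.atleast_1d(u))

X = np.zeros(n_res)
for u in np.sin(0.1 * np.arange(200)):    # toy input sequence
    X = step(X, u)
```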

2.2 Iterated function systems

In this research, implementation of the node connections in the reservoir layer, $W_\textrm{res}$, was attempted using an optical fractal synthesizer [19]. The optical fractal synthesizer was originally proposed as an optical computing system for producing fractal shapes based on iterated function systems, and was later extended to generate pseudorandom signals for stream ciphers [20]. In particular, this pseudorandom signal generation capability can be applied to the neuron connections in the reservoir layer.

An iterated function system (IFS) is defined as a function system consisting of a complete metric space $(\mathbf{X}, d)$ and a finite set of contraction mappings $w_n:\mathbf{X}\longrightarrow \mathbf{X}\; (n=1,2,\dots,N)$, where $\mathbf{X}$ is a space, $d$ is a real-valued distance function, and $N$ is a positive integer. Considering ease of implementation by optical processing, the optical fractal synthesizer adopts affine transformations composed of rotation, scaling, and translation as the mappings:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix},$$
where $x', y'$ are the coordinates after the transformation, $x, y$ are those before it, $s$ is a scaling factor, $\theta$ is a rotation angle, and $t_x$ and $t_y$ are translation parameters. Unlike the original iterated function systems, the optical fractal synthesizer can also generate pseudorandom signals by allowing expansive mappings. Namely, for $s<1$, fractal patterns are generated, and for $s>1$, pseudorandom patterns are obtained.
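The mapping of Eq. (2) is straightforward to express in code. The following sketch applies it to a set of points and illustrates the contraction ($s<1$) and expansion ($s>1$) regimes; the parameter values are arbitrary examples.

```python
import numpy as np

def affine_map(points, s, theta, tx, ty):
    """Apply the scaled rotation plus translation of Eq. (2) to an
    (N, 2) array of (x, y) coordinates."""
    c, sn = np.cos(theta), np.sin(theta)
    A = s * np.array([[c, -sn], [sn, c]])   # scaling times rotation
    return points @ A.T + np.array([tx, ty])

pts = np.random.default_rng(0).uniform(-1, 1, (1000, 2))
contracted = affine_map(pts, s=0.8, theta=np.pi / 4, tx=0.1, ty=0.0)  # fractal regime
expanded = affine_map(pts, s=1.2, theta=np.pi / 4, tx=0.1, ty=0.0)    # pseudorandom regime
```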

Because the optical fractal synthesizer processes the entire input image at once, it provides high performance through spatially parallel processing. The operations in Eq. (2) can be implemented easily with optical elements such as Dove prisms, mirrors, and lenses. Image duplication and overlapping using beam splitters enable concurrent evaluation of multiple mappings of the iterated function system.

The presented system utilizes the signal diffusion achieved with $s>1$ for the node connections $W_\textrm{res}$ in the reservoir layer. The number of mappings, the parameters of the affine transformations, and the number of iterations determine $W_\textrm{res}$, and they can be controlled by adjusting the optical setup, such as the rotation angles of the Dove prisms and the tilt angles of the mirrors. In most physical implementations of reservoirs, the reservoir properties are fixed by the physical system and are difficult to configure. In the reservoir based on the optical fractal synthesizer, by contrast, $W_\textrm{res}$ can be easily configured by controlling the optical parameters.
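To illustrate how an IFS can induce a connection matrix, the sketch below discretizes the affine maps on a pixel grid and accumulates nearest-pixel destinations into a matrix. This is only one plausible numerical reading of the correspondence between the optical mappings and $W_\textrm{res}$; the boundary handling, equal weighting, and parameter values are assumptions for illustration, not the construction used in this study.

```python
import numpy as np

def build_ifs_matrix(size, maps, n_iter=1):
    """Crude discretization: push every pixel of a size x size grid
    through each affine map (s, theta, tx, ty) of Eq. (2) and
    accumulate the nearest destination pixels into a matrix."""
    n = size * size
    W = np.zeros((n, n))
    ys, xs = np.divmod(np.arange(n), size)
    coords = np.stack([xs - size / 2, ys - size / 2], axis=1)  # centered (x, y)
    for s, theta, tx, ty in maps:
        c, sn = np.cos(theta), np.sin(theta)
        dest = coords @ (s * np.array([[c, -sn], [sn, c]])).T + np.array([tx, ty])
        dx = np.clip(np.round(dest[:, 0] + size / 2), 0, size - 1).astype(int)
        dy = np.clip(np.round(dest[:, 1] + size / 2), 0, size - 1).astype(int)
        W[dy * size + dx, np.arange(n)] += 1.0 / len(maps)
    return np.linalg.matrix_power(W, n_iter)   # repeated application

# Two expansive maps (s > 1), the pseudorandom-diffusion regime:
W_res = build_ifs_matrix(16, [(1.2, np.pi / 3, 2.0, 0.0),
                              (1.2, -np.pi / 3, -2.0, 1.0)], n_iter=3)
```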

2.3 Computational model

The model of reservoir computing based on iterated function systems is explained here. In the following, we call a reservoir implemented by an iterated function system an IFS reservoir, and the computational model is simply called IFS reservoir computing.

IFS reservoir computing is performed according to Eq. (1), and the process is depicted in Fig. 2. The input signal at time $t$, $u(t)$, is converted into a two-dimensional image by multiplication with the connection matrix $W_\textrm{in}$. The state of the IFS reservoir at time $t$, $X(t)$, is assigned as the input image of the iterated function system, and then the signal transfer expressed by $W_\textrm{res}$ is performed for a predetermined number of iterations. The two images are added and processed by the nonlinear function, and the resultant image is set as the state of the IFS reservoir at time $t+1$.

Fig. 2. Model of IFS reservoir computing.

The processed image is multiplied by a variable weight $W_\textrm{out}$ to generate the output signal. In this research, a subset of pixels $X'(t)$ was extracted from the reservoir state $X(t)$ and transferred to the output layer. The output signal $y(t)$ is then obtained by:

$$y(t) = W_\textrm{out}X'(t).$$

By repeating these processes, a series of output signals corresponding to the series of input signals is generated sequentially. In the framework of reservoir computing, only the connection $W_\textrm{out}$ in the output layer is trained, using pairs of input and output signal sequences under supervised training. After the training phase, output signals are predicted for an untrained sequence of input signals. Ridge regression was adopted for the training in our implementation. The loss function for ridge regression, $E$, is

$$E = \frac{1}{n}\sum_{t=1}^n(y(t)-\hat{y}(t))^2+\lambda\sum_{i=1}^{N}\omega_i^2,$$
where $n$ is the number of training samples, $\hat{y}(t)$ is the correct value, $\lambda$ is a regularization parameter, and $\omega_i$ is the $i$-th element of $W_\textrm{out}$. The optimized $W_\textrm{out}$ is determined by minimizing the loss function $E$. Selecting an appropriate value of $\lambda$ enables optimization of $W_\textrm{out}$ while suppressing overfitting.
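Minimizing Eq. (4) has the familiar closed-form ridge solution, sketched below; the variable names and the toy check are illustrative.

```python
import numpy as np

def train_readout(states, targets, lam=1e-4):
    """Closed-form ridge regression for W_out, minimizing Eq. (4).
    states:  (n_steps, n_nodes) matrix of collected states X'(t)
    targets: (n_steps,) vector of correct values y_hat(t)"""
    n_nodes = states.shape[1]
    # W_out = (X^T X + lam I)^{-1} X^T y_hat
    return np.linalg.solve(states.T @ states + lam * np.eye(n_nodes),
                           states.T @ targets)

# Toy check: recover a linear readout from noisy random states.
rng = np.random.default_rng(0)
S = rng.normal(size=(500, 20))
w_true = rng.normal(size=20)
w_out = train_readout(S, S @ w_true + 0.01 * rng.normal(size=500))
# Prediction then follows Eq. (3): y = S @ w_out
```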

The most important advantage of reservoir computing is its easy physical implementation, owing to the fact that training of $W_\textrm{res}$ is not required in the reservoir layer. In contrast, the limited ability to control the reservoir connections $W_\textrm{res}$ restricts the processing performance of this model; IFS reservoir computing provides an effective solution to this problem. In addition, to improve the memory effect of the reservoir and to reflect the restrictions of the optical implementation, the state transition of the reservoir in Eq. (1) is modified to

$$X(t+1)=\alpha f[B[W_\textrm{res}B[X(t)]]+W_\textrm{in}u(t)] + (1-\alpha)X(t),\quad 0<\alpha<1,$$
where $\alpha$ is the leaking rate, and $B$ is a quantization operator. The leaking rate adjusts the degree to which signals remain in the reservoir, which affects the sequential signal prediction capability. The quantization operator corresponds to signal digitization on the display and the image sensor; eight-bit quantization is common in typical devices.
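A direct transcription of Eq. (5) is sketched below. It assumes the reservoir state is kept as an intensity image normalized to [0, 1], as a display and camera would impose; this signal range is an assumption for illustration rather than a specification from this study.

```python
import numpy as np

def quantize(x, bits=8):
    """B[.]: quantization modeling the display and image sensor,
    assuming intensities normalized to [0, 1]."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def ifs_step(X, u, W_res, W_in, alpha=0.5):
    """One state transition of Eq. (5): quantize, diffuse through the
    IFS connections W_res, add the encoded input, apply f = tanh,
    and blend with the previous state via the leaking rate alpha."""
    pre = quantize(W_res @ quantize(X)) + W_in @ np.atleast_1d(u)
    return alpha * np.tanh(pre) + (1.0 - alpha) * X
```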

3. Performance evaluation

3.1 Multi-step-ahead prediction of the Mackey-Glass equation

Multi-step-ahead prediction of the Mackey-Glass equation was performed using the modified transition equation, Eq. (5), while varying the parameters of the IFS reservoir and the leaking rate. The Mackey-Glass equation is a delay differential equation that shows chaotic behavior for specific parameters. A discrete-time representation is

$$u(t+1)=au(t)+\frac{bu(t-\tau)}{c+u(t-\tau)^m}+0.5,$$
where $a, b, c$, and $m$ are constants, and $\tau$ is the delay. In the experiment, we set $a=0.9$, $b=0.2$, $c=1$, $m=10$, and $\tau=17$ to generate chaotic signals. With $\tau=17$, the Mackey-Glass signal shows chaotic behavior with two peaks and is used as a prediction benchmark [21].
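The series used below for training and testing can be generated directly from Eq. (6); the constant initial history in this sketch is an assumption, since the initialization is not specified above.

```python
import numpy as np

def mackey_glass(n_steps, a=0.9, b=0.2, c=1.0, m=10, tau=17, u0=0.5):
    """Iterate the discrete Mackey-Glass map of Eq. (6); the first
    tau samples serve as an (assumed) constant initial history."""
    u = np.full(n_steps + tau, u0)
    for t in range(tau, n_steps + tau - 1):
        u[t + 1] = a * u[t] + b * u[t - tau] / (c + u[t - tau] ** m) + 0.5
    return u[tau:]

series = mackey_glass(30000)   # training-length chaotic sequence
```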

The number of pixels of the IFS reservoir $X(t)$ was set to 64 $\times$ 64. Sequential pairs of $u(t)$ and $u(t+1)$ for 30,000 time steps calculated by Eq. (6) were used for training, and the subsequent output signals were predicted. The number of predicted values for which the mean square error (MSE) between the output $y(t)$ and the correct value $u(t+1)$ remained below 0.01 was used as the performance measure. Figure 3 shows an example of a prediction with a high score; the parameters of the IFS reservoir are summarized in Table 1. The chaotic signal was predicted for 261 time steps while satisfying $\textrm{MSE} < 0.01$.
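One plausible reading of this criterion is the number of consecutive prediction steps over which the running MSE stays below the threshold, as in the sketch below; the exact MSE windowing is not specified above, so this is an assumption.

```python
import numpy as np

def valid_horizon(y_pred, y_true, threshold=0.01):
    """Count consecutive prediction steps for which the running MSE
    stays below the threshold (an assumed reading of MSE < 0.01)."""
    err2 = np.cumsum((np.asarray(y_pred) - np.asarray(y_true)) ** 2)
    mse = err2 / np.arange(1, len(err2) + 1)
    exceeded = np.nonzero(mse >= threshold)[0]
    return len(mse) if exceeded.size == 0 else int(exceeded[0])
```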

Fig. 3. Multi-step-ahead prediction of the Mackey-Glass equation by simulation.

Table 1. Simulation parameters of IFS reservoir computing for predicting the Mackey-Glass equation.

3.2 One-step-ahead prediction of the Santa Fe time series

As another problem for IFS reservoir computing, a one-step-ahead prediction task was performed on the Santa Fe time series. The Santa Fe time series is a series of response signals generated from a far-infrared laser in a chaotic state, which is used as a benchmark for reservoir computing [22]. The numbers of data items for training and prediction were set to 3,000 and 1,000, respectively, and the prediction performance was evaluated by the normalized mean square error (NMSE):

$$\textrm{NMSE} = \frac{1}{n\sigma^2}\sum_{t=1}^n(y(t)-\hat{y}(t))^2,$$
where $n$ is the number of data items, $\sigma$ is the standard deviation of the input series, $y(t)$ is the prediction, and $\hat{y}(t)$ is the correct value.
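Equation (7) translates directly into code; in this sketch $\sigma^2$ is taken as the variance of the target series, which coincides with the input series in one-step-ahead prediction.

```python
import numpy as np

def nmse(y_pred, y_true):
    """NMSE of Eq. (7), with sigma^2 taken as the variance of the
    target series."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return np.mean((y_pred - y_true) ** 2) / np.var(y_true)
```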

By varying the parameters of the IFS reservoir and the leaking rate, one-step-ahead prediction of the Santa Fe series was performed. The best performance was obtained with the parameter values shown in Table 2, and the result is shown in Fig. 4. In the figure, the original series, the predicted signals, and their difference are displayed in order. The obtained NMSE is $8.5 \times 10^{-3}$, which confirms the high accuracy of the presented system in predicting the Santa Fe series.

Fig. 4. (a) One-step-ahead prediction, (b) label of the Santa Fe time series, and (c) their difference, by simulation.

Table 2. Simulation parameters of IFS reservoir computing for predicting the Santa Fe time series.

3.3 Spectral radius evaluation

The spectral radius of the connection matrix in the reservoir layer, $W_\textrm{res}$, can be used as a performance measure of reservoir computing. The spectral radius is the largest absolute value of the eigenvalues of a matrix and is related to the memory performance of the reservoir: the memory performance increases as the spectral radius increases. The spectral radius of a matrix $W$ is defined as

$$\rho(W)=\max(|\lambda_i|, i=1, 2, \ldots, n),$$
where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of the matrix. For the IFS reservoir with leaking rate $\alpha$, the effective connection matrix is calculated as
$$W = \alpha W_\textrm{res} + (1-\alpha)I,$$
where $I$ is the identity matrix. The spectral radius giving the best reservoir computing performance differs for individual target signals [6,23], so control of the spectral radius makes it possible to handle various types of time-series data. To investigate the characteristics of the IFS reservoir, the spectral radius and the NMSE of one-step-ahead prediction of the Santa Fe time series were calculated. The individual parameters were set to the values shown in Table 3, and the relationship between the spectral radius and the NMSE was examined comprehensively. The size of the input image was 64 $\times$ 64, and all the pixels were used for training. The leaking rate was set to 1.0, which provided the best performance in the performance evaluation. Figure 5 shows the relation between the spectral radius and the NMSE. For three and five iterations, the correlation coefficients were larger than 0.7, which indicates a connection between the spectral radius and the prediction performance. This result shows that one-step-ahead prediction of the Santa Fe time series requires memory capability. In addition, it was confirmed that the combination of the scaling factors 0.8 and 1.0 produces a smaller spectral radius and increases the prediction performance. This indicates that the spectral radius depends not only on the number of iterations but also on the parameters of the affine transformations, and that control of each parameter improves the prediction.
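The quantity examined in Fig. 5 can be computed as in the following sketch, combining Eqs. (8) and (9):

```python
import numpy as np

def spectral_radius(W_res, alpha=1.0):
    """Spectral radius (Eq. (8)) of the effective connection matrix
    of Eq. (9), W = alpha * W_res + (1 - alpha) * I."""
    W = alpha * W_res + (1.0 - alpha) * np.eye(W_res.shape[0])
    return float(max(abs(np.linalg.eigvals(W))))
```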

Fig. 5. Spectral radius and NMSE in one-step-ahead prediction of the Santa Fe time series.

Table 3. Combinations of parameters in IFS reservoir computing.

4. Experimental verification

4.1 Experimental IFS reservoir system

To evaluate the performance of the optical implementation of the proposed IFS reservoir, an experimental testbed was constructed. Figure 6 shows the optical setup. BS is a beam splitter, and PM is a partial mirror for image dividing and combining; the reflection ratio of both devices was 50%. Lenses L1 through L3 had the same focal length of 200 mm, and lens L4 had a focal length of 150 mm. The input signals to the IFS reservoir were supplied by a display (MIP3508, Prament; number of pixels: 480 $\times$ 320). A region of 37 $\times$ 30 pixels in the projection area (218 $\times$ 180 pixels) of the display was sampled and used as the signals of the IFS reservoir to decrease the computational cost of the ridge regression. The input image was duplicated by a beam splitter (BS1), and the rotation and translation of the image mapping were performed using Dove prisms and tilted mirrors, respectively. The mapped images were combined by another beam splitter (BS2) and captured by an image sensor (GS3-U3-123S6, FLIR; number of pixels: 4096 $\times$ 3000). The captured image of 3496 $\times$ 3000 pixels was resized to the projection size and fed back electrically, for simplicity of the optical implementation as well as flexibility of signal processing. For multiple iterations, the captured image is fed back to the display to implement the iterated function system. After a predetermined number of iterations of the IFS process, the obtained signal was updated by Eq. (5). The next state of the IFS reservoir is supplied to the display as input, and the same procedure is repeated to evolve the state. The output weight is determined using the reservoir state at each time, and the output signal $y(t)$ is obtained.

Fig. 6. Optical setup of the experimental IFS reservoir testbed.

4.2 Experimental results of signal prediction

Multi-step-ahead prediction was performed for the Mackey-Glass equation in Eq. (6) with the same parameters as in the simulation. The reservoir parameters are shown in Table 4; each value was estimated from the captured images. These parameters differ from the simulation because the number of iterations was fixed to 1 in the optical setup and the captured image was resized for feedback to the display. The number of training data items was 30,000, and a hyperbolic tangent function was used as the nonlinear function. The initial state of the reservoir was set to zero. Figure 7 shows the predicted and correct values. As shown in the figure, the correct values were predicted successively after the training phase. The prediction term satisfying $\textrm{MSE} < 0.01$ was 85 time steps.

Fig. 7. Multi-step-ahead prediction of the Mackey-Glass equation by the experimental IFS reservoir testbed.

Table 4. Experimental parameters in IFS reservoir computing.

As another problem, one-step-ahead prediction was performed on the Santa Fe time series with the experimental testbed. The experimental conditions were the same as the best combination found in the performance evaluation. The numbers of training and prediction data were again set to 3,000 and 1,000, respectively, and the parameters were those listed in Table 4. Figure 8 shows the original series, the predicted signals, and their difference. The obtained NMSE was 0.033, which is superior to another demonstration of a physical reservoir using an optical fiber loop [14]. Although the prediction performance of the experimental IFS reservoir system was not as good as that of the computer simulation, the obtained results offer a promising perspective for IFS reservoir computing, considering the ability to tune the performance and the flexibility of optical implementations.

Fig. 8. (a) One-step-ahead prediction, (b) label of the Santa Fe time series, and (c) their difference, by the experimental IFS reservoir testbed.

5. Conclusions

In this study, a performance-tunable model of reservoir computing based on iterated function systems has been proposed and its performance investigated. Iterated function systems devised for fractal generation are applied to embody a reservoir for generating diverse responses for computation. Flexibility in the parameter space of the iterated function systems allows the characteristics of the reservoir and the performance of reservoir computation to be tuned. Computer simulations revealed the features of the proposed reservoir computing model in a chaotic signal prediction problem. An experimental system demonstrated the performance that could be achieved in signal prediction and showed promising aspects of the proposed framework.

Funding

Japan Science and Technology Agency (JPMJCR18K2).

Acknowledgments

The authors would like to thank Sho Shirasaka and Hideyuki Suzuki, Osaka University, for informative discussions on the performance evaluation of the proposed system.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. Schrauwen, D. Verstraeten, and J. Van Campenhout, "An overview of reservoir computing: theory, applications and implementations," in Proceedings of the 15th European Symposium on Artificial Neural Networks (2007), pp. 471–482.

2. G. Van Der Sande, D. Brunner, and M. C. Soriano, “Advances in photonic reservoir computing,” Nanophotonics 6(3), 561–576 (2017). [CrossRef]  

3. F. Duport, B. Schneider, A. Smerieri, M. Haelterman, and S. Massar, “All-optical reservoir computing,” Opt. Express 20(20), 22783–22795 (2012). [CrossRef]  

4. Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2(1), 287 (2012). [CrossRef]  

5. L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20(3), 3241 (2012). [CrossRef]  

6. K. Vandoorne, J. Dambre, D. Verstraeten, B. Schrauwen, and P. Bienstman, “Parallel Reservoir Computing Using Optical Amplifier,” IEEE Trans. Neural Netw. 22(9), 1469–1481 (2011). [CrossRef]  

7. F. D. L. Coarer, M. Sciamanna, A. Katumba, M. Freiberger, J. Dambre, P. Bienstman, and D. Rontani, “All-Optical Reservoir Computing on a Photonic Chip Using Silicon-Based Ring Resonators,” IEEE J. Sel. Top. Quantum Electron. 24(6), 1–8 (2018). [CrossRef]  

8. K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, and P. Bienstman, “Experimental demonstration of reservoir computing on a silicon photonics chip,” Nat. Commun. 5(1), 3541 (2014). [CrossRef]  

9. F. Laporte, A. Katumba, J. Dambre, and P. Bienstman, “Numerical demonstration of neuromorphic computing with photonic crystal cavities,” Opt. Express 26(7), 7955 (2018). [CrossRef]  

10. A. A. Ferreira, T. B. Ludermir, and R. R. De Aquino, “An approach to reservoir computing design and training,” Expert Systems with Applications 40(10), 4172–4182 (2013). [CrossRef]  

11. U. Paudel, M. Luengo-Kovac, J. Pilawa, T. J. Shaw, and G. C. Valley, “Classification of time-domain waveforms using a speckle-based optical reservoir computer,” Opt. Express 28(2), 1225–1237 (2020). [CrossRef]  

12. M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-Scale Optical Reservoir Computing for Spatiotemporal Chaotic Systems Prediction,” Phys. Rev. X 10(4), 041037 (2020). [CrossRef]  

13. E. Khoram, A. Chen, D. Liu, L. Ying, Q. Wang, M. Yuan, and Z. Yu, “Nanophotonic media for artificial neural inference,” Photonics Res. 7(8), 823–827 (2019). [CrossRef]  

14. S. Sunada, K. Kanno, and A. Uchida, “Using multidimensional speckle dynamics for high-speed, large-scale, parallel photonic computing,” Opt. Express 28(21), 30349 (2020). [CrossRef]  

15. J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, "Reinforcement learning in a large scale photonic recurrent neural network," Optica 5(6), 756–760 (2018). [CrossRef]

16. H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science 304(5667), 78–80 (2004). [CrossRef]  

17. W. Maass and H. Markram, “On the computational power of circuits of spiking neurons,” Journal of computer and system sciences 69(4), 593–616 (2004). [CrossRef]  

18. G. Manjunath and H. Jaeger, “Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks,” Neural computation 25(3), 671–696 (2013). [CrossRef]  

19. J. Tanida, A. Uemoto, and Y. Ichioka, “Optical fractal synthesizer: Concept and experimental verification,” Appl. Opt. 32(5), 653 (1993). [CrossRef]  

20. T. Sasaki, H. Togo, J. Tanida, and Y. Ichioka, “Stream cipher based on pseudorandom number generation with optical affine transformation,” Appl. Opt. 39(14), 2340 (2000). [CrossRef]  

21. L. Junges and J. A. Gallas, “Intricate routes to chaos in the mackey–glass delayed feedback system,” Phys. Lett. A 376(30-31), 2109–2116 (2012). [CrossRef]  

22. A. Weigend and N. Gershenfeld, "Results of the time series prediction competition at the Santa Fe Institute," in IEEE International Conference on Neural Networks (1993), Vol. 3, pp. 1786–1793.

23. K. Vandoorne, W. Dierckx, B. Schrauwen, D. Verstraeten, R. Baets, P. Bienstman, and J. Van Campenhout, “Toward optical signal processing using Photonic Reservoir Computing,” Opt. Express 16, 1182–1192 (2008). [CrossRef]  
