Abstract

The interaction between light and matter during laser machining is particularly challenging to model via analytical approaches. Here, we show the application of a statistical approach that constructs a model of the machining process directly from experimental images of the laser machined sample, thereby negating the need for understanding the underlying physical processes. Specifically, we use a neural network to transform a laser spatial intensity profile into an equivalent scanning electron microscope image of the laser-machined target. This approach enables the simulated visualization of the result of laser machining with any laser spatial intensity profile, and hence demonstrates predictive capabilities for laser machining. The trained neural network was found to have encoded functionality that was consistent with the laws of diffraction, hence showing the potential of this approach for discovering physical laws directly from experimental data.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Analytical approaches for the simulated visualization of the result of laser machining require an understanding of the propagation of light, and of the interaction of light and matter. In the case of femtosecond laser pulses, where nonlinear effects can dominate, the required calculations can become considerably more complicated [1–5]. Such precise analytical modelling approaches, whilst useful in furthering the understanding of the principles of laser machining, cannot readily be applied to produce a visualization of the appearance of the laser machined surface that occurs in practice. This is predominantly due to the difficulty in scaling up the calculations to larger areas whilst also including appropriate experimental randomness, such as non-uniformity of laser spatial intensity, imperfections in the target sample, and the production of surface debris. Here we demonstrate an approach that creates a model of the laser machining processes directly from experimental images of the laser machined surface, thereby negating the need for any analytical modelling. Specifically, we apply a neural network (NN) as a purely statistical modeling approach that enables the prediction of the result of femtosecond laser machining, with zero requirement for understanding the interactions between light and matter, hence showing the potential for modelling complex physical processes.

2. Neural networks

NNs [6–10] are programmatically constructed by connecting large numbers of processing elements known as neurons, which each take in a series of weighted input values and produce a single output value. Such a network can introduce nonlinearities between the input and output layers, and hence a NN can be made equivalent to any transfer function [11]. A NN is generally trained on a set of input-output data pairs, via a process known as backpropagation [12], which repeatedly produces small modifications in the neuron weightings in order to minimize the difference between the NN output and the real output, and hence no hard-coding of neuron weightings is required. Once trained, the NN can then be used to generate the output result for inputs that were not in the training data set.
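The weighted-sum-plus-nonlinearity behaviour of a single neuron can be sketched in a few lines; this is a minimal illustration, with the sigmoid chosen here as one common example of the nonlinearity (the networks used in this work employ other activation functions):

```python
import math

def neuron(inputs, weights, bias):
    """Single neuron: a weighted sum of inputs plus a bias, passed
    through a nonlinearity (here, a sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights and zero bias the weighted sum is 0, so the
# neuron outputs sigmoid(0) = 0.5; training adjusts the weights.
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # → 0.5
```

Backpropagation nudges each weight in the direction that reduces the output error, repeated over the whole training set.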

A convolutional neural network (CNN) is a type of NN that applies a series of convolutional processes on the input data to extract spatial information for use in subsequent neuron layers, where the weightings of the convolutional processes are self-optimised during training [13–15]. CNNs can act as a transfer function between two images [16] via training on a data set of paired input-output images. The choice of loss function that is used during training to minimize the difference between the CNN output images and the real output images is an ongoing challenge, as loss functions such as the sum of least-squares difference for all image pixels have been shown to train the CNN to produce blurry output images [17].
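The blurring effect of a least-squares loss can be seen with a one-pixel thought experiment (a simplified sketch, not the training procedure used here): when several sharp outputs are equally plausible for the same input, the least-squares-optimal prediction is their mean, i.e. a grey average rather than any one sharp outcome.

```python
def l2_optimal_pixel(plausible_values):
    """The prediction that minimizes the expected squared error over a
    set of equally likely outcomes is their mean -- so a pixel that
    could plausibly be black (0) or white (1) is pushed to grey."""
    return sum(plausible_values) / len(plausible_values)

# Equally likely black and white outcomes -> an L2-trained network
# prefers grey, which appears as blur over a whole image.
print(l2_optimal_pixel([0.0, 1.0]))  # → 0.5
```

This is why an adversarially learnt loss, as described next, can produce sharper images than a hand-chosen per-pixel loss.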

A conditional generative adversarial network (cGAN), which can be envisaged as two CNNs that are trained simultaneously, learns both the transfer function and the loss function, and can produce output images that are sharp (i.e. not blurry) and hence hard to distinguish from real images, such as those in the training data set [18–23]. The network consists of a generator, which transforms an input image into a generated output image, and a discriminator, which receives an input image along with either the associated real output image (from the training data set) or the associated generated output image (from the generator). The discriminator then judges whether the output image is real or generated. In general, the discriminator is trained to minimize the probability of itself judging ‘generated’ when real and vice-versa, and the generator is trained to maximize the probability that the discriminator judges ‘real’ when generated, whilst also minimizing the difference between the generated output and the real output. The consequence of this adversarial training approach is that both the generator and the discriminator improve in effectiveness during the training process. The generator and discriminator are randomly initialised, and hence all encoding related to the training data set is learnt during training. Once the cGAN is trained, the generator can be used as an image transfer function, for the transformation of images that were not in the training data set.
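The adversarial objectives described above can be sketched numerically as follows. This is a minimal illustration only: the discriminator output is modelled as a single probability, binary cross-entropy is used for both judgements, and the L1 weighting `lam = 100` follows the pix2pix convention of Ref. [18] and is an assumption, as the weighting used in this work is not stated here.

```python
import math

def bce(p, label):
    """Binary cross-entropy for one predicted probability p in (0, 1)."""
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def discriminator_loss(p_real, p_generated):
    """The discriminator is trained to judge 'real' (1) on real pairs
    and 'generated' (0) on generated pairs."""
    return bce(p_real, 1) + bce(p_generated, 0)

def generator_loss(p_generated, gen_img, real_img, lam=100.0):
    """The generator is trained to make the discriminator judge 'real'
    on generated pairs, whilst an L1 term also pulls the generated
    image towards the real output (lam is an assumed weighting)."""
    l1 = sum(abs(g - r) for g, r in zip(gen_img, real_img)) / len(gen_img)
    return bce(p_generated, 1) + lam * l1

# A discriminator that is 90% confident on both judgements:
print(discriminator_loss(0.9, 0.1))                  # ~0.21
# A nearly fooled discriminator, with identical images (L1 = 0):
print(generator_loss(0.99, [0.2, 0.8], [0.2, 0.8]))  # ~0.01
```

Alternating updates of the two losses is what drives both networks to improve together during training.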

3. Experimental setup

As shown in Fig. 1, a digital micromirror device (DMD) was used as a binary intensity spatial light modulator, in order to spatially shape femtosecond laser pulses [24–28]. In this manuscript, a DMD pattern refers to a 1024 by 1024 white and black image, corresponding to laser light or no laser light respectively, that was rescaled and uploaded to the DMD, resulting in the spatial intensity profile of each laser pulse having the equivalent pattern immediately after the DMD. These spatially shaped pulses were then imaged onto the target sample (an electroless nickel mirror) resulting in laser machining of the surface of the sample, where the size of the laser-machined region for a single DMD pattern was 28 µm by 28 µm. A total of 172 DMD patterns were laser-machined, using three laser pulses each, and paired with the associated experimentally-measured scanning electron microscope (SEM) images in order to form the training data set. The cGAN was trained on the training data set, with the purpose of creating an image transfer function that would transform any DMD pattern into an equivalent generated SEM image. No post-processing occurred on any generated or experimental SEM images in this manuscript.


Fig. 1 Procedure for creating an image transfer function that can turn a laser spatial intensity into an equivalent generated SEM image. A DMD was used for shaping the spatial intensity profile of single laser pulses that were then imaged onto the sample for laser machining, where the white and black pixels on the DMD patterns correspond to laser light and no laser light, respectively. Each DMD pattern was paired with the associated experimentally-measured SEM image, in order to form the training data set. Single DMD patterns were paired with either the associated experimental or generated SEM image and used as the input to the discriminator. The discriminator was trained to detect whether the input was an experimental or generated SEM image, and the generator was trained to convince the discriminator to judge generated SEM images as experimental. The overriding goal was the creation of the image transfer function.


In more detail, spatially homogenized laser pulses (800 nm central wavelength, 150 femtosecond pulse length) were spatially shaped by a DMD (Texas Instruments DLP 3000) acting as a binary intensity spatial light modulator in order to enable high-precision laser machining. The DMD acted as a binary intensity mask, as each DMD mirror could be in the ‘on’ or the ‘off’ position, and as such, the patterns uploaded to the DMD were monochrome bitmaps, where white and black pixels corresponded to the presence of laser light and no laser light, respectively. The region of the DMD used was 2.28 mm by 2.28 mm, and the corresponding laser-machined region was 28 µm by 28 µm, a demagnification of approximately 82 times. The target sample, an electroless nickel mirror (ULO Optics, 38 mm wide and 8 mm thick copper, with a 5 µm thick electroless nickel coating), was positioned on an XYZ translation stage, and a co-linear imaging camera was used to ensure that the sample was always at the image plane of the DMD. Single laser pulses were obtained via external control of the laser cavity, and for each DMD pattern three laser pulses were used to machine the sample, where each pattern was separated by 150 µm to minimize cross contamination via laser machining debris. Three pulses were chosen as this was found to produce a high contrast between machined and not-machined regions on the surfaces of the sample when imaging using an SEM. The laser fluence on the DMD and the surface of the sample was calculated to be 2.8 mJ.cm−2 and 1.83 J.cm−2 respectively.

The cGAN training process required the DMD patterns and SEM images in the training data set to be identical in size (1024 by 1024 was chosen, due to the limitations of the GPU used for training) and also have the same magnification for the features. In order to avoid rescaling the experimentally-measured SEM images, which could have caused information loss, the SEM images were recorded at a magnification such that a laser-machined DMD pattern would fill approximately 65% of the area of the 1024 by 1024 pixels on the resultant SEM image. This meant that the SEM images could be cropped at an image size of 1024 by 1024, and used directly in the training data set. In order to match the SEM magnification, the region of interest on the DMD pattern had to be 825 by 825 pixels, where the region outside was set to black. In order to have DMD-enabled control of laser-machined patterns at a resolution that would provide a test of the ability of the cGAN to encode the diffraction-limit, whilst also being limited by the 1024 by 1024 resolution, the number of DMD mirrors that corresponded to the 825 by 825 region of interest was chosen to be 300 by 300. This number of DMD mirrors was chosen as the corresponding total area matched the extent of the homogeneous spatial intensity profile of the laser pulses incident on the DMD. In order to scale the DMD pattern to the 300 by 300 array of mirrors, nearest neighbor interpolation was used, as this ensured the binary nature of the patterns was maintained. Consequently, each image pixel in the experimental and generated SEM images was approximately 32 nm in dimension. Some rounding errors were therefore unavoidable.
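The binary-preserving property of nearest-neighbor interpolation, which motivated its use for rescaling the 1024 by 1024 patterns onto the 300 by 300 mirror array, can be sketched as follows (toy sizes are used for brevity; bilinear interpolation would instead introduce grey intermediate values):

```python
def nearest_neighbor_rescale(pattern, new_size):
    """Rescale a square binary pattern by nearest-neighbor sampling:
    each output pixel copies the nearest input pixel, so every output
    value remains exactly 0 or 1 and the pattern stays binary."""
    old = len(pattern)
    return [[pattern[i * old // new_size][j * old // new_size]
             for j in range(new_size)]
            for i in range(new_size)]

# A 4x4 binary pattern downscaled to 2x2 remains strictly binary.
src = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
print(nearest_neighbor_rescale(src, 2))  # → [[1, 0], [0, 1]]
```

The same sampling rule, with integer index rounding, is the source of the unavoidable rounding errors noted above.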

The cGAN architecture was adapted from previous work [13] with modifications in order to increase the resolution to 1024 by 1024 and to optimise for use on an NVidia Titan X graphics processing unit. The training process involved alternating between experimental and generated data set pairs. Training on each data set pair took approximately 1 second, leading to a total time of approximately 5 hours per 100 iterations through the entire training data set, where the results shown here correspond to 400 iterations, unless stated otherwise. The image transfer function from the trained cGAN took approximately 1 second to generate a single SEM image from a single DMD pattern.

4. Experimental results and discussion

Figure 2 illustrates the distinct differences between the DMD patterns and associated experimental SEM images from the training set, which were due to the following reasons. Firstly, the limited numerical aperture of the microscope objective (NA = 0.42) led to spatial filtering of the shaped laser pulses and hence caused broadening and intensity reductions that were dependent on the exact spatial intensity profile [29–32]. Secondly, as femtosecond laser machining is a nonlinear process [33–36], a threshold effect is often observed, where a small change in the laser fluence may lead to the difference between machining and no machining. Thirdly, for SEM imaging, the image contrast is dependent on the number of electrons that are scattered from each point on the target surface into an electron detector. The consequence is that the transformation from DMD pattern to SEM image must account for diffraction theory, the non-linear interaction of light and matter, and the interaction of electrons and matter. The DMD patterns were generated randomly, by combining up to twenty lines and circles. The lines had randomly determined position, rotation, and width, and the circles had randomly determined position, radius, arc length, and width. The resultant pattern had a 50% chance of being inverted (switching all pixels from white-to-black and vice-versa).
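The random generation of the training patterns can be sketched as follows. This is a minimal illustration under stated assumptions: only lines are drawn (the circles and arcs also used are omitted for brevity), a toy pattern size is used, and the parameter ranges are illustrative rather than those used experimentally.

```python
import math
import random

def random_pattern(size=64, max_shapes=20, seed=0):
    """Sketch of the random training-pattern generator: up to
    max_shapes lines with random position, rotation and width, then a
    50% chance of inverting the whole binary pattern."""
    rng = random.Random(seed)
    img = [[0] * size for _ in range(size)]
    for _ in range(rng.randint(1, max_shapes)):
        x0, y0 = rng.uniform(0, size), rng.uniform(0, size)
        theta = rng.uniform(0, math.pi)
        half_width = rng.uniform(1, size / 8)
        nx, ny = math.sin(theta), -math.cos(theta)  # unit normal to the line
        for y in range(size):
            for x in range(size):
                # Pixel is white if its perpendicular distance to the
                # line is within the half-width.
                if abs((x - x0) * nx + (y - y0) * ny) <= half_width:
                    img[y][x] = 1
    if rng.random() < 0.5:  # 50% chance of white/black inversion
        img = [[1 - p for p in row] for row in img]
    return img

pattern = random_pattern()
assert all(p in (0, 1) for row in pattern for p in row)  # strictly binary
```

Randomizing shape parameters in this way spreads the training data over a wide variety of feature widths, orientations and combinations.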


Fig. 2 Examples of DMD patterns and associated experimentally-measured SEM images from the training data set. Showing nine of the 172 data set pairs, corresponding to training data set numbers of 15, 24, 45, 74, 85, 104, 121, 144, and 161 for (a)-(i), respectively, where the decrease in image brightness was caused by the gradual degradation in the SEM titanium filament during the 5 hours of data capture. The DMD patterns were produced in order to provide a wide distribution of training data. The cGAN was trained on the training data set in order to create an image transfer function that could generate an equivalent SEM image from any input DMD pattern. In order to generate a realistic SEM image, the image transfer function needed to be consistent with the laws of diffraction, laser machining, and SEM imaging.


Additional data set pairs, which were not in the training data set, were used in order to quantify the effectiveness of the image transfer function. These data set pairs were laser-machined at random points during the machining of DMD patterns from the training data set. Figure 3 shows DMD patterns corresponding to the letters B (offering curved and straight edges) and X (offering a symmetrical intersection of lines), along with SEM images generated via the image transfer function (for training iterations 1, 2, 5 and 400), and the associated experimentally-measured SEM images. The generated SEM images corresponding to 400 iterations appear to demonstrate features such as: 1) ablation at different depths due to spatial filtering, 2) a distinct interface between ablated and non-ablated regions, and 3) a region of melted material surrounding the machined structures. The generated SEM images also convey a sense of ‘experimental randomness’, where, for example, straight lines in the DMD pattern were transformed into uneven lines in the generated SEM images, despite the fact that the image transfer function is entirely deterministic and hence identical DMD patterns result in identical generated SEM images. The differences between the generated and experimental SEM images are attributed to two causes. Firstly, the nonlinear nature of femtosecond laser machining means that unavoidable small changes in experimental parameters such as pulse energy, sample homogeneity and machining debris, can lead to significant differences in the experimentally-measured SEM images, and therefore the experimental SEM images correspond to one of many possible outcomes. Secondly, the limited size of the training data set, combined with non-optimal convergence of the cGAN on that data set, left residual inaccuracies in the image transfer function. In order to provide evidence of the accuracy of the neural network for feature sizes close to the resolution limit of the experiment, DMD patterns with periodic designs were also investigated.
The experimental SEM images from the periodic designs were not used in the training data set, and hence are used here to demonstrate the application of the NN to unseen intensity profiles. Figure 4 shows DMD patterns, along with the associated experimental and generated SEM images for (a) above and (b) close to the resolution limit of the experimental setup. Once again, the neural network outputs are remarkably similar to the experimental measurements, hence providing further experimental verification of the predictive ability of the NN.


Fig. 3 Demonstration of the effectiveness of the image transfer function for different training iterations. Showing DMD patterns corresponding to the letters (a) B and (b) X, with associated generated SEM images for image transfer functions for 1, 2, 5 and 400 cGAN training iterations. For comparison, the associated experimentally-measured SEM images are also shown. The DMD patterns and experimentally-measured SEM images were not part of the training data set, and hence this result shows the effectiveness of the image transfer function on unseen data. The generated SEM images provide a qualitative portrayal of the cGAN training convergence, showing that features, such as shadows and uneven edges, became obvious at different numbers of iterations. No further improvements were realised for iteration numbers greater than 400.


Fig. 4 Demonstration of the effectiveness of the image transfer function for DMD patterns corresponding to periodic designs, for (a) above and (b) close to the resolution limit of the experimental setup, showing the DMD pattern, and the associated experimental and generated SEM images. These DMD patterns were not in the training data set, and hence this result provides experimental verification of the accuracy of the neural network for feature sizes close to the resolution limit of the experiment.


Figure 5 shows DMD patterns consisting of lines, gaps and ring structures of varying widths, along with the associated generated SEM images, and strongly indicates the existence of a minimum resolution of features. For ease of notation, we refer to the width of features within ideally demagnified DMD patterns at the sample (that is, with no broadening or blurring by diffraction effects) as the projected line width. The generated SEM images in Fig. 5 appear to show behaviour, such as a minimum possible width of a laser-machined structure and the inability to resolve two adjacent laser-machined structures, that is consistent with the laws of diffraction.


Fig. 5 SEM images generated using the image transfer function from the trained cGAN, for DMD patterns consisting of lines, gaps and ring structures. Showing generated SEM images for projected line widths of (a) 250 nm, (b) 500 nm, (c) 1 µm, (d) 2 µm, (e) 3 µm and (f) 5 µm. For DMD patterns with a single vertical line, (c)-(f), show laser-machined structures with widths approximately proportional to the projected line width, while (a)-(b), indicate that the widths of the laser-machined structures do not decrease below a minimum size. For the DMD patterns with gaps and ring structures, (a)-(c), show the inability to resolve two adjacent laser-machined structures. These images indicate that the image transfer function from the trained cGAN contains encoding that appears consistent with the laws of diffraction. There are no associated experimental SEM images for these DMD patterns, hence demonstrating the predictive capabilities of this approach.


The sizes of laser-machined structures in generated SEM images for projected line widths from above to below the diffraction limit of the experimental setup were measured, with the goal of evaluating the encoding of diffraction. Figure 6(a) shows a generated SEM image for a projected line width of 2 µm. Part (b) (top) shows the same generated image using a high-contrast colour-map in order to emphasize the position of the edges of the generated laser-machined structures, corresponding to the purple and dark blue data regions. Part (b) (top) was averaged over the Y-axis, for 500 rows of image pixels, in order to produce a single row of data corresponding to the averaged cross-section, as shown in (b) (bottom). This process was repeated for projected line widths from 124 nm to 12 µm, in steps of 64 nm. Part (c) shows the concatenation of all averaged cross-sections, along with the size of the projected line width for each case (white dotted lines), where (d), which is a close-up of (c), shows the diffraction limit of the experimental setup (952 nm [37], white vertical lines). The experimental and generated SEM images have an intensity that is dependent on the number of electrons reflected into a backscatter detector. As such, surfaces that are angled will appear at a different intensity to non-angled surfaces. Here, when applying the high-contrast colour-map to the generated images, the angled surfaces appear at a higher colour intensity. Whilst it is challenging to determine the exact line width directly from the generated SEM images, the inside edges of the higher colour intensity regions provide a reliable measure. This is confirmed for projected line widths greater than 1 µm, where the positions of the white dotted lines correspond closely to the inside edges of the higher colour intensity regions. However, for projected line widths smaller than 1 µm, this relationship ceases.
Figure 6 demonstrates that a minimum laser-machined feature size, comparable to the diffraction limit, was observed, hence indicating that the cGAN had encoded functionality consistent with the laws of diffraction.
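The Y-axis averaging step used to produce each cross-section in Fig. 6(b) can be sketched as follows (a minimal illustration with a toy image; the actual analysis averaged 500 rows of the 1024-pixel-wide generated images):

```python
def averaged_cross_section(image, rows):
    """Average a band of image rows into a single cross-section,
    suppressing row-to-row variation so that the lateral positions of
    structure edges can be read off from one row of data."""
    band = image[:rows]
    width = len(band[0])
    return [sum(row[x] for row in band) / rows for x in range(width)]

# Averaging rows of a step edge yields a single clean step profile,
# from which the edge position (here, pixel 2) is easily located.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
print(averaged_cross_section(img, 3))  # → [0.0, 0.0, 1.0, 1.0]
```

Concatenating such cross-sections for each projected line width produces the maps shown in Figs. 6(c) and 6(d).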


Fig. 6 Reverse-engineering the image transfer function to evaluate the encoding of diffraction. Showing (a) a generated SEM image corresponding to a projected line width of 2 µm, (b) the same image converted to a high-contrast colour-map and averaged over the central 500 rows of pixels in the Y-axis to produce an averaged cross-section, and (c)-(d) the concatenation of averaged cross sections for projected line widths from 128 nm to 12 µm. In (c) the edges of the generated laser-machined structures (the purple and dark blue data regions) closely match the associated projected line width (white dotted lines) for values > 1 µm. However, as emphasized in (d) this relationship is not observed for projected line widths < 1 µm, where instead the widths of the laser-machined structures are observed to be always greater than the diffraction limit of the experimental setup (952 nm [37], white vertical lines).


It is important to realise that the NN presented here should ideally only be used to visualise parameters that can be interpolated from the information residing in the training data set, and that extrapolation outside the realm of the data set may not be as accurate. In other words, the NN generates SEM images by matching and combining components, such as straight and curved lines of particular widths, from information that was encoded during training. It is for this reason that the training data set was carefully designed to include both straight and curved lines, for a wide variety of rotations, translations, combinations, and line widths, where the minimum projected line width was 272 nm.

5. Conclusions

In conclusion, a cGAN was trained on a paired data set of laser spatial intensity profiles and the associated experimentally-measured SEM images of the laser-machined target. The image transfer function from the trained cGAN was able to transform input spatial intensity profiles into the equivalent generated SEM images, hence demonstrating predictive capabilities for laser machining. With additional data, such an approach could provide real-time visualization of the result of laser machining for any set of laser machining parameters, such as beam shape, pulse energy, laser wavelength, and material type. The cGAN was shown to have encoded functionality that was consistent with the laws of diffraction, purely from observation of experimental data, and with zero requirement for any programmatic description of the underlying physical processes. This result demonstrates the potential for NNs to encode scientific laws and theories, but also for producing realistic but fraudulent data.

Funding

Engineering and Physical Sciences Research Council (EPSRC) (EP/N03368X/1, EP/N509747/1).

Acknowledgments

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research. Supporting data for this submission can be found at https://doi.org/10.5258/SOTON/D0393.

References and links

1. B. Rethfeld, D. S. Ivanov, M. E. Garcia, and S. I. Anisimov, “Modelling ultrafast laser ablation,” J. Phys. D Appl. Phys. 50(19), 193001 (2017). [CrossRef]  

2. H. O. Jeschke, M. E. Garcia, M. Lenzner, J. Bonse, J. Krüger, and W. Kautek, “Laser ablation thresholds of silicon for different pulse durations: theory and experiment,” Appl. Surf. Sci. 197–198, 839–844 (2002). [CrossRef]  

3. J. K. Chen and J. E. Beraun, “Modelling of ultrashort laser ablation of gold films in vacuum,” J. Opt. A, Pure Appl. Opt. 5(3), 168–173 (2003). [CrossRef]  

4. S. Amoruso, R. Bruzzese, X. Wang, N. N. Nedialkov, and P. A. Atanasov, “Femtosecond laser ablation of nickel in vacuum,” J. Phys. D Appl. Phys. 40(2), 331–340 (2007). [CrossRef]  

5. M. S. Amer, M. A. El-Ashry, L. R. Dosser, K. E. Hix, J. F. Maguire, and B. Irwin, “Femtosecond versus nanosecond laser machining: comparison of induced stresses and structural changes in silicon wafers,” Appl. Surf. Sci. 242(1–2), 162–167 (2005). [CrossRef]  

6. D. F. Specht, “A general regression neural network,” IEEE Trans. Neural Netw. 2(6), 568–576 (1991). [CrossRef]   [PubMed]  

7. L. K. Hansen and P. Salamon, “Neural network ensembles,” IEEE Trans. Pattern Anal. Mach. Intell. 12(10), 993–1001 (1990). [CrossRef]  

8. A. Krogh and J. Vedelsby, “Neural network ensembles, cross validation, and active learning,” in Advances in Neural Information Processing Systems (1995), pp. 231–238.

9. G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313(5786), 504–507 (2006). [CrossRef]   [PubMed]  

10. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res. 15(1), 1929–1958 (2014).

11. K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Netw. 2(5), 359–366 (1989). [CrossRef]  

12. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986). [CrossRef]  

13. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

14. S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: A convolutional neural-network approach,” IEEE Trans. Neural Netw. 8(1), 98–113 (1997). [CrossRef]   [PubMed]  

15. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732. [CrossRef]  

16. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

17. D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2536–2544.

18. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” arXiv:1611.07004 (2017). [CrossRef]  

19. M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784 (2014).

20. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

21. A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv:1511.06434 (2015).

22. E. L. Denton, S. Chintala, and R. Fergus, “Deep generative image models using a laplacian pyramid of adversarial networks,” in Advances in Neural Information Processing Systems (2015), pp. 1486–1494.

23. S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” arXiv:1605.05396 (2016).

24. B. Mills, M. Feinaeugle, C. L. Sones, N. Rizvi, and R. W. Eason, “Sub-micron-scale femtosecond laser ablation using a digital micromirror device,” J. Micromech. Microeng. 23(3), 035005 (2013). [CrossRef]  

25. L. Gong, Y. Ren, W. Liu, M. Wang, M. Zhong, Z. Wang, and Y. Li, “Generation of cylindrically polarized vector vortex beams with digital micromirror device,” J. Appl. Phys. 116(18), 183105 (2014). [CrossRef]  

26. D. J. Heath, B. Mills, M. Feinaeugle, and R. W. Eason, “Rapid bespoke laser ablation of variable period grating structures using a digital micromirror device for multi-colored surface images,” Appl. Opt. 54(16), 4984–4988 (2015). [CrossRef]   [PubMed]  

27. R. Bruck, K. Vynck, P. Lalanne, B. Mills, D. J. Thomson, G. Z. Mashanovich, G. T. Reed, and O. L. Muskens, “All-optical spatial light modulator for reconfigurable silicon photonic circuits,” Optica 3(4), 396–402 (2016). [CrossRef]  

28. Y.-X. Ren, R.-D. Lu, and L. Gong, “Tailoring light with a digital micromirror device,” Ann. Phys. 527(7–8), 447–470 (2015).

29. N. J. Jenness, R. T. Hill, A. Hucknall, A. Chilkoti, and R. L. Clark, “A versatile diffractive maskless lithography for single-shot and serial microfabrication,” Opt. Express 18(11), 11754–11762 (2010).

30. B. P. Cumming, A. Jesacher, M. J. Booth, T. Wilson, and M. Gu, “Adaptive aberration compensation for three-dimensional micro-fabrication of photonic crystals in lithium niobate,” Opt. Express 19(10), 9419–9425 (2011).

31. L. Yang, D. Qian, C. Xin, Z. Hu, S. Ji, D. Wu, Y. Hu, J. Li, W. Huang, and J. Chu, “Two-photon polymerization of microstructures by a non-diffraction multifoci pattern generated from a superposed Bessel beam,” Opt. Lett. 42(4), 743–746 (2017).

32. C. Zhang, Y. Hu, W. Du, P. Wu, S. Rao, Z. Cai, Z. Lao, B. Xu, J. Ni, J. Li, G. Zhao, D. Wu, J. Chu, and K. Sugioka, “Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels,” Sci. Rep. 6(1), 33281 (2016).

33. E. G. Gamaly, A. V. Rode, B. Luther-Davies, and V. T. Tikhonchuk, “Ablation of solids by femtosecond lasers: Ablation mechanism and ablation thresholds for metals and dielectrics,” Phys. Plasmas 9(3), 949–957 (2002).

34. P. P. Pronko, S. K. Dutta, J. Squier, J. V. Rudd, D. Du, and G. Mourou, “Machining of sub-micron holes using a femtosecond laser at 800 nm,” Opt. Commun. 114(1–2), 106–110 (1995).

35. M. S. Amer, M. A. El-Ashry, L. R. Dosser, K. E. Hix, J. F. Maguire, and B. Irwin, “Femtosecond versus nanosecond laser machining: comparison of induced stresses and structural changes in silicon wafers,” Appl. Surf. Sci. 242(1–2), 162–167 (2005).

36. B. N. Chichkov, C. Momma, S. Nolte, F. von Alvensleben, and A. Tünnermann, “Femtosecond, picosecond and nanosecond laser ablation of solids,” Appl. Phys. A 63(2), 109–115 (1996).

37. D. J. Heath, J. A. Grant-Jacob, M. Feinaeugle, B. Mills, and R. W. Eason, “Sub-diffraction limit laser ablation via multiple exposures using a digital micromirror device,” Appl. Opt. 56(22), 6398–6404 (2017).


[Crossref]

Vynck, K.

Wang, H.

Wang, M.

L. Gong, Y. Ren, W. Liu, M. Wang, M. Zhong, Z. Wang, and Y. Li, “Generation of cylindrically polarized vector vortex beams with digital micromirror device,” J. Appl. Phys. 116(18), 183105 (2014).
[Crossref]

Wang, X.

S. Amoruso, R. Bruzzese, X. Wang, N. N. Nedialkov, and P. A. Atanasov, “Femtosecond laser ablation of nickel in vacuum,” J. Phys. D Appl. Phys. 40(2), 331–340 (2007).
[Crossref]

Wang, Z.

L. Gong, Y. Ren, W. Liu, M. Wang, M. Zhong, Z. Wang, and Y. Li, “Generation of cylindrically polarized vector vortex beams with digital micromirror device,” J. Appl. Phys. 116(18), 183105 (2014).
[Crossref]

White, H.

K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Netw. 2(5), 359–366 (1989).
[Crossref]

Williams, R. J.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986).
[Crossref]

Wilson, T.

Wu, D.

L. Yang, D. Qian, C. Xin, Z. Hu, S. Ji, D. Wu, Y. Hu, J. Li, W. Huang, and J. Chu, “Two-photon polymerization of microstructures by a non-diffraction multifoci pattern generated from a superposed Bessel beam,” Opt. Lett. 42(4), 743–746 (2017).
[Crossref] [PubMed]

C. Zhang, Y. Hu, W. Du, P. Wu, S. Rao, Z. Cai, Z. Lao, B. Xu, J. Ni, J. Li, G. Zhao, D. Wu, J. Chu, and K. Sugioka, “Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels,” Sci. Rep. 6(1), 33281 (2016).
[Crossref] [PubMed]

Wu, P.

C. Zhang, Y. Hu, W. Du, P. Wu, S. Rao, Z. Cai, Z. Lao, B. Xu, J. Ni, J. Li, G. Zhao, D. Wu, J. Chu, and K. Sugioka, “Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels,” Sci. Rep. 6(1), 33281 (2016).
[Crossref] [PubMed]

Xin, C.

Xu, B.

C. Zhang, Y. Hu, W. Du, P. Wu, S. Rao, Z. Cai, Z. Lao, B. Xu, J. Ni, J. Li, G. Zhao, D. Wu, J. Chu, and K. Sugioka, “Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels,” Sci. Rep. 6(1), 33281 (2016).
[Crossref] [PubMed]

Yan, X.

S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” https://arXiv:1605.05396 (2016).

Yang, L.

Zhang, C.

C. Zhang, Y. Hu, W. Du, P. Wu, S. Rao, Z. Cai, Z. Lao, B. Xu, J. Ni, J. Li, G. Zhao, D. Wu, J. Chu, and K. Sugioka, “Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels,” Sci. Rep. 6(1), 33281 (2016).
[Crossref] [PubMed]

Zhang, Y.

Zhao, G.

C. Zhang, Y. Hu, W. Du, P. Wu, S. Rao, Z. Cai, Z. Lao, B. Xu, J. Ni, J. Li, G. Zhao, D. Wu, J. Chu, and K. Sugioka, “Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels,” Sci. Rep. 6(1), 33281 (2016).
[Crossref] [PubMed]

Zhong, M.

L. Gong, Y. Ren, W. Liu, M. Wang, M. Zhong, Z. Wang, and Y. Li, “Generation of cylindrically polarized vector vortex beams with digital micromirror device,” J. Appl. Phys. 116(18), 183105 (2014).
[Crossref]

Ann. Phys. (1)

Y.-X. Ren, R.-D. Lu, and L. Gong, “Tailoring light with a digital micromirror device,” Ann. Phys. 527(7–8), 447–470 (2015).
[Crossref]

Appl. Opt. (2)

Appl. Phys., A Mater. Sci. Process. (1)

B. N. Chichkov, C. Momma, S. Nolte, F. V. Alvensleben, and A. Tünnermann, “Femtosecond, picosecond and nanosecond laser ablation of solids,” Appl. Phys., A Mater. Sci. Process. 63(2), 109–115 (1996).
[Crossref]

Appl. Surf. Sci. (3)

M. S. Amer, M. A. El-Ashry, L. R. Dosser, K. E. Hix, J. F. Maguire, and B. Irwin, “Femtosecond versus nanosecond laser machining: comparison of induced stresses and structural changes in silicon wafers,” Appl. Surf. Sci. 242(1–2), 162–167 (2005).
[Crossref]

M. S. Amer, M. A. El-Ashry, L. R. Dosser, K. E. Hix, J. F. Maguire, and B. Irwin, “Femtosecond versus nanosecond laser machining: comparison of induced stresses and structural changes in silicon wafers,” Appl. Surf. Sci. 242(1–2), 162–167 (2005).
[Crossref]

H. O. Jeschke, M. E. Garcia, M. Lenzner, J. Bonse, J. Krüger, and W. Kautek, “Laser ablation thresholds of silicon for different pulse durations: theory and experiment,” Appl. Surf. Sci. 197–198, 839–844 (2002).
[Crossref]

IEEE Trans. Neural Netw. (2)

D. F. Specht, “A general regression neural network,” IEEE Trans. Neural Netw. 2(6), 568–576 (1991).
[Crossref] [PubMed]

S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: A convolutional neural-network approach,” IEEE Trans. Neural Netw. 8(1), 98–113 (1997).
[Crossref] [PubMed]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

L. K. Hansen and P. Salamon, “Neural network ensembles,” IEEE Trans. Pattern Anal. Mach. Intell. 12(10), 993–1001 (1990).
[Crossref]

J. Appl. Phys. (1)

L. Gong, Y. Ren, W. Liu, M. Wang, M. Zhong, Z. Wang, and Y. Li, “Generation of cylindrically polarized vector vortex beams with digital micromirror device,” J. Appl. Phys. 116(18), 183105 (2014).
[Crossref]

J. Mach. Learn. Res. (1)

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res. 15(1), 1929–1958 (2014).

J. Micromech. Microeng. (1)

B. Mills, M. Feinaeugle, C. L. Sones, N. Rizvi, and R. W. Eason, “Sub-micron-scale femtosecond laser ablation using a digital micromirror device,” J. Micromech. Microeng. 23(3), 035005 (2013).
[Crossref]

J. Opt. A, Pure Appl. Opt. (1)

J. K. Chen and J. E. Beraun, “Modelling of ultrashort laser ablation of gold films in vacuum,” J. Opt. A, Pure Appl. Opt. 5(3), 168–173 (2003).
[Crossref]

J. Phys. D Appl. Phys. (2)

S. Amoruso, R. Bruzzese, X. Wang, N. N. Nedialkov, and P. A. Atanasov, “Femtosecond laser ablation of nickel in vacuum,” J. Phys. D Appl. Phys. 40(2), 331–340 (2007).
[Crossref]

B. Rethfeld, D. S. Ivanov, M. E. Garcia, and S. I. Anisimov, “Modelling ultrafast laser ablation,” J. Phys. D Appl. Phys. 50(19), 193001 (2017).
[Crossref]

Nature (1)

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533–536 (1986).
[Crossref]

Neural Netw. (1)

K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Netw. 2(5), 359–366 (1989).
[Crossref]

Opt. Commun. (1)

P. P. Pronko, S. K. Dutta, J. Squier, J. V. Rudd, D. Du, and G. Mourou, “Machining of sub-micron holes using a femtosecond laser at 800 nm,” Opt. Commun. 114(1–2), 106–110 (1995).
[Crossref]

Opt. Express (2)

Opt. Lett. (1)

Optica (2)

Phys. Plasmas (1)

E. G. Gamaly, A. V. Rode, B. Luther-Davies, and V. T. Tikhonchuk, “Ablation of solids by femtosecond lasers: Ablation mechanism and ablation thresholds for metals and dielectrics,” Phys. Plasmas 9(3), 949–957 (2002).
[Crossref]

Sci. Rep. (1)

C. Zhang, Y. Hu, W. Du, P. Wu, S. Rao, Z. Cai, Z. Lao, B. Xu, J. Ni, J. Li, G. Zhao, D. Wu, J. Chu, and K. Sugioka, “Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels,” Sci. Rep. 6(1), 33281 (2016).
[Crossref] [PubMed]

Science (1)

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313(5786), 504–507 (2006).
[Crossref] [PubMed]

Other (10)

A. Krogh and J. Vedelsby, “Neural network ensembles, cross validation, and active learning,” in Advances in Neural Information Processing Systems (1995), pp. 231–238.

D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2536–2544.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” https://arXiv:1611.07004 (2017).
[Crossref]

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” https://arXiv:1411.1784 (2014).

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks” https://arXiv:1511.06434 (2015).

E. L. Denton, S. Chintala, and R. Fergus, “Deep generative image models using a laplacian pyramid of adversarial networks.” in Advances in Neural Information Processing Systems (2015), pp. 1486–1494.

S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” https://arXiv:1605.05396 (2016).

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.
[Crossref]


Figures (6)

Fig. 1 Procedure for creating an image transfer function that can turn a laser spatial intensity into an equivalent generated SEM image. A DMD was used for shaping the spatial intensity profile of single laser pulses that were then imaged onto the sample for laser machining, where the white and black pixels on the DMD patterns correspond to laser light and no laser light, respectively. Each DMD pattern was paired with the associated experimentally-measured SEM image, in order to form the training data set. Single DMD patterns were paired with either the associated experimental or generated SEM image and used as the input to the discriminator. The discriminator was trained to detect whether the input was an experimental or generated SEM image, and the generator was trained to convince the discriminator to judge generated SEM images as experimental. The overriding goal was the creation of the image transfer function.
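The pairing and adversarial objective described in this caption can be sketched in a few lines of array code. This is a minimal illustration only: the two-channel conditioning layout, the image sizes, and the non-saturating loss form are assumptions for clarity, not the exact architecture or objective used in the paper.

```python
import numpy as np

def discriminator_input(dmd_pattern, sem_image):
    """Pair a binary DMD pattern (1 = laser light, 0 = no light) with an
    SEM image (experimental or generated) as a two-channel array, the
    conditional discriminator input described in Fig. 1. The channel
    layout here is an illustrative assumption."""
    assert dmd_pattern.shape == sem_image.shape
    return np.stack([dmd_pattern, sem_image]).astype(np.float32)

def gan_losses(d_real, d_fake):
    """Standard (non-saturating) GAN objectives: the discriminator is
    rewarded for scoring experimental pairs near 1 and generated pairs
    near 0, while the generator is rewarded for pushing the scores of
    generated pairs towards 1 (i.e. convincing the discriminator)."""
    eps = 1e-7  # avoid log(0)
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# Toy example: a one-pixel-wide vertical line pattern and a dummy image.
pattern = np.zeros((4, 4)); pattern[:, 1] = 1.0
sem = 1.0 - pattern                         # placeholder "machined" image
pair = discriminator_input(pattern, sem)    # shape (2, 4, 4)
```

In a full cGAN these two losses are minimised alternately, with the generator gradient flowing through the generated SEM image inside `d_fake`.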
Fig. 2 Examples of DMD patterns and associated experimentally-measured SEM images from the training data set. Showing nine of the 172 data set pairs, corresponding to training data set numbers 15, 24, 45, 74, 85, 104, 121, 144, and 161 for (a)-(i), respectively, where the decrease in image brightness was caused by the gradual degradation of the SEM titanium filament during the 5 hours of data capture. The DMD patterns were designed to provide a wide distribution of training data. The cGAN was trained on this data set in order to create an image transfer function that could generate an equivalent SEM image from any input DMD pattern. In order to generate a realistic SEM image, the image transfer function needed to be consistent with the laws of diffraction, laser machining, and SEM imaging.
Fig. 3 Demonstration of the effectiveness of the image transfer function for different training iterations. Showing DMD patterns corresponding to the letters (a) B and (b) X, with associated generated SEM images for image transfer functions after 1, 2, 5 and 400 cGAN training iterations. For comparison, the associated experimentally-measured SEM images are also shown. The DMD patterns and experimentally-measured SEM images were not part of the training data set, and hence this result shows the effectiveness of the image transfer function on unseen data. The generated SEM images provide a qualitative portrayal of the cGAN training convergence, showing that features such as shadows and uneven edges emerged at different numbers of iterations. No further improvements were realised for iteration numbers greater than 400.
Fig. 4 Demonstration of the effectiveness of the image transfer function for DMD patterns corresponding to periodic designs, for (a) above and (b) close to the resolution limit of the experimental setup, showing the DMD pattern, and the associated experimental and generated SEM images. These DMD patterns were not in the training data set, and hence this result provides experimental verification of the accuracy of the neural network for feature sizes close to the resolution limit of the experiment.
Fig. 5 SEM images generated using the image transfer function from the trained cGAN, for DMD patterns consisting of lines, gaps and ring structures. Showing generated SEM images for projected line widths of (a) 250 nm, (b) 500 nm, (c) 1 µm, (d) 2 µm, (e) 3 µm and (f) 5 µm. For DMD patterns with a single vertical line, (c)-(f) show laser-machined structures with widths approximately proportional to the projected line width, while (a)-(b) indicate that the widths of the laser-machined structures do not decrease below a minimum size. For the DMD patterns with gaps and ring structures, (a)-(c) show the inability to resolve two adjacent laser-machined structures. These images indicate that the image transfer function from the trained cGAN contains encoding that appears consistent with the laws of diffraction. There are no associated experimental SEM images for these DMD patterns, hence demonstrating the predictive capabilities of this approach.
Fig. 6 Reverse-engineering the image transfer function to evaluate the encoding of diffraction. Showing (a) a generated SEM image corresponding to a projected line width of 2 µm, (b) the same image converted to a high-contrast colour-map and averaged over the central 500 rows of pixels in the Y-axis to produce an averaged cross-section, and (c)-(d) the concatenation of averaged cross-sections for projected line widths from 128 nm to 12 µm. In (c), the edges of the generated laser-machined structures (the purple and dark blue data regions) closely match the associated projected line width (white dotted lines) for values > 1 µm. However, as emphasized in (d), this relationship is not observed for projected line widths < 1 µm, where instead the widths of the laser-machined structures are observed to be always greater than the diffraction limit of the experimental setup (952 nm [37], white vertical lines).
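The averaging and concatenation steps described for panels (b)-(d) amount to simple array reductions. The sketch below reproduces that analysis under stated assumptions: the function names, the 500-row window default, and the synthetic images are illustrative, not taken from the paper's code.

```python
import numpy as np

def averaged_cross_section(sem_image, n_rows=500):
    """Average the central n_rows rows (Y-axis) of a generated SEM
    image to produce a 1-D cross-section, as in Fig. 6(b)."""
    h = sem_image.shape[0]
    lo = max(0, h // 2 - n_rows // 2)
    return sem_image[lo:lo + n_rows].mean(axis=0)

def cross_section_map(images, n_rows=500):
    """Stack one averaged cross-section per generated SEM image (one per
    projected line width) to build the 2-D maps of Fig. 6(c)-(d)."""
    return np.stack([averaged_cross_section(im, n_rows) for im in images])
```

Plotting `cross_section_map(...)` as a high-contrast colour-map against projected line width would then allow a direct comparison of machined-feature width with the diffraction limit, as done in Fig. 6(c)-(d).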
