Abstract

Optical lithography is a fundamental process for fabricating integrated circuits, the basic fabric of the information age. Due to image distortions inherent in optical lithography, inverse lithography techniques (ILT) are extensively used by the semiconductor industry to improve lithography image resolution and fidelity in semiconductor fabrication. As the density of integrated circuits increases, computational complexity has become a central challenge for ILT methods. This paper develops a new and powerful framework, a model-driven convolution neural network (MCNN), to obtain an approximate ILT solution, which can then serve as the input to a subsequent ILT optimization requiring far fewer iterations than conventional ILT algorithms. Combining the proposed MCNN with a gradient-based method speeds up ILT optimization by up to an order of magnitude and further improves the imaging performance of coherent optical lithography systems. This paper, to the best of our knowledge, is the first to exploit a state-of-the-art MCNN to solve the ILT problem and provide considerable performance advantages. The imaging model of optical lithography is used to establish the neural network architecture and an unsupervised training strategy. The network architecture and the initial network parameters are derived by unfolding and truncating the model-based iterative ILT optimization procedure. A model-based decoder is then proposed to enable unsupervised training of the network, which avoids the time-consuming labelling of training data. This work opens a new window for MCNN techniques to effectively improve the computational efficiency and imaging performance of conventional ILT algorithms. Simulation results are provided to verify the advantages of the proposed MCNN approach.
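For readers unfamiliar with the baseline the MCNN unfolds, the gradient-based ILT loop for a coherent imaging system can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the Gaussian kernel standing in for the coherent point-spread function, the sigmoid resist parameters `a` and `t`, the sigmoid mask parameterization, and the step size are all assumptions.

```python
import numpy as np

def coherent_psf(n, sigma=3.0):
    # Gaussian stand-in for the coherent point-spread function,
    # centered at the origin with circular (wraparound) distances.
    ax = np.arange(n)
    ax = np.minimum(ax, n - ax)
    xx, yy = np.meshgrid(ax, ax, indexing='ij')
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()

def conv2(x, H):
    # Circular 2-D convolution via FFT; H is the precomputed kernel spectrum.
    return np.real(np.fft.ifft2(np.fft.fft2(x) * H))

def ilt_gradient_descent(target, sigma=3.0, a=25.0, t=0.5, step=0.2, iters=100):
    n = target.shape[0]
    H = np.fft.fft2(coherent_psf(n, sigma))
    theta = np.where(target > 0.5, 1.0, -1.0)       # initialize at the target layout
    costs = []
    for _ in range(iters):
        M = 1.0 / (1.0 + np.exp(-theta))            # gray-level mask in (0, 1)
        E = conv2(M, H)                             # coherent field amplitude
        Z = 1.0 / (1.0 + np.exp(-a * (E**2 - t)))   # sigmoid resist model of the print image
        costs.append(float(np.sum((Z - target)**2)))
        # Chain rule: dF/dM = h * [4a (Z - target) Z (1 - Z) E];
        # the kernel is symmetric, so the adjoint is the same convolution.
        dM = conv2(4.0 * a * (Z - target) * Z * (1.0 - Z) * E, H)
        theta -= step * dM * M * (1.0 - M)          # descend on the unconstrained parameters
    return 1.0 / (1.0 + np.exp(-theta)), costs
```

In the MCNN idea described above, each unrolled iteration of a loop like this becomes one network layer, with the model-derived kernels and thresholds serving as the initial layer parameters that training then refines.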

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

2017 (1)

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

2016 (1)

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).
[Crossref]

2015 (2)

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref] [PubMed]

P. Sprechmann, A. M. Bronstein, and G. Sapiro, “Learning efficient sparse and low rank models,” IEEE Trans. Pattern Anal. 37, 1821–1833 (2015).
[Crossref]

2014 (4)

X. Wu, S. Liu, W. Lv, and E. Y. Lam, “Robust and efficient inverse mask synthesis with basis function representation,” J. Opt. Soc. Am. A 31, B1–B9 (2014).
[Crossref]

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).
[Crossref]

K. Luo, Z. Shi, X. Yan, and Z. Geng, “SVM based layout retargeting for fast and regularized inverse lithography,” J. Zhejiang Uni-Sci C (Comput & Electron) 15, 390–400 (2014).
[Crossref]

X. Ma, B. Wu, Z. Song, S. Jiang, and Y. Li, “Fast pixel-based optical proximity correction based on nonparametric kernel regression,” J. Micro/Nanolith. MEMS MOEMS 13, 043007 (2014).
[Crossref]

2013 (3)

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).
[Crossref]

R. Luo, “Optical proximity correction using a multilayer perceptron neural network,” J. Opt. 15, 075708 (2013).
[Crossref]

X. Ma, Z. Song, Y. Li, and G. R. Arce, “Block-based mask optimization for optical lithography,” Appl. Opt. 52, 3351–3363 (2013).
[Crossref] [PubMed]

2012 (1)

2011 (3)

2010 (2)

J. Yu and P. Yu, “Impacts of cost functions on inverse lithography patterning,” Opt. Express 18, 23331–23342 (2010).
[Crossref] [PubMed]

N. Jia and E. Y. Lam, “Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis,” J. Opt. 12, 045601 (2010).
[Crossref]

2009 (2)

A. Poonawala, B. Painter, and C. Kerchner, “Model-based assist feature placement for 32nm and 22nm technology nodes using inverse mask technology,” Proc. SPIE 7488, 748814 (2009).
[Crossref]

Y. Shen, N. Wong, and E. Y. Lam, “Level-set-based inverse lithography for photomask synthesis,” Opt. Express 17, 23690–23701 (2009).
[Crossref]

2008 (1)

2007 (1)

2006 (3)

Y. Granik, “Fast pixel-based mask optimization for inverse lithography,” J. Microlith. Microfab. Microsyst. 5, 043002 (2006).

A. Poonawala and P. Milanfar, “OPC and PSM design using inverse lithography: a non-linear optimization approach,” Proc. SPIE 6154, 61543H (2006).
[Crossref]

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
[Crossref] [PubMed]

2005 (2)

N. B. Cobb and Y. Granik, “Dense OPC for 65nm and below,” Proc. SPIE 5992, 599259 (2005).
[Crossref]

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).
[Crossref]

1997 (1)

S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Trans. Neural Network 8, 98–113 (1997).
[Crossref]

1992 (1)

Y. Liu and A. Zakhor, “Binary and phase shifting mask design for optical lithography,” IEEE Trans. on Semicond. Manuf. 5, 138–152 (1992).
[Crossref]

1965 (1)

G. E. Moore, “Cramming more components onto integrated circuits,” Electronics 38, 114ff (1965).

1953 (1)

H. H. Hopkins, “On the diffraction theory of optical images,” Proc. R”. Soc. Lond. A 217, 408–432 (1953).
[Crossref]

1951 (1)

H. H. Hopkins, “The concept of partial coherence in optics,” Proc. R. Soc. Lond. A 208, 263–277 (1951).
[Crossref]

Alpaydin, E.

E. Alpaydin, Introduction to Machine Learning, 2nd ed. (China Machine Press, 2014).

Arce, G. R.

Back, A. D.

S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Trans. Neural Network 8, 98–113 (1997).
[Crossref]

Bengio, Y.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref] [PubMed]

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Adaptive Computation and Machine Learning Series (The MIT Press, 2016).

Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in the handbook of brain theory and neural networks, 255–258 (The MIT Press, 1998).

Bronstein, A. M.

P. Sprechmann, A. M. Bronstein, and G. Sapiro, “Learning efficient sparse and low rank models,” IEEE Trans. Pattern Anal. 37, 1821–1833 (2015).
[Crossref]

Cobb, N. B.

N. B. Cobb and Y. Granik, “Dense OPC for 65nm and below,” Proc. SPIE 5992, 599259 (2005).
[Crossref]

Courville, A.

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Adaptive Computation and Machine Learning Series (The MIT Press, 2016).

Deng, L.

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).
[Crossref]

Dong, L.

Gao, W.

B. Xin, Y. Wang, W. Gao, and D. Wipf, “Maximal sparsity with deep networks?,” ar”Xiv:1605.01636 (2016).

Geng, Z.

K. Luo, Z. Shi, X. Yan, and Z. Geng, “SVM based layout retargeting for fast and regularized inverse lithography,” J. Zhejiang Uni-Sci C (Comput & Electron) 15, 390–400 (2014).
[Crossref]

Giles, C. L.

S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Trans. Neural Network 8, 98–113 (1997).
[Crossref]

Goodfellow, I.

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Adaptive Computation and Machine Learning Series (The MIT Press, 2016).

Granik, Y.

Y. Granik, “Fast pixel-based mask optimization for inverse lithography,” J. Microlith. Microfab. Microsyst. 5, 043002 (2006).

N. B. Cobb and Y. Granik, “Dense OPC for 65nm and below,” Proc. SPIE 5992, 599259 (2005).
[Crossref]

Gray, R.

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).
[Crossref]

Gregor, K.

K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” Proc. 27th International Conference on Machine Learning, 399–406 (2010).

Hamid, O. A.

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).
[Crossref]

Hershey, J.

J. Hershey, J. Roux, and F. Weninger, “Deep unfolding: model-based inspiration of novel deep architectures,” ar”Xiv:1409.2574 (2014).

Hinton, G.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref] [PubMed]

Hinton, G. E.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
[Crossref] [PubMed]

Hopkins, H. H.

H. H. Hopkins, “On the diffraction theory of optical images,” Proc. R”. Soc. Lond. A 217, 408–432 (1953).
[Crossref]

H. H. Hopkins, “The concept of partial coherence in optics,” Proc. R. Soc. Lond. A 208, 263–277 (1951).
[Crossref]

Huang, T. S.

Z. Wang, Q. Ling, and T. S. Huang, “Learning deep l0 encoders,” Proc. Thirtieth AAAI Conference on Artificial Intelligence, 2194–2200 (2016).

Jia, N.

Y. Shen, N. Jia, N. Wong, and E. Y. Lam, “Robust level-set-based inverse lithography,” Opt. Express 19, 5511–5521 (2011).
[Crossref] [PubMed]

N. Jia and E. Y. Lam, “Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis,” J. Opt. 12, 045601 (2010).
[Crossref]

Jiang, H.

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).
[Crossref]

Jiang, S.

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).
[Crossref]

X. Ma, B. Wu, Z. Song, S. Jiang, and Y. Li, “Fast pixel-based optical proximity correction based on nonparametric kernel regression,” J. Micro/Nanolith. MEMS MOEMS 13, 043007 (2014).
[Crossref]

Kerchner, C.

A. Poonawala, B. Painter, and C. Kerchner, “Model-based assist feature placement for 32nm and 22nm technology nodes using inverse mask technology,” Proc. SPIE 7488, 748814 (2009).
[Crossref]

Krizhevsky, A.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

Lam, E. Y.

X. Wu, S. Liu, W. Lv, and E. Y. Lam, “Robust and efficient inverse mask synthesis with basis function representation,” J. Opt. Soc. Am. A 31, B1–B9 (2014).
[Crossref]

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).
[Crossref]

Y. Shen, N. Jia, N. Wong, and E. Y. Lam, “Robust level-set-based inverse lithography,” Opt. Express 19, 5511–5521 (2011).
[Crossref] [PubMed]

N. Jia and E. Y. Lam, “Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis,” J. Opt. 12, 045601 (2010).
[Crossref]

Y. Shen, N. Wong, and E. Y. Lam, “Level-set-based inverse lithography for photomask synthesis,” Opt. Express 17, 23690–23701 (2009).
[Crossref]

Lawrence, S.

S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Trans. Neural Network 8, 98–113 (1997).
[Crossref]

LeCun, Y.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref] [PubMed]

Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in the handbook of brain theory and neural networks, 255–258 (The MIT Press, 1998).

K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” Proc. 27th International Conference on Machine Learning, 399–406 (2010).

Li, Y.

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).
[Crossref]

X. Ma, B. Wu, Z. Song, S. Jiang, and Y. Li, “Fast pixel-based optical proximity correction based on nonparametric kernel regression,” J. Micro/Nanolith. MEMS MOEMS 13, 043007 (2014).
[Crossref]

X. Ma, Z. Song, Y. Li, and G. R. Arce, “Block-based mask optimization for optical lithography,” Appl. Opt. 52, 3351–3363 (2013).
[Crossref] [PubMed]

X. Ma, Y. Li, and L. Dong, “Mask optimization approaches in optical lithography based on a vector imaging model,” J. Opt. Soc. Am. A 29, 1300–1312 (2012).
[Crossref]

X. Ma, G. R. Arce, and Y. Li, “Optimal 3D phase-shifting masks in partially coherent illumination,” Appl. Opt. 50, 5567–5576 (2011).
[Crossref] [PubMed]

Ling, Q.

Z. Wang, Q. Ling, and T. S. Huang, “Learning deep l0 encoders,” Proc. Thirtieth AAAI Conference on Artificial Intelligence, 2194–2200 (2016).

Liu, S.

X. Wu, S. Liu, W. Lv, and E. Y. Lam, “Robust and efficient inverse mask synthesis with basis function representation,” J. Opt. Soc. Am. A 31, B1–B9 (2014).
[Crossref]

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).
[Crossref]

Liu, Y.

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).
[Crossref]

Y. Liu and A. Zakhor, “Binary and phase shifting mask design for optical lithography,” IEEE Trans. on Semicond. Manuf. 5, 138–152 (1992).
[Crossref]

Luo, K.

K. Luo, Z. Shi, X. Yan, and Z. Geng, “SVM based layout retargeting for fast and regularized inverse lithography,” J. Zhejiang Uni-Sci C (Comput & Electron) 15, 390–400 (2014).
[Crossref]

Luo, R.

R. Luo, “Optical proximity correction using a multilayer perceptron neural network,” J. Opt. 15, 075708 (2013).
[Crossref]

Lv, W.

X. Wu, S. Liu, W. Lv, and E. Y. Lam, “Robust and efficient inverse mask synthesis with basis function representation,” J. Opt. Soc. Am. A 31, B1–B9 (2014).
[Crossref]

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).
[Crossref]

Ma, X.

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).
[Crossref]

X. Ma, B. Wu, Z. Song, S. Jiang, and Y. Li, “Fast pixel-based optical proximity correction based on nonparametric kernel regression,” J. Micro/Nanolith. MEMS MOEMS 13, 043007 (2014).
[Crossref]

X. Ma, Z. Song, Y. Li, and G. R. Arce, “Block-based mask optimization for optical lithography,” Appl. Opt. 52, 3351–3363 (2013).
[Crossref] [PubMed]

X. Ma, Y. Li, and L. Dong, “Mask optimization approaches in optical lithography based on a vector imaging model,” J. Opt. Soc. Am. A 29, 1300–1312 (2012).
[Crossref]

X. Ma, G. R. Arce, and Y. Li, “Optimal 3D phase-shifting masks in partially coherent illumination,” Appl. Opt. 50, 5567–5576 (2011).
[Crossref] [PubMed]

X. Ma and G. R. Arce, “Pixel-based OPC optimization based on conjugate gradients,” Opt. Express 19, 2165–2180 (2011).
[Crossref] [PubMed]

X. Ma and G. R. Arce, “Binary mask optimization for inverse lithography with partially coherent illumination,” J. Opt. Soc. Am. A 25, 2960–2970 (2008).
[Crossref]

X. Ma and G. R. Arce, “Generalized inverse lithography methods for phase-shifting mask design,” Opt. Express 15, 15066–15079 (2007).
[Crossref] [PubMed]

X. Ma and G. R. Arce, Computational Lithography, Wiley Series in Pure and Applied Optics, 1st ed. (John Wiley and Sons, 2010).
[Crossref]

Martin, P. M.

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).
[Crossref]

Milanfar, P.

A. Poonawala and P. Milanfar, “OPC and PSM design using inverse lithography: a non-linear optimization approach,” Proc. SPIE 6154, 61543H (2006).
[Crossref]

Mohamed, A.

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).
[Crossref]

Moore, G. E.

G. E. Moore, “Cramming more components onto integrated circuits,” Electronics 38, 114ff (1965).

Painter, B.

A. Poonawala, B. Painter, and C. Kerchner, “Model-based assist feature placement for 32nm and 22nm technology nodes using inverse mask technology,” Proc. SPIE 7488, 748814 (2009).
[Crossref]

Pang, L.

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).
[Crossref]

Penn, G.

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).
[Crossref]

Poonawala, A.

A. Poonawala, B. Painter, and C. Kerchner, “Model-based assist feature placement for 32nm and 22nm technology nodes using inverse mask technology,” Proc. SPIE 7488, 748814 (2009).
[Crossref]

A. Poonawala and P. Milanfar, “OPC and PSM design using inverse lithography: a non-linear optimization approach,” Proc. SPIE 6154, 61543H (2006).
[Crossref]

Progler, C. J.

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).
[Crossref]

Roux, J.

J. Hershey, J. Roux, and F. Weninger, “Deep unfolding: model-based inspiration of novel deep architectures,” ar”Xiv:1409.2574 (2014).

Salakhutdinov, R. R.

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
[Crossref] [PubMed]

Sapiro, G.

P. Sprechmann, A. M. Bronstein, and G. Sapiro, “Learning efficient sparse and low rank models,” IEEE Trans. Pattern Anal. 37, 1821–1833 (2015).
[Crossref]

Shen, Y.

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).
[Crossref]

Y. Shen, N. Jia, N. Wong, and E. Y. Lam, “Robust level-set-based inverse lithography,” Opt. Express 19, 5511–5521 (2011).
[Crossref] [PubMed]

Y. Shen, N. Wong, and E. Y. Lam, “Level-set-based inverse lithography for photomask synthesis,” Opt. Express 17, 23690–23701 (2009).
[Crossref]

Shi, Z.

K. Luo, Z. Shi, X. Yan, and Z. Geng, “SVM based layout retargeting for fast and regularized inverse lithography,” J. Zhejiang Uni-Sci C (Comput & Electron) 15, 390–400 (2014).
[Crossref]

Song, Z.

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).
[Crossref]

X. Ma, B. Wu, Z. Song, S. Jiang, and Y. Li, “Fast pixel-based optical proximity correction based on nonparametric kernel regression,” J. Micro/Nanolith. MEMS MOEMS 13, 043007 (2014).
[Crossref]

X. Ma, Z. Song, Y. Li, and G. R. Arce, “Block-based mask optimization for optical lithography,” Appl. Opt. 52, 3351–3363 (2013).
[Crossref] [PubMed]

Sprechmann, P.

P. Sprechmann, A. M. Bronstein, and G. Sapiro, “Learning efficient sparse and low rank models,” IEEE Trans. Pattern Anal. 37, 1821–1833 (2015).
[Crossref]

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

Tsoi, A. C.

S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Trans. Neural Network 8, 98–113 (1997).
[Crossref]

Wang, J.

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).
[Crossref]

Wang, Y.

B. Xin, Y. Wang, W. Gao, and D. Wipf, “Maximal sparsity with deep networks?,” ar”Xiv:1605.01636 (2016).

Wang, Z.

Z. Wang, Q. Ling, and T. S. Huang, “Learning deep l0 encoders,” Proc. Thirtieth AAAI Conference on Artificial Intelligence, 2194–2200 (2016).

Weninger, F.

J. Hershey, J. Roux, and F. Weninger, “Deep unfolding: model-based inspiration of novel deep architectures,” ar”Xiv:1409.2574 (2014).

Wipf, D.

B. Xin, Y. Wang, W. Gao, and D. Wipf, “Maximal sparsity with deep networks?,” ar”Xiv:1605.01636 (2016).

Wong, A. K.

A. K. Wong, Resolution Enhancement Techniques in Optical Lithography(SPIE, 2001).
[Crossref]

Wong, N.

Wu, B.

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).
[Crossref]

X. Ma, B. Wu, Z. Song, S. Jiang, and Y. Li, “Fast pixel-based optical proximity correction based on nonparametric kernel regression,” J. Micro/Nanolith. MEMS MOEMS 13, 043007 (2014).
[Crossref]

Wu, X.

X. Wu, S. Liu, W. Lv, and E. Y. Lam, “Robust and efficient inverse mask synthesis with basis function representation,” J. Opt. Soc. Am. A 31, B1–B9 (2014).
[Crossref]

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).
[Crossref]

Xia, Q.

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).
[Crossref]

Xiao, G.

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).
[Crossref]

Xin, B.

B. Xin, Y. Wang, W. Gao, and D. Wipf, “Maximal sparsity with deep networks?,” ar”Xiv:1605.01636 (2016).

Yan, X.

K. Luo, Z. Shi, X. Yan, and Z. Geng, “SVM based layout retargeting for fast and regularized inverse lithography,” J. Zhejiang Uni-Sci C (Comput & Electron) 15, 390–400 (2014).
[Crossref]

Yu, D.

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).
[Crossref]

Yu, J.

Yu, P.

Zakhor, A.

Y. Liu and A. Zakhor, “Binary and phase shifting mask design for optical lithography,” IEEE Trans. on Semicond. Manuf. 5, 138–152 (1992).
[Crossref]

Appl. Opt. (2)

Commun. ACM (1)

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).

Electronics (1)

G. E. Moore, “Cramming more components onto integrated circuits,” Electronics 38, 114ff (1965).

IEEE Trans. Neural Networks (1)

S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Trans. Neural Networks 8, 98–113 (1997).

IEEE Trans. on Semicond. Manuf. (1)

Y. Liu and A. Zakhor, “Binary and phase shifting mask design for optical lithography,” IEEE Trans. on Semicond. Manuf. 5, 138–152 (1992).

IEEE Trans. Pattern Anal. (1)

P. Sprechmann, A. M. Bronstein, and G. Sapiro, “Learning efficient sparse and low rank models,” IEEE Trans. Pattern Anal. 37, 1821–1833 (2015).

IEEE/ACM Trans. Audio, Speech, and Language Process. (1)

O. A. Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Trans. Audio, Speech, and Language Process. 22, 1533–1545 (2014).

J. Micro/Nanolith. MEMS MOEMS (1)

X. Ma, B. Wu, Z. Song, S. Jiang, and Y. Li, “Fast pixel-based optical proximity correction based on nonparametric kernel regression,” J. Micro/Nanolith. MEMS MOEMS 13, 043007 (2014).

J. Microlith. Microfab. Microsyst. (1)

Y. Granik, “Fast pixel-based mask optimization for inverse lithography,” J. Microlith. Microfab. Microsyst. 5, 043002 (2006).

J. Opt. (2)

N. Jia and E. Y. Lam, “Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis,” J. Opt. 12, 045601 (2010).

R. Luo, “Optical proximity correction using a multilayer perceptron neural network,” J. Opt. 15, 075708 (2013).

J. Opt. Soc. Am. A (3)

J. Vac. Sci. Technol. B (1)

W. Lv, S. Liu, Q. Xia, X. Wu, Y. Shen, and E. Y. Lam, “Level-set-based inverse lithography for mask synthesis using the conjugate gradient and an optimal time step,” J. Vac. Sci. Technol. B 31, 041605 (2013).

J. Zhejiang Uni-Sci C (Comput & Electron) (1)

K. Luo, Z. Shi, X. Yan, and Z. Geng, “SVM based layout retargeting for fast and regularized inverse lithography,” J. Zhejiang Uni-Sci C (Comput & Electron) 15, 390–400 (2014).

Microelectron. Eng. (1)

X. Ma, S. Jiang, J. Wang, B. Wu, Z. Song, and Y. Li, “A fast and manufacture-friendly optical proximity correction based on machine learning,” Microelectron. Eng. 168, 15–26 (2016).

Nature (1)

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).

Opt. Express (5)

Proc. R. Soc. Lond. A (2)

H. H. Hopkins, “The concept of partial coherence in optics,” Proc. R. Soc. Lond. A 208, 263–277 (1951).

H. H. Hopkins, “On the diffraction theory of optical images,” Proc. R. Soc. Lond. A 217, 408–432 (1953).

Proc. SPIE (4)

A. Poonawala, B. Painter, and C. Kerchner, “Model-based assist feature placement for 32nm and 22nm technology nodes using inverse mask technology,” Proc. SPIE 7488, 748814 (2009).

N. B. Cobb and Y. Granik, “Dense OPC for 65nm and below,” Proc. SPIE 5992, 599259 (2005).

P. M. Martin, C. J. Progler, G. Xiao, R. Gray, L. Pang, and Y. Liu, “Manufacturability study of masks created by inverse lithography technology (ILT),” Proc. SPIE 5992, 599235 (2005).

A. Poonawala and P. Milanfar, “OPC and PSM design using inverse lithography: a non-linear optimization approach,” Proc. SPIE 6154, 61543H (2006).

Science (1)

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).

Other (9)

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Adaptive Computation and Machine Learning Series (The MIT Press, 2016).

Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks, 255–258 (The MIT Press, 1998).

Z. Wang, Q. Ling, and T. S. Huang, “Learning deep ℓ0 encoders,” Proc. Thirtieth AAAI Conference on Artificial Intelligence, 2194–2200 (2016).

K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” Proc. 27th International Conference on Machine Learning, 399–406 (2010).

J. Hershey, J. Roux, and F. Weninger, “Deep unfolding: model-based inspiration of novel deep architectures,” arXiv:1409.2574 (2014).

B. Xin, Y. Wang, W. Gao, and D. Wipf, “Maximal sparsity with deep networks?,” arXiv:1605.01636 (2016).

E. Alpaydin, Introduction to Machine Learning, 2nd ed. (China Machine Press, 2014).

A. K. Wong, Resolution Enhancement Techniques in Optical Lithography (SPIE, 2001).

X. Ma and G. R. Arce, Computational Lithography, Wiley Series in Pure and Applied Optics, 1st ed. (John Wiley and Sons, 2010).


Figures (14)

Fig. 1. (a) Sketch of an optical lithography system and (b) the ILT method, where the mask is predistorted to compensate for the optical distortion of the system (revised from Fig. 1 in [11]).
Fig. 2. The imaging process of coherent optical lithography systems.
Fig. 3. The flowchart of the gradient-based ILT algorithm.
Fig. 4. (a) The architecture of the MCNN, and (b) the decoder used to train the MCNN. The architecture of the MCNN is generated by truncating the unfolded loop of the gradient-based ILT algorithm, where the input is the target layout and the output is the optimized mask pattern. The decoder is set up according to the lithography imaging model and the photoresist model, where the output of the decoder is expected to be close to the target layout.
Fig. 5. The training layouts for the MCNN.
Fig. 6. The test layouts for the MCNN and their corresponding print images.
Fig. 7. The simulations of the MCNN with two layers at the 90 nm and 45 nm technology nodes.
Fig. 8. The resulting pattern errors for different numbers of convolution layers at the (a) 90 nm and (b) 45 nm technology nodes.
Fig. 9. Comparison between the MCNN and gradient-based approaches at the 90 nm technology node.
Fig. 10. The convergence curves of the pattern errors for the MCNN and gradient-based approaches.
Fig. 11. Comparison between the MCNN and gradient-based approaches at the 45 nm technology node.
Fig. 12. Comparison between the MCNN and gradient-based approaches on the complex target layout.
Fig. 13. The three additional training samples.
Fig. 14. The simulations of the MCNN with different numbers of training samples.

Tables (2)

Table 1. Average runtimes of the MCNN (K = 2) and gradient-based approaches over the three test layouts.

Table 2. Runtimes of the MCNN (K = 2) and gradient-based approaches for the complex test layouts.

Equations (29)


(1)  $I = |h \otimes M|^2$

(2)  $h = \dfrac{J_1(2\pi r\,\mathrm{NA}/\lambda)}{2\pi r\,\mathrm{NA}/\lambda}$

(3)  $Z = \Gamma\{I - t_r\} = \Gamma\{|h \otimes M|^2 - t_r\}$

(4)  $Z = \mathrm{sig}_r(I, t_r) = \dfrac{1}{1 + \exp[-a_r(I - t_r)]}$

(5)  $F = \|\tilde{Z} - Z\|_2^2 = \|\tilde{Z} - \mathrm{sig}_r(I, t_r)\|_2^2$

(6)  $\hat{M} = \arg\min_M F$

(7)  $M^{n+1} = M^n - \mathrm{step}\cdot\nabla F$

(8)  $\nabla F = -2 a_r \{h^{\circ} \otimes [(\tilde{Z} - Z) \odot Z \odot (1 - Z) \odot (h \otimes M)]\}$

(9)  $S = \delta(x, y),\quad D = h,\quad T = (\tilde{Z} - Z) \odot Z \odot (1 - Z),\quad W = 2 a_r\,\mathrm{step}\cdot h^{\circ}$

(10) $M_k^g = S_k \otimes M_k^b + W_k \otimes [T_k \odot (D_k \otimes M_k^b)]$

(11) $T_k = T = (\tilde{Z} - Z) \odot Z \odot (1 - Z)$

(12) $M_{k+1}^b = \mathrm{sig}_m(M_k^g, t_m) = \dfrac{1}{1 + \exp[-a_m(M_k^g - t_m)]}$

(13) $I = \sum_{x_s, y_s} J(x_s, y_s)\,|h_{x_s y_s} \otimes M|^2$

(14) $\nabla F = -2 a_r \sum_{x_s, y_s} J(x_s, y_s)\{(h_{x_s y_s})^{\circ} \otimes [(\tilde{Z} - Z) \odot Z \odot (1 - Z) \odot (h_{x_s y_s} \otimes M)]\}$

(15) $Z = \mathrm{sig}_r(|h \otimes \hat{M}|^2, t_r) = \dfrac{1}{1 + \exp[-a_r(|h \otimes \hat{M}|^2 - t_r)]}$

(16) $\{\hat{S}_k, \hat{D}_k, \hat{W}_k\} = \arg\min_{S_k, D_k, W_k} E = \arg\min_{S_k, D_k, W_k} \|M_1^b - Z\|_2^2$

(17) $R_Q = \mathbf{1}_{N\times 1}^T\,[4\hat{M} \odot (1 - \hat{M})]\,\mathbf{1}_{N\times 1}$

(18) $\{\hat{S}_k, \hat{D}_k, \hat{W}_k\} = \arg\min_{S_k, D_k, W_k} E = \arg\min_{S_k, D_k, W_k}\{\|M_1^b - Z\|_2^2 + \gamma_Q R_Q\}$

(19) $\dfrac{\partial E}{\partial A_k(x,y)} = \sum_{\hat{x},\hat{y}}\left(\dfrac{\partial E}{\partial \hat{M}(\hat{x},\hat{y})}\sum_{x_K,y_K}\left(\dfrac{\partial \hat{M}(\hat{x},\hat{y})}{\partial M_K^g(x_K,y_K)}\cdots\sum_{x_k,y_k}\left(\dfrac{\partial M_{k+1}^g(x_{k+1},y_{k+1})}{\partial M_k^g(x_k,y_k)}\,\dfrac{\partial M_k^g(x_k,y_k)}{\partial A_k(x,y)}\right)\right)\right)$

(20) $A_k^{n+1} = A_k^n - \mathrm{step}_A\cdot\nabla E\big|_{A_k}$

(21) $\nabla E\big|_{\hat{M}} = \nabla\|M_1^b - Z\|_2^2\big|_{\hat{M}} + \gamma_Q\,\nabla R_Q\big|_{\hat{M}}$

(22) $\nabla\|M_1^b - Z\|_2^2\big|_{\hat{M}} = -2 a_r \cdot \{h^{\circ} \otimes [(M_1^b - Z) \odot Z \odot (1 - Z) \odot (h \otimes \hat{M})]\}$

(23) $\nabla R_Q\big|_{\hat{M}} = -8\hat{M} + 4$

(24) $\nabla M_{k+1}^b\big|_{M_k^g} = a_m\,\mathrm{sig}_m(M_k^g, t_m)\odot[1 - \mathrm{sig}_m(M_k^g, t_m)]$

(25) $\nabla M_k^g\big|_{M_k^b} = S_k^{\circ} \otimes \nabla M_k^g + D_k^{\circ} \otimes [T_k \odot (W_k^{\circ} \otimes \nabla M_k^g)]$

(26) $\nabla M_k^g\big|_{S_k} = \{M_k^b \otimes \nabla M_k^g\}$

(27) $\nabla M_k^g\big|_{W_k} = \{B^{\circ} \otimes \nabla M_k^g\}$, where $B = T \odot (D_k \otimes M_k^b)$

(28) $\nabla M_k^g\big|_{D_k} = \{(M_k^b)^{\circ} \otimes [T \odot (W_k^{\circ} \otimes \nabla M_k^g)]\}$

(29) $\dfrac{\partial E}{\partial A_k(x,y)} = \sum_{\hat{x},\hat{y}}\left(\dfrac{\partial E}{\partial \hat{M}(\hat{x},\hat{y})}\sum_{x_K,y_K}\left(\dfrac{\partial \hat{M}(\hat{x},\hat{y})}{\partial M_K^g(x_K,y_K)}\cdots\sum_{x_k,y_k}\left(\dfrac{\partial M_{k+1}^g(x_{k+1},y_{k+1})}{\partial M_k^g(x_k,y_k)}\,\dfrac{\partial M_k^g(x_k,y_k)}{\partial A_k(x,y)}\right)\right)\right)$
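Equations (1), (4), (7), and (8) define one iteration of the gradient-based ILT loop that the MCNN unfolds. The following is a rough, self-contained NumPy sketch of that iteration (not the authors' code): a normalized Gaussian stands in for the jinc kernel $h$ of Eq. (2), circular FFT convolution stands in for $\otimes$, and a simple clip of $M$ to $[0, 1]$ replaces the sigmoid mask parametrization of Eq. (12).

```python
import numpy as np

def conv2(a, b):
    # circular 2-D convolution via FFT (stand-in for h ⊗ M)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def sig(x, a, t):
    # sigmoid photoresist model, Eq. (4): 1 / (1 + exp[-a (x - t)])
    return 1.0 / (1.0 + np.exp(-a * (x - t)))

def ilt_step(M, h, Z_tilde, a_r=25.0, t_r=0.5, step=0.02):
    """One gradient-descent iteration of Eqs. (1), (4), (7), (8)."""
    hM = conv2(h, M)                       # h ⊗ M
    Z = sig(np.abs(hM) ** 2, a_r, t_r)     # print image, Eqs. (1) and (4)
    h_rot = np.roll(np.flip(h), (1, 1), axis=(0, 1))  # h°: kernel rotated 180°
    grad_F = -2 * a_r * conv2(h_rot, (Z_tilde - Z) * Z * (1 - Z) * hM)  # Eq. (8)
    return np.clip(M - step * grad_F, 0.0, 1.0)       # Eq. (7), then project to [0, 1]

# Toy demo: 32x32 rectangular target layout, Gaussian stand-in for the jinc kernel.
n = 32
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
h = np.exp(-(x ** 2 + y ** 2) / 8.0)
h /= h.sum()
h = np.fft.ifftshift(h)                    # center the kernel at the origin
Z_tilde = np.zeros((n, n))
Z_tilde[12:20, 8:24] = 1.0

M = Z_tilde.copy()                         # initial guess = target layout
for _ in range(100):
    M = ilt_step(M, h, Z_tilde)
```

Initializing Eq. (10) with $S_k = \delta$, $D_k = h$, $T_k = T$, and $W_k = 2a_r\,\mathrm{step}\cdot h^{\circ}$ per Eq. (9) makes each MCNN layer reproduce exactly this update; unsupervised training then adapts $S_k$, $D_k$, $W_k$ away from these initial values.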
