Stylized Art Generator – An Effective CNN-Based Tool for Gaming

Authors

  • Dhiviya Rose J, School of Computer Science, University of Petroleum and Energy Studies (UPES), Bidholi, Dehradun, India 248007
  • Shubham Sundriyal, School of Computer Science, University of Petroleum and Energy Studies (UPES), Bidholi, Dehradun, India 248007
  • Harshwardhan Kumar Bhagat, School of Computer Science, University of Petroleum and Energy Studies (UPES), Bidholi, Dehradun, India 248007
  • Harshvardhan Sharma, School of Computer Science, University of Petroleum and Energy Studies (UPES), Bidholi, Dehradun, India 248007

Keywords:

Stylized Art, Deep Learning, Image Transformation, CNN

Abstract

Stylization is the process of simplifying or exaggerating the visual appearance of an object or scene in a particular style, such as cartoon, sketch, or cubist styles. A neural network is an artificial intelligence method that teaches computers to process data in a way inspired by the human brain. Convolutional Neural Networks (CNNs) are a type of deep neural network commonly used in computer vision tasks such as image recognition and classification. Style representation refers to the visual patterns, textures, and color palettes that make up the artistic style of an image. The "Stylized Art Generator" project is an AI-based system that uses deep learning techniques to generate stylized art from input images. The system provides users with an easy-to-use interface for generating art that is unique and visually appealing. It has potential applications in creating customized artwork for advertising and graphic design, and as a tool for artists to explore new styles and forms of expression. Overall, the "Stylized Art Generator" project aims to demonstrate the power of AI and its potential in the field of art and design.
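The "style representation" described above (visual patterns, textures, and color palettes) is commonly captured in CNN-based style transfer, as in the Gatys et al. work cited below, by the Gram matrix of a layer's feature maps, i.e. the correlations between channels. The abstract does not give the project's exact formulation, so the following NumPy sketch is only an illustrative version of that standard technique; the function names are our own:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Style representation as channel-wise feature correlations.

    features: a CNN activation map of shape (channels, height, width).
    Returns a (channels, channels) Gram matrix, normalized by the
    number of spatial positions so the scale is resolution-independent.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return (f @ f.T) / (h * w)       # inner products between channels

def style_loss(gen_feats: np.ndarray, style_feats: np.ndarray) -> float:
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_feats)
    g_style = gram_matrix(style_feats)
    return float(np.mean((g_gen - g_style) ** 2))
```

Minimizing such a loss over the generated image's features (while a separate content loss preserves scene structure) is what drives the stylization; identical feature maps give a loss of exactly zero.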


References

H. Yan et al., “Toward Intelligent Design: An AI-Based Fashion Designer Using Generative Adversarial Networks Aided by Sketch and Rendering Generators,” IEEE Trans. Multimed., vol. 25, pp. 2323–2338, 2023, doi: 10.1109/TMM.2022.3146010.

A. K. Sahu, P. K. Gupta, A. K. Singh, and K. Singh, “A Convolutional Neural Network Approach for Detecting the Distracted Drivers,” vol. 11, no. 1, pp. 1–6, 2023.

L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 2414–2423.

H. Sun, L. Wu, X. Li, and X. Meng, “Style-woven Attention Network for Zero-shot Ink Wash Painting Style Transfer,” in Proc. 2022 Int. Conf. Multimedia Retrieval (ICMR), 2022, pp. 277–285.

Y. Kang, S. Gao, and R. E. Roth, “Transferring multiscale map styles using generative adversarial networks,” Int. J. Cartogr., vol. 5, no. 2–3, pp. 115–141, 2019, doi: 10.1080/23729333.2019.1615729.

S. R. Gundu and T. Anuradha, “Digital Data Growth and the Philosophy of Digital Universe in View of Emerging Technologies,” vol. 8, no. 2, pp. 59–64, 2020.

J. Chen, J. An, H. Lyu, and J. Luo, “Learning to Evaluate the Artness of AI-generated Images,” 2023, [Online]. Available: http://arxiv.org/abs/2305.04923.

Y. Deng, F. Tang, W. Dong, C. Ma, X. Pan, L. Wang, and C. Xu, “StyTr2: Image style transfer with transformers,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2022, pp. 11326–11336.

Q. Mao and S. Ma, “Enhancing Style-Guided Image-to-Image Translation via Self-Supervised Metric Learning,” IEEE Trans. Multimed., vol. XX, no. Xx, pp. 1–16, 2023, doi: 10.1109/TMM.2023.3238313.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance Normalization: The Missing Ingredient for Fast Stylization,” 2016, [Online]. Available: http://arxiv.org/abs/1607.08022.

S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” 33rd Int. Conf. Mach. Learn. ICML 2016, vol. 3, pp. 1681–1690, 2016.

J. Park and Y. Kim, “Styleformer: Transformer based generative adversarial networks with style vector,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2022, pp. 8983–8992.

C. Li and M. Wand, “Combining Markov random fields and convolutional neural networks for image synthesis,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 2479–2486.

D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua, “StyleBank: An explicit representation for neural image style transfer,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 2770–2779, 2017, doi: 10.1109/CVPR.2017.296.

Published

2023-08-31

How to Cite

[1]
D. R. J, S. Sundriyal, H. K. Bhagat, and H. Sharma, “Stylized Art Generator – An Effective CNN-Based Tool for Gaming”, Int. J. Sci. Res. Comp. Sci. Eng., vol. 11, no. 4, pp. 51–57, Aug. 2023.

Section

Research Article
