
Submissions

Login or Register to make a submission.

Submission Preparation Checklist

As part of the submission process, authors are required to confirm their submission's compliance with all of the following items; submissions that do not adhere to these guidelines may be returned to the authors.
  • The submission has not been previously published, nor is it before another journal for consideration (or an explanation has been provided in Comments to the Editor).
  • The submission file is in PDF file format.
  • Where available, URLs for the references have been provided.
  • The text is single-spaced; uses a 10-point font; and all illustrations, figures, and tables are placed within the text at the appropriate points, rather than at the end.
  • The text adheres to the stylistic and bibliographic requirements outlined in the Author Guidelines.

Author Guidelines

Download the journal template JDSAI_Template.docx here

The paper title should use a 16-point bold Times New Roman font. Author affiliations should use a 12-point Times New Roman font.

Begin the abstract two lines below the author names and addresses. The abstract should summarize the paper's key findings in 250 words or fewer. For the keywords, select up to 8 terms that describe the manuscript's subject.

Main section headers should use a 12-point bold Times New Roman font, in capital letters. Subsection headers should use a 10-point bold Times New Roman font.

Table text and figure captions should use 9-point font, in Times New Roman.
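For authors preparing their manuscript in LaTeX rather than the Word template, the formatting rules above can be approximated with a minimal preamble sketch such as the one below. This is an illustrative assumption, not an official class file; the downloadable JDSAI_Template.docx remains the authoritative layout.

```latex
% Minimal sketch approximating the stated sizes, assuming a 10-point
% Times New Roman base font. Not an official JDSAI class file.
\documentclass[10pt]{article}
\usepackage{newtxtext}            % Times-like text font
\usepackage{titlesec}             % section-header formatting
\usepackage[font=small]{caption}  % captions in \small (9 pt at a 10 pt base)

% Main section headers: 12 pt, bold, capital letters
\titleformat{\section}
  {\fontsize{12}{14}\selectfont\bfseries\MakeUppercase}{\thesection.}{0.5em}{}
% Subsection headers: 10 pt, bold
\titleformat{\subsection}
  {\fontsize{10}{12}\selectfont\bfseries}{\thesubsection.}{0.5em}{}

\begin{document}
% Paper title: 16 pt, bold
{\centering\fontsize{16}{19}\selectfont\bfseries Paper Title\par}
\end{document}
```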

Reference examples are as follows.

[1] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016, pp. 50–58.
[2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1097–1105.
[4] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint, vol. abs/1409.1556, 2014. [Online]. Available: http://arxiv.org/abs/1409.1556
[5] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1–9. [Online]. Available: doi.ieeecomputersociety.org/10.1109/CVPR.2015.7298594
[6] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 2818–2826.
[7] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
[8] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1800–1807.
[9] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 2261–2269.
[10] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning transferable architectures for scalable image recognition,” arXiv preprint, vol. abs/1707.07012, 2017. [Online]. Available: http://arxiv.org/abs/1707.07012


Privacy Statement

The names and email addresses entered in this journal site will be used exclusively for the stated purposes of this journal and will not be made available for any other purpose or to any other party.