Call for papers
DSS Special Issue on Blockchain Technology and Applications
Blockchain technology has received enormous attention since Bitcoin was launched in 2009 and has become a frontier of technological advancement and application innovation in recent years (Ilk et al., 2021; Kumar et al., 2020; Shang et al., 2022; Zhang et al., 2021). In particular, blockchain is now recognized as a critical part of the new ABCD of modern technology, that is, Artificial intelligence, Blockchain, Cloud computing, and big Data. However, many research challenges and opportunities remain to be tackled in areas such as blockchain infrastructure decentralization, blockchain network governance, blockchain security and privacy, and the nature of machine trust in blockchain-based systems. Because blockchain is an integral part of automated business processes, the implementation of this technology can vary greatly across organizations and industries.
The aim of this special issue is to highlight novel, high-quality research in blockchain technology and applications and to examine the current and future impact of blockchain systems and related technologies, including data verification before block confirmation, authentication of data ownership, and dataflow across blockchain systems. Considering the decision-making focus of DSS publications, which bridge the gap between managerial and technical perspectives, this special issue is open to all manuscripts that make a significant research contribution to blockchain systems and applications in business sectors such as finance, insurance, healthcare, manufacturing, supply chain, education, and government.
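As background for the technical themes above, the short Python sketch below gives a minimal, purely illustrative picture of block-level data verification; it is not the implementation of any particular blockchain platform. Each block commits to the hash of its predecessor, so tampering with earlier data is detectable when the chain is re-verified.

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    # Hash a canonical JSON serialization of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def verify_chain(chain: list) -> bool:
    # The chain verifies only if every block's stored prev_hash matches
    # the recomputed hash of the block that precedes it.
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))


genesis = {"index": 0, "data": "genesis", "prev_hash": ""}
block_1 = {"index": 1, "data": "tx: A pays B 5", "prev_hash": block_hash(genesis)}
chain = [genesis, block_1]

print(verify_chain(chain))     # True: the chain is internally consistent
genesis["data"] = "tampered"   # altering an earlier block...
print(verify_chain(chain))     # ...False: verification now fails
```

Production systems layer consensus protocols, digital signatures, and Merkle trees on top of this basic chaining idea, which is where many of the research questions listed below arise.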
In terms of research paradigm, we invite manuscripts with system-based implications that draw on various analytical, empirical, and technical methodologies including, but not limited to, system development, econometrics, decision theory, operations management, experimentation, and engineering. We strongly encourage submissions that follow a design science research perspective (Hevner et al., 2004), which aims to develop cutting-edge IT artifacts. In short, we welcome all technical and quantitative research methods that help managers, engineers, and researchers tackle real-world challenges through blockchain technology. Topics of interest include, but are not limited to:
- Process-aware blockchain design and management
- Blockchain applications in metaverse platforms
- Design and implementation issues in the transition from proof of work (PoW) to proof of stake (PoS)
- Data management issues on the blockchain
- Security and privacy issues on the blockchain
- Design and implementation issues of metaverse systems
- Integration of blockchain into existing business infrastructure
- Modeling, design and implementation of trust mechanisms in blockchain-based systems
- New and faster consensus algorithms for blockchain implementation
- NFT analysis and design paradigms in various business sectors
- Blockchain-based NFT minting models and mechanisms
Guest Editors:
Dr. Shaokun Fan
Oregon State University, Corvallis, OR, USA
Email: [email protected]
Dr. Noyan Ilk
Florida State University, Tallahassee, FL, USA
Email: [email protected]
Prof. Akhil Kumar
Penn State University, University Park, PA, USA
Email: [email protected]
Prof. J. Leon Zhao
Chinese University of Hong Kong, Shenzhen, China
Email: [email protected]
Managing Guest Editor
Dr. Ruiyun “Rayna” Xu
Chinese University of Hong Kong, Shenzhen, China
Email: [email protected]
Associate Editors of the Special Issue
- Adams, Michael, Queensland University of Technology, Brisbane, Australia
- Carvalho, Arthur, Miami University, Oxford, Ohio, USA
- Hu, Daning, Southern University of Science and Technology, Shenzhen, China
- Jiang, Qiqi, Copenhagen Business School, Frederiksberg, Denmark
- Leng, Jiewu, Guangdong University of Technology, Guangzhou, China
- Liu, Rong (Emily), Stevens Institute of Technology, Hoboken, NJ, USA
- Luo, Xin (Robert), The University of New Mexico, Albuquerque, New Mexico, USA
- Peng, Chih-Hung, National Chengchi University, Taipei, Taiwan
- Shan, Zhe, Miami University, Oxford, Ohio, USA
- Subramanian, Hemang, Florida International University, Miami, Florida, USA
- Tan, Yinliang (Ricky), University of Houston, USA
- Wei, Chih-Ping, National Taiwan University, Taiwan
- Xue, Ling, Georgia State University, USA
- Zhang, Wenping, Renmin University of China, Beijing, China
- Zhao, Xi, Xi'an Jiaotong University, China
Submission Guidelines
- All manuscripts should be submitted through the Decision Support Systems online submission system between October 15, 2022 and January 15, 2023. See the Guide for Authors and submission details at https://www.journals.elsevier.com/decision-support-systems
- Submissions must fully follow the Guide for Authors for Decision Support Systems.
- Authors should select "Special Issue: Blockchain Technology and Applications" as "Manuscript Type" at https://www.editorialmanager.com/decsup/default1.aspx.
Important Dates:
- Submissions System opens: October 15, 2022
- Paper Submission Deadline: January 15, 2023
- Initial Screening of Submissions: January 30, 2023
- First Review Decisions: April 15, 2023
- Revision Due: June 30, 2023
- Acceptance Decisions: September 15, 2023
- Final Manuscript Due: November 15, 2023
References
Hevner, A.R., March, S.T., Park, J., and Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75-105.
Ilk, N., Shang, G., Fan, S., and Zhao, J. L. (2021). Stability of Transaction Fees in Bitcoin: A Supply and Demand Perspective. MIS Quarterly, 45(2), 563-692.
Kumar, A., Liu, R., and Shan, Z. (2020). Is Blockchain a Silver Bullet for Supply Chain Management? Technical Challenges and Research Opportunities. Decision Sciences, 51(1), 8-37.
Shang, G., Ilk, N., and Fan, S. (2022). Need for Speed, but How Much Does It Cost? Unpacking the Fee-Speed Relationship in Bitcoin Transactions. Journal of Operations Management, Forthcoming.
Zhang, W., Wei, C-P., Jiang, Q., Peng, C-H., and Zhao, J. L. (2021). Beyond the Block: A Novel Blockchain-Based Technical Model for Long-Term Care Insurance. Journal of Management Information Systems, 38(2), 374-400.
DSS Special Issue on Explainable AI for Enhanced Decision Making
Artificial Intelligence (AI), defined as the development of computer systems that are able to perform tasks normally requiring human intelligence by understanding, processing, and analyzing large amounts of data, has been a prominent domain for several decades now. An increasing number of businesses rely on AI to achieve outcomes that operationally and/or strategically support (human) decision making in different domains. At present, AI-based machine learning (ML) has become widely popular as a subfield of AI, both in industry and in academia. ML has been widely used to enhance decision making, including predicting organ transplantation risk (Topuz et al., 2018), forecasting the remaining useful life of machinery (Kraus et al., 2020), predicting student dropout (Coussement et al., 2020), assessing bias in machine-based lending (Fu et al., 2021), and detecting data misrepresentation in insurance underwriting (Vandervorst et al., 2022), among others. In the early days, AI attempts to imitate human decision-making rules were only partially successful, as humans often could not accurately describe the decision-making rules they use to solve problems (Fügener et al., 2022). With the development of advanced AI, exciting progress has been made in algorithmic development to support decision making in various fields, including finance, economics, marketing, human resource management, tourism, computer science, biological science, medical science, and others (Liu et al., 2022).
Recently, advances have focused heavily on boosting the predictive accuracy of AI methods, with deep learning (DL) being a prominent example. This stringent focus on prediction performance often comes at the expense of explainability, which leads to decision makers’ distrust and even rejection of AI systems (Shin, 2021). Explainable AI describes the process that allows one to understand how an AI system decides, predicts, and performs its operations. Explainable AI thus reveals the strengths and weaknesses of a decision-making strategy and explains the rationale of the decision support system (Rai, 2020). Numerous scholars confirm that explainable AI is key to developing and deploying AI in industries such as retail, banking and financial services, manufacturing, and supply chain/logistics (Kim et al., 2020; Shin, 2021; Zhdanov et al., 2022). In addition, explainable AI has received attention from governments because of its ability to improve the efficiency and effectiveness of government functions and decision support (Phillips-Wren et al., 2021).
In fact, in many cases, understanding why a model makes certain decisions and predictions is as important as its accuracy, because model explainability helps managers better understand a model’s parameters and apply it more confidently, allowing them to communicate the analytical rationale for their decisions more convincingly to stakeholders (Wang et al., 2022). Exploring the applications of AI explainability and interpretability in decision making is therefore one of the main contributions of this special issue.
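To make the notion of post-hoc explainability concrete, the sketch below uses scikit-learn’s permutation feature importance on a publicly available data set; the data and model are illustrative stand-ins chosen for brevity, not drawn from any of the cited studies.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data set and hold-out split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but opaque ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc explanation: how much does shuffling each feature degrade
# held-out performance? Larger drops indicate more influential features.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Shuffling one feature at a time and measuring the drop in held-out performance yields a simple, model-agnostic ranking of the inputs the classifier actually relies on, which is the kind of explanation decision makers can communicate to stakeholders.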
This special issue on “Explainable AI for Enhanced Decision Making” therefore addresses the following topics, as an illustrative but not exhaustive list:
- Explainability and interpretability in AI decision support systems
- Using explainable AI for corporate investment decisions
- Explainable AI in banking, insurance, and micro enterprises
- Explainable AI in healthcare, transportation, and education
- Causality of AI models
- Property risk assessment using explainable AI
- Making enhanced business decisions using explainable AI
- Using explainable AI to make predictions for IT industry decisions
- Explainable AI, big data, and decision support systems
- Explainable AI applications and services
- Explainable methods for deep learning architectures
- Decision model visualization
- Evaluating decision-making metrics and processes
- Measuring explainability in decision support systems
"Please note that we are particularly interested in research papers that focus on the explainability aspects of AI based ML research. All articles that simply focus on improving the accuracy of AI algorithms/machine learning classifiers, without highlighting the benefit to improved explainable decision making, are strictly not encouraged".
Guest Editors:
Dr. Mohammad Abedin
Senior Lecturer in Fintech & Financial Innovation
Teesside University International Business School
Teesside University, United Kingdom
Email: [email protected]
Prof. Dr. Kristof Coussement
Professor of Business Analytics
IESEG School of Management, France
Email: [email protected]
Dr. Mathias Kraus
Assistant Professor of Data Analytics
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Email: [email protected]
Prof. Dr. Sebastián Maldonado
Professor of Information Systems
University of Chile, Chile
Email: [email protected]
Dr. Kazim Topuz
Assistant Professor of Business Analytics & Operations Management
The University of Tulsa, United States
Email: [email protected]
Submission Guidelines
Please submit your paper to the Special Issue category (VSI: Explainable AI) through the online submission system (https://www.editorialmanager.com/decsup/default1.aspx) of Decision Support Systems. All submissions should follow the general author guidelines of Decision Support Systems, available at https://www.elsevier.com/journals/decision-support-systems/0167-9236/guide-for-authors.
Submission Timeline
• Paper submission system opens: November 1st, 2022.
• Paper submission deadline: June 15th, 2023.
References
Coussement, K., Phan, M., De Caigny, A., Benoit, D. F., & Raes, A. (2020). Predicting student dropout in subscription-based online learning environments: The beneficial impact of the logit leaf model. Decision Support Systems. https://doi.org/10.1016/j.dss.2020.113325
Fu, R., Huang, Y., & Singh, P. V. (2021). Crowds, Lending, Machine, and Bias. Information Systems Research, 32(1), 72–92. https://doi.org/10.1287/isre.2020.0990
Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2022). Cognitive Challenges in Human–Artificial Intelligence Collaboration: Investigating the Path Toward Productive Delegation. Information Systems Research. https://doi.org/10.1287/isre.2021.1079
Kim, B., Park, J., & Suh, J. (2020). Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information. Decision Support Systems, 134, 113302. https://doi.org/10.1016/j.dss.2020.113302
Kraus, M., Feuerriegel, S., & Oztekin, A. (2020). Deep learning in business analytics and operations research: Models, applications and managerial implications. European Journal of Operational Research, 281(3), 628–641. https://doi.org/10.1016/j.ejor.2019.09.018
Liu, H., Ye, Y., & Lee, H. Y. (2022). High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks. Operations Research. https://doi.org/10.1287/opre.2021.2217
Phillips-Wren, G., Daly, M., & Burstein, F. (2021). Reconciling business intelligence, analytics and decision support systems: More data, deeper insight. Decision Support Systems, 146, 113560. https://doi.org/10.1016/j.dss.2021.113560
Rai, A. (2020). Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
Topuz, K., Zengul, F. D., Dag, A., Almehmi, A., & Yildirim, M. B. (2018). Predicting graft survival among kidney transplant recipients: A Bayesian decision support model. Decision Support Systems, 106, 97–109. https://doi.org/10.1016/j.dss.2017.12.004
Vandervorst, F., Verbeke, W., & Verdonck, T. (2022). Data misrepresentation detection for insurance underwriting fraud prevention. Decision Support Systems, 159, 113798. https://doi.org/10.1016/j.dss.2022.113798
Wang, L., Gopal, R., Shankar, R., & Pancras, J. (2022). Forecasting venue popularity on location-based services using interpretable machine learning. Production and Operations Management. https://doi.org/10.1111/poms.13727
Zhdanov, D., Bhattacharjee, S., & Bragin, M. A. (2022). Incorporating FAT and privacy aware AI modeling approaches into business decision making frameworks. Decision Support Systems, 155, 113715. https://doi.org/10.1016/j.dss.2021.113715