It is an understatement to say, “I think AI (artificial intelligence) has come to stay.” AI has been with us for decades, and it is not going anywhere. On the contrary, it has revolutionized science and society itself. AI shapes human relationships and interactions, how we learn, how we transmit knowledge, and many other aspects of human life. How can we, as humans, establish boundaries? Fundamentally, by developing practical ethical guidelines for its use. How do we do that? This article offers some suggestions that could serve as a starting point.
The term artificial intelligence (AI) was first introduced in 1955 by Professor John McCarthy, later of Stanford University, as noted by Reamer (2023). Reamer elaborates that AI refers to machines simulating human intelligence processes, enabling them to perform tasks that usually require human-level cognitive abilities. AI includes what is known as machine learning, which uses historical data to predict and shape new output. AI has the potential to transform and enhance diverse professions and education. It can enable us to analyze data quickly, learn from it, and make decisions or take actions based on that data. Recognizing this potential calls for guidelines.
In recent years, a proliferation of ethical guidelines documenting AI principles has arisen. While not legally binding, these “soft law” instruments exert a significant influence over decision-making in relevant fields. Prominent corporations, including Google and SAP (System Analysis Program Development), have published their AI principles, highlighting the private sector’s involvement in shaping AI ethics (Jobin et al., 2019). The substantial output of these documents reflects an urgent need for ethical guidelines as interest in the implications of AI for public welfare intensifies.
Research conducted by Jobin et al. (2019) involved a scoping review of existing documents on AI principles, focusing on soft-law guidelines from both private and public institutions. The review analyzed 84 documents containing ethical principles. Its key finding was a general convergence toward five core ethical principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy. While these principles appear broadly accepted, there are notable differences in interpretation, application, and implementation strategies. For example, transparency, the most cited principle, entails varied efforts ranging from data-usage disclosures to accountability for automated decision-making. Other principles, such as justice and fairness, emphasize preventing bias and discrimination while advocating equitable access to AI’s benefits. What, then, should be considered when establishing practical ethical guidelines for using AI, whether in professional service or educational settings?
The National Association of Social Workers (NASW) (2023) outlines eight ethical considerations for using artificial intelligence in social work. The first is developing competence in AI use: before using AI, users must ensure they are well educated about AI applications and continuously engage in professional growth activities related to AI. Clear communication about AI use, including its risks and benefits, is necessary for ethical client engagement in a professional setting. Clients should be informed and retain the ultimate authority to consent to the use of AI in their care. Another consideration is that AI must not undermine client goals; professionals should facilitate client self-determination and respect their choices, ensuring AI supports rather than defines those choices.
Another consideration is AI’s potential to perpetuate bias, which underscores the importance of cultural competence and the need for users to actively oppose discriminatory practices. NASW (2023) argues that protecting client data is crucial, necessitating compliance with ethical standards and legal frameworks to ensure confidentiality. Other considerations mentioned by NASW (2023) are:
- Continuing education: Staying informed about new developments in AI and their implications for practice is essential for ethical social work.
- Technology awareness: Social workers should recognize their use of AI tools and seek to understand their functionalities in order to provide informed services.
- Policy development: A structured technology policy must outline ethical practices concerning AI use, ensuring alignment with national standards and client protection. These considerations could also apply in educational settings.
Another consideration, stated by Sundvall (2023), is that all users of AI simulators, applications, or tools should remember that AI is NOT human. We cannot control the output we receive from AI, we cannot critically analyze all AI data before it reaches its users, and we cannot predict what may be harmful to users. These points should be kept in mind when using AI.
On the other hand, AI offers advantages for professionals, educators, and students. AI tools can identify and correct grammar and style errors, helping users produce more polished written work. AI tools can also recognize speech and guide pronunciation, helping professionals, educators, and students develop better speaking skills (Reamer, 2023; Sundvall, 2023).
Reamer (2023) mentions other advantages of AI use, stating that AI can assist in clinical interventions by permitting clients’ digital exchanges with artificial intelligence tools and chatbots. Other advantages are:
- Client self-monitoring: Tracking moods and behaviors via smartphone apps and wearable sensors (Reamer, 2023).
- “Ecological momentary assessment” (EMA): Repeatedly sampling clients’ behaviors and experiences in real time (Reamer, 2023).
- Data mining: Extracting potentially useful information from data to identify patterns and trends using learning algorithms (e.g., symptom patterns and predictors, risk assessment, treatment outcomes) (Reamer, 2023).
- Populating content: Drafting grant applications, needs assessments, program evaluations, final reports to funders, and advocacy and social justice campaigns (Reamer, 2023).
When AI simulators are used, they provide a safe space for humans to interact with virtual clients. Educators can identify deficits, provide practice-based instruction, and individualize student training. AI simulators also reduce student anxiety, and no harm is caused to humans during practice.
Conclusions
While leveraging AI offers numerous advantages, incorporating these technologies also introduces challenges and ethical considerations (Spivakovsky et al., 2023). For instance, reliance on AI risks diminishing critical thinking, as students may lean heavily on generated suggestions instead of developing independent reasoning. Concerns about biases in AI outputs and the lack of regulatory frameworks further complicate its seamless integration into educational and professional settings (Spivakovsky et al., 2023).
These challenges underscore the urgent need for higher education institutions and professionals to craft comprehensive policies regarding AI applications. Some recommendations presented by Reamer (2023) and Sundvall (2023) are:
- Create ethics-based governing principles
- Establish a digital ethics steering committee
- Convene diverse focus groups
- Subject algorithms to peer review
- Conduct AI simulations
- Create guidelines for interpreting AI results
- Develop rigorous training protocols
- Maintain a log of AI results to identify positive and negative trends
- Test algorithms for possible biases and inaccuracies
With rapid technological progress, continued research must focus on enhancing AI literacy and transparency in academic and professional practices to maximize AI’s benefits while minimizing its potential harms (Spivakovsky et al., 2023).
Dr. Débora Fontánez-Flecha, MCSW, LSW, LTSE, BSW
Dr. Débora Fontánez-Flecha is a Professor and Coordinator of the Social Work Doctorate Program at AGM University. She holds a PhD in Social Work from the Universidad de Puerto Rico. Her research focuses on Structural Violence in Education, emphasizing Citizenship and Human Rights. Dr. Fontánez-Flecha has received multiple awards from Puerto Rican institutions and published extensively in peer-reviewed journals.
References
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature
Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
National Association of Social Workers [NASW]. (2023). 8 ethical considerations for the use of artificial intelligence in social work.
Ethics Table Talks. https://naswinstitute.inreachce.com
Reamer, F. G. (2023). Artificial intelligence in social work: Emerging ethical issues.
International Journal of Social Work Values and Ethics, 20(2), 52-71. https://doi.org/10.55521/10-020-205
Spivakovsky, O. V., Omelchuk, S. A., Kobets, V. V., Valko, N. V., & Malchykova, D. S. (2023).
Institutional policies on artificial intelligence in university learning, teaching and research. Information Technologies and Learning Tools, 97(5), 181-202. https://doi.org/10.33407/itlt.v97i5.5395
Sundvall, J. (2023). Artificial intelligence in social work: Exploring key ethical
considerations. Social Work Online CE Institute. https://naswinstitute.inreachce.com

