The Role of Artificial Intelligence in Modern Legal Systems
Akkali Dinara
Atyrau University of Oil and Gas
Abstract
The rapid advancement of artificial intelligence has profoundly influenced contemporary legal systems, reshaping traditional concepts of responsibility, authority, evidence, and governance. As AI technologies increasingly perform tasks that were historically reserved for human actors, legal frameworks face growing pressure to adapt to new forms of decision-making, automation, and technological regulation. This study examines the multifaceted relationship between artificial intelligence and law, focusing on the disruptive effects of AI on legal institutions and the challenges it poses to fundamental legal principles.
The paper explores key areas where artificial intelligence intersects with law, including automated legal processes, liability for AI-induced harm, the use of AI in legal evidence and judicial decision-making, and the emergence of technology-based governance. Particular attention is given to the problem of accountability, as autonomous systems lack moral awareness and intentionality, yet can cause significant legal and social consequences. The analysis also highlights the risks associated with overreliance on automation, such as the erosion of human judgment, transparency deficits, and threats to individual autonomy.
Furthermore, the study considers the regulatory responses to artificial intelligence at both national and international levels, emphasizing the importance of human-centric and trustworthy AI. While artificial intelligence offers substantial benefits in terms of efficiency, accessibility, and predictive capacity within legal practice, it cannot fully replace human reasoning and ethical evaluation. The paper argues that artificial intelligence should be viewed not as a substitute for law, but as a tool that can support legal reform when embedded within a coherent and flexible legal framework. Ultimately, preserving human autonomy, dignity, and accountability remains essential in ensuring that technological progress strengthens rather than undermines the authority of law.
-
Introduction
The rapid development of artificial intelligence has significantly transformed modern societies and raised complex legal, ethical, and institutional questions. Artificial intelligence is no longer limited to technical or experimental environments; it actively influences economic systems, labor markets, education, healthcare, public administration, and legal decision-making. As AI technologies increasingly participate in processes that were traditionally governed exclusively by human judgment, concerns emerge regarding responsibility, accountability, transparency, and the preservation of fundamental legal values.
One of the central challenges is the autonomy of intelligent systems. When AI systems operate independently, they may produce outcomes that resemble human decisions but lack human intent, moral reasoning, or awareness. This creates uncertainty regarding responsibility in cases of error or harm. Questions arise as to whether liability should rest with developers, operators, users, or regulatory institutions. These concerns demonstrate that the development of AI cannot be separated from legal frameworks and social norms.
At the same time, artificial intelligence offers opportunities to enhance decision-making, improve efficiency, and support legal professionals in complex analytical tasks. However, these benefits must be balanced against risks related to authors’ rights, data protection, judicial fairness, and the erosion of trust in legal institutions. The fundamental issue is whether legal systems can adapt to technological change without sacrificing their core principles.
-
Legal Disruption Caused by Artificial Intelligence
Artificial intelligence acts as a disruptive force within legal systems by challenging traditional assumptions about authority, responsibility, and evidence. Legal norms were historically designed for human actors who possess intention, consciousness, and accountability. AI systems, by contrast, operate through algorithms and data-driven processes, which do not align neatly with existing legal categories.
This disruption is visible in areas such as deepfake technology, autonomous vehicles, automated decision-making, and predictive analytics. Deepfakes, for example, undermine the reliability of audio-visual evidence and allow individuals to deny authentic recordings by claiming they were artificially generated. This weakens public trust in digital evidence and complicates legal procedures that rely on factual verification.
Artificial intelligence also exposes structural gaps within legal systems. Rather than replacing legal institutions outright, AI more often reveals inefficiencies and inconsistencies in existing regulatory frameworks. While some scholars argue that highly autonomous systems should be recognized as legal persons, the prevailing view holds that legal personality remains fundamentally human-centered. AI may assist in reforming legal structures, but it is not equivalent to human agency.
-
Automation in Legal Processes
One of the most significant impacts of artificial intelligence on law is the automation of legal tasks. AI systems are increasingly used for document review, legal research, contract analysis, and the prediction of case outcomes. Machine learning models trained on extensive databases can identify patterns in judicial decisions and assist legal professionals in developing strategies.
Automation also extends to regulatory enforcement, where compliance with legal norms can be monitored automatically. In some contexts, sanctions may be imposed without direct human involvement. While this can increase efficiency and reduce administrative costs, it raises serious concerns regarding due process, fairness, and proportionality. Automated systems lack the ability to fully understand social context, moral nuance, and individual circumstances.
The expansion of automation represents a critical moment in legal evolution. While technology can support existing legal institutions, excessive reliance on automated decision-making risks undermining the human judgment that is essential to justice.
-
Liability and Regulation of AI-Induced Harm
As artificial intelligence becomes more capable, it can also be misused for criminal activities, including fraud, market manipulation, and cybercrime. Determining liability for harm caused by AI systems is increasingly complex due to the involvement of multiple actors, such as developers, manufacturers, operators, and users.
Most legal systems attribute responsibility to human actors rather than to machines. Although AI systems may outperform humans in speed and accuracy, they lack self-awareness and moral responsibility. As a result, liability assessments typically focus on whether harm resulted from defective design, inadequate supervision, or improper use.
Some scholars propose limited forms of legal personality for AI systems, granting them specific rights or obligations. However, the dominant view remains that legal responsibility must ultimately rest with humans. Regulatory frameworks therefore aim to balance innovation with accountability, ensuring that technological progress does not undermine public safety or individual rights.
-
Artificial Intelligence and Legal Evidence
Artificial intelligence plays an expanding role in the management and analysis of legal evidence. AI tools are used to retrieve legal information, analyze precedents, and model alternative narratives in criminal and civil cases. Projects such as narrative reconstruction systems demonstrate how AI can assist in organizing complex case files.
Despite these advancements, legal evidence differs fundamentally from scientific evidence. Legal proof often depends on unique human experiences, credibility assessments, and social context. Unlike scientific experiments, legal events cannot be replicated or tested under controlled conditions.
Another challenge is the cultural and social acceptance of AI-generated evidence. Legal narratives must be comprehensible and persuasive to judges and juries, which requires sensitivity to societal values. AI systems are still limited in their ability to construct narratives that align with human expectations of justice and fairness.
-
The Relationship Between Law and Technology
Technology reshapes the conditions under which legal rights are exercised. While the law does not directly prevent the use of technology, technological design can restrict or enable certain forms of behavior. This creates tension between rapid technological development and slower legal adaptation.
Legal systems must remain flexible without abandoning their internal logic. Overregulation can hinder innovation, while insufficient regulation may allow technological harm. Legal disputes arising from technological change are often unpredictable, requiring courts and legislators to adapt continuously.
In the field of copyright, generative AI challenges traditional notions of authorship and originality. Platforms increasingly require disclosure of AI-generated content, yet reliable detection remains difficult. Errors in enforcement may infringe upon legitimate authors’ rights or distort competition within creative industries.
-
AI-Based Monitoring and Governance
Artificial intelligence is increasingly used as a regulatory tool, shaping behavior through technological constraints rather than legal texts. This shift toward governance by technology raises concerns about autonomy, transparency, and democratic accountability. When rules are embedded in code, individuals may have limited ability to challenge or understand the systems that regulate their actions.
European regulatory initiatives emphasize the importance of trustworthy and human-centric AI. Trustworthy AI requires transparency, high-quality data, and respect for fundamental rights. While humans currently remain in control of AI governance, future developments may shift this balance.
Technological regulation risks decentralizing human authority and weakening traditional legal discourse. Invisible technological constraints may replace explicit legal norms, making it difficult to identify responsibility or seek judicial review.
-
Authority of Law in the Age of Artificial Intelligence
The replacement of human authority with automated systems raises fundamental ethical concerns. While technology can improve efficiency and access to justice, excessive automation may endanger the social foundations of law. Legal authority derives not only from compliance but also from respect, legitimacy, and human judgment.
Courts increasingly use technology to reduce costs and streamline procedures. Software tools facilitate dispute resolution and may even prevent litigation. However, governance based solely on technology risks prioritizing control over justice.
The choice between governance by law and governance by technology reflects a deeper ethical dilemma. An imperfect legal order governed by humans may be preferable to a perfectly efficient system that lacks moral reasoning and accountability.
-
Advantages and Disadvantages of AI in Legal Practice
Artificial intelligence offers numerous advantages in legal practice, including rapid access to legal information, improved service quality, cost reduction, and continuous availability. Predictive analytics can support legal strategy and decision-making.
At the same time, AI systems struggle with ethical reasoning, contextual understanding, and accountability. Security risks, privacy concerns, and the erosion of human legal skills remain significant challenges. AI should therefore complement, rather than replace, human judgment.
-
Discussion and Conclusion
The interaction between artificial intelligence and law should be viewed as an opportunity for reform rather than a threat. Legal systems must remain coherent while adapting to technological change. Responsibility, transparency, and human oversight are essential to preserving justice in an AI-driven environment.
Artificial intelligence requires a flexible yet reliable legal framework that protects individual rights and public security. While developed countries may rely on structured regulations, developing legal systems depend more heavily on legal theory and judicial interpretation.
Ultimately, the future of law in the age of artificial intelligence depends on maintaining a human-centered approach. Technology should serve society, not govern it. The preservation of autonomy, dignity, and ethical judgment remains the defining task of modern legal systems.