Article "A Philosophical-Anthropological Analysis of the Contradictions in the Development of Artificial Intelligence", journal "Philosophical Thought" (Filosofskaya mysl'), NotaBene.ru
Philosophical Thought
Reference:

A Philosophical-Anthropological Analysis of the Contradictions in the Development of Artificial Intelligence

Gluzdov Dmitry Viktorovich

ORCID: 0000-0001-7043-5139

Postgraduate Student, Department of Philosophy and Social Sciences, Nizhny Novgorod State Pedagogical University named after Kozma Minin

603950, Russia, Nizhny Novgorod, Ulyanova str., 1

dmitry.gluzdov@mail.ru
DOI: 10.25136/2409-8728.2023.10.40062

EDN: NFQRFV

Received: 27-03-2023

Published: 28-10-2023
Abstract: The object of this philosophical study is artificial intelligence. Its subject is the impact of the development of artificial intelligence on the human being: on the formation and change of ideas about the human, about human nature and essence. The study emphasizes the contradictory character of this impact. A philosophical-anthropological analysis of artificial intelligence focuses on understanding the influence of this technology through the phenomenon of the human being, human existence, and human experience. The article attempts to examine the problem from several positions, including the question of how to ensure control over the growing "consumption" of artificial intelligence in its many forms, what may affect human development itself, and how current trends contribute to changing or creating social and cultural norms, such as the ideas of "roboethics" and ethical responsibility in the creation and use of intelligent machines. The fragmentary and incomplete coverage of this topic in the existing literature justifies the task of formulating the problem and studying it. The need for a comprehensive study is the idea that initiated this work: to conduct a philosophical-anthropological analysis, identify the shortcomings of the current situation, and outline prospects. In the course of the research, no materials were found that treat the problem comprehensively while also identifying the causes and foundations of these contradictions in order to analyze them from the standpoint of philosophical anthropology; this determines the novelty of the study.


Keywords: philosophical anthropology, human, artificial intelligence, technology, contradictions, consciousness, freedom, identity, ethics, interdisciplinary cooperation

This article was automatically translated; the original Russian text is available on the journal's website.

Introduction

In recent years, artificial intelligence (hereinafter AI) has made significant progress thanks to growing computing power, the accumulation of data in digital form (the material for processing), and advances in machine learning, neural networks, and deep learning algorithms. As AI continues to evolve and become more sophisticated, its development poses a range of heterogeneous tasks that require an interdisciplinary approach. The development of AI is undoubtedly accompanied by contradictions whose full and comprehensive understanding requires philosophical reflection.

Artificial intelligence is capable of qualitatively changing human existence. Philosophical anthropology, which seeks to understand the principles governing human existence, therefore faces the task of analyzing the questions and problems associated with the development of artificial intelligence. The purpose of this article is to analyze the contradictions in the development of AI from the standpoint of philosophical anthropology, which accords with its central research aim: understanding what it means to be human.

The relevance of the study follows from the fact that current control over the development of AI can be called negligible. Although government agencies in various countries have already begun to form a legislative framework for controlling artificial intelligence, it does not cover all possible areas and problems. More often, only individual issues in the subject area are raised (for example, only legal aspects, or only security), and the contradictions are not considered as a whole.

The novelty of the research lies in identifying the causes and grounds of the contradictions in the development of artificial intelligence and in attempting to analyze these contradictions from the standpoint of philosophical anthropology.

Methodologically, the work uses classification and comparative analysis. A hermeneutic approach is applied to interpret the primary sources, and a phenomenological approach allows AI to be considered as a phenomenon of the socio-cultural environment of the human being.

On the origins of contradictions in the development of artificial intelligence

Various causes have contributed to the development of artificial intelligence and made the study of its contradictions urgent. These contradictions arise from differences in expectations about the use of artificial intelligence, from the possibility of choosing different directions for its development, and from the absence of regulatory principles and regulatory bodies.

The accelerated development of artificial intelligence has become one of the most significant technological achievements in recent history. From virtual assistants, gadgets, and personal computers to driverless vehicles, artificial intelligence has penetrated almost all aspects of modern life and continues to penetrate ever deeper. One of the main sources of contradictions in the development of AI, however, is the absence of a unified approach, theory, legal norms, and controlling structures that could guide and coordinate its development [1, pp. 501-502][2, p. 91][3, pp. 18, 22]. Current artificial intelligence research is characterized by a variety of approaches: learning through neural networks, evolutionary algorithms, deep learning, and so on. Each approach has its own strengths and weaknesses. The development of artificial intelligence is further complicated by ethical and social problems, among them the inevitable impact on employment, problems of access to confidential information and its security, and the correctness of data processing. The latter may aggravate existing inequalities and injustices. If algorithms are not developed and tested properly, AI systems can retain bias in the very "core" of artificial intelligence, its "knowledge", with discrimination against a person (or groups of people) as a result.

One of the key sources of contradictions in the development of artificial intelligence is the tension between the desire to increase efficiency and the need for ethical and social responsibility. AI can indeed significantly increase productivity and efficiency by automating tasks and processes previously performed by humans, and it can increase the speed and accuracy of decision-making. This leads, for example, to algorithms that predict consumer behavior more effectively, optimize supply chains faster, and diagnose diseases better. But what is the downside of this gain in efficiency?

The algorithms used to create AI are (with rare exceptions) a trained "black box", and there is no confidence that the decision it makes is correct in 100% of cases. The training material collected for such a system does not contain answers for every situation in life, and the ability to make a perfectly correct and morally balanced decision is not always available even to a human.
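This point can be illustrated with a deliberately tiny sketch (all data and function names here are hypothetical, invented for illustration and not taken from the article): even a trivial trained model answers every query with equal confidence, including queries far outside the cases its training material covered, and offers no human-readable justification for its choice.

```python
# Hypothetical sketch: a minimal "trained" model that always answers,
# even for inputs unlike anything in its training data.

def train(samples):
    """'Train' a trivial nearest-neighbour model: just store the samples."""
    return list(samples)

def predict(model, x):
    """Return the label of the nearest training point.
    There is no notion of 'I don't know' and no explanation of the answer."""
    nearest = min(model, key=lambda s: abs(s[0] - x))
    return nearest[1]

# The training material covers only a small part of "life's cases": 0..10.
model = train([(0, "low"), (5, "low"), (10, "high")])

print(predict(model, 4))     # "low": plausibly correct, inside the data
print(predict(model, 1000))  # "high": far outside the training data, yet
                             # delivered with the same unqualified confidence
```

Real AI systems are vastly more complex, but the structural point is the same: the finite training material bounds what the system can "know", while nothing in the decision procedure signals when that boundary has been crossed.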

Contradictions also arise at the border between the need for greater control over AI and the need for greater autonomy of the systems being created. This need arises because artificial intelligence systems are able to adapt and learn from their own experience. The results can be seen in self-driving cars that use sensors and algorithms to navigate in difficult conditions. Yet this same adaptability can lead to unpredictable behavior, which raises concerns about the safety of artificial intelligence systems.

Literature review

If, within the boundaries of philosophical anthropology, we focus only on questions of intelligence from the point of view of the philosophy of mind, we may note that the question of human nature and consciousness arose constantly in European philosophy. In the early modern period, questions were actively raised both about human thinking itself and about the nature of this thinking. Naturally, it was at this time that natural human intelligence came to be compared with a machine. In an era of explosive development of mechanics as a science, it is not surprising that the comparison was made with the clock, the machine that was then most studied and developed. We find such comparisons with clocks in various early modern philosophers, above all Rene Descartes, Gottfried Leibniz, and Julien Offray de La Mettrie.

The problems of artificial intelligence are reflected in the works of many foreign scholars, including M. Arbib, J. Weizenbaum, S. Dreyfus, H. Dreyfus, J. McKinsey, H. Putnam, R. Penrose, B. Rosenblum, A. Turing, and R. Schank. Russian scientists and philosophers have not bypassed this topic either, including A. P. Alekseev, A. Y. Alekseev, I. Y. Alekseeva, V. V. Vasiliev, D. B. Volkov, D. I. Dubrovsky, A. F. Zotov, V. A. Lectorsky, A. P. Ogurtsov, Yu. V. Orfeev, V. I. Samokhvalov, N. M. Smirnova, A. G. Spirkin, V. S. Tyukhtin, and N. S. Yulina.

The contradictions of artificial intelligence were directly considered, in one form or another, by N. Wiener, N. Bostrom, D. Dennett, H. Dreyfus, P. Norvig, S. Russell, and J. Searle. Among Russian scientists and researchers one can single out A. Y. Alekseev, D. I. Dubrovsky, E. V. Ilyenkov, V. A. Kutyrev, A. L. Lectorsky, and Yu. Yu. Petrunin. Logical problems of artificial intelligence are considered in the works of I. Y. Alekseeva and S. L. Katrechko. A fairly wide range of informatization issues, of which artificial intelligence is only a part, is considered in the works of J. Weizenbaum, N. Wiener, V. A. Zvegintsev, K. A. Zuev, G. L. Smolyan, and A. I. Rakitov.

The review of scientific sources shows that the authors have no single position on terminology. As a rule, only certain aspects of the problem of artificial intelligence are revealed, most often with a focus on the negative side of its development. Nevertheless, the large number of scientists working within AI research once again emphasizes the relevance of the topic under study.

The role of philosophical anthropology in the development of artificial intelligence

Technologies can be considered an extension of human capabilities and a tool for achieving existing goals. And if the human is a naked ape covered by the blanket of the culture it has created, then technologies can (one-sidedly) be considered a "warming" of this "blanket".

From the standpoint of philosophical anthropology, however, there is another side. Technology can be seen as a force involved in shaping our perceptions, our values, and our social relationships. The key question is: how do technologies change our understanding of what it means to be human? Does our growing dependence on technology diminish our sense of agency and autonomy, or does it increase our capacity to create and innovate? Equally important: how does technology affect our relationships with other people? Does the use of social networks and digital communication allow us to relate to others in a new way, or does it lead to a feeling of isolation and withdrawal from relationships in the real world?

Scientists and philosophers are exploring the ethical and moral implications of technology. For example, they ask what responsibilities those who create and design technologies bear, and how we should balance technological progress with concern for human flourishing and, for instance, environmental sustainability.

Artificial intelligence is also a technology, but one with noticeably broader and more sophisticated possibilities for penetrating human life and influencing it. Philosophical anthropology can and should play a significant role in identifying and resolving the contradictions arising in the development of AI. By studying the nature and meanings of the human, philosophical anthropology can help us better understand the ethical and social consequences of the development of artificial intelligence.

One of the categories of philosophical anthropology is agency: the ability to act as an independent agent and make an informed and free choice. Although AI systems can perform tasks and make decisions, they lack independence, which is a fundamental aspect of human existence. Yet the presence of agency in artificial intelligence systems is already an openly discussed issue [4, pp. 296-297], which raises questions about the possible moral responsibility of artificial intelligence systems and the role of human control in their development and deployment [5, 6].

Another important category in philosophical anthropology is consciousness. Although artificial intelligence systems can already imitate human behavior and decision-making, they still lack the subjective experience of consciousness. This raises questions about the relationship between artificial intelligence and human consciousness [7, pp. 19-20], as well as about the ethical consequences of creating conscious machines [8, p. 33][9].

In defining the subject of the study, the task was set of identifying the role of philosophical anthropology in the study of the problems of artificial intelligence. Concerning this role, the following can be distinguished:

· Artificial intelligence is a rapidly developing field that transforms society and affects people's lives in various ways. Consequently, its development requires an understanding of human nature and human experience, which can be obtained through the kind of inquiry that philosophers carry out with respect to the human being.

· Philosophical anthropology provides a framework for understanding the relationship between humans and technology, including the ethical and social implications of artificial intelligence technologies.

· Artificial intelligence developers consider it possible to isolate intelligence from the human being as a function, which leads to the conclusion that it can be transplanted into something else [10, pp. 58-59]. By reducing the human to a set of functions, one can eventually abandon the human altogether; this is the road to the posthuman. By including philosophical anthropology as a participant in interdisciplinary cooperation on the development of AI, we will be able to create systems that are more human-oriented and better correspond to our values and the goals of society.

· The development of artificial intelligence poses new challenges to philosophical anthropology, requiring us to revise and refine our understanding of human nature in the light of new opportunities and limitations of technology.

· Philosophical anthropology can help identify the potential limitations of AI in understanding and evaluating the complexity of human experience and culture, emphasizing the importance of embodied experience, language and culture, irrational factors, and context. Understanding these limitations can help ensure that AI systems are developed and used in ways that treat human experience and culture more responsibly and respectfully.

· Philosophical anthropology can contribute to the development of artificial intelligence systems that better reflect the diversity and richness of human experience and culture, providing more inclusive and equitable outcomes.

· Through interdisciplinary collaboration, we can develop more comprehensive and subtle approaches to the development of artificial intelligence, taking into account the ethical, social and cultural aspects of technology. 

Possible questions and contradictions

The development of artificial intelligence provokes opposing assessments and concerns on different planes. On the one hand, artificial intelligence can improve the quality of human life and speed up the solution of complex tasks. On the other hand, it can lead to job cuts, generate ethical problems, and create information security problems such as violations of confidentiality and of access to information.

Let us try to classify and group the contradictions of the development of artificial intelligence by type, as follows:

1. Economic contradictions. The high efficiency of artificial intelligence can increase productivity but can also, for example, reduce the number of jobs. This includes:

· Increased labor productivity. The deployment of AI can bring significant benefits to the economy, including increased productivity, improved decision-making, and new business opportunities.

· Elimination of jobs. AI can automate many tasks and job functions, leading to people losing their jobs [21].

· Impact on the distribution of labor. The deployment of AI systems can disrupt existing labor markets and affect the jobs and skills of workers in a wide variety of industries. This requires careful consideration of the impact of AI on the workforce, as well as policies and programs that support the retraining and upskilling of workers in response to changing labor market demands.

· Aggravation of inequality. The cost of developing and deploying AI systems is high, and there is a risk that only a limited number of organizations and individuals will have access to the technology. This raises concerns about the distribution of benefits and the possibility of exacerbating existing inequalities. 

2. Contradictions in the field of social impact. The deployment of AI can change society in various ways: through new forms of communication, improved healthcare, changes to education, and other, possibly more effective and efficient, systems and means of providing basic services. It is also necessary to understand its potential impact on society as a whole, to ensure the distribution of its benefits, and to understand its impact on various communities and groups. Here one can highlight:

· Accessibility and fairness. There is a risk that AI will benefit only those who have access to technologies and resources for their development and use, which will exacerbate the existing inequality [3, p.30].

· Bias and discrimination, prejudice and partiality. AI algorithms can perpetuate (preserve in their data sets, their "knowledge") and possibly strengthen human biases and prejudices, including those related to race [3, p. 32], gender, and socio-economic status, which among other things leads to discriminatory results [3, pp. 30-31].

· Improved support for care work. AI systems can be used to support care work, for example by providing care robots or digital tools that simplify care work and make it more manageable.

· Inclusiveness and diversity. The development of artificial intelligence can benefit society as a whole, but it is necessary to ensure a fair distribution of its benefits and the availability of the technology to everyone. This requires an approach to AI development that takes into account the perspectives and needs of various communities and groups.
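The bias point above admits a minimal sketch (the data, groups, and function names are invented for illustration and are not from the article): a rule "learned" from biased historical decisions simply reproduces the bias as part of the model's "knowledge".

```python
# Hypothetical sketch: a scoring rule learned from biased historical
# decisions preserves that bias in the model's "knowledge".
from collections import defaultdict

def learn_rates(history):
    """Learn per-group positive-decision rates from past (group, decision) pairs."""
    positive = defaultdict(int)
    total = defaultdict(int)
    for group, decision in history:
        total[group] += 1
        positive[group] += decision
    return {g: positive[g] / total[g] for g in total}

# Invented history in which group "B" was systematically rejected.
history = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

rates = learn_rates(history)
# rates["A"] == 2/3, rates["B"] == 1/3: the past bias has become the rule
# the model will apply to future candidates.
print(rates)
```

Nothing in the learning step distinguishes a legitimate pattern from a historical injustice; without deliberate auditing, the two are stored identically.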

3. Ethical contradictions

In making decisions, systems with artificial intelligence can carry out actions that do not comply with ethical standards [13]. Ethical issues include those related to the violation of confidentiality and of access to information. In addition, AI can be used not only for beneficial purposes but also for harmful ones, such as hacking or attacks. The important points are therefore:

· Safety and ethics. As AI systems become more autonomous, there is a risk of accidents or unintended consequences, and the ethical implications of AI decision-making need to be considered.

· Alignment of values. There is a growing need to ensure that AI systems comply with human values and ethical principles, including fairness, transparency, and accountability. This includes developing artificial intelligence systems that correspond to human values and ethical principles, as well as decision-making algorithms that take human biases and preferences into account.

4. Technological contradictions. "At present, the post-Turing era has arrived, which has fixed technological contradictions in the development of artificial intelligence and is focused on the transition from attempts to simulate human intelligence to the creation of non-human intelligence that complements human intelligence" [23, p. 62].

5. Ecological contradictions: environmental impact. The deployment of AI systems can bring significant environmental benefits [24, 25], but it is also important to take into account the impact of the technology and its associated infrastructure on the environment and to ensure its sustainability in the long term. This requires careful consideration of the environmental impact of AI, as well as policies and programs that promote sustainable and environmentally responsible practices.

6. Technological and regulatory issues, contradictions and challenges [3].

· Lack of explainability and interpretability. Many AI models are considered "black boxes" [3, p. 31], which makes it difficult even for specialists to explain how decisions are made; this can reduce confidence in the technology. Systems with artificial intelligence thus raise important questions about the explainability and interpretability of their functioning and about the interpretation of their decisions and results. This requires the development and implementation of measures to ensure the transparency and clarity of decision-making processes.

· Standardization, regulation, and governance. The rapid pace of AI development has outstripped the ability of governments and regulators to regulate it, producing a "patchwork" of incompatible laws and standards across jurisdictions. The development and implementation of AI requires a reliable regulatory framework that takes into account its ethical, social, and technical implications. This includes clear and effective rules that ensure the responsible use of AI and the protection of individuals and society, as well as a governance framework that encourages collaboration and coordination among the various stakeholders.

· Human control and supervision. The introduction of AI systems raises important questions about human control and supervision, including the need to ensure that AI systems remain under the control of human operators and that their results are monitored by humans. This requires designing artificial intelligence systems to be manageable and subject to human supervision, and taking measures to ensure that their results comply with the values and principles of society.

· Responsibility and accountability. The deployment of AI systems raises important questions about responsibility and accountability, including the need to ensure that AI systems are answerable for their results. This requires designing artificial intelligence systems to be responsible and accountable, and taking measures to ensure that their results comply with the values and principles of society.

· Human accountability. With the development of AI, the need for human responsibility in its deployment and use is growing. This includes questions about who should be held responsible for the actions of AI systems and how to hold individuals and organizations accountable for any damage caused by AI technology.

· Interoperability (functional compatibility). The development of AI systems and platforms by different organizations has fragmented the AI landscape, which raises important questions about interoperability. This includes the need for open standards and protocols that allow different artificial intelligence systems to work together seamlessly.

· Misuse. As AI systems become more capable, the risk of their misuse for malicious purposes, including cyber attacks, propaganda, and manipulation, increases. It is important to address these risks and develop security measures to prevent the misuse of AI technology.

7. Issues of integration and communication.

· Interaction with humans. AI is increasingly integrated into our daily lives, raising questions about how people will interact with the technology and whether it will complement or replace human capabilities. As AI spreads, there is a growing need to understand how people interact with it and it with them, including questions of user experience, interfaces, and opportunities for human-AI collaboration.

· Integration with other technologies. As AI continues to evolve, there is a growing need for its integration with other emerging technologies such as the Internet of Things, blockchain and 5G. This includes the development of AI-based systems that can seamlessly work with other technologies, providing new opportunities and benefits.

· Joint development. AI development is a global challenge that requires collaboration among various organizations, groups, and individuals, including industry, academia, and government. This calls for a collaborative approach that combines expertise from various fields and encourages an open and transparent dialogue about the potential benefits and challenges of the technology.

8. Contradictions introduced by the interdisciplinary approach. The development and deployment of AI requires an interdisciplinary approach that brings together experts from computer science, engineering, the social sciences, and the humanities. This very diversity creates its own problems and contradictions, and it will require interdisciplinary research and educational programs aimed at studying the ethical, social, and technical aspects of AI.

9. Issues of education and healthcare.

· Changes in education. AI-based educational tools and resources can expand access to education, especially in regions where educational opportunities are limited, but they can also destroy what exists: "Others go further and say that ChatGPT and its inevitably smarter successors mean the instant death of traditional education" [21].

· Developments in healthcare. Artificial intelligence systems can be used to improve health outcomes, for example through the development of personalized medicine [26], but this will undoubtedly have a downside. For example, one may expect a decline in the practical skills of medical professionals if it becomes easier to assess a patient's condition "automatically".

10. International contradictions [27, p. 23].

· International cooperation. The development and implementation of AI is a global phenomenon, and international cooperation is required to ensure the fair distribution of its benefits worldwide. This calls for international structures and initiatives that promote cooperation among countries and stakeholders, ensure the responsible and sustainable development and use of AI, develop AI governance frameworks and ethical standards, and build global networks for the exchange of ideas and information.

· International competition. The development and implementation of AI is a source of global competition, as countries and organizations compete to be at the forefront of the technology. At the same time, global cooperation is needed to solve the problems and contradictions associated with AI and to ensure its responsible development and use.

We have not yet gone far in adapting this relatively new technology to our lives (it is still only taking shape), and for this reason, like any "child", it creates a great variety of problems. But this only highlights the existence of contradictions and the need to find solutions. All these contradictions and their solutions must be analyzed through the prism of their impact on the human being, on the possibility of human development and on life principles, which further highlights the role of philosophical anthropology as a possible participant and arbiter.

The importance of interdisciplinary collaboration

AI development is essentially an interdisciplinary problem. The ongoing involvement of philosophy, ethics, and the social sciences only strengthens this thesis. Moreover, all of the above confirms that such cooperation is fundamentally necessary, since only the human- and society-oriented sciences can provide the required benchmark for development, something no engineering science can supply. Here philosophy can play a key role, having ethics, for example, in its arsenal. Ethicists can help identify and solve the ethical problems of AI and ensure that AI systems are developed and used in accordance with our ethical and social values. Moreover, ethicists can help raise public awareness of, and involve members of society in considering, the ethical consequences of the development and implementation of artificial intelligence in our lives.

Social sciences also play an important role in understanding the social and political aspects of AI development. Sociologists, anthropologists, and political scientists can help study the social and cultural implications of AI development, including its impact on employment, privacy, and security. Moreover, sociologists can help identify and eliminate potential social and economic inequalities that may arise as a result of the development and implementation of AI.

Elimination of contradictions in the development of artificial intelligence

Here, of course, an interdisciplinary approach is necessary, one that includes the position of philosophy. Eliminating contradictions in the development of artificial intelligence with the participation of philosophical anthropology means applying its principles and concepts to AI systems at the stage of their design and development.

In the section "Possible issues and contradictions" we have shown in which planes contradictions related to AI can be considered and each of them is a separate spectrum of problems. But in each case, at least two possible directions can be distinguished: from the standpoint of private sciences and from the standpoint of universal (philosophical) issues. The problems of private sciences are not the subject of this article, so we will focus on universal and philosophical. Here it is necessary to start from the nature of man and, consequently, the question "What does it mean to be a man?". From this position, we can identify potential areas of controversy and form the following questions:

· How does the further development of AI affect the formation of our identity, our goals and meanings?

· How to ensure the diversity of human values: racial, national, ethnic, gender, etc.?

· What should be prohibited from "improving" in a person?

· How can we ensure that AI systems are designed in such a way that the rights and interests of all people are respected and preserved?

· Which human qualities can we, or should we, endow artificial intelligence systems with in order to motivate or improve them?

· How can we ensure that AI systems are developed while preserving human autonomy?

· What are the ethical implications of creating AI systems that mimic human behavior and emotions?

...and the list of such questions has no end.

 

The "from the human" approach (essentially, the approach of philosophical anthropology) immediately allows us to localize potential areas of contradiction in the development of AI systems and to work on eliminating them. By designating the human being as the "fulcrum" in the question of contradictions, we automatically set a vector of movement for contradictions that, without this "point", were initially equal. Without the cultural context in which a person exists, for example, no technical solution carries any mark of evaluation in the pairs "good – bad", "successful – unsuccessful", "necessary – unnecessary", and so on. By studying human nature, determining ethical consequences, understanding the cultural context, and considering the ultimate purpose of AI, we can work on creating responsible and ethical AI systems that correspond to our fundamental values and interests.

The future of artificial intelligence development

The future of AI development is likely to be determined by a number of factors, including advances in machine learning and deep learning algorithms, the development of more sophisticated robotics and automation systems, and the growing availability of big data.

The economic benefits that commercial structures, marketers, and political groups derive from processing large volumes of data will not halt the development of AI but rather stimulate it; the process continues and cannot be stopped. It is important, however, that we maintain a critical, balanced, and reasonable position regarding this development.

Assuming the involvement of philosophical anthropologists, one potential direction for the future development of AI is, and will remain, the incorporation of ethical considerations into the design of AI systems. By defining an ethical framework and formulating recommendations and requirements for the development process, we can begin to ensure that AI systems are built and used in accordance with our ethical and social values. Fortunately, large corporations and international political and economic bodies have been practicing this approach in recent years [18, 19, 20].

Another potential direction for the future development of AI is the formation of a theoretical and practical basis for eliminating the notion of the "black box" in discussions of AI, leading to more transparent and explainable AI systems. This can help ensure that AI decisions are not driven by bias or discrimination, and it can contribute to greater public trust and confidence in the technology.

Conclusion

The development of artificial intelligence poses serious challenges to the individual, society, and humanity. At present there is neither an arbiter for these issues nor ready-made practical, or even theoretical, solutions for resolving the contradictions arising in the course of AI development. The contradictions inherent in AI development, such as those between efficiency and responsibility or between control and autonomy, require an interdisciplinary approach combining ideas from philosophy, ethics, and the social and technical sciences. Philosophical anthropology occupies one of the most relevant places in resolving these contradictions, since it approaches every issue from the standpoint of the human being, while being able not only to act as a mentor but also to accumulate new research and material within its own subject area. Philosophical anthropology has the potential to foster the harmonious development of AI and enriches the philosophy of science and technology with its concepts, for which AI is likewise a subject of close study.

As AI continues to evolve, it is imperative that we maintain a critical and balanced position regarding its design, development, and application. We must ensure that AI systems are developed and used in accordance with our ethical and social values, and we must keep the assessment of AI's potential impact on humans under constant review.

An approach to the study and development of artificial intelligence from the standpoint of philosophical anthropology helps people understand and interpret the degree of AI's influence on their own being, on society, and on the state. Philosophical anthropology can help articulate potential problems, evaluate the paths of AI development in terms of how they change human existence, and correct that direction at the right time. It is therefore very important that technical specialists, scientists from different fields of knowledge, and philosophers participate in an open dialogue about the development and application of AI, now and in the future.

References
1. Mittelstadt B. Principles alone cannot guarantee ethical AI // Nature Machine Intelligence. 2019. Vol. 1. P. 501-507. DOI: 10.1038/s42256-019-0114-4
2. Afanasevskaya A.A. Legal status of artificial intelligence // Bulletin of the Saratov State Law Academy. 2021. No. 4 (141). P. 88-92. DOI: 10.24412/2227-7315-2021-4-88-92
3. Preparing for the Future of Artificial Intelligence. Washington, DC: Executive Office of the President, National Science and Technology Council, Committee on Technology, 2016. URL: https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf (accessed 14.03.2023)
4. Shatkin M.A. The agency of digital platforms: a value approach // Bulletin of the Saratov University. New Series. Series: Philosophy. Psychology. Pedagogy. 2022. No. 3. P. 293-297. DOI: 10.18500/1819-7671-2022-22-3-293-297
5. Leshchev S.V. Artificial-intellectual agency in the space of the humanitarian dimension // Modern Problems of the Humanities and Social Sciences. 2021. P. 65-68.
6. Mertsalov A.V. Agency, personal identity and moral responsibility // Bulletin of Moscow University. Series 7: Philosophy. 2022. No. 5. P. 72-90.
7. Gluzdov D.V. Philosophical and anthropological grounds for the interaction of artificial and natural intelligence // Bulletin of Minin University. 2022. Vol. 10, No. 4. P. 15. DOI: 10.26795/2307-1281-2022-10-4-15
8. Yakovleva E.V., Isakova N.V. Artificial intelligence as a modern philosophical problem: an analytical review // Humanities and Social Sciences. 2021. No. 6. P. 30-35. DOI: 10.18522/2070-1403-2021-89-6-30-35
9. Roche C., Wall P.J., Lewis D. Ethics and diversity in artificial intelligence policies, strategies // AI and Ethics. Springer Nature, 2022. DOI: 10.1007/s43681-022-00218-9
10. Smirnov S.A. The place of man in the anthropology of the future // Man as an Open Integrity: Monograph / Ed. L.P. Kiyashchenko, T.A. Sidorova. Novosibirsk: Academizdat, 2022. P. 54-62. DOI: 10.24412/cl-36976-2022-1-54-62
11. Thomas S. AI is the end of writing. The computers will soon be here to do it better // The Spectator. 11 March 2023. URL: https://www.spectator.co.uk/article/ai-is-the-end-of-writing/ (accessed 14.03.2023)
12. Jones M. Is ethical risk getting the better of artificial intelligence? // TechHQ. 2 February 2021. URL: https://techhq.com/2021/02/is-ethical-risk-getting-the-better-of-artificial-intelligence/ (accessed 14.03.2023)
13. Tsurkan D.A. The problem of human constitution and personal self-determination in the digital era of risk: Cand. Sci. (Philosophy) dissertation. Kursk, 2020. URL: https://cloud.kursksu.ru/kursksu.ru/pages/2020/December/9/5HhGA8Yy.pdf (accessed 14.03.2023)
14. Kwame N.E., Cobbina S.J., Attafuah E.E., Opoku E., Gyan M.A. Environmental sustainability technologies in biodiversity, energy, transportation and water management using artificial intelligence: A systematic review // Sustainable Futures. 2022. Vol. 4. URL: https://www.sciencedirect.com/science/article/pii/S2666188822000053 (accessed 14.03.2023). DOI: 10.1016/j.sftr.2022.100068
15. Gorodnova N.V. The use of artificial intelligence in "Smart ecology" projects // Discussion. 2021. No. 2-3 (105-106). DOI: 10.24411/2077-7639-2019-10094
16. Abduganieva Sh.Kh., Nikonorova M.L. Digital solutions in medicine // Crimean Journal of Experimental and Clinical Medicine. 2022. Vol. 12, No. 2. P. 75-83. DOI: 10.37279/2224-6444-2022-12-2-73-85
17. Korobkov A.D. The impact of artificial intelligence technologies on international relations // MGIMO Review of International Relations. 2021. P. 1-25. DOI: 10.24833/2071-8160-2021-olf1
18. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. URL: https://standards.ieee.org/industry-connections/ec/autonomous-systems.html (accessed 14.03.2023)
19. UNESCO Recommendation on the Ethics of Artificial Intelligence. 23 November 2021. URL: https://unesdoc.unesco.org/ark:/48223/pf0000381137_eng (accessed 14.03.2023)
20. Pullella P., Dastin J. Vatican joins IBM, Microsoft to call for facial recognition regulation // Reuters. 28 February 2020. URL: https://www.reuters.com/article/us-vatican-artificial-intelligence/vatican-joins-ibm-microsoft-to-call-for-facial-recognition-regulation-idUSKCN20M0Z1 (accessed 14.03.2023)

First Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.
The list of publisher reviewers can be found here.

The reviewed article examines the impact of the development of artificial intelligence on social and interpersonal relations, as well as the possible negative consequences of these processes for human security and the preservation of moral values. It cannot be said that this issue is insufficiently discussed in contemporary Russian literature; nevertheless, the author has managed to find aspects of the topic that his article discloses fully and in detail. The author believes that the initial and most acute contradiction lies between the desire of government agencies and business to increase labor efficiency and the need to maintain a proper level of responsibility among those who develop and use technical devices based on artificial intelligence. Further, the author, I think, justifiably defends the view that it is philosophical anthropology that can play an important role in identifying and resolving the contradictions arising in the development of artificial intelligence: "By studying the nature and meanings of man," he writes, "philosophical anthropology can help us better understand the ethical and social consequences of the development of artificial intelligence." One cannot but agree with the author on the importance he attaches to the "social aspect" of the problem: "One of the key problems in eliminating contradictions in the development of AI is the need to raise awareness and public participation." Nevertheless, despite a generally rather high assessment of the article, it contains many shortcomings that prevent a decision to publish it in its current form.
First of all, the article can be significantly shortened (which would give the presentation additional semantic density) by eliminating fragments in which the author offers the reader only well-known statements (for example, what philosophical anthropology is and what place this discipline occupies in philosophical knowledge; similar remarks about "philosophical anthropology" or "artificial intelligence" are also repeated throughout the text). The relevance, subject, and significance of the topic could also be described in a less "formal" way. Within the framework of a journal article, emphasis should be placed on the conceptual rigor with which the topic is justified and the posed problems are solved, rather than on compliance with the "academic" requirements customary in dissertation research and the abstracts reflecting their content and structure. Against this background, the detailed literature review seems all the more redundant: it attests to the author's erudition but does little to help the reader grasp the essence of the problem. There are also stylistic lapses, for example: "Different reasons stimulated the development of artificial intelligence, etc." Apparently a different word form was intended here, and "stimulated" is an expression from the vocabulary of "modern managers" that is clearly out of place in scholarly prose. Or: "Can we assume that..." Consider it "undoubted"? But what does "can be considered" mean? Or the beginning of another sentence: "Scientists from philosophical anthropology also investigate..." Why "from" philosophical anthropology? In places there are also superfluous commas, apparently placed by the author in accordance with the intonation of Russian speech, for example: "...the formation of theory in the last century and the beginning of the accelerated development of artificial intelligence in this century, has become one of the most...". There are also typos (the example given is a misspelling in the Russian original that the translation cannot reproduce). The elimination of such shortcomings is necessary to bring the text to a state that meets the requirements of a modern scientific publication. I recommend sending the article for revision.

Second Peer Review


Review of the article "Philosophical and anthropological analysis of contradictions in the development of artificial intelligence". The article, submitted by the author to the journal "Philosophical Thought", touches on an urgent topic that has recently aroused interest and heated discussion in various scientific circles. The stated purpose is an analysis of the contradictions in the development of artificial intelligence from the standpoint of philosophical anthropology. The author sees the relevance of the topic in the fact that today there is practically no proper control "over the development of artificial intelligence" (apparently referring to government structures): there is no established legislative framework, and the contradictions in this area are, on the whole, not considered. The author argues, in my opinion entirely without justification, that this issue is not being studied by modern scientists specifically from the standpoint of philosophical anthropology (is it unclear whether domestic or Western scientists are meant?). In fact, over the past few years the volume of research on the problem of human interaction with artificial intelligence in the near future has grown many times over; an example is the author's own list of references, which runs to 39 items! One of the tasks the author sets in the work concerns the classification of existing material in the field of the development and widespread implementation of artificial intelligence in modern practice. It should be noted that the author has managed to collect extensive material on the topic and to organize it.
In the introductory part of the study, the author gives a rather voluminous review of thinkers who in one way or another address the topic of artificial intelligence, but claims that all of them are insufficiently deep and touch on too narrow a range of issues. Here one would like to object that working on a specific issue is precisely a mark of professionalism. The article contains a very brief overview of research (both domestic and foreign) on the topic, but it is presented extremely strangely for a scientific article, namely as a simple enumeration (even the abstract gives a more detailed description). Listing such an extensive series of names without references to sources looks dubious. The stated research methodology occupies a separate place in the work; it is described quite broadly but is difficult to discern in the text itself. This applies especially to hermeneutics, phenomenological analysis, and the historical and dialectical methods. It is unclear why the author included the hypothetico-deductive method: what exactly did he mean by this? In my opinion, the author summarizes the material, classifies certain points, and conducts a partial comparative analysis. The title of the article corresponds to its content. The novelty of the research is not obvious, since the article is of a survey nature; the author needs to identify the novelty of the work specifically. Considering the main contradiction, the author notes genuinely important points, for example: "One of the key sources of contradictions in the development of artificial intelligence is the contradiction between the desire to increase efficiency and the need for ethical and social responsibility." However, scientists and writers were already writing about this in the 20th century, long before the current technological breakthroughs.
The author emphasizes that "philosophical anthropology can help us better understand the ethical and social consequences of the development of artificial intelligence." There are incoherent sentences in the text, so the article needs to be reread carefully. For example, the author writes: "artificial intelligence systems are designed, for example, for a quick reaction to the environment, could have the opportunity to adapt..."; "When determining the subject of research, the task was set to identify the role of philosophical anthropology, which should be formulated"(?); "AI poses to a person and the possibility of choosing different directions in his development but also the lack of regulatory principles"(?); "The presence of different approaches? (such as systems, neural networks, evolutionary algorithms, and deep learning)". And there are many such sentences in the article! The necessary references are given in the text. The bibliography reflects the research material and is formatted in accordance with the requirements. The article is structured, but the character and style of presentation need improvement. The text is written, from my point of view, in "heavy" language (many phrases have to be reread several times because of their faulty construction). In places it is difficult to grasp the logic and sequence of the author's thought. There is often a sense of a retelling of individual ideas from a wide variety of researchers dealing with this problem, while references and direct citations are absent. Although the bibliography reflects the research material as a whole, it includes a number of sources not entirely appropriately. I do not think that newspaper articles belong in the bibliography, for example: "ROC: Strategies for the development of artificial intelligence need ethical regulations // Rossiyskaya Gazeta. 2021. [Electronic resource]".
A full-fledged conclusion and findings on the topic of the study were sorely lacking. Summing up, the author points to the need to focus on the problem of "contradictions between efficiency and responsibility, control and autonomy", which should be considered from the perspective of an interdisciplinary approach. The author then draws the obvious conclusion that "philosophical anthropology has the potential for the harmonious development of AI, enriches the philosophy of science and technology with its concepts." No specific conclusions could be found in the closing section, although the potential for them is present in the content itself. Despite the comments made, the topic, in my opinion, has good prospects and can interest a wide audience if developed in the right way, which I sincerely wish the author. The article may be of interest to specialists in philosophical anthropology and the philosophy of science. Thus, the article "Philosophical and anthropological analysis of contradictions in the development of artificial intelligence" can be recommended for publication provided it is revised, the above comments are addressed, and a more detailed conclusion is supplied.

Third Peer Review


This article is devoted to an interesting and relevant modern topic that remains in the constant focus of researchers of human cognitive abilities in various fields. "Intelligence" is a concept whose content has long been so diverse that without clarifying it one can neither formulate the goals and objectives of research nor correctly interpret and understand its results. The concept is used not only by psychologists but also by sociologists, educators, and philosophers, and it has found its place in cybernetics. In everyday life, when we speak of intelligence in the most general sense, we mean consciousness and the ability to think. Intelligence was long considered a special property of the human being, something possessed by neither animals nor things, and as a result it was defined so as to highlight the specifically human. At the same time, intelligence was most often considered through such concepts as "reason", "understanding", and "common sense", which still raises doubts about the identity or difference of these concepts. Indeed, the etymology of the word "intellect" goes back to the Latin "intellectus", meaning "perception", "sensation", "understanding", "meaning", "reason". It is worth noting that all these meanings relate to conscious experience, to what is difficult to observe from the outside, a kind of inner "life", an internal process. This is why for a long time people noticed the properties of intelligence only in themselves. In the 20th century, however, the concept of intelligence was expanded with new meanings; in particular, it came to be applied to animals, whose intelligence began to be studied. Today, intelligence is associated with a set of abilities by which a being possessing it can be distinguished.
Among these abilities are the ability to reason logically, to think abstractly (about abstract concepts), to plan and predict one's actions, to adapt flexibly to the surrounding world, and to learn, that is, to accumulate experience and use it in further action. The author poses a problem and identifies the areas most directly related to the questions of modern human existence. The "locomotive" of widespread digitalization, in which artificial intelligence serves as one of the advanced fuels, can no longer be stopped; but the means of controlling this locomotive, its direction and speed, must be found. The directions in the development of artificial intelligence highlighted in the article are not chosen by chance. The dichotomies that are part of the essence of the human being push him to seek options for accelerating his development and preserving his existence, with no clear understanding of the consequences. Against this background, the author sees and predicts that systems with elements of artificial intelligence will limit a person's ability to choose the vector of his development independently, diminishing his volitional and mental qualities, leveling possible freedom, and dissolving identity in their hidden or even uninterpretable goals. And if these are not the goals of commercial corporations, then the failure to grasp the possibility of dialogue between natural and artificial intelligences, should the latter develop seriously, once again indicates an unwillingness to recognize, understand, and analyze these very goals. What kind of scientist is capable of being involved in identifying, formulating, and participating in the resolution of the emerging questions? Undoubtedly, modern science (including technical science) increasingly separates itself from the sphere of use (that is, the social implementation) of its achievements.
The researcher, as it were, sells the result of his work as a product on the market. And if, from the consumer's point of view, there are examinations that check products for compliance with technical standards and regulations, then who is able to control and limit products that affect essential human qualities? In the age of digitalization and digital transformation the presence of such expertise can be considered all the more important, especially when, with the accelerating pace of life, the foreground is increasingly occupied by tasks of operational optimization, the optimization of the functioning of a device, program, or system, rather than by human questions. The work analyzes various points of view on the issue, addresses the arguments of opponents, and rests on a large bibliographic array of both domestic and foreign research literature. The text will be of interest to a certain part of the journal's audience.