<?xml version="1.0" encoding="utf-8"?>
<journal>
  <titleid>75447</titleid>
  <issn>2712-9934</issn>
  <journalInfo lang="ENG">
    <title>Technology and Language</title>
  </journalInfo>
  <issue>
    <volume>5</volume>
    <number>2</number>
    <altNumber>15</altNumber>
    <dateUni>2024</dateUni>
    <pages>1-177</pages>
    <articles>
      <article>
        <artType>EDI</artType>
        <langPubl>RUS</langPubl>
        <pages>1-10</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0003-2506-2374</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Perm State University</orgName>
              <surname>Seredkina</surname>
              <initials>Elena</initials>
              <address>Perm, Russia</address>
            </individInfo>
          </author>
          <author num="002">
            <authorCodes>
              <orcid>0000-0002-5785-7553</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Renmin University of China</orgName>
              <surname>Liu</surname>
              <initials>Yongmou</initials>
              <address>Beijing, China</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">ChatGPT and the Voices of Reason, Responsibility, and Regulation</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">ChatGPT, a large language model (LLM) by OpenAI, is expected to have a transformative impact on many aspects of society. There is much discussion in the media and a rapidly growing academic debate about its benefits and ethical risks. This article explores the profound influence of Socratic dialogue on Western and non-Western thought, emphasizing its role in the pursuit of truth through active thinking and dialectics. Unlike Socratic dialogue, ChatGPT generates plausible-sounding answers based on pre-trained data, lacking the pursuit of objective truth, personal experience, intuition, and empathy. The LLM’s responses are limited by its training dataset and algorithms, which can perpetuate biases or misinformation. While a true dialogue is a creative, philosophical exchange filled with ontological, ethical, and existential meanings, interactions with ChatGPT are characterized as interactive data processing. But is this really true? Perhaps we are underestimating the evolutionary growth potential of large language models? These questions have important implications for theoretical debates in cognitive science, changing our understanding of what cognition means in artificial and natural intelligence. This special issue examines ChatGPT as a subject of philosophical analysis from a position of cautious optimism and rather harsh criticism. It includes six articles covering a wide range of topics. The first group of researchers emphasizes that machine understanding and communication match human practice. Others argue that AI cannot reach human levels of intelligence because it lacks conceptual thinking and the ability to create. Such contradictory interpretations only confirm the complexity and ambiguity of the phenomenon.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.01</doi>
          <udk>1:004.8</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>ChatGPT</keyword>
            <keyword>Artificial Intelligence</keyword>
            <keyword>Large language model</keyword>
            <keyword>Dialogue</keyword>
            <keyword>AI Ethics Code</keyword>
            <keyword>Responsibility</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.1/</furl>
          <file>1-10.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>11-25</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <scopusid>57191037928</scopusid>
              <orcid>0000-0002-9256-4342</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Institute of Philosophy, Russian Academy of Sciences</orgName>
              <surname>Arshinov</surname>
              <initials>Vladimir</initials>
              <address>Moscow, Russia</address>
            </individInfo>
          </author>
          <author num="002">
            <authorCodes>
              <orcid>0009-0003-9571-9260</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Arteus LLM Artificial Intelligence Lab</orgName>
              <surname>Yanukovich</surname>
              <initials>Maxim</initials>
              <address>Limassol, Cyprus</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Neural Networks as Embodied Observers of Complexity: An Enactive Approach</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">This article explores a conceptual framework for understanding neural networks through the lens of the enactivist paradigm, a philosophical theory that posits that cognition arises from the dynamic interaction of an organism with its environment. We explore how neural networks, as complex adaptive systems, transcend their traditional role as computational machines and become active participants in their data-rich environment, evolving through continuous feedback and adaptation. Drawing parallels with biological systems, we argue that artificial neural networks exhibit what enactivists call “structural coupling” – symbiotic co-evolution with their information ecosystems. From this perspective, knowledge is not passively processed but actively constructed through repetitive interactions, each of which shapes the internal state of the system in a self-organizing manner similar to the sensorimotor activity of natural organisms. This approach goes beyond classical computational theories by emphasizing that machine cognition resembles human-like cognitive processes, an emergent form of “world creation.” Our analysis shows that these artificial entities have focal points, or internal observers, associated with patterns learned during training, suggesting that neural networks shape worldviews through active participation rather than passive observation. The paper reconceptualizes machine learning models as cognitive agents that bring new forms to our understanding of cognition and signals an epistemological shift in which knowledge itself is seen as participation and creation mediated by technologically complex but organically similar structures. This has important implications for both technical applications and theoretical debates in cognitive science, potentially changing the way we think about what cognition means in artificial and natural intelligence.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.02</doi>
          <udk>1:004.8</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Enactivism</keyword>
            <keyword>Neural Networks</keyword>
            <keyword>Complexity Observer</keyword>
            <keyword>Structural Coupling</keyword>
            <keyword>Cognitive Science</keyword>
            <keyword>Embodied Cognition</keyword>
            <keyword>Consciousness</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.2/</furl>
          <file>11-25.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>26-39</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0002-3116-437X</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Institute of Philosophy, Russian Academy of Sciences</orgName>
              <surname>Shalack</surname>
              <initials>Vladimir</initials>
              <address>Moscow, Russia</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Exposing Illusions – The Limits of AI by the Example of ChatGPT</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">The article critically analyzes modern developments in the field of artificial intelligence using the example of the ChatGPT program created by OpenAI. The idea of creating AI was expressed already in 1950 by Alan Turing, who also proposed a test, the passing of which would allow us to assert that an AI was created. Defining the concept of AI faces difficulties. According to the point of view adopted here, the so-called intellectual activities allowed Homo sapiens to stand out against the surrounding animal world. With intellectual activity one no longer relies on strength and speed of movement alone. Pattern recognition, self-learning, and purposefulness of activity are not characteristic features of intelligence. The main type of human activity that is specific to humans and which – when added to pattern recognition, self-learning and purposeful activity – makes them intelligent, is conceptual thinking, namely the ability to represent things in language and use them in reasoning. Historically, there have been two main competing approaches to AI – logical and neural networks. One of the serious flaws of the neural network approach is its inability to explain the course of reasoning that leads to a particular conclusion, which makes it difficult to verify its correctness. Specific examples show that ChatGPT is not able to correctly model the simplest conceptual reasoning. The reason for this lies in fundamental limitations of the underlying large language model that cannot be corrected by additional training. Another disadvantage of ChatGPT is its susceptibility to neurohacking – forcing the user to make the necessary decisions during the dialogue. This is a serious threat to the widespread use of neural networks for decision-making in the field of management. The paper is based on research conducted in the summer of 2023.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.03</doi>
          <udk>1:004.8</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Artificial intelligence</keyword>
            <keyword>Pattern recognition</keyword>
            <keyword>Pattern search</keyword>
            <keyword>Neural network</keyword>
            <keyword>ChatGPT</keyword>
            <keyword>large language model</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.3/</furl>
          <file>26-39.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>40-56</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0003-4912-4912</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Universidad Nacional Autónoma de México</orgName>
              <surname>Perez Leon</surname>
              <initials>Rebeca</initials>
              <address>Estado de México, Mexico</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Do Language Models Communicate? Communicative Intent and Reference from a Derridean Perspective</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">This paper assesses the arguments of Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell in the influential article “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” These arguments disputed that Language Models (LMs) can communicate and understand. In particular, I discuss the argument that LMs cannot communicate because their linguistic productions lack communicative intent and are not based on the real world or a model of the real world, which the authors regard as conditions for the possibility of communication and understanding. I argue that the authors’ view of communication and understanding is too restrictive and cannot account for a vast range of instances of communication, not only human-to-human communication but also communication between humans and other entities. More concretely, I maintain that communicative intent is a possible but not necessary condition for communication and understanding, as it is oftentimes absent or unreliable. Communication need not be grounded in the real world in the sense of needing to refer to objects or states of affairs in the real world, because communication can very well be about hypothetical or unreal worlds and objects. Drawing on Derrida’s philosophy, I elaborate alternative concepts of communication as the transmission of an operation of demotivation and overwhelming of interpretations with differential forces, and of understanding as the best guess or best interpretation. Based on these concepts, the paper argues that LMs could be said to communicate and understand.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.04</doi>
          <udk>1:004.8</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Language Model</keyword>
            <keyword>Stochastic Parrot</keyword>
            <keyword>Communication</keyword>
            <keyword>NLU</keyword>
            <keyword>ChatGPT</keyword>
            <keyword>Derrida</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.4/</furl>
          <file>40-56.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>57-66</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0001-5643-6071</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>National Research University Higher School of Economics</orgName>
              <surname>Kartasheva</surname>
              <initials>Anna</initials>
              <address>Moscow, Russia</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Dialogue as Autocommunication - On Interactions with Large Language Models</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">In a dialogue with large language models (LLMs) there is a coincidence of the addresser and the addressee of the message, so such a dialogue can be called autocommunication. A neural network can only answer a question that has a formulation. The question is formulated by the one who asks it, i.e. a human being. Human activity in dialogue with neural networks provokes thoughts about the nature of such dialogue. Composing prompts is one of the most creative parts of dialogue with neural networks. But it is worth noting that a neural network is often better at composing prompts than a human. Does this mean that humans need to develop their questioning skills? In LLM-based dialogue systems, the main value to the user is the ability to clarify and structure their own thoughts. The structuring of thoughts happens through questioning, through formulating and clarifying questions. Asking the right question is practically answering that question. Thus, thanks to autocommunication, the development, transformation, and restructuring of the human "I" itself takes place. Dialogue with large language models acts as a discursive practice that allows people to formulate their own thoughts and transform their self through autocommunication. It is worth noting that for this kind of dialogue, a certain image of the audience is normative or determinative of the material that can be produced in response to a given question. This is because the data for model training is provided by people, even if they do not and have never thought about it. Thus, a dialogic relationship develops between the generated text and the questioning audience that develops all participants in the communication.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.05</doi>
          <udk>1:004.8</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Large Language Models</keyword>
            <keyword>Autocommunication</keyword>
            <keyword>Artificial intelligence</keyword>
            <keyword>Authorship</keyword>
            <keyword>Communication</keyword>
            <keyword>Dialogue</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.5/</furl>
          <file>57-66.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>67-79</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0001-7358-6151</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Perm State University</orgName>
              <surname>Vnutskikh</surname>
              <initials>Alexander</initials>
              <address>Perm, Russia</address>
            </individInfo>
          </author>
          <author num="002">
            <authorCodes>
              <orcid>0000-0003-4162-1033</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Perm State University</orgName>
              <surname>Komarov</surname>
              <initials>Sergey</initials>
              <address>Perm, Russia</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Lebenswelt, Digital Phenomenology, and the Modification of Human Intelligence</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">The development of contemporary digital technologies leads to a profound modification of human intelligence. The authors assume that this modification should be studied by means of a special kind of phenomenology. It is digital phenomenology which examines the structures of consciousness of the modern technogenic subject. This builds on their previous works where the authors have already discussed a theory of the transformation of human intelligence driven by digital technologies. The influence of these technologies results in virtualization of affect. Affect becomes detached from its local manifestation in the human body and is manifested in material and energetic processes in digital infrastructure. As a result, space and time, categories of reason, and productive imagination become aspects of mobile devices and digital infrastructure. The aim of this contribution is to discuss the possibilities of digital phenomenology in the study of communication of the technogenic subject. Methodologically, the study refers to the phenomenological approach. The archetypes of classical intelligence and of technogenic subjectivity, which define the content of communication, are compared. The authors suggest that consciousness as a pure orientation can undergo digital modification, as the world of primordial objects is discovered through corporeal experience. A modern human body is not constituted within the boundaries of direct sensual experience but perceives digital devices as body organs. The peculiarities of the language of these devices determine human linguistic practices as well. So we can see non-human intelligence and non-human communication. Both intelligence and communication are becoming increasingly artificial. The prospect of further in-depth research in the digital humanities is outlined.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.06</doi>
          <udk>001: 801.73</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Lebenswelt</keyword>
            <keyword>Digital technologies</keyword>
            <keyword>Digital modification of human intelligence and communication</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.6/</furl>
          <file>67-79.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>80-99</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0003-2230-311X</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <surname>Alekseev</surname>
              <initials>Andrey</initials>
            </individInfo>
          </author>
          <author num="002">
            <authorCodes>
              <orcid>0000-0002-0006-5942</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>State Academic University of Humanities</orgName>
              <surname>Alekseeva</surname>
              <initials>Ekaterina</initials>
              <address>Moscow, Russia</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">GPT Assistants and the Challenge of Personological Functionalism</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">The paper discusses whether it is correct to speak of “generative artificial intelligence” – a concept that is not within the scope of AI research. The discussion suggests that it is premature to claim that humans are being replaced by GPT assistants such as ChatGPT in the field of sociocultural digital communication. Personological functionalism, which would justify the replacement of people by machines, is based on the psychofunctionalism of Ned Block, who proves the need to psychologize machine functionalism by introducing “meaning” as a criterion for passing the original Turing test. For personological functionalism, in addition to “meaning” the minimum necessary requirements of the Turing test include “creativity.” The paper shows that GPT assistants do not pass this creativity test. To demonstrate the inability to pass a Turing test for meaningfulness, the Block machine of the 1978 and 1981 papers is modified by combining its neurocomputer and symbolic versions. For the now further expanded Block test, the argumentation of previous versions is preserved and strengthened, leading to the conclusion that machines like GPT assistants are not capable of fulfilling either the roles of psychological functionalism or personological functionalism.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.07</doi>
          <udk>1:004.8</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Generative AI</keyword>
            <keyword>Complex Turing test</keyword>
            <keyword>Block test</keyword>
            <keyword>Lovelace test</keyword>
            <keyword>Chinese nation</keyword>
            <keyword>Block's machine</keyword>
            <keyword>Psychofunctionalism</keyword>
            <keyword>Personological functionalism</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.7/</furl>
          <file>80-99.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>101-115</pages>
        <authors>
          <author num="001">
            <individInfo lang="ENG">
              <surname>Leroi-Gourhan</surname>
              <initials>André</initials>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">The Origin and Dissemination of Scientific Knowledge</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">André Leroi-Gourhan (1911-1986) was a French ethnologist, prehistorian and paleoanthropologist who is today also appreciated for his influence on the philosophy of technology. His first publications on L'Homme et la matière and Milieu et techniques (1943, 1945) secured his reputation as a specialist in the study of material civilizations and in comparative technology. This perspective was enriched by evolutionary and anthropological considerations in his best known work, Le geste et la parole (1964, 1965). This book appeared in English as Gesture and Speech in 1993, but not all of his relevant publications have been translated, and several aspects of his technological approach remain little known. The translation here of his March 1952 lecture at the Maison des Sciences in Paris, as part of a lecture series on “The structures of the universe and their scientific perception,” is an opportunity to highlight the interest and relevance of Leroi-Gourhan for contemporary reflections about technology. For example, a jointly haptic and cognitive “material engagement” is for Leroi-Gourhan characteristic of specifically human manufacture, of “materially creative activities” as undertaken by artisans of all times. We can recognize here Leroi-Gourhan's adhesion to Henri Bergson's philosophical tenet regarding the epistemological primacy of action over contemplation, and consequently the active, dynamic, vital origins of knowledge.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.08</doi>
          <udk>001.9</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>André Leroi-Gourhan</keyword>
            <keyword>Technology</keyword>
            <keyword>Rationality</keyword>
            <keyword>Physical and social evolution</keyword>
            <keyword>Prehistoric flintknapping</keyword>
            <keyword>Chaîne opératoire</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.8/</furl>
          <file>101-115.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>116-124</pages>
        <authors>
          <author num="001">
            <individInfo lang="ENG">
              <orgName>University of Technology of Compiègne</orgName>
              <surname>Guchet</surname>
              <initials>Xavier</initials>
              <address>Hauts-de-France, France</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Leroi-Gourhan and the Object of Technology</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">The emphasis on technical artifacts is a hallmark of contemporary philosophy of technology. How can Leroi-Gourhan’s conceptualization of the technical object enrich current discussions among philosophers of technology? This article aims not to exhaustively address this question but to briefly outline how Leroi-Gourhan, as an ethnologist, reconfigures the concept of the technical object inherited from ethnology. The article begins by presenting Leroi-Gourhan's ambition to revisit the central question of ethnology: what is the origin of the division of the human mass into distinct ethnic units called “peoples”, distributed across the globe? According to Leroi-Gourhan, ethnology did not divide humanity at its natural junctures, leading to inaccurate historical conclusions. For him, “peoples” are not fixed and uniform entities defined by constant, specific characteristics. Instead, they arise from the temporary convergence of traits, such as language and technology, which have their own independent existence. These traits may come together at a certain point, but beyond that, they diverge. Ethnology should focus on these traits, not on the “peoples.” In particular, technology serves as a reliable indicator of how the human mass has been divided and dispersed across space and time. However, to draw solid conclusions on this matter, it is essential to approach the extensive technical documentation with a rigorous method of classification and analysis. The article examines this method, leading Leroi-Gourhan to redefine the very concept of the technical object. The article highlights Leroi-Gourhan's focus on the concepts of “fact” and “tendency” in his analysis of technical objects. These objects are viewed both as solutions to general human challenges in transforming matter (representing “tendencies”) and as culturally significant items with varying “degrees of fact.” Thus, Leroi-Gourhan assigned a dual nature to technical objects, but in a way that differs significantly from the current analytical philosophy of technical artifacts.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.09</doi>
          <udk>165</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Ethnology</keyword>
            <keyword>Fact</keyword>
            <keyword>Technology</keyword>
            <keyword>Tendency</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.9/</furl>
          <file>116-124.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>125-135</pages>
        <authors>
          <author num="001">
            <individInfo lang="ENG">
              <orgName>Università degli studi di Padova</orgName>
              <surname>Aurora</surname>
              <address>Padova, Italy</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">“Une véritable syntaxe”. Some Notes on Leroi-Gourhan and Structural Linguistics</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">In what follows, I will try to offer a brief overview of the relationships between some of Leroi-Gourhan’s anthropological insights on technology and some of the fundamental theoretical claims that form the general framework of structural linguistics. This hermeneutical movement runs an evident risk, which needs to be addressed and overcome: the risk of including Leroi-Gourhan’s works in the wide range of the structuralist corpus. For this reason, in the introduction I clarify what I mean by “structuralism” so that, in the subsequent sections, I can try to show the epistemological relationship between Leroi-Gourhan’s ethnology and the “structuralist turn”, as described in the introduction. To this end, I will point out the possible theoretical influence exerted by structural linguistics, and especially by the structural phonology developed within the Prague linguistic circle, on Leroi-Gourhan’s conceptual toolbox. More specifically, in the paper, I will focus on some passages of Le geste et la parole (1964), which I will consider in connection with two earlier, lesser-known texts, namely Origine et diffusion de la connaissance scientifique (1953) and L’homme et la nature, an article published in 1936 in the Encyclopédie française.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.10</doi>
          <udk>81-116: 39</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Structuralism</keyword>
            <keyword>Linguistics</keyword>
            <keyword>Phonology</keyword>
            <keyword>Syntax</keyword>
            <keyword>Technological system</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.10/</furl>
          <file>125-135.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>136-152</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0002-2029-4929</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>École nationale des chartes</orgName>
              <surname>Schlanger</surname>
              <initials>Nathan</initials>
              <address>Paris, France</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Between Endorsement and Disavowal: André Leroi-Gourhan's Russian Interactions</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">Material-based interpretations of everyday undertakings have long been of interest to the French social sciences, including anthropology and history. André Leroi-Gourhan (1911-1986) follows this trend to some extent, insofar as his pioneering contributions to ethnographic and prehistoric technology – from the “elementary forms of human activity,” to studies of stone tool manufacture, to the formulation of the “chaîne opératoire” – shed much light on the more tangible and infrastructural dimensions of human existence. At the same time, his predominantly idealist recourse to evolutionary “tendencies,” “vital thrusts” (élan vital), and suchlike metaphysical notions rather held him at bay from would-be historical and dialectical understandings of primitive socio-economic formations – and this, despite his ready access to and close acquaintance with the professional literature from the other side of the Iron Curtain. Hence the paradox, as outlined here, of Leroi-Gourhan's distant attitude towards the conceptual (historical-materialist) substrate of Russian-cum-Soviet archaeology, on whose practical achievements he nonetheless remained well-informed and appreciative. In turn, this ambivalence may partly explain the rather superficial and incomplete perception of Leroi-Gourhan's works within Soviet archaeology and anthropology, limited to his publications on Prehistoric art and religion while ignoring his broad-ranging contributions to “anthropogenesis.”</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.11</doi>
          <udk>130.2(091)</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>André Leroi-Gourhan</keyword>
            <keyword>S. A. Semenov</keyword>
            <keyword>Palaeolithic archaeology</keyword>
            <keyword>Prehistoric technology</keyword>
            <keyword>Stone tool manufacture</keyword>
            <keyword>“Proletarian archaeology”</keyword>
            <keyword>Planimetric excavations</keyword>
            <keyword>Pincevent</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.11/</furl>
          <file>136-152.pdf</file>
        </files>
      </article>
      <article>
        <artType>RAR</artType>
        <langPubl>RUS</langPubl>
        <pages>154-177</pages>
        <authors>
          <author num="001">
            <authorCodes>
              <orcid>0000-0001-7554-2902</orcid>
            </authorCodes>
            <individInfo lang="ENG">
              <orgName>Technical University of Berlin</orgName>
              <surname>Rammert</surname>
              <initials>Werner</initials>
              <address>Berlin, Germany</address>
            </individInfo>
          </author>
        </authors>
        <artTitles>
          <artTitle lang="ENG">Doing Things with Words and Things: A Social Pragmatist View on the Language–Technology Analogy</artTitle>
        </artTitles>
        <abstracts>
          <abstract lang="ENG">This paper aims to show that the making of technology and the material agency of technical objects can be analyzed analogously to the making of meaning through words and speech acts. It proposes the development of a more comprehensive view of the making and working of technology that connects the social pragmatist approach of technical practice and symbolic interagency (Kant, Dewey, Mead) with the linguistic concept of pragmatics and speech acts (Peirce, Wittgenstein, Austin). Both speech acts and technical acts can be considered as two modes of meaning-making in the social construction of reality. Furthermore, the paper exhibits some parallels between the objectification processes of language and technology. It emphasizes how both evolve from early stages of signs and tools in practical contexts to encoded collections of grammatical rules and technological tools later on. Doing things with concrete things (technology) reveals two different modes of “efficacy” (Jullien): an implicit, experienced efficacy in the language of directed material forces and causes, and an explicit, ascribed efficacy in the language of instituted ends–means relations. The text explores the analogy between language and technology through the lenses of semantics, syntax, pragmatics, and grammar. It emphasizes the importance of such an extended pragmatist/pragmatics approach in the face of new technologies that exhibit a high degree of self-activity, more modes of intra-action between physical and digital objects, and a growing interactivity at interfaces with human actors and environmental factors. These technologies can be more appropriately understood, conceptualized, and also designed as sociotechnical constellations of distributed agencies between people, machines, and programs.</abstract>
        </abstracts>
        <codes>
          <doi>10.48417/technolang.2024.02.12</doi>
          <udk>1:62</udk>
        </codes>
        <keywords>
          <kwdGroup lang="ENG">
            <keyword>Technical practice</keyword>
            <keyword>Material agency</keyword>
            <keyword>Speech acts</keyword>
            <keyword>Linguistic pragmatics</keyword>
            <keyword>Efficacy</keyword>
            <keyword>Meaning</keyword>
            <keyword>Interactivity</keyword>
            <keyword>Digital objects</keyword>
          </kwdGroup>
        </keywords>
        <files>
          <furl>https://soctech.spbstu.ru/article/2024.15.12/</furl>
          <file>154-177.pdf</file>
        </files>
      </article>
    </articles>
  </issue>
</journal>
