"Poletayev Readings X: Future(s) of Theories" took place.
The first section, “Do We Still Need Scientific Theory in the Time of Big Data and AI?,” was dedicated to big data, automation, AI, and the future of the humanities. According to a widespread view, machine analysis and processing of data make theoretical thinking obsolete. The speakers represented different positions, ranging from optimism and enthusiasm to skepticism grounded in philosophy and the history of science.
The discussion was opened by Ivan Yamshchikov (Yandex, HSE). In his view, a theory is a mechanism for ‘compressing’ information into a form convenient for a human. Logical inferences within a theory are always probabilistic in nature and therefore do not guarantee correct conclusions. This entails an insurmountable incompleteness of all theoretical knowledge, from which computational methods can save us. The computer technologies available today can process colossal arrays of data at ultra-high speed and allow us to find patterns and regularities that a human cannot recognize. In this sense, computers can significantly advance scientific knowledge. Ivan Yamshchikov’s conclusion is optimistic: a breakthrough in science is likely because machine learning and automatic data processing are already actively and productively used in research. As for the humanities and their future, he holds that quantification will free us from ideological attachments and finally open the possibility of building a ‘scientific’ humanities.
Alexey Grinbaum (Paris-Saclay) does not share the optimism of representatives of the IT industry. Machine information processing works with syntax, not semantics, meaning that AI is not yet capable of understanding. Nevertheless, machines are capable of violating the expectations of engineers and developers. In facial recognition technologies, for example, the machine identifies the relevant properties and features of images and forms stable connections between them, creating the formal description of visual information necessary for recognizing individual faces. At the same time, the meaning of these parameters remains opaque both to the machine and to its creators. Moreover, the patterns and correlations detected during data processing are not necessarily laws in the sense of universal natural-scientific laws: correlation is not causation. On this basis, Alexey Grinbaum argues that AI is the new alchemy. Even if the basic assumptions of big data research are flawed, it can still give a great deal to the development of science and society. The problem of the non-interpretability of machine behavior remains, however, and it will only become more acute in the future.
Yves Gingras (University of Quebec at Montreal) also takes a skeptical attitude towards machine learning but focuses on earlier historical precedents for today’s rapid growth of interest in AI. In his view, behind the remarkable fascination with computational methods in the natural sciences and the humanities lies the positivism of the first half of the nineteenth century. Researchers believe that by creating a formal, quantitative model of a phenomenon, they can discover hidden patterns that a human with limited cognitive abilities cannot identify. This belief can be found in various modifications across the sciences, but today its most influential form is the idea that AI will displace theoretical thinking. Philosophically, this position is naive because it seeks to reduce the phenomena humans observe to statistical patterns. Yves Gingras agreed that quantitative methods are essential for understanding the hidden mechanisms behind phenomena but pointed out that quantitative measurement alone is insufficient for fully understanding them. A historical analogy is the ‘magnetic crusade’ undertaken in the middle of the nineteenth century by the British Royal Navy. Scientists of that time were looking for connections between electromagnetism and changing weather conditions; establishing such regularities would have made it possible to organize shipping and maritime navigation better. They made enormous efforts, organizing expeditions to both hemispheres of the Earth and opening observatories throughout the British Empire in search of quantifiable cycles and regularities that would reveal similarities between the solar activity cycle and the geomagnetic cycle of the Earth. Yves Gingras concluded that the fascination with big data risks repeating the same mistake and urged caution in assessing the capabilities of machines.
The second section of the conference was titled “Public History of the Digital Age” and included presentations by Serge Noiret (European University Institute), Mykola Makhortykh (University of Bern), and Andrei Zavadski (Humboldt University of Berlin).
Serge Noiret focused on the main trends in digital public history. The idea of shared authority was one of the key points of his presentation. Noiret noted the significance of collaboration in recent public history projects, which are based on crowdsourcing driven by digital technologies. The speaker also noted the glocal nature of contemporary public history: local public history projects worldwide use similar interdisciplinary methods and respond to similar public demands. He pointed to endeavors that help write history from below, since authority does not belong exclusively to certified historians.
Mykola Makhortykh suggested looking at the algorithms of Internet platforms as nonhuman actors of public history that determine what information people receive. Drawing on empirical research on search engine algorithms, he pointed out that algorithms control access to historical content, giving priority to specific sites and topics or surfacing irrelevant content. Makhortykh raised the question of whether public historians can and should manage algorithmic agents, and if so, what criteria should be used to optimize the processing of the abundant historical content.
Andrei Zavadski agreed with Mykola Makhortykh that the algorithms should be optimized. He addressed the issue of information bubbles in the digital space that separate different publics, their memories, and their versions of the past from each other. Zavadski suggested that it is necessary to build communication between these publics to achieve unity: it is not the past itself that unites people but the dialogue about its different versions. Speaking of public historians, he noted that they need to adapt to new forms of communication and reach a younger audience. In this regard, the Russian master’s programs in public history should be revised to make them more practice-oriented.
Serge Noiret started the discussion with a comment about the importance of the quality of content rather than its quantity. The task of the public historian today is not only to resolve conflicts between different memories but also to counteract the fake history that algorithms cannot recognize. Mykola Makhortykh argued that the sheer quantity of data makes it challenging to filter quality content and to develop universal criteria for doing so.
The comments and questions that followed developed this theme. Alisa Maximova noted that one should distinguish between large media platforms that aggregate content (like YouTube) and independent websites with their own agendas and principles of information selection. Kirill Molotov gave the example of neo-Stalinists on TikTok who can ‘hack’ the algorithm to promote their views on Soviet history; public historians could take advantage of the same opportunity in working with the media. Boris Stepanov, pointing to the conference’s title, emphasized that the future of theory is also essential for public history. What approaches can public historians employ to make sense of digital phenomena? Is it productive to use Habermas’s theory of the public sphere, and how should the public and the audience be understood now?
During the discussion, Mykola Makhortykh claimed that public sphere theory does not shed light on the problem of information inequality and the impact of nonhuman actors. Some people understand how to ‘hack’ the algorithm, but others have no access to information management. The problem could be addressed through structural changes that make algorithms more transparent: public historians should press the idea of transparency on platform developers in order to level out information inequalities.
According to Andrei Zavadski, however, public history has no theory of its own because it is a post-disciplinary project. Modern theoretical trends, such as postcolonial studies, queer studies, and communication studies, influence public history, as do changes in audience demands. Yet public historians pay recklessly little attention to direct communication with the public. In addition, Serge Noiret noted that the difference between a public and an ‘academic’ historian lies in the skills of following the audience and addressing it through different media. Ensuring public and equal access to the past is, in fact, a question of fostering active citizen participation in the public sphere.
The last section, “New Perspectives on Academic Ethics: Bridging the Gap between Contemporary Practices, the Social Sciences, and History,” focused on academic ethics from the point of view of academic virtues, interaction with technology, and the interplay of private and collective interests in solving ethical problems.
The section began with Mario Biagioli’s (University of California, Los Angeles) presentation on scientometric indexes and their role in evaluating academic performance. After a publication comes out, it takes a long time before its actual citation impact can be calculated. Still, the process can be sped up with estimates based on the journal’s existing impact factors and other indirect evidence. The main resource here is something that does not yet exist, namely projected citations. The impact factor is embedded not in the content (the text itself) but in the article’s metadata. Metadata has some of the properties of money: it is not valuable in itself, but value is “imprinted” on it through complicated and non-obvious procedures. Contemporary forms of academic misconduct usually involve metadata rather than publications themselves. Instead of plagiarizing or fabricating and falsifying data, scholars nowadays manipulate bibliometric indicators to simulate good performance.
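As background for why such projections must rely on journal-level metadata (a standard definition, not part of the talk as reported), the two-year journal impact factor for year $Y$ is computed as

$$\mathrm{JIF}_Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}.$$

A freshly published article therefore has no measured impact of its own for some time; any early estimate has to be projected from the metric already attached to the journal and from other indirect evidence.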
In his talk, Ben (Arthur Benoit) Eklof (Indiana University Bloomington) described the structural transformation of American universities from the 1970s to the 2010s and its influence on the ethical principles of different generations of professors. American universities underwent corporatization and privatization. Because of these changes, private and collective interests no longer coincide, and the pursuit of one’s career comes into conflict with the ethical standards that guide relations with colleagues, students, and the university as a whole. As a result, professors tend to concentrate on their research at the expense of other duties and assignments.
Thomas Stapleford (University of Notre Dame) described certain limitations of conceptualizing academic ethics as a set of norms that separate us from the immoral. The speaker suggested employing A. MacIntyre’s interpretation of Aristotelianism as the critical element of academic ethics. The way scholars usually describe their biographies and their relations with colleagues in interviews and memoirs is reminiscent of MacIntyre’s conception of the narrative unity of the self. A focus on virtues would make it possible to present ethics as a practice in which human beings analyze and discuss actions, and to replace the negative definition of ethics through various prohibitions with a conception of excellence.
In the final discussion, the temporality of metrics was one of the main topics. Alexey Pleshkov (HSE) emphasized that although in practice the reaction always followed the text, the metrics were based on formal and timeless relations between the text and its citations. Another key topic of discussion was virtue ethics in academia. Andrei Ilyin (HSE) pointed out that Aristotelian ‘excellence’ could be compared to the ‘excellence’ described by Bill Readings as a vague aim of management in the modern university. Ilya Guryanov raised the issue of the development of virtues from the perspective of Aristotelian ethics and of the role of education and of particular people who can facilitate this process. Alexey Pleshkov continued the topic of ancient ethics and virtues, noting that it could be useful to approach Aristotelian ethics through M. Nussbaum’s concept of virtue. This approach allows us to discover more sustainable practices and patterns of interaction related to academic benefits and goals.