Organized by the research group “Trust in information”
Trust is a central pillar of the scientific enterprise. Much work in the philosophy of science can be seen as coping with the problem of establishing trust in a particular theory, a particular model, or even in science as a whole. However, trust in science is threatened by various developments. With the advent of more complex models and the increasing use of computational methods such as machine learning and computer simulation, it seems ever more challenging to establish trust in science. Policy decisions made on the basis of such models (e.g. climate models or, more recently, COVID-19 models) require a high level of trust not only from their users but also from the people affected.

In addition, there are increasingly visible difficulties in communicating scientific practices and results to a wider public. To mention just two points: scientific communication, as well as the scientific handling of non-knowledge, often works differently than communication in everyday life. While dissent is a normal mode of communication within the sciences, seen from the outside it is often perceived as a failure. The enormous degree of agreement among scientists, which forms the basis for such dissent, is then overlooked. The same applies to scientific non-knowledge, which often only becomes possible on the basis of high levels of shared knowledge; thus non-knowledge can, at least temporarily, be considered a success in the sciences. Such differences between scientific and non-scientific communication may explain some of the difficulties regarding the trust issues at hand.

The question arises, however, as to what characterizes an appropriate relationship between trust and science in the first place. Blind trust in science is not a reasonable option. Skepticism is an essential moment of scientific progress; however, it should not result in elevating science and pseudoscience to the same level.
This makes the question all the more urgent: how, and on what basis, can an appropriate trust in science be built? We are interested in how trust is established in such cases of increasing complexity (of models and of communication) and in what appropriate measures to alleviate doubt might look like.
–Topics–
Interested scientists, philosophers, sociologists, historians, mathematicians, and journalists are invited to submit contributions on the following topics (non-exclusive):
The epistemology of trust in science (e.g. increasing trust through replication, RCTs, etc.)
Trust as an epistemic virtue
Scientists trusting scientists
Benchmarks, measures, criteria for trustworthy science
Principles, guidelines, best practices as attempts to make science trustworthy
The public trusting scientists/science (communicating scientific results)
The role of publishing raw research data in creating trust
Images of science and scientists in the public sphere
The role of trust for science in an open society
Historical perspectives on trust in science
Coping with doubt in the sciences
Trusting or doubting computer methods, especially AI and computer simulation
Algorithmic bias
–Dates & Deadlines–
Abstracts (max. 3,000 characters including spaces, excluding references) can be submitted until 15 June 2021. Submissions should be prepared for anonymous review (no information identifying the author). Applicants will be notified by 15 July 2021 at the latest. Accepted papers will be published in a proceedings volume by Springer. The conference is planned to take place at the HLRS in Stuttgart; however, depending on the COVID-19 situation in autumn, it might be held partially or completely online. Even if the conference is held on site in Stuttgart, speakers will have the opportunity to give their presentations virtually.
For submissions: https://philo.hlrs.de/openconf/openconf.php
If you have any questions, please contact phil@hlrs.de.