Danish researchers have developed an exaggeration detection system designed to counter the tendency of journalists to overstate the findings of new scientific research when summarizing and reporting it. The work was motivated by the extent to which newly published COVID-19 studies have been distorted in news coverage, although the authors note that the approach can be applied across much of the scientific reporting sector.
The paper, titled Semi-Supervised Exaggeration Detection of Health Science Press Releases, comes from the University of Copenhagen and notes that the problem is exacerbated by the tendency of publications not to link to the source study, even when the paper is publicly available.
The problem is not limited to external journalistic reaction to new papers; it can extend to other summarizing activity, including the internal public relations output of universities and research institutes; promotional material designed to attract the attention of news outlets; and handy reference links (and possible ammunition for funding rounds) for interested parties.
The work leverages Natural Language Processing (NLP) against a new dataset of press releases and abstracts, with the researchers claiming to have developed "[a] novel, more realistic" formulation of the task of detecting scientific exaggeration. The authors have pledged to publish the code and data for the work on GitHub soon.
Numerous studies have addressed the problem of scientific sensationalism over the past decades and drawn attention to the misinformation it can lead to. The late American sociologist of science Dorothy Nelkin addressed the issue specifically in her 1987 book Selling Science: How the Press Covers Science and Technology; a 2006 EMBO report, Bad science in the headlines, emphasized the need for scientifically trained journalists, just as the internet was putting critical budgetary pressure on traditional media.
In addition, the British Medical Journal raised the issue in a 2014 report; and a 2019 study from Wellcome Open Research even found that exaggerating scientific publications brings no benefit (in terms of reach or traffic) to the newsletters and other reporting outlets that engage in the practice.
However, the advent of the pandemic has brought the negative effects of this hyperbole into critical focus on a number of data platforms: Google's search results pages and Cornell University's Arxiv index of scientific publications now automatically add disclaimers to any content that appears to relate to COVID.
Previous projects have attempted to create NLP-based exaggeration detection systems for scientific publications, including a 2019 collaboration between Hong Kong and mainland Chinese researchers, and another (unrelated) Danish paper from 2017.
The researchers behind the new paper observe that these previous efforts drew their training data from PubMed and EurekAlert abstracts and summaries labeled for claim strength, and used it to train machine learning models to predict claim strength on unseen data.
Instead, the new study treats a press release and its corresponding abstract as a combined unit of information, and processes the resulting material with MT-PET, a multi-task version of Pattern Exploiting Training (PET), introduced in 2020 in Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference, a joint work from two German research institutions.
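The core idea behind PET can be sketched in a few lines. In this toy example (the names, template, and labels below are illustrative assumptions, not the paper's actual implementation), a pattern rewrites an input sentence as a cloze question with a mask slot, and a verbalizer maps each class label to a single token; a masked language model's score for that token at the mask position then stands in for the label's probability.

```python
# Minimal sketch of PET-style cloze classification. Pattern, verbalizer,
# and labels here are hypothetical; real PET queries a masked language
# model for token probabilities at the [MASK] position.

def pattern(claim: str) -> str:
    """Wrap a claim sentence in a cloze template."""
    return f'"{claim}" The strength of this claim is [MASK].'

# Verbalizer: label -> token the masked LM should predict at the mask.
VERBALIZER = {
    "correlational": "associated",
    "causal": "causes",
}

def classify(claim: str, token_scores: dict) -> str:
    """Pick the label whose verbalizer token scores highest at the mask.
    `token_scores` stands in for a masked LM's output distribution."""
    return max(VERBALIZER, key=lambda label: token_scores.get(VERBALIZER[label], 0.0))

# Toy scores, as a masked LM might assign to tokens at the [MASK] slot.
scores = {"causes": 0.61, "associated": 0.22}
print(classify("Coffee causes weight loss.", scores))  # -> causal
```

In full PET, several such pattern/verbalizer pairs are trained and their soft predictions on unlabeled data are distilled into a final classifier; MT-PET extends this with patterns for an auxiliary task.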
No existing dataset was found to be suitable for the task, so the team curated a new dataset of paired abstracts and related press releases, which experts annotated for their tendency to exaggerate.
The researchers used the few-shot text classification framework PETAL as part of the pipeline to automatically generate pattern/verbalizer pairs, iterating over the data until roughly equivalent pairs were found for the two qualities of interest: exaggeration detection and claim strength.
Gold test data was reused from the aforementioned earlier research projects, comprising 823 abstract/press-release pairs. The researchers rejected the possible use of the 2014 BMJ data because it is paraphrased.
This process yielded a set of 663 abstract/release pairs labeled for exaggeration and claim strength. The researchers randomly selected 100 of these as few-shot training data, reserving 553 examples for testing. In addition, a small training set of 1,138 sentences was created, classified by whether each represents a conclusion from an abstract or from a press release; these were used to identify conclusion sentences in unlabeled pairs.
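The split described above is straightforward to reproduce in outline. This sketch uses the article's figures; the variable names and seed are assumptions, and the 10 examples left over after the 100/553 split would presumably serve as a small held-out set.

```python
import random

# Sketch of the random split described above (sizes from the article).
pairs = [f"pair_{i}" for i in range(663)]  # 663 labeled abstract/release pairs
rng = random.Random(0)                     # fixed seed so the split is reproducible
rng.shuffle(pairs)
few_shot_train = pairs[:100]               # few-shot training data
test_set = pairs[100:653]                  # 553 examples reserved for testing
print(len(few_shot_train), len(test_set))  # -> 100 553
```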
The researchers tested the approach in three configurations: a fully supervised setting using exclusively labeled data; a single-task PET scenario; and the new MT-PET, which adds a secondary pattern for the auxiliary task (since the project aims to learn two separate qualities from a dataset with paired data structures).
The researchers found that MT-PET improved on baseline PET results across the various test settings, and that identifying claim strength helped produce soft-labeled training data for exaggeration detection. However, the paper notes that in certain configurations among the more complex test series, particularly those concerning claim strength, the availability of expertly labeled data may be a factor in improved results (relative to prior research addressing this problem). This has implications for the extent to which the pipeline can be automated, depending on the weighting of the task data.
Nevertheless, the researchers conclude that MT-PET "helps most in the more difficult cases of identifying and distinguishing direct causal claims from weaker claims", and that the most effective approach involves classifying and comparing the claim strength of the conclusion statements in the source and target documents.
Finally, the work speculates that MT-PET could not only be applied to a wider range of scientific documents (beyond healthcare), but could also provide the basis for new tools enabling journalists to produce better summaries of scientific papers (although this may naively assume that journalists exaggerate claim strength out of ignorance), and help the research community formulate clearer language to explain complex ideas. Additionally, the paper states:
"[It] should be noted that the predictive performance results presented in this paper apply to press releases written by science journalists; worse results could be expected from press releases that greatly simplify scientific articles."