Concept: Style guide
Altmetric measurements derived from the social web are increasingly advocated and used as early indicators of article impact and usefulness. Nevertheless, there is a lack of systematic scientific evidence that altmetrics are valid proxies of either impact or utility although a few case studies have reported medium correlations between specific altmetrics and citation rates for individual journals or fields. To fill this gap, this study compares 11 altmetrics with Web of Science citations for 76 to 208,739 PubMed articles with at least one altmetric mention in each case and up to 1,891 journals per metric. It also introduces a simple sign test to overcome biases caused by different citation and usage windows. Statistically significant associations were found between higher metric scores and higher citations for articles with positive altmetric scores in all cases with sufficient evidence (Twitter, Facebook wall posts, research highlights, blogs, mainstream media and forums) except perhaps for Google+ posts. Evidence was insufficient for LinkedIn, Pinterest, question and answer sites, and Reddit, and no conclusions should be drawn about articles with zero altmetric scores or the strength of any correlation between altmetrics and citations. Nevertheless, comparisons between citations and metric values for articles published at different times, even within the same year, can remove or reverse this association and so publishers and scientometricians should consider the effect of time when using altmetrics to rank articles. Finally, the coverage of all the altmetrics except for Twitter seems to be low and so it is not clear if they are prevalent enough to be useful in practice.
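The "simple sign test" mentioned above can be sketched as follows. This is a hedged illustration only: the pairing of articles and the exact test protocol are assumptions, not the paper's published procedure. For matched article pairs (e.g. published close together in time), it counts how often the article with the higher altmetric score also has more citations, then tests that proportion against chance with an exact binomial test.

```python
# Sketch of a paired sign test (assumed protocol, not the paper's exact one):
# each pair is ((altmetric_a, citations_a), (altmetric_b, citations_b)),
# with article a having the higher altmetric score.
from math import comb

def sign_test(pairs):
    wins = ties = 0
    for (alt_a, cit_a), (alt_b, cit_b) in pairs:
        if cit_a > cit_b:
            wins += 1
        elif cit_a == cit_b:
            ties += 1
    n = len(pairs) - ties           # ties are dropped, as usual for a sign test
    k = min(wins, n - wins)
    # two-sided exact binomial p-value under H0: P(higher-altmetric wins) = 0.5
    p = sum(comb(n, i) for i in range(k + 1)) / 2**n * 2
    return wins, n, min(p, 1.0)

# Invented toy data: five pairs of (altmetric score, citation count).
pairs = [((50, 12), (3, 4)), ((40, 9), (2, 10)), ((30, 8), (1, 2)),
         ((25, 7), (1, 1)), ((20, 6), (0, 3))]
wins, n, p = sign_test(pairs)
```

Because the test only compares signs within matched pairs, it is insensitive to the different citation and usage windows that bias raw correlations.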
- Journal of Child Psychology and Psychiatry, and Allied Disciplines
Numerous style guides, including those issued by the American Psychological and the American Psychiatric Associations, prescribe that writers use only person-first language so that nouns referring to persons (e.g. children) always precede phrases referring to characteristics (e.g. children with typical development). Person-first language is based on the premise that everyone, regardless of whether they have a disability, is a person first, and therefore everyone should be referred to with person-first language. However, my analysis of scholarly writing suggests that person-first language is used more frequently to refer to children with disabilities than to children without disabilities; more frequently to refer to children with disabilities than to adults with disabilities; and most frequently to refer to children with the most stigmatized disabilities. Therefore, the use of person-first language in scholarly writing may actually accentuate stigma rather than attenuate it. Recommendations are forwarded for language use that may reduce stigma.
Measuring the usage of informatics resources such as software tools and databases is essential to quantifying their impact, value and return on investment. We have developed a publicly available dataset of informatics resource publications and their citation network, along with an associated metric (u-Index) to measure informatics resources' impact over time. Our dataset differentiates the context in which citations occur to distinguish between ‘awareness’ and ‘usage’, and uses a citing universe of open access publications to derive citation counts for quantifying impact. Resources with a high ratio of usage citations to awareness citations are likely to be widely used by others and have a high u-Index score. We have pre-calculated the u-Index for nearly 100,000 informatics resources. We demonstrate how the u-Index can be used to track informatics resource impact over time. The method of calculating the u-Index metric, the pre-computed u-Index values, and the dataset we compiled to calculate the u-Index are publicly available.
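The abstract above does not reproduce the published u-Index formula, so the snippet below is an illustrative placeholder only: it computes the usage-to-awareness citation ratio that the abstract says distinguishes highly used resources, not the actual u-Index definition.

```python
# Illustrative toy score (NOT the published u-Index formula): resources
# with many 'usage' citations relative to 'awareness' citations score high.
def usage_awareness_ratio(citations):
    """citations: list of citation-context labels for one resource."""
    usage = sum(1 for c in citations if c == "usage")
    awareness = sum(1 for c in citations if c == "awareness")
    return usage / awareness if awareness else float(usage)

# Hypothetical citation contexts for one informatics resource.
r = usage_awareness_ratio(["usage", "usage", "awareness", "usage"])
```

Distinguishing citation context in this way separates papers that merely mention a resource from papers whose results actually depend on it.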
A thorough review of the literature is the basis of all research and evidence-based practice. A gold-standard efficient and exhaustive search strategy is needed to ensure all relevant citations have been captured and that the search performed is reproducible. The PubMed database comprises both the MEDLINE and non-MEDLINE databases. MEDLINE-based search strategies are robust but capture only 89% of the total available citations in PubMed. The remaining 11% include the most recent and possibly relevant citations but are only searchable through less efficient techniques. An effective search strategy must employ both the MEDLINE and the non-MEDLINE portion of PubMed to ensure all studies have been identified. The robust MEDLINE search strategies are used for the MEDLINE portion of the search. Usage of the less robust strategies is then efficiently confined to search only the remaining 11% of PubMed citations that have not been indexed for MEDLINE. The current article offers step-by-step instructions for building such a search exploring methods for the discovery of medical subject heading (MeSH) terms to search MEDLINE, text-based methods for exploring the non-MEDLINE database, information on the limitations of convenience algorithms such as the “related citations feature,” the strengths and pitfalls associated with commonly used filters, the proper usage of Boolean operators to organize a master search strategy, and instructions for automating that search through “MyNCBI” to receive search query updates by email as new citations become available.
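The two-part strategy described above can be sketched as a query-assembly step. The topic terms below are placeholders, not from the article; the `[MeSH Terms]`, `[tiab]`, and `medline[sb]` field tags are standard PubMed syntax, with `NOT medline[sb]` confining the text search to records not yet indexed for MEDLINE.

```python
# Hedged sketch: build a master PubMed query combining a robust MeSH-based
# MEDLINE search with a title/abstract search confined to the non-MEDLINE
# subset. Topic terms are invented for illustration.
mesh_part = '"hypertension"[MeSH Terms]'                      # MEDLINE portion
text_part = '(hypertension[tiab] OR "high blood pressure"[tiab])'
non_medline = f'({text_part} NOT medline[sb])'                # non-MEDLINE only
master_query = f'({mesh_part}) OR {non_medline}'
```

Confining the text-word clause with `NOT medline[sb]` keeps the less precise free-text search from diluting the MeSH results for the 89% of citations that are already indexed.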
The Protein Data Bank (PDB) is the worldwide repository of 3D structures of proteins, nucleic acids and complex assemblies. The PDB’s large corpus of data (> 100,000 structures) and related citations provide a well-organized and extensive test set for developing and understanding data citation and access metrics. In this paper, we present a systematic investigation of how authors cite PDB as a data repository. We describe a novel metric, based on information cascades constructed by exploring the citation network, to measure influence between competing works, and apply it to analyze different practices for citing PDB. Based on this new metric, we found that the original publication of RCSB PDB in the year 2000 continues to attract the most citations even though many follow-up updates were published. None of these follow-up publications by members of the wwPDB organization can compete with the original publication in terms of citations and influence. Meanwhile, authors increasingly choose to use URLs of PDB in the text instead of citing PDB papers, disrupting the growth of literature citations. A comparison of data usage statistics and paper citations shows that PDB Web access is highly correlated with URL mentions in the text. The results reveal how authors cite a biomedical data repository and may provide useful insight into how to measure the impact of a data repository.
Reference citations should be accurate, complete, and presented in a consistent format. This study analyzed information provided to authors on preparing citations and references for manuscripts submitted to nursing journals (n = 209). Half of the journals used the American Psychological Association reference style. Slightly more than half provided examples of how to cite articles and books; there were fewer examples of citing websites and online journals. Suggestions on improving the accuracy of references are discussed.
Summary of findings tables in systematic reviews are highly informative but require epidemiological training to be interpreted correctly. The use of fishbone diagrams as graphical displays could offer researchers an effective approach to simplifying content for readers with limited epidemiological training. In this paper we demonstrate how fishbone diagrams can be applied to systematic reviews and present the results of an initial user test.
Objective: This paper identifies general properties of language style in social media to help identify areas of need in disasters. Background: In the search for metrics of need in social media data, much of the existing literature ignores processes of language usage. Psychological concepts, such as narrative breach, Gricean maxims, and lexical marking in cognition, may assist the recovery of disaster-relevant metrics from altered patterns of word prevalence. Method: We analyzed several hundred thousand location-specific microblogs from Twitter for Hurricane Sandy, the Oklahoma tornadoes, and the Boston Marathon bombing, along with a fantasy football control corpus, examining the relative frequency of words in 36 antonym pairs. We compared the ratio of words within these pairs to the corresponding ratios recovered from an online word norm database. Results: Partial rank correlation values between observed antonym ratios demonstrate consistent patterns across disasters. For Hurricane Sandy data, 25 antonym pairs have moderate to large effect sizes for discrepancies between observed and normative ratios. Across disasters, 7 pairs are stable and meet effect size criteria. Sentiment analysis, supplementary word frequency counts with respect to disaster proximity, and examples support a “breach” account for the observed results. Conclusion: Lexical choice between antonyms, only somewhat related to sentiment, suggests that social media capture wide-ranging breaches of normal functioning. Application: Antonym selection contributes to screening tools based on language style for identifying relevant content and quantifying disruption using social media without the a priori specification of content keywords.
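The antonym-ratio comparison described in the method can be sketched as below. The smoothing constant, the example pair, and the norm value are all invented for illustration; the study's actual word list and norm database are not reproduced here.

```python
# Sketch of the antonym-ratio method (assumed details): count each word of
# an antonym pair in a corpus, take the within-pair ratio, and compare it
# with the corresponding ratio from a word-norm frequency table.
from collections import Counter

def antonym_ratio(tokens, pair):
    counts = Counter(tokens)
    a, b = pair
    # +1 smoothing (an assumption) avoids division by zero for unseen words
    return (counts[a] + 1) / (counts[b] + 1)

corpus = "the dark streets stayed dark and closed while shops stayed closed".split()
observed = antonym_ratio(corpus, ("dark", "light"))
norm = 0.8  # hypothetical dark/light ratio from a word-norm database
discrepancy = observed / norm  # > 1 suggests a shift toward 'dark'
```

A discrepancy well above or below 1 across many pairs, rather than any single pair, is what would signal a breach of normal language use.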
Few medical journals specifically instruct authors to use the active voice and avoid the passive voice, but advice to that effect is common in the large number of stylebooks and blogs aimed at medical and scientific writers. Such advice typically revolves around arguments that the passive voice is less clear, less direct, and less concise than the active voice, that it conceals the identity of the person(s) performing the action(s) described, that it obscures meaning, that it is pompous, and that the high rate of passive-voice usage in scientific writing is a result of conformity to an established and old-fashioned style of writing. Some of these arguments are valid with respect to specific examples of passive-voice misuse by some medical (and other) writers, but as arguments for avoiding passive-voice use in general, they are seriously flawed. In addition, many of the examples that stylebook writers give of inappropriate use are actually much more appropriate in certain contexts than the active-voice alternatives they provide. In this review, I examine the advice offered by anti-passive writers, along with some of their examples of “inappropriate” use, and argue that the key factor in voice selection is sentence word order as determined by the natural tendency in English for the topic of discourse (“old” information) to take subject position and for “new” information to come later. Authors who submit to this natural tendency will not have to worry much about voice selection, because it will usually be automatic.
To understand the role of symptom attribution in treatment-seeking behaviours, survey results of 1356 veterans (age = 38-72 years) were analysed. Controlling for symptom frequency, significant relationships were found for specialist and psychological-related consultations. Those who favoured psychological explanations for symptoms were more likely to attend specialist and psychology-related consultations and filled significantly more prescriptions than people who predominantly explained symptoms by situational factors (normalisers). Veterans who favoured somatic explanations attended more general practitioner consultations than normalisers. Attributional style should be considered part of the constellation of factors influencing healthcare usage. Normalisers, the predominant group, used fewest health services and filled fewest prescriptions; this may have important implications for healthcare considering their tendency to minimise or downplay symptoms.