COMN 1401: Take-Home Final
LONG ANSWER QUESTIONS
Question 1
Individuals who want to be well-informed citizens in today’s information-rich world must be able to assess the material they encounter on the Internet. A large part of the Civic Online Reasoning project is devoted to bridging this gap. The project is an educational endeavor that provides activities and assessments designed to teach learners how to evaluate the authenticity of online material. Developing civic online reasoning improves an individual’s capacity to search for, assess, and verify political and social content online. Breakstone et al. (2019) note that, as a global problem, students are unable to effectively discern the quality of information available to them through online platforms. McGrew et al. (2018) and Wineburg and McGrew (2019) present similar findings, arguing that students are bombarded with information and duped into believing falsehoods by well-designed websites and other online platforms, such as blogs posing as educational channels, that deliver inaccurate and non-factual content. Materials from social media, blogs, websites, wikis, YouTube videos, and other sources are often used in the learning process, yet these same channels are now key sources of fake news, misinformation, and disinformation, requiring students to apply the evaluative skills proposed by the Civic Online Reasoning project to discern the quality of information.
Civic online reasoning, source triangulation, and information literacy are key components of effective learning today, providing students and educators with the direction and tools to better assess online content for quality and credibility. False information and distortion are a serious danger to people all around the globe, especially in developing countries. A number of studies have shown that people have difficulty distinguishing between truth and fiction, and between trustworthy and false information. According to McGrew et al. (2018), it is important for students and instructors to understand how to shift from weak information literacy approaches to professional fact-checking processes. People have traditionally been drawn to the ease of relying on a single source of information because of its simplicity. However, one disadvantage of blindly trusting a single piece of evidence is that its content may be questionable and must therefore be double-checked before being accepted. It can be difficult to detach from a questionable source that provides comforting information or reinforces existing beliefs and attitudes. In this situation, comparison and validation are highly beneficial to the learning process. Rather than focusing on learning outcomes that depend on a single resource, Wineburg and McGrew (2019) found that educational investment should place greater emphasis on the ability to recognize, evaluate, and obtain trustworthy information from a variety of resources through comparison and verification. Source triangulation is a component of information literacy that involves learning to verify sources by researching and comparing multiple sources in order to establish genuine corroboration between two or more of them. Because credibility changes based on context, the term “credible” is essential to understanding the quality of sources (McGrew et al., 2018). What is convincing in a corporate setting is not necessarily the same as what is believable within the confines of critical studies, and vice versa.
An important qualification is that authenticity does not always emerge from shared accounts: many significant truths remain concealed across sources, some of which circulate freely in the marketplace of ideas and some of which are more tightly controlled. The core strength of evaluating information richness through source triangulation is the recognition that single resources are often readily refuted by alternative explanations. According to Wineburg and McGrew (2019), critically evaluating many sources and looking for consistency among them can help people identify meaningful commonalities across sources. It is also vital to find facts and opinions that may have been overlooked when only one source was considered. The goal is to help individuals identify information problems while also teaching them how to take control of the content that is allowed into their thinking and decision-making, and how to use information to improve performance.
In summary, source triangulation is only one component of the broader information literacy training necessary for navigating risky online information environments, which is why cautions about groupthink are as significant as source triangulation and other quality assessment concepts. Despite the problems associated with apparent commonality, information literacy practitioners and advocates continue to see source triangulation as a crucial strategy for helping users access the information necessary for democracy to function properly and effectively. This is the same principle that civic online reasoning applies in its effort to enhance information literacy. Therefore, source triangulation and civic online reasoning should be applied together to provide better learning outcomes and to create a new generation of learners who can discern false information within a sea of information-rich internet sources. The overall aim is to ensure the quality of information used in learning.
Question 2
The dissemination of false information on the internet has far-reaching consequences for people’s daily lives off the internet. Although the decentralized and participatory character of electronic technology has helped diversify knowledge and its dissemination, it has also created an urgent need for methods to evaluate the reliability of information (McGrew et al., 2018). Because of the unexpected growth in the popularity of false news, such concerns about the veracity of media content have garnered a great deal of attention. The 2016 U.S. presidential election is a landmark case that will be used in the future to mark a change in the way people perceive and interpret online sources of information. Information literacy and content moderation are two approaches that have been proposed to enable better information dissemination. As the transmission of false news grows, there is increasing pressure on the online platforms that circulate content to intervene and minimize its spread. However, intervention is met with charges of unfair censorship. The conflict between fair moderation and suppression draws attention to two interconnected issues that arise when determining whether internet information is phony or authentic. The first, and most difficult, challenge is determining what sort of information is a concern that should be flagged for attention. The second is uncertainty over whether it is practicable and technically feasible to acquire and categorize examples of such material in an impartial way.
A likely drawback of information literacy and content moderation approaches is that they may foster a new culture of free speech suppression if they are adopted as wider policies in institutions and governments. In terms of the issues that fact-checking is intended to address, however, there is little uniformity among the many platforms and websites that provide fact-checking services (Breakstone et al., 2019). Fact-checking websites such as Snopes, for example, do not place as much emphasis on free expression as large social media platforms such as Twitter and Facebook do. Instead, Snopes is devoted to uncovering erroneous information, whether it is the product of ignorance, misinformation, or simply a misinterpretation of the facts as presented. When it comes to fact-checking, therefore, Snopes examines a far wider spectrum of material than social media sites such as Twitter and Facebook. In this context, Snopes deals with material that has the potential to mislead an audience, regardless of whether the information was meant to deceive or was simply inaccurate.
The inclusion and exclusion criteria for content moderation are not clearly specified and are not consistently followed (Wineburg & McGrew, 2019), as the cases of Twitter and Facebook illustrate. These platforms seek to encourage as much interaction from their users as possible while still allowing freedom of expression and speech. As a consequence, Facebook’s algorithms are less likely to flag non-factual content, such as opinion articles, as untrue, and the fact that opinion pieces are often incorrect is ignored. Facebook does, however, flag users who spread false information or make unintended misstatements. Twitter, for its part, takes a more focused approach, notifying users when they post messages that it considers dangerous. Posts that raise physical, psychological, or informational risks fall under this category. Examples of informational hazards include misinformation or disinformation that threatens public health or civic engagement, such as false election information. However, this approach suffers from the same fault as Facebook’s: a poorly defined scope for the inclusion and exclusion criteria of content moderation, which makes it difficult to differentiate phony content from authentic content.
References
Breakstone, J., Smith, M., & Wineburg, S. (2019). Students’ civic online reasoning: A national portrait.
McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 46(2), 165-193.
Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11), 1-40.