
Module 4: SPEAKING. Tips

Updated: Dec 22, 2025

📚 EOI ORAL EXAM: Mass Media and Fake News

LEVEL C1 (Advanced)

1. EXAM INSTRUCTIONS

PART 1: MONOLOGUE (Long Turn)

Time: 3-4 minutes speaking time (preparation: 10-15 mins).
Task: Discuss the statement below. Structure your ideas logically (Intro, Arguments, Conclusion).
Topic: The "Echo Chamber" Phenomenon
"We are no longer seeking truth; we are seeking validation. The personalized news feed has killed rational public discourse."
You may mention:

  • Algorithmic Bias: How platforms feed us what we want to see.

  • Socio-political Polarization: How personalized feeds affect voting and social harmony.

  • Media Literacy: Is it the consumer's responsibility to verify information?

  • The Power of the Headline: Why clickbait and sensationalism win over factual reporting.


PART 2: DIALOGUE (Interaction)

Time: 5-6 minutes.
Goal: Reach a consensus (negotiation).
Scenario: You are members of a government-appointed task force with a mandate to spend €1 million to combat disinformation. You must agree on ONE priority project.

Candidate A: The Regulator
You want to implement Tougher Legal Liability for social media platforms. Argument: Companies must be fined heavily for malicious disinformation circulated on their sites; financial consequences are the only effective deterrent.

Candidate B: The Educator
You want to implement a Mandatory Media Literacy Curriculum in all schools. Argument: We must fix the root cause. If people can identify bias and check sources, the problem will solve itself; this creates long-term resilience.


2. TIPS FOR C1 SUCCESS

Vocabulary: Disinformation, misinformation, scrutiny, propaganda, viral content, clickbait, echo chamber, algorithmic bias, censorship, journalistic integrity.
Grammar: Use the third conditional for hypothetical past scenarios ("If platforms had acted sooner...") and the passive voice to sound objective ("Misinformation is being monetized...").
Dialogue Strategy: Don't just assert your view. Use phrases like "I see your point about the root cause, but consider the speed of the current crisis..." or "Let's try to find a middle ground..."


3. WORKED EXAMPLES (Model Responses)

C1 EXTENDED MONOLOGUE: The "Echo Chamber" Phenomenon

"Good afternoon. Today I want to address what I believe is the most significant threat to rational public life: the echo chamber. We live in an era where truth is no longer a shared goal; it is merely an item to be verified only if it aligns with our existing beliefs. It is shocking to think that the technology designed to connect us has ultimately served to divide us.

To begin with, we must understand algorithmic bias. The platforms are not neutral. They are designed to maximize engagement, and their algorithms quickly learn that controversy and sensationalism keep us scrolling far longer than factual reporting. This creates a vicious cycle: the more we consume polarized content, the more the platform feeds us extreme viewpoints. This is where clickbait wins, and journalistic integrity loses.

Regarding its impact, this system directly fuels socio-political polarization. When citizens inhabit entirely different information realities, how can they find common ground to vote or govern? It prevents the kind of measured debate necessary for a healthy democracy. If platforms had been regulated ten years ago, we might not be facing such extreme political tribalism today.

The question of responsibility is crucial. While the corporations that built the machine are the main culprits, we cannot ignore the need for media literacy. It is unrealistic to expect platforms to police billions of posts daily. Therefore, every individual must be trained to verify sources, check for bias, and understand when content is being monetized. Unless consumers are educated, no amount of regulation will be enough.

In conclusion, the problem is structural. We have been sold a lie that convenience and personalization are always good. To fix this, we need to treat information consumption like a diet: we must choose factual, balanced sources, even if they aren't as instantly satisfying as the personalized junk food the algorithms offer."


C1 EXTENDED DIALOGUE: Liability vs. Education

(A meeting between two government advisors)

Candidate A (Regulator): "I’m convinced that we must allocate this €1 million to the Platform Liability Law. Frankly, the problem is not a lack of education; it's a lack of consequences. Social media companies are making vast sums from viral disinformation. It is imperative that we impose heavy fines for maliciously false content. Financial pain is the only language they speak. It’s the fastest, most effective deterrent."

Candidate B (Educator): "I understand the appeal of a fast fix, but you are only treating the symptom. You mentioned disinformation is profitable—and it is—but that is because consumers are vulnerable to it. I see your point about the deterrent, but consider the long-term impact: if we don't fix the root cause with a Mandatory Media Literacy Curriculum, new forms of manipulation will simply emerge. We need to build long-term resilience in the electorate."

Candidate A: "I admit that education is the ultimate solution. However, we are facing an immediate crisis. People are literally making health decisions based on falsehoods circulated on these sites. Let’s play devil's advocate: if we spend a million euros on a curriculum that won't show results for five years, what happens to the election next year? We need a rapid, structural change to the platform ecosystem now."

Candidate B: "That is a valid point about the urgency. But consider the logistics of liability. Who defines 'malicious'? The law could lead to massive censorship of political views that are merely controversial, not actually false. The platforms will over-filter to avoid fines. Isn't it better to empower the consumer to filter themselves rather than relying on a faceless corporation to do it for us?"

Candidate A: "True, the risk of censorship is a concern. But we could mitigate that by focusing liability only on content already flagged by certified independent fact-checkers. Let’s try to find a middle ground. What if we divide the funds: a smaller portion for initiating the legal framework (the liability threat) and the majority for launching the curriculum pilot program immediately?"

Candidate B: "A split budget... that’s an interesting compromise. If we allocate 75% (€750,000) to launching the Mandatory Media Literacy Curriculum—ensuring we address the root cause—and the remaining 25% (€250,000) to funding the independent fact-checkers who would inform your liability law, I think I can get on board with that. It addresses the urgency while prioritizing long-term change."

Candidate A: "Agreed. 75% for Education, 25% for Fact-Checker Funding to enforce the threat of liability. That gives us both the sword and the shield."


📚 EOI ORAL EXAM: Mass Media and Fake News

LEVEL C2 (Mastery)

1. EXAM INSTRUCTIONS

PART 1: MONOLOGUE (Long Turn)

Time: 4-5 minutes speaking time.
Task: Deliver a monologue analyzing the nuance and complexity of the topic.
Topic: The Ethics of Algorithmic Power and Epistemic Tribalism
Analyze how personalized feeds have led to epistemic tribalism (trusting only your group's facts). Discuss:

  • The 'Black Box' Problem: The ethical challenge of algorithms that cannot be audited or understood.

  • Socio-Political Ramifications: How the destruction of a shared factual reality affects democratic accountability.

  • The Fiduciary Duty: Does a platform’s profit motive inherently contradict its societal contract?

  • Disinformation vs. Misinformation: Distinguishing between malicious intent and accidental falsehood.


PART 2: DIALOGUE (Interaction)

Time: 6-7 minutes.
Goal: Debate a controversial policy.
Scenario: An international summit on drafting global standards for digital governance.

Candidate A: The Pragmatist (Pro-Accountability)
You argue that platforms must be classified as Publishers (not neutral carriers) and held liable for the structural violence caused by their algorithms. Stance: This is a climate of moral turpitude; financial accountability is the only deterrent.

Candidate B: The Purist (Pro-Free Speech)
You argue that any attempt to classify platforms as publishers will lead to state censorship and infringe upon the fundamental right to expression. Stance: The solution is not regulation, but radical transparency, forcing algorithms to be open-source.


2. TIPS FOR C2 SUCCESS

Vocabulary: Epistemic crisis, cognitive dissonance, structural violence, sociological ramifications, to be beholden to, fiduciary duty, moral turpitude, to be complicit in, antecedent condition.
Style: Use rhetorical questions ("Is this not a wholesale abdication of responsibility?") and hedge your assertions ("One might contend that the crux of the problem lies..." or "The matter is arguably the most urgent ethical dilemma...").
Dialogue Strategy: Acknowledge complexity. Concede small, tactical points to reinforce the larger, philosophical argument. Use precise language for negotiation.


3. WORKED EXAMPLES (Model Responses)

C2 EXTENDED MONOLOGUE: Algorithmic Power and Epistemic Tribalism

"The climate of epistemic crisis that defines our age is, in my view, the direct consequence of algorithmic power. I would contend that the most corrosive effect of personalized feeds is the destruction of shared factual reality, leading inexorably to epistemic tribalism. This is where individuals are not merely misinformed; they are intellectually insular, incapable of trusting any facts that originate outside their ideological silo.

Firstly, we must tackle the 'Black Box' problem. Platforms are designed to maximize engagement, and this goal is inherently at odds with their societal contract. The algorithm, which dictates what three billion people see daily, cannot be audited, and its internal mechanisms are proprietary secrets. This is a wholesale abdication of responsibility. It is incumbent upon us to ask: is this not a form of structural violence, where policies, or algorithms, cause harm as surely as any physical act?

Secondly, the socio-political ramifications are severe. When citizens are incapable of agreeing on antecedent conditions—the basic facts—democratic accountability collapses. How can a society govern itself if opposing political sides live in different universes? The debate moves from the measured exchange of ideas to a simple assertion of tribal loyalty.

Furthermore, the discussion around fiduciary duty is critical. A platform’s primary obligation is to its shareholders: to generate profit. Never before has this financial imperative been so intrinsically linked to the dissemination of disinformation. We cannot allow companies to claim neutrality when their business model is beholden to the viral spread of falsehoods. The distinction between accidental misinformation and malicious disinformation becomes a moot point when both are amplified by the same ruthless profit motive.

In conclusion, technological optimism has failed us. Unless we impose radical transparency and demand that algorithms are opened to public scrutiny, we will continue to subsidize the erosion of the shared factual foundation necessary for a free society."


C2 EXTENDED DIALOGUE: Publisher vs. Free Speech

(A debate at an International Digital Governance Summit)

Candidate A (Pragmatist): "We need to look at the socio-political ramifications of inaction. The time for claiming 'platform neutrality' is over. These are not passive tools; they are powerful amplifiers. I maintain that platforms must be classified as Publishers and held financially liable for the structural violence caused by their amplification of malicious content. It is imperative that we impose this accountability, for a climate of moral turpitude cannot be tolerated. Financial accountability is the only deterrent that matches the scale of their profits."

Candidate B (Purist): "I find the classification of 'Publisher' deeply concerning. We are indeed facing an emergency, but we must not panic and sacrifice fundamental rights. I concede that their algorithms cause societal harm, but not only is the 'Publisher' classification slow and prone to legal challenge, but it also hands the state a legal weapon for censorship. When the state can force a platform to remove content, it infringes the fundamental right to expression. I would argue that the solution is not regulation, but radical transparency, forcing algorithms to be open-source."

Candidate A: "Transparency is a long-term gambit. The algorithms are complex 'black boxes' that even the engineers don't fully understand. We don't have ten years. Your proposal delays action. Let's play devil's advocate: the failure to act is a political choice. If we do not impose liability, we become complicit in the erosion of public discourse. We can mitigate censorship by focusing liability only on content already flagged by independent international bodies—not the government."

Candidate B: "That is an astute compromise. Limiting the liability to third-party verification addresses the worst fears of state overreach. However, the long-term goal must remain non-regulatory. To synthesize our positions, I propose a two-tiered system: we immediately implement the Liability Law based on third-party verification for a five-year sunset period. During those five years, we divert a portion of the revenue from the fines directly into a fund dedicated to developing open-source, auditable algorithms. This satisfies your demand for immediate deterrence and my demand for future transparency."

Candidate A: "A sunset clause is risky, but funding the switch to open-source algorithms with the fines themselves is an elegant solution. It forces the industry to pay for the future infrastructure that renders the liability obsolete. I think we have a framework there. Liability is transitional, but the goal is radical transparency."

Candidate B: "Agreed. We have established a framework that balances immediate accountability with the preservation of free expression."
