Over the past two years, the pandemic has prompted a growing number of events to shift partially or entirely online. From a planning perspective, some events treated online formats as contingency plans, while others deliberately designed the entire experience for a digital environment from the outset. The Youth Development Administration’s Let’s Talk series, from its inception, trialed online deliberation sessions and allowed participating teams to adopt fully virtual or hybrid models. Despite these options, stable pandemic conditions in Taiwan at the time led most first-phase teams to prefer face-to-face activities. Yet recent shifts in the pandemic landscape prompted the organizers to transform the second phase into a semi-digital format: facilitators and staff onsite, while participants joined virtually.
In my dual role as second-phase subgroup facilitator and a first-phase deliberation coach, I recorded my thoughts and reflections. The title remains “miscellaneous notes” to emphasize the informal, free-flowing nature of these insights. My hope is that these reflections, drawn from actual practice, will inspire others who may consider conducting online deliberations. Just as in Go (the board game), where a player reviews their moves after each match, I seek to revisit and reevaluate my decisions, envisioning alternate strategies to grow personally and professionally. As for my personal shortcomings or areas for further improvement, I won’t enumerate them here. However, if anyone who participated in my sessions has suggestions or comments, please feel free to contact me privately.
Context and Disclaimer
This article DOES NOT represent official instructions or practices for all subgroups in the second phase of Let’s Talk. Each facilitator was given flexibility in designing and executing their chosen methods. The details recorded here reflect only my personal approach and should not be taken as indicative of any standard.
For background information about this year’s Let’s Talk activities, please refer to the official website. This article is released under a CC BY-NC-ND 4.0 (International) license.
Hardware, Software, and Their Rationale
While each subgroup facilitator had autonomy over their chosen methods, some commonalities persisted. For instance, Google Meet served as our main platform. A dedicated LINE group was available for technical assistance. Beforehand, the organizers provided a Google Drive link compiling issue-related materials, virtual backgrounds, and other resources. Additionally, each subgroup’s online meeting room included two team members offering administrative support alongside the facilitator.
On May 14 and May 21, I facilitated two online deliberation sessions. I did not enlist any co-facilitators and managed both discussion and note-taking myself. However, I experimented with different tools and slightly altered my setup each time. Below, I’ll detail my hardware and software configurations and my reasons for choosing them.
Hardware Setup
- Equipment: A standard Intel i5 MacBook Air + Type-C hub (with HDMI, Ethernet, USB ports) + Logitech StreamCam + ring light (optional) + jLab Talk Go microphone + headphones + a phone stand (optional) + a bag of green Kuai Kuai snacks (a Taiwanese superstition for smooth operations).
- My laptop wasn’t top-of-the-line (like an M1 MacBook Pro), yet the setup sufficed for all tasks.
Video
I did not plan to record on a physical whiteboard, so I didn’t need a separate camera to capture such content. A single webcam was enough. I chose an external webcam for better clarity and a more natural “face-to-face” feel.
On May 21, a technical glitch forced me to revert to my built-in camera—a reminder of why backup plans are essential. I also brought a ring light on May 14 to ensure that my image looked clear and well-lit, given the uncertain lighting conditions of the venue.
My purpose was not to achieve studio perfection, but to create a warmer, more engaging visual environment for participants.
Audio
Because I needed to speak all day while masked, and wanted participants to hear me clearly without straining my voice, I opted for a USB condenser microphone. Condenser mics are sensitive and pick up subtle vocal details, making speech sound more “alive” compared to typical headset mics. Given we were in separate rooms with minimal background noise, environment control was manageable.
I also prepared a webcam with a decent built-in mic as a backup, and I considered AirPods Pro as a fallback.
If using iPhone + AirPods, I’d recommend enabling the “Voice Isolation” feature in iOS 15 or later to minimize ambient noise.
Playback
To avoid echo and feedback, I relied on headphones to monitor participants’ voices. On May 14, I used over-ear headphones—comfortable but a bit stuffy after hours of wear. On May 21, I switched to in-ear monitors and wore them in just one ear, allowing me to remain aware of my surroundings while maintaining good audio quality.
Choosing a Real-Time Digital Whiteboard
I considered the real-time record displayed on-screen as the digital equivalent of a physical poster or whiteboard. Instead of writing on a real board and pointing a camera at it, I used online collaborative tools.
On May 14 and May 21, I experimented with two different platforms—both allowing me to see where participants were focusing their cursors. This immediate visual feedback helped me understand where their attention lay.
I also prepared a Google Doc “fallback message area” so that if anyone got disconnected, they could still review the live notes and add comments. Google Docs’ accessibility features and general familiarity are advantages over more niche markdown-based platforms.
MURAL (May 14 Session)
Rather than popular tools like Jamboard or Miro, I used MURAL. MURAL’s ability to “summon participants” to a specific board area and its built-in timer made it ideal for focused, time-bound discussions. While I didn’t fully leverage all interactive features (like voting), these tools could easily enhance engagement. By “summoning” everyone’s view to where I was working, I ensured a synchronized visual experience—akin to directing everyone’s gaze toward a particular poster or diagram in a physical room.
Weje (May 21 Session)
Weje, more a hybrid “workspace” than a classic whiteboard, suited my second session, which involved numerous policy references, hyperlinks, and supporting documents. Weje let me assemble a “mini-database” of resources. However, it didn’t have built-in timers or the same level of visual coherence as MURAL.
This trade-off meant I had to rely more on verbal guidance and careful document structuring.
Designing the Online Deliberation Experience
Tools are chosen to serve the event’s design, not the other way around. After discussing the technical aspects, let’s refocus on the deliberation’s essence and what shifts when moving from in-person to online.
Transparent Expectations
Given that participants might be unfamiliar with the tools or the flow, I compiled all essential information on a single one-page website: instructions, tool links, discussion norms, a brief video tutorial on using Google Meet, and clarifications about my role, the day’s goals, and the difference between phases one and two of Let’s Talk.
By anticipating participants’ needs and consolidating resources, I aimed to reduce confusion and enhance their sense of security and preparedness.
In the “Discussion Norms” section, in addition to outlining speaking rules and expectations—for instance, raising a hand before being invited to speak—I also included a pre-recorded Google Meet tutorial video. This video highlighted the platform’s most commonly used features and offered basic troubleshooting steps for connection issues.
By segmenting the video into chapters, participants could easily skip to the parts they were less familiar with. Although I initially considered adding subtitles on YouTube, I ultimately decided against it due to time constraints and the absence of participants with hearing impairments.
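For anyone looking to replicate this setup: YouTube builds chapters automatically from a list of timestamps placed in the video description, starting at 0:00. The chapter titles below are purely illustrative rather than the actual ones from my video, but a tutorial of this kind might be segmented roughly as follows:

0:00 Joining the meeting and checking your camera and microphone
1:30 Raising your hand and using the chat
3:00 Following the shared screen and the real-time notes
4:30 What to do if your connection drops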
On May 14, I used MURAL, which, by default, prompts users with options in English (allowing them to join with a nickname, remain fully anonymous, or log in). I therefore provided supplementary instructions on the webpage to ensure that participants understood the interface and what to expect.
From my experience, technical aspects often influence participant engagement and the atmosphere of online discussions. While the organizing team did provide a technical support chat group, I believed that helping participants prepare in advance—knowing what the day would entail, which tools would be in use, and which guidelines to follow—could significantly improve their comfort level. Having these details at their fingertips, ready for reference whenever doubts arose, fostered a greater sense of familiarity and security in what might otherwise feel like an uncertain environment.
As for the issue materials, beyond the government agencies’ official responses, I supplemented the real-time notes with my own research and preparation. This included references to the 2025 National Health Policy White Paper, the draft amendment to the Mental Health Act, the Second National Mental Health Plan, the second-phase Social Safety Net enhancement project, the Integrated Mental Health Program, and the Collaborative Model for Supporting Individuals with Mental Illness.
By providing these resources up front, I aimed to help participants better understand the context behind the agencies’ replies and minimize the time needed to align on basic facts before delving deeper into the discussion.
Attention and Engagement
Organizers of online events often worry about participants’ attentiveness, yet distractions occur in face-to-face settings too. What really drives this perceived gap, and its impact, is synchrony and connection: asynchronous communication can create uncertainty and a lack of feedback, especially when timely interaction is needed.
Attention is a high-level cognitive function, the process by which we choose which information to focus on when faced with various stimuli. Naturally, during an event, we hope participants will direct their attention to the discussion at hand. How can we enhance this element through event design? The answer involves planning our interaction models in a way that leverages both external (exogenous) cues and internal knowledge (endogenous) processes to spotlight the discussion.
From cognitive neuroscience, we know people are more sensitive to content that is personally relevant or familiar—this includes their own names and preferred forms of address. Therefore, in online deliberations, if the facilitator remembers how participants prefer to be addressed and what areas they find interesting or are familiar with, it becomes easier to refocus their attention when inviting them to speak.
True engagement depends on design that fosters trust, clarity, and purpose. By recalling participants’ preferred names and interests, and by highlighting key points on the shared screen, I tried to channel their attention. For instance, during note-taking I color-coded crucial content to guide participants visually. This approach replicates, in an online environment, the physical cues we rely on in person, like pointing at a flipchart.
Breaks and Time Management
In physical events, breaks often allow informal networking and quick side chats. Online, such spontaneous interactions are harder.
While I could display timers in MURAL or play background music through my system audio during breaks, participants might step away from their devices. Reminding them before and after each break was essential. On May 21, I used background music as a subtle auditory cue that the session was ongoing.
On the other hand, the online format also makes it harder to practice active listening during the deliberation itself. Even if we split everyone into pairs or breakout groups, it’s not as easy or natural for a facilitator to “walk around” and check on each group’s progress.
These limitations point to a deeper question: how do we weigh the trade-offs in tool usage? Perhaps this is one of the subtle constraints of online deliberation.
Adapting Discussion Flow
Google Meet does not offer an agenda feature like some other platforms, so both facilitators and participants have to rely on external methods to understand and keep track of the process and flow. In these two sessions, the presence of a physical whiteboard in the onsite venue helped the facilitator track progress and inform participants about adjustments. However, handling the flow differed between the two sessions, partly because the focal issues varied in complexity.
Balancing time among complex topics was challenging. In the first session, I tried a three-round structure—consensus building, problem deepening, and solution detailing. I found that without enough time to thoroughly dissect each issue, we risked superficiality.
On May 21, I refined the approach, splitting the discussion into distinct segments, each tackling specific sub-issues. After drafting preliminary consensus documents, participants had the chance to review and refine them. Despite these improvements, time constraints and complexity still pressured the depth of analysis.
Limits and Trade-Offs of Online Deliberation
Tool Selection
No single tool excels in every category (security, simplicity, interactivity, cost, device compatibility), and balancing these factors is tough. More interactive platforms like Gather Town, Kumospace, or Mozilla Hubs can mimic social connectivity but may be harder for newcomers. Using widely known tools like Google Meet ensures accessibility but limits advanced features.
Ensuring a sense of community and maintaining a fair, “safe” speaking environment online are crucial challenges for hosting virtual deliberations. While going online frees participants from the constraints of physical location and reduces travel costs, it shifts the burden onto having the right equipment and digital skills. These new requirements can act as a filter from the very start, shaping who participates and risking the exclusion of those with unstable connections, inadequate devices, or limited familiarity with the tools. We must acknowledge that the design of many current deliberation processes already sets a certain threshold that effectively filters out some citizens. Still, we should strive to reduce any additional barriers and encourage more diverse participation.
Each tool’s features stem from the original context and purpose it was designed for. As a result, every platform carries certain assumptions and limitations, as well as a core ethos and set of objectives. For example, some collaborative whiteboard tools were intended for design teams, offering numerous templates for team-building activities or brainstorming sessions. Such tools are richer in feedback and cooperative features but may not align perfectly with the goals of a deliberation. Adopting these tools outside their original context may require additional adjustments and support measures. Simultaneously, when participants must juggle multiple tools, confusion may arise—participants might not know which page or app to open at any given moment.
Accessibility and Inclusivity
Hosting deliberations online lowers geographic barriers but raises new accessibility challenges. Stable internet, appropriate hardware, and digital literacy become prerequisites.
Many current deliberation formats have not fully accounted for accessibility needs. Even at events I’ve personally organized, I’ve noted that the chosen venue—let alone the tools—inevitably excludes certain citizens. However, we can use technology to lower participation barriers. This is one reason I record sessions in near-verbatim transcripts and then add a more structured summary.
Take Google Docs as an example: it can help hearing-impaired participants follow along by reading who said what, while visually impaired participants can use screen readers to navigate through the organized text without relying on posters. Achieving a fully accessible environment is still challenging given current conditions and my own capabilities, but we can see that selecting suitable tools and offering proper guidance—letting participants know which tools to use and when—can foster a group atmosphere that feels welcoming to everyone.
However, even platforms like Google Docs, with basic screen reader support, don’t automatically guarantee a fully inclusive environment. Technical communication and preparation are especially important in achieving this goal, but complete barrier-free participation remains a long-term goal that requires ongoing attention.
Additional Coordination
Online events require more pre-event communication and instructions—from basic platform usage to clarifying administrative support channels.
I created an advance webpage and recorded a brief tool tutorial video so participants could familiarize themselves with key features. For staff, I shared links to all platforms I’d use and clarified exactly how they should support me (e.g., how to indicate remaining time or provide reminders).
Preparation and alignment among all parties can minimize last-minute confusion.
The Difficulty of Deep, Second-Phase Discussions
Depth vs. Breadth
This second-phase deliberation aimed to refine and deepen the first-phase outcomes into more concrete, actionable recommendations.
Time limitations in both the first and second phases inherently restrict how deeply participants can investigate root causes and how thoroughly they can explore ways to address or prevent them. As a result, problem analysis often stops at naming the core issue; some discussions never even move beyond describing the situation before turning to solutions.
Under such conditions, talking about remedies can resemble an endless “while” loop: the condition to keep going is always met, yet nothing is ever truly resolved. Conversely, if too much time is spent drilling down into root causes, there may be insufficient time left to develop concrete solutions. The outcomes then remain very high-level, covering ground the relevant government agencies likely already understand.
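To take the programming metaphor literally, here is a toy sketch in Python; the function name and the five-round cap are purely illustrative and not a model of any real session:

```python
# Toy illustration only: nothing inside the loop ever updates our understanding
# of the root cause, so proposing remedies can go on indefinitely.

def propose_remedy(round_number: int) -> None:
    # Hypothetical stand-in for another "build a website / launch an app" idea.
    print(f"Round {round_number}: another remedy for the same, still-unclarified problem")

root_cause_understood = False
rounds = 0
while not root_cause_understood:   # the condition to keep going is always met...
    rounds += 1
    propose_remedy(rounds)         # ...but nothing here ever changes it
    if rounds >= 5:                # safety stop for the demo; a real discussion
        break                      # simply runs out of meeting time instead
```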
On the other hand, if we skip over problem clarification and jump straight to building on existing outcomes, the final conclusions might end up similar to previous results. The key difference lies in whether it’s possible to identify specific points of breakthrough and to refine and clarify the details.
Whether employing coaching techniques or a robust Root Cause Analysis (RCA), it’s essential to have the right group composition and sufficient time. Common RCA tools—fishbone diagrams, change analyses, and the “Five Whys” method—are well-known and widely used. The Five Whys is the simplest: keep asking “why?” to drill down into the underlying cause. You don’t literally have to ask five times; it could be fewer or more, depending on when you reach the root cause.
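As a quick illustration, here is the textbook chain often used to introduce the Five Whys (a manufacturing example commonly attributed to Toyota, unrelated to our sessions):
- Why did the machine stop? A fuse blew because of an overload.
- Why was it overloaded? The bearing was not sufficiently lubricated.
- Why was it not sufficiently lubricated? The lubrication pump was not pumping enough oil.
- Why was it not pumping enough? The pump shaft was worn.
- Why was the shaft worn? No strainer was attached, so metal scraps got in.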
In the second-phase discussions, each table might have around a dozen to nearly twenty participants. Imagine applying the Five Whys to every concern raised—that would be tremendously time-consuming. How do we determine the appropriate scope and depth of analysis for the first and second phases? How far should we delve into the issues?
As mentioned earlier, sometimes the discussion can feel like a loop—new clarifications and more refined directions appear, but further observation is needed to see if they take hold. For participants immersed in the process, it can feel static, while from a broader perspective, incremental progress is being made.
To truly align with policy processes and generate a sense of forward momentum, simply analyzing the problem may not suffice. Once a general direction and consensus emerge, it’s crucial to identify and address points where there’s a disconnect—where everyone seems to agree on the general direction, but people still feel something’s “off” or have a poor user experience. Discussing these gaps based on observed changes might help various stakeholders find their niche and address unsatisfactory aspects.
Bridging Gaps and Achieving Concrete Solutions
Finding and resolving these gaps is a key objective. In practice, however, the second phase can easily produce generic suggestions like “create a website,” “launch a platform,” or “make an app.” Agencies might respond that they’re already doing it or will do it, but is that truly what the public wants or needs? At this point, it’s necessary to clarify what problems participants hope to solve, what outcomes they want to achieve, and what resources are needed. Why do they believe this approach is the right solution, and what shortcomings exist in current channels that prevent expected results? Realistically, I recognize that during a live event, it’s hard to thoroughly clarify each stage.
Another observation: when agencies mention a particular project or study, participants often find it challenging to immediately pinpoint what’s problematic about it. To address this in the May 21 session, I placed some of the agencies’ written responses, as well as related materials uncovered during my research, directly into the shared online record. Although I gave participants a break to review these materials before continuing, the short time might not have been enough for them to fully digest the information and relate it to their personal experiences.
Why, even with relevant stakeholders and a facilitator present, does a second-phase Talk still struggle to produce a genuinely optimized policy user experience? From a user experience (UX) perspective, there are a few possibilities:
- Needs are not correctly identified.
- The proposed solution is not suitable.
- There’s a lack of steps to evaluate the solution’s effectiveness.
While the Talk itself isn’t obligated to handle monitoring and evaluation, what happens to these issues after the event? Who follows up on these assessments? Whether solutions proposed by agencies or participants are appropriate requires ongoing evaluation.
I believe the roles and positioning of the two phases of Let’s Talk, as well as the various types of participating teams, should be reconsidered. To broaden participation, we must lower entry barriers. At the same time, can we maintain the involvement of key stakeholders and ensure alignment with policy objectives? Doing so might require greater investment in preparing issue-related materials, offering guidance, or revising procedural arrangements.
Policy processes encompass problem analysis, planning, legitimization, execution, and evaluation. These steps cannot all be completed within a single Talk event. The Talk might best be viewed as a platform for clarifying issues or leveraging collective wisdom to inspire appropriate planning. Subsequent work must follow, or citizens will never see tangible results over time.
The Role of Issue Consultants
Though this article inevitably critiques aspects of the Talk, it’s undeniable that the Youth Development Administration (YDA) tries new methods each year. In this year’s second phase, in addition to agency representatives, first-phase participants, and the organizing team, an “issue consultant” role was introduced; last year, other stakeholders related to the issue were invited instead. Different compositions produce different “chemical reactions.” Of my two subgroups, the May 14 session included an issue consultant, while the May 21 session did not.
The issue consultant invited for this mental health topic was well-versed in both practical work and international trends, had collaborated with the relevant agencies, and understood both the phenomena raised by participants and the policy context, so they could offer diverse perspectives. Introducing this role was likely tied to the unique nature of mental health issues: such an expert can provide background knowledge and step in with professional assistance if a participant experiences distress during the session.
At the same time, from a deliberation standpoint, we must ensure that the issue consultant does not inadvertently become an authoritative figure overshadowing the process. Ideally, we should brief the consultant beforehand about the principles and spirit of the discussion. However, since this was an online event without a chance for a pre-session chat, the interaction dynamic was more challenging to manage in practice.
The Road Ahead
After engaging young people in policy issues directly affecting them through two phases of discussion and a final results-sharing forum, one inevitable question emerges: How will these broad directions and detailed implementation plans be presented and tracked over the years? If, after reviewing the records and transcripts, it becomes apparent that the same issues keep resurfacing and the outcomes remain similar—without any meaningful follow-up—people will naturally doubt whether such activities are truly effective or simply another form of briefing session. As mentioned previously, user experience processes include evaluation; with evaluation come new experiences, which then lead to identifying new needs. Although this seems like a cycle, it’s actually a forward-moving process, continually refining policies and narrowing the gap between practice and actual needs.
Policy development requires weighing numerous considerations and cannot be achieved overnight. Still, the platform established by Let’s Talk should at least ensure a consensus that goes beyond merely identifying general directions.
It’s true the YDA has experimented with solutions in the past, such as encouraging youth advisory committee members to attend the results-sharing sessions to gain an understanding of Let’s Talk, and then possibly using their visitation and proposal mechanisms to keep track of issue follow-ups. However, if the topics the teams care about aren’t aligned with the areas of expertise of the current advisory committee members, or if the committee members haven’t participated in the initial discussions, it remains unclear how they would communicate and collaborate with the teams, other stakeholders, and government agencies afterward.
In the short term, a more concrete approach might be to publicize the topics discussed in Let’s Talk during the results-sharing session, confirm points of consensus, and clarify how agencies will monitor and disclose progress. For instance, where can we find updates on initiatives under review? What’s the approximate timeline for the planned measures? The YDA’s role here is to construct and maintain this communication platform, ensuring its credibility, quality, and effectiveness. As for oversight, execution, and evaluation, these responsibilities should return to the respective government agencies handling their domains. The information gleaned from these processes is valuable feedback for both the agencies and the YDA, providing insights on how to refine the deliberation activity and how to optimize policy.
From the YDA’s perspective, the focus naturally revolves around expanding opportunities and channels for youth public participation. Still, many issues that concern youth are also relevant to a broader population and cut across multiple agencies. The YDA lacks oversight authority over other agencies and has limited resources, and balancing broader participation with robust deliberation quality and policy alignment demands even greater investment. Ideally, the agencies that send representatives would recognize the deliberative format and understand how it differs from the current practice of consulting only civil groups, organizations, and experts; the Talk could then become a forum where diverse stakeholders engage in real dialogue and co-creation, prompting adjustments and optimization of the policy framework and content.