Not Smarter, Just Trained: How I Sort Fact from Fiction

  • Writer: Heather McSharry, PhD
  • Sep 2
  • 26 min read

Updated: Sep 9

Summary

This week, Heather pulls back the curtain on how she figures out what’s true. From gut checks and trusted sources to the brutal training of grad school and the constant practice of science writing, she shares the habits that help her navigate conflicting claims—like the current confusion around COVID vaccine access. You don’t need a PhD to think like a scientist, just the right process.

Listen here or scroll down to read the full episode.


Full Episode

This week, instead of focusing on a single pathogen or outbreak, I want to tackle something that feels especially urgent right now. Because if you’ve been watching the news or scrolling through social media, you’ve probably seen the confusion—and the fighting—around COVID vaccine access.

Some public health leaders are saying one thing. Others are saying the exact opposite. And in the middle of all of that, people are left wondering who to believe. I’ve been in those conversations online myself over the past few days—calling out false claims, trying to explain what’s really happening, and taking some heat for it. So I guess I’m back, LOL.

So today I want to step back and share the process I use to figure out what’s true when the information is messy and the voices are loud.

Honestly, the short answer is: I look it up. But when I say that, I don’t mean I type something into Google and click the first link that pops up. What I really mean is that I bring a scientist’s mindset to it—a process of gut checks, trusted sources, and cross-checks. It’s not glamorous, but it works.

And that process matters now more than ever, because we’re living in a moment where even people you’re supposed to be able to trust are saying completely different things. Take the new COVID vaccine rules as an example. Former U.S. Surgeon General Jerome Adams recently warned that limiting the shots to “high-risk groups” leaves out a huge portion of adults who still face serious risks. Meanwhile, Marty Makary—who presents himself as an FDA voice—said on social media, “100% of adults in this country can still get the vaccine if they choose. We are not limiting availability to anyone.” Those two statements can’t both be true in the real world. And they leave regular people stuck in the middle, wondering who to believe.

That confusion isn’t accidental. It’s a feature of the information environment we live in right now. Part of it is just the messiness of science itself—recommendations change when the evidence changes. But another part is that there are powerful incentives for certain voices to create clarity where there isn’t any, or to downplay risk for political reasons, or to exaggerate it to get attention. And most of us, whether we’re scientists or not, are left trying to sort through the noise.

If you’re not steeped in the world of public health, it can feel impossible. You see a blue checkmark, a fancy title, someone with “FDA” or “MD” in their handle, and you think, “Okay, this person must know what they’re talking about.” But the truth is, credentials are not a guarantee of accuracy. Smart people can still be wrong. Experts can cherry-pick. And sometimes—this is the hardest part to accept—people with letters after their name can knowingly mislead you.

The Makary versus Adams example is a perfect snapshot of this. Both men are doctors. Both have served in positions of influence. Both are confident in their statements. And yet their claims flatly contradict each other. If you don’t have the training or the time to dig in, how are you supposed to figure out what’s real?

This is the part where I admit something: even for me, with years of training in science and statistics, I don’t always know at first glance. I still have to stop, step back, and run it through my process. The difference is that I have tools and habits that make me slower to take things at face value. I’ve built muscle memory around asking certain questions, checking certain sources, and looking for certain red flags. And that’s what I want to share in this episode — not the idea that I have all the answers, but the way I go about finding them.

Because here’s the thing: misinformation thrives in that gap between authority and truth. It thrives when someone can say something with enough confidence that people won’t stop to check. It thrives when people are too busy, or too overwhelmed, or just too tired to dig in. And I get that—we’re all exhausted. Who has time to fact-check every headline that comes across their feed? But that’s why developing even a little bit of the scientist’s mindset can make such a big difference. It’s not about memorizing every study or spending hours in PubMed. It’s about learning a few simple moves that help you pause, take a breath, and recognize when you’re being sold something that doesn’t add up.

Think of it like being a detective. Most of us aren’t Sherlock Holmes, but we can all learn the basics of spotting clues. A detective doesn’t walk into a crime scene and accept the first story someone tells them. They look at the evidence. They compare accounts. They notice what’s missing, not just what’s there. That’s the mindset I use when I decide what’s true. And it’s something anyone can practice, even without a science degree.

So when I hear two doctors making opposite claims about vaccine access, I don’t panic, and I don’t pick a side just because one sounds reassuring. I treat it as a mystery to solve. I do a gut check: Who are these people? What’s their track record? What might they have to gain by framing things this way? Then I start looking for evidence—not just one article or one study, but patterns across multiple sources. And slowly, the fog lifts.

This doesn’t mean I always get to perfect certainty. Science rarely offers that. But it does mean I can get to a place of reasonable confidence. And that’s the goal—not absolute truth carved in stone, but enough clarity to act wisely in a world that often wants us confused.

Gut Check

The first step in my process for figuring out what’s true is simple: I do a gut check. Before I look anything up, I ask myself a few quick questions. Who’s saying this? What are they claiming? And how does it feel in my gut—is it too good to be true, or too scary to be true? Is it being presented as absolute certainty, when real science usually lives in shades of gray and probabilities? This step doesn’t give me the final answer, but it’s where I figure out how skeptical I should be before I even start digging.

Let me give you some examples.

Recently, my sister shared something on Facebook that set off every alarm bell in my head. It was a story claiming that tetanus vaccines cause sterility in African women. Now, if you’re not used to evaluating these kinds of claims, I understand how it could look believable. It had the hallmarks of something “scientific”: medical-sounding words, statistics thrown in without sources, and the implication of a global conspiracy. But when I saw it, my gut check lit up. Who’s making this claim? Does it make biological sense? Why would it only be tetanus vaccines, and only in African women? That kind of specificity is usually a red flag. Diseases don’t discriminate geographically in that way, and vaccines don’t either. My gut was telling me this was a fear-based story built to target vulnerable populations—and I knew I’d be looking for strong, credible evidence if I wanted to counter it.

Another recent one: people telling me that the COVID vaccines must have skipped “all the important standard safety testing” because they were developed so quickly. That’s a claim that sounds scary if you don’t know how vaccine trials work. But my gut check told me it was oversimplified. I’ve studied vaccine development enough to know the difference between speeding up paperwork and bureaucracy versus skipping actual safety trials. My first thought wasn’t, “Oh no, they cut corners.” It was, “Okay, where’s the misunderstanding here?” That gut instinct helps me frame the search for the truth.

Then there are the more everyday, almost comical examples that we all see floating around. Miracle foods that “burn fat while you sleep.” Essential oils that “cure cancer.” Superfoods that “boost immunity” so much you’ll never get sick again. These are the kinds of claims that prey on hope, and they usually sound too good to be true. That alone is reason to pause. Science almost never hands out miracle results. If you see a headline that reads like a sales pitch, that’s your first clue to dig deeper.

And then you’ve got the scary headlines, the ones that make you feel a jolt of fear. “This common household spice causes cancer.” “Your cell phone is giving you brain tumors.” “Scientists discover deadly virus hiding in your kitchen.” These are designed to grab your attention by freaking you out. And again, my gut check helps me slow down. Does it make sense biologically? Is it based on a single small study in mice that’s being overblown into a human health crisis? Those are the questions that come up before I even touch a search engine.

The important thing to understand here is that gut check doesn’t equal truth. My instincts don’t decide the science. What they do is keep me from being swept up in the emotional manipulation of the claim. Misinformation spreads because it bypasses logic and goes straight for feelings: fear, hope, outrage. My gut check is a way of hitting pause on that emotional surge.

And honestly, I’ve been fooled before too. I’ve seen a headline and thought, “Whoa, that sounds huge!” only to later realize it was based on flimsy evidence or completely misrepresented. So I don’t rely on my gut alone. What I rely on is the habit of noticing when my gut says, “Hold up, this sounds fishy.”

The point isn’t to make me cynical about everything, it’s to make me cautious enough to ask questions. And I’ll be honest: I don’t always get this right in real life. Sometimes when I see misinformation online, I get frustrated and react instead of responding thoughtfully. It’s hard not to when the stakes feel so high. But I try to remind myself that most of the time, the people sharing these things aren’t acting out of malice. They’re confused, or scared, or trying to make sense of something complicated. So I’m working on slowing down, asking questions, and engaging with curiosity instead of just exasperation. If my sister shares a scary vaccine claim, I want to pause and ask: Who’s behind this? What’s their motivation? Where did this information originally come from? If someone says the COVID vaccines skipped safety steps, instead of rolling my eyes, I can ask: Okay, what specific steps are they claiming were skipped? Because nine times out of ten, the problem isn’t malice, it’s misunderstanding.

And you can do this too. You don’t need advanced training to pause and ask yourself: Who’s making this claim? Do they have expertise, or do they just have a platform? Does this claim sound like a miracle cure or a doomsday warning? Is it written with absolute certainty, or does it acknowledge complexity? These are simple gut check questions anyone can use to flag what needs more investigation.

It doesn’t mean you’ll know the answer right away. I didn’t know right away what the story was behind the tetanus vaccine rumor, but I knew enough to recognize that those claims deserved a closer look. And that’s what gut check is about—it’s the first filter that keeps us from swallowing misinformation whole.

Looking It Up

Once I’ve done a gut check, the next step in my process is to look it up. And I know that sounds almost insulting in its simplicity. Like, of course you look it up. But here’s the thing: not all “looking it up” is equal. Where you start, what you prioritize, and how you weigh what you find makes all the difference.

I think of my sources in layers.

At the top layer—the gold standard—are what I call tier one sources. These are peer-reviewed studies, official data from government health agencies, and recommendations from bodies like the World Health Organization.

Tier two sources are those that responsibly interpret tier one for a broader audience. Reputable science journalists, major medical organizations, academic centers. These are helpful when you don’t have the time or training to read raw data.

This podcast and my blog fall into that tier. I’m not running the studies myself—but I am trained to read them critically, explain what they actually show, and put them into context without oversimplifying. My job here is to bridge the gap: to take what tier one research says and make it understandable and useful for you, while always pointing back to the evidence itself.

And then there are tier three sources: blogs, YouTube rants, TikTok videos, Facebook posts. These can be entertaining, and sometimes they point you to an actual study, but they’re not where you want to stop. If tier one is the cookbook and tier two is the food blog that tested the recipe and explained the steps, tier three is like the random influencer who says, “I just know this spice will cure all your problems.” Fun? Maybe. Evidence? Not so much.

Let me walk you through two real examples of how I “look it up.”

The first example is one where the claim holds up under scrutiny—but only in a very specific context: vitamin A and measles. This one comes up because there are always claims floating around about vitamins being miracle cures—vitamin C for colds, vitamin D for everything under the sun, and so on. Most of the time, those claims don’t stand up to the evidence. But here’s where it gets interesting: in children who are already sick with measles, in places where vitamin A deficiency is common, supplementation really does reduce the risk of severe complications.

But the problem is that messaging in the U.S. from RFK Jr. claims vitamin A actually treats measles. So here’s how I approach it. My gut check says, “Okay, I’ve heard whispers about this before, let’s see what the evidence actually says.” So I go to my tier one sources. PubMed is a great starting place. I type in “vitamin A measles randomized controlled trial.” And guess what? I find not just one study, but a whole body of them, many from the 1980s and 1990s, showing that giving children with severe measles two doses of vitamin A significantly reduces their risk of complications like pneumonia, diarrhea, and even blindness and death. The World Health Organization now actually recommends it as standard treatment in those cases.

[NOTE: If you go to the PubMed search results linked above for measles and vitamin A, the first paper listed says vitamin A treats measles whether or not there is a vitamin A deficiency. That paper is an outlier and not representative of the consensus of the other studies listed. Look beyond it for the whole story.]

That’s what good evidence looks like. It’s been studied across multiple populations. It’s been replicated. It has biological plausibility—vitamin A is important for maintaining healthy epithelial tissues, like the lining of your respiratory tract, which measles tends to damage. And the recommendation comes from a global body like WHO, not just one flashy headline.

But here’s the problem: in the U.S., people like RFK Jr. and others have taken that grain of truth and twisted it into something false and harmful. They’ll say “vitamin A treats measles” as if that’s universally true, or worse, they’ll claim it’s a better, safer alternative to vaccination. That’s not what the evidence shows. Vitamin A supplementation can make a critical difference for malnourished children in resource-limited settings—but it doesn’t prevent or treat measles, it doesn’t replace vaccination, and it’s not generally relevant in countries like the U.S., where vitamin A deficiency is rare.

So this is a case where “looking it up” confirms that yes, sometimes the simple claim is partly true—but only in a very narrow context. And outside that context, the same claim can become dangerous misinformation. Children in Texas were hospitalized earlier this year with vitamin A toxicity because parents of unvaccinated kids treated them with vitamin A.

Now, let’s compare that with another example: COVID vaccines and myocarditis.

This is a claim that makes a lot of people understandably nervous. They’ve seen headlines about young men developing myocarditis—inflammation of the heart muscle—after mRNA vaccination. Some posts blow this up into “the vaccines are giving everyone heart problems.” Others downplay it completely, as if it doesn’t exist. So how do you sort it out?

First, the gut check: it’s not inherently implausible. Vaccines stimulate the immune system, and myocarditis can be triggered by immune responses. So it’s worth looking into seriously.

Then I go to tier one sources. What do the peer-reviewed studies say? And here’s what I find: yes, there is a very small but real increased risk of myocarditis after mRNA vaccination, especially in young men after the second dose. But I also see that the cases are mild, people recover quickly, and the risk of myocarditis from actual COVID infection is much higher than from the vaccine.

But how do the other tiers stack up? Tier two sources, like me and reputable medical journalists, explain the numbers, comparing risks side by side. A pre-RFK CDC advisory report might be dense, but a well-written article in STAT News or Nature translates it into plain English without losing accuracy.

Tier three, on the other hand, is a mess. On one side, viral posts claim “everyone is dropping dead of heart problems”—no nuance, no context. On the other, you’ll see people insisting “there is zero risk whatsoever”—which is also not true. Both extremes miss the point. The truth is in the messy middle: there is a risk, it’s rare, and it has to be weighed against the much higher risks of the disease itself. There was even a paper that anti-vaxxers used to claim the risk was much higher and more severe, but the authors apparently miscalculated their analyses, and the paper was retracted.

This is where “looking it up” really matters. If you stop at tier three, you’ll either be terrified or falsely reassured. If you make it to tier one and two, you get the full picture—risk exists, but so does protection, and the balance still overwhelmingly favors vaccination.

What ties the vitamin A example and the myocarditis example together is the way unreliable messaging fills the gaps—often from people with an agenda, or from voices who never actually look at the tier-one evidence. With vitamin A, a narrow, evidence-based finding—that supplementation helps malnourished children already sick with measles—gets inflated into a sweeping claim that vitamin A can replace vaccines. With myocarditis, a small, real risk associated with mRNA vaccination gets blown up into a universal danger on one side, or dismissed entirely on the other.

In both cases, the core problem is the same: instead of helping people understand the actual evidence, bad actors use certainty and distortion to push a narrative. The result is confusion, fear, and mistrust. And that’s exactly why it’s so important to not stop at a single headline or a single voice—but to cross-check, weigh evidence across multiple sources, and look for consensus.

And this is why I emphasize layers of sources. Because when I “look it up,” I don’t just mean clicking whatever shows up first in my feed. I mean deliberately choosing where I get my information, and how I weigh it.

That’s not something most of us were taught to do in school. We were taught to memorize facts, not to evaluate claims. But in today’s world, the skill of looking something up well—not just quickly—is just as important as the facts themselves.

Cross-Checking

Once I’ve done a gut check and started looking things up, the next step is cross-checking. This is where I stop myself from leaning too hard on any one source and instead look at the broader pattern.

Think of it like a courtroom. If you only had one witness, you might get one version of the story. But if you bring in ten witnesses from different angles, and they all roughly agree, you start to see what really happened. That’s how science works too. One flashy study might grab headlines, but until it’s replicated and confirmed across multiple groups, it doesn’t tell the whole story.

Let’s look at a couple of examples.

Case Study A: Thimerosal and Autism

Thimerosal is a preservative that’s been used in some multi-dose vaccine vials. For decades, it’s been accused of causing autism—a claim that’s been debunked again and again. When I cross-check, here’s what I see:

  • Tier one sources, like the pre-RFK CDC and WHO, make it clear: thimerosal contains ethylmercury, which the body clears quickly, unlike the methylmercury found in fish. Large studies have found no link to autism. Autism rates didn’t decline after thimerosal was removed from nearly all U.S. childhood vaccines. I have an entire episode on this.

  • Tier two sources, like pediatric associations and medical journalists, explain this clearly and provide context—that removing thimerosal was mostly a precautionary move to reassure the public.

  • Tier three, though, is still full of posts (this has been fact-checked, yay!) and blogs claiming it’s dangerous. They cherry-pick, misinterpret, or ignore decades of evidence.

Cross-checking shows me where the consensus lies: thimerosal isn’t the cause of autism. But it also shows me how old misinformation can linger, recycled endlessly online.

Case Study B: Early COVID Mask Guidance

If you’re old enough to remember Ross shouting ‘PIVOT!’ while wedging a couch up a stairwell, you know sometimes communication fails spectacularly. That’s what happened with early mask messaging. And this one hits close to home, because I was personally involved in trying to communicate about masks at the beginning of the pandemic—and I didn't do a great job.

In the earliest days, public health agencies were downplaying masks for the general public. Part of that was to conserve supplies for healthcare workers. Another part was the assumption—at that time—that respiratory spread was mostly from people who were visibly sick, not from people with no symptoms.

When I talked about it back then, I tried to explain why officials were saying what they were saying. My messaging wasn’t “masks don’t work,” but it wasn’t quite right either. I focused heavily on the idea that N95s are the masks that truly work—but only if they’re fit-tested and worn correctly. I said that most people wouldn’t have access to fit-tested N95s, so they would think they were fully protected when they weren’t. That was my biggest concern at the time—I didn’t want people wearing ill-fitted N95s to assume they had 100% protection and then unknowingly put themselves at risk. Looking back, though, what I didn’t emphasize enough was that surgical masks and even cloth masks do provide meaningful protection, especially when lots of people use them consistently. I should have made it clear that while N95s offer the best protection, other masks are still far better than nothing.

I ended up writing a blog post later, digging into why my messaging had failed. It wasn’t because the science was fake, and it wasn’t because people didn’t care about safety. It was because my explanation focused so much on the risks of wearing N95s incorrectly that I left out the benefits of other kinds of masks. My intentions were good—I wanted people to understand that no mask, especially one worn poorly, was 100% protection. But by leaving out the bigger picture, I unintentionally added to the confusion.

For me, that was a humbling lesson in communication. Sometimes the error isn’t what you say, it’s what you don’t say. I had the evidence in front of me, but I didn’t frame it in a way that helped people make practical, safer choices. That experience taught me that clarity and balance are just as important as accuracy. Because when you leave gaps, people will fill them in—and not always in the right direction.

And that’s where cross-checking comes back in. Cross-checking isn’t only about weighing what other people say against the evidence—it’s also about checking myself. Did I look at the full range of sources, or did I let one part of the story dominate? Did I communicate the nuance, or did I oversimplify? When I wrote that later blog post on masks, it was my way of applying cross-checking inward, admitting that I had left out part of the picture.

Cross-checking is what turns scattered information into a pattern. It shows you not just what one paper or one official says, but whether the weight of evidence points in the same direction. And it forces you to slow down and ask: is this change because the science shifted, or because the context shifted, or both? For me, the mask example was proof that even with the best intentions, you can miss the balance—and that’s why the discipline of cross-checking, outward and inward, matters so much.

My Relevant Background & Training

By now you might be thinking: okay, Heather, it’s fine for you to talk about gut checks and looking things up and cross-checking. But you have a PhD—you’ve been trained in this stuff. Doesn’t that make it easier for you than for the rest of us?

And yes, in some ways it does. And it's why I created this podcast. But let me clarify something important: having a PhD doesn’t make me smarter than anyone else. What it does make me is trained. Highly trained. Skilled in ways that most people simply haven’t had the opportunity—or the desire, let’s be honest—to practice day in and day out.

One of the most important parts of that training was something called journal club in grad school. If you’ve never been in one, imagine this: a group of graduate students and faculty sitting around a table. Each week, one person is responsible for presenting and dissecting a recent scientific paper. And I don’t mean skimming the abstract and reading the figures. No. You’re expected to read every single word of that paper. To think about why the authors chose those methods, why they framed the data that way, what the statistics really mean.

Journal club trained me to read with suspicion—not because scientists are dishonest, but because science is complicated. Words matter. If an author wrote “this proves,” my red flag went up. Nothing in science is ever “proven.” At best, you show evidence under certain conditions, in certain models, that supports a hypothesis. Over time, evidence accumulates and consensus builds, but the word “prove” is a dangerous oversimplification. So if I saw it, I dug deeper into the methods, looking for where they might be overstating their case. That habit, drilled into me week after week in journal club, still shapes how I read today.

Then there were progress presentations. Honestly? They were brutal. In my program, you had to get up in front of a room full of professors, postdocs, and fellow grad students to present the state of your research. And the culture wasn’t “supportive feedback.” The culture was: let’s see if we can find the hole in your logic. Let’s see if we can spot the weak link in your methods, your analysis, your conclusions. It was survival of the sharpest.

At the time, it felt awful. People weren’t shy about making you feel like you’d missed something obvious. But here’s what it did: it trained me to think critically before I stepped into that room. I learned to design experiments with the right controls up front, because if I didn’t, someone would call me out on it. I learned to think about what statistical test was appropriate before I started collecting data, not afterward when I was scrambling to make sense of results. It was humbling, sometimes humiliating, but it forced me to sharpen my reasoning to the point where I could anticipate the criticisms before they were spoken.

That training is why I read and evaluate claims the way I do now. I can’t just look at a graph and take it at face value. I’m thinking: what’s the sample size? What are the error bars? Did they randomize? What are the possible confounders? It’s not that I’m smarter than anyone else. It’s that I was put through a process that demanded I learn those skills or be torn apart in public.

And here’s a story that’s stayed with me ever since my undergrad years. I was a senior, and I had a professor who taught genetics—one of those professors who saw more in me than I saw in myself. He let me sign up for a graduate-level virology course even though I was still an undergrad. And that class changed everything for me. I knew then that I wanted to go to grad school.

That same professor gave me a piece of advice I’ll never forget. He said: “Grad school is just standing up in front of people who tear you down. And when they can’t tear you down anymore, you graduate.” He wasn’t wrong. That’s exactly what it felt like. And as harsh as that sounds, it also captures what grad school is designed to do: it’s a training ground for critical thinking, for defending your ideas under pressure, for learning to stand firm when your evidence is strong and to back down when it’s not.

That’s the difference a PhD makes. Not intelligence, not superiority. Training. Years of reading critically, of being challenged, of designing and redesigning experiments until they could withstand scrutiny. Years of hearing “prove it” and learning that you never really can—you can only show evidence and keep building the case.

And that training hasn’t gone away, even though I’ve been out of the lab for years. My career as a science writer has kept those critical reading and thinking skills sharp. In fact, it’s broadened them. Because in my job, I don’t just stick to one narrow field; I pivot constantly between immunology, medicinal chemistry, virology, microbiology, pharmacology, structural biology, and molecular biophysics, to name a few. Each has its own jargon, its own style of evidence, its own pitfalls. And I’ve had to learn how to navigate all of them quickly and accurately.

It’s also taught me to write for very different audiences. Some days, I’m writing for professional reviewers who will pick apart every technical detail. Other days, I’m explaining complex concepts to non-scientists—like my brother, who was a farmer. I’ve had to distill ideas down to where someone with no science background can still follow the logic and trust that I’ve done the homework. At least I hope I can—that’s the challenge I set for myself every day.

So when I say I’ve been trained, this is what I mean. Training in critical reading, in skepticism, in anticipating flaws. Training in shifting between audiences. Training in distilling complexity into clarity. That’s what I bring to this podcast. Not certainty, not perfection, not superiority—but years of practice in thinking hard about evidence and how it’s communicated.

And here's the truth... you bring the same thing to your own work. You have training, experience, and skills that I don’t—skills that make you effective in your field. If I needed accurate information about your line of work, I wouldn’t try to wing it on my own. I’d ask someone like you, because you’ve built the expertise to answer those questions well. That’s all I’m doing here: applying my training where I have it, and sharing the process here, so you can use it too.

Why This Matters

So why does all of this matter—gut checks, looking things up, cross-checking, all the training I’ve had? Because at the end of the day, deciding what’s true isn’t just an abstract exercise. It’s about who you choose to listen to, and how you decide which voices deserve your trust.

That’s not a trivial question. Over the last few days, I’ve gotten a very personal reminder of that. I’ve been posting online about the changes in COVID vaccine access—trying to make sense of what’s happening, and correcting people like Marty Makary when they say things that simply aren’t true. And the backlash has been real.

I’ve been called a shill for Big Pharma. I’ve been accused of lying. People have come at me with the same debunked talking points I’ve seen recycled a thousand times. And at one point my frustration got the better of me and I snapped back at someone and told them to stop lying. That person responded as you’d expect, and that’s fair. For a moment, I thought about trying to engage thoughtfully. Then I stopped and asked myself: is this even a real person? Or is it a bot, or someone who’s never going to argue in good faith? I’ll never know. What I do know is that it wasn’t worth my energy. So I stepped back.

I’m telling you this because I want to be transparent. I don’t always get it right. I lose my temper sometimes. I react instead of responding. But here’s what I keep coming back to: the reason I do this podcast, the reason I speak up online, is because people need accurate information to make decisions about their health. And in an environment full of loud, confident, often misleading voices, it matters who you listen to.

Now, let me be clear: I’m not saying you should only listen to me. I’m not the sole gatekeeper of truth. What I’m saying is that you should pay attention to how people are talking about science. Are they acknowledging uncertainty, or pretending to have all the answers? Are they relying on consensus, or cherry-picking outliers? Are they transparent about their process, or just giving you a hot take that confirms what you already believe?

That’s why I’ve spent so much of this episode walking you through my process. Not because I think it’s perfect, but because I want you to see behind the curtain. When I evaluate a claim, I’m not leaning on blind faith. I’m checking sources, cross-referencing, weighing evidence, and trying to be transparent about my own biases and limitations. That’s what makes me comfortable asking you to trust me as a science communicator—not because I have all the answers, but because I’m showing you how I get to them.

And this is where my recent social media experience ties in. The people coming after me? They don’t show their work. They post confident statements—“the vaccines skipped all safety tests,” “100% of adults can get the new shots”—without backing them up. Or they link to tier-three sources: blogs, YouTube videos, Facebook posts. They often use certainty as a weapon. They make their claims sound simple, final, undeniable. And that feels good, because certainty is comforting.

But science isn’t simple or final. It’s complex, it’s evolving, and it requires humility. When someone tells you they have the one simple answer to a complicated scientific question, that’s when your gut check should go off. A great example right now is RFK Jr. saying he has the answer to autism and will announce it this month.

This is why who you listen to matters. Not because one person has all the answers, but because some people are honest about uncertainty and evidence, and others aren’t. Some people show their work. Others just sell you a feeling.

Telling the truth often means saying, “It’s complicated,” or “We’re not sure yet,” or “The balance of evidence says this, but we’re still learning.” Meanwhile, the loudest voices are saying, “It’s all a lie!” or “It’s all perfectly safe!” Those extremes get the clicks. The messy middle doesn’t.

But the messy middle is where the truth actually lives. That’s where science operates. And if I can help more people get comfortable living in that space—with nuance, with probabilities, with evolving evidence—then I’ve done something worthwhile.

So when you see me online, calling out lies about vaccine access or correcting misinformation about myocarditis, that’s what I’m trying to do. Not to be the only voice, not to “own” the truth, but to model what it looks like to evaluate claims carefully, critically, and transparently. To say, “Here’s what we know, here’s what we don’t, here’s where the consensus is, and here’s how I checked.”

And if you can take even a little bit of that process with you into your own daily life—into the next headline you see, or the next scary post your family member shares—then this episode has done its job.

Conclusion

So here we are. We’ve walked through gut checks, looking things up, cross-checking, and I’ve shared some of the background that shaped how I do this work. But let’s be honest: this episode isn’t really about me. It’s about you — about how any of us can navigate a world full of competing claims and confident voices, and how to figure out what’s true enough to guide our decisions.

Because we all know what it feels like to be overwhelmed. To scroll through your feed and see five different “experts” saying five different things. To watch the news and feel like the ground is constantly shifting beneath you. To hear a friend or family member confidently share something you know isn’t right, but not be sure how to respond. That’s the reality of our information environment. And it’s not going to get simpler.

What can get simpler, though, is the way you approach it. You don’t need to be a scientist, you don’t need a PhD, you don’t need access to journals. You just need a few habits—a way of slowing down, checking your instincts, choosing your sources carefully, and looking for consensus rather than certainty.

ree

Let me boil it down into a checklist. Call it “Heather’s Steps for Figuring Out What’s True”:

  1. Gut Check: Pause before reacting. Ask who’s making the claim, whether it sounds too good or too scary to be true, and whether it’s being presented with more certainty than science usually allows.

  2. Look It Up — Carefully: Start with the most trustworthy sources you can. Peer-reviewed research, government health agencies, the World Health Organization. If that’s too technical, go to reputable science journalists or professional organizations. Don’t stop at viral posts or YouTubers.

  3. Cross-Check: Don’t lean on just one headline, one post, or one talking head. See if multiple reliable sources line up—things like the CDC, WHO, major hospitals, or reputable science communicators, like me. Check the date—is the information current? Look at the context—is it talking about your country, your age group, your situation? Ask whether the source has something to gain by framing it a certain way. If several trustworthy sources are saying the same thing, that’s a good sign you’re on solid ground.

  4. Watch the Language: Science doesn’t “prove.” It shows, it supports, it suggests. Be suspicious of oversimplified language that promises too much certainty.

  5. Look for Consensus: The strength of science comes not from one breakthrough paper but from dozens, even hundreds of studies pointing in the same direction. That’s where confidence comes from.

That’s it. Simple, but not easy. And the more you practice it, the more natural it becomes.

Now, let me circle back to where we started—the mess around COVID vaccine access and the flood of contradictory claims. If you’ve listened to my mini episode—Booster Dose: New COVID Vaccine Access Explained—you know I dug into the details there: what changed, what the rules mean, and why some of the public statements we’ve heard don’t line up with reality. This week’s episode is the companion piece. Instead of just explaining what’s happening, I’m showing you how I decide what’s true when the information is confusing and people in authority are saying completely different things.

And here’s the point: even for me, with years of training, this is hard. I still have moments when I want to react instead of respond. Having a process doesn’t make me perfect — but it does give me something solid to lean on when the noise gets loud.

And that’s what I want for you, too. Not to hand you a list of facts carved in stone, but to give you a way to think like a scientist: pause, check, cross-check, look for consensus. It’s not glamorous, but it works.

So here’s my challenge: this week, when you see a headline that makes you gasp, or a post that makes you feel outrage, or a claim that seems too good to be true — stop for just a moment. Do the gut check. Look it up, but carefully. Cross-check. Ask where the consensus really lies. And see what happens.

It may feel slow at first. It may feel awkward. But the more you practice, the more natural it becomes. And before long, you’ll find yourself navigating the noise with more clarity, more confidence, and maybe even a little more peace.

Because deciding what’s true isn’t just a scientist’s job. It’s everyone’s job. And the more of us who practice it, the harder it becomes for misinformation to take root.

Thanks for checking in with Infectious Dose. I’ll be back next week with more science and more stories, and until then, stay healthy, stay informed, and spread knowledge, not disease.
