
🧬 Who Is Most at Risk for Vaccine Injury—And Why?
Posted by IMA-Helen on July 8, 2025 at 7:52 am EDT
🗓️ Wednesday, July 9 | 🕖 7:00 PM ET
📍 Watch live on Zoom or on X (Twitter)
A new peer-reviewed study published in Advances in Virology explores why some individuals are more vulnerable to vaccine-induced spike protein harm.
This week, host Dr. Paul Marik is joined by:
👨⚕️ Dr. James Thorp
🔬 Dr. Jack Tuszynski
📊 Matthew Halma
for a vital discussion on:
Key factors driving individual vulnerability
Mechanisms of spike protein injury
How to diagnose and treat post-vaccine illness
📣 Don’t miss this important conversation.
Please add your questions below so we can pass them along to our doctors, and of course, join us live!
IMA-Greg replied 1 month, 2 weeks ago · 3 Members · 10 Replies
-
10 Replies
-
I loved this talk and I’m so happy to see the bioinformatics angle come into play. The paper has a slew of SNPs listed and I am grateful for this. I’ve done full genome sequencing at Nebula and can now start digging with these SNPs in hand.
DNA sequencing is still something the medical profession has not yet honed the skills to make full use of. Yes, there are genetic, epigenetic, and environmental factors involved in health, and you often can’t make a health prediction based on genetics alone. But this is a new area, one of personalized medicine. Why does person X have problems with something while person Y does not? If you can get to the root of the matter, then you can find new, personalized, and innovative solutions. This is different from randomized placebo-controlled clinical trials designed to produce a one-size-fits-all drug. This is understanding individual differences and navigating the mechanisms rather than taking a stochastic approach to medicine.
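Since the paper lists its SNPs by rsID, a first pass over a personal genome file can be done with a few lines of Python. This is only a sketch: the rsIDs and file path below are placeholders, not values from the study, and files exported from Nebula may need extra handling.

```python
import gzip

# Placeholder rsIDs -- substitute the SNPs listed in the paper.
SNPS_OF_INTEREST = {"rs1234", "rs5678"}

def find_snps(vcf_path, wanted):
    """Yield (chrom, pos, rsid, ref, alt) for variants whose ID is in `wanted`."""
    opener = gzip.open if vcf_path.endswith(".gz") else open
    with opener(vcf_path, "rt") as fh:
        for line in fh:
            if line.startswith("#"):  # skip VCF header lines
                continue
            chrom, pos, rsid, ref, alt = line.rstrip("\n").split("\t")[:5]
            if rsid in wanted:
                yield chrom, pos, rsid, ref, alt

# Example (hypothetical path):
# for hit in find_snps("my_genome.vcf.gz", SNPS_OF_INTEREST):
#     print(hit)
```

For anything beyond a quick lookup, a dedicated library would handle multi-allelic sites and genotype fields properly; this is just the digging-with-SNPs-in-hand starting point.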
-
As always, @jrgerber, we really appreciate your reply. I’m curious: do you have any thoughts about the risks? For example, how do we guard against misinterpretation, data privacy issues, or even over-reliance on genetic determinism?
-
I love the thoughts. I asked Copilot to dive into this:
Whole genome DNA sequencing holds immense promise for personalized medicine, but it also raises a constellation of concerns—ethical, technical, and societal. Here’s a breakdown of the most pressing issues:
🧬 Ethical and Privacy Concerns
- Informed Consent: Patients may not fully understand the scope of what sequencing reveals, including incidental findings unrelated to the original reason for testing.
- Data Privacy: Genomic data is deeply personal. If compromised, it could lead to discrimination or misuse by insurers, employers, or even governments.
- Autonomy and Return of Results: Deciding which findings to return—especially those with uncertain significance—can be ethically fraught.
🧠 Interpretation and Clinical Utility
- Uncertain Results: Many genetic variants have unknown implications, making it difficult to interpret their medical relevance.
- Overdiagnosis and Overtreatment: Sequencing may reveal predispositions that never manifest, potentially leading to unnecessary anxiety or interventions.
- Lack of Standardization: Variability in sequencing platforms and analysis pipelines can affect accuracy and reproducibility.
💰 Cost and Accessibility
- Hidden Costs: While sequencing itself is becoming cheaper, follow-up tests and counseling can be expensive.
- Healthcare Disparities: Access to sequencing and its benefits may be unevenly distributed, potentially widening existing gaps in care.
🔐 Cybersecurity Risks
- Genomic Hacking: Researchers warn that sequencing systems could be vulnerable to cyberattacks, including synthetic DNA-encoded malware.
- Data Sharing Risks: Open-access genomic databases, while valuable for research, pose risks for re-identification and misuse.
🧪 Technical Challenges
- Data Storage and Analysis: The sheer volume of data generated requires robust infrastructure and advanced bioinformatics tools.
- Evolving Reference Genomes: As our understanding of the human genome evolves, previously stored data may need reanalysis to remain clinically relevant.
---
If you’re curious, we could explore how these concerns intersect with philosophical ideas about identity, autonomy, or even the ethics of predictive medicine. Want to go deeper?
-
It’s a lot to unpack for sure. I worked in computer security from the late 90s to the early 2000s, and since then I have been working as a software engineer with privacy as a primary part of my job. That said, I will make the assertion that nothing can ever be completely secure. Security and usability are always at odds with one another. Do I think a reasonable amount of security can be applied to personal DNA sequence data? Yes, but this is quite a discussion.
One of the problems that copilot alluded to is data storage. We’re talking about around 300 gigabytes. This data isn’t easily downloaded; but if you do download, where do you want to store it? Do you trust the website that is storing your data? This is absolutely a headache for the non-techie person.
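For what it’s worth, the ~300 gigabyte figure is in the right ballpark for raw reads. A back-of-envelope check (the assumptions below are mine, not figures from the thread):

```python
# Assumptions (mine): ~3.1 billion bases, 30x sequencing depth, and roughly
# 2 bytes per sequenced base once per-base quality scores are counted.
genome_bases = 3.1e9   # haploid human genome length
coverage = 30          # typical whole-genome sequencing depth
bytes_per_base = 2     # base call + quality score, uncompressed FASTQ

raw_bytes = genome_bases * coverage * bytes_per_base
print(f"~{raw_bytes / 1e9:.0f} GB uncompressed")  # ~186 GB
```

Read names and metadata push the real uncompressed size higher, while compression pulls it down, so a few hundred gigabytes is a fair working number.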
I think we’ve all gotten those emails on multiple occasions that say, “Our website was compromised.” If you need any convincing that no website is completely secure, have a look at the “Rowhammer” attack as an example of just how sophisticated attacks have become, and how far behind defenses still are. Still, we’ve already opened Pandora’s box: we access our medical records and bank online nowadays. The difference between medical records and banking is that in banking we have accounting to track transactions… and our money is insured, to a degree. I think you can see where I’m going with this.
The internet itself is by design insecure. Let me give an example. The military came up with the concept of white, black, and grey systems. Black systems stand alone and are not allowed to be connected to the internet. White systems are connected to the internet. Grey systems can connect to white or black but never at the same time and are not left connected to the internet. Exposure is limited. If you have data that you really care about, you want it on a black system and you want to only allow grey systems to connect to it.
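The grey-system rule can be stated as a tiny policy check. This is my own toy simplification of the idea, not an official specification:

```python
def grey_may_connect(target, active_links):
    """A grey system may link to a white or a black system, but never both at once."""
    if target not in {"white", "black"}:
        return False
    other = "black" if target == "white" else "white"
    return other not in active_links  # refuse if the opposite side is still attached

print(grey_may_connect("black", set()))      # True: no internet-facing link active
print(grey_may_connect("black", {"white"}))  # False: still attached to the white side
```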
When speaking about privacy we use the terms “linkable” and “linked”. Copilot again:
Certainly! When discussing data privacy, the terms “linkable” and “linked” refer to how information can be associated with specific individuals or datasets. Here’s a breakdown:
---
🔗 Linkable Data
Definition:
Data that can potentially be connected to an individual, even if it isn’t directly identifying on its own.
Examples:
- Partial location history
- Browser fingerprint
- IP address
Why it matters:
While linkable data may seem anonymous at first glance, it can become identifying when combined with other sources. For example, knowing someone’s workplace, combined with the time they access a specific website, might reveal their identity.
---
🧷 Linked Data
Definition:
Data that has already been associated or joined with other information to identify a specific individual.
Examples:
- Your name attached to purchase history
- An email address tied to medical records
- Social media profile connected with browsing activity
Why it matters:
Linked data is typically not anonymous and presents higher privacy risks because it’s already been assembled in a way that reveals personal details.
---
🧠 Quick Comparison
| | Linkable Data | Linked Data |
| --- | --- | --- |
| Identifies a person on its own | No | Yes |
| Privacy risk | Indirect, via combination with other data | Direct and immediate |
| Example | IP address | Email address tied to medical records |
Both play key roles in privacy discussions, especially around data anonymization and re-identification risks. Regulations like the GDPR and CCPA often distinguish between them when setting standards for user consent and protection.
Would you like to explore how these distinctions play out in AI systems or in public datasets? I suspect you might enjoy the deeper nuances.
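To make “linkable” concrete, here is a toy example: neither table below names the website visitor, but joining on workplace and approximate time singles people out. All records are invented for illustration.

```python
# Two logs, neither of which names the web visitor.
site_log = [
    {"time": "08:31", "source": "Hospital-Y", "page": "rare-disease-info"},
    {"time": "12:05", "source": "Office-Z", "page": "news"},
]
badge_log = [
    {"time": "08:30", "employee": "W", "location": "Hospital-Y"},
    {"time": "12:00", "employee": "Q", "location": "Office-Z"},
]

def minutes(t):
    """Convert 'HH:MM' to minutes since midnight."""
    h, m = t.split(":")
    return int(h) * 60 + int(m)

def link(site, badge, window=5):
    """Join 'anonymous' page views to badge swipes by place and rough time."""
    matches = []
    for hit in site:
        for swipe in badge:
            same_place = swipe["location"] == hit["source"]
            close_time = abs(minutes(hit["time"]) - minutes(swipe["time"])) <= window
            if same_place and close_time:
                matches.append((swipe["employee"], hit["page"]))
    return matches

print(link(site_log, badge_log))  # [('W', 'rare-disease-info'), ('Q', 'news')]
```

The point: linkable becomes linked the moment someone does the join.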
-
👍 And the thing is, they promise security for your health data, and then the organization gets sold. It’s so risky. Whom do you trust, especially when there may be real future health benefits once you know the root cause at the cellular or DNA level? So many pros and cons.
-
Exactly. 23andMe just filed for bankruptcy, so here we go with an example.
What could be done is this: say you fully sequence your DNA. Break that data into blocks and give the blocks randomly assigned numbers. Store those blocks “in the cloud.” Then have a personal device that contains only the randomly assigned numbers; that box represents you and your DNA.
Accessing the blocks when you need them could be done through a peer-to-peer network, where the design randomizes access to the blocks so no one sees that blocks X, N, and Q were requested by a single person and are therefore likely part of the same DNA. Peer-to-peer means you request a piece of data from hospital A, but hospital A gets it from clinic B, and so on, with no one tracking who asked for what from whom. Basically, you have your little black box that knows where all the pieces are; the pieces themselves are not stored on the box, only the randomized reference numbers. Then you put two-factor authentication on the box; it could be as simple as using your cell phone over Bluetooth to talk to it.
My point is, that this type of system could be developed where your data is secure and at the same time available for researchers, but nothing is easily linkable to you.
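A minimal sketch of that block scheme, using an in-memory dict as a stand-in for the cloud and ignoring encryption and the peer-to-peer layer (a real design would add both):

```python
import secrets

def shard(data, block_size, cloud):
    """Store `data` as randomly named blocks in `cloud`; return the ID manifest."""
    ids = []
    for i in range(0, len(data), block_size):
        block_id = secrets.token_hex(16)          # random, unlinkable block name
        cloud[block_id] = data[i:i + block_size]  # "upload" the block
        ids.append(block_id)                      # only this list stays on the device
    return ids

def reassemble(ids, cloud):
    """Fetch the blocks (ideally via randomized peer-to-peer hops) and rejoin them."""
    return b"".join(cloud[block_id] for block_id in ids)

cloud_store = {}                                  # stand-in for remote storage
manifest = shard(b"ACGTACGTACGT", 4, cloud_store)
print(len(manifest), "blocks stored;", reassemble(manifest, cloud_store))
# 3 blocks stored; b'ACGTACGTACGT'
```

The manifest of random IDs plays the role of the black box: whoever holds the cloud store sees only anonymous blocks, and only the device that holds the ID list can put them back together.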
-
I’ll un-techno-weenie what I just said with a concrete example. You have a doctor’s appointment at 8:30 AM for 30 minutes with Doctor X, a cardiologist at Hospital Y. During the appointment, Doctor X hears your symptoms and suspects you may have Marfan syndrome. You brought your little black box and your phone to the appointment. You have an app on the phone, and the doctor puts in a request for the Fibrillin-1 (FBN1) gene, which appears on your phone. You then use your phone to access the black box; the phone authenticates that it’s you and that you have access to the box, and the box uses that authentication to decrypt its data and retrieve the access number for the block that contains the FBN1 gene. The box doesn’t hold the data itself, just the reference number. These are the black and grey devices from the military example.
Now that reference number is used to retrieve the actual data stored in the cloud (a specially designed cloud with peer-to-peer networking). You share the result with the doctor, and he sees you indeed have a mutation in this gene, which is dominant (only one copy is necessary to have the condition).
The problem isn’t this black-box system; the problem is that the hospital has stored the fact that you, Patient W, were at an 8:30 AM appointment for 30 minutes with Doctor X at Hospital Y. If someone has that information, sees that a request for an FBN1 gene block was made to the cloud during that appointment, and can see that the data contained a mutation, you now have linkable data: they can guess that Patient W probably has Marfan syndrome based on the time of the appointment, the type of doctor, and the facility. This is why you need a way to randomize the request for that piece of data, so it is not linkable.
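That timing leak can be shown with a toy simulation: a direct fetch lands inside the appointment window, while adding a random delay (a crude stand-in for the randomized routing described above) breaks the correlation. All the numbers here are invented.

```python
import random

# Minutes since midnight: the 8:30-9:00 AM appointment window.
APPOINTMENT_MINUTES = range(510, 540)

def direct_fetch():
    """Request sent during the visit: its timestamp lines up with the appointment."""
    return 512

def delayed_fetch(rng):
    """Same request, deferred by a random amount of up to a day."""
    return 512 + rng.randrange(0, 24 * 60)

rng = random.Random(0)                        # fixed seed so the sketch is repeatable
assert direct_fetch() in APPOINTMENT_MINUTES  # trivially linkable by time
delayed = [delayed_fetch(rng) for _ in range(100)]
outside = sum(t not in APPOINTMENT_MINUTES for t in delayed)
print(f"{outside}/100 delayed requests no longer line up with the appointment")
```

A real system would also randomize the network path, not just the timing, but the principle is the same: remove every axis on which the request can be correlated with the visit.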
Clear as mud right? 🤣
-
👍 To me, that is actually very clear. The ability to prevent that linkage from ever being created is critical, and would help ensure Palantir and related programs can’t create those linkages, so our health data can only be used for us, rather than linked and used against us.
It’s just so massively important.