Heatwaves Caused by Climate Change: How Geomedicine Can Improve Health Outcomes

What Is the Main Idea?

Extreme climate events, such as heatwaves, have become more common because of climate change and place a heavy burden on health systems. In the open-access research article “Beyond Usual Geographical Scales of Analysis: Implications for Healthcare Management and Urban Planning”, published in the Portuguese Journal of Public Health, the authors discuss how geomedicine can be used to aid urban planning and the allocation of health resources to reduce the number of deaths during heatwaves.

What Else Can You Learn?

In this blog post, the effects of climate change on health are discussed with a particular focus on heatwaves. Geomedicine and how it can be used is also described.

What Is Climate Change?

Climate change is defined as long-term and large-scale shifts in weather patterns and average temperatures. Although shifts like these can occur naturally, as the result of volcanic activity or changes in the Sun, human activities over the last 200 years have had significant effects, mainly due to the burning of fossil fuels like coal, gas, and oil. As a result, the Earth is now about 1.1 °C warmer than it was 100–150 years ago, and the decade 2011–2020 was the warmest on record. This is causing environmental effects such as rising sea levels, intense droughts, water scarcity, and declining biodiversity (the variety of living organisms), which makes climate change an economic issue because it affects the availability of food and other resources.

How Does Climate Change Affect Our Health?

Climate change can affect human health in many ways. It can affect mental health through increased stress and anxiety, and extreme weather events can cause significant trauma. Rising sea levels and increased frequency of flooding can displace people and increase the likelihood of water supplies becoming contaminated, which increases the spread of disease. Increasing droughts can decrease food production and the supply of water. A warming climate also affects numbers of biting insects, such as ticks and mosquitoes (both can spread disease), particularly in areas where numbers of these insects had previously been low. Extreme climate events such as heatwaves have also become more common and are lasting longer, placing a heavy burden on health systems.

What Are the Health Effects of Heatwaves?

Heatwaves are known to cause increases in death rates and the numbers of people needing medical care. During a heatwave in Europe in 2003, more than 70,000 excess deaths (the number of deaths above the number expected over that period) were reported. Excess heat increases pressure on the heart, lungs, and brain, increasing the risk of death from respiratory (relating to the breathing system), cerebrovascular (relating to the brain and its blood vessels), or cardiovascular (relating to the heart and blood vessels) problems.
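The “excess deaths” calculation itself is simple enough to express in a couple of lines. The Python sketch below only illustrates the definition given above; the observed and expected figures in it are invented round numbers, not data from the study:

```python
def excess_deaths(observed, expected):
    """Excess deaths: the number of observed deaths minus the
    number expected over the same period (typically based on
    historical averages for that time of year)."""
    return observed - expected

# Invented round numbers, in the spirit of the 2003 figure:
# if 170,000 deaths were expected and 240,000 were observed,
# the excess is 70,000.
print(excess_deaths(240_000, 170_000))
```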

Who Is Most at Risk during a Heatwave?

People with pre-existing health conditions, especially cardiovascular and respiratory diseases, and the elderly are particularly at risk. Over the last 20 years, the rate of elderly people dying from heat-related causes has increased significantly. Children under 1 year of age are particularly vulnerable to heat and dehydration, as are people who do manual work outdoors, for whom an increased risk of chronic kidney disease has also been reported. There is also evidence that people living alone, living in areas that are more socioeconomically disadvantaged (this is defined as less access to or control over economic, social, or material resources and opportunities), or living in urban environments such as city centers are at increased risk.

What Did This Study Investigate?

To be able to deal with the challenges that heatwaves cause, healthcare systems need to be able to develop plans that will ensure that those most at risk can access the support they need during a heatwave. Advances in geographic information systems have been shown to be useful in mapping how diseases are distributed and in identifying clusters or trends. They can also take environmental and socioeconomic factors, as well as the availability of medical facilities, into account when analyzing data. This area of research is termed “geomedicine”.

What Is Geomedicine and How Can It Improve Health Outcomes?

Geomedicine is based on the idea that good health does not come by accident. Instead, factors in our environment have an effect on our health, which means that the places where we live and work now and in the past affect our health status. By linking a person’s health status to geographic factors, such as a person’s address, geomedicine can provide health data that can help medical teams make diagnoses and better assess risk.

What Did the Authors Investigate?

In this study, the authors used an approach called “geocoding” to investigate how the scale of geographic information used in geomedical analysis affects the results. Geocoding involves defining a set of geographic coordinates, usually based on latitude and longitude, that correspond to a location. The authors argue that analyzing data by geocodes, which can specify a particular street, rather than by larger areas such as parishes or districts provides more accurate information about public health in those areas. This means that local authorities can prioritize resources to areas with greater need.
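For readers curious about what this looks like in practice, the following Python sketch illustrates the two steps described above: turning an address into coordinates, then coarsening those coordinates to a neighborhood-like scale. The address table and the rounding rule are invented purely for illustration; real studies use proper geocoding services and administrative boundaries:

```python
# Toy illustration of geocoding and spatial generalization.
# The address table and the rounding rule are invented for this sketch.

ADDRESS_TABLE = {
    "Rua Exemplo 10, Lisboa": (38.7223, -9.1393),
    "Rua Exemplo 12, Lisboa": (38.7226, -9.1391),
}

def geocode(address):
    """Return (latitude, longitude) for a known address."""
    return ADDRESS_TABLE[address]

def generalize(coords, decimals=2):
    """Coarsen coordinates to protect confidentiality.

    Rounding to 2 decimal places groups points into cells of
    roughly 1 km, a stand-in for 'neighborhood level'."""
    lat, lon = coords
    return (round(lat, decimals), round(lon, decimals))

# Two nearby households geocode to different points but
# generalize to the same neighborhood-level cell.
a = generalize(geocode("Rua Exemplo 10, Lisboa"))
b = generalize(geocode("Rua Exemplo 12, Lisboa"))
```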

In their study, the authors analyzed data concerning heat-related deaths among elderly people in Portugal, which were linked to cardiorespiratory problems, between 2014 and 2017. Each record included information about the house number, post code, and location of the person who died, which enabled it to be geocoded. Once geocoded, the data were generalized to the neighborhood level to protect the confidentiality of the people whose data were included.

The results showed that some neighborhoods with low cardiorespiratory death rates were located within parishes with high rates, while conversely, neighborhoods with high death rates were located within parishes with low rates. The authors therefore stress the importance of carrying out analyses at several different scales, and note that analysis by smaller administrative areas is preferable. Just as personalized medicine has the potential to revolutionize health, so does analyzing data by individual neighborhoods.

However, the authors also note the need for authorities to develop multisector responses to the challenges that climate change brings to “keep vulnerability to a minimum and increase the resilience of healthcare and urban planning”. By improving health information systems, it is possible that the accuracy of health outcome monitoring, spatial planning in urban areas, and the management of health resources may be improved.

Blood Platelet Levels and Postpartum Hemorrhage Risk

What Is the Main Idea?

Postpartum hemorrhage (PPH) refers to a woman having sudden heavy bleeding after giving birth, which can be fatal. In the open-access research article “The Impact of Prepartum Platelet Count on Postpartum Blood Loss and Its Association with Coagulation Factor XIII Activity”, published in the journal Transfusion Medicine and Hemotherapy, the authors discuss how the levels of platelets (a type of blood cell) and a protein called coagulation factor XIII in a woman’s blood before she goes into labor may predict her risk of PPH.

What Else Can You Learn?

In this blog post, PPH in general and known risk factors are discussed. The process of blood clotting is also briefly described.

What Is Postpartum Hemorrhage?

Vaginal bleeding is normal after birth. It is mainly caused by the placenta, which delivers food and oxygen to the developing baby while it is in the uterus (womb), detaching from the wall of the uterus. Although bleeding can initially be fairly heavy, it reduces in the days after birth and usually stops within a few weeks.

PPH is different. It can start suddenly and large amounts of blood can be lost very quickly. PPH can be classed as either primary (when 500 mL of blood or more is lost in the first 24 hours after birth) or secondary (when bleeding is heavy or abnormal after the first 24 hours and up to the end of the 12th week after birth). PPH can occur after birth by vaginal delivery or delivery by cesarean section. The contractions that help the placenta to pass out of the uterus after birth also compress the blood vessels in the wall of the uterus where the placenta has been attached. PPH can develop if these contractions aren’t strong enough (this is known as “uterine atony”), if part of the placenta stays attached to the wall of the uterus, or if any internal cuts or tears happen during birth.

How Serious Is Postpartum Hemorrhage?

PPH is serious and potentially fatal because sudden heavy blood loss can cause a sharp drop in blood pressure, which can reduce blood flow to other organs, including the brain and heart. It is treated as a medical emergency. It is important that new mothers keep their healthcare team and partner aware of any changes in their bleeding, and act quickly if bleeding suddenly becomes very heavy. Other symptoms that should be reported include blurred vision, dizziness, feeling faint, worsening pelvic or abdominal pain, nausea or vomiting, an increased heart rate and/or breathing rate, and pale or clammy skin. These symptoms may only start after the woman has left the hospital. Although PPH is estimated to occur in 1–10% of pregnancies and remains a key cause of maternal death (mortality) worldwide, the earlier the bleeding is treated the more successful the outcome.

What Increases Your Risk of Postpartum Hemorrhage?

If a woman is considered to be at high risk of PPH she will be advised to give birth in a hospital setting. Before birth, placental problems (like the placenta being located relatively low in the uterus or starting to detach from the wall of the uterus before it should) can increase a woman’s risk of PPH. Other risk factors include an overstretched uterus, which can be caused by having had more than one previous pregnancy, too much amniotic fluid (the fluid that surrounds the baby while it is in the uterus), or a multiple pregnancy (expecting two or more babies at the same time).

During the birth, risk factors include a delay in the placenta being delivered or some of it remaining attached to the wall of the uterus, having a large baby, and the baby being delivered by forceps or ventouse. Another known risk factor is if the woman has a blood clotting disorder or other blood-related condition. The blood clotting system (known as the “coagulation” system) is activated when the lining of a blood vessel is damaged and regulates the process by which liquid blood changes to a gel, forming a blood clot, which stops the bleeding and starts the repair process.

How Does the Blood Clotting System Work?

The process by which blood clots are formed involves a number of proteins and platelets (a type of blood cell). When a blood vessel is damaged, such as when the placenta detaches from the uterus, platelets cluster at the site of damage and bind together to seal it. The platelets have receptors on their surfaces that bind a molecule called thrombin, which converts a soluble protein called fibrinogen into a different form called fibrin. Fibrin can form long, tough, insoluble strands that bind to the platelets and cross-link together to form a mesh on top of the platelet plug. Lots of different molecules are involved in this process, but platelets and fibrin are major players.

How Does Blood Clotting Relate to Postpartum Hemorrhage?

Some researchers have suggested that if a woman has a lower than normal level of platelets in her blood (a condition called “thrombocytopenia”) before she gives birth she may be at increased risk of PPH. Thrombocytopenia is estimated to occur in around 10% of pregnancies. There is also some evidence that the levels of a blood protein called coagulation factor XIII affect PPH risk. Coagulation factor XIII stabilizes fibrin as blood clots form. If low levels are present in the blood, clots can be less stable and the risk of bleeding increases.

What Did the Study Investigate?

The authors of the study evaluated whether a woman’s platelet count (the number of platelets measured in a sample of blood) measured before birth is linked to the extent of blood loss after birth. They also looked at whether there is an association between platelet count and levels of coagulation factor XIII, either before or after birth. They did this by looking at data collected as part of a previous study (this is termed “secondary analysis”) that analyzed the impact of coagulation factor levels before birth on blood loss after birth for 1,300 women. They found that the higher a woman’s pre-birth platelet count, the lower the probability of her developing PPH, and that this was seen whether the baby was delivered vaginally or by cesarean section. An increase of 50 G/L in pre-birth platelet count was shown to decrease the likelihood of PPH by 16%.
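To get a feel for what the 16% figure means, here is a small Python sketch. The assumption that the reduction compounds multiplicatively across successive 50 G/L steps is ours, made purely for illustration, and is not a result reported in the study:

```python
def relative_pph_likelihood(platelet_increase_g_l):
    """Relative likelihood of PPH after a given increase in
    pre-birth platelet count, assuming (our assumption, for
    illustration only) that each 50 G/L step multiplies the
    likelihood by 1 - 0.16 = 0.84."""
    steps = platelet_increase_g_l / 50
    return 0.84 ** steps

# A 50 G/L increase leaves 84% of the original likelihood;
# a 100 G/L increase leaves 0.84 ** 2, i.e. about 71%.
```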

The authors also found that platelet count is significantly correlated (strongly linked) with coagulation factor XIII activity both before and after birth, which suggests that platelets may play an important role in the firmness of blood clots. Coagulation factor XIII is found in the cytoplasm of platelets (the fluid-like area inside a cell that does not include the nucleus, where the genetic information is stored). This suggests that the chance of developing PPH may be influenced not only by the number of platelets in the blood, but also by the availability of coagulation factor XIII in the areas of platelets that are involved in its blood clotting role.

The authors state that these findings support the importance of measuring platelet counts when identifying women who may be at high risk of PPH. Recent medical guidelines in Germany, Switzerland, and Austria have included platelet transfusion to increase the number of blood platelets in a six-step approach to treat continued bleeding. It is possible that platelet therapy may become useful in the prevention and treatment of PPH in the future.

Note: The authors of this paper make a declaration about patent ownership as well as contributions to a new guideline. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.

Infant Antibody Profiles Can Predict Peanut Allergy

What Is the Main Idea?

Peanut allergy is a leading cause of anaphylaxis and some infants are more at risk of developing it than others. In the brief report “Epitope-Specific IgE at 1 Year of Age Can Predict Peanut Allergy Status at 5 Years”, published in the journal International Archives of Allergy and Immunology, the authors describe how levels of particular types of antibodies in blood samples given by infants at age 1 year can be used to predict whether they will develop peanut allergy by the time they are 5 years old.

What Else Can You Learn?

In this blog post, peanut allergy, anaphylaxis, and efforts to predict the development of severe allergy in children are described. Immune system antibodies, antigens, and epitopes are also discussed.

What Is Peanut Allergy?

Peanut allergy is a type of food allergy (an unusual reaction of the body’s immune system to a specific food). The immune systems of people with peanut allergy mistakenly identify peanut proteins as things that are harmful to the body and need to be removed. Although some allergic reactions to foods are relatively mild, causing symptoms such as a rash or abdominal pain, others are more serious. Severe allergic reactions can cause anaphylaxis, which is potentially life-threatening and should be treated as a medical emergency. In addition to the more usual allergy symptoms such as swelling, an itchy or raised rash, or feeling or being sick, anaphylaxis symptoms can include a fast heartbeat, confusion or anxiety, breathing difficulties, feeling lightheaded or faint, and the person losing consciousness. They can develop suddenly and worsen very quickly. Peanut allergy is the leading cause of anaphylaxis in the USA and ranks second (after milk) in the UK.

Peanut allergy usually develops in early childhood and incidence has increased over recent decades. Estimates of the number of affected children vary between countries, but can be as high as 3% of the population. Some infants are at greater risk of developing peanut allergy than others, including those with family members with food allergies and those with egg allergy and/or eczema. Some health services used to advise that infants should not be exposed to foods containing peanuts because of fears that it could trigger peanut allergy. However, there is now strong evidence that introducing infants at risk of peanut allergy to peanuts as early as age 4–6 months can significantly reduce their risk of developing food allergies in the future.

Why Do Some People Have More Severe Peanut Allergies than Others?

It is now believed that peanut allergy can take several different forms called “endotypes” (an endotype is a subtype of a health condition that differs from other subtypes in the way that changes in the body and its systems cause or are caused by the condition). Evidence to support this comes from the fact that around 20% of infants and young children who have an allergic reaction to peanut will outgrow their allergy, while in others the allergy will persist throughout their lives. The difference is thought to be linked to the specific molecules, called “antibodies” (also known as “immunoglobulins”), that the immune system produces when it comes into contact with peanut proteins.

What Are Antibodies?

Antibodies are glycoproteins (molecules that are made up of protein and carbohydrate chains). They are highly specific, and recognize and bind to “antigens” (this term describes anything that causes the immune system to produce antibodies against it and can include chemicals, molecules on the surfaces of bacteria and viruses, and proteins in food like peanuts). Antibodies are divided into five different classes – IgG, IgA, IgM, IgE, and IgD – based on their characteristics and roles. They all have a Y-shaped structure, and while the bottom part does not change from one antibody to another, the two “arms” do and make up the part of the antibody called the “antigen-binding site”. It is differences in this region that enable different antibodies to bind to specific regions of antigens (called “epitopes”) and not to others. For example, an antibody that binds to an epitope on a peanut protein will not bind to an epitope on a protein made by the virus that causes flu. Epitopes can be described as “sequential” or “conformational”. Sequential epitopes are made up of a linear sequence of amino acids (the building blocks of proteins) like beads on a string, while conformational epitopes are made up of amino acids that are only brought close together when the string of amino acids is folded up into a three-dimensional structure.

How Can Different Antibody Types Be Used to Predict Peanut Allergy?

Some recent studies have reported that levels of sequential epitope-specific IgE (ses-IgE) antibodies in infants with persistent food allergies are lower than levels of IgE antibodies against a mixture of both conformational and sequential epitopes during the first year of life. ses-IgEs develop as infants get older, raising the possibility that children who develop a persistent peanut allergy later in life may have distinct epitope-specific profiles in infancy. If this is the case, it may become possible to identify infants who are at risk of developing peanut allergy via a simple blood test.

What Did the Study Show?

The authors monitored the development of ses-IgEs in 74 children at risk of developing peanut allergy, who had either already been identified as allergic to peanut or were not yet allergic, and who were avoiding peanuts. They analyzed blood samples taken when the children were aged 4–11 months, and again at 1 and 2.5 years of age. They used a machine learning strategy (a computer system that uses algorithms and statistical models to analyze patterns of data and draw conclusions from them) to identify prognostic biomarkers (characteristics, such as molecules in your blood, that indicate what is going on in the body) that could predict whether or not a child would have an allergic response to an oral food challenge with peanut at a 5-year visit. The results showed that blood samples from children aged as young as 1 year could be used to accurately predict the outcomes of oral food challenge tests at 5 years of age. If these results can be confirmed by further studies, it may become possible for healthcare professionals to identify infants who are likely to develop persistent peanut allergy in the future, enabling them to start peanut exposure interventions early and, hopefully, prevent severe and permanent peanut allergy from developing.
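The machine-learning details are beyond the scope of a blog post, but the shape of the task (biomarker levels in, allergy outcome out) can be sketched with a deliberately simple stand-in classifier. Everything below, including the training numbers, is invented for illustration and is not the authors’ model:

```python
# Deliberately simple stand-in for a machine-learning model:
# a one-threshold rule learned from (biomarker level, outcome) pairs.
# All numbers below are invented for illustration.

def learn_threshold(samples):
    """Pick the biomarker cut-off that best separates allergic
    from non-allergic outcomes in the training samples
    (a list of (level, is_allergic) pairs)."""
    candidates = sorted(level for level, _ in samples)
    def accuracy(t):
        return sum((level >= t) == is_allergic
                   for level, is_allergic in samples) / len(samples)
    return max(candidates, key=accuracy)

def predict(level, threshold):
    """Predict the allergy outcome from a biomarker level."""
    return level >= threshold

# Invented training data: ses-IgE-like levels at age 1 and
# allergy status at age 5.
train = [(0.1, False), (0.2, False), (0.4, False),
         (1.5, True), (2.0, True), (3.1, True)]
t = learn_threshold(train)
```

A real study would use many biomarkers at once and validate the model on data it was not trained on, but the basic idea is the same: learn a rule from known outcomes, then apply it to new infants.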

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available. Furthermore, the authors of this paper make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest.

How Image-Enhanced Endoscopy Techniques May Improve Ulcerative Colitis Treatment

What Is the Main Idea?

Assessment of mucosal healing in people with ulcerative colitis by white-light endoscopy has several limitations. In the review article “Possible Role of Image-Enhanced Endoscopy in the Evaluation of Mucosal Healing of Ulcerative Colitis”, published in the journal Digestion, the authors describe how advances in image-enhanced endoscopy may improve the assessment of mucosal healing in people with ulcerative colitis and, as a result, help improve their treatment.

What Else Can You Learn?

In this blog post, image-enhanced endoscopy techniques and how they may help patients with ulcerative colitis are described. Ulcerative colitis in general, the gut microbiome, and mucosal healing are also discussed.

What Is Ulcerative Colitis?

Ulcerative colitis is an inflammatory bowel disease. People with ulcerative colitis have chronic (long-term) inflammation and ulcers (sores) in the colon (also known as the large bowel), which is part of the large intestine and removes water and some nutrients from partially digested food before the remaining waste is passed out of the body. Inflammation is the process by which your body responds to an injury or a perceived threat, such as a bacterial infection. Although the exact causes of ulcerative colitis aren’t yet fully understood, it may be an autoimmune condition, which means that the body’s immune system wrongly attacks normal, healthy tissue. The intestines contain hundreds of different species of bacteria, which are part of the “gut microbiome” (the term given to all of the microorganisms and their genetic material that live in the intestines). Although some of these species can cause illness, many are essential to our health and wellbeing, playing key roles in digestion, metabolism (the chemical reactions in the body that produce energy from food), regulation of the immune system, and mood. Several diseases are now thought to be influenced by changes to the gut microbiome, including cancer. Some researchers believe that in ulcerative colitis, the immune system may mistake harmless bacteria inside the colon for a threat and start to attack them, causing the colon to become inflamed.

What Is Mucosal Healing?

There is currently no cure for ulcerative colitis; treatment focuses on relieving symptoms during a flare-up and trying to stop them coming back. Choosing treatment strategies based on a specific therapeutic target (known as a “treat-to-target” approach) has become popular as a way to improve patients’ long-term outcomes. One of the ways that the efficacy of treatment is monitored is by assessing the level of “mucosal healing” in the colon. The mucosa is the innermost layer of the colon; it is this layer that comes into direct contact with partially digested food and that becomes ulcerated in ulcerative colitis. Mucosal healing is usually defined as an absence of friability (when the mucosa is inflamed and bleeds easily when touched), blood, erosions, and ulcers, or as a total absence of inflammation and ulcers. It is now considered a target of ulcerative colitis treatment because there is evidence that it is associated with better clinical outcomes (such as lower risks of surgery and relapse, and improved quality of life) and a reduced risk of developing colorectal cancer in the future.

How Is Mucosal Healing Assessed?

Assessment of mucosal healing usually involves endoscopy, which uses a long, thin tube with a small camera inside to look inside the body, and histology, which involves the examination of samples of tissue taken from the colon by biopsy during an endoscopy procedure. Although useful, the samples that are examined by histology only reflect what is happening in the part of the colon from which they were taken, and may not represent the situation in the colon as a whole. Traditional endoscopy, which uses white light (i.e. apparently colorless light such as “normal” daylight, which is a mixture of different wavelengths of light in the visible spectrum), can also have limitations: assessment is subjective because the results depend on the opinion of the person reviewing them, opinions can vary between reviewers, and microscopic inflammation may be hard to see without some form of image enhancement.

What Is Image-Enhanced Endoscopy?

Image-enhanced endoscopy techniques produce high-contrast images using optical or electronic methods. These high-contrast images make it easier to see the detail and differences in the mucosal surface, patterns of blood vessels, and color tones of the mucosa. As a result, image-enhanced endoscopy has the potential to enable more objective (i.e., less dependent on the personal opinions of the person reviewing the results) assessment of mucosal healing and detect minute differences in mucosal healing that cannot be detected by endoscopy with white light. There are a number of different image-enhanced endoscopy approaches.

  • Narrow-band imaging uses narrow-band light created with two filters that transmit light at specific wavelengths, one for blue light and one for green. It is better than white light for viewing microscopic blood vessel structures. It may be useful for detecting minor inflammation and predicting relapse by revealing incomplete renewal of blood vessels in patients with ulcerative colitis.
  • Another technique called linked-color imaging uses narrow-band imaging to pre-process images and then color separation to post-process them, so that blue, green, and red can be used to amplify differences in color, making it easier for slight differences in the color of the mucosa to be recognized. It therefore improves the visualization of changes in the mucosa caused by inflammation or a decrease in the mucosa (known as “atrophy”).
  • In contrast, a method called i-Scan uses three different algorithms to enhance images: surface, contrast, and tone enhancement. It is able to emphasize minute mucosal structures and subtle color changes, and there is evidence that it can be used to clinically stratify patients according to histologic activity, without them needing to undergo a biopsy procedure to obtain tissue samples.
  • Autofluorescence imaging involves the detection of autofluorescence (the natural fluorescence of substances found in the body) produced by the intestinal tissues (mainly by type-I collagen, which is found in many structures in the body including skin, bones, tendons, cartilage, and connective tissue). Because the intensity of the autofluorescence is affected by various conditions in the body, autofluorescence imaging is expected to become useful for assessing the severity of tissue inflammation and for differentiating between changes that are due to damage and those that are due to uncontrolled, abnormal growth of cells or tissue (which may result in the development of a tumor).
  • Finally, dual-red imaging uses three wavelengths of light, two of which improve the ability to see blood vessels in submucosal tissues and bleeding points. The pattern of blood vessels in the surface of the colon’s mucosa is partly or completely lost in the active phase of ulcerative colitis, making it difficult to assess with traditional white-light endoscopy. Dual-red imaging enhances the pattern of blood vessels and makes it easier to visualize blood vessels in deeper tissues, so it may be most useful in evaluating inflammation of the colon and predicting the prognosis of patients with mild to moderately active ulcerative colitis.

The approaches described above may contribute to the improved assessment of factors in ulcerative colitis that are difficult to assess by white-light endoscopy. It is hoped that this will, in turn, improve the use of treat-to-target approaches and the quality of life of people with ulcerative colitis.

Improving Active Surveillance of Low-Risk Prostate Cancer

What Is the Main Idea?

Some low-risk prostate cancers can be monitored by “active surveillance”, but the correct identification of patients with low-risk cancer at the time of diagnosis is essential. In the open-access review article “Active Surveillance in Prostate Cancer: Current and Potentially Emerging Biomarkers for Patient Selection Criteria”, published in the journal Urologia Internationalis, the authors describe how biomarker testing may improve the selection of patients for this approach.

What Else Can You Learn?

In this blog post, prostate cancer, its signs and symptoms, and the active surveillance approach for managing low-risk prostate cancers are discussed. Different types of biomarkers that may indicate whether a prostate cancer has low or high risk of progression are also described.

What Is the Prostate?

The prostate is about the size of a walnut. It sits between the base of the penis and the rectum (the last few inches of the large intestine), deep inside the groin. It produces some of the fluid that mixes with sperm (from the testes) to form semen.

What Are the Signs and Symptoms of Prostate Cancer?

In the body, the growth and reproduction of cells is tightly controlled. If cells in the prostate start to grow and reproduce in an uncontrolled way, prostate cancer can develop. Current estimates are that 1 in 8 men will develop prostate cancer in their lifetime. If the cancer is growing near the urethra (the tube that carries the urine or “wee” from the bladder to pass out of the body) it may start to press on it. This can cause changes in how the person urinates, such as weak flow when urinating, a feeling that the bladder hasn’t emptied properly, and needing to urinate more often, especially at night.

However, prostate cancer more usually starts to grow in a part of the prostate that is not near the urethra, so in many cases men with early-stage prostate cancer don’t have any signs or symptoms. When prostate cancer reaches a more advanced stage and starts to spread to other areas of the body (metastasize), the person may experience other symptoms such as blood in the urine or semen, problems getting or keeping an erection, and pain in the back, pelvis, or hip.

What Is “Active Surveillance”?

Some prostate cancers grow very slowly and are unlikely to spread or become life-threatening in the person’s lifetime. Such prostate cancers are described as “low risk”. In some countries, policies and screening programs have been introduced with the aim of increasing the detection of prostate cancers at an early stage when they can be cured. This has had the benefit of increasing the numbers of men who are diagnosed with early-stage prostate cancer, but also means that some men undergo treatments such as radical prostatectomy (surgical removal of the prostate gland and some of the tissue around it) and radiotherapy that they don’t necessarily need.

An alternative approach is “active surveillance”, where men with low-risk prostate cancer are monitored closely for any signs of disease progression (through regular testing to check whether there are any signs that the prostate cancer is starting to grow) and are only treated if their cancer progresses. It is essential that patients with low-risk prostate cancer are correctly identified at the time of diagnosis because men with higher-risk prostate cancer need prompt treatment. It is believed that biomarker analysis may aid this process by increasing the accuracy of identifying patients who have low-risk prostate cancer and the monitoring of their cancers over time.

What Are Biomarkers?

The term “biomarker” is short for “biological marker”. Unlike symptoms, which are things that you experience, biomarkers are measurable characteristics that indicate what is going on in the body. Your blood pressure, levels of molecules in your urine and blood, and your genes (DNA) are all biomarkers. Although they can suggest that your body is working normally, they can also show the development or progress of a disease or condition, or the effects of a treatment. Over the last decade, biomarker testing has started to transform the way that some diseases and conditions are treated, and offers hope of better outcomes through more personalized treatment.

What Did the Review Article Investigate?

The authors did a literature search (a systematic search through research that has already been published) for information about current and emerging biomarkers. They identified four tissue-based, two blood-based, and six urine-based tests that are currently available and can help identify patients with prostate cancer who could be monitored by active surveillance. In addition, research over the last 10 years has identified new biomarkers that could improve existing tests or enable the development of new tools to identify patients for whom active surveillance may be suitable. These include messenger RNAs, microRNAs, long non-coding RNAs, and metabolites (substances used or formed when the body breaks down food, medicines, or chemicals).

What Are Messenger, Micro-, and Long Non-Coding RNAs?

Your genes are short sections of DNA (deoxyribonucleic acid) that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA (ribonucleic acid). There are several different types of RNA, each with different functions, and they play important roles in normal cells and the development of disease.

  • Messenger RNAs are single-stranded copies of genes that are made when a gene is switched on (expressed). In a cell, long strings of double-stranded DNA are coiled up as chromosomes in a part of the cell called the nucleus (the cell’s command center). Chromosomes are too big to move out of the nucleus to the place in the cell where proteins are made, but a messenger RNA copy of a gene is small enough. In other words, messenger RNA carries the message of which protein should be made from the chromosome to the cell’s protein-making machinery.
  • MicroRNAs are much smaller than messenger RNAs. They do not code for proteins but instead play important roles in regulating genes. They can inhibit (silence) gene expression by binding to complementary sequences in messenger RNA molecules, stopping their “messages” from being read and preventing the proteins they code for from being made. Some microRNAs also activate signaling pathways inside cells that turn processes on or off.
  • Long non-coding RNAs are another type of RNA that don’t code for proteins. They interact with other types of RNA, DNA, and proteins, and play key roles in the control of gene expression. Changes in the expression or structure of some long non-coding RNAs, or changes that affect the ability of proteins to bind to them, have been shown to be linked to cancer metastasis and patient survival.

How Can Biomarkers Be Used to Identify and Monitor Prostate Cancer?

Tissue biomarkers are biomarkers that can be detected in tissue samples that are obtained if a person with suspected prostate cancer has a needle biopsy (a procedure that uses a thin, hollow needle and a syringe to obtain a sample of cells, fluid, or tissue from inside the body). Although they can be highly effective at identifying prostate cancers, they are less useful for long-term monitoring and the testing process can be expensive. Cancers develop different subpopulations of cells with different characteristics as they progress, and these differences may affect the test results. In addition, biopsies are invasive and there can be complications. Existing tests using tissue samples primarily check for particular genes and proteins, but there is increasing evidence that some long non-coding RNAs can differentiate between prostate cancers that are likely to be aggressive and those that are low risk.

Urine and blood sample analyses both have the advantage of not being affected by the tumor sampling issue that can occur with tissue biopsies; in other words, there is no issue regarding differences between different subpopulations of cells in the cancer. Several blood biomarkers, which can be assessed using samples taken via normal blood tests, have been identified. These include proteins, hormones (low levels of testosterone in the blood may indicate advanced prostate cancer at diagnosis), microRNAs, and “circulating tumor cells” (tumor cells that get into the bloodstream and can be detected in blood samples). The use of urine samples to detect prostate cancer has the advantages of being non-invasive, quick, and relatively cheap. Long non-coding RNAs and metabolites have been analyzed as biomarkers to improve the diagnosis of prostate cancer and assess progression. Several research studies have reported that the levels of some amino acids (the component units of proteins) are decreased and others increased in urine samples from people with prostate cancer.

Although our understanding of biomarkers and how they can be used to assess prostate cancer prognosis is improving all the time, further studies are needed to improve the identification of the aggressiveness of prostate cancers. The authors of the review article hope that future studies, including analysis of long-term outcomes and the cost-effectiveness of the use of different biomarkers, will improve the effectiveness of identifying patients who are suitable for active surveillance, reduce overtreatment, and further promote the development of personalized medicine.

Why Ureter Stone Relocation after Stenting Can Affect Treatment Decisions

What Is the Main Idea?

Treatment of large ureter stones depends on where they are located in the ureter. In the research article “Impact of Stone Localization before Emergency Ureteral Stenting on Further Stone Treatment”, published in the journal Urologia Internationalis, the authors describe how emergency ureteral stenting can change the location of a ureter stone, potentially changing the treatment approach that will be most effective.

What Else Can You Learn?

In this blog post, ureter stones and their symptoms are discussed. Ureteral stents and the different types of treatment for large ureter stones are also described.

What Are Ureter Stones?

Ureter stones (also known as ureteral stones) are essentially kidney stones that have moved from the kidney into the ureter (the tube that connects the kidney to the bladder, which is about the same diameter as a small vein). The main roles of the kidneys are removing waste products from the blood by filtering it, and making urine so that the waste products can be passed out of the body (excreted). Urine contains many dissolved minerals and salts. If they are present at high levels they can start to form crystals that may clump together into hard, stone-like lumps. Some are small enough to pass along the ureter and out of the body unnoticed, but larger ones may become stuck and block the flow of urine to the bladder.

What Are the Symptoms of Ureter Stones?

If a ureter stone is small it is unlikely to cause any symptoms, but for larger kidney and ureter stones the most common symptom is pain. The pain can range from mild and dull to intense and unbearable, and can radiate to other areas. People with ureter stones may also experience a need to urinate more frequently and pain or a burning sensation when they do. There may be blood in their urine, which gives it a pinkish color, and they may experience nausea and vomiting. If a person experiences fever or chills they may have a urinary tract infection (UTI). UTIs can spread to the kidney and cause a type of sepsis called “urosepsis”. It is important that people seek prompt medical treatment if they have any of the above symptoms; sepsis in particular can be life-threatening and is a medical emergency.

How Are Ureter Stones Treated?

The type of treatment recommended depends on the size, location, and composition of the stones. If they are small enough, they can usually be encouraged to pass out of the body by the person drinking up to 3 liters of water per day. If they are larger, they may need to be removed by surgery.

  • Extracorporeal shock wave lithotripsy (ESWL) is a treatment method that uses X-rays or ultrasound from outside the body to break down the stones into particles so that they can pass out in the urine (“lithotripsy” is derived from the Greek words meaning “breaking stones”).
  • Percutaneous nephrolithotomy tends to be used if stones are large or located where it’s difficult for them to be treated by ESWL. A thin telescopic device called a nephroscope (a type of endoscope that is specially designed for looking inside the kidney) is inserted into the kidney through a small incision in the person’s back. Once the stone is located, it is either removed or broken down.
  • Ureteroscopy involves a type of endoscope called a ureteroscope being passed through the urethra (the tube that your urine passes through when it leaves your bladder and passes out of the body), into your bladder, and then up into your ureter. Once the stone is located, it is either removed or broken down using laser energy or shock waves. Ureteroscopy can only be performed if the stone is located in the lower half of the ureter.

What Is a Ureteral Stent?

If a person’s ureter is blocked by a ureter stone their urine is unable to drain from the kidney to the bladder properly. This causes the affected kidney to fill with urine and swell, and if the stone blocks the ureter for a long period of time the kidney can become damaged. To prevent this, a ureteral stent (a thin tube that’s placed inside the ureter) can be placed with one end inside the kidney and the other directly inside the bladder so that the urine can flow from one to the other. Emergency insertion of a ureteral stent is often used if a patient is experiencing severe pain and/or has developed urosepsis. However, this can change the location of the stone that’s causing the problem, which potentially changes how it needs to be treated.

What Did This Study Show?

The authors retrospectively analyzed stone locations in 649 patients who were treated by ureteroscopy by looking at their medical records. For 469 of these patients, the locations of the ureter stones had been documented both before emergency stent insertion and again before ureteroscopy was performed. They found that around half (45.6%) of the patients had ureter stones that were unintentionally relocated after the insertion of a stent, with around one-quarter (25.4%) experiencing displacement of their stones back into the kidney. Relocation of stones that were initially in the part of the ureter that connects with the kidney (known as the “proximal” ureter) was particularly likely. The authors note that the relocation of ureter stones affects the type of surgery that is most likely to be effective, and suggest that carrying out imaging to double-check the location of stones before surgical treatment may help patients to avoid more complex stone treatment in the future.

Take-Home Message

Neither national nor European guidelines for the diagnosis and therapy of ureter stones currently recommend that imaging to determine stone location be repeated after the insertion of a ureteral stent. Decision-making regarding whether to repeat imaging or the type of surgery depends on the opinions of both the surgeon and the patient. Patients with ureter stones who receive a ureteral stent may wish to discuss repeat imaging with their medical team before a final decision is made about the type of surgery that will be performed.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Predicting Outcomes after Stroke: How Components of the Blood Clotting System Can Help

What Is the Main Idea?

A stroke happens when the blood supply to part of the brain is cut off or reduced and can be life-threatening. In the open-access article “Clinical Significance of Plasma D-Dimer and Fibrinogen in Outcomes after Stroke: A Systematic Review and Meta-Analysis”, published in the journal Cerebrovascular Diseases, the authors investigate whether there is a relationship between the levels of D-dimer and fibrinogen in blood samples given by people who have experienced stroke and their outcomes.

What Else Can You Learn?

In this blog post, the symptoms and causes of stroke are described. The process of blood clotting and biomarkers are also discussed.

What Is Stroke?

A stroke is a serious medical emergency that can be life-threatening. The oxygen and nutrients that brain cells need to function properly are carried around the brain by the blood. A stroke happens when the blood supply to part of the brain is cut off or reduced, and the brain cells can no longer get all the oxygen and nutrients they need. They quickly begin to die (within minutes), which can cause brain damage and other complications.

There are two types of stroke:

  • Ischemic strokes are the most common (around 85% are this type) and are caused when blockages in blood vessels cut off or reduce the blood supply to part of the brain. The blockages may either develop in the blood vessels inside the brain or develop elsewhere in the body and travel to the brain via the bloodstream.
  • Hemorrhagic strokes are less common (around 15% are this type) and are caused by a blood vessel that supplies blood to the brain rupturing, causing bleeding in or around the brain. As well as causing brain cells to die, the bleeding causes irritation and swelling, and pressure can build up in surrounding tissues. This can lead to more brain damage.

As well as the two types of stroke described above, some people experience “mini-strokes” called transient ischemic attacks (TIAs). A TIA is essentially a stroke caused by a temporary, short-term blockage, so the symptoms do not last long. Once the blockage clears the symptoms stop. Although someone who has a TIA may feel better quickly they still need medical attention as soon as possible, because the TIA may be a warning sign that they will have a full stroke in the near future.

What Are the Symptoms of Stroke?

If someone is having a stroke they need urgent treatment. Don’t hesitate to call for medical help. The quicker they receive treatment the less brain damage is likely to occur. The main symptoms of stroke can be remembered using the word “FAST”.

  • Face: The person may be unable to smile, or one side of their face or their mouth may have dropped.
  • Arms: The person may not be able to lift both arms and keep them there.
  • Speech: The person may not be able to talk or their speech may be slurred; they may also have difficulty understanding what you are saying.
  • Time: Call for medical help immediately if the person has any of these signs or symptoms.

Other symptoms of stroke include sudden severe headache, weakness or numbness on one side of the body, confusion or memory loss, dizziness or a sudden fall, and/or blurred vision or loss of sight (in one or both eyes).

What Are the Effects and Outcomes of Stroke?

The effects of stroke vary from one person to another and depend on the type of stroke, its severity, whether this is the first stroke they’ve experienced, and which part of the brain is affected. Different parts of the brain have different functions, so the effects of stroke in the part of the brain that controls movement and speech can be very different from those in the part that controls breathing and heart functions. Predicting the outcomes of stroke is difficult. Although some people who survive a stroke recover well, others can be left with disabling problems that they never recover from. These can include physical and communication problems; extreme tiredness and fatigue; emotional, behavior, and memory changes; and thinking problems. Many factors are associated with the outcomes of people who have a stroke, including age, sex, the severity of the stroke, and whether or not they have other conditions such as atrial fibrillation or diabetes. It is hoped that the development of new ways to predict stroke outcomes can help to improve the outcomes of patients and maximize their recovery.

How Can We Predict the Outcomes of Stroke?

Studies have shown that the combination of a number of biomarkers could improve the accuracy of predicting the outcomes of stroke. Biomarkers are measurable characteristics, such as molecules in your blood or changes in your genes, that indicate what is going on in the body. They can indicate that your body is working normally, the development or progress of a disease or condition, or the effects of a treatment. Because ischemic stroke is caused by blockages in blood vessels, components of the system that regulates the process of blood clotting may be useful in stroke outcome prediction.

How Does the Blood Clotting System Work?

The blood clotting system (known as the “coagulation” system) plays an essential role in the body’s ability to heal. The system is activated when the lining of a blood vessel is damaged and regulates the process by which liquid blood changes to a gel, forming a blood clot, which stops the bleeding and starts the repair process. The process by which blood clots are formed involves a number of proteins and platelets (a type of blood cell). When a blood vessel is damaged, platelets cluster at the site of damage and bind together to seal it. The platelets have receptors on their surfaces that bind a molecule called thrombin, which converts a soluble protein called fibrinogen into a different form called fibrin. Fibrin can form long, tough, insoluble strands that bind to the platelets and cross-link together to form a mesh on top of the platelet plug. Lots of different molecules are involved in this process, but platelets and fibrin are major players.

While it is important that blood can clot when needed, it is also essential that the process is regulated so that unnecessary blood clots can be broken down. Plasminogen plays an important role in this. It circulates in the bloodstream in a “closed” (inactive) form. When it binds to a blood clot it opens up, enabling enzymes to cleave (split) it to form a protein called plasmin. Plasmin is able to dissolve fibrin blood clots by cleaving fibrin and many other proteins found in blood plasma (the liquid component of blood that remains when all the blood cells are removed). One of the products when plasmin breaks down fibrin is called D-dimer, which is often measured in blood samples because it indicates whether or not the blood clotting system has been activated. Increased levels of D-dimer and fibrinogen in blood plasma have been reported to be linked to damage to the blood–brain barrier (this regulates the molecules in the blood that can enter the central nervous system).

What Did This Study Show?

The authors investigated whether levels of D-dimer and fibrinogen in the blood are associated with stroke outcomes by conducting a meta-analysis. A meta-analysis is a type of research study that statistically analyzes the results of a number of studies that have been conducted independently but that have looked at the same research question. In this study, the authors analyzed 52 studies that included 21,473 patients who had had a stroke. The results showed that high D-dimer and fibrinogen levels in blood samples given by the patients were significantly associated with poor outcomes such as death after stroke, having another stroke, and early neurological deterioration (caused by cells in the nervous system stopping working or dying, affecting many of the body’s activities). This indicates that plasma D-dimer and fibrinogen levels could be used to screen patients for the likelihood of adverse outcomes after stroke, identifying those at higher risk of poor outcomes so that they can benefit from close monitoring and potentially also preventive treatment. The authors hope that combining D-dimer and fibrinogen as biomarkers in clinical follow-up after stroke may help to improve the effectiveness of treatment strategies and enable them to be tailored to meet the needs of individual patients.

Effect of Hearing Loss on Cognition and Cognitive Reserve

What Is the Main Idea?

Subjective cognitive decline (SCD) is the self-reported experience of worsening or more frequent memory loss or confusion without clinical evidence for it. In the open-access research article “The Effect of Hearing Loss on Cognitive Function in Subjective Cognitive Decline”, published in the journal Dementia and Geriatric Cognitive Disorders, the authors investigate whether there is a relationship between hearing loss and cognitive function in people with SCD.

What Else Can You Learn?

In this blog post, dementia and particularly SCD are described. Cognition and the concept of cognitive reserve are also discussed.

What Is Dementia?

The term dementia does not describe a single, specific disease. It covers a wide range of conditions, including Alzheimer’s disease and vascular dementia. People with dementia may experience declines in memory, language, problem-solving, attention, reasoning, and other thinking skills to the extent that they have effects on normal daily activities. Behavior, feelings, and relationships can also be affected. Although dementia mainly occurs in older adults (i.e., people aged over 65 years), it is not a part of normal aging and is caused by abnormal changes in the brain. For example, Alzheimer’s disease is believed to be caused by two proteins, beta-amyloid and tau, forming plaques around brain cells that make it hard for them to stay healthy and communicate with each other. In contrast, vascular dementia develops when blood flow to parts of the brain is blocked or reduced, preventing them from getting all the oxygen and nutrients they need to function properly.

What Is Subjective Cognitive Decline (SCD)?

Most countries now have rising life expectancies, with the World Health Organization (WHO) estimating that 1 in 6 people in the world will be aged 60 years or older by 2030. An aging global population and increased understanding of and information about dementia have led to increasing numbers of people reporting changes in cognition and seeking medical help. SCD is the name given when a person self-reports the experience of worsening or more frequent memory loss or confusion (“subjective” means “based on or influenced by personal feelings or opinions”) over the last 12 months. However, there is no objective evidence of cognitive decline, i.e., the results of standardized cognitive tests for mild cognitive impairment (MCI) and Alzheimer’s disease do not indicate that there is a problem. Dementia is a continuum, progressing from MCI to mild, moderate, and eventually severe dementia, and the boundary between SCD and MCI has not been defined clearly. Some individuals report SCD as early as 5 years before MCI is detected by objective test results. It is thought that improved understanding and management may reduce the future effects of SCD.

What Is Cognition and How Is It Assessed?

Cognition is an umbrella term that describes a combination of processes that take place in the brain, such as the ability to learn, remember, and make judgements based on experience, thinking, and information from the senses. These processes affect every aspect of life and our overall health, including how we form impressions about things, fill in gaps in knowledge, and interact with the world. A variety of tests have been developed to assess cognitive skills. These include the Rey Complex Figure Test, in which participants are asked to reproduce a complicated line drawing, and the Stroop Color Word Test, in which participants are asked to view a list of words printed in colors that differ from the colors that the words describe (for example, the word “blue” might be printed in yellow ink) and then name the color the word is printed in.

How Is Hearing Loss Thought to Be Linked to Cognitive Decline?

Research studies have identified a link between hearing loss and dementia, and some suggest that hearing loss may be a major risk factor for its development. This may be partly due to something called “cognitive reserve”, which is the idea that people build up a reserve of cognitive abilities during their lives, and that this reserve can protect them against some of the cognitive decline that can happen as the result of aging or disease. In other words, some brains keep working more efficiently than others despite them experiencing similar amounts of cognitive decline and/or damage. It has been suggested that cognitive reserve may be affected by hearing loss because the cognitive resources (the capacity that a person has to carry out tasks and process information) of people with hearing loss are under greater demand than those of people unaffected by hearing loss. This is due to the increased effort that it takes for people with hearing loss to process auditory (relating to the sense of hearing) information.

What Did the Study Show?

The authors investigated whether hearing loss (as assessed by audiometry, which measures the range and sensitivity of a person’s hearing) affects cognitive function in people with SCD. Participants in the study were aged 60 years or older and were grouped according to whether they had normal hearing or bilateral (affecting both sides) hearing loss. They were then assessed using a series of cognitive tests that evaluate attention, language, visuospatial functioning (the visual perception of the spatial relationships of objects), memory, and executive functions (responsible for processes like planning, focused attention, self-control, and juggling multiple tasks). Participants also gave blood samples so that particular biomarkers could be measured and had magnetic resonance imaging (MRI) scans to look for differences in areas of the brain.

Although there were no differences between the two groups regarding biomarkers and other tests of cognition, the group with hearing loss performed worse in the Stroop Color Word Test. It is not clear why, but the authors suggest that it may be linked to the robustness of the Stroop test and its ability to measure executive function, particularly aspects to do with control of attention. When a person takes the Stroop test, they need to be able to selectively control their attention, so that they can suppress the automatic response of reading the word presented and instead focus on naming the color that the word is written in. If the idea that cognitive reserve is affected by hearing loss is correct, it might explain why the group with hearing loss did worse in the Stroop test. Another possibility is that people with SCD and hearing loss may participate in cognitive and social activities less often than people with unaffected hearing, reducing their cognitive reserve. High-level engagement in social activity and having large social networks is known to be linked to better cognitive functioning in later life.

The authors also found that people in the hearing loss group had smaller volumes of grey matter, one of the main components of the brain, in four brain regions. One of these regions plays a major role in memory. However, it is unclear whether there is a causal link between hearing loss and reduced volumes of grey matter, and it may be more likely that they both result from a common cause, such as accelerated aging in some individuals.

How Can You Increase Your Cognitive Reserve?

Keeping your brain and body healthy and active is the best way to increase your cognitive reserve. Activities that engage your brain, such as learning a language or new skill or solving puzzles, as well as high levels of social interaction, are known to reduce your risk of developing dementia. However, doing the same type of puzzle every day isn’t enough. Novelty and variety are needed to stimulate the brain most effectively, even to the extent that deliberately taking routes to places that differ from the ones that you normally take can help. Regular physical activity, not smoking, and a healthy diet are also important.

Take-Home Message

There may be a link between hearing loss and cognition in people with SCD. People with SCD may be at increased risk of developing dementia in the future. As a result, it is important that people with SCD report any signs of hearing loss to healthcare practitioners promptly so that it can be managed effectively to reduce the risk of further cognitive decline. Importantly, everyone can take steps to increase their cognitive reserve and investing in making small, positive lifestyle changes now may pay dividends in the future.

Direct-to-Consumer Genetic Testing: Advantages, Disadvantages, and Ethical Concerns

What Is the Main Idea?

The direct-to-consumer genetic testing (DTC GT) market has grown rapidly over the last 20 years. In the open-access research article “Knowledge and Attitudes about Privacy and Secondary Data Use among African-Americans using Direct-to-Consumer Genetic Testing”, published in the journal Public Health Genomics, the authors describe the personal experiences and perceptions about DTC GT of a group of people who have purchased and used such tests, and their potential implications.

What Else Can You Learn?

In this blog post, gene variations and genetic testing in general are discussed. The advantages and disadvantages of DTC GT and ethical concerns are also described.

What Is Direct-to-Consumer Genetic Testing (DTC GT)?

The DTC GT market has expanded rapidly over the last 20 years as the costs of genetic testing and sequencing have fallen, and was estimated to be worth USD 1.24 billion in 2019. DTC GT involves the use of genetic tests that are marketed directly to consumers without a need for the involvement of a healthcare professional. The tests can be ordered online or by post, and the consumer completes the test at home (usually by rubbing a swab over the inside of their cheek or providing a saliva sample) before sending it away for analysis. The results are then sent by post, given via a telephone call, or accessed via a secure website or app.

How Does DTC GT Work?

Your DNA (deoxyribonucleic acid) carries the genetic information for the growth, development, and function of your body. It is made up of two long chains of units called “nucleotides” that coil around each other to form a double helix. The term “gene” describes a short section of DNA, whereas the term “genome” describes the complete set of genetic material in a cell or whole organism. Some genes have specific functions, like coding for proteins, but others don’t. The human genome is currently estimated to be made up of around 20,500 protein-coding genes, and the number of non-protein-coding genes may be greater.

Genetic testing looks for variations in the DNA sequences that make up genes. Gene variations can be inherited or can occur if a permanent change to the DNA sequence happens during a person’s lifetime. There are different types of variations that can occur in DNA, including the replacement of one nucleotide with another (known as a “substitution”) and the deletion and/or insertion of at least one nucleotide. If a substitution is found in at least 1% of the global population it is classified as a SNP (this stands for “single-nucleotide polymorphism” and is pronounced “snip”). Over 600 million SNPs have been reported to date, and although most SNPs have no effect on people’s health, some are associated with the risk of developing disease.

Many DTC GT companies use a method called SNP-chip genotyping, which looks for the presence or absence of SNPs and nucleotide insertions and deletions. Other DTC GT companies use genome sequencing, which sequences a person’s entire genetic code and identifies variants that are present. Some companies provide the consumer with “raw”, uninterpreted data that must be interpreted by a third party, whereas others provide some form of health information based on their interpretation of the results. For example, some may combine results relating to a group of variants to place someone in a risk category, while others notify the consumer if they have tested positive for specific variants implicated in a disease or response to a drug.

Why Do People Opt for DTC GT?

Some of the main advantages of DTC GT are that it is often less expensive than testing conducted via private healthcare providers, and it doesn’t require the approval or involvement of a healthcare professional or insurance provider. The results are also delivered directly to the consumer, so they don’t appear on the person’s medical or insurance records.

The process of collecting a sample for testing is usually quick and non-invasive, and can be done in the user’s home. Many users opt for DTC GT in the hope that they will get clear information about their future health and may feel that they are being proactive about their wellbeing. Some may feel that they are contributing to knowledge that may help others through future research. Consumers may also be curious to understand more about their ancestry.

What Are the Potential Disadvantages of DTC GT?

DTC GT often only takes a superficial look at particular genes and isn’t designed to diagnose genetic conditions. It is generally good at detecting common genetic variants, but if a rare variant is detected, the result can often be a false positive (an error where the test result is positive but should actually be negative). This can be extremely worrying for consumers if they think they have been identified as having a disease-linked genetic variation.

Equally, a test result could be a false negative (an error where the test result is negative but should actually be positive), and receiving a true negative result doesn’t necessarily mean there is no chance of developing that condition. It may just be that the company used doesn’t test the full set of possible genes that can be tested for that condition, and new variants and their implications are also being discovered all the time. Some companies offer consumers someone to talk to about their results, but consumers may struggle to access qualified advice if they receive a result that causes them to worry. Results can also be obtained that affect the whole family or demonstrate that family relationships are different from what was expected.

It is important that consumers understand the limitations of genetic testing. Finding a disease-linked genetic variation doesn’t automatically mean that someone will go on to develop that disease. A person’s health depends on a wide range of factors, not just their genetics. Environmental factors (such as exposure to harmful substances or access to healthcare), lifestyle choices and family history all play a role, and the health advice given to a person is unlikely to change as a result of them undertaking DTC GT.

Consumers also need to bear in mind that there may be implications regarding their health insurance if they undertake DTC GT and get a result that suggests genetic susceptibility to a disease. Some companies might not be clinically accredited (so the results might not be credible or backed by robust data), and there is currently little regulation of companies. As a result, the onus is on the consumer to assess the quality of the service before going ahead with testing. There may be other issues concerning whether the consumer is asked to provide proper informed consent, and some companies might collect, store, sell, or undertake research using the genetic data that they obtain (secondary data use is defined as the use of consumer data for purposes other than producing a report on the consumer’s health and/or ancestry). Law enforcement agencies may also request access to the information that the companies hold during their investigations. Risks associated with the use of secondary data and the reidentification of individuals from their data include civil lawsuits, forensic investigations, problems with immigration, discrimination, and the allocation of health resources.

What Did This Study Show?

The authors used semi-structured interviews to investigate the knowledge of, and attitudes toward, DTC GT among a group of 20 people in the USA who self-identified as Black or African-American. In particular, the participants were asked about their motivations for testing and their views on secondary data use and privacy. The study showed that the participants were generally positive about DTC GT but had little concrete knowledge about secondary data use practices by DTC GT companies. Few had read the company privacy policies in detail, and most expressed concerns about the privacy of their information. Half of the group surveyed did not know whether they had the option to opt out of secondary data use by the DTC GT company. Most participants also expressed the opinion that informed consent, following the provision of clear information prior to testing about potential future uses of data, should be required for any secondary use, with the option to opt out of any and all potential future uses. The general view was that DTC GT companies need to improve transparency. In addition, when the authors compared their findings with those of a previous study, which had surveyed European-Americans who had undergone DTC GT, they found differences in themes such as a moral duty to participate in research to redress historical underrepresentation in genetic studies, community considerations, and concerns about racial inequality.

Take-Home Message

DTC GT companies can do more to draw on the consumer experience and improve their consent processes and information, particularly regarding the clarity and accessibility of the language used in their privacy policies. However, the onus remains on the consumer to be informed. People considering DTC GT should look for balanced advice from reputable organisations before making a decision, and should speak to their healthcare professional if they have concerns regarding the possibility of a genetic condition in their family. If they decide to go ahead with DTC GT, consumers should check the privacy policy carefully and be clear on what help the company will provide regarding the interpretation of results.

How Might the Immune System Be Involved in the Development of Schizophrenia?

What Is the Main Idea?

Schizophrenia is a long-term (chronic) mental health condition. In the research article “Autoantibodies against Central Nervous System Antigens and the Serum Levels of IL-32 in Patients with Schizophrenia”, published in the journal Neuroimmunomodulation, the authors investigate whether dysregulation of the immune system may play a role in the development of schizophrenia.

What Else Can You Learn?

In this blog post, schizophrenia and psychosis are discussed, as well as the immune system in general and how disorders can be caused by autoimmune responses.

What Is Schizophrenia?

Schizophrenia is associated with a state of being called “psychosis”, in which a person loses some contact with reality and may be unable to distinguish their own thoughts and ideas from what is real. Key symptoms of active schizophrenia include delusions (where someone has strong beliefs that are not shared by others, for example that someone is trying to communicate important messages to them or that there is a conspiracy against them), hallucinations (where a person experiences things that only exist inside their mind, such as seeing, hearing, smelling, or feeling things that aren’t real), and muddled thoughts as a result of them. People with active schizophrenia may also have disorganized speech; lose interest in their normal social and everyday activities and personal hygiene; have difficulty remembering things, understanding information, and making decisions; have difficulty focusing on things; and show a lack of emotion in their face or voice. Although people with schizophrenia are sometimes portrayed as having a split personality or multiple personalities, this is not a feature of the condition.

How Common Is Schizophrenia?

It is estimated that schizophrenia affects around 1 in 300 people worldwide (0.32% of the global population). Although it is believed to affect men and women equally, initial symptoms tend to appear earlier in men (in their late teens and early 20s) than in women (in their 20s and early 30s). Some people with schizophrenia will have episodes throughout their lifetime while others will have minimal symptoms.

What Causes Schizophrenia?

The exact causes of schizophrenia aren’t yet known but research suggests that it is likely to be caused by a combination of factors. Possible environmental factors (external influences that can affect an individual’s health and wellbeing) include increased urbanization, cannabis use in adolescence, infections, and traumatic life experiences. It is believed that some people may be more susceptible to developing schizophrenia, which suggests that genetic factors (things that are inherited from our parents) may be involved. Some research studies have suggested that abnormal or impaired regulation (dysregulation) of the immune system may also play a part.

What Is the Immune System?

The immune system protects your body from things that could make you ill and is divided into two branches: innate (non-specific) and adaptive (specific). The innate immune system defends against harmful germs and substances that enter the body. Key components are inflammation (which traps things that might be harmful and begins to heal injured tissue) and white blood cells (which identify and eliminate things that might cause infection; they are also called “leukocytes”). The adaptive immune system makes antibodies and involves specialized immune cells, which together enable the body to fight specific germs that it has previously come into contact with, sometimes providing lifelong protection.

How Might the Immune System Be Linked to Schizophrenia?

The term “antigen” describes anything that causes a response by the immune system and can include chemicals, or molecules on the surfaces of bacteria and viruses. The cells in your body also have molecules on their surfaces, but the immune system usually recognizes them as “self-antigens”; in other words, the immune system knows that they are not “foreign” and should not be removed. However, sometimes the body’s immune system starts to recognize self-antigens as foreign ones and begins to attack them. When this happens, it is described as an “autoimmune” response and can result in the destruction of normal, healthy body tissue, or changes in the function or abnormal growth of an organ. More than 80 medical conditions, all very different from one another, are known to be caused by autoimmune responses. They include type 1 diabetes, rheumatoid arthritis, multiple sclerosis, and celiac disease.

Some research studies have suggested that dysregulation of the adaptive and innate immune systems may contribute to the development of schizophrenia. Some have reported that levels of cytokines, a type of protein that has an effect on the activity of the immune system, are higher during acute (short-term, or beginning and worsening quickly) schizophrenic episodes and lower when people are receiving treatment. The production of autoantibodies and inflammation may also be involved. Inflammation has been shown to cause a type of cell in the central nervous system called microglia to migrate into blood vessels in the brain. If the inflammation occurs over a prolonged period of time, the microglia in the blood vessels can disrupt the blood–brain barrier. The blood–brain barrier tightly regulates which molecules and cells can move from the body’s general bloodstream into the brain, and plays an important role in preventing infections from developing in it.

What Did This Study Find?

In this study, the authors investigated whether the levels of a cytokine called interleukin-32 (IL-32) differ between people with or without schizophrenia. IL-32 plays an essential role in activating the adaptive and innate immune responses, and upregulates inflammation by causing cells in the immune system to produce cytokines that increase inflammation. They found that levels of IL-32 in blood samples from people with schizophrenia were significantly higher than those in samples from a non-schizophrenia control group, and that levels of other cytokines that promote inflammation were also increased. Increased levels of IL-32 have also been reported in people with autoimmune diseases such as Graves’ disease and rheumatoid arthritis. However, the potential role of autoantibodies attacking self-antigens in the central nervous system in the development of schizophrenia is poorly understood. The authors went on to investigate whether autoantibody levels in people with schizophrenia were higher than in the non-schizophrenia group and found that levels of autoantibodies against an enzyme called GAD (glutamic acid decarboxylase) were significantly increased. GAD is involved in the production of a neurotransmitter (a signaling molecule that transmits a signal from one nerve to another) called gamma-aminobutyric acid (GABA), and it is known that GABA deficiency in the central nervous system can cause motor and cognitive problems. Dysfunctional neurons that rely on GABA have been seen in the brain in several neurological disorders (which affect the brain and the nerves found throughout the body). In one study, people with psychosis were found to be more than twice as likely to have GAD autoantibodies as people in the general population.

Take-Home Message

It is possible that dysregulation of the immune system plays a role in the development of schizophrenia. However, research in this area is at an early stage and more research is needed to improve our understanding of how schizophrenia develops and can be treated most effectively.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

How Regular Exercise Can Benefit People Receiving Maintenance Hemodialysis

What Is the Main Idea?

Maintenance hemodialysis (HD) is a way of treating people with end-stage renal disease (ESRD) after they have experienced kidney failure. In the open-access review article “Exercise in Dialysis: Ready for Prime Time?”, published in the journal Blood Purification, the authors discuss the benefits of exercise for people receiving maintenance HD and review how it can be more widely incorporated into clinical care.

What Else Can You Learn?

In this blog post, HD in general and the advantages of regular exercise, particularly for people receiving maintenance HD, are discussed.

What Is Maintenance HD?

The kidneys do several important jobs in the body, including helping to control blood pressure, making red blood cells, and removing waste products and extra water from the body to make urine. If a person’s kidneys stop working (known as “kidney failure”), they will need kidney replacement therapy, in the form of dialysis or kidney transplant, to survive. Kidney failure treated in this way is referred to as ESRD. If a person is treated with maintenance HD, they usually have HD two or three times per week, often in a healthcare setting. During the HD process, the person’s blood leaves their body, goes through a filter in a machine that removes waste products and excess water, and the purified blood is then returned to their body.

Why Is Exercise Important for People Receiving Maintenance HD?

Regular exercise is important for everyone. Among other benefits, people who exercise regularly report that they sleep better and have more energy and muscle strength. Having ESRD has been shown to decrease a person’s level of physical activity and quality of life, partly because people with ESRD are likely to have other medical conditions (termed “comorbidities”). These may or may not be linked to their ESRD, and may contribute to them being physically inactive. However, it is widely acknowledged that people receiving maintenance HD may benefit from increasing their levels of physical activity. Regular exercise may benefit people with ESRD by improving their heart function, muscle strength, and blood pressure control, reducing the risk of diabetes, and helping to prevent anxiety and depression. The benefits of regular exercise described by the general population are similarly reported by people who receive maintenance HD. They also perceive that they have better quality of life than those who don’t exercise because they are more able to do the things that they want and have to do in their daily lives (known as “physical function”).

What Is the Evidence that Exercise Can Benefit People Receiving Maintenance HD?

A number of research studies have sought to determine how exercise can benefit patients receiving maintenance HD. Most of these have involved patients using exercise bikes to cycle during HD sessions (termed “intradialytic cycling”), while some include at-home walking schedules and/or light resistance training. Although they have reported that exercise can improve physical function, cardiovascular health, and quality of life, many of the studies have only looked at small numbers of patients. Some have also suggested that the amount and intensity of the exercise that the participants are asked to do may not be enough for there to be significant improvements in their health or quality of life, and that this may be partly due to the comorbidities that they may have. Fatigue, muscle cramping, poor physical function, depression, and a lack of motivation, possibly in addition to serious comorbidities, have all been suggested to be barriers to exercise for patients receiving HD.

Nonetheless, the recently published CYCLE trial has shown that 6 months of intradialytic cycling improved the structure and function of the heart of patients receiving maintenance HD. This was shown by magnetic resonance imaging, with reductions seen in arterial stiffness and the mass of the left ventricle (one of the chambers in the heart), which are both associated with increased risk of a range of cardiovascular problems. Several other recent studies have reported that intradialytic exercise can improve a variety of patient-reported outcomes in people receiving maintenance HD, including reduced cramping, fatigue, and restless leg syndrome.

How Can People Receiving Maintenance HD Exercise Safely?

It is important that people receiving maintenance HD consult their healthcare team before starting a new program of exercise. The National Kidney Foundation suggests walking, swimming, aerobic dancing, and cycling (on an exercise bike or outside) to be good options because they involve continuously moving large muscle groups. Low-level strengthening and stretching exercises may also be good options, although heavy lifting should be avoided. As with any new exercise program, starting gently and building up from there is best. As little as 10 minutes of exercise on 3 non-consecutive days per week can have a positive effect. Importantly, exercise should be paused until the healthcare team can be consulted if the person’s dialysis or medicine schedule or physical condition changes.

How Is Exercise Incorporated into HD Care?

Although there are examples of exercise programs for people receiving maintenance HD in several countries, including Portugal, Germany, Mexico, and parts of Canada, implementation of exercise programs by healthcare providers worldwide remains low. It has been reported that less than 10% of dialysis centers offer exercise programs. This has been attributed by some to nephrologists possibly not feeling confident in their abilities to discuss the topic with their patients, and to patients feeling that they don’t have the knowledge to exercise safely. The authors of the review article suggest that lifestyle interventions like exercise programs could be incentivized in HD centers if funding policies were changed to reward improvements in quality of life metrics as well as biochemical factors. Altering the physical environments in HD clinics to be more inspiring and encouraging could also help, particularly if exercise equipment were added to a designated space for exercise.

Take-Home Message for Patients

Exercise is as important for people receiving maintenance HD as anyone else and can have a wide range of benefits, including on their quality of life. People receiving maintenance HD who are interested in increasing their activity levels should consult their clinical team for advice on how to do this safely and consider accessing support and guidance from specialist organizations.

Is Warfarin Treatment Safe for Patients with Atrial Fibrillation and End-Stage Renal Disease Who Transition to Hemodialysis?

What Is the Main Idea?

Hemodialysis is a way of treating people with end-stage renal disease (ESRD) after they have experienced kidney failure. In the free-access research article “Warfarin Use, Stroke, and Bleeding Risk among Pre-Existing Atrial Fibrillation US Veterans Transitioning to Dialysis”, published in the journal Nephron, the authors discuss whether it is safe for patients with atrial fibrillation and ESRD to continue to take warfarin, a medication that reduces the risk of blood clots forming, while they transition to regular dialysis treatment.

What Else Can You Learn?

In this blog post, atrial fibrillation and ESRD are discussed, as well as anticoagulant treatments and what they are used for.

What Is End-Stage Renal Disease (ESRD)?

The kidneys help to control blood pressure and make red blood cells, and remove waste products and extra water from the body to make urine. If a person’s kidneys stop working (known as “kidney failure”) they will need kidney replacement therapy, in the form of dialysis or kidney transplant, to survive. Kidney failure treated in this way is referred to as ESRD. During the hemodialysis process, the person’s blood leaves their body, goes through a filter in a machine that removes waste products and excess water, and the purified blood is then returned to their body.

What Is Atrial Fibrillation?

A normal resting heart rate should be between 60 and 100 beats per minute and be regular. Atrial fibrillation is a heart condition that causes a person’s heart rate to be irregular (known as arrhythmia) and often very fast. The heart is divided into four “chambers”: two at the top called atria and two at the bottom called ventricles. Atrial fibrillation occurs if the atria start to beat irregularly in a way that is out of sync with the ventricles, causing the heart to be less efficient. Although some people with atrial fibrillation do not experience any symptoms, others may experience dizziness, heart palpitations (fluttering or irregular heartbeat), chest pain, shortness of breath, tiredness, and weakness. Importantly, atrial fibrillation can lead to blood clots in the heart and increases the risk of stroke, heart failure, and other heart-related complications.

How Is Atrial Fibrillation Treated?

Although not usually life-threatening, atrial fibrillation often requires treatment. Approaches to control the rate or rhythm of the heart include medications, cardioversion (where a controlled electric shock is given to the heart to restore a normal rhythm), and catheter ablation (where radiofrequency energy is used to destroy the area in the heart that’s causing the abnormal rhythm), which is often followed by a person having a pacemaker fitted to help their heart beat regularly. Because of the increased risk of stroke, people may also receive a type of medication called an anticoagulant.

What Is an Anticoagulant?

Coagulation (blood clotting) is the process by which blood clots are formed to stop bleeding. Although blood clots are an essential response to injury, for example preventing too much blood from being lost via a wound, coagulation can become a problem if blood clots form inside the body and stop blood from flowing through blood vessels, potentially starving the affected part of the body of oxygen. Depending on where a blood clot forms, this can lead to serious problems such as heart attack, deep vein thrombosis, and stroke (or mini-stroke, which is also called a transient ischemic attack). Although they are sometimes called “blood thinners”, anticoagulants don’t thin the blood. They work by reducing the blood’s ability to clot. There are three main types of anticoagulant: medicines that prevent the liver from processing vitamin K in a way that enables it to help clot the blood (these are called vitamin K antagonists), direct oral anticoagulants (also known as DOACs), and low-molecular-weight heparins. The most commonly prescribed anticoagulant is warfarin, which is a vitamin K antagonist.

Is It Safe for Patients with Atrial Fibrillation with ESRD Who Transition to Hemodialysis to Take Anticoagulants?

Patients with atrial fibrillation are commonly treated with anticoagulants to reduce their stroke risk, and patients with ESRD are at even greater risk of stroke. However, the decision of whether or not someone should be treated with an anticoagulant is usually weighed against the person’s risk of bleeding, which is also more common in patients receiving hemodialysis. Several risk scores have been developed to help healthcare practitioners assess this delicate balance, of which the CHA2DS2-VASc score for stroke risk and the HAS-BLED score for bleeding risk are the most widely used. However, neither has been fully assessed for validity in patients receiving dialysis, and it is unclear whether it is safe for patients with atrial fibrillation to continue anticoagulation treatment at the time of transition to hemodialysis. It is also unclear whether patients who are about to transition to hemodialysis have stroke and bleeding risks similar to those of patients who have received hemodialysis for years, or of patients with chronic kidney disease who do not receive dialysis.

In this study, the authors looked at how accurate the CHA2DS2-VASc and HAS-BLED scores are in evaluating the stroke and bleeding risks of patients with atrial fibrillation. They also compared the risks of stroke and bleeding for patients with atrial fibrillation who transition to hemodialysis to assess whether they are likely to benefit from anticoagulation treatment with warfarin.

What Were the Findings of the Study?

The authors studied data relating to veterans of the United States military. Of the 28,620 veterans who had atrial fibrillation before they were transitioned to hemodialysis, 19% were treated with warfarin in the 6 months before transition while 81% didn’t receive any anticoagulation treatment. Of those receiving warfarin at the time of transition, 37% discontinued warfarin treatment after transition. Although the initial analyses showed that the risks of bleeding and stroke were similar between the groups taking or not taking warfarin, the authors went on to use a statistical approach called competing risk analysis to consider the effect of mortality (death). This time, the risk of stroke was 44% greater after transition for those receiving warfarin and the risk of bleeding increased by 38%.

Overall, the study suggests that warfarin may not lower the risk of stroke in patients with atrial fibrillation who transition to hemodialysis. Importantly, once mortality was taken into account, the authors found that patients with atrial fibrillation who transition to hemodialysis while receiving warfarin may have significantly higher bleeding and stroke risks than those who do not receive warfarin. The authors suggest that warfarin treatment should be re-evaluated at the time of transition to hemodialysis and should not be used for primary stroke prevention in people with atrial fibrillation who are receiving hemodialysis. However, newer anticoagulants like direct oral anticoagulants (DOACs) may be safer than warfarin, and studies are needed to assess whether patients with atrial fibrillation who transition to dialysis may benefit from switching treatment to them.

Take-Home Message for Patients

Warfarin treatment to reduce the risk of stroke in people with atrial fibrillation who are transitioning to hemodialysis may not be as safe as treatment with newer anticoagulant medications. People who are concerned should consult their clinical team.

How Can Telehealth Aid Genetic Testing for Cancer?

What Is the Main Idea?

Some people have mutations (changes) in their genes that increase their risk of developing particular types of cancer. In the open-access research article “Evaluating the Effectiveness of a Telehealth Cancer Genetics Program: A BRCA Pilot Study”, published in the journal Public Health Genomics, the authors describe the use of a telehealth platform for BRCA education and testing in people of Ashkenazi Jewish descent (PAJD) in the USA.

What Else Can You Learn?

In this blog post, tumor suppressor genes and their roles in the body are discussed, along with how telehealth can be used to aid genetic testing and education, and the role of genetic counseling.

What Is BRCA?

BRCA (pronounced “bra-ka”) is an abbreviation of BReast CAncer gene (genes are short sections of DNA that carry the genetic information for the growth, development, and function of your body, often in the form of instructions to make proteins). Everyone has two copies of two types of BRCA gene, called BRCA1 and BRCA2. They are both tumor suppressor genes.

What Are Tumor Suppressor Genes?

Tumor suppressor genes code for a type of protein, called tumor suppressor proteins, that help to control cell growth. They tend to play one of three roles: stopping cells from dividing and producing new cells, repairing damaged DNA, or causing damaged cells to be broken down through a process called apoptosis (this is also known as programmed cell death and is a normal part of development and aging; it removes cells that have become damaged or that are no longer needed). If a tumor suppressor gene gains a mutation (a change in the DNA code), the protein that it codes for may no longer be produced or may not work properly. As a result, the cell may start to grow and divide uncontrollably, which can eventually lead to the development of cancer.

How Are BRCA1 and BRCA2 Linked to Cancer?

Both BRCA1 and BRCA2 code for proteins that help repair damaged DNA. Everyone gets one copy of each gene from their mother and another from their father. If someone has a mutation in one of the BRCA genes that stops it from working properly, they will have a higher risk of developing cancer over their lifetime than someone who does not have a mutated BRCA gene. In particular, BRCA mutations are linked to breast and ovarian cancer. Although most cases of breast and ovarian cancer are thought to be sporadic (i.e., they develop in people who do not have a family history of that cancer or a DNA mutation that is known to increase their risk of developing it), 5–10% of cases are inherited. Of these cases of inherited breast and ovarian cancer, around 60% are caused by mutations in BRCA1 and BRCA2. All people with a BRCA mutation are at increased risk of developing breast and pancreatic cancer, and melanoma. Women and men are also at increased risk of developing ovarian and prostate cancer, respectively.

Are Harmful BRCA Mutations More Common in Some Populations than Others?

Individuals in some populations are more likely to have harmful BRCA mutations than others, and different racial/ethnic populations also tend to have different types of mutations. Although the exact prevalence of BRCA1 and BRCA2 mutations that can lead to cancer in the general population is not known, it’s estimated to be around 1 in 400 (0.2–0.3%). Norwegian, Dutch, and Icelandic peoples are known to have common mutations, as are people of Ashkenazi Jewish descent (PAJD). The prevalence of BRCA mutations in PAJD is around 1 in 40 (around 2.5%, which is 10 times greater than that of the general population), and in the USA, the mutations tend to be one of three types: two in BRCA1 and one in BRCA2. These are described as “founder mutations”: mutations that occur at high frequency in a group that is, or was, geographically or culturally isolated.
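The prevalence figures above come down to simple ratio arithmetic, which can be sketched as follows (illustrative calculation only, using the approximate 1-in-400 and 1-in-40 figures quoted above):

```python
# Back-of-the-envelope comparison of the BRCA mutation prevalence
# figures quoted above (illustrative arithmetic, not study data).

general_population = 1 / 400   # ~0.25%, within the quoted 0.2-0.3% range
ashkenazi_descent = 1 / 40     # ~2.5%

fold_difference = ashkenazi_descent / general_population

print(f"General population: {general_population:.2%}")
print(f"Ashkenazi Jewish descent: {ashkenazi_descent:.2%}")
print(f"Fold difference: {fold_difference:.0f}x")
```

This reproduces the roughly 10-fold difference described in the article.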

What Did the Research Article Investigate?

Although BRCA mutations are known to be more common in PAJD than the general population, it is not known whether rates of BRCA mutations are similar between those with or without a family or personal history of cancer. For many years, US national BRCA testing guidelines were defined by personal or significant family histories of cancer and/or the presence of mutations that were frequently found in families. These guidelines are often used by health insurance companies to set rules regarding what is covered. However, the national testing criteria have now expanded and the US Preventive Services Task Force has specified that Ashkenazi Jewish descent is a risk factor. Despite this, in the USA, PAJD who do not have a family or personal history of cancer are not eligible for BRCA testing under their health insurance. Some have suggested that all PAJD should be offered BRCA testing, partly because it would reduce the number of cases of breast and ovarian cancer, which would save money and lives. All individuals with a BRCA mutation have a 50% chance of passing it on to future children, whether or not they have a family or personal history of cancer.

The authors of the above-mentioned article used a telehealth-based platform for BRCA education and testing with the goal of creating an effective model for BRCA testing in low-risk PAJD who do not meet US national testing criteria. They also sought to determine the rate of BRCA mutation in this group, to see if it is the same as in those with a family or personal history of cancer. The participants (501 people) received pre-test education in the form of a video and written summary, followed by complimentary BRCA1/2 testing (to determine whether there were any mutations), and post-test genetic counseling.

What Is Telehealth?

Telehealth, sometimes called telemedicine, describes the distribution of health-related services and information through the use of digital information and communication technologies. The advantages include reduced costs and increased access to healthcare for people in rural areas or for those who have transport or mobility difficulties. However, there are also disadvantages, including potential technical issues, the need for stable internet access, and (particularly in the USA) issues around billing and licensure between states. Examples of telehealth include apps on smartphones, test results being sent to a specialist, home monitoring through patients continuously sending health data, robotic surgery controlled by a surgeon at a different location, and health consultations such as genetic counseling using video conferencing rather than an in-person visit.

What Is Genetic Counseling?

Genetic counseling gives people information about how genetic conditions or specific gene mutations might affect them and/or their families. In relation to cancer, a genetic counselor can evaluate a person’s risk of getting certain types of cancer based on their family history. They can also help them decide whether or not to have genetic tests, explain the different types of test available, help work out whether some of the costs of testing are covered by the person’s medical insurance, and make suggestions for additional testing based on the results.

What Did the Study Find?

The study identified the rate of BRCA founder mutations in the low-risk PAJD participants to be around 0.6%, significantly lower than the generally reported rate of 2.5% for PAJD. However, one participant was found to have a non-founder mutation and the authors noted that, had only founder mutations been screened for, this participant’s BRCA mutation would not have been identified. There is a risk that carriers of BRCA mutations will be missed if only founder mutations are tested for, one potential reason being that many individuals who identify as Ashkenazi Jewish are actually of mixed Jewish ancestry.
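To get a feel for how different the observed 0.6% rate is from the usually quoted 2.5%, we can ask how likely so few carriers would be if the true rate were 2.5%. This is a hedged sketch, not the authors' analysis: the carrier count of 3 is back-calculated from 0.6% of the 501 participants, which is an assumption rather than a figure taken from the article.

```python
# Illustrative binomial calculation: probability of observing 3 or fewer
# carriers among 501 participants if the true carrier rate were 2.5%.
# The count of 3 is back-calculated from the quoted 0.6% (an assumption).
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, observed, quoted_rate = 501, 3, 0.025
p_low = binom_cdf(observed, n, quoted_rate)
print(f"P(3 or fewer carriers out of 501 at a 2.5% rate) = {p_low:.4f}")
```

The probability comes out well below 1%, consistent with the article's description of the difference as significant.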

The study also found that most of the individuals who registered for the study but were ineligible to participate because of their family histories did not follow up with genetic counseling and testing, despite being sent information about its importance. Many of these individuals noted that this was, at least in part, because they had concerns about the ease of accessing genetic counseling. Telehealth has the potential to make this less of a problem. Of the PAJD who did take part in the study, feedback was very positive, with 97.9% stating that they were satisfied with the pre- and post-test education provided, and 99.5% stating that their post-test genetic counseling session was valuable.

What’s the Take-Home Message?

Individuals in populations with known founder mutations may benefit from considering genetic testing and counseling, whether or not they have a family or personal history of cancer. Genetic testing and counseling through telehealth is a good model for those who do not wish to, or cannot, access traditional in-person genetic counseling.

How Can Analyzing microRNAs in Blood Serum Improve the Diagnosis of Lung Cancer?

What Is the Main Idea?

Serum biomarkers can be detected by analyzing blood samples. In the open-access research article “Screening of Serum miRNAs as Diagnostic Biomarkers for Lung Cancer Using the Minimal-Redundancy-Maximal-Relevance Algorithm and Random Forest Classifier Based on a Public Database”, published in the journal Public Health Genomics, the authors describe an approach for screening serum miRNAs to see whether they are useful as diagnostic biomarkers for lung cancer.

What Else Can You Learn?

In this blog post, RNAs (particularly microRNAs) and their roles in the body are discussed, along with how serum biomarkers can aid the early diagnosis of lung cancer.

What Is a Serum Biomarker?

The term “biomarker” is short for “biological marker”. Biomarkers are measurable characteristics, such as molecules in your blood or changes in your genes, that indicate what is going on in the body. They can indicate that your body is working normally, the development or progress of a disease or condition, or the effects of a treatment. Serum biomarkers are biomarkers that can be detected by analyzing blood samples that are taken from patients (sometimes called “liquid biopsies”). Whole blood is made up of red blood cells, white blood cells, platelets, and clotting factors in a liquid called plasma. Serum is the liquid that you have left if all the cells and clotting factors are removed from the blood.

What Are the Advantages of Serum Biomarkers?

Because serum biomarkers can be easily obtained from samples taken during a standard blood test, it is relatively cheap to obtain large enough samples for analysis. In addition, the healthcare practitioners who take the samples do not need any specialist expertise. For these reasons, studies are underway to investigate how serum biomarkers can be used to diagnose a wide range of conditions, including cancer.

What Did the Research Article Investigate?

Lung cancer is one of the most common types of cancer, accounting for nearly one in six deaths worldwide in 2020. It can start in any part of the lungs or the airways that lead to the lungs from the windpipe (trachea), and is difficult to detect in its early stages. Like many other types of cancer, patients with lung cancer have better outcomes if their tumors are detected early. In this study, the authors investigated whether molecules found in blood serum called microRNAs have potential as biomarkers for the diagnosis of lung cancer and tested a method to identify them more efficiently. They screened 416 microRNAs and identified 5 that were present at different levels in people with lung cancer compared with people without lung cancer.
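The article's two-step idea (rank candidate miRNAs for relevance, then classify with a random forest) can be sketched in Python. This is a simplified illustration on synthetic data, not the study's method or data: scikit-learn does not ship an mRMR implementation, so the mutual-information ranking used here captures only the "maximal relevance" half of the minimal-redundancy-maximal-relevance algorithm.

```python
# Hedged sketch: feature selection followed by a random forest classifier,
# mirroring the shape of the study's pipeline on randomly generated data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split

# Synthetic "expression matrix": 200 samples x 416 candidate miRNAs,
# of which only 5 are informative (echoing the 416-to-5 screening above).
X, y = make_classification(n_samples=200, n_features=416, n_informative=5,
                           n_redundant=0, random_state=0)

# Step 1: keep the 5 features most relevant to the cancer/non-cancer label.
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
X_sel = selector.transform(X)

# Step 2: train and evaluate a random forest on the selected features.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The point of the selection step is practical: training a classifier on a handful of informative miRNAs, rather than all 416, makes the resulting biomarker panel cheaper to measure and easier to interpret.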

What Are microRNAs?

Your genes are short sections of DNA (deoxyribonucleic acid) that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA (ribonucleic acid). Proteins do most of the work in cells and have lots of different functions in the body, including structural roles, catalyzing reactions (enzymes), and acting as signaling molecules. There are several different types of RNA, each with different functions, and they play important roles in normal cells and the development of disease.

Messenger RNAs are single-stranded copies of genes that are made when a gene is switched on (expressed). In a cell, long strings of double-stranded DNA are coiled up as chromosomes in a part of the cell called the nucleus (the cell’s command center). Chromosomes are too big to move out of the nucleus to the place in the cell where proteins are made, but a messenger RNA copy of a gene is small enough. In other words, messenger RNA carries the message of which protein should be made from the chromosome to the cell’s protein-making machinery.

MicroRNAs are much smaller than messenger RNAs. They do not code for proteins but instead play important roles in regulating genes. They can inhibit (silence) gene expression by binding to complementary sequences in messenger RNA molecules, stopping their “messages” from being read and preventing the proteins they code for from being made. Some microRNAs also activate signaling pathways inside cells, turning processes on or off.

Why Do microRNAs Have Potential as Serum Biomarkers?

MicroRNAs are present in body fluids such as urine, saliva, and blood. In addition, unlike some types of molecule that are relatively “unstable” and break down quickly, microRNAs that circulate in the blood are very stable. As a result, collecting samples is relatively cheap and easy, and microRNAs can also be easily detected and quantified in diagnostic laboratories.

How Are microRNAs Involved in Cancer?

MicroRNAs are involved in different types of cancer in a variety of ways. They may be expressed at abnormally high or low levels, affecting whether or not cells start to divide and multiply, or can enable cells to avoid processes that would normally cause cell death (this process is called “apoptosis”; it maintains the balance of cells in the body and removes cells that have become damaged). If microRNAs are expressed at different levels in cancer cells compared with normal cells, they could be used to indicate the presence of cancer in the body and aid earlier diagnosis. Different levels of particular microRNAs can also indicate the likely prognosis of patients with some types of cancer, and one particular microRNA (miR-506) has been shown to promote the apoptosis of cervical cancer cells.

What’s the Take-Home Message?

Over the last decade, biomarker testing has become a crucial part of optimizing the diagnosis and treatment of lung cancer. MicroRNAs in serum are biomarkers that are easy to collect and analyze, and show promise for screening to diagnose lung cancer at an early stage in the future.

Neuroblastoma: A Rare Condition in Adults

What Is the Main Idea?

Neuroblastoma is a type of solid tumor that is rare in adults. As a result, the disease course of neuroblastoma in adults is not well studied and there is no guideline-recommended chemotherapy strategy specifically for adults. In the open-access article “Adrenal Neuroblastoma Producing Catecholamines Diagnosed in Adults: Case Report”, published in the journal Case Reports in Oncology, the authors describe the case of a 24-year-old female patient and discuss considerations regarding the care of adults with neuroblastoma.

What Else Can You Learn?

In this blog post, neuroblastoma is discussed, along with issues relating to the treatment of adult patients compared with children and common complications that may develop. Embryogenesis and the role of the sympathetic nervous system are also addressed.

What Does the Case Report Describe?

In this case report, the case of a 24-year-old female patient with neuroblastoma is described. Although classification of her tumor according to an international staging system suggested that it was unlikely to be an aggressive tumor, it recurred 4 months after surgery and she needed further drug treatment.

What Is Neuroblastoma?

Neuroblastoma is a type of solid tumor, which means that it forms an abnormal mass (lump) of tissue that doesn’t usually contain any liquid areas. In childhood, neuroblastoma is the most common type of solid tumor that develops outside of the cranium (the bones that surround the brain) and the third most common childhood cancer worldwide. Neuroblastomas can develop at any location in the sympathetic nervous system but most commonly arise in the adrenal medulla, the inner region of the adrenal glands, which are located on top of the kidneys.

What Is the Sympathetic Nervous System?

The sympathetic nervous system is part of the autonomic nervous system (which controls things that you do without thinking about them, so it can be thought of as the “automatic” nervous system). The sympathetic nervous system controls rapid, involuntary responses of the body to dangerous or stressful situations, or when you are physically active. These include increasing your heart rate, improving oxygen delivery to your lungs, activating energy stores in your liver so that you can use energy quickly, and slowing down your digestion so that energy being used to digest food can go to other areas of the body that need it. Most of the signals that the sympathetic nervous system sends start in the spinal cord (a long, tube-like band of tissue that runs through the center of your spine, connecting your brain to your lower back) and are relayed all over your body. To communicate, the sympathetic nervous system uses chemicals called neurotransmitters. One family of neurotransmitters is the catecholamines, which include dopamine and epinephrine (adrenaline). Catecholamines are made by the brain and adrenal glands. Once they have been used, they are removed from your body via the urine.

How Does Neuroblastoma Develop?

When a human egg is fertilized, an embryo starts to develop through a series of processes that are together called “embryogenesis”. These processes include cell division and growth, and different groups of cells begin to develop that have specific roles in the body (this is called differentiation). In vertebrates, a temporary group of cells called neural crest cells exists; these go on to give rise to cells with very different roles, including nerve cells, melanin(pigment)-producing cells, and smooth muscle. It is thought that neuroblastoma can develop if neural crest cells start to gain mutations and changes during embryogenesis that disrupt their differentiation, although it’s not yet clear how.

Can Neuroblastoma Occur in Adults?

Neuroblastoma is considered by many to occur almost exclusively in children, with more than 90% of patients diagnosed before 10 years of age. However, although neuroblastoma is rare in adolescents and even rarer in adults, cases do occur, and while the clinical course of childhood neuroblastoma tends to be benign (mild), with some patients aged less than 1 year experiencing remission, there is evidence that the course of the disease in adults is more severe. Unfortunately, the fact that adult neuroblastoma is so rare means that it is not well understood.

What Are Specific Considerations for the Treatment of Adults with Neuroblastoma?

Children diagnosed with neuroblastoma are usually treated with intense polychemotherapy (chemotherapy involving several different drugs), which has been reported to be poorly tolerated by adult patients. However, there is currently no specific chemotherapy regimen for adult patients, so polychemotherapy is used with adjustments made on an individual-patient basis, depending on the needs of the patient and their ability to tolerate the drugs. As well as the treatment that adult patients receive, a key consideration is the level of catecholamines in the patient’s urine.

Why Are Catecholamine Levels Important?

As mentioned earlier, catecholamines are neurotransmitters that are used by the sympathetic nervous system and are involved in stress responses. It has been reported that 85–90% of patients with neuroblastoma have increased levels of catecholamines. As well as potentially indicating the presence of a rare tumor in the adrenal glands, high catecholamine levels can cause high blood pressure. As a result, patients with neuroblastoma can be at increased risk of stroke, kidney failure, and cardiovascular complications in the future, like arterial stiffness and thickening of the ventricles in the heart. Care needs to be taken so that high catecholamine levels and any complications are detected and followed up effectively.

What’s the Take-Home Message?

Although neuroblastoma in adults is rare, cases do occur. It is particularly important that the treatment of adults with neuroblastoma is tailored to the individual and any cardiovascular complications are detected and followed up properly. If a person has a high catecholamine level in their urine, the cause should be investigated because it may indicate the presence of a rare adrenal tumor.

How Mathematical Modelling Can Increase Digital Biomarker Use in Drug Development

What Is the Main Idea?

Digital biomarkers enable data from “smart” devices to be used to track health-related trends and patterns. In the open-access research article “Quantifying the Benefits of Digital Biomarkers and Technology-Based Study Endpoints in Clinical Trials: Project Moneyball”, published in the journal Digital Biomarkers, the authors show how mathematical modelling, using Parkinson’s disease as an example, can help solve some of the problems that have limited the use of digital biomarkers in drug development to date.

What Else Can You Learn?

In this blog post, biomarkers and their use in healthcare are discussed, along with Parkinson’s disease and how digital biomarkers can aid the development of future treatments.

What Are Biomarkers?

The term “biomarker” is short for “biological marker”. Unlike symptoms, which are things that you experience, biomarkers are measurable characteristics that indicate what is going on in the body. Your blood pressure, levels of molecules in your urine and blood, and your genes (DNA) are all biomarkers. Although they can suggest that your body is working normally, they can also show the development or progress of a disease or condition, or the effects of a treatment.

How Are Biomarkers Used to Help Patients?

Over the last decade, biomarker testing has started to transform the way that some diseases and conditions are treated. The development of treatments targeted against specific biomarkers and the ability to identify treatments that are not likely to work in certain patients offers hope of better outcomes through more personalized treatment.

What Are Digital Biomarkers?

The term “digital biomarker” is used to describe behavioral and physiological data that are quantifiable and objective (not influenced by personal feelings or opinions), collected and measured by portable, wearable, implantable, or digestible digital devices. Many people now use “smart” devices like watches and phones, and the large quantities of data that they collect can be paired with analytical tools to track trends and patterns, both for individuals and across populations.

How Can Digital Biomarkers Help Develop New Treatments?

Although digital biomarkers have the potential to have a significant impact on drug development, only a few have made meaningful contributions to bringing new treatments into the clinic. Understandably, pharmaceutical companies have been wary of adopting digital endpoints (events or outcomes that can be objectively measured in clinical trials to determine whether treatments being studied are beneficial) until they are fully proven. In 2019, stride velocity 95th centile (SV95C) became the first digital biomarker to be qualified by the European Medicines Agency (EMA) as a suitable endpoint for use in clinical trials researching treatments for Duchenne muscular dystrophy, a genetic disorder characterized by progressive muscle degeneration and weakness. Measured by the user wearing a device at their ankle, SV95C represents the speed of the fastest strides taken by the user over 180 hours. Interestingly, when the researchers involved in SV95C’s development analyzed the number of people from whom data would need to be collected in a clinical trial to get a statistically significant sample, it was 70% less than if more traditional endpoints like the 6-minute walk test were used. This shows the potential of digital biomarkers to help new treatments go through clinical trials. Other potential benefits include more accurate selection of patients to take part in trials and better endpoint measurement.
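A sample-size reduction like the one reported for SV95C can be made intuitive with a standard power calculation: the number of participants needed per trial arm scales with the square of the endpoint's noise relative to the treatment effect. The sketch below uses the classic two-arm formula with entirely hypothetical numbers, not the SV95C analysis; it simply shows how a less noisy endpoint can cut the required sample by around 70%.

```python
# Illustrative two-arm sample-size calculation. All numbers are
# hypothetical; this is not the SV95C qualification analysis.
from math import ceil

Z_ALPHA, Z_BETA = 1.96, 0.84  # two-sided alpha = 0.05, power = 80%

def n_per_arm(sigma, delta):
    """Participants per arm to detect a mean difference delta
    when the endpoint's standard deviation is sigma."""
    return ceil(2 * ((Z_ALPHA + Z_BETA) * sigma / delta) ** 2)

# Same assumed treatment effect, but the digital endpoint is measured
# with roughly half the standard deviation of the traditional one.
traditional = n_per_arm(sigma=1.0, delta=0.5)
digital = n_per_arm(sigma=0.55, delta=0.5)
print(traditional, digital, f"{1 - digital / traditional:.0%} fewer")
```

Because the required sample grows with (sigma/delta) squared, even a modest improvement in measurement precision translates into a large reduction in trial size and cost.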

What Did the Research Article Investigate?

The authors identified five gaps that pharmaceutical companies and technology providers need to address to increase the use of digital biomarkers in drug development:

  1. Biomarker measurements and objectives of treatment not being aligned.
  2. Differences in financial models (technology companies often expect quicker returns on investment than pharmaceutical companies).
  3. Assumptions that fast technological development in consumer technologies can be quickly translated to make regulated health devices.
  4. Uncertainties about possible impacts of digital biomarkers in clinical trials.
  5. Different value frameworks of the companies and researchers involved.

They then designed a proof-of-concept project called Moneyball (named after a book about baseball), which used mathematical modelling to try to address gaps 4 and 5 using Parkinson’s disease as an example (the disease model was deliberately oversimplified to limit the scope of the project). The authors assessed whether such an approach was useful and also discussed their ideas for technology inclusion with clinical development teams at pharmaceutical companies to get their feedback.

What Is Parkinson’s Disease?

Parkinson’s disease is a neurodegenerative disorder (a disorder that involves degeneration of the nervous system) that is characterized by motor impairments (partial or total loss of function of a body part) like tremor, slowness of movement (bradykinesia), uncontrolled involuntary movement (dyskinesia), and walking (gait) abnormalities. It usually develops in late adulthood and progresses over several decades. There is currently no approved treatment that can slow or stop the progression of the disease; existing treatments only manage its symptoms.

How Can Digital Biomarkers Help Patients with Parkinson’s Disease?

Like other neurodegenerative disorders, the diagnosis and assessment of Parkinson’s disease can be subjective (the opposite of objective: influenced by personal feelings and opinions). It often involves invasive or expensive procedures like lumbar punctures (also known as “spinal taps”), where a thin needle is inserted between the bones in the patient’s lower spine, usually to collect some fluid for analysis, and positron emission tomography (PET) scans, which produce detailed three-dimensional images of the inside of the body. Lack of precision in assessing endpoints is a common problem in the treatment of Parkinson’s disease, and clinical trials often end in failure because not enough objective, high-quality data can be gathered. Digital biomarkers may help solve some of these problems by measuring changes in gait and speech, loss of automatic movements (reflexes), and slowed movement.

What Did the Authors Conclude?

Although the feedback that they received about their modelling approach was largely positive, some companies noted that it can be challenging to obtain the technology performance data needed to run the calculations. Nonetheless, the authors believe that their approach can identify technology-enabled measurements that will have a meaningful impact, aid the quantification of the benefits and costs of digital biomarker technologies during the design phase of clinical trials, and help resources (particularly money) to be allocated years before pivotal clinical trials begin. They hope that their work will increase collaboration between technology and pharmaceutical companies so that the potential of digital biomarkers to speed up the development of new therapies for a range of diseases can be realized.

Note: The authors of this paper make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.

Kangaroo Care Does Not Adversely Affect Oxygenation of Babies Born Preterm

What Is the Main Idea?

Kangaroo care is a method that puts babies born preterm or newborns in skin-to-skin contact with their parents. In the open-access review article “Impact of Kangaroo Care on Premature Infants’ Oxygenation: Systematic Review”, published in the journal Neonatology, the authors analyze and discuss the combined findings of studies that have investigated the long-term physiological effects of kangaroo care on babies born preterm compared with standard incubator care.

What Else Can You Learn?

In this blog post, general care of preterm babies is discussed, along with the method of kangaroo care and its advantages.

What Does It Mean If a Baby Is Born Preterm?

A premature birth is one that takes place more than 3 weeks before the baby’s estimated due date (at 40 weeks), in other words, before the 37th week of pregnancy. Babies born between 34 and 36 completed weeks of pregnancy are classed as “late preterm”, those born between 32 and 34 weeks as “moderately preterm”, those born at less than 32 weeks as “very preterm” and those born at or before 25 weeks as “extremely preterm”. Premature birth usually means that a baby will need to be cared for in hospital for longer than a baby born at term, with the amount of time influenced by how early he or she is born. Depending on how much care the baby needs, he or she may be admitted to an intermediate-care nursery or a neonatal intensive care unit (NICU).

What Affects Whether a Baby Is Born Preterm?

There are some known risk factors associated with premature birth. These include: the mother having had a previous premature birth, or multiple miscarriages or abortions; an interval of less than 6 months between pregnancies; smoking cigarettes or using illicit drugs; some infections and chronic conditions; stressful life events, physical injury or trauma; and being under- or overweight before pregnancy. However, the specific cause is not often clear and many women who have a premature birth have no known risk factors.

How Can Being Born Prematurely Affect a Baby?

Although some babies born prematurely do not have any complications, generally speaking, the earlier a baby is born the greater the risk. Birth weight also plays an important role. Some complications that may be apparent at birth include breathing, heart and temperature control problems, and babies may also have issues related to metabolism, the blood and the immune system (particularly increased risk of infection). Longer term, they are at increased risk of complications including cerebral palsy, chronic health issues, vision, hearing and behavioral problems, and developmental delay. Because complications at birth can influence the development of longer-term issues, babies admitted to an NICU are closely monitored by the medical team and things such as the baby’s heart rate and oxygenation (oxygen levels inside the body) are frequently checked. They are also at increased risk of developing hypothermia if they have difficulty regulating their body temperature, so are usually cared for in incubators. This helps the baby maintain an optimum temperature and can also protect him or her from noises and direct light, which can cause stress.

What Is Kangaroo Care?

Although incubator care is very effective, kangaroo care is an important component in the care of babies born both prematurely and at term. Kangaroo care is described by the World Health Organization (WHO) as a method of care consisting of putting babies in skin-to-skin contact with their parents. Skin-to-skin contact is known to be effective for thermal control, breastfeeding and bonding, regardless of setting, weight, gestational age and clinical conditions, and is recommended for all newly born babies whether they are born preterm or not. In kangaroo care, the baby wears only a nappy or diaper (and often also a hat), and is placed in a flexed (fetal) position on the parent’s chest. The baby can be secured with a wrap that goes around the naked torso of the parent, ensuring that the baby is properly positioned and supported, or both parent and baby can be covered with a blanket, gown or clothing for warmth. Kangaroo care can even be given if the baby is attached to tubes or wires, as long as the parent stays close to the machines.

What Are the Advantages of Kangaroo Care?

The skin-to-skin contact of kangaroo care provides physiological and psychological warmth and bonding to both the parent and baby. Because the parent’s body temperature is stable, it regulates the temperature of a premature baby more smoothly than an incubator. Babies born preterm that receive kangaroo care also experience more normalized heart and respiratory rates, increased weight gain and fewer hospital-acquired infections. Other benefits include the promotion of frequent breastfeeding, improved sleep/wake cycle and cognitive development, decreased stress levels and positive effects on motor development. There are advantages for the parent as well, with kangaroo care helping to promote attachment and bonding, decrease parental anxiety, improve parental confidence, and promote increased milk production and breastfeeding success. However, to date, studies on the physiological stability of preterm babies during kangaroo care have reported conflicting results.

How Does Kangaroo Care Affect Oxygenation in Premature Babies?

Uncertainties regarding the effects of kangaroo care on oxygen saturation (the oxygen level in the blood) and “regional” cerebral oxygen saturation (i.e. relating to the brain) were investigated through a systematic review of research articles that assessed oxygenation, using pulse oximetry and near-infrared spectroscopy, during kangaroo care in NICUs. Pulse oximetry is non-invasive and pain-free, involves a clip-like device being placed on a body part such as a finger or ear lobe, and uses light to measure how much oxygen is in the blood. Near-infrared spectroscopy is also non-invasive and can continuously monitor regional oxygen saturation. This is important for babies born preterm because early detection of low cerebral oxygen saturation can prevent irreversible cerebral damage that can lead to cerebral palsy.

What Do the Results of the Systematic Review Show?

In total, the results of 25 research articles were analyzed, which documented data for 1,039 premature babies undergoing kangaroo care at three different study points: pre-, during and post-kangaroo care. Although the results of the systematic review cannot be extended to premature babies requiring critical care (described in the review as “unstable”), “stable” premature babies showed no significant differences in heart rate, oxygen saturation in the arteries (blood vessels that carry oxygen-rich blood away from the heart to the tissues of the body) or fractional oxygen extraction (the balance between oxygen supply and demand) compared with routine incubator care. Regional cerebral oxygen saturation also remained stable with a slight upward trend. Although most of the studies included in the review were observational (where participants are simply observed, without being randomized to different treatment conditions) and further studies are needed, the authors conclude that stable preterm babies, whether or not they are receiving respiratory support, are as physiologically stable during kangaroo care as during routine incubator care.

Take-Home Message for Parents

Parents of babies born preterm can be reassured that the many benefits of kangaroo care in the NICU do not come at the cost of their baby’s oxygenation. Although more research is needed, there is no evidence that premature babies receiving kangaroo care are less physiologically stable than those who receive only routine incubator care.

Kidney Failure: How Peritoneal Dialysis Has Helped Reduce COVID-19 Infections

What Is the Main Idea?

Peritoneal dialysis (PD) enables people with kidney failure to conduct dialysis at home by themselves. During the COVID-19 pandemic, increased use of PD has helped to limit the spread of COVID-19 in this vulnerable patient population. In the open-access review article “Should More Patients with Kidney Failure Bring Treatment Home? What We Have Learned from COVID-19”, published in the journal Kidney Diseases, the authors analyze and discuss the utility of PD in the Asia Pacific region during the COVID-19 pandemic.

What Else Can You Learn?

In this blog post, kidney failure in general and the advantages and disadvantages of PD, particularly in relation to the COVID-19 pandemic, are discussed.

What Is Kidney Failure?

The kidneys do several important jobs in the body, including helping to control your blood pressure and make red blood cells, and removing waste products and extra water from your body to make urine. In chronic kidney disease (CKD), the kidneys no longer work as well as they should and are unable to remove waste products from your blood. As a result, too much fluid and waste products remain in the body, which can cause health problems such as heart disease, stroke and anemia. Although CKD can be a mild condition with no or few symptoms, around 1 in 50 patients can progress to a very serious form of CKD known as kidney failure, where kidney function drops to below 15% of normal.

How Is Kidney Failure Treated?

When the kidneys stop working, kidney replacement therapy in the form of dialysis or a kidney transplant is needed for the person to survive. Kidney failure treated in this way is called end-stage renal disease. If you have a kidney transplant, a healthy kidney from a donor is placed in your body to filter your blood. In contrast, dialysis is a procedure by which the blood is “cleaned”. There are two types of dialysis. In hemodialysis (HD), your blood leaves your body, goes through a filter in a machine and is returned to your body. HD is usually delivered in a healthcare setting. In contrast, peritoneal dialysis (PD) uses the lining of your abdomen, the peritoneum, to filter the waste and extra fluid from your body. A key difference between the two is that, once you have been trained, PD can be done at home, at work or while travelling without the help of another person. Home HD is possible, but you need the help of a partner and it is not available in all regions.

How Does Peritoneal Dialysis Work?

Before a patient can begin to use PD, they need an operation to insert a catheter, usually near the bellybutton. The catheter will carry a cleansing fluid (called “dialysate”) into and out of their abdomen. The patient then usually waits up to 1 month before starting PD to give the catheter site time to heal, and is trained how to use the equipment. Once PD begins, in each session, the dialysate flows through the catheter into part of the abdomen and stays there for a fixed period of time (called the “dwell time”), usually 4–6 hours. The dialysate contains dextrose, which helps to filter waste and extra fluid from tiny blood vessels in the peritoneum. At the end of the dwell time, the dialysate drains into a sterile collecting bag, taking the waste products and extra fluid with it. There are two main ways of conducting PD: continuous ambulatory PD, which uses gravity to move the fluid through the catheter and into and out of the abdomen, and continuous cycling PD, which uses a machine to perform multiple exchanges while you sleep at night. Your medical team will help you identify which PD method is best for you.

What Are the Advantages and Disadvantages of Peritoneal Dialysis?

Compared with in-center HD, the benefits of PD include:

  • greater lifestyle flexibility and independence, which can be especially important if you have to travel long distances to a dialysis unit;
  • a less restricted diet than if you receive HD, because PD is done more continuously than HD, so there is less build-up of potassium, sodium and fluid; and
  • the possibility of longer-lasting residual kidney function.

However, PD might not be suitable for you if you have extensive surgical scarring in your abdomen, a hernia, inflammatory bowel disease or diverticulitis, or a limited ability to care for yourself without caregiver support. It is also likely that people using PD will eventually have a decline in kidney function that will require HD or a kidney transplant.

How Has the COVID-19 Pandemic Affected the Treatment of People with Kidney Failure?

Patients with kidney failure, especially those receiving dialysis, are more susceptible to infections like COVID-19 than the general population and are at greater risk of severe disease or death when infected, partly because they are more likely to have other conditions that have been linked to severe COVID-19 (such as cardiovascular disease, diabetes, and cerebrovascular disease). Many patients experienced difficulties accessing HD during lockdowns, and those who could travel to a dialysis unit risked exposing themselves, their family and healthcare staff to COVID-19 infection. As a result, patients and healthcare providers have been encouraged to consider PD as a preferred option for kidney replacement therapy because home-based treatment prevents chains of transmission through in-center dialysis units, reduces the risk of exposure through travel, and helps to preserve hospital resources that are stretched by this and possible future pandemics.

What Has Been the Effect of Increased Use of Peritoneal Dialysis during the Pandemic?

Evidence suggests that increased use of PD during the pandemic has had a beneficial effect. Survival and efficacy rates for patients undergoing PD are similar to those undergoing HD, and observational data from multiple countries have identified lower rates of COVID-19 infection in patients undergoing PD than those receiving in-center HD. In addition, fewer healthcare staff can support a larger number of patients through ongoing interaction using telehealth, although careful monitoring is required to ensure any negative effects are identified.

Take-Home Message for Patients

PD is currently underutilized, thought to be in part because of patient hesitancy, less frequent interaction with nephrologists and perceived lower levels of clinical oversight. However, where available, PD is an important treatment option that can protect patients with kidney failure from exposure to infection, and it may be worth discussing with their clinical team.

Note: The authors of this paper make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.