Food Protein-Induced Enterocolitis Syndrome: A Rare Food Allergy

What Is the Main Idea?

Food protein-induced enterocolitis syndrome (FPIES) is a rare type of food allergy that is not well understood and is thought to be underdiagnosed. In the open-access research article “Discrepancy between Caregivers’ Reports and Physicians’ Evaluation of Causative Foods in Food Protein-Induced Enterocolitis Syndrome in Japan: The Japan Environment and Children’s Study”, published in the journal International Archives of Allergy and Immunology, the authors investigate factors that may contribute to FPIES underdiagnosis.

What Else Can You Learn?

Different types of food allergy and their symptoms are described. The difficulties in accurately diagnosing FPIES and the food that triggers the reaction are also discussed.

Take-Home Message

Parents and healthcare practitioners should be aware that a child may have FPIES if they start vomiting repeatedly once weaning begins, particularly if they quickly improve after being sick. Taking notes whenever this happens, including what the child ate up to 3 days before the vomiting started, can help healthcare practitioners to make an accurate diagnosis.

What Is a Food Allergy?

A person is described as having a “food allergy” if their immune system has an unusual, and usually unpleasant, reaction to a specific food. Reactions to foods that are not classed as food allergies include food intolerances (where a food irritates the digestive system or the body cannot digest a particular food properly), reactions to food that has become contaminated, or something being in the food that can have drug-like effects on the body (like caffeine in coffee).

The body’s immune system protects your body from things that could make you ill, like harmful substances and infections. Key components are inflammation (which traps things that might be harmful and begins to heal injured tissue) and white blood cells (which identify and eliminate things that might cause infection). Some white blood cells make antibodies that, together with other specialized immune cells, enable the body to recognize and fight specific germs that it has previously come into contact with, sometimes providing lifelong protection. Antibodies are divided into five different classes – IgD, IgG, IgM, IgA, and IgE – based on their characteristics and roles.

Are There Different Types of Food Allergy?

Food allergies are divided into two types: IgE-mediated and non-IgE-mediated (NIM).

  • If a food allergy is IgE-mediated, it is caused by IgE antibodies wrongly recognizing the food as a threat. Within minutes of the food being ingested, hives (a raised and itchy rash, also called “urticaria”) and redness of the skin can appear. The person may also start to vomit and, if the reaction is serious, anaphylaxis can occur. Anaphylaxis is life-threatening and symptoms can include difficulty breathing, swelling of the throat and tongue, feeling faint or dizzy, wheezing or coughing, and tightness in the throat.
  • NIM food allergies are caused by components of the immune system other than IgE. They are not as well understood as IgE-mediated food allergies, but a key difference is that allergic reactions do not develop as quickly. Whereas IgE-mediated allergic reactions appear almost immediately after the trigger food is eaten, the appearance of symptoms of NIM reactions is delayed, sometimes appearing as long as several days later. This can make it more difficult to identify the food that is causing the reaction.

What Are the Symptoms of NIM Allergic Reactions?

NIM allergic reactions can affect any part of the gastrointestinal tract. This refers to the route that food and drink take through the body: in at the mouth, through the stomach and intestines, and out of the body as waste. Symptoms can include diarrhea, vomiting, discomfort in the stomach area, and constipation. Babies can also have “colic”, which is when a baby cries a lot without there seeming to be an obvious reason for it.

What Is FPIES?

FPIES is a rare type of NIM food allergy that is usually diagnosed in infants and that is likely to have been present from birth. It affects the small intestine, which is the part of the digestive tract that receives partially digested food from the stomach before it moves on to the colon (large intestine). In most cases, symptoms include repeated vomiting within 1–4 hours of the trigger food being eaten, often not long after the infant has first eaten it (for example, during weaning), and diarrhea within 24 hours. However, in some cases symptoms appear several days later. If the vomiting is severe, the infant may become pale and floppy.

Common trigger foods include cow’s milk, hen’s eggs, and soy, but FPIES can also be caused by rice, meats, and other foods that are not often associated with food allergies. Although the reactions can be severe, some children “grow out of” the allergy and become able to tolerate the trigger food by the age of 2 years.

What Did the Study Investigate?

Unlike with IgE-mediated food allergies, there are currently no skin or blood allergy tests for NIM food allergies like FPIES. Instead, FPIES is diagnosed by removing suspected foods from the diet one at a time and, if the symptoms improve, reintroducing them to confirm which food triggers the reaction. As a result, it is thought that some cases of FPIES are not diagnosed, which is not helped by the fact that the symptoms are non-specific (in other words, they are common symptoms of illness that could be caused by several different things). A better understanding of the characteristics of FPIES cases that go undiagnosed by healthcare professionals may help to prevent underdiagnosis or someone being wrongly diagnosed with something else.

The authors of the study used information collected by the Japan Environment and Children’s Study to investigate how commonly FPIES is diagnosed in Japan, and to look for differences between parent-reported and healthcare professional-diagnosed FPIES. The Japan Environment and Children’s Study involved more than 100,000 pregnant Japanese women. Each woman was asked to complete questionnaires about her child and family at regular intervals until the child was aged 3 years, and was asked whether the child had ever had any of the symptoms that suggest FPIES, particularly repeated vomiting. The authors of the study were then able to analyze the results collected at age 1.5 years and look for differences and trends.

What Did the Study Show?

The number of children diagnosed with FPIES was low, which was expected because FPIES is relatively rare, with less than 1% (0.69%) of parents reporting that their child had shown symptoms of FPIES. However, only 0.06% of children had been diagnosed as having FPIES by a healthcare practitioner, equivalent to only around 10% of the children whose parents had reported FPIES symptoms. This suggests that FPIES has been underdiagnosed in Japan.

In addition, there was a discrepancy between the trigger foods that the parents reported as causing the symptoms and the ones that healthcare professionals identified as the cause of the allergic reaction when making their diagnoses. Parents were more likely to report hen’s eggs as the trigger food, while healthcare practitioners were more likely to diagnose an allergy to cow’s milk. These results suggest that, as well as more research being needed, healthcare practitioners need more accurate information about any past episodes of similar symptoms to be able to make an accurate diagnosis.

Allergic Rhinitis and Chronic Rhinosinusitis: Lessons from COVID-19

What Is the Main Idea?

Allergic rhinitis and chronic rhinosinusitis both involve inflammation of the nose. In the research article “National Trends in Allergic Rhinitis and Chronic Rhinosinusitis and COVID-19 Pandemic-Related Factors in South Korea, from 1998 to 2021”, published in the journal International Archives of Allergy and Immunology, the authors describe how the increasing rate of incidence of allergic rhinitis and chronic rhinosinusitis slowed in South Korea during the COVID-19 pandemic.

What Else Can You Learn?

The causes and symptoms of allergic rhinitis and chronic rhinosinusitis are discussed. Lifestyle measures adopted during the COVID-19 pandemic that might benefit people with these conditions, and the role of the sinuses, are also described.

Take-Home Message

Lifestyle factors such as the wearing of face masks and eye protection, as well as social distancing, frequent hand washing, and disinfection of surfaces, which were adopted to limit the spread of the SARS-CoV-2 virus during the COVID-19 pandemic, may benefit patients with allergic rhinitis and chronic rhinosinusitis.

What Are Allergic Rhinitis and Chronic Rhinosinusitis?

Allergic rhinitis and chronic rhinosinusitis both affect the nose (as indicated by the prefix “rhin”):

  • Allergic rhinitis describes inflammation of the inside of the nose caused by a person coming into contact with something that they are allergic to. Inflammation is a normal process through which the body responds to an injury or infection by causing blood cells and other substances to gather at the affected area. If someone is allergic to something, their immune system identifies it as potentially harmful and inflammation is triggered in an attempt to remove it. Common causes of allergic rhinitis include dust, mold spores, pollen (this form of allergic rhinitis is more commonly known as hay fever), contact with animals, and chemicals used to maintain the quality of the water in swimming pools.
  • Chronic rhinosinusitis is also caused by inflammation, but specifically describes inflammation of the sinuses that is not necessarily caused by an allergy and that lasts longer than 12 weeks, even with treatment. Although it is not yet known how inflammation of the sinuses becomes chronic in some people, smoking, having a weakened immune system, the presence of growths (known as “nasal polyps”) in the nose, and allergies and related conditions like asthma have all been shown to be associated with it.

What Are the Sinuses?

The term “sinus” is used in medicine to describe more than one thing, but one use of the term is to specifically describe air-filled cavities inside the skull that are connected with each other. The role of the sinuses is not fully understood, but their presence means that the overall mass of the skull is less than it would be if it was entirely made up of bone.

Both the sinuses and the inside of the nose are lined by a membrane layer that produces and secretes mucus (snot). Mucus is a sticky liquid that contains water, salt, and cells that are produced by the immune system. It keeps the nasal passages lubricated, and also protects the body from irritants (like dust and pollen) and microbes that can cause infections. If a microbe or irritant enters the nose, it gets trapped in the sticky mucus and the body then tries to get rid of it, for example by sneezing. If you have a cold or an allergic reaction, more mucus is produced than normal because the body is trying to get rid of the microbes or irritants that are causing the immune system to mount a response.

How Do Allergic Rhinitis and Chronic Rhinosinusitis Differ?

The symptoms of allergic rhinitis often develop quickly after a person comes into contact with something they are allergic to, and are similar to those caused by having a cold: a runny or blocked nose, a cough, sneezing, and reddened, itchy, or watery eyes. People with chronic rhinosinusitis often have similar symptoms, and because the condition keeps mucus from draining away, they can also experience swelling resulting in pain and tenderness around the forehead, nose, eyes, or cheeks. Other symptoms include aching in the teeth, bad breath, and ear pain. The key difference between the two conditions is the length of time over which a person experiences symptoms, with the symptoms of chronic rhinosinusitis lasting much longer.

How Common Are Allergic Rhinitis and Chronic Rhinosinusitis?

Allergic rhinitis and chronic rhinosinusitis are both common. Allergic rhinitis is estimated to affect up to 40–50% of the world’s population, while studies of chronic rhinosinusitis have estimated that it affects between 5 and 12%. Although the prevalences of both conditions differ between countries, global incidence is increasing and has been linked to increased environmental air pollution, climatic factors such as humidity and increased exposure to particles carried by winds, and lifestyle factors such as increased exposure to allergens and changes in the foods that people eat.

What Did the Study Investigate?

The authors of this article analyzed data collected as part of a large, national study conducted in South Korea called the Korea National Health and Nutrition Examination Survey (KNHANES), which was begun to enable the health and nutrition status of thousands of Korean citizens aged 1 year or older to be monitored over a long period of time. Studies like this enable researchers to identify changes in the health of a population and to identify things that may increase the risk of developing conditions like allergic rhinitis and chronic rhinosinusitis.

The authors analyzed data relating to a group of adult KNHANES participants over a period of 24 years (between 1998 and 2021) to see how the incidence of allergic rhinitis and chronic rhinosinusitis changed. They found that the incidence of allergic rhinitis and chronic rhinosinusitis increased by more than 3% and more than 2%, respectively, over the course of the study. These findings mirror those of studies conducted in other countries.

How Did Incidence Change during the COVID-19 Pandemic?

The authors also observed that the rate at which the incidence of allergic rhinitis and chronic rhinosinusitis increased slowed down between the start of the COVID-19 pandemic in 2020 and 2021. COVID-19 is caused by a virus called SARS-CoV-2, which is mainly spread via small “respiratory droplets” (so small that you cannot see them) that are released into the air when a person infected with the virus breathes, coughs, speaks, or sneezes. Although South Korea did not experience a lockdown like some other countries, measures such as the wearing of face masks and eye protection, social distancing, frequent hand washing, and disinfection of surfaces were quickly and widely adopted.

Other studies have reported that such lifestyle measures, which limit the spread of SARS-CoV-2, are also useful in the management of patients with allergic rhinitis. In addition, reductions in air pollution as a result of lockdowns have been reported by some researchers to have had positive effects on people with allergic rhinitis and chronic rhinosinusitis. Although more research is needed, the authors conclude that the slowed increase in the incidence of allergic rhinitis and chronic rhinosinusitis seen in South Korea during the pandemic indicates the potential for lifestyle changes like these to benefit people with these conditions.

Dr DongKeon Yon, corresponding author, on the relevance of the article for patients:

The increase in prevalence of allergic rhinitis (AR) and chronic rhinosinusitis (CRS) from 1998 to 2021 underscores the need for enhanced public health efforts to prevent and manage these conditions. These could include improving public awareness, increasing access to diagnostic and treatment services, and implementing preventive measures such as improving air quality. The pandemic-related decrease suggests that lifestyle and behavioral changes, such as reduced outdoor activity and increased use of face masks, have protective effects against these conditions.

Are Vitamin Treatments for Nail Disorders Effective?

What Is the Main Idea?

The nails on the ends of our fingers and toes protect them from damage, but can indicate underlying health conditions and may themselves become diseased. In the open-access review article “Vitamins for the Management of Nail Disease: A Literature Review”, published in the journal Skin Appendage Disorders, the authors review the evidence published to date that investigates whether the treatment of nail disorders with vitamins and their derivatives is effective or not.

What Else Can You Learn?

The roles of fingernails and toenails, and how they are formed, are discussed. Different types of nail disorders and their causes are also described.

Take-Home Message

There is little evidence to support vitamin supplements being effective in improving nail disorders. People who develop a nail disorder should seek advice and treatment from a dermatologist.

Why Do We Have Fingernails and Toenails?

Our fingernails and toenails are essentially tough, rigid plates that protect the ends of our fingers and toes from damage, and act as a barrier to stop microbes like bacteria, fungi, and viruses entering the body and causing infections. Nails also enhance the sensitivity of the ends of the fingers and toes when they touch an object, enable us to scratch things, and make it easier for us to pick up very fine things like a hair on a jumper or a needle on the floor.

How Do Nails Grow?

Nails have several parts. The hard surface that we think of as being the actual nail is called the nail plate and is mostly made of a protein called keratin, which is also found in claws, horns, and hooves of other animals. The nail bed is a layer of skin that is visible directly under the nail plate (which is semi-transparent). At the base of the nail, a thin layer of skin called the “cuticle” grows over the nail plate and provides a waterproof barrier.

Below the cuticle is the “proximal nail fold”, which covers a pouch of skin that the nail is tucked into called the nail matrix. The whitish, half-moon-shaped area at the base of the nail is the part of the matrix that is visible. The nail matrix is the part of the nail responsible for its growth. Keratin is constantly being produced and slowly pushes the nail plate forward, causing it to grow longer. Fingernails grow faster than toenails and do so, on average, at a rate of 3–3.5 millimeters per month.

How Does Nail Integrity Represent Overall Health?

The keratin that forms the nail plate comes from a specialized type of cell. The way in which these cells link together can affect the consistency, strength, and look of the nail plate. As a result, because the nail plate is formed by living cells, changes to the way the nail looks can indicate that a person has a health problem like a nutritional deficiency.

For example, people with a chronic iron deficiency may have nails that bend up at the sides or that are unusually pale, while clubbing (when nails appear swollen or wider than normal) can indicate low oxygen levels in the blood, potentially caused by a chronic lung disorder. In addition to nail abnormalities caused by underlying health problems, the nails themselves can become diseased. The nail plate is more permeable (a measure of how easily gases and liquids can pass through something) than skin, which means that harmful substances and microbes can penetrate it more easily.

What Are Some Common Nail Disorders?

There are a range of nail disorders that are caused by different things:

  • Brittle nail syndrome causes the nail to become fragile.
  • Onychomycosis is a fungal nail infection that can cause the nail to become brittle and discolored.
  • Habit-tic nail deformity is caused by the repeated rubbing, picking, or pushing back of the proximal nail fold.
  • Periungual/subungual verrucas are a type of wart that forms in the grooves of the proximal nail fold or under the nail plate.
  • Nail psoriasis is an autoimmune disorder that can cause nails to become pitted and discolored, and may be accompanied by a psoriatic rash (patches of skin that are red, dry, and flaky).

What Did the Article Investigate?

Because many nail disorders are chronic conditions (lasting for a long time or coming back over and over again), there is a need for more safe, effective treatments for nail disorders that can be used long-term. Although there is limited clinical evidence that treatment with vitamin supplements can be effective, survey-based studies have shown that dermatologists (doctors who specialize in treating nail, skin, and hair problems) often recommend vitamin supplements to patients, and that self-reported use of vitamin supplements to improve nail, skin, and hair disorders among affected patients almost doubled between 2011–2012 and 2017–2020.

The authors of the article searched the published medical literature for studies that have assessed the effectiveness of vitamins and vitamin derivatives in treating nail disorders when taken by mouth (orally), applied on the outside of the body (“topically”, for example a cream that is rubbed into the skin or nail), or applied directly to lesions (areas of nail or skin damage, for example by being injected into an abnormal area of skin). In total, 49 articles were considered suitable for assessment. In addition to looking at research involving the common nail disorders described above, the authors also looked at evidence relating to a rare condition called yellow nail syndrome.

What Did the Authors Conclude?

The authors concluded that, overall, there is currently limited evidence to support treating nail disorders with vitamins and their derivatives. Exceptions to this included the treatment of some patients with yellow nail syndrome with oral or topical vitamin E, and the treatment of patients with onychomycosis with topical vitamin E (although the authors note that clinical trials are needed to investigate its efficacy and possible side effects).

The treatment of nail psoriasis with topical tazarotene, a type of retinoid (a form of vitamin A), or with analogs of vitamin D (forms of naturally occurring vitamin D that have been chemically modified to have different or greater therapeutic effects) was judged to be effective. In addition, intralesional vitamin D3 treatment seemed to be effective in treating periungual/subungual verrucas, but the authors noted that more studies are needed.

Overall, the authors did not find good evidence that taking vitamin supplements is effective in treating nail disorders. These findings confirm that further research is needed to develop more effective treatments for nail disorders, and that people who develop a nail disorder should seek specialist advice from a dermatologist rather than relying on vitamin supplements to improve the condition.

Shari Lipner MD, PhD, corresponding author, on the relevance of the article for patients:

“There is increasing interest by the public in treatment of medical conditions, including nail disorders, with vitamins because they believe they are safe and hope that they are effective. While more research is needed, vitamin E may be reasonable for yellow nail syndrome treatment, given limited treatment options. Biotin is not recommended for brittle nail syndrome treatment, given potential laboratory interactions and lack of efficacy. Topical vitamin A and D analogs are efficacious and may safely be prescribed for nail psoriasis.”

Improving Platelet Storage before Transfusion

What Is the Main Idea?

Platelets, a type of blood cell, are involved in the control of blood loss. In the review article “In vitro Hemostatic Functions of Cold-Stored Platelets”, published in the journal Transfusion Medicine and Hemotherapy, the authors discuss how the temperature at which platelets are stored before they are transfused into patients affects their functions. They also review research published to date comparing the effects of storing them in the cold compared with at room temperature.

What Else Can You Learn?

The role of platelets in stopping blood loss and the importance of platelet transfusion are discussed. Transfusion-transmitted sepsis is also described.

Take-Home Message

While research continues to optimize how platelets are stored, there is an increasing need for people to become platelet donors.

What Are Platelets?

Blood is made up of a liquid called plasma and three main types of blood cell:

  • Red blood cells (also known as “erythrocytes”) carry oxygen around the body.
  • There are several different types of white blood cells (also known as “leukocytes”) and they help fight infection.
  • Platelets (also known as “thrombocytes”) are the smallest type of blood cell and are involved in the clotting process that promotes healing and controls “hemostasis” (the process that stops us from losing blood if we begin to bleed).

How Do Platelets Help Control Blood Loss?

The process by which blood clots are formed involves platelets and a number of different types of protein. Platelets are made in the bone marrow and continuously travel around the body in the bloodstream. Under normal conditions, when there is no damage to blood vessels that needs to be repaired, platelets are “quiescent” (inactive) and plate-shaped. If a blood vessel becomes damaged, platelets start to be attracted to the site of injury because a protein called collagen becomes exposed. Once they get there, they start to stick to the collagen and also to each other. This is helped by the cells lining the damaged blood vessel releasing a molecule called von Willebrand factor.

The platelets then become activated. As this happens, their shape changes to spherical with long “spines” or “tentacles”. The platelets start to secrete chemical signals that attract other platelets to the injury site, and they clump together to temporarily cover and close the wound, a bit like a plaster. This platelet covering isn’t able to last long, so clotting factors in the blood start to convert a protein called fibrinogen into a different form called fibrin. Fibrin can form long, tough, insoluble strands that bind to the platelets and cross-link together to form a strong, long-lasting mesh on top of the platelet plug. The fibrin then acts as a scaffold as part of the healing process.

What Is Platelet Transfusion and When Is It Needed?

Platelet transfusion is the process by which platelets that have been donated are transferred into the bloodstream of another person. The body needs to have a certain number of platelets in the blood to be able to control hemostasis properly. Too few or too many platelets can cause problems. A normal platelet count is considered to be 150,000–450,000 platelets per microliter of blood. If a person’s platelet count is greater than this they are classed as having a condition called thrombocytosis. Thrombocytosis can have a number of different causes, which affect how serious it is and whether or not a person needs treatment. If a person’s platelet count is less than 150,000 platelets per microliter of blood they are classed as having thrombocytopenia.

Thrombocytopenia can develop as the result of a number of conditions, including cancer, some types of anemia, autoimmune conditions, and viral infections, and as a result of certain types of medical treatment. Symptoms can include frequent gastrointestinal bleeding or bleeding from the gums and nose, and bruising easily. A person may also have a low platelet count if they have bled severely (for example during surgery or as the result of being in an accident) or if their spleen (the organ that “cleans” the blood to keep it healthy) starts to remove too many platelets. In addition to having too few platelets, people may also need a platelet transfusion if they have a platelet function disorder that means that they have enough or too few platelets but they do not work properly.

What Happens to Donated Platelets?

There is a constant and increasing demand for donations of platelets. Some donations are used to transfuse people with a low platelet count, while others are used to help patients who are receiving cancer treatment or are in intensive care. One of the reasons for the increasing need for new donors is that platelets don’t last very long. They are only usable for 7 days after they have been donated, and in the body they are removed from the bloodstream by the spleen or liver after 7–10 days.

Until the 1960s, platelets were stored in the cold (at 4 °C, i.e. in a fridge) because cold-stored platelets are better at stopping blood loss than ones stored at room temperature. However, research showed that patients recovered better after platelet transfusion and that platelets lasted longer if they were stored at room temperature, so cold storage of platelets was abandoned. While this improved patient recovery after transfusion, it brought a new problem: storage of platelets at room temperature increases the risk of septic transfusion reactions caused by the platelets being contaminated with bacteria.

What Is Transfusion-Transmitted Sepsis?

Transfusion-transmitted sepsis can develop if a patient is transfused with donated platelets that are contaminated with bacteria. It is typically caused by contamination with bacteria that usually live harmlessly on a person’s skin, contamination getting into the platelet sample during collection or processing, or the donor unknowingly having bacteria in their blood. Symptoms can begin during or shortly after a transfusion and include severe shivering and chills, high fever, nausea and vomiting, breathing difficulties, low blood pressure, a fast heart rate, and circulatory collapse. Severe cases can be fatal.

What Did This Review Article Investigate?

Research continues to investigate ways to improve the safety of platelet transfusion and to optimize how platelets are stored. As a result, over recent years, interest in cold storing platelets has increased because it reduces the risk of bacterial contamination and potentially increases the length of time that platelets can be stored.

There is some evidence that storing platelets at room temperature for 5 days (during which time they are constantly gently shaken), followed by cold storage without shaking for up to another 16 days, may result in platelets being of better quality at the time of transfusion. Other studies have reported that cold-stored platelets are “primed” for activation to a greater extent when compared with platelets stored at room temperature, and also seem to be better at sticking to collagen at sites of blood vessel damage. In addition, one research group has reported that cold-stored platelets are able to form denser clots, with thinner fibers and more crosslinks, making them more effective at stopping blood loss.

Although these results seem promising, there is also evidence from research reports that how platelets are prepared for transfusion, the solutions that are added to them (such as plasma), and variations in how they are stored can affect how well they function. Until these factors become globally standardized it is difficult to draw conclusions regarding whether storage at room temperature or at 4 °C is best. In the meantime, platelet donations are increasingly needed to help a broad group of patients with a variety of conditions. Requirements vary between regions, but if you are interested in becoming a platelet donor you will be able to get information from the health service in your country.

Note: The author of this paper makes a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.

Gut Frailty and Health in Old Age

What Is the Main Idea?

The gastrointestinal system or “gut” refers to the parts of our bodies that are involved in digestion, including the stomach and intestines. In the open-access review article “Gut Frailty: Its Concept and Pathogenesis”, published in the journal Digestion, the author discusses a concept called “gut frailty”, and describes how the extent to which our gut becomes frail in old age can affect our overall health.

What Else Can You Learn?

The concept of gut frailty is discussed. The importance of the gut microbiome and the link between constipation and frailty are also described.

Take-Home Message

Taking steps to keep our guts healthy can increase the chances of us staying well in old age.

How Do Life Expectancy and Healthy Life Expectancy Differ?

In many countries, life expectancy (the average number of years that a person can expect to live) has increased over the last 50 years. However, there can be big differences between life expectancy and healthy life expectancy (the average number of years that a person can expect to live in good health). For example, in Japan, one out of every two babies born in 2023 is now expected to live until the age of 100 years; however, current healthy life expectancy is approximately 9 years less than life expectancy for men and 12 years less for women.

A number of research studies are being conducted to help us better understand how to narrow the gap between healthy life expectancy and life expectancy. Increased prevalence of obesity and levels of physical activity have both been shown to be significant factors. In addition, recent research has suggested that gut frailty may also be involved.

What Is Gut Frailty?

People are described as being frail when they are between the states of being healthy and needing care. As people age, they become frail when they have reduced physical (muscle) and mental strength and health, and the risk that they will need assistance with daily activities begins to increase. The term “gut frailty” refers to the functions of the gastrointestinal system becoming “weakened”. Recent research has shown that gut frailty can be a precursor to overall frailty, can worsen the symptoms and severity of some diseases, and can also cause chronic inflammation.

What Are the Symptoms of Gut Frailty?

The following symptoms are considered to be potential indicators of gut frailty:

  • Pain or discomfort in the abdomen;
  • Constipation (finding it hard to poo or going to the toilet less often than usual) or diarrhea (when the poo is loose and watery, and needing to go to the toilet more often than usual);
  • Abdominal bloating;
  • Stress-related symptoms;
  • Weight loss or decreased appetite.

What Is the Link between Constipation and Frailty?

Among the symptoms listed above, constipation seems to be particularly associated with frailty. Studies have shown that people who experience constipation are at greater risk of developing a number of conditions that include disorders affecting the heart and blood vessels (cardiovascular disorders), chronic kidney disease, and neurodegenerative disorders such as Parkinson’s disease. Although constipation is often thought to simply be a result of the colon not functioning properly (the colon is the part of the digestive system where water and some other nutrients are absorbed into the body from our partially digested food), it can actually be a symptom that disease is developing and can make it worse.

In a study that compared the cognitive decline (changes in cognitive function that are considered to be a normal part of the aging process, like difficulties with multitasking and sustaining attention, and an overall slowing of thinking speed) of elderly people who experienced constipation with people who did not, the rate of cognitive decline was 2.7 times faster among the people with constipation. Similarly, another study reported that loss of muscle and strength as a person gets older (known by the medical term “sarcopenia”) was significantly greater in a group with constipation symptoms compared with a group without them.

What Causes Gut Frailty?

The exact causes of gut frailty are not yet known, but two factors are thought to play central roles: reduced secretion of mucus inside the gut, which is believed to be key in the early stages of gut frailty’s development, and an imbalance in the community of microbes that live in the gut (termed “dysbiosis”). The guts of healthy adults contain more than 1,000 different species of microbes, collectively known as the gut microbiome. Although the majority of these microbes are beneficial to us, breaking down indigestible fibers and producing essential nutrients that we would not otherwise be able to get, some are pathogenic (cause disease). If the numbers of “good” bacteria decrease, the “bad” bacteria may increase in number and overrun the population of good microbes.

Research has shown that the gut microbiome and the immune system are intimately linked. The gut microbiome communicates with the immune system and, if it is healthy, helps it to increase the number of immune cells that dial down the immune responses that cause inflammation. It is also becoming apparent that the gut microbiome and the nutrients that it produces influence aging. One study reported that people with a low level of gut microbiome diversity had a lower survival rate after 4 years than people with a higher level of diversity.

How Can Gut Frailty Be Prevented?

Research investigating how gut frailty can be prevented is ongoing. Potential approaches include dietary changes, medications, next-generation prebiotics (plant fibers that help “good” bacteria to thrive in your gut) and probiotics (live bacteria and yeasts, promoted as having health benefits, that are usually taken as supplements or added to yoghurts), and fecal microbiota transplantation (FMT, also known as poo or stool transplantation). FMT works by transferring the microbiome from a healthy donor to the intestines of a recipient, usually in capsule or liquid form, and has been shown to have positive effects lasting several years in patients with irritable bowel syndrome.

Although the concept of gut frailty is not yet widely recognized, better understanding of how gut frailty affects our health will open up the possibility of developing new preventive and therapeutic interventions that focus on the gastrointestinal system, with the aim of helping us to lead healthier lives well into old age.

Note: The author of this paper makes a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.

How Surgery Can Reduce Intense Facial Pain

What Is the Main Idea?

Trigeminal neuralgia (also called “tic douloureux”) causes intense facial pain, usually on one side of the face. In the open-access clinical study “Open and Percutaneous Trigeminal Nucleotractotomy: A Case Series and Literature Review”, published in the journal Stereotactic and Functional Neurosurgery, the authors assess a surgical technique called nucleotractotomy that can reduce the pain experienced by people with this debilitating condition.

What Else Can You Learn?

The roles of the trigeminal nerve are discussed. Trigeminal neuropathy and two different ways that surgery can be done to treat it are also described.

Take-Home Message

Nucleotractotomy can be highly effective in treating intense facial pain that cannot be treated with medication.

What Is the Trigeminal Nerve?

The trigeminal nerve is a large, three-part nerve in the head that is responsible for us being able to feel sensations like touch and pain in our faces. We each have two trigeminal nerves, one on each side of the head. Each one starts in the brain and then splits into three different branches that extend out across the face like the branches of a tree, and that have different roles:

  • One branch travels to the lower part of your face and is involved in the lower jaw’s functions (biting, chewing, and swallowing). It’s also involved in feeling sensations with your lower lip and gums.
  • A second branch is involved in the upper lip and gums feeling sensation, and also the cheeks, nose, and lower eyelids.
  • The final branch covers the scalp and the upper part of the face, including the eyes, upper eyelids, and forehead.

What Is Trigeminal Neuralgia?

Like the skin, nerves can sometimes become damaged or bruised. Although the trigeminal nerve can recover over time if it becomes damaged, some people experience numbness or facial pain in the area that the damaged branch of the trigeminal nerve serves (this is known as “trigeminal neuropathy”).

Trigeminal neuralgia (also called “tic douloureux”) is a type of trigeminal neuropathy that causes intense pain, usually on one side of the face, that some people describe as being like severe stabbing, burning, or electric shock-like pain. People with trigeminal neuralgia often have attacks of pain that get worse over time, with shorter pain-free periods. It can be caused by compression of or pressure on the trigeminal nerve (for example as the result of the growth of a tumor or cyst), a facial injury, and disorders that affect the myelin sheaths of nerve cells (these act to insulate the signal-sending parts of the nerve cells, a bit like the covering of an electrical wire) like multiple sclerosis. Essentially, the trigeminal nerve keeps sending signals of intense pain to the brain, whether or not anything is actually happening to that part of the face.

How Is Trigeminal Neuropathy Treated?

Trigeminal neuropathy can be treated in different ways according to the individual needs of patients. A number of medications can be effective, but some patients find that their pain is not significantly reduced by medication and surgery is offered instead. Nucleotractotomy is a type of surgery that involves the selective cutting or damaging of a region of the trigeminal nerve called the nucleus caudalis. This is the area where the different signals from the branches extending out to the rest of the face are brought together. The technique works by stopping the intense pain signals from reaching the brain, but importantly does not stop the person from being able to sense that the affected region of the face is being touched.

How Is Nucleotractotomy Done?

There are two main ways that nucleotractotomy can be done:

  1. The first is an “open” technique under general anesthesia. A portion of the patient’s skull is removed and an electrode is inserted and used to “thermolesion” (damage part of the trigeminal nerve using heat) the part of the nerve that is causing the intense pain signals, so that they are no longer received.
  2. The second technique is done under local anesthetic while the patient is awake and involves an electrode being inserted through the skin (“percutaneous”) and guided by computed tomography scanning to the location that will be damaged.

What Did This Study Investigate?

The aim of this study was to review how effective nucleotractotomy is at stopping patients from experiencing severe facial pain in the long term. The authors of the study assessed the amount of pain that 13 patients (7 who underwent the open procedure and 6 who underwent the percutaneous one) experienced before and after surgery using a pain intensity score questionnaire (which rates pain from 0 to 10). They found that before surgery, patients’ pain was rated on average as 9.3. Not long after surgery, this had decreased to an average of 1.57 for patients who underwent open nucleotractotomy and 2.66 for patients who underwent the percutaneous technique.

Although there was some evidence of a higher rate of pain recurrence with the percutaneous method, the smaller area of tissue affected by this technique seems to be linked to a lower chance of patients experiencing severe complications after surgery. The patients were followed up for an average of 40 months (range 1 to 71 months), and at the end of this period the pain scores across the two groups averaged 2.6. Although severe facial pain developed again in 3 patients after percutaneous surgery and in 1 patient after open surgery, the authors of the study judged the techniques to be safe overall and equally effective at relieving severe facial pain in the long term.

How Advances in DNA Sequencing Can Aid Blood Transfusion

What Is the Main Idea?

Immunohematology is the study of antigens on red blood cells and antibodies that are associated with blood transfusions. In the open-access review article “Long-Read Sequencing in Blood Group Genetics”, published in the journal Transfusion Medicine and Hemotherapy, the authors review the latest developments in DNA sequencing technology and the potential benefits to the field of immunohematology.

What Else Can You Learn?

Different types of DNA sequencing are discussed. The structure of DNA, and the roles of different blood group systems in influencing whether a transfusion of blood from a donor to a recipient will be successful are also described.

Take-Home Message

There are currently few full-length blood group system variant sequences available, and it is hoped that long-read sequencing will change this, making it easier to accurately screen blood donors and therefore reducing the risk of a patient receiving an incompatible blood transfusion.

What Is Long-Read Sequencing?

Long-read sequencing is a method that is used to determine the sequences of stretches of DNA (deoxyribonucleic acid). The cells in your body contain long strings of double-stranded DNA that are coiled up as chromosomes in a part of the cell called the nucleus, which acts as the cell’s command center. Your genes are short sections of this DNA that carry the genetic information for the growth, development, and function of your body.

In many types of living organism, including humans, the DNA exists as a two-stranded molecule, which can be thought of as being like a “ladder”, that is twisted into a shape called a double helix. Each strand is made up of units called nucleotides, which consist of a sugar molecule (deoxyribose), a phosphate group, and a nitrogenous (nitrogen-containing) base. There are four different nitrogenous bases in DNA – adenine (A), thymine (T), cytosine (C), and guanine (G) – and they bind together in pairs (A with T and G with C) to form the “rungs” of the ladder.

When new cells are made, the sequence of the nucleotides in a gene should be copied exactly. If it is not, a mutation (a change in the sequence of the DNA) results. Although some mutations have no obvious effect or can have a positive effect on the organism, enabling it to adapt better to its environment over time, others can have a significant negative effect. A number of diseases are caused by the mutation of only one nucleotide, and mutations can also lead to the development of cancer.

DNA sequencing enables the sequence of nucleotides in a piece of DNA to be determined. In medicine it is used to diagnose and treat rare diseases and to identify new drug targets, and it can be used as a form of genetic testing to identify whether someone is at risk of developing a genetic disease and to provide counselling to affected couples who want to have a child. DNA sequencing has also helped scientists to understand the functions of genes and other parts of the genome (all of the DNA in a living organism).

How Does DNA Sequencing Work?

The first DNA sequencing technique was developed in the 1970s by Fred Sanger and his team. It involves making lots of copies of a target region of DNA in which the deoxyribose sugar molecule is replaced by a different version called dideoxyribose, in which the part of the sugar molecule that acts as a “hook” to join the strand to the next nucleotide is missing. As a result, once a dideoxyribose-containing nucleotide has been added to a DNA strand, no more nucleotides can be added and the strand ends.

The dideoxyribose-containing nucleotides are also marked with different colored fluorescent dyes, one for each of the nitrogenous bases. The different copies of the target DNA strand are then “read” in order of size and the dye color on the end of each strand is detected, which enables the sequence of the original piece of target DNA to be worked out.

Although the Sanger sequencing method can produce accurate sequences of DNA segments up to 900 nucleotides long and is still used, it is expensive and it takes a long time to sequence a large amount of DNA like a human genome. As a result, second-generation DNA sequencing techniques were developed that used “short reads” (essentially, a large number of sequencing reactions are run in parallel that sequence DNA strands that are between 50 and 70 nucleotides long), enabling large quantities of DNA to be sequenced more quickly and cheaply. Third-generation sequencing techniques are now also being developed, which include long-read sequencing, that have technical advantages over short-read sequencing.

How Does Long-Read DNA Sequencing Work?

As its name suggests, long-read sequencing can sequence long reads of DNA in one go without them needing to be broken up into smaller fragments. There are currently two companies that offer long-read sequencing and they use different methods:

  1. The first involves joining the ends of the piece of DNA that is going to be sequenced so that it becomes circular, and then using it as the template to make a long copy. A single circular piece of DNA is placed on a surface with thousands of tiny wells, so that a different reaction can take place in each well. Nucleotides labelled with fluorescent dyes are then used to make new strands, and the circular DNA is copied many times.
  2. The second method involves a single strand of DNA being passed through a small hole, called a nanopore, in a membrane that is submerged in a salt solution. When an electrical current is established through the pore, each nitrogenous base blocks the flow of the current in a different way as the DNA strand passes through the nanopore. The order of these flow disruptions can then be translated into the sequence of the nitrogenous bases on the DNA strand.

For both methods, many different copies of a particular sequence are then put together to form a high-accuracy “consensus” sequence.

What Is Immunohematology?

Immunohematology is a medical specialty that brings together the fields of hematology (the study of the blood and blood disorders) and immunology (the study of the immune system) in the study of antigens on red blood cells and antibodies that are associated with blood transfusions. The term “antigen” describes anything that causes a response by the immune system, while antibodies are molecules that specifically bind to antigens and identify them to the immune system as needing to be dealt with.

There are many different types of antigens on the surfaces of red blood cells. These antigens are normally ignored by the immune system, but if a person receives blood from someone else in the form of a blood transfusion, their immune system will identify and attack any red blood cells with antigens that are different to the ones on their own red blood cells. As a result, it is essential that the red blood cell antigens of both the blood donor and the recipient are determined before a transfusion is given, which enables their “blood groups” to be identified.

Red blood cell antigens are coded for by genes in our DNA that are grouped into blood group systems, of which more than 40 have been identified in humans. Some blood group systems can have more than one form, called variants. We each inherit one set of our chromosomes from our mother and another from our father, and this can mean that we inherit different variants of a blood group system. Some variants are dominant over others, which means that a child’s blood group may be different from the blood groups of its parents.

How Can Long-Read Sequencing Aid Immunohematology?

A number of blood group system genes are very long and have complicated sequences that are difficult to work out using short-read sequencing. One of the main advantages of long-read sequencing is that it can span entire lengths of complicated regions of DNA. This means that it is better able to detect and sequence regions that contain a lot of repetition, that exist in several different variant forms, or that contain variation affecting more than 50 “rungs” on a DNA “ladder”.

Long-read sequencing can also be used to work out which copy of a variant exists on which copy of a chromosome (i.e., whether it’s on the chromosome inherited from the mother or the father). New variant forms of blood group systems are still being identified, and long-read sequencing is expected to help resolve confusion about what different variant forms of these blood group systems can mean for an individual and how the genes involved are regulated. The technology may also help to establish new reference databases for all blood group systems.

Treating Tremor in Parkinson’s Disease

What Is the Main Idea?

People with Parkinson’s disease often have movement-related symptoms such as tremors. In the open-access research article “Long-Term Follow-Up of Unilateral Deep Brain Stimulation Targeting the Caudal Zona Incerta in 13 Patients with Parkinsonian Tremor”, published in the journal Stereotactic and Functional Neurosurgery, the authors investigate whether using deep brain stimulation that targets a region of the brain called the posterior subthalamic area to treat patients with Parkinson’s disease who have severe tremor still reduces their symptoms at least 3 years after surgery.

What Else Can You Learn?

The symptoms and causes of Parkinson’s disease are discussed. The roles of different areas of the brain in regulating movement and of the neurotransmitter dopamine are also described.

Take-Home Message

The results suggest that treating patients with Parkinson’s disease who have severe tremor with deep brain stimulation that targets the posterior subthalamic area (PSA) is effective and safe: tremor symptoms improve significantly, and slowness of movement is slightly improved. Future studies directly comparing the effects of targeting the ventral intermediate nucleus (VIN) or the PSA in deep brain stimulation will provide researchers with more information that can be used to refine the techniques and improve the quality of life of people with Parkinson’s disease.

What Is Parkinson’s Disease?

Parkinson’s disease is an age-related neurodegenerative disorder that develops when nerve cells in the nervous system or brain stop functioning and eventually die, causing more severe symptoms over time. It is the most common movement-related brain disease and is slightly more common in men than in women. Although it can develop in adults as young as 20 years old, this is extremely rare. It is inherited in around 10% of cases, but in most cases is not linked to gene changes inherited from a parent. The average age at which Parkinson’s disease develops is 60 years, and it is estimated to affect more than 1% of people aged over 60 years worldwide.

What Are the Symptoms of Parkinson’s Disease?

Parkinson’s disease is characterized by a range of symptoms that can be broadly divided into two groups: those that are movement-related and those that are not. The two best known movement-related symptoms are bradykinesia and tremor:

  1. The term bradykinesia means that movement is slow and that a person’s continuous movements may be hesitant or halt midway. This is caused by problems with muscle control rather than a loss of strength.
  2. Tremor is a rhythmic shaking of the muscles, even when a person is resting and not using them.

Other movement-related symptoms include a hunched or stooped posture, rigidity or stiffness of the joints, changes in the way a person walks (often resulting in them taking shorter, shuffling steps and needing to take more steps when turning), difficulty swallowing, and blinking less often than usual.

Symptoms that are not related to movement and muscle control include a loss of sense of smell, problems with focusing and with sleep, depression, problems relating to the stomach and intestines (gastrointestinal problems), urinary incontinence, and low blood pressure when standing up. Some of these symptoms are now thought to be warning signs that Parkinson’s disease is developing, which begin years before movement-related symptoms start to be noticeable.

What Causes Parkinson’s Disease?

The brain is made up of several regions with different roles. The outside surface of the brain is made up of a thin layer of cells called the cerebral cortex. This area of the brain is responsible for language and social skills, memory, reasoning, and decision-making. Below it is the sub-cortex, which contains four regions that are important for emotion, thinking, and movement.

One of these is the brainstem, a stalk-like structure that connects the brain to the spinal cord. It is in an area of the brainstem that the neurotransmitter dopamine is produced (neurotransmitters carry chemical signals between neurons, a type of cell that transmits messages from one part of the brain and nervous system to another, and trigger an action or change in the target cell).

Dopamine has a wide range of roles including:

  • motivation and pleasurable reward,
  • attention,
  • behavior,
  • cognition (an umbrella term that describes a combination of processes that take place in the brain, such as the ability to learn, remember, and make judgements based on experience, thinking, and information from the senses), and
  • movement.

The basal ganglia is another important region in the sub-cortex.

How Is the Basal Ganglia Linked to Parkinson’s Disease?

The basal ganglia is a group of structures near the center of the brain that is about 10 cubic centimeters in size. The structures within it are responsible for important connections between different areas of the brain that enable them to work together and send signals back and forth, a bit like a circuit board in an electronic device. The basal ganglia plays a key role in our ability to move by managing the signals that the brain sends to our muscles.

The structures within it can filter out signals that are unnecessary or wrong, and approve or reject movement signals so that you can control particular muscles without using other ones in the same area of your body. They also process sensory information, which helps you to further refine your movements, and are involved in emotions, motivation, and habits.

Parkinson’s disease develops when the basal ganglia begins to deteriorate and causes a major change in the chemistry inside the brain that results in there not being enough dopamine. Because the basal ganglia’s fine-tuning of your movements involves cells that require dopamine to function properly, this reduction in the amount of dopamine results in people having the slowed movement and tremors that are characteristic of the disease.

How Is Parkinson’s Disease Treated?

Although there is currently no cure for Parkinson’s disease, the symptoms of the condition can be treated with medication and/or surgery. Although many people with Parkinson’s disease can have their symptoms reduced by taking medication, which often works by increasing the levels of dopamine in the brain, it can become less effective and side effects (these are positive or negative unintended effects of a medication) can become more severe as the condition progresses.

Tremors can be particularly difficult to treat with medications that affect dopamine levels, and some researchers have reported evidence that as well as dopamine, levels of other neurotransmitters are involved in causing tremors in Parkinson’s disease.

Patients who do not experience significant improvements in their symptoms as a result of taking medication may be offered a type of surgical therapy called deep brain stimulation. This approach involves the reversible implantation of a device that works in a similar way to how a pacemaker regulates the heart. A device called a pulse generator, which is connected to one or two fine wires, is implanted under the skin in the stomach or chest area. The wires are inserted into specific areas of the brain and deliver a mild electrical current that changes some of the signals in the brain that cause the movement-related symptoms of Parkinson’s disease.

What Did This Study Investigate?

The thalamus, which is located close to the basal ganglia, acts as the main relay station of signals that come into the brain and passes them on to other areas for interpretation and response. An area of the thalamus called the ventral intermediate nucleus (VIN) is usually the location of choice for the implantation of wires when a patient with a tremor disorder like Parkinson’s disease is treated with deep brain stimulation.

However, over the last decade, targeting an area below the thalamus called the posterior subthalamic area (PSA) has been shown to be at least as effective as targeting the VIN in reducing tremors in patients. It has also been reported to reduce other Parkinson’s disease symptoms, such as rigidity and the loss of the ability to move muscles voluntarily, which is not seen when the VIN is targeted. To assess whether this approach is effective in reducing symptoms of tremor in the long term, the authors of this research study investigated whether deep brain stimulation targeting the PSA remained effective in reducing tremors more than 3 years after patients with Parkinson’s disease underwent surgery.

Thirteen patients were included in the study and 12–24 months after surgery their tremor symptoms had improved by an average of 88% and slowness of movement by an average of 40%. When they were assessed on average 62 months after surgery, the improvement in tremors was still seen and slowness of movement symptoms remained an average of 20% better than when they were assessed before surgery.

Recognizing Sjögren’s Syndrome in Cancer Patients

What Is the Main Idea?

Dryness of the mouth and eyes are known side effects of some anticancer treatments. In the open-access article “Sjögren’s Syndrome Caused by PD-1 Inhibition in a Lung Cancer Patient”, published in the journal Case Reports in Oncology, the authors describe the case of a 71-year-old woman who was receiving immunotherapy treatment for a type of lung cancer, went on to develop these symptoms, and was discovered to have an autoimmune condition called Sjögren’s syndrome.

What Else Can You Learn?

The symptoms of Sjögren’s syndrome, and how it differs from general symptoms of dryness of the mouth and eyes, are discussed. Differences between chemotherapy and immunotherapy anticancer treatments are also described.

Take-Home Message

This case report demonstrates the importance of evaluation by a multidisciplinary healthcare team for patients who are receiving immunotherapy treatment and have symptoms that are difficult to interpret. Involving specialists from different medical disciplines means that less common adverse events can be identified and effective treatment started quickly, which can significantly improve patients’ quality of life.

What Is Sjögren’s Syndrome?

Sjögren’s syndrome is an autoimmune disease that is more common in women than in men, and usually develops between the ages of 45 and 55 years. The cells in your body have molecules on their surfaces that the immune system usually recognizes as “self-antigens” (in other words, it recognizes them as “not foreign” and therefore not potentially dangerous).

However, sometimes the body’s immune system starts to recognize self-antigens as foreign ones and begins to attack them. When this happens, the inflammation caused by the “autoimmune” response can result in the destruction of normal, healthy body tissue, or changes in the function or growth of an organ.

What Causes Sjögren’s Syndrome?

The exact causes of Sjögren’s syndrome are not yet understood, but it mainly affects “exocrine” glands, particularly the ones that produce tears (the lacrimal gland) and saliva (the salivary gland). Exocrine glands are organs in the body that produce and release substances through ducts (openings), and include the glands that release milk, digestive juices, tears, and sweat.

Although Sjögren’s syndrome is mainly associated with having dry eyes and a dry mouth, it is a systemic disease (a condition that affects the whole body rather than a single body part or organ) because the long-term (chronic) inflammation that causes it often occurs in other organ systems as well. Patients can experience tiredness, skin rashes (particularly after they have been out in the sun) and dry skin, pain in the muscles or joints, vaginal dryness, and swollen salivary glands.

For some people, Sjögren’s syndrome develops in isolation and is referred to as “primary” Sjögren’s syndrome. For others, its development can be associated with another related autoimmune condition, such as rheumatoid arthritis, and is referred to as “secondary” Sjögren’s syndrome. One reason that Sjögren’s syndrome is difficult to diagnose is that symptoms of constant dryness in areas of the body are not uncommon, particularly as people age, and can vary widely between one person and another.

To be able to diagnose Sjögren’s syndrome, healthcare practitioners look for evidence that an autoimmune response is causing the patient’s symptoms, often by measuring the levels of particular antibodies in blood samples. If there is no evidence that the dryness symptoms are being directly caused by an autoimmune response, the symptoms are classified as “sicca” (which literally means “dry”) syndrome. It is worth noting that patients with other autoimmune conditions, such as rheumatoid arthritis and lupus erythematosus, can also experience dryness of the eyes and mouth.

How Is Dryness of the Eyes and Mouth Linked to Cancer Treatment?

Severe mouth and eye dryness are known side effects of some anticancer treatments, with approximately 9.4% of cancer patients who receive chemotherapy treatment developing them. Chemotherapy targets and kills rapidly dividing cells like cancer cells, but can also affect other cells in the body that divide rapidly, causing side effects. Patients with cancer who are treated with immunotherapy experience fewer adverse events (these are unintended and undesirable effects that develop after exposure to a medicine, although they may not have been caused by it) compared with patients treated with platinum-based chemotherapy.

Immunotherapy is a type of treatment that uses the body’s own immune system to tackle a cancer. For example, some immunotherapy medicines target and block a protein called PD-1. PD-1 is found on the surfaces of some immune cells and plays a role in preventing autoimmune responses from developing. Blocking PD-1 triggers these immune cells to find and kill cancer cells.

Although some adverse events that can occur during or after immunotherapy treatment are well known and are routinely looked for by patients’ healthcare teams, others are extremely rare and can have symptoms that are difficult to diagnose and interpret. This can lead to delays both in diagnosis and in patients receiving treatment for these symptoms.

Approximately 5.3% of cancer patients treated with immunotherapy develop symptoms of dryness of the mouth and eyes because the immune system starts to attack normal, healthy cells as well as the cancerous ones. Where this occurs, the signs and symptoms are different to those of primary Sjögren’s syndrome. Around 50% of all cases of mouth and eye dryness that are linked to immunotherapy occur in men, as opposed to only around 5% for primary Sjögren’s syndrome, and the average age of diagnosis is around 10 years older.

What Does This Case Report Describe?

In this case report, a type of study that looks in depth at the case of a single individual or a specific group of patients, the authors describe the case of a 71-year-old woman who was eventually diagnosed with Sjögren’s syndrome after receiving immunotherapy treatment for non-small cell lung cancer. Case reports are useful because they enable healthcare practitioners to communicate information about rare or previously unreported conditions, complications, or treatments to the rest of the medical community. The authors report that this woman’s case is unusual because she developed Sjögren’s syndrome only 18 months after her immunotherapy treatment began, and because she developed a broad range of signs and symptoms of the condition.

After 18 months of receiving immunotherapy treatment, the woman’s non-small cell lung cancer was in partial remission (this means that the cancer had reduced in size or stopped growing) and the immunotherapy was stopped. She had been experiencing some mild side effects from the immunotherapy and these had been treated with low-dose steroid treatment. However, once the steroid treatment ended, she quickly started to experience an extremely dry mouth that made swallowing difficult, resulting in her rapidly losing weight. She also developed severe dry eye syndrome (known as “xerophthalmia”) and a type of skin inflammation called “erythema nodosum”, which results in painful reddish lumps developing under the skin, usually on the shins.

In the case of this patient, evaluation by a dermatologist (a doctor that specializes in conditions that affect the nails, hair, and skin) and a rheumatologist (a doctor that specializes in chronic inflammatory conditions like Sjögren’s syndrome, rheumatoid arthritis, and lupus erythematosus) meant that her symptoms were diagnosed correctly as Sjögren’s syndrome. She was able to receive the treatment that she needed (corticosteroid treatment) and her symptoms rapidly improved.

Raising Awareness of a Rare Type of Adrenal Cancer

What Is the Main Idea?

Feminizing adrenocortical tumors are an extremely rare type of cancer that develops in the adrenal glands. In the open-access article “Feminizing Adrenocortical Tumor with Multiple Recurrences: A Case Report”, published in the journal Case Reports in Oncology, the authors describe the case of a 35-year-old man diagnosed with this type of cancer and how his treatment has been managed.

What Else Can You Learn?

Feminizing adrenocortical tumors and their symptoms are discussed, alongside the roles of case reports in raising awareness of rare conditions. The roles of the endocrine system and its key components, particularly the adrenal glands, are also described.

What Are Glands?

Glands are organs in the body that produce substances and release them either through ducts (openings) or directly into the bloodstream. Glands that release substances through ducts are called “exocrine” glands, and this group includes the glands that release milk, digestive juices, tears, and sweat.

“Endocrine” glands release hormones, molecules that act as chemical messengers, into the bloodstream. Together, hormones and endocrine glands make up the endocrine system, a messenger system that targets and regulates organs all over the body and controls almost all of the processes that take place within it.

What Does the Endocrine System Do?

To be able to function properly, the various parts of the body need to be able to communicate with each other to make sure that the internal environment is kept constant, and that any changes in the internal or external environment get an appropriate response. Two systems enable this communication:

  • The nervous system is made up of the nerves, spinal cord, and brain, and enables messages to travel from one part of the body to another within fractions of seconds.
  • In contrast, the endocrine system is better suited to responding to situations where a longer-lasting and more widespread response is needed, because it involves hormones being made and travelling around the body in the bloodstream.

Although the two systems complement and interact with each other, the endocrine system is responsible for regulating development, growth, metabolism (the process by which the food and drink that we consume is changed into energy), and our ability to reproduce, as well as the components that make up bodily fluids like saliva and blood, our emotions and moods, and even our sleep.

Which Parts of the Body Are Involved in the Endocrine System?

Although hormones are made in many parts of the body, there are several key components of the endocrine system. These include the pituitary and pineal glands and the hypothalamus in the brain, the thymus in the upper part of the chest, the thyroid and parathyroid glands in the neck, the pancreas (which is behind the stomach and is also part of the digestive system), the gonads (the “sex glands”: ovaries in women and testes in men), and the adrenal glands, which are located on top of the kidneys.

The production of hormones and their release must be tightly controlled to ensure that the body’s functions are regulated properly. To achieve this, many functions are regulated by several hormones that regulate each other via positive and negative feedback loops. For example, the effect of one hormone on an organ may cause that organ to release a second hormone that feeds back to the gland that sent the first hormone. This can prevent the message being sent by the first hormone from being “on” continuously.

What Do the Adrenal Glands Do?

The adrenal glands are triangular in shape and there is one on top of each kidney. They are made up of two parts that have different functions and make different sets of hormones:

  • The inner part of the adrenal gland is called the “adrenal medulla”, and it is here that hormones called “catecholamines” are made. The best known catecholamine is adrenaline (also known as epinephrine or the “fight or flight” hormone), which increases the body’s heart rate and blood pressure when it is under stress.
  • The outer part of the adrenal gland is called the adrenal cortex, and it is here that hormones called “corticosteroids” are made. These hormones help to regulate metabolism, the body’s response to stress, the immune system, and sexual development and function.

What Does This Case Report Describe?

A case report is a type of study that looks in depth at the case of a single individual or a specific group of patients. Case reports are particularly useful when healthcare practitioners want to communicate information about rare or previously unreported conditions, complications, or treatments to the rest of the medical community. In this study, the authors describe the case of a 35-year-old man who had a type of adrenal gland cancer called a “feminizing adrenocortical tumor”.

Primary tumors (tumors that have not spread from elsewhere in the body) that start in the tissues that cover your organs and glands can be classed as adenomas (these are “benign”, meaning that they are not able to invade surrounding tissue or spread to other areas of the body) or carcinomas (these are “malignant”, which means that they can invade and spread). Primary carcinomas of the adrenal glands are rare and, although it is unusual, sometimes a tumor in an adrenal gland can start to produce and release corticosteroids abnormally.

In the case of feminizing adrenocortical tumors, only estrogens are secreted. Estrogens are a type of sex hormone, so called because they are critical in regulating the biological differences between males and females, and are particularly involved in reproduction and puberty. In humans, the key sex hormones are estrogens, progesterone, and testosterone. The high levels of estrogens produced by feminizing adrenocortical tumors have a feedback effect on the levels of testosterone, meaning that testosterone production is usually suppressed in patients with this type of tumor.

As a result, common symptoms are hypogonadism (where the gonads produce low levels of or no hormones) and overdevelopment or enlargement of the breast tissue in men and boys. Patients with this type of tumor can also experience discomfort or pain in one side of the body between the back and the upper abdomen (belly area). Feminizing adrenocortical tumors most commonly occur in men but can also develop in women and children. In women, additional symptoms include irregular or postmenopausal bleeding.

One of the difficulties in treating feminizing adrenocortical tumors is that they are extremely rare, accounting for less than 2% of all adrenal gland tumors. In fact, only 50 cases were reported in the medical literature between 1970 and 2015. As a result, case reports have an important role to play in increasing awareness of this type of tumor and improving its diagnosis and treatment.

Take-Home Message

Feminizing adrenocortical tumors are often aggressive (meaning that they develop and/or spread quickly), are almost always malignant, and the chance that they will recur is high. Case reports like this study help to raise awareness of the need to recognize and treat this type of cancer aggressively, and monitor patients closely for signs of recurrence.

The Role of Immune Cells in Primary Immune Thrombocytopenia

What Is the Main Idea?

Primary immune thrombocytopenia is a type of autoimmune disorder. In the research article “The Role of Follicular Regulatory T Cells/Follicular Helper T Cells in Primary Immune Thrombocytopenia”, published in the journal Acta Haematologica, the authors discuss how two types of immune system cells are linked to primary immune thrombocytopenia and may have potential as future therapeutic targets.

What Else Can You Learn?

The symptoms and our understanding of the causes of primary immune thrombocytopenia are described. The roles of the innate and adaptive branches of the immune system, and of B and T cells, are also discussed.

Take-Home Message

Although further research is needed, these results suggest that targeted immunotherapies (treatments that work by activating or suppressing the immune system) may be worth investigation as potential treatments for patients with primary immune thrombocytopenia in the future. For example, it may become possible to directly target components of the immune system to prevent the recognition of platelets as foreign and reduce their breakdown so that normal levels of platelets in people with primary immune thrombocytopenia can be maintained.

What Is an Autoimmune Disorder?

Primary immune thrombocytopenia is an autoimmune disorder, which means that it develops when the body’s immune system starts to attack cells in the body that are not harmful by mistake. The immune system protects the body from things that are potentially harmful by recognizing “antigens”, which is a term used to describe anything that causes an immune response and can include chemicals or molecules on the surfaces of bacteria and viruses.

The cells in your body also have molecules on their surfaces, but the immune system usually recognizes them as “self-antigens”. In other words, the immune system knows that they are not “foreign” and should not be attacked. However, sometimes the body’s immune system starts to recognize self-antigens as foreign ones and begins to attack them. This is an “autoimmune” response and can result in the destruction of normal, healthy body tissue, or changes in the function or growth of an organ. Autoimmune disorders include type 1 diabetes, rheumatoid arthritis, and multiple sclerosis.

What Are the Symptoms of Primary Immune Thrombocytopenia?

The main symptom of primary immune thrombocytopenia is a low number of platelets in the blood. Blood is made up of a liquid called plasma and three main types of blood cells:

  1. Red blood cells carry oxygen around the body.
  2. White blood cells fight infection and there are several different types, including lymphocytes (the main type of white blood cell found in the lymph fluid that circulates around the body) called B cells and T cells.
  3. Platelets are the third type of blood cell and are involved in the process that enables blood to clot to promote healing and control blood loss.

In people with primary immune thrombocytopenia the immune system starts to mistakenly attack platelets in the blood, stopping them from working or breaking them down. This results in reduced levels of platelets in the blood, which can result in excessive bleeding because the blood is less able to clot. Although the symptoms of primary immune thrombocytopenia vary between patients, common symptoms include spontaneous bruising or bruising easily, bleeding from the gums, blood blisters on the insides of the cheeks, frequent heavy nose bleeds that are hard to stop, and fatigue.

How Do Immune Cells Contribute to Primary Immune Thrombocytopenia?

There is some evidence that abnormalities in the function and number of some types of immune cell are associated with primary immune thrombocytopenia. The immune system can be thought of as having two branches:

  • The first, the “innate immune system”, includes the inflammatory response and does not have the ability to “remember” antigens that it has encountered.
  • The second, the “adaptive immune system”, does have memory, which means that if the immune system has encountered an antigen once before it will be able to mount a stronger response if it encounters it again, a property that is exploited by vaccines.

B and T cells are the main mediators of the adaptive immune system. B cells have immunoglobulin molecules on their surfaces that act as receptors that recognize antigens and can be secreted as antibodies. T cells have specific antigen receptors called T-cell receptors on their surfaces that other types of lymphocyte do not have.

What Do T Cells Do?

There are three different types of T cells with different functions:

  1. Cytotoxic T cells can directly kill virus-infected cells and cancer cells (cytotoxic means toxic to cells or “cell killing”).
  2. Helper T cells help to activate other cells in the immune system, like B cells and cytotoxic T cells, and play a role in regulating the responses of the immune system.
  3. Regulatory T cells play important roles in the immune system’s ability to recognize self-antigens, and a type of these called T follicular regulatory (Tfr) cells can also suppress the functions of B cells and influence the breakdown of T helper cells.

What Did This Study Show?

The exact mechanisms by which changes in the immune system lead to the development of primary immune thrombocytopenia are currently unknown. If these changes can be better understood it may be possible to target parts of the immune system that become dysregulated during the development of primary immune thrombocytopenia as a way of treating the condition.

The authors of this study therefore compared blood samples from people with primary immune thrombocytopenia with blood samples donated by people without the condition, and looked for differences in gene expression and numbers of different types of T cells. (Gene expression is the process by which the information encoded by a gene is translated into a function, usually through the production of a protein, by being switched on or off or by the activity of the gene being increased or reduced.)

The results of the study showed that a number of genes, mainly involved in immune responses and the process of inflammation, were expressed differently in patients with primary immune thrombocytopenia compared with people without the condition. In particular, there were differences in the proportions of Tfr and T follicular helper (Tfh) cells, and the proportions of these cell types also changed when patients with primary immune thrombocytopenia responded to treatment. The levels of Tfh cells were increased in patients with primary immune thrombocytopenia but decreased after they responded to treatment, while the ratio of Tfh cells to Tfr cells increased after treatment responses.

In addition, two genes were much more highly expressed in people with primary immune thrombocytopenia than in people without the condition. BCL-6 encodes a protein that regulates the proliferation of Tfh cells, while IL-21 encodes a protein that is able to increase the differentiation of Tfh cells and also suppress the differentiation of Tfr cells.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

The Link between Parasitic Fluke Infections and Cancer

What Is the Main Idea?

Flukes are parasitic flatworms that can infect mammals, including humans. In the review article “Biliary Parasitic Diseases Associated with Hepatobiliary Carcinoma”, published in the journal Visceral Medicine, the authors discuss the links between infections with some types of fluke and the development of cancer in the liver or bile ducts.

What Else Can You Learn?

Three parasitic flukes that cause disease in humans are described. The roles of the liver and bile ducts in the digestive system are also discussed.

What Are Liver Flukes?

The name “fluke” describes a group of species of parasitic flatworms that are able to infect mammals, including humans. Parasites are organisms that live on or in another organism (known as the “host”). They depend on their hosts for their survival, getting their food either from their hosts or at the hosts’ expense, and they are specially adapted to live in this way.

Because parasites need their hosts to be able to survive, many species do not kill the host directly but often carry diseases that can be life-threatening to the host. Once they have infected a person, blood flukes tend to reside in the blood vessels. Liver flukes are small enough to travel around the body in the blood circulation. They often end up in the liver, gallbladder, and bile ducts where they can cause disease.

What Do the Liver and Bile Ducts Do?

The liver has a number of roles in the body, including cleaning the blood to remove harmful substances and metabolizing proteins, fats, and carbohydrates so that the body can use them. The liver also makes a fluid called bile that helps the body to break down fats from food, which can be stored in the gallbladder or can travel directly from the liver to the small intestine.

Most of the digestion of the food we consume takes place in the small intestine and it is here that nutrients and minerals from our food are absorbed into the blood. The bile ducts are part of the digestive system and are small tubes that connect the liver to the gallbladder and small intestine.

Where Are Flukes Found?

Different types of fluke have different life cycles (the different stages that organisms go through during their lives) and are found in different areas of the world. Some types of liver fluke are endemic (this means “native to” or regularly occurring and/or present) in areas of southern and southeastern Asia, but other types can be found in all continents except Antarctica. Although health authorities in areas where flukes are endemic have made major efforts to prevent and control their presence, with some successes, they are still a problem and many people become infected.

How Are Fluke Infections Linked to Cancer?

Although the symptoms caused by fluke infections are often mild, flukes can survive in the human body for several decades if the infection is not treated. This can lead to chronic (long-term) inflammation, the process by which your body responds to an injury or a perceived threat.

Liver fluke infection can also lead to an increase in the number of cells lining the ducts and the passageways connecting the liver, gallbladder, and small intestine (known as “epithelial hyperplasia”), and thickening or scarring of the ducts (known as “periductal fibrosis”). These symptoms can cause further complications over time including the formation of stones and the development of hepatobiliary cancers (the prefix “hepato” refers to the liver and “biliary” refers to the gallbladder and bile ducts).

Which Flukes Are Linked to Cancer?

There are a number of flukes that have been linked to the development of hepatobiliary cancer. Three of the most common are Schistosoma japonicum, Clonorchis sinensis, and Opisthorchis viverrini.

Schistosoma japonicum

S. japonicum is a blood fluke that is responsible for a disease called schistosomiasis, which is estimated to affect 200 million people worldwide. People become infected with this type of fluke through contact with fresh water in which the parasite is present, either through work and agriculture, or activities of daily living. During its life cycle, S. japonicum can cause blockages in small blood vessels in the liver and cause cirrhosis (scarring of the liver tissue that causes long-term damage and prevents the liver from working properly).

It is suspected that S. japonicum infection is directly linked to the development of a type of liver cancer called hepatocellular carcinoma, because rates of this type of cancer are much higher in areas where S. japonicum is endemic. The International Agency for Research on Cancer (IARC) has classified this fluke as a group 2B carcinogen (a substance or organism that can cause cancer), which means that the IARC has brought together a panel of experts on the subject that has evaluated all of the available published evidence and agreed that it is “possibly” able to cause cancer in humans.

Clonorchis sinensis

C. sinensis is endemic in China and Korea, and is estimated to affect 35 million people worldwide. People can become infected with this liver fluke by eating raw or undercooked infected fish, crab, or crayfish. Similarly to S. japonicum, C. sinensis has been linked to the development of cholangiocarcinoma, the name given to a group of cancers that form in the bile ducts, because the incidence of cholangiocarcinoma is much higher in areas where C. sinensis is endemic.

In one study, C. sinensis infection was shown to increase the risk of developing cholangiocarcinoma by 14 times compared with individuals with no history of C. sinensis infection. The IARC has classified this liver fluke as a group 2A carcinogen, which means that it is “probably” able to cause cancer in humans.

Opisthorchis viverrini

O. viverrini is known by some as the “Southeast Asian liver fluke”, is endemic in northern Thailand, and is estimated to affect 10 million people worldwide. Similarly to C. sinensis, people can become infected with this liver fluke by eating raw or undercooked infected fish, crab, or crayfish. The evidence that O. viverrini infection is linked to the development of cholangiocarcinoma is so strong that the IARC has classified it as a group 1 carcinogen, which means that the IARC views O. viverrini as “definitely” being carcinogenic.

Take-Home Message

It is clear that fluke infections can have serious long-term implications that go beyond the initial effects of the parasite on the host. It is therefore essential that fluke infections are recognized and treated as soon as possible after infection to reduce the risk of hepatobiliary cancer developing in the future.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Understanding the Skin–Brain Axis

What Is the Main Idea?

Psychoneuroimmunology is a field of study that brings together researchers that traditionally work in separate fields. In the open-access editorial article “2nd European Psychoneuroimmunology Network Autumn School: The Skin–Brain Axis and the Breaking of Barriers”, published in the journal Neuroimmunomodulation, the authors summarize how studying the skin is enabling us to better understand the relationships between the brain, hormones, and the immune system that contribute to good health when we are well and that fail when we become ill.

What Else Can You Learn?

The different research fields that are brought together under the term psychoneuroimmunology are described. The skin–brain axis and how studying the skin is enabling researchers to gain new knowledge about health and disease are also discussed.

What Is Psychoneuroimmunology?

Psychoneuroimmunology is a multidisciplinary area of research that brings together people working in different fields so that they can pool knowledge and explore research questions across traditional subject boundaries. It incorporates fields that have traditionally been separate, such as:

  • neuroscience (which considers the function and disorders of the nervous system, including the brain),
  • physiology (which considers how living organisms or parts of the body function when they are working normally),
  • immunology (which considers the function and disorders of the immune system),
  • genetics (which studies genes, how traits are inherited, and genetic variation), and
  • psychosocial disciplines (which consider how psychological factors and the surrounding social environment influence how people function and behave).

How Does Psychoneuroimmunology Link Different Fields of Research?

Psychoneuroimmunology brings people like clinicians and healthcare practitioners, epidemiologists (researchers who study the causes, effects, and patterns of disease in groups of people), basic scientists (researchers who seek to improve our understanding of the world and how things work), and statisticians (researchers who compile and use statistical data to solve problems) together, to study how processes that influence our minds and thoughts interact with the body’s immune and nervous systems.

This can involve looking at how the nervous and immune systems function and their effects on people’s behavior when they are well or unwell, such as when disorders such as autoimmune diseases (which develop when the body’s immune system mistakenly starts to recognize the body’s own tissue as foreign and attacks it) and immune deficiencies (which occur when the immune system becomes weakened, potentially enabling problems like infections to occur more easily) develop. There is now good evidence that psychosocial stresses and interventions can affect our immune systems in ways that lead to changes in our health.

Although further research is needed, it seems that stressful events can trigger physical and cognitive (thought-based) responses that induce changes in the body that weaken or damage the immune system, by altering the way the endocrine system (a network of organs and glands that uses hormones to control processes in the body) and the sympathetic nervous system (the part of the nervous system that is responsible for the “fight-or-flight” response to things that we perceive to be threatening or harmful) work.

For example, environmental and psychosocial factors can influence the development of cancer and autoimmune diseases, and affect the speed at which we heal. It has also been shown that regular physical activity has an immunoregulatory effect, as well as improving symptoms of depression and low mood, and having positive effects on heart and muscle fitness.

What Is the Skin–Brain Axis?

The skin–brain axis is the term given to the connections between the brain and the skin, and the ways that they communicate with each other. The skin is the body’s largest organ and has a number of functions. It covers the body’s entire surface, acts as a barrier to things like UV light, chemicals, and microbes (such as bacteria) that have the potential to make us ill, and helps to regulate our body temperature.

The skin also plays a key role in sensing changes in our environment, and environmental changes that it senses are translated into chemical and biological messengers that travel via hormones, or the immune or nervous system, to reach the brain and other organs. For example, if you put your hand on something that is very hot, the skin on your hand sends signals via nerves to the brain that are translated as pain so that you move your hand away quickly, and the immune system is activated to repair any damage to your hand. Communication can also run from the brain to the skin.

Many skin conditions are linked with chronic stress, including eczema, psoriasis, and acne. Stress has also been shown to reduce the skin’s ability to act as a barrier, increasing the chance of infections. Sleep has also been shown to be important for the immune system to work effectively, and meditation has been shown to have positive effects on mental health and immune system function in people with long COVID.

How Does Studying the Skin Help Research in Psychoneuroimmunology?

Studying the skin has several advantages for psychoneuroimmunology researchers. It is relatively easy for participants in research to give samples of skin and for them to be cultured in a laboratory. Skin swabs can be used to quickly and cheaply sample the populations of microbes (such as bacteria and viruses) that live on the skin’s surface when people have inflammatory skin diseases, and to investigate how they differ under non-stressed and stressed conditions.

Skin-related experimental models (these use systems, such as the culturing of cells in a laboratory, to investigate processes that are thought to be involved in diseases and to evaluate new drugs that are being developed) can also be used to investigate the influences of lifestyle, perception, and our behavior on how well our organs function. For example, it has been shown that in older adults, the levels of stress that a person is experiencing around the time of being vaccinated against flu can predict how long-lasting their antibody response against that flu strain will be.

It is also hoped that the study of skin diseases may improve our understanding of how inflammation in the different areas of the body affects the brain and, conversely, how inflammation and dysregulation in the brain and nervous system affects other areas of the body. Such research has the potential to increase our understanding of how trauma and disruptions to the immune system affect our mental health, and whether there are links between them and the development of neurodegenerative diseases such as dementia in later life.

How Virtual Scale Endoscopes Can Improve Colorectal Polyp Sizing during Screening

What Is the Main Idea?

A colorectal polyp is a small clump of cells that forms on the lining of the colon or rectum, and its size can indicate whether or not it is likely to become cancerous. In the research article “Usefulness and Educational Benefit of a Virtual Scale Endoscope in Measuring Colorectal Polyp Size”, published in the journal Digestion, the authors investigate the accuracy of a new type of endoscope, called a virtual scale endoscope, and explore its potential application as an educational tool.

What Else Can You Learn?

Colorectal cancer and its symptoms are described. The roles of the colon and rectum in the digestive system, and different types of endoscopy procedure used to examine them during screening, are also discussed.

What Are Colorectal Polyps?

The colon and rectum are both part of the digestive system, and together with the anus are known as the large intestine. Once food enters the body it is broken down in the stomach before being passed on to the small intestine. The breakdown of food continues in the small intestine, and it is here that most of the nutrients in our food are absorbed into the body. The leftover material, which is mostly liquid, then moves into the colon where water and some further nutrients are absorbed.

The remaining waste (known as “stool”, “feces”, or “poo”) is stored in the rectum before it is passed out of the body via the anus. Colorectal polyps are small clumps of cells that form on the lining of the colon and/or rectum. Most colorectal polyps are harmless (benign) and do not cause any symptoms, but over time some polyps can begin to grow out of control leading to colorectal cancer.

What Is Colorectal Cancer?

Colorectal cancer is one of the most common types of cancer worldwide, with incidence varying widely between different regions and countries. Although colorectal cancer is usually diagnosed in people older than 50 years of age, it can occur in people who are younger than 50 years, particularly if they have a family history of colorectal cancer or have certain conditions that can be inherited. As well as genetic conditions, it is known that lifestyle choices such as eating a diet that is high in processed and red meat are linked to increased risk of developing it.

Common signs and symptoms of colorectal cancer include a persistent change in bowel habits (such as diarrhea, constipation, and/or a change in stool consistency that does not clear up after a short period of time), rectal bleeding or blood in the stool, a feeling that the bowel has not been completely emptied of stool when going to the toilet, losing weight without trying, a persistent feeling of weakness or tiredness, and discomfort in the abdomen area (the area between the chest and the pelvis) that does not go away.

Why Is Colorectal Polyp Size Important?

Research has shown that the larger the size of a colorectal polyp, the greater the chance that it will become cancerous. Changes in genes (mutations) that take place in the cells of a colorectal polyp over time can mean that they become able to grow more quickly and live longer, resulting in the polyp growing larger. If a polyp that is beginning to grow too large and too quickly is identified, and removed from the colon or rectum before it becomes cancerous and begins to spread, the development of colorectal cancer is prevented. As a result, the sizes of polyps influence how often follow-up screening will be conducted and whether or not they will need to be removed.

What Did This Study Investigate?

Because the size of a colorectal polyp can indicate whether or not it is likely to become cancerous, it is important that polyp size is estimated correctly. Polyp size can be assessed by endoscopy, a medical procedure that uses a long, thin, flexible tube with a small camera inside (called an endoscope) to look inside the body. The two types of endoscopy that are most commonly used to assess colorectal polyps are colonoscopy and flexible sigmoidoscopy.

Colonoscopy involves using an endoscope to assess the entire length of the colon, while flexible sigmoidoscopy only looks at the lower third. Most judgments about the sizes of colorectal polyps are made by endoscopists estimating polyp sizes by sight, but this means that there is a risk that their estimates may be wrong. Correctly estimating the sizes of polyps is difficult because endoscopes have “fisheye lenses” (also called ultra-wide lenses), which tend to curve straight lines and distort the images that endoscopists see. This means that objects such as polyps at the center of the display can appear larger than they really are, while things at the outer edge of the display can appear smaller. Although it has been suggested that forceps be used to measure polyps during endoscopy procedures, their use can make the procedures more complicated and time-consuming.

To get around these problems, a new type of endoscope called a virtual scale endoscope (VSE) has been developed. A VSE is able to project a red laser dot onto the surface of the colon or rectum that changes position according to the distance between the polyp and the end of the endoscope. The software that processes the VSE’s images then detects the position of the red laser dot and uses it to display a virtual scale (this can be linear, a bit like a ruler, or circular) on the image that is produced to help the endoscopist estimate polyp size in real time during the procedure. The authors of this study looked at whether polyp measurements made using a VSE were accurate, and also investigated whether the images produced could be used as a teaching aid to help endoscopists better estimate polyp sizes.

How Was the Study Conducted?

The authors carried out two studies. The first study compared the sizes of polyps as measured using a VSE before they were removed from the colon with their actual size as measured with a ruler after they were removed. The second study involved 14 endoscopists with differing levels of experience estimating the sizes of 42 polyps in a pre-test before receiving a lecture about how to measure colorectal polyps using VSE images.

The endoscopists were categorized as beginner, intermediate, or experienced based on their number of years of experience. During the lecture, the endoscopists were shown the correct sizes of the polyps that they had been asked to estimate in the pre-test, together with VSE images with a virtual scale, and each endoscopist received an explanation of how sizing errors were being introduced. The endoscopists then had a 1-month training period in which they could practice what they had learned before doing a post-test using the original images (shown to them in a different order from the pre-test to try to reduce bias).

What Did the Study Show?

The results of the first study indicated that there was agreement between the polyp size estimates produced using a VSE and the actual sizes of the polyps. In the second study, the accuracy of the beginner and intermediate endoscopists in measuring polyp sizes was significantly better in the test conducted after the training period than in the pre-test, with accuracy improving by approximately 50% in some cases.

These results indicate that VSEs can accurately measure colorectal polyp sizes before they are removed from the colon and that the images that they produce are useful tools with which to train endoscopists in the first few years of their careers, suggesting that VSE use has the potential to increase the accuracy of polyp measurement during endoscopy and, as a consequence, improve the detection of polyps that are beginning to grow out of control.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Early Signs of Atherosclerosis in Children with Atopic Dermatitis

What Is the Main Idea?

Atherosclerosis develops when our arteries begin to become narrowed or hardened. In the research article “Assessment of Subclinical Atherosclerosis in Children with Atopic Dermatitis”, published in the journal International Archives of Allergy and Immunology, the authors investigate whether early signs of atherosclerosis beginning to develop can be detected in children with atopic dermatitis and attempt to identify risk factors associated with both conditions.

What Else Can You Learn?

Atherosclerosis and its symptoms are described. Atopic dermatitis and the role of inflammation in the development of cardiovascular disease are also discussed.

What Is Atherosclerosis?

Atherosclerosis is a progressive disease that develops slowly when the arteries, a type of blood vessel that carries oxygen-rich blood from our heart to the organs and tissues around our body, become narrowed or hardened. It is caused by the buildup of fatty deposits called plaque, which consist of fats, cholesterol, and other substances. Over time, as the amount of plaque in the arteries increases, the narrowing makes it more difficult for the blood to flow freely and cardiovascular disease (a general term that is used to describe diseases that affect the heart or blood vessels) can develop.

Cardiovascular diseases that can be caused by atherosclerosis include:

  • peripheral arterial disease (where a blockage develops in the arteries that deliver blood to your limbs, usually the legs),
  • aortic disease (where the aorta, the body’s main artery, is unable to work properly),
  • stroke (where the blood supply to the brain becomes disrupted), and
  • coronary artery disease (where the coronary arteries, which are the main sources of blood supply to the heart, become narrowed or blocked), which can lead to angina or heart attack.

Although many people with atherosclerosis do not have any symptoms, some people experience pain in their chest, or in their arms or legs when exercising, a feeling of weakness and/or confusion, and may feel short of breath or tired most of the time.

What Causes Atherosclerosis?

Atherosclerosis can begin to develop in early childhood. High levels of fats and cholesterol in the blood are known to contribute because they make up some of the components of plaque. Damage or injury to the inner layers of arteries is also thought to be involved because the immune system responds to and seeks to repair the damage through a process called inflammation.

When inflammation is initiated, it causes blood cells and other substances to gather at the site of injury, and this can contribute to plaque starting to build up inside the arteries. Interestingly, there is evidence that the inflammation caused by inflammatory diseases such as rheumatoid arthritis, psoriasis, and inflammatory bowel disease can contribute to the development of atherosclerosis, and atopic dermatitis (more commonly known as eczema) may also be involved.

What Is Atopic Dermatitis?

Atopic dermatitis is an inflammatory skin condition that is usually long-term and recurrent, although in children it can improve or clear up completely as they get older. It causes the skin to be dry, cracked, itchy, and sore, and can range from occurring in small, localized patches to all over the body. Although the exact causes of atopic dermatitis are unknown, it is considered to be a systemic disease (a condition that affects the whole body rather than a single body part or organ) because the chronic inflammation that causes it often occurs in other organ systems as well as in the skin. It also often occurs in people who have allergies or asthma.

What Did This Study Investigate?

Because atopic dermatitis is an inflammatory disease and chronic inflammation has been linked to the development of atherosclerosis, it is possible that there is a relationship between people having atopic dermatitis and developing cardiovascular disease later in life. Some studies have found that the systemic inflammation caused by atopic dermatitis may double the risk of cardiovascular disease. Some research has suggested that the two may have an indirect relationship due to atopic dermatitis causing risk factors linked to increased risk of cardiovascular disease, such as sleep problems caused by itching, inactivity, and the use of corticosteroid treatments.

However, other research has suggested that there is a direct relationship caused by the excessive inflammation in the body that is independent of other factors. Recent research has shown that the levels of molecules in the blood that are prognostic (in other words, they can be used to indicate how a condition is likely to progress) for atherosclerosis and damage to the arteries are increased in skin and blood serum samples from patients with atopic dermatitis.

Most studies looking at whether there is a link between atherosclerosis and atopic dermatitis to date have involved adult patients. Considering that atherosclerosis can start to develop in early childhood, the authors of this study investigated whether early signs of atherosclerosis beginning to develop can be detected in children with atopic dermatitis and attempted to identify risk factors associated with both conditions. They compared a group of children who had atopic dermatitis with a similar number of children who did not have the disease who were alike in terms of factors like their age, weight, and height.

What Did the Study Show?

The results of the study showed that early signs of atherosclerosis were detectable in children with atopic dermatitis, with the length of time that they had had atopic dermatitis, the severity of their disease, and their age all associated with the likelihood of signs being present.

In particular, increases in a factor called carotid intima–media thickness were found to be associated with children having atopic dermatitis. Carotid intima–media thickness is calculated using a special type of ultrasound by measuring the thickness of the two most inner layers of the carotid arteries (the major arteries that supply blood to your brain, with one on each side of your neck), the intima and the media, and is used to assess whether atherosclerosis may be present. The greater the carotid intima–media thickness, the greater the likelihood that atherosclerosis is developing.

The authors of the study suggest that it may be important that children with atopic dermatitis be monitored for signs of atherosclerosis development and other risk factors that are known to be associated with cardiovascular disease. These include obesity, high levels of fats in the blood, and high blood pressure. Studies following the health of children with atopic dermatitis over longer periods of time are now needed to shed more light on the relationship between it and the development of atherosclerosis and cardiovascular disease.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Models of Childhood Glioma Contributing to Treatment Development

What Is the Main Idea?

Glioma is a type of tumor that develops in the nervous system, with differences between gliomas that develop in children and adults. In the open-access review article “Pediatric Glioma Models Provide Insights into Tumor Development and Future Therapeutic Strategies”, published in the journal Developmental Neuroscience, the authors summarize different experimental models that are being used to study glioma in children and how they may contribute to improvements in its treatment.

What Else Can You Learn?

Glioma and its symptoms are described. Differences in gliomas arising in children and adults, driver mutations, and the use of experimental models in cancer research are also discussed.

What Is Glioma?

Glioma is a type of tumor that is found in the nervous system. It usually develops in the brain but can also develop in the spinal cord (a tube of nervous tissue that runs from the brain to the lower back), although this is rare. Glioma develops when glial cells begin to develop and grow out of control. There are different types of glial cell and they play essential roles in the nervous system that support the function of neurons (cells that transmit messages from one part of the nervous system to another via electrical impulses), with glial cells sometimes being described as the “glue” that holds the nervous system together.

As well as surrounding neurons and holding them in place, glial cells also create a myelin sheath around neurons that insulates their electrical impulses so that they can transmit messages effectively, a bit like the coating of an electrical wire. They supply oxygen and nutrients to neurons to keep them nourished, and regulate inflammation (an immune system process through which the body responds to an injury or a perceived threat, like a bacterial infection or damaged cells). They also form the blood–brain barrier, which is a barrier between the blood vessels in the brain and the other components that make up brain tissue that allows nutrients to reach the brain while preventing other things from entering it that could cause infections or damage.

What Are the Symptoms of Glioma?

There are different types of glioma, depending on the type of glial cell from which the glioma develops and the speed at which the tumor is growing. As a result, the symptoms and signs of glioma can vary between people, and are also affected by where the tumor is in the nervous system and its size. Common symptoms are:

  • headache, which may hurt more in the morning;
  • changes in mental function (such as problems with understanding information and memory) and personality;
  • feeling sick and vomiting;
  • problems with vision (such as blurred or double vision);
  • seizures, especially if the person has not had them before.

Gliomas can develop at any age, and although glioma is most commonly diagnosed in adults there are some types of glioma that are more common in children and young adults.

What Did This Article Look at?

Review articles survey the information that has been published on a topic to date. Rather than presenting new findings from their own research, the authors aim to clarify current thinking on a topic and the evidence that supports it, and sometimes set out suggestions for changes to what is considered to be best practice.

In this article, the authors review the different experimental models that are being used to study glioma in children and summarize how these models may contribute to improvements in its treatment. Experimental models use systems, such as the culturing of cells in a laboratory, to investigate processes that are thought to be involved in diseases and to evaluate new drugs that are being developed before they are assessed by going through the clinical trials process in humans.

There is a need for new treatments for glioma in children. Treatments that are currently the standard of care for childhood glioma have been chosen based on their effects on gliomas in adults, but adult and child gliomas that are high-grade (this means that they are “malignant”, growing in an uncontrolled way and able to spread to nearby tissues and other parts of the body) progress differently and have different underlying “driver mutations” (these are changes in genes in the tumor cells that give them a growth advantage and, as a result, promote the development of cancer).

Until recently, a lack of experimental models that could accurately recreate the environment in which gliomas form meant that efforts to study child gliomas were limited. However, the discovery of child glioma-specific driver mutations has enabled researchers to investigate the origins of these tumors, laying the foundation for the development of more appropriate and effective treatments.

What Experimental Models Are Being Used?

Over the last 10 years there have been major advances in the development of cell lines (a population of cells that can be grown and maintained in a laboratory) and models derived from tissue samples obtained from glioma patients during surgical procedures. Cell lines have the advantages of being relatively cost-effective and easily shared with other researchers, as well as being suitable for use in high-throughput screening (this is a process by which hundreds of samples of cells and hundreds of different potential drugs can be tested quickly, often using robotics).

Advances in stem cell engineering have also opened up new opportunities to investigate the development of glioma. Stem cells are unique in that they can self-renew, are either undifferentiated or only partially differentiated, and are the source of specialized cell types, like red blood cells and types of brain cell. Stem cells are useful in glioma research because they can be used to model tumor types from which it is difficult to obtain tissue samples or establish cell lines, which is the case for some types of glioma, and they can be controlled so that specific cell types and driver mutations can be investigated.

Organoids are three-dimensional tissue cultures that are grown from stem cells. Although cell cultures can be very useful in cancer research, they are usually grown as flat sheets of cells in tissue culture flasks and do not accurately represent all of the complicated interactions that take place between tumor cells and their environment in the body. This “tumor microenvironment” includes immune cells, signaling molecules, the matrix that surrounds cells in tissues and supports them, and the surrounding blood vessels. Tumors and their surrounding microenvironment constantly interact and it is known that they can influence each other.

Immune cells in the microenvironment can affect the growth and development of tumor cells, while a tumor can influence its microenvironment by releasing signaling molecules that promote the development of new blood vessels, which increase the supply of nutrients to the tumor and aids its ability to start to spread around the body, and inhibiting and evading the immune system’s ability to recognize and destroy tumor cells. Using organoids enables elements of the tumor microenvironment to be incorporated into models of glioma so that the experiments more accurately mimic the situation in the body.

These model systems are enabling us to better understand how glioma develops. As our understanding increases, more features of glioma cells will be identified that can be targeted specifically by new treatments, increasing the range of therapies that can be used to treat glioma in children and improving the outcomes of patients.

Understanding Glutamate and Its Effects in the Brain

What Is the Main Idea?

Glutamate is the body’s main excitatory neurotransmitter, stimulating neurons to send signals around the body. In the free-access review article “Sex Hormones, Neurosteroids, and Glutamatergic Neurotransmission: A Review of the Literature”, published in the journal Neuroendocrinology, the authors summarize the current research evidence regarding whether or not there is a link between glutamate’s role as a neurotransmitter and the levels of sex hormones and neurosteroids in the body.

What Else Can You Learn?

The role of the amino acid glutamate as a neurotransmitter in the brain is discussed. Sex hormones and neurosteroids, amino acids, and the general purpose of review articles are also described.

What Is Glutamate?

Glutamate occurs naturally: it is found in the food we eat and is also produced by the body. It is a type of molecule called an “amino acid”. Amino acids are best known for being the component molecules that make up proteins, with the amino acids used and the order in which they are joined together in a protein influencing its functions, shape, and ability to interact with other molecules. If the order of the amino acids in a particular protein changes (for example if the gene that codes for it becomes mutated), the protein produced may no longer be able to function properly or even at all.

An example of this is when a single amino acid is changed in a protein called beta-globin because of a mutation in its coding gene. Beta-globin is a component of hemoglobin, which is found in red blood cells and is involved in carrying oxygen around the body. The single amino acid change creates a “sticky” patch on hemoglobin molecules that causes them to clump together and distort the red blood cells into a sickle shape, giving rise to a condition called sickle cell disease.

What Does Glutamate Do in the Body?

Glutamate plays several important roles in the body. It is a key component of metabolism, the process by which the food and drink that we consume is changed into energy, and can be broken down as an energy source in the brain when glucose levels are low. Glutamate is involved in the removal of excess nitrogen from our bodies via the production of urea (which is passed out of our bodies in urine). It is believed to be involved in the regulation of the sleep–wake cycle because levels are high during the rapid-eye-movement phase of sleep and when you are awake. Another major role of glutamate is as an “excitatory neurotransmitter”.

What Are Neurotransmitters?

Neurotransmitters carry chemical signals between neurons, a type of cell that transmits messages from one part of the brain and nervous system to another, and trigger an action or change in the target cell. This can be either “inhibitory” (it prevents or blocks the message from being transmitted any further), “modulatory” (it influences the effects of other neurotransmitters), or “excitatory” (it “excites” the target neuron, causing it to send the message on to the next cell).

Glutamate is the most abundant excitatory neurotransmitter in the human nervous system. It is involved in processes that take place in the brain such as memory and learning (it is estimated to be involved in more than 90% of the brain’s excitatory functions), and high levels of glutamate are also associated with increased pain levels. Glutamate is also converted into an important inhibitory neurotransmitter called gamma-aminobutyric acid (GABA) that is known as the “calming” neurotransmitter because it is involved in the regulation of anxiety, relaxation, and sleep. The process by which glutamate acts as a neurotransmitter is called “glutamatergic neurotransmission”.

What Are Sex Hormones and Neurosteroids?

Sex hormones are so called because they are critical in regulating the biological differences between males and females, and are particularly involved in reproduction and puberty (hormones are chemical messenger molecules that coordinate different processes and functions in the body). In humans, the key sex hormones are estrogen, progesterone, and testosterone. Neurosteroids are steroids that are produced in the brain or that have an effect on its functions (they can also act as signaling molecules). They are involved in a wide range of roles such as memory, learning, and behavior, as well as responses to stress and depression.

What Did This Article Look at?

Review articles are a kind of survey of all the information that has been published on a topic to date. Rather than presenting new findings, they aim to clarify current thinking on a topic and the evidence that supports it, and sometimes set out suggestions for changes to what is considered to be best practice. Increasing numbers of research articles are being published that report a link between glutamate’s role as a neurotransmitter and the levels of sex hormones and neurosteroids in the body.

There is also evidence that changes to the regulation or levels of sex hormones and neurosteroids may be linked to the development of a range of neurological conditions. For example, dysregulation of glutamate’s role as a neurotransmitter has been linked to a number of disorders including epilepsy and post-traumatic stress disorder. It has also been linked to premenstrual dysphoric disorder, which is a severe form of premenstrual syndrome. It is therefore important that we gain a better understanding of how sex hormones and neurosteroids influence the normal functioning of the brain and identify any roles in the development of conditions that affect its function.

What Were the Review’s Findings?

The authors of the review concluded that current evidence indicates that sex hormones can directly affect glutamate’s role as a neurotransmitter. In particular, there was evidence that estrogens can be protective against excitotoxicity, which occurs when excessive or prolonged activation of neurotransmission, particularly if mediated by glutamate, has a negative effect on neurons, leading to their loss of function or death. This is particularly relevant to stroke, where loss of blood flow (known as “ischemia”) in a region of the brain can not only damage neurons directly, but can also affect glutamate transport, resulting in glutamate levels increasing to levels at which neurons die.

Other conditions known to be linked to abnormally high levels of glutamate in the brain include Alzheimer’s disease, multiple sclerosis, Parkinson’s disease, and chronic fatigue syndrome. Equally, levels of glutamate in the brain that are abnormally low are linked to low energy, trouble concentrating, and insomnia. Estrogen levels in the brain have also been shown to be linked to memory function in several non-human species. Progesterone may also have a neuroprotective effect, although further research is needed to investigate the link.

There was some conflicting evidence regarding whether testosterone has a protective or negative effect on neurons, and a number of neurosteroids that are produced from the conversion of testosterone and progesterone may also play an independent role in altering the levels of glutamate in the brain. As we learn more about the relationships of sex hormones and neurosteroids with glutamate-mediated neurotransmission, it is hoped that we will gain new insights regarding how to prevent the development of disorders and treat them more effectively.

The Risk of Drug Interactions with Complementary and Alternative Medicines

What Is the Main Idea?

The use of biologically-based complementary and alternative medicines (CAMs) by patients with long-term health conditions is increasing. In the research article “Biologically-Based Complementary and Alternative Medicine Use in Breast Cancer Patients and Possible Drug-Drug Interactions”, published in the journal Breast Care, the authors describe how the use of biologically based CAMs by patients with breast cancer has the potential to cause drug interactions, both with anticancer medicines as part of a chemotherapy treatment and with each other.

What Else Can You Learn?

In this blog post, standard medical treatment for breast cancer and the possibility of drug interactions when medicines are taken together are discussed. Different types of complementary and alternative medicine are also described.

What Is Breast Cancer?

Breast cancer can start in one or both breasts. It develops when cells in the breast become abnormal, start to grow out of control, and begin to invade the surrounding tissue. Breast cancer cells can also spread to other areas of the body by being carried there by the blood and lymphatic systems. The lymph fluid that is transported around the body by the lymphatic system is an important part of the immune system. There are different types of breast cancer, with the exact type determined by which type of cells in the breast has become cancerous. Breast cancers are also classified on the basis of whether or not the cancer cells produce certain proteins or have changes (mutations) in specific genes. Genes are short sections of DNA that carry the genetic information for the growth, development, and function of your body.

How Is Breast Cancer Treated?

Treatments that have been assessed and accepted as effective treatments for particular diseases by the medical community are known as “standard medical treatments”. The standard medical treatments for breast cancer include surgery, chemotherapy, radiotherapy, hormone therapy, and targeted therapy.

  • Types of surgery that are used to treat breast cancer include breast-conserving surgery (where a cancerous lump is removed) and mastectomy (where a whole breast is removed).
  • Chemotherapy uses medicines that are “cytotoxic” (which means that they are toxic to cells, damaging them or causing them to die) to kill cancer cells. However, because cells in the body that are not cancerous can also be affected by chemotherapy medicines, many people who receive this type of treatment experience side effects. As this term is used to describe any unintended effects of a medicine, it can refer to beneficial and/or unfavorable effects.
  • Radiotherapy aims to kill cancer cells by using controlled doses of radiation.
  • Hormone therapy is used to lower the levels of the hormones estrogen and progesterone, which naturally circulate in the body, because some breast cancers develop the ability to be stimulated to grow by them.
  • Targeted therapy specifically targets molecules that cancer cells need to survive and spread.

What Is Complementary and Alternative Medicine?

The term “complementary and alternative medicine” (CAM) is an umbrella term that describes medical practices and products that are not part of standard medical care. Complementary medicine is used alongside standard medical treatment, whereas alternative medicine is used instead of standard medical treatment. A wide range of different types of products and practices are included in CAM that can be broadly divided into five groups.

  • Whole medical systems, such as ayurveda and naturopathy
  • Mind–body therapy, including meditation, yoga, and hypnotherapy
  • Manipulative and body-based practices, such as reflexology and massage
  • Energy healing, such as reiki
  • Biologically-based approaches, such as vitamins and dietary supplements, plants and plant extracts, and special foods or diets

The effectiveness and safety of most types of CAM approaches are less well understood than for standard medical treatment and more research is needed. However, while some CAM therapies have been shown to be generally safe and effective (such as acupuncture and yoga), some may be harmful and others may not work. Some may also cause drug interactions.

What Is a Drug Interaction?

A drug interaction happens when a medicine that is being taken by a person reacts with something else. Drug interactions can happen when one medicine reacts with another medicine or medicines, when a medicine reacts with something that the person is consuming (such as a herbal supplement or a particular food), or when another condition that the person has causes a medicine to produce side effects. When drug interactions occur, the results can range from mild side effects to a drug working less well or not at all. This means that a drug interaction has the potential to have a serious effect on the patient.

What Did the Study Investigate?

Advances in standard medical treatment for breast cancer have led to significant increases in 5- and 10-year survival rates in all countries in the European Union and in the UK in recent years. At the same time, health information has become more widely available and a large proportion of patients with long-term health conditions look for ways to improve their health and quality of life that fall outside of standard medical treatment.

Research has shown that the use of biologically-based CAMs is particularly popular among women with cancer, primarily because it is hoped that biologically-based CAMs can lessen the side effects of chemotherapy and strengthen the body against the effects of anticancer treatments. However, many of the biologically-based CAMs that people use carry the risk of drug interactions, and patients may begin taking them without consulting or notifying their medical team, making it difficult for any effects caused by drug interactions that do occur to be identified.

The authors of this study followed 47 patients with breast cancer as they began chemotherapy treatment, and asked them to complete questionnaires on their first day of treatment and again 10–12 weeks later. During this time period, 91% of the participants in the study reported that they used a biologically-based CAM, with the most popular types of biologically-based CAMs including the taking of vitamins, minerals, trace elements, and plants or plant extracts.

Drug interactions that had the potential to be clinically relevant (i.e., that could affect the effectiveness of a chemotherapy medicine or increase its toxicity in the body) were identified for 30 out of the 43 patients who reported using biologically-based CAMs. This was particularly true for patients who were using turmeric and ginger supplements together, which shows that the taking of more than one biologically-based CAM at once can cause drug interactions with each other, not just with anticancer medicines.

While the consumption of turmeric and ginger in food has generally been reported to have health benefits, they can both have a blood-thinning effect when high levels are consumed. This puts a person at risk of dangerous bleeding if they are also taking an anticoagulant (a type of medicine that prevents blood clots from forming). There are also some instances where drug interactions only occur if two substances are taken at the same time. In such cases, a patient’s medical team can help put together a medication plan that avoids drug interactions by ensuring that the two medicines are taken at safe time intervals.

Take-Home Message

Although some biologically-based CAMs may have beneficial effects on the health of patients undergoing treatment for breast and other cancers, further studies are needed to identify potential interactions that can occur with chemotherapy drugs and with other biologically-based CAMs. If you are undergoing treatment for breast cancer, let your medical team know if you start to use a biologically-based CAM. This will enable them to monitor you for any potential drug interactions and will also add to the pool of knowledge regarding the best CAM options for patients undergoing anticancer treatment. There may also be known potential drug interactions that should be taken into consideration, and your medical team will be able to provide advice to help you support your standard medical treatment safely.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

The Benefits of Cognitive Activity on Brain Health

What Is the Main Idea?

Some changes in cognitive function are considered to be a normal part of aging, but others can indicate the presence of disease, such as dementia. In the research article “Cognitive Activity Is Associated with Cognitive Function over Time in a Diverse Group of Older Adults, Independent of Baseline Biomarkers”, published in the journal Neuroepidemiology, the authors investigate whether there is a relationship between a person’s level of cognitive activity, biomarkers in their blood that can indicate Alzheimer’s disease or dementia, and changes in their cognitive function in older age.

What Else Can You Learn?

In this blog post, changes in cognitive function as we age are described. Cognitive reserve and different forms of dementia are also discussed.

How Does Cognitive Function Change as We Age?

The term “cognitive function” describes a combination of processes that take place in the brain that enable us to learn, manipulate information, remember, and make judgements based on experience, thinking, and information from the senses. These processes affect every aspect of life and our overall health, including how we form impressions about things, fill in gaps in knowledge, and interact with the world.

Some changes in cognitive function that are considered to be a normal part of the aging process include difficulties with multitasking and sustaining attention, and an overall slowing of the speed at which we think. The ability to “hold information in mind”, which means the ability to think about something without steady input about it from the outside world, also tends to decrease.

In contrast, skills like verbal reasoning and vocabulary tend to increase or stay the same as we get older. Changes in cognitive function that are considered a normal part of aging are usually subtle over time; however, some people experience major changes in cognitive function that may indicate the development of a neurodegenerative disease caused by abnormal changes in the brain, such as dementia. The term “neurodegenerative” means the degeneration or death of neurons, a type of cell that transmits messages from one part of the brain and nervous system to another.

What Is Dementia?

Dementia mainly occurs in people aged over 65 years and covers a range of conditions with different causes. For example, vascular dementia develops when blood flow to one or more areas in the brain is blocked or reduced, preventing cells from getting the oxygen and nutrients that they need to function properly.

In contrast, Alzheimer’s disease is believed to be caused by the abnormal functioning of two proteins called beta-amyloid and tau. In people with Alzheimer’s disease, beta-amyloid forms clumps called “plaques” on neurons that make it hard for them to stay healthy and communicate with each other, while abnormal forms of tau cling to other tau proteins inside neurons and form “tau tangles”. People with dementia often experience declines in cognitive function that affect their memory and other thinking skills like language, problem-solving, attention, and reasoning. Their behaviour, feelings, and relationships can also be affected, with significant effects on their daily lives.

What Did the Study Investigate?

It is well known that the extent to which a person engages in cognitive activity (mental tasks that require focus, reading, learning, creativity, memory, and/or reasoning) can affect their cognitive function as they age. There is strong evidence that people who are more cognitively active maintain higher levels of cognitive function over time than people who are less cognitively active, regardless of whether they develop a form of dementia. In other words, some brains keep working more efficiently than others despite experiencing similar amounts of cognitive decline and/or damage. However, it remains unclear whether this is because cognitive activity directly benefits cognitive health or because people with declining cognitive function become less cognitively active.

What Is “Cognitive Reserve”?

The possibility that cognitive activity can positively affect our brain health relates to an idea called “cognitive reserve”. It suggests that people build up a reserve of cognitive abilities during their lives that can protect them against some of the cognitive decline that can happen as the result of aging or the development of a disease such as dementia. A person can increase their cognitive reserve through activities that engage their brain, such as learning a language or new skill, solving puzzles, and high levels of social interaction, particularly if the activities are novel and varied. Regular physical activity, not smoking, and a healthy diet are also important.

The idea of cognitive reserve is supported by research that has shown that the relationship between cognitive activity and function in older age is not affected by the degree of abnormal brain changes. In other words, two people with Alzheimer’s disease may have similar levels of beta-amyloid plaques and tau tangles in their brains, but may differ regarding the extent to which their cognitive function has declined. Equally, two people who seem to have the same level of cognitive function may differ regarding the extent of abnormal change that has happened in their brains.

What Role Do Biomarkers Play?

The authors investigated whether there is a relationship between the levels of three biomarkers in the blood that can be used to predict and stage some types of dementia, including Alzheimer’s disease, and the extent to which a person’s level of cognitive activity affects their cognitive function as they age. Biomarkers are measurable characteristics, such as molecules in the blood or changes in genes (mutations), that can indicate whether the body is working normally or a disease is present.

In this study, the authors measured the levels of three biomarkers in blood samples: total tau, neurofilament light chain (NfL), and glial fibrillary acidic protein (GFAP).

  • As already mentioned, tau tangles are a characteristic of Alzheimer’s disease, and high levels of total tau (both normal and abnormal forms of tau) in the blood have been reported to be associated with increased risk of cognitive impairment.
  • High levels of NfL in the blood have been linked to neurodegeneration and there is evidence that it may be possible to use levels of NfL in the blood to detect whether a person has dementia.
  • Levels of GFAP in the blood have been shown to be increased early on in the development of Alzheimer’s disease. This can be used to determine whether a person has Alzheimer’s disease or frontotemporal dementia, a rarer type of dementia that affects the frontal and temporal lobes of the brain, which are responsible for language, behavior, and emotions.

Who Participated in the Study?

The people who participated in the study were all aged 65 years or older, and one-third of the participants were randomly selected to give blood samples for biomarker testing at the start of the study. All of the participants reported how often they participated in cognitive activities that were judged to be common to older adults because they are not overly dependent on a person’s financial or social situation:

  • Watching television
  • Listening to the radio
  • Visiting a museum
  • Playing games or doing puzzles
  • Reading books
  • Reading magazines
  • Reading newspapers

Their cognitive function was also assessed at the start of the study and in 3-year cycles after that, using tests of short-term and immediate memory, perceptual speed, and language functioning.

What Did the Authors Find?

The authors of the study found that higher levels of cognitive activity were associated with better cognitive function not only at the start of the study, but also after an average of 6.4 years of follow-up, during which the authors contacted participants at prearranged dates to check on their progress. However, the levels of the blood biomarkers did not affect this relationship. In other words, the benefits of high levels of cognitive activity on cognitive function were not affected by the levels of tau, NfL, and GFAP in the blood, even when they were present at high levels.

These results lend weight to the idea of cognitive reserve and suggest that people who engage in enriching activities throughout their lives may enter old age with a higher level of cognitive function, which can delay or reduce the extent to which symptoms of dementia or other neurodegenerative diseases affect their quality of life.

Take-Home Message

Ensuring that we are cognitively active before we reach our 60s (i.e., before the age at which the study’s participants were initially assessed) may benefit our brain health and cognitive function as we age. The fact that the authors of the study did not find a link between the blood biomarkers and cognitive activity over time also suggests that people benefit from enrichment activities throughout their lives, including in their later years.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Factors Increasing Stroke Risk in Young Adults

What Is the Main Idea?

Although stroke is more common among the elderly, it can happen at any age. In the research article “Risk Factors for Stroke in the Young (18–45 Years): A Case-Control Analysis of INTERSTROKE Data from 32 Countries”, published in the journal Neuroepidemiology, the authors describe how the main risk factors causing stroke in young adults have changed in recent years.

What Else Can You Learn?

In this blog post, the different types of stroke and their effects are described. Ways that you can reduce your risk of stroke and how case–control research studies are conducted are also discussed.

What Is Stroke?

Arteries are blood vessels that carry oxygen-rich blood from the heart to cells and organs throughout the body. Stroke is a disease that affects the arteries that lead to and pass through the brain. The oxygen and nutrients that brain cells need to function properly are carried around the brain by the blood. When stroke happens, the blood supply to part of the brain is cut off or reduced.

This can be caused by a blockage in an artery (this is called an “ischemic” stroke) or by an artery rupturing, causing bleeding in or around the brain (this is called a “hemorrhagic” stroke). The cells in the affected area of the brain can no longer get all the oxygen and nutrients they need and quickly begin to die. The bleeding can also cause irritation and swelling, and pressure can build up in surrounding tissues, which can increase the amount of damage in the brain.

As well as the two main types of stroke, some people experience “mini-strokes” called transient ischemic attacks (TIAs). A TIA is essentially a stroke caused by a temporary, short-term blockage of an artery. Once the blockage clears the symptoms stop. Although someone who has a TIA may feel better quickly they still need medical attention as soon as possible, because the TIA may be a warning sign that they will have a full stroke in the near future.

What Are the Effects of Stroke?

The effects of stroke differ from one person to another and depend on the severity, the area of the brain that is affected, and the type of stroke experienced. The main symptoms of stroke include one side of the face drooping or the person being unable to smile, not being able to lift both arms and keep them raised, the person having difficulty understanding what you are saying, or slurred speech or not being able to talk.

Other symptoms include confusion or memory loss, numbness or weakness on one side of the body, a sudden fall or dizziness, sudden severe headache, and/or loss of sight or blurred vision (in one or both eyes). Although some people will have a full recovery after stroke, others will have permanent effects that do not get better.

What Causes Stroke?

There are some factors that are known to increase your chance of stroke. These include your age, ethnicity, having a close relative (a sibling, parent, or grandparent) who has had a stroke, especially if the stroke happened before they reached age 65 years, and having other conditions such as diabetes or a type of heart disease. Your arteries naturally become narrower as you get older, and blood clots that cause ischemic stroke often form in areas where arteries have become narrower or blocked over time as a result of the buildup of fatty deposits (a process called “atherosclerosis”).

Smoking, high levels of lipids (fats) in the blood (such as cholesterol and triglycerides), diabetes, drinking excessive amounts of alcohol (binge drinking), obesity, and high blood pressure (also called “hypertension”) can all speed up this process. High blood pressure is also the main cause of hemorrhagic stroke because it can weaken arteries in the brain. The roles of smoking, diabetes, high lipid levels, and high blood pressure in causing stroke are so well known that they are sometimes called “traditional” risk factors.

What Did This Study Investigate?

Although stroke is more common among the elderly, it can happen at any age, even in infants. There is some evidence that the global incidence of stroke among younger and middle-aged people (aged 18–64 years) is increasing, with significant increases in low- and middle-income countries. As these countries have undergone economic changes, so too have the dietary and lifestyle habits of their inhabitants, resulting in increases in high blood pressure, diabetes, and obesity.

Although it used to be thought that rare, non-traditional risk factors were mainly responsible for stroke in younger people (such as conditions that mean that a person has a tendency to develop blood clots or rheumatic heart disease), this may no longer be the case.

The authors of this study used data from a study called INTERSTROKE to assess whether traditional risk factors are now the main cause of stroke in people aged 18–45 years. INTERSTROKE was a case–control study that involved 142 centers located in 32 countries across the world between 2007 and 2015. A case–control study is a type of study that compares the medical and lifestyle histories of two different groups of people to identify risk factors that may be associated with a disease or condition:

  • one group of people with the disease being studied (cases) and
  • another similar group of people who do not have the disease (controls).

In INTERSTROKE, people who experienced their first acute stroke and who presented to medical professionals within 5 days of their symptoms beginning were matched with control participants based on their age and sex. In total, 1,582 pairs of participants were assessed.

What Did the Study Show?

As in older people, ischemic stroke was more common than hemorrhagic stroke in younger adults (accounting for 71% of cases). No statistically significant regional differences in risk factors were identified, although this may have been influenced by the low numbers of participants from individual regions. Traditional risk factors such as high blood pressure, high lipid levels, smoking, excessive alcohol consumption, obesity, and psychosocial stress (caused by our environment and relationships) were also shown to be significant risk factors for stroke in younger adults. High blood pressure was shown to be particularly significant, and was consistently identified as the strongest risk factor across all of the regions included in the study, different stroke types, and both sexes.

These results show that, worldwide, the traditional risk factors for stroke are now as important for younger adults as they are for older members of the population. The authors suggest that public health efforts that aim to identify and address traditional risk factors for stroke should start when people are in their 20s and 30s, which is much earlier than previously thought.

Take-Home Message

Taking steps to control your blood pressure and keep it low, whatever your age, can have significant health benefits that include reducing the risk of stroke. Eating a healthy diet that includes plenty of vegetables, wholegrains, fruit, some dairy products, fish, poultry, nuts, seeds, and beans, and reducing your consumption of sugars and red and processed meat can help. Stopping smoking, only drinking moderate amounts of alcohol (and avoiding binge drinking in particular), and being more active can also have significant positive effects on your health.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Can Eating Watermelon Trigger Migraine?

What Is the Main Idea?

The exact causes of migraine are unknown, but it is thought that migraine attacks develop as a result of abnormal brain activity. In the research article “Migraine Attacks Triggered by Ingestion of Watermelon”, published in the journal European Neurology, the authors describe how watermelon consumption may trigger migraine headache attacks by activating a process called the L-arginine-nitric oxide pathway.

 What Else Can You Learn?

In this blog post, different types of migraine and what is known about how migraine attacks develop are described. The processes by which nerves transmit signals throughout the body and the L-arginine-nitric oxide pathway are also discussed.

What Is Migraine?

Migraine is often characterized as a headache that causes severe throbbing pain or a pulsing sensation, usually on one side of the head. However, there are different types of migraine and headache, and it can be difficult to tell them apart. Different people also experience different migraine symptoms.

Although many migraine attacks involve a severe throbbing headache, some people will experience migraine attacks without headache (known as silent migraine). When this happens, the person experiences “aura” symptoms such as flashing lights or seeing zigzag lines, but does not develop head pain.

Other people may experience migraine that includes severe head pain with or without aura symptoms, such as changes in their vision, numbness or tingling, feeling dizzy, having difficulty speaking, and feeling or being sick. Migraine attacks can last anywhere between several hours and three days, and some symptoms may begin one or two days before the headache develops.

What Causes Migraine?

The exact causes of migraine are not known, although the fact that people are more likely to experience migraine if they have a close family member who does suggests that there is some sort of genetic involvement. It is thought that migraine attacks develop when nerve signals, chemicals, and blood vessels in the brain are affected by abnormal brain activity.

Neurogenic inflammation (a type of inflammation caused when particular types of nerves are activated and release mediators of inflammation such as nitric oxide) and the widening of blood vessels in the membrane layers that protect the brain and spinal cord are believed by some researchers to be key causes of migraine headache. Leakage of blood plasma (the liquid component of blood that does not include blood cells) from blood vessels into the surrounding tissues may also be involved.

Nerves (also known as neurons), together with the spinal cord and brain, are key components of the nervous system and consist of bundles of nerve fibers wrapped up to form cable-like cells. Nerves send electrical signals that control our senses, like pain and touch, and essential processes such as breathing, digestion, and movement, from one part of the body to another. When an electrical signal reaches the end of a nerve it is converted into a chemical signal. This causes molecules called neurotransmitters, such as dopamine and epinephrine (also known as adrenaline), to be released into the space between the end of one nerve and the start of the next one, which is called a synapse.

Once they have crossed the synapse, the neurotransmitters bind to receptors on the new nerve, and the signal is converted back into an electrical signal and travels on along the neuron. The ability of nerves to transmit signals internally or between one nerve and another is dependent on a process called depolarization, which is essential to the function of many cells and communication between them. Most cells have an internal environment that is normally negatively charged compared with the cell’s external environment.

When depolarization occurs, the internal charge of the cell temporarily becomes more positive before returning back to normal. Migraine aura is thought to be caused by a wave of “spreading depolarization” in a part of the brain called the cortex. Nitric oxide and glutamate are released during spreading depolarization, and some studies have reported increased levels of nitric oxide during headache attacks. This has led some researchers to suggest that the pathways that break down nitric oxide may be involved in migraines.

What Did This Study Investigate?

Although the exact causes of migraine are still unclear, migraine attacks are known to be triggered by stress and tiredness, hormonal changes, prolonged fasting or skipping meals, and the consumption of too much alcohol or caffeine and certain foods. Watermelon is the main natural source of an amino acid (the component units that are joined together to make proteins) called L-citrulline (in fact, its name is derived from the scientific name for watermelon, Citrullus vulgaris).

L-citrulline is also made by the body in the liver and intestine, and is an important component of the urea cycle, the process by which toxic ammonia is converted into urea so that it can be passed out of the body in urine. L-citrulline in the body can be converted to another amino acid called L-arginine, from which nitric oxide is produced via a process called the L-arginine-nitric oxide pathway. This means that watermelon may be an indirect source of nitric oxide in the body and may trigger migraine in some people.

The authors of this study conducted a clinical trial to investigate whether eating watermelon causes headache attacks in people who experience migraine. They recruited 38 volunteers who experience migraine without aura and 38 who do not, and asked them to each consume a portion of watermelon after avoiding consumption of watermelon and other L-citrulline-containing foods in the preceding 7 days, and fasting for the preceding 8 hours.

All of the volunteers gave blood samples before and after eating the watermelon to enable the researchers to assess whether there were any changes in blood serum nitrite levels (nitrite is produced when nitric oxide breaks down). All of the volunteers were then followed up for 24 hours by telephone, so that the researchers could be informed if any of them developed headache.

What Were the Results of the Study?

Headache was triggered in almost one-quarter of the people in the group who experienced migraine (23.7%), on average around 2 hours after the watermelon was consumed. In contrast, none of the volunteers in the migraine-free group developed headache over the 24-hour follow-up period. Interestingly, around one-quarter of the volunteers in the migraine (23.4%) and migraine-free (24.3%) groups were shown to have increased nitrite levels in their blood serum samples after consuming watermelon. These increases from the values recorded before watermelon consumption were statistically significant.

These findings suggest that eating watermelon can trigger headache attacks in people who experience migraine and increase serum nitrite levels, which may be due to activation of the L-arginine-nitric oxide pathway. Although everyone is different and not all of the migraine group volunteers developed headache after consuming watermelon, people who experience migraine may wish to consider reducing or avoiding consumption of watermelon.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Emerging Treatments for Ulcerative Colitis

What Is the Main Idea?

The treatment of ulcerative colitis has traditionally focused on the control of symptoms. In the review article “Current and Emerging Targeted Therapies for Ulcerative Colitis”, published in the journal Visceral Medicine, the authors describe how advances in targeted treatments have the potential to improve the quality of life of people with ulcerative colitis.

What Else Can You Learn?

In this blog post, ulcerative colitis and emerging treatments for it are described. Different phases of clinical trials are also discussed.

What Is Ulcerative Colitis?

Ulcerative colitis is a form of inflammatory bowel disease. People with ulcerative colitis have chronic (long-term) inflammation and ulcers (sores) in the colon (also known as the large bowel and part of the large intestine, it removes water and some nutrients from partially digested food before the remaining waste is passed out of the body).

For many people with ulcerative colitis, the disease follows a “relapsing and remitting” course, which means that there will be times when their symptoms get worse and others when their symptoms partly or completely go away. Symptoms of ulcerative colitis include needing to go to the toilet frequently and urgently, abdominal pain, a general feeling of being unwell, and fatigue, which can combine to have a major impact on a person’s quality of life and ability to work.

What Causes Ulcerative Colitis?

The exact causes of ulcerative colitis are not fully understood, but it is known that a combination of factors causes the immune system to activate inflammation. Inflammation is a normal process through which your body responds to an injury or a perceived threat, such as a bacterial infection. In ulcerative colitis, a high level of inflammation taking place for too long results in tissue damage in the colon and disease-related complications that cause the symptoms described above.

Ulcerative colitis is thought by some to be an autoimmune condition, which means that the body’s immune system wrongly attacks normal, healthy tissue. The intestines contain hundreds of different species of bacteria, which are part of the “gut microbiome” (the term given to all of the microorganisms that live in the intestines and their genetic material). Although some of these species can cause illness, many are essential to our health and wellbeing, playing key roles in digestion, metabolism (the chemical reactions in the body that produce energy from food), regulation of the immune system, and mood.

Some researchers believe that in ulcerative colitis, the immune system may mistakenly identify harmless bacteria inside the colon as a threat and start to attack them, causing the colon to become inflamed. Genetic factors like changes in genes and environmental factors are also known to be involved in the development of ulcerative colitis, and recent advances in our understanding have enabled new targeted therapies to be developed that selectively block or reduce the activity of components involved in inflammation.

Treatment of ulcerative colitis has traditionally focused on symptom control, whereas the development of new targeted treatments aims to achieve remission (the signs and symptoms of disease are reduced either partially or completely) and the restoration of people’s quality of life. A number of new treatments are in phase 2 or 3 clinical trials and may soon add to the range of treatments available to people with ulcerative colitis.

What Are the Different Types of Clinical Trials?

To be approved, a treatment must be proven to be safe and better than existing treatments. New treatments have to successfully go through several phases of clinical trials before they are approved for use and cannot move on to the next phase unless that particular phase of trial has yielded positive results. Phase 0 and phase 1 trials are the earliest-phase trials. They usually involve a small number of people (usually up to 50 people), aim to determine whether a treatment is safe, and (if the treatment involves a drug being given) what happens to it in the body.

Once found to be safe, treatments enter larger phase 2 trials (usually up to 100 people), where they are assessed as treatments for specific illnesses and any side effects (unintended effects of the drug) are investigated in more detail. Phase 3 trials include hundreds or thousands of people and test a new treatment against an existing treatment to see whether the new treatment is better. Phase 3 trials are randomized and often take place over several years so that the long-lasting effects of the new treatment can be assessed.

Emerging Therapies for Ulcerative Colitis

Interleukin-23 (IL-23)

A protein called interleukin-23 (IL-23) is known to inhibit the responses of a type of white blood cell called regulatory T cells. These cells play an important role in the body by suppressing the response of the immune system, ensuring that its normal level of activity remains within set limits and that its activity is reduced once a threat has been dealt with. They are also critical in preventing the development of autoimmunity.

When IL-23 inhibits regulatory T cells, inflammation is able to continue unchecked. A particular form of IL-23 called IL-23p19 has been identified as being involved in the development of ulcerative colitis. Four IL-23p19 inhibitors are currently in or have completed phase 2 or 3 trials. They appear to be particularly effective in patients whose ulcerative colitis has become resistant to treatment with tumor necrosis factor (TNF) inhibitors, and their effectiveness in combination with TNF inhibitors is also being investigated.

Sphingosine-1-Phosphate (S1P)

S1P (sphingosine-1-phosphate) is a type of molecule called a “lipid mediator”. It is produced when a cell receives a stimulus and is then exported from the cell so that it can bind to a receptor and transmit a signal to target cells. S1P binds to five different S1P receptors expressed on various types of immune cell, enabling lymphocytes (cells that make antibodies and help control the immune system) to travel toward inflamed tissue in the intestine. Drugs that bind to S1P receptors and cause them to be internalized back into the cell and broken down are called S1P agonists. One S1P agonist has already been approved for the treatment of ulcerative colitis and another is in clinical development.

Toll-Like Receptor 9 (TLR-9)

A receptor inside cells called Toll-like receptor 9 (TLR-9) recognizes and binds to bacterial and viral DNA that is present inside cells. It does this by recognizing components called CpG motifs, which are made of a cytosine and a guanine bound together (these are two of the four components of DNA that make up the “genetic code”). CpG motifs are known to be the components of bacterial and viral DNA that cause the immune system to be activated.

As a result, some researchers are investigating the use of short, single-stranded synthetic stretches of DNA (called CpG oligonucleotides) to stimulate the immune system. One such molecule, which activates TLR-9 on target cells, has been shown in clinical trials to suppress immune cells that promote inflammation and to activate immune cells that suppress it, and is undergoing further testing.

microRNAs

Another approach is investigating the potential use of microRNAs. Your genes are short sections of DNA that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA. There are several different types of RNA, each with different functions, and they play important roles in normal cells and the development of disease. MicroRNAs are small RNA molecules that do not code for proteins and instead play important roles in regulating genes, for example by inhibiting (silencing) gene expression.

Some microRNAs also activate signaling pathways inside cells, turning processes on or off. One such microRNA is miR-124, which negatively regulates inflammation. Reduced expression levels of miR-124 have been reported in studies of patients with ulcerative colitis, and a treatment that has been designed to upregulate miR-124 is currently in clinical trials involving patients with a variety of inflammatory diseases, including ulcerative colitis and rheumatoid arthritis.

Interleukin-6 (IL-6)

Interleukin-6 (IL-6) is another molecule that promotes inflammation and has been shown to play a central role in the development of inflammatory bowel disease. The binding of IL-6 to its receptor results in uncontrolled accumulation of activated T cells that stop inflammation from being reduced. Results of a phase 2 trial investigating an IL-6 inhibitor have been positive and it will be investigated further to assess its safety and efficacy in treating ulcerative colitis.

Take-Home Message

It is hoped that the emerging treatments described above, and others, will increase the options available to patients with ulcerative colitis. In addition, their investigation will continue to improve our understanding of how ulcerative colitis is caused, enabling further targeted therapies to be developed and opening up the possibility of personalizing each patient’s treatment.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available. Furthermore, in the Conflict of Interest Statement at the end of this paper, the authors make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest.

Treatment of Neurological Disorders: How Systematic Reviews Help Guide Research

What Is the Main Idea?

Intravenous immunoglobulin is a treatment product that is used to treat a variety of neurological conditions. In the free access research article “Adverse Reactions Associated with Intravenous Immunoglobulin Administration in the Treatment of Neurological Disorders: A Systematic Review”, published in International Archives of Allergy and Immunology, the authors discuss how they conducted a systematic review to determine whether any particular characteristics of neurological disorders are associated with an increased chance of patients experiencing adverse reactions if they are treated with intravenous immunoglobulin.

What Else Can You Learn?

In this blog post, the use of systematic reviews to evaluate what is known about specific research questions is discussed. Intravenous immunoglobulin and antibodies are also described.

What Is a Systematic Review?

A systematic review is a type of research study that seeks to summarize all of the available primary research (i.e., research that has collected data first-hand) that has been conducted to answer a research question. It involves a systematic search for data using a specific, repeatable method with a clearly defined set of objectives. The search is usually conducted using databases that hold information about research publications and aims to identify all studies within them that meet predefined eligibility criteria.

The validity of the findings for each study is then assessed, particularly regarding whether there is any risk that the results may be biased, following which the results are considered together and any conclusions drawn. Systematic reviews enable up-to-date assessment of what is known about a subject and are often used in the development and updating of clinical guidelines.

What Did This Study Investigate?

The authors of this study conducted a systematic review to summarize the results of studies that have reported adverse reactions when patients with neurological disorders – conditions that affect the brain, spinal cord, and/or nerves throughout the body – are treated with intravenous immunoglobulin. Intravenous immunoglobulin is a product that is made up of different human antibodies (immunoglobulins is another word for antibodies) that have been pooled together and are given intravenously (through a vein).

Antibodies are specialized protective proteins that are made by the immune system and recognize anything that is foreign to the body (these are called “antigens”), like bacteria and viruses. Different antibodies specifically recognize and neutralize different antigens and, once they have recognized and responded to a particular antigen once, antibodies against that antigen continue to circulate in the blood to provide protection against it if it is encountered again (this is how we become immune to some diseases).

Because intravenous immunoglobulin is prepared from blood samples donated by a large number of different people (depending on the manufacturer, between 1,000 and 100,000 donors), it contains a diverse collection of antibodies against a broad range of antigens, reflecting the environmental exposures of everyone who donated blood. As a result, intravenous immunoglobulin can be effective in preventing or treating infections in people who are unable to make enough antibodies (known as “humoral immunodeficiency”) or who have an autoimmune disease (where the body mistakenly recognizes a cell type or specific protein in the body as foreign, treats it as an antigen, and attacks it).

Although a large number of clinical trials have reported that treatment with intravenous immunoglobulin is safe and generally well tolerated, some patients experience adverse reactions (an undesired effect of the treatment). The authors of this study therefore set out to systematically review studies that have reported adverse reactions to intravenous immunoglobulin therapy when it is used to treat more than one neurological disorder, to investigate whether any particular characteristics of individual neurological disorders are associated with patients experiencing adverse reactions.

How Was the Study Conducted?

The authors of the study searched three electronic databases for all research studies published up until the date of the search, using the following combination of search terms:

  • IVIg (the acronym for intravenous immunoglobulin), intravenous immunoglobulin, or immunoglobulin G (the type of immunoglobulin that makes up the greatest proportion of intravenous immunoglobulin), and
  • any term beginning with “neurolog”, and
  • adverse reaction, adverse effect, side effect, or any term beginning with “allerg”.

Articles were then included in the review if they described primary research, reported adverse reactions to intravenous immunoglobulin therapy in more than one neurological disorder, and were available as full-text publications in English. Although 2,196 studies were identified initially, only 65 met all of the eligibility criteria and were included in the final analysis.

What Did the Study Find?

After systematically reviewing the eligible studies, the authors of this study reported that when the results from all the studies were combined, the chance of patients developing an adverse reaction was estimated to be between 24 and 34%. In many studies the definition of specific adverse reactions was unclear or not specified. In addition, a large proportion of studies were conducted retrospectively, which increased the chance of selection bias. Selection bias is introduced when a group of patients is selected for analysis in a way that does not allow the sample population to be truly randomized, which means that it isn’t representative of the population as a whole, potentially leading to errors when the researchers draw conclusions about associations or outcomes.

Overall, there was a lack of high-quality comparative data (data that can be used to estimate the extent of similarity or dissimilarity between two things), which made it difficult for the authors to determine whether any specific neurological symptoms or signs are associated with patients having an increased risk of having an adverse reaction if treated with intravenous immunoglobulin therapy. Although intravenous immunoglobulin treatment was found to be generally well tolerated by patients with neurological conditions, headache was a common adverse reaction and there were some reports of “thromboembolic” complications (caused by the obstruction of a blood vessel by a blood clot that has become dislodged from another site in the circulatory system, which circulates the blood and lymph fluid through the body).

The authors concluded that patients with limited mobility (as seen in some conditions that affect both nerves and muscles), with paraproteinemia (which occurs when an abnormal protein called a paraprotein starts to be secreted by a population of antibody-producing cells, as seen in some conditions where nerve damage causes pain, weakness, or numbness, often in the hands, arms, and feet), or with cardiomyopathy (a general term for problems with the heart that make it harder for it to pump blood) were likely to have an increased risk of experiencing adverse reactions. They also found some evidence that children might be at increased risk of experiencing them.

Although the systematic review was unable to identify neurological disease characteristics that are definitely associated with adverse reactions in patients treated with intravenous immunoglobulin, the knowledge gained from this study can be used to guide the design of research studies in the future. Systematic reviews like this one play a key role in shaping future research directions by identifying areas relating to research questions that remain poorly understood or that need further investigation because different studies have reported conflicting results. This increases the chance of positive discoveries in the future that may improve the prevention and treatment of adverse reactions.

Precision Medicine to Optimize Treatment of Cholangiocarcinoma

What Is the Main Idea?

Cholangiocarcinoma is a type of cancer that is exceedingly rare in children and, as a result, there is no standard treatment protocol. In the open access research article “Identification of a Novel NRG1 Fusion with Targeted Therapeutic Implications in Locally Advanced Pediatric Cholangiocarcinoma: A Case Report”, published in Case Reports in Oncology, the authors discuss the case of a 16-year-old girl and their use of a “precision medicine” approach to optimize her treatment plan.

What Else Can You Learn?

In this blog post, the use of precision medicine approaches to treat cancer is discussed. The symptoms of cholangiocarcinoma and the role of bile ducts in the digestive system are also described.

What Is Cholangiocarcinoma?

Cholangiocarcinoma is the name given to a group of cancers that form in the bile ducts. The bile ducts are part of the digestive system and are small tubes that connect the liver to the gallbladder and small intestine. The liver has a number of roles including cleaning the blood to remove harmful substances and metabolizing proteins, fats, and carbohydrates so your body can use them. The liver also makes a fluid called bile that helps the body to break down fats from food. Bile can be stored in the gallbladder or can travel directly from the liver to the small intestine. Most of the digestive process takes place in the small intestine and it is here that nutrients and minerals from our food are absorbed into the blood.

Cholangiocarcinoma is divided into three types based on where the cancer develops in the bile ducts:

  • Intrahepatic cholangiocarcinoma starts in parts of the bile ducts that are inside the liver (“intrahepatic” literally means “inside the liver”).
  • The other two types of cholangiocarcinoma are extrahepatic (meaning that they start outside the liver). Hilar (also sometimes known as “perihilar”) cholangiocarcinoma starts just outside the liver, where the right and left bile ducts join to form the common hepatic duct (which is the area of bile duct before the gallbladder). Distal bile duct cholangiocarcinoma starts in the common bile duct, where the ducts from the liver and gallbladder join together, which passes through the pancreas and ends in the small intestine.

What Are the Signs and Symptoms of Cholangiocarcinoma?

Most people with cholangiocarcinoma don’t have any symptoms when the cancer first starts to develop, and only start to experience symptoms when the cancer is at an advanced stage and the flow of bile from the liver becomes blocked. It is when this happens and the bile starts to move back into the blood and body tissue that signs and symptoms start to develop. These can include jaundice (yellowing of the skin and the whites of your eyes), dark urine and/or white-colored stools, itchy skin, pain in the stomach area (usually in the upper right-hand side), loss of appetite, and non-specific symptoms such as fever, night sweats, fatigue, and losing weight without trying. These signs and symptoms can also be caused by other conditions so if you have any of them it is important that you consult a medical professional.

What Causes Cholangiocarcinoma?

Cholangiocarcinoma is rare. According to the National Cancer Institute it affects fewer than 6 in 10,000 people worldwide each year, although it is more common in some countries than others. It is not yet clear why some cholangiocarcinomas develop, but some factors that have been identified as increasing a person’s risk include having primary sclerosing cholangitis (a rare type of liver disease that causes long-term inflammation of the liver and hardening and scarring of the bile ducts) and liver cirrhosis (permanent scarring of the liver tissue caused by damage), bile duct problems that are present at birth (such as Caroli’s disease and choledochal cysts), liver fluke infection in areas of Southeast Asia, and biliary stones (these are similar to gallstones but form in the liver). Some DNA changes that cause inherited conditions, including Lynch syndrome and cystic fibrosis, are also associated with increased risk.

Cholangiocarcinoma is slightly more common in men than women and the risk of developing it increases with age. Although most people who are diagnosed with cholangiocarcinoma are over the age of 65, it can occur at any age. Cholangiocarcinoma does occur in children but is exceedingly rare, with some analyses concluding that fewer than 22 cases have been reported in the last 40 years, with the majority of the children who developed the cancer having a gastrointestinal disorder that is linked to its development. Because of its rarity, there is no standard treatment protocol for children diagnosed with cholangiocarcinoma.

What Did This Study Investigate?

The authors of this study describe the case of a 16-year-old girl who was diagnosed with advanced hilar cholangiocarcinoma. The authors used a “precision medicine” approach for her treatment plan, which aims to optimize the efficiency of treatment by using genetic analysis (such as DNA sequencing) or molecular profiling (laboratory analysis of tissue, blood, or fluid samples to check for certain genes, proteins, or other molecules). Specific information about a person’s cancer can then be used to help make a diagnosis, develop a targeted treatment plan, or find out how well a treatment is working.

Precision medicine is increasingly being used in the treatment of cholangiocarcinoma. A number of mutated genes have been identified in samples from cholangiocarcinomas that belong to a family of genes called “oncogenes”. Oncogenes are genes that are involved in normal cell growth and division, but can cause cancer if they become altered by changes that cause there to be too many copies of the gene or result in it being more active than normal.

The genetic changes in oncogenes that cause them to become activated result in the proteins that they code for being slightly different from the proteins that would be made if they had not become altered. These differences are being exploited by scientists to develop targeted treatments that only attack cancer cells. In this study, the authors describe how DNA sequencing of tissue from the patient’s tumor that was obtained from biopsy sampling during the diagnostic process showed that the tumor had a genetic change called an “oncogenic gene fusion”.

This means that part of the tumor cell DNA had become structurally rearranged, causing an area on chromosome 1 to become fused with chromosome 8 (humans have 23 pairs of chromosomes), producing a hybrid gene and leading to the activity of a gene called neuregulin-1 (NRG1) becoming dysregulated. NRG1 fusions have been found in a number of cancer types, including lung cancer, and have been estimated to occur in 0.8% of cholangiocarcinomas. They lead to NRG1 protein expressed at the cell surface binding to a protein called ERBB3 (its full name is “erythroblastic oncogene B 3”, and it is also known as human epidermal growth factor receptor 3 or HER3).

This causes ERBB3 to bind with ERBB2, also known as HER2, an oncogene that has been shown to play an important role in the development and progression of certain types of aggressive breast cancer. The two proteins form dimers (complexes made up of two molecules linked together) that activate signaling pathways inside cells, driving abnormal cell proliferation. This can contribute to the growth of a tumor.

Having identified that the patient’s tumor DNA had an NRG1 fusion, the authors were able to treat her with a combination of conventional chemotherapy and radiotherapy, followed by targeted treatment with a drug that specifically blocks signaling by members of the ERBB family of proteins. The treatment with the targeted drug was able to slow the growth of the tumor. In addition, the genetic changes that were identified in the patient’s tumor DNA meant that she was eligible to take part in a clinical trial, which she would not have been able to do if her tumor DNA had not been sequenced.

This case highlights how the increasing use of precision medicine approaches and targeted therapies can improve the quality of life of patients with rare cancers.

Tooth Erosion and the Acidity of Soft Drinks

What Is the Main Idea?

Many soft drinks, whether they are sugary or artificially sweetened, are acidic. In the open access research article “Erosive Potential of Various Beverages in the United Arab Emirates: pH Assessment”, published in the Dubai Medical Journal, the authors discuss the results of their investigation into the acidity of soft drinks available in the United Arab Emirates (UAE) and the effects that drinking soft drinks can have on dental health.

What Else Can You Learn?

In this blog post, the effects of acidic soft drinks on the teeth are discussed. The structure of the teeth and the pH scale are also described.

What Is Dental Erosion?

Teeth are essential components of the digestive system, enabling us to cut and grind our food into smaller pieces so that we can swallow it more easily. Each tooth consists of four main layers. The tooth pulp is the innermost layer and contains connective tissue, blood vessels, and nerves. Cementum is a layer that covers the root of the tooth (the part of the tooth that is not exposed to the environment inside the mouth) and helps to anchor it in the jaw. Dentin is the main supporting structure of the tooth and is made of a bone-like matrix that protects the nerves in the pulp. It sits directly under the final layer, the enamel.

Tooth enamel forms a shiny, hard protective layer around the crown (the part of the tooth that is exposed above the gums) to protect it from damage and the effects of bacteria in the mouth that can cause small openings or holes called “cavities”. It is highly mineralized, with 95% of it consisting of calcium and phosphorus bound together in small crystals called hydroxyapatite that are extremely strong.

Although enamel is the hardest substance in the body it is unable to regenerate if it becomes damaged because there are no living cells in the tooth to replace it. Physical factors such as everyday wear and tear and teeth grinding can contribute to dental erosion (the gradual destruction of tooth enamel), leading to the inner layers of the teeth becoming exposed and increasing the chance of cavities developing. Chemical factors can also cause dental erosion. For example, sugary foods can interact with bacteria in dental plaque (a sticky substance that continuously builds up on the teeth) leading to the production of acid.

How Does pH Affect Tooth Enamel?

pH is a numerical scale, ranging from 0 to 14, that describes how acidic or alkaline a substance is. A pH of 7 describes a substance that is neutral (is neither acidic nor alkaline), while a pH less than 7 describes one that is acidic, with the acidity increasing as the pH approaches 0. Conversely, a pH greater than 7 describes an alkali, with the strength of alkalinity increasing as the pH approaches 14. In the mouth, acid produced by bacteria in plaque or in acidic foods and drinks softens the enamel, and can dissolve the hydroxyapatite crystals within it if the pH drops below 5.5. It has been reported that the solubility of tooth enamel increases 10-fold with each one-unit decrease in pH.

Why Are Many Soft Drinks Acidic?

Acids either occur naturally in drinks or are added to enhance their flavor or improve their shelf life. For example, citrus juices naturally contain citric acid, but it may also be added to other drinks to increase the tanginess of the flavor or to act as a preservative. Phosphoric acid is added to some soft drinks for similar reasons. In addition, fizzy drinks get their fizziness as a result of carbon dioxide being dissolved in water under pressure in a process that forms a weak solution of carbonic acid (resulting in a tingly sensation on your tongue when you drink them).

It would be wrong to think that the presence of acids in soft drinks is always a bad thing. Ascorbic acid, another name for vitamin C, is commonly found on lists of ingredients and has several important functions in the body including keeping cells healthy, wound healing, and the maintenance of healthy skin, bones and blood vessels.

What Did This Study Investigate?

Over recent years, increased consumption of soft drinks and fruit juices has been linked to rising rates of a wide range of health conditions that include type 2 diabetes, obesity, and the development of osteoporosis later in life. In the UAE, it has been estimated that each resident consumes an average of 103 liters of soft drinks per year, and the country is one of the top five countries in the world in terms of juice consumption per person. The high rate of acidic drink consumption in the UAE has been reported to be having a significant effect on the dental health of the country’s citizens, with over 50% of 5-year-old preschool children showing signs of tooth damage caused by the enamel starting to dissolve.

The authors of this study analyzed 306 different soft drinks that are sold in the UAE, including fizzy drinks, energy drinks, sparkling water, iced teas, juices, non-alcoholic malt beverages, coconut water, and sports drinks. They measured the pH of each drink three times using a pH meter and classified them as mildly erosive (pH 4 or more), erosive (pH between 3 and 3.99), or extremely erosive (pH less than 3).

What Did the Study Find?

The authors of the study reported that 88% of the drinks tested had a pH of less than 4, with 51% classified as erosive and 37% as extremely erosive. The most acidic drink tested, a fizzy drink, had a pH of only 2.32, although the type of drink with the lowest average pH was non-alcoholic malt beverages (pH 2.99). In addition to the pH of a drink, there is some evidence that the type of acid added or naturally occurring in it may be linked to the amount of erosion that may occur, with citric acid having been reported previously to be more aggressive than phosphoric acid.

In this study, citric, phosphoric, ascorbic, and malic acids were the acids most frequently present according to the ingredients labels. Phosphoric acid was found in both fizzy and energy drinks, citric acid was found combined with pantothenic acid in energy drinks, malic acid (which contributes to the sour taste of some foods and drinks) was mainly present in sparkling water or combined with juices containing citric acid, and ascorbic acid was mainly found in juices and malt beverages.

Take-Home Message for Patients

Reducing your consumption of soft drinks can benefit the health of your teeth because it will lessen the amount of time that your enamel is exposed to high levels of acidity. If you do consume a soft drink, drinking it through a straw can help to keep it away from the teeth. In addition, if you decide to brush your teeth after having a soft drink, it is important to wait 30 minutes to 1 hour before doing so. This is because it takes around this length of time for the saliva in the mouth to return the environment to neutral. If you brush your teeth before this happens, the acidity in the mouth may mean that the enamel is still slightly soft, increasing the chance of physical erosion.

Developments in Stem Cell-Based Alzheimer’s Disease Research

What Is the Main Idea?

Alzheimer’s disease is the most common cause of dementia among older adults, and the incidence is increasing. In the free access research article “Comprehensive Bibliometric Analysis of Stem Cell Research in Alzheimer’s Disease from 2004 to 2022”, published in the journal Dementia and Geriatric Cognitive Disorders, the authors discuss the results of their review of research literature published about stem cells and Alzheimer’s disease over the last 20 years and highlight future research directions.

What Else Can You Learn?

In this blog post, Alzheimer’s disease is discussed. Stem cells and the specialization of cells to enable them to play different roles in the body are also described.

What Is Alzheimer’s Disease?

Alzheimer’s disease is a type of dementia. Dementia is an umbrella term that is used to describe a group of conditions that affect the nervous system (known as “neurological” conditions). They directly affect the brain and get worse over time (conditions like this are described as “progressive”), usually over a number of years.

Although symptoms can be similar among different types of dementia, and some people have more than one form, Alzheimer’s is associated with memory loss and confusion in the early stages. Mild symptoms and signs range from wandering, getting lost, and repeating questions to changes in mood or personality. More moderate symptoms include impulsive behavior, misplacing things, and problems recognizing family and friends, with people potentially losing the ability to communicate if the condition becomes severe.

Alzheimer’s disease is the most common type of dementia in adults and is usually diagnosed in people aged 60 years and older. It can develop in younger people, but this is rare. Incidence is increasing, and it is estimated that the number of people with Alzheimer’s disease worldwide will triple by 2050.

What Causes Alzheimer’s Disease?

Our understanding of the sequence of events that lead to the development of Alzheimer’s disease is still limited. It is well known that the brains of people with Alzheimer’s disease have abnormal clumps of proteins called “amyloid plaques” and tangled bundles of fibers called “tau tangles”. These are found throughout their brains, but rather than simply being caused by a build-up of plaques and tangles, Alzheimer’s disease is now believed to be a complex condition caused by a variety of factors – including genetic, environmental, and lifestyle factors – that affect the brain over time.

As well as having plaques and tangles, neurons (brain cells that transmit messages from one part of the brain to another) in people with Alzheimer’s disease become damaged and lose their connections with each other, and many other complex brain changes are thought to be involved. There is currently no cure for Alzheimer’s disease and treatment focuses on helping people maintain their brain health, slowing or delaying symptoms, and managing behavioral changes. There is growing evidence that adopting healthy lifestyle habits, like exercising regularly and eating a healthy diet, can reduce the risk of developing dementia, in addition to reducing the risk of other conditions like cancer and heart disease.

How Might Stem Cell-Based Therapy Help?

Cell differentiation is the process by which “immature” undifferentiated (unspecialized) cells take on specific characteristics and become specialized to have a particular role in the body. Stem cells are unique in that they can self-renew, are either undifferentiated or only partially differentiated, and are the source of specialized cell types, like red blood cells and types of brain cell.

Stem cells have become a focus of medical research because it is hoped that studying differentiation will give new insights into how some conditions develop. It is also possible to guide stem cells to become a particular cell type, raising the possibility that tissues that are damaged or affected by a disease could be regenerated or repaired (this is known as “regenerative medicine”).

Focuses of research regarding Alzheimer’s disease include:

  • attempting to replace injured or lost neurons,
  • increasing the production of chemicals in the brain that influence the growth of nervous tissue,
  • reducing the build-up of the proteins that form amyloid plaques and tau tangles,
  • increasing synaptic connections,
  • decreasing inflammation in the brain,
  • repairing metabolic systems that have gone wrong (metabolism is the process by which the body produces energy), and
  • improving the immediate environments of areas in the brain.

What Did This Study Investigate?

The authors of this study used an approach called “bibliometrics” to assess trends and developments across 3,428 stem cell research reports regarding Alzheimer’s disease published between 2004 and 2022. Bibliometrics uses mathematical and statistical methods to analyze and provide an overview of a large number of documents in a particular research field. It can help researchers understand the direction in which research in a given area is heading and can contribute to the formation of clinical guidelines. It can also identify where more collaboration between different research areas is needed and identify new avenues for study.

Their analysis showed that the number of reports published on stem cell research in Alzheimer’s disease has increased dramatically over the last 20 years, particularly since 2016. The increase since 2016 is partly attributed to the combination of induced pluripotent stem cell (iPSC)-based and 3D bioprinting techniques. iPSCs are cells that are derived by reprogramming differentiated skin or blood cells back into an embryonic-like “pluripotent” state (meaning that they can develop into many different cell or tissue types, just like the stem cells in a developing embryo).

This means that a person’s blood cells could potentially be treated to become iPSCs that could then produce new neurons. 3D bioprinting is a technology that uses living cells mixed with bioinks to print natural, 3D tissue-like structures. The combination of iPSC-based and 3D bioprinting techniques has meant that it has been possible to create cell cultures that more closely mimic the situation in the brain.

Research Hot Spots and Future Directions

A number of fields have been key areas of research for some time. These include iPSCs, microglia, and mesenchymal stem cells. Mesenchymal stem cells are a type of stem cell that cannot differentiate into blood cells and has limited self-renewal capacity. Microglia are specialized brain cells that regulate brain development, the repair of injury, and the maintenance of neural networks. There is significant interest in their roles in healthy brains and in how their dysregulation may be involved in the development of neurological conditions.

Newer areas of research interest include the roles of mitochondrial dysregulation (mitochondria are the parts of the cell where energy is produced) and autophagy (a process by which old and damaged proteins or parts of cells are broken down and destroyed) in the development of Alzheimer’s disease.

Another research area is that of exosomes, tiny sac-like structures that are involved in cell-to-cell communication. Exosomes bud off the outer surfaces of cells and are found in body fluids including blood, saliva, and cerebrospinal fluid (the fluid found in the tissue that surrounds the brain and spinal cord). They carry DNA, RNA, and proteins from the cells from which they originate. Exosomes derived from a patient’s stem cells have a strong safety profile and are unlikely to provoke a strong immune reaction.

In addition, because their primary function is shuttling cargoes between cells, it is hoped that they may be used for patient-specific drug delivery in the future, which may prove to be a more successful approach than stem cell transplantation. Combined, these research directions raise the exciting possibility that the development of effective therapies for Alzheimer’s disease may not be far away.

The Link between Autophagy and Lupus Nephritis

What Is the Main Idea?

Lupus is a type of autoimmune disease that is hard to diagnose and is not well understood. In the open-access research article “Degradation of Ubiquitin-Editing Enzyme A20 following Autophagy Activation Promotes RNF168 Nuclear Translocation and NF-κB Activation in Lupus Nephritis”, published in the Journal of Innate Immunity, the authors discuss the role that a process called autophagy plays in the development and progression of kidney damage in patients with a form of lupus called systemic lupus erythematosus (SLE), and investigate the mechanisms involved.

What Else Can You Learn?

In this blog post, SLE and lupus nephritis are discussed. Autoimmune diseases and the processes of autophagy and ubiquitination are also described.

What Is an Autoimmune Disease?

When the body’s immune system is working correctly, it recognizes invaders like bacteria and viruses as “foreign”, and attacks them using white blood cells and antibodies. In contrast, it recognizes the body’s own cells as “self” or “not foreign” and does not attack them. Autoimmune diseases – like rheumatoid arthritis, Crohn’s disease, and lupus – develop when the body’s immune system mistakenly starts to recognize the body’s own tissue as foreign and attacks it. This can cause inflammation in tissues and organs that, over time, can lead to serious damage.

What Causes Lupus and What Are Its Symptoms?

The exact causes of lupus are unknown, but it is thought to be caused by a combination of genetic and environmental factors. A number of genetic mutations (changes in genes) have been reported that seem to be linked to a person being susceptible to developing lupus. Women are most likely to be affected by the disease, and there is some evidence that hormonal changes that occur during a woman’s lifetime (such as during puberty, pregnancy, and menopause) may play a role. Lupus can be difficult to diagnose because signs and symptoms can differ from one person to another. They can also vary in their severity and develop slowly or quickly.

There are several different types of lupus. Some only affect the skin but the most common type, called systemic lupus erythematosus (SLE), can affect many parts of the body. SLE is characterized by the release of autoantibodies that bind to contents of the cell nucleus (the part of the cell that houses the DNA and is where genes are activated), including double-stranded DNA. The most common symptoms are extreme fatigue or exhaustion, swelling of or pain in the muscles or joints, skin rashes (particularly on the wrists and hands, or a butterfly-shaped rash across the cheeks and nose), mouth ulcers that keep coming back, hair loss, and fever. In addition, a form of kidney disease called lupus nephritis can develop.

What Is Lupus Nephritis?

The kidneys help to control blood pressure and make red blood cells, and remove waste products and extra water from the body to make urine. Lupus nephritis develops when the immune system starts to attack the part of the kidney that filters the waste products out of your blood, called the glomeruli, and is estimated to affect around 50% of patients with SLE. Although it can often be successfully controlled, lupus nephritis can lead to kidney failure, where a person’s kidneys stop working and they need kidney replacement therapy (in the form of dialysis or kidney transplant) to survive. In addition, it can cause high blood pressure, which can increase the risk of stroke or heart attack. Symptoms of lupus nephritis include blood or protein in the urine, weight gain, and the extra fluid that the kidneys cannot remove causing swelling (known as “edema”) in body parts like your legs or ankles.

What Happens to Glomeruli When Lupus Nephritis Develops?

There are around 1 million glomeruli in each kidney. They are made up of bundles of looping blood vessels and several specialized types of epithelial cells (this is the name given to types of cell that cover the inside and outside surfaces of your body, such as the skin, the outer surfaces of organs and internal cavities, and blood vessels). When lupus nephritis develops the glomeruli stop working properly, partly because of swelling or scarring of the small blood vessels, but also because epithelial cells in the glomeruli do not function properly. One of the affected cell types is the podocyte. Podocytes are highly specialized cells that wrap around the outer surfaces of the blood vessels in the glomeruli and play an essential role in filtering the blood by stopping proteins from being filtered out. Exactly how podocytes become damaged in lupus nephritis is unknown, but it may be caused by a combination of genetic, inflammatory, and metabolic (the processes that convert food and drink to energy in the body) factors.

What Did This Study Investigate?

The number of cells in the body is tightly regulated and a number of processes exist that check that cells and the molecules inside them are functioning normally. There are also processes that repair or remove damaged cells and molecules if things go wrong. There have been some reports that one such process called “autophagy” is linked to the development of lupus nephritis. Autophagy, which means “self-eating”, is a process by which old and damaged proteins or parts of cells are broken down and destroyed. The breakdown products are then recycled inside the cell and reused, especially during periods of starvation or stress. Autophagy plays an essential role in the immune system because it helps to destroy bacteria or viruses and is involved in inflammation.

Autophagy is typically a protective process; however, its activity is tightly regulated because if too much autophagy is taking place, it can result in programmed cell death (a method by which the body gets rid of cells that have become damaged or are no longer needed). Similarly, if the level of autophagy activity in a cell is too low, faulty proteins and parts of cells are not removed and can contribute to the development of disease. Autophagy is known to be involved in autoimmune diseases and changes in the normal functioning of autophagy have been linked to the development of cancer. The authors of this study investigated how autophagy affects podocytes in lupus nephritis, particularly regarding its effects on the levels of two proteins called A20 and RNF168.

How Are A20 and RNF168 Linked to Lupus Nephritis?

A20 is an enzyme (a type of protein that speeds up a chemical reaction) that is involved in regulating a process called “ubiquitination”, where a small protein called ubiquitin is attached to a protein and acts as a tag indicating that something should happen to it (such as the activation of another process, that it should move from one part of the cell to another, or that the protein should be broken down). Abnormal levels or functioning of A20 is known to be involved in chronic inflammation and tissue damage.

RNF168 is another enzyme and is involved in the cell’s DNA damage repair process. It helps to repair breaks in double-stranded DNA by tagging histone proteins. Histones act as spools that the DNA winds around, making it more compact and forming the structure of chromosomes. They can also be marked with different types of tags that indicate whether a particular gene is “on” or “off”. In the case of RNF168, it tags histones with ubiquitin molecules near the sites of breaks in double-stranded DNA, enabling proteins that can repair the break to bind.

What Did the Study Show?

The results of the study showed that autophagy in podocytes is over-activated in lupus nephritis and that this leads to the activity of A20 being reduced. At the same time, the activity of RNF168 is increased, leading to increases in both the amounts of DNA damage in podocytes and the activation of a protein called NF-κB, which activates genes involved in inflammation. In contrast, when autophagy is inhibited (this means that something is slowed down or prevented from happening), levels of A20 increase and those of RNF168 decrease, leading to an increase in DNA damage repair. These findings suggest that increasing the level of DNA damage repair that takes place in podocytes may limit the damage that occurs as lupus nephritis progresses. They also raise the possibility of autophagy, A20, and RNF168 becoming future targets in the development of therapies for its treatment and prevention.

Reducing Neuroinflammation after Traumatic Brain Injury

What Is the Main Idea?

Long-lasting damage after a traumatic brain injury can be caused by excessive neuroinflammation in the brain. In the open access research article “MiR-124 Reduced Neuroinflammation after Traumatic Brain Injury by Inhibiting TRAF6”, published in the journal Neuroimmunomodulation, the authors discuss how the levels of a microRNA called miR-124 influence the extent of neuroinflammation after a traumatic brain injury and investigate the mechanisms involved.

What Else Can You Learn?

In this blog post, the effects of a traumatic brain injury on the brain and the role of neuroinflammation are discussed. The functions of RNAs, particularly microRNAs, are also described.

What Is Traumatic Brain Injury?

A traumatic brain injury can be caused by something piercing the skull and entering the brain tissue, or by a violent blow or jolt to the head or body (for example if a person is struck by an object or is involved in a vehicle accident). Although some traumatic brain injuries cause short-term or temporary problems, others can be fatal or lead to long-term disability. When a traumatic brain injury occurs, there are usually two phases of damage that affect the brain:

  • The first, “primary” phase happens immediately when the trauma takes place and may include bleeding, brain swelling, and damage to nerve fibers.
  • “Secondary” brain damage develops after the initial injury and may take hours or weeks to develop. Secondary damage can include an increase of pressure inside the skull (usually due to the brain swelling), reduced blood pressure or oxygen flow, a breakdown of the blood–brain barrier (which controls the movement of molecules and cells between the blood and the fluid that surrounds the nerve cells in the brain), and neuroinflammation.

What Is Neuroinflammation?

The term “neuroinflammation” describes inflammation (the process by which your body responds to an injury or a perceived threat, such as a bacterial infection) in the central nervous system (CNS; which consists of the brain and spinal cord). As with inflammation in the rest of the body, neuroinflammation is an essential process that plays a protective role after injury, exposure to toxins, or infection. However, neuroinflammation can be harmful if the level of inflammation is excessively high or it is activated for too long, and chronic (long-term or recurring) neuroinflammation is associated with the progression of neurodegenerative diseases such as multiple sclerosis, Parkinson’s disease, and Alzheimer’s disease.

Several processes are involved in neuroinflammation and microglia are one of the main cell types involved. They are specialized cells, making up around 10% of the total number of cells in the CNS, that regulate the development of the brain, maintain neuronal networks, and help repair injury. Microglia actively survey their environment and engulf foreign material, and dead or damaged cells, to prevent them from affecting other brain cells. They also produce cell signaling molecules called “cytokines” that can either promote or inhibit inflammation.

When infection or injury occurs, microglia become “activated”: the profile of genes that are expressed inside them changes rapidly and they begin to produce more pro-inflammatory cytokines (cytokines that promote inflammation) and other molecules. This is termed the “M1 phenotype” (the word “phenotype” means an observable characteristic) of microglia. Over time, microglia become “polarized” (changed) to the “M2 phenotype” and begin to secrete anti-inflammatory cytokines that reduce neuroinflammation and promote the repair of damaged tissue. The changes in gene activation that occur when microglia are activated and polarized can be detected by analyzing the levels of different types of RNA (ribonucleic acid).

What Is RNA?

Your genes are short sections of DNA (deoxyribonucleic acid) that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA. There are several different types of RNA, each with different functions, and they play important roles in normal cells and the development of disease.

Messenger RNAs are single-stranded copies of genes that are made when a gene is switched on (expressed). They carry messages regarding which proteins should be made to the cell’s protein-making machinery. In a cell, long strings of double-stranded DNA are coiled up as chromosomes in a part of the cell called the nucleus. Chromosomes are too big to move out of the nucleus to the part of the cell where proteins are made, but messenger RNA copies of genes are small enough to get through.

MicroRNAs are much smaller than messenger RNAs. They do not code for proteins and instead play important roles in regulating genes, for example by inhibiting (silencing) gene expression by binding to complementary sequences in messenger RNA molecules, stopping their “messages” from being read, and preventing the proteins they code for from being made. Some microRNAs also activate signaling pathways inside cells, turning processes on or off.

What Did the Research Article Investigate?

After a traumatic brain injury, inactive microglia become active and migrate to the regions of the brain that surround the sites of injury. They produce and release pro-inflammatory cytokines and recruit immune cells that are circulating in the bloodstream to enter the brain, which amplifies neuroinflammation. Because this can become a problem and lead to secondary brain damage, the authors of the study are interested in exploring whether excessive neuroinflammation can be inhibited in some way.

Recent studies have reported that if a molecule called TLR4 (toll-like receptor 4; a receptor molecule that is found in cell membranes and that causes cells to start producing pro-inflammatory cytokines when activated) is prevented from working in the brain in a targeted way, less neuroinflammation develops after a traumatic brain injury. There is also evidence that the levels of a microRNA called miR-124 may be linked to the activation of TLR4.

The authors of the study investigated how the levels of miR-124 changed after traumatic brain injury and found that its expression was reduced. Increasing miR-124’s expression promoted the polarization of microglia to the M2 phenotype, which reduces neuroinflammation. The activity of the TLR4 pathway was also reduced, and this was found to be because miR-124 inhibited a molecule inside the microglia called TRAF6, which is part of the signaling pathway that is activated by TLR4. If the signal that is produced when TLR4 is activated cannot travel along this pathway, the activation of pro-inflammatory genes is prevented and the chance of excessive neuroinflammation developing is reduced.

Research like this raises the possibility of treating traumatic brain injury more effectively in the future. If excessive microglial activation and, consequently, neuroinflammation can be prevented, for example by developing therapies that inhibit TLR4 or TRAF6, the risk of secondary brain damage in people who have a traumatic brain injury may be reduced, improving their chances of recovery.

Zinc Oversupplementation and Copper Deficiency

What Is the Main Idea?

Copper and zinc are essential trace nutrients that play important roles in the body. In the open-access case report “Copper Deficiency Mimicking Myelodysplastic Syndrome: Zinc Supplementation in the Setting of COVID-19”, published in the journal Case Reports in Oncology, the authors discuss how oversupplementing with zinc to prevent infection can cause copper deficiency, which can cause symptoms that are similar to a group of blood cancers called myelodysplastic syndrome.

What Else Can You Learn?

In this blog post, the roles of zinc and copper in the body, and the effects of not getting enough or too much, are discussed. The symptoms of myelodysplastic syndrome are also described.

Why Does the Body Need Copper?

Copper is classed as an “essential trace nutrient”, which means that the body needs small amounts to work properly. It is involved in important processes in the body that include making energy, absorbing iron, making red and white blood cells, keeping the immune and nervous systems healthy, making collagen (which plays an essential role in the structure and function of skin, bones, cartilage, and connective tissues), and brain development. It also acts as an antioxidant, which means that it is involved in reducing levels of molecules called “free radicals” that can damage cells and DNA, and that are produced in the body as part of its normal energy-producing processes.

How Do Our Bodies Get the Copper They Need?

Most people should be able to get all the copper their body needs by eating a balanced, healthy diet. Good dietary sources of copper include offal (such as beef liver), shellfish (such as oysters and mussels), nuts (such as cashews and almonds), seeds, chocolate, dark green leafy vegetables, legumes (beans and pulses), wholegrain breads and cereals, mushrooms, and sweet potato. Copper deficiency (defined as copper levels that are too low to meet the body’s needs, or a blood level below the normal range) is rare but can be treated. It usually affects people who have had some form of gastric bypass or intestinal surgery, or who have celiac or inflammatory bowel disease, because their bodies may be less able to absorb copper effectively from food.

How Much Copper Is Enough?

As with any nutrient, too little or too much copper can be harmful to the body. Guidelines regarding recommended daily intake vary by country, but are generally between 0.9 and 1.6 mg/day. Too much copper can cause symptoms that include stomach pain, nausea, diarrhea, and dizziness, and kidney and liver damage can occur if levels are too high for a long time. In contrast, copper deficiency can cause symptoms that include fatigue, decreased production of blood cells, lightened patches of skin, weak and brittle bones, increased risk of infection, and neurological symptoms such as numbness or tingling, difficulties with muscle coordination and balance, and signs of vision loss. Importantly, copper deficiency can present in the same way as myelodysplastic syndrome and is an important differential diagnosis (a disorder that could be causing the symptoms being experienced) in patients in whom myelodysplastic syndrome is suspected.

What Is Myelodysplastic Syndrome?

Myelodysplastic syndrome (also known as myelodysplasia) is the name given to a group of rare blood cancers that result in a person not having enough healthy blood cells. This is because their bone marrow (the part of the body that makes blood cells) makes blood cells that are abnormal (they do not form or do not work properly) and unable to mature. Over time, the number of immature blood cells in the bone marrow increases, preventing it from making enough healthy, mature blood cells, and the number of mature blood cells that can get into the bloodstream decreases. Myelodysplastic syndrome can develop slowly or quickly, and in some people can develop into a type of leukemia called acute myeloid leukemia. Symptoms vary from person to person (depending on which type(s) of blood cell have become reduced in the bloodstream) and can include frequent infections, weakness, tiredness, pale skin, shortness of breath, bruising and bleeding, and anemia. If a person is experiencing these symptoms, it could be that they have copper deficiency.

How Is Copper Deficiency Linked to Zinc?

A number of studies have shown that copper deficiency can be caused by “zinc overload” (taking too much zinc into the body). This is thought to be because excessively high levels of zinc cause copper to be removed from the body at an increased rate while the rate at which it is absorbed is decreased. Like copper, zinc is an essential trace nutrient. It is involved in metabolism (the process by which the body produces energy), wound healing, your sense of taste and smell, and the immune system. However, like copper, too little or too much zinc can be harmful. Symptoms of zinc deficiency include hair loss, eye and skin sores, and diarrhea. In the short term, a very high dose of zinc can cause nausea and vomiting, headache, and stomach ache and diarrhea, while high levels of zinc over a long period can reduce levels of “good” cholesterol, cause copper deficiency, and prevent the immune system from functioning properly. Zinc’s role in the immune system is particularly important here, because it is what leads some people to “oversupplement” with zinc, causing zinc overload.

How Does This Relate to COVID-19?

Because of its role in the normal functioning of the immune system, some people began taking zinc supplements during the COVID-19 pandemic in an attempt to prevent themselves from getting infected. Zinc supplements can be bought over the counter and are widely available, and there were reports in the media that zinc (among other things) could help prevent COVID-19 infection and reduce the severity of symptoms. Several case reports (a case report is a type of medical summary that outlines the signs and symptoms, diagnosis, treatment, and follow-up of an individual patient) have since been published describing patients who presented with symptoms that suggested myelodysplastic syndrome, but who were instead found to have copper deficiency caused by zinc overload as a result of taking high-concentration zinc supplements. In one case report, a woman had been taking eight times the recommended daily amount of zinc (which varies by country but is usually between 7 and 8 mg for women, and 9.5 and 11 mg for men) in an attempt to prevent COVID-19 infection.

The authors of this case report describe the case of a man with no pre-existing stomach or intestinal problems who presented with myelodysplastic syndrome symptoms that were found to be caused by copper deficiency. He had taken a zinc supplement of 50 mg/day for 6 months to prevent COVID-19 infection, but had stopped taking the supplement 2 months before presenting to his healthcare provider. After being advised not to take zinc and being started on copper supplementation, some of his symptoms disappeared and others improved.
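To put the doses above in perspective, the arithmetic can be worked through directly (a sketch only; the function name is hypothetical, and the reference values are the ranges quoted above):

```python
# Recommended daily zinc intake (mg), as quoted above
RDA_WOMEN = (7.0, 8.0)
RDA_MEN = (9.5, 11.0)

def times_rda(dose_mg, rda_range):
    """Express a daily dose as a multiple of the recommended
    daily amount, returning (best case, worst case)."""
    low, high = rda_range
    return dose_mg / high, dose_mg / low

# The 50 mg/day supplement taken by the man in this case report
# works out to roughly 4.5 to 5.3 times the recommended amount
lo, hi = times_rda(50, RDA_MEN)
print(f"{lo:.1f}x to {hi:.1f}x the recommended daily amount")
```

The same calculation for the woman in the earlier report (8× an RDA of 7–8 mg) implies a daily intake of around 56–64 mg, which is in the same range as the 50 mg/day dose described here.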

Take-Home Message

This case report emphasizes the importance of not over-supplementing. Most people are able to get all the copper and zinc, as well as other nutrients, that they need from a normal healthy diet. Good sources of zinc include meat, shellfish, poultry, nuts and seeds, wholegrains, and legumes, and most of these are also good sources of copper. If you choose to take supplements, check the label and make sure that you are staying within the recommended daily amounts for your country or region. If you take more than one supplement, check that, between them, they do not take you over the recommended daily amount for any particular nutrient. If you are concerned that you may have a nutrient deficiency, talk to your healthcare provider.

Heatwaves Caused by Climate Change: How Geomedicine Can Improve Health Outcomes

What Is the Main Idea?

Extreme climate events, such as heatwaves, have become more common because of climate change and place a heavy burden on health systems. In the open-access research article “Beyond Usual Geographical Scales of Analysis: Implications for Healthcare Management and Urban Planning”, published in the journal Portuguese Journal of Public Health, the authors discuss how geomedicine can be used to aid urban planning and the allocation of health resources to reduce the number of deaths during heatwaves.

What Else Can You Learn?

In this blog post, the effects of climate change on health are discussed with a particular focus on heatwaves. Geomedicine and how it can be used is also described.

What Is Climate Change?

Climate change is defined as long-term, large-scale shifts in weather patterns and average temperatures. Although shifts like these can occur naturally, as the result of volcanic activity or changes in the Sun’s activity, human activities over the last 200 years have had significant effects, mainly through the burning of fossil fuels like coal, gas, and oil. As a result, the Earth is now about 1.1 °C warmer than it was 100–150 years ago, and the last decade (2011–2020) was the warmest on record. This is causing environmental effects such as rising sea levels, intense droughts, water scarcity, and declining biodiversity (the variety of living organisms), and it also makes climate change an economic issue because it affects the availability of food and other resources.

How Does Climate Change Affect Our Health?

Climate change can affect human health in many ways. It can affect mental health through increased stress and anxiety, and extreme weather events can cause significant trauma. Rising sea levels and increased frequency of flooding can lead to people being displaced and increase the likelihood of water supplies becoming contaminated, which increases the spread of disease. Increasing droughts can decrease food production and the supply of water, and a warming climate also affects the numbers of biting insects, such as ticks and mosquitoes (both of which can spread disease), particularly in areas where their numbers had previously been low. Extreme climate events such as heatwaves have also become more common and are lasting longer, placing a heavy burden on health systems.

What Are the Health Effects of Heatwaves?

Heatwaves are known to cause increases in death rates and the numbers of people needing medical care. During a heatwave in Europe in 2003, more than 70,000 excess deaths (the number of deaths that was above the number expected over that time period) were reported. Excess heat increases pressure on the heart, lungs, and brain, increasing the risk of death from respiratory (relating to the breathing system), cerebrovascular (relating to the brain and its blood vessels), or cardiovascular (relating to the heart and blood vessels) problems.

Who Is Most at Risk during a Heatwave?

People with pre-existing health conditions, especially cardiovascular and respiratory diseases, and the elderly are particularly at risk. Over the last 20 years, the rate of elderly people dying from heat-related causes has increased significantly. Children under 1 year of age are also particularly vulnerable to heat and dehydration, as are people who do manual work outdoors, in whom an increased risk of chronic kidney disease has also been reported. There is also evidence that people living alone, living in areas that are more socioeconomically disadvantaged (defined as having less access to or control over economic, social, or material resources and opportunities), or living in urban environments such as city centers are at increased risk.

What Did This Study Investigate?

To be able to deal with the challenges that heatwaves cause, healthcare systems need to develop plans that ensure that those most at risk can access the support they need during a heatwave. Advances in geographic information systems have been shown to be useful in mapping how diseases are distributed and in identifying clusters or trends. These systems can also take environmental and socioeconomic factors, as well as the availability of medical facilities, into account when analyzing data. This area of research is termed “geomedicine”.

What Is Geomedicine and How Can It Improve Health Outcomes?

Geomedicine is based on the idea that good health does not come by accident. Instead, factors in our environment have an effect on our health, which means that the places where we live and work now and in the past affect our health status. By linking a person’s health status to geographic factors, such as a person’s address, geomedicine can provide health data that can help medical teams make diagnoses and better assess risk.

What Did the Authors Investigate?

In this study, the authors used an approach called “geocoding” to investigate how the scale of geographic information used in geomedical analysis affects the results. Geocoding involves defining a set of geographic coordinates, usually based on latitude and longitude, that correspond to a location. The authors argue that analyzing data by geocodes, which can specify a particular street, rather than by larger areas such as parishes or districts provides more accurate information about public health in those areas. This means that local authorities can prioritize resources for the areas with the greatest need.

In their study, the authors analyzed data on heat-related deaths linked to cardiorespiratory problems among elderly people in Portugal between 2014 and 2017. Each record included the house number, post code, and location of the person who died, which enabled it to be geocoded. Once geocoded, the data were generalized to the neighborhood level to protect the confidentiality of the people included.
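The geocode-then-generalize step described above can be sketched in a few lines of Python. The coordinates and the rounding-based grid below are purely illustrative assumptions (the study’s actual geocoding method and neighborhood boundaries are not described in this level of detail); the sketch only shows the general idea of coarsening exact address locations into larger cells before counting.

```python
from collections import Counter

# Hypothetical, illustrative records: each record carries the geocoded
# coordinates (latitude, longitude) of an address. These values are
# made up for the example, not taken from the study.
records = [
    (38.7223, -9.1393),
    (38.7225, -9.1390),
    (38.7368, -9.1421),
    (38.7370, -9.1418),
    (38.7224, -9.1395),
]

def generalize(lat, lon, precision=2):
    """Coarsen coordinates to a grid cell: a simple stand-in for
    aggregating exact addresses up to the neighborhood level so that
    no individual address appears in the published data."""
    return (round(lat, precision), round(lon, precision))

# Count records per generalized cell rather than per exact address.
counts = Counter(generalize(lat, lon) for lat, lon in records)
for cell, n in sorted(counts.items()):
    print(cell, n)
```

Rounding coordinates is the crudest possible form of generalization; a real analysis would assign each point to an official neighborhood or census polygon. The privacy principle is the same either way: counts are reported per area, never per exact address.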

The results showed that some neighborhoods with low cardiorespiratory death rates were located within parishes with high rates, while conversely, neighborhoods with high death rates were located within parishes with low rates. The authors therefore stress the importance of carrying out analyses at several different scales, and note that analysis by smaller administrative areas is preferable. Just as personalized medicine has the potential to revolutionize health, so does analyzing data by individual neighborhoods.

However, the authors also note the need for authorities to develop multisector responses to the challenges that climate change brings to “keep vulnerability to a minimum and increase the resilience of healthcare and urban planning”. By improving health information systems, it is possible that the accuracy of health outcome monitoring, spatial planning in urban areas, and the management of health resources may be improved.
