Early Signs of Atherosclerosis in Children with Atopic Dermatitis

What Is the Main Idea?

Atherosclerosis develops when our arteries begin to become narrowed or hardened. In the research article “Assessment of Subclinical Atherosclerosis in Children with Atopic Dermatitis”, published in the journal International Archives of Allergy and Immunology, the authors investigate whether early signs of developing atherosclerosis can be detected in children with atopic dermatitis and attempt to identify risk factors associated with both conditions.

What Else Can You Learn?

Atherosclerosis and its symptoms are described. Atopic dermatitis and the role of inflammation in the development of cardiovascular disease are also discussed.

What Is Atherosclerosis?

Atherosclerosis is a progressive disease that develops slowly when the arteries, a type of blood vessel that carries oxygen-rich blood from our heart to the organs and tissues around our body, become narrowed or hardened. It is caused by the buildup of fatty deposits called plaque, which consist of fats, cholesterol, and other substances. Over time, as the amount of plaque in the arteries increases, the narrowing makes it more difficult for the blood to flow freely and cardiovascular disease (a general term that is used to describe diseases that affect the heart or blood vessels) can develop.

Cardiovascular diseases that can be caused by atherosclerosis include:

  • peripheral arterial disease (where a blockage develops in the arteries that deliver blood to your limbs, usually the legs),
  • aortic disease (where the aorta, the body’s main artery, is unable to work properly),
  • stroke (where the blood supply to the brain becomes disrupted), and
  • coronary artery disease (where the coronary arteries, which are the main sources of blood supply to the heart, become narrowed or blocked), which can lead to angina or heart attack.

Although many people with atherosclerosis do not have any symptoms, some people experience pain in their chest, or in their arms or legs when exercising, a feeling of weakness and/or confusion, and may feel short of breath or tired most of the time.

What Causes Atherosclerosis?

Atherosclerosis can begin to develop in early childhood. High levels of fats and cholesterol in the blood are known to contribute because they make up some of the components of plaque. Damage or injury to the inner layers of arteries is also thought to be involved because the immune system responds to and seeks to repair the damage through a process called inflammation.

When inflammation is initiated, it causes blood cells and other substances to gather at the site of injury, and this can contribute to plaque starting to build up inside the arteries. Interestingly, there is evidence that the inflammation caused by inflammatory diseases such as rheumatoid arthritis, psoriasis, and inflammatory bowel disease can contribute to the development of atherosclerosis, and atopic dermatitis (more commonly known as eczema) may also be involved.

What Is Atopic Dermatitis?

Atopic dermatitis is an inflammatory skin condition that is usually long-term and recurrent, although in children it can improve or clear up completely as they get older. It causes the skin to be dry, cracked, itchy, and sore, and can range from occurring in small, localized patches to all over the body. Although the exact causes of atopic dermatitis are unknown, it is considered to be a systemic disease (a condition that affects the whole body rather than a single body part or organ) because the chronic inflammation that causes it often occurs in other organ systems as well as in the skin. It also often occurs in people who have allergies or asthma.

What Did This Study Investigate?

Because atopic dermatitis is an inflammatory disease and chronic inflammation has been linked to the development of atherosclerosis, it is possible that there is a relationship between having atopic dermatitis and developing cardiovascular disease later in life. Some studies have found that the systemic inflammation caused by atopic dermatitis may double the risk of cardiovascular disease. Some research has suggested that the two may have an indirect relationship, with atopic dermatitis causing risk factors linked to increased risk of cardiovascular disease, such as sleep problems caused by itching, inactivity, and the use of corticosteroid treatments.

However, other research has suggested that there is a direct relationship caused by the excessive inflammation in the body that is independent of other factors. Recent research has shown that the levels of molecules in the blood that are prognostic (in other words, they can be used to indicate how a condition is likely to progress) for atherosclerosis and damage to the arteries are increased in skin and blood serum samples from patients with atopic dermatitis.

Most studies to date looking at whether there is a link between atherosclerosis and atopic dermatitis have involved adult patients. Considering that atherosclerosis can start to develop in early childhood, the authors of this study investigated whether early signs of developing atherosclerosis can be detected in children with atopic dermatitis and attempted to identify risk factors associated with both conditions. They compared a group of children who had atopic dermatitis with a similar number of children without the disease who were alike in terms of factors such as age, weight, and height.

What Did the Study Show?

The results of the study showed that early signs of atherosclerosis were detectable in children with atopic dermatitis, with the length of time that they had had atopic dermatitis, the severity of their disease, and their age all associated with the likelihood of signs being present.

In particular, increases in a factor called carotid intima–media thickness were found to be associated with children having atopic dermatitis. Carotid intima–media thickness is calculated using a special type of ultrasound by measuring the thickness of the two innermost layers of the carotid arteries (the major arteries that supply blood to your brain, with one on each side of your neck), the intima and the media, and is used to assess whether atherosclerosis may be present. The greater the carotid intima–media thickness, the greater the likelihood that atherosclerosis is developing.

The authors of the study suggest that it may be important that children with atopic dermatitis be monitored for signs of atherosclerosis development and other risk factors that are known to be associated with cardiovascular disease. These include obesity, high levels of fats in the blood, and high blood pressure. Studies following the health of children with atopic dermatitis over longer periods of time are now needed to shed more light on the relationship between it and the development of atherosclerosis and cardiovascular disease.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Models of Childhood Glioma Contributing to Treatment Development

What Is the Main Idea?

Glioma is a type of tumor that develops in the nervous system, with differences between gliomas that develop in children and adults. In the open-access review article “Pediatric Glioma Models Provide Insights into Tumor Development and Future Therapeutic Strategies”, published in the journal Developmental Neuroscience, the authors summarize different experimental models that are being used to study glioma in children and how they may contribute to improvements in its treatment.

What Else Can You Learn?

Glioma and its symptoms are described. Differences in gliomas arising in children and adults, driver mutations, and the use of experimental models in cancer research are also discussed.

What Is Glioma?

Glioma is a type of tumor that is found in the nervous system. It usually develops in the brain but can also, in rare cases, develop in the spinal cord (a tube of nervous tissue that runs from the brain to the lower back). Glioma develops when glial cells begin to divide and grow out of control. There are different types of glial cell, and they play essential roles in the nervous system that support the function of neurons (cells that transmit messages from one part of the nervous system to another via electrical impulses), with glial cells sometimes being described as the “glue” that holds the nervous system together.

As well as surrounding neurons and holding them in place, glial cells create a myelin sheath around neurons that insulates their electrical impulses so that they can transmit messages effectively, a bit like the coating of an electrical wire. They supply oxygen and nutrients to neurons to keep them nourished, and regulate inflammation (an immune system process through which the body responds to an injury or a perceived threat, like a bacterial infection or damaged cells). They also form the blood–brain barrier, which is a barrier between the blood vessels in the brain and the other components that make up brain tissue that allows nutrients to reach the brain while preventing other things from entering it that could cause infections or damage.

What Are the Symptoms of Glioma?

There are different types of glioma, depending on the type of glial cell from which the glioma develops and the speed at which the tumor is growing. As a result, the symptoms and signs of glioma can vary between people, and are also affected by where the tumor is in the nervous system and its size. Common symptoms are:

  • headache, which may hurt more in the morning;
  • changes in mental function (such as problems with understanding information and memory) and personality;
  • feeling sick and vomiting;
  • problems with vision (such as blurred or double vision);
  • seizures, especially if the person has not had them before.

Gliomas can develop at any age, and although glioma is most commonly diagnosed in adults there are some types of glioma that are more common in children and young adults.

What Did This Article Look at?

Review articles survey the information that has been published on a topic to date. Rather than presenting new findings from their own research, the authors aim to clarify current thinking on a topic and the evidence that supports it, and sometimes set out suggestions for changes to what is considered to be best practice.

In this article, the authors review the different experimental models that are being used to study glioma in children and summarize how these models may contribute to improvements in its treatment. Experimental models use systems, such as the culturing of cells in a laboratory, to investigate processes that are thought to be involved in diseases and to evaluate new drugs that are being developed before they are assessed by going through the clinical trials process in humans.

There is a need for new treatments for glioma in children. Treatments that are currently the standard of care for childhood glioma have been chosen based on their effects on gliomas in adults, but adult and child gliomas that are high-grade (this means that they are “malignant”, growing in an uncontrolled way and able to spread to nearby tissues and other parts of the body) progress differently and have different underlying “driver mutations” (these are changes in genes in the tumor cells that give them a growth advantage and, as a result, promote the development of cancer).

Until recently, a lack of experimental models that could accurately recreate the environment in which gliomas form meant that efforts to study child gliomas were limited. However, the discovery of child glioma-specific driver mutations has enabled researchers to investigate the origins of these tumors, laying the foundation for the development of more appropriate and effective treatments.

What Experimental Models Are Being Used?

Over the last 10 years there have been major advances in the development of cell lines (a population of cells that can be grown and maintained in a laboratory) and models derived from tissue samples obtained from glioma patients during surgical procedures. Cell lines have the advantages of being relatively cost-effective and easily shared with other researchers, as well as being suitable for use in high-throughput screening (this is a process by which hundreds of samples of cells and hundreds of different potential drugs can be tested quickly, often using robotics).

Advances in stem cell engineering have also opened up new opportunities to investigate the development of glioma. Stem cells are unique in that they can self-renew, are either undifferentiated or only partially differentiated, and are the source of specialized cell types, like red blood cells and types of brain cell. Stem cells are useful in glioma research because they can be used to model tumor types from which it is difficult to obtain tissue samples or establish cell lines, which is the case for some types of glioma, and they can be controlled so that specific cell types and driver mutations can be investigated.

Organoids are three-dimensional tissue cultures that are grown from stem cells. Although cell cultures can be very useful in cancer research, they are usually grown as flat sheets of cells in tissue culture flasks and do not accurately represent all of the complicated interactions that take place between tumor cells and their environment in the body. This “tumor microenvironment” includes immune cells, signaling molecules, the matrix that surrounds cells in tissues and supports them, and the surrounding blood vessels. Tumors and their surrounding microenvironment constantly interact and it is known that they can influence each other.

Immune cells in the microenvironment can affect the growth and development of tumor cells, while a tumor can influence its microenvironment by releasing signaling molecules that promote the development of new blood vessels, which increases the supply of nutrients to the tumor and aids its ability to spread around the body, and by inhibiting and evading the immune system’s ability to recognize and destroy tumor cells. Using organoids enables elements of the tumor microenvironment to be incorporated into models of glioma so that the experiments more accurately mimic the situation in the body.

These model systems are enabling us to better understand how glioma develops. As our understanding increases, more features of glioma cells will be identified that can be targeted specifically by new treatments, increasing the range of therapies that can be used to treat glioma in children and improving the outcomes of patients.

Understanding Glutamate and Its Effects in the Brain

What Is the Main Idea?

Glutamate is the body’s main excitatory neurotransmitter, stimulating neurons to send signals around the body. In the free-access review article “Sex Hormones, Neurosteroids, and Glutamatergic Neurotransmission: A Review of the Literature”, published in the journal Neuroendocrinology, the authors summarize the current research evidence regarding whether or not there is a link between glutamate’s role as a neurotransmitter and the levels of sex hormones and neurosteroids in the body.

What Else Can You Learn?

The role of the amino acid glutamate as a neurotransmitter in the brain is discussed. Sex hormones and neurosteroids, amino acids, and the general purpose of review articles are also discussed.

What Is Glutamate?

Glutamate is a naturally occurring amino acid that is found in the food we eat and is also produced by the body. Amino acids are best known for being the component molecules that make up proteins, with the amino acids used and the order in which they are joined together in a protein influencing its functions, shape, and ability to interact with other molecules. If the order of the amino acids in a particular protein changes (for example, if the gene that codes for it becomes mutated), the protein produced may no longer be able to function properly, or even at all.

An example of this is when a single amino acid is changed in a protein called beta-globin because of a mutation in its coding gene. Beta-globin is a component of hemoglobin, which is found in red blood cells and is involved in carrying oxygen around the body. The single amino acid change creates a “sticky” patch on hemoglobin molecules that causes them to clump together and distort the red blood cells into a sickle shape, giving rise to a condition called sickle cell disease.

What Does Glutamate Do in the Body?

Glutamate plays several important roles in the body. It is a key component of metabolism, the process by which the food and drink that we consume is changed into energy, and can be broken down as an energy source in the brain when glucose levels are low. Glutamate is involved in the removal of excess nitrogen from our bodies via the production of urea (which is passed out of our bodies in urine). It is believed to be involved in the regulation of the sleep–wake cycle because levels are high during the rapid-eye-movement phase of sleep and when you are awake. Another major role of glutamate is as an “excitatory neurotransmitter”.

What Are Neurotransmitters?

Neurotransmitters carry chemical signals between neurons, a type of cell that transmits messages from one part of the brain and nervous system to another, and trigger an action or change in the target cell. This can be either “inhibitory” (it prevents or blocks the message from being transmitted any further), “modulatory” (it influences the effects of other neurotransmitters), or “excitatory” (it “excites” the target neuron, causing it to send the message on to the next cell).

Glutamate is the most abundant excitatory neurotransmitter in the human nervous system. It is involved in processes that take place in the brain such as memory and learning (it is estimated to be involved in more than 90% of the brain’s excitatory functions), and high levels of glutamate are also associated with increased pain levels. Glutamate is also converted into an important inhibitory neurotransmitter called gamma-aminobutyric acid (GABA) that is known as the “calming” neurotransmitter because it is involved in the regulation of anxiety, relaxation, and sleep. The process by which glutamate acts as a neurotransmitter is called “glutamatergic neurotransmission”.

What Are Sex Hormones and Neurosteroids?

Sex hormones are so called because they are critical in regulating the biological differences between males and females, and are particularly involved in reproduction and puberty (hormones are chemical messenger molecules that coordinate different processes and functions in the body). In humans, the key sex hormones are estrogen, progesterone, and testosterone. Neurosteroids are steroids that are produced in the brain or that have an effect on its functions (they can also act as signaling molecules). They are involved in a wide range of roles such as memory, learning, and behavior, as well as responses to stress and depression.

What Did This Article Look at?

Review articles survey the information that has been published on a topic to date. Rather than presenting new findings, they aim to clarify current thinking on a topic and the evidence that supports it, and sometimes set out suggestions for changes to what is considered to be best practice. Increasing numbers of research articles are being published that report a link between glutamate’s role as a neurotransmitter and the levels of sex hormones and neurosteroids in the body.

There is also evidence that changes to the regulation or levels of sex hormones and neurosteroids may be linked to the development of a range of neurological conditions. For example, dysregulation of glutamate’s role as a neurotransmitter has been linked to a number of disorders including epilepsy and post-traumatic stress disorder. It has also been linked to premenstrual dysphoric disorder, which is a severe form of premenstrual syndrome. It is therefore important that we gain a better understanding of how sex hormones and neurosteroids influence the normal functioning of the brain and identify any roles in the development of conditions that affect its function.

What Were the Review’s Findings?

The authors of the review concluded from the current evidence that sex hormones can directly affect glutamate’s role as a neurotransmitter. In particular, there was evidence that estrogens can be protective against excitotoxicity, which occurs when excessive or prolonged activation of neurotransmission, particularly if mediated by glutamate, has a negative effect on neurons, leading to their loss of function or death. This is particularly relevant to stroke, where loss of blood flow (known as “ischemia”) in a region of the brain can not only damage neurons directly, but can also affect glutamate transport, causing glutamate to accumulate to levels at which neurons die.

Other conditions known to be linked to excessively high levels of glutamate in the brain include Alzheimer’s disease, multiple sclerosis, Parkinson’s disease, and chronic fatigue syndrome. Equally, abnormally low levels of glutamate in the brain are linked to low energy, trouble concentrating, and insomnia. Estrogen levels in the brain have also been shown to be linked to memory function in several non-human species. Progesterone may also have a neuroprotective effect, although further research is needed to investigate the link.

There was some conflicting evidence regarding whether testosterone has a protective or negative effect on neurons, and a number of neurosteroids that are produced from the conversion of testosterone and progesterone may also play an independent role in altering the levels of glutamate in the brain. As we learn more about the relationships of sex hormones and neurosteroids with glutamate-mediated neurotransmission, it is hoped that we will gain new insights regarding how to prevent the development of disorders and treat them more effectively.

The Risk of Drug Interactions with Complementary and Alternative Medicines

What Is the Main Idea?

The use of biologically-based complementary and alternative medicines (CAMs) by patients with long-term health conditions is increasing. In the research article “Biologically-Based Complementary and Alternative Medicine Use in Breast Cancer Patients and Possible Drug-Drug Interactions”, published in the journal Breast Care, the authors describe how the use of biologically based CAMs by patients with breast cancer has the potential to cause drug interactions, both with anticancer medicines as part of a chemotherapy treatment and with each other.

What Else Can You Learn?

In this blog post, standard medical treatment for breast cancer and the possibility of drug interactions when medicines are taken together are discussed. Different types of complementary and alternative medicine are also described.

What Is Breast Cancer?

Breast cancer can start in one or both breasts. It develops when cells in the breast become abnormal, start to grow out of control, and begin to invade the surrounding tissue. Breast cancer cells can also spread to other areas of the body by being carried there by the blood and lymphatic systems. The lymph fluid that is transported around the body by the lymphatic system is an important part of the immune system. There are different types of breast cancer, with the exact type determined by which type of cells in the breast has become cancerous. Breast cancers are also classified on the basis of whether or not the cancer cells produce certain proteins or have changes (mutations) in specific genes. Genes are short sections of DNA that carry the genetic information for the growth, development, and function of your body.

How Is Breast Cancer Treated?

Treatments that have been assessed and accepted as effective treatments for particular diseases by the medical community are known as “standard medical treatments”. The standard medical treatments for breast cancer include surgery, chemotherapy, radiotherapy, hormone therapy, and targeted therapy.

  • Types of surgery that are used to treat breast cancer include breast-conserving surgery (where a cancerous lump is removed) and mastectomy (where a whole breast is removed).
  • Chemotherapy uses medicines that are “cytotoxic” (which means that they are toxic to cells, damaging them or causing them to die) to kill cancer cells. However, because cells in the body that are not cancerous can also be affected by chemotherapy medicines, many people who receive this type of treatment experience side effects. As this term is used to describe any unintended effects of a medicine, it can refer to beneficial and/or unfavorable effects.
  • Radiotherapy aims to kill cancer cells by using controlled doses of radiation.
  • Hormone therapy is used to lower the levels of the hormones estrogen and progesterone, which naturally circulate in the body, because some breast cancers develop the ability to be stimulated to grow by them.
  • Targeted therapy specifically targets molecules that cancer cells need to survive and spread.

What Is Complementary and Alternative Medicine?

The term “complementary and alternative medicine” (CAM) is an umbrella term that describes medical practices and products that are not part of standard medical care. Complementary medicine is used alongside standard medical treatment, whereas alternative medicine is used instead of standard medical treatment. A wide range of different types of products and practices are included in CAM that can be broadly divided into five groups.

  • Whole medical systems, such as ayurveda and naturopathy
  • Mind–body therapy, including meditation, yoga, and hypnotherapy
  • Manipulative and body-based practices, such as reflexology and massage
  • Energy healing, such as reiki
  • Biologically-based approaches, such as vitamins and dietary supplements, plants and plant extracts, and special foods or diets

The effectiveness and safety of most types of CAM approaches are less well understood than for standard medical treatment and more research is needed. However, while some CAM therapies have been shown to be generally safe and effective (such as acupuncture and yoga), some may be harmful and others may not work. Some may also cause drug interactions.

What Is a Drug Interaction?

A drug interaction happens when a medicine that is being taken by a person reacts with something else. Drug interactions can happen when one medicine reacts with another medicine or medicines, when a medicine reacts with something that the person is consuming (such as a herbal supplement or a particular food), or when another condition that the person has causes a medicine to produce side effects. When drug interactions occur, the results can range from mild side effects to a drug working less well or not at all. This means that a drug interaction has the potential to have a serious effect on the patient.

What Did the Study Investigate?

Advances in standard medical treatment for breast cancer have led to significant increases in 5- and 10-year survival rates in all countries in the European Union and in the UK in recent years. At the same time, health information has become more widely available and a large proportion of patients with long-term health conditions look for ways to improve their health and quality of life that fall outside of standard medical treatment.

Research has shown that the use of biologically-based CAMs is particularly popular among women with cancer, primarily because it is hoped that biologically-based CAMs can lessen the side effects of chemotherapy and strengthen the body against the effects of anticancer treatments. However, many of the biologically-based CAMs that people use carry the risk of drug interactions, and patients may begin taking them without consulting or notifying their medical team, making it difficult for any effects caused by drug interactions that do occur to be identified.

The authors of this study followed 47 patients with breast cancer as they began chemotherapy treatment, and asked them to complete questionnaires on their first day of treatment and again 10–12 weeks later. During this time period, 91% of the participants in the study reported that they used a biologically-based CAM, with the most popular types of biologically-based CAMs including the taking of vitamins, minerals, trace elements, and plants or plant extracts.

Drug interactions that had the potential to be clinically relevant (i.e., that could affect the effectiveness of a chemotherapy medicine or increase its toxicity in the body) were identified for 30 out of the 43 patients who reported using biologically-based CAMs. This was particularly true for patients who were using turmeric and ginger supplements together, which shows that the taking of more than one biologically-based CAM at once can cause drug interactions with each other, not just with anticancer medicines.

While the consumption of turmeric and ginger in food has generally been reported to have health benefits, they can both have a blood-thinning effect when high levels are consumed. This puts a person at risk of dangerous bleeding if they are also taking an anticoagulant (a type of medicine that prevents blood clots from forming). There are also some instances where drug interactions only occur if two substances are taken at the same time. In such cases, it is possible for a patient’s medical team to help put together a medication plan that avoids drug interactions by ensuring that the two medicines are taken at safe time intervals.

Take-Home Message

Although some biologically-based CAMs may have beneficial effects on the health of patients undergoing treatment for breast and other cancers, further studies are needed to identify potential interactions that can occur with chemotherapy drugs and with other biologically-based CAMs. If you are undergoing treatment for breast cancer, let your medical team know if you start to use a biologically-based CAM. This will enable them to monitor you for any potential drug interactions and will also add to the pool of knowledge regarding the best CAM options for patients undergoing anticancer treatment. There may also be known potential drug interactions that should be taken into consideration, and your medical team will be able to provide advice to help you support your standard medical treatment safely.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

The Benefits of Cognitive Activity on Brain Health

What Is the Main Idea?

Some changes in cognitive function are considered to be a normal part of aging, but others can indicate the presence of disease, such as dementia. In the research article “Cognitive Activity Is Associated with Cognitive Function over Time in a Diverse Group of Older Adults, Independent of Baseline Biomarkers”, published in the journal Neuroepidemiology, the authors investigate whether there is a relationship between a person’s level of cognitive activity, biomarkers in their blood that can indicate Alzheimer’s disease or dementia, and changes in their cognitive function in older age.

What Else Can You Learn?

In this blog post, changes in cognitive function as we age are described. Cognitive reserve and different forms of dementia are also discussed.

How Does Cognitive Function Change as We Age?

The term “cognitive function” describes a combination of processes that take place in the brain that enable us to learn, manipulate information, remember, and make judgements based on experience, thinking, and information from the senses. These processes affect every aspect of life and our overall health, including how we form impressions about things, fill in gaps in knowledge, and interact with the world.

Some changes in cognitive function that are considered to be a normal part of the aging process include difficulties with multitasking and sustaining attention, and an overall slowing of the speed at which we think. The ability to “hold information in mind”, which means the ability to think about something without steady input about it from the outside world, also tends to decrease.

In contrast, skills like verbal reasoning and vocabulary tend to increase or stay the same as we get older. Changes in cognitive function that are considered a normal part of aging are usually subtle over time; however, some people experience major changes in cognitive function that may indicate the development of a neurodegenerative disease caused by abnormal changes in the brain, such as dementia. The term “neurodegenerative” means the degeneration or death of neurons, a type of cell that transmits messages from one part of the brain and nervous system to another.

What Is Dementia?

Dementia mainly occurs in people aged over 65 years and covers a range of conditions with different causes. For example, vascular dementia develops when blood flow to one or more areas in the brain is blocked or reduced, preventing cells from getting the oxygen and nutrients that they need to function properly.

In contrast, Alzheimer’s disease is believed to be caused by the abnormal functioning of two proteins called beta-amyloid and tau. In people with Alzheimer’s disease, beta-amyloid forms clumps called “plaques” on neurons that make it hard for them to stay healthy and communicate with each other, while abnormal forms of tau cling to other tau proteins inside neurons and form “tau tangles”. People with dementia often experience declines in cognitive function that affect their memory and other thinking skills like language, problem-solving, attention, and reasoning. Their behavior, feelings, and relationships can also be affected, with significant effects on their daily lives.

What Did the Study Investigate?

It is well known that the extent to which a person engages in cognitive activity (mental tasks that require focus, reading, learning, creativity, memory, and/or reasoning) can affect their cognitive function as they age. There is strong evidence that people who are more cognitively active maintain higher levels of cognitive function over time than people who are less cognitively active, regardless of whether they develop a form of dementia. In other words, some brains keep working more efficiently than others despite them experiencing similar amounts of cognitive decline and/or damage. However, it remains unclear whether this is because cognitive activity directly benefits cognitive health or because people with declining cognitive function become less cognitively active.

What Is “Cognitive Reserve”?

The possibility that cognitive activity can positively affect our brain health relates to an idea called “cognitive reserve”. It suggests that people build up a reserve of cognitive abilities during their lives that can protect them against some of the cognitive decline that can happen as a result of aging or the development of diseases such as dementia. A person can increase their cognitive reserve through activities that engage their brain, such as learning a language or a new skill, solving puzzles, and high levels of social interaction, particularly if the activities are novel and varied. Regular physical activity, not smoking, and a healthy diet are also important.

The idea of cognitive reserve is supported by research that has shown that the relationship between cognitive activity and function in older age is not affected by the degree of abnormal brain changes. In other words, two people with Alzheimer’s disease may have similar levels of beta-amyloid plaques and tau tangles in their brains, but may differ regarding the extent to which their cognitive function has declined. Equally, two people who seem to have the same level of cognitive function may differ regarding the extent of abnormal change that has happened in their brains.

What Role Do Biomarkers Play?

The authors investigated whether there is a relationship between the levels of three biomarkers in the blood that can be used to predict and stage some types of dementia, including Alzheimer’s disease, and the extent to which a person’s level of cognitive activity affects their cognitive function as they age. Biomarkers are measurable characteristics, such as molecules in the blood or changes in genes (mutations), that can indicate whether the body is working normally or a disease is present.

In this study, the authors measured the levels of three biomarkers in blood samples: total tau, neurofilament light chain (NfL), and glial fibrillary acidic protein (GFAP).

  • As already mentioned, tau tangles are a characteristic of Alzheimer’s disease, and high levels of total tau (both normal and abnormal forms of tau) in the blood have been reported to be associated with increased risk of cognitive impairment.
  • High levels of NfL in the blood have been linked to neurodegeneration and there is evidence that it may be possible to use levels of NfL in the blood to detect whether a person has dementia.
  • Levels of GFAP in the blood have been shown to be increased early on in the development of Alzheimer’s disease. This can be used to determine whether a person has Alzheimer’s disease or frontotemporal dementia, which is a rarer type of dementia that affects the frontal and temporal lobes of the brain, the regions responsible for language, behavior, and emotions.

Who Participated in the Study?

The people who participated in the study were all aged 65 years or older, and one-third of the participants were randomly selected to give blood samples for biomarker testing at the start of the study. All of the participants reported how often they participated in cognitive activities that were judged to be common to older adults because they are not overly dependent on a person’s financial or social situation:

  • Watching television
  • Listening to the radio
  • Visiting a museum
  • Playing games or doing puzzles
  • Reading books
  • Reading magazines
  • Reading newspapers

Their cognitive function was also assessed at the start of the study and in 3-year cycles after that, using tests of short-term and immediate memory, perceptual speed, and language functioning.

What Did the Authors Find?

The authors of the study found that higher levels of cognitive activity were associated with better cognitive function not only at the start of the study, but also after an average of 6.4 years of follow-up, when the authors contacted participants at prearranged dates to check on their progress. However, the levels of the blood biomarkers did not affect this relationship. In other words, the benefits of high levels of cognitive activity on cognitive function were not affected by the levels of tau, NfL, and GFAP in the blood, even when they were present at high levels.

These results lend weight to the idea of cognitive reserve and suggest that people who engage in enriching activities throughout their lives may enter old age with a higher level of cognitive function, which can delay or reduce the effects of dementia and other neurodegenerative diseases on their quality of life.

Take-Home Message

Ensuring that we are cognitively active before we reach our 60s (i.e., before the age at which the study’s participants were initially assessed) may benefit our brain health and cognitive function as we age. The fact that the authors of the study did not find a link between the blood biomarkers and cognitive activity over time also suggests that people benefit from enrichment activities throughout their lives, including in their later years.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Factors Increasing Stroke Risk in Young Adults

What Is the Main Idea?

Although stroke is more common among the elderly it can happen at any age. In the research article “Risk Factors for Stroke in the Young (18–45 Years): A Case-Control Analysis of INTERSTROKE Data from 32 Countries”, published in the journal Neuroepidemiology, the authors describe how the main risk factors causing stroke in young adults have changed in recent years.

What Else Can You Learn?

In this blog post, the different types of stroke and their effects are described. Ways that you can reduce your risk of stroke and how case–control research studies are conducted are also discussed.

What Is Stroke?

Arteries are blood vessels that carry oxygen-rich blood from the heart to cells and organs throughout the body. Stroke is a disease that affects the arteries that lead to and pass through the brain. The oxygen and nutrients that brain cells need to function properly are carried around the brain by the blood. When stroke happens, the blood supply to part of the brain is cut off or reduced.

This can be caused by a blockage in an artery (this is called an “ischemic” stroke) or by an artery rupturing, causing bleeding in or around the brain (this is called a “hemorrhagic” stroke). The cells in the affected area of the brain can no longer get all the oxygen and nutrients they need and quickly begin to die. The bleeding can also cause irritation and swelling, and pressure can build up in surrounding tissues, which can increase the amount of damage in the brain.

As well as the two main types of stroke, some people experience “mini-strokes” called transient ischemic attacks (TIAs). A TIA is essentially a stroke caused by a temporary, short-term blockage of an artery. Once the blockage clears the symptoms stop. Although someone who has a TIA may feel better quickly they still need medical attention as soon as possible, because the TIA may be a warning sign that they will have a full stroke in the near future.

What Are the Effects of Stroke?

The effects of stroke differ from one person to another and depend on the severity, the area of the brain that is affected, and the type of stroke experienced. The main symptoms of stroke include one side of the face drooping or the person being unable to smile, not being able to lift both arms and keep them raised, difficulty understanding what others are saying, and slurred speech or not being able to talk.

Other symptoms include confusion or memory loss, numbness or weakness on one side of the body, a sudden fall or dizziness, sudden severe headache, and/or loss of sight or blurred vision (in one or both eyes). Although some people will have a full recovery after stroke, others will have permanent effects that do not get better.

What Causes Stroke?

There are some factors that are known to increase your chance of stroke. These include your age, ethnicity, having a close relative (a sibling, parent, or grandparent) who has had a stroke, especially if the stroke happened before they reached age 65 years, and having other conditions such as diabetes or a type of heart disease. Your arteries naturally become narrower as you get older, and blood clots that cause ischemic stroke often form in areas where arteries have become narrower or blocked over time as a result of the buildup of fatty deposits (a process called “atherosclerosis”).

Smoking, high levels of lipids (fats) in the blood (such as cholesterol and triglycerides), diabetes, drinking excessive amounts of alcohol (binge drinking), obesity, and high blood pressure (also called “hypertension”) can all speed up this process. High blood pressure is also the main cause of hemorrhagic stroke because it can weaken arteries in the brain. The roles of smoking, diabetes, high lipid levels, and high blood pressure in causing stroke are so well known that they are sometimes called “traditional” risk factors.

What Did This Study Investigate?

Although stroke is more common among the elderly it can happen at any age, even in infants. There is some evidence that the global incidence of stroke among younger and middle-aged people (aged 18–64 years) is increasing, with significant increases in low- and middle-income countries. As these countries have undergone economic changes, so too have the dietary and lifestyle habits of their inhabitants, resulting in increases in high blood pressure, diabetes, and obesity.

Although it used to be thought that rare, non-traditional risk factors (such as conditions that give a person a tendency to develop blood clots, or rheumatic heart disease) were mainly responsible for stroke in younger people, this may no longer be the case.

The authors of this study used data from a study called INTERSTROKE to assess whether traditional risk factors are now the main cause of stroke in people aged 18–45 years. INTERSTROKE was a case–control study that involved 142 centers located in 32 countries across the world between 2007 and 2015. A case–control study is a type of study that compares the medical and lifestyle histories of two different groups of people to identify risk factors that may be associated with a disease or condition:

  • one group of people with the disease being studied (cases) and
  • another similar group of people who do not have the disease (controls).

In INTERSTROKE, people who experienced their first acute stroke and who presented to medical professionals within 5 days of their symptoms beginning were matched with control participants based on their age and sex. In total, 1,582 pairs of participants were assessed.

What Did the Study Show?

As in older people, ischemic stroke was more common than hemorrhagic stroke in younger adults (accounting for 71% of cases). No statistically significant regional differences in risk factors were identified, although this may have been influenced by the low numbers of participants from individual regions. Traditional risk factors such as high blood pressure, high lipid levels, smoking, excessive alcohol consumption, obesity, and psychosocial stress (caused by our environment and relationships) were also shown to be significant risk factors for stroke in younger adults. High blood pressure was shown to be particularly significant, and was consistently identified as the strongest risk factor across all of the regions included in the study, different stroke types, and both sexes.

These results show that, worldwide, the traditional risk factors for stroke are now as important for younger adults as they are for older members of the population. The authors suggest that public health efforts that aim to identify and address traditional risk factors for stroke should start when people are in their 20s and 30s, which is much earlier than previously thought.

Take-Home Message

Taking steps to control your blood pressure and keep it low, whatever your age, can have significant health benefits that include reducing the risk of stroke. Eating a healthy diet that includes plenty of vegetables, wholegrains, fruit, some dairy products, fish, poultry, nuts, seeds, and beans, and reducing your consumption of sugars and red and processed meat can help. Stopping smoking, only drinking moderate amounts of alcohol (and avoiding binge drinking in particular), and being more active can also have significant positive effects on your health.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Can Eating Watermelon Trigger Migraine?

What Is the Main Idea?

The exact causes of migraine are unknown, but it is thought that migraine attacks develop as a result of abnormal brain activity. In the research article “Migraine Attacks Triggered by Ingestion of Watermelon”, published in the journal European Neurology, the authors describe how watermelon consumption may trigger migraine headache attacks by activating a process called the L-arginine-nitric oxide pathway.

What Else Can You Learn?

In this blog post, different types of migraine and what is known about how migraine attacks develop are described. The processes by which nerves transmit signals throughout the body and the L-arginine-nitric oxide pathway are also discussed.

What Is Migraine?

Migraine is often characterized as a headache that causes severe throbbing pain or a pulsing sensation, usually on one side of the head. However, there are different types of migraine and headache, and it can be difficult to tell them apart. Different people also experience different migraine symptoms.

Although many migraine attacks involve a severe throbbing headache, some people will experience migraine attacks without headache (known as silent migraine). When this happens, the person experiences “aura” symptoms such as flashing lights or seeing zigzag lines, but does not develop head pain.

Other people may experience migraine that includes severe head pain, with or without aura symptoms such as changes in their vision, numbness or tingling, feeling dizzy, having difficulty speaking, and feeling or being sick. Migraine attacks can last anywhere between several hours and three days, and some symptoms may start one or two days before the headache develops.

What Causes Migraine?

The exact causes of migraine are not known, although the fact that people are more likely to experience migraine if they have a close family member who gets migraines suggests that there is some sort of genetic involvement. It is thought that migraines develop when nerve signals, chemicals, and blood vessels in the brain are affected by abnormal brain activity.

Neurogenic inflammation (a type of inflammation caused when particular types of nerves are activated and release mediators of inflammation such as nitric oxide) and the widening of blood vessels in the membrane layers that protect the brain and spinal cord are believed by some researchers to be key causes of migraine headache. Leakage of blood plasma (the liquid component of blood that does not include blood cells) from blood vessels into the surrounding tissues may also be involved.

Nerves (also known as neurons), together with the spinal cord and brain, are key components of the nervous system and consist of bundles of nerve fibers wrapped up to form cable-like cells. Nerves send electrical signals that control our senses, like pain and touch, and essential processes such as breathing, digestion, and movement, from one part of the body to another. When an electrical signal reaches the end of a nerve it is converted into a chemical signal. This causes molecules called neurotransmitters, such as dopamine and epinephrine (also known as adrenaline), to be released into the space between the end of one nerve and the start of the next one, which is called a synapse.

Once they have crossed the synapse, the neurotransmitters bind to receptors on the new nerve, and the signal is converted back into an electrical signal and travels on along the neuron. The ability of nerves to transmit signals internally or between one nerve and another is dependent on a process called depolarization, which is essential to the function of many cells and communication between them. Most cells have an internal environment that is normally negatively charged compared with the cell’s external environment.

When depolarization occurs, the internal charge of the cell temporarily becomes more positive before returning back to normal. Migraine aura is thought to be caused by a wave of “spreading depolarization” in a part of the brain called the cortex. Nitric oxide and glutamate are released during spreading depolarization, and some studies have reported increased levels of nitric oxide during headache attacks. This has led some researchers to suggest that the pathways that break down nitric oxide may be involved in migraines.

What Did This Study Investigate?

Although the exact causes of migraine are still unclear, migraine attacks are known to be triggered by stress and tiredness, hormonal changes, prolonged fasting or skipping meals, and the consumption of too much alcohol or caffeine and certain foods. Watermelon is the main natural source of an amino acid (the component units that are joined together to make proteins) called L-citrulline (in fact, its name is derived from the scientific name for watermelon, Citrullus vulgaris).

L-citrulline is also made by the body in the liver and intestine, and is an important component of the urea cycle, the process by which toxic ammonia is converted into urea so that it can be passed out of the body in urine. L-citrulline in the body can be converted to another amino acid called L-arginine, from which nitric oxide is produced via a process called the L-arginine-nitric oxide pathway. This means that watermelon may be an indirect source of nitric oxide in the body and may trigger migraine in some people.

The authors of this study conducted a clinical trial to investigate whether eating watermelon causes headache attacks in people who experience migraine. They recruited 38 volunteers who experience migraine without aura and 38 who do not, and asked them to each consume a portion of watermelon after avoiding consumption of watermelon and other L-citrulline-containing foods in the preceding 7 days, and fasting for the preceding 8 hours.

All of the volunteers gave blood samples before and after eating the watermelon to enable the researchers to assess whether there were any changes in blood serum nitrite levels (nitrite is produced by the breakdown of L-citrulline). All of the volunteers were then followed up for 24 hours by telephone, so that the researchers could be informed if any of them developed headache.

What Were the Results of the Study?

Headache was triggered in almost one-quarter of the people in the group who experienced migraine (23.7%), on average around 2 hours after the watermelon was consumed. In contrast, none of the volunteers in the migraine-free group developed headache over the 24-hour follow-up period. Interestingly, around one-quarter of the volunteers in both the migraine (23.4%) and migraine-free (24.3%) groups were shown to have increased nitrite levels in their blood serum samples after consuming watermelon. These increases from the values recorded before watermelon consumption were statistically significant.

These findings suggest that eating watermelon can trigger headache attacks in people who experience migraine and increase serum nitrite levels, which may be due to activation of the L-arginine-nitric oxide pathway. Although everyone is different and not all of the migraine group volunteers developed headache after consuming watermelon, people who experience migraine may wish to consider reducing or avoiding consumption of watermelon.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Emerging Treatments for Ulcerative Colitis

What Is the Main Idea?

The treatment of ulcerative colitis has traditionally focused on the control of symptoms. In the review article “Current and Emerging Targeted Therapies for Ulcerative Colitis”, published in the journal Visceral Medicine, the authors describe how advances in targeted treatments have the potential to improve the quality of life of people with ulcerative colitis.

What Else Can You Learn?

In this blog post, ulcerative colitis and emerging treatments for it are described. Different phases of clinical trials are also discussed.

What Is Ulcerative Colitis?

Ulcerative colitis is a form of inflammatory bowel disease. People with ulcerative colitis have chronic (long-term) inflammation and ulcers (sores) in the colon (also known as the large bowel and part of the large intestine, it removes water and some nutrients from partially digested food before the remaining waste is passed out of the body).

For many people with ulcerative colitis, the disease follows a “relapsing and remitting” course, which means that there will be times when their symptoms get worse and others when their symptoms partly or completely go away. Symptoms of ulcerative colitis include needing to go to the toilet frequently and urgently, abdominal pain, a general feeling of being unwell, and fatigue, which can combine to have a major impact on a person’s quality of life and ability to work.

What Causes Ulcerative Colitis?

The exact causes of ulcerative colitis are not fully understood, but it is known that a combination of factors cause inflammation to be activated by the immune system. Inflammation is a normal process through which your body responds to an injury or a perceived threat, such as a bacterial infection. In ulcerative colitis, a high level of inflammation taking place for too long results in tissue damage in the colon and disease-related complications that cause the symptoms described above.

Ulcerative colitis is thought by some to be an autoimmune condition, which means that the body’s immune system wrongly attacks normal, healthy tissue. The intestines contain hundreds of different species of bacteria, which are part of the “gut microbiome” (the term given to all of the microorganisms that live in the intestines and their genetic material). Although some of these species can cause illness, many are essential to our health and wellbeing, playing key roles in digestion, metabolism (the chemical reactions in the body that produce energy from food), regulation of the immune system, and mood.

Some researchers believe that in ulcerative colitis, the immune system may mistakenly identify harmless bacteria inside the colon as a threat and start to attack them, causing the colon to become inflamed. Genetic factors like changes in genes and environmental factors are also known to be involved in the development of ulcerative colitis, and recent advances in our understanding have enabled new targeted therapies to be developed that selectively block or reduce the activity of components involved in inflammation.

Treatment of ulcerative colitis has traditionally focused on symptom control, whereas the development of new targeted treatments aims to achieve remission (the signs and symptoms of disease are reduced either partially or completely) and the restoration of people’s quality of life. A number of new treatments are in phase 2 or 3 clinical trials and may soon add to the range of treatments available to people with ulcerative colitis.

What Are the Different Types of Clinical Trials?

To be approved, a treatment must be proven to be safe and better than existing treatments. New treatments have to successfully go through several phases of clinical trials before they are approved for use, and cannot move on to the next phase unless the current phase has yielded positive results. Phase 0 and phase 1 trials are the earliest-phase trials. They usually involve a small number of people (up to around 50) and aim to determine whether a treatment is safe and, if the treatment involves a drug being given, what happens to it in the body.

Once found to be safe, treatments enter larger phase 2 trials (usually up to 100 people), where they are assessed as treatments for specific illnesses and any side effects (unintended effects of the drug) are investigated in more detail. Phase 3 trials include hundreds or thousands of people and test new treatments against an existing treatment to see whether the new treatment is better. Phase 3 trials are randomized and often take place over several years so that the long-lasting effects of the new treatment can be assessed.

Emerging Therapies for Ulcerative Colitis

Interleukin-23 (IL-23)

A protein called interleukin-23 (IL-23) is known to inhibit the responses of a type of white blood cell called regulatory T cells. These cells play an important role in the body by suppressing the response of the immune system, ensuring that its normal level of activity remains within set limits and that its activity is reduced once a threat has been dealt with. They are also critical in preventing the development of autoimmunity.

When IL-23 inhibits regulatory T cells, inflammation is able to continue unchecked. A particular form of IL-23 called IL-23p19 has been identified as being involved in the development of ulcerative colitis. Four IL-23p19 inhibitors are currently in or have completed phase 2 or 3 trials. They appear to be particularly effective in patients whose ulcerative colitis has become resistant to treatment with tumor necrosis factor (TNF) inhibitors, and their effectiveness in combination with TNF inhibitors is also being investigated.

S1P

S1P (sphingosine 1-phosphate) is a type of molecule called a “lipid mediator”. It is produced in response to a cell receiving a stimulus and is then exported from the cell so that it can bind to receptors and transmit a signal to target cells. S1P binds to five different S1P receptors expressed on various types of immune cell, enabling lymphocytes (cells that make antibodies and help control the immune system) to travel toward inflamed tissue in the intestine. Drugs that bind to S1P receptors and cause them to be internalized back into the cell and broken down are called S1P agonists. One S1P agonist has already been approved for the treatment of ulcerative colitis and another is in clinical development.

Toll-Like Receptor 9 (TLR-9)

A receptor called Toll-like receptor 9 (TLR-9), which is found inside cells, recognizes and binds to bacterial and viral DNA present there. It does this by recognizing components called CpG motifs, which are made of a cytosine and a guanine bound together (these are two of the four components of DNA that make up the “genetic code”). CpG motifs are known to be the components of bacterial and viral DNA that cause the immune system to be activated.

As a result, some researchers are investigating the use of short, single-stranded synthetic stretches of DNA (called CpG oligonucleotides) to stimulate the immune system. One such molecule, which activates TLR-9 on target cells, has been shown in clinical trials to suppress immune cells that promote inflammation and to activate immune cells that suppress it, and is undergoing further testing.

microRNAs

Another approach under investigation is the potential use of microRNAs. Your genes are short sections of DNA that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA. There are several different types of RNA, each with different functions, and they play important roles in normal cells and in the development of disease. MicroRNAs are small RNA molecules that do not code for proteins and instead play important roles in regulating genes, for example by inhibiting (silencing) gene expression.

Some microRNAs also activate signaling pathways inside cells, turning processes on or off. One such microRNA is miR-124, which negatively regulates inflammation. Reduced expression levels of miR-124 have been reported in studies of patients with ulcerative colitis, and a treatment that has been designed to upregulate miR-124 is currently in clinical trials involving patients with a variety of inflammatory diseases, including ulcerative colitis and rheumatoid arthritis.

Interleukin-6 (IL-6)

Interleukin-6 (IL-6) is another molecule that promotes inflammation and has been shown to play a central role in the development of inflammatory bowel disease. The binding of IL-6 to its receptor results in an uncontrolled accumulation of activated T cells, which stops inflammation from resolving. Results of a phase 2 trial investigating an IL-6 inhibitor have been positive, and it will be investigated further to assess its safety and efficacy in treating ulcerative colitis.

Take-Home Message

It is hoped that the emerging treatments described above, and others, will increase the options available to patients with ulcerative colitis. In addition, their investigation will continue to improve our understanding of how ulcerative colitis is caused, enabling further targeted therapies to be developed and opening up the possibility of personalizing each patient’s treatment.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available. Furthermore, in the Conflict of Interest Statement at the end of this paper, the authors make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest.

Treatment of Neurological Disorders: How Systematic Reviews Help Guide Research

What Is the Main Idea?

Intravenous immunoglobulin is a treatment product that is used to treat a variety of neurological conditions. In the free access research article “Adverse Reactions Associated with Intravenous Immunoglobulin Administration in the Treatment of Neurological Disorders: A Systematic Review”, published in International Archives of Allergy and Immunology, the authors discuss how they conducted a systematic review to determine whether any particular characteristics of neurological disorders are associated with an increased chance of patients experiencing adverse reactions if they are treated with intravenous immunoglobulin.

What Else Can You Learn?

In this blog post, the use of systematic reviews to evaluate what is known about specific research questions is discussed. Intravenous immunoglobulin and antibodies are also described.

What Is a Systematic Review?

A systematic review is a type of research study that seeks to summarize all of the available primary research (i.e., research that has collected data first-hand) that has been conducted to answer a research question. It involves a systematic search for data using a specific, repeatable method with a clearly defined set of objectives. The search is usually conducted using databases that hold information about research publications and aims to identify all studies within them that meet predefined eligibility criteria.

The validity of the findings for each study is then assessed, particularly regarding whether there is any risk that the results may be biased, following which the results are considered together and any conclusions drawn. Systematic reviews enable up-to-date assessment of what is known about a subject and are often used in the development and updating of clinical guidelines.

What Did This Study Investigate?

The authors of this study conducted a systematic review to summarize the results of studies that have reported adverse reactions when patients with neurological disorders – conditions that affect the brain, spinal cord, and/or nerves throughout the body – are treated with intravenous immunoglobulin. Intravenous immunoglobulin is a product that is made up of different human antibodies (immunoglobulins is another word for antibodies) that have been pooled together and are given intravenously (through a vein).

Antibodies are specialized protective proteins that are made by the immune system and recognize anything that is foreign to the body (these are called “antigens”), like bacteria and viruses. Different antibodies specifically recognize and neutralize different antigens and, once they have recognized and responded to a particular antigen once, antibodies against that antigen continue to circulate in the blood to provide protection against it if it is encountered again (this is how we become immune to some diseases).

Because intravenous immunoglobulin is prepared from blood samples donated by a large number of different people (depending on the manufacturer, the number of donors can be between 1,000 and 100,000), it contains a diverse collection of antibodies against a broad range of antigens, reflecting the environmental exposures of everyone who donated blood. As a result, intravenous immunoglobulin can be effective in preventing or treating infections in people who are unable to make enough antibodies (known as “humoral immunodeficiency”) or who have an autoimmune disease (where the body mistakenly recognizes a cell type or specific protein in the body as foreign, treats it as an antigen, and attacks it).

Although a large number of clinical trials have reported that treatment with intravenous immunoglobulin is safe and generally well tolerated, some patients experience adverse reactions (an undesired effect of the treatment). The authors of this study therefore set out to systematically review studies that have reported adverse reactions to intravenous immunoglobulin therapy when it is used to treat more than one neurological disorder, to investigate whether any particular characteristics of individual neurological disorders are associated with patients experiencing adverse reactions.

How Was the Study Conducted?

The authors of the study searched three electronic databases for all research studies published up until the date of their search, using the following combination of search terms (a sketch of how terms like these can be combined into a single query follows the list):

  • IVIg (the acronym for intravenous immunoglobulin), intravenous immunoglobulin, or immunoglobulin G (the type of immunoglobulin that makes up the greatest proportion of intravenous immunoglobulin), and
  • any term beginning with “neurolog”, and
  • adverse reaction, adverse effect, side effect, or any term beginning with “allerg”.
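As an illustration only, the sketch below (in Python) shows how term groups like those above might be combined into a single Boolean query string. The grouping, quoting, and wildcard syntax are assumptions for demonstration; the exact syntax varies by database, and this is not the authors’ actual search string.

```python
# Hypothetical sketch of a Boolean database query combining the three term groups.
# Syntax varies by database; "*" is a wildcard matching any word beginning with the stem.
ivig_terms = '("IVIg" OR "intravenous immunoglobulin" OR "immunoglobulin G")'
neuro_terms = "(neurolog*)"  # any term beginning with "neurolog"
reaction_terms = '("adverse reaction" OR "adverse effect" OR "side effect" OR allerg*)'

# All three groups must match, so they are joined with AND.
query = f"{ivig_terms} AND {neuro_terms} AND {reaction_terms}"
print(query)
```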

Articles were then included in the review if they described primary research, reported adverse reactions to intravenous immunoglobulin therapy in more than one neurological disorder, and were available as full-text publications in English. Although 2,196 studies were identified initially, only 65 met all of the eligibility criteria and were included in the final analysis.

What Did the Study Find?

After systematically reviewing the eligible studies, the authors of this study reported that when the results from all the studies were combined, the chance of patients developing an adverse reaction was estimated to be between 24 and 34%. In many studies the definition of specific adverse reactions was unclear or not specified. In addition, a large proportion of studies were conducted retrospectively, which increased the chance of selection bias. Selection bias is introduced when a group of patients is selected for analysis in a way that does not allow the sample population to be truly randomized. This means that the sample isn’t representative of the population as a whole, potentially leading to errors when researchers draw conclusions about associations or outcomes.

Overall, there was a lack of high-quality comparative data (data that can be used to estimate the extent of similarity or dissimilarity between two things), which made it difficult for the authors to determine whether any specific neurological symptoms or signs are associated with an increased risk of adverse reactions to intravenous immunoglobulin therapy. Although intravenous immunoglobulin treatment was found to be generally well tolerated by patients with neurological conditions, headache was a common adverse reaction and there were some reports of “thromboembolic” complications (caused by the obstruction of a blood vessel by a blood clot that has become dislodged from another site in the circulatory system, which circulates blood and lymph fluid through the body).

The authors concluded that an increased risk of adverse reactions was likely in patients with limited mobility (as seen in some conditions that affect both nerves and muscles), paraproteinemia (which occurs when an abnormal protein called a paraprotein is secreted by a population of antibody-producing cells, as seen in some conditions where nerve damage causes pain, weakness, or numbness, often in the hands, arms, and feet), or cardiomyopathy (a general term that describes problems with your heart that make it harder for it to pump blood). They also found some evidence that children might be at increased risk of experiencing adverse reactions.

Although the systematic review was unable to identify neurological disease characteristics that are definitely associated with adverse reactions in patients treated with intravenous immunoglobulin, the knowledge gained from this study can be used to guide the design of research studies in the future. Systematic reviews like this one play a key role in shaping future research directions by identifying areas relating to research questions that remain poorly understood or that need further investigation because different studies have reported conflicting results. This increases the chance of positive discoveries in the future that may improve the prevention and treatment of adverse reactions.

Precision Medicine to Optimize Treatment of Cholangiocarcinoma

What Is the Main Idea?

Cholangiocarcinoma is a type of cancer that is exceedingly rare in children and, as a result, there is no standard treatment protocol. In the open access research article “Identification of a Novel NRG1 Fusion with Targeted Therapeutic Implications in Locally Advanced Pediatric Cholangiocarcinoma: A Case Report”, published in Case Reports in Oncology, the authors discuss the case of a 16-year-old girl and their use of a “precision medicine” approach to optimize her treatment plan.

What Else Can You Learn?

In this blog post, the use of precision medicine approaches to treat cancer is discussed. The symptoms of cholangiocarcinoma and the role of bile ducts in the digestive system are also described.

What Is Cholangiocarcinoma?

Cholangiocarcinoma is the name given to a group of cancers that form in the bile ducts. The bile ducts are part of the digestive system and are small tubes that connect the liver to the gallbladder and small intestine. The liver has a number of roles including cleaning the blood to remove harmful substances and metabolizing proteins, fats, and carbohydrates so your body can use them. The liver also makes a fluid called bile that helps the body to break down fats from food. Bile can be stored in the gallbladder or can travel directly from the liver to the small intestine. Most of the digestive process takes place in the small intestine and it is here that nutrients and minerals from our food are absorbed into the blood.

Cholangiocarcinoma is divided into three types based on where the cancer develops in the bile ducts:

  • Intrahepatic cholangiocarcinoma starts in parts of the bile ducts that are inside the liver (“intrahepatic” literally means “inside the liver”).
  • The other two types of cholangiocarcinoma are extrahepatic (meaning that they start outside the liver). Hilar (also sometimes known as “perihilar”) cholangiocarcinoma starts just outside the liver, where the right and left bile ducts join to form the common hepatic duct (which is the area of bile duct before the gallbladder). Distal bile duct cholangiocarcinoma starts in the common bile duct, where the ducts from the liver and gallbladder join together, which passes through the pancreas and ends in the small intestine.

What Are the Signs and Symptoms of Cholangiocarcinoma?

Most people with cholangiocarcinoma don’t have any symptoms when the cancer first starts to develop, and only start to experience symptoms when the cancer is at an advanced stage and the flow of bile from the liver becomes blocked. It is when this happens and the bile starts to move back into the blood and body tissue that signs and symptoms start to develop. These can include jaundice (yellowing of the skin and the whites of your eyes), dark urine and/or white-colored stools, itchy skin, pain in the stomach area (usually in the upper right-hand side), loss of appetite, and non-specific symptoms such as fever, night sweats, fatigue, and losing weight without trying. These signs and symptoms can also be caused by other conditions so if you have any of them it is important that you consult a medical professional.

What Causes Cholangiocarcinoma?

Cholangiocarcinoma is rare. According to the National Cancer Institute it affects fewer than 6 in 10,000 people worldwide each year, although it is more common in some countries than others. It is not yet clear why some cholangiocarcinomas develop, but some factors that have been identified as increasing a person’s risk include having primary sclerosing cholangitis (a rare type of liver disease that causes long-term inflammation of the liver and hardening and scarring of the bile ducts) and liver cirrhosis (permanent scarring of the liver tissue caused by damage), bile duct problems that are present at birth (such as Caroli’s disease and choledochal cysts), liver fluke infection in areas of Southeast Asia, and biliary stones (these are similar to gallstones but form in the liver). Some DNA changes that cause inherited conditions, including Lynch syndrome and cystic fibrosis, are also associated with increased risk.

Cholangiocarcinoma is slightly more common in men than women and the risk of developing it increases with age. Although most people who are diagnosed with cholangiocarcinoma are over the age of 65, it can occur at any age. Cholangiocarcinoma does occur in children but is exceedingly rare, with some analyses concluding that fewer than 22 cases have been reported in the last 40 years; the majority of the children who developed the cancer had a gastrointestinal disorder that is linked to its development. Because of its rarity, there is no standard treatment protocol for children diagnosed with cholangiocarcinoma.

What Did This Study Investigate?

The authors of this study describe the case of a 16-year-old girl who was diagnosed with advanced hilar cholangiocarcinoma. The authors used a “precision medicine” approach for her treatment plan, which aims to optimize the efficiency of treatment by using genetic analysis (such as DNA sequencing) or molecular profiling (laboratory analysis of tissue, blood, or fluid samples to check for certain genes, proteins, or other molecules). Specific information about a person’s cancer can then be used to help make a diagnosis, develop a targeted treatment plan, or find out how well a treatment is working.

Precision medicine is increasingly being used in the treatment of cholangiocarcinoma. A number of mutated genes have been identified in samples from cholangiocarcinomas that belong to a family of genes called “oncogenes”. Oncogenes are genes that are involved in normal cell growth and division, but can cause cancer if they become altered by changes that cause there to be too many copies of the gene or result in it being more active than normal.

The genetic changes in oncogenes that cause them to become activated result in the proteins that they code for being slightly different from the proteins that would be made if they had not become altered. These differences are being exploited by scientists to develop targeted treatments that only attack cancer cells. In this study, the authors describe how DNA sequencing of tissue from the patient’s tumor that was obtained from biopsy sampling during the diagnostic process showed that the tumor had a genetic change called an “oncogenic gene fusion”.

This means that part of the tumor cell DNA had become structurally rearranged, causing an area on chromosome 1 to become fused with an area on chromosome 8 (humans have 23 pairs of chromosomes), producing a hybrid gene and leading to the activity of a gene called neuregulin-1 (NRG1) becoming dysregulated. NRG1 fusions have been found in a number of cancer types, including lung cancer, and have been estimated to occur in 0.8% of cholangiocarcinomas. They lead to the NRG1 protein expressed at the cell surface binding to a protein called ERBB3 (short for “erythroblastic oncogene B 3”, also known as human epidermal growth factor receptor 3 or HER3).

This causes ERBB3 to bind with ERBB2 (also known as HER2, an oncogene that has been shown to play an important role in the development and progression of certain types of aggressive breast cancer). The two proteins form dimers (a complex made up of two molecules linked together), which activate signaling pathways in cells that drive abnormal cell proliferation. This can contribute to the growth of a tumor.

Having identified that the patient’s tumor DNA had an NRG1 fusion, the authors were able to treat her with a combination of conventional chemotherapy and radiotherapy, followed by targeted treatment with a drug that specifically blocks signaling by members of the ERBB family of proteins. The treatment with the targeted drug was able to slow the growth of the tumor. In addition, the genetic changes that were identified in the patient’s tumor DNA meant that she was eligible to take part in a clinical trial, which she would not have been able to do if her tumor DNA had not been sequenced.

This case highlights how the increasing use of precision medicine approaches and targeted therapies can improve the quality of life of patients with rare cancers.

Tooth Erosion and the Acidity of Soft Drinks

What Is the Main Idea?

Many soft drinks, whether they are sugary or artificially sweetened, are acidic. In the open access research article “Erosive Potential of Various Beverages in the United Arab Emirates: pH Assessment”, published in the Dubai Medical Journal, the authors discuss the results of their investigation into the acidity of soft drinks available in the United Arab Emirates (UAE) and the effects that drinking soft drinks can have on dental health.

What Else Can You Learn?

In this blog post, the effects of acidic soft drinks on the teeth are discussed. The structure of the teeth and the pH scale are also described.

What Is Dental Erosion?

Teeth are essential components of the digestive system, enabling us to cut and grind our food into smaller pieces so that we can swallow it more easily. Each tooth consists of four main layers. The tooth pulp is the innermost layer and contains connective tissue, blood vessels, and nerves. Cementum is a layer that covers the root of the tooth (the part of the tooth that is not exposed to the environment inside the mouth) and helps to anchor it in the jaw. Dentin is the main supporting structure of the tooth and is made of a bone-like matrix that protects the nerves in the pulp. It sits directly under the final layer, the enamel.

Tooth enamel forms a shiny, hard protective layer around the crown (the part of the tooth that is exposed above the gums) to protect it from damage and the effects of bacteria in the mouth that can cause small openings or holes called “cavities”. It is highly mineralized, with 95% of it consisting of calcium and phosphorus bound together in small crystals called hydroxyapatite that are extremely strong.

Although enamel is the hardest substance in the body it is unable to regenerate if it becomes damaged because there are no living cells in the tooth to replace it. Physical factors such as everyday wear and tear and teeth grinding can contribute to dental erosion (the gradual destruction of tooth enamel), leading to the inner layers of the teeth becoming exposed and increasing the chance of cavities developing. Chemical factors can also cause dental erosion. For example, sugary foods can interact with bacteria in dental plaque (a sticky substance that continuously builds up on the teeth) leading to the production of acid.

How Does pH Affect Tooth Enamel?

pH is a numerical scale, ranging from 0 to 14, that describes how acidic or alkaline a substance is. A pH of 7 describes a substance that is neutral (is neither acidic nor alkaline), while a pH less than 7 describes one that is acidic, with the acidity increasing as the pH approaches 0. Conversely, a pH greater than 7 describes an alkali with the strength of alkalinity increasing as the pH approaches 14. In the mouth, acid produced by bacteria in plaque or in acidic foods and drinks softens the enamel, and can dissolve the hydroxyapatite crystals within it if the pH drops below 5.5. It has been reported that the ability of tooth enamel to dissolve increases by 10-fold with each one-unit decrease in pH.
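To put that 10-fold relationship into numbers, here is a minimal sketch (in Python) that models relative enamel solubility as increasing by a factor of 10 for each pH unit below the critical value of 5.5 mentioned above. The function name and the simple exponential model are illustrative assumptions based on that reported relationship, not a formula taken from the article.

```python
# Minimal sketch: relative enamel solubility, assuming a 10-fold increase
# for each one-unit drop in pH below the critical value of 5.5.
def relative_solubility(ph: float, critical_ph: float = 5.5) -> float:
    """Solubility relative to the critical pH (1.0 at pH 5.5)."""
    return 10 ** (critical_ph - ph)

print(relative_solubility(5.5))  # 1.0    (baseline)
print(relative_solubility(4.5))  # 10.0   (one unit lower: ~10 times)
print(relative_solubility(2.5))  # 1000.0 (three units lower: ~1,000 times)
```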

Why Are Many Soft Drinks Acidic?

Acids either occur naturally in drinks or are added to enhance their flavor or improve their shelf life. For example, citrus juices naturally contain citric acid, but it may also be added to other drinks to increase the tanginess of the flavor or to act as a preservative. Phosphoric acid is added to some soft drinks for similar reasons. In addition, fizzy drinks get their fizziness as a result of carbon dioxide being dissolved in water under pressure in a process that forms a weak solution of carbonic acid (resulting in a tingly sensation on your tongue when you drink them).

It would be wrong to think that the presence of acids in soft drinks is always a bad thing. Ascorbic acid, another name for vitamin C, is commonly found on lists of ingredients and has several important functions in the body, including keeping cells healthy, wound healing, and the maintenance of healthy skin, bones, and blood vessels.

What Did This Study Investigate?

Over recent years, increased consumption of soft drinks and fruit juices has been linked to rising rates of a wide range of health conditions, including type 2 diabetes, obesity, and the development of osteoporosis later in life. In the UAE, it has been estimated that each resident consumes an average of 103 liters of soft drinks per year, and the country is one of the top five in the world in terms of juice consumption per person. The high rate of acidic drink consumption in the UAE has been reported to have a significant effect on the dental health of the country’s citizens, with over 50% of 5-year-old preschool children showing signs of tooth damage caused by the enamel starting to dissolve.

The authors of this study analyzed 306 different soft drinks that are sold in the UAE, including fizzy drinks, energy drinks, sparkling water, iced teas, juices, non-alcoholic malt beverages, coconut water, and sports drinks. They measured the pH of each drink three times using a pH meter and classified them as mildly erosive (pH 4 or more), erosive (pH between 3 and 3.99), or extremely erosive (pH less than 3).
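The thresholds used by the authors translate directly into a simple classification rule, sketched below in Python. The thresholds follow the article; the example drinks and pH readings are illustrative only (the fizzy-drink value of 2.32 is the most acidic reading reported in the study).

```python
# Sketch of the study's erosive-potential classification by pH.
# Thresholds follow the article; the example readings are illustrative.
def classify_erosive_potential(ph: float) -> str:
    if ph < 3.0:
        return "extremely erosive"
    elif ph < 4.0:
        return "erosive"
    else:
        return "mildly erosive"  # pH 4 or more

drinks = [("fizzy drink", 2.32), ("iced tea", 3.50), ("coconut water", 4.80)]
for name, ph in drinks:
    print(f"{name} (pH {ph}): {classify_erosive_potential(ph)}")
```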

What Did the Study Find?

The authors of the study reported that 88% of the drinks tested had a pH of less than 4, with 51% classified as erosive and 37% as extremely erosive. The most acidic drink tested, a fizzy drink, had a pH of only 2.32, although the type of drink with the lowest average pH was non-alcoholic malt beverages (pH 2.99). In addition to the pH of a drink, there is some evidence that the type of acid added or naturally occurring in it may be linked to the amount of erosion that may occur, with citric acid having been reported previously to be more aggressive than phosphoric acid.

In this study, citric, phosphoric, ascorbic, and malic acids were the acids most frequently present according to the ingredients labels. Phosphoric acid was found in both fizzy and energy drinks, citric acid was found combined with pantothenic acid in energy drinks, malic acid (which contributes to the sour taste of some foods and drinks) was mainly present in sparkling water or combined with juices containing citric acid, and ascorbic acid was mainly found in juices and malt beverages.

Take-Home Message for Patients

Reducing your consumption of soft drinks can benefit the health of your teeth because it will lessen the amount of time that your enamel is exposed to high levels of acidity. If you do consume a soft drink, drinking it through a straw can help to keep it away from the teeth. In addition, if you decide to brush your teeth after having a soft drink it is important to wait 30 minutes to 1 hour before doing so. This is because it takes around this length of time for the saliva in the mouth to return the environment back to being neutral. If you brush your teeth before this happens, the acidity in the mouth may mean that the enamel is still slightly soft, increasing the chance of physical erosion.

Developments in Stem Cell-Based Alzheimer’s Disease Research

What Is the Main Idea?

Alzheimer’s disease is the most common cause of dementia among older adults, and the incidence is increasing. In the free access research article “Comprehensive Bibliometric Analysis of Stem Cell Research in Alzheimer’s Disease from 2004 to 2022”, published in the journal Dementia and Geriatric Cognitive Disorders, the authors discuss the results of their review of research literature published about stem cells and Alzheimer’s disease over the last 20 years and highlight future research directions.

What Else Can You Learn?

In this blog post, Alzheimer’s disease is discussed. Stem cells and the specialization of cells to enable them to play different roles in the body are also described.

What Is Alzheimer’s Disease?

Alzheimer’s disease is a type of dementia. Dementia is an umbrella term that is used to describe a group of conditions that affect the nervous system (known as “neurological” conditions). They directly affect the brain and get worse over time (conditions like this are described as “progressive”), usually over a number of years.

Although symptoms can be similar among different types of dementia, and some people have more than one form, Alzheimer’s is associated with memory loss and confusion in the early stages. Mild symptoms and signs range from wandering, getting lost, and repeating questions to changes in mood or personality. More moderate symptoms include impulsive behavior, misplacing things, and problems recognizing family and friends, with people potentially losing the ability to communicate if the condition becomes severe.

Alzheimer’s disease is the most common type of dementia in adults and is usually diagnosed in people aged 60 years and older. It can develop in younger people but this is rare. Incidence is increasing and it is estimated that the number of people with Alzheimer’s disease worldwide will treble by 2050.

What Causes Alzheimer’s Disease?

Our understanding of the sequence of events that lead to the development of Alzheimer’s disease is still limited. It is well known that the brains of people with Alzheimer’s disease have abnormal clumps of proteins called “amyloid plaques” and tangled bundles of fibers called “tau tangles”. These are found throughout their brains, but rather than simply being caused by a build-up of plaques and tangles, Alzheimer’s disease is now believed to be a complex condition caused by a variety of factors – including genetic, environmental, and lifestyle factors – that affect the brain over time.

As well as having plaques and tangles, neurons (brain cells that transmit messages from one part of the brain to another) in people with Alzheimer’s disease become damaged and lose their connections with each other, and many other complex brain changes are thought to be involved. There is currently no cure for Alzheimer’s disease and treatment focuses on helping people maintain their brain health, slowing or delaying symptoms, and managing behavioral changes. There is growing evidence that adopting healthy lifestyle habits, like exercising regularly and eating a healthy diet, can reduce the risk of developing dementia, in addition to reducing the risk of other conditions like cancer and heart disease.

How Might Stem Cell-Based Therapy Help?

Cell differentiation is the process by which “immature” undifferentiated (unspecialized) cells take on specific characteristics and become specialized to have a particular role in the body. Stem cells are unique in that they can self-renew, are either undifferentiated or only partially differentiated, and are the source of specialized cell types, like red blood cells and types of brain cell.

Stem cells have become a focus of medical research because it is hoped that studying differentiation will give new insights into how some conditions develop. It is also possible to guide stem cells to become a particular cell type, raising the possibility that tissues that are damaged or affected by a disease could be regenerated or repaired (this is known as “regenerative medicine”).

Focuses of research regarding Alzheimer’s disease include:

  • attempting to replace injured or lost neurons,
  • increasing the production of chemicals in the brain that influence the growth of nervous tissue,
  • reducing the build-up of the proteins that form amyloid plaques and tau tangles,
  • increasing synaptic connections,
  • decreasing inflammation in the brain,
  • repairing metabolic systems that have gone wrong (metabolism is the process by which the body produces energy), and
  • improving the immediate environments of areas in the brain.

What Did This Study Investigate?

The authors of this study used an approach called “bibliometrics” to assess trends and developments across 3,428 stem cell research reports regarding Alzheimer’s disease published between 2004 and 2022. Bibliometrics uses mathematical and statistical methods to analyze and provide an overview of a large number of documents in a particular research field. It can help researchers understand the direction in which research in a given area is heading and can contribute to the formation of clinical guidelines. It can also identify where more collaboration between different research areas is needed and identify new avenues for study.
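As a toy illustration of the kind of counting that underlies a bibliometric trend analysis, the Python sketch below tallies publications per year from a handful of records. The titles and years are invented for demonstration and are not data from the article; real bibliometric analyses also examine authors, institutions, keywords, and citation links.

```python
# Toy bibliometric sketch: count publications per year from a list of records.
# The records below are invented for demonstration purposes.
from collections import Counter

records = [
    {"title": "iPSC-derived neurons as a disease model", "year": 2016},
    {"title": "Mesenchymal stem cells in neurodegeneration", "year": 2016},
    {"title": "Microglia and neural network maintenance", "year": 2020},
    {"title": "Exosomes as drug-delivery vehicles", "year": 2021},
]

publications_per_year = Counter(record["year"] for record in records)
for year in sorted(publications_per_year):
    print(year, publications_per_year[year])
```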

Their analysis showed that the number of reports published on stem cell research in Alzheimer’s disease has increased dramatically over the last 20 years, particularly since 2016. The increase since 2016 is partly attributed to the combination of induced pluripotent stem cell (iPSC)-based and 3D bioprinting techniques. iPSCs are cells that are derived by reprogramming differentiated skin or blood cells back into an embryonic-like “pluripotent” state (meaning that they can develop into many different cell or tissue types, just like the stem cells in a developing embryo).

This means that a person’s blood cells could potentially be treated to become iPSCs that could then produce new neurons. 3D bioprinting is a technology that uses living cells mixed with bioinks to print natural, 3D tissue-like structures. The combination of iPSC-based and 3D bioprinting techniques has made it possible to create cultures of cells that more closely mimic the situation in the brain.

Research Hot Spots and Future Directions

A number of fields have been key areas of research for some time. These include iPSCs, microglia, and mesenchymal stem cells. Mesenchymal stem cells are a type of stem cell that cannot differentiate into blood cells and has limited self-renewal capacity. Microglia are specialized brain cells that regulate brain development, the repair of injury, and the maintenance of neural networks. There is significant interest in their roles in healthy brains and in how their dysregulation may be involved in the development of neurological conditions.

Newer areas of research interest include the roles of mitochondrial dysregulation (mitochondria are the parts of the cell where energy is produced) and autophagy (a process by which old and damaged proteins or parts of cells are broken down and destroyed) in the development of Alzheimer’s disease.

Another research area is that of exosomes, tiny sac-like structures that are involved in cell-to-cell communication. Exosomes bud off the outer surfaces of cells and are found in body fluids including blood, saliva, and cerebrospinal fluid (the fluid found in the tissue that surrounds the brain and spinal cord). They carry DNA, RNA, and proteins from the cells from which they originate. Exosomes derived from a patient’s stem cells have a strong safety profile and are unlikely to provoke a strong immune reaction.

In addition, because their primary function is shuttling cargoes between cells, it is hoped that they may be used for patient-specific drug delivery in the future, which may prove to be a more successful approach than stem cell transplantation. Combined, these research directions raise the exciting possibility that the development of effective therapies for Alzheimer’s disease may not be far away.

The Link between Autophagy and Lupus Nephritis

What Is the Main Idea?

Lupus is a type of autoimmune disease that is hard to diagnose and is not well understood. In the open-access research article “Degradation of Ubiquitin-Editing Enzyme A20 following Autophagy Activation Promotes RNF168 Nuclear Translocation and NF-κB Activation in Lupus Nephritis”, published in the Journal of Innate Immunity, the authors discuss the role that a process called autophagy plays in the development and progression of kidney damage in patients with a form of lupus called systemic lupus erythematosus (SLE), and investigate the mechanisms involved.

What Else Can You Learn?

In this blog post, SLE and lupus nephritis are discussed. Autoimmune diseases and the processes of autophagy and ubiquitination are also described.

What Is an Autoimmune Disease?

When the body’s immune system is working correctly, it recognizes invaders like bacteria and viruses as “foreign”, and attacks them using white blood cells and antibodies. In contrast, it recognizes the body’s own cells as “self” or “not foreign” and does not attack them. Autoimmune diseases – like rheumatoid arthritis, Crohn’s disease, and lupus – develop when the body’s immune system mistakenly starts to recognize the body’s own tissue as foreign and attacks it. This can cause inflammation in tissues and organs that, over time, can lead to serious damage.

What Causes Lupus and What Are Its Symptoms?

The exact causes of lupus are unknown, but it is thought to be caused by a combination of genetic and environmental factors. A number of genetic mutations (changes in genes) have been reported that seem to be linked to a person being susceptible to developing lupus. Women are most likely to be affected by the disease, and there is some evidence that hormonal changes that occur during a woman’s lifetime (such as during puberty, pregnancy, and menopause) may play a role. Lupus can be difficult to diagnose because signs and symptoms can differ from one person to another. They can also vary in their severity and develop slowly or quickly.

There are several different types of lupus. Some only affect the skin but the most common type, called systemic lupus erythematosus (SLE), can affect many parts of the body. SLE is characterized by the release of autoantibodies that bind to contents of the cell nucleus (the part of the cell that houses the DNA and is where genes are activated), including double-stranded DNA. The most common symptoms are extreme fatigue or exhaustion, swelling of or pain in the muscles or joints, skin rashes (particularly on the wrists and hands, or a butterfly-shaped rash across the cheeks and nose), mouth ulcers that keep coming back, hair loss, and fever. In addition, a form of kidney disease called lupus nephritis can develop.

What Is Lupus Nephritis?

The kidneys help to control blood pressure and make red blood cells, and remove waste products and extra water from the body to make urine. Lupus nephritis develops when the immune system starts to attack the part of the kidney that filters the waste products out of your blood, called the glomeruli, and is estimated to affect around 50% of patients with SLE. Although it can often be successfully controlled, lupus nephritis can lead to kidney failure, where a person’s kidneys stop working and they need kidney replacement therapy (in the form of dialysis or kidney transplant) to survive. In addition, it can cause high blood pressure, which can increase the risk of stroke or heart attack. Symptoms of lupus nephritis include blood or protein in the urine, weight gain, and the extra fluid that the kidneys cannot remove causing swelling (known as “edema”) in body parts like your legs or ankles.

What Happens to Glomeruli When Lupus Nephritis Develops?

There are around 1 million glomeruli in each kidney. They are made up of bundles of looping blood vessels and several specialized types of epithelial cells (this is the name given to types of cell that cover the inside and outside surfaces of your body, such as the skin, the outer surfaces of organs and internal cavities, and blood vessels). When lupus nephritis develops the glomeruli stop working properly, partly because of swelling or scarring of the small blood vessels, but also because epithelial cells in the glomeruli do not function properly. One cell type that is affected is the podocytes. These are highly specialized cells that wrap around the outer surfaces of the blood vessels in the glomeruli and play an essential role in filtering the blood by stopping proteins from being filtered out. Exactly how podocytes become damaged in lupus nephritis is unknown, but it may be caused by a combination of genetic, inflammatory, and metabolic (the processes that convert food and drink to energy in the body) factors.

What Did This Study Investigate?

The number of cells in the body is tightly regulated and a number of processes exist that check that cells and the molecules inside them are functioning normally. There are also processes that repair or remove damaged cells and molecules if things go wrong. There have been some reports that one such process called “autophagy” is linked to the development of lupus nephritis. Autophagy, which means “self-eating”, is a process by which old and damaged proteins or parts of cells are broken down and destroyed. The breakdown products are then recycled inside the cell and reused, especially during periods of starvation or stress. Autophagy plays an essential role in the immune system because it helps to destroy bacteria or viruses and is involved in inflammation.

Autophagy is typically a protective process; however, its activity is tightly regulated because if too much autophagy is taking place, it can result in programmed cell death (a method by which the body gets rid of cells that have become damaged or are no longer needed). Similarly, if the level of autophagy activity in a cell is too low, faulty proteins and parts of cells are not removed and can contribute to the development of disease. Autophagy is known to be involved in autoimmune diseases and changes in the normal functioning of autophagy have been linked to the development of cancer. The authors of this study investigated how autophagy affects podocytes in lupus nephritis, particularly regarding its effects on the levels of two proteins called A20 and RNF168.

How Are A20 and RNF168 Linked to Lupus Nephritis?

A20 is an enzyme (a type of protein that speeds up a chemical reaction) that is involved in regulating a process called “ubiquitination”, where a small protein called ubiquitin is attached to a protein and acts as a tag indicating that something should happen to it (such as the activation of another process, that it should move from one part of the cell to another, or that the protein should be broken down). Abnormal levels or functioning of A20 is known to be involved in chronic inflammation and tissue damage.

RNF168 is another enzyme and is involved in the cell’s DNA damage repair process. It helps to repair breaks in double-stranded DNA by tagging histone proteins. Histones act as spools that the DNA winds around, enabling it to become more compact and form chromosomes. They can also be marked with different types of tags that indicate whether a particular gene is “on” or “off”. In the case of RNF168, it tags histones with ubiquitin molecules near the sites of breaks in double-stranded DNA, enabling proteins that can repair the break to bind.

What Did the Study Show?

The results of the study showed that autophagy in podocytes is over-activated in lupus nephritis and that this leads to the activity of A20 being reduced. At the same time, the activity of RNF168 is increased, leading to increases in both the amount of DNA damage in podocytes and the activation of a protein called NF-κB, which activates genes involved in inflammation. In contrast, when autophagy is inhibited (slowed down or prevented from happening), levels of A20 increase and those of RNF168 decrease, leading to an increase in DNA damage repair. These findings suggest that increasing the level of DNA damage repair that takes place in podocytes may limit the damage that occurs as lupus nephritis progresses. They also raise the possibility of autophagy, A20, and RNF168 becoming targets for the development of future therapies to treat and prevent lupus nephritis.

Reducing Neuroinflammation after Traumatic Brain Injury

What Is the Main Idea?

Long-lasting damage after a traumatic brain injury can be caused by excessive neuroinflammation in the brain. In the open access research article “MiR-124 Reduced Neuroinflammation after Traumatic Brain Injury by Inhibiting TRAF6”, published in the journal Neuroimmunomodulation, the authors discuss how the levels of a microRNA called miR-124 influence the extent of neuroinflammation after a traumatic brain injury and investigate the mechanisms involved.

What Else Can You Learn?

In this blog post, the effects of a traumatic brain injury on the brain and the role of neuroinflammation are discussed. The functions of RNAs, particularly microRNAs, are also described.

What Is Traumatic Brain Injury?

A traumatic brain injury can be caused by something piercing the skull and entering the brain tissue, or by a violent blow or jolt to the head or body (for example if a person is struck by an object or is involved in a vehicle accident). Although some traumatic brain injuries cause short-term or temporary problems, others can be fatal or lead to long-term disability. When a traumatic brain injury occurs, there are usually two phases of damage that affect the brain:

  • The first, “primary” phase happens immediately when the trauma takes place and may include bleeding, brain swelling, and damage to nerve fibers.
  • “Secondary” brain damage develops after the initial injury and may take hours or weeks to develop. Secondary damage can include an increase of pressure inside the skull (usually due to the brain swelling), reduced blood pressure or oxygen flow, a breakdown of the blood–brain barrier (which controls the movement of molecules and cells between the blood and the fluid that surrounds the nerve cells in the brain), and neuroinflammation.

What Is Neuroinflammation?

The term “neuroinflammation” describes inflammation (the process by which your body responds to an injury or a perceived threat, such as a bacterial infection) in the central nervous system (CNS; which consists of the brain and spinal cord). As with inflammation in the rest of the body, neuroinflammation is an essential process that plays a protective role after injury, exposure to toxins, or infection. However, neuroinflammation can be harmful if the level of inflammation is excessively high or it is activated for too long, and chronic (long-term or recurring) neuroinflammation is associated with the progression of neurodegenerative diseases such as multiple sclerosis, Parkinson disease, and Alzheimer disease.

Several processes are involved in neuroinflammation and microglia are one of the main cell types involved. They are specialized cells, making up around 10% of the total number of cells in the CNS, that regulate the development of the brain, maintain neuronal networks, and help repair injury. Microglia actively survey their environment and engulf foreign material, and dead or damaged cells, to prevent them from affecting other brain cells. They also produce cell signaling molecules called “cytokines” that can either promote or inhibit inflammation.

When microglia become “activated” when infection or injury occurs, the profile of genes that are activated inside them changes rapidly and they begin to produce more pro-inflammatory cytokines (cytokines that promote inflammation) and other molecules. This is termed the “M1 phenotype” (the word “phenotype” means an observable characteristic) of microglia. Over time, microglia become “polarized” (changed) to the “M2 phenotype” and begin to secrete anti-inflammatory cytokines that reduce neuroinflammation and promote the repair of damaged tissue. The changes in gene activation that occur when microglia are activated and polarized can be detected by analyzing the levels of different types of RNA (ribonucleic acid).

What Is RNA?

Your genes are short sections of DNA (deoxyribonucleic acid) that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA. There are several different types of RNA, each with different functions, and they play important roles in normal cells and the development of disease.

Messenger RNAs are single-stranded copies of genes that are made when a gene is switched on (expressed). They carry messages regarding which proteins should be made to the cell’s protein-making machinery. In a cell, long strings of double-stranded DNA are coiled up as chromosomes in a part of the cell called the nucleus. Chromosomes are too big to move out of the nucleus to the part of the cell where proteins are made, but messenger RNA copies of genes are small enough to get through.

MicroRNAs are much smaller than messenger RNAs. They do not code for proteins and instead play important roles in regulating genes, for example by inhibiting (silencing) gene expression by binding to complementary sequences in messenger RNA molecules, stopping their “messages” from being read, and preventing the proteins they code for from being made. Some microRNAs also activate signaling pathways inside cells, turning processes on or off.

What Did the Research Article Investigate?

After a traumatic brain injury, inactive microglia become active and migrate to the regions of the brain that surround the sites of injury. They produce and release pro-inflammatory cytokines and recruit immune cells that are circulating in the bloodstream to enter the brain, which amplifies neuroinflammation. Because this can become a problem and lead to secondary brain damage, the authors of the study are interested in exploring whether excessive neuroinflammation can be inhibited in some way.

Recent studies have reported that if a molecule called TLR4 (toll-like receptor 4; a receptor molecule that is found in cell membranes and that causes cells to start producing pro-inflammatory cytokines when activated) is prevented from working in the brain in a targeted way, less neuroinflammation develops after a traumatic brain injury. There is also evidence that the levels of a microRNA called miR-124 may be linked to the activation of TLR4.

The authors of the study investigated how the levels of miR-124 changed after traumatic brain injury and found that its expression was reduced. They also found that increasing miR-124 expression promoted the polarization of microglia to the M2 phenotype, which reduces neuroinflammation. The activity of the TLR4 pathway was also reduced, and this was found to be because miR-124 inhibited a molecule inside the microglia called TRAF6, which is part of the signaling pathway that is activated by TLR4. If the signal that is produced when TLR4 is activated cannot travel along this pathway, the activation of pro-inflammatory genes is prevented and the chance of excessive neuroinflammation developing is reduced.

Research like this raises the possibility of treating traumatic brain injury more effectively in the future. If excessive activation and, consequently, neuroinflammation can be prevented, for example by developing therapies that inhibit TLR4 or TRAF6, the risk of people who have a traumatic brain injury having secondary brain damage may be reduced, improving their chance of better recovery.

Zinc Oversupplementation and Copper Deficiency

What Is the Main Idea?

Copper and zinc are essential trace nutrients that play important roles in the body. In the open-access case report “Copper Deficiency Mimicking Myelodysplastic Syndrome: Zinc Supplementation in the Setting of COVID-19”, published in the journal Case Reports in Oncology, the authors discuss how oversupplementing with zinc to prevent infection can cause copper deficiency, which can cause symptoms that are similar to a group of blood cancers called myelodysplastic syndrome.

What Else Can You Learn?

In this blog post, the roles of zinc and copper in the body, and the effects of not getting enough or too much, are discussed. The symptoms of myelodysplastic syndrome are also described.

Why Does the Body Need Copper?

Copper is classed as an “essential trace nutrient”, which means that the body needs small amounts to work properly. It is involved in important processes in the body that include making energy, absorbing iron, making red and white blood cells, keeping the immune and nervous systems healthy, making collagen (which plays an essential role in the structure and function of skin, bones, cartilage, and connective tissues), and brain development. It also acts as an antioxidant, which means that it is involved in reducing levels of molecules called “free radicals” that can damage cells and DNA, and that are produced in the body as part of its normal energy-producing processes.

How Do Our Bodies Get the Copper They Need?

Most people should be able to get all the copper their body needs by eating a balanced, healthy diet. Good dietary sources of copper include offal (such as beef liver), shellfish (such as oysters and mussels), nuts (such as cashews and almonds), seeds, chocolate, dark green leafy vegetables, legumes (beans and pulses), wholegrain breads and cereals, mushrooms, and sweet potato. Copper deficiency (defined as copper levels that are too low to meet the body’s needs, or a level measured in a blood sample that is lower than the normal range) is rare but can be treated. It usually affects people who have had some form of gastric bypass or intestinal surgery, or who have celiac disease or inflammatory bowel disease, because their bodies may be less able to absorb copper from their food effectively.

How Much Copper Is Enough?

As with any nutrient, too little or too much copper can be harmful to the body. Guidelines regarding recommended daily intake vary by country, but are generally between 0.9 and 1.6 mg/day. Too much copper can cause symptoms that include stomach pain, nausea, diarrhea, and dizziness, and kidney and liver damage can occur if levels are too high for a long time. In contrast, copper deficiency can cause symptoms that include fatigue, decreased production of blood cells, lightened patches of skin, weak and brittle bones, increased risk of infection, and neurological symptoms such as numbness or tingling, difficulties with muscle coordination and balance, and signs of vision loss. Notably, copper deficiency can present in the same way as myelodysplastic syndrome and is an important differential diagnosis (a disorder that could be causing the symptoms being experienced) in patients in whom myelodysplastic syndrome is suspected.

What Is Myelodysplastic Syndrome?

Myelodysplastic syndrome (also known as myelodysplasia) is the name given to a group of rare blood cancers that result in a person not having enough healthy blood cells. This is because their bone marrow (the part of the body that makes blood cells) makes blood cells that are abnormal (they do not form or do not work properly) and unable to mature. Over time, the number of immature blood cells in the bone marrow increases, preventing it from making enough healthy, mature blood cells, and the number of mature blood cells that can get into the bloodstream decreases. Myelodysplastic syndrome can develop slowly or quickly, and in some people can develop into a type of leukemia called acute myeloid leukemia. Symptoms vary from person to person (depending on which type(s) of blood cell have become reduced in the bloodstream) and can include frequent infections, weakness, tiredness, pale skin, shortness of breath, bruising and bleeding, and anemia. If a person is experiencing these symptoms, it could instead be that they have copper deficiency rather than myelodysplastic syndrome.

How Is Copper Deficiency Linked to Zinc?

A number of studies have shown that copper deficiency can be caused by “zinc overload” (taking too much zinc into the body). This is thought to be because excessively high levels of zinc cause copper to be removed from the body at an increased rate while the rate at which it is absorbed is decreased. Like copper, zinc is an essential trace nutrient. It is involved in metabolism (the process by which the body produces energy), wound healing, the senses of taste and smell, and the immune system. However, like copper, too little or too much zinc can be harmful. Symptoms of zinc deficiency include hair loss, eye and skin sores, and diarrhea. In the short term, a very high dose of zinc can cause nausea and vomiting, headache, stomach ache, and diarrhea, while high levels of zinc over a long period can reduce levels of “good” cholesterol, cause copper deficiency, and prevent the immune system from functioning properly. This last point is important, because oversupplementing with zinc can cause zinc overload.

How Does This Relate to COVID-19?

Because of its role in the normal functioning of the immune system, some people began taking zinc supplements during the COVID-19 pandemic in an attempt to prevent themselves from getting infected. Zinc supplements can be bought over the counter and are widely available, and there were reports in the media that zinc (among other things) could help prevent COVID-19 infection and reduce the severity of symptoms. As a result, several case reports (a case report is a type of medical summary that outlines the signs and symptoms, diagnosis, treatment, and follow-up of an individual patient) have been published describing patients who presented with symptoms suggestive of myelodysplastic syndrome but were instead found to have copper deficiency caused by zinc overload as a result of taking high-concentration zinc supplements. In one case report, a woman had been taking eight times the recommended daily amount of zinc (which varies by country but is usually between 7 and 8 mg for women, and 9.5 and 11 mg for men) in an attempt to prevent COVID-19 infection.

The authors of this case report describe the case of a man with no pre-existing gastrointestinal problems who presented with myelodysplastic syndrome symptoms that were found to be caused by copper deficiency. He had taken a zinc supplement of 50 mg/day (around five times the recommended daily amount for men) for 6 months to prevent COVID-19 infection, but had stopped taking the supplement 2 months before presenting to his healthcare provider. After being advised not to take zinc and being started on copper supplementation, some of his symptoms disappeared and others improved.

Take-Home Message

This case report emphasizes the importance of not oversupplementing. Most people are able to get all the copper and zinc, as well as other nutrients, that they need from a normal healthy diet. Good sources of zinc include meat, shellfish, poultry, nuts and seeds, wholegrains, and legumes, and most of these are also good sources of copper. If you choose to take supplements, check the label and make sure that you are staying within the recommended daily amounts for your country or region. If you take more than one supplement, check that the combined doses do not take you over the recommended daily amount of any particular nutrient. If you are concerned that you may have a nutrient deficiency, talk to your healthcare provider.

Heatwaves Caused by Climate Change: How Geomedicine Can Improve Health Outcomes

What Is the Main Idea?

Extreme climate events, such as heatwaves, have become more common because of climate change and place a heavy burden on health systems. In the open-access research article “Beyond Usual Geographical Scales of Analysis: Implications for Healthcare Management and Urban Planning”, published in the journal Portuguese Journal of Public Health, the authors discuss how geomedicine can be used to aid urban planning and the allocation of health resources to reduce the number of deaths during heatwaves.

What Else Can You Learn?

In this blog post, the effects of climate change on health are discussed with a particular focus on heatwaves. Geomedicine and how it can be used is also described.

What Is Climate Change?

Climate change is defined as long-term and large-scale shifts in weather patterns and average temperatures. Although shifts like these can occur naturally, as the result of volcanic activity or changes in the Sun’s activity, human activities over the last 200 years have had significant effects. This has mainly been due to the burning of fossil fuels like coal, gas, and oil. As a result, the Earth is now about 1.1 °C warmer than it was 100–150 years ago, and the last decade (2011–2020) was the warmest on record. This is causing environmental effects such as rising sea levels, intense droughts, scarcity of water, and declining biodiversity (the variety of living organisms), which makes climate change an economic issue because it affects the availability of food and other resources.

How Does Climate Change Affect Our Health?

Climate change can affect human health in many ways. It can affect mental health through increased stress and anxiety, and extreme weather events can cause significant trauma. Rising sea levels and increased frequency of flooding can lead to people being displaced and increase the likelihood of water supplies becoming contaminated, which increases the spread of disease. Increasing droughts can decrease food production and the supply of water, and a warming climate also affects the numbers of biting pests such as ticks and mosquitoes (both of which can spread disease), particularly in areas where their numbers had previously been low. Extreme climate events such as heatwaves have also become more common and are lasting longer, placing a heavy burden on health systems.

What Are the Health Effects of Heatwaves?

Heatwaves are known to cause increases in death rates and the numbers of people needing medical care. During a heatwave in Europe in 2003, more than 70,000 excess deaths (the number of deaths that was above the number expected over that time period) were reported. Excess heat increases pressure on the heart, lungs, and brain, increasing the risk of death from respiratory (relating to the breathing system), cerebrovascular (relating to the brain and its blood vessels), or cardiovascular (relating to the heart and blood vessels) problems.

Who Is Most at Risk during a Heatwave?

People with pre-existing health conditions, especially cardiovascular and respiratory diseases, and the elderly are particularly at risk. Over the last 20 years, the rate of elderly people dying from heat-related causes has increased significantly. Children under 1 year of age are particularly vulnerable to the effects of heat and dehydration, as are people who do manual work outdoors, for whom an increased risk of chronic kidney disease has also been reported. There is also evidence that people living alone, living in areas that are more socioeconomically disadvantaged (this is defined as less access to or control over economic, social, or material resources and opportunities), or living in urban environments such as city centers are at increased risk.

What Did This Study Investigate?

To be able to deal with the challenges that heatwaves cause, healthcare systems need to be able to develop plans that will ensure that those most at risk can access the support they need during a heatwave. Advances in geographic information systems have been shown to be useful in mapping how diseases are distributed and in identifying clusters or trends. They can also take environmental and socioeconomic factors, and the availability of medical facilities, into account when analyzing data. This area of research is termed “geomedicine”.

What Is Geomedicine and How Can It Improve Health Outcomes?

Geomedicine is based on the idea that good health does not come by accident. Instead, factors in our environment have an effect on our health, which means that the places where we live and work now and in the past affect our health status. By linking a person’s health status to geographic factors, such as a person’s address, geomedicine can provide health data that can help medical teams make diagnoses and better assess risk.

What Did the Authors Investigate?

In this study, the authors used an approach called “geocoding” to investigate how the scale of geographic information used in geomedical analysis affects the results. Geocoding involves defining a set of geographic coordinates, usually based on latitude and longitude, that correspond to a location. The authors argue that analyzing data by geocodes, which can specify a particular street, rather than by larger areas such as parishes or districts, provides more accurate information about public health in those areas. This means that local authorities can prioritize resources for areas with greater need.
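
For readers who like to see the idea in action, here is a minimal sketch, in Python, of why the scale of aggregation matters. All of the place names and numbers are invented for illustration; they are not the study’s data.

```python
# Toy illustration of scale effects in geomedicine: the same records,
# aggregated at two different scales. All names and numbers are invented.
from collections import defaultdict

# Each record: (parish, neighborhood, deaths, population)
records = [
    ("Parish A", "Neighborhood A1",  2, 1000),
    ("Parish A", "Neighborhood A2", 30, 1000),
    ("Parish B", "Neighborhood B1",  4, 1000),
    ("Parish B", "Neighborhood B2",  6, 1000),
]

def death_rates(records, key_index):
    """Deaths per 1,000 people, grouped by the chosen geographic unit."""
    totals = defaultdict(lambda: [0, 0])
    for record in records:
        totals[record[key_index]][0] += record[2]  # deaths
        totals[record[key_index]][1] += record[3]  # population
    return {area: 1000 * deaths / pop for area, (deaths, pop) in totals.items()}

print(death_rates(records, 0))  # by parish: A looks uniformly worse (16 vs 5)
print(death_rates(records, 1))  # by neighborhood: the problem is really A2
```

At the parish scale, Parish A simply looks worse than Parish B; only the finer scale reveals that one of its neighborhoods (A2) accounts for almost all of the deaths, which is exactly the kind of pattern the authors describe.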

In their study, the authors analyzed data concerning heat-related deaths among elderly people in Portugal, which were linked to cardiorespiratory problems, between 2014 and 2017. Each record included information about the house number, post code, and location of the person who died, which enabled it to be geocoded. Once geocoded, the data were generalized to the neighborhood level to protect the confidentiality of the people whose data were included.

The results showed that some neighborhoods with low cardiorespiratory death rates were located within parishes with high rates, while conversely, neighborhoods with high death rates were located within parishes with low rates. The authors therefore stress the importance of carrying out analyses at several different scales, and note that analysis by smaller administrative areas is preferable. Just as personalized medicine has the potential to revolutionize health, so does analyzing data by individual neighborhoods.

However, the authors also note the need for authorities to develop multisector responses to the challenges that climate change brings to “keep vulnerability to a minimum and increase the resilience of healthcare and urban planning”. By improving health information systems, it is possible that the accuracy of health outcome monitoring, spatial planning in urban areas, and the management of health resources may be improved.

Blood Platelet Levels and Postpartum Hemorrhage Risk

What Is the Main Idea?

Postpartum hemorrhage (PPH) refers to a woman having sudden heavy bleeding after giving birth, which can be fatal. In the open-access research article “The Impact of Prepartum Platelet Count on Postpartum Blood Loss and Its Association with Coagulation Factor XIII Activity”, published in the journal Transfusion Medicine and Hemotherapy, the authors discuss how the levels of platelets (a type of blood cell) and a protein called coagulation factor XIII in a woman’s blood before she goes into labor may predict her risk of PPH.

What Else Can You Learn?

In this blog post, PPH in general and known risk factors are discussed. The process of blood clotting is also briefly described.

What Is Postpartum Hemorrhage?

Vaginal bleeding is normal after birth. It is mainly caused by the placenta, which delivers food and oxygen to the developing baby while it is in the uterus (womb), detaching from the wall of the uterus. Although bleeding can initially be fairly heavy, it reduces in the days after birth and usually stops within a few weeks.

PPH is different. It can start suddenly and large amounts of blood can be lost very quickly. PPH can be classed as either primary (when 500 mL of blood or more is lost in the first 24 hours after birth) or secondary (when bleeding is heavy or abnormal after the first 24 hours and up to the end of the 12th week after birth). PPH can occur after birth by vaginal delivery or delivery by cesarean section. The contractions that help the placenta to pass out of the uterus after birth also compress the blood vessels in the wall of the uterus where the placenta has been attached. PPH can develop if these contractions aren’t strong enough (this is known as “uterine atony”), if part of the placenta stays attached to the wall of the uterus, or if any internal cuts or tears happen during birth.

How Serious Is Postpartum Hemorrhage?

PPH is serious and potentially fatal because sudden heavy blood loss can cause a sharp drop in blood pressure, which can reduce blood flow to other organs, including the brain and heart. It is treated as a medical emergency. It is important that new mothers keep their healthcare team and partner aware of any changes in their bleeding, and act quickly if bleeding suddenly becomes very heavy. Other symptoms that should be reported include blurred vision, dizziness, feeling faint, worsening pelvic or abdominal pain, nausea or vomiting, an increased heart rate and/or breathing rate, and pale or clammy skin. These symptoms may only start after the woman has left the hospital. Although PPH is estimated to occur in 1–10% of pregnancies and remains a key cause of maternal death (mortality) worldwide, the earlier the bleeding is treated the more successful the outcome.

What Increases Your Risk of Postpartum Hemorrhage?

If a woman is considered to be at high risk of PPH, she will be advised to give birth in a hospital setting. Before birth, placental problems (like the placenta being located relatively low in the uterus or starting to detach from the wall of the uterus before it should) can increase a woman’s risk of PPH. Other risk factors include an overstretched uterus, which can be caused by having had more than one previous pregnancy, too much amniotic fluid (the fluid that surrounds the baby while it is in the uterus), or having a multiple pregnancy (expecting two or more babies at the same time).

During the birth, risk factors include a delay in the placenta being delivered or some of it remaining attached to the wall of the uterus, having a large baby, and the baby being delivered by forceps or ventouse. Another known risk factor is if the woman has a blood clotting disorder or other blood-related condition. The blood clotting system (known as the “coagulation” system) is activated when the lining of a blood vessel is damaged and regulates the process by which liquid blood changes to a gel, forming a blood clot, which stops the bleeding and starts the repair process.

How Does the Blood Clotting System Work?

The process by which blood clots are formed involves a number of proteins and platelets (a type of blood cell). When a blood vessel is damaged, such as when the placenta detaches from the uterus, platelets cluster at the site of damage and bind together to seal it. The platelets have receptors on their surfaces that bind a molecule called thrombin, which converts a soluble protein called fibrinogen into a different form called fibrin. Fibrin can form long, tough, insoluble strands that bind to the platelets and cross-link together to form a mesh on top of the platelet plug. Lots of different molecules are involved in this process, but platelets and fibrin are major players.

How Does Blood Clotting Relate to Postpartum Hemorrhage?

Some researchers have suggested that if a woman has a lower than normal level of platelets in her blood (a condition called “thrombocytopenia”) before she gives birth she may be at increased risk of PPH. Thrombocytopenia is estimated to occur in around 10% of pregnancies. There is also some evidence that the levels of a blood protein called coagulation factor XIII affect PPH risk. Coagulation factor XIII stabilizes fibrin as blood clots form. If low levels are present in the blood, clots can be less stable and the risk of bleeding increases.

What Did the Study Investigate?

The authors of the study evaluated whether a woman’s platelet count (the number of platelets measured in a sample of blood) measured before birth is linked to the extent of blood loss after birth. They also looked at whether there is an association between platelet count and levels of coagulation factor XIII, either before or after birth. They did this by looking at data collected as part of a previous study (this is termed “secondary analysis”) that analyzed the impact of coagulation factor levels before birth on blood loss after birth for 1,300 women. They found that the higher a woman’s pre-birth platelet count, the lower her probability of developing PPH, and that this was seen for women whose babies were delivered either vaginally or by cesarean section. An increase in pre-birth platelet count of 50 G/L was shown to decrease the likelihood of PPH by 16%.
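
To get a feel for what a figure like this implies when platelet counts differ by more than 50 G/L, here is a small illustrative calculation in Python. It assumes that the 16% reduction is expressed as an odds ratio of 0.84 per 50 G/L, as is typical for logistic-regression analyses; the study’s exact statistical model is not reproduced here.

```python
# Illustrative only: assumes the reported 16% reduction corresponds to an
# odds ratio of 0.84 per 50 G/L step in platelet count (a hypothetical
# reading of the result, not the study's actual model).
odds_ratio_per_step = 0.84  # 1 step = 50 G/L

for increase_gl in (50, 100, 150):
    steps = increase_gl / 50
    combined = odds_ratio_per_step ** steps  # odds ratios multiply per step
    print(f"+{increase_gl} G/L -> odds of PPH multiplied by {combined:.2f}")

# +50 G/L  -> 0.84 (a 16% reduction)
# +100 G/L -> 0.71
# +150 G/L -> 0.59
```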

The authors also found that platelet count is significantly correlated (linked in a way that is unlikely to be due to chance) with coagulation factor XIII activity both before and after birth, which suggests that platelets may play an important role in the firmness of blood clots. Coagulation factor XIII is found in the cytoplasm of platelets (the fluid-like area inside a cell that does not include the nucleus, where the genetic information is stored). This suggests that the chance of developing PPH may be influenced not only by the number of platelets in the blood, but also by the availability of coagulation factor XIII within the platelets that take part in clot formation.

The authors state that these findings support the importance of measuring platelet counts when identifying women who may be at high risk of PPH. Recent medical guidelines in Germany, Switzerland, and Austria have included platelet transfusion to increase the number of blood platelets in a six-step approach to treat continued bleeding. It is possible that platelet therapy may become useful in the prevention and treatment of PPH in the future.

Note: The authors of this paper make a declaration about patent ownership as well as contributions to a new guideline. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.

Infant Antibody Profiles Can Predict Peanut Allergy

What Is the Main Idea?

Peanut allergy is a leading cause of anaphylaxis and some infants are more at risk of developing it than others. In the brief report “Epitope-Specific IgE at 1 Year of Age Can Predict Peanut Allergy Status at 5 Years”, published in the journal International Archives of Allergy and Immunology, the authors describe how levels of particular types of antibodies in blood samples given by infants at age 1 year can be used to predict whether they will develop peanut allergy by the time they are 5 years old.

What Else Can You Learn?

In this blog post, peanut allergy, anaphylaxis, and efforts to predict the development of severe allergy in children are described. Immune system antibodies, antigens, and epitopes are also discussed.

What Is Peanut Allergy?

Peanut allergy is a type of food allergy (an unusual reaction of the body’s immune system to a specific food). The immune systems of people with peanut allergy mistakenly identify peanut proteins as things that are harmful to the body and need to be removed. Although some allergic reactions to foods are relatively mild, causing symptoms such as a rash or abdominal pain, others are more serious. Severe allergic reactions can cause anaphylaxis, which is potentially life-threatening and should be treated as a medical emergency. In addition to the more usual allergy symptoms such as swelling, an itchy or raised rash, or feeling or being sick, anaphylaxis symptoms can include a fast heartbeat, confusion or anxiety, breathing difficulties, feeling lightheaded or faint, and the person losing consciousness. They can develop suddenly and worsen very quickly. Peanut allergy is the leading cause of anaphylaxis in the USA and ranks second (after milk) in the UK.

Peanut allergy usually develops in early childhood and incidence has increased over recent decades. Estimates of the number of affected children vary between countries, but can be as high as 3% of the population. Some infants are at greater risk of developing peanut allergy than others, including those with family members with food allergies and those with egg allergy and/or eczema. Some health services used to advise that infants should not be exposed to foods containing peanuts because of fears that it could trigger peanut allergy. However, there is now strong evidence that introducing infants at risk of peanut allergy to peanuts as early as age 4–6 months can significantly reduce their risk of developing food allergies in the future.

Why Do Some People Have More Severe Peanut Allergies than Others?

It is now believed that peanut allergy can take several different forms called “endotypes” (an endotype is a subtype of a health condition that differs from other subtypes in the way that changes in the body and its systems cause or are caused by the condition). Evidence to support this comes from the fact that around 20% of infants and young children who have an allergic reaction to peanut will outgrow their allergy, while in others the allergy will persist throughout their lives. The difference is thought to be linked to the specific molecules, called “antibodies” (also known as “immunoglobulins”), that the immune system produces when it comes into contact with peanut proteins.

What Are Antibodies?

Antibodies are glycoproteins (molecules that are made up of protein and carbohydrate chains). They are highly specific, and recognize and bind to “antigens” (this term describes anything that causes the immune system to produce antibodies against it and can include chemicals, molecules on the surfaces of bacteria and viruses, and proteins in food like peanuts). Antibodies are divided into five different classes – IgG, IgA, IgM, IgE, and IgD – based on their characteristics and roles. They all have a Y-shaped structure, and while the bottom part does not change from one antibody to another, the two “arms” do and make up the part of the antibody called the “antigen-binding site”. It is differences in this region that enable different antibodies to bind to specific regions of antigens (called “epitopes”) and not to others. For example, an antibody that binds to an epitope on a peanut protein will not bind to an epitope on a protein made by the virus that causes flu. Epitopes can be described as “sequential” or “conformational”. Sequential epitopes are made up of a linear sequence of amino acids (the building blocks of proteins) like beads on a string, while conformational epitopes are made up of amino acids that are only brought close together when the string of amino acids is folded up into a three-dimensional structure.
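
To make the “beads on a string” idea concrete, here is a tiny Python sketch using invented sequences (they are not real peanut-protein sequences). A sequential epitope is a contiguous stretch of the amino acid chain, so it can be found with a simple substring search; a conformational epitope could not be found this way, because its amino acids only come together when the protein folds.

```python
# Toy illustration of a sequential (linear) epitope. Both sequences are
# invented for illustration, not real peanut-protein sequences.
protein = "MAKLTILVALALFLLAAHASARQQWELQGDRR"  # hypothetical amino acid chain
sequential_epitope = "QQWELQ"                # hypothetical linear epitope

# A sequential epitope is a contiguous stretch of the chain, so a simple
# substring search is enough to locate it.
print(sequential_epitope in protein)     # True
print(protein.find(sequential_epitope))  # 22: its position along the chain
```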

How Can Different Antibody Types Be Used to Predict Peanut Allergy?

Some recent studies have reported that levels of sequential epitope-specific IgE (ses-IgE) antibodies in infants with persistent food allergies are lower than levels of IgE antibodies against a mixture of both conformational and sequential epitopes during the first year of life. ses-IgEs develop as infants get older, raising the possibility that children who develop a persistent peanut allergy later in life may have distinct epitope-specific profiles in infancy. If this is the case, it may become possible to identify infants who are at risk of developing peanut allergy via a simple blood test.

What Did the Study Show?

The authors monitored the development of ses-IgEs in 74 children at risk of developing peanut allergy, who had either already been identified as allergic to peanut or were not yet allergic, and who were avoiding peanuts. They analyzed blood samples taken when the children were aged 4–11 months, and again at 1 and 2.5 years of age. They used a machine learning strategy (a computer system that uses algorithms and statistical models to analyze patterns in data and draw conclusions from them) to identify prognostic biomarkers (characteristics, such as molecules in your blood, that indicate what is going on in the body) that could predict whether or not a child would have an allergic response to an oral food challenge with peanut at a 5-year visit. The results showed that blood samples from children aged as young as 1 year could be used to accurately predict the outcomes of oral food challenge tests at 5 years of age. If these results can be confirmed by further studies, it may become possible for healthcare professionals to identify infants who are likely to develop persistent peanut allergy in the future, enabling them to start peanut exposure interventions early and, hopefully, prevent severe and permanent peanut allergy from developing.
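
For readers who want a concrete picture of this kind of machine learning analysis, here is a minimal sketch in Python using scikit-learn. The data, features, and model are all invented for illustration; the authors’ actual algorithm, epitope panel, and validation scheme are not reproduced here.

```python
# Minimal sketch of predicting 5-year peanut allergy status from ses-IgE
# levels at age 1. All data are synthetic; this is not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_children, n_epitopes = 74, 20

# Rows: children; columns: ses-IgE levels against 20 hypothetical epitopes.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_children, n_epitopes))
# 1 = allergic at the 5-year oral food challenge, 0 = tolerant (synthetic).
y = rng.integers(0, 2, size=n_children)

# Log-transform the skewed antibody levels, then cross-validate a classifier.
model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, np.log1p(X), y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f}")  # ~0.5 here: random labels
```

In the real study, an AUC well above 0.5 on held-out children would indicate that the age-1 antibody profile carries genuine predictive information.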

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available. Furthermore, the authors of this paper make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest.

How Image-Enhanced Endoscopy Techniques May Improve Ulcerative Colitis Treatment

What Is the Main Idea?

Assessment of mucosal healing in people with ulcerative colitis by white-light endoscopy has several limitations. In the review article “Possible Role of Image-Enhanced Endoscopy in the Evaluation of Mucosal Healing of Ulcerative Colitis”, published in the journal Digestion, the authors describe how advances in image-enhanced endoscopy may improve the assessment of mucosal healing in people with ulcerative colitis and, as a result, help improve their treatment.

What Else Can You Learn?

In this blog post, image-enhanced endoscopy techniques and how they may help patients with ulcerative colitis are described. Ulcerative colitis in general, the gut microbiome, and mucosal healing are also discussed.

What Is Ulcerative Colitis?

Ulcerative colitis is an inflammatory bowel disease. People with ulcerative colitis have chronic (long-term) inflammation and ulcers (sores) in the colon (also known as the large bowel), which is part of the large intestine and removes water and some nutrients from partially digested food before the remaining waste is passed out of the body. Inflammation is the process by which your body responds to an injury or a perceived threat, such as a bacterial infection. Although the exact causes of ulcerative colitis aren’t yet fully understood, it may be an autoimmune condition, which means that the body’s immune system wrongly attacks normal, healthy tissue. The intestines contain hundreds of different species of bacteria, which are part of the “gut microbiome” (the term given to all of the microorganisms and their genetic material that live in the intestines). Although some of these species can cause illness, many are essential to our health and wellbeing, playing key roles in digestion, metabolism (the chemical reactions in the body that produce energy from food), regulation of the immune system, and mood. Several diseases are now thought to be influenced by changes to the gut microbiome, including cancer. Some researchers believe that in ulcerative colitis, the immune system may mistake harmless bacteria inside the colon as a threat and start to attack them, causing the colon to become inflamed.

What Is Mucosal Healing?

There is currently no cure for ulcerative colitis, so treatment focuses on relieving symptoms during a flare-up and trying to stop them coming back. Choosing treatment strategies based on a specific therapeutic target (known as a “treat-to-target” approach) has become popular as a way to improve patients’ long-term outcomes. One of the ways that the efficacy of treatment is monitored is by assessing the level of “mucosal healing” in the colon. The mucosa is the innermost layer of the colon, and it is this layer that comes into direct contact with partially digested food and that becomes ulcerated in ulcerative colitis. Mucosal healing is usually defined as an absence of friability (when the mucosa is inflamed and bleeds easily when touched), blood, erosions, and ulcers, or as a total absence of inflammation and ulcers. It is now considered a target of ulcerative colitis treatment because there is evidence that it is associated with better clinical outcomes (such as lower risks of surgery and relapse, and improved quality of life) and reduced risk of developing colorectal cancer in the future.

How Is Mucosal Healing Assessed?

Assessment of mucosal healing usually involves endoscopy, which uses a long, thin tube with a small camera inside to look inside the body, and histology, which involves the examination of samples of tissue taken from the colon by biopsy during an endoscopy procedure. Although useful, the samples that are examined by histology only reflect what is happening in the part of the colon from which they were taken, and may not represent the situation in the colon as a whole. Traditional endoscopy, which uses white light (i.e., apparently colorless light such as “normal” daylight, which is a mixture of different wavelengths of light in the visible spectrum), can also have limitations. These include the subjective nature of assessment (the results depend on the opinion of the person reviewing them), variation in opinions between different reviewers, and difficulty detecting microscopic inflammation, which may not be visible without some form of image enhancement.

What Is Image-Enhanced Endoscopy?

Image-enhanced endoscopy techniques produce high-contrast images using optical or electronic methods. These high-contrast images make it easier to see the detail and differences in the mucosal surface, patterns of blood vessels, and color tones of the mucosa. As a result, image-enhanced endoscopy has the potential to enable more objective (i.e., less dependent on the personal opinions of the person reviewing the results) assessment of mucosal healing and detect minute differences in mucosal healing that cannot be detected by endoscopy with white light. There are a number of different image-enhanced endoscopy approaches.

  • Narrow-band imaging uses narrow-band light created with two filters that filter light at specific wavelengths, one for blue light and one for green. It is better than white light for viewing microscopic blood vessel structures. It may be useful for detecting minor inflammation and predicting relapse by revealing incomplete renewal of blood vessels in patients with ulcerative colitis.
  • Another technique called linked-color imaging uses narrow-band imaging to pre-process images and then color separation to post-process them, so that blue, green, and red can be used to amplify differences in color, making it easier for slight differences in the color of the mucosa to be recognized. It therefore improves the visualization of changes in the mucosa caused by inflammation or thinning of the mucosa (known as “atrophy”).
  • In contrast, a method called i-Scan uses three different algorithms to enhance images: surface, contrast, and tone enhancement. It is able to emphasize minute mucosal structures and subtle color changes, and there is evidence that it can be used to clinically stratify patients according to histologic activity, without them needing to undergo a biopsy procedure to obtain tissue samples.
  • Autofluorescence imaging detects the autofluorescence (naturally occurring fluorescence) produced by substances in the intestinal tissues, mainly type-I collagen, which is found in many structures in the body including skin, bones, tendons, cartilage, and connective tissue. Because the intensity of the autofluorescence is influenced by various conditions in the tissue, autofluorescence imaging is expected to become useful for assessing the severity of tissue inflammation and differentiating between changes that are due to damage and those that are due to uncontrolled, abnormal growth of cells or tissue (which may result in the development of a tumor).
  • Finally, dual-red imaging uses three wavelengths of light, of which two improve the ability to see blood vessels in submucosal tissues and bleeding points. The pattern of blood vessels in the surface of the colon’s mucosa is partly or completely lost in the active phase of ulcerative colitis, making it difficult to assess with traditional white-light endoscopy. Dual-red imaging enhances the pattern of blood vessels and makes it easier to visualize blood vessels in deeper tissues, so may be most useful in evaluating inflammation of the colon and predicting the prognosis of patients with mild to moderately active ulcerative colitis.

The approaches described above may contribute to the improved assessment of factors in ulcerative colitis that are difficult to assess by white-light endoscopy. It is hoped that this will, in turn, improve the use of treat-to-target approaches and the quality of life of people with ulcerative colitis.

Improving Active Surveillance of Low-Risk Prostate Cancer

What Is the Main Idea?

Some low-risk prostate cancers can be monitored by “active surveillance”, but the correct identification of patients with low-risk cancer at the time of diagnosis is essential. In the open access review article “Active Surveillance in Prostate Cancer: Current and Potentially Emerging Biomarkers for Patient Selection Criteria”, published in the journal Urologia Internationalis, the authors describe how biomarker testing may improve the selection of patients for this approach.

What Else Can You Learn?

In this blog post, prostate cancer, its signs and symptoms, and the active surveillance approach for managing low-risk prostate cancers are discussed. Different types of biomarkers that may indicate whether a prostate cancer has low or high risk of progression are also described.

What Is the Prostate?

The prostate is about the size of a walnut. It sits between the base of the penis and the rectum (the last few inches of the large intestine), deep inside the groin. It produces some of the fluid that mixes with sperm (from the testes) to form semen.

What Are the Signs and Symptoms of Prostate Cancer?

In the body, the growth and reproduction of cells is tightly controlled. If cells in the prostate start to grow and reproduce in an uncontrolled way, prostate cancer can develop. Current estimates are that 1 in 8 men will develop prostate cancer in their lifetime. If the cancer is growing near the urethra (the tube that carries the urine or “wee” from the bladder to pass out of the body), it may start to press on it. This can cause changes in how the person urinates, such as weak flow when urinating, a feeling that the bladder hasn’t emptied properly, and needing to urinate more often, especially at night.

However, prostate cancer more usually starts to grow in a part of the prostate that is not near the urethra, so in many cases men with early-stage prostate cancer don’t have any signs or symptoms. When prostate cancer reaches a more advanced stage and starts to spread to other areas of the body (metastasize), the person may experience other symptoms such as blood in the urine or semen, problems getting or keeping an erection, and pain in the back, pelvis, or hip.

What Is “Active Surveillance”?

Some prostate cancers grow very slowly and are unlikely to spread or become life-threatening in the person’s lifetime. Such prostate cancers are described as “low risk”. In some countries, policies and screening programs have been introduced with the aim of increasing the detection of prostate cancers at an early stage when they can be cured. This has had the benefit of increasing the numbers of men who are diagnosed with early-stage prostate cancer, but also means that some men undergo treatments such as radical prostatectomy (surgical removal of the prostate gland and some of the tissue around it) and radiotherapy that they don’t necessarily need.

An alternative approach is “active surveillance”, where men with low-risk prostate cancer are monitored closely for any signs of disease progression (through regular testing to check whether there are any signs that the prostate cancer is starting to grow) and are only treated if their cancer progresses. It is essential that patients with low-risk prostate cancer are correctly identified at the time of diagnosis because men with higher-risk prostate cancer need prompt treatment. It is believed that biomarker analysis may aid this process by increasing the accuracy with which patients with low-risk prostate cancer are identified and by improving the monitoring of their cancers over time.

What Are Biomarkers?

The term “biomarker” is short for “biological marker”. Unlike symptoms, which are things that you experience, biomarkers are measurable characteristics that indicate what is going on in the body. Your blood pressure, levels of molecules in your urine and blood, and your genes (DNA) are all biomarkers. Although they can suggest that your body is working normally, they can also show the development or progress of a disease or condition, or the effects of a treatment. Over the last decade, biomarker testing has started to transform the way that some diseases and conditions are treated, and offers hope of better outcomes through more personalized treatment.

What Did the Review Article Investigate?

The authors did a literature search (a systematic search through research that has already been published) for information about current and emerging biomarkers. They identified four tissue-based, two blood-based, and six urine-based tests that are currently available and can help identify patients with prostate cancer who could be monitored by active surveillance. In addition, research over the last 10 years has identified new biomarkers that could improve existing tests or enable the development of new tools to identify patients for whom active surveillance may be suitable. These include messenger RNAs, microRNAs, long non-coding RNAs, and metabolites (substances used or formed when the body breaks down food, medicines, or chemicals).

What Are Messenger, Micro-, and Long Non-Coding RNAs?

Your genes are short sections of DNA (deoxyribonucleic acid) that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA (ribonucleic acid). There are several different types of RNA, each with different functions, and they play important roles in normal cells and the development of disease.

  • Messenger RNAs are single-stranded copies of genes that are made when a gene is switched on (expressed). In a cell, long strings of double-stranded DNA are coiled up as chromosomes in a part of the cell called the nucleus (the cell’s command center). Chromosomes are too big to move out of the nucleus to the place in the cell where proteins are made, but a messenger RNA copy of a gene is small enough. In other words, messenger RNA carries the message of which protein should be made from the chromosome to the cell’s protein-making machinery.
  • MicroRNAs are much smaller than messenger RNAs. They do not code for proteins but instead play important roles in regulating genes. They can inhibit (silence) gene expression by binding to complementary sequences in messenger RNA molecules, stopping their “messages” from being read and preventing the proteins they code for from being made. Some microRNAs also activate signaling pathways inside cells that turn processes on or off.
  • Long non-coding RNAs are another type of RNA that don’t code for proteins. They interact with other types of RNA, DNA, and proteins, and play key roles in the control of gene expression. Changes in the expression or structure of some long non-coding RNAs, or that affect the ability of proteins to bind to them, have been shown to be linked to cancer metastasis and patient survival.

How Can Biomarkers Be Used to Identify and Monitor Prostate Cancer?

Tissue biomarkers are biomarkers that can be detected in tissue samples that are obtained if a person with suspected prostate cancer has a needle biopsy (a procedure that uses a thin, hollow needle and a syringe to obtain a sample of cells, fluid, or tissue from inside the body). Although they can be highly effective at identifying prostate cancers, they are less useful for long-term monitoring, and the testing process can be expensive. Cancers develop different subpopulations of cells with different characteristics as they progress, and these differences may affect the test results. In addition, biopsies are invasive and there can be complications. Existing tests using tissue samples primarily check for particular genes and proteins, but there is increasing evidence that some long non-coding RNAs can differentiate between prostate cancers that are likely to be aggressive and those that are low risk.

Both urine and blood sample analysis have the advantage of not being affected by the issue of tumor sampling that can occur with tissue biopsies; in other words, there is no issue regarding differences between different subpopulations of cells in the cancer. Several biomarkers found in the blood, which can be assessed using blood samples taken via normal blood tests, have been identified. These include proteins, hormones (low levels of testosterone in the blood may indicate advanced prostate cancer at diagnosis), microRNAs, and “circulating tumor cells” (tumor cells that get into the blood stream and can be detected in blood samples). The use of urine samples to detect prostate cancer has the advantages of being non-invasive, quick, and relatively cheap. Long non-coding RNAs and metabolites have been analyzed as biomarkers to improve the diagnosis of prostate cancer and assess progression. Several research studies have reported that the levels of some amino acids (the component units of proteins) are decreased and others increased in urine samples from people with prostate cancer.

Although our understanding of biomarkers and how they can be used to assess prostate cancer prognosis is improving all the time, further studies are needed to improve the identification of the aggressiveness of prostate cancers. The authors of the review article hope that future studies, including analysis of long-term outcomes and the cost-effectiveness of the use of different biomarkers, will improve the effectiveness of identifying patients who are suitable for active surveillance, reduce overtreatment, and further promote the development of personalized medicine.

Why Ureter Stone Relocation after Stenting Can Affect Treatment Decisions

What Is the Main Idea?

Treatment of large ureter stones depends on where they are located in the ureter. In the research article “Impact of Stone Localization before Emergency Ureteral Stenting on Further Stone Treatment”, published in the journal Urologia Internationalis, the authors describe how emergency ureteral stenting can change the location of a ureter stone, potentially changing the treatment approach that will be most effective.

What Else Can You Learn?

In this blog post, ureter stones and their symptoms are discussed. Ureteral stents and the different types of treatment for large ureter stones are also described.

What Are Ureter Stones?

Ureter stones (also known as ureteral stones) are essentially kidney stones that have moved from the kidney into the ureter (the tube that connects the kidney to the bladder, which is about the same diameter as a small vein). The main roles of the kidneys are removing waste products from the blood by filtering it, and making urine so that the waste products can be passed out of the body (excreted). Urine contains many dissolved minerals and salts. If they are present at high levels they can start to form crystals that may clump together into hard, stone-like lumps. Some are small enough to pass along the ureter and out of the body unnoticed, but larger ones may become stuck and block the flow of urine to the bladder.

What Are the Symptoms of Ureter Stones?

If a ureter stone is small, it is unlikely to cause any symptoms, but for larger kidney and ureter stones the most common symptom is pain. The pain can range from mild and dull to intense and unbearable, and can radiate to other areas. People with ureter stones may also experience a need to urinate more frequently and pain or a burning sensation when they do. There may be blood in their urine, which gives it a pinkish color, and they may experience nausea and vomiting. If a person experiences fever or chills, they may have a urinary tract infection (UTI). UTIs can spread to the kidney and cause a type of sepsis called “urosepsis”. It is important that people seek prompt medical treatment if they have any of the above symptoms; sepsis in particular is life-threatening and a medical emergency.

How Are Ureter Stones Treated?

The type of treatment recommended depends on the size, location and composition of the stones. If they are small enough, they can usually be encouraged to pass out of the body by the person drinking up to 3 liters of water per day. If they are larger, they may need to be removed by surgery.

  • Extracorporeal shock wave lithotripsy (ESWL) is a treatment method that uses X-rays or ultrasound from outside the body to break down the stones into particles so that they can pass out in the urine (“lithotripsy” is derived from the Greek words meaning “breaking stones”).
  • Percutaneous nephrolithotomy tends to be used if stones are large or located where it’s difficult for them to be treated by ESWL. A thin telescopic device called a nephroscope (a type of endoscope that is specially designed for looking inside the kidney) is inserted into the kidney through a small incision in the person’s back. Once the stone is located, it is either removed or broken down.
  • Ureteroscopy involves a type of endoscope called a ureteroscope being passed through the urethra (the tube that your urine passes through when it leaves your bladder and passes out of the body), into your bladder and then up into your ureter. Once located, the stone is either removed or broken down using laser energy or shock waves. It can only be performed if the stone is located in the lower half of the ureter.

What Is a Ureteral Stent?

If a person’s ureter is blocked by a ureter stone their urine is unable to drain from the kidney to the bladder properly. This causes the affected kidney to fill with urine and swell, and if the stone blocks the ureter for a long period of time the kidney can become damaged. To prevent this, a ureteral stent (a thin tube that’s placed inside the ureter) can be placed with one end inside the kidney and the other directly inside the bladder so that the urine can flow from one to the other. Emergency insertion of a ureteral stent is often used if a patient is experiencing severe pain and/or has developed urosepsis. However, this can change the location of the stone that’s causing the problem, which potentially changes how it needs to be treated.

What Did This Study Show?

The authors retrospectively analyzed stone locations in 649 patients who were treated by ureteroscopy by looking at their medical records. For 469 patients, the locations of the ureter stones were checked both before emergency stent insertion and before ureteroscopy was performed. They found that around half (45.6%) of the patients had ureter stones that were unintentionally relocated after the insertion of a stent, with around one-quarter (25.4%) experiencing displacement of their stones back into the kidney. Relocation of stones that were initially in the part of the ureter that connects with the kidney (known as the “proximal” ureter) was particularly likely. The authors note that the relocation of ureter stones affects the type of surgery that is most likely to be effective, and suggest that carrying out imaging to double-check the location of stones before surgical treatment may help patients to avoid more complex stone treatment in the future.

Take-Home Message

Neither national nor European guidelines for the diagnosis and therapy of ureter stones currently recommend that imaging to determine stone location be repeated after the insertion of a ureteral stent. Decision-making regarding whether to repeat imaging or the type of surgery depends on the opinions of both the surgeon and the patient. Patients with ureter stones who receive a ureteral stent may wish to discuss repeat imaging with their medical team before a final decision is made about the type of surgery that will be performed.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

Predicting Outcomes after Stroke: How Components of the Blood Clotting System Can Help

What Is the Main Idea?

A stroke happens when the blood supply to part of the brain is cut off or reduced and can be life-threatening. In the open-access article “Clinical Significance of Plasma D-Dimer and Fibrinogen in Outcomes after Stroke: A Systematic Review and Meta-Analysis”, published in the journal Cerebrovascular Diseases, the authors investigate whether there is a relationship between the levels of D-dimer and fibrinogen in blood samples given by people who have experienced stroke and their outcomes.

What Else Can You Learn?

In this blog post, the symptoms and causes of stroke are described. The process of blood clotting and biomarkers are also discussed.

What Is Stroke?

A stroke is a serious medical emergency that can be life-threatening. The oxygen and nutrients that brain cells need to function properly are carried around the brain by the blood. A stroke happens when the blood supply to part of the brain is cut off or reduced, and the brain cells can no longer get all the oxygen and nutrients they need. They quickly begin to die (within minutes), which can cause brain damage and other complications.

There are two types of stroke:

  • Ischemic strokes are the most common (around 85% are this type) and are caused when blockages in blood vessels cut off or reduce the blood supply to part of the brain. The blockages may either develop in the blood vessels inside the brain or develop elsewhere in the body and travel to the brain via the bloodstream.
  • Hemorrhagic strokes are less common (around 15% are this type) and are caused by a blood vessel that supplies blood to the brain rupturing, causing bleeding in or around the brain. As well as causing brain cells to die, the bleeding causes irritation and swelling, and pressure can build up in surrounding tissues. This can lead to more brain damage.

As well as the two types of stroke described above, some people experience “mini-strokes” called transient ischemic attacks (TIAs). A TIA is essentially a stroke caused by a temporary, short-term blockage, so the symptoms do not last long. Once the blockage clears the symptoms stop. Although someone who has a TIA may feel better quickly they still need medical attention as soon as possible, because the TIA may be a warning sign that they will have a full stroke in the near future.

What Are the Symptoms of Stroke?

If someone is having a stroke they need urgent treatment. Don’t hesitate to call for medical help. The quicker they receive treatment the less brain damage is likely to occur. The main symptoms of stroke can be remembered using the word “FAST”.

  • Face: The person may be unable to smile, or one side of their face or their mouth may have dropped.
  • Arms: The person may not be able to lift both arms and keep them there.
  • Speech: The person may not be able to talk or their speech may be slurred; they may also have difficulty understanding what you are saying.
  • Time: Call for medical help immediately if the person has any of these signs or symptoms.

Other symptoms of stroke include sudden severe headache, weakness or numbness on one side of the body, confusion or memory loss, dizziness or a sudden fall, and/or blurred vision or loss of sight (in one or both eyes).

What Are the Effects and Outcomes of Stroke?

The effects of stroke vary from one person to another and depend on the type of stroke, its severity, whether this is the first stroke they’ve experienced, and which part of the brain is affected. Different parts of the brain have different functions, so the effects of stroke in the part of the brain that controls movement and speech can be very different to those in the part that controls breathing and heart functions. Predicting the outcomes of stroke is difficult. Although some people who survive a stroke recover well, others can be left with disabling problems that they never recover from. These can include physical and communication problems; extreme tiredness and fatigue; emotional, behavioral, and memory changes; and thinking problems. Many factors are associated with the outcomes of people who have a stroke, including age, sex, the severity of the stroke, and whether or not they have other conditions such as atrial fibrillation or diabetes. It is hoped that the development of new ways to predict stroke outcomes can help to improve the outcomes of patients and maximize their recovery.

How Can We Predict the Outcomes of Stroke?

Studies have shown that combining a number of biomarkers could improve the accuracy of predicting the outcomes of stroke. Biomarkers are measurable characteristics, such as molecules in your blood or changes in your genes, that indicate what is going on in the body. They can indicate that your body is working normally, or they can show the development or progress of a disease or condition, or the effects of a treatment. Because ischemic stroke is caused by blockages in blood vessels, components of the system that regulates the process of blood clotting may be useful in stroke outcome prediction.

How Does the Blood Clotting System Work?

The blood clotting system (known as the “coagulation” system) plays an essential role in the body’s ability to heal. The system is activated when the lining of a blood vessel is damaged and regulates the process by which liquid blood changes to a gel, forming a blood clot, which stops the bleeding and starts the repair process. The process by which blood clots are formed involves a number of proteins and platelets (a type of blood cell). When a blood vessel is damaged, platelets cluster at the site of damage and bind together to seal it. The platelets have receptors on their surfaces that bind a molecule called thrombin, which converts a soluble protein called fibrinogen into a different form called fibrin. Fibrin can form long, tough, insoluble strands that bind to the platelets and cross-link together to form a mesh on top of the platelet plug. Lots of different molecules are involved in this process, but platelets and fibrin are major players.

While it is important that blood can clot when needed, it is also essential that the process is regulated so that unnecessary blood clots can be broken down. Plasminogen plays an important role in this. It circulates in the bloodstream in a “closed” (inactive) form. When it binds to a blood clot, it opens up, enabling enzymes to cleave (split) it to form a protein called plasmin. Plasmin is able to dissolve fibrin blood clots by cleaving fibrin and many other proteins found in blood plasma (the liquid component of blood that remains when all the blood cells are removed). One of the products formed when plasmin breaks down fibrin is D-dimer, which is often measured in blood samples because it indicates whether or not the blood clotting system has been activated. Increased levels of D-dimer and fibrinogen in blood plasma have been reported to be linked to damage to the blood–brain barrier (this regulates the molecules in the blood that can enter the central nervous system).

What Did This Study Show?

The authors investigated whether or not levels of D-dimer and fibrinogen in the blood are associated with stroke outcomes by conducting a meta-analysis. A meta-analysis is a type of research study that statistically analyses the results of a number of studies that have been conducted independently but that have looked at the same research question. In this study, the authors analysed 52 studies that included 21,473 patients who had had a stroke. The results showed that high D-dimer and fibrinogen levels in blood samples from the patients were significantly associated with poor outcomes such as death after stroke, having another stroke, and early neurologic deterioration (caused by cells in the nervous system stopping working or dying, affecting many of the body’s activities). This indicates that plasma D-dimer and fibrinogen levels could be used to identify patients at higher risk of poor outcomes after stroke so that they can benefit from close monitoring and potentially also preventive treatment. The authors hope that combining D-dimer and fibrinogen as biomarkers in clinical follow-up may help to improve the effectiveness of treatment strategies after stroke and enable them to be tailored to meet the needs of individual patients.
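To give a feel for the kind of statistical pooling a meta-analysis performs, here is a minimal Python sketch of a fixed-effect, inverse-variance meta-analysis. The per-study numbers are invented for illustration only; they are not data from the article, and the authors’ analysis was more sophisticated.

```python
import math

# Hypothetical per-study results: log odds ratio and its standard error for
# the link between high D-dimer and poor outcome (invented numbers).
studies = [
    (0.65, 0.30),  # study 1: (log odds ratio, standard error)
    (0.40, 0.25),  # study 2
    (0.80, 0.40),  # study 3
]

# Weight each study by the inverse of its variance, so larger, more precise
# studies count for more in the pooled estimate.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * log_or for (log_or, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled odds ratio: {math.exp(pooled):.2f}")
print(f"95% CI: {math.exp(pooled - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled + 1.96 * pooled_se):.2f}")
```

Because each study is weighted by the inverse of its variance, larger and more precise studies pull the pooled estimate more strongly towards their own result.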

Effect of Hearing Loss on Cognition and Cognitive Reserve

What Is the Main Idea?

Subjective cognitive decline (SCD) is the self-reported experience of worsening or more frequent memory loss or confusion without clinical evidence for it. In the open-access research article “The Effect of Hearing Loss on Cognitive Function in Subjective Cognitive Decline”, published in the journal Dementia and Geriatric Cognitive Disorders, the authors investigate whether there is a relationship between hearing loss and cognitive function in people with SCD.

What Else Can You Learn?

In this blog post, dementia and particularly SCD are described. Cognition and the concept of cognitive reserve are also discussed.

What Is Dementia?

The term dementia does not describe a single, specific disease. It covers a wide range of conditions, including Alzheimer’s disease and vascular dementia. People with dementia may experience declines in memory, language, problem-solving, attention, reasoning, and other thinking skills to the extent that normal daily activities are affected. Behavior, feelings, and relationships can also be affected. Although dementia mainly occurs in older adults (i.e., people aged over 65 years), it is not a part of normal ageing and is caused by abnormal changes in the brain. For example, Alzheimer’s disease is believed to be caused by two proteins, beta-amyloid and tau, forming plaques around brain cells that make it hard for them to stay healthy and communicate with each other. In contrast, vascular dementia develops when blood flow to parts of the brain is blocked or reduced, preventing them from getting all the oxygen and nutrients they need to function properly.

What Is Subjective Cognitive Decline (SCD)?

Most countries now have rising life expectancies, with the World Health Organization (WHO) estimating that 1 in 6 people in the world will be aged 60 years or older by 2030. An ageing global population and increased understanding of and information about dementia have led to increasing numbers of people reporting changes in cognition and seeking medical help. SCD is the name given when a person self-reports the experience of worsening or more frequent memory loss or confusion (“subjective” means “based on or influenced by personal feelings or opinions”) over the last 12 months. However, there is no objective evidence of cognitive decline, i.e., the results of standardized cognitive tests for mild cognitive impairment (MCI) and Alzheimer’s disease do not indicate that there is a problem. Dementia is a continuum, progressing from MCI to mild, moderate, and eventually severe dementia, and the boundary between SCD and MCI has not been clearly defined. Some individuals report SCD as early as 5 years before MCI is detected by objective test results. It is thought that improved understanding and management may reduce the future effects of SCD.

What Is Cognition and How Is It Assessed?

Cognition is an umbrella term that describes a combination of processes that take place in the brain, such as the ability to learn, remember, and make judgements based on experience, thinking, and information from the senses. These processes affect every aspect of life and our overall health, for example, how we form impressions about things, fill in gaps in knowledge, and interact with the world. A variety of tests have been developed to assess cognitive skills. These include the Rey Complex Figure Test, in which participants are asked to reproduce a complicated line drawing, and the Stroop Color Word Test, in which participants are asked to view a list of words printed in colors that differ from the colors that the words describe (for example, the word “blue” might be printed in yellow ink) and then name the color each word is printed in.
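As a rough, hypothetical illustration of what a single Stroop trial involves (this is not the testing software used in the study), a few lines of Python can generate an “incongruent” trial, where the correct response is the ink color rather than the word:

```python
import random

COLORS = ["red", "blue", "green", "yellow"]

def make_incongruent_trial():
    """Pick a color word and an ink color that deliberately don't match."""
    word = random.choice(COLORS)
    ink = random.choice([c for c in COLORS if c != word])
    return word, ink

word, ink = make_incongruent_trial()
print(f'Shown: the word "{word}" printed in {ink} ink.')
print(f"Correct response: {ink} (the ink color, not the word itself)")
```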

How Is Hearing Loss Thought to Be Linked to Cognitive Decline?

Research studies have identified a link between hearing loss and dementia, and some suggest that hearing loss may be a major risk factor for its development. This may be partly due to something called “cognitive reserve”, which is the idea that people build up a reserve of cognitive abilities during their lives, and that this reserve can protect them against some of the cognitive decline that can happen as the result of ageing or disease. In other words, some brains keep working more efficiently than others despite them experiencing similar amounts of cognitive decline and/or damage. It has been suggested that cognitive reserve may be affected by hearing loss because the cognitive resources (the capacity that a person has to carry out tasks and process information) of people with hearing loss are under greater demand than those of people unaffected by hearing loss. This is due to the increased effort that it takes for people with hearing loss to process auditory (relating to the sense of hearing) information.

What Did the Study Show?

The authors investigated whether hearing loss (as assessed by audiometry, which measures the range and sensitivity of a person’s hearing) affects cognitive function in people with SCD. Participants in the study were aged 60 years or older and were grouped according to whether they had normal hearing or bilateral (affecting both sides) hearing loss. They were then assessed using a series of cognitive tests that evaluate attention, language, visuospatial functioning (the visual perception of the spatial relationships of objects), memory, and executive functions (responsible for processes like planning, focused attention, self-control, and juggling multiple tasks). Participants also gave blood samples so that particular biomarkers could be measured and had magnetic resonance imaging (MRI) scans to look for differences in areas of the brain.

Although there were no differences between the two groups regarding biomarkers and other tests of cognition, the group with hearing loss performed worse in the Stroop Color Word Test. It is not clear why, but the authors suggest that it may be linked to the robustness of the Stroop test and its ability to measure executive function, particularly aspects to do with control of attention. When a person takes the Stroop test, they need to be able to selectively control their attention, so that they can suppress the automatic response of reading the word presented and instead focus on naming the color that the word is written in. If the idea that cognitive reserve is affected by hearing loss is correct, it might explain why the group with hearing loss did worse in the Stroop test. Another possibility is that people with SCD and hearing loss may participate in cognitive and social activities less often than people with unaffected hearing, reducing their cognitive reserve. High-level engagement in social activity and having large social networks is known to be linked to better cognitive functioning in later life.

The authors also found that people in the hearing loss group had smaller volumes of grey matter, one of the main components of the brain, in four brain regions. One of these is a major component involved in memory. However, it is unclear whether there is a causal link between hearing loss and reduced volumes of grey matter, and it may be more likely that they both result from a common cause, such as accelerated aging in some individuals.

How Can You Increase Your Cognitive Reserve?

Keeping your brain and body healthy and active is the best way to increase your cognitive reserve. Activities that engage your brain, such as learning a language or new skill or solving puzzles, as well as high levels of social interaction, are known to reduce your risk of developing dementia. However, doing the same type of puzzle every day isn’t enough. Novelty and variety are needed to stimulate the brain most effectively, even to the extent that deliberately taking routes to places that differ from the ones that you normally take can help. Regular physical activity, not smoking, and a healthy diet are also important.

Take-Home Message

There may be a link between hearing loss and cognition in people with SCD. People with SCD may be at increased risk of developing dementia in the future. As a result, it is important that people with SCD report any signs of hearing loss to healthcare practitioners promptly so that it can be managed effectively to reduce the risk of further cognitive decline. Importantly, everyone can take steps to increase their cognitive reserve and investing in making small, positive lifestyle changes now may pay dividends in the future.

Direct-to-Consumer Genetic Testing: Advantages, Disadvantages, and Ethical Concerns

What Is the Main Idea?

The direct-to-consumer genetic testing (DTC GT) market has grown rapidly over the last 20 years. In the open-access research article “Knowledge and Attitudes about Privacy and Secondary Data Use among African-Americans using Direct-to-Consumer Genetic Testing”, published in the journal Public Health Genomics, the authors describe the personal experiences and perceptions of DTC GT among a group of people who have purchased and used such tests, and discuss the potential implications.

What Else Can You Learn?

In this blog post, gene variations and genetic testing in general are discussed. The advantages and disadvantages of DTC GT and ethical concerns are also described.

What Is Direct-to-Consumer Genetic Testing (DTC GT)?

The DTC GT market has expanded rapidly over the last 20 years as the costs of genetic testing and sequencing have fallen, and was estimated to be worth USD 1.24 billion in 2019. DTC GT involves the use of genetic tests that are marketed directly to consumers without a need for the involvement of a healthcare professional. The tests can be ordered online or by post, and the consumer completes the test at home (usually by rubbing a swab over the inside of their cheek or providing a saliva sample) before sending it away for analysis. The results are then sent by post, given via a telephone call, or accessed via a secure website or app.

How Does DTC GT Work?

Your DNA (deoxyribonucleic acid) carries the genetic information for the growth, development, and function of your body. It is made up of two long chains of units called “nucleotides” that coil around each other to form a double helix. The term “gene” describes a short section of DNA, whereas the term “genome” describes the complete set of genetic material in a cell or whole organism. Some genes have specific functions, like coding for proteins, but others don’t. The human genome is currently estimated to be made up of around 20,500 protein-coding genes, and the number of non-protein-coding genes may be greater.

Genetic testing looks for variations in the DNA sequences that make up genes. Gene variations can be inherited or can occur if a permanent change to the DNA sequence happens during a person’s lifetime. There are different types of variations that can occur in DNA, including the replacement of one nucleotide with another (known as a “substitution”) and the deletion and/or insertion of at least one nucleotide. If a substitution is found in at least 1% of the global population it is classified as a SNP (this stands for “single-nucleotide polymorphism” and is pronounced “snip”). Over 600 million SNPs have been reported to date, and although most SNPs have no effect on people’s health, some are associated with the risk of developing disease.

Many DTC GT test companies use a method called SNP-chip genotyping, which looks for the presence or absence of SNPs and nucleotide insertions and deletions. Other DTC GT companies use genome sequencing, which sequences a person’s entire genetic code and identifies variants that are present. Some companies provide the consumer with “raw”, uninterpreted data that needs to be interpreted by a third party, whereas others provide some form of health information based on their interpretation of the results. For example, some may combine results relating to a group of variants to place someone in a risk category while others notify the consumer if they have tested positive for specific variants implicated in a disease or response to a drug.
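As a very simplified, hypothetical illustration of the last approach, the sketch below combines results for a small group of variants into a risk category. The SNP IDs, risk weights, and cut-offs are all invented; real companies use their own, typically proprietary, models.

```python
# Hypothetical genotyping result: SNP ID -> number of risk alleles (0, 1, or 2).
genotype = {"rs0000001": 1, "rs0000002": 0, "rs0000003": 2}

# Invented per-allele risk weights for each SNP.
risk_weights = {"rs0000001": 0.4, "rs0000002": 0.9, "rs0000003": 0.2}

# Sum the weighted risk alleles to get a simple polygenic-style score.
score = sum(risk_weights[snp] * alleles for snp, alleles in genotype.items())

# Place the consumer in a risk category using invented cut-offs.
if score < 0.5:
    category = "typical risk"
elif score < 1.0:
    category = "slightly elevated risk"
else:
    category = "elevated risk"

print(f"Score: {score:.1f} -> {category}")
```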

Why Do People Opt for DTC GT?

Some of the main advantages of DTC GT are that it is often less expensive than testing conducted via private healthcare providers, and it doesn’t require the approval or involvement of a healthcare professional or insurance provider. The results are also delivered directly to the consumer, so they don’t appear on the person’s medical or insurance records.

The process of collecting a sample for testing is usually quick and non-invasive, and can be done in the user’s home. Many users opt for DTC GT in the hope that they will get clear information about their future health, and may feel that they are being proactive about their wellbeing. Some may feel that they are contributing to knowledge that may help others through future research. Consumers may also be curious to understand more about their ancestry.

What Are the Potential Disadvantages of DTC GT?

DTC GT often only takes a superficial look at particular genes and isn’t designed to diagnose genetic conditions. DTC GT is often good at detecting common genetic variants, but if a variant is detected that is rare, the result can often be a false positive (an error where the test result is positive but should actually be negative). This can be extremely worrying for the consumer if they think they have been identified as having a disease-linked genetic variation.

Equally, a test result could be a false negative (an error where the test result is negative but should actually be positive), and receiving a true negative result doesn’t necessarily mean there is no chance of developing that condition. It may just be that the company used doesn’t test the full set of possible genes that can be tested for that condition, and new variants and their implications are also being discovered all the time. Some companies offer consumers someone to talk to about their results, but consumers may struggle to access qualified advice if they receive a result that causes them to worry. Results can also be obtained that affect the whole family or show that family relationships are different to what is expected.

It is important that consumers understand the limitations of genetic testing. Finding a disease-linked genetic variation doesn’t automatically mean that someone will go on to develop that disease. A person’s health depends on a wide range of factors, not just their genetics. Environmental factors (such as exposure to harmful substances or access to healthcare), lifestyle choices and family history all play a role, and the health advice given to a person is unlikely to change as a result of them undertaking DTC GT.

Consumers also need to bear in mind that there may be implications regarding their health insurance if they undertake DTC GT and get a result that suggests genetic susceptibility to a disease. Some companies might not be clinically accredited (so the results might not be credible or backed by robust data) and there is currently little regulation of companies. As a result, the onus is on the consumer to assess the quality of the service before going ahead with testing. There may be other issues concerning whether the consumer is asked to provide proper informed consent, and some companies might collect, store, sell, or undertake research using the genetic data that they obtain (secondary data use is defined as the use of consumer data for purposes other than producing a report on the consumer’s health and/or ancestry). Law enforcement agencies may also request access to the information that the companies hold during their investigations. Risks associated with the use of secondary data and the reidentification of individuals from their data include civil lawsuits, forensic investigations, problems with immigration, discrimination, and the allocation of health resources.

What Did This Study Show?

The authors used semi-structured interviews to investigate the knowledge of and attitudes to DTC GT of a group of 20 people in the USA who self-identified as Black or African-American. In particular, the participants were asked about their motivations for testing and their views on secondary data use and privacy. The study showed that the participants were generally positive about DTC GT, but had little concrete knowledge about secondary data use practices by DTC GT companies. Few had read the company privacy policies in detail, and most expressed concerns about the privacy of their information. One-half of the group surveyed did not know whether they had the option to opt out of secondary data use by the DTC GT company. Most participants also expressed the opinion that informed consent, following the provision of clear information prior to testing about potential future uses of data, should be required for any secondary use, with the option to opt out of any and all potential future uses. The general view was that DTC GT companies need to improve transparency. In addition, when the authors compared their findings with those of a previous study, which had surveyed European-Americans who had undergone DTC GT, they found that there were differences in themes such as a moral duty to participate in research to redress historical underrepresentation in genetic studies, community considerations, and concerns about racial inequality.

Take-Home Message

DTC GT companies can do more to draw on the consumer experience and improve their consent processes and information, particularly regarding the clarity and accessibility of the language used in their privacy policies. However, the onus remains on the consumer to be informed. People considering DTC GT should look for balanced advice from reputable organisations before making a decision, and should speak to their healthcare professional if they have concerns regarding the possibility of a genetic condition in their family. If they decide to go ahead with DTC GT, consumers should check the privacy policy carefully and be clear on what help the company will provide regarding the interpretation of results.

How Might the Immune System Be Involved in the Development of Schizophrenia?

What Is the Main Idea?

Schizophrenia is a long-term (chronic) mental health condition. In the research article “Autoantibodies against Central Nervous System Antigens and the Serum Levels of IL-32 in Patients with Schizophrenia”, published in the journal Neuroimmunomodulation, the authors investigate whether dysregulation of the immune system may play a role in the development of schizophrenia.

What Else Can You Learn?

In this blog post, schizophrenia and psychosis are discussed, as well as the immune system in general and how disorders can be caused by autoimmune responses.

What Is Schizophrenia?

Schizophrenia is associated with a state of being called “psychosis”, in which a person loses some contact with reality and may be unable to distinguish their own thoughts and ideas from what is real. Key symptoms of active schizophrenia include delusions (where someone has strong beliefs that are not shared by others, for example that someone is trying to communicate important messages to them or that there is a conspiracy against them), hallucinations (where a person experiences things that only exist inside their mind, such as seeing, hearing, smelling, or feeling things that aren’t real), and muddled thoughts as a result of them. People with active schizophrenia may also have disorganized speech; lose interest in their normal social and everyday activities and personal hygiene; have difficulty remembering things, understanding information, making decisions, and focusing; and show a lack of emotion in their face or voice. Although people with schizophrenia are sometimes portrayed as having a split personality or multiple personalities, this is not a feature of the condition.

How Common Is Schizophrenia?

It is estimated that schizophrenia affects around 1 in 300 people worldwide (0.32% of the global population). Although it is believed to affect men and women equally, initial symptoms tend to appear earlier in men (in their late teens and early 20s) than in women (in their 20s and early 30s). Some people with schizophrenia will have episodes throughout their lifetime while others will have minimal symptoms.

What Causes Schizophrenia?

The exact causes of schizophrenia aren’t yet known but research suggests that it is likely to be caused by a combination of factors. Possible environmental factors (external influences that can affect an individual’s health and wellbeing) include increased urbanization, cannabis use in adolescence, infections, and traumatic life experiences. It is believed that some people may be more susceptible to developing schizophrenia, which suggests that genetic factors (things that are inherited from our parents) may be involved. Some research studies have suggested that abnormal or impaired regulation (dysregulation) of the immune system may also play a part.

What Is the Immune System?

The immune system protects your body from things that could make you ill and is divided into two branches: innate (non-specific) and adaptive (specific). The innate immune system defends against harmful germs and substances that enter the body. Key components are inflammation (which traps things that might be harmful and begins to heal injured tissue) and white blood cells (which identify and eliminate things that might cause infection; they are also called “leukocytes”). The adaptive immune system makes antibodies and involves specialized immune cells, which together enable the body to fight specific germs that it has previously come into contact with, sometimes providing lifelong protection.

How Might the Immune System Be Linked to Schizophrenia?

The term “antigen” describes anything that causes a response by the immune system, including chemicals and molecules on the surfaces of bacteria and viruses. The cells in your body also have molecules on their surfaces, but the immune system usually recognizes them as “self-antigens”; in other words, the immune system knows that they are not “foreign” and should not be removed. However, sometimes the body’s immune system starts to recognize self-antigens as foreign ones and begins to attack them. When this happens, it is described as an “autoimmune” response and can result in the destruction of normal, healthy body tissue, or changes in the function or abnormal growth of an organ. More than 80 medical conditions, all very different, are known to be caused by autoimmune responses. They include type 1 diabetes, rheumatoid arthritis, multiple sclerosis, and celiac disease.

Some research studies have suggested that dysregulation of the adaptive and innate immune systems may contribute to the development of schizophrenia. Some have reported that levels of cytokines, a type of protein that has an effect on the activity of the immune system, are higher during acute (short-term, beginning and worsening quickly) schizophrenic episodes and lower when people are receiving treatment. The production of autoantibodies and inflammation may also be involved. Inflammation has been shown to cause a type of cell in the central nervous system called microglia to migrate into blood vessels in the brain. If the inflammation occurs over a prolonged period of time, the microglia in the blood vessels can disrupt the blood–brain barrier. The blood–brain barrier tightly regulates which molecules and cells can move from the body’s general bloodstream into the brain, and plays an important role in preventing infections from developing in it.

What Did This Study Find?

In this study, the authors investigated whether the levels of a cytokine called interleukin-32 (IL-32) differ between people with or without schizophrenia. IL-32 plays an essential role in activating the adaptive and innate immune responses, and upregulates inflammation by causing cells in the immune system to produce cytokines that increase inflammation. They found that levels of IL-32 in blood samples from people with schizophrenia were significantly higher than those in samples from a non-schizophrenia control group, and that levels of other cytokines that promote inflammation were also increased. Increased levels of IL-32 have also been reported in people with autoimmune diseases such as Graves’ disease and rheumatoid arthritis. However, the potential role of autoantibodies attacking self-antigens in the central nervous system in the development of schizophrenia is poorly understood. The authors went on to investigate whether autoantibody levels in people with schizophrenia were higher than in the non-schizophrenia group and found that levels of autoantibodies against an enzyme called GAD were significantly increased. GAD is involved in the production of a neurotransmitter (a signaling molecule that transmits a signal from one nerve to another) called gamma-aminobutyric acid (GABA), and it is known that GABA deficiency in the central nervous system can cause motor and cognitive problems. Dysfunctional neurons that rely on GABA have been seen in the brain in several neurological disorders (which affect the brain and the nerves found throughout the body). In one study, people with psychosis were found to be more than twice as likely to have GAD autoantibodies as people in the general population.

Take-Home Message

It is possible that dysregulation of the immune system plays a role in the development of schizophrenia. However, research in this area is at an early stage and more research is needed to improve our understanding of how schizophrenia develops and can be treated most effectively.

Note: This post is based on an article that is not open-access; i.e., only the abstract is freely available.

How Regular Exercise Can Benefit People Receiving Maintenance Hemodialysis

What Is the Main Idea?

Maintenance hemodialysis (HD) is a way of treating people with end-stage renal disease (ESRD) after they have experienced kidney failure. In the open-access review article “Exercise in Dialysis: Ready for Prime Time?”, published in the journal Blood Purification, the authors discuss the benefits of exercise for people receiving maintenance HD and review how it can be more widely incorporated into clinical care.

What Else Can You Learn?

In this blog post, HD in general and the advantages of regular exercise, particularly for people receiving maintenance HD, are discussed.

What Is Maintenance HD?

The kidneys do several important jobs in the body, including helping to control blood pressure and making red blood cells, and removing waste products and extra water from the body to make urine. If a person’s kidneys stop working (known as “kidney failure”), they will need kidney replacement therapy, in the form of dialysis or kidney transplant, to survive. Kidney failure treated in this way is referred to as ESRD. If a person is treated with maintenance HD, they usually have HD two or three times per week, often in a healthcare setting. During the HD process, the person’s blood leaves their body, goes through a filter in a machine that removes waste products and excess water, and the purified blood is then returned to their body.

Why Is Exercise Important for People Receiving Maintenance HD?

Regular exercise is important for everyone. Among other benefits, people who exercise regularly report that they sleep better and have more energy and muscle strength. Having ESRD has been shown to decrease a person’s level of physical activity and quality of life, partly because people with ESRD are likely to have other medical conditions (termed “comorbidities”). These may or may not be linked to their ESRD, and may contribute to them being physically inactive. However, it is widely acknowledged that people receiving maintenance HD may benefit from increasing their levels of physical activity. Regular exercise may benefit people with ESRD by improving their heart function, muscle strength, and blood pressure control, reducing the risk of diabetes, and helping to prevent anxiety and depression. The benefits of regular exercise described by the general population are similarly reported by people who receive maintenance HD. Those who exercise also perceive that they have better quality of life than those who don’t because they are more able to do the things that they want and have to do in their daily lives (known as “physical function”).

What Is the Evidence that Exercise Can Benefit People Receiving Maintenance HD?

A number of research studies have been conducted that have sought to determine how exercise can benefit patients receiving maintenance HD. Most of these have involved patients using exercise bikes to cycle during HD sessions (termed “intradialytic cycling”), while some include at-home walking schedules and/or light resistance training. Although they have reported that exercise can improve physical function, cardiovascular health, and quality of life, many of the studies have only looked at small numbers of patients. Some have also suggested that the amount and intensity of the exercise that the participants are asked to do may not be enough for there to be significant improvements in their health or quality of life, and that this may be partly due to the comorbidities that they may have. Fatigue, muscle cramping, poor physical function, depression, and a lack of motivation, possibly in addition to serious comorbidities, have all been suggested to be barriers to exercise for patients receiving HD.

Nonetheless, the recently published CYCLE trial has shown that 6 months of intradialytic cycling improved the structure and function of the heart of patients receiving maintenance HD. This was shown by magnetic resonance imaging, with reductions seen in arterial stiffness and the mass of the left ventricle (one of the chambers in the heart), which are both associated with increased risk of a range of cardiovascular problems. Several other recent studies have reported that intradialytic exercise can improve a variety of patient-reported outcomes in people receiving maintenance HD, including reduced cramping, fatigue, and restless leg syndrome.

How Can People Receiving Maintenance HD Exercise Safely?

It is important that people receiving maintenance HD consult their healthcare team before starting a new program of exercise. The National Kidney Foundation suggests walking, swimming, aerobic dancing, and cycling (on an exercise bike or outside) as good options because they involve continuously moving large muscle groups. Low-level strengthening and stretching exercises may also be good options, although heavy lifting should be avoided. As with any new exercise program, it is best to start gently and build up from there. As little as 10 minutes of exercise, 3 days per week on non-consecutive days, can have a positive effect. Importantly, exercise should be paused until the healthcare team can be consulted if the person’s dialysis or medicine schedule or physical condition changes.

How Is Exercise Incorporated into HD Care?

Although there are examples of exercise programs for people receiving maintenance HD in several countries, including Portugal, Germany, Mexico, and parts of Canada, implementation of exercise programs by healthcare providers worldwide remains low. It has been reported that less than 10% of dialysis centers offer exercise programs. This has been attributed by some to nephrologists possibly not feeling confident in their abilities to discuss the topic with their patients, and patients feeling that they don’t have the knowledge to exercise safely. The authors of the review article suggest that lifestyle interventions like exercise programs could be incentivized in HD centers if funding policies were changed to reward improvements in quality of life metrics as well as biochemical factors. Altering the physical environments in HD clinics, to be more inspiring and encouraging, could also help, particularly if exercise equipment was added to a designated space for exercise.

Take-Home Message for Patients

Exercise is as important for people receiving maintenance HD as it is for anyone else, and it can have a wide range of benefits, including on quality of life. People receiving maintenance HD who are interested in increasing their activity levels should consult their clinical team for advice on how to do this safely and consider accessing support and guidance from specialist organizations.

Is Warfarin Treatment Safe for Patients with Atrial Fibrillation and End-Stage Renal Disease Who Transition to Hemodialysis?

What Is the Main Idea?

Hemodialysis is a way of treating people with end-stage renal disease (ESRD) after they have experienced kidney failure. In the free-access research article “Warfarin Use, Stroke, and Bleeding Risk among Pre-Existing Atrial Fibrillation US Veterans Transitioning to Dialysis”, published in the journal Nephron, the authors discuss whether it is safe for patients with atrial fibrillation and ESRD to continue to take warfarin, a medication that reduces the risk of blood clots forming, while they transition to regular dialysis treatment.

What Else Can You Learn?

In this blog post, atrial fibrillation and ESRD are discussed, as well as anticoagulant treatments and what they are used for.

What Is End-Stage Renal Disease (ESRD)?

The kidneys help to control blood pressure and make red blood cells, and remove waste products and extra water from the body to make urine. If a person’s kidneys stop working (known as “kidney failure”) they will need kidney replacement therapy, in the form of dialysis or kidney transplant, to survive. Kidney failure treated in this way is referred to as ESRD. During the hemodialysis process, the person’s blood leaves their body, goes through a filter in a machine that removes waste products and excess water, and the purified blood is then returned to their body.

What Is Atrial Fibrillation?

A normal resting heart rate should be between 60 and 100 beats per minute and be regular. Atrial fibrillation is a heart condition that causes a person’s heart rate to be irregular (known as arrhythmia) and often very fast. The heart is divided into four “chambers”: two at the top called atria and two at the bottom called ventricles. Atrial fibrillation occurs if the atria start to beat irregularly in a way that is out of sync with the ventricles, causing the heart to be less efficient. Although some people with atrial fibrillation do not experience any symptoms, others may experience dizziness, heart palpitations (fluttering or irregular heartbeat), chest pain, shortness of breath, tiredness, and weakness. Importantly, atrial fibrillation can lead to blood clots in the heart and increases the risk of stroke, heart failure, and other heart-related complications.

How Is Atrial Fibrillation Treated?

Although not usually life-threatening, atrial fibrillation often requires treatment. Approaches to control the rate or rhythm of the heart include medications, cardioversion (where a controlled electric shock is given to the heart to restore a normal rhythm), and catheter ablation (where radiofrequency energy is used to destroy the area in the heart that’s causing the abnormal rhythm), which is often followed by a person having a pacemaker fitted to help their heart beat regularly. Because of the increased risk of stroke, people may also receive a type of medication called an anticoagulant.

What Is an Anticoagulant?

Coagulation (blood clotting) is the process by which blood clots are formed to stop bleeding. Although blood clots are an essential response to injury, for example preventing too much blood from being lost via a wound, coagulation can become a problem if blood clots form inside the body and stop blood from flowing through blood vessels, potentially starving the affected part of the body of oxygen. Depending on where a blood clot forms, this can lead to serious problems such as heart attack, deep vein thrombosis, and stroke (or mini-stroke, which is also called a transient ischemic attack). Although they are sometimes called “blood thinners”, anticoagulants don’t thin the blood. They work by reducing the blood’s ability to clot. There are three main types of anticoagulant: medicines that prevent the liver from processing vitamin K in a way that enables it to help clot the blood (these are called vitamin K antagonists), direct oral anticoagulants (also known as DOACs), and low molecular weight heparins. The most commonly prescribed anticoagulant is warfarin, which is a vitamin K antagonist.

Is It Safe for Patients with Atrial Fibrillation with ESRD Who Transition to Hemodialysis to Take Anticoagulants?

Patients with atrial fibrillation are commonly treated with anticoagulants to reduce their stroke risk, and patients with ESRD are at even greater risk of stroke. However, the potential benefit of anticoagulant treatment must be weighed against the person’s risk of bleeding, which is also higher in patients receiving hemodialysis. Several risk scores have been developed to help healthcare practitioners assess this delicate balance, of which the CHA2DS2-VASc score for stroke risk and the HAS-BLED score for bleeding risk are the most widely used. However, neither has been fully assessed for validity in patients receiving dialysis, and it is unclear whether it is safe for patients with atrial fibrillation to continue anticoagulation treatment at the time of transition to hemodialysis. It is also unclear whether patients who are about to transition to hemodialysis have similar stroke and bleeding risks to those who have received hemodialysis for years, or to those with chronic kidney disease who do not receive dialysis.
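To illustrate how these risk scores are tallied, here is a minimal Python sketch of the standard published CHA2DS2-VASc components (HAS-BLED is tallied in a similar way from bleeding-related factors). It shows the scoring arithmetic only and is not a clinical tool.

```python
def cha2ds2_vasc(age, female, heart_failure, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """Tally the CHA2DS2-VASc stroke-risk score from its standard components."""
    score = 0
    score += 1 if heart_failure else 0      # C: congestive heart failure
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2/A: age bands
    score += 1 if diabetes else 0           # D: diabetes
    score += 2 if prior_stroke_tia else 0   # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0   # V: vascular disease
    score += 1 if female else 0             # Sc: sex category (female)
    return score

# Example: a 72-year-old woman with hypertension and diabetes scores 4.
print(cha2ds2_vasc(age=72, female=True, heart_failure=False, hypertension=True,
                   diabetes=True, prior_stroke_tia=False, vascular_disease=False))
```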

In this study, the authors looked at how accurate the CHA2DS2-VASc and HAS-BLED scores are in evaluating the stroke and bleeding risks of patients with atrial fibrillation. They also compared the risks of stroke and bleeding between patients with atrial fibrillation who did and did not receive warfarin as they transitioned to hemodialysis, to assess whether such patients are likely to benefit from anticoagulation treatment.

What Were the Findings of the Study?

The authors studied data relating to veterans of the United States military. Of the 28,620 veterans who had atrial fibrillation before they transitioned to hemodialysis, 19% were treated with warfarin in the 6 months before transition while 81% didn’t receive any anticoagulation treatment. Of those receiving warfarin at the time of transition, 37% discontinued warfarin treatment after transition. Although the initial analyses showed that the risks of bleeding and stroke were similar between the groups taking or not taking warfarin, the authors went on to use a statistical approach called competing risk analysis to take the effect of mortality (death) into account. This time, the risk of stroke after transition was 44% greater for those receiving warfarin, and the risk of bleeding was 38% greater.

Overall, the study suggests that warfarin may not lower the risk of stroke in patients with atrial fibrillation who transition to hemodialysis. Importantly, the authors found that patients with atrial fibrillation who transition to hemodialysis while receiving warfarin may have significantly higher bleeding and stroke risks than those who do not receive warfarin. The authors suggest that warfarin treatment should be re-evaluated at the time of transition to hemodialysis and should not be used for primary stroke prevention in people with atrial fibrillation who are receiving hemodialysis. However, newer anticoagulants like direct oral anticoagulants (DOACs) may be safer than warfarin, and studies are needed to assess whether patients with atrial fibrillation who transition to dialysis may benefit from switching treatment to them.

Take-Home Message for Patients

Warfarin treatment to reduce the risk of stroke in people with atrial fibrillation who are transitioning to hemodialysis may not be as safe as treatment with newer anticoagulant medications. People who are concerned should consult their clinical team.

How Can Telehealth Aid Genetic Testing for Cancer?

What Is the Main Idea?

Some people have mutations (changes) in their genes that increase their risk of developing particular types of cancer. In the open-access research article “Evaluating the Effectiveness of a Telehealth Cancer Genetics Program: A BRCA Pilot Study”, published in the journal Public Health Genomics, the authors describe the use of a telehealth platform for BRCA education and testing in people of Ashkenazi Jewish descent (PAJD) in the USA.

What Else Can You Learn?

In this blog post, tumor suppressor genes and their roles in the body are discussed, along with how telehealth can be used to aid genetic testing and education, and the role of genetic counseling.

What Is BRCA?

BRCA (pronounced “bra-ka”) is an abbreviation of BReast CAncer gene (genes are short sections of DNA that carry the genetic information for the growth, development, and function of your body, often in the form of instructions to make proteins). Everyone has two copies of two types of BRCA gene, called BRCA1 and BRCA2. They are both tumor suppressor genes.

What Are Tumor Suppressor Genes?

Tumor suppressor genes code for a type of protein, called tumor suppressor proteins, that help to control cell growth. They tend to play one of three roles: stopping cells from dividing and producing new cells, repairing damaged DNA, or causing damaged cells to be broken down through a process called apoptosis (this is also known as programmed cell death and is a normal part of development and aging; it removes cells that have become damaged or that are no longer needed). If a tumor suppressor gene gains a mutation (a change in the DNA code), the protein that it codes for may no longer be produced or may not work properly. As a result, the cell may start to grow and divide uncontrollably, which can eventually lead to the development of cancer.

How Are BRCA1 and BRCA2 Linked to Cancer?

Both BRCA1 and BRCA2 code for proteins that help repair damaged DNA. Everyone gets one copy of each gene from their mother and another from their father. If someone has a mutation in one of the BRCA genes that stops it from working properly, they will have a higher risk of developing cancer over their lifetime than someone that does not have a mutated BRCA gene. In particular, BRCA mutations are linked to breast and ovarian cancer. Although most cases of breast and ovarian cancer are thought to be sporadic (i.e., they develop in people who do not have a family history of that cancer or a DNA mutation that is known to increase their risk of developing it), 5–10% of cases are inherited. Of these cases of inherited breast and ovarian cancer, around 60% are caused by mutations in BRCA1 and BRCA2. All people with a BRCA mutation are at increased risk of developing breast and pancreatic cancer, and melanoma. Women and men are also at increased risk of developing ovarian and prostate cancer, respectively.

Are Harmful BRCA Mutations More Common in Some Populations than Others?

Individuals in some populations are more likely to have harmful BRCA mutations than others, and different racial/ethnic populations also tend to have different types of mutations. Although the exact prevalence of BRCA1 and BRCA2 mutations that can lead to cancer in the general population is not known, it’s estimated to be around 1 in 400 (0.2–0.3%). Norwegian, Dutch, and Icelandic peoples are known to have common mutations, as are PAJD. The prevalence of BRCA mutations in PAJD is around 1 in 40 (around 2.5%, which is 10 times greater than that of the general population), and in the USA, the mutations tend to be one of three types: two in BRCA1 and one in BRCA2. These are described as “founder mutations”: mutations that occur at high frequency in a group that is, or was, geographically or culturally isolated.

What Did the Research Article Investigate?

Although BRCA mutations are known to be more common in PAJD than the general population, it is not known whether rates of BRCA mutations are similar between those with or without a family or personal history of cancer. For many years, US national BRCA testing guidelines were defined by personal or significant family histories of cancer and/or the presence of mutations that were frequently found in families. These guidelines are often used by health insurance companies to set rules regarding what is covered. However, the national testing criteria have now expanded and the US Preventive Services Task Force has specified that being a PAJD is a risk factor. Despite this, in the USA, PAJD who do not have a family or personal history of cancer are not eligible for BRCA testing under their health insurance. Some have suggested that all PAJD should be offered BRCA testing, partly because it would reduce the number of cases of breast and ovarian cancer, which would save money and lives. All individuals with a BRCA mutation have a 50% chance of passing it on to future children, whether or not they have a family or personal history of cancer.

The authors of the above-mentioned article used a telehealth-based platform for BRCA education and testing with the goal of creating an effective model for BRCA testing in low-risk PAJD who do not meet US national testing criteria. They also sought to determine the rate of BRCA mutation in this group, to see if it is the same as in those with a family or personal history of cancer. The participants (501 people) received pre-test education in the form of a video and written summary, followed by complimentary BRCA1/2 testing (to determine whether there were any mutations), and post-test genetic counseling.

What Is Telehealth?

Telehealth, sometimes called telemedicine, describes the distribution of health-related services and information through the use of digital information and communication technologies. The advantages include increased access to healthcare for people in rural areas or those with transport or mobility difficulties, and reduced costs. However, there are also disadvantages, including potential technical issues, the need for stable internet access, and (particularly in the USA) issues around billing and licensure between states. Examples of telehealth include apps on smartphones, test results being sent to a specialist, home monitoring through patients continuously sending health data, robotic surgery controlled by a surgeon at a different location, and health consultations such as genetic counseling using video conferencing rather than an in-person visit.

What Is Genetic Counseling?

Genetic counseling gives people information about how genetic conditions or specific gene mutations might affect them and/or their families. In relation to cancer, a genetic counselor can evaluate a person’s risk of getting certain types of cancer based on their family history. They can also help them decide whether or not to have genetic tests, explain the different types of test available, help work out whether some of the costs of testing are covered by the person’s medical insurance, and make suggestions for additional testing based on the results.

What Did the Study Find?

The study identified the rate of BRCA founder mutations in the low-risk PAJD participants to be around 0.6%, significantly lower than the generally reported rate of 2.5% for PAJD. However, one participant was found to have a non-founder mutation and the authors noted that, had only founder mutations been screened for, the participant’s BRCA mutation would not have been identified. There is a risk that carriers of BRCA mutations can be missed if only the presence of founder mutations is tested, with one potential reason being that many individuals that identify as Ashkenazi Jewish are actually of mixed Jewish ancestry.

The study also identified that most of the individuals that registered for the study but who did not participate because they did not meet eligibility criteria (because of their family histories) did not follow up with genetic counseling and testing, despite being sent information about its importance. Many of these individuals noted that this was, at least in part, because they had concerns about the ease of accessing genetic counseling. Telehealth has the potential to make this less of a problem. Of the PAJD that did take part in the study, feedback was very positive, with 97.9% stating that they were satisfied with the pre- and post-test education provided, and 99.5% stating that their post-test genetic counseling session was valuable.

What’s the Take-Home Message?

Individuals in populations with known founder mutations may benefit from considering genetic testing and counseling, whether or not they have a family or personal history of cancer. Genetic testing and counseling through telehealth is a good model for those that do not wish to, or cannot, access traditional in-person genetic counseling.

How Can Analyzing microRNAs in Blood Serum Improve the Diagnosis of Lung Cancer?

What Is the Main Idea?

Serum biomarkers can be detected by analyzing blood samples. In the open-access research article “Screening of Serum miRNAs as Diagnostic Biomarkers for Lung Cancer Using the Minimal-Redundancy-Maximal-Relevance Algorithm and Random Forest Classifier Based on a Public Database”, published in the journal Public Health Genomics, the authors describe an approach for screening serum miRNAs to see whether they are useful as diagnostic biomarkers for lung cancer.

What Else Can You Learn?

In this blog post, RNAs (particularly microRNAs) and their roles in the body are discussed, along with how serum biomarkers can aid the early diagnosis of lung cancer.

What Is a Serum Biomarker?

The term “biomarker” is short for “biological marker”. Biomarkers are measurable characteristics, such as molecules in your blood or changes in your genes, that indicate what is going on in the body. They can indicate that your body is working normally, the development or progress of a disease or condition, or the effects of a treatment. Serum biomarkers are biomarkers that can be detected by analyzing blood samples that are taken from patients (sometimes called “liquid biopsies”). Whole blood is made up of red blood cells, white blood cells, platelets, and clotting factors in a liquid called plasma. Serum is the liquid that you have left if all the cells and clotting factors are removed from the blood.

What Are the Advantages of Serum Biomarkers?

Because serum biomarkers can be easily obtained from samples taken during a standard blood test, it is relatively cheap to obtain large enough samples for analysis. In addition, the healthcare practitioners that take the samples do not need any specialist expertise. For these reasons, studies are underway to investigate how serum biomarkers can be used to diagnose a wide range of conditions, including cancer.

What Did the Research Article Investigate?

Lung cancer is one of the most common types of cancer and the leading cause of cancer death, accounting for nearly one in five cancer deaths worldwide in 2020. It can start in any part of the lungs or the airways that lead to the lungs from the windpipe (trachea), and is difficult to detect in its early stages. As with many other types of cancer, patients with lung cancer have better outcomes if their tumors are detected early. In this study, the authors investigated whether molecules found in blood serum called microRNAs have potential as biomarkers for the diagnosis of lung cancer and tested a method to identify them more efficiently. They screened 416 microRNAs and identified 5 that were present at different levels in people with lung cancer compared with people without lung cancer.
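The sketch below, which assumes the scikit-learn library is available, illustrates the general two-step idea named in the article’s title: rank candidate features, keep a small informative subset, and then classify with a random forest. It uses randomly generated stand-in data, and simple mutual-information ranking stands in for the full minimal-redundancy-maximal-relevance (mRMR) algorithm, so it is an outline of the approach rather than the authors’ actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 416))    # 200 samples x 416 miRNA levels (random stand-ins)
y = rng.integers(0, 2, size=200)   # 1 = lung cancer, 0 = control (random stand-ins)

# Rank miRNAs by mutual information with the class label and keep the top 5.
mi = mutual_info_classif(X, y, random_state=0)
top5 = np.argsort(mi)[-5:]

# Train and evaluate a random forest on the 5 selected miRNAs.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X[:, top5], y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```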

What Are microRNAs?

Your genes are short sections of DNA (deoxyribonucleic acid) that carry the genetic information for the growth, development, and function of your body. Each gene carries the code for a protein or an RNA (ribonucleic acid). Proteins do most of the work in cells and have lots of different functions in the body, including structural roles, catalyzing reactions (enzymes), and acting as signaling molecules. There are several different types of RNA, each with different functions, and they play important roles in normal cells and the development of disease.

Messenger RNAs are single-stranded copies of genes that are made when a gene is switched on (expressed). In a cell, long strings of double-stranded DNA are coiled up as chromosomes in a part of the cell called the nucleus (the cell’s command center). Chromosomes are too big to move out of the nucleus to the place in the cell where proteins are made, but a messenger RNA copy of a gene is small enough. In other words, messenger RNA carries the message of which protein should be made from the chromosome to the cell’s protein-making machinery.

MicroRNAs are much smaller than messenger RNAs. They do not code for proteins but instead play important roles in regulating genes. They can inhibit (silence) gene expression by binding to complementary sequences in messenger RNA molecules, stopping their “messages” from being read and preventing the proteins they code for from being made. Some microRNAs also activate signaling pathways inside cells, turning processes on or off.
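The idea of complementary binding can be illustrated with a few lines of Python. The sequences below are invented, and the perfect-match rule is a simplification; real microRNA–messenger RNA binding is antiparallel and tolerates some mismatches.

```python
# In RNA, A pairs with U and G pairs with C.
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def is_complementary(mirna_site, mrna_site):
    """Check whether two equal-length RNA sites pair base-for-base."""
    return all(PAIRS[a] == b for a, b in zip(mirna_site, mrna_site))

print(is_complementary("AUGGC", "UACCG"))  # True: every base pairs
print(is_complementary("AUGGC", "UACCA"))  # False: the last base mismatches
```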

Why Do microRNAs Have Potential as Serum Biomarkers?

MicroRNAs are present in body fluids such as urine, saliva, and blood. In addition, unlike some types of molecule that are relatively “unstable” and break down quickly, microRNAs that circulate in the blood are very stable. As a result, collecting samples is relatively cheap and easy, and microRNAs can also be easily detected and quantified in diagnostic laboratories.

How Are microRNAs Involved in Cancer?

MicroRNAs are involved in different types of cancer in a variety of ways. They may be expressed at abnormally high or low levels, affecting whether or not cells start to divide and multiply, or can enable cells to avoid processes that would normally cause cell death (this process is called “apoptosis”; it maintains the balance of cells in the body and removes cells that have become damaged). If microRNAs are expressed at different levels in cancer cells compared with normal cells, they could be used to indicate the presence of cancer in the body and aid earlier diagnosis. Different levels of particular microRNAs can also indicate the likely prognosis of patients with some types of cancer, and one particular microRNA (miR-506) has been shown to promote the apoptosis of cervical cancer cells.

What’s the Take-Home Message?

Over the last decade, biomarker testing has become a crucial part of optimizing the diagnosis and treatment of lung cancer. MicroRNAs in serum are biomarkers that are easy to collect and analyze, and show promise for screening to diagnose lung cancer at an early stage in the future.

Neuroblastoma: A Rare Condition in Adults

What Is the Main Idea?

Neuroblastoma is a type of solid tumor that is rare in adults. As a result, the disease course of neuroblastoma in adults is not well studied and there is no guideline-recommended chemotherapy strategy specifically for adults. In the open-access article “Adrenal Neuroblastoma Producing Catecholamines Diagnosed in Adults: Case Report”, published in the journal Case Reports in Oncology, the authors describe the case of a 24-year-old female patient and discuss considerations regarding the care of adults with neuroblastoma.

What Else Can You Learn?

In this blog post, neuroblastoma is discussed, along with issues relating to the treatment of adult patients compared with children and common complications that may develop. Embryogenesis and the role of the sympathetic nervous system are also addressed.

What Does the Case Report Describe?

In this case report, the case of a 24-year-old female patient with neuroblastoma is described. Although classification of her tumor according to an international staging system suggested that it was unlikely to be an aggressive tumor, it recurred 4 months after surgery and she needed further drug treatment.

What Is Neuroblastoma?

Neuroblastoma is a type of solid tumor, which means that it forms an abnormal mass (lump) of tissue that doesn’t usually contain any liquid areas. In childhood, neuroblastoma is the most common type of solid tumor that develops outside of the cranium (the bones that surround the brain) and the third most common childhood cancer worldwide. Neuroblastomas can develop at any location in the sympathetic nervous system, but they most commonly develop in the adrenal medulla, the inner region of the adrenal glands, which are located on top of the kidneys.

What Is the Sympathetic Nervous System?

The sympathetic nervous system is part of the autonomic nervous system (which controls things that you do without thinking about them, so it can be thought of as the “automatic” nervous system). The sympathetic nervous system controls rapid, involuntary responses of the body to dangerous or stressful situations, or when you are physically active. These include increasing your heart rate, improving oxygen delivery to your lungs, activating energy stores in your liver so that you can use energy quickly, and slowing down your digestion so that energy being used to digest food can go to other areas of the body that need it. Most of the signals that the sympathetic nervous system sends start in the spinal cord (a long, tube-like band of tissue that runs through the center of your spine, connecting your brain to your lower back) and are relayed all over your body. To communicate, the sympathetic nervous system uses chemicals called neurotransmitters. One family of neurotransmitters is the catecholamines, which include dopamine and epinephrine (adrenaline). Catecholamines are made by the brain and adrenal glands. Once they have been used, they are removed from your body via the urine.

How Does Neuroblastoma Develop?

When a human egg is fertilized, an embryo starts to develop through a series of processes that are together called “embryogenesis”. These processes include cell division and growth, and different groups of cells begin to develop that have specific roles in the body (this is called differentiation). In vertebrates, a temporary group of cells called neural crest cells goes on to give rise to cells with very different roles, including nerve cells, melanin (pigment)-producing cells, and smooth muscle cells. It is thought that neuroblastoma can develop if neural crest cells gain mutations and changes during embryogenesis that disrupt their differentiation, although it’s not yet clear how.

Can Neuroblastoma Occur in Adults?

Neuroblastoma is considered by many to occur almost exclusively in children, with more than 90% of patients diagnosed before 10 years of age. However, although neuroblastoma is rare in adolescents and even rarer in adults, cases do occur, and while the clinical course of childhood neuroblastoma tends to be benign (mild), with some patients aged less than 1 year experiencing remission, there is evidence that the course of the disease in adults is more severe. Unfortunately, the fact that adult neuroblastoma is so rare means that it is not well understood.

What Are Specific Considerations for the Treatment of Adults with Neuroblastoma?

Children diagnosed with neuroblastoma are usually treated with intense polychemotherapy (chemotherapy involving several different drugs), which has been reported to be poorly tolerated by adult patients. However, there is currently no specific chemotherapy regimen for adult patients, so polychemotherapy is used with adjustments made on an individual-patient basis, depending on the needs of the patient and their ability to tolerate the drugs. As well as the treatment that adult patients receive, a key consideration is the level of catecholamines in the patient’s urine.

Why Are Catecholamine Levels Important?

As mentioned earlier, catecholamines are neurotransmitters that are used by the sympathetic nervous system and are involved in stress responses. It has been reported that 85–90% of patients with neuroblastoma have increased levels of catecholamines. As well as potentially indicating the presence of a rare tumor in the adrenal glands, high catecholamine levels can cause high blood pressure. As a result, patients with neuroblastoma can be at increased risk of stroke, kidney failure, and cardiovascular complications in the future, like arterial stiffness and thickening of the ventricles in the heart. Care needs to be taken so that high catecholamine levels and any complications are detected and followed up effectively.

What’s the Take-Home Message?

Although neuroblastoma in adults is rare, cases do occur. It is particularly important that the treatment of adults with neuroblastoma is tailored to the individual and that any cardiovascular complications are detected and followed up properly. If a person has a high catecholamine level in their urine, the cause should be investigated because it may indicate the presence of a rare adrenal tumor.

How Mathematical Modelling Can Increase Digital Biomarker Use in Drug Development

What Is the Main Idea?

Digital biomarkers enable data from “smart” devices to be used to track health-related trends and patterns. In the open-access research article “Quantifying the Benefits of Digital Biomarkers and Technology-Based Study Endpoints in Clinical Trials: Project Moneyball”, published in the journal Digital Biomarkers, the authors show how mathematical modelling, using Parkinson’s disease as an example, can help solve some of the problems that have limited the use of digital biomarkers in drug development to date.

What Else Can You Learn?

In this blog post, biomarkers and their use in healthcare are discussed, along with Parkinson’s disease and how digital biomarkers can aid the development of future treatments.

What Are Biomarkers?

The term “biomarker” is short for “biological marker”. Unlike symptoms, which are things that you experience, biomarkers are measurable characteristics that indicate what is going on in the body. Your blood pressure, levels of molecules in your urine and blood, and your genes (DNA) are all biomarkers. Although they can suggest that your body is working normally, they can also show the development or progress of a disease or condition, or the effects of a treatment.

How Are Biomarkers Used to Help Patients?

Over the last decade, biomarker testing has started to transform the way that some diseases and conditions are treated. The development of treatments targeted against specific biomarkers, and the ability to identify treatments that are not likely to work in certain patients, offer hope of better outcomes through more personalized treatment.

What Are Digital Biomarkers?

The term “digital biomarker” is used to describe behavioral and physiological data that are quantifiable and objective (not influenced by personal feelings or opinions), collected and measured by portable, wearable, implantable, or digestible digital devices. Many people now use “smart” devices like watches and phones, and the large quantities of data that they collect can be paired with analytical tools to track trends and patterns, both for individuals and across populations.

How Can Digital Biomarkers Help Develop New Treatments?

Although digital biomarkers have the potential to have a significant impact on drug development, only a few have made meaningful contributions to bringing new treatments into the clinic. Understandably, pharmaceutical companies have been wary of adopting digital endpoints (events or outcomes that can be objectively measured in clinical trials to determine whether treatments being studied are beneficial) until they are fully proven. In 2019, stride velocity 95th centile (SV95C) became the first digital biomarker to be qualified by the European Medicines Agency (EMA) as a suitable endpoint for use in clinical trials researching treatments for Duchenne muscular dystrophy, a genetic disorder characterized by progressive muscle degeneration and weakness. Measured by a device worn at the ankle, SV95C represents the speed of the fastest strides taken by the wearer over 180 hours. Interestingly, when the researchers involved in SV95C’s development analyzed the number of people from whom data would need to be collected in a clinical trial to get a statistically significant sample, it was 70% lower than if more traditional endpoints like the 6-minute walk test were used. This shows the potential of digital biomarkers to help new treatments go through clinical trials. Other potential benefits include more accurate selection of patients to take part in trials and better endpoint measurement.
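
As a concrete (and much simplified) illustration of how such an endpoint is computed, the Python sketch below takes a set of stride velocities, assumed to have already been extracted from 180 hours of ankle-sensor data, and reports their 95th centile. The numbers are simulated; detecting individual strides from raw sensor signals is the hard part and is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stride velocities (m/s) from an ankle-worn device over a
# 180-hour wear period; a real device would first have to detect each
# stride from raw accelerometer data.
stride_velocities = rng.gamma(shape=4.0, scale=0.2, size=20_000)

# SV95C is the 95th centile of the distribution: the speed of the
# wearer's fastest strides, not the average stride.
print(f"SV95C: {np.percentile(stride_velocities, 95):.2f} m/s")
```

Using a high centile rather than the average makes the endpoint reflect the wearer’s fastest everyday walking, while still being based on long, real-world recordings rather than a single clinic visit.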

What Did the Research Article Investigate?

The authors identified five gaps that pharmaceutical companies and technology providers need to address to increase the use of digital biomarkers in drug development:

  1. Biomarker measurements and objectives of treatment not being aligned.
  2. Differences in financial models (technology companies often expect quicker returns on investment than pharmaceutical companies).
  3. Assumptions that fast technological development in consumer technologies can be quickly translated to make regulated health devices.
  4. Uncertainties about possible impacts of digital biomarkers in clinical trials.
  5. Different value frameworks of the companies and researchers involved.

They then designed a proof-of-concept project called Moneyball (named after a book about baseball), which used mathematical modelling to try to address gaps 4 and 5 using Parkinson’s disease as an example (the disease model was deliberately oversimplified to limit the scope of the project). The authors assessed whether such an approach was useful and also discussed their ideas for technology inclusion with clinical development teams at pharmaceutical companies to get their feedback.

What Is Parkinson’s Disease?

Parkinson’s disease is a neurodegenerative disorder (a disorder that involves degeneration of the nervous system) that is characterized by motor impairments (partial or total loss of function of a body part) like tremor, slowness of movement (bradykinesia), uncontrolled involuntary movement (dyskinesia), and walking (gait) abnormalities. It usually develops in late adulthood and progresses over several decades. There is currently no approved treatment that can slow or stop the progression of the disease; available treatments only manage its symptoms.

How Can Digital Biomarkers Help Patients with Parkinson’s Disease?

As with other neurodegenerative disorders, the diagnosis and assessment of Parkinson’s disease can be subjective (the opposite of objective: influenced by personal feelings and opinions). It often involves invasive or expensive procedures like lumbar punctures (also known as “spinal taps”), where a thin needle is inserted between the bones in the patient’s lower spine, usually to collect some fluid for analysis, and positron emission tomography (PET) scans, which produce detailed three-dimensional images of the inside of the body. Lack of precision in assessing endpoints is a common problem in the treatment of Parkinson’s disease, and clinical trials often end in failure because not enough objective, high-quality data can be gathered. Digital biomarkers may help solve some of these problems by measuring changes in gait and speech, loss of automatic movements (reflexes), and slowed movement.

What Did the Authors Conclude?

Although the feedback that they received about their modelling approach was largely positive, some companies noted that it can be challenging to obtain the technology performance data needed to run the calculations. Nonetheless, the authors believe that their approach can identify technology-enabled measurements that will have a meaningful impact, aid the quantification of the benefits and costs of digital biomarker technologies during the design phase of clinical trials, and help resources (particularly money) to be allocated years before pivotal clinical trials begin. They hope that their work will increase collaboration between technology and pharmaceutical companies so that the potential of digital biomarkers to speed up the development of new therapies for a range of diseases can be realized.

Note: The authors of this paper make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.

Kangaroo Care Does Not Adversely Affect Oxygenation of Babies Born Preterm

What Is the Main Idea?

Kangaroo care is a method that puts babies born preterm or newborns in skin-to-skin contact with their parents. In the open-access review article “Impact of Kangaroo Care on Premature Infants’ Oxygenation: Systematic Review”, published in the journal Neonatology, the authors analyze and discuss the combined findings of studies that have investigated the long-term physiological effects of kangaroo care on babies born preterm compared with standard incubator care.

What Else Can You Learn?

In this blog post, general care of preterm babies is discussed, along with the method of kangaroo care and its advantages.

What Does It Mean If a Baby Is Born Preterm?

A premature birth is one that takes place more than 3 weeks before the baby’s estimated due date (at 40 weeks), in other words, before the 37th week of pregnancy. Babies born between 34 and 36 completed weeks of pregnancy are classed as “late preterm”, those born between 32 and 34 weeks as “moderately preterm”, those born at less than 32 weeks as “very preterm” and those born at or before 25 weeks as “extremely preterm”. Premature birth usually means that a baby will need to be cared for in hospital for longer than a baby born at term, with the amount of time influenced by how early he or she is born. Depending on how much care the baby needs, he or she may be admitted to an intermediate-care nursery or a neonatal intensive care unit (NICU).
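
Purely to make these cut-offs concrete, here is a small Python function that applies the categories exactly as they are described above; note that definitions of the categories can vary slightly between organizations.

```python
def preterm_category(completed_weeks: int) -> str:
    """Apply the categories described above to completed weeks of pregnancy."""
    if completed_weeks >= 37:
        return "born at term"
    if completed_weeks <= 25:
        return "extremely preterm"
    if completed_weeks < 32:
        return "very preterm"
    if completed_weeks < 34:
        return "moderately preterm"
    return "late preterm"  # 34-36 completed weeks

for weeks in (40, 36, 33, 30, 24):
    print(weeks, "weeks ->", preterm_category(weeks))
```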

What Affects Whether a Baby Is Born Preterm?

There are some known risk factors associated with premature birth. These include: the mother having had a previous premature birth, or multiple miscarriages or abortions; an interval of less than 6 months between pregnancies; smoking cigarettes or using illicit drugs; some infections and chronic conditions; stressful life events, physical injury or trauma; and being under- or overweight before pregnancy. However, the specific cause is often not clear, and many women who have a premature birth have no known risk factors.

How Can Being Born Prematurely Affect a Baby?

Although some babies born prematurely do not have any complications, generally speaking, the earlier a baby is born the greater the risk. Birth weight also plays an important role. Some complications that may be apparent at birth include breathing, heart and temperature control problems, and babies may also have issues related to metabolism, the blood and the immune system (particularly increased risk of infection). Longer term, they are at increased risk of complications including cerebral palsy, chronic health issues, vision, hearing and behavioral problems, and developmental delay. Because complications at birth can influence the development of longer-term issues, babies admitted to an NICU are closely monitored by the medical team and things such as the baby’s heart rate and oxygenation (oxygen levels inside the body) are frequently checked. They are also at increased risk of developing hypothermia if they have difficulty regulating their body temperature, so are usually cared for in incubators. This helps the baby maintain an optimum temperature and can also protect him or her from noises and direct light, which can cause stress.

What Is Kangaroo Care?

Although incubator care is very effective, kangaroo care is an important component in the care of babies born both prematurely and at term. Kangaroo care is described by the World Health Organization (WHO) as a method of care consisting of putting babies in skin-to-skin contact with their parents. Skin-to-skin contact is known to be effective for thermal control, breastfeeding and bonding, regardless of setting, weight, gestational age and clinical conditions, and is recommended for all newly born babies whether they are born preterm or not. In kangaroo care, the baby wears only a nappy or diaper (and often also a hat), and is placed in a flexed (fetal) position on the parent’s chest. The baby can be secured with a wrap that goes around the naked torso of the parent, ensuring that the baby is properly positioned and supported, or both parent and baby can be covered with a blanket, gown or clothing for warmth. Kangaroo care can even be given if the baby is attached to tubes or wires, as long as the parent stays close to the machines.

What Are the Advantages of Kangaroo Care?

The skin-to-skin contact of kangaroo care provides physiological and psychological warmth and bonding to both the parent and baby. Because the parent’s body temperature is stable, it regulates the temperature of a premature baby more smoothly than an incubator. Babies born preterm that receive kangaroo care also experience more normalized heart and respiratory rates, increased weight gain and fewer hospital-acquired infections. Other benefits include the promotion of frequent breastfeeding, improved sleep/wake cycle and cognitive development, decreased stress levels and positive effects on motor development. There are advantages for the parent as well, with kangaroo care helping to promote attachment and bonding, decrease parental anxiety, improve parental confidence, and promote increased milk production and breastfeeding success. However, to date, studies on the physiological stability of preterm babies during kangaroo care have reported conflicting results.

How Does Kangaroo Care Affect Oxygenation in Premature Babies?

Uncertainties regarding the effects of kangaroo care on oxygen saturation (the oxygen level in the blood) and “regional” cerebral oxygen saturation (i.e. relating to the brain) were investigated through a systematic review of research articles that assessed oxygenation, using pulse oximetry and near-infrared spectroscopy, during kangaroo care in NICUs. Pulse oximetry is non-invasive and pain-free, involves a clip-like device being placed on a body part such as a finger or ear lobe, and uses light to measure how much oxygen is in the blood. Near-infrared spectroscopy is also non-invasive and can continuously monitor regional oxygen saturation. This is important for babies born preterm because early detection of low cerebral oxygen saturation can prevent irreversible cerebral damage that can lead to cerebral palsy.

What Do the Results of the Systematic Review Show?

In total, the results of 25 research articles were analyzed, which documented data for 1,039 premature babies undergoing kangaroo care at three different study points: pre-, during and post-kangaroo care. Although the results of the systematic review cannot be extended to premature babies requiring critical care (described in the review as “unstable”), “stable” premature babies showed no significant differences in heart rate, oxygen saturation in the arteries (blood vessels that carry oxygen-rich blood away from the heart to the tissues of the body) or fractional oxygen extraction (the balance between oxygen supply and demand) compared with routine incubator care. Regional cerebral oxygen saturation also remained stable, with a slight upward trend. Although most of the studies included in the review were observational (where outcomes are simply observed, without participants being randomly assigned to a treatment or comparison group) and further studies are needed, the authors conclude that stable preterm babies receiving kangaroo care, whether or not they also receive respiratory support, are as physiologically stable as those receiving routine incubator care.

Take-Home Message for Parents

Parents of babies born preterm can be reassured that the many benefits of kangaroo care in the NICU do not come at the cost of their baby’s oxygenation. Although more research is needed, there is no evidence that premature babies receiving kangaroo care are less physiologically stable than those that receive only routine incubator care.

Kidney Failure: How Peritoneal Dialysis Has Helped Reduce COVID-19 Infections

What Is the Main Idea?

Peritoneal dialysis (PD) enables people with kidney failure to conduct dialysis at home by themselves. During the COVID-19 pandemic, increased use of PD has helped to limit the spread of COVID-19 in this vulnerable patient population. In the open-access review article “Should More Patients with Kidney Failure Bring Treatment Home? What We Have Learned from COVID-19”, published in the journal Kidney Diseases, the authors analyze and discuss the utility of PD in the Asia Pacific region during the COVID-19 pandemic.

What Else Can You Learn?

In this blog post, kidney failure in general and the advantages and disadvantages of PD, particularly in relation to the COVID-19 pandemic, are discussed.

What Is Kidney Failure?

The kidneys do several important jobs in the body, including helping to control your blood pressure and make red blood cells, and removing waste products and extra water from your body to make urine. In chronic kidney disease (CKD), the kidneys no longer work as well as they should and are unable to remove waste products from your blood. As a result, too much fluid and waste products remain in the body, which can cause health problems such as heart disease, stroke and anemia. Although CKD can be a mild condition with no or few symptoms, around 1 in 50 patients can progress to a very serious form of CKD known as kidney failure, where kidney function drops to below 15% of normal.
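
The article itself does not give a formula, but the “below 15% of normal” threshold corresponds to an estimated glomerular filtration rate (eGFR) below 15 mL/min/1.73 m². As an illustration, the Python sketch below estimates GFR from a blood (serum) creatinine measurement using the 2021 CKD-EPI creatinine equation, one widely used estimating equation; the example values are invented.

```python
def egfr_ckd_epi_2021(creatinine_mg_dl: float, age: float, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) using the 2021 CKD-EPI
    creatinine equation."""
    kappa = 0.7 if female else 0.9       # sex-specific constants
    alpha = -0.241 if female else -0.302
    ratio = creatinine_mg_dl / kappa
    egfr = 142 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.200 * 0.9938 ** age
    return egfr * 1.012 if female else egfr

# Invented example: a 60-year-old man with a serum creatinine of 4.5 mg/dL.
egfr = egfr_ckd_epi_2021(4.5, age=60, female=False)
print(f"eGFR: {egfr:.0f} mL/min/1.73 m^2")  # about 14: below the threshold
                                            # of 15 that defines kidney failure
```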

How Is Kidney Failure Treated?

When the kidneys stop working, kidney replacement therapy in the form of dialysis or a kidney transplant is needed so that the person can survive. Kidney failure treated in this way is called end-stage renal disease. If you have a kidney transplant, a healthy kidney from a donor is placed in your body to filter your blood. In contrast, dialysis is a procedure by which the blood is “cleaned”. There are two types of dialysis. In hemodialysis (HD), your blood leaves your body, goes through a filter in a machine and is returned to your body. HD is usually delivered in a healthcare setting. Peritoneal dialysis (PD), on the other hand, uses the lining of your abdomen, the peritoneum, to filter the waste and extra fluid from your body. A key difference between the two is that, once you have been trained, PD can be done at home, at work or while travelling without the help of another person. Home HD is possible, but you need the help of a partner and it is not available in all regions.

How Does Peritoneal Dialysis Work?

Before a patient can begin to use PD, they need an operation to insert a catheter, usually near the bellybutton, which will carry a cleansing fluid called “dialysate” into and out of their abdomen. The patient then usually waits up to 1 month before starting PD to give the catheter site time to heal, and is trained how to use the equipment. Once PD begins, in each session the dialysate flows through the catheter into part of the abdomen and stays there for a fixed period of time (called the “dwell time”), usually 4–6 hours. The dialysate contains dextrose, which helps to filter waste and extra fluid from tiny blood vessels in the peritoneum. At the end of the dwell time, the dialysate drains into a sterile collecting bag, taking the waste products and extra fluid with it. There are two main ways of conducting PD: continuous ambulatory PD, which uses gravity to move the fluid through the catheter and into and out of the abdomen, and continuous cycling PD, which uses a machine to perform multiple exchanges while you sleep at night. Your medical team will help you identify which PD method is best for you.

What Are the Advantages and Disadvantages of Peritoneal Dialysis?

Compared with in-center HD, the benefits of PD include:

  • greater lifestyle flexibility and independence, which can be especially important if you have to travel long distances to a dialysis unit;
  • a less restricted diet than if you receive HD, because PD is done more continuously than HD, so there is less build-up of potassium, sodium and fluid; and
  • the possibility of longer-lasting residual kidney function.

However, PD might not be suitable for you if you have extensive surgical scarring in your abdomen, a hernia, limited ability to care for yourself (or limited caregiving support), or inflammatory bowel disease or diverticulitis. It is also likely that people using PD will eventually have a decline in kidney function that will require HD or a kidney transplant.

How Has the COVID-19 Pandemic Affected the Treatment of People with Kidney Failure?

Patients with kidney failure, especially those receiving dialysis, are more susceptible to infections like COVID-19 than the general population and are at greater risk of severe disease or death when infected, partly because they are more likely to have other conditions that have been linked to severe COVID-19 (such as cardiovascular disease, diabetes, and cerebrovascular disease). Many patients experienced difficulties accessing HD during lockdowns, and those that could travel to a dialysis unit risked exposing themselves, their family and healthcare staff to COVID-19 infection. As a result, patients and healthcare providers have been encouraged to consider PD as a preferred option for kidney replacement therapy because home-based treatment prevents chains of transmission through in-center dialysis units, reduces the risk of exposure through travel, and helps to preserve hospital resources that are stretched by this and possible future pandemics.

What Has Been the Effect of Increased Use of Peritoneal Dialysis during the Pandemic?

Evidence suggests that increased use of PD during the pandemic has had a beneficial effect. Survival and efficacy rates for patients undergoing PD are similar to those undergoing HD, and observational data from multiple countries have identified lower rates of COVID-19 infection in patients undergoing PD than those receiving in-center HD. In addition, fewer healthcare staff can support a larger number of patients through ongoing interaction using telehealth, although careful monitoring is required to ensure any negative effects are identified.

Take-Home Message for Patients

PD is currently underutilized; this is thought to be partly because of patient hesitancy, less frequent interaction with nephrologists, and perceived lower levels of clinical oversight. However, if available, PD is an important treatment option that can protect patients with kidney failure from exposure to infection, and may be worth their consideration in consultation with their clinical team.

Note: The authors of this paper make a declaration about grants, research support, consulting fees, lecture fees, etc. received from pharmaceutical companies. It is normal for authors to declare this in case it might be perceived as a conflict of interest. For more detail, see the Conflict of Interest Statement at the end of the paper.
