Tag Archive | Going Gentle Into That Good Night

Profiles in Dementia: B. B. King (1925 – 2015)

The blues were born in the Mississippi Delta shortly before the dawn of the 20th Century. The genre, known for its stories of hard times and suffering, originated with African-American sharecroppers who endured long, hard days picking cotton in the sweltering summer heat, lived in squalid conditions, and were kept in de facto indentured servitude and perpetual debt by never quite making enough money to pay off their bills at local merchants.

While a few blues artists – Robert Johnson, W.C. Handy, Bessie Smith, and Billie Holiday – brought the sound of the blues into the mainstream of music during the first half of the 20th Century, it was not until the late 1940s and early 1950s that the blues blossomed and hit its stride as a bona fide genre of American music.

Among now-familiar names like Lead Belly, Howlin’ Wolf, John Lee Hooker, Elmore James, and Willie Dixon emerged a young Mississippi Delta bluesman named B. B. King.

An accomplished guitarist with a one-of-a-kind voice that wrung out every bit of pain, sorrow, and pragmatism the blues had to offer, King, in many ways, became the face of the blues for much of America.

Blues artists had a profound influence on rock: British artists of the 1960s drew heavily on their vast body of work, and groups like the Yardbirds, Cream, and Derek and the Dominos – all featuring Eric Clapton – were the crossroads where blues and rock met and married, producing generations of rock-blues musicians that continue today (listen to Nirvana’s haunting acoustic version of “Where Did You Sleep Last Night?,” and it’s as though you can hear Lead Belly singing along in the background). In general, though, blues musicians continued to exist, much like jazz musicians, in a popular but tightly-defined niche in the landscape of American music.

Except for B. B. King. With his famously named guitar – Lucille – his showmanship as a guitarist, and highly accessible songs like his signature “The Thrill Is Gone,” King managed to gain a large popular audience.

B. B. King stayed on the music circuit, performing along the way with artists like Clapton, The Rolling Stones (King opened for them on their 1969 tour), and U2, despite battling diabetes and high blood pressure for decades.

In the last few years, blues fans have consistently pointed out that B. B. King’s performances were erratic at best: King missed musical cues, forgot lyrics, and often went into long, rambling, and random soliloquies while onstage.

B. B. King’s last performance was on October 3, 2014 in Chicago. However, the performance had to be cut short because King wasn’t feeling well enough to continue. He was hospitalized with dehydration and exhaustion.

On May 1, 2015, after two hospitalizations due to complications from diabetes and high blood pressure, B. B. King’s website announced that King had entered hospice care at his home in Las Vegas.

On May 14, 2015, B. B. King died. The official cause of his death was complications from vascular dementia.

Sadly, B. B. King’s family has already begun the legal fight over who will control his estate (there are allegations that King’s long-time manager, Laverne Toney, whom King appointed as his power of attorney, mishandled King’s care and money).

It’s a tragic footnote to an incredible life.

 

Profiles in Dementia: Muriel June Foster Ross (1929 – 2012) – My Beloved Mama

With Mother’s Day right around the corner, I decided to make this profile in dementia personal, and so I write about one of the heroes in my life, my mama, Muriel June Foster Ross.

Mama is the reason that I wrote Going Gentle Into That Good Night: A Practical and Informative Guide For Fulfilling the Circle of Life For Our Loved Ones with Dementias and Alzheimer’s Disease and You Oughta Know: Acknowledging, Recognizing, and Responding to the Steps in the Journey Through Dementias and Alzheimer’s Disease.

Mama is the reason the Going Gentle Into That Good Night blog exists.

And Mama is the reason why I created a Facebook support group for caregivers of loved ones with Alzheimer’s Disease, dementias, and other age-related illnesses.

My mama, Muriel June Foster Ross, was born March 2, 1929 in Erwin, Tennessee, a small town in the hollows of the Smoky Mountains in northeast Tennessee.

From the get-go, Mama had a life full of tragedy and triumph, successes and failures, bad times and good times, love and hate, deep-down sadness and uplifted-heart happiness, forgiving and forgetting, which I chronicled in the first book I wrote after her death, a memoir about my parents and us kids and our life together titled Fields of Gold: A Love Story.

My mama was a most remarkable woman in so many ways, because no matter what came her way in life, she persevered, she overcame, and she prevailed.

Mama left me with an incredible legacy and some pretty big footsteps to follow in – ironically, because Mama was a lady whose physical foot size was 4.5W, while my own feet are almost twice that long and even wider – and I see continually how far I fall short of the example she left me.

However, even in my failures, I see Mama’s legacy of prevailing and not quitting. I’ve finally been able to see that even trying and failing is doing something, and that beats not failing because I never tried, any day of the week.

It’s still hard for me to fail over and over, but I find myself rehearsing Mama’s life and all the places where it looked like failure and she could’ve quit but she didn’t. And, in the end, not quitting brought incredible meaning and blessings to Mama’s life.

My mama was intelligent, curious, active, humorous, whimsical, outgoing, and loving. She had a lifelong love affair with learning anything and everything. Mama was a decent writer – she got her second Bachelor’s degree in English at the age of 54 – but she was an even better oral storyteller.

Mama’s twinkling blue eyes and her mischievous smile could light up even the darkest room. She had her dark moments, her fears, and her insecurities as well, but she reserved those for the people she loved and trusted the most: my daddy and us kids.

Mama’s journey with dementias (vascular and Lewy Body) and Alzheimer’s Disease probably began in 2005. The real nuts and bolts of these neurological diseases didn’t appear in full force and persistently until 2009. And the downhill slide was pretty precipitous from that point forward until her death (related to congestive heart failure) on August 14, 2012.

But I had the blessing of being beside Mama throughout the journey and through the end. That’s priceless. I also had the blessing that Mama didn’t live long enough to become completely uncommunicative and bed-ridden. That would have killed both of us. The journey was no picnic, but the blessing was that we shared it, and I am thankful for that.

It seems that each Mother’s Day since Mama’s death has made me miss her more than the one before. On the one hand, I’m glad Mama’s not suffering anymore. But, on the other hand, I miss her.

And not just the Mama I remember before these neurological diseases, but the Mama I remember after they appeared. There were moments interspersed with the chaos, the uncertainty, and the tough stuff that were some of the softest and gentlest and most loving moments Mama and I ever shared, and those are etched just as deeply in my heart, in my soul, and in my mind.

Side by side with Daddy, Mama’s resting now in the peace that often eluded her in life, until the Sun of Righteousness arises with healing in His wings.

May that day come quickly for us all. I love you, Mama. See you and Daddy soon.

 

Profiles in Dementia: Katherine Anne Porter’s Granny Weatherall

In 1930, Katherine Anne Porter wrote a short story entitled “The Jilting of Granny Weatherall.” When I first read this short story in high school, I had never heard the term dementia (my mom’s grandma was senile in her old age, but she was the only person I ever heard of, firsthand, being senile).

I found Granny Weatherall – an elderly woman whose reverie drifts between the past and the present as if the two have merged – interesting, but heartbreaking. And as she relived her past like it was the present, her story gave full meaning to her surname of Weatherall.

However, it wasn’t until my own mom started splicing her past into the present during her journey with vascular dementia, Lewy Body dementia, and Alzheimer’s Disease that Granny Weatherall came back to the front of my mind. Upon rereading the story, I realized Granny Weatherall had some type of dementia.

In my book detailing acknowledging, recognizing, and responding to the steps of the journey we take with our loved ones through dementias and Alzheimer’s Disease, I again had Granny Weatherall on my mind as I wrote Chapter 10.

It’s a story I highly recommend for all caregivers with loved ones who have dementias and/or Alzheimer’s Disease.

How Porter had this kind of insight into the inner workings of how these neurological diseases manifest themselves internally – with Granny Weatherall – and externally – with her caregiving daughter – is a mystery, but Porter captures it perfectly and poignantly.

I think one of the things Granny Weatherall does for us as caregivers is remind us that our loved ones were once vibrant and full of life, and that they’ve seen a world of ups and downs that we may not be privy to and can’t fully imagine or understand.

I believe another thing that Granny Weatherall does is to remind us of the fragility, the humanity, and the dignity of our loved ones. Her internal indignation at her daughter’s well-meaning, but clueless caregiving makes us take stock of our own caregiving in relationship to our loved ones.

And the last thing that Granny Weatherall does is to remind us that death is part of the circle of life and it’s often harder on those it leaves behind than those it takes.

There are a lot of good lessons here. I hope you take some time to read “The Jilting of Granny Weatherall.” Porter’s a good writer, so the story moves well, and it gives us as caregivers an inside look at our loved ones that we may not have considered or even been aware of before.

Profiles in Dementia: William Shakespeare’s King Lear

William Shakespeare, the playwright, was one of the most intuitive and astute observers of the human race. A careful reading of his body of plays – especially the histories and the tragedies – shows an author who intimately understood human nature and human folly at their very core.

In King Lear, one of Shakespeare’s most gut-wrenching plays, Shakespeare gives us an in-depth look at what dementia – and, most likely, based on the symptoms, Lewy Body dementia – looks like in action in his portrayal of King Lear.

From a big-picture standpoint, the play shows in close detail how dementia can destroy a family (and a kingdom), and it shows how family dynamics can hasten the destruction. It also shows how dishonesty with our loved ones with dementia is never acceptable.

The summary of King Lear is fairly straightforward. King Lear, a monarch in pre-Christian Britain, who is in his eighties and aware of his own cognitive decline, decides to abdicate the throne and split the kingdom among his three daughters, with the promise that they will take care of him. 

The first sign of Lear’s dementia is his irrational criteria for deciding which daughter gets the largest portion of the kingdom: not their abilities, strengths, or experience in ruling, but which one professes the greatest love for him.

His two oldest daughters are duplicitous and try to outdo each other with their professions of love for their father (they don’t love him, but they want the lion’s share of the kingdom).

King Lear’s youngest daughter, who genuinely loves her father and who is his favorite, gets disgusted with the whole thing and refuses to play the game.

King Lear, in a sudden fit of rage, then disowns his youngest daughter completely. When one of her friends, the Earl of Kent, tries to reason with the king, King Lear banishes him from the kingdom.

King Lear’s youngest daughter then marries the king of France and leaves King Lear in the hands of his two devious older daughters.

Both daughters are aware of King Lear’s vulnerability because of his cognitive decline and are intent on murdering him so that they can have everything without the responsibility of having to take care of him. They treat King Lear horribly in the process of formulating their scheme to end his life and be rid of him.

The youngest daughter comes back from France to fight her sisters, but loses and is sentenced to death instead.

While she is awaiting execution, the two older sisters fight over a man they both want. The oldest sister poisons the middle sister, who dies.

The man the two sisters were fighting over has been fatally wounded in battle and he dies (but he reverses the execution order of the youngest sister before he dies). After his death, the oldest sister commits suicide.

Meanwhile, the youngest sister is executed before the reversal order reaches the executioners. And King Lear, upon seeing his youngest daughter dead, dies too.

Woven throughout the plot are signs that King Lear has dementia, that he knows something is cognitively wrong, and we watch him actually go through the steps of dementia throughout the play.

King Lear exhibits deteriorating cognitive impairment, irrational thinking, sudden and intense mood changes, paranoia, hallucinations, and the inability to recognize people he knows.

Lewy Body dementia seems to be evident in King Lear’s conversations with nobody (he thinks he sees them but they aren’t there) and the sleep abnormalities that are brought out in the play.

A few poignant lines spoken by King Lear give us a glimpse:

“Who is it that can tell me who I am?”

“O, let me not be mad, not mad, sweet heaven!
Keep me in temper: I would not be mad!”

“I am a very foolish fond old man,
Fourscore and upward, not an hour more or less;
And, to deal plainly,
I fear I am not in my perfect mind.”

“You must bear with me:
Pray you now, forget and forgive: I am old and foolish.”

Everyone around King Lear knows he’s not himself, including his deceptive daughters, who note, after he disowns his youngest daughter, how bizarre his behavior was toward someone he loved so much and how quickly his temperament changed.

As the play progresses, King Lear’s dementia continues to be revealed in his frequent rages against fate and nature, in his disregard for personal comfort or protection from the elements, and in his eventually having fewer and fewer lucid moments in which he recognizes people and knows who he is.

If you haven’t read King Lear in a while, or you’ve never read it at all, it is an entirely different experience to read it now with the knowledge of dementia as a backdrop. It’s even more tragic than we imagined.

 

Profiles in Dementia: Jonathan Swift (1667 – 1745)

There aren’t many people who haven’t, at some point in their lives, read Jonathan Swift’s best-known work, Gulliver’s Travels.

While most of us read it when we were too young to appreciate it because it was considered a staple in classic children’s literature (I was eight the first time I read it), reading this book as an adult and understanding what Swift is really writing about puts a whole new, interesting – and, yes, even comical at times – light on his most famous work.

But Swift was a prolific writer and a brilliant satirist beyond Gulliver’s Travels and was heavily involved in politics in both Ireland and England. He is considered one of the leading voices of The Age of Reason.

Swift’s prodigious public writing ended with the Drapier’s Letters in 1724, and by the time his beloved Stella (Esther Johnson) died in 1728, Swift was already showing signs of neurological decline.

A fastidious and highly-organized man, Swift became more and more whimsical and capricious in his daily living. He also developed obsessive paranoia and miserliness.

As he descended further into dementia, Swift still tried to maintain a semblance of private correspondence after 1728, but eventually was unable to write at all.

In 1740, in a rare letter to his niece, Swift confessed “I hardly understand a word I write.” By 1742, guardians had to be appointed to care for Swift and maintain his estate because he was simply unable to.

Jonathan Swift died in 1745, when “he exchanged the sleep of idiocy for the sleep of death.”

 

The Layperson’s Guide to Traumatic Brain Injury (TBI) and Chronic Traumatic Encephalopathy (CTE)


Our brains are very soft organs, surrounded by cerebrospinal fluid and protected by the hard outer covering of our skulls.

Under normal circumstances, cerebrospinal fluid cushions the brain and keeps it from crashing into the skull. However, if our heads or our bodies are hit hard, our brains can slam into our skulls, resulting in traumatic brain injuries (TBIs). TBIs also occur when the skull is fractured and the brain is directly damaged by outside force.

Although concussions, which we’ll discuss later, are sometimes referred to as mild TBIs, the reality is that no injury to the brain is mild, and repeated injuries can lead to neurological degeneration that includes dementia.

TBIs are complex neurological injuries that result in a wide variety and severity of symptoms and disabilities.

The least severe symptoms of TBIs – and these may not appear immediately; in fact, they may occur some time after the injury – can include:

  • Temporary loss of consciousness
  • Dizziness
  • Headache
  • Slurred speech
  • Confusion
  • Temporary memory loss
  • Grogginess and sleepiness
  • Double vision or blurred vision
  • Nausea or vomiting
  • Sensitivity to light
  • Balance problems
  • Slow reaction to stimuli

The most severe symptoms of TBIs can include:

  • Extended loss of consciousness or coma
  • Permanent and severe brain damage
  • Partial or complete motor paralysis
  • Death

The most common causes of TBIs, according to the Centers for Disease Control, are:

  1. Falls (40.5%)
  2. Head/body collisions with people or things (15.5%)
  3. Car accidents (14.3%)
  4. Assaults (10.7%)

In the category of TBIs from falling, falls disproportionately affect the very young (falls account for 55% of TBIs among children aged 0 to 14) and the very old (falls account for 81% of TBIs among adults 65 or older).

Most of the TBIs in the Other category (19%) are from personal firearms and military weapons.

What CTE Does to The Brain

Courtesy of Sports Legacy Institute (http://www.sportslegacy.org/)

A type of TBI that is more frequently in the headlines today is Chronic Traumatic Encephalopathy (CTE). CTE is brain damage that occurs as a result of repeated concussions (a concussion is defined as injury to the brain from a direct blow to the head or from the head or upper body being violently shaken).

The first identified variant of CTE was described in 1928 by forensic pathologist Dr. Harrison Stanford Martland; it later came to be called dementia pugilistica (from the Latin word pugil, which translates as “boxer” or “fighter”). The symptoms included tremors (Parkinsonism), slowed movement, mental confusion, and speech difficulties.

In 1973, the neuropathology of dementia pugilistica was discovered and described by a team of pathologists led by J. A. Corsellis, who documented their findings after performing thorough autopsies on the brains of 15 deceased boxers.

Former boxing heavyweight champion Muhammad Ali began boxing in Kentucky when he was 12 years old.

By the age of 18, he had boxed his way to the heavyweight gold medal at the Olympics (1960).

A few months later Ali began his professional boxing career. He quickly gained national prominence because of his skill in the ring and his trademark quote: “Float like a butterfly, sting like a bee. The hands can’t hit what the eyes can’t see.” He boxed professionally until his retirement in 1981.

In 1984, Ali was diagnosed with Parkinsonism (the tremors of dementia pugilistica), and his neurological health has deteriorated steadily to include all the advanced symptoms of this variant of CTE.

His wife, Lonnie, is his caregiver and contributed to a moving article that AARP published last year about what she and Ali deal with on a daily basis as a result of the neurological degeneration that CTE has caused.

CTE has increasingly become a major health concern in the high-contact sports of professional wrestling, ice hockey, soccer, and football as more and more current and retired athletes are showing symptoms consistent with CTE.

In recent years, football – and especially professional football – has become the focal point for a closer examination of CTE. Not only has this sport become more violent in terms of how the game is played, but how concussions are treated – or not treated – has also come under greater scrutiny.

Joseph Maroon, Pittsburgh Steelers team doctor, who called CTE “rare” and “over-exaggerated” (March 18, 2015)

Although NFL team doctors assert that CTE is “rare” or “overexaggerated,” the hard scientific neurological and physiological evidence proves that these doctors are simply paid hirelings who care more about their paychecks than they do about the overall health of the players.

Let’s examine the facts. In a landmark 2014 study by the largest brain bank in the United States, 76 of the 79 brains of deceased NFL players that pathologists examined showed CTE.

A class action lawsuit has been filed – and a tentative agreement reached with the NFL – by retired NFL football players and/or their families (some of the players have already died from neurodegenerative causes) which claims that players were not (a) adequately protected from suffering concussions, (b) medically treated properly following concussions, and (c) provided adequate medical compensation to treat the burgeoning costs of CTE as it progresses.

The gist of this lawsuit is that the NFL used – and abused – these players to fill its seemingly endless coffers, often pressuring players through intimidation or fear to get back on the field as soon as they could after suffering a concussion (often in the same game), and then abandoned its responsibility to its former employees (part of their contractual agreement) as soon as those employees began costing the league money instead of making it money.

Even more damning to the NFL is the actuarial report accompanying the lawsuit, which indicates that at least one-third of NFL players will develop CTE.

If there is a silver lining in all of this, it is that the younger NFL players have a much greater awareness of the relationship between professional football and CTE – and of their own increased risk.

They are aware of the very real probability that they will be one of the 1 out of every 3 players who develops CTE.

And they’re choosing their long-term health, including their brain health, over temporary fame and fortune.

An unprecedented number of younger NFL players – still in their prime by professional football standards – have already retired before the 2015-2016 season begins.

They include:

  • Cortland Finnegan – Age 31
  • Jake Locker – Age 26
  • Jason Worilds – Age 27
  • Chris Borland – Age 24

While Finnegan, Locker, and Worilds did not publicly cite CTE as a factor in their premature retirements from the NFL, it is hard to doubt that the mounting evidence was a factor in their decisions.

Chris Borland, on the other hand, made no secret that the high probability of CTE was the reason for his decision to retire.

Borland had just finished his rookie season (2014-2015) with the San Francisco 49ers, but he revealed after the season that he suffered a concussion in training camp last fall. Instead of reporting the concussion, Borland covered it up so that he could continue to practice and win a starting position on the team.

This is the kind of competitive pressure that gets put on these young players by the NFL (yes, Borland made the decision and he bears the responsibility for it, but had he reported the concussion, he would have been replaced and lost the starting spot and may not have played all season).

Fortunately, though, Borland came to his senses and realized how much he had jeopardized – and would continue to jeopardize – his neurological health.

As he said on the March 16, 2015 edition of ESPN’s Outside the Lines: “I just thought to myself, ‘What am I doing? Is this how I’m going to live my adult life, banging my head, especially with what I’ve learned and knew about the dangers?’”

We can only hope that more athletes in high-contact sports will know the higher risks of TBIs they face, not just in the professional leagues, but at the amateur levels, and they will choose to walk away from certain neurological damage.

In the meantime, we now have a better understanding in our daily lives of how TBIs can happen and what the results can be. I hope we’re a little more observant and attentive after falls with our little loved ones and our older loved ones – especially those already going through the journey through dementias and Alzheimer’s Disease, since they are even more prone to falling than the general elderly population.

As Sergeant Phil Esterhaus says at the end of every roll call on Hill Street Blues (a favorite TV show of mine during my high school and college years), “Hey, let’s be careful out there.”

 

Part 3 – “The End of Absence” (Michael Harris) Book Review

This is the last of a three-part series of reviews that I am writing on The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, written by Michael Harris in 2014.

In “Part 1 – The End of Absence (Michael Harris) Book Review,” we looked at the definition of absence and how it relates to our quality of life.

We also discussed how constant connection to technology is eliminating absence from our lives, and in the process, rewiring our brains with dementia-like characteristics. It is a lifestyle dementia that we are consciously creating by choosing to live in a world of constant connection.

We also discussed how the disappearance of absence is causing the disappearance of our ability to think, to reason, to plan, to dream, to create, and to innovate. In short, we’re trading the depth of real life, with all its hills and valleys, simplicities and complexities, and triumphs and failures (all of which make us better people, in the end), for a fake, virtual, shallow life that, in the end, means absolutely nothing.

In “Part 2 – The End of Absence (Michael Harris) Book Review,” we discussed data mining and predictive analysis, showing how the internet is actually shrinking our worlds, instead of expanding them.

We also discussed how our virtual worlds, with our ability to easily eliminate anyone and anything that doesn’t look like us, end up being just a mirror we look into, which first stagnates, then eliminates, growth, change, maturity, and thinking.

We also discussed how we’ve surrendered our critical thinking to the internet world of public opinion, which is often ignorant, uninformed, and devoid of expertise. As a result, we get a lot of wrong, bad, and possibly even dangerous information that we are increasingly accepting as valuable and good, without any control mechanisms in place to follow through and make sure that we’re not being led down the primrose path.

And, finally, we discussed how a constant connection to technology erodes the selfless part of us (empathy, caring, serving, looking out for others) and cultivates the self-centered, self-absorbed, selfish part of us.

The reward factor of being the center of attention all the time, even when we’re just typing nonsense or run-of-the-mill things, motivates and grows this self-absorption until all we look for is adulation and affirmation.

The impact of this is that truth – as hard as it can be to stomach sometimes – goes by the wayside and a completely false sense of self, worth, and value, albeit virtual and not real, becomes our view of ourselves.

In this last part of our review of The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, we are going to examine further how our lives are being robbed of meaning, experience and richness by our constant connection to technology.

We are also going to look at ways to bring absence back into our lives, if we’re brave enough, daring enough, and strong enough to quit following the masses into intellectual oblivion by enslaving ourselves to the machines.

My experience says that humanity in general just doesn’t have the willpower nor the intense desire to free itself from what’s destroying it. Once we get comfortable, we don’t want to move.

I hope that I’m wrong in this case, but the pragmatist in me says I’m probably not.

One of the ways in which our constant connection to technology is robbing us of meaning, experience, and richness in our lives is that our focus has become broadcasting life instead of living life. We, in effect, live in an augmented reality that we stage, produce, and filter through the lenses of our smart phones or digital cameras, but which we don’t experience in the moment or spontaneously participate in.

Harris gives a perfect example of augmented reality from L. Frank Baum’s classic book, The Wonderful Wizard of Oz.

Although in the 1939 movie, Emerald City is actually green (the movie starts out in black and white and then suddenly changes to full color as soon as Dorothy leaves Kansas and is on her way to Oz), in the book it was not.

The reason that Dorothy, the Scarecrow, the Cowardly Lion, and the Tin Man believe the Emerald City is green in the book is that the Wizard of Oz requires everyone entering the city to put on spectacles, supposedly to protect their eyes. The spectacles are tinted green, so everything these four see is green.

In other words, they’re not seeing anything as it actually is; instead, they’re seeing it through a filter that makes what they see seem real when, in fact, it’s not.

Most of us don’t normally style our food every day like the chefs on the Food Network, and most of us don’t naturally stage our lives and homes to be photo-perfect. That’s just not reality, if we’re actually living our lives.

However, the trend toward treating life this way is growing, and we are increasingly spending more time making our lives social-media-friendly than we are living them as they naturally occur – without worrying about whether our virtual world knows anything about them.

Examples of this abound on social media with pictures of food we eat and events that we go to such as weddings, family reunions, social gatherings, etc.

How many times – and for how much time – have we stepped out of the reality of a messy kitchen while we’re cooking and plates of food that aren’t perfectly arranged and garnished to stage our breakfast or dinner meals for social media?

How many times at social gatherings do we spend all our time documenting activities and sharing them on social media instead of actually participating in what’s going on?

When we start living an augmented reality, we lose authenticity and genuineness. The more and the longer we do this, the less able we will be to know the difference between what’s real and what’s staged; and the less we exercise our natural, tint-free sight, the more easily we will be manipulated and controlled by other people and other things.

If you haven’t seen the movie Wag the Dog, you should watch it soon. This movie was prescient about the augmented reality of all media, politics, and “news,” and how it would manipulate the United States public into believing whatever they saw or heard, without questioning and without verifying. Digital technology has only exponentially enhanced this manipulation.

It is always with this movie in the back of my mind that I take most of what I read or hear from any media outlet with a grain of salt, because I know it’s not true (spinning, angling, omissions, innuendo, gossip, etc.), and I also know it’s not genuine or authentic, but instead staged and produced to have a desired effect on the general population.

Augmented reality destroys truth. For those of us who know it’s not true – and it seems there aren’t many of us left who aren’t caught up in it, hook, line, and sinker, as if it were true – it has also destroyed our trust.

Another example of augmented reality is US citizens’ participation in political processes.

Here’s the reality. All politicians are liars, and the process of politics is dishonest (Netflix’s original series House of Cards, with Kevin Spacey and Robin Wright, gets this in every disgusting and gut-wrenching detail; the first two seasons were so hard for me to stomach that I refuse to watch any more of it).

And yet a lot of American citizens want to participate in a process that ultimately chooses somebody who is thoroughly dishonest and can’t be trusted (here’s the other augmented reality: money and the Electoral College do the down-and-dirty decision-making, not the American public – voting is just a ruse to make people believe they are instrumental in the process, but they’re not).

And when Americans are asked why they participate when confronted with the corruption, the dishonesty, and the lies of politicians and politics, nine times out of ten, one of the answers is “I’m choosing the lesser of two evils.”

That’s augmented reality, folks. Evil is evil. Why would any of us choose it at all?

And augmented reality is not limited to the media, to politics, and to politicians. It is everywhere in our society today. Education, entertainment, religion, social activism, nonprofits, business – if you can name it, augmented reality rules.

And technology has fueled this infiltration into everything we see and we hear.

But that alone is not enough to dupe us, to manipulate us, and to control us.

What makes us entirely susceptible to being duped, manipulated, and controlled is our constant connection to technology. It is analogous to the certainty of radiation contamination – and death – with prolonged exposure to radioactive materials.

Where and with what we spend most of our time is what we come to believe is true and reality.

Because we have, over time, chosen to stay constantly connected to digital technology and have gradually eliminated absence from our lives, we have eroded our ability to think objectively, logically, and critically, as well as to prove or disprove information as true or untrue through analysis and research. We have also put ourselves in the position of completely accepting lies as truth and fake as real.

How many times have we seen some internet hoax automatically recycled on the internet as truth (and then tons of people start sharing it and broadcasting it), when a simple (and fast) check of Snopes before we share it all over the internet would tell us it’s a hoax?

Our constant connection to digital technology has made us vulnerable and gullible. We are much more willing to accept augmented reality than we are actual reality.

Here’s why. Actual reality contains inherent risks. It’s also messy at times. It’s hard at times. It’s ugly at times. And it’s negative at times. That’s part of breathing for a living.

But digital technology, with its filtering capabilities that let us choose to unfriend, unfollow, unlike anything that is risky, messy, hard, ugly, and negative, has essentially created an augmented reality made up of rainbows, lollipops, and unicorns that completely disconnects us from the realities of life, growth, change, and maturity, as well as developing our uniquely human capacity to care, to empathize, to comfort, to encourage, to be patient, and to be kind and merciful toward other people.

Of course, we expect all those things from other people – and we get an inauthentic and superficial version from our virtual world (I mean, really, how hard is it to type a few letters saying “sorry,” and then just go on with life, because it’s not right in front of you and it’s not impacting you in real time?) – because our constant connection to digital technology has led us to believe that everything really is “all about me.”

Technology is not the originator of this “You’re Good Enough, You’re Smart Enough, and Doggone It, People Like You” narcissism that infects our entire society today, but it has been the catalyst for its rampant and invasive spread into every part of our society and our lives.

So now you know the bad news that all of us are facing with regard to a constant connection to technology.

Are we doomed to this fate with no recourse?

Have we irreversibly surrendered all our power to this invisible monster that is gorging itself on all the things that make you you and me me until we’re all just hollow shells of nothingness on the outside attached to technology’s puppetmaster strings?

The good news is that we are not doomed with no recourse nor is this current trajectory irreversible.

However, like any addiction or entrenched habit, we will first have to consciously choose, then commit, and then act, making those actions a permanent replacement for what we are doing now, to reverse it.

And it will be hard until it becomes our new (and for those of us born before 1985, our old) habit. And it will take a huge amount of self-control and discipline to actually accomplish it.

Are we up to the challenge? I hope so.

So, then, what steps can we implement right now to start the reversal?

The first step is to limit our exposure to constant connection.

Instead of checking email every hour, commit to checking it no more than three times a day (morning, noon, and, this is my usual cutoff, the end of the day…meaning the end of daylight hours). 

Instead of wearing your smart phone like underwear, leave it on a desk or a cabinet out of your immediate reach. You really don’t have to pick it up and answer every text or every call as soon as they come in. If someone really wants to talk to you, they’ll leave a voicemail (most people don’t).

Limit checking texts and voicemails to three times a day. Set aside, within each of those times, a certain amount of time to deal with them, and stop when time runs out. And put the phone away again until the next time you’re scheduled to check it.

Here’s the funny thing. People will adjust to this schedule and they will learn when you’re available and when you’re not and eventually that’ll be the only time they contact you. 

Emergencies, of course, are still emergencies and they are always exceptions to this rule.

However, we need to make sure that we understand what a real emergency is. Being out of milk for coffee, for example, is not an emergency. Our brains are going to have to be retrained in a lot of different ways.

Allocate a certain amount of time each day (no more than two hours total) that you will spend on social media sites. The reality is that social media sites are the biggest time-wasters, for the most part, within digital technology.

This is time that we can easily recover for absence – solitude, peace, and quiet to reflect, to think, to dream, to plan, to innovate, to create, to learn – to be a part of our daily lives.

Instead of immediately going to Google when you don’t know something or you can’t remember something, write the question down and go to the library or a bookstore when you’re able and find a book and look it up.

This will be hard, because our constant connection to technology has produced impatience and a need for immediate gratification in us.

But delayed gratification will do two things. First, it will build patience. Second, we will begin to sort through things and regain a balance of what’s important and what isn’t.

If the effort of going to the library or a bookstore to answer a question we have isn’t worth the time and energy, we’ll know it’s unimportant – and we can get rid of it.

However, if we can’t wait to get to the library or the bookstore to research our question, and we make that an urgent to-do item, then we’ll know that’s important – and we will keep it.

With a constant connection to technology, everything’s important, while in real life, there are some things that are important and some things that aren’t. This will help us regain that balance and perspective.

Turn your devices and all the noise (including music) off. On weekdays, set a time and turn them off with no exceptions.

Replace that time you would have spent on them with interacting with a good book (yeah, the ones with the pages and the real covers) or interacting with real people, like family and friends, by having dinner together or playing a board game or cards (not video games) together. This will naturally lead to conversation and connection with real people and real life. Do not turn the devices back on until the next day.

Choose one or two days a week to disconnect altogether from technology. Turn it all off. The weekend is an excellent time to do this and will give you plenty of absence in which to rest, recharge, and regroup with no extraneous interference impeding you.

I personally find it very difficult to jump back into the world of connection each week when I do this myself. I love not even thinking about it, and I don’t miss it at all.

With all the absence it builds into my weekends, I often find myself wishing I never had to reconnect ever again because I realize how disruptive it is in my life, even though I have strict limits on it and I’ve cut my exposure time down to the bare minimum.

In the end, even a little is still too much, at least for me.

When you have all the time back that doing these few things will give you, use it wisely.

If you have a neglected hobby, take it up again. If you don’t have a hobby, find one.

Read books. Take walks.

If you’ve got snow on the ground, bundle up and go outside to play in it. Build a snow fort or build a snowman. Admire the beauty and cleanness of a freshly-fallen snow.

Watch how the sun reflects off of it. Watch the clouds in the sky. Watch a sunset from beginning to end.

When spring comes, go find a lush, grassy hill or meadow and lie down on the ground and look at the sky.

Ride a bike. In the summer, go outside at night and look at the sky and the stars and the planets and dream.

In the fall, walk through the unparalleled beauty of the vast array of colors of the trees as they change.

Get outside and do something, not just for your body, but also for your mind.

The bottom line is there is no substitute for absence.

We don’t miss it because we let it go gradually over time and didn’t even notice.

But when we start bringing absence back into our lives, we will be surprised, after we get used to it again, by how much we missed it and how close we came to losing it for good, and my hope is that we will be determined never to let it go again.

 

Part 2 – “The End of Absence” (Michael Harris) Book Review

This is the second of a three-part series of reviews that I am writing on The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, written by Michael Harris in 2014.

In “Part 1 – ‘The End of Absence’ (Michael Harris) Book Review,” we looked at the definition of absence and how it relates to our quality of life.

We discussed how absence gives rise to critical thinking, problem-solving, short-term and long-term planning, concentrated focus, and creativity.

We also discussed the physical, emotional, and mental benefits of absence.

And, finally, we discussed how absence has been eroded by our constant connection to technology to the point that it is virtually extinct in our current society.

We discussed how this has dumbed down society as a whole and how susceptible that makes us to being controlled, manipulated, and deceived by technology.

And, finally, we looked at how much technology and our constant connection to it mirrors the society that George Orwell described in 1984, coming to the conclusion that the frighteningly eerie similarities should compel each of us to consciously choose not to follow the crowd and intentionally limit our connection and ensure a healthy amount of absence exists in our lives individually.

In this post, we’ll take a behind-the-scenes look at what happens with all the data you’re willingly and freely putting into digital technology every time you text on your phone, go to a website, input anything onto social media (including the infamous “like” button on Facebook), do a Google search, buy something online, watch streaming video, and play internet video games.

We’ll also see how being constantly connected to digital technology brings that data back to us and shrinks our exposure to real and complete knowledge (Google infamously does this with their industry-standard data mining and predictive analysis processes, which narrow search results down to what we want to see, based on our input, rather than everything there is to see).

In effect, we are being shaped and manipulated in an endless loop of our own little world of preferences and beliefs with subtle changes and false ideas about value and credibility being implanted along the way.

Our constant connection to technology is literally rewiring and incorrectly programming our brains. This negatively affects – if not outright destroys – our value systems and belief systems.

Additionally, our ability to not only think for ourselves – and change our minds based on that – but also to critically and objectively think, as well as to think outside the boxes of what we know and are familiar with is rapidly being destroyed because we depend on technology to do our “thinking” for us.

Additionally, we’ll continue our look at how our constant connection to technology is essentially creating a virtual life (think the movie The Matrix) that we are being conned into believing is real life, while actual real life, which includes lack and absence, is rapidly disappearing for all but a few of us who are aware of what’s happening and refusing to let it happen to us.

Our lifestyles, which now center around technology, are creating a new kind of lifestyle dementia, and most of us don’t even realize it’s happening. That’s why you need to read this book and that’s why I’m spending so much time reviewing it.

Perhaps you think that what is being described here is impossible and that this is just an alarmist warning you can blow off because “that’ll never happen.”

It’s already happened and it is happening. I know technology very well from a big-picture and a behind-the-scenes perspective, so I’m speaking as an insider and an expert who has worked and does work with this on a daily basis.

Here’s the reality. Whether you choose to ignore this is immaterial. It’s already well in motion and progressing rapidly and, if we choose to remain ignorant and we choose to continue our constant connection, we will be devastatingly changed in the process.

And the sad part is that, like the society that Orwell discusses in 1984, not only will we not be aware, but we will not care, even if it’s the most destructive thing that can happen to humanity.

One of the ways in which our constant connection to technology has changed us is that now our default choice is to use technology to interact with people and things rather than actually interact with people and things for real. 

Here’s a simple comparative survey of why our brains have been rewired to prefer technological interaction with people and things rather than real interaction with people and things.

With technology, we can ignore or eliminate or limit our time with anybody or anything we don’t want to have to deal with. This can include people and things we find challenging, who disagree with us, who don’t “tickle our fancy,” and who “make” our lives “harder” just by their presence.

With a click of a button, we can unfriend them or unfollow them and turn off their news feeds, or we can avoid those things altogether until they simply no longer exist to us.

What we end up with in the process is an artificial, virtual world that we create to make us feel good. It’s also a shallow and stagnant world that ends up being essentially us looking in a mirror and seeing nothing but our own image reflected, because the people and things that are left after our unfriending, unfollowing, and avoiding are those that never challenge us, always agree with us (even when we’re wrong), and boost our feel-good emotions (as we do theirs).

In real life, those people or things are right there with us, and we have to figure out the best way to deal with them whether we want to or not, even if that means putting up with our co-workers, friends, and relatives or all the tough things that exist in real life.

In other words, we can’t turn them off (and if we eliminate them, in the case of people, then we go to prison). So it forces us to find creative and workable ways to share the same space with them and it increases our relating-to-humanity-and-things skills and builds traits like patience, kindness, gentleness, understanding, empathy and mercy.

These are character-related traits that cannot be developed in the artificial, virtual world that constant connection to technology enables us to create in our own image.

And our artificial, virtual worlds make demands on us as well, although this dark side is seldom, if ever, on our minds or consciences. They demand our 24/7 attention and presence and because of our acquiescence to those demands, we lose absence. Solitude. Peace. Disconnection.

Absence gives us time alone with our thoughts, alone with ourselves, and alone with our ideas, our dreams, our hopes, and our imaginations. Absence also gives us the ability to regroup and recharge our brains and ourselves. It gives us a chance to get away from all the “noise” of life and have peace and quiet.

Here’s the irony. We need solitude as part of our mental, emotional, physical and spiritual health. There’s no other way to survive life.

Yet, even for those of us born before 1985, from the moment we’re born the emphasis is on socialization.

Society is so insistent on this – my parents often had to drag me kicking and screaming as a small child into social situations because I was always very uncomfortable with them, and as I got into my teenage years and could make my own choices, more often than not, I chose staying home over going somewhere either for a few hours or overnight – that most of us are uncomfortable being alone and being quiet, with nothing to entertain or distract us.

Technology and constant connection ensure that we don’t have to be uncomfortable, and it amplifies the illusion of constant company.

This, by the way, began before digital technology. Before there was the internet, there was television. And before television, there was radio. All of these technologies gave – and give – the illusion of constant company because of the noise and the distraction they provide.

And here’s the reality for humanity now. For those of us who remember absence, we have the constant choice of saying “yes” or “no” to constant connection. For those of us who came of age with constant connection as part of our normal lives, we don’t even know there is a choice. And that is truly sad.

Because our artificial, virtual worlds seem real to us because they’re replacing real life, our brains get rewired in additional ways by the illusion this creates.

One way is that we feel surrounded by people like us, so we feel free to say whatever we want to say however we want to say it. We don’t care how wrong it is, how hurtful it is, or how confessional it is. Constant connection, by subverting thinking, has removed the filtering that normally goes into thinking before we speak.

In this way, the words spewed out on the internet actually mimic one of the tell-tale signs of dementia: the loss of impulse control and ability to know what things to verbalize and what things to keep to ourselves. 

Another way that constant connection to technology rewires our brains is that it promotes the self all the time. With an artificial, virtual world that we have created and are the center of, we can continuously draw all the attention to ourselves.

This self-broadcasting, which shares many traits with narcissism, includes fervent self-documentation consisting of constant tweets, continual status updates, and a never-ending supply of selfies.

In effect, a constant connection to technology makes us incredibly self-centered, self-absorbed, selfish, and it reinforces our belief that “it’s all about me.”

So it’s no surprise that we’re less empathetic, less genuinely caring (caring for someone online takes little effort, engagement, involvement, and commitment while caring for someone in real life takes continual effort, engagement, involvement, and commitment, no matter what circumstances arise), less able to listen and hear what people are saying or trying to say, less understanding, and less able to provide authentic comfort, encouragement and support.

In other words, a constant connection to technology makes us less human.

So why do we do it? Because it’s rewarding online. The more attention we garner, the more we want. If everybody notices us and loves – or likes – us, that is very motivating to continue our self-tracking because it feeds our egos.

A constant connection to technology and self-broadcasting gives us the approval we crave just for living life and doing the mundane things it requires of all of us. Somehow, having a bunch of people like and praise some routine, ordinary thing we’ve done makes us feel extraordinary and accomplished.

It doesn’t happen like that in real life. Most of what we say and do goes completely unnoticed, even though we may say and do a lot, and a lot of good. Despite that reality, those of us who are invested in real life just keep going, putting one foot in front of the other.

A constant connection to technology rewires our brains to stop doing our own thinking and shop it out to the public opinion of the internet.

This costs us far more than we are remotely aware of.

In choosing constant connection and public opinion to do our thinking and decision-making, we choose to abandon the most powerful workshop we have access to, which is our lone minds.

In our lone minds, which only solitude can give us, we can think objectively and critically through things. We can solve problems. We can fill in missing pieces of the puzzles that life inherently has. We can find connections between things that don’t look connected on the surface. And we can innovate and create scenarios and options that point us forward in our lives.

When we abandon our lone minds, we offer ourselves up to indiscriminate information from public opinion, much of which is conflicting, wrong, and worthless.

But because our brains are rewired to believe that’s a valid and real world, we accept all the input we’re given and make the erroneous assumption that it all has the same quality, the same value, and the same veracity.

And that will destroy us, because most of what we get is uninformed, uneducated, and unknowledgeable, yet presented as “expert” information.

In addition – and this is what most people don’t know – public opinion is manipulated, especially at the organizational level.

For example, many organizations have people internal to the organization write a lot of positive reviews about whatever their products are to feed the search engines to give them a higher rating of satisfaction.

Data mining cannot analyze quality, only quantity. So the more times a search engine sees a name and sees positive input, the higher it ranks it organically. This is a driving force – and goal – in every organization with an online presence.

There are two types of search engine results, paid and organic.

[Screenshot: example of paid (PPC) and organic search engine results]

Paid search engine results (the ones in the example above with AD to the left of the link) are those that organizations pay, often a lot of money, to the search engine for significant keywords to get top-of-the-page (or top-right-side-of-the-page), first-page placement.

This is known as pay-per-click (PPC) advertising. Each time someone clicks on the paid advertisement, whatever that keyword costs is what is charged to the organization. This can get really expensive really fast.

Organic search engine results (in the example above, below the faint gray line, starting with the Alzheimer’s Association’s link) are generated in order by how many times the keyword appears on the site and how much traffic (search engines don’t really care where the traffic comes from, only how much of it there is) goes to the site (this is where social media sharing has really taken center stage in driving traffic to sites). This doesn’t cost anything.

So, it should be obvious why organizations manipulate their data behind the scenes to get higher organic ranking. The most prevalent (and most dishonest) way has become social media sharing and having people internal to the organization physically go to the site as often as they can. More hits equals higher ranking in the organic search results.
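To make the ranking mechanics concrete, here is a minimal sketch in Python (a hypothetical scoring function for illustration, not any real search engine’s algorithm) of how a ranking driven purely by keyword frequency and raw traffic counts can be gamed by keyword stuffing and inflated internal visits:

```python
# Toy sketch of quantity-over-quality organic ranking: the score only
# counts keyword occurrences and raw hits, so it cannot tell honest
# content from manipulated content.

def organic_score(page_text: str, keyword: str, hit_count: int) -> int:
    """Score = keyword occurrences on the page, weighted by traffic volume."""
    keyword_hits = page_text.lower().count(keyword.lower())
    return keyword_hits * hit_count

honest_page = "Caregiver resources and dementia care tips."
stuffed_page = "dementia dementia dementia care dementia tips dementia"

# The stuffed page with inflated internal traffic outranks the honest one.
print(organic_score(honest_page, "dementia", 100))   # 1 occurrence * 100 hits
print(organic_score(stuffed_page, "dementia", 500))  # 5 occurrences * 500 hits
```

Real search engines have since added penalties for obvious keyword stuffing, but the underlying incentive the sketch shows – more mentions and more traffic equals higher placement – is exactly what drives the behind-the-scenes manipulation described above.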

What does that have to do with us and the end of absence and constant connection to technology? Everything!

We instinctively choose what’s listed first because we assume it must be the best. However, because what’s listed first often got there through manipulation (which we are unaware of) and not through proven and tested quality, we get duped into accepting things as “best,” “right,” or “most” when in fact there is no proof any of those things are true. It’s all an illusion.

Because we have come to believe that Google is always right and that if it’s on the internet it must be true, and because the answers are always immediate, we have abandoned the mental processes that time would allow – comparison, analysis, perspective, insight, and wisdom – which would let us be sure we were making the right and best choice. That’s the absence that real-life decision-making gives us and that instant answers take away.

And what do Google and Facebook do with all that data you share with Twitter, Facebook, Instagram, and Google (these are just a few – everything you do on the internet gets stored somewhere and is analyzed by software that builds a sense of who you are and what you are about using predictive analysis, so that what you ask for ends up being things that appeal to or interest you, not everything there is on the subject)?

The next time you do a Google search, log in to your Facebook account afterwards. Look at the right hand side of the screen where the ads are. Odds are good they will be for what you just searched for in Google.

Pay attention when you share links on Facebook to that same right hand side of the screen. The odds are good that the advertising will be for whatever content is within the link you shared.

This is predictive analysis right in your face. Most predictive analysis is not so visible, but Facebook makes no secret that this is what it is doing to try to get you to buy something.

Google’s method is invisible, but much more detrimental and dangerous.

Google uses what is known as a “filter bubble” to generate search results. This gets personalized for each person that uses Google and it is based on our preferences and our activities.

Google keeps meticulous track of our searching history, promoting the same results each time we repeat a search and further personalizing them based on which results we choose to follow through on by clicking on the links Google shows.

Each time we do the search, results are pared down to match our personalization preferences, which in effect means we get exposed to a narrower and narrower view of the universe.
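That narrowing effect can be sketched in a few lines of Python (a toy model assuming a simple click-count ranking; Google’s actual filter-bubble algorithm is proprietary and far more complex):

```python
# Toy filter bubble: each click boosts similar results, so repeated
# searches surface a smaller and smaller slice of the available universe.
from collections import Counter

class FilterBubble:
    def __init__(self):
        self.clicks = Counter()  # topic -> how often the user clicked it

    def record_click(self, topic: str):
        self.clicks[topic] += 1

    def search(self, results: list, top_n: int = 3) -> list:
        # Rank by personal click history, not by the full universe of results.
        return sorted(results, key=lambda t: -self.clicks[t])[:top_n]

bubble = FilterBubble()
universe = ["politics", "science", "gardening", "sports", "art"]
for _ in range(5):
    bubble.record_click("sports")
bubble.record_click("art")

print(bubble.search(universe))  # sports and art float to the top; the rest fade
```

Notice that nothing the user never clicked can ever rise: past choices feed forward into every future ranking, which is the narrowing loop described above.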

Facebook uses this same algorithm in our newsfeeds. We might have 100 Facebook friends, but we interact with 10 or so almost constantly.

All the statuses of those 10 will always show up in our news feeds. The other 90 friends will randomly show up in our news feeds based on how much we interact with them and they interact with us.

The more interaction, the more likely the statuses will show up randomly – not always – in our news feeds. For friends with whom we have little interaction on Facebook, their statuses disappear from our news feeds altogether.

In other words, the internet is making our worlds smaller, not bigger.

And the personalization that makes our worlds smaller, not bigger, has affected every part of our lives. The music we listen to. The suggested content for us to watch on live streaming. How and if we get employed by an organization.

And it seems that our brains, with their constant-connection rewiring, are accepting this as being okay, and we’ve adopted an “out of sight, out of mind” mentality toward anyone or anything we don’t see regularly or at all.

Here’s what we must understand and realize about how dangerous this is and how much we’re losing in the process.

Personalization is really just the glorification of our own tastes and our own opinions. It eliminates the big picture and a general, broad and comprehensive base of knowledge and understanding while embracing customization, specialization, and a singular viewpoint that takes nothing around it into account (no context).

Personalization cuts off our access to real learning and real knowledge. It cuts us off from the very things – and people – who could help us the most.

Because there is no “surprise” content to challenge us, to think about, to learn from, and to grow and mature in, we stagnate in life.

Stagnation is one step away from the regression to the kind of mindlessness that typified 1984‘s society as a whole. We are not that far from it ourselves.

In the next and last post reviewing The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, we will look at the final third of the book, still looking for signs of hope, although the prospects of that are getting dimmer.

Facts About the Flu for Our Loved Ones with Dementias and Alzheimer’s Disease

Our loved ones with dementias and Alzheimer’s Disease are much more susceptible to getting the flu than the general population, including senior citizens in general (the overwhelming majority of deaths from the flu each year occur in people over the age of 65).

With the peak of flu season upon us, it would be a good idea to review some basics about the flu. Click on the infographic below to see the full article.


Source: Fix.com

Will Poor Sleep and Sleep Deprivation Now Lead to a Lifestyle-Related Dementia Later?

The answer is “probably.”

There have been several studies in the last two years on the effects – positive and negative – of sleep on the brain. They all agree on one point: to function optimally, the brain requires quality sleep and enough of it.

They also agree on another point: the way our modern society is structured, the majority of us are not getting enough sleep, and the little sleep we are getting is not quality sleep.

The fact that poor sleep and future dementia are linked is not new.

A sleep disorder known as REM sleep behavior disorder is a key characteristic of Lewy Body dementia, but the sleep disorder is often present decades before symptoms of Lewy Body dementia emerge.

In a study published in The Journal of the American Medical Association in 2011, researchers showed a strong link between sleep apnea (sleep-disordered breathing) and dementia.

However, new research is now showing that even those of us without these two sleep disorders are getting less sleep, and the sleep we do get is not quality sleep. New neurological research is showing us how important enough sleep and good sleep are for our present and future neurological health.

The body has a natural circadian rhythm designed to promote and facilitate sleep as daylight turns into evening and then night and to promote and facilitate wakefulness as night turns into day.

Until the Industrial Revolution, which actually consists of two iterations (one in the late 18th century and the second, which was the more profound of the two, in the mid-19th century), the human race generally slept and awakened based on the body’s natural circadian rhythm.

After the second iteration of the Industrial Revolution, when crude ways to keep the lights on 24 hours a day, 7 days a week emerged, all that changed. Initially, the only segment of the population it affected was those employed in factories, mines, and foundries.

As textile factories, ore and mineral mines, and metal foundries remade the work day into two 11-hour shifts – generally, 7 am – 6 pm and 7 pm – 6 am – workers on the second shift were forced to ignore and work against their natural circadian rhythms to fuel the manufacturing boom, which was bolstered by a greater demand for manufactured goods throughout all strata of the population.

Although there was less concern then about the workers’ health, quality of life, and even deaths, there is still a significant amount of data from that period showing that most of the horrific accidents (the majority of which were attributable to human error and resulted in both permanent disabilities and deaths) occurred during the later hours of the 2nd shift.

In the early 20th century, as manufacturing expanded into transportation, work days were again revised into three shifts – 7 am – 3 pm, 3 pm – 11 pm, and 11 pm to 7 am – with similar higher accident rates in the 2nd and 3rd shifts.

Medical professionals in hospitals, nursing facilities, and emergency services work were the next group of people required to work in shifts. Additionally, of all the careers in which shift workers were employed, it was not unusual for many medical professionals to work double shifts (back-to-back shifts) to provide necessary services.

During World War II, almost all manufacturing facilities in the U.S. transitioned to 24/7 production and a 1st, 2nd, and 3rd shift to support the Allies’ efforts in the war. After World War II, as those factories transitioned back to civilian manufacturing, they kept 24/7 production and three shifts in place. 

As the Technological Revolution replaced the Industrial Revolution (also in two iterations, with the first one beginning after World War II, and the second one, which now affects every human on the planet, beginning in the late 1960’s) and the world became instantaneously and simultaneously intricately connected, the 24/7 workday began to affect almost everyone on the planet, including white-collar workers, who saw their workdays – and nights – lengthen beginning in the late 1980’s.

As more and more people have been, by necessity, forced into living and working in a 24/7 environment, researchers have kept a close eye on how successful our efforts to work against our natural circadian rhythms have been.

The answer is that we’re all pretty much failures at it, and the results are poor-quality sleep and sleep deprivation.

And like our ancestors in the Industrial Revolution, those of us working late into the night or all night – whether in a medical facility, an emergency services department, a manufacturing facility, an office, or at home (because half the world’s awake when it’s time for people in the U.S. to go to bed) – show the same elevated risks of accidents and injuries (both work-related and non-work-related) compared with those working during daylight hours.

Here are a few statistics directly tied to shift work (if you’re an office jockey reading this, remember that this applies equally to you and all those late nights and overnights you’re working wherever you’re working them):

  • Work-related injury rates are a little over 15% higher on the 2nd shift and almost 28% higher on the 3rd shift.
  • The longer the shift, the higher the risk of injuries: 13% higher on a 10-hour shift and almost 30% higher on a 12-hour shift.
  • The more consecutive night shifts worked, the greater the risk of sustaining an injury (37% higher by the fourth consecutive night shift, as opposed to 17% higher by the fourth consecutive day shift).
  • Almost 50% of the late-night (10 pm – 1 am) and early-morning (5 am – 8 am) car accidents – fatal and non-fatal – involve drivers who are driving to or from work.

Pretty scary, huh? And, yet, despite all the evidence that it’s a really bad idea, a dangerous idea, and a dumb idea, we, as a society, keep doing it. I won’t get in-depth into the reasons for that here, except to say that they are tied to greed and competitiveness, which are soul issues.

What is the biology behind the statistics above?

That we can answer. And I’ve had more jobs than not where I worked 10-12 hours on a Sunday-Thursday night schedule, where I’ve worked many late, late nights only to be back at my office first thing the next morning, and where I’ve pulled many all-nighters, so I’ve got a lot of firsthand experience to bring to the table.

The reality is that unless you’re physically exhausted – mental exhaustion actually keeps the brain in gear and is totally counterproductive – you can’t get any real quality sleep during the day. Melatonin production is off and all the hormones that keep you awake are in action, so trying to sleep well is a losing battle.

So while you may be able to get a few hours of restless sleep, you do not go through the normal sleep cycles associated with nighttime restorative sleep.

As a result, your brain is “foggy” when you’re awake and your response times are sluggish; combined with the normal circadian rhythm of sleep kicking in at night – even if you’re awake – these factors are directly tied to the increased risks of accidents and injuries during nighttime work hours.

The later into the night you work, the more likely you are to have an injury and/or accident, because these are the normal hours when sleep is deepest and during which you’ll be fighting sleep the most.

But the long-term effects of poor sleep and sleep deprivation are just as serious with regard to neurological health.

In a series of studies on sleep published in late 2013, researchers discovered that good, normal sleep (7-8 hours at night) enables the brain to clean out the toxins – including beta amyloid proteins, which are involved in the development and progression of Alzheimer’s Disease – that accumulate in it during the day’s mental activities. This process is so energy-intensive that it can be done only during sleep, when the brain doesn’t have anything else to do.

And here’s the thing. Perpetually skimping on sleep, for a lot of us who don’t do shift work and don’t have careers that demand a lot of late, late nights and early, early mornings on a consistent basis, is a lifestyle choice.

Technically, however, all of these careers – except for manufacturing work, which puts food on the table and pays the bills for people who might not be able to do so otherwise – are lifestyle choices, because anyone going into them knows the demands before choosing the education and jobs that lead to them.

And that substantially increases your risk of developing a lifestyle dementia.

We, as a society, are very sleep-deprived. And that includes a lot of people who are not earning their living during the night.

Much of that, in my opinion, is because we are digitally and electronically connected all the time and that crowds out the time we allocate for sleep.

A few questions should help you know if this applies to you personally.

  1. Do you watch TV for several hours in bed or do you play video games before you go to sleep?
  2. Is your smart phone or tablet beside your bed so you can check email or keep up with social media? Do you check them during the night?
  3. Are you digitally and electronically connected last thing before you close your eyes at night and first thing when you awaken in the morning?
  4. Do you remember what you did at night before you got digitally and electronically connected?

If the answer to the first three questions is “yes” and the answer to the last question is “no,” then you’re making a lifestyle choice – probably sacrificing sleep – to stay connected all the time to a world that, quite frankly, isn’t all that important or real anyway. (It’s important to remember that all these digital and electronic things stimulate the brain, so their after-effects stay with you for quite some time after you turn them off, and that means it takes you longer to fall asleep.)

And whatever is real or important about it can wait until tomorrow. Like it did when a lot of us were little kids and there was no cable TV, no public internet, no video games, no personal digital/electronic devices, and no cell phones.

The world didn’t end then, and it won’t end now if you put all these away early in the evening and give your brain a chance to relax by playing a game with your family, listening to music that soothes your soul, getting lost in a book, or simply being quiet for a little while, using that time to meditate and reflect on your day and make plans for tomorrow.

Even though I’ve had trouble all my life sleeping enough and getting good sleep when I do, I purposely shut everything down early in the evening to engage in quieter and more reflective activities, and I stay away from it all until I’ve had some quality time in the morning to get ready to tackle it again.

One day each week – for me, it’s the weekly Sabbath – I disconnect completely for the 24 hours between sunset Friday and sunset Saturday, and I’ve begun to move away from being connected much on Sundays as well.

I rarely have my cell phone anywhere near me and even when I do, I rarely use it. I certainly don’t want it in my bedroom with me at night.

With my sleep history, I’m already behind in this game, so I make lifestyle choices to improve my odds the best I can. It may not be enough to stave off dementias, but at least I know the choices I’m making increase the odds that, if I live long enough (I always pray I don’t…we start dying the day we’re born, so it’s pretty much all downhill from that point on), they’re either mild or short and done.

For all of us who can read this today, now is the time to start making sure we’re doing everything in our power to get enough sleep and to get good sleep when we do. That’s a lifestyle choice that only you can make for you and that only I can make for me.

It may mean some hard choices. It may mean a career change. It may mean disconnecting during nighttime from technology. It may mean looking at our lives and figuring out what’s really important in the long-term, instead of buying into the pervasive idea that now is the only important time in our lives.

But in the end, from this moment on, at least in the realm of sleep, you can do something to help yourself, but you have to decide what you’re willing to trade off now and what you’re willing to live with in the future.