MythBusters: The Human Brain Edition
Source Credit: 7 Myths About the Brain
Separating Fact From Fiction
By Kendra Cherry, About.com Guide
The human brain is amazing and sometimes mysterious. While researchers are still uncovering the secrets of how the brain works, they have discovered plenty of information about what goes on inside your noggin. Unfortunately, there are still a lot of brain myths out there.
The following are just a few of the many myths about the brain.
Myth 1: You only use 10 percent of your brain.
You’ve probably heard this oft-cited bit of information several times, but constant repetition does not make it any more accurate. People often use this popular urban legend to imply that the mind is capable of much greater things, such as dramatically increased intelligence, psychic abilities, or even telekinesis. After all, if we can do all the things we do using only 10 percent of our brains, just imagine what we could accomplish if we used the remaining 90 percent.
Reality check: Research suggests that all areas of the brain perform some type of function. If the 10 percent myth were true, brain damage would be far less likely – after all, we would only have to worry about that tiny 10 percent of our brains being injured. The fact is that damage to even a small area of the brain can have profound consequences for both cognition and functioning. Brain imaging technologies have also demonstrated activity throughout the entire brain, even during sleep.
“It turns out though, that we use virtually every part of the brain, and that [most of] the brain is active almost all the time. Let’s put it this way: the brain represents three percent of the body’s weight and uses 20 percent of the body’s energy.” – Neurologist Barry Gordon of Johns Hopkins School of Medicine, Scientific American
Myth 2: Brain damage is permanent.
The brain is a fragile thing and can be damaged by things such as injury, stroke, or disease. This damage can result in a range of consequences, from mild disruptions in cognitive abilities to complete impairment. Brain damage can be devastating, but is it always permanent?
Reality check: While we often tend to think of brain injuries as lasting, a person’s ability to recover from such damage depends upon the severity and the location of the injury. For example, a blow to the head during a football game might lead to a concussion. While this can be quite serious, most people are able to recover when given time to heal. A severe stroke, on the other hand, can result in dire consequences to the brain that can very well be permanent.
However, it is important to remember that the human brain has an impressive amount of plasticity. Even following a serious brain event, such as a stroke, the brain can often heal itself over time and form new connections within the brain.
“Even after more serious brain injury, such as stroke, research indicates that — especially with the help of therapy — the brain may be capable of developing new connections and “reroute” function through healthy areas.” – BrainFacts.org
Myth 3: People are either “right-brained” or “left-brained.”
Have you ever heard someone describe themselves as either left-brained or right-brained? This stems from the popular notion that people are dominated by either their right or left brain hemisphere. According to this idea, people who are “right-brained” tend to be more creative and expressive, while those who are “left-brained” tend to be more analytical and logical.
Reality Check: While experts do recognize that there is lateralization of brain function (that is, certain types of tasks and thinking tend to be more associated with a particular region of the brain), no one is fully right-brained or left-brained. In fact, we tend to do better at tasks when the entire brain is utilized, even for things that are typically associated with a certain area of the brain.
“No matter how lateralized the brain can get, though, the two sides still work together. The pop psychology notion of a left brain and a right brain doesn’t capture their intimate working relationship. The left hemisphere specializes in picking out the sounds that form words and working out the syntax of the words, for example, but it does not have a monopoly on language processing. The right hemisphere is actually more sensitive to the emotional features of language, tuning in to the slow rhythms of speech that carry intonation and stress.” – Carl Zimmer, Discover
Myth 4: Humans have the biggest brains.
The human brain is quite large in proportion to body size, but another common misconception is that humans have the largest brains of any organism. How big is the human brain? How does it compare to other species?
Reality Check: The average adult has a brain weighing in at about three pounds and measuring up to about 15 centimeters in length. The largest animal brain belongs to that of a sperm whale, weighing in at a whopping 18 pounds! Another large-brained animal is the elephant, with an average brain size of around 11 pounds.
But what about relative brain size in proportion to body size? Humans must certainly have the largest brains in comparison to their body size, right? Once again, this notion is a myth. Surprisingly, one of the animals with the largest brain-to-body-size ratios is the shrew, whose brain makes up about 10 percent of its body mass.
“Our primate lineage had a head start in evolving large brains, however, because most primates have brains that are larger than expected for their body size. The Encephalization Quotient is a measure of brain size relative to body size. The cat has an EQ of about 1, which is what is expected for its body size, while chimps have an EQ of 2.5 and humans nearly 7.5. Dolphins, no slouches when it comes to cognitive powers and complex social groups, have an EQ of more than 5, but rats and rabbits are way down on the scale at below 0.4.” – Michael Balter, Slate.com
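To make the quoted EQ figures concrete, here is a minimal sketch of the calculation, assuming Jerison’s classic allometric baseline for mammals (expected brain mass of roughly 0.12 × body mass^(2/3), with masses in grams); the article does not say which baseline Balter’s numbers rest on, and the species masses below are rough textbook values chosen only for illustration. With them, the formula reproduces the quoted EQs of about 1 for the cat, 2.5 for the chimp, and nearly 7.5 for humans.

```python
# A minimal sketch of the Encephalization Quotient (EQ), assuming
# Jerison's classic allometric baseline for mammals:
#   expected brain mass ~= 0.12 * body_mass ** (2/3)   (masses in grams)
# The species masses below are rough textbook figures used purely for
# illustration; the article itself does not supply them.

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    """EQ = actual brain mass / brain mass expected for that body mass."""
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

for name, brain_g, body_g in [
    ("cat", 30, 4_000),        # EQ ~ 1.0, as the quote states
    ("chimp", 400, 45_000),    # EQ ~ 2.6
    ("human", 1_350, 65_000),  # EQ ~ 7.0
]:
    print(f"{name}: EQ = {encephalization_quotient(brain_g, body_g):.1f}")
```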
Myth 5: We are born with all the brain cells we ever have, and once they die, these cells are gone forever.
Traditional wisdom has long suggested that adults only have so many brain cells and that we never form new ones. Once these cells are lost, are they really gone for good?
Reality Check: In recent years, experts have discovered evidence that the human adult brain does indeed form new cells throughout life, even during old age. The process of forming new brain cells is known as neurogenesis, and researchers have found that it happens in at least one important region of the brain: the hippocampus.
“Above-ground nuclear bomb tests carried out more than 50 years ago resulted in elevated atmospheric levels of the radioactive carbon-14 isotope (14C), which steadily declined over time. In a study published yesterday (June 7) in Cell, researchers used measurements of 14C concentration in the DNA of brain cells from deceased patients to determine the neurons’ age, and demonstrated that there is substantial adult neurogenesis in the human hippocampus.” – Dan Cossins, The Scientist
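The logic of that bomb-pulse method is easy to sketch: the 14C level locked into a cell’s DNA when the cell divides is matched against the historical atmospheric 14C curve, and the matching year estimates when the cell was born. The toy sketch below illustrates the idea with made-up curve values; the real study used the measured atmospheric record and corrections far beyond this simple interpolation.

```python
# A toy sketch of bomb-pulse dating: the 14C level fixed in a cell's DNA
# at its birth (cell division) is matched against the atmospheric 14C
# record to estimate the birth year. The curve values below are made-up
# placeholders, NOT the real calibration data used in the study.

# (year, relative atmospheric 14C level), declining after the 1963 peak
ATMOSPHERIC_14C = [(1963, 1.90), (1975, 1.35), (1990, 1.15), (2005, 1.06)]

def estimate_birth_year(dna_14c: float) -> float:
    """Linearly interpolate on the declining limb of the bomb-pulse curve."""
    for (y0, c0), (y1, c1) in zip(ATMOSPHERIC_14C, ATMOSPHERIC_14C[1:]):
        if c1 <= dna_14c <= c0:
            fraction_elapsed = (c0 - dna_14c) / (c0 - c1)
            return y0 + fraction_elapsed * (y1 - y0)
    raise ValueError("14C level outside the tabulated range")

# A neuron whose DNA carries a 14C level of 1.25 dates to about 1982,
# i.e. the cell was born decades after the person's own birth.
print(round(estimate_birth_year(1.25)))
```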
Myth 6: Drinking alcohol kills brain cells.
Partly related to the myth that we never grow new neurons is the idea that drinking alcohol can lead to cell death in the brain. Drink too much or too often, some people might warn, and you’ll lose precious brain cells that you can never get back. We’ve already learned that adults do indeed get new brain cells throughout life, but could drinking alcohol really kill brain cells?
Reality Check: While excessive or chronic alcohol abuse can certainly have dire health consequences, experts do not believe that drinking causes neurons to die. In fact, research has shown that even binge drinking doesn’t actually kill neurons.
“Scientific medical research has actually demonstrated that the moderate consumption of alcohol is associated with better cognitive (thinking and reasoning) skills and memory than is abstaining from alcohol. Moderate drinking doesn’t kill brain cells but helps the brain function better into old age. Studies around the world involving many thousands of people report this finding.” – PsychCentral.com
Myth 7: There are 100 billion neurons in the human brain.
If you’ve ever thumbed through a psychology or neuroscience textbook, you have probably read that the human brain contains approximately 100 billion neurons. How accurate is this oft-repeated figure? Just how many neurons are in the brain?
Reality Check: The estimate of 100 billion neurons has been repeated so often and for so long that no one is completely sure where it originated. In 2009, however, one researcher decided to actually count the neurons in adult brains and found that the number was just a bit off the mark. Based upon this research, it appears that the human brain contains closer to 86 billion neurons. So while the often-cited number is a few billion too high, 86 billion is still nothing to sneeze at.
“We found that on average the human brain has 86bn neurons. And not one [of the brains] that we looked at so far has the 100bn. Even though it may sound like a small difference the 14bn neurons amount to pretty much the number of neurons that a baboon brain has or almost half the number of neurons in the gorilla brain. So that’s a pretty large difference actually.” – Dr. Suzana Herculano-Houzel
References
Balter, M. (2012, October 26). Why are our brains so ridiculously big? Slate. Retrieved from http://www.slate.com/articles/health_and_science/human_evolution/2012/10/human_brain_size_social_groups_led_to_the_evolution_of_large_brains.html
Boyd, R. (2008, February 7). Do people only use 10 percent of their brains? Scientific American. Retrieved from http://www.scientificamerican.com/article.cfm?id=people-only-use-10-percent-of-brain
BrainFacts.org. (2012). Myth: Brain damage is always permanent. Retrieved from http://www.brainfacts.org/diseases-disorders/injury/articles/2011/brain-damage-is-always-permanent
Cossins, D. (2013, June 7). Human adult neurogenesis revealed. The Scientist. Retrieved from http://www.the-scientist.com/?articles.view/articleNo/35902/title/Human-Adult-Neurogenesis-Revealed/
Hanson, D. J. (n.d.). Does drinking alcohol kill brain cells? PsychCentral.com. Retrieved from http://www2.potsdam.edu/hansondj/HealthIssues/1103162109.html
Herculano-Houzel, S. (2009). The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3(31). doi:10.3389/neuro.09.031.2009
Randerson, J. (2012, February 28). How many neurons make a human brain? Billions fewer than we thought. The Guardian. Retrieved from http://www.guardian.co.uk/science/blog/2012/feb/28/how-many-neurons-human-brain
The Technium. (2004). Brains of white matter. Retrieved from http://www.kk.org/thetechnium/archives/2004/11/brains_of_white.php
Zimmer, C. (2009, April 15). The big similarities & quirky differences between our left and right brains. Discover Magazine. Retrieved from http://discovermagazine.com/2009/may/15-big-similarities-and-quirky-differences-between-our-left-and-right-brains
Symptoms or Circuits? The Future of Diagnosis
Source Credit: PSYPOST
We live in the most exciting and unsettling period in the history of psychiatry since Freud started talking about sex in public.
On the one hand, the American Psychiatric Association has introduced the fifth iteration of the psychiatric diagnostic manual, DSM-V, representing the current best effort of the brightest clinical minds in psychiatry to categorize the enormously complex pattern of human emotional, cognitive, and behavioral problems. On the other hand, in new and profound ways, neuroscience and genetics research in psychiatry is yielding insights that challenge the traditional diagnostic schemas that have long been at the core of the field.
“Our current diagnostic system, DSM-V, represents a very reasonable attempt to classify patients by their symptoms. Symptoms are an extremely important part of all medical diagnoses, but imagine how limited we would be if we categorized all forms of pneumonia as ‘coughing disease,’” commented Dr. John Krystal, Editor of Biological Psychiatry.
A paper by Sabin Khadka and colleagues that appears in the September 15th issue of Biological Psychiatry advances the discussion of one of these roiling psychiatric diagnostic dilemmas.
One of the core hypotheses is that schizophrenia and bipolar disorder are distinct scientific entities. Emil Kraepelin, credited by many as the father of modern scientific psychiatry, was the first to draw a distinction between dementia praecox (schizophrenia) and manic depression (bipolar disorder) in the late 19th century based on the behavioral profiles of these syndromes. Yet, patients within each diagnosis can have a wide variation of symptoms, some symptoms appear to be in common across these diagnoses, and antipsychotic medications used to treat schizophrenia are very commonly prescribed to patients with bipolar disorder.
But at the level of brain circuit function, do schizophrenia and bipolar disorder differ primarily by degree, or are there clear categorical differences? To answer this question, researchers from a large collaborative project called B-SNIP looked at a large sample of patients diagnosed with schizophrenia or bipolar disorder, their healthy relatives, and healthy people without a family history of psychiatric disorder.
They used a specialized analysis technique to evaluate the data from their multi-site study, which revealed abnormalities within seven different brain networks. Generally speaking, they found that schizophrenia and bipolar disorder showed similar disturbances in cortical circuit function. When differences emerged between these two disorders, it was usually because schizophrenia appeared to be a more severe disease. In other words, individuals with schizophrenia had abnormalities that were larger or affected more brain regions. Their healthy relatives showed subtle alterations that fell between the healthy comparison group and the patient groups.
The authors highlight the possibility that there is a continuous spectrum of circuit dysfunction, spanning from individuals without any familial association with schizophrenia or bipolar to patients carrying these diagnoses. “These findings might serve as useful biological markers of psychotic illnesses in general,” said Khadka.
Krystal agreed, adding, “It is evident that neither our genomes nor our brains have read DSM-V in that there are links across disorders that we had not previously imagined. These links suggest that new ways of organizing patients will emerge once we understand both the genetics and neural circuitry of psychiatric disorders sufficiently.”
Forget What You’ve Learnt About Learning
Source and authorship credit: “Everything you thought you knew about learning is wrong,” Psychology Today (http://www.psychologytoday.com/)
Everything You Thought You Knew About Learning Is Wrong
How, and how NOT, to learn anything
Published on January 28, 2012 by Garth Sundem in Brain Candy
Learning through osmosis didn’t make the strategies list
Taking notes during class? Topic-focused study? A consistent learning environment? All are exactly opposite the best strategies for learning. Really. I recently had the good fortune to interview Robert Bjork, director of the UCLA Learning and Forgetting Lab, distinguished professor of psychology, and massively renowned expert on packing things in your brain in a way that keeps them from leaking out. And it turns out that everything I thought I knew about learning is wrong. Here’s what he said.
First, think about how you attack a pile of study material.
“People tend to try to learn in blocks,” says Bjork, “mastering one thing before moving on to the next.” But instead he recommends interleaving, a strategy in which, for example, instead of spending an hour working on your tennis serve, you mix in a range of skills like backhands, volleys, overhead smashes, and footwork. “This creates a sense of difficulty,” says Bjork, “and people tend not to notice the immediate effects of learning.”
Instead of making an appreciable leap forward with your serving ability after a session of focused practice, interleaving forces you to make nearly imperceptible steps forward with many skills.
But over time, the sum of these small steps is much greater than the sum of the leaps you would have taken if you’d spent the same amount of time mastering each skill in its turn.
Bjork explains that successful interleaving allows you to “seat” each skill among the others: “If information is studied so that it can be interpreted in relation to other things in memory, learning is much more powerful,” he says.
There’s one caveat: Make sure the mini skills you interleave are related in some higher-order way. If you’re trying to learn tennis, you’d want to interleave serves, backhands, volleys, smashes, and footwork—not serves, synchronized swimming, European capitals, and programming in Java.
Similarly, studying in only one location is great as long as you’ll only be required to recall the information in the same location. If you want information to be accessible outside your dorm room, or office, or nook on the second floor of the library, Bjork recommends varying your study location.
And again, these tips generalize. Interleaving and varying your study location will help whether you’re mastering math skills, learning French, or trying to become a better ballroom dancer.
So too will a somewhat related phenomenon, the spacing effect, first described by Hermann Ebbinghaus in 1885. “If you study and then you wait, tests show that the longer you wait, the more you will have forgotten,” says Bjork. That’s obvious: over time, you forget. But here’s the cool part:
If you study, wait, and then study again, the longer the wait, the more you’ll have learned after this second study session.
Bjork explains it this way: “When we access things from our memory, we do more than reveal it’s there. It’s not like a playback. What we retrieve becomes more retrievable in the future. Provided the retrieval succeeds, the more difficult and involved the retrieval, the more beneficial it is.” Note that there’s a trick implied by “provided the retrieval succeeds”: You should space your study sessions so that the information you learned in the first session remains just barely retrievable. Then, the more you have to work to pull it from the soup of your mind, the more this second study session will reinforce your learning. If you study again too soon, it’s too easy.
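Bjork’s rule of thumb, space the next session so the material is just barely retrievable, is the intuition behind expanding-interval review schedules. The sketch below is a generic scheduler of that kind, not a formula Bjork gives; the growth factor of 2.5 is an arbitrary choice for illustration.

```python
# A minimal expanding-interval review scheduler, illustrating the spacing
# effect described above. The growth factor is arbitrary; Bjork offers the
# principle ("wait until recall just barely succeeds"), not a formula.

from datetime import date, timedelta

def next_interval(days: float, recalled: bool, growth: float = 2.5) -> float:
    """Lengthen the gap after a successful recall; shorten it after a lapse."""
    if recalled:
        return days * growth            # make the next retrieval harder
    return max(1.0, days / growth)      # step back and rebuild

# Five successful review sessions with ever-longer gaps between them
interval, review_day = 1.0, date.today()
for session in range(1, 6):
    review_day += timedelta(days=round(interval))
    print(f"session {session}: {review_day} (gap of ~{interval:.0f} days)")
    interval = next_interval(interval, recalled=True)
```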
Along these lines, Bjork also recommends taking notes just after class, rather than during: forcing yourself to recall a lecture’s information is more effective than simply copying it from a blackboard. “Get out of court stenographer mode,” says Bjork. You have to work for it.
The more you work, the more you learn, and the more you learn, the more awesome you can become.
“Forget about forgetting,” says Robert Bjork.
“People tend to think that learning is building up something in your memory and that forgetting is losing the things you built.
But in some respects the opposite is true.” See, once you learn something, you never actually forget it. Do you remember your childhood best friend’s phone number? No? Well, Dr. Bjork showed that if you were reminded, you would retain it much more quickly and strongly than if you were asked to memorize a fresh seven-digit number. So this old phone number is not forgotten; it lives somewhere in you. It’s just that recall can be a bit tricky.
And while we count forgetting as the sworn enemy of learning, in some ways that’s wrong, too. Bjork showed that the two live in a kind of symbiosis in which forgetting actually aids recall.
“Because humans have unlimited storage capacity, having total recall would be a mess,” says Bjork. “Imagine you remembered all the phone numbers of all the houses you had ever lived in. When someone asks you your current phone number, you would have to sort it from this long list.” Instead, we forget the old phone numbers, or at least bury them far beneath the ease of recall we gift to our current number. What you thought were sworn enemies are more like distant collaborators.
* Excerpted from Brain Trust: 93 Top Scientists Dish the Lab-Tested Secrets of Surfing, Dating, Dieting, Gambling, Growing Man-Eating Plants and More (Three Rivers Press, March 2012)
@garthsundem
Garth Sundem is the bestselling author of Brain Candy, Geek Logik, and The Geeks’ Guide to World Domination.
Maslow’s Hierarchy Of Facebook
Author Credit: futurecomms.co.uk
The Psychology Behind Facebook
A new study from Boston University has looked at why people use Facebook. But not in the conventional ‘to keep in touch with friends’ or ‘to share photos’ sense. Oh no, this is FAR more interesting.
The study looks at human needs (think Maslow) and attempts to explain where Facebook fits within that context. The authors’ proposition is that Facebook (and other social networks) meets two primary human needs. The first is the need to belong to a sociodemographic group of like-minded people (linked to self-esteem and self-worth). Given this ‘need to belong’, it is hypothesised that there are differences in the way people use and share on Facebook according to cultural factors (individualistic v collectivist cultures). The thing is, some studies have suggested that being active on Facebook may not improve self-esteem, so we may be kidding ourselves if that’s (partly) why we use it!
The second need is the need for self-presentation. Further studies suggest that the person people portray on Facebook IS the real person, not an idealised version. BUT, it’s a person as seen through a socially-desirable filter. In other words, we present ourselves as highly sociable, lovable and popular even if we sit in our bedrooms in the dark playing World of Warcraft ten hours a day. There’s an aspirational element to our online selves. And hey, for me that’s certainly true – I’m a miserable sod in real life!
It’s a fascinating topic area, an understanding of which could really help marketers. Click the Source link below to read more about this study and lots of associated material. But in the meantime, stop showing off on Facebook and start just being yourself :o)
(Source: readwriteweb.com)
iPhone Addiction: Does Smart Phone = Dumber You?
It’s not much of a leap to extrapolate from the GPS to the smartphone. A normal cellphone can remember numbers for you so that you no longer have to do so. Confess: can you remember the actual cellphone numbers of the people you call most frequently? We used to rely on our neurons to hold onto these crucial bits of information. Now they reside somewhere out there in the ether. What’s worse is that most people don’t even take the time to write down a new phone number anymore. You call your new acquaintance and your new acquaintance calls you, and the information is automatically stored in your contacts. It’s great for efficiency’s sake, but you’ve now given your working memory one less important exercise. Memory benefits from practice, especially in the crucial stage of encoding.
Let’s move from phone numbers to information in general. People with smartphones no longer have to remember important facts because, when in doubt, they can just tap into Google. When was the last time St. Louis was in the World Series, you wonder? Easy! Just enter a few letters (not even the whole city name) into your “smart” search engine. Your fingers, much less your mind, don’t have to walk very far at all. Trying to give your brain a workout with a crossword puzzle? What’s to stop you from taking a few shortcuts when the answers are right there on your phone? No mental gymnastics necessary.
This leads us to Siri, that seductress of the smartphone. With your iPhone slave on constant standby, you don’t even have to key in your questions. Just say the question, and Siri conjures up the answer in an instant. With a robot at your fingertips, why even bother to look the information up yourself?
The irony is that smartphones have the potential to make our brains sharper, not dumber. Researchers are finding that videogame play involving rapid decision-making can hone your cognitive resources. Older adults, in particular, seem to be able to improve their performance on speeded attention and decision-making tasks when they play certain games. People with a form of amnesia in which they can’t learn new information can also be helped by smartphones, according to a study conducted by Canadian researchers (Svoboda & Richards, 2009).
The problem is not the use of the smartphone itself; the problem comes when the smartphone takes over a function that your brain is perfectly capable of performing. It’s like taking the elevator instead of the stairs; the ride may be quicker, but your muscles won’t get a workout. Smartphones are like mental elevators. Psychologists have known for years that the “use it or lose it” principle is key to keeping your brain functioning at its peak throughout your life. As we become more and more drawn to these sleeker and sexier gadgets, the trick will be learning how to “use it.”
So take advantage of these 5 tips to help your smartphone keep you smart:
1. Don’t substitute your smartphone for your brain. Force yourself to memorize a phone number before you store it, and dial your frequently called numbers from memory whenever possible. If there’s a fact or word definition you can infer, give your brain the job before consulting your electronic helper.
2. Turn off the GPS app when you’re going to familiar places. Just as the GPS-hippocampus study showed, you need to keep your spatial memory as active as possible by relying on your brain, not your phone, when you’re navigating well-known turf. If you are using the GPS to get around a new location, study a map first. Your GPS may not really know the best route to take (as any proper Bostonian can tell you!).
3. Use your smartphone to keep up with current events. Most people use their smartphones in their leisure time for entertainment. However, with just a few easy clicks, you can just as easily check the headlines, op-eds, and featured stories from respected news outlets around the world. This knowledge will build your mental storehouse of information and make you a better conversationalist as well.
4. Build your social skills with pro-social apps. Some videogames can actually make you a nicer person by strengthening your empathic tendencies. Twitter and Facebook can build social bonds. Staying connected is easier than ever, and keeping those social bonds active provides you with social support. Just make sure you avoid some of the social media traps of over-sharing and FOMO (fear of missing out) syndrome.
5. Turn off your smartphone while you’re driving. No matter how clever you are at multitasking under ordinary circumstances, all experts agree that you need to give your undivided attention to driving when behind the wheel. This is another reason to look at and memorize your route before going someplace new. Fiddling with your GPS can create a significant distraction if you find that it’s given you the wrong information.
Smartphones have their place, and can make your life infinitely more productive, as long as you use yours to supplement, not replace, your brain.
Reference: Svoboda, E., & Richards, B. (2009). Compensating for anterograde amnesia: A new training method that capitalizes on emerging smartphone technologies. Journal of the International Neuropsychological Society, 15(4), 629-638. doi:10.1017/S1355617709090791
Follow Susan Krauss Whitbourne, Ph.D. on Twitter @swhitbo for daily updates on psychology, health, and aging, and please check out my website, www.searchforfulfillment.com, where you can read this week’s Weekly Focus to get additional information, self-tests, and psychology-related links.
Related articles
- Why Do So Many Robots Have A Woman’s Voice? [Technology] (jezebel.com)
- Siri lets strangers control some iPhone functions (redtape.msnbc.msn.com)
Clicking For The Cause: Does Online Activism Transfer To Real-Life Action?
People believe that social media, such as Facebook and Twitter, can help promote real political change. But do people actually do anything political outside of Facebook?
A team of researchers from Michigan State University led by Jessica Vitak set out to answer that question by looking at how young adults interacted with Facebook and with real-life politics during the 2008 election.
According to background information in the new study, during the 2008 election, both Republican and Democratic presidential candidates utilized Facebook to maintain pages that allowed users to post comments, share news and videos, and connect with other users.
Furthermore, Facebook members had access to various site features that allowed them to share their political views and interact with others on the site, including both their “friends” on the site, as well as other users to whom they connected with through shared use of political groups and pages.
“But did these efforts make a difference to the political participation of Facebook users?” the researchers asked.
To find out, the researchers emailed a survey to a random sample of 4,000 students on the Michigan State University campus, which yielded 683 usable responses. Participants took a number of surveys about their use of Facebook, including the Facebook Intensity scale, as well as about their political activities outside of Facebook.
Respondents tended to be female (68 percent) and white (86 percent), with a mean age of 20 years. Most participants reported having a Facebook account (96 percent) and being registered to vote (96 percent).
After analyzing the data, the researchers discovered that there is a complex relationship between young people’s use of Facebook and their political participation.
Researchers found that while young voters participate in political activity, the degree of this participation is somewhat superficial. The most common forms of general political participation tended to be informational and low in resource intensity (e.g., watching a debate), whereas political actions that required a greater commitment of resources (e.g., volunteering) were less frequent.
“This finding in isolation lends credibility to the concern that young citizens are becoming ‘slacktivists,’ engaging in feel-good forms of political participation that have little or no impact on effecting change,” note the researchers.
“While there are a variety of ways to participate, our sample indicated they overwhelmingly engaged in the least intrusive, least time-consuming activities.”
But the researchers suggested an alternative interpretation of their data, too. “As we age, our political participation inevitably increases, in part due to the accumulation of civic skills. By this line of reasoning, any political activity — whether occurring on Facebook or in other venues — facilitates the development of civic skills, which in turn increases political participation.”
“One advantage to the more lightweight political activity enabled via Facebook is the opportunity to “practice” civic skills with a minimal commitment of time and effort. Not only is Facebook accessible at any time of the day, but activities such as joining a political group or sharing a link can be accomplished with a few clicks of the mouse. These site characteristics create unique opportunities for participants to develop skills in their own time, representing a lower threshold for informal civic-engagement education.”
The study found that as the number of political activities people engage in on Facebook increases, so does political participation in other venues, and vice versa.
The researchers also found a strong negative relationship between Facebook Intensity and general political participation, a relationship that is more difficult to explain. One interpretation is that the most intense users of Facebook are classic “slacktivists”: they do not translate their political activities on the site into other, more commonly valued forms of political participation.
However, a number of alternative explanations are also possible. It may be that politically active users are only accessing Facebook to supplement their political participation in other venues.
Most importantly, this study has revealed that political activity on Facebook is significantly related to more general political participation.
“Facebook and other social networking services may offer young citizens an opportunity to experiment with their political opinions and beliefs while also being exposed to those of their peers, which could, in turn, stimulate their own interest and knowledge,” the researchers say.
“While Facebook may not be the cure-all to lagging political participation among young adults in the United States, this research provides support to the Internet-as-supplement argument that other researchers have made in regards to general communication.”
The study appears in the July 2010 issue of Cyberpsychology, Behavior, and Social Networking.
Source: Cyberpsychology, Behavior, and Social Networking
Reference:
Vitak, J., Zube, P., Smock, A., Carr, C.T., Ellison, N., Lampe, C. (2010). It’s Complicated: Facebook Users’ Political Participation in the 2008 Election. Cyberpsychology, Behavior, and Social Networking.
Related articles
- Narcissism, Self-Esteem & Facebook (peterhbrown.wordpress.com)
- What Were You Thinking? The Causes Of Online Disinhibition (peterhbrown.wordpress.com)
“That’s One Small Step…”: Up To 92% Of Parents Plant Their Child’s First Digital Footprint Before They Are 2 Years Old
It seems that many of our children will no longer have to worry about those embarrassing photos popping up at their 16th, 18th, or 21st birthdays. Many of them will have their lives broadcast as they grow via the internet, some before they are even born! The following article, based on research undertaken by internet security company AVG, raises some interesting and concerning questions about how we publicly share our children’s lives, beginning before they are even old enough to speak, let alone protest…
Digital Birth: Welcome to the Online World
AVG Study Finds a Quarter of Children Have Online Births Before Their Actual Birth Dates
Source: AMSTERDAM (BUSINESS WIRE)
Uploading prenatal sonogram photographs, tweeting pregnancy experiences, making online photo albums of children from birth, and even creating email addresses for babies – today’s parents are increasingly building digital footprints for their children prior to and from the moment they are born.
Internet security company AVG surveyed mothers in North America (USA and Canada), the EU5 (UK, France, Germany, Italy and Spain), Australia/New Zealand and Japan, and found that 81 percent of children under the age of two currently have some kind of digital profile or footprint, with images of them posted online. In the US, 92 percent of children have an online presence by the time they are two compared to 73 percent of children in the EU5.
According to the research, the average digital birth of children happens at around six months with a third (33%) of children’s photos and information posted online within weeks of being born. In the UK, 37 percent of newborns have an online life from birth, whereas in Australia and New Zealand the figure is 41 percent.
Almost a quarter (23%) of children begin their digital lives when parents upload their prenatal sonogram scans to the Internet. This figure is higher in the US, where 34 percent have posted sonograms online, while in Canada the figure is even higher at 37 percent. Fewer parents share sonograms of their children in France (13%), Italy (14%) and Germany (15%). Likewise only 14 percent of parents share these online in Japan.
Seven percent of babies and toddlers have an email address created for them by their parents, and five percent have a social network profile.
When asked what motivates parents to post images of their babies on the Internet, more than 70 percent of all mothers surveyed said it was to share with friends and family. However, more than a fifth (22%) of mothers in the US said they wanted to add more content to their social network profiles, while 18 percent of US mothers said they were simply following their peers.
Lastly, AVG asked mothers how concerned they are (on a scale of one to five with five being very concerned) about the amount of online information available on their children in future years. Mothers were moderately concerned (average 3.5), with Spanish mothers being the most concerned.
According to AVG CEO JR Smith, “It’s shocking to think that a 30-year-old has an online footprint stretching back 10–15 years at most, while the vast majority of children today will have an online presence by the time they are two years old – a presence that will continue to build throughout their whole lives.
“Our research shows that the trend is increasing for a child’s digital birth to coincide with and in many cases pre-date their real birth date. A quarter of babies have sonogram photos posted online before they have even physically entered into the world.
“It’s completely understandable why proud parents would want to upload and share images of very young children with friends and families. At the same time, we urge parents to think about two things:
“First, you are creating a digital history for a human being that will follow him or her for the rest of their life. What kind of footprint do you actually want to start for your child, and what will they think about the information you’ve uploaded in future?
“Secondly, it reinforces the need for parents to be aware of the privacy settings they have set on their social network and other profiles. Otherwise, sharing a baby’s picture and specific information may not only be shared with friends and family but with the whole online world.”
The research was conducted by Research Now among 2200 mothers with young (under two) children during the week of 27 September. Mothers in the EU5 (UK, Germany, France, Italy, Spain), Canada, the USA, Australia, New Zealand and Japan were polled.
Key results
1 – Mothers with children aged under two that have uploaded images of their child
Overall – 81%
USA – 92%
Canada – 84%
UK – 81%
France – 74%
Italy – 68%
Germany – 71%
Spain – 71%
(EU5 – 73%)
Australia – 84%
New Zealand – 91%
Japan – 43%
2 – Mothers that uploaded images of their newborn
Overall – 33%
USA – 33%
Canada – 37%
UK – 37%
France – 26%
Italy – 26%
Germany – 30%
Spain – 24%
(EU5 – 28.6%)
Australia – 41%
New Zealand – 41%
Japan – 19%
3 – Mothers that have uploaded antenatal scans online
Overall – 23%
USA – 34%
Canada – 37%
UK – 23%
France – 13%
Italy – 14%
Germany – 15%
Spain – 24%
(EU5 – 20%)
Australia – 26%
New Zealand – 30%
Japan – 14%
4 – Mothers that gave their baby an email address
Overall – 7%
USA – 6%
Canada – 9%
UK – 4%
France – 7%
Italy – 7%
Germany – 7%
Spain – 12%
(EU5 – 7%)
Australia – 7%
New Zealand – 4%
Japan – 7%
5 – Mothers that gave their baby a social network profile
Overall – 5%
USA – 6%
Canada – 8%
UK – 4%
France – 2%
Italy – 5%
Germany – 5%
Spain – 7%
(EU5 – 5%)
Australia – 5%
New Zealand – 6%
Japan – 8%
Related articles
- Good Parenting? Thousands of Babies Are on Facebook (bettyconfidential.com)
- Look Both Ways: Keeping Your Kids Safe On Facebook (peterhbrown.wordpress.com)
“Have The Time Of Your Life” Or “Beat It”: The Dance Moves That Make Men Attractive
The key dance moves that make men attractive to women have been discovered by psychologists at Northumbria University.
Credit: Medical News Today
Using 3D motion-capture technology to create uniform avatar figures, researchers have identified the key movement areas of the male dancer’s body that influence female perceptions of whether their dance skills are “good” or “bad”.
The study, led by psychologist Dr Nick Neave and researcher Kristofor McCarty, has for the first time identified potential biomechanical differences between “good” and “bad” male dancers. Its findings are published in the Royal Society journal Biology Letters on Wednesday 8th September.
Dr Neave believes that such dance movements may form honest signals of a man’s reproductive quality, in terms of health, vigour or strength, and will carry out further research to fully grasp the implications.
Researchers at Northumbria’s School of Life Sciences filmed 19 male volunteers, aged 18–35, with a 3D camera system as they danced to a basic rhythm. Their real-life movements were mapped onto featureless, white, gender-neutral humanoid characters, or avatars, so that 35 heterosexual women could rate their dance moves without being prejudiced by each male’s individual level of physical attractiveness.
The results showed that eight movement variables made the difference between a “good” and a “bad” dancer. These were the size of movement of the neck, trunk, left shoulder and wrist, the variability of movement size of the neck, trunk and left wrist, and the speed of movement of the right knee.
Female perceptions of good dance quality were influenced most greatly by large and varied movements involving the neck and trunk.
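The three families of movement variable reported here (size of movement, variability of movement size, and speed) are straightforward to derive from motion-capture output. The sketch below shows one plausible way to compute them from a single joint’s 3D trajectory; the paper’s exact operational definitions may differ, so treat this as an illustration of the kind of measurement involved.

```python
# One plausible way to compute the three kinds of movement variable named
# above (size, variability of size, and speed) from a joint's 3D trajectory.
# The paper's exact operational definitions may differ; this is illustrative.

import numpy as np

def movement_variables(xyz: np.ndarray, fps: float = 60.0) -> dict:
    """xyz: array of shape (n_frames, 3) holding one joint's positions."""
    step_sizes = np.linalg.norm(np.diff(xyz, axis=0), axis=1)
    return {
        "size": step_sizes.sum(),          # total path length travelled
        "variability": step_sizes.std(),   # unevenness of the movements
        "speed": step_sizes.mean() * fps,  # mean speed in units per second
    }

# Example: a synthetic, slightly noisy circular "neck" movement
t = np.linspace(0, 10 * np.pi, 600)
neck = np.column_stack([np.cos(t), np.sin(t), 0.05 * np.random.randn(600)])
print(movement_variables(neck))
```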
Dr Neave said: “This is the first study to show objectively what differentiates a good dancer from a bad one. Men all over the world will be interested to know what moves they can throw to attract women.
“We now know which area of the body females are looking at when they are making a judgement about male dance attractiveness. If a man knows what the key moves are, he can get some training and improve his chances of attracting a female through his dance style.”
Kristofor McCarty said: “The methods we have used here have allowed us to make some preliminary predictions as to why dance has evolved. Our results clearly show that there seems to be a strong general consensus as to what is seen as a good and bad dance, and that women appear to like and look for the same sort of moves.
“From this, we predict that those observations have underlying traits associated with them but further research must be conducted to support such claims.”
Dr Neave and Kristofor McCarty also worked with fellow Northumbria researchers Dr Nick Caplan and Dr Johannes Hönekopp, and Jeanette Freynik and Dr Bernhard Fink, from the University of Goettingen, on the landmark study.
Sources: Northumbria University, AlphaGalileo Foundation.