Friday 28 November 2014

Loneliness changes the structure and function of the brain

Loneliness increases the risk of poor sleep, higher blood pressure, cognitive and immune decline, depression, and ultimately an earlier death. Why? The traditional explanation is that lonely people lack life’s advisors: people who encourage healthy behaviours and curb unhealthy ones. If so, we should invest in pamphlets, adverts and GP advice: ignorance is the true disease, loneliness just a symptom.

But this can’t be the full story. Introverts with small networks aren’t at especially high health risk, and people with an objectively full social life can feel lonely and suffer the consequences. A new review argues that for the 800,000 UK citizens who experience it all or most of the time, loneliness itself is the disease: it directly alters our perception, our thoughts, and the very structure and chemistry of our brains. The authors – loneliness expert John Cacioppo, his wife Stephanie Cacioppo, and their colleague John Capitanio – build their case on psychological and neuroscientific research, together with animal studies that help show loneliness really is the cause, not just the consequence, of various mental and physical effects.

The review suggests lonely people are sensitive to negative social outcomes and accordingly their responses in social settings are dampened. We know the former from reaction time tasks involving negative social words (lonely people respond faster), and tasks involving the detection of concealed pain in faces (lonely people are extra sensitive when the faces are dislikeable). Functional imaging evidence also shows lonely people have a suppressed neural response to rewarding social stimuli, which reduces their excitement about possible social contact; they also have dampened activity in brain areas involved in predicting what others are thinking – possibly a defence mechanism based on the idea that it’s better not to know. All this adds up to what the authors characterise as a social "self-preservation mode."

Meanwhile, animal models are helping us to understand the deeper, biological correlates associated with loneliness. For mice, being raised in isolation depletes key neurosteroids including one involved in aggression; it reduces brain myelination, which is vital to brain plasticity and may account for the social withdrawal and inflexibility seen in isolated animals; and it can influence gene expression linked to anxious behaviours.

What about changes to our neural tissue? Human research is suggestive: in one study, people who self-identified as lonelier were more likely to develop dementia. Here, initial cognitive decline could be causing loneliness, but animal work gives us some plausible mechanisms for loneliness’ impact: animals kept in isolation have suppressed growth of new neurons in areas relating to communication and memory, just as very social periods such as breeding season see a pronounced spike in growth.

Other basic brain processes are also upset by isolation. Isolated mice show reduced delta-wave activity during deep sleep; and their inflammatory responses also change, meaning that in one study, three in five isolated mice died following an induced stroke, whereas every one of their cage-sharing peers survived the same process.

The research is clear that loneliness directly impacts health, so we need to do what we can to help people free themselves from social marginalisation. I’ve seen one approach during my time serving with time banking charities, in which people give their own time in return for someone else’s in a different situation – a process that can build social networks. The issue is also gaining momentum through the Campaign to End Loneliness and technology solutions such as the RSA’s Social Mirror project – an app that tells people about local social groups and activities. Mainstream health is also picking this up under the term “social prescription” (physicians advise patients of social groups and activities and “facilitators” help the patients take up the opportunities). But amongst all the institutional activity, we mustn’t forget our individual duties: sometimes all that’s needed is to reach out.

_________________________________

Cacioppo, S., Capitanio, J., & Cacioppo, J. (2014). Toward a neurology of loneliness. Psychological Bulletin, 140(6), 1464-1504. DOI: 10.1037/a0037618

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Friday 14 November 2014

New sleep research suggests there are four chronotypes, not just two



For many years psychologists have divided people into two types based on their sleeping habits. There are Larks who rise early, feel sprightly in the morning, and retire to bed early; and Owls, who do the opposite, preferring to get up late and who come alive in the evening.

Have you ever thought that you don't fit either pattern; that you're neither a morning nor evening person? Even in good health, maybe you feel sluggish most of the time, or conversely, perhaps you feel high energy in the morning and evening. If so, you'll relate to a new study published by Arcady Putilov and his colleagues at the Siberian Branch of the Russian Academy of Sciences.

The researchers invited 130 healthy people (54 men) to a sleep lab and kept them awake for just over 24 hours. The participants were asked to refrain from coffee and alcohol, and several times during their stay they filled out questionnaires about how wakeful or dozy they were feeling. They also answered questions about their sleep patterns and wakeful functioning during the preceding week.


By analysing the participants' energy levels through the 24-hour period and their reports about their functioning during the previous week, Putilov and his team identified four distinct groups. Consistent with past research, there were Larks (29 of them), who showed higher energy levels on the first and second mornings at 9AM, but lower levels when tested at 9PM and midnight; and there were Owls (44 of them), who showed the opposite pattern. The Larks also reported rising earlier and going to bed earlier through the previous week, whereas the Owls showed the opposite pattern. There was an average two-hour difference between the sleep and wake cycles of these two groups.

The researchers also identified two further chronotypes. There was a "high energetic" group of 25 people who reported feeling relatively sprightly in both the morning and evening; and a "lethargic" group of 32 others, who described feeling relatively dozy in both the morning and evening. Unlike the Owls and Larks, these two groups didn't show differences in terms of their time to bed and time of waking - their habits tended to lie mid-way between the Larks and Owls.
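As a rough illustration of how a grouping like this could be derived, here is a minimal sketch. It is not the authors' method: the 0-10 rating scale and the midpoint cut-off are assumptions for illustration only. It assigns one of the four diurnal types from a person's morning and evening alertness self-ratings:

```python
# Illustrative sketch (not the study's actual scoring): classify a person
# into one of four diurnal types from self-rated alertness (assumed 0-10
# scale) in the morning and in the evening.

def classify_chronotype(morning_alertness, evening_alertness, cutoff=5.0):
    """Return one of four diurnal types from two alertness self-ratings."""
    high_am = morning_alertness >= cutoff
    high_pm = evening_alertness >= cutoff
    if high_am and not high_pm:
        return "lark"            # alert in the morning, dozy in the evening
    if high_pm and not high_am:
        return "owl"             # dozy in the morning, alert in the evening
    if high_am and high_pm:
        return "high energetic"  # relatively sprightly at both times
    return "lethargic"           # relatively dozy at both times
```

The actual study used repeated alertness-sleepiness self-scorings across 24 hours of sleep deprivation, so this two-number version is a deliberate simplification of the idea.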


The researchers said their results support the idea of there being "four diurnal types, and each of these types can ... be differentiated from any of three other types on self-scorings of alertness-sleepiness levels in the course of 24-hours sleep deprivation."

We already have bird names for morning and evening people - Owls and Larks. Part of the title of this new paper is "A search for two further 'bird species'". I was hoping the authors might propose two new bird names for their high energy and lethargic categories, but sadly they don't. What about Swift for the high energy category? I'm not sure about a lethargic bird. It's over to you - any ideas? [Readers on Twitter have so far proposed Dodo and Pelican].

_________________________________

Putilov, A., Donskaya, O., & Verevkin, E. (2015). How many diurnal types are there? A search for two further “bird species”. Personality and Individual Differences, 72, 12-17. DOI: 10.1016/j.paid.2014.08.003

Depression can affect our 'gut instincts'

People who are depressed often complain that they find it difficult to make decisions. A new study provides an explanation. Carina Remmers and her colleagues tested 29 patients diagnosed with major depression and 27 healthy controls and they found that the people with depression had an impaired ability to go with their gut instincts, or what we might call intuition.

Intuition is not an easy skill to measure. The researchers' approach was to present participants with triads of words (e.g. SALT DEEP FOAM) and the task was to decide in less than three and a half seconds whether the three words were linked in meaning by a fourth word (in this case the answer was "yes" and the word was SEA). Some triads were linked, others weren't.

If the participants answered that the words were linked, they were given eight more seconds to provide the linking fourth word. However, it was perfectly acceptable for them to say that they felt the words were linked, but that they didn't know how. Indeed, when this occurred, it was taken by the researchers as an instance of intuition - that is, "knowing without knowing how one knows".

There were no differences between the depressed patients and controls in the number of times they provided the correct fourth, linking word, nor in the number of times they provided no response at all. This suggests both groups were equally motivated and attentive to the task. But crucially, the depressed patients scored fewer correct intuitive answers (i.e. those times they stated correctly that the words were linked, but they didn't consciously know how).
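The scoring logic of the triad task can be sketched in a few lines. This is an illustration of the categories described above, not the researchers' own code; the trial format and category names are assumptions:

```python
# Illustrative sketch of how a single triad trial might be scored
# (assumed data format, not the authors' code). A trial records whether
# the triad really was linked, the participant's yes/no judgement, and
# the linking word they offered (None if they felt the words were linked
# but couldn't say how).

def score_trial(is_linked, judged_linked, offered_word, solution):
    """Classify one triad trial into a response category."""
    if judged_linked:
        if not is_linked:
            return "false alarm"       # said "yes" to an unlinked triad
        if offered_word is None:
            return "intuition"         # correct "yes", couldn't name the link
        if offered_word.upper() == solution.upper():
            return "solved"            # found the linking word explicitly
        return "wrong word"            # correct "yes", wrong linking word
    if is_linked:
        return "miss"                  # said "no" to a linked triad
    return "correct rejection"         # said "no" to an unlinked triad
```

On this scheme, the depressed patients' deficit showed up specifically in the "intuition" category: correct "yes" judgements made without conscious access to the link (e.g. SALT, DEEP, FOAM → SEA).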

Having poorer intuition on the task was associated with scoring higher on a measure of brooding (indicated by agreement with statements like "When I am sad, I think 'Why do I have problems others don't have?'"), and in turn this association appeared to be explained by the fact that the brooding patients felt more miserable.

Remmers and her team said their study makes an important contribution - in fact, it's the first time that intuition has been studied in people with major depression. The results are also consistent with past research involving healthy people that's shown low mood encourages an analytical style of thought and inhibits a creative, more intuitive thinking style.

However, I couldn't help doubting the realism of the measure of intuition used in this study. Is a judgement about word meanings really comparable to the gut decisions people have to make in their lives about jobs and relationships?

Two further questions remain outstanding: is the impairment in intuitive thinking a symptom or a cause of depression? And is this intuition deficit specific to depression, or will it also be found in patients with other mental health problems?

_________________________________

Remmers, C., Topolinski, S., Dietrich, D.E., & Michalak, J. (2014). Impaired intuition in patients with major depressive disorder. British Journal of Clinical Psychology. PMID: 25307321

Mate 'poaching'



According to one estimate, 63 per cent of men and 54 per cent of women are in their current long-term relationships because their current partner "poached" them from a previous partner. Now researchers in the US and Australia have conducted the first investigation into the fate of relationships formed this way, as compared with relationships formed by two unattached individuals.

An initial study involved surveying 138 heterosexual participants (average age 20; 71 per cent were women) four times over nine weeks. All were in current romantic relationships that had so far lasted from 0 to 36 months. Men and women who said they'd been poached by their current partner tended to start out the study by reporting less commitment to their existing relationship, feeling less satisfied in it, committing more acts of infidelity and looking out for more alternatives. What's more, over the course of the study, these participants reported progressively lower levels of commitment and satisfaction in their relationships. They also showed continued interest in other potential romantic partners and persistent levels of infidelity. This is in contrast to participants who hadn't been poached by their partners - they showed less interest in romantic alternatives over time.

The researchers led by Joshua Foster attempted to replicate these results with a second sample of 140 heterosexual participants who were surveyed six times over ten weeks. Again the participants who said they'd been poached by their partners tended to report less commitment and satisfaction in their current relationships, and more interest in romantic alternatives. However, unlike the first sample, this group did not show deterioration in their relationship over the course of the study. The researchers speculated this may be because the study was too short-lived or because deterioration in these relationships had already bottomed out.

It makes intuitive sense that people who were poached by their partners showed less commitment and satisfaction in their existing relationship. After all, if they were willing to abandon a partner in the past, why should they not be willing or even keen to do so again? This logic was borne out by a final study of 219 more heterosexual participants who answered questions not just about the way their current relationship had been formed, but also about their personalities and attitudes.

Foster and his team summarised the findings: "individuals who were successfully mate poached by their current partners tend[ed] to be socially passive, not particularly nice to others, careless and irresponsible, and narcissistic. They also tend[ed] to desire and engage in sexual behaviour outside of the confines of committed relationships." The last factor in particular (measured formally with the "Socio-sexual Orientation Inventory-revised") appeared to explain a large part of the link between having been poached by one's partner and having weak commitment to the new relationship.

Across the three studies, between 10 and 30 per cent of participants said they'd been poached by their current partners. This shows again that a significant proportion of relationships are formed this way, the researchers said, and that more research is needed to better understand how these relationships function. "We present the first known evidence [showing] specific long-term disadvantages for individuals involved in relations that formed via mate poaching," they concluded.

_________________________________

Foster, J., Jonason, P., Shrira, I., Keith Campbell, W., Shiverdecker, L., & Varner, S. (2014). What do you get when you make somebody else’s partner your own? An analysis of relationships formed via mate poaching. Journal of Research in Personality, 52, 78-90. DOI: 10.1016/j.jrp.2014.07.008

Monday 3 November 2014

Happy walking helps keep people more positive

Walking in a happier style could help counter the negative mental processes associated with depression. That's according to psychologists in Germany and Canada who used biofeedback to influence the walking style of 47 university students on a treadmill.

The students, who were kept in the dark about the true aims of the study, had their gait monitored with motion capture technology. For half of them, the more happily they walked (characterised by larger arm and body swings, and a more upright posture), the further a gauge on a video monitor shifted to the right; the sadder their gait, the more it shifted leftwards. The students weren't told what the gauge measured, but they were instructed to experiment with different walking styles to try to shift the bar rightwards. This feedback had the effect of encouraging them to walk with a gait characteristic of people who are happy.

For the other half of the students, the gauge direction was reversed, and the sadder their gait, the further the gauge shifted to the right. Again, these students weren't told what the gauge measured, but they were instructed to experiment with their walking style and to try to shift the gauge rightwards as far as possible. In other words, the feedback encouraged them to adopt a style of walking characteristic of people who are feeling low.
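The feedback loop described above can be sketched in miniature. The feature names, the equal weighting, and the gauge range are assumptions for illustration (the actual study derived gait "happiness" from full motion-capture data):

```python
# Minimal sketch of the biofeedback gauge mapping (illustrative only; the
# feature set and weights are assumptions, not the study's model). Gait
# features are normalised to [0, 1], where larger values mean a happier
# gait (bigger arm swing, bigger body sway, more upright posture). The
# gauge deflects in [-1, 1]; positive = rightwards. For the "sad" condition
# the sign is flipped, so walking sadly pushes the gauge rightwards.

def gauge_position(arm_swing, body_sway, posture, condition="happy"):
    """Return gauge deflection in [-1, 1] from three gait features."""
    happiness = (arm_swing + body_sway + posture) / 3.0  # simple average
    deflection = 2.0 * happiness - 1.0                   # map [0,1] -> [-1,1]
    return deflection if condition == "happy" else -deflection
```

Because the students only knew to "push the gauge rightwards", the same instruction nudged one group towards a happy gait and the other towards a sad one, without revealing the manipulation.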

After four minutes of gait feedback on the treadmill, both groups of students were asked how well each of forty positive and negative emotional words described their own personality. This quiz took about two minutes, after which the students continued for another eight minutes trying to keep the gait feedback gauge deflected to the right. The students' final and crucial task on the treadmill was to recall as many of the earlier descriptive words as possible.

The striking finding is that the students who were unknowingly guided by feedback to walk with a happier gait tended to remember more positive than negative self-referential words, as compared with the students who were guided to walk with a more negative style. That is, the happy walkers recalled an average of 6 positive words and 3.8 negative words, compared with the sad walkers who recalled an average of 5.47 positive words and 5.63 negative words. Focusing on the students who achieved the happiest style of gait, they recalled three times as many positive words as the students who achieved the saddest style of gait.

"Our results show that biased memory towards self-referent negative material [a feature of depression] can be changed by manipulating the style of walking," said the research team led by Johannes Michalak. The observed effects of gait on memory were not accompanied by any group differences in the students' self-reported mood at the end of the study, suggesting a direct effect of walking style on emotional memory processes.

The results build on past research that suggests pulling a happy facial expression can lift people's mood. There could be exciting practical implications for helping people with depression, but the researchers acknowledged some issues need to be addressed. For example, the current study involved a small non-clinical sample, and the researcher who delivered the forty emotional words to the walking students was not blind to the gait condition they were in, raising the possibility that he or she inadvertently influenced the results in some way. It's also notable that there wasn't data from a baseline control group whose gait was not influenced; it would have been useful to see how they performed on the memory test.
_________________________________

Michalak, J., Rohde, K., & Troje, N. (2015). How we walk affects what we remember: Gait modifications through biofeedback change negative affective memory bias. Journal of Behavior Therapy and Experimental Psychiatry, 46, 121-125. DOI: 10.1016/j.jbtep.2014.09.004

Friday 17 October 2014

New research on the effects of digital devices

"There's a brilliant study that came out two weeks ago," Baroness Professor Susan Greenfield said at a recent event promoting her new book, "... they took away all [the pre-teens'] digital devices for five days and sent them to summer camp ... and tested their interpersonal skills, and guess what, even within five days they'd changed."

Greenfield highlighted this study in the context of her dire warnings about the harmful psychological effects of modern screen- and internet-based technologies. She is clearly tapping into a wider societal anxiety around how much time we now spend online and plugged in. But the Baroness' critics argue that her pronouncements are vague, sensationalised and evidence-lite. The fact she mentioned this specific new study provides a rare opportunity to examine what she considers to be strong evidence backing her claims. Let's take a look.

The research team led by Yalda Uhls studied two groups of pupils at a state school in Southern California. Both had an average age of 11 years and said they usually spent an average of 4.5 hours a day texting, watching TV and video-gaming. One group of 51 children was sent on a five-day outdoor education camp 70 miles outside of Los Angeles. Mobile devices, computers and TVs were banned. The children lived together in cabins, went on hikes, and worked as a team to build emergency shelters. The other group of 54 children attended five days of school as usual.

On Monday at the beginning of the week, both groups completed two psychological tests. The first required that they identify the emotions displayed by photographs of actors' faces. The second involved identifying the emotions displayed by characters in short video clips of social scenes, in which the sound was switched off. At the end of the week, on Friday, both groups completed the tests again.

Uhls and her colleagues highlight the fact that the summer camp group improved more on the face test over the course of the week, as compared with the school group. The summer camp group also showed improvement on the video test, whereas the school group showed no such improvement (the camp scores rose from 26 per cent correct to 31 per cent; the school group flatlined at 28 per cent). The researchers' conclusion: "This study provides evidence that, in five days of being limited to in-person interaction without access to screen-based or media device for communication, preteens improved on measures of nonverbal emotion understanding, significantly more than the control group."

Unfortunately there are a number of acute problems with this study, which make this conclusion insupportable. Above all, the experiences of the two groups of children varied in so many different ways, other than the fact that one group was banned from screen technologies, that it is impossible to know what factors may have led to any group differences.

It's also notable that the summer camp group performed worse at the two tests at the start of the week as compared with the school group. For example, they began with an average of 14 errors on the face task whereas the school group made an average of just 9. Perhaps the camp kids were distracted because they were excited or anxious about the week ahead. We don't know because the researchers didn't measure any other psychological factors such as mood or motivation. By the end of the week, the two groups registered a similar number of errors on the face task. In other words, the technology-free summer camp kids didn't end the week with super interpersonal skills, they'd merely caught up with their screen-addled school colleagues.

We can also speculate about why the school kids didn't show improvement on the video task, whereas the summer campers did. Perhaps, after a long school week, the children at school were tired out. The campers, by contrast, may well have been on a high after their week in the wilderness with friends. Technology might have had nothing to do with it.

Other problems with the study are more generic, but just as serious. The children were not randomised to the two conditions. There's no mention that the people administering the emotional tests were blinded to which children were allocated to which condition, nor to the aims of the study, which introduces the risk they might have inadvertently influenced the results.

In fairness, Uhls and her team admit to many of these shortcomings in their paper, but it doesn't stop them from interpreting their results in line with their prior beliefs about the likely harmful effects of digital technologies, which they outline at the start of their paper. They couch their findings firmly in the wider context of technology fears, and they hope their paper will be "a call to action for research that thoroughly and systematically examines the effects of digital media on children's social development."

It is easy to understand why Baroness Professor Greenfield was pleased with this study. I will leave you to judge whether she was right to label it "brilliant", and whether the results do anything to support her arguments about the adverse effects of digital technology on developing minds.
_________________________________

Uhls, Y., Michikyan, M., Morris, J., Garcia, D., Small, G., Zgourou, E., & Greenfield, P. (2014). Five days at outdoor education camp without screens improves preteen skills with nonverbal emotion cues. Computers in Human Behavior, 39, 387-392. DOI: 10.1016/j.chb.2014.05.036

Do television and video games have an impact on the wellbeing of younger children?

We’re often bombarded with panicky stories in the news about the dangers of letting children watch too much television or play too many video games. The scientific reality is that we still know very little about how the use of electronic media affects childhood behaviour and development. A new study from a team of international researchers led by Trina Hinkley at Deakin University might help to provide us with new insights.

The study used data from 3,600 children from across Europe, taken as part of a larger study looking into the causes and potential prevention of childhood obesity. Parents were asked to fill out questionnaires that asked about their children’s electronic media habits, along with various wellbeing measures – for example, whether they had any emotional problems, issues with peers, self-esteem problems, along with details about how well the family functioned. Hinkley and colleagues looked at the associations between television and computers/video game use at around the age of four, and these measures of wellbeing some two years later.

The results are nuanced. The researchers set up a model that controlled for various factors that might have an effect – things like the family’s socioeconomic status, parental income, unemployment levels and baseline measures of the wellbeing indicators. On the whole, after accounting for all of these factors, there were very few associations between electronic media use and wellbeing indicators. For girls, every additional hour they spent playing electronic games (either on consoles or on a computer) on weekdays was associated with a two-fold increase in the likelihood of being at risk for emotional problems – for example being unhappy or depressed, or worrying often. For both boys and girls, every extra hour of television watched on weekdays was associated with a small (1.2- to 1.3-fold) increase in the risk of having family problems – for example, not getting on well with parents, or being unhappy at home. A similar association was found for girls between weekend television viewing and being at risk of family problems. However, no associations were found between watching television or playing games and problems with peers, self-esteem or social functioning.



So it seems as if these types of media can potentially impact on childhood development by negatively affecting mental wellbeing. However, what we can’t tell from these data is whether watching television or playing games causes these sorts of problems. It may well be the case that families who watch lots of television are not providing as much support for young children’s wellbeing from an early stage – so the association with television or game use is more to do with poor family functioning than the media themselves. Furthermore, the results don’t tell us anything about what types of television or genres of games might have the strongest effects – presumably the content of such media is important, in that watching an hour of Postman Pat will have very different effects on a four-year-old’s wellbeing than watching an episode of Breaking Bad. And as the authors note, relying on subjective reports from parents alone might introduce some unknown biases in the data – “an objective measure of electronic media use or inclusion of teacher or child report of wellbeing may lead to different findings”, they note. So the results should be treated with a certain amount of caution, as they don’t tell us the whole story. Nevertheless, it’s a useful addition to a now-growing body of studies that are trying to provide a balanced, data-driven understanding of how modern technologies might affect childhood development.



- Post written by guest host Dr Pete Etchells, Lecturer in Psychology at Bath Spa University and Science Blog Co-ordinator for The Guardian. 

_________________________________

Hinkley, T., Verbestel, V., Ahrens, W., Lissner, L., Molnár, D., Moreno, L., Pigeot, I., Pohlabeln, H., Reisch, L., Russo, P., Veidebaum, T., Tornaritis, M., Williams, G., De Henauw, S., & De Bourdeaudhuij, I. (2014). Early childhood electronic media use as a predictor of poorer well-being. JAMA Pediatrics. DOI: 10.1001/jamapediatrics.2014.94

Little Albert: who was he?

In 1920, in what would become one of the most infamous and controversial studies in psychology, a pair of researchers at Johns Hopkins University taught a little baby boy to fear a white rat. For decades, the true identity and subsequent fate of this poor infant nicknamed "Little Albert" has remained a mystery.

But recently this has changed, thanks to the tireless detective work of two independent groups of scholars. Now there are competing proposals for who Little Albert was and what became of him. Which group is correct - the one led by Hall Beck at Appalachian State University in North Carolina, or the other led by Russell Powell at MacEwan University in Alberta?

These developments are so new they have yet to be fully documented in any textbooks. Fortunately, Richard Griggs at the University of Florida has written an accessible outline of the evidence unearthed by each group. His overview will be published in the journal Teaching of Psychology in January 2015, but the Research Digest has been granted an early view.

The starting point for both groups of academics-cum-detectives was that Little Albert is known to have been the son of a wet nurse at Johns Hopkins. Hall Beck and his colleagues identified three wet nurses on the campus in that era, and they found that just one of them had a child at the right time to have been Little Albert. This was Arvilla Merritte, who named her son Douglas. Further supporting their case, Beck's group found a portrait of Douglas and their analysis suggested he looked similar to the photographs and video of Little Albert and could well be the same child.

The Merritte line of enquiry was further supported, although controversially so, when the clinical psychologist Alan Fridlund and his colleagues analysed footage of Little Albert and deemed that he was neurologically impaired. If true, this would fit with the finding that Douglas Merritte's medical records show he had hydrocephalus ("water on the brain"). Of course this would also mean that the Little Albert study was even more unethical than previously realised.

Perhaps the most glaring shortcoming of the Merritte theory is the question of why the original researchers John Watson and Rosalie Rayner called the baby Albert if his true name was Douglas Merritte. Enter the rival detective camp headed by Russell Powell. Their searches revealed that in fact another of the Johns Hopkins wet nurses had given birth to a son at the right time to have been Little Albert. This child was William A. Barger, although he was recorded in his medical file as Albert Barger. Of course, this fits the nickname Little Albert (and in fact, in their writings, Watson and Rayner referred to the child as "Albert B").

Also supporting the William Barger story, Powell and his team found notes on Barger's weight which closely match the weight of Little Albert as reported by Watson and Rayner. This also ties in with the fact that Little Albert looks healthily chubby in the videos (Merritte, by contrast, was much lighter). Meanwhile, other experts have criticised the idea of diagnosing Little Albert as neurologically impaired based on a few brief video clips, further tilting the picture in favour of the Barger interpretation. Indeed, summing up the evidence for each side, Griggs decides in favour of Powell's camp. "Applying Occam's razor to this situation would indicate that Albert Barger is far more likely to have been Little Albert," he writes.

What do the two accounts mean for the fate of Little Albert? If he was Douglas Merritte, then the story is a sad one - the boy died at age six of hydrocephalus. In contrast, if Little Albert was William Barger, he in fact lived a long life, dying in 2007 at the age of 87. His niece recalls that he had a mild dislike of animals. Was this due to his stint as an infant research participant? We'll probably never know.

_________________________________

Griggs, R. A. (2015). Psychology's lost boy: Will the real Little Albert please stand up? Teaching of Psychology.

Friday 10 October 2014

Later school start time 'may boost GCSE results'


Many parents struggle to get their children out of bed in the morning

Thousands of teenagers are to get an extra hour in bed in a trial to see whether later school start times can boost GCSE results.

University of Oxford researchers say teenagers start functioning properly two hours later than older adults.

A trial tracking nearly 32,000 GCSE pupils in more than 100 schools will assess whether a later school start leads to higher grades.

Improved mental health and wellbeing could also result, the scientists say.

Professor of sleep medicine Colin Espie said: "Our grandparents always told us our sleep is incredibly important.

"We have always known that, but it's only recently that we've become engaged in the importance of sleep and circadian rhythm.

"We know that something funny happens when new teenagers start to be slightly out of sync with the rest of the world.

"Of course, your parents think that's probably because you're a little bit lazy and opinionated, if only you got to bed early at night, then you would be able to get up early in the morning.

"But science is telling us, in fact there are developmental changes during the teenage years, which lead to them actually not being as tired as we think they ought to be at normal bedtime and still sleepy in the morning.

"What we're doing in the study is exploring the possibility that if we actually delay the school start time until 10am, instead of 9am or earlier, that additional hour taken on a daily dose over the course of a year will actually improve learning, performance, attainment and in the end school leaving qualifications."

He added: "If we adapt our system to the biological status of the young person, we might have more success than trying to fit them into our schedules."

Prof Russell Foster, director of sleep and circadian neuroscience at Oxford University, said that getting a teenager to start their day at 07:00 is like an adult starting theirs at 05:00.

'Results boost'

He also highlighted the results of a small trial at Monkseaton High School in North Tyneside, where school start times were shifted from 08:50 to 10:00. This led to an increase in the percentage of pupils getting five good GCSEs from about 34% to about 50%.

Among disadvantaged pupils, the increase had been from about 19% to about 43%, he said.

Now in the wider, year-long study starting next September, Year 10 and 11 pupils at more than 100 schools will be divided into two groups, with one starting school at 10:00, and the other following the usual school timetable.

Both sets of pupils will also be given education on the importance of getting enough sleep.

Pupils' results will be assessed before the trial and at the end, and comparisons drawn between the late start and normal start time groups.

Some pupils will be fitted with non-invasive bio-telemetric monitoring devices recording their sleep-wake patterns. Analysis of these results will be fed into the study as well.

The study is one of six projects funded by £4m from the Education Endowment Foundation and science charity the Wellcome Trust, looking at how the application of neuroscience can improve teaching and learning in schools.

Education Endowment Foundation chief executive Kevan Collins said: "We're delighted to be researching these cutting-edge strategies based on the latest knowledge in neuroscience."

Friday 3 October 2014

The 10 most controversial psychology studies ever published

British Psychological Society
Controversy is essential to scientific progress. As Richard Feynman said, "science is the belief in the ignorance of experts." Nothing is taken on faith; all assumptions are open to further scrutiny. It's a healthy sign, therefore, that psychology studies continue to generate great controversy. Often the heat is created by arguments about the logic or ethics of the methods; at other times it stems from disagreements about what the findings imply for our understanding of human nature. Here we digest ten of the most controversial studies in psychology's history. Please use the comments to have your say on these controversies, or to highlight provocative studies that you think should have made it onto our list.

1. The Stanford Prison Experiment
Conducted in 1971, Philip Zimbardo's experiment had to be aborted when students allocated to the role of prison guards began abusing students who were acting as prisoners. Zimbardo interpreted the events as showing that certain situations inevitably turn good people bad, a theoretical stance he later applied to the acts of abuse that occurred at the Abu Ghraib prison camp in Iraq from 2003 to 2004. This situationist interpretation has been challenged, most forcefully by the British psychologists Steve Reicher and Alex Haslam. The pair argue, on the basis of their own BBC Prison study and real-life instances of prisoner resistance, that people do not yield mindlessly to toxic environments. Rather, in any situation, power resides in the group that manages to establish a sense of shared identity. Critics also point out that Zimbardo led and inspired his abusive prison guards; that the Stanford Prison Experiment (SPE) may have attracted particular personality types; and that many guards did behave appropriately. The debate continues, as does the influence of the SPE on popular culture, so far inspiring at least two feature-length movies.

Zimbardo, P. G. (1972). Comment: Pathology of imprisonment. Society, 9(6), 4-8. Google Scholar Citations: 324.
Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). Study of prisoners and guards in a simulated prison. Naval Research Reviews, 9, 1-17. Google Scholar Citations: 216.


2. The Milgram "Shock Experiments"
Stanley Milgram's studies, conducted in the 1960s, appeared to show that many people are incredibly obedient to authority. On the instruction of a scientist, many participants applied what they thought were deadly levels of electricity to an innocent person. Not one study but several, Milgram's research has inspired many imitations, including in virtual reality and in the form of a French TV show. The original studies have attracted huge controversy, not only because of their ethically dubious nature, but also because of the way they have been interpreted and used to explain historical events such as the supposedly blind obedience to authority in the Nazi era. Haslam and Reicher have again been at the forefront of the counter-arguments. Most recently, based on archived feedback from Milgram's participants, the pair have argued that the observed obedience was far from blind - in fact many participants were pleased to have taken part, so convinced were they that their efforts were making an important contribution to science. It's also notable that many participants in fact disobeyed instructions, and in such cases verbal prompts from the scientist were largely ineffective.

Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371. Google Scholar Citations: 3474


3. The "Elderly-related Words Provoke Slow Walking" Experiment (and other social priming research)
One of the experiments in a 1996 paper published by John Bargh and colleagues showed that when people were exposed to words that pertained to being old, they subsequently walked away from the lab more slowly. This finding is just one of many in the field of "social priming" research, all of which suggest our minds are far more open to influence than we realise. In 2012, a different lab tried to replicate the elderly words study and failed. Professor Bargh reacted angrily. Ever since, the controversy over his study and other related findings has only intensified. Highlights of the furore include an open letter from Nobel Laureate Daniel Kahneman to researchers working in the area, and a mass replication attempt of several studies in social psychology, including social priming effects. Much of the disagreement centres around whether replication attempts in this area fail because the original effects don't exist, or because those attempting a replication lack the necessary research skills, make statistical errors, or fail to perfectly match the original research design.

Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of personality and social psychology, 71(2), 230. Google Scholar Citations: 3276


4. The Conditioning of Little Albert
Back in 1920 John Watson and his future wife Rosalie Rayner deliberately induced fears in an 11-month-old baby. They did this by exposing him to a particular animal, such as a white rat, at the same time as banging a steel bar behind his head. The research is controversial not just because it seems so unethical, but also because the results have tended to be reported in an inaccurate and overly simplified way. Many textbooks claim the study shows how fears are easily conditioned and generalised to similar stimuli; they say that after being conditioned to fear a white rat, Little Albert subsequently feared all things that were white and fluffy. In fact, the results were far messier and more inconsistent than that, and the methodology was poorly controlled. Over the last few years, controversy has also developed around the identity of poor Little Albert. In 2009, a team led by Hall Beck claimed that the baby was in fact Douglas Merritte. They later claimed that Merritte was neurologically impaired, which if true would only add to the unethical nature of the original research. However, a new paper published this year by Ben Harris and colleagues argues that Little Albert was actually a child known as Albert Barger.

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1. Google Scholar Citations: 2031


5. Loftus' "Lost in The Mall" Study
In 1995, Elizabeth Loftus and Jacqueline Pickrell documented how easy it was to implant in people a fictitious memory of having been lost in a shopping mall as a child. The false childhood event is simply described to a participant alongside true events, and over a few interviews it soon becomes absorbed into the person's true memories, so that they think the experience really happened. The research and other related findings became hugely controversial because they showed how unreliable and suggestible memory can be. In particular, this cast doubt on so-called "recovered memories" of abuse that originated during sessions of psychotherapy. This is a highly sensitive area and experts continue to debate the nature of false memories, repression and recovered memories. One challenge to the "lost in the mall" study was that participants may really have had the childhood experience of having been lost, in which case Loftus' methodology was recovering lost memories of the incident rather than implanting false memories. This criticism was refuted in a later study in which Loftus and her colleagues implanted in people the memory of having met Bugs Bunny at Disneyland. Cartoon aficionados will understand why this memory was definitely false.

Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric annals, 25(12), 720-725. Google Scholar Citations: 677
Loftus, E. F. (1993). The reality of repressed memories. American psychologist, 48(5), 518. Google Scholar Citations: 1413


6. The Daryl Bem Pre-cognition Study
In 2010 social psychologist Daryl Bem attracted huge attention when he claimed to have shown that many established psychological phenomena work backwards in time. For instance, in one of his experiments, he found that people performed better at a memory task for words they revised in the future. Bem interpreted this as evidence for pre-cognition, or psi - that is, effects that can't be explained by current scientific understanding. Superficially at least, Bem's methodology appeared robust, and he took the laudable step of making his procedures readily available to other researchers. However, many experts have since criticised Bem's methods and statistical analyses, and many replication attempts have failed to support the original findings. Further controversy came from the fact that the journal that published Bem's results refused at first to publish any replication attempts. This prompted uproar in the research community and contributed to what's become known as the "replication crisis" or "replication wars" in psychology. Unabashed, Bem published a meta-analysis this year (an analysis that collated results from 90 attempts to replicate his 2010 findings) and he concluded that overall there was solid support for his earlier work. Where will this controversy head next? If Bem's right, you probably know the answer already.

Bem, D. J. (2011). Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. Journal of personality and social psychology, 100(3), 407. Google Scholar Citations: 276


7. The Voodoo Correlations in Social Neuroscience study
This paper was released online ahead of print, initially bearing the provocative title "Voodoo correlations in social neuroscience". Voodoo in this sense meant non-existent or spurious. Ed Vul and his colleagues had analysed over 50 studies that linked localised patterns of brain activity with specific aspects of behaviour or emotion, such as one that reported feelings of rejection were correlated highly with activity in the anterior cingulate cortex. Vul and his team said the high correlations reported in these papers were due to the use of inappropriate analyses - a form of "double-dipping" in which researchers took two or more steps: first identifying a region, or even a single voxel, linked with a certain behaviour, and then performing further analyses on just that area. The paper caused great offence to the many brain imaging researchers in social neuroscience whose work had been targeted. "Several of [Vul et al's] conclusions are incorrect due to flawed reasoning, statistical errors, and sampling anomalies," said the authors of one rebuttal paper. However, concerns about the statistical analyses used in imaging neuroscience haven't gone away. For example, in 2012 Joshua Carp wrote a paper claiming that most imaging papers fail to provide enough methodological detail to allow others to attempt replications.

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on psychological science, 4(3), 274-290. Google Scholar Citations: 688.


8. The Kirsch Anti-Depressant Placebo Effect Study
In 2008 Irving Kirsch, a psychologist who was then based at the University of Hull in the UK, analysed all the trial data on anti-depressants, published and unpublished, submitted to the US Food and Drug Administration. He and his colleagues concluded that for most people with mild or moderate depression, the extra benefit of anti-depressants versus placebo is not clinically meaningful. The results led to headlines like "Depression drugs don't work" and provided ammunition for people concerned about the overprescription of antidepressant medication. But there was also a backlash. Other experts analysed Kirsch's dataset using different methods and came to different conclusions. Another group made similar findings to Kirsch, but interpreted them very differently - as showing that drugs are more effective than placebo. Kirsch is standing his ground. Writing earlier this year, he said: "Instead of curing depression, popular antidepressants may induce a biological vulnerability making people more likely to become depressed in the future."

Kirsch, I., Deacon, B. J., Huedo-Medina, T. B., Scoboria, A., Moore, T. J., & Johnson, B. T. (2008). Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS medicine, 5(2), e45. Google Scholar Citations: 1450.


9. Judith Rich Harris and the "Nurture Assumption"
You could fill a library or two with all the books that have been published on how to be a better parent. The implicit assumption, of course, is that parents play a profound role in shaping their offspring. Judith Rich Harris challenged this idea with a provocative paper published in 1995 in which she proposed that children are shaped principally by their peer groups and their experiences outside of the home. She followed this up with two best-selling books: The Nurture Assumption and No Two Alike. Writing for the BPS Research Digest in 2007, Harris described some of the evidence that supports her claims: "identical twins reared by different parents are (on average) as similar in personality as those reared by the same parents ... adoptive siblings reared by the same parents are as dissimilar as those reared by different parents ... [and] ... children reared by immigrant parents have the personality characteristics of the country they were reared in, rather than those of their parents' native land." Harris has powerful supporters, Steven Pinker among them, but her ideas also unleashed a storm of controversy and criticism. "I am embarrassed for psychology," Jerome Kagan told Newsweek after the publication of Harris' Nurture Assumption.

Harris, J. R. (1995). Where is the child's environment? A group socialization theory of development. Psychological review, 102(3), 458. Google Scholar Citations: 1535


10. Libet's Challenge to Free Will

Your decisions feel like your own, but Benjamin Libet's study using electroencephalography (EEG) appeared to show that preparatory brain activity precedes your conscious decisions of when to move. One controversial interpretation is that this challenges the notion that you have free will. The decision of when to move is made non-consciously, so the argument goes, and then your subjective sense of having willed that act is tagged on afterwards. Libet's study and others like it have inspired deep philosophical debate. Some philosophers like Daniel Dennett believe that neuroscientists have overstated the implications of these kinds of findings for people's conception of free will. Other researchers have pointed out flaws in Libet's research, such as people's inaccuracy in judging the instant of their own will. However, the principle of non-conscious neural activity preceding conscious will has been replicated using fMRI, and influential neuroscientists like Sam Harris continue to argue that Libet's work undermines the idea of free will.

Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential) the unconscious initiation of a freely voluntary act. Brain, 106(3), 623-642. Google Scholar Citations: 1483

Where do you stand on the implications and interpretations of these 10 psychology studies/theories? Which controversial studies do you think should have made it onto our list?
_________________________________

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.