Research Coordinator Position at UC Institute for Prediction Technology

The UC Institute for Prediction Technology (UCIPT) studies real-world problems at the intersection of psychology, public health/medicine, and technology. UCIPT brings together researchers across University of California campuses to study how data from social media, wearable devices, and online technologies can be used to predict real-world events in areas like health and medicine, politics, crime, education, and finance. While our work is broadly focused, most of our day-to-day work is with the UCLA Center for Digital Behavior in the UCLA Department of Family Medicine.

Under the direction of the Principal Investigator and Research Manager, the Research Coordinator will be responsible for the day-to-day operations of assigned research projects and for writing briefs related to the research. Duties include organizing and monitoring research activities, supervising students and volunteers, maintaining study files, ensuring timelines are met, and conducting community outreach efforts. The Research Coordinator will also conduct statistical analyses on current and future research projects and write up the results in manuscripts, briefs, and grant applications.

The ideal candidate will be extremely organized and detail-oriented, with a minimum of a Bachelor's degree, more than two years of research and/or work experience, and demonstrated knowledge and skill in current and emerging internet and social media platforms.

Required Skills

  • Excellent writing skills; ability to confidently and independently write research manuscripts and briefs that will convey research to local clinics and organizations
  • At least 3 years of research experience carrying out behavioral interventions
  • Demonstrated knowledge and skill in the use of social media platforms
  • Demonstrated statistical analysis skills in order to analyze research results to determine study outcomes
  • Ability to establish and maintain cooperative working relationships
  • Willingness to work or respond to participant requests/emails on evenings and weekends
  • Organizational skills to maintain records and coordinate other tasks as required.  
  • Ability to communicate with others about sensitive behavioral issues, including drug use and criminal behavior, and frequency of high-risk behaviors.

Preferred Skills

  • Bilingual English/Spanish a plus
  • Experience in data entry and cleaning
  • Experience working with substance abusers and/or with HIV-positive populations

Please submit your résumé, a cover letter, and writing samples to: jobs@predictiontechnology.ucla.edu

Explaining Pokémon Go Through the "Science of Social"

My friend Jason’s love of Pokémon Go is nothing short of fanatical. The day he downloaded the game to his phone, his Fitbit logged 50,000 steps, five times what he walks on an average day. He’s taken to standing up every day at work, not because of the many health benefits, but so that every five minutes he can pace from one end of the office to the other to collect PokéBalls and experience points at a nearby PokéStop. Several days in, he started walking home from his San Francisco workplace, a journey that takes him daily through the rough-and-tumble neighborhood of the Tenderloin. Jason hardly even noticed as the buildings got more and more rundown; his face was glued to his screen. That was how the mugging happened...

To read the full post, please visit my new column on The Huffington Post.


Going Viral for Good

Just two weeks before the tragedy in Orlando, Florida, where 49 people were murdered in a nightclub, President Obama responded to a question about his views on gun control. Despite airing on the not-so-mainstream PBS news channel, the video quickly reached more than 800,000 viewers.

When you think about viral media, you typically think about cat videos, right? And maybe music videos, too, as long as the instruments are being played by cats. But viral media isn't just about cat videos, as we know from the Arab Spring, the Obama video, and our Institute’s work on predicting diseases. Viral media can also be used for good.

How can social media be used to get people to do good things?

In one study, 120 African-American and Latino men who have sex with men (MSM) were randomly assigned to join one of two private online community groups on Facebook. One was an HIV intervention group, designed to get participants to test for HIV, while the other was a general health (control) group. Throughout the 12-week study, which is known as HOPE (Harnessing Online Peer Education), participants were connected with peer group role models who encouraged them to get tested for HIV.

Over the course of the study, participants shared information about being gay and their views about HIV testing. The act of sharing health information created an online community that brought together people from different stigmatized groups.

What did we find? People in the intervention group were two to three times more likely to get an HIV test. In other words, people who had access to the HOPE community group were more likely to change their behavior than people who did not join a HOPE community. It was inspiring to see people actively communicate with each other and help build an organic, real-world community from a social media forum.

But our findings from the HOPE study don’t apply just to testing for HIV—the technology can be applied much more broadly. I use the term “HOPEify” to describe how to make a technology engaging and able to create positive social change.  

Now, how does this relate to viral media?

Viral media isn’t just about cat videos or nonsense topics. It’s a powerful tool that can be used to create positive social change in the world. There’s a science behind how to do this, which I wrote about in an article for TechCrunch. We need to start leveraging social media for social good and improving the world to prevent future incidents like Orlando.

I’ve done several research studies with the LGBT community, so it was comforting to hear these words from President Obama:

The fact that [the shooting] took place at a club frequented by the LGBT community I think is also relevant. We’re still looking at all the motivations of the killer. But it’s a reminder that regardless of race, religion, faith or sexual orientation, we’re all Americans, and we need to be looking after each other and protecting each other at all times in the face of this kind of terrible act.

His statement has been viewed nearly a million times on YouTube.

Why Are Wearable Health Technologies Failing?

The goal of most mobile health (mHealth) devices is simple: help users track and change their health behaviors. Many types of devices have been released, but most still fail to achieve this goal. Why?

Supported by endless media hype, the stock of mHealth apps and wearable devices continues to rise. Samsung, which expanded its line of wearables in early June, has joined Fitbit and Apple in the never-ending quest to improve fitness, reverse bad habits, and increase productivity (and sell more gadgets).

There’s a problem, though: more than half of mHealth apps in the iTunes store have been downloaded fewer than 500 times. And according to one survey, one-third of people who buy wearables stop using them within six months. In response, a new field of science is trying to understand the interface between human behavior and technology. This field, which I call the “Science of Social,” is maturing slowly, but it offers a lot of insight into the future of mHealth technologies.

The Science of Social

The three most important factors in behavior change are easy to summarize:

  1. The power of social norms
  2. The power of role models
  3. The power of social support

Social norms strongly influence what we perceive as “normal” behavior. Do you silence your phone in the movie theater to avoid jeers? Have you ever listened to a genre of music you don’t like or rooted for a sports team that you don’t care about because your friends did? We value social acceptance, so it’s common to adjust our behavior to what we think is expected.

Role models are responsible for popularizing behavior. New social norms spread when influencers adopt them first. For example, music trends in high school typically follow a top-down hierarchy. When I started to play bass guitar in 7th grade, I quickly discovered my favorite band—the Red Hot Chili Peppers—after hanging out with an 11th-grader role model.

Social support is key to sustaining behavior change. Friends and family fulfill key psychological needs, such as the needs to trust, fit in, and feel empowered. Successful offline programs like Weight Watchers and Alcoholics Anonymous include a strong element of community, so it’s not surprising that the top-downloaded apps are starting to include this feature.

These three simple principles determine the success of positive behavior change in the real world. Tech companies are catching onto this fact, but they’ve had trouble bringing all three elements together in one product.

How Tech Is Trying to Keep Users Engaged

One successful adjustment the mHealth market has made is using a reward system. The reward system can be complex, as in Zombies, Run!’s use of badges to unlock the next part of the story, or it can be simple, as in Fitbit’s growing flower that’s pegged to how many steps you take each day.

I’ve spent a lot of time advising companies on how they should incorporate reward systems into their products. The research shows that actionable goals (“I want to lose 15 pounds,” “I want to walk 15,000 steps today”) are key to making mHealth devices appealing to people in general, but especially to younger users. That’s why gamification and other interactive features that have a social element, like the personal coach found in Nike+ and other running apps, have helped increase the hype and sales of wearables. It turns out this age-old behavioral psychology technique—reward the desired behavior, or gamify it—is just as successful with us humans as it is with rats.

But gamification doesn’t always work, and there's still the problem of getting people to stay engaged. So, now what?

Using the Science of Social to Get People to Love Their Apps

At UCLA, we’re trying to improve the appeal of mHealth devices with a holistic model that includes all three elements of the Science of Social. This model, known as HOPE, targets one specific behavior to change.

Private community groups are the key to the HOPE intervention. People can support one another through group discussions, private messages, and “liking” posts and comments, just like Facebook or other social media sites. Interestingly, we’ve found that groups become actual communities after the intervention ends—they keep in touch, meet up with each other, and become part of each other’s lives.

In a nutshell, that’s the key to using the Science of Social to retain users: bridge the gap between the online and offline worlds with social psychology. To do this, companies need to become more aware of the basic tenets of behavior change science, like the Science of Social, and design around them. It’s encouraging to see that health apps are moving in this direction.

Academia vs. the Private Sector: A Q&A with Sean Young, PhD

In addition to being a professor, you’ve worked closely with start-up companies. In my experience, people from academia and the private sector have difficulty finding common ground. What’s your secret to bridging these two fields?

I give a lot of detail about this in a recent presentation I gave at the Seoul Forum. I think it’s important for researchers to spend time with entrepreneurs and business people to understand their work and needs. As researchers, we’re supposed to be open-minded and think about how our work can apply to the world, and the best way to do that is to interact with people from other disciplines in the public and private sectors. We can learn a lot from them about how to focus our research, as well as about tools that can be integrated into a research study. I’ve always made an effort to do this by taking classes in fields outside my expertise, making friends with people from very different training backgrounds, volunteering my time to work in areas where I have little training but can learn a lot, and taking on additional projects that could complement my skills.

The draw of start-up companies is pretty strong for graduating students. What would you say to retain a “star” data scientist who can get paid much more in the private sector? 

That’s a tough question. I think it’s less about what I would say and more about the questions I would ask to see if the candidate is a good fit. Most people are driven to make as much money as they can, and many don’t have any option other than to earn a lot: they might have families that they need to support, or parents with expensive healthcare bills. That being said, if a person has an entrepreneurial mindset and can take risks in the present for big gains in the future, then working as a data scientist at a public institution could be the right fit. For example, in our group, we’re building technologies that have the potential to compete with companies like Google and Facebook. While our technologies are designed to be open source to give back to the world, it’s possible to develop proprietary products on top of them, and those products have the potential to make a lot of money. But overall, people should join our group if they love creativity, want to have influence over the direction of research and a growing organization, and are excited about making an impact in the world. Unlike most companies, our end customer is not shareholders; it’s the general public and the organizations that seek to provide information, health, and safety for the world.

What start-up practices can be incorporated into a university setting to increase organizational efficiency? Also, is there anything academia has to offer the private sector in terms of how to operate more effectively?

Great question. I’ve recently been asked to join a board at UCLA to address these topics. I think we need to have good leadership and modeling. We need examples of researchers who have designed efficient research programs, and of researchers who have commercialized their work successfully. We need these researchers to share their work and become role models just like our role models in the HOPE Study. We also need partnerships with industry and venture capital to provide roadmaps and funding for how to streamline research and show researchers that if they can efficiently manage research projects, then funding and business people are waiting to help them apply their work in the world. Stanford University did a great job of this and it really inspired me when I was there as a graduate student.

Do you think about the potential commercial applications of your research when you start a study?

I sometimes think about commercial applications, but more likely I think of general applications. I don’t care necessarily if what we do has commercial appeal. I care if it solves an important problem. There are great models for how to study whether your work will solve an important need. Steve Blank and Eric Ries have written a lot about the customer development cycle and how startups can use them. I’m a big believer that these principles can be applied to research to ensure that we’re working with our end users (e.g., government stakeholders, individuals, or business people) to conduct research that will benefit them.

Do you feel academics would benefit from using leadership styles more commonly seen in the private sector, or vice versa?

I think they both can benefit from each other. I’m a big believer in education. In the social sciences and humanities students are taught to educate themselves by spending time with people from different cultural and ethnic backgrounds. I definitely agree with that but am also a big believer in educating ourselves by spending time with people from different training and work backgrounds.

Startups have to be nimble. They have to learn fast. This is the opposite of the process in many large corporations and academic institutions. These institutions could learn a lot from startups. On the other hand, corporations care about making money. They do this by having good relationships, understanding how to market and sell ideas and products, and creating processes to manage large groups of people. Startups and academic researchers could learn a lot from corporations. Academic researchers are great at studying one topic for a long time and really understanding everything about that topic. They’re great at being passionate about their work and thinking critically about its long-term consequences. So, start-ups and corporations could learn a lot from them.

What Inspires You? A Q&A with Sean Young, PhD

Image courtesy of Lynwood Lord

Who were the instructors or mentors that made the biggest difference in your life?

My mentors have shaped my life so much. I guess I should start with my parents, who encouraged me to pursue any area of work that I wanted. They taught me that I need to satisfy my basic living needs of having a roof over my head, but that after that all I need is to pursue what makes me happy, and that if I really went after what I wanted I'd be successful enough and, more importantly, happy. There are my music teachers, like Roberto Miranda, my bass instructor at UCLA, who taught me to live in the moment and listen to things around me. My psychology professors in college, like Traci Mann and Matt Lieberman, taught me that I was a bigger nerd than I thought I was, by inspiring me so much that I would show up at their office hours every week just to talk to them and learn. I had graduate school advisors who accepted me to Stanford and then taught me that I wasn't as smart as I thought I was, that there are really brilliant people out there, and that it's humbling being a researcher because you have to be wrong a lot. I've also been inspired by friends in the tech and business world who have helped me see a vision of how technology and psychology are the future of the world. Finally, I think I'm constantly guided by a mentor I never met: my mom's father. I frequently hear stories about how he would have loved to see me playing music and working in medicine, as he put himself through medical school as a concert violinist.

I really think that who I am in life is less about what I've done and more about how others have shaped me, whether they were ancestors who died before I was born or mentors who have shaped my life while I've been alive. But I guess that just proves I’m a social psychologist.

What book made the biggest impact on you? Also, are there any science writers or authors in general who you look to to inspire the public about developments in technology or psychology?

I think more than books, the thousands of psychology research papers I have read have really made a difference in my life. They taught me that the way I used to see the world was actually an area of study. They also taught me that people are much more similar to each other and much more connected than I would have thought. They taught me to be open to people and optimistic about society because we're all in it together and experiencing similar things.

Off the top of my head, the first influential book that comes to mind is Market Wizards. That may be because I'm odd (that book has probably never been named in a top-10 list of influential books), or maybe it means I don't read enough books, but I also think it is really fascinating and influential. It's a book of interviews with some of the top hedge fund managers and traders. It's not just interesting for the advice they give on finance; it's extremely rich in psychology. People are extremely emotional when it comes to losing money, and these market wizards have mastered that psychology. They explain their processes and paths, with many ups and downs, and I realized that the same principles could be applied far beyond trading, to mastering psychology and emotions throughout all parts of life.

You graduated from Stanford, so I’m wondering if you’ve been inspired by entrepreneurs in Silicon Valley? Also, the Los Angeles area has its own technological boomtown in Santa Monica, which has embraced the nickname “Silicon Beach.” Do you see an opportunity to work with start-up companies in Santa Monica?

I’ve always cared about making sure my work applies to the real world. One way I've done this is to keep one foot rooted in the start-up community. For example, last week, I gave a presentation at the Seoul Forum in Korea (click here for video). At the beginning of the talk, I gave an example of a company I worked with at Stanford that got started when we were all students. The company took research I had done in psychology and incorporated it into a healthcare product. I was involved in a number of start-ups in grad school, throwing myself into every new experience I could find. Some examples were a rating system for assisted living facilities, a sports betting app, and a way to connect healthcare workers across the world to people in areas that experienced disasters like earthquakes.

When I moved to L.A., there wasn’t yet much of a start-up scene, so I had to pull friends from the Bay Area to work with me, but over the past few years the L.A. start-up scene has gotten really hot. Some of the start-ups I’ve been involved with in L.A. include an online health community, a prediction market that can be used to predict sports, music, and political events, and my own automated stock-trading method to predict moves in the market. I haven't had much time to lead a start-up myself, so lately I've spent more time advising companies. I currently advise five companies, primarily in the health and technology space.

School districts are competing to see who can install the most up-to-date technology and online learning tools. Do you see tech educational aids as a universal good for students, or have you heard of instances where they hinder learning?

I think technologies are just tools that can make things more efficient and able to reach a lot of people. They can be used for good or bad. Facebook, Twitter, and Instagram can lead to bad things, like people bullying each other, but they can also lead to good things, like getting people to be healthier when social media is paired with the HOPE intervention. I think the situation is the same for education. If used correctly, tech aids can improve education and inspire students. We were recently asked by a large funder to modify the HOPE intervention to improve teaching methods among teachers. I believe tools like HOPE that allow people to become educated all across the world at the same time can be really valuable in our educational system.

If you could make a 30-second speech to the entire world, what would you say?

If I could address the entire world, I'd rather do it in a song than a speech!

Tuesdays with Tito: How to Live Life to the Fullest, Every Day

Remembering that I'll be dead soon is the most important tool I've ever encountered to help me make the big choices in life. Almost everything—all external expectations, all pride, all fear of embarrassment or failure—these things just fall away in the face of death, leaving only what is truly important… – Steve Jobs, in a commencement speech at Stanford University

There are few things scarier than the thought of death, but as Ben Franklin noted, there are only two things certain in life: death and taxes.

In some cultures, death is viewed as the natural end of the life cycle. People think of it as a way to create change that “clears out the old to make way for the new,” as Jobs said in his speech. In the United States, however, death is often a taboo subject—people avoid discussing it and are really scared of the dying process.   

Instead of death causing people to be scared, how can it be used as a motivation to live every day to the fullest? 

In one study, researchers had participants do three different tasks. First, word pairs were shown to two groups of people. The word pairs were randomly generated for both groups (e.g., CALCULATOR | LETTUCE), but the second group was “primed” with subliminal prompts that showed the words PAIN or DEATH for 33 milliseconds (literally, a split second) between each pair of words. In a second task, participants wrote how they felt about either dental pain or death.

Participants were then given a “Humor Generation Task” where they received four uncaptioned cartoons from The New Yorker and were asked to “write down the funniest caption that they could think of.” Participants as well as people who were not involved in the study (outside raters) then rated how funny each caption was.

What did they find? The outside raters thought that the captions from people who saw the word DEATH were funnier than the captions from people who saw the word PAIN. For the self-ratings, those who worked on the death writing task thought their captions were funnier than those who were exposed to messages about dental pain. In other words, people were more entertaining after they were primed to think about death.

As in our posts every week, I’ve just told you about a strange research study that probably leaves you asking: how can people apply this finding to real life?

First, people should realize that thinking about death doesn’t need to be a bad thing, and it doesn’t have to be depressing. Considered in the right light, thinking about death can make you laugh and love more each day, and treat every day as if it were your last.

I call this ability to use death as inspiration “Tuesdays with Tito.” Why? Tito, aka Matt Cutler, was my neighbor in college. Tito and I met at the beginning of our sophomore year and quickly clicked, and we spent a lot of time together that year. Although Tito loved to go to parties and experiment like many of our 19-year-old friends, unlike most students, he wasn’t doing it to find himself. Instead, he was consciously doing it to live life to the fullest, every day. And that’s why Tito wasn’t just focused on partying, studying, and other selfish things; he was also a genuinely good person who would go out of his way to help others.

On Tuesday nights, after we went to UCLA’s “$1 pint night” (I’m dating myself back to when it was only $1 to get a pint of beer! —and the event was shut down for serving alcohol to minors), Tito and I would often sneak into a neighborhood or hotel hot tub and talk about things that would have been scary or depressing to many people, like bad things that had happened to us and people we loved in our life. We talked about relatives who died at a young age, friends who were diagnosed with life-threatening diseases, and about the uncertainty of our own lives and how we could die at any moment. But these were far from depressing conversations. The point of our talks was to remember and appreciate how lucky we were in life because it could all change or end at any moment. We were reminding ourselves to do what excites us and not worry about what others think of us because life is short. Relax. And smile.

I continue to think about those days with Tito, and this post reminds me that I need to check in on him and see how he’s doing. I hope that, 15 years later, he’s still sneaking into hotel hot tubs, even if he’s got enough cash at this point to own one himself.

Beyond showing that death affects us in both conscious and unconscious ways, the takeaway of this week’s study is simple: a sense of humor is invaluable. Being able to laugh helps alleviate stress and encourages creative thinking, and it’s one of the most accessible ways to cope with problems at work, school, or home. As Mahatma Gandhi said, “If I had no sense of humor, I would long ago have committed suicide.”


The Office Zen Master and the Art of Multitasking

“Successful people focus on one thing at a time. You’re working on way too many things at the same time. You need to quit some of your projects and focus.” I can’t tell you how many times I’ve gotten this advice from colleagues and mentors. No matter how many times I hear it, I can’t seem to follow it. But I’m not sure I need to change.

Working on multiple things at the same time keeps me busy, constantly learning, and having fun. I’m more efficient at work when I multitask because I can work on other projects while I’m waiting for people to respond to my emails, return my calls, or complete their role in a project.  

There seems to be a lot of confusion about whether or not people should multitask. Although many employers seek to hire multitaskers, a lot of smart people say that multitaskers do worse than if they focused on just one activity. So, should people try to learn how to multitask?

In one study, researchers from Microsoft and the University of California observed thirty-two employees during one week of work. The researchers monitored every digital move the workers made, including the duration of their web browsing, mouse and keyboard activity, and when their computer went into sleep mode. Workers also responded to “probes” that asked questions about mood, whether they were challenged, and their productivity.

The purpose of collecting these data was to learn whether and how employees use multitasking as a way to take a break from their work. Specifically, the study looked at when and how employees get distracted in different communication contexts (face-to-face vs. digital) and at different time points (prior to communicating with someone; throughout the day; and at the end of the day).

What did they find? In general, context made no difference in a person’s ability to multitask — that is, employees completed their work with no difference in quality regardless of whether they were interrupted by a colleague, email, or other distraction. However, there was an emotional cost to switching tasks: people felt more stress, higher levels of frustration, and more time pressure to complete their work.  

For the three time points, the most interesting finding was that different mental states led people to be more susceptible to certain types of interaction. For example, a “rote” state (i.e., working on a simple task) resulted in more face-to-face interactions, and a “bored” state led to both Facebook and face-to-face interaction. On the other hand, focused or aroused states throughout the day led people to write more.  The more time an employee spent communicating with others, and the more total app switches that were logged, the less productive he or she felt at the end of the day.

The authors had an interesting take on these findings: “People might move toward online or offline communications that lead them to be in a state where they are more balanced psychologically.” They call this emotional homeostasis. The idea is that people multitask to reduce tension and find balance during the workday.   

If that’s the case, what’s the best way to multitask at work?

First, it’s important to be aware of what communication style you prefer. You’ll feel more productive and happier if you have a good understanding of your communication preferences and which colleagues you work best with. For example, if you prefer to finish a project with face-to-face conversation versus an email, make this clear to colleagues. You’ll feel less stressed at being interrupted by written communication throughout the day.   

Second, your brain has to adjust every time you switch tasks, which makes it hard to filter out irrelevant information. To reduce the effects of task switching, it can help to bundle similar tasks together. For example, write all of your project-related emails and notes at the same time each morning. I also use this technique a lot at home: I’ll cluster housework tasks like cleaning and doing the laundry while listening to a podcast.   

My feeling is that people associate multitasking with being distracted, but that’s not always the case. Multitasking is bad if you’re doing it because you’re bored or need a distraction, but it can be good if you’re happy with your work and simply want to make progress on additional projects. It’s important to be aware of how you’re feeling while you work to determine whether you’re increasing efficiency or simply reducing boredom. I call this awareness of your emotional states at work being an “Office Zen master.”

Just like Bruce Lee learned to master his own body enough to do one-finger push-ups and fend off huge armies of fighters, being an Office Zen Master can help you multitask and become more efficient at work.

How to Dance Away Your Fear of Public Speaking

“According to most studies, people’s number one fear is public speaking. Number two is death. Death is number two. Does that sound right? This means to the average person, if you go to a funeral, you're better off in the casket than doing the eulogy.” ― Jerry Seinfeld

Public speaking is a major source of fear for many people, but so, more broadly, is shyness. It’s natural to be nervous in certain situations, like interviewing for a job or giving a speech, but dreading all social interaction can make life miserable. Research shows that social anxiety can have a huge impact on your overall health, so it’s important to find a way to fit in and be more social.

How do people overcome shyness and reduce social anxiety?

In a classic study, researchers tried to determine how friendships form. This project, which came to be known as the Westgate studies, investigated friendship patterns among students at MIT. The researchers asked students to name their three closest friends who lived on campus and made observations about their behavior.

The study found that more than 10 times as many friendships developed between people who lived in the same dormitory as between people who lived in different dormitories. Moreover, the strongest friendships formed between students who lived right next door to each other, and friendships between students on different floors formed mainly when one of the students lived near a stairwell.

The researchers called this finding the law of propinquity. This law argues that geographic proximity increases the opportunity for interaction, which in turn increases the comfort level between people. The implication is that the more we see someone, the more we like him or her.

Before you raise an eyebrow, the results of this study have been repeated many times since 1950, when propinquity was first described. Similar effects have been shown in public housing projects, job training programs, and schools. All of these studies showed that simply being seen and interacting with other people resulted in more friends.            

So, how can people who suffer from shyness use this research?

First, most people who are shy are given advice that’s hard to implement: "Go make friends!” is a familiar refrain. "Learn to network" and become a “social butterfly” are also common pieces of advice. Unfortunately, these ideas are tough to implement — it’s hard to stop being who you are and suddenly become a different person.

The Westgate studies showed that there’s an alternative: making friends can be as easy as talking to a person you see day after day. It’s as simple as putting yourself in spots where you will continue to interact with people on a regular basis. Therefore, try hanging around a crowded place — think coffee shops, local markets, cultural venues — instead of looking for friends who fit certain criteria. This might sound hard to do at first, but it’s a simple strategy to build confidence in social situations. I call this the “Tango Approach” to curbing social anxiety because, before online meetup groups, a lot of people would go to dance classes to learn to tango or ballroom dance in order to meet new people. Dancers quickly bond over a shared interest in music and dancing, sometimes without saying a word to each other.

Today, social media, chat rooms, and comment boards allow people to easily connect and develop friendships. In this way, the Internet has replaced “geographic proximity” with “psychological proximity.” (This idea has a lot of implications for online dating, which I’ll address in an upcoming post.) For many people, online communication is easier, so it can be useful to think of the Internet as a sort of sandbox for eventual in-person meetings.

Overall, if you want to overcome shyness and find a friend, mate, or business partner, it pays to be visible. Sometimes all it takes to make a friend is to make sure you cross their path.

Layer Cakes Can Prevent Cyberbullying

“R U gay, Robbie? I think you are!” “No one likes you,” “Stop skipping school pretending to be sick, just go kill yourself.”                   

Each week, Robert looks at his phone and finds 10 to 20 abusive text messages like these. He finds similar messages on Instagram and Snapchat calling him a loser. After months of keeping everything inside, one night he breaks down in tears in front of his mom at the dinner table. 

His mother, astonished, asks if he’s ok. Robert holds himself together enough to tell her he’s fine. He realizes that the only way to deal with the problem is to join in. He grabs his phone, pulls up an anonymous profile on Yik Yak, and pecks out a stream of insults to random users.

Cyberbullying has led to an increase in depression and suicide among young children. It’s a tremendous public health problem, but tweens and teens often don’t even realize they’re cyberbullying others. In addition, it’s a difficult problem to diagnose because children don’t like to talk about their online lives. So, how do we stop cyberbullying among youth?

In one study, researchers gave questionnaires to 2,186 middle and high school students. The aim of the study was to examine how frequently students were involved in cyberbullying and to understand the factors that contribute to bullying behavior. More specifically, the authors wanted to distinguish between three groups: victims, bullies, and bully–victims (i.e., students who reported both cyberbullying someone and being the victim of a bully).

Cyberbullying is a relatively new field of research, so it’s notable that this study included thousands of kids. What did the authors find?

First, a lot of students participate in cyberbullying! More than 30% of the participants identified as being a victim or a bully. More surprisingly, one in four students identified as being both a bully and victim during the previous three months.

When we look at these findings more closely, a few things are worth noting. First, in traditional bullying, bully–victims are usually the smallest group of concern, but in this study it was the most common group of students. Second, the authors found that girls were more often bully–victims. And finally, the three groups of children had some shared risk factors, including whether they shared passwords with their friends.

Other studies show that cyberbullying is practically an epidemic: 42% of teenagers with tech access reported being cyberbullied in the past year, and 81% of teens say bullying online is easier to get away with. As an adult, it can be easy to dismiss these statistics: spats usually erupt over trivial things like celebrities or gossip, and fights are sometimes forgotten the day after they begin. But it’s important to remember that cyberbullying often leads to face-to-face confrontations and some students become afraid to go to school.

So, what do these findings mean for parents and teachers? As soon as kids start controlling digital devices on their own, talk with them about the potential risks and rewards of online communication. As they grow older and more proficient with technology, you can add other elements to your talks. I refer to this as the “Layer Cake Method” of online education. For example, you can start with a base-layer talk about netiquette and then move on to topics like online predators, identity theft, and Internet porn. During these talks, it’s important to emphasize that anonymity makes it easier to be a bully, and that respect in online communication is just as important as it is in real life. Finally, being more open about the dangers of cyberbullying may help reduce the risk of young girls reciprocating with bullying behavior.  

Cyberbullying poses a serious challenge, but there are many resources available if you feel overwhelmed. Most importantly, there’s a clear protocol to follow when cyberbullying happens. Research shows that victims rarely share their experiences, so authority figures need to know that schools, technology providers, and local governments have policies in place that can help resolve problems before they get out of hand.

The (Social Media) Doctor Is In: Twitter Can Be Used to Monitor Health


“I hate Donald Trump!” “I’m exhausted and my boss doesn’t care,” “I'm jealous of my sister's new car.”

If this is what your tweets look like, then you might want to reconsider your words: research shows you may be at risk for heart disease. Most people don’t realize it, but the language they use in social media posts can be used to predict their well-being, like their risk for heart disease or other serious conditions. How can social media be used to monitor people’s health and overall well-being?

In one study, researchers evaluated 100 million tweets from 1,300 counties in the United States. The language in these tweets was analyzed and categorized as having either negative or positive sentiment. Then, the authors separated the words of each tweet into word clouds that reflected “risky” language (e.g., despise, hate, jealous, tired) or “protective” language (e.g., opportunity, strength, hope, great).

Using machine learning methods, the authors created algorithms that compared the sentiment of the tweets from each county to CDC data on causes of death. Their findings were dramatic: counties whose tweets expressed more negative emotion (e.g., tweets filled with words such as “hate”) had more heart disease–related deaths compared to counties that featured tweets with more protective language.
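To make the dictionary-based scoring described above concrete, here is a minimal Python sketch; the word lists, sample tweets, and scoring formula are illustrative placeholders of my own, not the authors’ actual model:

```python
# Illustrative word lists (placeholders, not the study's real dictionaries)
RISKY = {"despise", "hate", "jealous", "tired"}
PROTECTIVE = {"opportunity", "strength", "hope", "great"}

def county_sentiment(tweets):
    """Return a crude negativity score for a set of tweets:
    (risky words - protective words) / total words.
    Higher values mean more negative language."""
    risky = protective = total = 0
    for tweet in tweets:
        for word in tweet.lower().split():
            total += 1
            if word in RISKY:
                risky += 1
            elif word in PROTECTIVE:
                protective += 1
    return (risky - protective) / total if total else 0.0

sample = ["I hate my commute", "Great opportunity ahead", "So tired of this"]
score = county_sentiment(sample)
```

A score like this could then be compared against county-level health statistics; the actual study used far richer language models and machine learning rather than raw word counts.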

When looking further into the data, the methods worked very well at predicting death from hardening of the arteries (atherosclerosis), which is the leading cause of death in the United States. It’s interesting that the people who were tweeting were not the people whose deaths were measured. Instead, the overall tone of the tweets — which were from people too young to be suffering from heart problems — appeared “to have captured a snapshot of the psychology of the community at large.” The authors had therefore discovered a similar result to one that our own team found in an earlier study focusing on HIV.

The authors claim that their methods predicted death from heart disease more accurately than risk factors such as obesity, smoking, and diabetes. Moreover, the prediction accuracy remained strong even after they considered classic predictors of heart disease such as education and poverty. These claims might make you raise your eyebrows, but decades of research have shown that the words people use reveal a lot about their psychology. In this case, the algorithms created by the researchers were able to predict personality traits. In fact, the authors claim the algorithms they developed predicted personality traits as well as or better than friends who filled out personality surveys about the participants!

So, how does this research impact your life? For one, an entirely new field of research, known as “digital epidemiology,” has sprung up around social media. Now that social media is widely used (65% of American adults visit social media sites regularly, and 90% of young adults use at least one service), doctors and researchers have an entirely new tool to predict communitywide well-being. Healthcare providers already monitor Google searches to forecast disease outbreaks (e.g., flu, malaria, STDs), which helps determine where resources should be allocated.

For individuals, the hope is that as more and more people use social media, health predictions will become even better. For example, future studies may ask patients to provide access to their social media accounts when they go to an emergency room or visit their primary care doctor. Language in postings and status updates could provide clues to the risk for depression, which is a major risk factor in the recurrence of many diseases. And existing studies have already helped identify post-partum depression among new mothers.

If trends are any indication, social media is going to become even more popular and integrated into tools to solve real-world problems. I’m hopeful that we’ll find ways to use social media to predict other leading causes of death in the near future, and that we’ll find an effective way to protect personal privacy at the same time.

How to Stop a Terrorist: Understanding “the College Greek Life Effect”


On this date in 1995, a truck bomb exploded outside the Federal Building in Oklahoma City, killing 168 people and injuring hundreds. Timothy McVeigh was convicted of the crime and sentenced to death. At the time, the bombing was the worst act of terrorism America had ever experienced, but sadly it was just the beginning of a long cycle of violence.

Police and national security experts have spent billions of dollars trying to identify terrorists before they act. Despite these efforts, terrorist activities continue to plague the United States and the international community on an almost daily basis. What, then, can be done to stop terrorism?

In one study, a team of researchers from Australia and the United Kingdom studied the concept of belief formation. The researchers presented a group of six participants in a darkened room with an optical illusion called the autokinetic effect, the same illusion used in Sherif’s classic conformity studies. In this illusion, a stationary point of light appears to “move” around in different directions for about 15 seconds. During a series of 25 trials, the participants were asked to estimate the farthest distance the light reached from its starting point.

Of the six participants in each trial, three were confederates (i.e., people who were working as “secret agents” for the researchers). Each of the confederates was instructed to extend the distance estimate of a real participant by five centimeters. In addition, the researchers made some participants go through tasks as a group before conducting the experiment, with the idea that these tasks would encourage bonding.  

The researchers also asked the real participants a series of probing questions to see whether participants were trying to fit in with the other group members: “Are your guesses becoming more accurate with each trial? Are you trying to fit in with others' judgments? Were you influenced by others’ judgments?” The overall goal of the study was to determine how the decisions, attitudes, and beliefs of a person influence the opinions of other members in a group.

So, did group “belongingness” cause the real participants to change their estimates to match those of the confederates? Yes—in almost every trial! In a nutshell, the authors found that participants who felt they fit in with the group, compared to those who felt they didn’t fit in, were more likely to conform to the groups’ estimate of how far the light had traveled. In other words, people who felt less embedded in their group gave responses that were very different from the rest of the responses.

Now, what do guesses about a traveling light have to do with combating terrorism? The study showed that we can predict whether people will conform to or oppose their social group based on whether they feel they fit in with that group.

I call this “The College Greek Life Effect” because, even though I wasn’t in a fraternity in college, I had a lot of friends in them and noticed an interesting pattern. Greek students who felt aligned with the goals of their fraternity or sorority brothers and sisters enthusiastically participated in Greek events. These were the students who could be seen doing upside-down kegstands, wearing their nicest dress to a sorority formal dinner, or rocking out to the house band at a frat party. On the flipside, there were other members who used to feel aligned with Greek life but no longer felt they fit in with the rest of their house. They might have been long-time members who weren’t ready to leave the Greek circle entirely, but it was clear they were trying to separate themselves by telling their non-Greek friends, “I’m not like other people in houses.”

The College Greek Life Effect exists much more broadly in society — we can see it based on how citizens respond to government policies and popular culture. People who feel like they fit in with and generally agree with their government’s policies are happy conforming to it. They feel comfortable relying on elected representatives and peers to help them make good decisions. However, when people feel isolated from general society or stop agreeing with policies, then they can, in the worst-case scenario, join gangs, cults, or other extremist groups to express their difference of opinion.

Law enforcement personnel are trained to spot this sort of group dynamic, but parents, religious leaders, and teachers are often the first to see these “oppositional” relationships form — so it’s important to be aware of (and say something about!) unusual group behavior. The purpose of speaking out is not to force everyone to conform to each other — it’s good to live in a society with differing opinions and actions — but to bring attention to the fact that people who are acting dramatically different from the rest of society might feel isolated. In turn, this may be a sign that an individual is in need of help.

More broadly, if people are hardwired to fit in with others, then parents and other community leaders need to promote diversity and positive group experiences. This is especially important in a society like the United States that has citizens from such a wide range of backgrounds. Exposing a would-be terrorist to other ways of thinking won’t necessarily change their mind, but helping him or her feel more comfortable in society may make that person less likely to want to destroy society.

Terrorism is a complex issue, so I don’t want to imply that the prescription above is the solution or to oversimplify how we think about acts of terrorism. Rather, the major take-home I want to offer is that it’s common to characterize terrorists as psychotic or sadistic, but the study above — and many others like it — suggests that they are often ordinary people driven by group dynamics. No matter how heated the discussion may get, I think it’s important to remember this fact.

Online Predators, and How Parents Should Deal with Them

“Oh my gosh, did you know that Nick Jonas was just caught with the Hollywood It Girl? Kate must be soooo mad!!!”

Danielle, a 13-year-old girl, often exchanges direct messages with her followers, so she isn’t surprised to see this question pop up on her phone. The message comes from a guy who identifies himself as David, an aspiring teen actor who just moved to town. They exchange a few more messages and he “seems nice,” so when David asks her to meet up the next day she agrees and gives him her cell number.

David starts texting her that night, sending a headshot, then a “muscle-shot,” and asks for a photo of her. David looks much older than expected, so Danielle gets weirded out and starts ignoring his texts. He gets persistent, and even aggressive, saying he’ll show her a lot more if she doesn’t respond. She responds with, “Stop texting me.”

Three days later, Danielle takes a call from an unknown number. It’s David. She immediately tells him to stop contacting her and hangs up. Scared to contact police or tell her parents for fear of getting in trouble, Danielle tries to distract herself by looking at Instagram photos.

Because of the Internet and social media, today’s kids are meeting more people than any previous generation. Unfortunately, like in the story above, they're not just meeting friends or teachers — they’re also meeting online predators who use a fake identity to try to lure teenagers, often into sexual acts.

What, then, can parents do to stop kids from meeting online predators?

In one study, 454 parents and their children participated in a research project on youth and Internet behaviors. The researchers assumed that most parents would underestimate how often children engage in risky online behavior, including unsafe interactions with strangers and exposure to sexual material. The goal of the study was to identify what contributed to this misconception.

Parents were asked, “How often has your child been approached online by a worrisome stranger?” For the children, the same question was phrased to distinguish what type of stranger they may have met: (1) an adult stranger that seemed interested in a sexual or romantic relationship, (2) someone who wanted to meet in real life, or (3) someone who was “just weird.” Parents were also asked how frequently they thought their child had been exposed to sexual content by accident or by intentionally seeking it out.

What did they find? One of the major findings was that parents who had poor communication with their kids were more likely to underestimate how safe their kids were online. Moreover, children who found it hard to talk with their parents were less likely to tell them about strangers they met online. These findings were similar to another study in which only half the children surveyed remembered being warned by their parents about talking online to strangers.

In their discussion, the authors describe the “third-person hypothesis,” which proposes that people think media messages are more harmful to others than to themselves. Third-person perceptions in the study significantly increased the odds that parents would underestimate whether their child had been approached by a worrisome stranger. The authors speculate that parents with a strong third-person orientation may assume that their child is smarter than other kids and, therefore, less at risk of being lured into a face-to-face meeting with a stranger.

So, how does this study provide answers about what parents can do to stop their kids from meeting online predators? For one, parents need to start communicating with children at a much younger age about online risks.

Children as young as 3 are starting to use digital devices, and an estimated 90% or more of 12- to 18-year-olds have access to the Internet. Society teaches that the “Birds and the Bees” talk should be the first sex talk parents have with their kids, but these days, parents should begin educating kids about risqué content and online predators well before that talk. This pre-“birds and bees” talk, which I call “The Birds and the Bees, Part 1,” should occur between ages 6 and 9. Yes, it’s unfortunate that kids need to have this conversation at such an early age, but it’s important to teach them about the risks they face. There’s no doubt this discussion adds another layer of complexity to the already-hard job of parenting, but the conversation doesn’t have to go into lurid detail. Instead, the idea is to open a line of dialogue and make kids aware that a parent is a safe adult and the primary person they should talk to if something online makes them feel uncomfortable or scared.

The findings from the study described above also have important implications for the increasing rates of cyberbullying happening among kids. And as you may imagine, parents underestimate their kids’ risk of being cyberbullied, too, but more about that in a future post.

Personal Privacy vs. Societal Good: Thoughts on the San Bernardino iPhone Controversy

Image: iStock.


If you commit murder and leave evidence on your iPhone, Apple won’t turn you in. That was the company’s stance on a request from the U.S. government to access the locked phone of Syed Farook, one of the killers in the December 2015 San Bernardino mass shooting.

Apple claimed bypassing the security functions of Farook’s phone would be an invasion of privacy and damage their reputation. However, in this case, it might have been worth violating digital privacy laws in order to protect the broader public. The question is, though, who was right, Apple or the U.S. government? Or more broadly, how do we determine the ethics of whether and when to compromise privacy for public health?

In a recent study, researchers used an online survey to explore the relationship between ethical factors and brand loyalty. The researchers recruited a diverse group of 220 university students and asked participants to answer questions about their latest online purchase. The survey focused on four different factors: security, privacy, non-deception (e.g., accurate product descriptions, customer support), and fulfillment (e.g., clear pricing, timely delivery). In turn, these four factors were considered in the context of satisfaction and loyalty to a manufacturer and its website.

In terms of satisfaction, fulfillment was the most important factor to consumers, followed by non-deception and security. In terms of loyalty, the findings were more interesting: privacy turned out to be a major concern — it was the only factor related directly with loyalty — but non-deception and fulfillment were not of significant concern, which was contrary to the authors’ expectations.

Now you might be asking, what do people’s perceptions of brands and products have to do with whether Apple should protect a killer? The take-home from this study is that companies like Apple place a high value on what their customers think and want to keep them happy. If consumers are more loyal to products they think take their security seriously, then companies will start touting the security of their products. And nothing says “we’ll keep your information secure” like resisting an order from the U.S. government to provide information.

Just last week, the government announced that it was able to unlock Farook's iPhone without any help. This development may swing the conversation toward whether Apple’s security measures are actually effective, but as marketers have pointed out, the company’s stand will underscore its commitment to privacy and security in the eyes of consumers. Apple has already expressed interest in learning how the phone was unlocked, and its public statements have focused relentlessly on the needs of its customers (and avoided the ethical minefield associated with the San Bernardino case and other high-profile requests for access to locked phones).

It’s growing increasingly clear that the government wants more oversight while companies — and people, for the most part — want more privacy for individuals. So, the short answer to the question of when ethics trumps public health depends on whether or not you’re a customer who values privacy. This issue will remain under a very bright spotlight, given that the government has cited a controversial interpretation of a law from 1789 in its battle with Apple. If that interpretation is upheld, judges will have the power to force companies to comply with any court order to release digital information.

Who will get the last word? Stay tuned — I’ll have more to say about privacy and other hot button issues like cyberbullying and identity theft in upcoming posts.

Your March Madness Soothsayer: Prediction Technologies


“I’ll bet you $50 that an Oregon player will fall on the ground before a Saint Joseph’s player!” said a 40-ish guy wearing a Syracuse basketball jersey to a younger guy with a Utah jacket and beer-stained shorts. They were both waiting in about an hour-long line to make March Madness bets in Las Vegas. “I’m here to bet on Syracuse but this line is taking forever,” said the Syracuse fan. “We might as well make it interesting and place a couple more bets while we’re waiting in line watching the game, right? So I say an Oregon player trips and falls on the court before a Saint Joe’s player. You in?”

People will bet on virtually anything, and not just during March Madness. Will Donald Trump win the Republican primary? Will the Republican Party throw him under the bus if he wins? If Trump were to be elected, would the United States look like Hill Valley in Back to the Future 2 under Mayor Biff Tannen?

If people could predict the future, we could accomplish a lot more than just knowing which basketball player will screw up first or knowing who will win an election. We could solve some really important world problems, like preventing the spread of HIV, stopping drug abuse, and preventing crime. But even though people love making predictions, and betting on anything and everything, they’re actually pretty bad at it. What, then, can be done to help make better predictions about the future?

In one study, doctors and nurses were given the opportunity to bet on whether they thought a flu outbreak would occur. They bought and sold actual contracts over the Internet and would make or lose money based on whether they were correct. This “prediction market,” or a stock market where you bet on events, was built by researchers at the University of Iowa to test whether healthcare workers could forecast an outbreak of seasonal influenza.

The idea behind the study was that, just like people in the stock market bet on the perceived future value of Apple, Google, or Coca-Cola, the market participants in this “influenza market” would bet on the perceived future likelihood of an outbreak of the flu. And just like the current stock price of Apple or Google or the price of oil futures contracts, the current price of “the flu” would give the best estimate of how hard the flu might hit that season.

Here’s how it worked: Each trader started with $100 in their account. They would be able to buy or sell the likelihood of a Centers for Disease Control and Prevention (CDC)–reported flu outbreak hitting within 8 weeks. The prediction market was open 24 hours a day, and prices were updated in real time and visible to all participants. At the end of the 8 weeks, the market would close and report the flu outcome based on data from the CDC. People who had bet correctly would make money.

Because this kind of betting is illegal, at the end of the flu season the money in the winners’ accounts was converted into educational grants instead of being paid out as cash. This was a way of incentivizing people to take the bets seriously using real money while preventing the market from being an illegal betting site.
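The contracts described above work like binary options: a “yes” share pays out if the CDC reports an outbreak within the window, and a “no” share pays out otherwise. Here is a simplified Python sketch of how such contracts might settle; this is my own illustration under that assumption — the trader names and share counts are made up, and the real Iowa market’s implementation differs:

```python
def settle(holdings, outbreak_occurred):
    """Settle each trader's binary contracts at market close.
    holdings maps trader -> (yes_shares, no_shares); each winning
    share pays $1, each losing share pays $0."""
    payouts = {}
    for trader, (yes_shares, no_shares) in holdings.items():
        payouts[trader] = yes_shares if outbreak_occurred else no_shares
    return payouts

# Hypothetical positions: one trader bet on an outbreak, one against.
holdings = {"dr_a": (40, 0), "nurse_b": (0, 25)}
print(settle(holdings, outbreak_occurred=True))   # dr_a's 40 yes-shares pay $40
```

Because a share pays $1 on the predicted event, the market price of a “yes” share (somewhere between $0 and $1) can be read as the crowd’s probability estimate that the outbreak will happen.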

So how did the market perform? Compared to using historical data to predict the flu, the market did a lot better. It reached about 70% accuracy in predicting the CDC flu statistics. The study had some limitations (e.g., a small number of traders did the bulk of the trading; the pool of traders was from a small area), but overall the market successfully predicted flu outbreaks 2 to 4 weeks in advance.

In recent years, specialized prediction markets like this influenza market have been used to help predict the outcome of uncertain events. The University of Iowa markets, which started in 1988, have accurately forecast the sales of computer products and events in popular culture such as the Oscars. For elections, the prediction record of the Iowa model has been significantly better than that of standard statistical models.

So how can we improve our ability to predict events in order to solve real-world problems? One way is to incorporate the science behind prediction markets. While there are existing markets that can be used for prediction, like the Iowa Electronic Markets and the U.K.-based Betfair, the general principle of crowdsourcing people’s guesses can be incorporated into almost any product or service to improve predictions. For example, a product could crowdsource bets on how much exercise people get and use that information to help intervene. Prediction markets and crowdsourcing are just a few of the tools that can help with prediction.
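The crowdsourcing principle above can be sketched in a few lines. This toy example simply averages guesses — a simplification for illustration, since real prediction markets aggregate opinion through prices rather than means — but it shows the core idea that pooling many noisy estimates often beats most individual guessers:

```python
def crowd_estimate(guesses):
    """Aggregate individual guesses into a single crowd prediction
    by taking the mean (the simplest 'wisdom of crowds' rule)."""
    return sum(guesses) / len(guesses)

# Hypothetical user predictions of minutes of exercise this week
guesses = [30, 45, 60, 35, 50]
print(crowd_estimate(guesses))  # 44.0
```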

Our Institute is developing a bunch of these “prediction technologies,” with the goal of bringing together leading experts across disciplines to create digital technologies that solve important real-world problems.

But back to what inspired this post: college basketball. Even with the availability of accurate predictive tools, there are always surprises. For example, take a look at this tweet that showed a near 100% winning chance for a team that was leading by 12 points with 44 seconds left in the game. It turns out, the team lost in overtime. (Watch the video—it’s amazing.) So even with the best tools, we can’t always be accurate, but I’m confident that as time goes by prediction technologies will help us solve more and more difficult problems.

Vaccines and Evidence-Based Medicine: A Q&A with Sean Young, PhD

Researchers have discredited studies that link autism to vaccines, yet it remains a topic of concern for some parents, particularly among the affluent. What’s your take on why this remains an enduring issue in the news?

It’s easy to find associations between things. Sometimes those associations are true, sometimes they aren’t. A classic example is the association between eating ice cream and death by drowning. Someone could look at that link and say that eating ice cream causes drowning, but that would obviously be false. The real reason for the link is that people eat ice cream on hot days, and they also go swimming on hot days. When it’s hot, you find more people eating ice cream and drowning, but it’s not that ice cream causes drowning, it’s that both happen together on hot days. A common expression you’ll hear in cases like this is “correlation does not imply causation.” That is, having two things happen together doesn’t mean one thing necessarily caused the other.
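
You can even watch a spurious correlation appear in a simulation: let temperature drive both ice-cream sales and drownings, with no causal link between the two, and the two series still correlate strongly. A sketch with invented numbers:

```python
import math
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

# Simulate the classic confound: hot days drive both ice-cream sales and
# drownings, but neither causes the other. All numbers here are invented.
def simulate(days=1000):
    ice_cream, drownings = [], []
    for _ in range(days):
        temp = random.gauss(70, 15)                         # daily temperature (F)
        ice_cream.append(2.0 * temp + random.gauss(0, 10))  # sales depend on heat
        drownings.append(0.1 * temp + random.gauss(0, 2))   # so do drownings
    return ice_cream, drownings

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sales, incidents = simulate()
r = pearson(sales, incidents)
print(f"correlation between ice-cream sales and drownings: {r:.2f}")
# The correlation is clearly positive, yet the only real driver is temperature.
```

Nothing in the simulated drowning counts ever looks at ice-cream sales, which is exactly the point: a shared cause is enough to produce a strong correlation.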

How does this relate to autism? People have seen an increase in autism and they are scared, and they’ve also seen an increase in vaccinations during roughly the same time period in the late 20th century. Some people see these associations and start making claims that vaccines cause autism, but science doesn’t back up those claims. But it’s a compelling argument that has built a base among educated, affluent people who are scared their kids will get autism. When people are fearful, it’s hard to use science or facts to convince them that their fears are unfounded – people’s fear, rather than science, takes precedence. Problems that elicit fear and other strong emotions make good news because people will pay attention.

Have you used social media to help people understand the benefits of adhering to a vaccination schedule or general medication schedule?

We haven’t done anything around using social media to change people’s perceptions of the link between vaccines and autism or other health problems. I have, however, talked in a previous post about how data can be used to understand and predict events. Because people readily share their views about vaccines, we could apply similar methods to mine social media data about vaccines and use that to predict whether people support vaccines and how this support would affect vaccination rates and disease outbreaks.
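
As a hypothetical illustration of what “mining social media data about vaccines” could mean at its very simplest, here is a naive keyword-based scorer. Real systems would use trained classifiers rather than keyword lists; every word and post below is invented:

```python
# A deliberately naive keyword scorer for vaccine-related posts. This is
# only a toy illustration, not a method from the Institute's research.
SUPPORT = {"vaccinated", "protects", "safe", "effective", "immunized"}
OPPOSE = {"dangerous", "toxic", "refuse", "hoax", "unsafe"}

def support_score(post):
    """Positive score: post reads as pro-vaccine; negative: anti-vaccine."""
    words = set(post.lower().split())
    return len(words & SUPPORT) - len(words & OPPOSE)

posts = [
    "Got my kids vaccinated today, vaccines are safe and effective",
    "I refuse vaccines, they are dangerous and toxic",
    "Flu season is coming up again",
]
scores = [support_score(p) for p in posts]
share_supportive = sum(s > 0 for s in scores) / len(posts)
print(f"{share_supportive:.0%} of posts read as supportive")
```

Aggregated over many real posts and tracked over time, even a crude signal like this could feed the kind of forecasting of vaccination rates and outbreaks described above.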

Despite advances in understanding both HIV and the human immune system, a fully successful vaccine to treat HIV is still not available. Do you have hope that one might still be developed, or do you feel other preventative therapies such as PrEP are as far as we’ll go?

I have hope, but I don’t think anyone knows the answer to this question. One current approach uses genetics/genomics techniques to edit genes and target HIV-susceptible and HIV-infected cells.

Do you agree with doctors who have implemented a policy of only treating children who have been immunized according to the American Academy of Pediatrics schedule? And more generally, what do you see as the core behaviors/belief systems that might cause people to not have their infant vaccinated?

As a scientist, I trust that science is the current best approach, whether it’s science recommended by the American Academy of Pediatrics or other organizations. All that matters to me is whether it’s good science, as opposed to poor science like the spurious correlation I described above. Science might not always prove to be right, as new results emerge and testing methods change, but at any given time I trust that scientific approaches are best. Rather than focus on what causes people to not have their infants vaccinated, I prefer to focus on what is working and leverage that science. We know that social norms have a tremendous effect on what people do, and that extends to vaccines. By creating a social norm that encourages people to vaccinate their kids, such as by using the HOPE social media model, I think we can make major changes in vaccination rates and reduce the spread of diseases caused by lack of vaccination.

Your Future Self: A New Way to Make the Most of Daylight Saving Time

Imagine waking up in the morning, looking in the bathroom mirror, and seeing yourself 20 years in the future.

Yesterday, most people in the Northern Hemisphere gained a “bonus” hour as clocks fell back to end daylight saving time. There’s no shortage of recommendations for what to do with that extra time: exercising, learning a new skill, and volunteering top most lists of productive suggestions. Most of us, though, make poor use of free time—it’s much easier to use that hour to nap or get distracted by digital technologies like social media or email.

So, what can people do to make better use of their free time?

One of the problems with using time wisely is an inability to imagine how our present and future selves connect. Dr. Hal Hershfield, a psychologist with the UCLA Anderson School of Management, describes the problem this way: “When people think of themselves in the future, it feels to them like they are seeing a different person entirely—like a stranger on the street.” This disconnect can lead to a host of bad decisions in terms of health behavior (e.g., smoking) or long-term planning related to family or work.

In one study, Dr. Hershfield describes an experiment designed to evaluate how and why people save money for the future. The study proceeds from the premise that people are indeed estranged from their future selves (i.e., that saving money is a choice between saving today and giving money to a stranger in the future). I interviewed Dr. Hershfield for BlackBoxPhd.com about this study as well as his general approach to future-self research (click here).        

The study makes clever use of virtual reality technology. In the first part of the study, participants were shown a visual representation of themselves (a digital “avatar”) that simulated how their body and face would look in the future. You can get a general idea of how this age progression technology works by visiting Merrill Lynch’s Face Retirement page.

Dr. Hershfield and his colleagues randomly assigned participants to one of two groups (current self or future self). Participants were told that they were going to enter a virtual reality environment and that they would answer a series of personal questions. The experimenter then showed participants two images of their avatar before they entered the virtual reality environment: participants in the current-self group saw front and side views of their avatar, and participants in the future-self group saw young and age-morphed versions of their avatar.

In line with the researchers’ predictions, people who interacted with the aged version of their self were more willing to allocate money to a savings account. In fact, they allocated more than twice as much money to their retirement account! When the study was adjusted to account for the influence of immediacy (i.e., whether participants were reacting to demands of the researchers) or emotion, people who interacted with their aged self still tended to allocate a higher percentage of pay for retirement. As you might expect, the phenomenon of caring more about present versus future economic conditions is common, but Dr. Hershfield’s study is a compelling and unique way to prompt behavior change.

In general, future-self research focuses on high-stakes events such as career success and financial planning. But it’s easy to see how this research can help in the short-term, too. For example, if you wanted to use your extra hour to jog a few times a week, you can use wearable technologies such as a Fitbit or Nike+ sensor, which provide the same sort of continuous data that Dr. Hershfield collects in his studies. With some tweaks by the manufacturer, the collected data could depict a happy avatar of yourself after attaining a mileage goal, or display a fatigued avatar if you don’t sleep right. In other words, technology that’s available right now can help you visualize the effort needed to reach a near-term goal, like getting in shape for the summer beach season, or the effects of not taking care of yourself.

Although you might be scared to see a realistic image of your future self, an avatar might end up becoming a good friend or even a mentor. He or she will give you honest feedback, respond immediately to criticism and change, and can teach you how to spend your time more wisely. I’ll explore this idea more in future posts.

Rethinking Gender Equality in the Workplace

"Women will only have true equality when men share with them the responsibility of bringing up the next generation." – Supreme Court Justice Ruth Bader Ginsburg on how to catalyze gender equality reform

In March, we celebrate the contributions women have made to culture, society, and science. It’s inspiring to see women are increasingly visible in leadership roles — less than 100 years ago, it wasn't possible for a woman to vote, let alone run for president of the United States.

Despite the great strides we've made in women’s rights, there are many places where women still have limited options. In fact, the World Economic Forum recently reported that the global gender gap won’t close entirely until 2133. There’s also the danger that young people feel the battles of feminism have already been won, and it doesn’t help that gender arguments have become increasingly frivolous.

So, how can we accelerate gender parity across the world (and in our own community)? More specifically, how can we increase gender equality in the workplace?

A study published in Scientific American notes that although people think that women and men have equivalent leadership capabilities, women are still under-represented in high-level positions like corporate CEOs, professors, and politicians. Researchers identified two reasons for this imbalance: 1) women face skepticism about their work abilities, and 2) cultural norms impact people’s views about what roles women want in the workplace. For example, men more often choose dangerous or competitive work environments, and often engage in aggressive or dominant behaviors that lead to professional advancement. The article offers an interesting, and somewhat controversial, explanation for this disparity: men and women have fundamentally different reasons for why they seek a high-level position.

The authors collected data from more than 4,000 people. Their results showed that “women view high-level positions as equally attainable as men do, but less desirable.” The reasoning for this finding was twofold: women have a more diverse set of life goals, and they think promotions might lead to negative consequences like more stress and less time for relationships.

The two authors — both women, incidentally — found the same results in executive-level classes they teach and across a wide range of industries. They underscore the point that men and women have different “preferences” by presenting a comparison that asked people to rate their current position, their ideal position, and the highest position they felt capable of attaining. The authors found no differences between men and women in their current and highest attainable positions, but women rated their ideal position lower.

The authors speculate that this finding is related to people’s life goals. When they asked men and women to list their life goals, women listed more goals and fewer of them were related to prestige or power in the workplace. The authors conclude: “By definition, if you have more goals, you can’t allocate as much time and attention to any one of them (on average), including professional advancement.”

Although this study provides a valuable new wrinkle to gender studies, it raises an interesting question: is it helpful or counterproductive to the fight for gender equality for women to have more goals than men?

I think the answer lies in the type of goals that women set for themselves. If the authors had found that women have a greater number of personal goals, like the desire to take on more hobbies, then reducing the number of these goals could help reduce gender inequality. But it’s unlikely that this is the reason for inequality. It’s more likely that women’s goals were focused on addressing important social needs, like balancing family life, relationships, and being compassionate toward others. This type of behavior should be encouraged rather than penalized.

The problem, therefore, is that society places more responsibility on women to balance achievement in a greater number of domains, including personal development, physical attractiveness, family relationships, and in recent generations, equal levels of power and income in the workplace. With all of these expectations, it’s no surprise that women can’t achieve every goal they set — they must pick and choose where to focus their energy.

Justice Ginsburg hit the nail on the head: the solution to increasing gender equality in the workplace doesn’t lie in reducing the number of goals for women. Instead, we should encourage men to seek out more goals, and absorb more responsibility, outside of the workplace. The hope is that a broader set of goals related to family, education, and non-work–related activities will enhance personal well-being and benefit society as well.

A Practical Guide to Inspiring Volunteerism

Fifty-five years ago, President John F. Kennedy created the Peace Corps. This volunteer program has been a huge success, with more than 200,000 people giving their time to work on social and economic issues in developing countries. More than ever, people seem to be excited about getting involved in their communities.   

However, despite people saying they want to help, volunteerism is actually decreasing. In 2014, the U.S. Labor Department recorded the lowest rate of volunteering in more than 10 years.    

How, then, do we get people to follow through with their plans to volunteer? Or more broadly, how do we foster a society where people make time to help others?

There is a biblical story about a man who helped a traveler beaten by a gang of thugs, while other people just passed by. This story has become known as the parable of the Good Samaritan, which is often used to show that good people help and bad people don't help those in need. 

Researchers at Princeton Theological Seminary designed an experiment around this story. In one building, participants completed a questionnaire and were then instructed to walk to another building to give a talk either about jobs or about the parable of the Good Samaritan. As in many social psychology experiments, participants were divided into two groups: half were told they were late to give the presentation, while the other half were told they had a reasonable amount of time to get to the building. The experimenters also staged an actor, called a confederate, whom the participants would see slouched in an alley, moaning and coughing, as they walked to give the presentation. This man was clearly visible and obviously in need of help. The question was, would people stop to help him?

The study answered this question with two interesting findings. First and counterintuitively, the topic of the presentation didn’t affect whether people were willing to help, even when the talk was about the Good Samaritan. Second, the degree of urgency introduced had a major effect on whether participants stopped to help. People who were told they were late to give the talk were the least likely to help the man in the alley. In fact, only 10% of people in the “high hurry” category helped the actor.

So what is the take-home point of this research as it relates to volunteerism or helping others? It’s natural to assume that a person giving a talk on the Good Samaritan would stop to help someone in order to avoid feeling like a hypocrite. Similarly, it seems logical that people who express interest in volunteering would feel like hypocrites if they don’t follow through and volunteer. However, the biggest predictor of whether people actually help is often how much time they have. In other words, planning doesn’t have as big of an impact on volunteering as a person’s immediate situation.

How, then, can we apply this research so that more people can make time to help others? Making an internal commitment to volunteer is the first step, but it’s important to strategically block off time so that we’re not in a hurry to get other things done that could stop us from following through. Making more time for others can be as simple as using a scheduler. Calendars offered by Google or “to-do” applications such as Asana or TeuxDeux are fantastic tools to plan personal tasks, but there’s nothing stopping you from using them to slot time for others. These online tools have the benefit of being shareable, and many offer amusing “attaboys” when you finish a task. Booking an event is important because then you don’t have to make more (often unavailable) time to volunteer. As strange as it might sound, research supports the idea that blocking off time in a calendar to help others can help you follow through with your plans.*

We all have busy lives, so start by volunteering your time close to home (e.g., by helping a family member or friend with a project). You’re most likely to make an impact among those closest to you, and for many people that may be all the time that is available. If you find yourself wanting to get more involved, however, schedule time to help a local non-profit organization, church, or other community organization. And if you’re really ambitious, find a way to devote two years of your life to the Peace Corps.

* The strategy presented in this week’s post extends far beyond getting people to help others—it can also be used to help people follow through with a large number of activities in their personal lives and work. I’ll write more about those applications in the near future.