Cognitive Psychology and the Smartphone

The iPhone was released 10 years ago, and that got me thinking about the relationships I’ve had with smartphones and mobile devices. Of course, I remember almost all of them…almost as if they were real relationships. The first one, the Qualcomm QPC 860, was solid but simple. That was followed by a few forgettable flip phones and a Motorola “ROKR” phone that never really lived up to its promise.

But then came the iPhone, and everything changed. I started really loving my phone. I had an iPhone 3GS (sleek and black) and a white iPhone 4S, which I regard as the pinnacle of iPhone design and still keep as a backup phone. A move to Android saw a brief run with an HTC, and I’ve been in a steady commitment with my dependable and conservative Moto X Play for 2 years now. It’s with me every single day, and almost all the time. Is that too much? Probably.

Smartphones are used for many things

There is a very good chance that you are reading this on a smartphone. Most of us have one, and we probably use it for many different tasks.

  • Communication (text, email, chat)
  • Social Media (Facebook, Twitter)
  • Taking and sharing photos
  • Music
  • Navigation
  • News and weather
  • Alarm clock

One thing all of these tasks have in common is that the smartphone has replaced other means of accomplishing them. That was the original idea for the iPhone: one device to do many things. Not unlike “the one ring,” the smartphone has become the one device to rule them all. Does it rule us also?

The Psychological Cost of Having a Phone

For many people, the device is always with them. Just look around a public area: it’s full of people on their phones. As such, the smartphone starts to become part of who we are. This ubiquity could have psychological consequences. And there have been several studies looking at the costs. Here are two that piqued my interest.

A few years ago, Cary Stothart ran a clever study in which research participants performed an attention-monitoring task (the SART). They did the task twice; in the second session, one third of the participants received random text notifications while they worked, one third received random calls to their phones, and one third proceeded as in the first session, with no additional interference. Participants in the control condition performed at the same level in the second session, but participants who received random notifications (text or call) made significantly more errors during the second session. In other words, there was a real cost to getting a notification: each buzz distracted the person just a bit, but enough to reduce performance.

So put your phone on “silent”? Maybe not…

A paper just published by Adrian Ward and colleagues (Ward, Duke, Gneezy, & Bos, 2017) suggests that merely having your phone near you can interfere with some cognitive processing. In their study, they asked 448 undergraduate volunteers to come into the lab and participate in a series of psychological tests. Participants were randomly assigned to one of three conditions: desk, pocket/bag, or other room. People in the other-room condition left all of their belongings in the lobby before entering the testing room. People in the desk condition left most of their belongings in the lobby but took their phones into the testing room and were instructed to place them face down on the desk. Participants in the pocket/bag condition carried all of their belongings into the testing room and kept their phones wherever they naturally would (usually a pocket or bag). Phones were kept on silent.

The participants in all three groups then completed a test of working memory and executive function called the “operation span” task, in which they had to solve basic math problems while keeping track of letters (you can run the task yourself here), as well as Raven’s progressive matrices, a test of fluid intelligence. The results were striking: in both cases, having the phone nearby significantly reduced performance.

A second study found that people who were more dependent on their phones were affected more by the phone’s presence. This is not good news for someone like me, who seems to always have his phone nearby. They write:

Those who depend most on their devices suffer the most from their salience, and benefit the most from their absence.

Are Smartphones a Smart Idea?

Despite the many uses for these devices, I wonder how helpful they really are…for me, at least. When I am writing or working, I often turn the wifi off (or use Freedom) to reduce digital distractions. But I still have my phone sitting right on the desk, and I catch myself looking at it. There is a cost to that. I tell students to put their phones on silent and in their bags during an exam. There is a cost to that. I tell students to put them on the desk on silent mode during lecture. There is a cost to that. When driving, I might have the phone in view because I use it to play music and navigate with Google Maps. There is a cost to that.

It’s a love-hate relationship. One of the reasons I still have my iPhone 4S is that it’s slow and has no email or social media apps. I’ll bring it with me on a camping trip or hike so that I have weather, maps, phone, and text, but nothing else: it’s less distracting. Though it seems strange to have to own a second phone to keep me from being distracted by my real one.

Many of us spend hundreds of dollars on a smartphone, and more every month on a data plan, and at the same time have to develop strategies to avoid using the device. It’s a strange paradox of modern life that we pay to use something we then have to work hard to avoid using.

What do you think? Do you find yourself looking at your phone and being distracted? Do you have the same love/hate relationship? Let me know in the comments.

References

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research. https://doi.org/10.1086/691462

Stothart, C., Mitchum, A., & Yehnert, C. (2015). The attentional cost of receiving a cell phone notification. Journal of Experimental Psychology: Human Perception and Performance, 41(4), 893–897. https://doi.org/10.1037/xhp0000100

 


A Computer Science Approach to Linguistic Archeology and Forensic Science

Last week (Sept 2014), I heard a story on NPR’s Morning Edition that really got me thinking… (side note: I’m in Ontario, so there is no NPR, but my favourite station is WKSU, via TuneIn radio on my smartphone). It was a short story, but I thought it was one of the most interesting I’ve heard in the last few months, and it got me thinking about how computer science has been used to understand natural language cognition.

Linguistic Archeology

Here is a link to the actual story (with transcript). MIT computer scientist Boris Katz realized that when people learn English as a second language, they make certain errors that are a function of their native language (e.g., native Russian speakers leave out articles in English). This is not a novel finding; linguists have long known this. Katz, by the way, is one of many scientists who worked on Watson, the IBM computer that competed on Jeopardy!

Katz trained a computer model on samples of written English such that it could detect a writer’s native language from the errors in their English text. But the model also learned to determine similarities among the native languages themselves. It discovered, based on errors in English alone, that Polish and Russian have historical overlap. In short, the model was able to recover the well-known linguistic family tree among many natural languages.
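The flavor of this approach can be sketched in a few lines. This is a deliberately toy illustration, not Katz’s actual model: the only “error feature” here is article usage (native Russian or Polish speakers often drop “a/an/the”), the training sentences are invented, and a real system would learn thousands of such features.

```python
# Toy sketch: guess a writer's native language from error patterns in
# their English. The single feature used is the rate of articles
# ("a", "an", "the") per word -- a crude stand-in for the rich error
# features a real model would learn.
from collections import Counter

ARTICLES = {"a", "an", "the"}

def article_rate(text):
    """Fraction of words that are articles."""
    words = text.lower().split()
    return sum(w in ARTICLES for w in words) / len(words) if words else 0.0

def train(samples):
    """samples: list of (native_language, english_text) pairs.
    Returns the mean article rate per native language (a 1-D centroid)."""
    totals, counts = Counter(), Counter()
    for lang, text in samples:
        totals[lang] += article_rate(text)
        counts[lang] += 1
    return {lang: totals[lang] / counts[lang] for lang in counts}

def classify(model, text):
    """Pick the native language whose centroid is closest."""
    rate = article_rate(text)
    return min(model, key=lambda lang: abs(model[lang] - rate))

samples = [
    ("russian", "I went to store and bought apple and bread"),
    ("russian", "She is teacher in school near my house"),
    ("english", "I went to the store and bought an apple and a loaf"),
    ("english", "She is a teacher in a school near my house"),
]
model = train(samples)
print(classify(model, "He took bus to city center"))        # → russian
print(classify(model, "He took the bus to the city center"))  # → english
```

The same centroid idea, extended to many error features, is also what lets such a model notice that two native languages (say, Polish and Russian) leave similar fingerprints in English.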

The next step is to use the model to uncover new things about dying or disappearing languages. As Katz says:

“But if those dying languages have left traces in the brains of some of those speakers and those traces show up in the mistakes those speakers make when they’re speaking and writing in English, we can use the errors to learn something about those disappearing languages.”

Computational Linguistic Forensics

This is only one example. Another that fascinated me was the work of Ian Lancashire, an English professor at the University of Toronto, and Graeme Hirst, a professor in the computer science department. They noticed that the output of Agatha Christie—she wrote around 80 novels and many short stories—declined in quality in her later years. That itself is not surprising, but they thought there was a pattern. After digitizing her work, they analyzed the technical quality of her output and found that the richness of her vocabulary fell by one-fifth between her earliest two works and her final two. That, and other patterns, are more consistent with Alzheimer’s disease than with normal aging. In short, they are tentatively diagnosing Christie with Alzheimer’s disease based on her written work. You can read a summary HERE and you can read the actual paper HERE. It’s really cool work.
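One common vocabulary-richness measure in this kind of work is the type–token ratio: distinct words divided by total words. A minimal sketch (the two snippets below are invented stand-ins, not Christie’s actual text, and the published study used more sophisticated measures):

```python
# Type-token ratio: distinct words / total words. A lower ratio means
# more repetition and a poorer vocabulary.
import re

def type_token_ratio(text, window=None):
    """Compute distinct/total words. TTR shrinks as texts get longer,
    so real studies compare equal-sized samples; `window` truncates to
    the first N words for a fairer comparison."""
    words = re.findall(r"[a-z']+", text.lower())
    if window:
        words = words[:window]
    return len(set(words)) / len(words)

early = ("the detective examined the curious letter and pondered "
         "its obscure implications carefully")
late = ("the man looked at the letter and thought about the letter "
        "again and again")

print(round(type_token_ratio(early), 2))  # → 0.92
print(round(type_token_ratio(late), 2))   # → 0.64
```

Run over equal-length samples from each novel in order of composition, a drop in this ratio across a career is the kind of signal Lancashire and Hirst reported.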

Text Analysis at Large

I think this work is really fascinating and exciting. It highlights just how much can be understood via text analysis. Some of this is already commonplace: we educators rely on software to detect plagiarism, and Facebook and Google are using these tools as well. One assumes that the NSA might rely on many of these same ideas to infer and predict information and characteristics about the author of a set of written statements. And if a computer can detect a person’s linguistic origin from English textual errors, I’d imagine it could be trained to mimic the same effects and produce English that looks like it was written by a native speaker of another language…but was not. That’s slightly unnerving…

Music and the Mind

As I am sitting down to write this blog entry, my younger daughter is practicing her piano lessons for the week. She will put in twenty minutes of practice, paying extra attention to counting (her teacher really likes her students to count). In the short term, she will progress to being able to play more complicated pieces, to play music (rather than just notes) and our living room will be filled with the sounds of elementary piano music.

Hearing our children play music is an undeniably wonderful thing.

But in the long term, there is increasing evidence that the time she spends on music instruction may have long lasting and beneficial effects on cognitive function, social behavior, and academic performance. That seems to be the conclusion of much of the contemporary research on the effects of music on the brain and mind.

Full disclosure, although I study cognition and thinking, this is not my area of expertise. I’m interested as a psychologist, but also as a parent and music lover. So I’m not endorsing anything in my professional capacity, I just find this work really fascinating.

The study of music and the mind had a dubious moment of fame in the 1990s, and everyone has heard of the “Mozart effect.” The idea, which was wildly over-interpreted by many, was that listening to music (specifically the music of Mozart) will “make you smarter.” Of course, the original paper did not make this claim, and the authors were clear that these were short-term effects of listening to a piece of music on subsequent performance on spatial reasoning tasks. But the public was so enamored of this finding that a whole industry was spawned (“Baby Einstein” DVDs), and the governor of the state of Georgia actually set aside money to make sure that every baby born in that state was given a classical music CD.

Although the idea that passive listening to classical music would make babies and kids more intelligent and more creative is erroneous, interest in music and the mind has not disappeared, and a few weeks ago, I came across several popular science articles that suggest a renewed interest in the topic. And this time, the claims are more credible and the possible benefits much more long lasting.

But what effects does music–either listening to, or playing–have on the mind?

There is robust evidence from Glenn Schellenberg’s lab at the University of Toronto that music instruction is directly linked to higher IQ scores. A paper from 2005 summarized this work and found that music instruction was correlated with improvements in spatial, mathematical, and verbal tasks. He writes, “Does music make you smarter? The answer is a qualified yes.” The reasoning is that music instruction seems to have these effects because it is school-like, requires attention, is enjoyable, and engages many areas of the brain. Learning about music also requires and encourages abstract thought. The suggestion here is that a person can identify the same tune even when it is played in a different tempo, instrument, or key because they have processed it as an abstraction. The “qualified yes” is that it is not clear whether music lessons are the only way to get this improvement; Schellenberg suggests that other kinds of private lessons (drama, for example) might show similar cognitive benefits.

But other research has begun to track the academic performance and brain function of students who engage in music instruction. A longitudinal study being run by Nina Kraus at Northwestern University is looking carefully at long term benefits of school-based music curricula (as opposed to private lessons as in the Schellenberg study). In essence, music instruction in school seems to improve children’s communication skills, attention, and memory. Kraus’s team is also examining the neural correlates to these benefits and even finds that the auditory processing advantages and neural changes that come from music instruction are robust into adulthood. In other words, if there are cognitive and perceptual enhancements from studying music as a child, these changes may persist long after music instruction is over.

Finally, a recent editorial in the New York Times asked “Is Music the Key to Success?” The author notes that many very successful people benefited from extensive music training. Alan Greenspan, Steven Spielberg, Larry Page, Paul Allen, Condoleezza Rice, and others were (and are) trained musicians. This is not to say that piano lessons at age 6 = future Secretary of State, and of course the Op-Ed asks, “Will your school music program turn your kid into a Paul Allen, the billionaire co-founder of Microsoft (guitar)? Or a Woody Allen (clarinet)? Probably not.” But the correlations are there, and the evidence (including the more rigorous studies above) is compelling.

The message is: learn to play music, or have your children learn an instrument.

Obviously, there is no evidence that instruction in music produces negative effects. None. So why do schools and school boards sometimes look to cutting music and arts programs as a way to make ends meet? Just this year, the Toronto school board decided (controversially) to make some severe cuts to its music program, and this problem is province-wide (though thankfully not at our kids’ public elementary school…we have a great music program). And this problem is not unique to Ontario, of course. California has seen its school music programs decimated.

This is not a good idea.

My point is, there is ample evidence—even when viewed with a skeptical eye—that music instruction has tangible benefits and no demonstrated downside. If anything, I’d argue for more music instruction in schools. We’d likely see wide-ranging cognitive and academic benefits as a result. But if nothing else, we’d create more musicians.

Gladwell versus the academy (a modern David and Goliath)

I’ll start with an admission: I have never read any of Malcolm Gladwell’s books.

It’s nothing personal or principled; I just never got around to it. I tend to prefer reading fiction in my spare time anyway. I have enjoyed some of his essays in the New Yorker, but that’s about it. So I am not writing about the content of his books. I’m writing about the reception his books receive, the criticisms, and the apparent belief by many that he’s a scientist. This, it seems, really bothers some actual scientists.

Malcolm Gladwell is an enormously successful and gifted writer. No one can argue with this. His books Blink, The Tipping Point, and Outliers have made accessible to many people outside the academic and scientific world some of the most interesting and exciting ideas in cognition, social psychology, and neuroscience. He has had a long career as a journalist, is well read, and he’s no Jonah Lehrer…

With each book, Gladwell’s stature has grown, but I have noticed that the reaction from academics has been less than enthusiastic. Many feel that he misunderstands (or worse, misrepresents) the scientific studies upon which many of his books are built. Dan Simons and Chris Chabris are two of the more vocal critics, and they are both well-respected and well-known scientific psychologists. They argued (in an article posted in the Chronicle of Higher Education) that many people were overly enthusiastic about the premise of Blink, namely that intuition can produce better outcomes than analytic cognition. It’s not that they necessarily thought the book was wrong so much as that they felt everyone was misinterpreting what it was about. In fact, Simons and Chabris are the authors of The Invisible Gorilla: How Our Intuitions Deceive Us, which argues that human intuitions can be very deceptive. The title, by the way, refers to one of Simons’s most well-known experiments.

They are not the only vocal critics. Steven Pinker is probably closer to Malcolm Gladwell in terms of being a public intellectual (and he has received his fair share of criticism as well). And he too is critical of Gladwell’s books, for some of the same reasons. In a review of Outliers, Pinker writes that “The reasoning in ‘Outliers,’ which consists of cherry-picked anecdotes, post-hoc sophistry and false dichotomies, had me gnawing on my Kindle.”

So now Malcolm Gladwell has a new book, David and Goliath. As I mentioned before, I have not read this book, so I make no attempt to provide my own critique. But one anecdote in particular seems to have garnered a lot of attention. Gladwell discusses several stories of people who became very successful despite having dyslexia. His thesis seems to be that having dyslexia made it just a little harder for these people to get by, so maybe they worked a little harder, compensated for the dyslexia, and thus achieved greatness. Gladwell calls this “the theory of desirable difficulty.” He bases this (apparently) on a study from 2007 in which subjects who read a mathematical reasoning problem in a hard-to-read typeface actually outperformed subjects who read the same problems in an easier-to-read typeface. So there may be a connection, but there may not be.

In a recent review in the WSJ, Christopher Chabris takes Gladwell to task. He points out that the 2007 study in question has not replicated well. He wonders why Gladwell does not point this out. He wonders why Gladwell asserts as “laws” phenomena with many possible interpretations. The review is critical, and very good, and points out what I really think people should be aware of when they read Gladwell’s books: namely, that they contain interesting anecdotes mixed with science, and that the writing is very good and persuasive. This need not be a bad thing, and Gladwell and his supportive critics point out that this is a great narrative form and is exactly what makes Gladwell so good. Stories matter. Narrative matters. But the expanded version on Chabris’s blog went further; Chabris worries that Gladwell knows full well that people over-interpret his books and simply does not care. He writes, “I can certainly think of one gifted writer with a huge audience who doesn’t seem to care that much. I think the effect is the propagation of a lot of wrong beliefs among a vast audience of influential people. And that’s unfortunate.”

Ouch.

Is this envy? I do not think so. Simons and Chabris are successful authors in their own right. So is Steven Pinker. But the difference is that they are also successful academics and researchers. Chabris makes the point that many people simply consider Gladwell to be an authority, rather than an author. The term “Gladwellian” exists.

The review was critical enough to cause Mr. Gladwell to respond on Slate.com. Gladwell suggested that “Chabris should calm down,” and he even took a mild swipe at Mr. Chabris’s wife. Why so personal? I will confess that I did not find Gladwell’s Slate response very flattering. It came across as arrogant and dismissive. Does Gladwell imagine himself as the David and the academy as the Goliath? Possibly, though I’m inclined to think the opposite. Gladwell’s “brand” is so big that he is very likely the Goliath in this fight. And (in keeping with the thesis of his new book) his gifts, his incredible writing talent, may very well be what brings him down.

In the end, I’m glad that this debate is even able to happen. I’m glad that there is a journalist and writer like Malcolm Gladwell who is interested and excited enough by human behavior and psychology to write best sellers. And I’m glad that there are serious and respected scientists like Chabris and Simons to call him out when the claims go too far.

In the course of following these criticisms and counter-criticisms, I’ve become much more interested in reading this work. I fully plan to read Gladwell’s book of essays (What the Dog Saw) and some of his other books. As well, I’m planning to read Simons and Chabris’s book too. All concerned parties can rest assured that I’ll be checking them out of my public library soon, and that no actual cash will flow.