14 votes

Immediate effects of mobile phone app for depressed mood in young adults with subthreshold depression: A pilot randomized controlled trial

11 comments

  1. [11]
    MasterCEO
    "Background: Preventive interventions for subthreshold depression (StD) are essential to reduce the incidence of major depressive disorder. Our smartphone application presenting positive word stimulation in video (ie, Subliminal Priming with Supraliminal Reward Stimulation, SPSRS) was suggested to improve depressive symptoms in people with StD, although it is unclear whether it can immediately improve depressed mood. This pilot randomized controlled trial (RCT) aimed to investigate the preliminary efficacy of SPSRS application intervention on depressive mood in people with StD.

    Methods: Thirty-two participants with StD were randomly assigned to the experimental (n = 16) or control group (n = 16). The experimental group received SPSRS application intervention (10-minute video with positive word stimulation) and the control group received YouTube application intervention (10-minute video without positive word stimulation). Both groups used identical iPhones managed by the research team. The primary outcome was the change from baseline in depression-dejection on the Profile of Mood States 2nd Edition-Adult Short (POMS 2-A Short) after the intervention.

    Results: No participants dropped out of the study. The experimental group showed a small improvement in depression-dejection on the POMS 2-A Short score (adjusted Hedges’s g = − 0.32) compared to the control group. Post-hoc power analyses estimated a sample size of 56 per group (112 total) to evaluate depression-dejection on the POMS 2-A Short in a future full-scale RCT.

    Conclusion: SPSRS application intervention may be effective in immediately improving depressive mood in people with StD. A future full-scale RCT based on a formally calculated sample size should be conducted to replicate these findings."
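
    The post-hoc sample-size figure can be sanity-checked with the standard normal-approximation formula for a two-sample comparison. This is only a sketch: the abstract doesn't state the paper's exact alpha/power assumptions, so the conventional two-sided alpha = 0.05 and 80% power used below are assumptions, and the `n_per_group` helper is illustrative rather than the authors' method.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison.

    Uses the normal-approximation formula
        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2,
    which slightly undercounts relative to an exact t-test calculation.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # ~1.96 for alpha = 0.05, two-sided
    z_beta = z(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# The observed effect (g = 0.32) would need far more than 56 per group
# under these conventional assumptions:
print(n_per_group(0.32))  # 154
# 56 per group corresponds to powering for a larger effect, d ~ 0.53:
print(n_per_group(0.53))  # 56
```

    Under these assumed conventions, 56 per group implies the power analysis targeted a larger effect than the one observed in the pilot; if the paper used different alpha or power targets, the numbers change accordingly.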

    3 votes
    1. [10]
      UniquelyGeneric
      Call me jaded, but the behavioral economist in me assumes that if this study is taken to a business application it won’t be to limit the amount of negative user comments someone is exposed to, but rather to balance out negative news with out-of-touch, positively affirming advertisements so you remain a happy consumer. “You’re beautiful just the way you are. Let $PRODUCT_NAME show the world your inner truth.”

      I’m also not a statistician, but the sample size here feels low and the conclusions suspect (in the sense that an hour of positive feedback feels like brainwashing on a small scale).

      That being said, the effect in principle has been known for a while. Facebook infamously performed a far larger study of social media’s influence on happiness that showed positively biased news/posts lead to positive user sentiment (how surprising!). Heck, advertising after the 50s has basically devolved into overt manipulations of your mental state for corporate profit. It’s a “tale as old as time”: a phrase which poignantly evokes Disney’s omnipresence over the cultural zeitgeist…reiterating my overall point.

      8 votes
      1. [9]
        Astrospud
        You're not wrong - the sample size is wayyyyy too small. It's also a relatively new concept study rather than an attempt to reproduce someone else's results, so this whole study is only of use to future studies. I'm not a fan of people posting studies like this with a title that suggests the intervention actually does something, instead of something like "study needs further research, but early steps of positive word exposure may help reduce instances of developing depression."

        7 votes
        1. [8]
          unknown user
          To be fair the title did state that it's a pilot study, though I agree with your general sentiment. We don't often see primary research papers and having them mixed in with general news articles like this could give the wrong impression (and expectation). Since mods can edit titles, maybe we could start standardizing a format to distinguish these? @cfabbro

          Edit: This is actually one of my pet peeves with r/science: in most posts there are always comments about the same things, like the sample size being too small or correlation not being causation. Those aren't wrong, but they are also the expected limitations of preliminary studies. Maybe it'd be more productive to set the right expectations upfront so we don't need to mention them every time.

          Edit: switched some words around so it's less inflammatory.

          3 votes
          1. [7]
            cfabbro
            I'm not really sure what can be done to fundamentally change user impressions or expectations with just a title edit when it comes to study links. Especially since, as you said, the title already clearly states it's just a pilot study. I absolutely agree with you that the ubiquitous sample size complaints on /r/science are annoying AF though, even if they are sometimes a valid criticism.

            However, this sort of thing is well outside my wheelhouse, so I will gladly defer to more experienced people when it comes to the best ways to moderate ~science here on Tildes. So to that end, @CosmicDefect, @Nate (and any other /r/science mods lurking around here), your opinions and advice on these issues would be greatly appreciated.

            2 votes
            1. [5]
              CosmicDefect
              I don't particularly have a good solution to this. Public literacy about how science publication works is quite low even among technically minded people, which is why we're inundated with commentary about things like sample size. While much of scientific methodology is opaque to average readers, the number of, say, trials or people involved is a simple-to-grasp concept, which is why people gravitate towards commenting on it even when the sample size is perfectly valid for the type of study done. I mean, the paper itself concludes with:

              A future full-scale RCT based on a formally calculated sample size should be conducted to replicate these findings.

              Medical research is well outside my wheelhouse, but this appears to be a paper which passed peer review and is published in a legitimate journal (impact factor of about 3). I feel like distinguishing it from any other publication by virtue of being a pilot study or preliminary work attaches a rather unfair stigma to a perfectly normal type of research product.

              One suggestion I can make is to perhaps start including peer reviewed and not peer reviewed (an alternative would be published research and preprint) tags on research article submissions to ~science. For example, the recent threads on the superconductor buzz are explicitly about unpublished preprints which haven't been vetted by anyone. Making sure such posts are clearly marked as preprints would be a good thing imo. So perhaps we need three designations for articles posted in ~science, with tags:

              • Peer reviewed published research (from journals directly)

              • Unpublished preprints (mostly from arXiv, or any other preprint host)

              • Science news or journalist articles (pop science, etc...)

              I don't know how messy this would get, but maybe primary research and preprints should have a specialized tag like how spoilers or nsfw get a little colorful tag which is distinct from normal tags. While Tildes isn't just about ~science, I don't see it being too big a deal considering how spoiler threads on ~tv or ~anime aren't a bother to most folks. And people can just unsubscribe from tags they don't like anyway.

              2 votes
              1. [4]
                cfabbro
                I just wanted to acknowledge your comment while I had a free moment, and thank you for taking the time to reply and offer your advice. I'm pretty busy this weekend, but when I get some more free time in a few days I will have a proper think about what you've suggested and write a full response (and probably create some related feature requests on GitLab). So, thanks. Your thoughtful reply is much appreciated. :)

                p.s. Preliminary thought: I think a lot of that will ultimately have to wait until we get more dedicated users with specialized knowledge who are willing to take on the responsibility of looking after individual groups like ~science, since I wouldn't even know how to evaluate a study well enough to tag it with details about peer review and such. And I suspect @mycketforvirrad is overburdened enough already just trying to keep up with adding generic tags to all the new submissions across the entire site, without taking on an additional task like that as well.

                cc: @Deimos, since I think you might appreciate reading @CosmicDefect's thoughts above as well.

                2 votes
                1. [3]
                  CosmicDefect
                  My pleasure. :)

                  I think a lot of that will ultimately have to wait until we get more dedicated users with specialized knowledge who are willing to take on the responsibility of looking after individual groups like ~science, since I wouldn't even know how to evaluate a study well enough to tag it with details about peer review and such.

                  The process I'm suggesting would not be evaluating the articles themselves in any real manner. That's way too much effort and nobody is an expert in everything. Sorry, I'm using language which might not be clear. When I say an article is peer reviewed, it means that the article is published in a reputable journal that had someone else (usually a third-party expert in the topic) already vet the article.

                  So, for example, this article was published in The Astrophysical Journal Letters, which means it's a peer reviewed article and the journal itself is vouching for its scientific value. The basic gist is (a) check that the link is to a real journal, then (b) tag it as a published work. Otherwise, if the link is from arXiv, medRxiv, or any other preprint host, it's still unpublished, unvetted work. Unless the journal or article is open access, however, people might prefer to post the preprint since it's free to read.

                  To give a sense of scale, though: just browsing the frontpage of ~science, which covers about two weeks of posts, I see about four submissions linking directly to published scientific works and four to preprints. The majority are articles written as press releases by research orgs or by journalists from news outlets. I don't think those kinds of posts need any special treatment; they should just be treated as normal posts.

                  So, this isn't a huge problem here or anything on the platform. Most people don't link directly to research, they post news articles or blogs.

                  1 vote
                  1. [2]
                    cfabbro
                    I wasn't suggesting mods peer review anything themselves... :P What I meant was that I don't even have the slightest idea how to recognize whether something is peer reviewed in the first place, and I have no idea which journals are legitimate/reputable either. That's what I was referring to about needing at least some specialized knowledge to evaluate such things.

                    1 vote
                    1. CosmicDefect
                      Ah, gotcha. My bad for misreading.

                      1 vote
            2. [2]
              Comment deleted by author
              1. CosmicDefect
                It is particularly troublesome to me that the moderator(s) is/are not specialists like they (sort of) are on the reddit page

                To clarify, the only "mod" on Tildes with comment pruning and general powers is the site admin Deimos. Everyone else falls into two groups: Those who can label comments as a sort of communal quality assessment (Exemplary, Offtopic, Joke, Noise, Malice) or the small subset of users who can move posts, edit titles and tags in addition to labelling.

                but the reddit mods fail to uphold their rules, so I'm not sure there's really a difference at the end of the day, except as regards general discussion quality.

                I popped into /r/science for a quick looksee, and while I can't tell whether modding activity has substantially changed since the API drama, I still see active mods removing junk posts in most threads there. But it takes thousands of comments and clicks to do all this, so like Sisyphus rolling that rock up the hill for eternity, it's always a battle of endurance.

                2 votes