A friend just sent me this; thought it might be of interest. I haven't read the study yet, but it shows how important careful study design is when trying to legitimize the use of these substances through official investigations.
1. The unbalanced design compromises a lot of things, particularly your ability to evaluate the effects of the therapy protocol alone. They seemed much more interested in comparing MDMA vs. placebo than placebo+therapy vs. MDMA+therapy. That is one way to look at the problem, but I'd argue it's the less valid of the two.
2. It’s pretty hard to argue for an effect when your control group is not functioning as a control, and your blinding is nonexistent.
jamie said: kinda weird to criticize the double-blind part based on the inevitable fact that people are gonna know they got MDMA once it kicks in. What kind of moron doesn't understand that? That whole point just sounded stupid to make.
What's the point of giving people too low a dose so that they don't know if they got anything? How is that useful for a study like this?
3. The non-standardized dosing (the optional additional dose) presents a big confound.
4. The variability of the therapy sessions and of the post-session drug treatment (with sleep aids and benzodiazepines, which I view as a considerable blunder) again reduces the validity of the results. We now have a number of confounding factors that distract from the main effect they attempt to demonstrate.
5. This is a tiny study, including just 20 people (12 MDMA, 8 placebo followed by crossover MDMA). Their population was primarily female and primarily survivors of child abuse or sexual trauma, so it's also a big jump to assume the results will generalize to all populations and all causes of PTSD.

Well, you have to start from somewhere, right? A psychiatrist or psychologist could plausibly argue that the neuropathology or cause-and-effect relationships across different presentations of PTSD are similar, so the MDMA results could apply to other groups as well.
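To get a feel for why n = 20 is so limiting, here's a rough power calculation. This is only a sketch using statsmodels; the effect size of d = 0.8 is an assumed "large" effect for illustration, not a number taken from the study:

```python
# Rough power sketch for a two-sample comparison with the study's group
# sizes (12 MDMA vs. 8 placebo). The effect size d = 0.8 ("large" by
# Cohen's convention) is an assumption, not a value from the study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(
    effect_size=0.8,       # assumed large effect (Cohen's d)
    nobs1=12,              # MDMA group
    ratio=8 / 12,          # placebo group of 8
    alpha=0.05,
    alternative="two-sided",
)
print(f"power ≈ {power:.2f}")  # well below the conventional 0.80 target
```

Even under a generously large assumed effect, groups this small leave the comparison badly underpowered, which is part of why the single-study result is hard to generalize.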