Nathanial.Dread said:
It doesn't make any predictions that can be tested and it doesn't answer any questions, it just shuffles them around a bit. We're no closer to solving either the Hard or Soft problems, we've just cluttered the picture a bit with a bunch of other stuff that (from my perspective) lacks sufficient evidence to make it worth taking seriously. Radin lists a lot of studies on his site, but looking at them, a lot of these studies seem to be small, heavily reliant on p-values to make their claim, and reproducibility seems low.
You cited the psi stuff in the other thread you linked to, and to be honest, so far, I just don't buy it. Extraordinary claims, extraordinary evidence and all that, and so far, I haven't been wowed. I'll happily eat my words if/when the time comes, but I'm pretty conservative about these things.
I'd obviously have to disagree with your criticism of psi research. Most of the criticisms above are empirically false, as I'll try to show below. Of course, I didn't expect you (or anyone) to exhaustively look into the entire body of research that has been published on psi (that would just be plain cruel lol) - however, this research should not be cavalierly dismissed. Regarding Radin's page, I'm not sure you looked carefully enough, b/c the points you raised don't match the bulk of the data from the studies listed there. (However, it is true that some paradigms have been less successful than many of the others, so it's important not to generalize.)
Before I address psi, a word on OBEs (if you'd like complete citations for what I refer to, please ask). When you speak of OBEs being induced in a lab, I suppose you're referring to Olaf Blanke's brain-stimulation studies with epileptic patients. A) It's not at all clear that genuine OBEs were in fact induced in these experiments. If one reads the studies as written by Blanke et al. (and not merely the articles about them in popular science news outlets), it's clear from the patients' reports that the phenomenology of the experiences induced by stimulation differs significantly from that of spontaneous OBEs. Blanke's study has been criticized on these and other grounds by Greyson, Parnia, and Fenwick (2008) and Holden, Long, and MacLurg (2002). B) Even if genuine OBEs had been induced in these experiments, that wouldn't necessarily negate their reality. For instance, it would be compatible with theories of the brain as a filter. It's entirely feasible that OBEs (and related phenomena) are mediated by the brain without being reducible to it. As Aldous Huxley often said (in relation to mystical and psychedelic experiences), certain neural events "are the occasion rather than the (sole) cause". (NDEs are a topic too deep to get into in this comment, but I will note that the majority of researchers who have directly studied NDEs prospectively in the past two decades have expressed serious doubts that the phenomena can be reductively explained, and have forcefully argued against such a position (e.g., van Lommel, Parnia, Greyson, Sartori, Fenwick, Beauregard, Ring, Sabom, et al.).)
On to psi: Firstly, I mentioned "psi" in response to Valmar's thread, which asked "is consciousness a product of the brain or is the brain a receiver of consciousness?" Psi seems to provide evidence that consciousness has non-local properties of some kind, which would suggest that mind or consciousness extends beyond the boundaries of the ordinary sensory channels. Thus, psi could be evidence in favor of a "receiver model" of consciousness. Secondly, psi research absolutely makes predictions that can be tested - and they have been tested, many successfully. The whole history of research done in this area goes against the contrary assertion.
Yes, psi research is evaluated statistically - with both frequentist and Bayesian statistics. But if you are dismissing the findings simply because of this, then you also have to discard the vast majority of findings from the psychological and behavioral sciences, as well as many of the findings in modern physics and biology. All of these fields are evaluated in precisely the same manner, using statistical methods of analysis. If there are problems with applying statistical tests in empirical science, such problems would not be endemic to psi research alone. (I should mention that Dr. Jessica Utts, president of the American Statistical Association in 2016, has evaluated many of the research reports and meta-analyses in the study of psi. In several publications, she has demonstrated that psi phenomena are statistically robust, highly significant, and replicable (Utts, 2015, in May and Marwaha (Eds.)) - and she is a major proponent of the research. So I find it quite unlikely that there are any statistical errors being made in the assessment of these studies.)
I'm not sure what you mean by "these studies are small". If you're referring to how many participants were involved, then that varies by study. If you're referring to the sheer number of studies in total, then the statement is false, as there are literally several hundred studies that have demonstrated statistically significant effects in favor of the experimental hypothesis. It's true that the effect sizes in this research are considered "small to medium". However, this in and of itself is certainly not grounds for rejection, as one must of course also consider the statistical significance and replication rate of the studies in tandem. Note that the effect of aspirin on reducing heart attacks has a much smaller effect size than the effects routinely observed in psi research. The same holds true for the evidence for the Higgs boson (which was evaluated statistically, btw).
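To make the effect-size-vs-significance point concrete, here's a minimal sketch (hypothetical numbers, not from any actual psi study) of why a small effect is not the same thing as weak evidence: with a fixed hit rate just above chance, the z-score against chance grows with the square root of the number of trials.

```python
from math import sqrt

def binomial_z(hits, trials, p0):
    """Normal-approximation z-score for observed hits vs. a chance rate p0."""
    expected = trials * p0
    sd = sqrt(trials * p0 * (1 - p0))
    return (hits - expected) / sd

# Hypothetical forced-choice setup: 26.5% hit rate where chance is 25%
# (a small effect), evaluated at two different sample sizes.
small = binomial_z(hits=265, trials=1000, p0=0.25)      # ~1.10: unremarkable
large = binomial_z(hits=26500, trials=100000, p0=0.25)  # ~10.95: overwhelming
print(round(small, 2), round(large, 2))
```

The effect size (a 1.5-point bump over chance) is identical in both cases; only the amount of data differs. That's why small effects, aggregated over many trials and studies, can still yield very high significance.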
It is also false that the reproducibility rate of psi studies is "low", or low enough to warrant discarding the evidence. Baptista and Derakhshani (2014, 2015) and Utts (1991, 2015) assessed this very question of replication rates and found that, overall, the replicability of psi research is comparable to that of mainstream psychological research. A number of paradigms assessed in psi research were found to have higher replication rates than the average study published in mainstream psychology, the latter of which is normally accepted as fact. Additionally, the statistical power of psi studies was found to be comparable to that of mainstream research published in prominent psychology (Rossi, 1990), social psychology (Richard, Bond, & Stokes-Zoota, 2003), and neuroscience (Button et al., 2013) journals - in fact, for certain classes of psi experiments, statistical power was significantly better than in these mainstream areas of research.
You mention another well-worn maxim, "extraordinary claims require extraordinary evidence." I believe this claim has hardened into something of a dogma, accepted uncritically and often wielded in a way that allows one to cursorily "explain away" a number of troubling experimental results. It's also notoriously difficult and subjective to determine both what is extraordinary and what constitutes extraordinary evidence. I see no uncontroversial way of defining either term. (We're all familiar with a heap of facts about quantum physics that seem incredibly bizarre and in conflict with common sense - hence "extraordinary".)
Nevertheless, let's take a quick look at a handful of relevant meta-analyses. A number of paradigms in psi research that have been subjected to scrutiny demonstrate evidence robust enough to warrant being taken seriously: roughly 6-sigma results (as z-scores) and high levels of statistical significance - far beyond the customary p < .05 threshold for rejecting the null hypothesis. Off the top of my head, I can think of roughly 3-4 times as many meta-analyses in addition to these, with statistical results in roughly the same ballpark.
- Presentiment / physiological anticipation of unpredictable stimuli (Mossbridge, Tressoldi, & Utts, 2014): 26 studies, z = 6.9, p < 2.7 × 10^-12
- Ganzfeld (Storm, Tressoldi, & Di Risio, 2010): 108 total studies, Stouffer Z = 8.31, p = 10^-16
- Non-ganzfeld free response (Storm, Tressoldi, & Di Risio, 2010): 16 studies, Z = 3.35, p = 2.08 × 10^-4
- Precognitive habituation (Bem, Tressoldi, Rabeyron, & Duggan, 2016): 90 studies, z = 6.40, p = 1.2 × 10^-10
- Forced choice (Storm, Tressoldi, & Di Risio, 2012): 72 studies, Stouffer Z = 4.86, p = 5.90 × 10^-7
- Forced choice (Honorton & Ferrari, 1989): 309 studies, Stouffer Z = 6.02, p = 1.10 × 10^-9
- Free response (Milton, 1997): 78 studies, Stouffer Z = 5.72, p = 5.40 × 10^-9
- Remote viewing (Jahn & Dunne, 2003): 653 tests, composite z-score against chance of 5.418, p = 3 × 10^-8
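For anyone unfamiliar with the "Stouffer Z" statistic that several of these meta-analyses report, here's a minimal sketch of how it combines independent study results (the z-scores below are made-up illustrations, not data from any of the studies listed above):

```python
from math import sqrt
from statistics import NormalDist

def stouffer_z(z_scores):
    """Stouffer's method: combined z = sum of study z-scores / sqrt(k)."""
    return sum(z_scores) / sqrt(len(z_scores))

# Hypothetical z-scores from five small studies, each individually modest:
zs = [1.2, 1.9, 1.0, 2.1, 1.6]
z_combined = stouffer_z(zs)
# One-tailed p-value for the combined statistic
p = 1 - NormalDist().cdf(z_combined)
print(round(z_combined, 2), p)
```

The point of the method: none of the five toy studies is individually "significant", but under the null hypothesis their z-scores should average out to zero, so a consistent positive drift across independent studies compounds into a much stronger combined statistic.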
Is this "extraordinary evidence"? I don't know - but it's certainly far beyond what would be sufficient for accepting any "mainstream" phenomenon. I think it warrants being taken seriously.