
How would you propose we differentiate between students "playing along" and students really having false memories? Couldn't I discard any psychological study by saying "well, the participants could have faked it", since it's by definition not falsifiable (any test could also be faked)?


Yes, you could discard any study under that rationale, among others. And the interesting thing is you'd likely be far more 'informed' for doing so. Social psychology has the lowest replication rate of any field: top journals show replication rates in the twenties. [1]

What this means is that if you were to take any given study in social psychology and simply assume it was 'wrong' (either saying the opposite of the truth, or claiming a statistically significant effect where there really is none), then you'd, on average, be dramatically more 'informed' than somebody taking the studies at face value. [2]

[1] - https://en.wikipedia.org/wiki/Replication_crisis#In_psycholo...

[2] - https://en.wikipedia.org/wiki/Pseudoscience
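A back-of-the-envelope sketch of that claim, assuming (purely for illustration) that a ~25% replication rate can stand in for the probability that a given study is 'right':

```python
# Hypothetical illustration: if only ~25% of studies replicate, and we
# treat "replicates" as a proxy for "correct", then a reader who assumes
# every study is wrong is right ~75% of the time, versus ~25% for a
# reader who takes every study at face value.
import random

random.seed(0)
REPLICATION_RATE = 0.25  # assumed, in line with the rates cited above
N = 100_000

studies = [random.random() < REPLICATION_RATE for _ in range(N)]

credulous_accuracy = sum(studies) / N                # believes every study
skeptic_accuracy = sum(not s for s in studies) / N   # assumes every study is wrong

# The two readers' accuracies are complementary by construction.
assert abs(credulous_accuracy + skeptic_accuracy - 1.0) < 1e-9
```

This only formalizes the comment's arithmetic; it says nothing about whether "didn't replicate" actually implies "the opposite is true", which a sibling comment below disputes.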


I don't want to discount the replication crisis, but I'd like to ask you the same question again: how, specifically, do you design any psychological study that you can't discard with "well, they could be faking it"?


The fact that it's extremely difficult or impossible to design a convincing study doesn't make the existing ones convincing.


I am not saying that any study is or is not convincing. I am literally just asking: how could you define a test for this? I am not stating that any individual study is good or bad or anything in between. I want to know: if I design a study, how do I design it in a way that makes it a good study in regards to this question?


I don’t think it is reasonable to expect a person in a sort of informal online conversation like this to be able to define a bulletproof experiment for the topic.

Let’s apply a similar logic to something that is clearly possible to design an experiment around. How about detecting neutrinos? If we were commenting on an article about detecting neutrinos by dropping a piece of paper and seeing if it bounced around a bit on the way down, I think we’d all identify that as not a very good experiment. But, I could not design a very good experiment for detecting neutrinos. And, this doesn’t say much about whether or not neutrinos exist, I’m just not a physicist.

So, if posed with that question, I would not have a good response. Despite this, I would still believe the dropped piece of paper experiment was not very good.


> I don’t think it is reasonable to expect a person in a sort of informal online conversation like this to be able to define a bulletproof experiment for the topic.

I am allowed to ask, right? It doesn't seem crazy to me that when someone criticizes a study, someone else will ask "how could you solve the problem you describe?". I would like to know if someone has any idea on how to falsify it, because if someone were to find one, it might convince me of the criticism.

> Let’s apply a similar logic to something that is clearly possible to design an experiment around. How about detecting neutrinos? If we were commenting on an article about detecting neutrinos by dropping a piece of paper and seeing if it bounced around a bit on the way down, I think we’d all identify that as not a very good experiment. But, I could not design a very good experiment for detecting neutrinos. And, this doesn’t say much about whether or not neutrinos exist, I’m just not a physicist.

I don't think that is a good comparison. A better one would be: there is an experiment, and someone comments on the results by saying "well, it could have been dark matter particles interacting in some way". That may well be true, but it's not definitive if they don't explain how to measure and isolate that effect. Someone else asking how they would design the experiment to take this into account seems like a completely reasonable ask to me.


I don't think you're asking an unfair question. It's now been answered, by many people, to the effect of "this study isn't convincing, but we don't see a simple way to design a convincing one".


I suppose it has. The answer is (hopefully understandably) disappointing, since it can be used to discard any scientific efforts involving subjects, but it's what I expected.


You'd need the participants not to know they are part of a study.

They'd need to be in a situation where they believe that the statement of their true beliefs is critical.

This isn't impossible (many studies do use primary sources which were not created for the purposes of a study, and where there was some incentive based on real-world circumstances for them to be truthful), but it is much more difficult and leaves gaps where insufficient primary sources materials addressing the questions of the study are available.

Psychologists are well aware that the current models for many psychology studies, which consist of either asking an extremely small sample of participants to sit in a computer lab at a university for 20 minutes answering questions, or sending out very loosely targeted emails soliciting participation, are both very inefficient.

But they gotta get published, cause that's how you put food on the table.


Was this not the whole point of the study in TFA comparing false memories of crimes to false memories of non-crimes? The assumption that people would be less willing to admit to a serious crime than to having been injured?

> Intriguingly, the criminal false events seemed to be just as believable as the emotional ones. Students tended to provide the same number of details, and reported similar levels of confidence, vividness, and sensory detail for the two types of event.


Well, maybe you don't? Science gives you the tools to know when you're wrong. If you can't falsify your hypotheses then you're not generating new knowledge. So maybe stop trying and do something else.

Which is not to say that we should stop psychological research- not at all. But maybe we should not treat psychology in the same way we treat science. Specifically, we shouldn't be trying to apply the scientific method in psychological research. Applying the tools of science in a domain where no scientific questions can be answered is only going to produce noise and confusion.

I know that would really hurt psychologists' pride but, for example, philosophers are not considered scientists and yet their contributions to human knowledge are difficult to deny.

Similar points have been made much more forcefully in "Cargo Cult Science", but I don't want to be mean myself. I'm just trying to say that I think there's far better ways to do psychology than trying to do it like it's physics or chemistry.


It might not be possible with current knowledge, putting psychology outside the realm of the sciences.


Measure changes in something the test subject isn't expecting and wouldn't realize they need to fake? E.g. if you had to sit in the waiting room for 5 extra minutes, your test score changes like this.

Measure differences below conscious control, like reaction speed to positive/negative/etc words after exposure to certain inputs.
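A minimal sketch of how such a below-conscious-control measure is typically scored (the reaction-time data here is entirely made up for illustration): compare mean response latencies to, say, negative versus positive words after some priming condition.

```python
# Hypothetical reaction times (milliseconds) for one participant:
# latencies when classifying positive vs. negative words after a prime.
positive_rts = [412, 398, 430, 405, 421]
negative_rts = [470, 455, 488, 462, 479]

def mean(xs):
    return sum(xs) / len(xs)

# The "priming effect" is the difference in mean latency. The idea is
# that a participant cannot easily fake millisecond-scale differences
# deliberately, unlike a questionnaire answer.
priming_effect_ms = mean(negative_rts) - mean(positive_rts)
```

The scoring itself is trivial; the methodological weight rests on whether millisecond latencies really are outside deliberate control, which the reply below questions.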


> Measure changes in something the test subject isn't expecting and wouldn't realize they need to fake? E.g. if you had to sit in the waiting room for 5 extra minutes, your test score changes like this.

I can discard this by saying "well, the subject might behave subconsciously differently when they know they are in any experiment". How would you refute that?

> Measure differences below conscious control, like reaction speed to positive/negative/etc words after exposure to certain inputs.

What if there are phenomena that don't correlate with the differences you know to measure?


This is too vague. How would you test the hypothesis that this study is testing by measuring such unrelated signals?


Perhaps this hypothesis simply is harder to test scientifically. It seems lots of claims in psychology are not verifiable.


> What this means is that if you were to take any given study in social psychology and simply assume it was 'wrong' (either saying the opposite of the truth, or claiming a statistically significant effect where there really is none), then you'd, on average, be dramatically more 'informed' than somebody taking the studies at face value.

The parenthetical seems like a problem to me. In the sense that, maybe you are right and most psychology research tells us nothing. But that won’t tell us the opposite is true. There are lots of possible outcomes in a psychological experiment, “the opposite” is ill defined.


You should indeed be extremely skeptical of studies whose only dependent variable is a voluntary participant response and where the participant has an incentive to tell the experimenter what they think they want to hear (for example, to make the experiment end faster).


How would you propose, in this specific study, they verify what you want? Please be specific regarding what questions you want asked, when they should be asked, and how these questions can't be discarded by saying "well, they could be faking it".


I think this whole experimental paradigm is inherently unfixable.


So we just stop studying psychology completely?


Or the people undertaking these studies (and more importantly, the people peer reviewing and publishing the work) push back on broad conclusions that aren't substantiated by the research.

Having seen the nonsense my partner has to put up with in order to get a post-graduate qualification to practice as a professional psychologist I'm not holding my breath. From where I'm sitting the replication crisis in this field looks like a crisis of incompetence and apathy.


> broad conclusions that aren't substantiated by the research.

But these aren't broad conclusions, they are specific conclusions, and they are substantiated by the research. You're just positing that the results could be entirely fake on a whim.


Designing good studies is hard. That doesn’t mean we should just roll over and be satisfied with bad studies. The same applies to most studies about human diet and nutrition.


That is why I asked for some way to test for what GP complained about, and the only answer I got was "there is no way". Do you have a way to design a study so the complaint "well, the participants could be faking it" isn't valid? I'm legitimately asking, so far all I've gotten was "no way".


Not sure if I replied to you further up, or someone else, but in any scientific study you should be attempting to minimize the impact of the observation method used on the outcome of the data, and when it comes to answering questions truthfully, someone being aware that they are participating in a study does not do that at all. They should be using data from real-world situations in which people did not have an incentive to lie. Problem is, those are hard to find, and most studies are done in order for the researchers to get paid so they can pay rent and buy food.

There is far more incentive to conduct shoddy research in psychology (since, as a corollary to it being hard to produce good evidence for a claim, it's also hard to produce good evidence against one, which is how non-reproducibility is often not treated) than there is to proceed only with the studies for which you actually have sufficient known-good data.

We have spent billions of dollars on single technology projects in order to answer questions about physics, like the LHC, because without those we often cannot actually get the data needed to answer certain questions.

Nothing equivalent is done in social psychology. There are no massive, multi-use infrastructures being built to engage with it.

While that is not the fault of social scientists, but of the devaluation of their field by the society around them, it still does mean they often do not have the necessary tools to do their jobs correctly.


> So we just stop studying psychology completely?

That's not the only option. Why is it the only one you can think of?

I mean, where are you going with "Our only options are to draw the wrong conclusions, or to stop studying it altogether"?


> That's not the only option. Why is it the only one you can think of?

It's not the only one I can think of, but no other options have been presented to me. I have been asking for solutions, but nobody has provided any that can't be dismissed on the same grounds.

> I mean, where are you going with "Our only options are to draw the wrong conclusions, or to stop studying it altogether"?

Where did I write anything close to that? I am not saying anything about drawing conclusions, I am literally asking: how would you design a test that measures the proposed problem? Why are you accusing me of advocating for "drawing the wrong conclusions"?

It's incredible how many people want to read my comments as advocating for something they are not. If you feel attacked because I'm asking how we could measure your idea, maybe I'm not the problem, but your idea is?


> It's not the only one I can think of, but no other options have been presented to me.

Well, then, with all due respect, if you already know of other options, why are you asking?

> I'm asking how we could measure your idea

Who cares? Even if you cannot "measure" (whatever that means) the alternatives, the fact still remains that people are drawing conclusions from incorrect and invalid data.

That's the point, really - the field of psychology is bereft of replicable studies.

Observing that the way a particular field performs research results in invalid conclusions doesn't put any obligation on the observer to provide alternatives.

It's enough to point out that a thing is wrong; there's no requirement to also provide the correct answer.


> Well, then, with all due respect, if you already know of other options, why are you asking?

If someone engages in a conversation with me and presents a thought, I would like them to present it fully. If there are obvious flaws or implications, I try to reflect them, both so I can be sure I correctly understood the other party, and to give them a chance to fill in those holes. I do not wish to just inject my own ideas everywhere, I want to understand people as they express themselves. Seems like a pretty normal way to talk to other humans to me.

> Who cares? Even if you cannot "measure" (whatever that means) the alternatives, the fact still remains that people are drawing conclusions from incorrect and invalid data.

And because you are declaring the data incorrect and invalid, that makes it so? You will either have to share the research credentials which allow you to judge it as such, or you'll have to answer the obvious questions that come after such a statement (e.g. "how can you prove that?"). It's pretty normal in science that instead of people just blurting out thoughts which are taken as truths, we reflect on ideas and ask questions that help us get to a better understanding, divorced from our subjective point of view. If the best you can come up with is "well, it's obviously true, duh" then your thought isn't as good as you think.

> It's enough to point out that a thing is wrong

You are wrong. There, I said that you are wrong - does that make you wrong? If not, is there maybe some system by which we can determine whether you're right or wrong? Your response to this question is "no need, I am right".


> You will either have to share your research credentials which allow you to judge it as such

I was a research scientist[1] for 7 years of my 25 years of working experience.

What are your credentials?

> You are wrong. There, I said that you are wrong - does that make you wrong?

That's just your opinion. The lack of replicable results in psychology is not an opinion.

We are not debating whether or not the research can be replicated, are we?

I mean, are you seriously claiming that psychology is filled with replicable studies?

[1] EDIT: an accredited national research institution.


> I was a research scientist[1] for 7 years of my 25 years of working experience.

In what field? Is it close to psychology?

> That's just your opinion. The lack of replicable results in psychology is not an opinion.

I did not claim that psychology doesn't have a lack of replicable results.

> We are not debating whether or not the research can be replicated, are we?

No, we are not. We are debating whether we can dismiss this study without defining a way to falsify our claim.

> I mean, are you seriously claiming that psychology is filled with replicable studies?

Again, I am not claiming this, nor have I claimed it previously.


You can't trust psychology because people sometimes lie.

You can't trust physics because you might be hallucinating.

What's left? Authoritarianism? Consensus? Tradition?


Not all of psychology, but this type of problem, yes. We just don't have the neurological knowledge required to get such information meaningfully yet. It would be like people in the 1700s trying to study nuclear forces - it's not an absurd endeavor, but they just didn't have the means yet.


No, but the focus for studies like this should be on nonvoluntary responses like reaction times and EEG in my opinion.


How do you know that the EEG monitoring doesn't influence people? Couldn't they subconsciously fake it due to being actively monitored?


Or the science goes where the data is: the servers of search engines or social networks. Psychology is what survives the queries into all of humanity.


That’s for the author of the paper to design into the experiment


And I'm asking you: what solution would make you happy? What would it have to look like?


Except in this case for the follow up interviews there was no incentive to lie.


They’re incentivized to tell the experimenters what they want to hear so they can get paid and go home faster.


Having participated in plenty of these (as it was required for class credit, and they also paid reasonably well for an in-between-classes hour type thing), the 'researchers' can also be completely obnoxious. One study I was involved in had me listen to some rant about how women do better on a math exam I was about to take, being left in a room alone for about 10 minutes, and then being given the exam - on pink paper. It was a generic algebra 2 / SAT-level test.

After the exam the researchers just endlessly hounded me from a hundred different angles trying to get me to express doubt or uncertainty in how I performed on it. If I just wanted my $20 and credit, it would have been far easier to just give them what they wanted. But I've been ever the contrarian. They also said they'd email me my results in 8 weeks. They did not. I've always wondered if I ended up getting culled from their 'study' as an 'outlier.'


That just sounds like bad research. In general, if you know their angle that clearly then they've failed at designing a research study of this type, obnoxious or not. It sounds like they don't know how to do this type of research.


I'm not sure that was the problem. Journals don't tend to publish negative results, and social science journals seem to have 0 issues with publishing studies that are overtly designed to confirm a hypothesis, rather than challenge it. Add in a bit of publish or perish, and this behavior is ubiquitous. What would you say is an example of good research in social psychology?

As an example of bad science, look at what's likely the single most well-known experiment in social psychology: the Stanford Prison Experiment. It not only failed to replicate, but was mostly fabricated. [1] The famous clip of the guy screaming 'I can’t stand another night! I just can’t take it anymore!' came from a guy who was intentionally faking a meltdown because he needed to go study and they weren't allowing him access to his books; the guards were actively trained to be cruel; and more. Incidentally Zimbardo (the experimenter), far from being shunned, has received countless accolades for this experiment, some as recent as 2012.

[1] - https://gen.medium.com/the-lifespan-of-a-lie-d869212b1f62


Funny, but my one and only contact with PLATO when I was at the U of I was for a Psych 100 experiment like this.


In the follow-ups they are simply asked to recount the events and give their confidence in what they recount. It's only in the original interview that they are told to try to remember details even if they can't.

The premise of the study is that the first interview is used to plant the memories. The follow up interviews are to see if they stick. They specifically aren’t trying to intervene in those interviews.


In most psychological experiments, volunteers are told it will take an hour/an afternoon/whatever at the time they sign up to participate. The notion that all of them actually want to cut corners and just obtain the cash in return for the bare minimum of time and effort is an assumption about people in general, not a fact.


Is it that hard to imagine that teens and the mentally impaired would do the same, focusing on resolving the present conflict (locked in room) over a future hypothetical problem (locked in jail)?


> How would you propose we differentiate between students "playing along" and students really having false memories?

Remove the incentive for the student to confabulate.

Have another, supposedly (to the students) 'independent' researcher ask them questions about it days or weeks later. A potential way of doing this would be to have a member of the Institutional Review Board (or supposed member of said board) following up with the students as a matter of 'quality control' and verification that the study followed IRB guidelines. Phrase the reason for the questioning as looking for non-punitive, but developmental, feedback for the researcher.

For ethical reasons this would probably require blinding the original researcher to the identities of the students, but this could be done by having people other than the PI actually ask the questions.

For a particular study on whether or not the student did something criminal, have the questioner truthfully assert to the student binding confidentiality as to their identity by dint of being a psychologist or someone like that who has the legal duty to not divulge.


Thank you for an actual proposal! It makes a lot of sense to me, but there is one reason it probably wouldn't be allowed: the study "implanted" false memories (in the sense that participants stated they believed false memories). I don't think the study would pass ethical review if you were only to follow up days or weeks later (after the supposed end of the study), since that would also be the earliest point you would be able to clear up the lie.


It could probably be tailored to a shorter time period.

A possible great way to do it is have the students originally questioned by grad students, and then the professor themself 'follow up' shortly afterward as a 'check' on their grad student's process. The problem with that is that some of the students may feel incentivized to make the grad student look good.



