Sunday, June 15, 2025

The Indian research ecosystem and the importance of regularly publishing and communicating one's work

Group conversational transcripts from 2011:

CBBLE moderator: To add another perspective to 'publications':

There was a researcher in India around 50 years back who trained in Europe in reproductive physiology and, after returning to India, set about applying his training to develop what could have become the first IVF success story in the world.

His only problem was that he had set his eyes on the 'deeper' goal of producing the 'baby' without publishing the 'superficial' findings that led him toward that goal step by step. Each of these original steps was later analyzed as worthy of a single paper in "Nature" that would have brought him instant recognition in his field (and perhaps a ticket to a faculty position at a university abroad) even before he produced the baby.

Although the British team produced the baby first (a year before his baby came out and was mercilessly rejected by his own countrymen), his approach to producing the baby was an astounding piece of original creativity, and his techniques were rediscovered and popularized a decade or two later by Western researchers (Current Science, Vol. 72, No. 7, 10 April 1997).

He didn't publish these 'superficial steps' in any journal; only after reaching his goal did he publish his 'deeper findings', in an Indian journal, and soon after publishing he perished (committed suicide), having been violently rejected as a 'fraud' by the Indian scientific community. This story was portrayed, in a different research setting, around 20 years back in the Bollywood movie 'Ek Doctor Ki Maut', although understandably Bollywood didn't touch on this particular aspect of publication.

Conflict of interest (COI): He studied in the same medical college as me, and we have nothing else in common. :-(

The bottom line/learning point here could be:

Keep documenting whatever we do in research and keep sharing it globally (which is what publications are all about), unless we want to perish? It is like working toward eating an entire piece of flesh over the week by slicing off one 'salami' a day?

regards,



From: Gitanjali Batmanabane
Date: Wed, Apr 13, 2011 at 9:52 AM
Subject: Re: [netrum] scientific fraud in writing - 2
 

Dear Netrumians
 
Just to add that not all of the practices listed by Vijay can be labelled 'scientific fraud' in the strict sense of the term.
 
Take salami slicing, for example: this is unethical, but many people resort to it. Editors accept the manuscript because there is no way of knowing whether it is part of a study that was published before (and there is nothing unethical about that in itself). Funding agencies like it because the measure of a project's success lies in the number of papers published. For example, if you conduct a survey of antimicrobial use in public hospitals, private hospitals, and the community, then when publishing you put out one manuscript on use in public hospitals, one on private hospitals, one on community use, and so on. Most authors justify salami slicing, and of course it adds to the number of papers in the CV.
 
The issue is that the whole topic becomes superficial when published this way. Imagine if someone published a single paper with all three aspects together: there would be depth, and readers would get a good overview of things. Sadly, this is never thought of. Many PhD theses get sliced only because the authors want to increase the 'n'. Those of you who read papers published in the Journal of Pharmacology & Experimental Therapeutics will understand what I mean. Most papers in that journal are a pleasure to read (and very difficult to understand) because they fully investigate the research question.
 
So salami slicing falls into a grey area.
 
Gitanjali
 


On Wed, Apr 13, 2011 at 4:05 AM, Arin Basu wrote:
 

Vijay sir has posted a very interesting list of types of scientific fraud in writing in this forum. I'd like to pick up three of them -- selective reporting as scientific fraud, salami slicing of reports, and omission of others' original publications -- and post my observations and feelings. I think each of the three is an example of academic dishonesty, but there is a deeper societal and cultural systemic basis to each one. I also think "fraud" is too strong a term to label them with. The reason I think fraud is too strong a term for this set lies in the complexity around each of these practices, and in how people are sometimes compelled by circumstances.


Take subgroup analysis, for example (see Vijay sir's listing of "Reporting only the findings that support the original hypothesis" as fraud).
I think there is a fine line between academic pragmatism, dishonesty, and fraud; let's leave it at that. On the surface of it, reporting only those data that support the hypotheses seems like fraud (although I'd think it's too strong a word; I'd rather go for "dishonesty" or some similarly expressive term). I think these issues merit discussion here -- in particular, when selective reporting is dishonesty and when it is not.

There are situations where the investigators have set out rival hypotheses, gone about data collection in as unbiased a manner as possible, and then analyzed their data. In doing so, they realize that the data in general support their hypotheses, and while writing up their publication from the project, they deliberately highlight those points that support their hypotheses. That's straightforward, and most people would not count it as dishonest practice.

However, problems arise when people cut their data some slack, or report claims as justifying their hypothesis on the basis of subgroup analyses. For a good discussion of publication bias and the other problems that arise (going beyond the moral obligations of the researcher or the author, and plagiarism charges), see Rifai et al., "Reporting Bias in Diagnostic and Prognostic Studies: Time for Action", full text here (http://www.clinchem.org/cgi/content/full/54/7/1101).

As you see, these are systemic problems inherent in the culture of academics that people have come to accept and grow up with. These problems need to be addressed from a range of different perspectives. As educators, we need to remember that it is study quality (rather than the precise results) that is important; that a p-value does not really tell you anything other than the probability of your findings under the null hypothesis, and that's that; and that there is no such thing as a "positive" or "negative" study. There is also a need for a registry of all kinds of studies for all countries, or a common database.
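An aside on the statistics here: the danger of selective subgroup reporting can be made concrete with a minimal simulation. All the names and parameter choices below are illustrative (not from the original thread), assuming only standard numpy and scipy.

```python
# Illustrative simulation: scan 20 subgroups of pure-noise data and count
# how often at least one comes out "significant" at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_subgroups, n_per_arm = 1000, 20, 50

false_hits = 0
for _ in range(n_trials):
    p_values = []
    for _ in range(n_subgroups):
        # Both arms come from the same distribution, so the null is true
        # in every subgroup: any "significant" p-value is a false positive.
        treatment = rng.normal(0.0, 1.0, n_per_arm)
        control = rng.normal(0.0, 1.0, n_per_arm)
        p_values.append(stats.ttest_ind(treatment, control).pvalue)
    if min(p_values) < 0.05:
        false_hits += 1

# Expect roughly 1 - 0.95**20, i.e. about 64% of "studies" with a
# publishable-looking subgroup despite there being no effect at all.
print(f"Studies with >= 1 'significant' subgroup: {false_hits / n_trials:.0%}")
```

Reporting only the one "significant" subgroup while staying silent about the other nineteen tests is precisely the reporting bias Rifai et al. describe, and the same multiple comparison problem that comes up again below.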



Similar situations arise when you consider salami slicing of your reports. By salami slicing is meant repeated analysis of the same data: you develop different messages out of them, package each as a publication, and get credit for separate publications. In reality, you could have written all of them up in one paper and been done with it. Is this dishonesty? Well, yes and no, depending on whose perspective you consider. From an academic knowledge management perspective, it borders on stacking up your plate where one paper would do; if you ask the investigators, they will justify that each message is vitally important on its own merits. True, any number of messages can be collapsed into one paper, but does that happen all the time? Plus, add to it that the system wants you to be more productive and write more papers. Where are the data going to come from? One dataset, several different messages. The grantor organizations want you to be productive in the sense that, buck for buck, you need to show as much productivity as possible for one project; there is the "culture" of "publish or perish"; and indeed, in an academic sense, who'd like to remove oneself from the academic gene pool of excellence and ladder climbing? Add to that the complexity of the peer review process, and it's no wonder that people would like to slice their data thin enough to send to as many journals as possible and hope to get published. This is one side of the multiple comparison problem in academic data analysis.

Is there a way out? Once again, I think it's a systemic issue. There are now multiple channels of academic publishing (of course, in the biomedical sciences we are a little slow to adapt, whereas the physics and math people are way ahead with their preprint archives like arXiv and so on). There are channels such as blogs and wikis, and each is a good way to put out your work. Can we not build an academic knowledge base around them? Must we have peer review processes? For a good discussion, search for Richard Smith [AU] and peer review. Here's a sharp criticism of some problems plaguing our culture of publication in science (perhaps from a biomedical perspective), by Jef Akst, "I Hate Your Paper": http://www.the-scientist.com/templates/trackable/display/article1.jsp?a_day=1&index=1&year=2010&page=36&month=8&o_url=2010/8/1/36/1
It of course talks of the peer review system, but you get the idea.

The third thing is the lack of citing previous research, perceived as academic fraud. And again, I do not know anymore if it is fraud, or dishonesty, or just pragmatism, or jealousy, or unwillingness to give credit where it's due, or ego clashes, or competing claims, or what. At the least, it's irritating. But most discerning readers do find out about these things anyway. Isn't it systemic? You bet. In an environment where funds are tight and there is intense competition among rival groups working on the same project(s), it is not unusual to see people NOT citing one another or being willing to give credit. Dishonesty? Yes, it most certainly is. But I also think it's ingrained in our culture, where we are hesitant to applaud others (not all are like that, but for many of us, openly applauding others for work they have done does not come naturally; as Amjad Ali Khan, the famous sarodiya, once lamented, he found his desi audience too miserly in clapping). Again, these issues are ingrained in our culture, in our system, in our psyche.

My point here is that whether it is selective reporting on the basis of subgroup analysis (or other strategies), salami slicing of data, or wilful refusal to cite other peer groups, each of these is a symptom of a deeper systemic issue ingrained in our culture, be it academic or a more generic social issue. Of course, that is not to take any blame away from the reporter/student/researcher. My point is this: issuing a warning is not enough under the circumstances. There is now also a case for strengthening the structure and influencing the mindset of students/early career researchers/funders, to alert them to the perils of these moral crises.

# :-), my two cents

# /Arin 




 

https://commons.m.wikimedia.org/wiki/File:InVitroFertilization.jpg

More: https://en.wikipedia.org/wiki/In_vitro_fertilisation


https://en.wikipedia.org/wiki/Subhash_Mukhopadhyay_(physician)

Friday, June 6, 2025

Shooting from the hat with AI-driven precision and accuracy to understand reliability and validity: UDLCO CRH





Conversational Transcripts:


[06/06, 09:18]AY: See the beauty of AI's language, where it constructs the sentence "Experts know what they don't know"


[06/06, 09:56]cm: Knowing what's unknown is the crux of "metacognition"?



[06/06, 11:19]ay: How do you guarantee validity (val idi _iti_) of a PaJR?


[06/06, 11:27]cm: The word validity is etymologically derived from the Latin validus, meaning strength, which uses the Sanskrit root val, aka bal, or in Bengali, ball!

So validity is in a way "might is right!"


[06/06, 11:28]ay: And idi

What about iti? 😃

Iti is Sanskrit and derived.

Choose idi from whatever is normatively cognate


[06/06, 11:29]cm: Eta ota (this and that).

Iti is shesh (the end).


[06/06, 11:30]ay: Will tackle this over the weekend 🙂


[06/06, 11:32]cm: In the Western theory of research evidence generation, the concept of the power of a study often determines the bal (validity) of the study, but unfortunately study power itself is currently probably reduced to little more than sample size?

Perhaps something more about this can be worked out over the weekend?
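An aside on this point: in the standard frequentist machinery, power is a joint function of sample size, effect size, and the significance threshold, not sample size alone. A minimal sketch using statsmodels' stock power calculator, with purely illustrative numbers:

```python
# Power of a two-sample t-test: same n, very different power depending
# on the (standardized) effect size one is trying to detect.
from statsmodels.stats.power import TTestIndPower

calc = TTestIndPower()

for d in (0.2, 0.5, 0.8):  # Cohen's d: small, medium, large effects
    power = calc.power(effect_size=d, nobs1=64, alpha=0.05)
    print(f"d = {d}: power with 64 per arm = {power:.2f}")

# Conversely, the sample size needed per arm for 80% power:
for d in (0.2, 0.5, 0.8):
    n = calc.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: n per arm for 80% power = {n:.0f}")
```

So a study's bal is not bought by a big n alone: a large sample chasing a tiny, noisily measured effect can still be underpowered.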


[06/06, 11:33]ay: That's because our (geolocated and geotagged linguaphonetically) research often reaches the waste urn!


[06/06, 12:31]si: Nice representation of validity & reliability: 




[06/06, 16:07]cm: The same is used as a diagrammatic marker for precision and accuracy!





Does that mean precision medicine is all about reliability and not so much about accuracy, and that perfect medicine would be about both?
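An aside: the target diagrams can be made computable. In this minimal sketch (shot coordinates invented for illustration), accuracy is the distance of the group's centre from the bullseye (bias) and precision is the average scatter of shots about their own centre (spread); the two are independent numbers, which is the point of the diagram.

```python
# Accuracy vs precision for a shot group: bias of the centroid from the
# bullseye measures (in)accuracy; scatter about the centroid measures
# (im)precision. Shot coordinates here are invented for illustration.
import numpy as np

def accuracy_and_precision(shots, bullseye=(0.0, 0.0)):
    shots = np.asarray(shots, dtype=float)
    centroid = shots.mean(axis=0)
    bias = np.linalg.norm(centroid - np.asarray(bullseye))    # accuracy error
    spread = np.linalg.norm(shots - centroid, axis=1).mean()  # precision error
    return bias, spread

# A tight group well off-centre: highly precise, not accurate.
tight_offset = [(4.9, 5.1), (5.0, 4.9), (5.1, 5.0), (5.0, 5.1)]
bias, spread = accuracy_and_precision(tight_offset)
print(f"bias = {bias:.2f} (poor accuracy), spread = {spread:.2f} (good precision)")
```

In the chat's terms: reliability maps to low spread, validity to low bias, and "perfect medicine" would need both numbers near zero.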



[06/06, 18:09]ay: 😭 this is not what and how we teach shooting. 

Grouping is important. It's Valid! 

All that is needed is a minor adjustment of "sights".

PS: Grouping is accuracy and precision combined.

Accuracy is of the posture and breath, and precision is of "sighting" and reaction.


[06/06, 18:19]ay: How AI explains it...

The infographic labelling tight groupings offset from the bullseye as “high precision but low accuracy” may be visually appealing but is conceptually misleading, especially when applied to research. In reality, tight groupings indicate both control and consistency—hallmarks of sound method and trained execution. If the shots are consistently clustered, the issue is not with accuracy of posture or breath (i.e., the research design and methodology), but with the sight alignment—akin to a minor misalignment in hypothesis framing or measurement scale. Dismissing such groupings as "inaccurate" is analogous to rejecting nuanced, repeatable research outcomes simply because they don’t conform to a pre-assumed center—often a stereotype or oversimplified standard. It mirrors the problematic practice of discarding data as “outliers” when in fact they may reflect expertise navigating edge cases or reveal important deviations. True accuracy emerges from refining the aim, not questioning the shooter’s competence when they’re grouping tight. Let’s not confuse deviation from norm with deficiency in method.
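Continuing the sketch above: ay's "minor adjustment of sights" is, in these terms, a bias correction. Shifting every shot by the estimated offset of the group centre recentres a tight group onto the bullseye without touching its spread, whereas no sight adjustment can tighten a wide scatter.

```python
# Sight adjustment as bias correction: subtract the estimated offset of
# the group centre from every shot. Spread (precision) is unchanged.
import numpy as np

def adjust_sights(shots, bullseye=(0.0, 0.0)):
    shots = np.asarray(shots, dtype=float)
    offset = shots.mean(axis=0) - np.asarray(bullseye)  # estimated bias
    return shots - offset                               # recentred group

tight_offset = np.array([(4.9, 5.1), (5.0, 4.9), (5.1, 5.0), (5.0, 5.1)])
corrected = adjust_sights(tight_offset)
print(corrected.mean(axis=0))  # ~ (0, 0): now accurate, still just as precise
```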