by Alex G » Wed Nov 02, 2016 8:55 am
Thanks a lot, guys, for the great and detailed answers! I understand that my questions are vague and general, but this is a whole different world and I know virtually nothing about it. I do realize that things are very different in different places; I am merely trying to grasp some basic principles. I assume biotech has a totally different way of thinking and approaching things, and it is important to understand their ways before making any long-term commitments.
I guess it boils down to this for me. Based on my (and many others') experience in academia, the original project/hypothesis that you are hired to research is rarely the same as what you end up with a few years later (if you are indeed successful). I would say the original approach or hypothesis is almost always wrong to some extent; what varies is how wrong it is, and your actual job is to figure out what is right. And this is just the state of things. It's OK to find another way to get your project to work (with some PIs, hehe). Often you are hired to do a project that is based on a completely unrealistic hypothesis. And the PI has a grant for it, needs to show productivity, and does not care. One way out of it that I have seen is when people demonstrate with enough evidence that, OK, this just does not work, but there is another way, and we can still get something out of it. PIs won't be happy, but something is better than nothing if they are reasonable enough to see the original flaw. Hey, I know enough people who published high-impact papers on side projects that they developed on their own, or by doing something their PI did not want them to do in the first place. It will not always work out like this, but the freedom you have in academia gives you at least some room to maneuver out of such situations.
Here is what I think is going on in industry, and I am most probably wrong. On one hand, things should be a bit easier because you are already working with things that have been proven to work in some way. But, as this is industry, I assume there is always a whole lot of things that have been overlooked. Some of them might have been overlooked at the proof-of-concept stage; others may just stack up over time. This is the inevitable consequence of the "good enough" approach, I think. Now, it is not my intention to go there and start stirring up trouble by challenging everything I see, but what if this happens?
Say I go there, am given one project, then another and another, and they just don't work out. By that I mean they produce "negative data" in the biotech way of thinking. Again, I will be judged based on my productivity, and it appears that it will depend a lot on external factors that I won't have any control over. And nobody will care why things are not working; in the end it will be my fault, as always.
Now suppose that, based on some small clues and my academic knowledge and experience, I suspect the original approach is wrong: say, the drug might have an alternative mechanism of action. I can't just show up to my manager and say I think the approach is not the right one. I would need to present some hard and quite compelling evidence first just to get their attention, assuming they care at all. In academia I would do some research on the side and then say: hey, we got it all wrong initially, but this is how it really works, and in many cases that would even be OK. In fact, if I don't do this, I will just keep banging my head against the wall, frustrating my PI, and eventually end up with a failed project. What is the way to deal with this in industry if I don't have the freedom to generate and test my own hypotheses?