Showing posts with label editors. Show all posts

Tuesday, 25 November 2014

HUPO-PSI Meeting 2014: Rookie’s Notes

Standardisation: the most difficult flower to grow.
The PSI (Proteomics Standards Initiative) 2014 Meeting was held this year in Frankfurt (13-17 April), and I can now say I am part of its history. First, I will try to describe in a couple of sentences (and will surely fail) the incredible venue, the Schloss Reinhartshausen Kempinski. When I saw the hotel for the first time, the first thing that came to my mind was those films from the 50s. Everything was elegant, classic, sophisticated - from the decoration down to the smallest latch. The food was incredible, and the service was first class from the moment you set foot on the front step and throughout the whole stay.
  
Standardization is the process of developing and implementing technical standards. It can help to maximize compatibility, interoperability, safety, repeatability, or quality, and it can also facilitate the commoditization of formerly custom processes. In bioinformatics, the standardization of file formats, vocabularies, and resources is a job that all of us appreciate but that, for several reasons, nobody wants to do. First of all, standardization in bioinformatics means that you need to organize and merge different experimental and in-silico pipelines into a common way of representing the information. In proteomics, for example, you can use different sample preparations combined with different fractionation techniques and different mass spectrometers, and finally different search engines and post-processing tools. This diversity of possible combinations is necessary because it allows us to explore different solutions to complex problems. (Standardization in Proteomics: From raw data to metadata files).

Wednesday, 9 October 2013

My List of Most Influential Authors in Computational Proteomics (according to Article References, Google Scholar, Twitter, LinkedIn, Microsoft Academic Search and ResearchGate)

Young researchers starting their careers will often look for reviews, opinions and research manuscripts from the most influential authors in their chosen field. In science, however, unlike many other topics on the Internet, ranked lists or manuscript repositories of top authors sorted by research topic are hard to come by. For some researchers, the idea of such a task brings the words 'wasted time' to mind; the most critical condemn it as a frivolous pursuit. Maybe so. In my opinion, however, it is an excellent starting point.

Home page of ResearchGate, with more than 3 million users

These days, more people than ever are involved in science and research. Just look at ResearchGate's homepage. There are over 3 million people there - and we're only counting ResearchGate users. Once simple undertakings, such as finding the right manuscript to cite, the most authoritative group on a topic, or the best software application for a specific task, have become increasingly difficult for graduate students navigating this ocean of data, despite the availability of services such as Google Scholar or PubMed. The situation will only worsen in the future, as is easy to see by simply tallying the number of published papers in the fields of Proteomics, Genomics, Bioinformatics and Computational Proteomics since 1997:

Number of published manuscripts in PubMed per year (1997-2012). The statistics were generated using the Medline Trend service: http://dan.corlan.net/medline-trend.html

In 2012 alone, over 6,000 and 17,000 manuscripts were published in the fields of proteomics and bioinformatics, respectively. Our young field, computational proteomics, published more than four hundred papers. Perhaps well-established PIs or group leaders can easily tell apart derivative or me-too contributions from groundbreaking work, but young scientists, who spend most of their time implementing someone else's ideas, can certainly have a hard time doing so. Although technology has come to the rescue with today's mixture of search engines and social networking tools (ResearchGate, Google Scholar, Twitter and LinkedIn among them), the best way to harness its power is, precisely, by starting from a ranked list of the most authoritative voices within a field of research, whose whereabouts can then be traced in the scientific literature, the blogosphere, and anywhere else.


Monday, 21 May 2012

An "in-house" Tool

One of the small hidden details in publications, even in those with the highest impact, is the use of "in-house programs". What is an "in-house" program or tool? Normally, it is a piece of software that researchers use to analyze, process, or visualize their experimental data; most importantly, the software itself is not published.

The term by itself is inoffensive, but the concept can be extremely dangerous. We can cite hundreds of manuscripts whose data analysis included "in-house" tools, but never the term "in-house instruments". Authors always need to cite the manufacturer, the reagents, even the year and the company. I know, we have a section to describe data processing, but there we mostly cite some parameters and the well-known software, such as search engines (Mascot, X!Tandem, SEQUEST, etc.). Yet at some point in this section you can often find the term "in-house" tool. It could refer to an Excel formula or to a complete, complex Java program that performs many tasks, such as parsing a search-engine output, computing the FDR, removing false-positive identifications, or computing peptide-spectrum-match redundancy. There is no real, objective measure to distinguish between a simple little tool and a complex one.
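To make the point concrete, here is a minimal sketch of the kind of "in-house" script described above: estimating the false discovery rate (FDR) of peptide-spectrum matches with the standard target-decoy approach. The PSM scores below are invented for illustration; a real script would parse them from a search-engine output file.

```python
def target_decoy_fdr(psms, threshold):
    """Estimate FDR at a score threshold as #decoys / #targets above it.

    psms: list of (score, is_decoy) pairs from a target-decoy search.
    """
    targets = sum(1 for score, is_decoy in psms
                  if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms
                 if score >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

# Hypothetical (score, is_decoy) pairs for eight PSMs
psms = [(95, False), (90, False), (88, True), (80, False),
        (75, False), (70, True), (60, False), (55, True)]

print(target_decoy_fdr(psms, 85))  # 1 decoy / 2 targets = 0.5
```

Trivial as it is, a script like this can change which identifications survive filtering, which is exactly why leaving it unpublished as an anonymous "in-house tool" is a problem for reproducibility.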