W. Edwards Deming, a prominent statistician and consultant in the field of quality management during the latter half of the twentieth century, asserted that every corporation should have a skilled statistician reporting to the senior leadership so that critical decisions could be based on sound statistical evidence. The rise of data science as a discipline over the last couple of decades has demonstrated that Deming was at least partially correct—that decisions should be based on sound statistical analysis of data. However, we’ve learned that it’s probably more valuable to instill a little bit of the statistician into everyone rather than rely on a single individual for all the statistical expertise. What has made this shift possible? The tools of applied data science—statistics, machine learning, and artificial intelligence—have become more accessible and available over time, making it much easier for people without a specialized statistical background to reason about data and to build powerful analytical and predictive models.
In this issue of the Pipeline Technology Journal, we will see how the tools of data science, applied by engineers and scientists, are permeating the pipeline integrity management space to advance the state of the art in inline inspection, integrity assessment, and monitoring. It is exciting to see the tools of data science in the hands of those closest to the critical problems, bringing all their deep expertise in pipeline integrity to the table. Having subject matter experts so close to these efforts raises the likelihood that we are working on the problems that are worth addressing and that will bring lasting value to the pipeline industry.
Chief Engineer - Global Pipeline Integrity