By Vinod Kumar C, VMware Automation Manager

What if we could identify the potential challenges in, or features of, an application in advance, even before it's created? It would save the company time, effort, and cost. This idea was triggered by a comment from VMware CTO Kit Colbert, who asked: "Can you design your tool to predict the vulnerability of an application/feature that is yet to be made, based on the composition of the development team that is poised to implement them?"

It certainly sounds intriguing, and VMware IT eagerly picked up the threads to weave into the fabric of our test automation platform.

We deliberated many approaches, including static testing, test-driven development, statistical prediction based on test results, and an analytical approach using AI.

Based on those discussions, VMware IT started by extending the closest and most recent utility: one that was used to publish metrics data from test execution reports.

The challenge

Our test automation platform uses Tableau and Power BI as visual presentation utilities to provide insights into important QA KPIs, such as defects, test case executions, and automation coverage.

IT thought the same tools could be extended further to gain insights from historical data, such as which developer fixed a defect, the density of defects over a given number of lines of code, the number of times a defect was reopened, and the age of a defect before it was fixed. It was handy that the same data sources used to publish the earlier KPIs could be relied upon for our new attempt to gauge developer efficiency.
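
To make the idea concrete, here is a minimal sketch of how such defect-history metrics might be aggregated per developer. This is an illustration only, not VMware IT's actual implementation; the record fields and field names are hypothetical stand-ins for whatever the defect tracker exports.

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

# Hypothetical defect record; real data would come from the defect tracker.
@dataclass
class Defect:
    fixed_by: str       # developer who fixed the defect
    opened: date        # date the defect was reported
    fixed: date         # date the defect was fixed
    reopen_count: int   # times the defect was reopened
    loc_touched: int    # lines of code in the affected area

def defect_metrics(defects):
    """Aggregate per-developer history: average age before fix (days),
    total reopens, and defect density per 1,000 lines of code."""
    per_dev = defaultdict(lambda: {"count": 0, "age_days": 0,
                                   "reopens": 0, "loc": 0})
    for d in defects:
        m = per_dev[d.fixed_by]
        m["count"] += 1
        m["age_days"] += (d.fixed - d.opened).days
        m["reopens"] += d.reopen_count
        m["loc"] += d.loc_touched
    return {
        dev: {
            "avg_age_days": m["age_days"] / m["count"],
            "reopens": m["reopens"],
            "defects_per_kloc": 1000 * m["count"] / max(m["loc"], 1),
        }
        for dev, m in per_dev.items()
    }
```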

We knew this could be stitched together; the next challenge was deciding which materials to use, and in what sequence they should be interpreted.

The journey

The algorithms that go into the solution depend on the baseline that we set as source data. We decided to rely on the development team's efficiency measures to establish that baseline, and we finalized the data sources much faster than we expected. Next came the implementation approach: should we develop our own algorithms, or customize the business intelligence (BI) tools to train our data models?

At this stage, we decided to split the case scenario into two parts:

  • First, use our own prediction algorithms to process the raw data from our earlier defect data sources and generate an intermediary data source matching the developer-efficiency criteria.
  • Second, use that intermediary data source for the BI tools to create visual metrics (see the sketch after this list).
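
A minimal sketch of the first part of the split, assuming the per-developer metrics from the earlier example: a placeholder scoring step turns raw defect metrics into an intermediary CSV file, which a BI tool such as Tableau or Power BI can then consume directly for the second part. The weights and the scoring formula here are made up for illustration; the real model would be whatever prediction algorithm the team settles on.

```python
import csv

def write_efficiency_index(metrics, path="developer_efficiency.csv"):
    """Step one of the split: turn raw per-developer defect metrics
    into an intermediary data source. Step two is simply pointing
    Tableau or Power BI at the resulting CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["developer", "efficiency_index"])
        for dev, m in metrics.items():
            # Illustrative index: penalize slow fixes, reopens, and
            # high defect density (weights are purely hypothetical).
            score = 100 - (0.5 * m["avg_age_days"]
                           + 2.0 * m["reopens"]
                           + 1.0 * m["defects_per_kloc"])
            writer.writerow([dev, round(max(score, 0), 1)])
```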

That approach shaped our static analytical tool for the developer efficiency index.

What's next?

We intend to make this tool an AI solution that can potentially serve as an alternative to a QA lead's discretionary steps when signing off on a test cycle. It aims to replicate the intelligence involved in interpreting the baseline KPIs on defects, test executions, and automation development and coverage.

And the cherry on top will be integrating this AI tool for QA sign-off into the continuous delivery (CD) pipeline.
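
As a thought experiment on what such a pipeline gate could look like, here is a sketch of a sign-off check that fails a CD stage via a non-zero exit code. The KPI names and thresholds are invented for illustration; in practice they would come from the reporting data sources, and the rules would be replaced by the trained model's judgment.

```python
import sys

# Hypothetical thresholds a QA lead might otherwise apply by hand.
SIGN_OFF_RULES = {
    "pass_rate": lambda v: v >= 0.95,        # test execution pass rate
    "open_blockers": lambda v: v == 0,       # unresolved blocking defects
    "automation_coverage": lambda v: v >= 0.80,
}

def qa_sign_off(kpis):
    """Return True only if every baseline KPI satisfies its rule."""
    return all(rule(kpis[name]) for name, rule in SIGN_OFF_RULES.items())

if __name__ == "__main__":
    # In a CD pipeline these values would be fetched from the reporting
    # data source; the numbers below are illustrative only.
    kpis = {"pass_rate": 0.97, "open_blockers": 0, "automation_coverage": 0.85}
    sys.exit(0 if qa_sign_off(kpis) else 1)  # non-zero exit fails the stage
```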

There is a myriad of possibilities.

VMware on VMware blogs are written by IT subject matter experts sharing stories about our digital transformation using VMware products and services in a global production environment. Contact your sales rep or vmwonvmw@vmware.com to schedule a briefing on this topic. Visit the VMware on VMware microsite and follow us on Twitter.

